# Rapid Computation of the Plasma Dispersion Function: Rational and Multi-pole
Approximation, and Improved Accuracy
Huasheng Xie (Email:<EMAIL_ADDRESS><EMAIL_ADDRESS>)
Hebei Key Laboratory of Compact Fusion, Langfang 065001, China
ENN Science and Technology Development Co., Ltd., Langfang 065001, China
###### Abstract
The plasma dispersion function $Z(s)$ is a fundamental complex special
integral function widely used in the field of plasma physics. The simplest and
most rapid, yet accurate, approach to calculating it is through rational or
equivalent multi-pole expansions. In this work, we summarize the numerical
coefficients that are practically useful to the community. Besides the Padé
approximation to obtain coefficients, which are accurate for both small and
large arguments, we also employ optimization methods to enhance the accuracy
of the approximation for the intermediate range. The best coefficients
provided here for calculating $Z(s)$ can deliver twelve significant decimal
digits. This work serves as a foundational database for the community for
further applications.
## I Introduction and motivation
The plasma dispersion function Fried1961 ; Huba2009 ; Gurnett2005 is defined
as
$Z(s)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{e^{-t^{2}}}{t-s}dt,$
(1)
with $s=x+iy$, which is valid for $y>0$. For $y\leq 0$, the function is
analytically continued from the upper-half-plane form above to the lower half
plane. The function is related to the Faddeeva function $w(s)$ and the error
function ${\rm erf}(s)$ by
$\displaystyle Z(s)=i\sqrt{\pi}w(s),$ (2)
$\displaystyle Z(s)=i\sqrt{\pi}e^{-s^{2}}[1+{\rm erf}(is)]$ (3)
$\displaystyle\phantom{Z(s)}=i\sqrt{\pi}e^{-s^{2}}[1+i\cdot{\rm erfi}(s)]$ (4)
$\displaystyle\phantom{Z(s)}=i\sqrt{\pi}e^{-s^{2}}-2F(s),$ (5)
where $i=\sqrt{-1}$, ${\rm erf}(s)=\frac{2}{\sqrt{\pi}}\int_{0}^{s}e^{-t^{2}}dt$ and
${\rm erfi}(s)=-i\cdot{\rm erf}(is)$. Here, $F(x)$ is the Dawson integral
$F(x)=e^{-x^{2}}\int_{0}^{x}e^{t^{2}}dt=\frac{\sqrt{\pi}}{2}e^{-x^{2}}\cdot{\rm erfi}(x)$.
Note also $dZ/ds=-2[1+sZ(s)]$. The following symmetry properties hold (the
asterisk denotes complex conjugation):
$\displaystyle Z(s)=-[Z(-s^{*})]^{*},$ (6)
$\displaystyle Z(s)=[Z(s^{*})]^{*}+2i\sqrt{\pi}\exp[-s^{2}]~{}~{}(y<0).$ (7)
The two-sided series expansion is
$\displaystyle Z(s)\simeq\left\{\begin{aligned} &\sum_{k=0}^{\infty}a_{k}s^{k}\simeq i\sqrt{\pi}e^{-s^{2}}-s\sum_{n=0}^{\infty}(-s^{2})^{n}\frac{\Gamma(1/2)}{\Gamma(n+3/2)}, && s\to 0,\\ &\sum_{k=0}^{\infty}a_{-k}s^{-k}\simeq i\sigma\sqrt{\pi}e^{-s^{2}}-\sum_{n=0}^{\infty}\frac{\Gamma(n+1/2)}{\Gamma(1/2)s^{2n+1}}, && s\to\infty,\end{aligned}\right.$ (8)
where
$\displaystyle\sigma=\left\{\begin{aligned} &0, && y>0,\\ &1, && y=0,\\ &2, && y<0,\end{aligned}\right.$ (9)
and $\Gamma$ is Euler’s Gamma function. A further expansion is
$e^{-s^{2}}=\sum_{n=0}^{\infty}\frac{(-s^{2})^{n}}{n!}$. If the term
$i\sigma\sqrt{\pi}e^{-s^{2}}$ is omitted, Eq.(8) does not reproduce the
imaginary part well in the range $y<\sqrt{\pi}x^{2}e^{-x^{2}}$ when $x\gg 1$. Since
$e^{-x^{2}}<10^{-40}$ for $x>10$, this term mainly affects the
intermediate range, e.g., $1<x<10$.
The direct numerical integration of Eq.(1) would be time-consuming and inaccurate
due to the pole in the integrand denominator. Series methods, cf., Martin1980 ;
Hui1978 ; Humlicek1979 ; Humlicek1982 ; Weideman1994 ; Zaghloul2011 ;
Franklin1968 ; Nemeth1981 ; Fried1968 ; Newberger1986 ; Tjulin2000 ;
Ronnmark1982 ; Xie2016 ; Xie2019 , have been proposed to numerically calculate
$Z(s)$. A comparison of different methods is provided in Zaghloul2011 . The
simplest and fastest, yet still accurate, approach to calculate it is to
use a rational expansion Hui1978 ; Humlicek1979 ; Humlicek1982 ; Hunana2019 or
an equivalent multi-pole expansion Fried1968 ; Martin1980 ; Ronnmark1982 ;
Xie2016 ; Xie2019 .
It is also surprising that the rational and multi-pole approximations of the
plasma dispersion function have inspired two unexpected applications: (1) the
development of Landau fluid models that mimic the kinetic Landau damping effects
Hammett1990 ; Hammett1992 ; Hunana2019 ; (2) the first solver to obtain all
the kinetic dispersion relation solutions without the requirement of an
initial guess Xie2016 ; Xie2019 . These two applications are possible only
when the approximation of $Z(s)$ retains the following features: (a) the
approximation should be a rational function; (b) one formula covers all the
regions of interest; (c) the same accuracy is maintained with as few terms as
possible.
Hence, segmented calculations (cf., Zaghloul2011 ) and non-rational
expansions are not suitable here; the rational and
multi-pole approximation is our only choice. The standard approach to obtaining the rational
coefficients is to use the Pade approximation to match Eq.(8), which is accurate
for small ($s\to 0$) and large ($s\to\infty$) arguments. The coefficients can
be calculated rigorously via a matrix inverse. The Pade approximation is less
accurate in the intermediate range $s\simeq 1$. We then use optimization
methods to reduce the error of the approximation in the intermediate range,
which improves the accuracy by about one order of magnitude. Weideman’s Weideman1994 method is
also of rational form; however, it requires a series with high-order terms.
We found that rational and multi-pole coefficients are not provided systematically
in the literature. The methods used in this work are probably not new, and some of
the coefficients have been provided in various references, for example, the
Pade rational form with analytic coefficients up to $J=2-8$ Martin1980 ;
Hunana2019 , multi-pole coefficients for small $J\leq 8$ Huba2009 ; Martin1980 ;
Ronnmark1982 ; Xie2016 and extended to $J=24$ Xie2019 . Coefficients with
optimization for $J=5,6,7$ can also be found in works on rapid calculation of $Z$, such
as Refs. Hui1978 ; Humlicek1979 ; Humlicek1982 . A systematic summary of the
rational coefficients and corresponding multi-pole coefficients could be
useful to the community, especially for beginners.
The purpose of the present work is to provide comprehensive numerical Pade
coefficients from small order to high accuracy, typically $J=2$ to $J=24$,
which can have an error less than $10^{-13}$, and could be used for rapid
numerical calculation of $Z(s)$ to high accuracy. The coefficients of improved
fitting for small $J\leq 8$ are also provided, which can be used to save
computation time or improve accuracy with fewer terms in Landau fluid model
and kinetic dispersion relation solver.
In Section II, we describe the approaches we used and the results we obtained.
Section III gives the summary and conclusions.
## II Approaches and Results
The rational approximation and multi-pole expansion of $Z(s)$ is (typos in
Ref. Xie2016 are fixed here)
$\displaystyle Z(s)\simeq
Z_{A}^{J}(s)=\frac{\sum_{l=0}^{J-1}p_{l}s^{l}}{q_{0}+\sum_{k=1}^{J}q_{k}s^{k}}=\sum_{j=1}^{J}\frac{b_{j}}{s-c_{j}},$
(10)
with $q_{0}=1$. This form is valid in the upper half plane to high accuracy and is
analytic (i.e., it is automatically analytically continued), and thus it also
gives a good approximation on the real axis and in the lower half plane as long as $y$ is
not far below the real axis. If one needs a more accurate value in the lower half
plane, Eq.(7) can be used to obtain it from the value in the upper half plane via the
symmetry property. Hence, the entire plane can be calculated to high accuracy.
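As a minimal sketch (not the author's released code), the Python snippet below evaluates Eq. (10) in both the rational and the multi-pole forms; the coefficient arrays `p`, `q`, `b`, `c` are assumed to come from the tables in Figs. 4-6.

```python
import numpy as np

def zfun_rational(s, p, q):
    """Rational form of Eq. (10): Z_A(s) = sum_l p_l s^l / (q_0 + sum_k q_k s^k).

    p : p_0 ... p_{J-1} (complex, ascending powers)
    q : q_0 = 1, q_1 ... q_J (complex, ascending powers)
    """
    s = np.asarray(s, dtype=complex)
    num = np.polyval(np.asarray(p)[::-1], s)   # numpy expects highest power first
    den = np.polyval(np.asarray(q)[::-1], s)
    return num / den

def zfun_poles(s, b, c):
    """Multi-pole form of Eq. (10): Z_A(s) = sum_j b_j / (s - c_j)."""
    s = np.asarray(s, dtype=complex)
    return sum(bj / (s - cj) for bj, cj in zip(b, c))
```

Both forms give the same value up to round-off; as noted later, for large $|z|$ the rational form with $p_{l}$, $q_{k}$ is slightly more robust.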
### II.1 Pade method
The Pade expansion matches Eq.(10) to Eq.(8), i.e., term by term in
$\displaystyle\Big{[}q_{0}+\sum_{k=1}^{J}q_{k}s^{k}\Big{]}Z(s)=\sum_{l=0}^{J-1}p_{l}s^{l}.$
(11)
To obtain the coefficients, the system of equations to be solved are
$\displaystyle p_{j}=\sum_{k=0}^{j}a_{k}q_{j-k},~{}~{}1\leq j\leq I,$ (12a)
$\displaystyle p_{J-j}=\sum_{k=0}^{j}a_{-k}q_{J+k-j},~{}~{}1\leq j\leq K,$
(12b)
where $I+K=2J$, and $p_{j}=0$ for $j>J-1$ and $j<0$, and $q_{j}=0$ for $j>J$
and $j<0$. The $2J$ equations determine the $2J$ coefficients $p_{j}$ and $q_{j}$.
Here $I$ means keeping $I$ equations for $s\to 0$, and $K$ means keeping $K$
equations for $s\to\infty$. The above equations can be solved via matrix
inverse, either analytically for small $J$ or numerically for arbitrary $J$.
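To make the matrix solve concrete, the sketch below builds the two-point matching conditions numerically for given $J$ and $I$ (with $K=2J-I$) and solves for $p_{l}$ and $q_{k}$. It is a hedged reconstruction of the procedure described above (the exact indexing conventions are an assumption), with the series coefficients taken from Eq. (13) below.

```python
import numpy as np
from math import gamma, sqrt, pi

def pade_coeffs(J, I):
    """Two-point Pade fit of Eq. (10) to Eq. (8): I conditions at s->0, K=2J-I at s->infty.

    Unknowns x = [p_0 .. p_{J-1}, q_1 .. q_J]; q_0 = 1 is fixed.
    """
    K = 2 * J - I
    # Small-s Taylor coefficients a_k of Z(s) (the sigma term is dropped), cf. Eq. (13)
    a = np.zeros(I + 1, dtype=complex)
    for k in range(I + 1):
        m = k // 2
        if k % 2 == 0:                       # a_{2m} = i sqrt(pi) (-1)^m / m!
            a[k] = 1j * sqrt(pi) * (-1)**m / gamma(m + 1)
        else:                                # a_{2m+1} = -(-1)^m Gamma(1/2)/Gamma(m+3/2)
            a[k] = -(-1)**m * gamma(0.5) / gamma(m + 1.5)
    # Large-s coefficients of s^{-(2m+1)}: A_m = -Gamma(m+1/2)/Gamma(1/2)
    A = [-gamma(m + 0.5) / gamma(0.5) for m in range(J)]

    M = np.zeros((2 * J, 2 * J), dtype=complex)
    rhs = np.zeros(2 * J, dtype=complex)
    qcol = lambda k: J + k - 1               # column index of q_k, k >= 1

    row = 0
    for j in range(I):                       # match coefficient of s^j as s -> 0
        if j < J:
            M[row, j] = -1.0                 # -p_j
        for k in range(0, min(j, J) + 1):    # + sum_k a_{j-k} q_k
            if k == 0:
                rhs[row] -= a[j]             # q_0 = 1 moved to the right-hand side
            else:
                M[row, qcol(k)] = a[j - k]
        row += 1
    for r in range(J - 1, J - 1 - K, -1):    # match coefficient of s^r as s -> infty
        if 0 <= r < J:
            M[row, r] = -1.0                 # -p_r
        m = 0
        while r + 2 * m + 1 <= J:            # + sum_m A_m q_{r+2m+1}
            k = r + 2 * m + 1
            if k == 0:
                rhs[row] -= A[m]
            elif k >= 1:
                M[row, qcol(k)] = A[m]
            m += 1
        row += 1

    x = np.linalg.solve(M, rhs)
    return x[:J], np.concatenate(([1.0 + 0j], x[J:]))   # p, q
```

For example, `pade_coeffs(8, 10)` should reproduce, up to round-off, the Ronnmark-type $J=8$, $I=10$ coefficients discussed below.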
With the term $i\sigma\sqrt{\pi}e^{-s^{2}}$ for $s\to\infty$ in Eq.(8) omitted, the
expansion can be written explicitly as
$\displaystyle Z(s)\simeq\left\{\begin{aligned} \sum_{k=0}^{\infty}a_{k}s^{k}\simeq&\;i\sqrt{\pi}-2s-i\sqrt{\pi}s^{2}+\frac{4}{3}s^{3}+\frac{i\sqrt{\pi}}{2}s^{4}-\frac{8}{15}s^{5}+\cdots, && s\to 0,\\ \sum_{k=0}^{\infty}a_{-k}s^{-k}\simeq&\;-\frac{1}{s}-\frac{1}{2s^{3}}-\frac{3}{4s^{5}}-\frac{15}{8s^{7}}-\cdots, && s\to\infty.\end{aligned}\right.$ (13)
Then Eq.(12) can be written as
$\displaystyle p_{0}=i\sqrt{\pi},$ (14a)
$\displaystyle p_{1}=-2+i\sqrt{\pi}q_{1},$ (14b)
$\displaystyle p_{2}=-i\sqrt{\pi}-2q_{1}+i\sqrt{\pi}q_{2},$ (14c)
$\displaystyle\cdots,$ (14d)
$\displaystyle-q_{J}=p_{J-1},$ (14e)
$\displaystyle-q_{J-1}=p_{J-2},$ (14f)
$\displaystyle-q_{J-2}-\frac{1}{2}q_{J}=p_{J-3},$ (14g)
$\displaystyle\cdots.$ (14h)
For a given $J$ and $I$, the coefficients $p_{l}$ and $q_{k}$ can be solved
easily via matrix inverse. Solving for the multi-pole coefficients $b_{j}$ and
$c_{j}$ is also straightforward (e.g., using the residue() function in
Matlab): the $c_{j}$ are the roots of the equation
$q_{0}+\sum_{k=1}^{J}q_{k}s^{k}=0$. Symmetry also ensures
that $b_{j}$ and $c_{j}$ occur in pairs: $b_{j}=b_{J+1-j}^{*}$ and
$c_{j}=-c_{J+1-j}^{*}$.
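A minimal Python sketch of this conversion (assuming simple poles; not the Matlab residue() call itself) is:

```python
import numpy as np

def rational_to_poles(p, q):
    """Convert the rational coefficients of Eq. (10) to multi-pole form.

    p : p_0 ... p_{J-1};  q : q_0 = 1, q_1 ... q_J (ascending powers).
    Returns b_j, c_j with Z_A(s) = sum_j b_j / (s - c_j).
    """
    q_desc = np.asarray(q, dtype=complex)[::-1]    # highest power first for numpy
    p_desc = np.asarray(p, dtype=complex)[::-1]
    c = np.roots(q_desc)                           # poles: roots of the denominator
    dq = np.polyder(q_desc)                        # Q'(s)
    b = np.polyval(p_desc, c) / np.polyval(dq, c)  # residues at simple poles: P(c_j)/Q'(c_j)
    return b, c
```

The sum rules listed below (e.g., $\sum_{j}b_{j}=-1$ and $\sum_{j}b_{j}c_{j}^{2}=-1/2$) give a quick numerical check of the converted coefficients.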
For multi-pole expansion, we have
$\displaystyle Z_{A}(s)\simeq\sum_{j=1}^{J}b_{j}\left\{\begin{aligned} &-\frac{1}{c_{j}}-\frac{s}{c_{j}^{2}}-\frac{s^{2}}{c_{j}^{3}}-\cdots, && s\to 0,\\ &\frac{1}{s}+\frac{c_{j}}{s^{2}}+\frac{c_{j}^{2}}{s^{3}}+\cdots, && s\to\infty.\end{aligned}\right.$ (15)
Comparing Eq.(15) with Eq.(13), we have $\sum_{j}b_{j}/c_{j}=-i\sqrt{\pi}$,
$\sum_{j}b_{j}/c_{j}^{2}=2$, $\sum_{j}b_{j}/c_{j}^{3}=i\sqrt{\pi}$, and
$\sum_{j}b_{j}=-1$, $\sum_{j}b_{j}c_{j}=0$, and $\sum_{j}b_{j}c_{j}^{2}=-1/2$.
For the kinetic dispersion relation solvers Xie2016 ; Xie2019 ,
$\sum_{j}b_{j}c_{j}^{2}=-1/2$ is used, which means that we need to keep terms up to
$O(1/s^{3})$ when calculating the multi-pole coefficients. Hence, we should
have $K\geq 3$. This also implies that $-q_{J}=p_{J-1}$, $-q_{J-1}=p_{J-2}$,
and $-q_{J-2}-\frac{1}{2}q_{J}=p_{J-3}$ hold, and the $R(s)=1+sZ(s)$
coefficients in the Landau fluid model can be obtained straightforwardly. For
small $J=2,3,4$, we may only keep terms up to $O(1/s)$ or $O(1/s^{2})$. The
coefficients taken by Ronnmark Ronnmark1982 correspond to $J=8$, $I=10$.
It also appears that a slightly larger $I$ Martin1980 than $K$ can provide a
better overall approximation. The notation $Z_{IK}$ is used to describe
different orders of approximation.
Figure 1: Errors of $Z$ using different $J$-pole coefficients and Weideman coefficients, with $y=-0.1$.
Figure 2: Errors of $Z$ using different $J$-pole coefficients and Weideman coefficients, with $y=0$.
Figure 3: Errors of $Z$ using different $J$-pole coefficients and Weideman coefficients, with $y=0.1$.
### II.2 Search minimum method
It appears that $p_{2j}$ and $q_{2j+1}$ are purely imaginary, and
$p_{2j+1}$ and $q_{2j}$ are purely real, for $j=0,1,2,\cdots$.
We minimize both the absolute and relative errors by
$\displaystyle\min\Big\{w\delta_{a}^{2}+(1-w)\delta_{r}^{2}\Big\},$ (16)
with
$\displaystyle\delta_{a}=|Z_{A}(s)-Z(s)|,~{}~{}~{}\delta_{r}=|Z_{A}(s)/Z(s)-1|,$
(17)
where $Z_{A}(s)$ is the approximate value and $Z(s)$ is the accurate value.
In this work, we set $w=0.5$. We take the accurate $Z(s)$ to be the one using
the Dawson function in Matlab (though it may not be accurate for all ranges).
We impose some of the constraints in Eq.(14) so that the small-$s$ expansion is matched to
$O(s^{2})$ and the large-$s$ expansion to $O(1/s^{3})$. There are standard approaches
to obtaining the optimized $p_{l}$ and $q_{k}$. Here, we use the Matlab function fminsearch()
to perform the calculations and use the Pade coefficients as initial guesses.
The results can be sensitive to the initial guess and the iterative
convergence criteria; hence, the final results are not unique. We
list our best obtained results here. The optimization is performed
for $s=x+iy$, with $x=[-50,50]$ and $y=0.1$.
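A hedged Python analogue of this procedure, using scipy's Nelder-Mead minimizer in place of Matlab's fminsearch() and scipy.special.wofz as the reference $Z(s)=i\sqrt{\pi}\,w(s)$, is sketched below; the free-parameter layout is illustrative and, for brevity, it omits the explicit constraints of Eq. (14) mentioned above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import wofz

def z_exact(s):
    """Reference plasma dispersion function, Z(s) = i*sqrt(pi)*w(s)."""
    return 1j * np.sqrt(np.pi) * wofz(s)

def z_poles(s, b, c):
    return sum(bj / (s - cj) for bj, cj in zip(b, c))

def optimize_poles(b0, c0, x=np.linspace(-50, 50, 2001), y=0.1, w=0.5):
    """Refine multi-pole coefficients by minimizing w*|Za-Z|^2 + (1-w)*|Za/Z-1|^2, cf. Eq. (16)."""
    s = x + 1j * y
    zref = z_exact(s)
    J = len(b0)

    def pack(b, c):
        return np.concatenate([b.real, b.imag, c.real, c.imag])

    def unpack(v):
        return v[:J] + 1j * v[J:2*J], v[2*J:3*J] + 1j * v[3*J:]

    def cost(v):
        b, c = unpack(v)
        za = z_poles(s, b, c)
        da = np.abs(za - zref)              # absolute error
        dr = np.abs(za / zref - 1.0)        # relative error
        return np.sum(w * da**2 + (1.0 - w) * dr**2)

    x0 = pack(np.asarray(b0, complex), np.asarray(c0, complex))
    res = minimize(cost, x0, method='Nelder-Mead',
                   options={'maxiter': 200000, 'xatol': 1e-12, 'fatol': 1e-14})
    return unpack(res.x)
```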
Figure 4: Coefficients $p_{l}$, $q_{k}$, $b_{j}$, $c_{j}$ and corresponding errors $\delta_{a}$ and $\delta_{r}$ for $J=2-7$. For Pade $K\geq 3$ and for optimized $J\geq 4$, we ensure $|\sum_{j}b_{j}c_{j}^{2}+1/2|<10^{-12}$. Note also $p_{0}=i\sqrt{\pi}$, $b_{j}=b_{J+1-j}^{*}$ and $c_{j}=-c_{J+1-j}^{*}$.
Figure 5: Coefficients $p_{l}$, $q_{k}$, $b_{j}$, $c_{j}$ and corresponding errors $\delta_{a}$ and $\delta_{r}$ for $J=8,10,12$. Note also $p_{0}=i\sqrt{\pi}$, $b_{j}=b_{J+1-j}^{*}$ and $c_{j}=-c_{J+1-j}^{*}$.
Figure 6: Coefficients $p_{l}$, $q_{k}$, $b_{j}$, $c_{j}$ and corresponding errors $\delta_{a}$ and $\delta_{r}$ for $J=16,20,24$. Note also $p_{0}=i\sqrt{\pi}$, $b_{j}=b_{J+1-j}^{*}$ and $c_{j}=-c_{J+1-j}^{*}$.
### II.3 Coefficients tables
Using the methods described in the subsections above, we performed
calculations to obtain the coefficients $p_{l}$, $q_{k}$, $b_{j}$, $c_{j}$,
and the corresponding errors $\delta_{a}$ and $\delta_{r}$, listed in tables. Pade
coefficients were calculated from $J=2$ to $24$, and optimized coefficients
from $J=2$ to $8$. The error data are taken in the lower half plane with
$y=-0.1$ and $x=[-50,50]$, ensuring that the approximation can also accurately
capture the Landau damping effect even without the $2i\sqrt{\pi}e^{-s^{2}}$
term. Data for $J>24$ are not listed here, as double precision is not
adequate there and such orders are not necessary for most applications.
Figures 4, 5, and 6 show the coefficients, while Figures 1, 2, and 3 show the
results of the errors. ‘Pade best’ in the figures means that $I$ is chosen
such that the error is minimized according to the tables.
It is possible to optimize for $J\geq 9$, but it is less useful, and thus we
do not provide it here. The reason is that $J=8$ is sufficient for most
purposes, and those requiring higher accuracy can use slightly higher-$J$ Pade
coefficients; for example, the $J=10$ Pade may have better accuracy than the
optimized $J=8$. For $J>20$, round-off error becomes the major issue for the
approximations, and reducing the error to less than $10^{-13}$ becomes
difficult. Therefore, for double precision usage, $J=20$ is sufficient. The
$J=24$ coefficients listed here are for reference, for those who hope to
develop more accurate approximations. Due to round-off error, calculating
with $p_{l}$ and $q_{k}$ yields better accuracy for large $z$ than using
$b_{j}$ and $c_{j}$.
It is claimed in Zaghloul2011 that if the term $i\sigma\sqrt{\pi}e^{-s^{2}}$ is
omitted, Eq.(8) would not match well for the range
$y<\sqrt{\pi}x^{2}e^{-x^{2}}$ when $x\gg 1$. However, in practical tests, this
only slightly affects the imaginary part at intermediate $x$, and the Weideman
method also holds well for $y\simeq 0$.
In Appendix B, we provide Matlab and Python code examples of how to use these
coefficients to calculate $Z(s)$. In Appendix A, we also provide the
coefficients using the Weideman method.
Figure 7: Validation of different expansion coefficients through comparison with the Landau damping roots.
Figure 8: Validation of different expansion coefficients through comparison with the Landau damping roots, for $k=0.5$.
### II.4 Validation
The accuracy of $Z$ is not just about the function itself, but rather whether
it can accurately capture the physical phenomenon. We employ the one-
dimensional electrostatic Landau damping problem to demonstrate the accuracy
of the data provided in the table. The problem can be simplified as follows
Xie2013
$D(\omega,k)=1+\frac{1}{k^{2}}[1+zZ(z)]=0,$ (18)
with $z=\omega/(\sqrt{2}k)$. For a given $k$, we aim to solve for $\omega$,
focusing on the least damping branch.
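As a hedged illustration (a sketch, not the PDRK/BO implementation), Eq. (18) can be solved for the least-damped root with a simple complex Newton iteration using any of the $Z$ approximations above; the initial guess is an assumption suited to $k\simeq 0.5$.

```python
import numpy as np

def landau_root(k, zfun, omega0=1.4 - 0.1j, tol=1e-12, maxit=100):
    """Solve D(omega,k) = 1 + [1 + z Z(z)]/k^2 = 0 with z = omega/(sqrt(2) k), Eq. (18)."""
    def D(omega):
        z = omega / (np.sqrt(2.0) * k)
        return 1.0 + (1.0 + z * zfun(z)) / k**2

    omega = omega0
    for _ in range(maxit):
        h = 1e-7 * (1.0 + abs(omega))
        dD = (D(omega + h) - D(omega - h)) / (2.0 * h)   # numerical derivative
        step = D(omega) / dD
        omega = omega - step                              # Newton update
        if abs(step) < tol:
            break
    return omega
```

For instance, `landau_root(0.5, lambda z: zfun_poles(z, b, c))` (with the helper sketched earlier) returns a complex $\omega$ whose real and imaginary parts are the frequency and the Landau damping rate.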
Results are depicted in Fig. 7 and Fig. 8. The $J=2$ method cannot be used, as
the large error leads to incorrect roots. $J=3,4$ could be viable options for
low accuracy calculations. We observed that for the Pade method, larger $J$
generally results in better accuracy across all arguments. However, for the
same $J$, optimized coefficients may only control total errors, with local
errors potentially increasing, especially for small $s$. Thus, optimization
coefficients may not always outperform unoptimized coefficients. This suggests
that if only accurate $Z(s)$ is needed (i.e., not aiming to reduce the
expansion order), using coefficients from the rational expansion with higher
orders (i.e., larger $J$) is preferable over optimizing the coefficients.
Hence, we recommend using $J\geq 8$ from our table, instead of the older lower
$J$ data Hui1978 ; Humlicek1979 ; Humlicek1982 . The coefficients provided
here for calculating $Z(s)$ can deliver up to twelve significant decimal
digits, limited by double precision data rather than the approach itself.
Therefore, the Pade expansion method can compete with Weideman’s method and
can even be simpler.
The breakdown region is $k\leq 0.15$, where $y<10^{-7}\ll 1$ and incorrect
positive numerical solutions of $\omega_{i}$ may occur. However, since
$\omega_{i}/\omega_{r}\ll 10^{-8}$ is small there, achieving
high accuracy is less critical for most applications. Humlicek Humlicek1982 reported that any
rational approximation suffers inevitable failure near the real axis. For
methods to address this issue, one can refer to Humlicek1982 ; Zaghloul2011 .
## III Summary and Conclusion
We revisit the issue of rapid calculation of the plasma dispersion function
and provide comprehensive rational and multi-pole coefficients for reference.
A practical application of this work is to accelerate the computation of the
PDRK/BO Xie2016 ; Xie2019 codes. For instance, an optimized $J=6$ can achieve
the same maximum error as the former Pade $J=8$ method, with a complexity of
$O(J^{2.7})$, resulting in a speedup of approximately 2.1 times. The optimized
$J=8$ method also reduces the maximum error by around two orders of magnitude
(80 times) compared to the commonly used Ronnmark $J=8,I=10$ Pade coefficients.
However, if reducing the order is not necessary, we recommend using larger $J$
values (such as $J=10,12,16,20,24$) to improve global accuracy. The major
possible inaccuracy occurs at small $y$ for intermediate/large $x$, which is
found to be not a significant issue for most practical applications.
The demonstration of errors of $Z(s)$ itself and its application to the Landau
damping problem also provide a validation range for different coefficients and
offer guidance for further selection.
Figure 9: Weideman coefficients for $N=16$, $N=32$, and $N=64$ for the
calculation of the $Z$ function.
## Appendix A Weideman coefficients
The Weideman method for calculating $Z$ relies on the Fast Fourier Transform
(FFT), which may be less convenient for coding, such as in Fortran. However,
the FFT coefficients can be separated out for practical applications. We
provide coefficients for $N=16$, $N=32$, and $N=64$ here, which could be used
for high accuracy computation of $Z$ for most ranges. The expansion is given
by Weideman1994
$Z(z)\simeq\frac{i}{L-iz}+\frac{2i\sqrt{\pi}}{(L-iz)^{2}}\sum_{n=0}^{N-1}a_{n+1}\Big{(}\frac{L+iz}{L-iz}\Big{)}^{n},~{}y\geq
0,$ (19)
with $L=2^{-1/4}N^{1/2}$. The above form also holds for the weak damping case,
i.e., $y<0$ but not too far from the real axis, as demonstrated in Figs. 1, 2,
and 3. For accurate calculation when $y<0$, we can still use Eq. (7). The
results are shown in Fig. 9.
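As a hedged sketch of Eq. (19) (a reconstruction following Weideman1994 , not the tabulated values of Fig. 9), the coefficients $a_{n}$ can be generated with a single FFT and the expansion evaluated as follows:

```python
import numpy as np

def weideman_coeffs(N):
    """Expansion coefficients a_1 .. a_N of Eq. (19), following Weideman (1994)."""
    M = 2 * N
    M2 = 2 * M
    L = 2.0**(-0.25) * np.sqrt(N)                 # L = 2^{-1/4} N^{1/2}
    k = np.arange(-M + 1, M)                      # M2 - 1 quadrature points
    t = L * np.tan(k * np.pi / (2 * M))
    f = np.exp(-t**2) * (L**2 + t**2)
    f = np.concatenate(([0.0], f))                # value at theta = -pi
    a = np.real(np.fft.fft(np.fft.fftshift(f))) / M2
    return a[1:N + 1], L                          # a_1, ..., a_N

def z_weideman(z, a, L):
    """Evaluate Eq. (19), valid for y >= 0 (and weakly damped y < 0)."""
    z = np.asarray(z, dtype=complex)
    u = (L + 1j * z) / (L - 1j * z)
    p = np.polyval(a[::-1], u)                    # sum_{n=0}^{N-1} a_{n+1} u^n
    return 1j / (L - 1j * z) + 2j * np.sqrt(np.pi) * p / (L - 1j * z)**2
```

For example, `a, L = weideman_coeffs(32)` followed by `z_weideman(1.0 + 0.1j, a, L)` should agree with $i\sqrt{\pi}\,w(z)$ to roughly the accuracy shown in Fig. 9.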
Note that this method can also be used for non-Maxwellian distributions
directly Xie2013 .
Figure 10: Sample code for calculating the $Z$ function with the optimized $J=8$
poles for the entire range of the argument $z$, with maximum errors of $10^{-6}$.
Those who need higher accuracy can use the larger-$J$ coefficients, such as
$J=10,12,16,20,24$.
## Appendix B Short code example
It is often surprising to beginners that a few simple rational terms can
compute the complicated complex integral function $Z(s)$ to high
accuracy. We therefore also provide sample code in Fig.10 for easy access, which
is valid for $z$ over the entire plane.
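In the same spirit as the listing of Fig. 10 (this is an illustrative sketch, not that listing), the symmetry relation Eq. (7) can be used to refine lower-half-plane values obtained from any upper-half-plane approximation `zfun_upper`:

```python
import numpy as np

def z_full_plane(s, zfun_upper):
    """Evaluate Z(s) anywhere using an upper-half-plane approximation plus Eq. (7)."""
    s = np.atleast_1d(np.asarray(s, dtype=complex))
    z = np.asarray(zfun_upper(s), dtype=complex).copy()   # rational form is analytic everywhere
    lower = s.imag < 0
    sl = s[lower]
    # Eq. (7): Z(s) = [Z(s*)]* + 2 i sqrt(pi) exp(-s^2) for y < 0
    z[lower] = np.conj(zfun_upper(np.conj(sl))) + 2j * np.sqrt(np.pi) * np.exp(-sl**2)
    return z
```

For instance, `z_full_plane(s, lambda u: zfun_poles(u, b, c))` gives high accuracy on both sides of the real axis.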
## References
* (1) B. D. Fried and S. D. Conte, The Plasma Dispersion Function the Hilbert Transform of the Gaussian (Academic Press, New York and London, 1961); “Erratum,” Math. Comput. 26(119), 814 (1972), see http://www.ams.org/journals/mcom/1972-26-119/S0025-5718-1972-0655816-4/; “Reviews and descriptions of tables and books,” Math. Comput. 17, 94–95 (1963), see http://www.ams.org/journals/mcom/1963-17-081/S0025-5718-1963-1781088-8/.
* (2) J. D. Huba, NRL PLASMA FORMULARY, 2009, The Office of Naval Research.
* (3) D.A. Gurnett and A. Bhattacharjee, Introduction to Plasma Physics: With Space and Laboratory Applications, Cambridge, 2005.
* (4) P. Martin, G. Donoso, and J. Zamudio-Cristi, J. Math. Phys. (N.Y.) 21, 280 (1980).
* (5) A. K. Hui, B. H. Armstrong and A. A. Wray, Rapid computation of the Voigt and complex error functions, Journal of Quantitative Spectroscopy and Radiative Transfer, 19, 5, 509-516 (1978).
* (6) J. Humlicek, An efficient method for evaluation of the complex probability function: The Voigt function and its derivatives, Journal of Quantitative Spectroscopy and Radiative Transfer, 21, 4, 309-313 (1979).
* (7) J. Humlicek, Optimized computation of the voigt and complex probability functions, Journal of Quantitative Spectroscopy and Radiative Transfer, 27, 4, 437-444 (1982).
* (8) M. R. Zaghloul and A. N. Ali, Algorithm 916: Computing the Faddeyeva and Voigt functions, ACM Trans. Math. Softw. 38, 2, 1–22 (2011).
* (9) J. A. C. Weideman. 1994. Computation of the complex error function. SIAM J. Numer. Anal. 31, 5, 1497–1518.
* (10) R. N. Franklin, The computation of the plasma dispersion function, Plasma Phys. 10, 805 (1968).
* (11) G. Németh, A. Ág, and G. Páris, Two-sided Padé approximations for the plasma dispersion function, J. Math. Phys. 22, 1192 (1981).
* (12) B. D. Fried, C. L. Hedrick and J. McCune, Two-Pole Approximation for the Plasma Dispersion Function, Physics of Fluids, 11, 1, 249-252 (1968).
* (13) B. S. Newberger, Efficient numerical computation of the plasma dispersion function, Comput. Phys. Commun. 42, 305 (1986).
* (14) A. Tjulin, A. I. Eriksson, and M. André, Physical interpretation of the Padé approximation of the plasma dispersion function, J. Plasma Phys. 64, 287 (2000).
* (15) H.S. Xie, Y. Xiao, PDRK: A General Kinetic Dispersion Relation Solver for Magnetized Plasma, Plasma Sci. Technol. 18 (2) (2016) 97, http://dx.doi.org/10.1088/1009-0630/18/2/01, Update/bugs fixed at http://hsxie.me/codes/pdrk/ or https://github.com/hsxie/pdrk/.
* (16) H.S. Xie, BO: A unified tool for plasma waves and instabilities analysis, Comput. Phys. Comm. 244 (2019) 343-371. Xie, H. S., Denton, R., Zhao, J. S. and Liu, W, BO 2.0: Plasma Wave and Instability Analysis with Enhanced Polarization Calculations arXiv:2103.16014, 2021. https://github.com/hsxie/bo/.
* (17) Ronnmark, K. 1982, WHAMP - Waves in Homogeneous Anisotropic Multicomponent Plasmas, Tech. Rep. 179, Kiruna Geophysical Institute, Kiruna, Sweden.
* (18) P. Hunana, A. Tenerani, G. P. Zank, M. L. Goldstein, G. M. Webb, M. Velli and L. Adhikari, A brief guide to fluid models with anisotropic temperatures Part 2 - Kinetic theory, Padé approximants and Landau fluid closures, 2019.
* (19) G. W. Hammett and W. F. Perkins, Phys. Rev. Lett., 64, 3019 (1990).
* (20) G. Hammett, W. Dorland, and F. Perkins, Phys. Fluids B 4, 2052 (1992).
* (21) H. S. Xie, Generalized plasma dispersion function: One-solve-all treatment, visualizations, and application to Landau damping, Phys. Plasmas 20, 092125 (2013).
# Real and complex $\displaystyle K$-theory for higher rank graph algebras
arising from cube complexes
Jeffrey L. Boersema, Seattle University, Department of Mathematics, Seattle, Washington 98133, USA<EMAIL_ADDRESS>
Alina Vdovina, The City College of New York and Graduate Center, CUNY, Department of Mathematics, New York, 10031, USA<EMAIL_ADDRESS>
###### Abstract.
Using the Evans spectral sequence and its counterpart for real $\displaystyle
K$-theory, we compute both the real and complex $\displaystyle K$-theory of
several infinite families of $\displaystyle C^{*}$-algebras based on higher-
rank graphs of rank $\displaystyle 3$ and $\displaystyle 4$. The higher-rank
graphs we consider arise from double-covers of cube complexes. By considering
the real and complex $\displaystyle K$-theory together, we are able to carry
these computations much further than might be possible considering complex
$\displaystyle K$-theory alone. As these algebras are classified by
$\displaystyle K$-theory, we are able to characterize the isomorphism classes
of the graph algebras in terms of the combinatorial and number-theoretic
properties of the construction ingredients.
## 1\. Introduction
Higher rank graphs were defined in [9] and their theory was further developed
in [15] and [7]. The main motivation was a systematic study of a large class
of $\displaystyle C^{*}$-algebras classifiable by their $\displaystyle
K$-theory. Despite the vast literature on the subject, explicit
computations of the K-theory of higher-rank $\displaystyle C^{*}$-algebras
are very rare, especially in rank three and higher. The first rank three
example was done in [5], and the first infinite series of rank three and
higher examples were described in [12]. Nevertheless, in both [5] and [12],
there were open questions on the exact order of certain abelian subgroups in
the K-theory. In this paper we present several infinite series of $\displaystyle
C^{*}$-algebras associated to rank-3 and rank-4 graphs and we compute their
K-theory completely and explicitly.
We consider not only the (complex) $\displaystyle C^{*}$-algebras
associated to these higher rank graphs, but also real
$\displaystyle C^{*}$-algebras for these graphs, and we compute their real
$\displaystyle K$-theory. Real $\displaystyle C^{*}$-algebras associated to a
higher rank graph, and more generally real $\displaystyle C^{*}$-algebras
associated to a higher rank graph with an involution, were introduced in [2].
The analog of Evans’ spectral sequence was also developed in [2] to compute
the real $\displaystyle K$-theory of such algebras. The examples that we
consider in this paper are rank-3 and rank-4 graphs with two vertices and a
non-trivial involution that swaps the two vertices. We will be calculating the
$\displaystyle K$-theory of both real $\displaystyle C^{*}$-algebras: the one
associated with the graph with the trivial involution and the one associated
with the graph with the non-trivial involution. Previous calculations of
$\displaystyle K$-theory for real $\displaystyle C^{*}$-algebras of higher
rank graphs have been conducted in [2] and [3]. However, in those cases the
graphs either had rank no more than 2, or the graphs could be factored as a
product of graphs with rank no more than 2. Remarkably, we find that the
consideration of the real $\displaystyle K$-theory also allows us to compute
the (complex) $\displaystyle K$-theory in some cases where that was otherwise
intractable. In particular, we are able to resolve the open question in [12]
mentioned above.
The complex $\displaystyle C^{*}$-algebras associated to our higher rank
graphs fall in the category of purely infinite simple $\displaystyle
C^{*}$-algebras classified by $\displaystyle K$-theory in [8] and [13].
Similarly, the real $\displaystyle C^{*}$-algebras in this paper fall in the
category of purely infinite simple $\displaystyle C^{*}$-algebras classified
in [4]. Based on this, we will be able to characterize the isomorphism classes
of the resulting algebras in terms of the combinatorial and number-theoretic
properties of the construction ingredients.
The higher-rank graphs that we consider in this paper arise from cube
complexes, and their double covers, as in Section 6 of [12]. We will review
this construction in the next section. We will also review the key preliminary
notions, including the definition of the real and complex $\displaystyle
C^{*}$-algebras based on higher rank graphs, real $\displaystyle K$-theory,
and the spectral sequence technology to calculate the $\displaystyle K$-theory
in the real case.
The geometric core of higher-rank graphs was introduced in [10]. Initially,
higher-rank graphs were defined as small categories in [9]. Connections of
higher rank graphs with geometry and combinatorics were known before (see, for
example, [7]), but the combinatorial analogue without reference to category theory
was given only in [10]. The higher-dimensional digraphs introduced in [10]
provide a bridge between cube complexes and higher-rank graphs. The
automorphism groups of cube complexes covered by products of trees induce
automorphism groups of higher-rank graphs. If the fundamental group of such a
cube complex has a subgroup of index two, then the double-cover of the cube
complex has an involution.
We would like to mention a couple of connections of the C*-algebras we
consider with other topics in mathematics. In [11], K-theory of C*-algebras is
used as an invariant of higher-dimensional Thompson groups, which are otherwise
very hard to distinguish. In [6], K-theory was studied in connection with
Matui’s HK-conjecture. The further development of both real and complex
$\displaystyle K$-theory will serve to increase the possibility of such
positive connections.
## 2\. Preliminaries
### 2.1. Higher rank graphs
We recall the definition of a $\displaystyle k$-graph due to Kumjian and Pask
[9]. For an integer $\displaystyle k\geq 1$, we view
$\displaystyle{\mathbb{N}}^{k}$ as a monoid under pointwise addition. A
$\displaystyle k$-graph is a countable small category $\displaystyle\Lambda$
together with an assignment of a _degree_ $\displaystyle
d(\mu)\in{\mathbb{N}}^{k}$ to every morphism $\displaystyle\mu\in\Lambda$ such
that for all $\displaystyle\mu,\nu,\pi\in\Lambda$ the following hold
1. (1)
$\displaystyle d(\mu\nu)=d(\mu)+d(\nu)$; and
2. (2)
whenever $\displaystyle d(\pi)=m+n$ for $\displaystyle
m,n\in{\mathbb{N}}^{k}$, there is a unique factorisation
$\displaystyle\pi=\mu\nu$ such that $\displaystyle d(\mu)=m$ and
$\displaystyle d(\nu)=n$.
Condition (2) is known as the _factorisation property_ in the $\displaystyle
k$-graph. The composition in $\displaystyle\mu\nu$ is understood in the sense
of morphisms, thus the source $\displaystyle s(\mu)$ of $\displaystyle\mu$
equals the range $\displaystyle r(\nu)$ of $\displaystyle\nu$. Note that the
morphisms of degree $\displaystyle 0$ (in $\displaystyle{\mathbb{N}}^{k}$) are
necessarily the identity morphisms in the category. Denote this set by
$\displaystyle\Lambda^{0}$, and refer to its elements as _vertices_ of
$\displaystyle\Lambda$. With $\displaystyle e_{1},\dots,e_{k}$ denoting the
generators of $\displaystyle{\mathbb{N}}^{k}$, the set
$\displaystyle\Lambda^{e_{i}}=\\{\lambda\in\Lambda\mid d(\lambda)=e_{i}\\}$
consists of edges (or morphisms) of degree $\displaystyle e_{i}$, for
$\displaystyle i=1,\dots,k$. We write $\displaystyle v\Lambda^{n}$ for the set
of morphisms of degree $\displaystyle n\in{\mathbb{N}}^{k}$ with range
$\displaystyle v$.
Throughout this paper we are concerned with $\displaystyle k$-graphs where
$\displaystyle\Lambda^{0}$ and all $\displaystyle\Lambda^{e_{i}}$,
$\displaystyle i=1,\dots,k$, are finite. A $\displaystyle k$-graph
$\displaystyle\Lambda$ such that $\displaystyle 0<\\#v\Lambda^{n}<\infty$ for
all $\displaystyle v\in\Lambda^{0}$ and all $\displaystyle
n\in{\mathbb{N}}^{k}$ is called source-free and row-finite. The _adjacency matrices_
$\displaystyle
M_{1},\dots,M_{k}\in\operatorname{Mat}_{\Lambda^{0}}({\mathbb{N}})$ of
$\displaystyle\Lambda$ are $\displaystyle\Lambda^{0}\times\Lambda^{0}$
matrices with
$\displaystyle M_{i}(v,w)=|v\Lambda^{e_{i}}w|.$
By the factorisation property, the matrices $\displaystyle M_{i}$ pairwise
commute for $\displaystyle i=1,\dots,k$.
### 2.2. Rank-$\displaystyle k$ graphs with two vertices
We now review the specific construction of two-vertex $\displaystyle k$-graphs
involving cube complexes discussed in Section 6 of [12]. The construction
consists of two steps: First, we construct a family of cube complexes with two
vertices, covered by products of $\displaystyle k$ trees, and second, we
explain how to get a $\displaystyle k$-graph from each complex. These
$\displaystyle k$-graphs happen to have a natural non-trivial involution
$\displaystyle\gamma$, which will be important later on. For the background on
cube complexes covered by products of $\displaystyle k$ trees, see [10], [12],
and [16].
Step 1. Let $\displaystyle X_{1},...,X_{k}$ be distinct alphabets, such that
$\displaystyle\left|X_{i}\right|=m_{i}$ and
$\displaystyle X_{i}=\\{x_{1}^{i},x_{2}^{i},...,x_{m_{i}}^{i}\\}.$
Let $\displaystyle F_{i}$ be the free group generated by $\displaystyle
X_{i}$. Then the direct product
$\displaystyle G=F_{1}\times F_{2}\times\ldots\times F_{k}$
of $\displaystyle k$ free groups $\displaystyle F_{1}$,$\displaystyle
F_{2}$,…,$\displaystyle F_{k}$ has a presentation
$\displaystyle G=\langle
X_{1},X_{2},...,X_{k}\mid[x^{i}_{s},x^{j}_{l}]=1,i\neq
j=1,...,k;s=1,...,m_{i};l=1,...,m_{j}\rangle,$
where $\displaystyle[x,y]$ means commutator $\displaystyle xyx^{-1}y^{-1}$.
The group $\displaystyle G$ acts simply transitively on a Cartesian product
$\displaystyle\Delta$ of $\displaystyle k$ trees $\displaystyle
T_{1},T_{2},..,T_{k}$ of valencies $\displaystyle 2m_{1},2m_{2},...,2m_{k}$
respectively. The quotient of this action is a cube complex $\displaystyle P$
with one vertex such that the universal cover of $\displaystyle P$ is
$\displaystyle\Delta$. The edges of the cube complex $\displaystyle P$ are
naturally equipped with orientations and labellings by elements of
$\displaystyle X=X_{1}\cup X_{2}...\cup X_{k}$ and the $\displaystyle
1$-skeleton of $\displaystyle P$ is a wedge of
$\displaystyle\sum_{i=1}^{k}m_{i}$ circles. We construct a family of double
covers of $\displaystyle P$ in the following way. A double cover
$\displaystyle P^{2}$ of $\displaystyle P$ has two vertices, say
$\displaystyle v_{1}$ and $\displaystyle v_{2}$. For each edge $\displaystyle
x$ there are two edges, say $\displaystyle x_{1}$ and $\displaystyle x_{2}$,
in the $\displaystyle 1$-skeleton of $\displaystyle P^{2}$. In fact there are
two choices for the structure of these edges: either both $\displaystyle
x_{1}$ and $\displaystyle x_{2}$ are loops, one at $\displaystyle v_{1}$ and
the other at $\displaystyle v_{2}$; or one of the edges $\displaystyle
x_{1},x_{2}$ points from $\displaystyle v_{1}$ to $\displaystyle v_{2}$ and
the other points from $\displaystyle v_{2}$ to $\displaystyle v_{1}$. We will
say that the edge pair $\displaystyle x_{1},x_{2}$ has type one in the first
case, and has type two otherwise.
For example, in Figure 1, the edge pair $\displaystyle b_{1},b_{2}$ is type 1
and the edge pair $\displaystyle a_{1},a_{2}$ is type 2. Figures 1,2,3,4,5
show that our double covers are well defined.
Fig. 1
In the double covers we consider, we stipulate that at least one edge pair has
type two (so the graph is connected) and that all of the edge pairs with
labels in the same set $\displaystyle X_{i}$ will have the same type (just for
convenience).
Fig. 2
Fig. 3
Fig. 4
Fig. 5
Step 2. We explain now how to construct a $\displaystyle k$-graph
$\displaystyle C$ from the cube complex $\displaystyle P^{2}$. The graph
$\displaystyle C$ will have the same set of vertices as $\displaystyle P^{2}$,
but the number of edges will double. Specifically, for each edge
$\displaystyle x$ in $\displaystyle P^{2}$, we obtain two edges $\displaystyle
x$ and $\displaystyle x^{\prime}$ where $\displaystyle s(x)=r(x^{\prime})$ and
$\displaystyle s(x^{\prime})=r(x)$. Furthermore, the degree of $\displaystyle
x$ and $\displaystyle x^{\prime}$ is $\displaystyle e_{i}$, descending from
the labels associated from the edges of $\displaystyle\Delta$ (colloquially we
say that the edges $\displaystyle x$ and $\displaystyle x^{\prime}$ have color
$\displaystyle i$). Each geometric square $\displaystyle abcd$ in
$\displaystyle P^{2}$ will give rise to four squares (or commutativity
relations) in $\displaystyle C$: namely,
$\displaystyle
ab=d^{\prime}c^{\prime},bc=a^{\prime}d^{\prime},cd=b^{\prime}a^{\prime},da=c^{\prime}b^{\prime}.$
For example, the square $\displaystyle a_{1}b_{2}a_{2}^{-1}b_{1}^{-1}$ (the
front face of the cube in Fig. 5) will give rise to four squares in
$\displaystyle C$: namely,
$\displaystyle
a_{1}b_{2}=b_{1}a_{2},~{}b_{2}a_{2}^{\prime}=a_{1}^{\prime}b_{1},~{}a_{2}^{\prime}b_{1}^{\prime}=b_{2}^{\prime}a_{1}^{\prime},~{}b_{1}^{\prime}a_{1}=a_{2}b_{2}^{\prime}.$
This completes the description of the construction of a large collection of
examples of rank-$\displaystyle k$ graphs. In the rest of this paper we will
consider graphs that arise from this construction, restricting our attention
to the ones in which $\displaystyle P$ has one vertex, so that the
rank-$\displaystyle k$ graph $\displaystyle C$ has two vertices (and the
number of edges is a multiple of 4).
To proceed, we need to define two special kinds of matrices,
$\displaystyle D_{i}=\begin{bmatrix}2m_{i}&0\\\
0&2m_{i}\end{bmatrix}\text{~{}and~{}}T_{i}=\begin{bmatrix}0&2m_{i}\\\
2m_{i}&0\end{bmatrix}.$
If the edges in $\displaystyle P$ associated with color $\displaystyle i$ have
lifts in $\displaystyle P^{2}$ that are type 2, then the adjacency matrix of
the rank-$\displaystyle k$ graph $\displaystyle C$ will have the form
$\displaystyle T_{i}$ for $\displaystyle i\in\\{1,2,\dots,k\\}$. If the lifts
are of type 1, then the adjacency matrix will have the form $\displaystyle
D_{i}$.
### 2.3. Real and Complex $\displaystyle C^{*}$-algebras
From these higher-rank graphs, we construct real and complex $\displaystyle
C^{*}$-algebras, following [2] and [9], as follows. For any row-finite source-
free k-graph $\displaystyle\Lambda$, $\displaystyle C^{*}(\Lambda)$ is the
universal complex $\displaystyle C^{*}$-algebra generated by partial
isometries $\displaystyle s_{\lambda}$, for $\displaystyle\lambda\in\Lambda$,
subject to the relations
1. (1)
For each $\displaystyle v\in\Lambda^{0}$, $\displaystyle s_{v}$ is a
projection, and $\displaystyle s_{v}s_{w}=\delta_{v,w}s_{v}$.
2. (2)
For each $\displaystyle\lambda\in\Lambda,\
s_{\lambda}^{*}s_{\lambda}=s_{s(\lambda)}$.
3. (3)
For each $\displaystyle\lambda,\mu\in\Lambda,\
s_{\lambda}s_{\mu}=s_{\lambda\mu}$.
4. (4)
For each $\displaystyle v\in\Lambda^{0}$ and each $\displaystyle
n\in{\mathbb{N}}^{k}$, $\displaystyle\displaystyle s_{v}=\sum_{\lambda\in
v\Lambda^{n}}s_{\lambda}s_{\lambda}^{*}.$
The real $\displaystyle C^{*}$-algebra $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ is the universal real
$\displaystyle C^{*}$-algebra generated by the same partial isometries
$\displaystyle s_{\lambda}$ as above subject to the same relations. We can and
do typically represent $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ as the real subalgebra of
$\displaystyle C^{*}(\Lambda)$ generated by $\displaystyle s_{\lambda}$. Thus
$\displaystyle C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ is the closure
of the set of all linear combinations of products of $\displaystyle
s_{\lambda}$ and $\displaystyle s_{\lambda}^{*}$.
In addition, there is an obvious involution $\displaystyle\gamma$ on the graph
$\displaystyle\Lambda$ that interchanges the two vertices and interchanges
pairs of edges in a way consistent with the action on the vertices. In this
situation, there is a different real $\displaystyle C^{*}$-algebra
$\displaystyle C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda,\gamma)$,
associated to this graph with involution, as constructed in [2], which is
represented as the real $\displaystyle C^{*}$-algebra in $\displaystyle
C^{*}(\Lambda)$ generated by the elements of the form $\displaystyle
zs_{\lambda}+\overline{z}s_{\gamma(\lambda)}$ for $\displaystyle
z\in{\mathbb{C}}$.
The two real $\displaystyle C^{*}$-algebras $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ and $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma)$ are both real
structures associated with $\displaystyle C^{*}(\Lambda)$, in the sense that
the complexification of each one is isomorphic to the complex $\displaystyle
C^{*}$-algebra $\displaystyle C^{*}(\Lambda)$. A typical problem in the theory
of real $\displaystyle C^{*}$-algebras is to identify up to isomorphism all of
the real structures associated with a given complex $\displaystyle
C^{*}$-algebra. The constructions of these real $\displaystyle C^{*}$-algebras
depend on the integer values of $\displaystyle m_{i}$ (for $\displaystyle
i\in\\{1,2,\dots,k\\}$), on the choices of the type of lifts for each
$\displaystyle i$ (that is, the form of the adjacency matrices $\displaystyle
M_{i}$), and the choice of whether we consider the real $\displaystyle
C^{*}$-algebra $\displaystyle C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$
or $\displaystyle C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma)$.
### 2.4. K-theory
In our work, we will use the abbreviated version of united $\displaystyle
K$-theory $\displaystyle K^{\scriptscriptstyle{\it CR}}(A)$ that was
introduced in [1] for real $\displaystyle C^{*}$-algebras. From Theorem 10.2
of [4], this invariant classifies the category of real purely infinite simple
$\displaystyle C^{*}$-algebras consisting of exactly those real $\displaystyle
C^{*}$-algebras whose complexifications fall under the classification theorem
for complex purely infinite simple $\displaystyle C^{*}$-algebras, by
Kirchberg and Phillips in [8] and [13]. This category includes all of the real
graph algebras we will consider in this paper. Specifically, for a real
$\displaystyle C^{*}$-algebra $\displaystyle A$ we define
$\displaystyle K^{\scriptscriptstyle{\it CR}}(A)=\\{KO_{*}(A),KU_{*}(A)\\}$
where $\displaystyle KO_{*}(A)$ is the standard 8-periodic real $\displaystyle
K$-theory for a real $\displaystyle C^{*}$-algebra and $\displaystyle
KU_{*}(A)=K_{*}({\mathbb{C}}\otimes_{\scriptscriptstyle{\mathbb{R}}}A)$ is the
standard 2-periodic $\displaystyle K$-theory of the complexification of
$\displaystyle A$. The invariant $\displaystyle K^{\scriptscriptstyle{\it
CR}}(A)$ also includes the natural transformations
$\displaystyle r_{i}\colon KU_{i}(A)\rightarrow KO_{i}(A)$, induced by the standard inclusion $\mathbb{C}\rightarrow M_{2}(\mathbb{R})$;
$\displaystyle c_{i}\colon KO_{i}(A)\rightarrow KU_{i}(A)$, induced by the standard inclusion $\mathbb{R}\rightarrow\mathbb{C}$;
$\displaystyle\psi_{i}\colon KU_{i}(A)\rightarrow KU_{i}(A)$, induced by conjugation $\mathbb{C}\rightarrow\mathbb{C}$;
$\displaystyle\eta_{i}\colon KO_{i}(A)\rightarrow KO_{i+1}(A)$, induced by multiplication by $\eta\in KO_{1}({\mathbb{R}})={\mathbb{Z}}_{2}$;
$\displaystyle\xi_{i}\colon KO_{i}(A)\rightarrow KO_{i+4}(A)$, induced by multiplication by $\xi\in KO_{4}({\mathbb{R}})={\mathbb{Z}}$.
The additional structure tends to aid in the computations of $\displaystyle
KO_{*}(A)$ because the natural transformations satisfy the relations
$\displaystyle rc=2,\quad cr=1+\psi,\quad 2\eta=0,\quad r\psi=r,\quad\psi^{2}=\mathrm{id},\quad\eta^{3}=0,\quad\psi c=c,\quad\psi\beta_{\scriptscriptstyle U}=-\beta_{\scriptscriptstyle U}\psi,\quad\xi=r\beta_{\scriptscriptstyle U}^{2}c,$
and they fit into a long exact sequence
$\displaystyle\cdots\xrightarrow{r\beta_{\scriptscriptstyle
U}^{-1}}KO_{i}(A)\xrightarrow{\eta}KO_{i+1}(A)\xrightarrow{c}KU_{i+1}(A)\xrightarrow{r\beta_{\scriptscriptstyle
U}^{-1}}KO_{i-1}(A)\xrightarrow{\eta}\cdots$
The target category of this functor is the category of all
$\displaystyle\mathcal{CR}$-modules.
To compute $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$, we will use the
spectral sequence of [2, Theorem 3.13], which generalizes the spectral
sequence of [5] for complex $\displaystyle K$-theory. The $\displaystyle
E^{2}$ page of the spectral sequence arises from the homology of a certain
chain complex $\displaystyle\mathcal{C}$, which is based on the
$\displaystyle\mathcal{CR}$-modules $\displaystyle K^{\scriptscriptstyle{\it
CR}}({\mathbb{R}})$ and $\displaystyle K^{\scriptscriptstyle{\it
CR}}({\mathbb{C}})$ and relies on the combinatorial data of
$\displaystyle\Lambda$ and $\displaystyle\gamma$. We will review the details
of the formation of this spectral sequence in our calculations in the
following sections. The spectral sequence converges to $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ in the sense
that there is a filtration of $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$, the subfactors
of which appear as the groups of the $\displaystyle E^{\infty}$ page.
Specifically, the groups $\displaystyle
KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ and
$\displaystyle KU_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$
are obtained from the groups
$\displaystyle(E^{\infty}_{p,q})^{\scriptscriptstyle O}$ and
$\displaystyle(E^{\infty}_{p,q})^{\scriptscriptstyle U}$, where $\displaystyle
p+q=n$.
This spectral sequence exists in the category of
$\displaystyle\mathcal{CR}$-modules, which means that it has both a real
component and a complex component, as alluded to above, and these components
are connected by the natural transformations including $\displaystyle r$ and
$\displaystyle c$. This is the case on each page of the spectral sequence
starting with the chain complex $\displaystyle\mathcal{C}$ and the natural
transformations commute with the differentials $\displaystyle d$. The complex
component of this spectral sequence coincides with the spectral sequence of
Evans in [5].
For reference, the groups of $\displaystyle K^{\scriptscriptstyle{\it
CR}}({\mathbb{R}})$ and $\displaystyle K^{\scriptscriptstyle{\it
CR}}({\mathbb{C}})$ are shown below from Tables 1 and 2 of [2].
Fig. 6 – $\displaystyle K^{\scriptscriptstyle{\it CR}}({\mathbb{R}})$
$\displaystyle\begin{array}{|c|cccccccc|}\hline n&0&1&2&3&4&5&6&7\\\hline KO_{n}({\mathbb{R}})&{\mathbb{Z}}&{\mathbb{Z}}_{2}&{\mathbb{Z}}_{2}&0&{\mathbb{Z}}&0&0&0\\ KU_{n}({\mathbb{R}})&{\mathbb{Z}}&0&{\mathbb{Z}}&0&{\mathbb{Z}}&0&{\mathbb{Z}}&0\\\hline c_{n}&1&0&0&0&2&0&0&0\\ r_{n}&2&0&1&0&1&0&0&0\\ \psi_{n}&1&0&-1&0&1&0&-1&0\\ \eta_{n}&1&1&0&0&0&0&0&0\\\hline\end{array}$
Fig. 7 – $\displaystyle K^{\scriptscriptstyle{\it CR}}({\mathbb{C}})$
$\displaystyle\begin{array}{|c|cccccccc|}\hline n&0&1&2&3&4&5&6&7\\\hline KO_{n}({\mathbb{C}})&{\mathbb{Z}}&0&{\mathbb{Z}}&0&{\mathbb{Z}}&0&{\mathbb{Z}}&0\\ KU_{n}({\mathbb{C}})&{\mathbb{Z}}^{2}&0&{\mathbb{Z}}^{2}&0&{\mathbb{Z}}^{2}&0&{\mathbb{Z}}^{2}&0\\\hline c_{n}&\bigl(\begin{smallmatrix}1\\1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1\\-1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1\\1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1\\-1\end{smallmatrix}\bigr)&0\\ r_{n}&\bigl(\begin{smallmatrix}1&1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1&-1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1&1\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}1&-1\end{smallmatrix}\bigr)&0\\ \psi_{n}&\bigl(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}0&-1\\-1&0\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\bigr)&0&\bigl(\begin{smallmatrix}0&-1\\-1&0\end{smallmatrix}\bigr)&0\\ \eta_{n}&0&0&0&0&0&0&0&0\\\hline\end{array}$
## 3\. The rank-3 case, with no involution
Let $\displaystyle\Lambda$ be a rank-3 graph of the form discussed above.
Specifically, $\displaystyle\Lambda$ is a two-vertex graph and the incidence
matrices $\displaystyle M_{i}$ for $\displaystyle\Lambda$ each have the form
$\displaystyle T_{i}=\begin{bmatrix}0&2n_{i}\\\
2n_{i}&0\end{bmatrix}\quad\text{or}\quad D_{i}=\begin{bmatrix}2m_{i}&0\\\
0&2m_{i}\end{bmatrix}$
for $\displaystyle i=1,2,3$ (with the restriction that at least one of the
incidence matrices must have the form $\displaystyle T_{i}$). From [12], we
have the complex $\displaystyle K$-theory $\displaystyle
KU_{*}(C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda))=K_{*}(C^{*}(\Lambda))$
given by
$\displaystyle K_{*}(C^{*}(\Lambda))=0$ if $\displaystyle g=1$;
$\displaystyle K_{0}(C^{*}(\Lambda))=\text{some extension of ${\mathbb{Z}}_{g}$ by ${\mathbb{Z}}_{g}$}$ if $\displaystyle g\geq 3$;
$\displaystyle K_{1}(C^{*}(\Lambda))={\mathbb{Z}}_{g}^{2}$ if $\displaystyle g\geq 3$.
However, in [12] the nature of the extension was not determined. Furthermore,
in the cases where more than one of the matrices $\displaystyle M_{i}$ has the
off-diagonal form, the formula for $\displaystyle g$ in [12] is incorrect. In
this section, we will compute both $\displaystyle
KO_{*}(C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda))$ and $\displaystyle
KU_{*}(C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda))$ (thereby determining
the previously unknown extension), and we correct the formula for
$\displaystyle g$.
Here we define $\displaystyle g$ as follows:
1. (1)
If $\displaystyle M_{1}=T_{1},M_{2}=D_{2},M_{3}=D_{3}$ then
$\displaystyle g=\gcd(1-4n_{1}^{2},1-2m_{2},1-2m_{3})\;.$
2. (2)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=D_{3}$ then
$\displaystyle g=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{1}n_{2},1-2m_{3})\;.$
3. (3)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=T_{3}$ then
$\displaystyle
g=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{3}^{2},1-4n_{1}n_{2},1-4n_{1}n_{3},1-4n_{2}n_{3})\;.$
As mentioned, this formula for $\displaystyle g$ agrees with [12] (Proposition
6.2) in case (1), but is a correction in cases (2) and (3).
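As a quick arithmetic illustration (the values $\displaystyle n_{1}=n_{2}=2$ and $\displaystyle m_{3}=3$ are chosen only as an example), the following Python snippet evaluates $\displaystyle g$ in case (2):

```python
from math import gcd
from functools import reduce

def g_case2(n1, n2, m3):
    """g = gcd(1-4 n1^2, 1-4 n2^2, 1-4 n1 n2, 1-2 m3), case (2) above."""
    return reduce(gcd, (abs(1 - 4*n1**2), abs(1 - 4*n2**2),
                        abs(1 - 4*n1*n2), abs(1 - 2*m3)))

print(g_case2(2, 2, 3))   # gcd(15, 15, 15, 5) = 5
```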
###### Proposition 3.1.
For the rank-3 graph described above, $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda))$ is given by the table
below, for $\displaystyle g\geq 3$.
Note that if $\displaystyle g=1$, then $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C_{\scriptscriptstyle{\mathbb{R}}}^{*}(\Lambda))=0$ in all degrees.
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$
$\displaystyle\begin{array}{|c|cccccccc|}\hline n&0&1&2&3&4&5&6&7\\\hline KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0\\ KU_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}\\\hline\end{array}$
###### Proof.
The graph $\displaystyle\Lambda$ has two vertices. So, following [2], we set
$\displaystyle\mathcal{A}=K^{\scriptscriptstyle{\it CR}}({\mathbb{R}})\oplus
K^{\scriptscriptstyle{\it CR}}({\mathbb{R}})$ and consider the chain complex
$\displaystyle
0\rightarrow\mathcal{A}\xrightarrow{\partial_{3}}\mathcal{A}^{3}\xrightarrow{\partial_{2}}\mathcal{A}^{3}\xrightarrow{\partial_{1}}\mathcal{A}\rightarrow
0$
the homology of which gives the $\displaystyle E^{2}$ page of a spectral
sequence which converges to $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}(\Lambda))$.
The complex part of this chain complex in degree 0 is exactly the chain
complex analyzed in the proof of Proposition 6.3 of [12], specifically we have
$\displaystyle
0\rightarrow{\mathbb{Z}}^{2}\xrightarrow{\partial_{3}}{\mathbb{Z}}^{6}\xrightarrow{\partial_{2}}{\mathbb{Z}}^{6}\xrightarrow{\partial_{1}}{\mathbb{Z}}^{2}\rightarrow
0$
where
$\displaystyle\displaystyle\partial_{1}$
$\displaystyle\displaystyle=\begin{bmatrix}I-M_{1}^{T}&I-M_{2}^{T}&I-M_{3}^{T}\end{bmatrix}$
$\displaystyle\displaystyle\partial_{2}$
$\displaystyle\displaystyle=\begin{bmatrix}-(I-M_{2}^{T})&-(I-M_{3}^{T})&0\\\
I-M_{1}^{T}&0&-(I-M_{3}^{T})\\\ 0&I-M_{1}^{T}&I-M_{2}^{T}\end{bmatrix}$
$\displaystyle\displaystyle\partial_{3}$
$\displaystyle\displaystyle=\begin{bmatrix}I-M_{3}^{T}\\\ -(I-M_{2}^{T})\\\
I-M_{1}^{T}\end{bmatrix}\;.$
We refer to Lemma 3.4 at the end of this section for the calculation of the Smith
normal forms of these matrices, which come out to the following:
$\displaystyle\mathcal{S}(\partial_{1})=\mathcal{S}(\partial_{3})^{T}=\begin{bmatrix}1&0&0&0&0&0\\ 0&g&0&0&0&0\end{bmatrix}\quad\text{and}\quad\mathcal{S}(\partial_{2})=\mathrm{diag}(1,1,g,g,0,0)\;.$
Thus the homology of this chain complex is $\displaystyle
H_{*}(\mathcal{C})=({\mathbb{Z}}_{g},{\mathbb{Z}}_{g}^{2},{\mathbb{Z}}_{g},0)$
in degrees $\displaystyle p=0,1,2,3$.
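As a hedged cross-check of this step (an illustrative computation; the concrete choice $\displaystyle M_{1}=T_{1}$ with $\displaystyle n_{1}=1$ and $\displaystyle M_{2}=D_{2}$, $\displaystyle M_{3}=D_{3}$ with $\displaystyle m_{2}=m_{3}=2$, giving $\displaystyle g=3$, is an assumption made only for the example), the Smith normal form of $\displaystyle\partial_{2}$ can be computed with sympy:

```python
from sympy import Matrix, eye, zeros
from sympy.matrices.normalforms import smith_normal_form

# Example: M1 = T1 with n1 = 1, M2 = D2 and M3 = D3 with m2 = m3 = 2,
# so g = gcd(1 - 4*n1**2, 1 - 2*m2, 1 - 2*m3) = gcd(3, 3, 3) = 3 (case (1)).
M1 = Matrix([[0, 2], [2, 0]])
M2 = Matrix([[4, 0], [0, 4]])
M3 = Matrix([[4, 0], [0, 4]])
A1, A2, A3 = [(eye(2) - M.T) for M in (M1, M2, M3)]
Z2 = zeros(2, 2)

# Middle boundary map of the chain complex, as given above (6 x 6)
d2 = Matrix.vstack(Matrix.hstack(-A2, -A3, Z2),
                   Matrix.hstack(A1, Z2, -A3),
                   Matrix.hstack(Z2, A1, A2))

print(smith_normal_form(d2))   # expected: diag(1, 1, 3, 3, 0, 0), per Lemma 3.4
```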
The real part of this chain complex has period 8. In degrees 0 and 4, it is
identical to the complex part
$\displaystyle
0\rightarrow{\mathbb{Z}}^{2}\xrightarrow{\partial_{3}}{\mathbb{Z}}^{6}\xrightarrow{\partial_{2}}{\mathbb{Z}}^{6}\xrightarrow{\partial_{1}}{\mathbb{Z}}^{2}\rightarrow
0$
and with the same partial maps, so the homology is the same. The real part of
this chain complex in degrees 1 and 2 consists of $\displaystyle 2$-torsion
subgroups
$\displaystyle
0\rightarrow{\mathbb{Z}}_{2}^{2}\xrightarrow{\partial_{3}}{\mathbb{Z}}_{2}^{6}\xrightarrow{\partial_{2}}{\mathbb{Z}}_{2}^{6}\xrightarrow{\partial_{1}}{\mathbb{Z}}_{2}^{2}\rightarrow
0$
but the matrices describing the partials are the same as above, modulo 2.
Since $\displaystyle g$ is odd, the chain complex is exact and the homology
vanishes.
Therefore, the $\displaystyle E^{2}$ page of the spectral sequence of
$\displaystyle K^{\scriptscriptstyle{\it CR}}(C^{*}(\Lambda))$ looks like the
following, in the real and complex parts.
$\displaystyle E^{2}_{p,q}$ (for $\displaystyle g$ odd)
Real part: only the rows $\displaystyle q\equiv 0,4\pmod 8$ are nonzero, each equal to
$\displaystyle({\mathbb{Z}}_{g},\ {\mathbb{Z}}_{g}^{2},\ {\mathbb{Z}}_{g},\ 0)$
in columns $\displaystyle p=0,1,2,3$.
Complex part: every even row $\displaystyle q$ equals
$\displaystyle({\mathbb{Z}}_{g},\ {\mathbb{Z}}_{g}^{2},\ {\mathbb{Z}}_{g},\ 0)$
in columns $\displaystyle p=0,1,2,3$, and every odd row vanishes.
Then the structure of this spectral sequence implies that there are no non-
trivial differentials. Therefore $\displaystyle E_{p,q}^{2}=E_{p,q}^{\infty}$
in both the real and complex parts. Furthermore, in the real case, there is
never more than one non-trivial group along a single diagonal $\displaystyle
p+q=i$ (for $\displaystyle i$ fixed), so there are no non-trivial extension
problems for $\displaystyle
KO_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$. Thus the real
$\displaystyle K$-theory is as shown in the table. For the complex part, we
get (repeating what was obtained in [12]) that $\displaystyle
KU_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ is an extension of
$\displaystyle{\mathbb{Z}}_{g}$ by $\displaystyle{\mathbb{Z}}_{g}$ and
$\displaystyle
KU_{1}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{2}$.
It remains to show that $\displaystyle
KU_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{2}$.
We make use of the natural transformation $\displaystyle c\colon
KO_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\rightarrow
KU_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ which can be traced
back from the spectral sequence as follows.
The map $\displaystyle c$ on $\displaystyle\mathcal{A}$ commutes with the
chain maps $\displaystyle\partial_{i}$ and induces the map $\displaystyle
c\colon(E^{2}_{p,q})^{\scriptscriptstyle
O}\rightarrow(E^{2}_{p,q})^{\scriptscriptstyle U}$. The maps $\displaystyle c$
on each page of the spectral sequence induces maps on the following pages, and
ultimately on the $\displaystyle E^{\infty}$ page. Finally the map
$\displaystyle c\colon
KO_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\rightarrow
KU_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ commutes with the
filtrations and on each subfactor is equal to the map $\displaystyle c$
obtained on the $\displaystyle E^{\infty}$ page.
The real and complex parts of $\displaystyle\mathcal{A}$ are isomorphic in
degree 0 and the complexification map $\displaystyle c_{0}$ on
$\displaystyle\mathcal{A}$ actually implements this isomorphism, as seen in
Figure 6. Furthermore, the maps $\displaystyle\partial_{i}$ are the same and
commute with $\displaystyle c_{0}$. Thus $\displaystyle c$ is an isomorphism
from the first row of the spectral sequence for $\displaystyle
KO_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ to the first row of
the spectral sequence for $\displaystyle
KU_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$.
We now focus on the filtration of the $\displaystyle E^{\infty}$ page giving
$\displaystyle KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ and
$\displaystyle KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$. Since
$\displaystyle c$ commutes with the filtration we obtain the following
diagram.
$\displaystyle\begin{CD}0@>>>(E^{\infty}_{0,2})^{\scriptscriptstyle O}@>>>KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>(E^{\infty}_{2,0})^{\scriptscriptstyle O}@>>>0\\
@.@VV{c}V@VV{c}V@VV{c}V@.\\
0@>>>(E^{\infty}_{0,2})^{\scriptscriptstyle U}@>>>KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>(E^{\infty}_{2,0})^{\scriptscriptstyle U}@>>>0\end{CD}$
which can be rewritten as
$\displaystyle\begin{CD}0@>>>0@>>>KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}@>>>0\\
@.@VV{c}V@VV{c}V@VV{c}V@.\\
0@>>>{\mathbb{Z}}_{g}@>>>KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}@>>>0\end{CD}$
Now since the vertical map $\displaystyle c$ on the right is an isomorphism,
and the horizontal map from $\displaystyle
KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ is an isomorphism,
the exact sequence on the bottom has a splitting. This proves that
$\displaystyle
KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{2}$,
and by periodicity $\displaystyle
KU_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{2}$
for all even $\displaystyle i$.
∎
###### Remark 3.2.
Comparing this result with the calculations of $\displaystyle
K^{\scriptscriptstyle{\it CR}}(\mathcal{O}_{n}^{\scriptstyle{\mathbb{R}}})$
from [1], we see that the $\displaystyle{\mathcal{C}R}$-module $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ decomposes as a direct
sum with four summands, each of which is isomorphic to $\displaystyle
K^{\scriptscriptstyle{\it CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}})$
or a certain suspension thereof. Specifically,
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong
K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}})\oplus(\Sigma^{-1}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}}))^{2}\oplus\Sigma^{-2}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}})\;.$
###### Remark 3.3.
In the special case that $\displaystyle M_{1}=T_{1}$, $\displaystyle
M_{2}=D_{2}$, $\displaystyle M_{3}=D_{3}$, the graph $\displaystyle\Lambda$
decomposes as a product graph (in the sense of Kumjian-Pask) of rank-1 graphs.
Specifically, we have
$\displaystyle\Lambda=\Lambda_{1}\times\Lambda_{2}\times\Lambda_{3}$
where $\displaystyle\Lambda_{1}$ is a graph with two vertices and
$\displaystyle 2n_{1}$ edges from each vertex to the other; and
$\displaystyle\Lambda_{2}$, $\displaystyle\Lambda_{3}$ are graphs with 1
vertex and $\displaystyle 2m_{i}$ loops. Therefore,
$\displaystyle C^{*}(\Lambda)=C^{*}(\Lambda_{1})\otimes
C^{*}(\Lambda_{2})\otimes C^{*}(\Lambda_{3})\;.$
It can further be shown that all factors in this product are isomorphic to
Cuntz algebras. Namely $\displaystyle
C^{*}(\Lambda_{1})\cong\mathcal{O}_{4n_{1}^{2}-1}$ and $\displaystyle
C^{*}(\Lambda_{i})\cong\mathcal{O}_{2m_{i}-1}$ (for $\displaystyle i=2,3$).
Similarly, at the level of real $\displaystyle C^{*}$-algebras we have
$\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)=C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda_{1})\otimes
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda_{2})\otimes
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda_{3})\;$
where $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda_{1})\cong\mathcal{O}^{\scriptstyle{\mathbb{R}}}_{4n_{1}^{2}-1}$
and $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda_{i})\cong\mathcal{O}^{\scriptstyle{\mathbb{R}}}_{2m_{i}-1}$
(for $\displaystyle i=2,3$).
Therefore, the spectral sequence calculation above gives us an approach to
calculating the $\displaystyle K$-theory of these products that is an
alternative to using the Künneth formula. The Künneth formula can be difficult
to apply when there are more than 2 factors and when there is torsion involved.
This is especially true in the real case.
Also, in the more general case (without restriction on the forms of
$\displaystyle M_{i}$), we find a posteriori (from Proposition 3.1) that the
$\displaystyle K$-theory depends only on the value of $\displaystyle g$. Using
the classification theorems for purely infinite simple $\displaystyle
C^{*}$-algebras (see the manuscripts of Kirchberg [8] and Phillips [13] in the
complex case, and the work of the first author and others in the real case
[4]), it follows that the isomorphism classes of $\displaystyle C^{*}(\Lambda)$
and $\displaystyle C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ depend only
on the value of $\displaystyle g$. Therefore, in all cases the $\displaystyle
C^{*}$-algebra $\displaystyle C^{*}(\Lambda)$ is isomorphic to an appropriate
product of three Cuntz algebras (with the same value of $\displaystyle g$), and
similarly the real $\displaystyle C^{*}$-algebra $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ is isomorphic to a product of
three real Cuntz algebras.
###### Lemma 3.4.
The Smith normal form of the matrices $\displaystyle\partial_{1}$,
$\displaystyle\partial_{2}$, $\displaystyle\partial_{3}$ (in the complex part
in degree 0) are equal to
$\displaystyle\mathcal{S}(\partial_{1})=\mathcal{S}(\partial_{3})^{T}=\begin{bmatrix}1&0&0&0&0&0\\\
0&g&0&0&0&0\end{bmatrix}\\\
\quad\text{and}\quad\mathcal{S}(\partial_{2})=\mathrm{diag}(1,1,g,g,0,0)\;.$
###### Proof.
We note that in Case (1) the proof given in [12] is correct.
In Case (2), we proceed as in the proof of Lemma 6.1 of [12], where it is
necessary to compute the Smith normal form of
$\displaystyle\partial_{1}=\begin{bmatrix}1&-2n_{1}&1&-2n_{2}&1-2m_{3}&0\\\
-2n_{1}&1&-2n_{2}&1&0&1-2m_{3}\end{bmatrix}\;.$
The list of the $\displaystyle 2\times 2$ minors (up to sign) of
$\displaystyle\partial_{1}$ is
(1)
$\begin{gathered}1-4n_{1}^{2},~{}1-4n_{2}^{2},~{}1-4n_{1}n_{2},~{}2(n_{1}-n_{2})\\\
1-2m_{3},~{}(1-2m_{3})^{2},~{}2n_{1}(1-2m_{3}),~{}2n_{2}(1-2m_{3})\end{gathered}$
As we are interested in the gcd of this list, we can clearly reduce everything
on the second line to just $\displaystyle 1-2m_{3}$. Furthermore, by Lemma 6.1
we can eliminate the last entry of the first row. Hence the
$\displaystyle\gcd$ of the $\displaystyle 2\times 2$ minors is $\displaystyle
g$ and $\displaystyle\mathcal{S}(\partial_{1})=\mathrm{diag}(1,g)$. The result
for $\displaystyle\partial_{3}$ is the same (up to transpose).
Now we consider $\displaystyle\partial_{2}$, where
$\displaystyle\partial_{2}=\begin{bmatrix}-1&2n_{2}&-(1-2m_{3})&0&0&0\\\
2n_{2}&-1&0&-(1-2m_{3})&0&0\\\ 1&-2n_{1}&0&0&-(1-2m_{3})&0\\\
-2n_{1}&1&0&0&0&-(1-2m_{3})\\\ 0&0&1&-2n_{1}&1&-2n_{2}\\\
0&0&-2n_{1}&1&-2n_{2}&1\end{bmatrix}\;$
Here the $\displaystyle\gcd$ of the list of $\displaystyle 2\times 2$ minors
is seen to be 1. Furthermore, each $\displaystyle 4\times 4$ minor is a
product of at least two factors from the list of $\displaystyle 2\times 2$
minors. It follows that the $\displaystyle\gcd$ of the list of $\displaystyle
4\times 4$ minors is $\displaystyle g^{2}$ and that
$\displaystyle\mathcal{S}(\partial_{2})=\mathrm{diag}(1,1,g,g,0,0)$.
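These Smith normal forms can also be spot-checked numerically. The sketch below is our own verification script (not part of the argument); the helper names and the parameter values for Case (2) are illustrative assumptions only. It recovers the nonzero invariant factors as ratios of determinantal divisors, i.e. gcds of $\displaystyle k\times k$ minors, exactly as in the proof above.

```python
from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    # Cofactor expansion along the first row; fine for the small sizes used here.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors_gcd(M, k):
    # gcd of all k x k minors of the integer matrix M (0 if they all vanish).
    vals = [det([[M[i][j] for j in cols] for i in rows])
            for rows in combinations(range(len(M)), k)
            for cols in combinations(range(len(M[0])), k)]
    return reduce(gcd, (abs(v) for v in vals), 0)

def invariant_factors(M):
    # Nonzero Smith normal form diagonal, via ratios of determinantal divisors.
    d, facs = 1, []
    for k in range(1, min(len(M), len(M[0])) + 1):
        dk = minors_gcd(M, k)
        if dk == 0:
            break
        facs.append(dk // d)
        d = dk
    return facs

# Sample Case (2) parameters (illustrative only): M1 = T1, M2 = T2, M3 = D3.
n1, n2, m3 = 2, 7, 3
a, b, c = 2 * n1, 2 * n2, 1 - 2 * m3
g = reduce(gcd, [abs(1 - a * a), abs(1 - b * b), abs(1 - a * b), abs(c)])
d1 = [[1, -a, 1, -b, c, 0],
      [-a, 1, -b, 1, 0, c]]
d2 = [[-1,  b, -c,  0,  0,  0],
      [ b, -1,  0, -c,  0,  0],
      [ 1, -a,  0,  0, -c,  0],
      [-a,  1,  0,  0,  0, -c],
      [ 0,  0,  1, -a,  1, -b],
      [ 0,  0, -a,  1, -b,  1]]
print("g =", g)
print("invariant factors of d1:", invariant_factors(d1))   # compare diag(1, g)
print("invariant factors of d2:", invariant_factors(d2))   # compare diag(1, 1, g, g)
```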
For Case (3), we consider
$\displaystyle\partial_{1}=\begin{bmatrix}1&-2n_{1}&1&-2n_{2}&1&-2n_{3}\\\
-2n_{1}&1&-2n_{2}&1&-2n_{3}&1\end{bmatrix}\;.$
The list of the $\displaystyle 2\times 2$ minors is
(2) $\begin{gathered}1-4n_{1}^{2},~{}1-4n_{2}^{2},~{}1-4n_{3}^{2},\\\
~{}1-4n_{1}n_{2},~{}1-4n_{1}n_{3},~{}1-4n_{2}n_{3}\\\
~{}2(n_{1}-n_{2}),~{}2(n_{1}-n_{3}),~{}2(n_{2}-n_{3})\;.\end{gathered}$
Using Lemma 6.3, we can eliminate the third row of this set of formulae,
giving the desired result. This proves the result for
$\displaystyle\partial_{1}$ and $\displaystyle\partial_{3}$. The calculation
for $\displaystyle\partial_{2}$ is now similar to that in Case (2).
∎
## 4\. The Rank-3 case, with a non-trivial involution
Now we consider the same rank-3 graph $\displaystyle\Lambda$ with a non-
trivial involution $\displaystyle\gamma$. The involution $\displaystyle\gamma$
swaps the two vertices and this extends consistently to an involution on all
higher-degree edges. The next proposition gives the $\displaystyle K$-theory of
the real $\displaystyle C^{*}$-algebra $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma)$.
Again, we have three cases to consider, depending on the structure of the
adjacency matrices.
1. (1)
If $\displaystyle M_{1}=T_{1},M_{2}=D_{2},M_{3}=D_{3}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd(1-2n_{1},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd(1+2n_{1},1-2m_{2},1-2m_{3})\;.$
2. (2)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=D_{3}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{1}n_{2},1-2m_{3})$
$\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd(1-2n_{1},1-2n_{2},1-2m_{3})$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd(1+2n_{1},1+2n_{2},1-2m_{3})\;.$
3. (3)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=T_{3}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{3}^{2},1-4n_{1}n_{2},1-4n_{1}n_{3},1-4n_{2}n_{3})$
$\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd(1-2n_{1},1-2n_{2},1-2n_{3})$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd(1+2n_{1},1+2n_{2},1+2n_{3})\;.$
Note that $\displaystyle g=hk$ in each case, because $\displaystyle 1-2n_{i}$
and $\displaystyle 1+2n_{i}$ are relatively prime, and by Lemmas 6.2 and 6.4.
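As an illustration, the identity $\displaystyle g=hk$ and the coprimality of $\displaystyle h$ and $\displaystyle k$ can be checked numerically from the Case (3) definitions. The sketch below is our own spot-check (not part of the argument); the sample range is arbitrary.

```python
from math import gcd
from functools import reduce
from itertools import product

def G(values):
    # gcd of a list of (possibly negative) integers
    return reduce(gcd, (abs(v) for v in values))

# Spot-check g = hk and gcd(h, k) = 1 for Case (3) over a small sample grid.
for n1, n2, n3 in product(range(2, 8), repeat=3):
    g = G([1 - 4 * n1 * n1, 1 - 4 * n2 * n2, 1 - 4 * n3 * n3,
           1 - 4 * n1 * n2, 1 - 4 * n1 * n3, 1 - 4 * n2 * n3])
    h = G([1 - 2 * n1, 1 - 2 * n2, 1 - 2 * n3])
    k = G([1 + 2 * n1, 1 + 2 * n2, 1 + 2 * n3])
    assert gcd(h, k) == 1 and g == h * k, (n1, n2, n3, g, h, k)
print("g = hk and gcd(h, k) = 1 in all sampled cases")
```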
###### Proposition 4.1.
Let $\displaystyle\Lambda$ be the rank-3 graph described above, with non-
trivial involution $\displaystyle\gamma$. Then $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ is given by the
table below.
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$
$\displaystyle\begin{array}[]{|c|c|c|c|c|c|c|c|c|}\hline\cr\hline\cr
n&\makebox[28.45274pt][c]{0}&\makebox[28.45274pt][c]{1}&\makebox[28.45274pt][c]{2}&\makebox[28.45274pt][c]{3}&\makebox[28.45274pt][c]{4}&\makebox[28.45274pt][c]{5}&\makebox[28.45274pt][c]{6}&\makebox[28.45274pt][c]{7}\\\
\hline\cr\hline\cr
KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}^{2}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{2}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}^{2}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{2}\\\
\hline\cr
KU_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}^{2}\\\
\hline\cr\hline\cr\end{array}$
###### Remark 4.2.
Note that
$\displaystyle{\mathbb{Z}}_{g}\cong{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}$.
So, similar to the previous examples, there is a direct sum decomposition.
Here it can be written as
$\displaystyle\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))\cong$
$\displaystyle\displaystyle\left(K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}})\oplus\left(\Sigma^{-1}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}})\right)^{2}\oplus\Sigma^{-2}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}})\right)$
$\displaystyle\displaystyle\oplus\left(\Sigma^{-4}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}})\oplus\left(\Sigma^{-5}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}})\right)^{2}\oplus\Sigma^{-6}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}})\right)\;.$
###### Remark 4.3.
Let $\displaystyle
A(n_{1},m_{2},m_{3})=C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma)$
be the real $\displaystyle C^{*}$-algebra obtained from a particular choice of
integers $\displaystyle n_{1},m_{2},m_{3}$ in Case (1). Let
$\displaystyle\widetilde{g}=\gcd(1-2m_{2},1-2m_{3})$. If $\displaystyle n_{1}$ and
$\displaystyle n^{\prime}_{1}$ are two positive integers satisfying
$\displaystyle n_{1}\equiv n_{1}^{\prime}\pmod{\widetilde{g}}$ then we have
$\displaystyle\displaystyle\gcd(1-4n_{1}^{2},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1-4(n_{1}^{\prime})^{2},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle\gcd(1-2n_{1},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1-2n_{1}^{\prime},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle\gcd(1+2n_{1},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1+2n_{1}^{\prime},1-2m_{2},1-2m_{3})\;.$
Then it follows by Proposition 4.1 that $\displaystyle
K^{\scriptscriptstyle{\it CR}}(A(n_{1},m_{2},m_{3}))\cong
K^{\scriptscriptstyle{\it CR}}(A(n_{1}^{\prime},m_{2},m_{3}))$ and therefore
using [4] that $\displaystyle A(n_{1},m_{2},m_{3})\cong
A(n_{1}^{\prime},m_{2},m_{3})$.
Suppose on the other hand, we replace $\displaystyle n_{1}$ by $\displaystyle
n^{\prime}_{1}$ where $\displaystyle n_{1}\equiv-
n_{1}^{\prime}\pmod{\widetilde{g}}$. Then we have
$\displaystyle\displaystyle\gcd(1-4n_{1}^{2},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1-4(n_{1}^{\prime})^{2},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle\gcd(1-2n_{1},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1+2n_{1}^{\prime},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle\gcd(1+2n_{1},1-2m_{2},1-2m_{3})$
$\displaystyle\displaystyle=\gcd(1-2n_{1}^{\prime},1-2m_{2},1-2m_{3})\;.$
Thus the roles of $\displaystyle h$ and $\displaystyle k$ are interchanged,
and it follows by Proposition 4.1 that $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(A(n_{1},m_{2},m_{3}))\cong\Sigma^{2}K^{\scriptscriptstyle{\it
CR}}(A(n_{1}^{\prime},m_{2},m_{3}))$. The real $\displaystyle C^{*}$-algebras
$\displaystyle A(n_{1},m_{2},m_{3})$ and $\displaystyle
A(n_{1}^{\prime},m_{2},m_{3})$ are not isomorphic in this case (provided $\displaystyle h\neq k$), though their
respective complexifications are, since $\displaystyle
KU_{*}(A(n_{1},m_{2},m_{3}))\cong KU_{*}(A(n_{1}^{\prime},m_{2},m_{3}))$.
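A quick numerical spot-check of these gcd identities (our own, not part of the remark; the parameter values and the small search ranges are arbitrary assumptions) is sketched below.

```python
from math import gcd

def G(*values):
    # gcd of the given (possibly negative) integers
    out = 0
    for v in values:
        out = gcd(out, abs(v))
    return out

m2, m3 = 8, 3                              # sample values only
gt = G(1 - 2 * m2, 1 - 2 * m3)             # \widetilde{g} = gcd(1 - 2 m2, 1 - 2 m3)
F = lambda x: G(x, 1 - 2 * m2, 1 - 2 * m3)

for n1 in range(2, 40):
    for t in range(1, 4):
        p = n1 + t * gt                    # p = n1 (mod gt)
        assert F(1 - 4 * n1 * n1) == F(1 - 4 * p * p)
        assert F(1 - 2 * n1) == F(1 - 2 * p) and F(1 + 2 * n1) == F(1 + 2 * p)
        q = t * gt - n1                    # q = -n1 (mod gt)
        if q > 1:
            assert F(1 - 2 * n1) == F(1 + 2 * q) and F(1 + 2 * n1) == F(1 - 2 * q)
print("gcd identities hold in all sampled cases")
```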
###### Proof of Proposition 4.1.
The complex part $\displaystyle
KU_{*}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ is the same as
what we obtained for $\displaystyle
KU_{*}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$, since both are
isomorphic to the $\displaystyle K$-theory of the complex graph algebra
associated to $\displaystyle\Lambda$, that is to $\displaystyle
K_{*}(C^{*}(\Lambda))$. But to find $\displaystyle
KO_{*}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ we go back to
the spectral sequence again.
By [2] there is again a chain complex
$\displaystyle
0\rightarrow\mathcal{A}\xrightarrow{\partial_{3}}\mathcal{A}^{3}\xrightarrow{\partial_{2}}\mathcal{A}^{3}\xrightarrow{\partial_{1}}\mathcal{A}\rightarrow
0$
the homology of which gives the $\displaystyle E^{2}$ page of a spectral
sequence which converges to $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$, but this time we have
$\displaystyle\mathcal{A}=K^{\scriptscriptstyle{\it CR}}({\mathbb{C}})$.
The real part of this chain complex in even degrees is
$\displaystyle
0\rightarrow{\mathbb{Z}}\xrightarrow{\partial_{3}}{\mathbb{Z}}^{3}\xrightarrow{\partial_{2}}{\mathbb{Z}}^{3}\xrightarrow{\partial_{1}}{\mathbb{Z}}\rightarrow
0$
and the real part vanishes in the odd degrees. In degree 0 we have
$\displaystyle\displaystyle\partial_{1}$
$\displaystyle\displaystyle=\begin{bmatrix}I-MO_{1}^{T}&I-MO_{2}^{T}&I-MO_{3}^{T}\end{bmatrix}$
$\displaystyle\displaystyle\partial_{2}$
$\displaystyle\displaystyle=\begin{bmatrix}-(I-MO_{2}^{T})&-(I-MO_{3}^{T})&0\\\
I-MO_{1}^{T}&0&-(I-MO_{3}^{T})\\\ 0&I-MO_{1}^{T}&I-MO_{2}^{T}\end{bmatrix}$
$\displaystyle\displaystyle\partial_{3}$
$\displaystyle\displaystyle=\begin{bmatrix}I-MO_{3}^{T}\\\ -(I-MO_{2}^{T})\\\
I-MO_{1}^{T}\end{bmatrix}$
and the $\displaystyle 1\times 1$ matrices for $\displaystyle I-MO_{i}^{T}$
are found from $\displaystyle I-M_{i}^{T}$ using the instructions from Table 3
in Section 3D of [2].
Now, we consider Case (1) specifically, so that we have
$\displaystyle I-M_{i}=\begin{cases}\begin{bmatrix}1&-2n_{i}\\\
-2n_{i}&1\end{bmatrix}&i=1\\\ ~{}\\\ \begin{bmatrix}1-2m_{i}&0\\\
0&1-2m_{i}\end{bmatrix}&i=2,3.\end{cases}$
We then find that
$\displaystyle I-MO_{i}=\begin{cases}1-2n_{i}&i=1\\\
1-2m_{i}&i=2,3\end{cases}$
(the rule here is that we add the entries in the first row of each
$\displaystyle 2\times 2$ matrix to get a new $\displaystyle 1\times 1$
matrix). So we have
$\displaystyle\partial_{1}=\partial_{3}^{T}=\begin{bmatrix}1-2n_{1}&1-2m_{2}&1-2m_{3}\end{bmatrix}\quad\text{and}\quad\partial_{2}=\begin{bmatrix}-(1-2m_{2})&-(1-2m_{3})&0\\\
1-2n_{1}&0&-(1-2m_{3})\\\ 0&1-2n_{1}&1-2m_{2}\end{bmatrix}$
We claim that
$\displaystyle
S(\partial_{1})=S(\partial_{3}^{T})=\begin{bmatrix}h&0&0\end{bmatrix}\quad\text{and}\quad
S(\partial_{2})=\begin{bmatrix}h&0&0\\\ 0&h&0\\\ 0&0&0\end{bmatrix}$
where $\displaystyle h=\gcd(1-2n_{1},1-2m_{2},1-2m_{3})$. The statements about
$\displaystyle S(\partial_{1})$ and $\displaystyle S(\partial_{3})$ are clear,
but for $\displaystyle S(\partial_{2})$ first note that
$\displaystyle\partial_{2}$ has rank 2. The $\displaystyle\gcd$ of all the
entries of $\displaystyle\partial_{2}$ is $\displaystyle h$, while the
$\displaystyle\gcd$ of all the $\displaystyle 2\times 2$ minors
$\displaystyle(1-2n_{1})(1-2m_{2}),(1-2n_{1})(1-2m_{3}),(1-2m_{2})(1-2m_{3}),(1-2n_{1})^{2},(1-2m_{2})^{2},(1-2m_{3})^{2}\;$
is $\displaystyle h^{2}$ (the three squares appear in the list, so no prime can
divide this $\displaystyle\gcd$ to a higher power than it divides $\displaystyle h^{2}$).
From this the statement about $\displaystyle\mathcal{S}(\partial_{2})$ follows.
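(A brief numerical spot-check of this gcd computation, ours and with arbitrary sample values, is sketched below.)

```python
from math import gcd
from functools import reduce

G = lambda vals: reduce(gcd, (abs(v) for v in vals))

n1, m2, m3 = 8, 2, 5                          # sample values only
a, b, c = 1 - 2 * n1, 1 - 2 * m2, 1 - 2 * m3  # here: -15, -3, -9
h = G([a, b, c])
minors = [a * b, a * c, b * c, a * a, b * b, c * c]
print(G(minors), "==", h * h)                 # gcd of the 2x2 minors equals h^2
```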
The result is that the homology of this chain complex is $\displaystyle
H_{*}(\mathcal{C})=({\mathbb{Z}}_{h},{\mathbb{Z}}_{h}^{2},{\mathbb{Z}}_{h},0)$
in degrees $\displaystyle i=0,1,2,3$, and this gives us row 0 of the
$\displaystyle E^{2}$ page of the real part of the spectral sequence.
For row 2 of the spectral sequence, Table 3 of [2] dictates that we subtract
instead of add the adjacent entries of $\displaystyle I-M_{i}$ so we have
$\displaystyle\partial_{1}=\begin{bmatrix}1+2n_{1}&1-2m_{2}&1-2m_{3}\end{bmatrix}\quad\text{and}\quad\partial_{2}=\begin{bmatrix}-(1-2m_{2})&1-2m_{3}&0\\\
1+2n_{1}&0&-(1-2m_{3})\\\ 0&-(1+2n_{1})&1-2m_{2}\end{bmatrix}$
Thus
$\displaystyle
S(\partial_{1})=S(\partial_{3}^{T})=\begin{bmatrix}k&0&0\end{bmatrix}\quad\text{and}\quad
S(\partial_{2})=\begin{bmatrix}k&0&0\\\ 0&k&0\\\ 0&0&0\end{bmatrix}$
where $\displaystyle k=\gcd(1+2n_{1},1-2m_{2},1-2m_{3})$. The homology of the
chain complex is $\displaystyle
H_{*}(\mathcal{C})=({\mathbb{Z}}_{k},{\mathbb{Z}}_{k}^{2},{\mathbb{Z}}_{k},0)$
in degrees $\displaystyle i=0,1,2,3$ and this gives us row 2 of the
$\displaystyle E^{2}$ page of the real part of the spectral sequence.
Rows 4 and 6 are the same as rows 0 and 2, respectively. So the $\displaystyle
E^{2}$ page of the spectral sequence is the following. For both the real and
complex parts, we have $\displaystyle E^{2}=E^{\infty}$, because no non-zero
differentials are possible.
$\displaystyle E^{2}_{p,q}$ (for $\displaystyle g$ odd)
$\displaystyle\begin{array}{c|cccc}\multicolumn{5}{c}{\underline{\text{real part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0\\
6&{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{2}&{\mathbb{Z}}_{k}&0\\
5&0&0&0&0\\
4&{\mathbb{Z}}_{h}&{\mathbb{Z}}_{h}^{2}&{\mathbb{Z}}_{h}&0\\
3&0&0&0&0\\
2&{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{2}&{\mathbb{Z}}_{k}&0\\
1&0&0&0&0\\
0&{\mathbb{Z}}_{h}&{\mathbb{Z}}_{h}^{2}&{\mathbb{Z}}_{h}&0\\
\hline
&0&1&2&3
\end{array}\qquad\begin{array}{c|cccc}\multicolumn{5}{c}{\underline{\text{complex part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0\\
6&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0\\
5&0&0&0&0\\
4&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0\\
3&0&0&0&0\\
2&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0\\
1&0&0&0&0\\
0&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{2}&{\mathbb{Z}}_{g}&0\\
\hline
&0&1&2&3
\end{array}$
From the spectral sequence we immediately find the isomorphism class of
$\displaystyle KO_{j}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$
when $\displaystyle j$ is odd. Now, for $\displaystyle j$ even there is a
short exact sequence
$\displaystyle 0\rightarrow{\mathbb{Z}}_{h}\rightarrow
KO_{j}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))\rightarrow{\mathbb{Z}}_{k}\rightarrow
0\;,$
or
$\displaystyle 0\rightarrow{\mathbb{Z}}_{k}\rightarrow
KO_{j}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))\rightarrow{\mathbb{Z}}_{h}\rightarrow
0\;,$
depending on the parity of $\displaystyle j/2$. But since $\displaystyle h,k$
are relatively prime we must have $\displaystyle
KO_{j}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))\cong{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}\cong{\mathbb{Z}}_{g}$
in both cases.
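Indeed, the splitting used here is the standard fact that there are no non-trivial extensions between finite cyclic groups of coprime orders:
$\displaystyle\operatorname{Ext}^{1}_{{\mathbb{Z}}}({\mathbb{Z}}_{k},{\mathbb{Z}}_{h})\cong{\mathbb{Z}}_{\gcd(h,k)}=0\quad\text{and}\quad\operatorname{Ext}^{1}_{{\mathbb{Z}}}({\mathbb{Z}}_{h},{\mathbb{Z}}_{k})\cong{\mathbb{Z}}_{\gcd(h,k)}=0\;,$
so both short exact sequences split.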
The proofs in Cases (2) and (3) proceed similarly. ∎
## 5\. Rank-4 graph with 2 vertices
Now let $\displaystyle\Lambda$ be the rank-4 graph with 2 vertices discussed
in Section 6 of [12]. In Proposition 6.4 of [12] some partial results are
described for $\displaystyle K_{*}(C^{*}(\Lambda))$. We again find that using
the real and complex $\displaystyle K$-theory together, we can complete these
computations. In this section we present the $\displaystyle K$-theory of both
real $\displaystyle C^{*}$-algebras $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda)$ and $\displaystyle
C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma)$ where
$\displaystyle\gamma$ is the non-trivial involution. We also show the
additional complications, for graphs of rank greater than 4, which prevent us
from making further progress.
The adjacency matrices $\displaystyle M_{i}$ for $\displaystyle\Lambda$ are
all of the form $\displaystyle T_{i}$ or $\displaystyle D_{i}$, as before. We
define $\displaystyle g$ as follows, according to the four possible cases. We
also define $\displaystyle h$ and $\displaystyle k$ for reference when
describing $\displaystyle
KO_{*}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$.
1. (1)
If $\displaystyle M_{1}=T_{1},M_{2}=D_{2},M_{3}=D_{3},M_{4}=D_{4}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd\\{1-4n_{1}^{2},1-2m_{k}\mid
k\in\\{2,3,4\\}\\}$ $\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd\\{1-2n_{1},1-2m_{k}\mid k\in\\{2,3,4\\}\\}$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd\\{1+2n_{1},1-2m_{k}\mid k\in\\{2,3,4\\}\\}$
2. (2)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=D_{3},M_{4}=D_{4}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j},1-2m_{k}\mid
i,j\in\\{1,2\\},k\in\\{3,4\\}\\}$ $\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd\\{1-2n_{i},1-2m_{k}\mid
i\in\\{1,2\\},k\in\\{3,4\\}\\}$ $\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd\\{1+2n_{i},1-2m_{k}\mid
i\in\\{1,2\\},k\in\\{3,4\\}\\}\;.$
3. (3)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=T_{3},M_{4}=D_{4}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j},1-2m_{4}\mid
i,j\in\\{1,2,3\\}\\}$ $\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd\\{1-2n_{i},1-2m_{4}\mid i\in\\{1,2,3\\}\\}$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd\\{1+2n_{i},1-2m_{4}\mid
i\in\\{1,2,3\\}\\}\;.$
4. (4)
If $\displaystyle M_{1}=T_{1},M_{2}=T_{2},M_{3}=T_{3},M_{4}=T_{4}$ then
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j}\mid
i,j\in\\{1,2,3,4\\}\\}$ $\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd\\{1-2n_{i}\mid i\in\\{1,2,3,4\\}\\}$
$\displaystyle\displaystyle k$ $\displaystyle\displaystyle=\gcd\\{1+2n_{i}\mid
i\in\\{1,2,3,4\\}\\}\;.$
Then using the methods of the two previous sections, we obtain the following
propositions.
###### Proposition 5.1.
For the rank-4 graph described above, with non-trivial involution
$\displaystyle\gamma$, we have $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ and $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ given by the
tables below, for $\displaystyle g\geq 3$. If $\displaystyle g=1$, then
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))=K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))=0$.
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$
$\displaystyle\begin{array}[]{|c|c|c|c|c|c|c|c|c|}\hline\cr\hline\cr
n&\makebox[28.45274pt][c]{0}&\makebox[28.45274pt][c]{1}&\makebox[28.45274pt][c]{2}&\makebox[28.45274pt][c]{3}&\makebox[28.45274pt][c]{4}&\makebox[28.45274pt][c]{5}&\makebox[28.45274pt][c]{6}&\makebox[28.45274pt][c]{7}\\\
\hline\cr\hline\cr
KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}\\\
\hline\cr
KU_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}\\\
\hline\cr\hline\cr\end{array}$
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$
$\displaystyle\begin{array}[]{|c|c|c|c|c|c|c|c|c|}\hline\cr\hline\cr
n&\makebox[28.45274pt][c]{0}&\makebox[28.45274pt][c]{1}&\makebox[28.45274pt][c]{2}&\makebox[28.45274pt][c]{3}&\makebox[28.45274pt][c]{4}&\makebox[28.45274pt][c]{5}&\makebox[28.45274pt][c]{6}&\makebox[28.45274pt][c]{7}\\\
\hline\cr\hline\cr
KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{h}^{3}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}^{3}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{h}^{3}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}^{3}\oplus{\mathbb{Z}}_{k}&{\mathbb{Z}}_{h}\oplus{\mathbb{Z}}_{k}^{3}\\\
\hline\cr
KU_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}&{\mathbb{Z}}_{g}^{4}\\\
\hline\cr\hline\cr\end{array}$
###### Proof.
We first consider the spectral sequence for $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$. The incidence matrices
$\displaystyle M_{i}$ for $\displaystyle\Lambda$ are of the form
$\displaystyle D_{i}$ and $\displaystyle T_{i}$ as above, for $\displaystyle
i=1,2,3,4$. The chain complex we consider is
$\displaystyle
0\rightarrow\mathcal{A}\xrightarrow{\partial_{4}}\mathcal{A}^{4}\xrightarrow{\partial_{3}}\mathcal{A}^{6}\xrightarrow{\partial_{2}}\mathcal{A}^{4}\xrightarrow{\partial_{1}}\mathcal{A}\rightarrow
0$
where $\displaystyle\mathcal{A}=K^{\scriptscriptstyle{\it
CR}}({\mathbb{R}})\oplus K^{\scriptscriptstyle{\it CR}}({\mathbb{R}})$. The
analysis of Propositions 6.3 and 6.4 of [12] obtains the following chain
complex in the degree 0 complex part,
$\displaystyle
0\rightarrow{\mathbb{Z}}^{2}\xrightarrow{\partial_{4}}{\mathbb{Z}}^{8}\xrightarrow{\partial_{3}}{\mathbb{Z}}^{12}\xrightarrow{\partial_{2}}{\mathbb{Z}}^{8}\xrightarrow{\partial_{1}}{\mathbb{Z}}^{2}\rightarrow
0\;.$
Now, following the method of calculation of [12] but using the corrections as
in Section 3 we find the following
$\displaystyle\displaystyle\mathcal{S}(\partial_{1})=\mathcal{S}(\partial_{4})^{T}$
$\displaystyle\displaystyle=\begin{bmatrix}1&0&\textbf{0}\\\
0&g&\textbf{0}\end{bmatrix}\text{~{}in~{}}M_{2,8}({\mathbb{Z}})$
$\displaystyle\displaystyle\text{and}\quad\mathcal{S}(\partial_{2})=\mathcal{S}(\partial_{3})^{T}$
$\displaystyle\displaystyle=\begin{bmatrix}I_{3}&0_{3}&\textbf{0}\\\
0_{3}&gI_{3}&\textbf{0}\\\
\textbf{0}&\textbf{0}&\textbf{0}\end{bmatrix}\text{~{}in~{}}M_{8,12}({\mathbb{Z}})$
Thus in the even degree complex part we have $\displaystyle
H_{*}(\mathcal{C})=({\mathbb{Z}}_{g},{\mathbb{Z}}_{g}^{3},{\mathbb{Z}}_{g}^{3},{\mathbb{Z}}_{g},0)$.
The real part of the chain complex in degrees 0 and 4 is the same as the
complex part. The real part in degrees 1 and 2 consists of the same matrices
modulo 2, so $\displaystyle
H_{*}(\mathcal{C})=0$ in those degrees. The $\displaystyle E^{2}$ page of the
spectral sequence in the real and complex parts are then given by
$\displaystyle E^{2}_{p,q}$ (for $\displaystyle g$ odd)
$\displaystyle\begin{array}{c|ccccc}\multicolumn{6}{c}{\underline{\text{real part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0&0\\
6&0&0&0&0&0\\
5&0&0&0&0&0\\
4&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
3&0&0&0&0&0\\
2&0&0&0&0&0\\
1&0&0&0&0&0\\
0&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
\hline
&0&1&2&3&4
\end{array}\qquad\begin{array}{c|ccccc}\multicolumn{6}{c}{\underline{\text{complex part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0&0\\
6&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
5&0&0&0&0&0\\
4&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
3&0&0&0&0&0\\
2&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
1&0&0&0&0&0\\
0&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
\hline
&0&1&2&3&4
\end{array}$
We note that again, the complexification map $\displaystyle c$ is an
isomorphism on $\displaystyle E^{2}_{p,0}$, the bottom row of the spectral
sequence. In the real part of this spectral sequence all differentials must
vanish, but in the complex part, while $\displaystyle d_{2}=0$, there appear to
be possibly non-zero differentials $\displaystyle d_{3}$. We use the map
$\displaystyle c$ to show that $\displaystyle d_{3}=0$. The complexification
map $\displaystyle c$ is an isomorphism in degree 0 on
$\displaystyle\mathcal{A}$ and passes to a map $\displaystyle c$ which is an
isomorphism on the first row of the
$\displaystyle E^{2}=E^{3}$ pages of the spectral sequence. Furthermore,
$\displaystyle c$ commutes with $\displaystyle d_{3}$. In particular there is
a commutative diagram
$\displaystyle\begin{CD}(E_{3,0}^{3})^{\scriptscriptstyle O}@>{d_{3}}>>(E_{0,2}^{3})^{\scriptscriptstyle O}\\
@VV{c}V@VV{c}V\\
(E_{3,0}^{3})^{\scriptscriptstyle U}@>{d_{3}}>>(E_{0,2}^{3})^{\scriptscriptstyle U}\end{CD}\qquad\text{or}\qquad\begin{CD}{\mathbb{Z}}_{g}@>{d_{3}}>>0\\
@VV{c}V@VV{c}V\\
{\mathbb{Z}}_{g}@>{d_{3}}>>{\mathbb{Z}}_{g}\end{CD}$
Since $\displaystyle(E_{0,2}^{3})^{\scriptscriptstyle O}=0$ and $\displaystyle
c_{3,0}^{3}$ is an isomorphism, the commutative diagram forces $\displaystyle
d_{3}\colon(E_{3,0}^{3})^{\scriptscriptstyle
U}\rightarrow(E_{0,2}^{3})^{\scriptscriptstyle U}$ to vanish. By periodicity
$\displaystyle d_{3}$ vanishes on all
$\displaystyle(E_{3,i}^{3})^{\scriptscriptstyle U}$. Thus $\displaystyle
E^{2}=E^{3}=E^{\infty}$ on both the real and complex parts.
Now, in the real part there are no extension problems so the calculation of
$\displaystyle KO_{*}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ is
complete. But in the complex part, there are questions of extensions for both
$\displaystyle KU_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ and
$\displaystyle KU_{1}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$. In
both cases, we use the complexification map $\displaystyle c$ as in the rank 3
case to show that the extension is split. First we use the $\displaystyle
p+q=2$ diagonal of the spectral sequence to find a diagram involving
$\displaystyle c\colon
KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\rightarrow
KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$.
$\displaystyle\begin{CD}0@>>>0@>>>KO_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}^{3}@>>>0\\
@.@VV{c}V@VV{c}V@VV{c}V@.\\
0@>>>{\mathbb{Z}}_{g}@>>>KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}^{3}@>>>0\end{CD}$
The vertical map $\displaystyle c$ on the right is an isomorphism, coming from
the first row of the spectral sequence. This shows that the extension on the
bottom of the diagram has a splitting and thus that $\displaystyle
KU_{2}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}\oplus{\mathbb{Z}}_{g}^{3}\cong{\mathbb{Z}}_{g}^{4}$.
Using the $\displaystyle p+q=3$ diagonal, we obtain the diagram
$\displaystyle\begin{CD}0@>>>0@>>>KO_{3}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}@>>>0\\
@.@VV{c}V@VV{c}V@VV{c}V@.\\
0@>>>{\mathbb{Z}}_{g}^{3}@>>>KU_{3}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))@>>>{\mathbb{Z}}_{g}@>>>0\end{CD}$
where again the vertical map $\displaystyle c$ is an isomorphism and again we
find that $\displaystyle
KU_{3}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{4}$.
Therefore $\displaystyle
KU_{i}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong{\mathbb{Z}}_{g}^{4}$
for all $\displaystyle i$. This completes the calculation of $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$.
For $\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$, the
$\displaystyle E^{2}$ page of the spectral sequence in the real and complex
parts is as follows.
$\displaystyle E^{2}_{p,q}$ (for $\displaystyle g$ odd)
$\displaystyle\begin{array}{c|ccccc}\multicolumn{6}{c}{\underline{\text{real part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0&0\\
6&{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{k}&0\\
5&0&0&0&0&0\\
4&{\mathbb{Z}}_{h}&{\mathbb{Z}}_{h}^{3}&{\mathbb{Z}}_{h}^{3}&{\mathbb{Z}}_{h}&0\\
3&0&0&0&0&0\\
2&{\mathbb{Z}}_{k}&{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{k}^{3}&{\mathbb{Z}}_{k}&0\\
1&0&0&0&0&0\\
0&{\mathbb{Z}}_{h}&{\mathbb{Z}}_{h}^{3}&{\mathbb{Z}}_{h}^{3}&{\mathbb{Z}}_{h}&0\\
\hline
&0&1&2&3&4
\end{array}\qquad\begin{array}{c|ccccc}\multicolumn{6}{c}{\underline{\text{complex part}}}\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\
7&0&0&0&0&0\\
6&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
5&0&0&0&0&0\\
4&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
3&0&0&0&0&0\\
2&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
1&0&0&0&0&0\\
0&{\mathbb{Z}}_{g}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}^{3}&{\mathbb{Z}}_{g}&0\\
\hline
&0&1&2&3&4
\end{array}$
The complex part of this is the same as what we analyzed in the first part of
this proof. For the real part, since $\displaystyle h$ and $\displaystyle k$
are relatively prime, we have $\displaystyle d_{r}=0$ for all $\displaystyle
r$, so $\displaystyle(E^{2}_{p,q})^{\scriptscriptstyle
O}=(E^{\infty}_{p,q})^{\scriptscriptstyle O}$. Furthermore, along each
diagonal $\displaystyle p+q=n$, the extensions determining $\displaystyle
KO_{n}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ must be direct
sums, again because $\displaystyle h$ and $\displaystyle k$ are relatively
prime. ∎
###### Remark 5.2.
The $\displaystyle{\mathcal{C}R}$-modules $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ and $\displaystyle
K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))$ can be seen to
decompose as direct sums, each with eight summands,
$\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))\cong
K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}})\oplus(\Sigma^{-1}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus(\Sigma^{-2}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus\Sigma^{-3}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{g+1}^{\scriptstyle{\mathbb{R}}})\;$
and
$\displaystyle\displaystyle K^{\scriptscriptstyle{\it
CR}}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda,\gamma))\cong$
$\displaystyle\displaystyle\left(K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}})\oplus(\Sigma^{-1}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus(\Sigma^{-2}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus\Sigma^{-3}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{h+1}^{\scriptstyle{\mathbb{R}}})\right)$
$\displaystyle\displaystyle\oplus\left(K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}})\oplus(\Sigma^{-1}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus(\Sigma^{-2}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}}))^{3}\oplus\Sigma^{-3}K^{\scriptscriptstyle{\it
CR}}(\mathcal{O}_{k+1}^{\scriptstyle{\mathbb{R}}})\right)\;.$
###### Remark 5.3.
We note that if the rank $\displaystyle k\geq 5$, there will be an extra non-zero
column in these spectral sequences. We can still analyze the spectral sequence
and find that $\displaystyle(E^{2}_{p,q})^{\scriptscriptstyle
O}=(E^{\infty}_{p,q})^{\scriptscriptstyle O}$ and
$\displaystyle(E^{2}_{p,q})^{\scriptscriptstyle
U}=(E^{\infty}_{p,q})^{\scriptscriptstyle U}$, using similar arguments as in
the previous cases. However, there will be extension problems that we are
unable to determine. For example, $\displaystyle
KO_{0}(C^{*}_{\scriptscriptstyle{\mathbb{R}}}(\Lambda))$ will be an extension of
$\displaystyle{\mathbb{Z}}_{g}$ by $\displaystyle{\mathbb{Z}}_{g}$ with no
clear way to determine the isomorphism class of the group.
Furthermore, if the rank $\displaystyle k\geq 6$, there is a more fundamental problem.
There will be two extra non-zero columns in the spectral sequences used in
these computations. As a result, there will be the possibility of a non-zero
differential, say $\displaystyle d_{5}\colon E^{5}_{5,0}\rightarrow
E^{5}_{0,4}$, in both the real and complex parts. We have no clear way of
determining the value of this differential. In general, we have no
understanding of how the differential maps $\displaystyle d_{r}$ relate to the
structure of the higher rank graph.
## 6\. Appendix: Number Theory Lemmas
###### Lemma 6.1.
Let $\displaystyle n_{1}$ and $\displaystyle n_{2}$ be positive integers,
greater than 1. Then $\displaystyle g_{1}=g_{2}=g_{3}$ where
$\displaystyle\displaystyle g_{1}$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{1}n_{2})$
$\displaystyle\displaystyle g_{2}$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},2n_{1}-2n_{2})$
$\displaystyle\displaystyle g_{3}$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{1}n_{2},2n_{1}-2n_{2})$
###### Proof.
Suppose that $\displaystyle p^{k}$ is an odd prime power such that
$\displaystyle p^{k}|\gcd(1-4n_{1}^{2},1-4n_{2}^{2})$. Since $\displaystyle
1-4n_{1}^{2}=(1-2n_{1})(1+2n_{1})$, and since $\displaystyle 1-2n_{1}$ and
$\displaystyle 1+2n_{1}$ are relatively prime, we have either $\displaystyle
p^{k}|(1-2n_{1})$ or $\displaystyle p^{k}|(1+2n_{1})$. If $\displaystyle
p^{k}|(1-2n_{1})$ then $\displaystyle p^{k}$ divides
$\displaystyle(1-2n_{1})(1+2n_{2})=(1-4n_{1}n_{2})-2(n_{1}-n_{2})\;.$
It follows that if $\displaystyle p^{k}$ divides one of $\displaystyle
1-4n_{1}n_{2}$ and $\displaystyle 2(n_{1}-n_{2})$, then it divides both.
Similarly, if $\displaystyle p^{k}|(1+2n_{1})$ we find that $\displaystyle
p^{k}$ divides
$\displaystyle(1+2n_{1})(1-2n_{2})=(1-4n_{1}n_{2})+2(n_{1}-n_{2})$
and the same conclusion follows. Since each of $\displaystyle g_{1},g_{2},g_{3}$ divides the odd integer $\displaystyle\gcd(1-4n_{1}^{2},1-4n_{2}^{2})$, only odd prime powers are relevant, so the three gcds have the same prime-power divisors and are therefore equal. This proves the lemma. ∎
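A brute-force numerical spot-check of the lemma (our own, over an arbitrary small range of sample values) is sketched below.

```python
from math import gcd
from functools import reduce

G = lambda vals: reduce(gcd, (abs(v) for v in vals))

for n1 in range(2, 60):
    for n2 in range(2, 60):
        squares = [1 - 4 * n1 * n1, 1 - 4 * n2 * n2]
        g1 = G(squares + [1 - 4 * n1 * n2])
        g2 = G(squares + [2 * n1 - 2 * n2])
        g3 = G(squares + [1 - 4 * n1 * n2, 2 * n1 - 2 * n2])
        assert g1 == g2 == g3, (n1, n2, g1, g2, g3)
print("g1 = g2 = g3 for all sampled n1, n2")
```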
###### Lemma 6.2.
Let $\displaystyle n_{1}$ and $\displaystyle n_{2}$ be positive integers,
greater than 1. Then $\displaystyle\gcd(h,k)=1$ and $\displaystyle g=hk$ where
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd(1-4n_{1}^{2},1-4n_{2}^{2},1-4n_{1}n_{2})$
$\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd(1-2n_{1},1-2n_{2})$
$\displaystyle\displaystyle k$
$\displaystyle\displaystyle=\gcd(1+2n_{1},1+2n_{2})\;.$
###### Proof.
The first statement follows since $\displaystyle 1-2n_{1}$ and $\displaystyle
1+2n_{1}$ are relatively prime.
Let $\displaystyle p^{\ell}$ be a prime power such that $\displaystyle
p^{\ell}|hk$. Since $\displaystyle\gcd(h,k)=1$, either $\displaystyle p^{\ell}|h$ or $\displaystyle p^{\ell}|k$.
As $\displaystyle 1-4n_{i}^{2}=(1+2n_{i})(1-2n_{i})$, it follows that
$\displaystyle p^{\ell}|(1-4n_{i}^{2})$ for both $\displaystyle i$.
Furthermore, working modulo $\displaystyle p^{\ell}$, if $\displaystyle
p^{\ell}|h$ we have $\displaystyle 2n_{1}\equiv 2n_{2}\equiv 1$ so
$\displaystyle 4n_{1}n_{2}\equiv 1$. If $\displaystyle p^{\ell}|k$ we have
$\displaystyle 2n_{1}\equiv 2n_{2}\equiv-1$ so also $\displaystyle
4n_{1}n_{2}\equiv 1$. Either way, $\displaystyle p^{\ell}|(1-4n_{1}n_{2})$,
which implies $\displaystyle p^{\ell}|g$.
Conversely, suppose that $\displaystyle p^{\ell}|g$. So $\displaystyle
p^{\ell}|(1-2n_{i})$ or $\displaystyle p^{\ell}|(1+2n_{i})$, for each
$\displaystyle i$. If $\displaystyle p^{\ell}$ divides both $\displaystyle
1-2n_{1}$ and $\displaystyle 1-2n_{2}$, then $\displaystyle p^{\ell}|h$.
Similarly, if $\displaystyle p^{\ell}$ divides both $\displaystyle 1+2n_{1}$
and $\displaystyle 1+2n_{2}$, then $\displaystyle p^{\ell}|k$. In either of
these cases we have $\displaystyle p^{\ell}|hk$.
If on the other hand $\displaystyle p^{\ell}$ divides both $\displaystyle
1-2n_{1}$ and $\displaystyle 1+2n_{2}$, then modulo $\displaystyle p^{\ell}$
we have
$\displaystyle 1\equiv
4n_{1}n_{2}\equiv(2n_{1})(2n_{2})\equiv(1)(-1)\equiv-1\;.$
This is a contradiction, as $\displaystyle p$ is odd. ∎
Using the same methods, we obtain the following extensions to these results.
###### Lemma 6.3.
Let $\displaystyle n_{1},\dots,n_{\ell}$ be positive integers, greater than 1.
Then $\displaystyle g_{1}=g_{2}=g_{3}$ where
$\displaystyle\displaystyle g_{1}$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j}\mid
i,j\in\\{1,\dots\ell\\}\\}$ $\displaystyle\displaystyle g_{2}$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},2n_{i}-2n_{j}\mid
i,j\in\\{1,\dots\ell\\}\\}$ $\displaystyle\displaystyle g_{3}$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j},2n_{i}-2n_{j}\mid
i,j\in\\{1,\dots\ell\\}\\}$
###### Lemma 6.4.
Let $\displaystyle n_{1},\dots,n_{\ell}$ be positive integers, greater than 1.
Then $\displaystyle\gcd(h,k)=1$ and $\displaystyle g=hk$ where
$\displaystyle\displaystyle g$
$\displaystyle\displaystyle=\gcd\\{1-4n_{i}^{2},1-4n_{i}n_{j}\mid
i,j\in\\{1,\dots\ell\\}\\}$ $\displaystyle\displaystyle h$
$\displaystyle\displaystyle=\gcd\\{1-2n_{i}\mid i\in\\{1,\dots\ell\\}\\}$
$\displaystyle\displaystyle k$ $\displaystyle\displaystyle=\gcd\\{1+2n_{i}\mid
i\in\\{1,\dots\ell\\}\\}$
## References
* [1] J. Boersema, Real $\displaystyle C^{*}$-algebras, united $\displaystyle K$-theory, and the Künneth formula, K-Theory 26 (2002), no. 4, 345–402.
* [2] J. Boersema and E. Gillaspy, K-theory for real $\displaystyle k$-graph $\displaystyle C^{*}$-algebras, Ann. K-Theory 7 (2022), no. 2, 395–440.
* [3] J. Boersema, S. Browne, and E. Gillaspy, The stable exotic Cuntz algebras are higher-rank graph algebras, Proceedings of the American Mathematical Society, Series B 11 (2024), 47–62.
* [4] J. Boersema, E. Ruiz, and P. Stacey, The classification of real purely infinite simple $\displaystyle C^{*}$-algebras. Doc. Math. 16 (2011), 619–655.
* [5] D.G. Evans, On the K-theory of higher rank graph $\displaystyle C^{*}$-algebras, New York J. Math. 14 (2008), 1–31.
* [6] C. Farsi, A. Kumjian, D. Pask and A. Sims, Ample groupoids: equivalence, homology, and Matui’s HK-conjecture, Münster J. Math. 12 (2019), 411–451.
* [7] R. Hazlewood, I. Raeburn, A. Sims and S.B.G. Webster, Remarks on some fundamental results about higher-rank graphs and their $\displaystyle C^{*}$-algebras, Proc. Edinb. Math. Soc. (2) 56 (2013), no. 2, 575–597.
* [8] E. Kirchberg, The classification of purely infinite $\displaystyle C^{*}$-algebras using Kasparov’s theory, Preprint (1994).
* [9] A. Kumjian and D. Pask, Higher rank graph $\displaystyle C^{*}$-algebras, New York J. Math. 14 (2000), 1–20.
* [10] N. Larsen and A. Vdovina, Higher dimensional digraphs from cube complexes and their spectral theory, Groups Geom. Dyn. (2024), published Online First.
* [11] M. V. Lawson, A. Sims, A. Vdovina, Higher dimensional generalizations of the Thompson groups via higher rank graphs, 228 J. Pure Appl. Algebra (2023), no. 1 Paper No. 107456, 40 pp.
* [12] A. Mutter, A.-C. Radu, and A. Vdovina, $\displaystyle C^{*}$-algebras of higher-rank graphs from groups acting on buildings, and explicit computation of their K-theory, Publicacions Matemàtiques 68 (2024), no. 1, 187–217.
* [13] N.C. Phillips., A classification theorem for nuclear purely infinite simple $\displaystyle C^{*}$-algebras, Doc. Math. 5 (2000), 49–114.
* [14] S.C. Power, Classifying higher rank analytic Toeplitz algebras, New York J. Math. 13 (2007), 271–298.
* [15] I. Raeburn, A. Sims and T. Yeend, Higher-rank graphs and their $\displaystyle C^{*}$-algebras, Proc. Edinburgh Math. Soc. 46 (2003), 99–115.
* [16] N. Rungtanapirom, J. Stix, A. Vdovina, Infinite series of quaternionic 1-vertex cube complexes, the doubling construction, and explicit cubical Ramanujan complexes, Internat. J. Algebra Comput. 29 (2019), no. 6, 951–1007.
# Joint Power and Blocklength Optimization for URLLC in a Factory Automation
Scenario
Hong Ren, Cunhua Pan, Yansha Deng, Maged Elkashlan, and Arumugam Nallanathan
H. Ren, C. Pan, M. Elkashlan, and A. Nallanathan are with School of Electronic
Engineering and Computer Science, Queen Mary University of London, London, E1
4NS, U.K. (Email: h.ren, c.pan, maged.elkashlan, a.nallanathan@qmul.ac.uk). Y.
Deng is with the Department of Informatics, King's College London, London WC2R
2LS, U.K. (e-mail: yansha.deng@kcl.ac.uk).
###### Abstract
Ultra-reliable and low-latency communication (URLLC) is one of three pillar
applications defined in the fifth generation new radio (5G NR), and its
research is still in its infancy due to the difficulties in guaranteeing
extremely high reliability (say $10^{-9}$ packet loss probability) and low
latency (say 1 ms) simultaneously. In URLLC, short packet transmission is
adopted to reduce latency, such that conventional Shannon’s capacity formula
is no longer applicable, and the achievable data rate in finite blocklength
becomes a complex expression with respect to the decoding error probability
and the blocklength. To provide URLLC service in a factory automation
scenario, we consider that the central controller transmits different packets
to a robot and an actuator, where the actuator is located far from the
controller, and the robot can move between the controller and the actuator. In
this scenario, we consider four fundamental downlink transmission schemes,
including orthogonal multiple access (OMA), non-orthogonal multiple access
(NOMA), relay-assisted, and cooperative NOMA (C-NOMA) schemes. For all these
transmission schemes, we aim for jointly optimizing the blocklength and power
allocation to minimize the decoding error probability of the actuator subject
to the reliability requirement of the robot, the total energy constraints, as
well as the latency constraints. We further develop low-complexity algorithms
to address the optimization problems for each transmission scheme. For the
general case with more than two devices, we also develop a low-complexity
efficient algorithm for the OMA scheme. Our results show that the relay-
assisted transmission significantly outperforms the OMA scheme, while NOMA
scheme performs well when the blocklength is very limited. We further show
that the relay-assisted transmission has superior performance over the C-NOMA
scheme due to larger feasible region of the former scheme.
## I Introduction
The fifth-generation (5G) networks are envisaged to support three pillar use
cases: enhanced mobile broadband (eMBB), massive machine type communication
(mMTC), and _mission-critical_ internet of things (IoT) [1]. Extensive
research has focused on eMBB and mMTC, but the research on mission-critical
IoT is still in its infancy [2, 3, 4, 5, 6]. The applications of mission-
critical tasks include factory automation (FA), autonomous driving, remote
surgery, smart grid automation, unmanned aerial vehicles (UAVs) control
information delivery [7], which require ultra reliable and low latency
communication (URLLC) [8, 9, 10]. For example, in Industry 4.0 [11], wired
connection will be replaced by wireless transmission to enhance the
flexibility and reduce the infrastructure cost. This change imposes
challenging requirements on the wireless transmission in terms of latency and
reliability [12]. For mission-critical tasks in FA, the transmission duration
is expected to be lower than 100 $\mu s$ to allow processing delays during
queuing, scheduling, backhaul transmission, and propagation [13], while
guaranteeing the packet error probability of $10^{-9}$.
In conventional human-to-human (H2H) communications, the transmission delay is
relatively long (say 20-30 ms) and the packet size is large (say 1500 bytes),
thus Shannon’s capacity can serve as a tight upper bound of the achievable
data rate due to the law of large numbers [14]. In contrast, in URLLC, the
packet size should be extremely low (say 20 bytes) to support the low-latency
transmission [13]. In this case, Shannon’s capacity formula is no longer
applicable as the law of large numbers is not valid. Thus, the achievable data
rate under short blocklength needs to be re-derived. In [15], the achievable
data rate in the finite blocklength regime has been derived as a complicated
function of the signal-to-noise ratio (SNR), the blocklength, and the decoding
error probability.
Recently, extensive research attention has been devoted to the short packet
transmission (SPT) design [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]. In
particular, the frame structure is designed in [16] for SPT, where their
results showed that it is beneficial to group multiple messages from some
users into a single packet based on approximations from finite blocklength
information theory. In [17], She _et al._ studied the network available range
maximization problem by dynamically selecting the transmission modes between
device-to-device (D2D) and cellular links. The non-asymptotic upper and lower
bounds on the coding rate for SPT over a Rician memoryless block-fading
channel were derived in [18] under a given packet error probability
requirement. The overall error probability of relay-assisted transmission
under finite blocklength was derived in [19] under the assumption of perfect
channel state information (CSI). They further extended this model to the
quasi-static Rayleigh channels where only the average CSI is available at the
source in [20], as well as to the two-way amplify-and-forward relay network in
[21]. Recently, the delay and decoding error probability were analyzed in [22]
for simultaneous wireless information and power transfer (SWIPT) relay-
assisted system, where the relay first harvests energy from the source and
then uses the harvested energy to forward the source’s information to the
destination node.
The aforementioned studies [16, 17, 18, 19, 20, 21, 22] mainly focused on the
performance analysis of finite blocklength transmission. In order to design a
practical URLLC system, it is imperative to intelligently optimize the
resource allocation including blocklength and power allocation under the given
error probability and latency requirements. Unfortunately, the achievable
coding rate expression is neither convex nor concave with respect to the
blocklength and the transmit power, which brings the difficulty in obtaining
the globally optimal solution[5]. This motivates the recent studies in
resource allocation for the SPT in [23, 24, 25, 26, 27]. Specifically, the
average throughput and the max-min throughput optimization under the latency
constraint was solved via the exhaustive search method with high complexity in
[23]. Sun _et al._ in [24] considered the SPT for a two-user downlink non-
orthogonal multiple access (NOMA) system, with an aim to maximize the
throughput of user 1 subject to the throughput requirements for user 2. Note
that the decoding error probability requirement has not been considered in
[23] and [24], and the throughput is less important in URLLC, as only control
signals or measurement data with small packet sizes are transmitted.
In [25], She _et al._ jointly optimized the uplink and downlink transmission
blocklengths to minimize the required total bandwidth based on statistical
channel state information (CSI). However, the optimization is based on the
simplified expression of the rate for SPT, which cannot accurately
characterize the relationship between the decoding error probability and
blocklength. In addition, several approximations are involved in the
derivation of the decoding error probability for each user due to the fact
that only statistical CSI is available. Most recently, Hu _et al._ [26]
considered SWIPT in relay-assisted URLLC systems, where the SWIPT parameters
and blocklength are jointly optimized to maximize the reliability performance.
However, the decoding error probability at the relay cannot be guaranteed and
the power is assumed to be fixed in [26]. More recently, in [27] we jointly
optimized the blocklength and unmanned aerial vehicle's (UAV's) location to
minimize the decoding error probability while guaranteeing the latency
requirement and decoding error probability target. However, the power
allocation was not considered. Furthermore, the optimization over UAV’s
location is obtained by observing the curve of the second-order derivative of
the objective function over location variable without strict proof.
In this paper, we consider a typical mission-critical scenario (i.e., a FA
scenario), where the central controller needs to transmit a certain amount of
different data to two devices within a given transmission time and under a
very low packet error probability. One device named actuator is located far
away from the controller, while the other device named robot can move between
the controller and the actuator. We consider four fundamental transmission
schemes, namely, orthogonal multiple access (OMA), NOMA, relay-assisted
transmission and cooperative NOMA (C-NOMA). In this scenario, we aim for
jointly optimizing the blocklength and the transmit power of these two devices
to minimize the decoding error probability for the actuator while guaranteeing
the decoding error probability for the robot, taking into account the energy
and blocklength constraints, which were not considered in [24, 25, 26]; hence,
new methods need to be developed. The main contributions of this paper are
summarized as follows:
1. 1.
For the OMA scheme, we first prove that both the decoding error probability
and energy constraints hold with equality at the optimal point, and then
propose a novel iterative algorithm to obtain tight lower and upper bounds of
the blocklength to reduce the search complexity. A low-complexity algorithm is
proposed to find the globally optimal solution of transmit power. For the case
of more than two devices, we also develop a novel low-complexity algorithm to
find a suboptimal solution of the optimization problem.
2. 2.
For the NOMA scheme, the search set of blocklength is first derived to reduce
the search complexity. In contrast to the OMA case, the decoding error
probability function for each given blocklength in the NOMA case is non-
continuous with respect to the transmit power, which complicates the
optimization problem. Fortunately, we rigorously prove that the decoding
error probability holds with equality at the optimal point, such that the one-
dimensional line search algorithm can be used to find the optimal solution. We
also provide a sufficient condition when the decoding error probability
function is a convex function, which facilitates the application of a low-
complexity bisection search method.
3. 3.
For the relay-assisted scheme, we also adopt the iterative algorithm to reduce
the search complexity of blocklength. Unlike the OMA and NOMA schemes, the
decoding error probability constraint of relay-assisted transmission does not
hold with equality. To resolve this issue, we fix the blocklength, such that
the original optimization problem is reduced to a one-dimension search
optimization problem.
4. 4.
For the C-NOMA scheme, we adapt the iterative algorithm to reduce the search
complexity of blocklength, and then one-dimension search is proposed to find
the optimal transmit power. For the special case, low-complexity bisection
search method is applied.
5. 5.
To compare the performance of our proposed four transmission schemes, we
present extensive simulation results, which show that the relay-assisted
scheme significantly outperforms the other three schemes in most cases in
terms of both the decoding error probability and the network availability. Our
results demonstrate the effectiveness of relaying transmission in enhancing
the reliability performance in the industrial automation scenario.
The remainder of this paper is organized as follows. In Section II, the system
model and the problem formulation are provided. In Section III, the
transmission schemes are presented. The general case with more than two devices
is considered in Section IV. Simulation results and analysis are presented in
Section V. Finally, Section VI concludes the paper.
## II System Model
### II-A System model
Consider a downlink communication in one factory, where a central controller
serves a robot and an actuator as shown in Fig. 1. The robot is assumed to be
located in the vicinity of the controller, and the actuator is far away from
the controller. Both the robot and the actuator are equipped with a single
antenna. The controller needs to transmit two small packets to the two
devices. The packet sizes for the actuator and the robot are assumed to be the
same, and are denoted as $D$ bits.
The transmission of these two packets is subject to a latency constraint,
i.e., the transmission has to finish within $M$ symbols or channel uses. The
transmission time corresponds to $t_{\max}=MT_{s}$ seconds, where $T_{s}$ is
the symbol duration that is equal to $1/B$ with $B$ as the system bandwidth.
For the applications with URLLC requirement, short frame structure is adopted
and the end-to-end delay should be kept within 1 ms [8], which is much shorter
than the channel coherence time. Hence, the channels are quasi-static fading
and remain constant during the whole transmission. The channel fading
coefficients from the central controller to the robot and the actuator are
denoted as ${{\tilde{h}}_{1}}$ and ${{\tilde{h}}_{2}}$, respectively. The
channel fading coefficient between the robot and the actuator is denoted as
${{\tilde{h}}_{3}}$. We also assume that these channels are perfectly known at
the controller, and the total energy consumption of the system should be below
${\tilde{E}_{{\rm{tot}}}}$ Joule. Since we have assumed that the actuator is
far away from the controller, the channel power gain
$\left|{{{\tilde{h}}_{2}}}\right|^{2}$ is very small.
Figure 1: Illustration of a Factory Automation Scenario.
### II-B Achievable data rate for a simple point-to-point system
The data rate (coding rate) $R$ of a communication system is defined as the
ratio of the number of information bits to the number of transmission
symbols. According to Shannon’s coding theorem, the Shannon capacity is
defined as the highest coding rate at which there exists an encoder/decoder pair
whose decoding error probability becomes negligible when the blocklength
approaches infinity [28]. However, in URLLC, the blocklength for each frame is
limited and small; in this case, the decoding error probability at the
receiver cannot be ignored.
In URLLC scenarios, the required transmission delay is much shorter than the
channel coherence time, thus the channel is quasi-static. According to the
results in [29], for a simple point-to-point communication system transmitting
over a quasi-static Rayleigh fading channel, the channel dispersion is zero
and the achievable data rate converges to the outage capacity as the
blocklength increases. However, the closed-form expression of the outage
capacity for short-packet transmission is unavailable. In [15], the normal
approximation was adopted to approximate the coding rate $R$ at finite
blocklength, which is given by
$R\approx{\log_{2}}(1+\gamma)-\sqrt{\frac{V}{m}}\frac{{{Q^{-1}}\left(\varepsilon\right)}}{{\ln
2}},\vspace{-0.2cm}$ (1)
where $m$ is the channel blocklength, $\varepsilon$ is the decoding error
probability, $\gamma$ denotes the signal-to-noise ratio (SNR) at the receiver,
${Q^{-1}}(\cdot)$ is the inverse of the Gaussian Q-function
$Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-\frac{t^{2}}{2}}dt$,
and $V$ is the channel dispersion given by $V=1-(1+\gamma)^{-2}$. As shown in the numerical
results in [29], this approximation is very accurate when $m$ is larger than
50, which is the case in our simulations. From (1), the decoding error
probability can be obtained as follows:
${\varepsilon}=Q\left({f\left({{{\gamma}},{m},D}\right)}\right),$ (2)
where $f\left({\gamma,m,D}\right)=\ln
2\sqrt{\frac{m}{V}}\left({{{\log}_{2}}(1+\gamma)-\frac{D}{m}}\right)$. In the
following, we aim to jointly optimize the transmission blocklength and power
to minimize the decoding error probability for four different transmission
schemes.
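To make the use of (1)-(2) concrete, the following minimal Python sketch evaluates the decoding error probability $\varepsilon=Q\left(f(\gamma,m,D)\right)$ for a given SNR, blocklength, and packet size. The helper names (`q_func`, `decoding_error_prob`) and the example numbers are ours, not from the paper.

```python
# Illustrative sketch (not the authors' code): decoding error probability of a
# short packet via the normal approximation in (1)-(2).
import math

def q_func(x):
    # Gaussian Q-function, Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def decoding_error_prob(gamma, m, D):
    # epsilon = Q(f(gamma, m, D)) with V = 1 - (1 + gamma)^(-2)
    V = 1.0 - (1.0 + gamma) ** (-2)
    f = math.log(2.0) * math.sqrt(m / V) * (math.log2(1.0 + gamma) - D / m)
    return q_func(f)

# Hypothetical example: 20 dB SNR, blocklength m = 150 symbols, D = 100 bits
print(decoding_error_prob(100.0, 150, 100))
```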
## III Transmission schemes
In this section, we aim for designing efficient resource allocation algorithms
to minimize the decoding error probability of the actuator under three sets of
constraints: 1) the packets for robot and actuator need to be transmitted
within $M$ symbols; 2) the robot should satisfy its reliability requirement;
3) the total consumed energy should be kept within ${\tilde{E}_{{\rm{tot}}}}$.
The OMA, NOMA, relay-assisted transmission, and C-NOMA transmission schemes
are studied in the following subsections.
### III-A OMA transmission
The OMA scheme is the simplest transmission scheme, where the controller
serves the robot and the actuator in two different orthogonal channel uses or
blocklengths. In detail, the controller transmits signal $x_{1}$ to the robot
with $m_{1}$ blocklength. Due to this orthogonal property, the received signal
at the robot can be represented as
$\vspace{-0.15cm}{y_{1}}=\sqrt{{p_{1}}}{{\tilde{h}}_{1}}{x_{1}}+{n_{1}},$ (3)
where $p_{1}$ is the transmit power allocated to the robot's signal, $n_{1}$ is the zero-mean
additive complex white Gaussian noise (AWGN) with variance $\sigma_{1}^{2}$,
and $x_{1}$ carries the information intended for the robot with packet size $D$.
Hence, the coding rate at the robot is given by $D/m_{1}$.
From (3), the received signal to noise ratio (SNR) at the robot is given by
$\vspace{-0.1cm}{\gamma_{1}}={p_{1}}{h_{1}},$ (4)
where $h_{1}=|\tilde{h}_{1}|^{2}/{\sigma_{1}^{2}}$ denotes the normalized
channel gain from the controller to the robot. Then, according to (2), the
decoding error probability of $x_{1}$ at the robot is given by
$\vspace{-0.1cm}{\varepsilon_{1}}=Q\left({f\left({{{\gamma_{1}}},{m_{1}},D}\right)}\right).$
(5)
The controller transmits signal $x_{2}$ to the actuator with blocklength equal
to $m_{2}$. The corresponding error probability at the actuator is derived as
$\vspace{-0.1cm}{\varepsilon_{2}}=Q\left({f\left({{{\gamma_{2}}},{m_{2}},D}\right)}\right),\vspace{-0.3cm}$
(6)
where ${\gamma_{2}}={p_{2}}{h_{2}}$, with $p_{2}$ as the transmit power allocated to the
actuator's signal and $h_{2}=|{\tilde{h}}_{2}|^{2}/{\sigma_{2}^{2}}$ as the normalized
channel gain for the actuator. Without loss of generality (w.l.o.g.), we
assume that in this paper the robot has higher normalized channel gain than
the actuator, i.e., ${{h}_{1}}>{{h}_{2}}$.
The resource allocation problem for the OMA transmission can be formulated as:
$\displaystyle\vspace{-0.1cm}\mathop{\min}\limits_{\left\\{{{m_{1}},{m_{2}},{p_{1}},{p_{2}}}\right\\}}\;\;\;$
$\displaystyle{{\varepsilon}_{2}}$ (7a) $\displaystyle{\rm{s.t.}}\;\;\;$
$\displaystyle{{\varepsilon}_{1}}\leq\varepsilon_{1}^{\max},$ (7b)
$\displaystyle{m_{1}}{p_{1}}+{m_{2}}{p_{2}}\leq{E_{{\rm{tot}}}},$ (7c)
$\displaystyle m_{1}+m_{2}\leq M,$ (7d) $\displaystyle
m_{1},m_{2}\in\mathbb{Z},$ (7e)
where constraint (7b) is the decoding error probability requirement of the
robot, constraint (7c) ensures the system total energy consumption is within a
budget $\tilde{E}_{\rm{tot}}=E_{\text{tot}}T_{s}$, (7d) is the latency constraint,
and constraint (7e) ensures that the blocklength for each transmission phase is
an integer, with $\mathbb{Z}$ denoting the set of positive integers. The maximum
decoding error probability $\varepsilon_{1}^{\max}$
is assumed to be much less than 0.1 to ensure the stringent reliability
requirement, and this assumption holds for the remaining transmission schemes.
As a result, $\varepsilon_{1}$ should be smaller than 0.1. Then, the
inequality $\frac{D}{m_{1}}<{\rm{log}}_{2}(1+\gamma_{1})$ should hold.
To solve the optimization problem in (7), we first provide the following
lemma.
_Lemma 1_ : Constraints (7b) and (7c) hold with equality at the optimum
solution.
_Proof_ : Please see Appendix A.
With $m_{1}$ and $m_{2}$ to be integers, the exhaustive search method can be
used to find the optimal solution. To reduce the search complexity when $M$ is
large, we shorten the search range of $m_{1}$ and $m_{2}$. In the following,
we aim to derive the bounds of $m_{1}$ and $m_{2}$.
#### III-A1 The upper and lower bounds of $m_{1}$ and $m_{2}$
Since $\varepsilon_{1}^{\max}$ is assumed to be a very small value that is
much smaller than $10^{-1}$, a necessary condition for constraint (7b) to hold
is that ${\log_{2}}\left({1+{p_{1}}{h_{1}}}\right)>D/{{m_{1}}}$ (this can be proved as follows:
$\varepsilon_{1}=Q\left({f\left({\gamma_{1},m_{1},D}\right)}\right)<\varepsilon_{1}^{\max}<0.5=Q\left(0\right)$;
since the Q-function is decreasing, we have ${f\left({\gamma_{1},m_{1},D}\right)}>0$, and substituting the
expression of $f\left({\gamma_{1},m_{1},D}\right)$ completes the proof), which leads to
$\vspace{-0.1cm}{p_{1}}>\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right)/{{h_{1}}}.$
(8)
On the other hand, based on the energy constraint (7c), we have:
$p_{1}<E_{{\rm{tot}}}/m_{1}$. Thus, the blocklength allocation of the robot
$m_{1}$ should satisfy the following inequality:
$\vspace{-0.1cm}{E_{{\rm{tot}}}}>\frac{{{m_{1}}}}{{{h_{1}}}}({2^{\frac{D}{{{m_{1}}}}}}-1)\buildrel\Delta\over{=}g({m_{1}}).$
(9)
To investigate the properties of $g(m_{1})$, the first-order and second-order
derivatives of function $g({m_{1}})$ w.r.t. $m_{1}$ are given by
$\displaystyle\vspace{-0.4cm}g^{\prime}({m_{1}})$ $\displaystyle=$
$\displaystyle{2^{\frac{D}{{{m_{1}}}}}}-1-\ln
2\cdot\frac{D}{{{m_{1}}}}{2^{\frac{D}{{{m_{1}}}}}},$ (10)
$\displaystyle\vspace{-0.4cm}g^{\prime\prime}({m_{1}})$ $\displaystyle=$
$\displaystyle{\left({\ln
2}\right)^{2}}\cdot\frac{{{D^{2}}}}{{m_{1}^{3}}}{2^{\frac{D}{{{m_{1}}}}}}\geq
0.\vspace{-0.1cm}$ (11)
Thus, $g^{\prime}({m_{1}})$ is a monotonically increasing function of $m_{1}$,
and we have
$\vspace{-0.1cm}g^{\prime}({m_{1}})\leq\mathop{\lim}\limits_{{m_{1}}\to+\infty}g^{\prime}({m_{1}})=0.\vspace{-0.2cm}$
(12)
Hence, function $g({m_{1}})$ is a monotonically decreasing function of
$m_{1}$. Then, the smallest integer $m_{1}$ that satisfies inequality (9), denoted as
$m_{1}^{\rm{lb}(0)}$, serves as a lower bound of $m_{1}$, i.e., ${m_{1}}\geq
m_{1}^{{\rm{lb}}({\rm{0}})}$. Similarly, for practical applications, the
decoding error probability of the actuator ${\varepsilon}_{2}$ should be very
small, e.g., much lower than 0.5. In this case, the inequality
${\log_{2}}\left({1+{p_{2}}{h_{2}}}\right)>D/{{m_{2}}}$ should hold, which
leads to
${p_{2}}>\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right)/{{h_{2}}}.$ (13)
By using the inequality $m_{2}\leq M-m_{1}^{\rm{lb}(0)}$, we have
${m_{2}}{p_{2}}\geq\frac{{{m_{2}}}}{{{h_{2}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right)\geq\frac{{M-m_{1}^{{\rm{lb}}({\rm{0}})}}}{{{h_{2}}}}\left({{2^{\frac{D}{{M-m_{1}^{{\rm{lb}}({\rm{0}})}}}}}-1}\right)\buildrel\Delta\over{=}{A^{(0)}}.$
(14)
By using constraint (7c), we have
${E_{{\rm{tot}}}}-{A^{(0)}}>\frac{{{m_{1}}}}{{{h_{1}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right).$
(15)
By using (15), the updated lower bound of $m_{1}$ can be obtained, and denoted
as $m_{1}^{\rm{lb}(1)}$. Similar to (14), we can obtain $A^{(1)}$ by
substituting $m_{1}^{\rm{lb}(1)}$ into $m_{1}^{\rm{lb}(0)}$, and the lower
bound of $m_{1}$ can be obtained by using (15), where $A^{(0)}$ is replaced by
$A^{(1)}$, and the updated lower bound is denoted as $m_{1}^{\rm{lb}(2)}$.
Repeat the above procedure until $m_{1}^{{\rm{lb}}(n)}=m_{1}^{{\rm{lb}}(n-1)}$
or $m_{1}^{{\rm{lb}}(n)}=M-m_{2}^{{\rm{lb}}(0)}$. Then, denote the final lower
bound of $m_{1}$ as $m_{1}^{\rm{lb}}$.
This procedure is proved to converge as follows. By using ${A^{(0)}}\geq 0$
and comparing (9) and (15), we can obtain $m_{1}^{\rm{lb}(1)}\geq
m_{1}^{\rm{lb}(0)}$, and thus ${A^{(1)}}\geq A^{(0)}$, which leads to
$m_{1}^{\rm{lb}(2)}\geq m_{1}^{\rm{lb}(1)}$. Hence, the sequence of the lower
bound $m_{1}^{\rm{lb}(n)}$ is monotonically increasing. Furthermore, the
sequence is upper bounded by $M-m_{2}^{\rm{lb}(0)}$. As a result, the sequence
generated by the above iterative procedure is guaranteed to converge.
By using the similar iterative procedure, we can also obtain the lower bound
of $m_{2}$, which is denoted as $m_{2}^{\rm{lb}}$. As a result, the search
region of $m_{1}$ is given by $m_{1}^{\rm{lb}}\leq
m_{1}\leq(M-m_{2}^{\rm{lb}})\buildrel\Delta\over{=}m_{1}^{\rm{ub}}$.
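The iterative bounding procedure above can be summarized in the following illustrative sketch; the function and variable names are our own, and it assumes the normalized channel gains, packet size, latency budget, and energy budget are given, returning the converged lower bound $m_{1}^{\rm{lb}}$.

```python
# Illustrative sketch (our own names) of the iterative lower-bound procedure
# for m1 in the OMA scheme, following (9), (14) and (15).
def g(m, D, h):
    # g(m) = (m / h) * (2^(D/m) - 1); monotonically decreasing in m
    return (m / h) * (2.0 ** (D / m) - 1.0)

def smallest_m(budget, D, h, M):
    # Smallest integer m in [1, M] with g(m) < budget (g is decreasing); None if infeasible
    for m in range(1, M + 1):
        if g(m, D, h) < budget:
            return m
    return None

def m1_lower_bound(D, M, h1, h2, E_tot):
    m1_lb = smallest_m(E_tot, D, h1, M)        # m1^{lb(0)} from (9)
    m2_lb0 = smallest_m(E_tot, D, h2, M)       # analogous initial bound for m2
    if m1_lb is None or m2_lb0 is None:
        return None                            # problem infeasible
    while True:
        A = g(M - m1_lb, D, h2)                # A^{(n)} in (14)
        new_lb = smallest_m(E_tot - A, D, h1, M)   # updated bound from (15)
        if new_lb is None or new_lb >= M - m2_lb0:
            return M - m2_lb0                  # terminate at M - m2^{lb(0)}
        if new_lb == m1_lb:                    # converged
            return m1_lb
        m1_lb = new_lb
```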
For each given $m_{1}$, we need to find the search range of $m_{2}$, which is
detailed as follows. The optimal $p_{1}$ can be obtained by solving the
equation ${\varepsilon_{1}}=\varepsilon_{1}^{\max}$ with given $m_{1}$, which
is denoted as $p_{1}^{*}$. The solution can be readily obtained by using the
bisection search method due to the fact that ${\varepsilon_{1}}(p_{1})$ is a
monotonically decreasing function of $p_{1}$. Then, we have
${E_{{\rm{tot}}}}-{m_{1}}p_{1}^{*}={m_{2}}{p_{2}}\geq\frac{{{m_{2}}}}{{{h_{2}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right).$
(16)
Hence, the lower bound of $m_{2}$ with given $m_{1}$ (denoted as
$m_{2}^{\rm{lb}}(m_{1})$) can be obtained from (16), which is the minimum
integer that satisfies (16). Obviously, the upper bound of $m_{2}$ with given
$m_{1}$ is $M-m_{1}$. Hence, the search region of $m_{2}$ is given by
$m_{2}^{\rm{lb}}(m_{1})\leq m_{2}\leq(M-m_{1})$.
#### III-A2 Algorithm to solve Problem (7)
Based on the above analysis, the algorithm to solve Problem (7) is given in
Algorithm 1. The main idea can be summarized as follows. For each given
integer value of $m_{1}$ that satisfies $m_{1}^{\rm{lb}}\leq m_{1}\leq
m_{1}^{\rm{ub}}$, we calculate the value of ${{\varepsilon}_{1}}$ when $p_{1}$
is set as ${E_{{\rm{tot}}}}/m_{1}$. If
${{\varepsilon}_{1}}>\varepsilon_{1}^{\max}$, then the value of $m_{1}$ is not
feasible, and we increase the value of $m_{1}$ by one and continue to check
the updated $m_{1}$. Otherwise, we apply the bisection search method to find
the value of $p_{1}$ such that ${{\varepsilon}_{1}}=\varepsilon_{1}^{\max}$
due to the monotonically decreasing property of decoding error probability
${{\varepsilon}_{1}}$ w.r.t. $p_{1}$ [24]. By using Lemma 1, we have
$m_{2}p_{2}=E_{{\rm{tot}}}-m_{1}p_{1}$. The search range of $m_{2}$ is given
by $m_{2}^{\rm{lb}}(m_{1})\leq m_{2}\leq M-m_{1}$. For each given $m_{2}$, the
corresponding $p_{2}$ is given by $p_{2}=(E_{{\rm{tot}}}-m_{1}p_{1})/m_{2}$,
and we can calculate the value of ${\varepsilon}_{2}$. For each feasible
$m_{1}$, we can find the optimal solutions for $m_{2}$ and $p_{2}$ that yield
the minimum value of ${\varepsilon}_{2}$, respectively. At last, we check all
feasible $m_{1}$ in the range of $m_{1}^{\rm{lb}}\leq m_{1}\leq
m_{1}^{\rm{ub}}$, and choose the final globally optimal solution.
Input : $h_{1},h_{2},D,M,\varepsilon_{1}^{\max},E_{{\rm{tot}}}$
Output : $p_{1}^{\star},p_{2}^{\star},m_{1}^{\star},m_{2}^{\star}$
1 Apply the iterative procedure to calculate $m_{1}^{\rm{lb}},m_{1}^{\rm{ub}}$
and $m_{2}^{\rm{lb}}$;
2for _ $m_{1}=m_{1}^{\rm{lb}}:m_{1}^{\rm{ub}}$ _ do
3 Set $p_{1}={E_{{\rm{tot}}}}/m_{1}$, and calculate the value of
${{\varepsilon}_{1}}$.
4 if _${{\varepsilon}_{1}} >\varepsilon_{1}^{\max}$_ then
5 The current $m_{1}$ is not feasible, and return to the next $m_{1}$;
6 else
7 Use (16) to find the lower bound of $m_{2}$, denoted as
$m_{2}^{\rm{lb}}(m_{1})$. Apply the bisection search method to find the value
of $p_{1}$ such that ${{\varepsilon}_{1}}=\varepsilon_{1}^{\max}$;
8 for _$m_{2}=m_{2}^{\rm{lb}}(m_{1}):M-m_{1}$_ do
9 Calculate $p_{2}=(E_{{\rm{tot}}}-m_{1}p_{1})/m_{2}$, and the value of
${\varepsilon}_{2}$, denoted as ${\varepsilon}_{2}(m_{1},m_{2})$.
10 end for
11 Given $m_{1}$, find the blocklength $m_{2}$ with the minimum value of
${\varepsilon}_{2}(m_{1},m_{2})$:
${\left.{m_{2}^{\\#}}\right|_{{m_{1}}}}=\mathop{\arg\min}\limits_{m_{2}^{{\rm{lb}}}\leq{m_{2}}\leq
M-{m_{1}}}{\varepsilon_{2}}\left({{m_{1}},{m_{2}}}\right).$
12 end if
13
14 end for
Return
$m_{1}^{\star}=\mathop{\arg\min}\limits_{m_{1}^{{\rm{lb}}}\leq{m_{1}}\leq
m_{1}^{{\rm{ub}}}}{\varepsilon_{2}}\left({{m_{1}},{{\left.{m_{2}^{\\#}}\right|}_{{m_{1}}}}}\right),m_{2}^{\star}={\left.{m_{2}^{\\#}}\right|_{m_{1}^{\star}}}$
and the corresponding $p_{1}^{\star}$ and $p_{2}^{\star}$.
Algorithm 1 Algorithm for Problem (7)
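The bisection step in line 7 of Algorithm 1 exploits the fact that $\varepsilon_{1}(p_{1})$ is monotonically decreasing in $p_{1}$. A minimal sketch is given below; it reuses the `decoding_error_prob` helper from the earlier sketch, and it assumes the feasibility check of line 3 has already passed (all names are ours).

```python
# Illustrative sketch of the bisection step in Algorithm 1: find p1 such that
# eps1(p1) = eps1_max, using the monotonicity of eps1 in p1.
def solve_p1(m1, D, h1, eps1_max, E_tot, tol=1e-12):
    lo = (2.0 ** (D / m1) - 1.0) / h1     # below this power, eps1 > 0.5 (cf. (8))
    hi = E_tot / m1                       # feasibility at hi is checked in line 3
    while hi - lo > tol * max(hi, 1.0):
        mid = 0.5 * (lo + hi)
        if decoding_error_prob(mid * h1, m1, D) > eps1_max:
            lo = mid                      # error still too large -> more power needed
        else:
            hi = mid
    return hi
```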
#### III-A3 Special case of Problem (7)
In steps 8-10 of Algorithm 1, one has to calculate the value of
${\varepsilon}_{2}$ for each $m_{2}$, which may incur high complexity. In this
subsection, we consider one special case when the SNR value $\gamma$ is very
high, i.e., $\gamma\gg 1$. In this case, $V$ in (2) can be approximated as
one, i.e., $V\approx 1$ (in general, when $\gamma>20$ dB, the value of $V$ is
larger than 0.99 and can be approximated as one). The optimization
problem in this special case can be efficiently solved. Specifically, the
decoding error probability in (2) can be approximated as
$\tilde{\varepsilon}=Q\left({{\tilde{f}}\left({{{\gamma}},{m},D}\right)}\right),$
(17)
where ${\tilde{f}}\left({\gamma,m,D}\right)=\ln
2\sqrt{m}\left({{{\log}_{2}}(1+\gamma)-\frac{D}{m}}\right)$.
For given $m_{1}$ and $p_{1}$, the product of $m_{2}$ and $p_{2}$ should
satisfy
${m_{2}}{p_{2}}={E_{{\rm{tot}}}}-{m_{1}}{p_{1}}\buildrel\Delta\over{=}{E_{2}}$
according to Lemma 1. Then, the original problem defined in (7) can be
transformed to the following optimization problem:
$\mathop{\min}\limits_{m_{\rm{2}}^{{\rm{lb}}}\leq{m_{2}}\leq
M-{m_{1}},{m_{2}}\in\mathbb{Z}}Q\left({\tilde{f}\left({{\gamma_{2}},{m_{2}},D}\right)}\right).$
(18)
Since $Q$-function is a decreasing function, the above problem is equivalent
to the following problem by substituting $p_{2}=E_{2}/m_{2}$ into it as
$\mathop{\max}\limits_{m_{2}^{{\rm{lb}}}\leq{m_{2}}\leq M-{m_{1}},\,{m_{2}}\in\mathbb{Z}}\;\ln 2\sqrt{{m_{2}}}\left({{\log}_{2}}\left({1+\frac{{E_{2}}{h_{2}}}{{m_{2}}}}\right)-\frac{D}{{m_{2}}}\right).$
(19)
To solve the above problem, we first relax the integer variable $m_{2}$ to a
continuous variable, and define
$\tilde{g}({m_{2}})\buildrel\Delta\over{=}\sqrt{{m_{2}}}\left({{{\log}_{2}}\left({1+\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}}\right)-\frac{D}{{{m_{2}}}}}\right).$
(20)
In the following theorem, we provide a sufficient condition for
$\tilde{g}({m_{2}})$ to be a concave function.
_Theorem 1_ : $\tilde{g}({m_{2}})$ is a concave function when
$\frac{{{E_{2}}{h_{2}}}}{{M-{m_{1}}}}\geq e-1$, where $e$ is the natural
constant.
_Proof_ : Please see Appendix B.
When the condition in Theorem 1 is satisfied, Problem (19) is a convex
optimization problem. If $\tilde{g}^{\prime}(m_{\rm{2}}^{{\rm{lb}}})\leq 0$,
the optimal $m_{2}$ is given by $m_{2}=m_{\rm{2}}^{{\rm{lb}}}$. If
$\tilde{g}^{\prime}(M-{m_{1}})\geq 0$, the optimal $m_{2}$ is
$m_{2}=M-{m_{1}}$. Otherwise, the optimal $m_{2}^{*}$ satisfies
$\tilde{g}^{\prime}(m_{\rm{2}})=0$, and the low-complexity bisection search
method can be used to find $m_{2}^{*}$. The final optimal integer $m_{2}$ is
the one with lower objective value for its two neighbor integers, i.e.,
$\left\lfloor{m_{2}^{*}}\right\rfloor$ and
$\left\lfloor{m_{2}^{*}}\right\rfloor+1$.
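When the condition of Theorem 1 holds, the relaxed optimal $m_{2}$ can therefore be located by bisection on the derivative of $\tilde{g}$. The sketch below uses a numerical (central-difference) derivative rather than the closed-form one; the names, step size, and tolerances are our choices.

```python
# Illustrative sketch for the special case (19)-(20): when Theorem 1 applies,
# g_tilde is concave and the relaxed m2 is found by bisection on its derivative.
import math

def g_tilde(m2, E2, h2, D):
    return math.sqrt(m2) * (math.log2(1.0 + E2 * h2 / m2) - D / m2)

def g_tilde_prime(m2, E2, h2, D, step=1e-6):
    # central-difference approximation of the derivative
    return (g_tilde(m2 + step, E2, h2, D) - g_tilde(m2 - step, E2, h2, D)) / (2.0 * step)

def best_m2_special_case(m2_lb, m2_ub, E2, h2, D):
    if g_tilde_prime(m2_lb, E2, h2, D) <= 0.0:
        return m2_lb
    if g_tilde_prime(m2_ub, E2, h2, D) >= 0.0:
        return m2_ub
    lo, hi = float(m2_lb), float(m2_ub)
    while hi - lo > 1e-6:
        mid = 0.5 * (lo + hi)
        if g_tilde_prime(mid, E2, h2, D) > 0.0:
            lo = mid
        else:
            hi = mid
    base = int(math.floor(lo))
    # keep the better of the two neighbouring integers inside the feasible range
    cands = [m for m in (base, base + 1) if m2_lb <= m <= m2_ub]
    return max(cands, key=lambda m: g_tilde(m, E2, h2, D))
```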
### III-B NOMA transmission
In NOMA transmission, superposition coding is employed at the controller so
that the controller can transmit signals to the two devices simultaneously
with different power levels. The controller allocates higher transmit power to
the user with lower channel gains and lower power to the one with higher
channel gains. On the one hand, the robot decodes the actuator’s signal first.
If decoding correctly, the robot will subtract the actuator’s signal from its
received signals and decodes its own signal. This is the so-called successive
interference cancellation (SIC). Otherwise, it has to decode its own signal by
treating actuator’s information as interference. On the other hand, the
actuator directly decodes its own signal by treating the robot’s signal as
interference since the controller allocates higher transmit power than the
robot. To implement this scheme, it is crucial that the robot knows whether
SIC is successful or not. To this end, we assume that the controller sends the
actuator’s channel coding information along with the robot’s channel coding
information to the robot through dedicated error-free channels. The channel
coding information for both devices are different and the channel coding can
assist in detecting whether the decoded information is correct or not. Hence,
the robot knows whether the SIC is successful or not. In general, the channel
coding information changes when CSI changes, which is much longer than the
URLLC transmission. Hence, each channel coherence time can accommodate
multiple URLLC transmissions. Then, the coding information only needs to be
transmitted to the robot at the beginning of channel coherence time, which
causes negligible overhead consumption.
In NOMA, the transmission blocklength for both devices is equal to $M$.
Specifically, the received signals at the robot and the actuator are given by
$\begin{array}[]{l}{y_{1}}=\sqrt{{p_{1}}}{{\tilde{h}}_{1}}{x_{1}}+\sqrt{{p_{2}}}{{\tilde{h}}_{1}}{x_{2}}+{n_{1}},\\\
{y_{2}}=\sqrt{{p_{1}}}{{\tilde{h}}_{2}}{x_{1}}+\sqrt{{p_{2}}}{{\tilde{h}}_{2}}{x_{2}}+{n_{2}},\end{array}$
(21)
where the notations in (21) have the same meanings as those in the OMA
transmission scheme. For the robot, it first decodes the actuator’s signal,
where the decoding signal to interference plus noise ratio (SINR) is given by
$\gamma_{2}^{1}=\frac{{{p_{2}}{h_{1}}}}{{{p_{1}}{h_{1}}+1}}.$ (22)
Following (2), the decoding error probability of $x_{2}$ at the robot can be
written as
$\varepsilon_{2}^{1}=Q\left({f\left({{{\gamma_{2}^{1}}},{M},D}\right)}\right)$.
This equivalently indicates that the information $x_{2}$ can be accurately
cancelled at the robot with probability $1-\varepsilon_{2}^{1}$. Note that
this is different from the infinite blocklength case in NOMA, where perfect
decoding can be achieved by the robot. If the SIC is successful, the robot
decodes its signal $x_{1}$ by removing the decoded signal $x_{2}$. By using
the first equality in (21), the SINR of decoding the signal $x_{1}$ is given
by
${\gamma_{1}}={p_{1}}{h_{1}}.$ (23)
Thus, following (2), the decoding error probability of $x_{1}$ at the robot
under perfect SIC condition is given by
${\varepsilon_{1}}=Q\left({f\left({{{\gamma_{1}}},{M},D}\right)}\right)$.
However, if the SIC fails, the robot will decode its information $x_{1}$ while
treating $x_{2}$ as interference, and the corresponding SINR is given by
$\vspace{-0.2cm}{{\hat{\gamma}_{1}}}=\frac{{{p_{1}}{h_{1}}}}{{{p_{2}}{h_{1}}+1}}.$
(24)
Thus, the decoding error probability of $x_{1}$ at the robot is given by
${{\hat{\varepsilon}_{1}}}=Q\left({f\left({{{\hat{\gamma}_{1}}},{M},D}\right)}\right)$.
Based on the above discussion, the decoding error probability of $x_{1}$ at
the robot is Bernoulli-distributed. With probability $1-\varepsilon_{2}^{1}$,
the decoding error probability is equal to ${\varepsilon_{1}}$, and with
probability $\varepsilon_{2}^{1}$, it is equal to ${{\hat{\varepsilon}_{1}}}$.
Hence, the average decoding error probability of $x_{1}$ at the robot is
formulated as
$\vspace{-0.2cm}{{\bar{\varepsilon}}_{1}}={\varepsilon_{1}}(1-\varepsilon_{2}^{1})+{{\hat{\varepsilon}_{1}}}\varepsilon_{2}^{1}.$
(25)
Recall that the actuator directly decodes its own signal by treating the
signal from the robot as interference, and its SINR is given by
${{\gamma_{2}}}=\frac{{{p_{2}}{h_{2}}}}{{{p_{1}}{h_{2}}+1}}.$ (26)
The corresponding decoding error probability is given by
${\varepsilon_{2}}=Q\left({f\left({{{\gamma_{2}}},{M},D}\right)}\right)$.
Now, we can formulate the optimization problem under NOMA transmission as:
$\displaystyle\vspace{-0.5cm}\mathop{\min}\limits_{\left\\{{{p_{1}},{p_{2}}}\right\\}}\;\;\;$
$\displaystyle{{\varepsilon}_{2}}$ (27a) $\displaystyle{\rm{s.t.}}\;\;\;$
$\displaystyle{{\bar{\varepsilon}}_{1}}\leq\varepsilon_{1}^{\max},$ (27b)
$\displaystyle M{p_{1}}+M{p_{2}}\leq{E_{{\rm{tot}}}},$ (27c) $\displaystyle
p_{1}\leq p_{2},$ (27d)
where (27d) represents that more power should be allocated to the user with
weaker channel gains.
Similar to the proof of Lemma 1, we can show that the energy constraint in
(27c) holds with equality at the optimum point. Then, we study the feasible
range of the power allocation $p_{1}$ to facilitate the search algorithm. The
expression of ${{\bar{\varepsilon}}_{1}}$ can be reexpressed as
$\vspace{-0.1cm}{{\bar{\varepsilon}}_{1}}={\varepsilon_{1}}+({{\hat{\varepsilon}_{1}}}-{\varepsilon_{1}})\varepsilon_{2}^{1}\geq{\varepsilon_{1}}.$
(28)
By using constraints (27b) and (28), we have
${\varepsilon_{1}}\leq\varepsilon_{1}^{\max}$. By denoting
${\bar{f}}(\gamma)=f(\gamma,M,D)$, the lower bound of $p_{1}$ can be derived
as
${p_{1}}\geq\frac{{{{\bar{f}}^{-1}}\left({{Q^{-1}}(\varepsilon_{1}^{\max})}\right)}}{{{h_{1}}}}\buildrel\Delta\over{=}p_{1}^{{\rm{lb}}}.$
(29)
From constraint (27d), we know that
${p_{1}}\leq\frac{{{E_{{\rm{tot}}}}}}{{2M}}$. To guarantee the meaningfulness
of $\varepsilon_{2}^{1}$, the inequality
${\log_{2}}\left({1+\gamma_{2}^{1}}\right)\geq D/M$ should hold. Then, we have
${p_{1}}\leq\frac{{{E_{{\rm{tot}}}}{2^{-\frac{D}{M}}}}}{M}-\frac{1}{{{h_{1}}}}+\frac{{{2^{-\frac{D}{M}}}}}{{{h_{1}}}}.$
(30)
In addition, to guarantee the meaningfulness of $\varepsilon_{2}$, the
inequality ${\log_{2}}\left({1+\gamma_{2}}\right)\geq D/M$ should hold, which
yields
${p_{1}}\leq\frac{{{E_{{\rm{tot}}}}{2^{-\frac{D}{M}}}}}{M}-\frac{1}{{{h_{2}}}}+\frac{{{2^{-\frac{D}{M}}}}}{{{h_{2}}}}.$
(31)
Since $h_{1}>h_{2}$, the upper bound of $p_{1}$ is given by
${p_{1}}\leq\min\left\\{{\frac{{{E_{{\rm{tot}}}}{2^{-\frac{D}{M}}}}}{M}-\frac{1}{{{h_{2}}}}+\frac{{{2^{-\frac{D}{M}}}}}{{{h_{2}}}},\frac{{{E_{{\rm{tot}}}}}}{{2M}}}\right\\}\buildrel\Delta\over{=}p_{1}^{{\rm{ub}}}.$
(32)
To further reduce the search complexity, in the following theorem, we prove
that constraint (27b) holds with equality at the optimum point.
_Theorem 2_ : Constraint (27b) holds with equality at the optimum solution.
Proof: Please see Appendix C.
Based on Theorem 2, we can readily know that the one-dimensional line search
algorithm can be used to find the optimal $p_{1}^{\star}$.
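A simple way to implement the one-dimensional search implied by Theorem 2 is to scan $p_{1}$ over $[p_{1}^{\rm{lb}},p_{1}^{\rm{ub}}]$, enforce the energy constraint with equality, and keep the feasible point with the smallest $\varepsilon_{2}$. The sketch below does exactly this; the grid size and helper names are our assumptions, and `decoding_error_prob` is the earlier sketch.

```python
# Illustrative sketch of the one-dimensional search for the NOMA scheme
# (Problem (27)); expressions follow (22)-(26).
import numpy as np

def noma_search(M, D, h1, h2, E_tot, eps1_max, p1_lb, p1_ub, n_grid=10000):
    best_p1, best_eps2 = None, 1.0
    for p1 in np.linspace(p1_lb, p1_ub, n_grid):
        p2 = E_tot / M - p1                                             # (27c) with equality
        eps21 = decoding_error_prob(p2 * h1 / (p1 * h1 + 1.0), M, D)    # robot decodes x2
        eps1 = decoding_error_prob(p1 * h1, M, D)                       # perfect SIC
        eps1_hat = decoding_error_prob(p1 * h1 / (p2 * h1 + 1.0), M, D) # SIC failed
        eps1_bar = eps1 * (1.0 - eps21) + eps1_hat * eps21              # (25)
        if eps1_bar > eps1_max:
            continue                                                    # (27b) violated
        eps2 = decoding_error_prob(p2 * h2 / (p1 * h2 + 1.0), M, D)     # (26)
        if eps2 < best_eps2:
            best_p1, best_eps2 = p1, eps2
    return best_p1, best_eps2
```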
### III-C Relay-assisted transmission
In this scheme, the robot acts as a relay that assists the transmission for
the actuator, where decode-and-forward (DF) relaying is assumed at the robot. The
packet ID is inserted in the packet head for each device to differentiate
their corresponding data information. The whole blocklength is divided into
two phases, the broadcast phase with blocklength $m_{1}$ and the relay phase
with blocklength $m_{2}$, which satisfy the constraint of $m_{1}+m_{2}\leq M$.
In the first phase, the controller broadcasts a large packet that is a
combination of two packets to both devices, where the combined packet size is
$2D$. The received signals at both devices are given by
$\begin{array}[]{l}{y_{1,1}}=\sqrt{{p_{s}}}{{\tilde{h}}_{1}}{\tilde{x}_{1}}+{n_{1}},\\\
{y_{1,2}}=\sqrt{{p_{s}}}{{\tilde{h}}_{2}}{\tilde{x}_{1}}+{n_{2}},\end{array}$
(33)
where $p_{s}$ denotes the power allocated to the combined packet,
$\tilde{x}_{1}$ carries the data information of the combined packet with
coding rate $2D/m_{1}$. Then, the SNR of the robot to decode the combined
packet is given by ${\gamma_{1}}={p_{s}}{h_{1}}$, and the decoding error
probability at the robot is given by
${\varepsilon_{1}}=Q\left({f\left({{{\gamma_{1}}},{m_{1}},2D}\right)}\right)$.
Since the robot acts as a relay based on the DF mode, if the robot
successfully decodes the combined packet, it will forward the actuator’s
packet to the actuator with coding rate $D/m_{2}$ in the second phase, and the
received signal at the actuator is given by
$\vspace{-0.1cm}{y_{2,2}}=\sqrt{{p_{r}}}{{\tilde{h}}_{3}}{x_{2}}+{n_{3}},$
(34)
where $p_{r}$ is the transmit power at the robot acting as the relay. The received SNR is
${\gamma_{2}}={p_{r}}{h_{3}}$, where $h_{3}$ is the normalized channel gain
given by
$h_{3}=|{\tilde{h}}_{3}|^{2}/{\sigma_{2}^{2}}$.
The error probability is given by
${\varepsilon_{2}}=Q\left({f\left({{{\gamma_{2}}},{m_{2}},D}\right)}\right)$.
There is a possibility that the actuator cannot decode its packet due to the
following two reasons: 1) the robot is not able to correctly decode the
combined packet and will not forward anything to the actuator with probability
$\varepsilon_{1}$; and 2) the robot correctly decodes the combined packet and
forwards the packet to the actuator with probability $1-\varepsilon_{1}$, but
with probability ${\varepsilon_{2}}$, the actuator fails to decode the packet.
In this case the actuator will have to decode the combined packet by using the
received signal from the first phase, i.e., ${y_{1,2}}$. The achieved SNR of
the actuator for decoding the combined packet is given by
${{\hat{\gamma}_{2}}}={{p_{s}}{h_{2}}}$, and the corresponding decoding error
probability is given by
${{\hat{\varepsilon}_{2}}}=Q\left({f\left({{{\hat{\gamma}_{2}}},{m_{1}},2D}\right)}\right)$.
As a result, the expected error probability of the actuator decoding its
packet in the relay-assisted transmission scheme is given by
${{\bar{\varepsilon}}_{2}}=\left({\left({1-\varepsilon_{1}}\right){\varepsilon_{2}}+\varepsilon_{1}}\right)\hat{\varepsilon}_{2}.\vspace{-0.4cm}$
(35)
Then, the resource allocation problem is formulated as
$\displaystyle\vspace{-0.5cm}\mathop{\min}\limits_{\left\\{{{m_{1}},{m_{2}},{p_{s}},{p_{r}}}\right\\}}\;\;\;$
$\displaystyle{{\bar{\varepsilon}}_{2}}$ (36a)
$\displaystyle{\rm{s.t.}}\;\;\;$
$\displaystyle{{\varepsilon}_{1}}\leq\varepsilon_{1}^{\max},$ (36b)
$\displaystyle{m_{1}}{p_{s}}+{m_{2}}{p_{r}}\leq{E_{{\rm{tot}}}},$ (36c)
$\displaystyle m_{1}+m_{2}\leq M,$ (36d) $\displaystyle
m_{1},m_{2}\in\mathbb{Z}.$ (36e)
By using the contradiction method, we can easily prove that constraint (36c)
holds with equality at the optimal solution. However, in contrast to the above
two transmission schemes, the decoding error probability constraint (36b) may
not hold with equality at the optimal solution, as the objective function may
also decrease with ${\varepsilon}_{1}$. The algorithms proposed for the OMA
and NOMA transmission schemes cannot be applied.
By using the similar iterative procedure in OMA scheme, we are able to obtain
the feasible region of $m_{1}$ as $m_{1}^{\rm{lb}}\leq m_{1}\leq
m_{1}^{\rm{ub}}$. For given $m_{1}$, the search region of $m_{2}$ can also be
obtained as $m_{2}^{\rm{lb}}(m_{1})\leq m_{2}\leq(M-m_{1})$.
In the following, we study the optimization problem of the power allocation
$p_{s}$ and $p_{r}$ under fixed $m_{1}$ and $m_{2}$. For each given $m_{2}$,
we can obtain the lower bound of $p_{r}$ to make $\varepsilon_{2}$ meaningful:
${p_{r}}\geq\left({2^{D/{m_{2}}}}-1\right)/{h_{3}}\buildrel\Delta\over{=}p_{r}^{{\rm{lb}}}$.
Thus, the upper bound of $p_{s}$ can be derived as
${p_{s}}\leq\frac{{{E_{{\rm{tot}}}}}}{{{m_{1}}}}-\frac{{{m_{2}}}}{{{m_{1}}}}p_{r}^{{\rm{lb}}}\buildrel\Delta\over{=}p_{s}^{{\rm{ub}}}.$
(37)
Hence, the feasible region of ${p_{s}}$ is given by $p_{s}^{\rm{lb}}\leq
p_{s}\leq p_{s}^{\rm{ub}}$, where $p_{s}^{\rm{lb}}$ is the solution to
equation ${\varepsilon_{1}}({p_{s}})=\varepsilon_{1}^{\max}$ with given
$m_{1}$. When $p_{s}$ is given, $p_{r}$ can be calculated as
$p_{r}=\left({{E_{{\rm{tot}}}}-{m_{1}}{p_{s}}}\right)/{m_{2}}$.
Then, the original optimization problem reduces to a one-dimension
optimization problem as
$\displaystyle\mathop{\min}\limits_{{p_{s}}}\;\;\;$
$\displaystyle{{\bar{\varepsilon}}_{2}}$ (38a)
$\displaystyle{\rm{s.t.}}\;\;\;$ $\displaystyle p_{s}^{\rm{lb}}\leq p_{s}\leq
p_{s}^{\rm{ub}}.$ (38b)
The one-dimensional line search method can be used to solve Problem (38).
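A line-search sketch for Problem (38) is given below; it follows the error expressions in (33)-(35), reuses the `decoding_error_prob` helper from the earlier sketch, and treats the grid resolution as a tuning parameter of our own.

```python
# Illustrative sketch of the one-dimensional search in (38) for the
# relay-assisted scheme.
import numpy as np

def relay_search(m1, m2, D, h1, h2, h3, E_tot, ps_lb, ps_ub, n_grid=10000):
    best_ps, best_err = None, 1.0
    for ps in np.linspace(ps_lb, ps_ub, n_grid):
        pr = (E_tot - m1 * ps) / m2                              # (36c) with equality
        eps1 = decoding_error_prob(ps * h1, m1, 2 * D)           # robot, combined packet
        eps2 = decoding_error_prob(pr * h3, m2, D)               # actuator, relayed packet
        eps2_hat = decoding_error_prob(ps * h2, m1, 2 * D)       # actuator, phase-1 signal
        bar_eps2 = ((1.0 - eps1) * eps2 + eps1) * eps2_hat       # (35)
        if bar_eps2 < best_err:
            best_ps, best_err = ps, bar_eps2
    return best_ps, best_err
```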
In summary, we provide Algorithm 2 to solve Problem (36).
Input : $h_{1},h_{2},h_{3},D,M,\varepsilon_{1}^{\max},E_{{\rm{tot}}}$
Output : $p_{s}^{\star},p_{r}^{\star},m_{1}^{\star},m_{2}^{\star}$
1 Apply the iterative procedure to calculate $m_{1}^{\rm{lb}},m_{1}^{\rm{ub}}$
and $m_{2}^{\rm{lb}}$;
2for _ $m_{1}=m_{1}^{\rm{lb}}:m_{1}^{\rm{ub}}$ _ do
3 Calculate the solution to the equation
${{\varepsilon}_{1}}=\varepsilon_{1}^{\max}$, which is denoted as
$p_{s}^{\rm{lb}}$. Calculate the lower bound of $m_{2}$ with given $m_{1}$,
denoted as $m_{2}^{\rm{lb}}(m_{1})$.
4 for _$m_{2}=m_{2}^{\rm{lb}}(m_{1}):(M-m_{1})$_ do
5 Calculate the upper bound of $p_{s}$ as $p_{s}^{\rm{ub}}$ in (37), and solve
Problem (38). Calculate the objective value
${{\bar{\varepsilon}}_{2}}(m_{1},m_{2})$.
6 end for
7 Given $m_{1}$, find the blocklength $m_{2}$ with the minimum value of
${\bar{\varepsilon}}_{2}(m_{1},m_{2})$:
${\left.{m_{2}^{\\#}}\right|_{{m_{1}}}}=\mathop{\arg\min}\limits_{m_{2}^{{\rm{lb}}}(m_{1})\leq{m_{2}}\leq
M-{m_{1}}}{\bar{\varepsilon}_{2}}\left({{m_{1}},{m_{2}}}\right).$
8 end for
Return
$m_{1}^{\star}=\mathop{\arg\min}\limits_{m_{1}^{{\rm{lb}}}\leq{m_{1}}\leq
m_{1}^{{\rm{ub}}}}{\bar{\varepsilon}_{2}}\left({{m_{1}},{{\left.{m_{2}^{\\#}}\right|}_{{m_{1}}}}}\right),m_{2}^{\star}={\left.{m_{2}^{\\#}}\right|_{m_{1}^{\star}}}$
and the corresponding $p_{s}^{\star}$ and $p_{r}^{\star}$.
Algorithm 2 Algorithm for Problem (36)
### III-D C-NOMA transmission
In this part, we consider the C-NOMA transmission in [30], which is a
combination of the NOMA scheme and relay-assisted scheme. Specifically, in the
first phase, the controller transmits two signals $x_{1}$ and $x_{2}$ to the
two devices via the NOMA technique. In the second phase, the robot acts as a
relay and forwards the packet to the actuator. The blocklength for these two
phases are denoted by $m_{1}$ and $m_{2}$, which satisfies $m_{1}+m_{2}\leq
M$.
Specifically, in the first phase, the received signals at the robot and the
actuator are given by
$\begin{array}[]{l}{y_{1,1}}=\sqrt{{p_{1}}}{{\tilde{h}}_{1}}{x_{1}}+\sqrt{{p_{2}}}{{\tilde{h}}_{1}}{x_{2}}+{n_{1}},\\\
{y_{1,2}}=\sqrt{{p_{1}}}{{\tilde{h}}_{2}}{x_{1}}+\sqrt{{p_{2}}}{{\tilde{h}}_{2}}{x_{2}}+{n_{2}},\end{array}$
(39)
respectively, where $p_{1}$ and $p_{2}$ are the transmit power allocated to
the robot and the actuator, and $x_{1}$ and $x_{2}$ carry the information of two
different packets, each of size $D$. Hence, the coding rates for the
transmissions to the robot and the actuator are both $D/m_{1}$.
By using the NOMA scheme, the SIC technique is employed at the robot side to
cancel the interference from the actuator. Similar to the analysis in the NOMA
scheme, the decoding error probability of $x_{2}$ at the robot is given by
$\vspace{-0.1cm}{{\varepsilon_{2}^{1}}}=Q\left({f\left({{{\gamma_{2}^{1}}},{m_{1}},D}\right)}\right),$
(40)
where $\gamma_{2}^{1}$ is the same as that in (22). Under perfect SIC
condition, the decoding error probability of $x_{1}$ at the robot is given by
${\varepsilon_{1}}=Q\left({f\left({{{\gamma_{1}}},{m_{1}},D}\right)}\right),$
(41)
where ${\gamma_{1}}={p_{1}}{h_{1}}$. However, if SIC fails, the corresponding
decoding error probability of $x_{1}$ at the robot is given by
${\hat{\varepsilon}_{1}}=Q\left({f\left({{{\hat{\gamma}_{1}}},{m_{1}},D}\right)}\right)$,
where $\hat{\gamma}_{1}$ is given by (24). Using the same analysis as in NOMA,
the average decoding error probability at the robot is given by
$\vspace{-0.1cm}{{\bar{\varepsilon}}_{1}}={\varepsilon_{1}}(1-\varepsilon_{2}^{1})+{{{\hat{\varepsilon}_{1}}}}\varepsilon_{2}^{1}.$
(42)
By using the similar analysis as in the relay-assisted scheme, the decoding
error probability of the actuator decoding $x_{2}$ under the C-NOMA scheme is
given by
$\vspace{-0.1cm}{{\bar{\varepsilon}}_{2}}=\left({\left({1-\varepsilon_{2}^{1}}\right){\varepsilon_{2}}+\varepsilon_{2}^{1}}\right)\hat{\varepsilon}_{2},$
(43)
where $\varepsilon_{2}^{1}$ is given in (40), $\varepsilon_{2}=Q\left({f\left({{p_{r}}{h_{3}},{m_{2}},D}\right)}\right)$
is the decoding error probability of the relayed packet at the actuator (as in the
relay-assisted scheme), and $\hat{\varepsilon}_{2}$ is the decoding error probability of the
actuator when the actuator has to decode $x_{2}$ from the received signal in
the first phase. The expression of $\hat{\varepsilon}_{2}$ is given by
${\hat{\varepsilon}_{2}}=Q\left({f\left({{\hat{\gamma}_{2}},{m_{1}},D}\right)}\right),$
(44)
where ${{\hat{\gamma}_{2}}}$ is given by
${{\hat{\gamma}_{2}}}=\frac{{{p_{2}}{h_{2}}}}{{{p_{1}}{h_{2}}+1}}.$ (45)
Therefore, the optimization problem of C-NOMA transmission scheme can be
formulated as
$\displaystyle\vspace{-0.2cm}\mathop{\min}\limits_{\left\\{{{m_{1}},{m_{2}},{p_{1}},{p_{2}},{p_{r}}}\right\\}}\;\;\;$
$\displaystyle{{\bar{\varepsilon}}_{2}}$ (46a)
$\displaystyle{\rm{s.t.}}\;\;\;$
$\displaystyle{{\bar{\varepsilon}}_{1}}\leq\varepsilon_{1}^{\max},$ (46b)
$\displaystyle{m_{1}}({p_{1}}+{p_{2}})+{m_{2}}{p_{r}}\leq{E_{{\rm{tot}}}},$
(46c) $\displaystyle m_{1}+m_{2}\leq M,$ (46d) $\displaystyle
m_{1},m_{2}\in\mathbb{Z},$ (46e) $\displaystyle p_{1}\leq p_{2}.$ (46f)
Following the similar proof as Lemma 1, we can show that constraints (46b) and
(46c) hold with equality at the optimal point, thus the search method can be
used to find the optimal solution of Problem (46). To reduce the search
complexity, we need to find tight lower and upper bounds on $m_{1}$ and
$m_{2}$.
However, unlike the previous schemes, where only two power allocation variables
are involved, the number of power allocation variables in the C-NOMA scheme is
three. This will complicate the analysis of deriving the bounds of $m_{1}$ and
$m_{2}$. To deal with this difficulty, we regard the summation of
$p_{1}+p_{2}$ as a whole entity. To realize the functionality of the C-NOMA
scheme, $\varepsilon_{1}$ and ${{\varepsilon_{2}^{1}}}$ should be very small,
e.g., much lower than 0.5. Then, we have
$\displaystyle\vspace{-0.8cm}{p_{1}}$ $\displaystyle\geq$
$\displaystyle\frac{1}{{{h_{1}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right),$
(47) $\displaystyle{p_{2}}$ $\displaystyle\geq$
$\displaystyle\frac{1}{{{h_{1}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right)\left({1+{p_{1}}{h_{1}}}\right).\vspace{-0.2cm}$
(48)
By substituting (47) into the right hand side of (48), we have
${p_{2}}\geq\frac{1}{{{h_{1}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right)2^{\frac{D}{{{m_{1}}}}}.$
(49)
By adding (47) and (49), one can obtain
$\vspace{-0.1cm}{p_{1}}+{p_{2}}\geq\frac{1}{{{h_{1}}}}\left({{2^{\frac{{2D}}{{{m_{1}}}}}}-1}\right).$
(50)
To ensure that $\varepsilon_{2}$ is meaningful, we have
${p_{r}}\geq\frac{1}{{{h_{3}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right).$
(51)
By using the similar iterative procedure, we can also obtain the lower bounds
of $m_{1}$ and $m_{2}$, which are denoted as $m_{1}^{{\rm{lb}}}$ and
$m_{2}^{{\rm{lb}}}$, respectively. As a result, the search region of $m_{1}$
is given by $m_{1}^{{\rm{lb}}}\leq
m_{1}\leq(M-m_{2}^{{\rm{lb}}})\buildrel\Delta\over{=}m_{1}^{{\rm{ub}}}$. For
each given $m_{1}$ within the range, we need to find the search range of
$m_{2}$, which is detailed as follows. Since
${\varepsilon_{1}}<{{\bar{\varepsilon}}_{1}}\leq\varepsilon_{1}^{\max}$, the
lower bound of $p_{1}$ can be obtained by solving the equation of
${\varepsilon_{1}}({p_{1}})=\varepsilon_{1}^{\max}$ for given $m_{1}$, which
is denoted as $p_{1}^{\rm{lb}}$. By using (48), we can obtain the lower bound
of $p_{2}$ as follows:
$\vspace{-0.1cm}{p_{2}}\geq\frac{1}{{{h_{1}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right)\left({1+{p_{1}^{\rm{lb}}}{h_{1}}}\right)\buildrel\Delta\over{=}p_{2}^{\rm{lb}}.$
(52)
Based on (46c), we have
$\vspace{-0.05cm}{E_{{\rm{tot}}}}-{m_{1}}(p_{1}^{{\rm{lb}}}+p_{2}^{{\rm{lb}}})\geq{m_{2}}{p_{r}}\geq\frac{{{m_{2}}}}{{{h_{3}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right),$
(53)
where the last inequality is due to the fact that
${p_{r}}\geq\frac{1}{{{h_{3}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right)$
must hold to guarantee the meaningfulness of ${\varepsilon_{2}}$. The lower
bound of $m_{2}$ under given $m_{1}$ (denoted as $m_{2}^{{\rm{lb}}}(m_{1})$)
can be obtained from (53), which is the minimum integer that satisfies (53).
Obviously, the upper bound of $m_{2}$ with given $m_{1}$ is $M-m_{1}$. Hence,
the search region of $m_{2}$ is given by $m_{2}^{{\rm{lb}}}(m_{1})\leq
m_{2}\leq(M-m_{1})$.
Given $m_{1}$ and $m_{2}$, we need to find the optimal $p_{1}$, $p_{2}$ and
$p_{r}$. These variables are coupled, and it is difficult to find the optimal
solution by using the optimization method. The one-dimensional search is
adopted to find the optimal solution. In particular, we first fix the value of
the sum of $p_{1}$ and $p_{2}$ as $t$, i.e., $t=p_{1}+p_{2}$. Since constraint
(46b) holds with equality at the optimal point, the optimal $p_{1}$ can be
obtained by solving the equation
$\bar{\varepsilon}_{1}(p_{1})=\varepsilon_{1}^{\rm{max}}$ by inserting
$p_{2}=t-p_{1}$ into this equation. By combining (48) and (46f), the upper
bound of $p_{1}$ is obtained as $p_{1}\leq\min{\left(t\cdot
2^{-\frac{D}{m_{1}}}-\frac{1}{h_{1}}+\frac{1}{h_{1}}\cdot
2^{-\frac{D}{m_{1}}},\frac{t}{2}\right)}\triangleq p_{1}^{\rm{up}}$, and
$p_{1}$ should be within the domain
$p_{1}\in(p_{1}^{\rm{lb}},p_{1}^{\rm{up}})$. This equation has only one
variable $p_{1}$ and the one-dimensional search method can be adopted to solve
the equation. As constraint (46c) holds with equality, $p_{r}$ can be directly
obtained as
$p_{r}=({E_{{\rm{tot}}}}-t{m_{1}})/{m_{2}}$.
Calculate the objective value with given $m_{1}$, $m_{2}$, $t$ and $p_{r}$.
The remaining task is to find the tight search region $t$. Obviously, the
lower bound of $t$ is given by
$t^{\rm{lb}}=p_{1}^{{\rm{lb}}}+p_{2}^{{\rm{lb}}}$. To obtain the upper bound
of $t$, we first find the lower bound of $m_{2}p_{r}$, which is given by
$\vspace{-0.1cm}{m_{2}}{p_{r}}\geq\frac{{{m_{2}}}}{{{h_{3}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right).$
(54)
Then, the upper bound of $t$ is given by
$t\leq\frac{1}{{{m_{1}}}}\left({{E_{{\rm{tot}}}}-\frac{{{m_{2}}}}{{{h_{3}}}}\left({{2^{\frac{D}{{{m_{2}}}}}}-1}\right)}\right)={t^{{\rm{ub}}}}.$
(55)
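The nested search described above can be sketched as follows. For simplicity, the inner root-finding for $p_{1}$ is replaced here by a constrained grid scan over $(p_{1}^{\rm{lb}},p_{1}^{\rm{up}})$, and all helper names and grid sizes are our assumptions rather than the paper's implementation; `decoding_error_prob` is the earlier sketch.

```python
# Illustrative sketch of the nested search for the C-NOMA scheme at fixed
# (m1, m2); expressions follow (40)-(45).
import numpy as np

def cnoma_search(m1, m2, D, h1, h2, h3, E_tot, eps1_max,
                 t_lb, t_ub, p1_lb, n_t=200, n_p=200):
    best = (None, None, 1.0)                                   # (t, p1, bar_eps2)
    for t in np.linspace(t_lb, t_ub, n_t):
        pr = (E_tot - t * m1) / m2                             # (46c) with equality
        p1_ub = min(t * 2 ** (-D / m1) - 1 / h1 + 2 ** (-D / m1) / h1, t / 2)
        if p1_ub <= p1_lb:
            continue
        for p1 in np.linspace(p1_lb, p1_ub, n_p):
            p2 = t - p1
            eps21 = decoding_error_prob(p2 * h1 / (p1 * h1 + 1), m1, D)
            eps1 = decoding_error_prob(p1 * h1, m1, D)
            eps1_hat = decoding_error_prob(p1 * h1 / (p2 * h1 + 1), m1, D)
            if eps1 * (1 - eps21) + eps1_hat * eps21 > eps1_max:        # (42), (46b)
                continue
            eps2 = decoding_error_prob(pr * h3, m2, D)                  # relay phase
            eps2_hat = decoding_error_prob(p2 * h2 / (p1 * h2 + 1), m1, D)  # (44)-(45)
            bar_eps2 = ((1 - eps21) * eps2 + eps21) * eps2_hat          # (43)
            if bar_eps2 < best[2]:
                best = (t, p1, bar_eps2)
    return best
```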
Based on the above analysis, we provide Algorithm 3 to solve Problem (46).
Input : $h_{1},h_{2},h_{3},D,M,\varepsilon_{1}^{\max},E_{{\rm{tot}}}$
Output :
$p_{1}^{\star},p_{2}^{\star},p_{r}^{\star},m_{1}^{\star},m_{2}^{\star}$
1 Apply the iterative procedure to calculate $m_{1}^{\rm{lb}},m_{1}^{\rm{ub}}$
and $m_{2}^{\rm{lb}}$;
2for _ $m_{1}=m_{1}^{\rm{lb}}:m_{1}^{\rm{ub}}$ _ do
3 Calculate the solution to the equation
${{\varepsilon}_{1}}=\varepsilon_{1}^{\max}$, which is denoted as
$p_{1}^{\rm{lb}}$. Use (52) to calculate the lower bound of $p_{2}$, denoted
as $p_{2}^{\rm{lb}}$. Use (53) to find the lower bound of $m_{2}$, denoted as
$m_{2}^{{\rm{lb}}}(m_{1})$.
4 if _$m_{2}^{{\rm{lb}}}(m_{1})\leq(M-m_{1})$ _ then
5 for _$m_{2}=m_{2}^{{\rm{lb}}}(m_{1}):(M-m_{1})$_ do
6 Calculate the lower bound of $t$ as
$t^{\rm{lb}}=p_{1}^{{\rm{lb}}}+p_{2}^{{\rm{lb}}}$ , and the upper bound of $t$
as $t^{\rm{ub}}$ from (55). Use the one-dimensional search to find the optimal
$t$ that achieves the minimum objective value. Denote the optimal objective
value ${{\bar{\varepsilon}}_{2}}(m_{1},m_{2})$.
7 end for
8 Given $m_{1}$, find the blocklength $m_{2}$ with the minimum value of
${\bar{\varepsilon}}_{2}(m_{1},m_{2})$:
${\left.{m_{2}^{\\#}}\right|_{{m_{1}}}}=\mathop{\arg\min}\limits_{m_{2}^{{\rm{lb}}}(m_{1})\leq{m_{2}}\leq
M-{m_{1}}}{\bar{\varepsilon}_{2}}\left({{m_{1}},{m_{2}}}\right).$
9 end if
10
11 end for
Return
$m_{1}^{\star}=\mathop{\arg\min}\limits_{m_{1}^{{\rm{lb}}}\leq{m_{1}}\leq
m_{1}^{{\rm{ub}}}}{\bar{\varepsilon}_{2}}\left({{m_{1}},{{\left.{m_{2}^{\\#}}\right|}_{{m_{1}}}}}\right),m_{2}^{\star}={\left.{m_{2}^{\\#}}\right|_{m_{1}^{\star}}}$
and the corresponding $p_{1}^{\star}$, $p_{2}^{\star}$, and $p_{r}^{\star}$.
Algorithm 3 Algorithm for Problem (46)
Remark: It is noted that the feasible region of C-NOMA scheme is smaller than
that of the relay-assisted transmission scheme. Specifically, if $p_{1}^{*}$
and $p_{2}^{*}$ form any feasible solution of Problem (46), it can be
readily checked that $p_{s}=p_{1}^{*}+p_{2}^{*}$ is also a feasible solution
of Problem (36). However, if $\\{p_{1}^{*},p_{2}^{*}\\}$ is not a feasible
solution of Problem (46), $p_{s}=p_{1}^{*}+p_{2}^{*}$ may still be feasible
for Problem (36). For example, by letting
${p_{2}}=\frac{1}{{{h_{2}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right),{p_{1}}=\frac{1}{{{h_{1}}}}\left({{2^{\frac{{2D}}{{{m_{1}}}}}}-1}\right)-\frac{1}{{{h_{2}}}}\left({{2^{\frac{D}{{{m_{1}}}}}}-1}\right),$
(56)
it can be readily checked that $p_{1}$ and $p_{2}$ do not satisfy condition
(48), which is not feasible for Problem (46). However by setting
$p_{s}=p_{1}+p_{2}$, $p_{s}$ is still feasible for Problem (36). This
observation means the feasible region for Problem (36) is larger than that of
Problem (46).
## IV Extension to More Devices for the OMA Scheme
In this section, we consider the more general case when the system has more
than two devices for the OMA scheme. The extension to other schemes will be
studied in future work.
### IV-A System Model and Problem Formulation
Let us denote the total number of devices as $K$, and the set of all devices
as $\cal K$. We assume that the normalized channel gains of all $K$ devices
are arranged in a decreasing order, i.e., $h_{1}>h_{2}>\cdots>h_{K}$ (due to the small-scale fading, the probability that any two or more devices have the same channel gain is equal to zero). Then, we aim to jointly optimize the
power and blocklength allocation to minimize the decoding error probability of
the $K$th device while guaranteeing the decoding error probability
requirements of the first $K-1$ devices. Mathematically, the optimization
problem can be formulated as follows:
$\displaystyle\min_{\left\\{m_{k},k\in\cal K\right\\},\left\\{p_{k},k\in\cal
K\right\\}}\;\;\;\;$ $\displaystyle\varepsilon_{K}$ (57a) s.t.
$\displaystyle\varepsilon_{k}\leq\varepsilon_{k}^{\max},\;\;k\in{\cal
K}\backslash K,$ (57b) $\displaystyle\sum\nolimits_{k\in\cal K}m_{k}p_{k}\leq
E_{\text{tot}},$ (57c) $\displaystyle\sum\nolimits_{k\in\cal K}m_{k}\leq M,$
(57d) $\displaystyle m_{k}\in\mathbb{Z},k\in\cal K.$ (57e)
In contrast to the case of two devices where the globally optimal solution to
Problem (7) can be obtained, the globally optimal solution to Problem (57) for
the more general case is not available. In the following, we aim to obtain a
suboptimal solution to Problem (57).
### IV-B Problem Reformulation
To make Problem (57) tractable, we again approximate $V$ as one, i.e.,
$V\approx 1$. This approximation is very accurate when the SNR value $\gamma$
is very high, i.e., $\gamma\gg 1$. As the decoding error probability is a
decreasing function of power and blocklength, we can readily prove that
constraints (57b), (57c) and (57d) hold with equality at the optimum point by
using the contradiction method. By using the fact that
$\varepsilon_{k}=\varepsilon_{k}^{\max},k\in{\cal K}\backslash K$, $p_{k}$ can
be derived as a function of $m_{k}$, given by
$p_{k}=\frac{2^{\frac{D}{m_{k}}+\frac{Q^{-1}(\varepsilon_{k}^{\max})}{\ln
2\sqrt{m_{k}}}}-1}{h_{k}}\triangleq\chi(m_{k}),{k\in\cal K}\backslash K.$ (58)
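As a sanity check, (58) is straightforward to evaluate numerically. The sketch below is a direct transcription of (58), assuming $V\approx 1$, with $Q^{-1}$ computed via `scipy.stats.norm.isf`; the numerical inputs are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

def chi(m_k, D, eps_k_max, h_k):
    """Transmit power that makes epsilon_k = eps_k_max for blocklength m_k, eq. (58)."""
    exponent = D / m_k + norm.isf(eps_k_max) / (np.log(2) * np.sqrt(m_k))
    return (2.0 ** exponent - 1.0) / h_k

# Example with placeholder values: D = 100 bits, eps_max = 1e-9, h_k = 1e3.
print(chi(60, 100, 1e-9, 1e3))
```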
By substituting (58) into (57), Problem (57) can be transformed as follows:
$\displaystyle\min_{\left\\{m_{k},k\in\cal K\right\\},p_{K}}\;\;\;\;$
$\displaystyle\varepsilon_{K}$ (59a) s.t. $\displaystyle\sum\limits_{k\in{\cal
K}\backslash K}m_{k}\chi(m_{k})+m_{K}p_{K}=E_{\text{tot}},$ (59b)
$\displaystyle\sum\limits_{k\in{\cal K}}m_{k}=M,m_{k}\in\mathbb{Z},k\in\cal
K.$ (59c)
Compared with the original Problem (57), the number of optimization variables
of Problem (59) is significantly reduced. However, this problem is still
difficult to solve. In the following, we first use the exhaustive search to
find $m_{K}$, and then optimize $p_{K}$. To this end, we need to find tight
lower and upper bounds of $m_{K}$ to reduce the computational complexity.
### IV-C Bounds of $m_{K}$
In this subsection, we attempt to obtain the bounds of $m_{K}$. We first
provide the following theorem.
_Theorem 3_ : Define
$A_{k}={{{Q^{-1}}(\varepsilon_{k}^{\max})}\mathord{\left/{\vphantom{{{Q^{-1}}(\varepsilon_{k}^{\max})}{\ln
2}}}\right.\kern-1.2pt}{\ln 2}}$ and
$g\left({{m_{k}}}\right)\buildrel\Delta\over{=}{m_{k}}\chi({m_{k}})$. Then,
$g\left({{m_{k}}}\right)$ is a monotonically decreasing and convex function
when $m_{k}$ satisfies:
$\sqrt{{m_{k}}}<\frac{{\frac{3}{4}{A_{k}}\ln
2+\sqrt{\frac{9}{{16}}{{\left({\ln 2}\right)}^{2}}A_{k}^{2}+8D\ln 2}}}{2}.$
(60)
_Proof_ : Please see Appendix D.
In general, for a typical URLLC system, the number of transmission bits is around $100$ and the decoding error probability requirement is around $10^{-9}$. Then, $A_{k}$ is 8.653 and the value of the right-hand side of (60) is 14.236, so the inequality (60) holds whenever $m_{k}\leq 202$. In short-packet transmission with the OMA scheme, the blocklength allocated to each device is generally smaller than 100. Hence, in our considered scenario, $g\left({{m_{k}}}\right)$ can be regarded as a monotonically decreasing and convex function.
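The constants quoted above can be reproduced directly; the short computation below (with $D=100$ bits and $\varepsilon_{k}^{\max}=10^{-9}$) is only a numerical check of the threshold in Theorem 3.

```python
import numpy as np
from scipy.stats import norm

D, eps_max = 100, 1e-9
A_k = norm.isf(eps_max) / np.log(2)                       # Q^{-1}(1e-9)/ln 2 ~ 8.65
rhs = (0.75 * A_k * np.log(2)
       + np.sqrt(9 / 16 * np.log(2) ** 2 * A_k ** 2 + 8 * D * np.log(2))) / 2
print(A_k, rhs, rhs ** 2)                                 # ~ 8.65, ~ 14.24, ~ 202.6
```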
In the following, we provide an iterative procedure to obtain the tight bounds
of $m_{K}$. Since $m_{k}p_{k}=g(m_{k})<E_{\rm{tot}}$ and $g(m_{k})$ is a
monotonically decreasing function, we can obtain the lower bound of $m_{k}$ by
using the bisection search method, which is denoted as
$m_{k}^{\rm{lb}(0)},k\in{\cal K}\backslash K$. To guarantee that ${\varepsilon_{K}}$ is meaningful, the following inequality must hold:
${p_{K}}>{{\left({{2^{\frac{D}{{{m_{K}}}}}}-1}\right)}\mathord{\left/{\vphantom{{\left({{2^{\frac{D}{{{m_{K}}}}}}-1}\right)}{{h_{K}}}}}\right.\kern-1.2pt}{{h_{K}}}}.$
(61)
Then, we have
${E_{{\rm{tot}}}}>{m_{K}}{p_{K}}>\frac{{{m_{K}}}}{{{h_{K}}}}\left({{2^{\frac{D}{{{m_{K}}}}}}-1}\right)\buildrel\Delta\over{=}q({m_{K}}).$
(62)
As a result, we can obtain the lower bound of $m_{K}$ from (62), which is
denoted as $m_{K}^{\rm{lb}(0)}$. Then, for each device $k$, the upper bound of
$m_{k}$ is given by $m_{k}^{\rm{ub}(0)}=M-\sum\nolimits_{i\in{\cal
K}\backslash k}{m_{i}^{{\rm{lb}}(0)}},k\in\cal K$. Since $q({m_{K}})$ defined
in (62) is a monotonically decreasing function, we have
$q({m_{K}})>q({m_{K}^{\rm{ub}(0)}})$. In addition, $g(m_{k})$ is a
monotonically decreasing function of $m_{k}$, and we have
$g(m_{k})>g(m_{k}^{\rm{ub}(0)}),k\in{\cal K}\backslash K$. Then, for each
$k\in{\cal K}\backslash K$, we have
${E_{{\rm{tot}}}}-\sum\nolimits_{i\in{\cal
K}\backslash\\{K,k\\}}{g(m_{i}^{{\rm{ub}(0)}})}-q({m_{K}^{\rm{ub}(0)}})>g(m_{k}),k\in{\cal
K}\backslash K.$ (63)
Then, the lower bound of $m_{k}$ for $k\in{\cal K}\backslash K$ can be
obtained as ${m_{k}^{{\rm{lb}}(1)}},k\in{\cal K}\backslash K$. For the $K$th
device, we have
${E_{{\rm{tot}}}}-\sum\nolimits_{k\in{\cal K}\backslash
K}{g(m_{k}^{{\rm{ub}(0)}})}>\frac{{{m_{K}}}}{{{h_{K}}}}\left({{2^{\frac{D}{{{m_{K}}}}}}-1}\right).$
(64)
Then, based on (64) we can update the lower bound of $m_{K}$ as
$m_{K}^{{\rm{lb}(1)}}$. Then, for each device $k$, the upper bound of $m_{k}$
is given by $m_{k}^{\rm{ub}(1)}=M-\sum\nolimits_{i\in{\cal K}\backslash
k}{m_{i}^{{\rm{lb}}(1)}},k\in\cal K$. Finally, repeat the above procedure
until $m_{K}^{{\rm{lb}}(n)}=m_{K}^{{\rm{lb}}(n+1)}$ and
$m_{K}^{{\rm{ub}}(n)}=m_{K}^{{\rm{ub}}(n+1)}$, where $n$ is the iteration
number. Similar to the case of two devices, the above procedure can be proved
to be convergent, and denote the final converged upper and lower bounds of
$m_{K}$ as $m_{K}^{\rm{ub}}$ and $m_{K}^{\rm{lb}}$, respectively.
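To make the iteration concrete, here is a hypothetical Python sketch of the bound-tightening procedure; the routine and all numerical values (channel gains, $D$, $M$, $E_{\rm tot}$) are illustrative assumptions rather than the paper's simulation setup, and feasibility of the chosen parameters is assumed.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters (placeholders, assumed feasible).
D, M, E_tot, eps_max = 100, 150, 2.0, 1e-9
h = np.array([5e3, 2e3, 1e3])                    # normalized gains h_1 > ... > h_K
K = len(h)
A = norm.isf(eps_max) / np.log(2)                # same A_k for every device

def g(m, k):                                     # m_k * chi(m_k), devices k = 1..K-1
    return m * (2 ** (D / m + A / np.sqrt(m)) - 1) / h[k]

def q(m):                                        # minimum energy of the K-th device, eq. (62)
    return m * (2 ** (D / m) - 1) / h[K - 1]

def smallest_m(f, budget):                       # f is decreasing over the blocklengths of interest
    return next(m for m in range(1, M + 1) if f(m) <= budget)

lb = np.array([smallest_m(lambda m, k=k: g(m, k), E_tot) for k in range(K - 1)]
              + [smallest_m(q, E_tot)])
while True:
    ub = M - (lb.sum() - lb)                     # m_k^ub = M minus the other lower bounds
    new_lb = lb.copy()
    for k in range(K - 1):                       # refine the first K-1 lower bounds via (63)
        slack = E_tot - sum(g(ub[i], i) for i in range(K - 1) if i != k) - q(ub[K - 1])
        new_lb[k] = smallest_m(lambda m: g(m, k), slack)
    slack_K = E_tot - sum(g(ub[i], i) for i in range(K - 1))
    new_lb[K - 1] = smallest_m(q, slack_K)       # refine the K-th lower bound via (64)
    if np.array_equal(new_lb, lb):               # stop when the bounds no longer move
        break
    lb = new_lb
print("converged bounds for m_K:", lb[K - 1], M - (lb.sum() - lb[K - 1]))
```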
### IV-D Optimization of $p_{K}$ with Given $m_{K}$
Given $m_{K}$, $\varepsilon_{K}$ is a monotonically decreasing function of
$p_{K}$ and $p_{K}$ is given by
${p_{K}}=\frac{1}{{{m_{K}}}}\left({{E_{{\rm{tot}}}}-\sum\limits_{k\in{\cal K}\backslash K}{{m_{k}}\chi({m_{k}})}}\right).$ (65)
Problem (59) can then be equivalently transformed as
$\displaystyle\min_{\left\\{m_{k},k\in{\cal K}\backslash K\right\\}}\;\;\;\;$
$\displaystyle\sum\limits_{k\in{\cal K}\backslash K}m_{k}\chi(m_{k})$ (66a)
s.t. $\displaystyle\sum\limits_{k\in{\cal K}\backslash K}m_{k}=M-m_{K},$ (66b)
$\displaystyle m_{k}\in\mathbb{Z},k\in{\cal K}\backslash K.$ (66c)
This problem is still difficult to solve due to the integer constraint (66c).
To resolve this issue, we relax $\\{m_{k},k\in{\cal K}\backslash K\\}$ to
continuous values. Then, Problem (66) can be relaxed as follows:
$\displaystyle\min_{\left\\{m_{k},k\in{\cal K}\backslash K\right\\}}\;\;\;\;$
$\displaystyle\sum\limits_{k\in{\cal K}\backslash K}m_{k}\chi(m_{k})$ (67a)
s.t. $\displaystyle m_{k}\geq m_{k}^{\rm{lb}},k\in{\cal K}\backslash
K,(\ref{edsawxs}),$ (67b)
where $\\{m_{k}^{\rm{lb}},k\in{\cal K}\backslash K\\}$ are given in the above
subsection. Since $m_{k}\chi(m_{k})$ is proved to be convex in Theorem 3, Problem (67) is a convex optimization problem, which can be solved by using the Lagrangian dual decomposition method [31]. Introducing the Lagrange multiplier $\lambda$ associated with constraint (66b), the partial Lagrangian function of Problem (67) is given by
${\cal L}({\bf{m}},\lambda)=\sum\limits_{k\in{\cal K}\backslash K}m_{k}\chi(m_{k})+\lambda\left(\sum\limits_{k\in{\cal K}\backslash K}m_{k}-(M-m_{K})\right),$ (68)
where ${\bf{m}}=\\{m_{k},k\in{\cal K}\backslash K\\}$.
In the following, we aim to obtain the optimal $m_{k},k\in{\cal K}\backslash
K$ for given $\lambda$, which is denoted as $m_{k}^{\star}(\lambda),k\in{\cal
K}\backslash K$. As ${\cal L}({\bf{m}},\lambda)$ is a convex function of
$m_{k},k\in{\cal K}\backslash K$, the optimal $m_{k}$ for given $\lambda$ can
be obtained in the following. If
${\left.{\frac{{\partial{\cal
L}({\bf{m}},\lambda)}}{{\partial{m_{k}}}}}\right|_{{m_{k}}=m_{k}^{{\rm{lb}}}}}\geq
0,$ (69)
the optimal $m_{k}$ is given by $m_{k}^{\star}(\lambda)=m_{k}^{{\rm{lb}}}$.
Otherwise, $m_{k}^{\star}(\lambda)$ is the solution to the following equation:
$\frac{{\partial{\cal L}({\bf{m}},\lambda)}}{{\partial{m_{k}}}}=0,$ (70)
which can be obtained by using the bisection search method.
Upon obtaining the optimal ${m_{k}^{\star}}(\lambda),k\in{\cal K}\backslash
K$, we can obtain the value of the left hand side of (66b), which is defined
as function $F(\lambda)$
$F(\lambda)\buildrel\Delta\over{=}\sum\limits_{k\in{\cal K}\backslash
K}m_{k}^{\star}(\lambda).$ (71)
By using the similar technique as in Appendix A of [32], we can prove that
$F(\lambda)$ is a monotonically decreasing function of $\lambda$. Hence, the
bisection search method can be adopted to find the solution of $\lambda$ to
the equation $F(\lambda)=M-m_{K}$ if the original problem is feasible.
Denote the solution obtained by solving the relaxed problem (67) as
$\left\\{{\bar{m}}_{k},k\in{\cal K}\backslash K\right\\}$. In general,
$\left\\{{\bar{m}}_{k},k\in{\cal K}\backslash K\right\\}$ may violate the
integer requirement. Hence, we need to convert the continuous
$\left\\{{\bar{m}}_{k},k\in{\cal K}\backslash K\right\\}$ to integer
solutions, denoted as $\left\\{{m}_{k}^{\star},k\in{\cal K}\backslash
K\right\\}$. However, the integer conversion problem is a combinatorial
optimization problem, which is NP-hard to solve. In the following, we apply the
greedy search method to solve the integer conversion problem. Specifically, we
first initialize the integer solution as
$m_{k}^{\star}=\left\lfloor{{{\bar{m}}_{k}}}\right\rfloor,k\in{\cal
K}\backslash K$. Note that $g(m_{k})$ is a monotonically decreasing function
of $m_{k}$. Each time we allocate one blocklength to the device with the
largest decrement of $g(m_{k})$, i.e.,
${k^{*}}=\mathop{\arg}\max\nolimits_{k\in{\cal K}\backslash
K}\left\\{{g({m_{k}})-g({m_{k}}+1)}\right\\}$. The rationale behind this is
that based on (66b) more energy can be allocated to the $K$th device, thus
decreasing ${\varepsilon_{K}}$ most. For the $k^{*}$th device, we set
$m_{k^{*}}^{\star}=m_{k^{*}}^{\star}+1$. If $p_{K}^{\star}$ is smaller than
zero, set ${\varepsilon_{K}^{\star}}=1$. Repeat the above procedure until
$\sum\nolimits_{k\in{\cal K}\backslash K}m_{k}^{\star}=M-m_{K}$. Then, the
power allocated to the $K$th device can be recalculated as
$p_{K}=\frac{{{E_{{\rm{tot}}}}-\sum\limits_{k\in{\cal K}\backslash
K}{g(m_{k}^{\star})}}}{{m_{K}}}.$ (72)
Thus, we can calculate ${\varepsilon_{K}}$ based on current $m_{K}$ and
$p_{K}^{\star}$.
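The following self-contained sketch illustrates, for one fixed $m_{K}$, the dual decomposition with bisection on $\lambda$ followed by the greedy integer conversion described above. The channel gains, budgets, and $m_{K}$ are placeholder values, the initial lower bounds are the crude ones obtained from $g(m_{k})\leq E_{\rm tot}$, and the inner problem is solved with a bounded scalar minimizer instead of the explicit optimality condition (70); this is a sketch of the procedure, not the paper's exact routine.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Illustrative parameters (placeholders, assumed feasible).
D, M, E_tot, eps_max = 100, 150, 2.0, 1e-9
h = np.array([5e3, 2e3, 1e3])                       # normalized gains h_1 > ... > h_K
K, m_K = len(h), 40
budget = M - m_K                                    # symbols left for devices 1..K-1
A = norm.isf(eps_max) / np.log(2)
g = lambda m, k: m * (2 ** (D / m + A / np.sqrt(m)) - 1) / h[k]   # m_k * chi(m_k)
lb = [next(m for m in range(1, M) if g(m, k) <= E_tot) for k in range(K - 1)]

def inner(k, lam):                                  # argmin_{m >= lb_k} g(m,k) + lam*m
    return minimize_scalar(lambda m: g(m, k) + lam * m,
                           bounds=(lb[k], budget), method="bounded").x

F = lambda lam: sum(inner(k, lam) for k in range(K - 1))          # eq. (71)

lo, hi = 0.0, 1.0
while F(hi) > budget:                               # bracket the dual variable
    hi *= 2
for _ in range(60):                                 # bisection: F is decreasing in lambda
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) > budget else (lo, mid)

m_int = np.floor([inner(k, hi) for k in range(K - 1)]).astype(int)
while m_int.sum() < budget:                         # greedy rounding: give the next symbol
    k_star = np.argmax([g(m_int[k], k) - g(m_int[k] + 1, k) for k in range(K - 1)])
    m_int[k_star] += 1                              # to the device whose g drops the most
p_K = (E_tot - sum(g(m_int[k], k) for k in range(K - 1))) / m_K   # eq. (72)
print(m_int, p_K)
```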
## V Simulations Results
In this section, simulation results are provided to evaluate the performance
of the proposed algorithms. For simplicity, we assume that the controller, the
robot and the actuator are located on the same line, and the robot is moving
from the controller to the actuator, and the robot is served as the relay to
help the transmission of the actuator. The distance between the controller and
the actuator is set as $500$ m. Let us denote $d_{1}$, $d_{2}$ and $d_{3}$ as
the distances from the controller to the robot, the controller to the
actuator, and the robot to the actuator, respectively. The system bandwidth is
set as $B=1$ MHz. Hence, the downlink transmission delay duration is
calculated as $100\ \mu{\rm s}$, which meets a criterion of industrial standards [13]. The noise power spectral density is $-173$ dBm/Hz. The decoding (packet) error probability requirement for the robot is set as $10^{-9}$. The large-scale path loss model is $35.3+37.6\log_{10}(d)$ dB, where $d$ is the distance in meters [33]. The
simulation section is divided into two subsections. In the first subsection,
we assume that the channel gain is only determined by the path loss in order
to obtain the insights of all the schemes. In the second subsection, we
consider the network availability performance [17] taking into account small-
scale fading obeying the Rayleigh distribution.
### V-A Only Large-scale Fading
In Fig. 2, we first study the impact of the distance $d_{1}$ on the decoding error
probability. We observe that relay-assisted transmission outperforms the other
three schemes. It is interesting to see that when the robot moves from the
controller to the actuator, the decoding error probability achieved by the OMA
and NOMA schemes always decreases. The main reason is that the channel gain
from the controller to the robot decreases as the distance increases, so the energy and blocklength required for the robot to guarantee its error probability requirement increase. As a result, the available energy and blocklength for
the actuator will decrease. On the other hand, the reliability performances
achieved by the C-NOMA and relay-assisted schemes first increase and then
decrease when the robot moves in the line. This can be explained as follows.
When the robot moves from $50\ {\rm{m}}$ up to $150\ {\rm{m}}$ for the C-NOMA scheme and up to $200\ {\rm{m}}$ for the relay-assisted scheme, the channel gain from the robot to the actuator is still weak and constitutes the performance bottleneck that limits the
decoding error probability of the actuator. However, when the robot continues
to move towards the actuator, the transmission link from the controller to the
robot becomes the bottleneck link. Hence, the distance $d_{1}$ can be
optimized to further improve the system performance, which is left for future work. It is interesting to observe that the C-NOMA scheme performs
worse than the relay-assisted scheme, which is due to the larger feasible
region for the latter scheme as explained at the end of Section III.
In Fig. 3, we examine the impact of available blocklength $M$ on the decoding
error probability of the actuator. As expected, larger $M$ leads to much
better reliability performance in all schemes, and the decoding error
probability achieved by the relay scheme decreases from $1$ to $10^{-22}$ with
$M$ increasing from 50 to 100. It is interesting to find that when the
blocklength $M$ is equal to 50 and 60, the NOMA scheme has the best
reliability performance since the whole transmission blocklength can be used
for transmission in NOMA, while the whole blocklength should be divided into
two parts for the other schemes. Importantly, this provides insights for the
system designer that when the blocklength is very limited, as in URLLC, relaying may not be a good option since some blocklength needs to be reserved for the two-stage transmission. However, as $M$ further increases, the relay-assisted
transmission and the C-NOMA start to perform better than the NOMA scheme, and
the performance gain monotonically increases with $M$. However, the crossing point associated with the relay scheme is much lower than that of the C-NOMA scheme due to the smaller feasible region of the latter scheme. Furthermore, the curves of both schemes have the same slope but different offsets.
Figure 2: The decoding error probability of the actuator versus the distance
from the controller to the robot under four schemes, when $D=100$ bits,
$M=100$ symbols, $\tilde{E}_{\rm{tot}}=5\times 10^{-5}$ Joule.
Figure 3: The decoding error probability of the actuator versus the number of
symbols under four schemes, when $D=100$ bits, $\tilde{E}_{\rm{tot}}=5\times
10^{-5}$ Joule, $d_{1}=200$ m, $d_{2}=500$ m, and $d_{3}=300$ m.
In Fig. 4, we study the impact of the packet size $D$ on the decoding error
probability. As expected, a larger packet size leads to a higher error
probability for all schemes. The performance advantage of the relay-assisted
scheme over the OMA and NOMA schemes shrinks with the increase of $D$. It is
interesting to find that the curves associated with the OMA and NOMA schemes
have almost the same slope, while those of the relay-assisted transmission and
the C-NOMA scheme are similar. The main reason may be that the latter two
schemes apply relay to assist the transmission. Similar to the observations in
[24], the NOMA scheme achieves better performance than the OMA scheme. When $D=125$ bits, the C-NOMA scheme is even worse than the NOMA scheme since some blocklength must be reserved for the two-stage transmission in the former scheme.
In Fig. 5, we study the impact of the total energy on the decoding error
probability. It is observed that more available energy leads to better
reliability performance as expected. It is also seen that the relay-assisted
transmission has the best performance, and the performance gain increases with
the amount of available energy. It is shown that with sufficient energy,
transmission with the aid of relay (i.e., the relay-assisted transmission and
the C-NOMA transmission) is beneficial for the system performance. When
${\tilde{E}_{{\rm{tot}}}}=5\times 10^{-5}$ Joule, the decoding error
probability achieved by the relay-assisted transmission is extremely low.
Figure 4: The decoding error probability of the actuator versus packet size
under four schemes, when $M=100$ symbols, $\tilde{E}_{\rm{tot}}=5\times
10^{-5}$ Joule, $d_{1}=200$ m, $d_{2}=500$ m, and $d_{3}=300$ m.
Figure 5: The decoding error probability of the actuator versus the energy
constraint under four schemes, when $D=100$ bits, $M=100$ symbols, $d_{1}=200$
m, $d_{2}=500$ m, and $d_{3}=300$ m.
Figure 6: The decoding error probability of the actuator versus the number of
symbols for the OMA scheme and the general OMA scheme, when $D=100$ bits, and
$\tilde{E}_{\rm{tot}}=5\times 10^{-5}$ Joule.
Figure 7: The network availability percentage versus the packet size $D$ under
four schemes, when $\tilde{E}_{\rm{tot}}=5\times 10^{-4}$ Joule, $M=100$
symbols.
In Fig. 6, we study the performance comparison between the OMA scheme in
Section III and the general OMA in Section IV. Denote the number of devices as
$K$. If $K=2$, both the OMA scheme and the general OMA scheme are applicable.
However, for the case with $K>2$, only the general OMA scheme is applicable.
For the first $K-1$ devices, the distance of the $k$th device to the
controller is set as $50\times k$ m, while the distance of the last device to
the controller is set as $500$ m. The other parameters are the same as the
previous figures. It is interesting to find that the decoding error
probability achieved by the OMA scheme and the general OMA scheme is almost
the same when $K=2$, which implies that the general OMA can achieve almost the
globally optimal solution in this setup. However, the general OMA scheme has
lower complexity than the OMA scheme. It is also noted from this figure that
the decoding error probability achieved by the $K$th device increases when the
number of total devices increases. This can be explained as follows. When the
number of total devices increases, the total resources, such as energy and channel blocklength, allocated to the first $K-1$ devices will increase. Then, the remaining resources allocated to the $K$th device decrease, leading to a worse decoding error probability performance.
### V-B Network Availability Performance (1000 Channel Generations)
In this subsection, the small-scale fading channel is taken into consideration
in the channel gain, and we study the network availability performance, which
is defined as the ratio of the number of channel generations, where the
decoding error probability achieved by both devices is no larger than
$10^{-9}$, to the total number of channel generations [2]. In the following
simulations, the total number of channel generations is set as $1000$. The
distances are set as $d_{1}=200$ m, $d_{2}=500$ m, and $d_{3}=300$ m,
respectively.
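As a small illustration of this metric, the snippet below computes the network availability percentage from a hypothetical array of per-realization decoding error probabilities; the random placeholder data is unrelated to the actual simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 10.0 ** rng.uniform(-15, -6, size=(1000, 2))   # placeholder error probabilities
availability = np.mean(np.all(eps <= 1e-9, axis=1)) * 100
print(f"network availability: {availability:.1f}%")
```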
Fig. 7 illustrates the network availability performance versus the packet size
$D$ for all schemes. As expected, the network availability performance
achieved by all schemes decreases with $D$. The relay-assisted transmission
has the best network availability performance over the whole region of $D$. It
is observed that when $D=100$ bits, the network availability percentage of the
relay-assisted scheme and the C-NOMA scheme is almost the same, as high as
98%. However, the performance gap of these two schemes increases rapidly with
$D$ due to the shrinking feasible region of the C-NOMA scheme compared to the
relay-assisted transmission. However, the network availability performance for
both the OMA scheme and the NOMA scheme are lower than that of relay-assisted
scheme and C-NOMA scheme, and the network availability percentage is as low as
87% for NOMA scheme even when $D=100$ bits.
Fig. 8 shows the network availability performance versus the number of symbols
for four schemes. As expected, the network availability performance increases
with $M$ for all schemes. The NOMA scheme performs slightly better than the
relay scheme when $M=50$. It is interesting to note that the C-NOMA scheme has
the worst performance when $M=50$, which means that this scheme is not a good
option when there is stringent latency requirement. However, the network
availability percentage of the C-NOMA scheme increases rapidly with $M$, and finally converges to almost the same value as that of the relay-assisted scheme, which is equal to 97% when $M=100$. It is also noted that the OMA scheme converges to almost the same performance as that of the NOMA scheme, and remains low (86%
when $M=100$). It is interesting to find that the network availability
performance of all the schemes saturates in the high region of $M$, which
indicates that the number of available blocklength is not necessary to be very
large. This can be explained by using the result in [29]: The dispersion of
quasi-static fading channels converges to zero, which implies that the maximum
achievable data rate converges quickly to the outage capacity.
Figure 8: The network availability percentage versus the number of symbols $M$
under four schemes, when $\tilde{E}_{\rm{tot}}=5\times 10^{-4}$ Joule, $D=100$
bits.
Figure 9: The network availability percentage versus energy limit under four
schemes, when $D=100$ bits, $M=100$ symbols.
Finally, Fig. 9 depicts the network availability performance versus the energy
limit ${\tilde{E}_{{\rm{tot}}}}$ for all schemes. As expected, the network
performance achieved by all schemes increases with ${\tilde{E}_{{\rm{tot}}}}$.
It is also observed that relay-assisted scheme has the best network
availability performance. However, the performance gain over the C-NOMA scheme
decreases with ${\tilde{E}_{{\rm{tot}}}}$ and both curves coincide in the high
regime of ${\tilde{E}_{{\rm{tot}}}}$, where both schemes can achieve the
network availability percentage of 98%. On the other hand, both the NOMA
scheme and OMA scheme have very low network availability percentage, e.g., 86%
when ${\tilde{E}_{{\rm{tot}}}}=5\times{10^{-4}}$ Joule. The performance gap
between the relay-assisted scheme and NOMA is significant, up to 30%.
## VI Conclusions
This work studied the resource allocation of short packet transmission for
mission-critical IoT to achieve low latency and high reliability under
fundamental transmission schemes, which include OMA, NOMA, relay-assisted
transmission and C-NOMA transmission. We formulated an optimization problem to
minimize the decoding error probability for the actuator with lower channel
gain while guaranteeing that the robot achieved a low error probability
target. To facilitate the optimal design of the blocklength and power
allocation, we derived the tight bounds on the blocklength and the transmit
power for all schemes. Simulation results demonstrated that relay-assisted
transmission significantly outperforms the other schemes for most cases in
terms of packet error probability as well as network availability percentage
performance. It was also noted that the NOMA scheme performs well when the
delay requirement is very stringent. For the C-NOMA and relay-assisted
schemes, there exists one optimal transmission distance between the central
controller and the robot. We also observed that the general OMA scheme can
achieve almost the same performance as the OMA scheme, while the former scheme
has a lower complexity.
Concerning our future work, we will consider a more general scenario with more
than two devices for the other three schemes.
## Appendix A Proof of Lemma 1
We prove it by using contradiction. In the following, we first prove that
constraint (7b) holds with equality at the optimum solution. The second one
can be proved similarly.
Denote the optimal solution of Problem (7) as
${\bf{s}}^{\star}=\\{m_{1}^{\star},m_{2}^{\star},{p_{1}^{\star}},{p_{2}^{\star}}\\}$
and the corresponding ${\varepsilon}_{1}$ and ${\varepsilon}_{2}$ are denoted
as ${\varepsilon}_{1}^{\star}$ and $\varepsilon_{2}^{\star}$, respectively.
Suppose that ${\varepsilon}_{1}^{\star}$ is strictly smaller than
$\varepsilon_{1}^{\max}$, i.e.,
${\varepsilon}_{1}^{\star}<\varepsilon_{1}^{\max}$. In Proposition 1 of [24],
the author proved that
$Q\left({f\left({{{\gamma_{1}}},{m_{1}},D}\right)}\right)$ monotonically
decreases with $\gamma_{1}$. Then, we can construct a new solution
${\bf{s}}^{\\#}=\\{m_{1}^{\star},m_{2}^{\star},{p_{1}^{\\#}},{p_{2}^{\\#}}\\}$,
where ${p_{1}^{\\#}}={p_{1}^{\star}}-\Delta p$ and
${p_{2}^{\\#}}={p_{2}^{\star}}+\frac{{m_{1}^{\star}\Delta
p}}{{m_{2}^{\star}}}$ with $\Delta p>0$. It can be verified that the following
equation holds,
$\vspace{-0.1cm}m_{1}^{\star}p_{1}^{\\#}+m_{2}^{\star}p_{2}^{\\#}=m_{1}^{\star}p_{1}^{\star}+m_{2}^{\star}p_{2}^{\star}\leq{E_{{\rm{tot}}}}.$
(73)
Hence, the new constructed solution ${\bf{s}}^{\\#}$ still satisfies the
energy constraint (7c). In addition, we can always find a proper positive
$\Delta p$ such that the new ${\varepsilon}_{1}^{\\#}$ with the new solution
${\bf{s}}^{\\#}$ is equal to $\varepsilon_{1}^{\max}$, i.e.,
${\varepsilon}_{1}^{\\#}=\varepsilon_{1}^{\max}$, which satisfies constraint
(7b). Hence, the new constructed solution ${\bf{s}}^{\\#}$ is a feasible
solution of Problem (7). Since $p_{2}^{\\#}>p_{2}^{\star}$, we have
${\varepsilon}_{2}^{\\#}<{\varepsilon}_{2}^{\star}$. This contradicts the
assumption that ${\bf{s}}^{\star}$ is an optimal solution. The same method is
applicable to the proof of the second conclusion.
## Appendix B Proof of Theorem 1
The first and second derivative of function $\tilde{g}({m_{2}})$ w.r.t.
$m_{2}$ can be calculated as
$\displaystyle\vspace{-0.4cm}\tilde{g}^{\prime}({m_{2}})$ $\displaystyle=$
$\displaystyle\frac{1}{{2\ln
2}}\frac{1}{{\sqrt{m}_{2}}}\ln\left({1+\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}}\right)-\frac{1}{{\ln
2}}\frac{1}{{\sqrt{m}_{2}}}\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}+{E_{2}}{h_{2}}}}+\frac{D}{2}m_{2}^{-\frac{3}{2}}$
$\displaystyle\tilde{g}^{\prime\prime}({m_{2}})$ $\displaystyle=$
$\displaystyle\underbrace{-\frac{1}{4\ln
2}\frac{{\ln\left({1+\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}}\right)}}{{{m_{2}{\sqrt{m}_{2}}}}}+\frac{{{E_{2}}{h_{2}}}}{{{\ln
2{\sqrt{m}_{2}}{\left({{m_{2}}+{E_{2}}{h_{2}}}\right)}^{2}}}}}_{?}\underbrace{-\frac{3}{4}Dm_{2}^{-\frac{5}{2}}}_{<0}.$
Obviously, the last term of $\tilde{g}^{\prime\prime}({m_{2}})$ is negative, so we only need to prove that the sum of the first two terms is negative under the condition $\frac{{{E_{2}}{h_{2}}}}{{M-{m_{1}}}}\geq e-1$.
Since $m_{\rm{2}}^{{\rm{lb}}}\leq{m_{2}}\leq M-{m_{1}}$, we have
$\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}\geq\frac{{{E_{2}}{h_{2}}}}{{M-{m_{1}}}}\geq
e-1.$ (74)
Then, the following inequality follows:
$4\leq\left({\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}+2+\frac{{{m_{2}}}}{{{E_{2}}{h_{2}}}}}\right)\ln\left({1+\frac{{{E_{2}}{h_{2}}}}{{{m_{2}}}}}\right).$
(75)
Indeed, (74) gives $\ln\left(1+\frac{E_{2}h_{2}}{m_{2}}\right)\geq 1$, while $x+2+\frac{1}{x}\geq 4$ for every $x>0$ by the AM-GM inequality. By rearranging the terms of (75), we can prove that the sum of the first two terms is negative, which completes the proof.
## Appendix C Proof of Theorem 2
We prove this theorem by using the method of contradiction. Denote the optimal
$p_{1}$ of Problem (27) as $p_{1}^{\star}$ and the corresponding decoding
error probability is given by ${\bar{\varepsilon}}_{1}(p_{1}^{\star})$.
Suppose that ${\bar{\varepsilon}}_{1}(p_{1}^{\star})$ is strictly less than
$\varepsilon_{1}^{\max}$, i.e.,
${\bar{\varepsilon}}_{1}(p_{1}^{\star})<\varepsilon_{1}^{\max}$. Since
$\hat{\varepsilon}_{1}(p_{1}^{{\rm{lb}}})>{\varepsilon_{1}}(p_{1}^{{\rm{lb}}})$,
we have
${{\bar{\varepsilon}}_{1}}(p_{1}^{{\rm{lb}}})={\varepsilon_{1}}(p_{1}^{{\rm{lb}}})+(\hat{\varepsilon}_{1}(p_{1}^{{\rm{lb}}})-{\varepsilon_{1}}(p_{1}^{{\rm{lb}}}))\varepsilon_{2}^{1}(p_{1}^{{\rm{lb}}})=\varepsilon_{1}^{\max}+(\hat{\varepsilon}_{1}(p_{1}^{{\rm{lb}}})-{\varepsilon_{1}}(p_{1}^{{\rm{lb}}}))\varepsilon_{2}^{1}(p_{1}^{{\rm{lb}}})>\varepsilon_{1}^{\max},$
(76)
where ${\varepsilon_{1}}(p_{1}^{{\rm{lb}}})=\varepsilon_{1}^{\max}$ is used in
the second equality. As ${{\bar{\varepsilon}}_{1}}({p_{1}})$ is a continuous
function, there must exist a value $p_{1}^{\&}$ within the range of
$p_{1}^{{\rm{lb}}}<p_{1}^{\&}<p_{1}^{\star}$ such that
${{\bar{\varepsilon}}_{1}}({p_{1}^{\&}})=\varepsilon_{1}^{\max}$. On the other
hand, the objective value $\varepsilon_{2}(p_{1})$ is a monotonically
increasing function of $p_{1}$ since $p_{2}=E_{\rm{tot}}/m-p_{1}$. Hence, we
have $\varepsilon_{2}(p_{1}^{\&})<\varepsilon_{2}(p_{1}^{\star})$, which
contradicts the assumption that $p_{1}^{\star}$ is an optimal solution.
## Appendix D Proof of Theorem 3
We first prove its convexity. Define function
$J({m_{k}})\buildrel\Delta\over{=}{m_{k}}{2^{\frac{D}{{{m_{k}}}}+\frac{{{A_{k}}}}{{\sqrt{{m_{k}}}}}}}.$
(77)
Then, $g\left({{m_{k}}}\right)$ can be rewritten as
$g\left({{m_{k}}}\right)={{\left({J({m_{k}})-{m_{k}}}\right)}\mathord{\left/{\vphantom{{\left({J({m_{k}})-{m_{k}}}\right)}{{h_{k}}}}}\right.\kern-1.2pt}{{h_{k}}}}$.
Then, if $J({m_{k}})$ is convex, function $g\left({{m_{k}}}\right)$ is also
convex. Hence, in the following, we prove that $J({m_{k}})$ is a convex
function. Define function ${\tilde{J}({m_{k}})}$ as
$\tilde{J}({m_{k}})\buildrel\Delta\over{=}\ln\left({J({m_{k}})}\right)=\ln({m_{k}})+\left({\frac{D}{{{m_{k}}}}+\frac{{{A_{k}}}}{{\sqrt{{m_{k}}}}}}\right)\ln
2.$ (78)
The second-order derivative of $\tilde{J}({m_{k}})$ w.r.t. $m_{k}$ is given by
$\tilde{J}^{\prime\prime}({m_{k}})=\frac{1}{{m_{k}^{3}}}\left({2D\ln
2-{m_{k}}+\frac{3}{4}{A_{k}}\sqrt{{m_{k}}}\ln 2}\right).$ (79)
Note that the term in the parentheses of (79) is a quadratic function of ${\sqrt{{m_{k}}}}$. Hence, if the inequality in (60) is satisfied,
$\tilde{J}^{\prime\prime}({m_{k}})$ is always positive, which means
$\tilde{J}({m_{k}})$ is a convex function of $m_{k}$. Since
$J({m_{k}})={e^{\tilde{J}({m_{k}})}}$, according to the composition rule in
[31], we can show that $J({m_{k}})$ is also a convex function. Hence,
$g\left({{m_{k}}}\right)$ is a convex function of $m_{k}$ when the inequality
in (60) is satisfied.
Now, we proceed to prove that $g\left({{m_{k}}}\right)$ is a monotonically
decreasing function of $m_{k}$. The first-order derivative of
$g\left({{m_{k}}}\right)$ w.r.t. $m_{k}$ is given by
$g^{\prime}\left({{m_{k}}}\right)=\frac{1}{{{h_{k}}}}\left[{{2^{\frac{D}{{{m_{k}}}}+\frac{{{A_{k}}}}{{\sqrt{{m_{k}}}}}}}\left({-\frac{D}{{{m_{k}}}}\ln
2-\frac{{\ln 2}}{2}\frac{{{A_{k}}}}{{\sqrt{{m_{k}}}}}+1}\right)-1}\right].$
(80)
Since $g\left({{m_{k}}}\right)$ is a convex function, we have
$g^{\prime\prime}\left({{m_{k}}}\right)\geq 0$, which means
$g^{\prime}\left({{m_{k}}}\right)$ is a monotonically increasing function.
Hence, we have
$g^{\prime}\left({{m_{k}}}\right)<g^{\prime}\left(\infty\right)=0.$ (81)
Hence, $g\left({{m_{k}}}\right)$ is a monotonically decreasing function of
$m_{k}$ when the inequality in (60) holds.
## References
* [1] M. Shafi, A. F. Molisch, P. J. Smith, T. Haustein, P. Zhu, P. De Silva, F. Tufvesson, A. Benjebbour, and G. Wunder, “5G: A Tutorial Overview of Standards, Trials, Challenges, Deployment, and Practice,” _IEEE J. Sel. Areas Commun._ , vol. 35, no. 6, pp. 1201–1221, June 2017.
* [2] P. Schulz, M. Matthe, H. Klessig, M. Simsek, G. Fettweis, J. Ansari, S. A. Ashraf, B. Almeroth, J. Voigt, and I. Riedel, “Latency Critical IoT Applications in 5G: Perspective on the Design of Radio Interface and Network Architecture,” _IEEE Commun. Mag._ , vol. 55, no. 2, pp. 70–78, February 2017.
* [3] M. Bennis, M. Debbah, and H. V. Poor, “Ultrareliable and low-latency wireless communication: Tail, risk, and scale,” _Proceedings of the IEEE_ , vol. 106, no. 10, pp. 1834–1853, 2018.
* [4] P. Popovski, J. J. Nielsen, C. Stefanovic, E. de Carvalho, E. G. Ström, K. F. Trillingsgaard, A. Bana, D. Kim, R. Kotaba, and J. Park, “Ultra-reliable low-latency communication (URLLC): principles and building blocks,” _arXiv preprint arXiv:1708.07862_ , 2017.
* [5] C. She, C. Yang, and T. Q. S. Quek, “Radio Resource Management for Ultra-Reliable and Low-Latency Communications,” _IEEE Commun. Mag._ , vol. 55, no. 6, pp. 72–78, 2017.
* [6] J. J. Nielsen, R. Liu, and P. Popovski, “Ultra-Reliable Low Latency Communication Using Interface Diversity,” _IEEE Trans. Commun._ , vol. 66, no. 3, pp. 1322–1334, March 2018.
* [7] M. Mozaffari, W. Saad, M. Bennis, Y.-H. Nam, and M. Debbah, “A tutorial on UAVs for wireless networks: Applications, challenges, and open problems,” _IEEE Commun. Surveys & Tutorials_, 2019.
* [8] M. Simsek, A. Aijaz, M. Dohler, J. Sachs, and G. Fettweis, “5G-enabled tactile internet,” _IEEE J. Sel. Areas Commun._ , vol. 34, no. 3, pp. 460–473, 2016.
* [9] H. Ren, N. Liu, C. Pan, M. Elkashlan, A. Nallanathan, X. You, and L. Hanzo, “Power-and rate-adaptation improves the effective capacity of C-RAN for Nakagami-$m$ fading channels,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 11, pp. 10 841–10 855, 2018.
* [10] ——, “Low-latency C-RAN: An next-generation wireless approach,” _IEEE Veh. Technol. Mag._ , vol. 13, no. 2, pp. 48–56, 2018.
* [11] A. Varghese and D. Tandur, “Wireless requirements and challenges in Industry 4.0,” in _2014 IC3I_ , Nov 2014, pp. 634–638.
* [12] L. Liu and W. Yu, “A D2D-Based Protocol for Ultra-Reliable Wireless Communications for Industrial Automation,” _IEEE Trans. Wireless Commun._ , vol. 17, no. 8, pp. 5045–5058, Aug 2018.
* [13] O. N. Yilmaz, Y.-P. E. Wang, N. A. Johansson, N. Brahmi, S. A. Ashraf, and J. Sachs, “Analysis of ultra-reliable and low-latency 5G communication for a factory automation use case,” in _2015 IEEE ICC_. IEEE, 2015, pp. 1190–1195.
* [14] G. Durisi, T. Koch, and P. Popovski, “Toward Massive, Ultrareliable, and Low-Latency Wireless Communication With Short Packets,” _Proc. IEEE_ , vol. 104, no. 9, pp. 1711–1726, Sept 2016.
* [15] Y. Polyanskiy, H. V. Poor, and S. Verdu, “Channel Coding Rate in the Finite Blocklength Regime,” _IEEE Trans. Inf. Theory_ , vol. 56, no. 5, pp. 2307–2359, May 2010.
* [16] K. F. Trillingsgaard and P. Popovski, “Downlink Transmission of Short Packets: Framing and Control Information Revisited,” _IEEE Trans. Commun._ , vol. 65, no. 5, pp. 2048–2061, May 2017.
* [17] C. She, Z. Chen, C. Yang, T. Q. Quek, Y. Li, and B. Vucetic, “Improving network availability of ultra-reliable and low-latency communications with multi-connectivity,” _IEEE Trans. Commun._ , vol. 66, no. 11, pp. 5482–5496, 2018.
* [18] J. Östman, G. Durisi, E. G. Ström, M. C. Coşkun, and G. Liva, “Short packets over block-memoryless fading channels: Pilot-assisted or noncoherent transmission?” _IEEE Trans. Commun._ , vol. 67, no. 2, pp. 1521–1536, 2019.
* [19] Y. Hu, J. Gross, and A. Schmeink, “On the Capacity of Relaying With Finite Blocklength,” _IEEE Trans. Veh. Technol._ , vol. 65, no. 3, pp. 1790–1794, March 2016.
* [20] Y. Hu, A. Schmeink, and J. Gross, “Blocklength-Limited Performance of Relaying Under Quasi-Static Rayleigh Channels,” _IEEE Trans. Wireless Commun._ , vol. 15, no. 7, pp. 4548–4558, July 2016.
* [21] Y. Gu, H. Chen, Y. Li, L. Song, and B. Vucetic, “Short-Packet Two-Way Amplify-and-Forward Relaying,” _IEEE Sig. Proc. Lett._ , vol. 25, no. 2, pp. 263–267, Feb 2018.
* [22] O. L. A. López, R. D. Souza, H. Alves, and E. M. G. Fernández, “Ultra reliable short message relaying with wireless power transfer,” in _2017 ICC_ , May 2017, pp. 1–6.
* [23] Y. Hu, M. C. Gursoy, and A. Schmeink, “Efficient transmission schemes for low-latency networks: NOMA vs. relaying,” in _2017 IEEE PIMRC_ , Oct 2017, pp. 1–6.
* [24] X. Sun, S. Yan, N. Yang, Z. Ding, C. Shen, and Z. Zhong, “Short-Packet Downlink Transmission With Non-Orthogonal Multiple Access,” _IEEE Trans. Wireless Commun._ , vol. 17, no. 7, pp. 4550–4564, July 2018.
* [25] C. She, C. Yang, and T. Q. S. Quek, “Joint Uplink and Downlink Resource Configuration for Ultra-Reliable and Low-Latency Communications,” _IEEE Trans. Commun._ , vol. 66, no. 5, pp. 2266–2280, May 2018.
* [26] Y. Hu, Y. Zhu, M. C. Gursoy, and A. Schmeink, “SWIPT-enabled relaying in IoT networks operating with finite blocklength codes,” _IEEE J. Sel. Areas Commun._ , vol. 37, no. 1, pp. 74–88, 2019.
* [27] C. Pan, H. Ren, Y. Deng, M. Elkashlan, and A. Nallanathan, “Joint blocklength and location optimization for URLLC-enabled UAV relay systems,” _IEEE Commun. Lett._ , vol. 23, no. 3, pp. 498–501, March 2019\.
* [28] C. Shannon, “A Mathematical Theory of Communication,” _The Bell System Technical Journal_ , vol. 27, no. 1, pp. 379–423, July 1948.
* [29] W. Yang, G. Durisi, T. Koch, and Y. Polyanskiy, “Quasi-static multiple-antenna fading channels at finite blocklength,” _IEEE Trans. Inf. Theory_ , vol. 60, no. 7, pp. 4232–4265, July 2014.
* [30] Y. Liu, Z. Ding, M. Elkashlan, and H. V. Poor, “Cooperative Non-orthogonal Multiple Access With Simultaneous Wireless Information and Power Transfer,” _IEEE J. Sel. Areas Commun._ , vol. 34, no. 4, pp. 938–953, April 2016.
* [31] S. Boyd and L. Vandenberghe, “Convex optimization,” _Cambridge, U.K.: Cambridge Univ. Press_ , 2004.
* [32] C. Pan, H. Ren, K. Wang, M. Elkashlan, A. Nallanathan, J. Wang, and L. Hanzo, “Intelligent reflecting surface enhanced MIMO broadcasting for simultaneous wireless information and power transfer,” _arXiv preprint arXiv:1908.04863_ , 2019.
* [33] E. U. T. R. Access, “Further advancements for E-UTRA physical layer aspects,” _3GPP TR 36.814, Tech. Rep._ , 2010.
|
# The Regularity Problem in Domains with Lower Dimensional Boundaries
Zanbing Dai Zanbing Dai. School of Mathematics, University of Minnesota,
Minneapolis, MN 55455, USA<EMAIL_ADDRESS>, Joseph Feneuil Joseph
Feneuil. Mathematical Sciences Institute, Australian National University,
Acton, ACT, Australia<EMAIL_ADDRESS>and Svitlana Mayboroda
Svitlana Mayboroda. School of Mathematics, University of Minnesota,
Minneapolis, MN 55455, USA<EMAIL_ADDRESS>
###### Abstract.
In the present paper we establish the solvability of the Regularity boundary
value problem in domains with lower dimensional boundaries (flat and
Lipschitz) for operators whose coefficients exhibit small oscillations
analogous to the Dahlberg-Kenig-Pipher condition.
The proof follows the classical strategy of showing bounds on the square
function and the non-tangential maximal function. The key novelty and
difficulty of this setting is the presence of multiple non-tangential
derivatives. To solve it, we consider a cylindrical system of derivatives and
establish new estimates on the “angular derivatives”.
S. Mayboroda was partly supported by the NSF RAISE-TAQS grant DMS-1839077 and
the Simons foundation grant 563916, SM. J. Feneuil was partially supported by
the Simons foundation grant 601941, GD and by the European Research Council
via the project ERC-2019-StG 853404 VAREG
###### Contents
1 Introduction
 1.1 Remarks on the proof of Theorem 1.1
  1.1.1 Cylindrical Coordinate Derivatives
  1.1.2 Commutators
  1.1.3 Local bounds
  1.1.4 Approximation Results
  1.1.5 Self-improvement
2 Equation in Cylindrical Coordinates
3 $N\leq S$ Local Estimates, Part 1: Integration by Parts
4 $N\leq S$ Local Estimates, Part 2: the Good Lambda Argument
5 $S\leq N$ Local Estimates
6 Global Estimates for Energy Solutions
7 Approximation by operators with Lipschitz coefficients
8 Proof of Theorem 1.5: The Regularity Problem for a Reduced Class of Operators
9 Proof of Theorem 1.1
10 A complement of a Lipschitz graph
## 1\. Introduction
There are three principal types of boundary value problems for elliptic
operators with rough ($L^{p}$) data: Dirichlet, Neumann, and Regularity. The
Dirichlet problem consists of establishing the existence and uniqueness of
solutions with a given trace on the boundary, the Neumann problem corresponds
to prescribing the flux, that is, the normal derivative on the boundary,
again, in $L^{p}$. The Regularity problem postulates that the tangential
derivative of the trace of the solution is known, once again, in some $L^{p}$
space. As such, it can be seen as a companion of the Neumann problem in which
the tangential rather than the normal derivative of the solution is given, or
as a version of the Dirichlet problem corresponding to the smoother boundary
data.
The Dirichlet problem has received a lot of attention in the past 30-40 years
and we will not be able to even briefly mention all the references in the
subject. Its well-posedness was established, in particular, for
$t$-independent operators on all Lipschitz domains [JK81, KKPT00, HKMP15], for
the Laplacian on all uniformly rectifiable sets with mild topological
conditions [Dah77, HM14, Azz21, AHM+20], which was then extended to the sharp
class of the so-called Dahlberg-Kenig-Pipher (DKP) operators [KP01, DPP07,
HMM+21] and for their analogues in domains with lower-dimensional boundaries
[DFM19, FMZ21].
The Neumann and Regularity problems in $L^{p}$ proved to be much more
challenging. In particular, concerning the latter, up until recently the only
known results pertained to either $t$-independent scenario [KP93] or a “small
constant” DKP case [DPR17]. The breakthrough article [MT21] by Mourgoglou and
Tolsa was the first one to consider the regularity problem on domains beyond
Lipschitz graphs: they proved the solvability of the regularity problem for
the Laplacian on domains with uniformly rectifiable boundaries and some mild
topology. Just in the past few months the first “big constant” DKP result was
announced, by two different arguments, by Dindoš, Hofmann, Pipher [DHP22] in
the half plane and Lipschitz domains, and simultaneously, by Mourgoglou,
Poggi, Tolsa [MPT22] on domains with uniformly rectifiable boundaries.
The present paper is devoted to the setting of domains with lower dimensional
boundaries. It establishes the solvability of the regularity problem in the
complement of $\mathbb{R}^{d}$, or more generally, of a Lipschitz graph, for
an appropriate analogue of the “small constant” DKP coefficients. The higher
co-dimensional setting presented numerous new challenges, particularly, due to
the presence of “torsion”, the derivatives which roughly speaking turn the
solution around a thin boundary which are not present in the traditional
$(n-1)$-dimensional case. Respectively, we had to invent new structural
properties of the operators which on one hand, are amenable to the analysis in
desired geometric scenarios, and on the other, still allow for a control of
the second derivatives of a solution in a square function. All this will be
discussed in detail below.
Let us also mention that in the setting of the domains with lower dimensional
boundaries we are bound to work with degenerate elliptic operators, whose
coefficients grow as powers of the distance to the boundary. This provides a
curious new motivation point. Our operators, as explained below, essentially
look like $-{\rm div}\operatorname{dist}(\cdot,\partial\Omega)^{\beta}\nabla$
with a suitable power $\beta$ depending on the dimension of the set and of the
boundary. This is reminiscent of the Caffarelli-Silvestre extension operator
which allows one to view the fractional Laplacian $(-\Delta)^{\gamma}$,
$\gamma\in(0,1)$, on $\mathbb{R}^{d}$ as a Dirichlet-to-Neumann map for the
operator $-{\rm div}\,{\rm dist}(\cdot,\mathbb{R}^{d})^{\beta}\nabla$ on
$\mathbb{R}^{d+1}$, where $\beta=1-2\gamma$ (see [CS07] and also an extension
to higher powers by A. Chang and co-authors in [CY17]). Respectively, the
mapping properties of the Dirichlet-to-Neumann map become the mapping
properties of the fractional Laplacian. By the same token, one could view the
Dirichlet-to-Neumann map of our operators as an embodiment of a new concept of
differentiation or integration on rough lower-dimensional sets, and in this
vein the appropriate estimates correspond exactly to the solution of the
Regularity and Neumann problems. This paper is the first step in the
direction.
Let us now turn to definitions and statements of the main results. Let $0<d<n$
be two integers. If $d=n-1$, the domain $\Omega$ is the half-space
$\mathbb{R}^{n}_{+}:=\\{(x,t)\in\mathbb{R}^{d}\times(0,\infty)\\}$ and if
$d<n-1$, then
$\Omega:=\mathbb{R}^{n}\setminus\mathbb{R}^{d}:=\\{(x,t)\in\mathbb{R}^{d}\times(\mathbb{R}^{n-d}\setminus\\{0\\})\\}$.
In the rest of the article, $t$ will be seen as a horizontal vector, and
thence $t^{T}$ will correspond to the vertical vector. It is technically
simpler and more transparent to work in $\mathbb{R}^{n}_{+}$ and
$\mathbb{R}^{n}\setminus\mathbb{R}^{d}$ rather than a more general graph
domain, but the goal is to treat the class of coefficients which would
automatically cover the setting of Lipschitz domains via a change of variables
– see Corollary 1.2.
We take an operator $L:=-{\operatorname{div}}|t|^{d+1-n}\mathcal{A}\nabla$ and
the first condition that we impose is of course the ellipticity and
boundedness of $\mathcal{A}$: there exists $\lambda>0$ such that for
$\xi,\zeta\in\mathbb{R}^{n}$, and $(x,t)\in\Omega$,
(1.1) $\displaystyle\lambda|\xi|^{2}\leq\mathcal{A}(x,t)\xi\cdot\xi\ \
\text{and}\ \ |\mathcal{A}(x,t)\xi\cdot\zeta|\leq\lambda^{-1}|\xi||\zeta|.$
We write (1.1)$_{\lambda}$ when we want to refer to the constant in (1.1). Then, we say
that $u\in W^{1,2}_{loc}(\Omega)$ is a weak solution to $Lu=0$ if for any
$\varphi\in C^{\infty}_{0}(\Omega)$, we have
(1.2) $\displaystyle\iint_{\Omega}\mathcal{A}\nabla
u\cdot\nabla\varphi\frac{dt}{|t|^{n-d-1}}dx=0.$
When $d=n-1$ these are the classical elliptic operators and when $d<n-1$ the
weight given by the power of distance to the boundary is necessary and
natural: if the coefficients are not degenerate, the solutions do not see the
lower dimensional sets. For instance, a harmonic function in
$\mathbb{R}^{n}\setminus\mathbb{R}^{d}$ is the same as a harmonic function in
$\mathbb{R}^{n}$ for sufficiently small $d$. All this is discussed in detail
in [DFM21b] where we develop the elliptic theory for the operators at hand. In
particular, in the aforementioned work we construct the elliptic measure
$\omega_{L}^{X}$ associated to $L$ so that for any continuous and compactly
supported boundary data $g$, the function
(1.3) $u(X):=\int_{\mathbb{R}^{d}}g(y)\,d\omega^{X}(y)$
is a weak solution to $Lu=0$, which continuously extends to
$\overline{\Omega}$ by taking the values $u=g$ on
$\partial\Omega=\mathbb{R}^{d}$.
With this at hand, we turn to the definition of the Regularity problem. The
averaged non-tangential maximal function $\widetilde{N}$ is defined for any
function $u\in L^{2}_{loc}(\Omega)$ as
(1.4) $\displaystyle\widetilde{N}(u)(x)=\sup_{(z,r)\in\Gamma(x)}u_{W}(z,r),$
where $\Gamma(x)$ is the cone $\\{(z,r)\in\mathbb{R}^{d+1}_{+},\,|z-x|<r\\}$,
and $u_{W}(z,r)$ is the $L^{2}$-average
$\displaystyle u_{W}(z,r):=\bigg{(}\frac{1}{|W(z,r)|}\iint_{W(z,r)}|u(y,s)|^{2}\,dy\,ds\bigg{)}^{\frac{1}{2}},$
over the Whitney box
(1.5) $\displaystyle W(z,r):=\\{(y,s)\in\Omega,\,|y-z|<r/2,\,r/2\leq|s|\leq
2r\\}.$
Observe that when $d<n-1$, a Whitney cube is a bounded, annular region, so in
particular, the higher co-dimensional Whitney cubes $W(z,r)$ are invariant
under rotation around the boundary. We say that the Regularity problem is
solvable in $\mathbf{L^{p}}$ if for any $g\in C^{\infty}_{0}(\mathbb{R}^{d})$,
the solution given by (1.3) verifies
(1.6) $\|\widetilde{N}(\nabla u)\|_{L^{p}(\mathbb{R}^{d})}\leq C\|\nabla
g\|_{L^{p}(\mathbb{R}^{d})}$
with a constant $C>0$ that is independent of $g$. If the Regularity problem is
solvable in $L^{p}$, then we deduce by density that for any $g\in
L^{1}_{loc}(\mathbb{R}^{d})$ such that $\|\nabla
g\|_{L^{p}(\mathbb{R}^{d})}<\infty$, there exists a solution to $Lu=0$ subject
to (1.6) which converges non-tangentially to $g$. The proof of this fact is
non-trivial, but classical. See for instance Theorem 3.2 of [KP93] for the
proof of the non-tangential convergence from the bound (1.6), and since the
space $\\{g\in L^{1}_{loc}(\mathbb{R}^{d}),\,\|\nabla
g\|_{L^{p}(\mathbb{R}^{d})}<\infty\\}$ is homogeneous and only equipped with a
semi-norm, we need density results analogous to Lemma 5.7, Remark 5.10, Lemma
5.11 in [DFM21b].
Going further, we say that a function $f$ satisfies the Carleson measure
condition if $\sup_{W(z,s)}|f|^{2}\frac{dsdz}{s}$ is a Carleson measure on
$\Omega$, that is, there exists a constant $M\geq 0$ such that
(1.7) $\sup_{x\in\mathbb{R}^{d},r>0}\frac{1}{|B(x,r)|}\int_{z\in B(x,r)}\int_{0}^{r}\sup_{W(z,s)}|f|^{2}\,\frac{ds\,dz}{s}\leq M.$
We write $f\in CM$, or $f\in CM(M)$ when we want to refer to the constant in
(1.7). It is fairly easy to check that $f\in L^{\infty}(\Omega)$, and we even
have
(1.8) $\|f\|_{L^{\infty}(\Omega)}\leq CM^{1/2}\qquad\text{ whenever }f\in
CM(M),$
with a constant that depends only on $d$ and $n$.
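For the reader's convenience, here is one elementary way to see (1.8) (we only sketch the argument, with constants depending on $d$ and $n$). Fix $(y,s)\in\Omega$ and note that $(y,s)\in W(z,\sigma)$ whenever $|z-y|<|s|/4$ and $|s|\leq\sigma\leq 2|s|$, a set of pairs $(z,\sigma)$ whose measure with respect to $\frac{d\sigma\,dz}{\sigma}$ is comparable to $|s|^{d}$. Hence, averaging over these pairs and then applying (1.7) with $x=y$ and $r=4|s|$,
$|f(y,s)|^{2}\lesssim\frac{1}{|s|^{d}}\int_{|z-y|<|s|/4}\int_{|s|}^{2|s|}\Big(\sup_{W(z,\sigma)}|f|^{2}\Big)\frac{d\sigma\,dz}{\sigma}\lesssim\frac{|B(y,4|s|)|}{|s|^{d}}\,M\lesssim M.$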
The main result of the present paper is as follows.
###### Theorem 1.1.
Let $0\leq d<n$ be two integers. For any $\lambda>0$, there exists a small
parameter $\kappa>0$ and a large constant $C$, both depending only on
$\lambda$, $d$, and $n$, with the following property. Consider an elliptic
operator $L:=-\operatorname{div}[|t|^{d+1-n}\mathcal{A}\nabla]$ that satisfies
(1.1)$_{\lambda}$ and such that $\mathcal{A}$ can be decomposed as
$\mathcal{A}=\mathcal{B}+\mathcal{C}$, $\mathcal{B}$ is a block matrix
(1.9) $\mathcal{B}=\begin{pmatrix}B_{1}&B_{2}\frac{t}{|t|}\\\
\frac{t^{T}}{|t|}B_{3}&b_{4}I\end{pmatrix},$
where $B_{1}$, $B_{2}$, $B_{3}$, and $b_{4}$ are respectively a $d\times d$ matrix, a $d$-dimensional vertical vector (since $t$ is a horizontal vector, $B_{2}\frac{t}{|t|}$ is seen as a matrix product giving a $d\times(n-d)$ matrix), a $d$-dimensional horizontal vector (that is, $\frac{t^{T}}{|t|}B_{3}$ is a $(n-d)\times d$ matrix), a scalar function, and
(1.10) $|t||\nabla B_{1}|+|t||\nabla B_{2}|+|t||\nabla B_{3}|+|t||\nabla
b_{4}|+|\mathcal{C}|\in CM(\kappa).$
Then the Regularity problem is solvable in $L^{2}(\mathbb{R}^{d})$, that is
(1.11) $\|\widetilde{N}(\nabla u_{g})\|_{L^{2}(\mathbb{R}^{d})}\leq C\|\nabla
g\|_{L^{2}(\mathbb{R}^{d})}$
whenever $g\in C^{\infty}_{0}(\mathbb{R}^{d})$ and $u_{g}$ is a solution to
$Lu=0$ given by (1.3).
Note that when $d=n-1$ our result corresponds to the main result in [DPR17] by
Dindoš, Pipher, and Rule. In this case, the coefficients of $\mathcal{B}$
satisfy the so-called Dahlberg-Kenig-Pipher (DKP) condition with a small
constant and the addition of $\mathcal{C}$ is made possible by the
perturbation results [KP95, DFM21]. The DKP condition is sharp, that is, its
failure could result in the failure of solvability of the Dirichlet problem
[FKP91] and hence, a failure of solvability of the Regularity problem by
[FKP91].
In the setting of the domains with lower dimensional boundaries the special
structure (1.9) is new. It is dictated by the aforementioned need to control
the “torsion” of the coefficients, that is, not only to control the
oscillations of the coefficients in the transversal direction to the boundary,
but also to make sure that they are well-behaved, in a very peculiar sense, in
the angular coordinate in cylindrical coordinates naturally induced by
$\mathbb{R}^{n}\setminus\mathbb{R}^{d}$. Roughly speaking, we want to have an
almost isometry to some constant coefficient matrix as far as the $t$
direction is concerned.
One good test for whether our class of coefficients is sound structure-wise is
whether it allows for a change of variables that would yield the results on
rougher, e.g., Lipschitz, domains. After all, this was an initial motivation
for the DKP Carleson conditions on the coefficients in half-space back when
Dahlberg suggested them. To this end, consider $d<n-1$ and take a Lipschitz
function $\varphi:\,\mathbb{R}^{d}\mapsto\mathbb{R}^{n-d}$. Let
$\Omega_{\varphi}:=\\{(x,t)\in\mathbb{R}^{n},\,t\neq\varphi(x)\\}$. We set
$\sigma:=\mathcal{H}^{d}|_{\partial\Omega_{\varphi}}$ to be the
$d$-dimensional Hausdorff measure on the graph of $\varphi$, which is the
boundary of $\Omega_{\varphi}$, and we construct the “smooth distance”
$D_{\varphi}(X):=\left(\int_{\partial\Omega_{\varphi}}|X-y|^{-d-\alpha}\,d\sigma(y)\right)^{-\frac{1}{\alpha}},\quad\alpha>0.$
The quantity $D_{\varphi}(X)$ is equivalent to
$\operatorname{dist}(X,\partial\Omega_{\varphi})$, see Lemma 5.1 in [DFM19],
so the operator $L_{\varphi}:=-\operatorname{div}[D_{\varphi}^{d+1-n}\nabla]$
falls under the elliptic theory developed in [DFM21b]. Moreover, it was proved
that the Dirichlet problem for such an operator $L_{\varphi}$ is solvable in
$L^{p}$ in a complement of a small Lipschitz graph [FMZ21] and much more
generally, in a complement of a uniformly rectifiable set [DM20, Fen20]. It is
also explained in the aforementioned works why $D_{\varphi}$ as opposed to the
Euclidean distance has to be used in this context. Using the results from
[FMZ21], one can prove solvability of the Dirichlet problem in $L^{2}$. Here
we establish the solvability of the Regularity problem.
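To give a concrete feel for $D_{\varphi}$, the following small Python sketch discretizes the integral defining $D_{\varphi}$ for a hypothetical small Lipschitz graph with $d=1$, $n=3$, and $\alpha=1$, and compares it with the Euclidean distance to the graph; the specific $\varphi$, grid, truncation, and evaluation point are arbitrary illustrative choices.

```python
import numpy as np

d, alpha = 1, 1.0
phi = lambda y: 0.1 * np.abs(y)                     # small Lipschitz graph R -> R^2 (2nd comp. 0)

y = np.linspace(-50, 50, 20001)                     # truncated discretization of the graph
dy = y[1] - y[0]
graph = np.stack([y, phi(y), np.zeros_like(y)], axis=1)
d_sigma = np.sqrt(1 + 0.1 ** 2) * dy                # arc-length element of the graph

X = np.array([0.3, 2.0, 1.0])                       # a point of Omega_phi
dist = np.linalg.norm(X - graph, axis=1)
D_phi = (np.sum(dist ** (-d - alpha)) * d_sigma) ** (-1.0 / alpha)
print(D_phi, dist.min())                            # D_phi and dist(X, graph) are comparable
```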
###### Corollary 1.2.
Let $\varphi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n-d}$ be a Lipschitz
function, and set $\Omega_{\varphi}$ and $L_{\varphi}$ as above. There exists
$\kappa>0$ such that if
$\|\nabla\varphi\|_{L^{\infty}(\mathbb{R}^{d})}\leq\kappa$, then the
Regularity problem is solvable in $L^{2}(\partial\Omega_{\varphi})$.
The reader can consult Section 10 for the proof and the detailed definitions.
### 1.1. Remarks on the proof of Theorem 1.1.
At this point let us return to the Main result, Theorem 1.1, and discuss some
highlights of the proof along with the particular challenges of the higher co-
dimensional setting.
Similarly to the strategy used in codimension 1, we want to prove that for any
$g$ smooth enough and $u_{g}$ constructed as in (1.3), we have
(1.12) $\|S(\nabla u_{g})\|_{L^{2}(\mathbb{R}^{d})}\leq
C\|g\|_{L^{2}(\mathbb{R}^{d})}+C\kappa\|\widetilde{N}(\nabla
u_{g})\|_{L^{2}(\mathbb{R}^{d})}$
and
(1.13) $\|\tilde{N}(\nabla u_{g})\|_{L^{2}(\mathbb{R}^{d})}\leq C\|S(\nabla
u_{g})\|_{L^{2}(\mathbb{R}^{d})},$
for some $C>0$. Here, $S$ is a square function that will be defined in (1.19)
below. We can see that when $\kappa$ is small the two estimates above would
formally imply the bound (1.11). They are the crux of the matter and the core
of the argument. However, even in this passage there are considerable
additional difficulties. Nothing guarantees that $\|\widetilde{N}(\nabla
u)\|_{L^{2}(\mathbb{R}^{d})}$ is finite, and if we do not know a priori
whether $\|\widetilde{N}(\nabla u)\|_{L^{2}(\mathbb{R}^{d})}$ is finite, we
cannot use (1.12)–(1.13) to deduce that $\|\tilde{N}(\nabla
u)\|_{L^{2}(\mathbb{R}^{d})}\leq C\|g\|_{L^{2}(\mathbb{R}^{d})}$. For that
reason we cannot simply concentrate on (1.12)–(1.13), but rather have to prove
local versions of those estimates, where all the terms are guaranteed to be
finite, and we then carefully take a limit to directly establish
(1.14) $\|\tilde{N}(\nabla u)\|_{L^{2}(\mathbb{R}^{d})}\leq C\|\nabla
g\|_{L^{2}(\mathbb{R}^{d})}<+\infty.$
Unfortunately, taking the limit is already far from trivial, because the term
$\|\nabla g\|_{2}$ is obtained roughly by taking the limit of $\|\nabla
u(x,\epsilon)\|_{2}$, and to ensure convergence, we had to assume that
$\|\nabla\mathcal{A}\|_{\infty}<+\infty$ as in [KP93], and then obtain (1.14)
for all $\mathcal{A}$ by interchanging two limits. In the classical case of
codimension 1, the situation is considerably easier because more tools are
available to us (for instance layer potential representations).
The principal issue, though, remains the estimates on the quantity $S(\nabla
u)$, (1.19). Clearly, it involves two derivatives, and in principle we do not
have enough regularity of the coefficients ($\mathcal{C}$ is not necessarily
continuous) to be able to directly bound the second derivatives of the
solution, not to mention the actual refined estimates that we are targeting.
This led us to a separate paper devoted to the Carleson perturbation theory
for the Regularity problem [DFM21] (cf. [KP95] when $d=n-1$). However, even
with that and even for $\mathcal{A}=\mathcal{B}$ we could not follow the route
paved for $d=n-1$ in [DPR17]. We finally realized that these arguments are not
well adapted to the cylindrical structure of our space and that additional,
quite involved, structural considerations are necessary. Let us try to give
some ideas here.
#### 1.1.1. Cylindrical Coordinate Derivatives
As we mentioned, we shall use $S(\nabla u)$ as an intermediate quantity in our
computations, and so we will need to estimate second derivatives. However,
taking the second derivatives in the cartesian system of coordinates will not
be adapted to our context, and we prefer to consider “cylindrical derivatives”
defined below.
We notice that there are three different types of directions. One is the
tangential direction, which goes along the boundary
$\mathbb{R}^{d}\times\\{t=0\\}$. The second one is the angular direction,
which rotates around the boundary, and the last one is the radial direction
that moves away from the boundary. We write
$\nabla_{x}=(\partial_{1},\partial_{2},...,\partial_{d})$ and
$\nabla_{t}=(\partial_{d+1},\partial_{d+2},...,\partial_{n})$, where
$\partial_{i}=\vec{e}_{i}\cdot\nabla$ and $\vec{e}_{i}\in\mathbb{R}^{n}$
denotes the vector with a $1$ in the $i$-th coordinate and $0$’s elsewhere.
###### Definition 1.3.
The radial directional derivative $\partial_{r}$ is defined as:
(1.15)
$\displaystyle\partial_{r}:=\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}}{|t|}\partial_{\alpha}.$
For each $d+1\leq i,j\leq n$, the directional derivative
$\partial_{\varphi_{ij}}$ is defined as:
(1.16)
$\displaystyle\partial_{\varphi_{ij}}:=-\frac{t_{i}}{|t|}\partial_{j}+\frac{t_{j}}{|t|}\partial_{i}.$
The important property of $\partial_{\varphi}$ is that
(1.17) $\partial_{\varphi}|t|=0$
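To fix ideas, here is a quick illustrative check in the simplest case $n-d=2$ (it is not used in the sequel). Write $t=(t_{n-1},t_{n})=|t|(\cos z,\sin z)$. Then
$\partial_{r}=\cos z\,\partial_{n-1}+\sin z\,\partial_{n}\quad\text{ and }\quad\partial_{\varphi_{n(n-1)}}=-\sin z\,\partial_{n-1}+\cos z\,\partial_{n},$
that is, $\partial_{r}$ and $\partial_{\varphi_{n(n-1)}}$ act as the usual radial derivative and as $\frac{1}{|t|}$ times the usual angular derivative in polar coordinates. In particular $\partial_{r}|t|=\cos^{2}z+\sin^{2}z=1$ while $\partial_{\varphi_{n(n-1)}}|t|=-\sin z\cos z+\cos z\sin z=0$, which is (1.17); moreover $|\partial_{r}u|^{2}+|\partial_{\varphi_{n(n-1)}}u|^{2}=|\nabla_{t}u|^{2}$, because $(\cos z,\sin z)$ and $(-\sin z,\cos z)$ form an orthonormal basis of $\mathbb{R}^{2}$, which anticipates Proposition 2.1 below.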
To lighten the notation, we write $\partial_{\varphi}$ for any angular
directional derivative. We will mention $i,j$ explicitly when it is necessary.
Furthermore we define the angular gradient $\nabla_{\varphi}$ as a vector
derivative whose components are all angular directional derivatives
$(\partial_{\varphi_{ij}})_{d+1\leq i,j\leq n}$ and
$|\nabla_{\varphi}u|^{2}=\frac{1}{2}\sum_{i,j=d+1}^{n}|\partial_{\varphi_{ij}}u|^{2}.$
Note that $\partial_{\varphi_{ii}}=0$ for all $d+1\leq i\leq n$ and
$\partial_{\varphi_{ij}}=-\partial_{\varphi_{ji}}$ for all $d+1\leq i,j\leq n$.
Also, we can easily check that the tangential, angular, and radial directions
are perpendicular to each other. More importantly, for any $u\in
W^{1,2}_{loc}$, we have the identity that
$|\nabla_{t}u|^{2}=|\partial_{r}u|^{2}+|\nabla_{\varphi}u|^{2}$ almost
everywhere (see Proposition 2.1). Consequently, it suffices to establish
estimates for the average non-tangential maximal functions of $\nabla_{x}$,
$\nabla_{\varphi}$ and $\partial_{r}$. In the rest of the article, we will
write
(1.18)
$\displaystyle\overline{\nabla}=(\nabla_{x},\nabla_{\varphi},\partial_{r}).$
One of the main reasons for using the cylindrical coordinate system is that
the operator $L=-\operatorname{div}[|t|^{d+1-n}\mathcal{A}\nabla]$ can be
written in terms of $\partial_{x},\partial_{\varphi},$ and $\partial_{r}$ (see
Proposition 2.3) when the coefficient matrix $\mathcal{A}$ is in the form of
(1.26). The expression (2.1) not only simplifies the computations, but also
helps us to better understand the geometric structure of the operator $L$.
###### Remark 1.4.
The notation $\partial_{r}$, $\partial_{\varphi}$, … might be confusing at
first, as these are not derivatives in a new system of coordinates. We will
not use a change of variable to turn our system of coordinates from a
cartesian to a cylindrical one. Instead, $\partial_{r}$ and
$\partial_{\varphi}$ denote linear combinations of derivatives in cartesian
coordinates, or derivatives along some curves (i.e., $r$ and $\varphi$ are not
“new variables”). They are used for properly grouping the derivatives. In
particular, we do not need to properly define a bijection
$(x,t)\mapsto(x,r,\varphi)$ or its Jacobian.
#### 1.1.2. Commutators
The common point between $\partial_{x}$ and $\partial_{\varphi}$ is that they
both cancel out the weight $|t|^{d+1-n}$, so they will be handled in a similar
manner by commuting them with the operator $L$; the estimates on the last
derivative $\partial_{r}$ will then be obtained by using the equation
(Proposition 2.3). The difference between the two differential operators
$\partial_{x}$ and $\partial_{\varphi}$ is that $\partial_{x}$ commute with
$\nabla$ and $\overline{\nabla}$, and $\partial_{\varphi}$ do not commute with
the radial and other angular derivatives, but fortunately, everything will
work out at the end because the commutators have zero average on $W(z,r)$. The
computations pertaining to commutators are performed in Section 2, for
instance Proposition 2.4 gives that
$[\partial_{r},\partial_{\varphi}]:=\partial_{r}\partial_{\varphi}-\partial_{\varphi}\partial_{r}=-\frac{\partial_{\varphi}}{|t|}.$
#### 1.1.3. Local bounds
We want to prove local versions of (1.12)–(1.13). Before introducing the
notation, let us mention that a weak solution is in $W^{2,2}_{loc}$ whenever
$\nabla\mathcal{A}\in L^{\infty}_{loc}$; this is a well-known fact, which we prove
again in Proposition 7.1.
We have already defined the non-tangential maximal function in (1.4), and the
square function of $v\in W^{1,2}_{loc}(\Omega)$ is defined as:
(1.19) $\displaystyle S(v)(x):=\bigg{(}\iint_{\widehat{\Gamma}_{a}(x)}|\nabla
v(y,s)|^{2}\,\frac{dyds}{|s|^{n-2}}\bigg{)}^{\frac{1}{2}},$
where
$\widehat{\Gamma}(x)=\\{(y,s)\in\mathbb{R}^{n}\setminus\mathbb{R}^{d}:|y-x|\leq|s|\\}$
is a higher-codimension cone with vertex $x\in\mathbb{R}^{d}$. We write
(1.20) $\displaystyle
S(\overline{\nabla}u)^{2}:=\sum_{i=1}^{d}S(\partial_{x_{i}}u)^{2}+\sum_{d<i,j\leq
n}S(\partial_{\varphi_{ij}}u)^{2}+S(\partial_{r}u)^{2},$
and the square functions of $\nabla_{x}u$ and $\nabla_{\varphi}u$ are defined
in a similar manner.
For a function $0\leq\Psi\leq 1$, the definitions of the localized square
functions and the non-tangential maximal functions are
(1.21) $\displaystyle S(v|\Psi)(x):=\bigg{(}\iint_{\widehat{\Gamma}(x)}|\nabla
v|^{2}\Psi\frac{dsdy}{|s|^{n-2}}\bigg{)}^{1/2}$
and
$\displaystyle\widetilde{N}(v|\Psi)(x)=\sup_{(z,r)\in\Gamma(x)}(v|\Psi)_{W,a}(z,r)$
where $(v|\Psi)_{W,a}$ is defined on $\mathbb{R}^{d+1}_{+}$ by
$\displaystyle(v|\Psi)_{W,a}(z,r):=\bigg{(}\frac{1}{|W_{a}(z,r)|}\iint_{W_{a}(z,r)}|v|^{2}\Psi
dyds\bigg{)}^{1/2}.$
“Good” cut-off functions will satisfy the following hypothesis.
###### Hypothesis ($\mathcal{COF}$).
We say that a function $\Psi$ satisfies ($\mathcal{COF}$) if $\Psi$ is a cut-
off function, that is if $\Psi\in C^{\infty}(\overline{\Omega})$,
$0\leq\Psi\leq 1$, $\Psi$ is radial (i.e. there exists $\psi\in
C^{\infty}(\mathbb{R}^{d+1}_{+})$ such that $\Psi(x,t)=\psi(x,|t|)$), and we
have the bound
$|t||\nabla\Psi|\leq K\quad\text{ and
}\quad{\mathds{1}}_{\operatorname{supp}\nabla\Psi}\in CM(K).$
We write ($\mathcal{COF}$)K when we want to refer to a constant for which
$|t||\nabla\Psi|\leq K$ and ${\mathds{1}}_{\operatorname{supp}\nabla\Psi}\in
CM(K)$, and $K$ will always be chosen $\geq 1$.
We show that if $\Psi$ is a “good” cut-off function, then for any weak
solution $u\in W^{2,2}_{loc}(\Omega)$ to the equation $Lu=0$, we have
$\|S(\overline{\nabla}u|\Psi)\|^{2}_{2}\leq C_{1}\kappa\|\widetilde{N}(\nabla
u|\Psi)\|^{2}_{2}+\|\operatorname{Tr}_{\Psi}(\nabla_{x}u)\|^{2}_{2}+\text{``error
terms''},$
where $\operatorname{Tr}_{\Psi}(\nabla_{x}u)$ is an approximation of the trace of
$\nabla_{x}u$ that depends on how far $\operatorname{supp}\Psi$ is from
$\partial\Omega$. The precise statement can be found in Lemma 5.5. In
addition, for a reduced class of “good” cut-off functions we will obtain the
local $N\leq S$
$\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}\lesssim\|S(\overline{\nabla}u|\Psi)\|^{2}_{2}+\text{``error
terms''},$
where an exact estimate is given in Lemma 4.6. The “error terms” that we
mentioned above go to zero once we extend local estimates to global ones. The
careful definitions of the “good” cutoffs, a delicate splitting of the
derivatives, and an enhanced structure of the operator are all important for
the algebra of the computations. Afterwards, when $\kappa$ is small, by taking
$\Psi\uparrow 1$, we are able to obtain the estimate
(1.22) $\|\widetilde{N}(\nabla u)\|_{2}\lesssim\lim_{\epsilon\to
0}\|\operatorname{Tr}_{\epsilon}(\nabla_{x}u)\|_{2}$
whenever $u$ is an energy solution (see Theorem 6.4). Finally, with this at
hand, two natural questions now arise. Does the limit
$\lim_{\epsilon}\|\operatorname{Tr}_{\epsilon}(\nabla_{x}u_{g})\|_{2}$ exist
and does it converge to $\|\nabla g\|_{2}$?
#### 1.1.4. Approximation Results
We want to follow the strategy that Kenig and Pipher used in [KP93]. The idea
is to construct a sequence of coefficients
$\\{\mathcal{A}^{j}\\}_{j\in\mathbb{N}}$ such that
$\mathcal{A}^{j}\equiv\mathcal{A}$ on $\\{|t|>1/j\\}$ and $\mathcal{A}^{j}$ is
Lipschitz up to the boundary. In particular $\mathcal{A}^{j}$ converges
pointwise to $\mathcal{A}$, which guarantees the convergence of the solution
$u^{j}_{g}$ to $u_{g}$ (see Theorem 8.1). Meanwhile, since $\mathcal{A}^{j}$
is continuous up to the boundary,
$\|\operatorname{Tr}_{\epsilon}(\nabla_{x}u^{j}_{g})\|_{2}$ converges indeed
to $\|\nabla g\|_{2}$ because $\nabla_{x}u^{j}_{g}$ is continuous/smooth up to the
boundary. We can swap the two limits (in $\epsilon$ and in $j$), because
(1.22) entails a uniform convergence of the traces in $j$.
However, the construction of the $\mathcal{A}^{j}$ used by Kenig and Pipher
does not immediately transfer to our higher codimensional setting. In
addition, we only succeeded to obtain global bounds on $\nabla\nabla_{x}u$
(and not on all the second derivatives, like we could do in the codimension 1
setting), and this forced us to prove Theorem 6.4 before doing the
approximation. For that reason, even if we globally follow the spirit of Kenig
and Pipher’s method, we cannot say that our argument is a simple adaptation of
[KP93].
#### 1.1.5. Self-improvement
All the arguments that we presented will allow us to prove the
$L^{2}$-solvability of the Regularity problem for a reduced class of
operators, and then we will “self improve” it to Theorem 1.1. The reduced
class of operators on which most of the intermediate results will be written is
given as follows.
###### Hypothesis ($\mathcal{H}$).
We say that the operator
$L:=-\operatorname{div}(|t|^{d+1-n}\mathcal{A}\nabla)$ satisfies the
assumption ($\mathcal{H}$) if
* •
$L$ is uniformly elliptic, that is there exists $\lambda\in(0,1)$ such that
(1.23) $\lambda|\xi|^{2}\leq\mathcal{A}(x,t)\xi\cdot\xi\ \ \text{and}\ \
|\mathcal{A}(x,t)\xi\cdot\zeta|\leq\lambda^{-1}|\xi||\zeta|\qquad\text{ for
}(x,t)\in\Omega,\,\xi,\zeta\in\mathbb{R}^{n};$
* •
the matrix $\mathcal{A}$ can be written as
(1.26)
$\displaystyle\mathcal{A}(x,t)=\left({\begin{array}[]{cc}\mathcal{A}_{1}(x,t)&\mathcal{A}_{2}(x,t)\frac{t}{|t|}\\\
0&Id_{(n-d)\times(n-d)}\\\ \end{array}}\right),$
where $\mathcal{A}_{1}$ is a $d\times d$-matrix function, and
$\mathcal{A}_{2}$ is a vertical vector of length $d$ (that is,
$\mathcal{A}_{2}(x,t)\frac{t}{|t|}$ is a matrix operation which gives a
$d\times(n-d)$ matrix);
* •
There exists $\kappa>0$ such that
(1.27) $|t||\nabla\mathcal{A}_{1}|+|t||\nabla\mathcal{A}_{2}|\in CM(\kappa).$
We write ($\mathcal{H}$)λ,κ when we want to refer to the constants in (1.23),
and (1.27). The constant $\kappa$ will ultimately be small.
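For orientation, let us record a trivial example (an illustration only): if $\mathcal{A}_{1}=Id_{d\times d}$ and $\mathcal{A}_{2}=0$, then $\mathcal{A}=Id$ and $L_{0}=-\operatorname{div}(|t|^{d+1-n}\nabla)$; the ellipticity (1.23) holds for every $\lambda\in(0,1)$ and the Carleson condition (1.27) holds for every $\kappa>0$ because $\nabla\mathcal{A}_{1}=\nabla\mathcal{A}_{2}=0$, so $L_{0}$ satisfies ($\mathcal{H}$)λ,κ no matter how small $\kappa$ is.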
Keep in mind that we consider the operators satisfying ($\mathcal{H}$) at
first, partially because some of our intermediate results can not be stated
with the assumptions from Theorem 1.1 (for instance we need $u\in
W^{2,2}_{loc}$ for Lemma 5.9, and so cannot consider Carleson perturbation
$\mathcal{C}$ for this result), but also because we want to simplify the
proofs (for instance, our proofs would work with $\mathcal{A}$ in the form
(1.9) instead of (1.26), but many extra computations would be needed in
Sections 3 and 5). That is, we sacrificed the optimality of the intermediate
results in order to shorten our proof.
We prove in Section 8 the following result, which seems at first glance weaker
than Theorem 1.1.
###### Theorem 1.5.
Take $\lambda,M>0$. There exists $\kappa\in(0,1)$ small enough (depending only
on $\lambda$, $d$, and $n$) such that if
$L:=-\operatorname{div}(|t|^{d+1-n}\mathcal{A}\nabla)$ is an elliptic operator
satisfying ($\mathcal{H}$)λ,κ, then for any boundary data $g\in
C^{\infty}_{0}(\mathbb{R}^{d})$, the solution $u$ to $Lu=0$ constructed as in
(1.3) or equivalently by using Lax-Milgram theorem (see Lemma 6.1) verifies
(1.28) $\displaystyle\|\widetilde{N}(\nabla u)\|_{L^{2}(\mathbb{R}^{d})}\leq
C\|\nabla g\|_{L^{2}(\mathbb{R}^{d})},$
where $C>0$ depends only on $\lambda$, $d$, and $n$.
Then, using the theory of Carleson perturbations for the Regularity problem
[DFM21, KP95] we improve the above result in Section 9 and we get Theorem 1.1,
as desired.
## 2\. Equation in Cylindrical Coordinates
In Subsection 1.1.1, we introduced a set of directional derivatives adapted to
the cylindrical structure of $\Omega$ (when $d<n-1$). The gradient
$\overline{\nabla}=(\nabla_{x},\nabla_{\varphi},\partial_{r})$ in cylindrical
coordinates has a norm equivalent to that of the classical gradient (see
Proposition 2.1), which makes $\overline{\nabla}$ equivalent to $\nabla$ for
estimates on first order derivatives. We compute the expression of our
elliptic operator in the cylindrical system of derivatives (see Proposition
2.3).
For the second order derivatives in cylindrical coordinates, we will need to
know the commutators between $\nabla_{x}$, $\nabla_{\varphi}$, and
$\partial_{r}$, which we compute in Proposition 2.4 and Proposition 2.6. We
observe that the non trivial commutators will always involve the angular
derivative $\nabla_{\varphi}$. In order to deal with them, we shall crucially
rely on Proposition 2.7, which uses the fact that the angular directional
derivative $\partial_{\varphi}u(x,r\theta)$ has zero mean on the unit sphere
for almost every $(x,r)\in\mathbb{R}^{d+1}_{+}$. From there, we will be able
to use the Poincaré inequality and recover second order derivatives (that will
eventually be controlled).
Recall that, as mentioned in Remark 1.4, $r$ and $\varphi$ are not “new
variables in a cylindrical system”, and $\partial_{r}$ and
$\partial_{\varphi_{ij}}$ are just a linear combination of Euclidean
derivatives.
###### Proposition 2.1.
Let $\partial_{\varphi_{ij}}$ and $\partial_{r}$ be directional derivatives
defined in Definition 1.3, and let $\overline{\nabla}u$ be the cylindrical
gradient defined in (1.18). We have
$\nabla u\cdot\nabla
v=\nabla_{x}u\cdot\nabla_{x}v+(\partial_{r}u)(\partial_{r}v)+\frac{1}{2}\sum_{i,j=d+1}^{n}(\partial_{\varphi_{ij}}u)(\partial_{\varphi_{ij}}v)=\overline{\nabla}u\cdot\overline{\nabla}v$
whenever it makes sense (for instance for $u\in W^{1,2}_{loc}(\Omega)$ and
almost every $(x,t)\in\Omega$). In particular, we have $|\nabla
u|^{2}=|\overline{\nabla}u|^{2}$.
###### Proof.
We just need to prove
$\nabla_{t}u\cdot\nabla_{t}v:=\sum_{\alpha=d+1}^{n}(\partial_{t_{\alpha}}u)(\partial_{t_{\alpha}}v)=(\partial_{r}u)(\partial_{r}v)+\frac{1}{2}\sum_{i,j=d+1}^{n}(\partial_{\varphi_{ij}}u)(\partial_{\varphi_{ij}}v).$
According to the definition of $\partial_{\varphi_{ij}}$ in (1.16), we have
$\displaystyle\sum_{i,j=d+1}^{n}(\partial_{\varphi_{ij}}u)(\partial_{\varphi_{ij}}v)=\sum_{i,j=d+1}^{n}\Big{\\{}\frac{t_{i}^{2}}{|t|^{2}}(\partial_{t_{j}}u)(\partial_{t_{j}}v)-2\frac{t_{i}t_{j}}{|t|^{2}}(\partial_{t_{i}}u)(\partial_{t_{j}}v)+\frac{t^{2}_{j}}{|t|^{2}}(\partial_{t_{i}}u)(\partial_{t_{i}}v)\Big{\\}}.$
The first term on the right-hand side equals $\nabla_{t}u\cdot\nabla_{t}v$
since $\sum_{i=d+1}^{n}t_{i}^{2}/|t|^{2}=1$. For the same reason, the last
term is also $\nabla_{t}u\cdot\nabla_{t}v$. We can factorize the second term
of the right-hand side into the product of a sum in $i$ and a sum in $j$, and
we easily observe from the definition (1.15) that the middle term is indeed
$-2(\partial_{r}u)(\partial_{r}v)$. The proposition follows. ∎
The second proposition establishes an integration by parts for the angular and
radial derivatives.
###### Proposition 2.2.
Let $u,v\in\mathcal{C}^{\infty}(\mathbb{R}^{n})$ be such that either $u$ or
$v$ is compactly supported in $\Omega$. We have the identities
$\iint_{\Omega}(\partial_{r}u)\,v\,|t|^{d+1-n}\,dt\,dx=-\iint_{\Omega}u\,(\partial_{r}v)\,|t|^{d+1-n}\,dt\,dx$
and
$\iint_{\Omega}(\partial_{\varphi}u)\,v\,|t|^{d+1-n}\,dt\,dx=-\iint_{\Omega}u\,(\partial_{\varphi}v)\,|t|^{d+1-n}\,dt\,dx$
where $\partial_{\varphi}$ stands for any of the $\partial_{\varphi_{ij}}$,
$d+1\leq i,j\leq n$.
###### Proof.
If one writes the integrals in cylindrical coordinates, the integration by
parts for $\partial_{r}$ is immediate once you notice that we imposed the
boundary condition $uv=0$ when $r=0$.
The second identity is also expected, but let us write it formally. Take
$d+1\leq i,j\leq n$ and we have by definition of $\partial_{\varphi_{ij}}$
that
$I:=\iint_{\Omega}(\partial_{\varphi_{ij}}u)\,v\,|t|^{d+1-n}\,dt\,dx=\iint_{\Omega}v\Big{[}(\partial_{i}u)\,t_{j}|t|^{d-n}-(\partial_{j}u)\,t_{i}|t|^{d-n}\Big{]}\,dt\,dx$
We use integration by parts to remove $\partial_{i}$ and $\partial_{j}$
from $u$, and we get
$\begin{split}I=-\iint_{\Omega}u(\partial_{\varphi_{ij}}v)\,|t|^{d+1-n}\,dt\,dx-\iint_{\Omega}uv\Big{[}\,\partial_{i}(t_{j}|t|^{d-n})-\,\partial_{j}(t_{i}|t|^{d-n})\Big{]}\,dt\,dx.\end{split}$
It is easy to check that
$\partial_{i}(t_{j}|t|^{d-n})-\,\partial_{j}(t_{i}|t|^{d-n})=0$ in $\Omega$,
since $\partial_{i}(t_{j}|t|^{d-n})=\delta_{ij}|t|^{d-n}+(d-n)\,t_{i}t_{j}|t|^{d-n-2}$ is symmetric in $i$ and $j$; thus the proposition follows. ∎
The following proposition rearranges the derivatives, in order to use
$\partial_{r}$ and $\partial_{\varphi}$ instead of the $t$-derivatives in the
expression of $L$.
###### Proposition 2.3.
Let $L=-\operatorname{div}(|t|^{d+1-n}\mathcal{A}\nabla)$ be such that
$\displaystyle\mathcal{A}(x,t)=\left({\begin{array}[]{cc}\mathcal{A}_{1}(x,t)&\mathcal{A}_{2}(x,t)\frac{t}{|t|}\\\
\frac{t^{T}}{|t|}\mathcal{A}_{3}(x,t)&b(x,t)Id_{(n-d)\times(n-d)}\\\
\end{array}}\right),$
where $t\in\mathbb{R}^{n-d}$ is seen as an $(n-d)$-dimensional horizontal vector,
and where $\mathcal{A}_{1}$, $\mathcal{A}_{2}$, $\mathcal{A}_{3}$, and $b$ are
respectively a $d\times d$ matrix, a $d$-dimensional vertical vector, a
$d$-dimensional horizontal vector, and a scalar function.
Then:
$L=-|t|^{d+1-n}\Big{[}\operatorname{div}_{x}(\mathcal{A}_{1}\nabla_{x})+\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{r})+\partial_{r}(\mathcal{A}_{3}\nabla_{x})+\partial_{r}(b\partial_{r})+\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}(b\partial_{\varphi_{ij}})\Big{]}.$
In particular, if $L$ satisfies ($\mathcal{H}$), then
(2.1)
$L=-|t|^{d+1-n}\Big{[}\operatorname{div}_{x}(\mathcal{A}_{1}\nabla_{x})+\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{r})+\partial^{2}_{r}+\frac{1}{2}\sum_{i,j=d+1}^{n}\partial^{2}_{\varphi_{ij}}\Big{]}.$
###### Proof.
We first decompose as
(2.2)
$L=-\operatorname{div}_{x}(|t|^{d+1-n}\mathcal{A}_{1}\nabla_{x})-\operatorname{div}_{x}(|t|^{d+1-n}\mathcal{A}_{2}\frac{t}{|t|}\nabla_{t})-\operatorname{div}_{t}(|t|^{d+1-n}\frac{t^{T}}{|t|}\mathcal{A}_{3}\nabla_{x})\\\
-\operatorname{div}_{t}(|t|^{d+1-n}b\nabla_{t})=:L_{1}+L_{2}+L_{3}+L_{4}.$
Since the weight $|t|^{d+1-n}$ is independent of $x$, one has
$L_{1}=-|t|^{d+1-n}\operatorname{div}_{x}(\mathcal{A}_{1}\nabla_{x})$ and
$L_{2}=-|t|^{d+1-n}\operatorname{div}_{x}(\mathcal{A}_{2}\frac{t}{|t|}\nabla_{t})=-|t|^{d+1-n}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{r})$,
since by definition $\partial_{r}$ is $\frac{t}{|t|}\nabla_{t}$.
Recall that $\mathcal{A}_{3}$ is a horizontal vector and $\nabla_{x}$ is a
vertical vector differential operator, so $\mathcal{A}_{3}\nabla_{x}$ is a
scalar (differential operator). In conclusion,
$L_{3}=-\operatorname{div}_{t}(|t|^{d-n}t^{T})\mathcal{A}_{3}\nabla_{x}-|t|^{d+1-n}\frac{t}{|t|}\cdot\nabla_{t}(\mathcal{A}_{3}\nabla_{x})\\\
=0-|t|^{d+1-n}\partial_{r}(\mathcal{A}_{3}\nabla_{x}).$
At this point, it remains to treat $L_{4}$. The integration by parts entails
that, for $u,v\in C^{\infty}_{0}(\Omega)$,
$\begin{split}\iint_{\Omega}(L_{4}u)v\,dt\,dx&=\iint_{\Omega}b\nabla_{t}u\cdot\nabla_{t}v\,|t|^{d+1-n}\,dt\,dx\\\
&=\iint_{\Omega}b(\partial_{r}u)(\partial_{r}v)\,|t|^{d+1-n}\,dt\,dx+\frac{1}{2}\sum_{i,j=d+1}^{n}\iint_{\Omega}b(\partial_{\varphi_{ij}}u)(\partial_{\varphi_{ij}}v)\,|t|^{d+1-n}\,dt\,dx\end{split}$
by Proposition 2.1. Using integration by parts for $\partial_{r}$ and
$\partial_{\varphi_{ij}}$ given by Proposition 2.2, we deduce
$\iint_{\Omega}(L_{4}u)v\,dt\,dx=-\iint_{\Omega}\partial_{r}(b\partial_{r}u)\,v\,|t|^{d+1-n}\,dt\,dx-\frac{1}{2}\sum_{i,j=d+1}^{n}\iint_{\Omega}\partial_{\varphi_{ij}}(b\partial_{\varphi_{ij}}u)v\,|t|^{d+1-n}\,dt\,dx$
Since the above equality is true for all $u,v\in C^{\infty}_{0}(\Omega)$, we
conclude
$L_{4}=-|t|^{d+1-n}\Big{[}\partial_{r}(b\partial_{r})+\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}(b\partial_{\varphi_{ij}})\Big{]}.$
The proposition follows. ∎
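As a quick sanity check (illustrative only; write $t=|t|(\cos z,\sin z)$ and $\partial_{z}$ for the corresponding angular derivative), take $\mathcal{A}=Id$ and $n-d=2$, so that $d+1-n=-1$. On one hand, expanding directly,
$-\operatorname{div}(|t|^{-1}\nabla u)=-|t|^{-1}\Big{[}\Delta_{x}u+\Delta_{t}u-\frac{1}{|t|}\partial_{r}u\Big{]},\qquad\Delta_{t}=\partial_{r}^{2}+\frac{1}{|t|}\partial_{r}+\frac{1}{|t|^{2}}\partial_{z}^{2}.$
On the other hand, the formula of the proposition gives $-|t|^{-1}\big[\Delta_{x}u+\partial_{r}^{2}u+\frac{1}{|t|^{2}}\partial_{z}^{2}u\big]$, since $\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}^{2}=\partial_{\varphi_{n(n-1)}}^{2}=\frac{1}{|t|^{2}}\partial_{z}^{2}$ when $n-d=2$. The two expressions agree.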
In the next results, we want to compute commutators. We immediately have that
$[\partial_{x},\partial_{r}]=0$ and $[\partial_{x},\partial_{\varphi}]=0$. The
normal derivative $\partial_{r}$ and the angular directional derivative
$\partial_{\varphi}$ do not commute, therefore we want to compute their
commutator.
###### Proposition 2.4.
Let $\partial_{\varphi}$ and $\partial_{r}$ be the derivatives defined in
Definition 1.3. Then we have
$[\partial_{r},\partial_{\varphi}]:=\partial_{r}\partial_{\varphi}-\partial_{\varphi}\partial_{r}=-\frac{\partial_{\varphi}}{|t|}.$
###### Proof.
Fix an angular directional derivative $\partial_{\varphi_{ij}}$. We use the
expressions of $\partial_{\varphi_{ij}}$ and $\partial_{r}$ given in
Definition 1.3 to write
(2.3)
$\partial_{r}\partial_{\varphi_{ij}}=\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}}{|t|}\partial_{\alpha}\partial_{\varphi_{ij}}=\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}}{|t|}\Big{[}\partial_{\varphi_{ij}}\partial_{\alpha}-\partial_{\alpha}\Big{(}\frac{t_{i}}{|t|}\Big{)}\partial_{j}+\partial_{\alpha}\Big{(}\frac{t_{j}}{|t|}\Big{)}\partial_{i}\Big{]}\\\
=\sum_{\alpha=d+1}^{n}\Big{[}\partial_{\varphi_{ij}}\Big{(}\frac{t_{\alpha}}{|t|}\partial_{\alpha}\Big{)}-\partial_{\varphi_{ij}}\Big{(}\frac{t_{\alpha}}{|t|}\Big{)}\partial_{\alpha}-\frac{t_{\alpha}}{|t|}\partial_{\alpha}\Big{(}\frac{t_{i}}{|t|}\Big{)}\partial_{j}+\frac{t_{\alpha}}{|t|}\partial_{\alpha}\Big{(}\frac{t_{j}}{|t|}\Big{)}\partial_{i}\Big{]}.$
We notice that the first term on the last line of (2.3) is exactly
$\partial_{\varphi_{ij}}\partial_{r}$ after summing over all
$d+1\leq\alpha\leq n$. The third and fourth terms of (2.3) are similar, and are
both zero. Indeed,
$-\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}}{|t|}\partial_{\alpha}\Big{(}\frac{t_{i}}{|t|}\Big{)}\partial_{j}=-\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}}{|t|}\Big{(}\frac{\delta_{i\alpha}}{|t|}-\frac{t_{i}t_{\alpha}}{|t|^{3}}\Big{)}\partial_{j}=-\frac{t_{i}}{|t|^{2}}\partial_{j}+\frac{t_{i}}{|t|^{2}}\sum_{\alpha=d+1}^{n}\frac{t_{\alpha}^{2}}{|t|^{2}}\partial_{j}=0.$
The second term on the last line of (2.3) can be handled as follows:
(2.4)
$-\sum_{\alpha=d+1}^{n}\partial_{\varphi_{ij}}\Big{(}\frac{t_{\alpha}}{|t|}\Big{)}\partial_{\alpha}=-\sum_{\alpha=d+1}^{n}\Big{[}-\frac{t_{i}}{|t|}\partial_{j}\Big{(}\frac{t_{\alpha}}{|t|}\Big{)}+\frac{t_{j}}{|t|}\partial_{i}\Big{(}\frac{t_{\alpha}}{|t|}\Big{)}\Big{]}\partial_{\alpha}\\\
=\sum_{\alpha=d+1}^{n}\Big{[}\frac{t_{i}}{|t|}\Big{(}\frac{\delta_{j\alpha}}{|t|}-\frac{t_{j}t_{\alpha}}{|t|^{3}}\Big{)}-\frac{t_{j}}{|t|}\Big{(}\frac{\delta_{i\alpha}}{|t|}-\frac{t_{i}t_{\alpha}}{|t|^{3}}\Big{)}\Big{]}\partial_{\alpha}=\frac{t_{i}}{|t|^{2}}\partial_{j}-\frac{t_{j}}{|t|^{2}}\partial_{i}=-\frac{\partial_{\varphi_{ij}}}{|t|}.$
By combining our observations all together, the proposition follows. ∎
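As an illustration of the sign (in the simplest case $n-d=2$, not used later): writing $t=|t|(\cos z,\sin z)$, the derivative $\partial_{\varphi_{n(n-1)}}$ acts as $\frac{1}{|t|}\partial_{z}$ in polar coordinates, so
$\partial_{r}\Big{(}\frac{1}{|t|}\partial_{z}u\Big{)}-\frac{1}{|t|}\partial_{z}(\partial_{r}u)=-\frac{1}{|t|^{2}}\partial_{z}u=-\frac{\partial_{\varphi_{n(n-1)}}u}{|t|},$
in agreement with the proposition.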
Different angular derivatives do not commute either, and we give their
commutator below.
###### Proposition 2.5.
We trivially have
$[\partial_{\varphi_{ij}},\partial_{\varphi_{\alpha\beta}}]=0$ when
$i,j,\alpha,\beta$ are all different. If $i,j,k$ are all different, we have
$[\partial_{\varphi_{ij}},\partial_{\varphi_{ik}}]=-[\partial_{\varphi_{ji}},\partial_{\varphi_{ik}}]=\frac{1}{|t|}\partial_{\varphi_{jk}}.$
###### Proof.
The identity
$[\partial_{\varphi_{ij}},\partial_{\varphi_{ik}}]=-[\partial_{\varphi_{ji}},\partial_{\varphi_{ik}}]$
comes from the fact that $\partial_{\varphi_{ij}}=-\partial_{\varphi_{ji}}$.
For the second identity, we compute by brute force. We use the definitions of the
angular derivatives, and develop the expressions to obtain 8 terms that we
pair as follows:
$[\partial_{\varphi_{ij}},\partial_{\varphi_{ik}}]=\left[\frac{t_{i}}{|t|}\partial_{j}\Big{(}\frac{t_{i}}{|t|}\partial_{k}\Big{)}-\frac{t_{i}}{|t|}\partial_{k}\Big{(}\frac{t_{i}}{|t|}\partial_{j}\Big{)}\right]-\left[\frac{t_{i}}{|t|}\partial_{j}\Big{(}\frac{t_{k}}{|t|}\partial_{i}\Big{)}-\frac{t_{k}}{|t|}\partial_{i}\Big{(}\frac{t_{i}}{|t|}\partial_{j}\Big{)}\right]\\\
-\left[\frac{t_{j}}{|t|}\partial_{i}\Big{(}\frac{t_{i}}{|t|}\partial_{k}\Big{)}-\frac{t_{i}}{|t|}\partial_{k}\Big{(}\frac{t_{j}}{|t|}\partial_{i}\Big{)}\right]+\left[\frac{t_{j}}{|t|}\partial_{i}\Big{(}\frac{t_{k}}{|t|}\partial_{i}\Big{)}-\frac{t_{k}}{|t|}\partial_{i}\Big{(}\frac{t_{j}}{|t|}\partial_{i}\Big{)}\right]\\\
:=T_{1}+T_{2}+T_{3}+T_{4}.$
By using the product rule for every term and the fact that $i,j,k$ are
pairwise different, we easily get that $T_{1}=T_{4}=0$ and
$T_{2}=\frac{t_{k}}{|t|^{2}}\partial_{j}\quad\text{ and
}T_{3}=-\frac{t_{j}}{|t|^{2}}\partial_{k}.$
We conclude that
$[\partial_{\varphi_{ij}},\partial_{\varphi_{ik}}]=-\frac{t_{j}}{|t|^{2}}\partial_{k}+\frac{t_{k}}{|t|^{2}}\partial_{j}=\frac{1}{|t|}\partial_{\varphi_{jk}}$
as desired. ∎
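A side remark that may help keep track of the signs (it is not used in the proofs): writing $R_{ij}:=t_{i}\partial_{j}-t_{j}\partial_{i}$ for the standard rotation generators, we have $\partial_{\varphi_{ij}}=-\frac{1}{|t|}R_{ij}$, the factors $\frac{1}{|t|}$ pass through the commutators because $R_{ij}|t|=0$, and Proposition 2.5 is then the familiar relation $[R_{ij},R_{ik}]=-R_{jk}$ divided by $|t|^{2}$.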
Now it is time to compute the commutator $[L,\partial_{\varphi}]$, which is a
crucial step for establishing local bounds between the square functions and
the non-tangential maximal functions. We will explain more when we start
building up these estimates. We compute the commutator when $L$ satisfies
($\mathcal{H}$); we could compute the commutator for general elliptic operator
$L$, but we do not need it, so we spare ourselves the extra complications.
###### Proposition 2.6.
Let $\mathcal{A}$ be a $n\times n$ matrix in the form of (1.26), then for any
$v\in W^{2,2}_{loc}(\Omega)$
$[L,\partial_{\varphi}](v)=|t|^{d-n}\Big{[}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{\varphi}v)+2\partial_{r}\partial_{\varphi}v\Big{]}+\operatorname{div}_{x}(|t|^{d+1-n}(\partial_{\varphi}\mathcal{A})\nabla
v).$
Here we identify $\partial_{\varphi}\mathcal{A}$ with its non-trivial
submatrix, that is the first $d$ rows.
###### Proof.
Fix an angular directional derivative $\partial_{\varphi}$. We rearrange the
derivatives to avoid using any $t$-derivatives, and Proposition 2.3 entails
that
$L=-|t|^{d+1-n}\Big{[}\operatorname{div}_{x}(\mathcal{A}_{1}\nabla_{x})+\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{r})+\partial_{r}^{2}+\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}^{2}\Big{]}=:L_{1}+L_{2}+L_{3}+L_{4}.$
We note that
$[L,\partial_{\varphi}]=\sum_{\alpha=1}^{4}[L_{\alpha},\partial_{\varphi}].$
So we will compute each $[L_{\alpha},\partial_{\varphi}]$ individually. Let us
start from the easiest one $[L_{1},\partial_{\varphi}]$. Since $\nabla_{x}$
and $\partial_{\varphi}$ commute and $\partial_{\varphi}|t|=0$, we have
(2.5)
$\displaystyle[L_{1},\partial_{\varphi}]=\operatorname{div}_{x}(|t|^{d+1-n}(\partial_{\varphi}\mathcal{A}_{1})\nabla_{x}).$
We turn to the operator $L_{2}$. By product rule, one has
(2.6)
$L_{2}\partial_{\varphi}=-\operatorname{div}_{x}(|t|^{d+1-n}\mathcal{A}_{2}\partial_{r}\partial_{\varphi})=-\operatorname{div}_{x}(|t|^{d+1-n}\mathcal{A}_{2}\partial_{\varphi}\partial_{r})-\operatorname{div}_{x}(|t|^{d+1-n}\mathcal{A}_{2}[\partial_{r},\partial_{\varphi}])\\\
=-\operatorname{div}_{x}(|t|^{d+1-n}\partial_{\varphi}(\mathcal{A}_{2}\partial_{r}))+\operatorname{div}_{x}(|t|^{d+1-n}(\partial_{\varphi}\mathcal{A}_{2})\partial_{r})+|t|^{d-n}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{\varphi}),$
where we used Proposition 2.4 to compute the commutator. The first term on the
last line of (2.6) is exactly $\partial_{\varphi}L_{2}$ because
$\partial_{\varphi}|t|^{d+1-n}\equiv 0$ and $\partial_{x}$ and
$\partial_{\varphi}$ commute. Thus, (2.6) becomes
(2.7)
$\displaystyle[L_{2},\partial_{\varphi}]=\operatorname{div}_{x}(|t|^{d+1-n}(\partial_{\varphi}\mathcal{A}_{2})\partial_{r})+|t|^{d-n}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{\varphi}).$
For simplicity, we group $[L_{1},\partial_{\varphi}]$ and
$[L_{2},\partial_{\varphi}]$. We have for any $v\in W^{2,2}_{loc}(\Omega)$
that
$[L_{1},\partial_{\varphi}](v)+[L_{2},\partial_{\varphi}](v)=\operatorname{div}_{x}\Big{(}|t|^{d+1-n}(\partial_{\varphi}\mathcal{A})\nabla
v\Big{)}+|t|^{d-n}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{\varphi}v).$
As for the commutator between $L_{3}$ and $\partial_{\varphi}$, we use
Proposition 2.4 multiple times to write
$\partial_{r}^{2}\partial_{\varphi}=\partial_{r}\partial_{\varphi}\partial_{r}-\partial_{r}\Big{(}\frac{\partial_{\varphi}}{|t|}\Big{)}=\partial_{\varphi}\partial_{r}^{2}-\frac{1}{|t|}\partial_{\varphi}\partial_{r}-\frac{1}{|t|}\partial_{r}\partial_{\varphi}+\frac{1}{|t|^{2}}\partial_{\varphi}=\partial_{\varphi}\partial_{r}^{2}-\frac{2}{|t|}\partial_{r}\partial_{\varphi}$
and thus deduce
$[L_{3},\partial_{\varphi}]=2|t|^{d-n}\partial_{r}\partial_{\varphi}.$
It remains to establish that $[L_{4},\partial_{\varphi}]=0$. We take
$d+1\leq\alpha,\beta\leq n$ so that
$\partial_{\varphi}=\partial_{\varphi_{\alpha\beta}}$. We invoke the fact that
$\partial_{\varphi_{ij}}=-\partial_{\varphi_{ji}}$ and then Proposition 2.5 to
obtain
$\begin{split}-2|t|^{n-d-1}[L_{4},\partial_{\varphi}]&=-\sum_{i\neq\alpha}\Big{(}\partial_{\varphi_{\beta
i}}^{2}\partial_{\varphi_{\beta\alpha}}-\partial_{\varphi_{\beta\alpha}}\partial_{\varphi_{\beta
i}}^{2}\Big{)}+\sum_{j\neq\beta}\Big{(}\partial_{\varphi_{\alpha
j}}^{2}\partial_{\varphi_{\alpha\beta}}-\partial_{\varphi_{\alpha\beta}}\partial_{\varphi_{\alpha
j}}^{2}\Big{)}\\\ &=-\sum_{i\neq\alpha}\Big{(}\partial_{\varphi_{\beta
i}}\partial_{\varphi_{i\alpha}}-\partial_{\varphi_{\alpha
i}}\partial_{\varphi_{\beta
i}}\Big{)}+\sum_{j\neq\beta}\Big{(}\partial_{\varphi_{\alpha
j}}\partial_{\varphi_{j\beta}}-\partial_{\varphi_{\beta
j}}\partial_{\varphi_{\alpha j}}\Big{)}\\\
&=-\sum_{i\neq\alpha,\beta}\Big{(}\partial_{\varphi_{\beta
i}}\partial_{\varphi_{i\alpha}}-\partial_{\varphi_{\alpha
i}}\partial_{\varphi_{\beta
i}}\Big{)}+\sum_{j\neq\alpha,\beta}\Big{(}\partial_{\varphi_{\alpha
j}}\partial_{\varphi_{j\beta}}-\partial_{\varphi_{\beta
j}}\partial_{\varphi_{\alpha j}}\Big{)}.\end{split}$
because $\partial_{\varphi_{kk}}=0$. We can freely rename $j$ as $i$ in the
second sum, and after recalling again that
$\partial_{\varphi_{ij}}=-\partial_{\varphi_{ji}}$, we observe that the two sums
in the right-hand side above cancel each other. We conclude that
$[L_{4},\partial_{\varphi}]=0$, which finishes the proof of the proposition. ∎
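As a consistency check in the simplest situation (illustrative only): take $\mathcal{A}=Id$, so that $\mathcal{A}_{2}=0$ and $\partial_{\varphi}\mathcal{A}=0$, and $n-d=2$, writing $t=|t|(\cos z,\sin z)$ as before. By Proposition 2.3, $L=-|t|^{d+1-n}\big[\Delta_{x}+\partial_{r}^{2}+\frac{1}{|t|^{2}}\partial_{z}^{2}\big]$; the pieces $\Delta_{x}$ and $\frac{1}{|t|^{2}}\partial_{z}^{2}$ commute with $\partial_{\varphi}=\frac{1}{|t|}\partial_{z}$, while $[\partial_{r}^{2},\partial_{\varphi}]=-\frac{2}{|t|}\partial_{r}\partial_{\varphi}$ as computed above, so $[L,\partial_{\varphi}]=2|t|^{d-n}\partial_{r}\partial_{\varphi}$, which matches the statement of the proposition.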
Finally, we will need the following version of the Poincaré inequality.
###### Proposition 2.7.
Let $u\in W^{1,2}_{loc}(\Omega)$ and let $\Phi\in
C^{\infty}_{0}(\Omega,\mathbb{R}^{+})$ be a radial function. Then
$\partial_{\varphi}u$ has zero mean on spheres, that is, for almost every
$(x,r)\in\mathbb{R}^{d+1}_{+}$, we have
$\displaystyle(\partial_{\varphi}u)_{\mathbb{S}^{n-d-1}}(x,r):=\fint_{\mathbb{S}^{n-d-1}}\partial_{\varphi}u(x,r\theta)\,d\sigma(\theta)=0,$
where $\sigma$ is the surface measure on the unit sphere $\mathbb{S}^{n-d-1}$.
As a result, we have
$\iint_{\Omega}|\partial_{\varphi}u|^{2}\Phi\,dt\,dx\leq
C\iint_{\Omega}|t|^{2}|\partial_{\varphi}^{2}u|^{2}\Phi\,dt\,dx,$
where $C>0$ is a universal constant.
###### Proof.
Let $\phi(x,r)\in C_{0}^{\infty}(\mathbb{R}^{d+1}_{+})$ and set
$\Phi(x,t):=\phi(x,|t|)$. Observe that
$\iint_{\mathbb{R}^{d+1}_{+}}\left|\int_{\mathbb{S}^{n-d-1}}\partial_{\varphi}u\,d\theta\right||\phi|\,r^{n-d-1}\,dr\,dx\leq\iint_{\mathbb{R}^{n}}|\partial_{\varphi}u||\Phi|\,dx\,dt\leq
C_{\phi}$
which proves by Fubini’s theorem that
$\int_{\mathbb{S}^{n-d-1}}\partial_{\varphi}u\,d\theta$, and thus
$(\partial_{\varphi}u)_{\mathbb{S}^{n-d-1}}$, exists for almost every
$(x,r)\in\mathbb{R}^{d+1}_{+}$. Notice now that
$\partial_{\varphi}\Phi=(\partial_{\varphi}|t|)(\partial_{r}\phi)\equiv 0$
because $\partial_{\varphi}|t|\equiv 0$ and $|\partial_{r}\phi|<\infty$.
Therefore the integration by parts (see Proposition 2.2) entails that
(2.8)
$\iint_{\mathbb{R}^{d+1}_{+}}(\partial_{\varphi}u)_{\mathbb{S}^{n-d-1}}\,\phi\,r^{n-d-1}\,dr\,dx=\frac{1}{\sigma(\mathbb{S}^{n-d-1})}\iint_{\Omega}(\partial_{\varphi}u)\Phi\,dt\,dx\\\
=-\frac{1}{\sigma(\mathbb{S}^{n-d-1})}\iint_{\Omega}u\partial_{\varphi}\Phi\,\,dt\,dx=0.$
Since the identity (2.8) holds for every $\phi\in
C_{0}^{\infty}(\mathbb{R}^{d+1}_{+})$, it is enough to conclude that
$(\partial_{\varphi}u)_{\mathbb{S}^{n-d-1}}(x,r)=0$ for almost every
$(x,r)\in\mathbb{R}^{d+1}_{+}$.
Let us turn to the second part of the Proposition. Without loss of generality,
we can assume that $d\leq n-2$ (because otherwise angular derivatives do not
exist) and $\partial_{\varphi}=\partial_{\varphi_{ij}}$ with $i=n$ and
$j=n-1$. Write a running point of $\mathbb{R}^{n}$ as
$(x,t^{\prime},t_{n-1},t_{n})\in\mathbb{R}^{d}\times\mathbb{R}^{n-d-2}\times\mathbb{R}\times\mathbb{R}$.
We consider a function $\psi$ on
$\mathbb{R}^{n-1}_{+}:=\\{(x,t^{\prime},r)\in\mathbb{R}^{n-2}\times(0,\infty)\\}$,
and then set $\Psi(x,t):=\psi(x,t^{\prime},|(t_{n-1},t_{n})|)$. The same argument
as before shows that for almost every
$(x,t^{\prime},r)\in\mathbb{R}^{n-1}_{+}$, the function
$\theta\to\partial_{\varphi}u(x,t^{\prime},r\theta)$ lies in
$L^{2}(\mathbb{S}^{1},d\sigma)$ and
$(\partial_{\varphi}u)_{\mathbb{S}^{1}}(x,t^{\prime},r):=\fint_{\mathbb{S}^{1}}\partial_{\varphi}u(x,t^{\prime},r\theta)\,d\sigma(\theta)=0.$
However, $\mathbb{S}^{1}$ is just the unit circle, so we have the bijection
$\rho:\,z\in[0,2\pi)\mapsto\theta=(\cos(z),\sin(z))\in\mathbb{S}^{1}$
and we even have $d\sigma(\theta)=dz$. Moreover,
$\begin{split}\frac{\partial}{\partial
z}[u(x,t^{\prime},r\theta)]&=-r\sin(z)\frac{\partial}{\partial
t_{n-1}}u(x,t^{\prime},r\theta)+r\cos(z)\frac{\partial}{\partial
t_{n}}u(x,t^{\prime},r\theta)=r\partial_{\varphi}u(x,t^{\prime},r\theta)\end{split}$
and similarly
$\frac{\partial^{2}}{\partial
z^{2}}[u(x,t^{\prime},r\theta)]=r^{2}\partial_{\varphi}^{2}u(x,t^{\prime},r\theta).$
We deduce that, for almost every $(x,t^{\prime},r)\in\mathbb{R}^{n-1}_{+}$,
$\fint_{0}^{2\pi}\frac{\partial}{\partial
z}[u(x,t^{\prime},r\rho(z))]\,dz=(\partial_{\varphi}u)_{\mathbb{S}^{1}}(x,t^{\prime},r)=0$
and then, by the Poincaré inequality on $[0,2\pi]$,
$\begin{split}\int_{\mathbb{S}^{1}}|\partial_{\varphi}u(x,t^{\prime},r\theta)|^{2}\,d\sigma(\theta)&=r^{-2}\int_{0}^{2\pi}\Big{|}\frac{\partial}{\partial
z}[u(x,t^{\prime},r\rho(z))]\Big{|}^{2}dz\\\ &\leq
Cr^{-2}\int_{0}^{2\pi}\Big{|}\frac{\partial^{2}}{\partial
z^{2}}[u(x,t^{\prime},r\rho(z))]\Big{|}^{2}dz=Cr^{2}\int_{\mathbb{S}^{1}}|\partial_{\varphi}^{2}u(x,t^{\prime},r\theta)|^{2}\,d\sigma(\theta).\end{split}$
We conclude by integrating over $(x,t^{\prime},r)\in\mathbb{R}^{n-1}_{+}$.
Since a radial function $\Phi$ depends on $t_{n-1}$ and $t_{n}$ only via the
norm $|(t_{n-1},t_{n})|$, we get
$\begin{split}\iint_{\Omega}|\partial_{\varphi}u|^{2}\Phi\,dt\,dx&=\int_{\mathbb{R}^{n-2}}\int_{0}^{\infty}\Phi\left(\int_{\mathbb{S}^{1}}|\partial_{\varphi}u(x,t^{\prime},r\theta)|^{2}\,d\sigma(\theta)\right)\,r\,dr\,dt^{\prime}\,dx\\\
&\lesssim\int_{\mathbb{R}^{n-2}}\int_{0}^{\infty}\Phi\left(\int_{\mathbb{S}^{1}}|\partial_{\varphi}^{2}u(x,t^{\prime},r\theta)|^{2}\,d\sigma(\theta)\right)\,r^{3}dr\,dt^{\prime}\,dx\\\
&\quad\leq\iint_{\Omega}|t|^{2}|\partial_{\varphi}^{2}u|^{2}\Phi\,dt\,dx,\end{split}$
since $r=|(t_{n-1},t_{n})|\leq|t|$. The proposition follows. ∎
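For a concrete illustration of the zero-mean property (not needed later), take $u(x,t)=t_{n-1}$: then $\partial_{\varphi_{n(n-1)}}u=-t_{n}/|t|$, whose average over any sphere $\\{|t|=r\\}$ vanishes by symmetry, in accordance with the first part of the proposition.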
## 3\. $N\leq S$ Local Estimates, Part 1: Integration by Parts
We want to bound the non-tangential maximal function by the square
function. In this section, we prove preliminary estimates that will be
improved to the desired $N<S$ estimate in the next section by using a “good
$\lambda$” argument.
We observe first that if $v\in L^{2}_{loc}(\Omega)$ and $\Psi$ is a cut-off
function, we have by a simple application of Fubini’s theorem that
(3.1)
$\iint_{\Omega}|v|^{2}\Psi\,\frac{dt}{|t|^{n-d-2}}\,dx\approx\int_{\mathbb{R}^{d}}\left(\iint_{(y,t)\in\widehat{\Gamma}(x)}|v(y,t)|^{2}\Psi(y,t)\,\frac{dt}{|t|^{n-2}}\,dy\right)\,dx$
so in particular, for any $v\in W^{1,2}_{loc}(\Omega)$
(3.2) $\iint_{\Omega}|\nabla
v|^{2}\Psi\,\frac{dt}{|t|^{n-d-2}}\,dx\approx\|S(v|\Psi)\|^{2}_{L^{2}(\mathbb{R}^{d})}.$
The constants in (3.1) and (3.2) depend only on $d$ and $n$.
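Let us record, for the reader's convenience, the elementary computation behind (3.1): for fixed $(y,t)\in\Omega$, Fubini's theorem gives $\int_{\mathbb{R}^{d}}{\mathds{1}}_{\widehat{\Gamma}(x)}(y,t)\,dx=|\\{x\in\mathbb{R}^{d}:|y-x|\leq|t|\\}|=c_{d}|t|^{d}$, with $c_{d}$ the volume of the unit ball of $\mathbb{R}^{d}$, so integrating the right-hand side of (3.1) in $x$ first turns the weight $|t|^{2-n}$ into $c_{d}|t|^{d}\cdot|t|^{2-n}=c_{d}|t|^{-(n-d-2)}$.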
Moreover, the Carleson measure condition is well adapted to the averaged non-
tangential maximal function, in that we have
(3.3) $\iint_{\Omega}|v|^{2}|f|^{2}\Psi\,\frac{dt}{|t|^{n-d}}\,dx\leq
CM\|\widetilde{N}(v|\Psi)\|^{2}_{L^{2}(\mathbb{R}^{d})}$
whenever $f\in CM(M)$. The statement in this particular context can be found
as Proposition 4.3 in [FMZ21], but the proof is an easy consequence of the
classical Carleson inequality.
###### Lemma 3.1.
In this lemma, $\partial_{v}$ stands for either a tangential derivative
$\partial_{x}$ or an angular derivative $\partial_{\varphi}$. For any function
$u\in W^{2,2}_{loc}(\Omega)$, any cut-off function $\Psi\in
C^{\infty}_{0}(\Omega,[0,1])$ satisfying ($\mathcal{COF}$)K, any real constant
$\alpha$, and any $\delta\in(0,1)$, we have
(3.4)
$\left|\iint_{\Omega}|\partial_{v}u-\alpha|^{2}\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}\right|\leq\delta\|\widetilde{N}(\partial_{v}u-\alpha|\Psi^{3})\|^{2}_{2}+C(1+\delta^{-1}K)\|S(\overline{\nabla}u|\Psi)\|^{2}_{2},$
where $C>0$ depends only on $n$.
###### Proof.
To lighten the notation, we write $V$ for $\partial_{v}u$. First, by the
integration by parts (Proposition 2.2), we have
$\mathcal{T}:=\iint_{\Omega}|V-\alpha|^{2}\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}=-2\iint_{\Omega}(V-\alpha)(\partial_{r}V)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}$
We introduce $1=\partial_{r}|t|$, and we proceed to another integration by
parts in order to write
$\mathcal{T}=-2\iint_{\Omega}(V-\alpha)(\partial_{r}V)\Psi^{3}(\partial_{r}|t|)\frac{dtdx}{|t|^{n-d-1}}=2\iint_{\Omega}|\partial_{r}V|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\\\
+2\iint_{\Omega}(V-\alpha)(\partial_{r}V)\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-2}}+2\iint_{\Omega}(V-\alpha)(\partial_{r}^{2}V)\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}:=\textup{I}+\textup{II}+\textup{III}.$
Thanks to (3.2), the term I is bounded by the square function
$\|S(V|\Psi^{3})\|^{2}_{2}$. Since $\Psi$ satisfies ($\mathcal{COF}$)K, the
cut-off function $|t|\nabla\Psi\in CM(K)$ and so the Cauchy-Schwarz inequality
and the Carleson inequality (3.3) imply
$\displaystyle\textup{II}\leq
CK^{\frac{1}{2}}\|\widetilde{N}(V-\alpha|\Psi^{3})\|_{2}\|S(V|\Psi)\|_{2}\leq\frac{\delta}{3}\|\widetilde{N}(V-\alpha|\Psi^{3})\|^{2}_{2}+C\delta^{-1}K\|S(\overline{\nabla}u|\Psi)\|^{2}_{2}$
for any $\delta\in(0,1)$. As for the term III, we have
$\textup{III}=2\iint_{\Omega}(V-\alpha)\Big{(}[\partial_{r}^{2},\partial_{v}]u\Big{)}\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}+2\iint_{\Omega}(V-\alpha)(\partial_{v}\partial^{2}_{r}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}:=\textup{III}_{1}+\textup{III}_{2}.$
Since $\partial_{v}|t|=0$ whenever $\partial_{v}=\partial_{x}$ or
$\partial_{v}=\partial_{\varphi}$, an integration by parts yields that
$\textup{III}_{2}=-2\iint_{\Omega}(\partial_{v}^{2}u)(\partial^{2}_{r}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}-2\iint_{\Omega}(V-\alpha)(\partial^{2}_{r}u)\partial_{v}(\Psi^{3})\frac{dtdx}{|t|^{n-d-2}}:=\textup{III}_{21}+\textup{III}_{22}.$
The term $\textup{III}_{21}$ is easily bounded by
$C\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2}$, and similarly to II, since
$|t||\partial_{v}\Psi|\in CM(K)$, we have that
$|\textup{III}_{22}|\leq\frac{\delta}{3}\|\widetilde{N}(V-\alpha|\Psi^{3})\|^{2}_{2}+C\delta^{-1}K\|S(\overline{\nabla}u|\Psi)\|_{2}^{2}.$
It remains to bound $\textup{III}_{1}$. Since $\partial_{x}$ and
$\partial_{r}$ commute, the commutator $[\partial^{2}_{r},\partial_{x}]$ is
zero, and hence - when $\partial_{v}=\partial_{x}$ \- we have
$\textup{III}_{1}=0$. Using Proposition 2.4 multiple times gives that
$[\partial_{r}^{2},\partial_{\varphi}]=\partial_{r}[\partial_{r},\partial_{\varphi}]+[\partial_{r},\partial_{\varphi}]\partial_{r}=-\frac{2}{|t|}\partial_{r}\partial_{\varphi}.$
So when $\partial_{v}=\partial_{\varphi}$, we have
$\textup{III}_{1}=-4\iint_{\Omega}(\partial_{\varphi}u-\alpha)(\partial_{r}\partial_{\varphi}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}\\\
=4\iint_{\Omega}(\partial_{r}\partial_{\varphi}u)(\partial_{\varphi}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+4\iint_{\Omega}(\partial_{\varphi}u-\alpha)(\partial_{\varphi}u)\partial_{\varphi}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}:=\textup{III}_{11}+\textup{III}_{12}$
by the integration by part given in Proposition 2.2. Observe that Proposition
2.7 and (3.2) infer that
(3.5) $\iint_{\Omega}|\partial_{\varphi}u|^{2}\Phi\frac{dt\,dx}{|t|^{n-d}}\leq
C\|S(\partial_{\varphi}u|\Phi)\|_{2}^{2}$
So by the Cauchy-Schwarz inequality, we have
$\textup{III}_{11}\leq\|S(\partial_{\varphi}u|\Psi^{3})\|_{2}\left(\iint_{\Omega}|\partial_{\varphi}u|^{2}\Psi^{3}\frac{dt\,dx}{|t|^{n-d}}\right)^{\frac{1}{2}}\lesssim\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2}$
and similarly to II and $\textup{III}_{22}$,
$\textup{III}_{12}\leq\frac{\delta}{3}\|\widetilde{N}(\partial_{\varphi}u-\alpha|\Psi^{3})\|^{2}_{2}+C\delta^{-1}K\iint_{\Omega}|\partial_{\varphi}u|^{2}\Psi\frac{dt\,dx}{|t|^{n-d}}\\\
\leq\frac{\delta}{3}\|\widetilde{N}(\partial_{\varphi}u-\alpha|\Psi^{3})\|^{2}_{2}+C\delta^{-1}K\|S(\overline{\nabla}u|\Psi)\|_{2}^{2}.$
The lemma follows. ∎
Now, we prove the analogue of the previous lemma for the radial derivative,
and we shall use that $u$ is solution to $Lu=0$.
###### Lemma 3.2.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, any cut-off function $\Psi\in
C^{\infty}_{0}(\Omega,[0,1])$ satisfying ($\mathcal{COF}$)K, any real constant
$\alpha$, and any $\delta\in(0,1)$, we have
$\Big{|}\iint_{\Omega}|\partial_{r}u-\alpha|^{2}\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}\Big{|}\leq\delta\|\widetilde{N}(\partial_{r}u-\alpha|\Psi^{3})\|^{2}_{2}+C(1+\delta^{-1}K^{2})\kappa\|\widetilde{N}(\nabla
u|\Psi)\|^{2}_{2}\\\
+C(1+\delta^{-1}K^{2})\|S(\overline{\nabla}u|\Psi)\|^{2}_{2},$
and
(3.6)
$\Big{|}\iint_{\Omega}|\partial_{r}u|^{2}\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}\Big{|}\leq(\delta+\delta^{-1}K^{2}\kappa)\|\widetilde{N}(\partial_{r}u|\Psi)\|^{2}_{2}+C(1+\delta^{-1}K^{2})\|S(\overline{\nabla}u|\Psi^{3})\|^{2}_{2},$
$C$ depends only on $\lambda$, $d$, and $n$.
###### Proof.
We only prove the first bound, since (3.6) is established with the same
computations, by simply switching the places of $\Psi$ and $\Psi^{3}$
when we bound $|\textup{I}_{3}|+|\textup{I}_{5}|$ below. By integration by
parts (see Proposition 2.2), we have
$\mathcal{T}:=\iint_{\Omega}|\partial_{r}u-\alpha|^{2}\partial_{r}(\Psi^{3})\frac{dtdx}{|t|^{n-d-1}}=-2\iint_{\Omega}(\partial_{r}u-\alpha)(\partial_{r}^{2}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}$
But now, we can use the equation in cylindrical coordinate, that is (2.1), to
obtain
$\partial^{2}_{r}u=-\operatorname{div}_{x}\mathcal{A}_{1}\nabla_{x}u-\operatorname{div}_{x}\mathcal{A}_{2}\partial_{r}u-\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}^{2}u$
and then
$\mathcal{T}=2\iint_{\Omega}(\partial_{r}u-\alpha)(\operatorname{div}_{x}\mathcal{A}_{1}\nabla_{x}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+2\iint_{\Omega}(\partial_{r}u-\alpha)(\operatorname{div}_{x}\mathcal{A}_{2}\partial_{r}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}\\\
+\sum_{i,j=d+1}^{n}\iint_{\Omega}(\partial_{r}u-\alpha)(\partial_{\varphi_{ij}}^{2}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}:=\textup{I}+\textup{II}+\textup{III}.$
We first deal with III, which is easier. Since $\Psi$ and $|t|$ are radial,
$\partial_{\varphi_{ij}}(|t|^{d+1-n}\Psi^{3})=0$ and thus, thanks to
integration by parts, III becomes
$\textup{III}=-\sum_{i,j=d+1}^{n}\iint_{\Omega}(\partial_{\varphi_{ij}}\partial_{r}u)(\partial_{\varphi_{ij}}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}\lesssim\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2}$
by the Cauchy-Schwarz inequality and then (3.5). The terms I and II are
similar. We write $\mathcal{A}_{1,2}\nabla_{x,r}u$ for
$\mathcal{A}_{1}\nabla_{x}u+\mathcal{A}_{2}\partial_{r}u$, and by using the
fact that $\partial_{r}|t|=1$, we get
$\textup{I}+\textup{II}=2\iint_{\Omega}(\partial_{r}u-\alpha)(\operatorname{div}_{x}\mathcal{A}_{1,2}\nabla_{x,r}u)\,\Psi^{3}\,\partial_{r}(|t|)\frac{dtdx}{|t|^{n-d-1}}.$
So with an integration by parts to move the derivative $\partial_{r}$ away
from $|t|$, we have
$\textup{I}+\textup{II}=-2\iint_{\Omega}(\partial_{r}u-\alpha)(\partial_{r}\operatorname{div}_{x}\mathcal{A}_{1,2}\nabla_{x,r}u)\,\Psi^{3}\,\frac{dtdx}{|t|^{n-d-2}}\\\
-2\iint_{\Omega}(\partial^{2}_{r}u)(\operatorname{div}_{x}\mathcal{A}_{1,2}\nabla_{x,r}u)\,\Psi^{3}\,\frac{dtdx}{|t|^{n-d-2}}\\\
-2\iint_{\Omega}(\partial_{r}u-\alpha)(\operatorname{div}_{x}\mathcal{A}_{1,2}\nabla_{x,r}u)\,\partial_{r}(\Psi^{3})\,\frac{dtdx}{|t|^{n-d-2}}=\textup{I}_{1}+\textup{I}_{2}+\textup{I}_{3}.$
We integrate further by parts in $\textup{I}_{1}$ to move the
$\operatorname{div}_{x}$ away from
$\partial_{r}\mathcal{A}_{1,2}\nabla_{x,r}u$ (note beforehand that
$\partial_{r}$ and $\operatorname{div}_{x}$ commute), and we obtain
$\textup{I}_{1}=2\iint_{\Omega}(\nabla_{x}\partial_{r}u)\cdot(\partial_{r}\mathcal{A}_{1,2}\nabla_{x,r}u)\,\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\\\
+2\iint_{\Omega}(\partial_{r}u-\alpha)\,(\partial_{r}\mathcal{A}_{1,2}\nabla_{x,r}u)\cdot\nabla_{x}(\Psi^{3})\,\frac{dtdx}{|t|^{n-d-2}}:=\textup{I}_{4}+\textup{I}_{5}.$
So it remains to bound $\textup{I}_{2}$, $\textup{I}_{3}$, $\textup{I}_{4}$,
and $\textup{I}_{5}$. The terms $\textup{I}_{2}$ and $\textup{I}_{4}$ are
similar, in that
$\begin{split}|\textup{I}_{2}|+|\textup{I}_{4}|&\lesssim\iint_{\Omega}|\nabla\overline{\nabla}u||\nabla\mathcal{A}_{1,2}\nabla_{x,r}u|\,\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\\\
&\lesssim\iint_{\Omega}|\nabla\overline{\nabla}u||\nabla\mathcal{A}_{1,2}||\nabla_{x,r}u|\,\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+\iint_{\Omega}|\mathcal{A}_{1,2}||\nabla\overline{\nabla}u|^{2}\,\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\\\
\end{split}$
We use the boundedness of $\mathcal{A}_{1,2}$ and (3.2) to get that the last
term in the right-hand side is bounded by
$\|S(\overline{\nabla}u|\Psi)\|_{2}^{2}$. As for the first term in the right-hand
side above, we use the inequality $ab\leq a^{2}+b^{2}/4$, the fact that
$\nabla\mathcal{A}_{1,2}\in CM(\kappa)$, and the Carleson inequality (3.3) to
bound it by $C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|_{2}^{2}+C\|S(\overline{\nabla}u|\Psi)\|_{2}^{2}$. Altogether,
$|\textup{I}_{2}|+|\textup{I}_{4}|\lesssim\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|_{2}^{2}+\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2}.$
The terms $\textup{I}_{3}$ and $\textup{I}_{5}$ are also similar, in that they
are bounded as follows
$\begin{split}|\textup{I}_{3}|+|\textup{I}_{5}|&\lesssim\iint_{\Omega}|\partial_{r}u-\alpha||\nabla\mathcal{A}_{1,2}\nabla_{x,r}u|\,|\nabla\Psi^{3}|\frac{dtdx}{|t|^{n-d-2}}\\\
&\lesssim\iint_{\Omega}|\partial_{r}u-\alpha||\nabla\mathcal{A}_{1,2}||\nabla_{x,r}u|\,|\nabla\Psi^{3}|\frac{dtdx}{|t|^{n-d-2}}+\iint_{\Omega}|\mathcal{A}_{1,2}||\partial_{r}u-\alpha||\nabla\overline{\nabla}u|\,|\nabla\Psi^{3}|\frac{dtdx}{|t|^{n-d-1}}\\\
&\lesssim\delta\|\widetilde{N}(\partial_{r}u-\alpha|\Psi^{3})\|_{2}^{2}+\delta^{-1}K^{2}\kappa\|\widetilde{N}(\nabla
u|\Psi)\|_{2}^{2}+\delta^{-1}K^{2}\|S(\overline{\nabla}u|\Psi)\|_{2}^{2}\end{split}$
by using the inequality $ab\leq\delta a^{2}+b^{2}/4\delta$, the Carleson
inequality (3.3), the fact that $\Psi$ satisfies $|\nabla\Psi|\leq K/|t|$ and
${\mathds{1}}_{\operatorname{supp}\nabla\Psi}\in CM(K)$, and the fact that
$|\nabla\mathcal{A}_{1,2}|^{2}\leq\kappa$ by (1.8). The lemma follows. ∎
In the following, we summarize the results from Lemma 3.1 and Lemma 3.2.
Before stating the precise result, we should introduce a notation first. We
write $|\nabla u-\vec{\alpha}|^{2}$ for a sum of
$|\nabla_{x}u-\vec{\alpha}_{x}|^{2}$,
$|\nabla_{\varphi}u-\vec{\alpha}_{\varphi}|^{2}$, and
$|\partial_{r}u-\vec{\alpha}_{r}|^{2}$, where $\vec{\alpha}_{x}$,
$\vec{\alpha}_{\varphi}$, and $\vec{\alpha}_{r}$ are different components of
constant vector $\vec{\alpha}$ corresponding to $\nabla_{x}$,
$\nabla_{\varphi}$, and $\partial_{r}$ respectively.
###### Lemma 3.3.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, any cut-off function $\Psi\in
C^{\infty}_{0}(\Omega,[0,1])$ satisfying ($\mathcal{COF}$)K, any constant
vector $\vec{\alpha}$, and any $\delta\in(0,1)$, we have
(3.7) $\Big{|}\iint_{\Omega}|\nabla
u-\vec{\alpha}|^{2}\partial_{r}\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}\Big{|}\leq\delta\|\widetilde{N}(\nabla u-\vec{\alpha}|\Psi^{3})\|^{2}_{2}+C(1+\delta^{-1}K^{2})\kappa\|\widetilde{N}(\nabla
u|\Psi)\|^{2}_{2}\\\
+C(1+\delta^{-1}K^{2})\|S(\overline{\nabla}u|\Psi)\|^{2}_{2},$
$C$ depends only on $\lambda$, $d$, and $n$.
###### Proof.
Immediate from Lemma 3.1 and Lemma 3.2. ∎
## 4\. $N\leq S$ Local Estimates, Part 2: the Good Lambda Argument
The main goal of this section is to establish the “good-lambda” distributional
inequality, that will give the desired $N<S$ estimate.
In this section, a boundary ball (a ball in $\mathbb{R}^{d}$) with center $x$
and radius $l$ will be written $B_{l}(x)$. First, we recall several results
from [FMZ21]. For any compactly supported and continuous function $v$, we define
the function $h_{\beta}(v):\,\mathbb{R}^{d}\rightarrow\mathbb{R}$ by
$\displaystyle
h_{\beta}(v)(x):=\inf\Big{\\{}r>0,\sup_{(y,s)\in\Gamma(x,r)}|v(y,s)|<\beta\Big{\\}},$
where $\Gamma(x,r)\subset\mathbb{R}^{d+1}_{+}$ is defined as the translation
of the cone $\Gamma(0)$ with vertex at $(x,r)$.
###### Lemma 4.1 (Lemma 6.1 in [FMZ21]).
For any $v$ such that $h_{\beta}(v)<\infty$, the map $h_{\beta}(v)$ is a
$1$-Lipschitz function.
###### Lemma 4.2 (Lemma 6.2 in [FMZ21]).
Let $v\in L^{2}_{loc}(\Omega)$ and $\Psi$ be a smooth function which satisfies
$0\leq\Psi\leq 1$. Set $h_{\beta}:=h_{\beta}((v|\Psi^{3})_{W})$. There exists
a small constant $c>0$ depending only on $d$ and $n$ such that for any
$\beta>0$ and any $x$ such that $\widetilde{N}(v|\Psi^{3})(x)>\beta$, we have:
$\displaystyle\mathcal{M}\Big{[}\Big{(}\fint_{y\in
B_{h_{\beta}(.)/2}(.)}\int_{s\in\mathbb{R}^{n-d}}|v|^{2}\Psi^{3}\partial_{r}[\chi_{\beta}^{3}]\frac{ds}{|s|^{n-d-1}}dy\Big{)}^{\frac{1}{2}}\Big{]}(x)\geq
c\beta,$
where $\chi_{\beta}$ is a cut-off function defined as $\chi_{\beta}(y,.)\equiv
0$ if $h_{\beta}(y)=0$ and
$\displaystyle\chi_{\beta}(y,s):=\phi\Big{(}\frac{|s|}{h_{\beta}(y)}\Big{)},\
\ \text{ with }\ \ \phi(r)$ $\displaystyle:=\begin{cases}0&\text{if }0\leq
r<1/5,\\\ (5r-1)/24&\text{ if }1/5\leq r\leq 5,\\\ 1&\text{if
}r>5\end{cases}$
otherwise.
The two above lemmas are analogues of results from [KKPT00] and [DP19] adapted
to our setting and to the use of cut-off functions $\Psi$. Let us first
introduce some specific cut-off functions.
###### Definition 4.3.
Let $\phi\in C_{0}^{\infty}(\mathbb{R})$ be a non-increasing function such
that $\phi\equiv 1$ on $[0,1]$ and $\phi\equiv 0$ on $[2,\infty)$. We define
the cut-off functions on $\Omega$ as
$\Psi_{e}(y,t):=\phi\Big{(}\frac{e(y)}{|t|}\Big{)}{\mathds{1}}_{\Omega}(y,t)$
whenever $y\mapsto e(y)\geq 0$ is a 1-Lipschitz function, in particular,
$\Psi_{\epsilon}(y,t):=\phi\Big{(}\frac{\epsilon}{|t|}\Big{)}{\mathds{1}}_{\Omega}(y,t)$
if $\epsilon>0$. Also, let us denote
$\Psi_{B}(y,t):=\phi\Big{(}\frac{\operatorname{dist}(y,B)}{100|t|}\Big{)}{\mathds{1}}_{\Omega}(y,t)$
if $B$ is a boundary ball. Moreover, we write $\Psi_{B,l,\epsilon}$ for the
product $\Psi_{B}(1-\Psi_{2l})\Psi_{\epsilon}$.
Note that from the fact that $\phi$ is non-increasing, for any (non-negative)
$1$-Lipschitz function $e$, we have
(4.1) $\partial_{r}\Psi_{e}\geq 0.$
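Explicitly (for the reader's convenience), inside $\Omega$ the chain rule gives $\partial_{r}\Psi_{e}=\phi^{\prime}\big(e(y)/|t|\big)\,e(y)\,\partial_{r}\big(|t|^{-1}\big)=-\frac{e(y)}{|t|^{2}}\,\phi^{\prime}\big(e(y)/|t|\big)\geq 0$, since $\phi^{\prime}\leq 0$ and $e\geq 0$.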
The proof of the next lemma is easy but can nevertheless be found after Lemma 4.5
in [FMZ21].
###### Lemma 4.4 ([FMZ21]).
There exists a uniform $K$ that depends only on $d$ and $n$ such that the
functions $\Psi_{e}$ and their “complements” $1-\Psi_{e}$ satisfy
($\mathcal{COF}$)K. Since $\Psi_{\epsilon}$ and $\Psi_{B}$ are particular
cases of $\Psi_{e}$, then (of course) they also satisfy ($\mathcal{COF}$)K
with the same uniform constant $K$. In addition, the property
($\mathcal{COF}$)K is stable under the product, in the sense that if $\Psi$
satisfies ($\mathcal{COF}$)${}_{K_{1}}$ and $\Phi$ satisfies
($\mathcal{COF}$)${}_{K_{2}}$, then $\Psi\Phi$ satisfies
($\mathcal{COF}$)${}_{K_{1}+K_{2}}$.
We state the precise statement of the “good-lambda” distributional inequality
that we will need in the following.
###### Lemma 4.5.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. There exists
$\eta\in(0,1)$ that depends only on $d$ and $n$ and $C>0$ that depends on
$\lambda$, $d$ and $n$ such that the following holds.
For any weak solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, any cut-off
function in the form $\Psi:=\Psi_{B,l,\epsilon}$ for some $\epsilon>0$, some
$l>100\epsilon$, and some boundary ball $B$ of radius $l$, and for any triplet
$\beta>0$, $\gamma>0$, $\delta\in(0,1)$, we have
(4.2) $\displaystyle|\\{x\in\mathbb{R}^{d},\widetilde{N}(\nabla
u|\Psi^{3})(x)>\beta\\}\cap E_{\beta,\gamma,\delta}|\leq
C\gamma^{2}|\\{x\in\mathbb{R}^{d},\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x)>\eta\beta\\}|,$
where
$E_{\beta,\gamma,\delta}:=\bigg{\\{}x\in\mathbb{R}^{d}:\mathcal{M}\Big{[}\Big{(}\fint_{B_{l}(.)}\fint_{l\leq|s|\leq 2l}|\nabla
u|^{2}\Psi_{B}^{3}\,ds\,dy\Big{)}^{1/2}\Big{]}(x)+\delta^{1/2}\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x)\\\ +\delta^{-1/2}\kappa^{1/2}\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x)+\delta^{-1/2}\mathcal{M}[S(\overline{\nabla}u|\Psi)](x)\leq\gamma\beta\bigg{\\}}.$
###### Proof.
Step 1: The Whitney decomposition. We fix $\beta,\delta>0$ and we take a ball
$B\subset\mathbb{R}^{d}$ with radius $l>0$. Define
$\mathcal{E}:=\\{x\in\mathbb{R}^{d},\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x)>\eta\beta\\}.$
We notice that $(\nabla u|\Psi^{3})_{W,a}$ is continuous and $\Psi$ is
compactly supported. Hence $\mathcal{E}$ is open and bounded. We pick a ball
$B_{r}(x)$ of radius $r:=\text{dist}(x,\mathcal{E}^{c})/10$ centered at
$x\in\mathcal{E}$. Under this construction, $\mathcal{E}=\bigcup_{i\in
J}B_{r_{i}}(x_{i})$ and $\sup_{i\in J}r_{i}<\infty$. By Vitali covering lemma,
there exists a countable subcollection of balls $\\{B_{r_{i}}(x_{i})\\}_{i\in
I}$, which are disjoint and satisfy that $\mathcal{E}\subseteq\bigcup_{i\in
I}B_{5r_{i}}(x_{i})$. For each $i\in I$, we set $B_{i}:=B_{10r_{i}}(x_{i})$
and thus there exists a
(4.3) $y_{i}\in\overline{B_{i}}\cap\mathcal{E}^{c}$, in particular
$\mathcal{M}[\widetilde{N}_{a}(\nabla u|\Psi^{3})](y_{i})\leq\eta\beta$.
We define the set $F_{\beta}^{i}$ such that
$\displaystyle
F^{i}=F^{i}_{\beta,\gamma,\delta}:=\\{x\in\overline{B_{i}},\,\widetilde{N}_{a}(\nabla
u|\Psi^{3})(x)>\beta\\}\cap E_{\beta,\gamma,\delta}.$
It suffices to prove that for each $i\in I$,
(4.4) $\displaystyle|F^{i}|\lesssim C\gamma^{2}|B_{i}|$
because $\sum_{i\in I}|B_{i}|\leq 10^{d}\sum_{i\in I}|B_{r_{i}}(x_{i})|\leq
10^{d}|\mathcal{E}|.$ The inequality (4.4) is trivial when $F_{\beta}^{i}=\emptyset$. Hence, in the sequel of the proof, we assume that $F_{\beta}^{i}$ is non-empty.
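For the reader's convenience, here is the routine summation that turns (4.4) into (4.2). Since $\widetilde{N}(\nabla u|\Psi^{3})$ is continuous, it is bounded pointwise by $\mathcal{M}[\widetilde{N}(\nabla u|\Psi^{3})]$, so the set on the left-hand side of (4.2) is contained in $\mathcal{E}\subset\bigcup_{i\in I}\overline{B_{i}}$, and therefore
$|\{\widetilde{N}(\nabla u|\Psi^{3})>\beta\}\cap E_{\beta,\gamma,\delta}|\leq\sum_{i\in I}|F^{i}|\lesssim\gamma^{2}\sum_{i\in I}|B_{i}|\leq 10^{d}\gamma^{2}\sum_{i\in I}|B_{r_{i}}(x_{i})|\leq 10^{d}\gamma^{2}|\mathcal{E}|,$
which is exactly (4.2).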
Step 2: Localization of $\widetilde{N}(\nabla u|\Psi^{3})$ in $B_{i}$. In this
step, we show that if $x\in F^{i}$, then $\widetilde{N}(\nabla u|\Psi^{3})(x)$
has to reach its maximum value at a point $(z,r)\in\Gamma(x)$ verifying $r\leq
r_{i}$. Indeed, take $x\in F^{i}$ and then $(z,r)\in\Gamma(x)$ such that
$r>r_{i}$. Notice that $(z,r)\in\bigcup_{y\in B_{r}(z)}\Gamma(y)$, so
(4.5) $\displaystyle(\nabla u|\Psi^{3})_{W}(z,r)\leq\widetilde{N}(\nabla
u|\Psi^{3})(y)\ \ \text{for all $y\in B_{r}(z)\subset B_{20r}(y_{i})$}.$
Therefore, for a constant $C$ that depends only on $d$,
$(\nabla u|\Psi^{3})_{W}(z,r)\leq C\,\mathcal{M}[\widetilde{N}(\nabla u|\Psi^{3})](y_{i})\leq C\eta\beta<\beta$
by (4.3), if $\eta$ is small enough (depending only on $d$). So it means that
for any $x\in F^{i}$, we have
(4.6) $\beta<\widetilde{N}(\nabla
u|\Psi^{3})(x)=\sup_{(z,r)\in\Gamma(x)}{\mathds{1}}_{r\leq r_{i}}(\nabla
u|\Psi^{3})_{W}(z,r)\qquad\text{ for }x\in F^{i}.$
We construct the cut-off function
$\Phi_{i}(y,s):=(1-\Psi_{K_{i}r_{i}})\Psi_{F^{i}}$ where
$\Psi_{K_{i}r_{i}}:=\Psi_{\epsilon_{i}}$ for $\epsilon_{i}:=K_{i}r_{i}$ and
$\Psi_{F^{i}}:=\Psi_{e^{i}}$ for the ($1/10$-Lipschitz, hence $1$-Lipschitz) function
$e^{i}(y):=\operatorname{dist}(y,F^{i})/10$. The constants $10$ and $K_{i}$
in the construction of $\Phi_{i}$ are large enough so that $\Phi_{i}(y,t)=1$
whenever $(y,t)\in W(z,r)$ for $(z,r)\in\Gamma(x)\cap\\{r\leq r_{i}\\}$ and
$x\in F_{i}$. With such a choice and by (4.6), we have that
(4.7) $\widetilde{N}(\nabla u|\Psi^{3}\Phi_{i}^{3})(x)=\widetilde{N}(\nabla
u|\Psi^{3})(x)>\beta\qquad\text{ for }x\in F^{i}.$
We leave a little bit of freedom in the choice of $K_{i}$ to avoid some future
complications. Notice that
(4.8)
$\operatorname{supp}(\nabla\Psi_{K_{i}r_{i}})\cap\operatorname{supp}(\Psi_{F_{i}})\subset
S_{i}:=(K_{i}+1)B_{i}\times\\{s\in\mathbb{R}^{n-d},\,K_{i}r_{i}/2<|s|<K_{i}r_{i}\\}.$
We first try $K_{i}=4$, which is large enough for (4.7) to be satisfied. If
$S_{i}$ intersects $\\{\Psi\neq 0\\}\cap\\{\Psi\neq 1\\}$, then we test
$K_{i}=8$ instead. If $S_{i}$ still intersects $\\{\Psi\neq
0\\}\cap\\{\Psi\neq 1\\}$, we multiply $K_{i}$ by 2 and we stop at the first
time when
(4.9) $S_{i}\cap\\{\Psi\neq 0\\}\cap\\{\Psi\neq 1\\}=\emptyset,\ \text{ i.e.
either }\ S_{i}\subset\\{\Psi\equiv 1\\}\text{ or }S_{i}\subset\\{\Psi\equiv
0\\}.$
Since $\Psi=\Psi_{B,l,\epsilon}$ is constructed from the product of three cut-off functions $\Psi_{e}$, where $e$ is either constant or a slowly growing $1/100$-Lipschitz function, while $\Psi_{F_{i}}$ is constructed with a faster growing $1/10$-Lipschitz function, $K_{i}$ can only take a uniformly finite number of values (we expect $K_{i}\leq 2^{7}$, and we state $K_{i}\leq 2^{10}$ to leave some margin of error).
Step 3: Catching the level sets of $\widetilde{N}(\nabla u|\Psi^{3})$. Let
$h_{\beta}:=h_{\beta}((\nabla u|\Psi^{3}\Phi_{i}^{3})_{W})$. Lemma 4.2 and
(4.7) entail that
(4.10) $\displaystyle c\beta\leq\mathcal{M}\bigg[\bigg(\fint_{y\in B_{h_{\beta}(.)/2}(.)}\int_{s\in\mathbb{R}^{n-d}}|\nabla u|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}\bigg)^{\frac{1}{2}}\bigg](x)\qquad\text{ for }x\in F^{i}.$
We know from (4.9) that either $S_{i}\subset\\{\Psi\equiv 1\\}$ or
$S_{i}\subset\\{\Psi\equiv 0\\}$. We set
(4.13)
$\displaystyle\vec{\alpha}_{i}:=\frac{1}{|S_{i}|}\iint_{S_{i}}(\nabla u)\Psi^{3/2}\,dy\,ds=\left\{\begin{array}{l}\frac{1}{|S_{i}|}\iint_{S_{i}}(\nabla u)\,dy\,ds\quad\text{ if }S_{i}\subset\{\Psi\equiv 1\}\\ 0\quad\text{ otherwise}\end{array}\right.$
and we want to show that $|\vec{\alpha}_{i}|$ is smaller than $c\beta/2$,
where $c$ is the constant in (4.10). We select $N$ points
$\\{z_{j}\\}_{j=1}^{N}\in 2K_{i}B_{i}$ such that
$S_{i}\subset\bigcup_{j=1}^{N}W(z_{j},K_{i}r_{i})$. We can always do so with a
uniformly bounded number $N$ of points, because $K_{i}$ is itself uniformly
bounded (between 4 and $2^{10}$). So we easily have by simply using the
definition of $\vec{\alpha}_{i}$, $(\nabla u|\Psi^{3})_{W}$,
$\widetilde{N}(\nabla u|\Psi^{3})$ and then (4.3) that
(4.14) $|\vec{\alpha}_{i}|\leq C\sum_{j=1}^{N}(\nabla u|\Psi^{3})_{W}(z_{j},K_{i}r_{i})\leq C\sum_{j=1}^{N}\fint_{B_{K_{i}r_{i}}(z_{j})}\widetilde{N}(\nabla u|\Psi^{3})\,dx\\ \leq C^{\prime}\fint_{30K_{i}B_{i}}\widetilde{N}_{a}(\nabla u|\Psi^{3})\,dx\leq C^{\prime}\mathcal{M}[\widetilde{N}(\nabla u|\Psi^{3})](y_{i})\leq C^{\prime}\eta\beta\leq c\beta/2,$
if $\eta$ is small enough (depending only on $d$). The combination of (4.10)
and (4.14) implies that
(4.15) $\displaystyle c\beta/2\leq\mathcal{M}\bigg[\bigg(\fint_{y\in B_{h_{\beta}(.)/2}(.)}\int_{s\in\mathbb{R}^{n-d}}|\nabla u-\vec{\alpha}_{i}|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}\bigg)^{\frac{1}{2}}\bigg](x)\ \text{ for }x\in F^{i}.$
Step 4: From a pointwise estimate to integral estimates. The result (4.15)
from the previous step implies that
$|F^{i}|\lesssim\frac{1}{\beta^{2}}\bigg\|\mathcal{M}\bigg[\bigg(\fint_{y\in B_{h_{\beta}(.)/2}(.)}\int_{s\in\mathbb{R}^{n-d}}|\nabla u-\vec{\alpha}_{i}|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}\bigg)^{\frac{1}{2}}\bigg]\bigg\|^{2}_{2}\\ \lesssim\frac{1}{\beta^{2}}\int_{\mathbb{R}^{d}}\fint_{y\in B_{h_{\beta}(x)/2}(x)}\int_{s\in\mathbb{R}^{n-d}}|\nabla u-\vec{\alpha}_{i}|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}\,dx$
thanks to the $L^{2}$-boundedness of the Hardy-Littlewood maximal operator
$\mathcal{M}$. According to Lemma 4.1, the function $h_{\beta}$ is $1$-Lipschitz, that is, $|h_{\beta}(x)-h_{\beta}(y)|\leq|x-y|$. If $y\in
B_{h_{\beta}(x)/2}(x)$, then the Lipschitz condition implies that
$|h_{\beta}(x)-h_{\beta}(y)|\leq|x-y|\leq h_{\beta}(x)/2$ and thus
$h_{\beta}(x)/2\leq h_{\beta}(y)\leq 3h_{\beta}(x)/2$. Consequently, by
Fubini’s theorem,
$|F^{i}|\lesssim\frac{1}{\beta^{2}}\iint_{\Omega}|\nabla
u-\vec{\alpha}_{i}|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}\bigg{(}\int_{x\in
B_{h_{\beta}(y)}(y)}h_{\beta}(x)^{-d}dx\bigg{)}\\\
\lesssim\frac{1}{\beta^{2}}\iint_{\Omega}|\nabla
u-\vec{\alpha}_{i}|^{2}\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}.$
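Let us also record why the inner integral in parentheses above is harmless (a small clarification of this step): after Fubini, the $x$-integration actually runs over $\{x:\,|x-y|<h_{\beta}(x)/2\}$, and on this set the Lipschitz property gives $h_{\beta}(x)\geq\frac{2}{3}h_{\beta}(y)$ as well as $|x-y|<h_{\beta}(y)$, so that
$\int_{\{x:\,|x-y|<h_{\beta}(x)/2\}}h_{\beta}(x)^{-d}\,dx\lesssim h_{\beta}(y)^{-d}\,|B_{h_{\beta}(y)}(y)|\lesssim 1,$
which yields the second line.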
Recall that
$\Psi\Phi_{i}=\Psi_{B}\Psi_{F^{i}}\Psi_{\epsilon}(1-\Psi_{K_{i}r_{i}})(1-\Psi_{2l}).$
By (4.1), $\partial_{r}[\Psi_{B}\Psi_{F^{i}}\Psi_{\epsilon}]\geq 0$ and thus
the product rule implies
$\displaystyle\Psi^{3}\Phi^{3}_{i}\partial_{r}[\chi_{\beta}^{3}]\leq\partial_{r}[\Psi^{3}\Phi^{3}_{i}\chi_{\beta}^{3}]+\partial_{r}[\Psi^{3}_{K_{i}r_{i}}]\Psi^{3}_{F^{i}}\Psi^{3}\chi^{3}_{\beta}+\partial_{r}[\Psi^{3}_{2l}]\Psi_{B}^{3}\Psi^{3}_{\epsilon}\Phi_{i}^{3}\chi^{3}_{\beta}.$
It follows that:
$|F^{i}|\lesssim\frac{1}{\beta^{2}}\iint_{\Omega}|\nabla
u-\vec{\alpha}_{i}|^{2}\partial_{r}[\Psi^{3}_{K_{i}r_{i}}]\Psi^{3}_{F^{i}}\Psi^{3}\chi^{3}_{\beta}\frac{dsdy}{|s|^{n-d-1}}\\\
+\frac{1}{\beta^{2}}\iint_{\Omega}|\nabla
u-\vec{\alpha}_{i}|^{2}\partial_{r}[\Psi^{3}_{2l}]\Psi_{B}^{3}\Psi^{3}_{\epsilon}\Phi_{i}^{3}\chi^{3}_{\beta}\frac{dsdy}{|s|^{n-d-1}}\\\
+\frac{1}{\beta^{2}}\iint_{\Omega}|\nabla
u-\vec{\alpha}_{i}|^{2}\partial_{r}[\Psi^{3}\Phi^{3}_{i}\chi_{\beta}^{3}]\frac{dsdy}{|s|^{n-d-1}}:=\textup{I}+\textup{II}+\textup{III}.$
In order to prove the claim (4.4), and hence the lemma, it suffices to show
$\textup{I}+\textup{II}+\textup{III}\leq C\gamma^{2}|B_{i}|$ with a constant
$C$ that depends only on $\lambda$, $d$, and $n$.
Step 5: We treat I. We recall that
$S_{i}\supset\operatorname{supp}(\partial_{r}\Psi_{K_{i}r_{i}})\cap\operatorname{supp}(\Psi_{F_{i}})$,
see (4.8), and $|\nabla\Psi_{K_{i}r_{i}}|\lesssim|t|^{-1}$ since $\Psi_{K_{i}r_{i}}$ satisfies
($\mathcal{COF}$). Therefore,
$\textup{I}\lesssim\frac{|B_{i}|}{\beta^{2}}\frac{1}{|S_{i}|}\iint_{S_{i}}|\nabla u-\vec{\alpha}_{i}|^{2}\Psi^{3}\,ds\,dy=\frac{|B_{i}|}{\beta^{2}}\,{\mathds{1}}_{S_{i}\subset\{\Psi\equiv 1\}}\,\frac{1}{|S_{i}|}\iint_{S_{i}}|\overline{\nabla}u-\vec{\alpha}_{i}|^{2}\,ds\,dy
since we chose $K_{i}$ so that $\Psi$ is either constant equal to 0 or
constant equal to 1 in $S_{i}$, see (4.9), and since changing $\nabla$ to
$\overline{\nabla}$ is just rewriting a vector with a different system of
coordinates (and of course we rewrite $\vec{\alpha}_{i}$ in this system of
coordinates too). If $\Psi\equiv 0$ on $S_{i}$, the bound $I=0\leq
C\gamma^{2}|B_{i}|$ is trivial. So we assume for the rest of the step that
$\Psi\equiv 1$ on $S_{i}$. In this case, since $\vec{\alpha}_{i}$ is the
average of $\nabla u$ on $S_{i}$, the Poincaré inequality yields that:
(4.16)
$\textup{I}\lesssim\frac{r_{i}^{2}|B_{i}|}{\beta^{2}}\frac{1}{|S_{i}|}\iint_{S_{i}}|\nabla\overline{\nabla}u|^{2}\,dsdy\lesssim\frac{|B_{i}|}{\beta^{2}}\iint_{S_{i}}|\nabla\overline{\nabla}u|^{2}\Psi^{3}\frac{ds\,dy}{|s|^{n-2}}$
because $\Psi\equiv 1$ on $S_{i}$. We adapt the argument that we used to
establish (4.14). We pick a collection of points $\\{z_{j}\\}_{j=1}^{N}\in
2K_{i}B_{i}$ such that
$S_{i}\subset\bigcup_{j=1}^{N}B_{K_{i}r_{i}/4}(z_{j})\times\\{K_{i}r_{i}/2<|s|<K_{i}r_{i}\\}.$
We can choose the collection so that $N$ is uniformly bounded. Since
$B_{K_{i}r_{i}/4}(z_{j})\times\\{K_{i}r_{i}/2<|s|<K_{i}r_{i}\\}\subset\widehat{\Gamma}(x)\
\text{ for }x\in B_{K_{i}r_{i}/4}(z_{j}),$
we have
(4.17)
$\textup{I}\lesssim\frac{|B_{i}|}{\beta^{2}}\sum_{j=1}^{N}\iint_{B_{K_{i}r_{i}/4}(z_{j})\times\\{K_{i}r_{i}/2<|s|<K_{i}r_{i}\\}}|\nabla\overline{\nabla}u|^{2}\Psi^{3}\frac{ds\,dy}{|s|^{n-d-2}}\\\
\lesssim\frac{|B_{i}|}{\beta^{2}}\sum_{j=1}^{N}\left(\fint_{B_{K_{i}r_{i}/4}(z_{j})}S(\overline{\nabla}u|\Psi^{3})(x)\,dx\right)^{2}\lesssim\frac{|B_{i}|}{\beta^{2}}\left(\fint_{30K_{i}B_{i}}S(\overline{\nabla}u|\Psi^{3})(x)\,dx\right)^{2}\\ \leq\frac{|B_{i}|}{\beta^{2}}\left(\mathcal{M}[S(\overline{\nabla}u|\Psi)](x_{i})\right)^{2}\leq\gamma^{2}|B_{i}|,$
where $x_{i}$ is any point of the non-empty set $F^{i}\subset
E_{\beta,\gamma,\delta}$, and the last inequality comes from the fact that
$x_{i}\in E_{\beta,\gamma,\delta}$ (we could even have
$\textup{I}\lesssim\delta\gamma^{2}|B_{i}|$).
Step 6: We deal with II. Observe that
$\operatorname{supp}\\{\Psi_{B}\partial_{r}\Psi_{2l}\\}\subset
500B\times\\{\,l\leq|s|\leq 2l\\}\ \text{and }\
\operatorname{supp}\\{\Phi_{i}\\}\subset\\{\operatorname{dist}(y,B_{i})/20\leq|s|\leq
K_{i}r_{i}\\}$
since we know that $K_{i}\leq 2^{10}$. The integral II is non-zero only if the
(interior of the) two above supports intersect, and in this case, we
necessarily have
(4.18) $l<K_{i}r_{i}\leq 2^{10}r_{i}\ \text{ and }\
\operatorname{dist}(500B,F_{i})<40l$
which we now assume. So $500B\subset 2^{20}B_{i}$ and we can find a boundary
point $x_{i}\in\mathbb{R}^{d}$ such that
(4.19) $x_{i}\in F_{i}\cap 550B.$
By the triangle inequality and the fact that $|\nabla\Psi_{2l}|\lesssim 1/l$ on
$\operatorname{supp}(\nabla\Psi_{2l})$, we have
$\displaystyle\textup{II}\lesssim\frac{|2^{20}B_{i}|}{\beta^{2}}\fint_{500B}\fint_{l\leq|s|\leq
2l}|\nabla
u|^{2}\Psi_{B}^{3}\,ds\,dy+\frac{|2^{20}B_{i}||\vec{\alpha}_{i}|^{2}}{\beta^{2}}\Psi_{B}^{3}=\textup{II}_{1}+\textup{II}_{2}.$
We want to bound $\textup{II}_{1}$ with the help of the Hardy-Littlewood
maximal function of $x\to\Big(\fint_{B_{l}(x)}\fint_{l<|s|<2l}|\nabla u|^{2}dsdy\Big)^{1/2}$. So we proceed as we already did several times, see around (4.14) and (4.17). We take a uniformly finite collection of points $\{z_{j}\}_{j=1}^{N}\in 501B$ such that $500B\subset\bigcup B_{l/2}(z_{j})$, and since $\fint_{B_{l/2}(z_{j})}|\nabla u|^{2}\lesssim\fint_{B_{l}(x)}|\nabla u|^{2}$ for any $x\in B_{l/2}(z_{j})$, we have
(4.20)
$\textup{II}_{1}\lesssim\frac{|B_{i}|}{\beta^{2}}\sum_{j=1}^{N}\fint_{B_{l/2}(z_{j})}\fint_{l<|s|<2l}|\nabla
u|^{2}\Psi_{B}^{3}ds\,dy\\\
\lesssim\frac{|B_{l}|}{\beta^{2}}\sum_{j=1}^{N}\left(\fint_{B_{l/2}(z_{j})}\Big{(}\fint_{B_{l}(x)}\fint_{l<|s|<2l}|\nabla
u|^{2}\Psi_{B}^{3}\,ds\,dy\Big{)}^{\frac{1}{2}}dx\right)^{2}\\\
\lesssim\frac{|B_{i}|}{\beta^{2}}\left(\fint_{B_{2000l}(x_{i})}\Big{(}\fint_{B_{l}(x)}\fint_{l<|s|<2l}|\nabla
u|^{2}\Psi_{B}^{3}\,ds\,dy\Big{)}^{\frac{1}{2}}dx\right)^{2}\\\
\leq\frac{|B_{i}|}{\beta^{2}}\left(\mathcal{M}\Big[\Big(\fint_{B_{l}(.)}\fint_{l<|s|<2l}|\nabla u|^{2}\Psi_{B}^{3}\,ds\,dy\Big)^{\frac{1}{2}}\Big](x_{i})\right)^{2}\leq\gamma^{2}|B_{i}|,$
because $x_{i}\in E_{\beta,\gamma,\delta}$. It remains to bound
$\textup{II}_{2}$, but that one will be easy. Without loss of generality, we
can assume the support of the function $\phi$ used to construct $\Psi_{2l}$ to
be exactly $[1,2]$ and hence the support of $\partial_{r}\Psi_{2l}$ to be
exactly $\\{l\leq|s|\leq 2l\\}$. But the set $S_{i}$ defined in (4.8) and used
to build $\vec{\alpha}_{i}$ has to be included by construction in either
$\\{\Psi\equiv 1\\}$ or $\\{\Psi\equiv 0\\}$. Combined with (4.18), this forces
$S_{i}\subset\\{\Psi\equiv 0\\}$, hence $\vec{\alpha}_{i}=0$ and thus
$\textup{II}_{2}=0$.
Step 7: We bound III to conclude. As discussed at the end of Step 4, we needed
to bound I, II, and III by $C\gamma^{2}|B_{i}|$ to finish the proof of the
lemma. We already proved the desired estimates of I and II in Steps 5 and 6,
so it remains to show $\textup{III}\lesssim\gamma^{2}|B_{i}|$.
We have not used Section 3 up to this point, so, as one could expect, it will appear
in this last step. We easily have that
(4.21) $\displaystyle\|\widetilde{N}(\nabla
u-\vec{\alpha}_{i}|\Psi^{3}\Phi_{i}^{3})\|^{2}_{2}\lesssim\|\widetilde{N}(\nabla
u|\Psi^{3}\Phi^{3}_{i})\|^{2}_{2}+|\vec{\alpha}_{i}|^{2}\|\widetilde{N}(1|\Phi_{i})\|^{2}_{2}.$
Lemma 4.4 shows that $\Psi^{3}\Phi_{i}^{3}\chi_{\beta}^{3}$ satisfies
($\mathcal{COF}$) with a constant that depends only on $d$ and $n$. Thus we
apply Lemma 3.3 to the term III. Together with (4.21), we deduce that
$\textup{III}\leq\frac{1}{\beta^{2}}\Big{\\{}\delta\|\widetilde{N}(\nabla
u|\Psi^{3}\Phi_{i}^{3})\|^{2}_{2}+\delta|\vec{\alpha}_{i}|^{2}\|\widetilde{N}(1|\Phi_{i})\|^{2}_{2}\\\
+C(1+\delta^{-1}K^{2})\kappa\|\widetilde{N}(\nabla
u|\Psi\Phi_{i})\|^{2}_{2}+C(1+\delta^{-1}K^{2})\|S(\overline{\nabla}u|\Psi\Phi_{i})\|^{2}_{2}\Big{\\}}$
Let $v$ be any function for which $\widetilde{N}(v|\Phi_{i})$ or $S(v|\Phi_{i})$ makes sense. In this situation, the non-tangential maximal function $\widetilde{N}(v|\Phi_{i})$ and the square function $S(v|\Phi_{i})$ are supported in a ball $CB_{i}$, where $C$ is universal. Why? Because $\Phi_{i}$ is supported in a saw-tooth region on top of $F^{i}\subset B_{i}$, which is truncated above by $K_{i}r_{i}$. Hence the Whitney boxes $W(z,r)$ for which $(v|\Phi_{i})_{W(z,r)}\neq 0$ are such that $r\leq 2K_{i}r_{i}$ and then $z\in 10K_{i}B_{i}$, which means that $\operatorname{supp}N(v|\Phi_{i})\subset 10K_{i}B_{i}$. Similarly, the points $(y,t)$ for which $\Phi_{i}(y,t)\nabla v(y,t)\neq 0$ are such that $|t|\leq K_{i}r_{i}$ and then $y\in 3K_{i}B_{i}$, which implies that $\operatorname{supp}S(v|\Phi_{i})\subset 3K_{i}B_{i}$. Altogether
(4.22)
$\operatorname{supp}S(v|\Phi_{i})\cup\operatorname{supp}N(v|\Phi_{i})\subset
10K_{i}B_{i}\subset B^{*}_{i}:=2^{14}B_{i}.$
With this observation, we have
$\textup{III}\lesssim\frac{|B_{i}|}{\beta^{2}}\Big{\\{}\delta|\vec{\alpha}_{i}|^{2}+\delta\big{[}\sup_{B^{*}_{i}}\widetilde{N}(\nabla
u|\Psi^{3}\Phi_{i}^{3})\big{]}^{2}+\delta^{-1}\kappa\big{[}\sup_{B^{*}_{i}}\widetilde{N}(\nabla
u|\Psi\Phi_{i})\big{]}^{2}+\delta^{-1}\big{[}\sup_{B^{*}_{i}}S(\overline{\nabla}u|\Psi\Phi_{i})\big{]}^{2}\Big{\\}}\\\
:=\textup{III}_{1}+\textup{III}_{2}+\textup{III}_{3}+\textup{III}_{4}.$
The last three terms are handled in a similar manner; $\textup{III}_{1}$ will be treated at the end. Recall that $\Phi_{i}$ is supported in a saw-tooth region over $F_{i}$ truncated at $K_{i}r_{i}$. If $(\nabla u|\Psi^{3}\Phi_{i}^{3})_{W}(z^{\prime},r^{\prime})\neq 0$, then $W(z^{\prime},r^{\prime})\cap{\rm supp}\\{\Phi_{i}\\}\neq\emptyset$ and there exists an $x_{i}\in F^{i}\subset B_{i}$ such that $|x_{i}-z^{\prime}|\leq 1000r^{\prime}\leq 2^{20}r_{i}$. It follows that for all
$(z^{\prime},r^{\prime})\in\mathbb{R}^{d+1}_{+}$,
(4.23) $(\nabla u|\Psi^{3}\Phi_{i}^{3})_{W}(z^{\prime},r^{\prime})\leq\fint_{B_{r^{\prime}}(z^{\prime})}\widetilde{N}(\nabla u|\Psi^{3}\Phi_{i}^{3})(z)\,dz\\ \lesssim\fint_{B_{1000r^{\prime}}(x_{i})}\widetilde{N}(\nabla u|\Psi^{3}\Phi_{i}^{3})(z)\,dz\leq\mathcal{M}[\widetilde{N}_{a}(\nabla u|\Psi^{3})](x_{i}).$
Consequently, for each $z\in\mathbb{R}^{d}$, there exists an $x_{i}\in
F_{\beta}^{i}$ such that
(4.24) $\displaystyle\delta\widetilde{N}(\nabla
u|\Psi^{3}\Phi_{i}^{3})(z)\lesssim\delta\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x_{i})\leq\gamma\beta$
where the last inequality follows from the fact that $x_{i}\in
E_{\beta,\gamma,\delta}$. We easily deduce
$\textup{III}_{2}:=\frac{\delta|B_{i}|}{\beta^{2}}\big{[}\sup_{B^{*}_{i}}\widetilde{N}(\nabla
u|\Psi^{3}\Phi_{i}^{3})\big{]}^{2}\lesssim\gamma^{2}|B_{i}|.$
Similarly, we have
$\textup{III}_{3}:=\frac{\delta^{-1}\kappa|B_{i}|}{\beta^{2}}\big{[}\sup_{B^{*}_{i}}\widetilde{N}(\nabla
u|\Psi\Phi_{i})\big{]}^{2}\lesssim\gamma^{2}|B_{i}|.$
The term $\textup{III}_{4}$ follows the same lines. If $y\in F^{i}$, then
$S(\overline{\nabla}u|\Psi\Phi_{i})(y)\leq\gamma\beta$. If $y\notin F_{i}$, we
take $x_{i}\in F^{i}$ such that
$r_{y}:=\operatorname{dist}(y,F_{i})=|y-x_{i}|$. We know from the construction
of $\Psi_{F^{i}}$ that
(4.25) $\Phi_{i}(z,s)\equiv 0\ \text{ for $|z-y|<r_{i}/10$ and
$|s|<r_{i}/400$.}$
We cover $B_{r_{i}/20}(y)$ by a uniformly finite collection of balls
$\\{B_{r_{i}/800}(z_{j})\\}_{j=1}^{N}$, and we notice that for any collection
$\\{w_{j}\\}_{j=1}^{N}$ of points satisfying $w_{j}\in B_{r_{i}/800}(z_{j})$,
we have
$\widehat{\Gamma}(y)\cap\operatorname{supp}\Phi_{i}\subset\bigcup_{j=1}^{N}\widehat{\Gamma}(w_{j}).$
We conclude that
$S(\overline{\nabla}u|\Psi\Phi_{i})(y)\leq\sum_{j=1}^{N}\fint_{B_{r_{i}/800}(z_{j})}S(\overline{\nabla}u|\Psi\Phi_{i})(w)\,dw\\ \lesssim\fint_{B_{2r_{i}}(x_{i})}S(\overline{\nabla}u|\Psi)(w)\,dw\leq\mathcal{M}\Big[S(\overline{\nabla}u|\Psi)\Big](x_{i})\leq\delta^{1/2}\gamma\beta,$
and then $\textup{III}_{4}\lesssim\gamma^{2}|B_{i}|$ as desired.
It remains to bound $\textup{III}_{1}$. We apply the same argument as for (4.14), using $x_{i}\in F^{i}$ instead of $y_{i}$. So we have
(4.26)
$\displaystyle\delta|\vec{\alpha}_{i}|^{2}\lesssim\delta\big{|}\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})](x_{i})\big{|}^{2}\lesssim\gamma^{2}\beta^{2},$
because $x_{i}\in E_{\beta,\gamma,\delta}$, from which we easily deduce
$\textup{III}_{1}\lesssim\gamma^{2}|B_{i}|$. The lemma follows. ∎
The “good-lambda” distributional inequality (4.2) can be used to derive the
following $L^{p}$ boundedness result.
###### Lemma 4.6.
Let $p>1$ and $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,M,κ. For
any weak solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, any cut-off
function in the form $\Psi:=\Psi_{B,l,\epsilon}$ for some $\epsilon>0$, some
$l>100\epsilon$, and some boundary ball $B$ of radius $l$, we have
$\|\widetilde{N}(\nabla u|\Psi^{3})\|^{p}_{p}\leq C_{p}\bigg\|\bigg(\fint_{B_{l}(.)}\fint_{l\leq|s|\leq 2l}|\nabla u|^{2}\Psi_{B}^{3}\,ds\,dy\bigg)^{1/2}\bigg\|^{p}_{p}+C_{p}\kappa^{p/2}\|\widetilde{N}(\nabla u|\Psi)\|^{p}_{p}\\ +C_{p}\|S(\overline{\nabla}u|\Psi)\|^{p}_{p}$
where $C_{p}>0$ depends only on $\lambda$, $d$, $n$, and $p$.
###### Remark 4.7.
The limitation $p>1$ comes from the fact that we used the maximal function
$\mathcal{M}$ in Lemma 4.5. However, with the same arguments, we could prove
an analogue of (4.2) where we replace $\mathcal{M}$ by $\mathcal{M}_{q}$
defined as $\mathcal{M}_{q}[f]:=\big{(}\mathcal{M}[f^{q}]\big{)}^{1/q}$ for
any $q>0$ (with a constant $C$ depending now also on $q$). Then we could
establish Lemma 4.6 for any $p>0$ by invoking the version of (4.2) that uses
$\mathcal{M}_{q}$ with $0<q<p$.
###### Proof.
We apply the distributional inequality (4.2) to obtain that there exists an
$\eta>0$ such that for any $\gamma,\delta\in(0,1)$, we have
$\begin{split}\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{p}_{p}&=c_{p}\int_{0}^{\infty}\beta^{p-1}|\\{\widetilde{N}(\nabla
u|\Psi^{3})>\beta\\}|d\beta\\\ &\leq
c_{p}\int_{0}^{\infty}\beta^{p-1}|\\{\widetilde{N}(\nabla
u|\Psi^{3})>\beta\\}\cap
E_{\beta,\gamma,\delta}|d\beta+c_{p}\int_{0}^{\infty}\beta^{p-1}|E^{c}_{\beta,\gamma,\delta}|d\beta\\\
&\lesssim
c_{p}\gamma^{2}\int_{0}^{\infty}\beta^{p-1}|\\{\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})]>\eta\beta\\}|d\beta+c_{p}\int_{0}^{\infty}\beta^{p-1}|E^{c}_{\beta,\gamma,\delta}|d\beta\\\
:=\textup{I}+\textup{II}\end{split}$
where the implicit constant depends only on $p$. On the one hand, we have
$\textup{I}=\frac{\gamma^{2}}{\eta^{p}}\|\mathcal{M}[\widetilde{N}(\nabla
u|\Psi^{3})]\|^{p}_{p}\lesssim\frac{\gamma^{2}}{\eta^{p}}\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{p}_{p}$
by the $L^{p}$-boundedness of the Hardy-Littlewood maximal operator. On the
other hand,
$\begin{split}\textup{II}&=\gamma^{-p}\Big\|\mathcal{M}\Big[\Big(\fint_{B_{l}(.)}\fint_{l\leq|s|\leq 2l}|\nabla u|^{2}\Psi_{B}^{3}\,ds\,dy\Big)^{1/2}\Big]+\delta^{1/2}\mathcal{M}[\widetilde{N}(\nabla u|\Psi^{3})]\\ &\qquad\qquad+\delta^{-1/2}\kappa^{1/2}\mathcal{M}[\widetilde{N}(\nabla u|\Psi)]+\delta^{-1/2}\mathcal{M}[S(\overline{\nabla}u|\Psi)]\Big\|_{p}^{p}\\ &\lesssim\gamma^{-p}\Big\|\Big(\fint_{B_{l}(.)}\fint_{l\leq|s|\leq 2l}|\nabla u|^{2}\Psi_{B}^{3}\,ds\,dy\Big)^{1/2}+\delta^{1/2}\widetilde{N}(\nabla u|\Psi^{3})+\delta^{-1/2}\kappa^{1/2}\widetilde{N}(\nabla u|\Psi)\\ &\qquad\qquad+\delta^{-1/2}S(\overline{\nabla}u|\Psi)\Big\|_{p}^{p}\end{split}$
again using the $L^{p}$-boundedness of the Hardy-Littlewood maximal operator.
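For clarity, the first equality for $\textup{II}$ above is just the layer-cake identity after a change of variable: if $G$ denotes the sum of the four maximal-type functions appearing in the definition of $E_{\beta,\gamma,\delta}$, then $E^{c}_{\beta,\gamma,\delta}=\{G>\gamma\beta\}$ and, with the same constant $c_{p}$ as before,
$c_{p}\int_{0}^{\infty}\beta^{p-1}|\{G>\gamma\beta\}|\,d\beta=\gamma^{-p}\,c_{p}\int_{0}^{\infty}\tau^{p-1}|\{G>\tau\}|\,d\tau=\gamma^{-p}\|G\|_{p}^{p}.$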
Altogether, we have
$\|\widetilde{N}(\nabla u|\Psi^{3})\|^{p}_{p}\lesssim\Big(\frac{\gamma^{2}}{\eta^{p}}+\frac{\delta^{p/2}}{\gamma^{p}}\Big)\|\widetilde{N}(\nabla u|\Psi^{3})\|^{p}_{p}+\gamma^{-p}\Big\|\Big(\fint_{B_{l}(.)}\fint_{l\leq|s|\leq 2l}|\nabla u|^{2}\Psi_{B}^{3}\,ds\,dy\Big)^{1/2}\Big\|_{p}^{p}\\ +\delta^{-p/2}\kappa^{p/2}\gamma^{-p}\|\widetilde{N}(\nabla u|\Psi)\|^{p}_{p}+\delta^{-p/2}\gamma^{-p}\Big\|S(\overline{\nabla}u|\Psi)\Big\|_{p}^{p}.$
The lemma follows by taking $\gamma$ and then $\delta$ (depending only on
$\lambda$, $d$, $n$, and $p$) such that
$(\gamma^{2}\eta^{-p}+\delta^{p/2}\gamma^{-p})$ is small enough, so that the
first term on the right-hand side above can be hidden in the left-hand side
(which is allowed because all the terms are finite, due to the use of the
compactly supported cut-off function $\Psi$). ∎
## 5\. $S\leq N$ Local Estimates
In this section, we aim to establish that the square function is locally
bounded by the non-tangential maximal function, a result that is eventually
given in Lemma 5.5 below.
Remember that we have three different directional derivatives to deal with,
which are the tangential derivatives $\nabla_{x}$, angular derivatives
$\nabla_{\varphi}$, and radial derivative $\partial_{r}$. To prove these
estimates, we first bound the square function of the radial derivative by the
square functions of the tangential and angular derivatives, and we shall rely
on Proposition 2.3, i.e. the expression of the equation in cylindrical
coordinates. Then, we treat the tangential and angular directional
derivatives, and a key point is the fact that those derivatives verify
$\partial_{x}|t|=\partial_{\varphi}|t|=0$.
###### Lemma 5.1.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$ and any radial cut-off
function $\Psi\in C^{\infty}_{0}(\Omega,[0,1])$, we have
(5.1) $\|S(\partial_{r}u|\Psi^{3})\|^{2}_{2}\leq
C\Big{(}\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}+\|S(\nabla_{\varphi}u|\Psi^{3})\|^{2}_{2}\Big{)},$
where the constant $C>0$ depends only on $\lambda$ and the dimensions $d$ and
$n$.
###### Proof.
This is basically an outcome of the equation: some derivatives can be
represented in terms of others. Observe that
$|\nabla\partial_{r}u|\leq|\nabla_{x}\partial_{r}u|+|\nabla_{\varphi}\partial_{r}u|+|\partial_{r}^{2}u|\leq|\nabla\nabla_{x}u|+|\nabla\nabla_{\varphi}u|+\frac{1}{|t|}|\nabla_{\varphi}u|+|\partial_{r}^{2}u|$
because $\nabla_{x}$ and $\partial_{r}$ commute and the commutator
$[\partial_{r},\partial_{\varphi}]$ is $-\frac{1}{|t|}\partial_{\varphi}$ (see
Proposition 2.4). But since $u$ is a weak solution to $Lu=0$, 2.1 implies that
$\begin{split}|\partial_{r}^{2}u|&=\Big{|}-\operatorname{div}_{x}(\mathcal{A}_{1}\nabla_{x}u)-\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{r}u)-\frac{1}{2}\sum_{i,j=d+1}^{n}\partial_{\varphi_{ij}}^{2}u\Big{|}\\\
&\lesssim|\nabla_{x}\mathcal{A}||\nabla
u|+\lambda^{-1}|\nabla_{x}\partial_{r}u|+\lambda^{-1}|\nabla_{x}\nabla_{x}u|+|\nabla_{\varphi}^{2}u|\leq|\nabla_{x}\mathcal{A}||\nabla
u|+\lambda^{-1}|\nabla\nabla_{x}u|+|\nabla\nabla_{\varphi}u|\end{split}$
by using again the fact that $\nabla_{x}$ and $\partial_{r}$ commute. By
combining the two inequalities above, we obtain
$|\nabla\partial_{r}u|\lesssim|\nabla\nabla_{x}u|+|\nabla\nabla_{\varphi}u|+\frac{1}{|t|}|\nabla_{\varphi}u|+|\nabla_{x}\mathcal{A}||\nabla
u|$
Now, (3.2) entails that
$\|S(\partial_{r}u|\Psi^{3})\|_{L^{2}(\mathbb{R}^{d})}^{2}\lesssim\|S(\nabla_{x}u|\Psi^{3})\|_{L^{2}(\mathbb{R}^{d})}^{2}+\|S(\nabla_{\varphi}u|\Psi^{3})\|_{L^{2}(\mathbb{R}^{d})}^{2}\\\
+\iint_{\Omega}|\nabla_{\varphi}u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}+\iint_{\Omega}|t|^{2}|\nabla_{x}\mathcal{A}|^{2}|\nabla
u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}.$
However, since $|t||\nabla_{x}\mathcal{A}|\in CM(\kappa)$, the Carleson
inequality (3.3) implies that
$\iint_{\Omega}|t|^{2}|\nabla_{x}\mathcal{A}|^{2}|\nabla
u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}\lesssim\kappa\|\widetilde{N}(\nabla u|\Psi^{3})\|^{2}_{L^{2}}.$
In addition, Proposition 2.7 applied with $\Phi=|t|^{d-n}\Psi^{3}$ shows that
$\iint_{\Omega}|\nabla_{\varphi}u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}\lesssim\iint_{\Omega}|\nabla\nabla_{\varphi}u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\lesssim\|S(\nabla_{\varphi}u|\Psi^{3})\|^{2}_{L^{2}(\mathbb{R}^{d})}$
by (3.2). The lemma follows. ∎
In order to deal with the tangential and angular directional derivatives, we
will first prove a generalized result that works for both of them. Let us
write $\partial_{v}$ for either a tangential derivative $\partial_{x_{i}}$, an
angular derivative $\partial_{\varphi_{ij}}$, or the radial derivative
$\partial_{r}$. The key step is to use the equation $Lu=0$ properly. Since we
want to estimate the gradient of solutions, we should study the commutators
$[L,\partial_{v}]$ and try to bound them in a clever way. In the next lemma,
we will estimate the square function of $\partial_{v}u$, and we will see
how the commutator $[L,\partial_{v}]$ plays an important role in the
estimates.
It will be convenient to introduce the bilinear form
$\mathcal{B}(\cdot,\cdot)$ defined for $f\in L^{1}_{loc}(\Omega)$ and $\Psi\in
C_{0}^{\infty}(\Omega)$,
(5.2)
$\displaystyle\mathcal{B}(f,\Psi):=-\frac{1}{2}\iint_{\Omega}\partial_{r}[|t|f]\,\partial_{r}\Psi\frac{dtdx}{|t|^{n-d-1}}.$
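Expanding the radial derivative in (5.2) with the product rule (recall that $\partial_{r}|t|=1$), one may also write, at least formally,
$\mathcal{B}(f,\Psi)=-\frac{1}{2}\iint_{\Omega}\Big(f+|t|\,\partial_{r}f\Big)\,\partial_{r}\Psi\,\frac{dtdx}{|t|^{n-d-1}},$
which is the form that will reappear in Lemma 5.8 below.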
Beware that $\mathcal{B}(f,\Psi)$ may be negative even when the function $f$
is positive. We are now ready for our next lemma.
###### Lemma 5.2.
Let $L:=-\operatorname{div}(|t|^{d+1-n}\mathcal{A}\nabla)$ be an elliptic
operator satisfying (1.23) and (1.26). For any weak solution $u\in
W^{1,2}_{loc}(\Omega)$ to $Lu=0$ and any radial cut-off function $\Psi\in
C^{\infty}_{0}(\Omega,[0,1])$, we have
(5.3)
$\frac{7}{8}\lambda\|S(\partial_{v}u|\Psi^{3})\|_{2}^{2}\leq\iint_{\Omega}|\partial_{v}u|^{2}\partial_{r}\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+C\iint_{\Omega}|\partial_{v}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}\\\
+\mathcal{B}(|\partial_{v}u|^{2},\Psi^{3})+\iint_{\Omega}\Big{(}[L,\partial_{v}]u\Big{)}\Big{(}\Psi^{3}|t|\partial_{v}u\Big{)}dtdx,$
where $C>0$ depends only on the ellipticity constant $\lambda$ and the
dimensions $d$ and $n$.
The bound (5.3) may look a bit cryptic. The last term of (5.3) is the one that
contains the commutator $[L,\partial_{v}]$, and will be removed in the next
lemmas. The first term in the right-hand side is the “trace” term, that is the
term that will become $\operatorname{Tr}(\partial_{v}u)$ when we take
$\Psi\uparrow 1$. The two other quantities are “error” terms that contain
derivatives of the cut-off function $\Psi$, and that will eventually disappear
when we take $\Psi\uparrow 1$.
###### Proof.
To lighten the notation, we write $V$ for $\partial_{v}u$. First of all, since
$\mathcal{A}$ satisfies the uniform ellipticity condition (1.23), we have
$\displaystyle\lambda\|S(V|\Psi^{3})\|^{2}_{2}=\lambda\iint_{\Omega}|\nabla
V|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}\leq\iint_{\Omega}\mathcal{A}\nabla
V\cdot\nabla V\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}$
By the product rule,
$\iint_{\Omega}\mathcal{A}\nabla V\cdot\nabla
V\,\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}=\iint_{\Omega}\mathcal{A}\nabla
V\cdot\nabla\Big{(}V\Psi^{3}|t|\Big{)}\frac{dtdx}{|t|^{n-d-1}}\\\
-\iint_{\Omega}\mathcal{A}\nabla
V\cdot\nabla\Psi^{3}\,V\frac{dtdx}{|t|^{n-d-2}}-\iint_{\Omega}\mathcal{A}\nabla
V\cdot\nabla(|t|)\,V\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}=\textup{I}+\textup{II}+\textup{III}.$
We start from the term I. Recall that $u$ is a weak solution to the equation
$Lu=0$. It follows that:
$\displaystyle
LV=L(\partial_{v}u)=\partial_{v}(Lu)+[L,\partial_{v}]u=[L,\partial_{v}]u\
\text{a.e. in $\Omega$.}$
Consequently,
$\displaystyle\textup{I}=\iint_{\Omega}\Big{(}[L,\partial_{v}]u\Big{)}\Big{(}V\Psi^{3}|t|\Big{)}\,dtdx,$
which is one of the terms on the right-hand side of (5.3). For the term II,
since the matrix $\mathcal{A}$ is of the form (1.26),
$\textup{II}=-\iint_{\Omega}\mathcal{A}_{1}\nabla_{x}V\cdot\nabla_{x}\Psi^{3}\,V\frac{dtdx}{|t|^{n-d-2}}-\iint_{\Omega}\mathcal{A}_{2}\frac{t}{|t|}\nabla_{t}V\cdot\nabla_{x}\Psi^{3}\,V\frac{dtdx}{|t|^{n-d-2}}\\\
-\iint_{\Omega}\nabla_{t}V\cdot\nabla_{t}\Psi^{3}\,V\frac{dtdx}{|t|^{n-d-2}}=:\textup{II}_{1}+\textup{II}_{2}+\textup{II}_{3}.$
The terms $\textup{II}_{1}$ and $\textup{II}_{2}$ are estimated together by
$\displaystyle\textup{II}_{1}+\textup{II}_{2}\leq\frac{1}{8}\lambda\|S(V|\Psi^{3})\|^{2}_{2}+C_{\lambda}\iint_{\Omega}|V|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}.$
We hide the term $\frac{1}{8}\lambda\|S(V|\Psi^{3})\|^{2}_{2}$ in the left-
hand side of (5.3), and the second term in the right-hand side above stays on
the right-hand side of (5.3).
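For the reader, here is a sketch of the elementary estimate behind the previous display: since the ellipticity condition gives $|\mathcal{A}|\lesssim\lambda^{-1}$ (as already used in the proof of Lemma 5.1) and $|\nabla_{x}\Psi^{3}|=3\Psi^{2}|\nabla_{x}\Psi|$, each of the integrands in $\textup{II}_{1}$ and $\textup{II}_{2}$ is bounded pointwise by
$\lambda^{-1}|\nabla V|\,|\nabla_{x}\Psi^{3}|\,|V|\leq 3\lambda^{-1}\big(|\nabla V|\Psi^{3/2}\big)\big(|V|\,|\nabla_{x}\Psi|\,\Psi^{1/2}\big)\leq\frac{\lambda}{8}|\nabla V|^{2}\Psi^{3}+C_{\lambda}|V|^{2}|\nabla_{x}\Psi|^{2}\Psi,$
and it remains to integrate against $\frac{dtdx}{|t|^{n-d-2}}$.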
As for $\textup{II}_{3}$, since $\Psi$ is radial, we have that
$\nabla_{t}V\cdot\nabla_{t}\Psi^{3}=(\nabla_{t}V\cdot\nabla_{t}|t|)\,\partial_{r}\Psi^{3}=(\partial_{r}V)(\partial_{r}\Psi^{3})$
and thus,
$\textup{II}_{3}=-\frac{1}{2}\iint_{\Omega}(\partial_{r}V^{2})(\partial_{r}\Psi^{3})\,\frac{dtdx}{|t|^{n-d-2}}=\mathcal{B}(V^{2},\Psi^{3})+\frac{1}{2}\iint_{\Omega}V^{2}\partial_{r}\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}$
by definition of $\mathcal{B}(.,.)$, see (5.2). The last term III is similar,
because we have
$\textup{III}=-\iint_{\Omega}\partial_{r}V\,V\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}=-\frac{1}{2}\iint_{\Omega}(\partial_{r}V^{2})\,\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}=\frac{1}{2}\iint_{\Omega}V^{2}\,\partial_{r}\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}$
thanks to the integration by parts given in Proposition 2.2. The lemma
follows. ∎
Now we bound the square function of the tangential derivatives by applying
Lemma 5.2 with $\partial_{v}=\partial_{x}$. Recall that we write
$\partial_{x}$ for any tangential directional derivative
$\partial_{i}:=\vec{e}_{i}\cdot\nabla$ where $i\leq d$. As we have discussed
in the previous paragraphs, the commutator $[L,\partial_{x}]$ plays an
important role in computing the square function of $\partial_{x}$. In our
particular case, an easy computation shows that
(5.4)
$[L,\partial_{x}]=\operatorname{div}_{x}(|t|^{d+1-n}\partial_{x}\mathcal{A})\nabla$
because $\partial_{x}|t|=0$.
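For completeness, here is the short computation behind (5.4) (a sketch, using only that $\partial_{x}$ commutes with the divergence and that $\partial_{x}|t|=0$):
$[L,\partial_{x}]u=L(\partial_{x}u)-\partial_{x}(Lu)=-\operatorname{div}\big(|t|^{d+1-n}\mathcal{A}\nabla\partial_{x}u\big)+\operatorname{div}\big(|t|^{d+1-n}\partial_{x}[\mathcal{A}\nabla u]\big)=\operatorname{div}\big(|t|^{d+1-n}(\partial_{x}\mathcal{A})\nabla u\big);$
the divergence can be written as a $\operatorname{div}_{x}$ in (5.4) because, with the block structure (1.26) used above, the vector $(\partial_{x}\mathcal{A})\nabla u$ has no component in the $t$ directions.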
###### Corollary 5.3.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$ and any radial cut-off
function $\Psi\in C^{\infty}_{0}(\Omega,[0,1])$, we have
(5.5)
$\frac{3}{4}\lambda\|S(\nabla_{x}u|\Psi^{3})\|_{2}^{2}\leq\iint_{\Omega}|\nabla_{x}u|^{2}\partial_{r}\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}\\\
+C\iint_{\Omega}|\nabla_{x}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}+\mathcal{B}(|\nabla_{x}u|^{2},\Psi^{3}),$
where $C>0$ depends only on $\lambda$, $d$, and $n$.
###### Proof.
The bound (5.5) is a consequence of the same bound on each of the tangential
derivative $\partial_{x}$, and then summing up. For a given tangential
derivative, (5.5) is an immediate consequence of Lemma 5.2 and the bound
(5.6)
$\left|\iint_{\Omega}\Big{(}[L,\partial_{x}]u\Big{)}\big{(}\Psi^{3}|t|\partial_{x}u\big{)}dtdx\right|\leq\frac{1}{8}\lambda\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}+C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}\\\
+C\iint_{\Omega}|\nabla_{\varphi}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}.$
for any tangential derivative $\partial_{x}$. So we fix a tangential
directional derivative $\partial_{x}$, and by (5.4) and then integration by
parts, we have
(5.7)
$\iint_{\Omega}\Big{(}[L,\partial_{x}]u\Big{)}\Big{(}(\partial_{x}u)\Psi^{3}|t|\Big{)}dtdx=\iint_{\Omega}\Big{(}\operatorname{div}(|t|^{d+1-n}\partial_{x}\mathcal{A})\nabla
u\Big{)}\Big{(}\Psi^{3}|t|\partial_{x}u\Big{)}dtdx\\\
=-\iint_{\Omega}(\partial_{x}\mathcal{A})\nabla
u\cdot\nabla(\partial_{x}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}-\iint_{\Omega}(\partial_{x}\mathcal{A})\nabla
u\cdot\nabla\big{(}|t|\Psi^{3}\big{)}\partial_{x}u\frac{dtdx}{|t|^{n-d-1}}\\\
=\textup{I}+\textup{II}.$
Since $|t||\nabla_{x}\mathcal{A}|\in CM(\kappa)$, the term I is bounded as
follows
(5.8)
$|\textup{I}|\leq\frac{1}{8}\lambda\|S(\partial_{x}u|\Psi^{3})\|^{2}_{2}+C_{\lambda}\iint_{\Omega}|t|^{2}|\nabla_{x}\mathcal{A}|^{2}|\nabla
u|^{2}\Psi^{3}\frac{dt\,dx}{|t|^{n-d}}\\\
\leq\frac{1}{8}\lambda\|S(\partial_{x}u|\Psi^{3})\|^{2}_{2}+C_{\lambda}\kappa\|\widetilde{N}_{a}(\nabla
u|\Psi^{3})\|^{2}_{2}.$
For II, remark that the special structure of $\mathcal{A}$ given in (1.26)
implies that the only derivatives that hit $|t|\Psi^{3}$ are tangential
derivatives, for which $\nabla_{x}|t|=0$. Therefore,
(5.9) $|\textup{II}|=\left|\iint_{\Omega}(\partial_{x}\mathcal{A})\nabla
u\cdot(\nabla_{x}\Psi^{3})(\partial_{x}u)\frac{dtdx}{|t|^{n-d-2}}\right|\\\
\leq\iint_{\Omega}|t|^{2}|\partial_{x}\mathcal{A}|^{2}|\nabla
u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}+\iint_{\Omega}|\nabla_{x}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}\\\
\leq C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+\iint_{\Omega}|\nabla_{x}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}.$
The lemma follows. ∎
It remains to estimate the square function of the angular directional
derivatives.
###### Corollary 5.4.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$ and any radial cut-off
function $\Psi\in C^{\infty}_{0}(\Omega,[0,1])$, we have
(5.10) $\frac{3}{4}\lambda\|S(\nabla_{\varphi}u|\Psi^{3})\|_{2}^{2}\leq
C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+C\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}\\\
+C\iint_{\Omega}|\nabla_{\varphi}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}+\mathcal{B}(|\nabla_{\varphi}u|^{2},\Psi^{3}),$
where $C>0$ depends only on $\lambda$, $d$, and $n$.
###### Proof.
Fix an angular directional derivative $\partial_{\varphi}$. Thanks to Lemma
5.2, it suffices to show that
(5.11)
$\left|\iint_{\Omega}\Big{(}[L,\partial_{\varphi}]u\Big{)}\big{(}\Psi^{3}|t|\partial_{\varphi}u\big{)}dtdx+\iint_{\Omega}|\partial_{\varphi}u|^{2}\partial_{r}\Psi^{3}\,\frac{dt\,dx}{|t|^{n-d-1}}\right|\\\
\leq\frac{1}{8}\lambda\|S(\nabla_{\varphi}u|\Psi^{3})\|^{2}_{2}+C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+C\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}\\\
+C\iint_{\Omega}|\nabla_{\varphi}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}.$
It will be important to estimate the two terms in the left-hand side of (5.11)
together, because there will be some cancellation.
We invoke Proposition 2.6 to say that
(5.12)
$\iint_{\Omega}\Big{(}[L,\partial_{\varphi}]u\Big{)}\big{(}\Psi^{3}|t|\partial_{\varphi}u\big{)}dtdx=\iint_{\Omega}\Big{(}\operatorname{div}_{x}(|t|^{d+1-n}\partial_{\varphi}\mathcal{A})\nabla
u\Big{)}\big{(}\Psi^{3}|t|\partial_{\varphi}u\big{)}dtdx\\\
+2\iint_{\Omega}(\partial_{r}\partial_{\varphi}u)\Psi^{3}(\partial_{\varphi}u)\frac{dt\,dx}{|t|^{n-d-1}}+\iint_{\Omega}\operatorname{div}_{x}(\mathcal{A}_{2}\partial_{\varphi}u)\big{(}\Psi^{3}\partial_{\varphi}u\big{)}\frac{dt\,dx}{|t|^{n-d-2}}\\\
=:\textup{I}+\textup{II}+\textup{III}.$
By the product rule,
$\textup{III}\lesssim\iint_{\Omega}|\nabla_{x}\mathcal{A}_{2}||\nabla_{\varphi}u||\partial_{\varphi}u|\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}+\iint_{\Omega}|\mathcal{A}_{2}||\nabla_{\varphi}\nabla_{x}u||\partial_{\varphi}u|\Psi^{3}\,\frac{dtdx}{|t|^{n-d-1}}:=\textup{III}_{1}+\textup{III}_{2}.$
Since $|t||\nabla_{x}\mathcal{A}_{2}|\in CM(\kappa)$, the term
$\textup{III}_{1}$ can be estimated as follows
$\displaystyle\textup{III}_{1}\leq\epsilon\iint_{\Omega}|\nabla_{\varphi}u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}+C\epsilon^{-1}\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}\leq\frac{1}{24}\lambda\|S(\partial_{\varphi}u|\Psi^{3})\|_{2}^{2}+C_{\lambda}\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2},$
by using Proposition 2.7, (3.2), and by taking $\epsilon$ small enough
(depending only on $\lambda$, $d$ and $n$).
Based on the same arguments, the term $\textup{III}_{2}$ is bounded by
$\displaystyle\textup{III}_{2}\leq\epsilon\iint_{\Omega}|\nabla_{\varphi}u|^{2}\Psi^{3}\frac{dtdx}{|t|^{n-d}}+C\epsilon^{-1}\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}\leq\frac{1}{24}\lambda\|S(\partial_{\varphi}u|\Psi^{3})\|_{2}^{2}+C_{\lambda}\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}.$
The term I is analogous to the one obtained from the commutator in the proof
of Corollary 5.3. We repeat quickly the argument. By integration by parts,
$\textup{I}=-\iint_{\Omega}(\partial_{\varphi}\mathcal{A})\nabla
u\cdot\nabla_{x}(\partial_{\varphi}u)\Psi^{3}\frac{dtdx}{|t|^{n-d-2}}-\iint_{\Omega}(\partial_{\varphi}\mathcal{A})\nabla
u\cdot\nabla_{x}\Psi^{3}\,(\partial_{\varphi}u)\,\frac{dtdx}{|t|^{n-d-1}}:=\textup{I}_{1}+\textup{I}_{2}.$
The first integral is bounded by using the inequality $2ab\leq\epsilon
a^{2}+\epsilon^{-1}b^{2}$ and the fact that $|t||\nabla_{\varphi}\mathcal{A}|\in
CM(\kappa)$; we get, similarly to (5.8), that
$|\textup{I}_{1}|\leq\frac{1}{24}\lambda\|S(\partial_{\varphi}u|\Psi^{3})\|_{2}^{2}+C_{\lambda}\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|_{2}^{2}.$
As for the second integral, we proceed as in (5.9) and we obtain
$|\textup{I}_{2}|\leq C\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+\iint_{\Omega}|\nabla_{\varphi}u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}.$
The term II cancels out with the “trace” term. Indeed, we have
$\textup{II}=\iint_{\Omega}\partial_{r}|\partial_{\varphi}u|^{2}\Psi^{3}\frac{dt\,dx}{|t|^{n-d-1}}=-\iint_{\Omega}|\partial_{\varphi}u|^{2}\partial_{r}\Psi^{3}\frac{dt\,dx}{|t|^{n-d-1}}$
by the integration by parts (Proposition 2.2). Observe that all our
computations proved the claim (5.11), thus the lemma follows. ∎
In the following, we combine all the previous results of this section
together. We recall that $\overline{\nabla}$ stands for the gradient in
cylindrical coordinates. Remember that $\|S(\nabla_{x}u|\Psi^{3})\|^{2}_{2}$ and
$\|S(\nabla_{\varphi}u|\Psi^{3})\|^{2}_{2}$ stand, respectively, for the sums over all
tangential derivatives and over all angular derivatives of the squared $L^{2}$ norms
of the corresponding square functions.
###### Lemma 5.5.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. There exist
three constants $C_{1},C_{2},C_{3}>0$ depending only on $\lambda$, $d$, and
$n$ such that for any weak solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$ and
any cut-off radial function $\Psi\in C^{\infty}_{0}(\Omega,[0,1])$, we have
(5.13) $\|S(\overline{\nabla}u|\Psi^{3})\|^{2}_{2}\leq
C_{1}\bigg{(}\iint_{\Omega}|\nabla_{x}u|^{2}\partial_{r}\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+\mathcal{B}(|\nabla_{x}u|^{2},\Psi^{3})\bigg{)}+C_{2}\,\mathcal{B}(|\nabla_{\varphi}u|^{2},\Psi^{3})\\\
+C_{3}\bigg{(}\kappa\|\widetilde{N}_{a}(\nabla
u|\Psi^{3})\|^{2}_{2}+\iint_{\Omega}|\nabla
u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}\bigg{)}$
In addition, if $\Psi$ satisfies ($\mathcal{COF}$)K, we have
(5.14) $\displaystyle\|S(\overline{\nabla}u|\Psi^{3})\|^{2}_{2}\leq
C_{K}\|\widetilde{N}(\nabla u|\Psi)\|^{2}_{2},$
where $C_{K}$ depends on $\lambda$, $n$, and $K$.
###### Remark 5.6.
Remember that the first term in the right-hand side of (5.13) is the “trace”
term, and all the other terms are meant to disappear when $\Psi\uparrow 1$.
Moreover, we have different constants because the terms that are multiplied by
$C_{1}$ and $C_{2}$ may be negative. We can say that $1\leq C_{2}\leq
C_{1}\leq C_{3}$, but nothing more; in particular, taking $C_{1}=C_{2}=C_{3}$
would probably render the inequality false.
###### Remark 5.7.
The result (5.14) tells us that the sum of the square functions of all
tangential directional derivatives, angular directional derivatives, and
the radial derivative can be estimated locally by the non-tangential
maximal function of the full gradient.
###### Proof.
The inequality (5.13) is an immediate consequence of Lemma 5.1, Corollary 5.3,
and Corollary 5.4.
We turn to the proof of (5.14). Since $\Psi$ satisfies ($\mathcal{COF}$)K, we
have
(5.15) $|t||\nabla\Psi|\leq K\quad\text{ and
}\quad{\mathds{1}}_{\operatorname{supp}\nabla\Psi}\in CM(K),$
in particular $|t||\nabla\Psi|\in CM(K^{3})$. We deduce that
(5.16) $\iint_{\Omega}|\nabla
u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}+\iint_{\Omega}|\nabla_{x}u|^{2}|\partial_{r}\Psi^{3}|\frac{dtdx}{|t|^{n-d-1}}\\\
\leq C\iint_{\Omega}|\nabla
u|^{2}\big{[}|t|^{2}|\nabla\Psi|^{2}+|t||\nabla\Psi|\big{]}\Psi\frac{dtdx}{|t|^{n-d}}\leq
CK^{3}\|\widetilde{N}(\nabla u|\Psi)\|^{2}_{2}.$
by (5.15) and the Carleson inequality (3.3).
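To be precise about the Carleson norm appearing here: by (5.15), we have $|t|^{2}|\nabla\Psi|^{2}+|t||\nabla\Psi|\leq(K^{2}+K)\,{\mathds{1}}_{\operatorname{supp}\nabla\Psi}$ pointwise, and since ${\mathds{1}}_{\operatorname{supp}\nabla\Psi}\in CM(K)$, the bracket in (5.16) belongs to $CM((K^{2}+K)K)\subset CM(2K^{3})$ (assuming, as we may, $K\geq 1$), which is what allows us to invoke (3.3).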
Consequently, it suffices to show for any $V\in L^{2}_{loc}(\Omega)$ and
$\delta\in(0,\infty)$,
(5.17)
$\displaystyle|\mathcal{B}(|V|^{2},\Psi^{3})|\leq\delta\|S(V|\Psi^{3})\|^{2}_{2}+C\delta^{-1}\|\widetilde{N}(V|\Psi)\|^{2}_{2}$
because then (5.14) follows easily by choosing $\delta$ small enough. From
the definition of $\mathcal{B}(.,.)$, see (5.2), and the product rule, we have
(5.18)
$|\mathcal{B}(|V|^{2},\Psi^{3})|\leq\iint_{\Omega}|V||\partial_{r}V||\partial_{r}\Psi|\Psi^{2}\frac{dtdx}{|t|^{n-d-2}}+\frac{1}{2}\iint_{\Omega}|V|^{2}|\partial_{r}\Psi|\Psi^{2}\frac{dtdx}{|t|^{n-d-1}}\\\
\leq\delta\|S(V|\Psi^{3})\|^{2}_{2}+C\delta^{-1}\iint_{\Omega}|V|^{2}|\partial_{r}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}+\frac{1}{2}\iint_{\Omega}|V|^{2}|\partial_{r}\Psi|\Psi^{2}\frac{dtdx}{|t|^{n-d-1}}.$
By using again (5.15) and the Carleson inequality, the last two terms of
(5.18) are bounded by $K^{3}\|\widetilde{N}(V|\Psi)\|^{2}_{2}$. Hence (5.17)
follows. ∎
Let us get a little bit further, since it will help us when we pass from local
to global estimates.
###### Lemma 5.8.
Let $L$ be an elliptic operator satisfying ($\mathcal{H}$)λ,κ. For any weak
solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, any cut-off function $\Psi\in
C^{\infty}_{0}(\Omega,[0,1])$ satisfying ($\mathcal{COF}$)K, any
$\delta\in(0,1)$, we have
$|\mathcal{B}(|\partial_{r}u|^{2},\Psi^{3})|=\frac{1}{2}\left|\iint_{\Omega}\Big{(}|\partial_{r}u|^{2}+|t|\partial_{r}(|\partial_{r}u|^{2})\Big{)}\partial_{r}\Psi^{3}\frac{dt\,dx}{|t|^{n-d-1}}\right|\\\
\leq(\delta+\delta^{-1}K^{2}\kappa)\|\widetilde{N}(\nabla
u|\Psi)\|_{2}^{2}+C\delta^{-1}K^{2}\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2},$
where $C$ depends on $\lambda$, $n$, $\delta$ and $K+M+\kappa$.
###### Proof.
The equality is just the product rule and the definition of
$\mathcal{B}(|\partial_{r}u|^{2},\Psi^{3})$, see (5.2). The bound
$\left|\iint_{\Omega}|\partial_{r}u|^{2}\partial_{r}\Psi^{3}\frac{dt\,dx}{|t|^{n-d-1}}\right|\leq(\delta+\delta^{-1}K^{2}\kappa)\|\widetilde{N}(\nabla
u|\Psi)\|_{2}^{2}+C\delta^{-1}K^{2}\|S(\overline{\nabla}u|\Psi^{3})\|_{2}^{2}$
was established in (3.6). It remains to show a bound on
$\textup{I}:=\left|\iint_{\Omega}\partial_{r}(|\partial_{r}u|^{2})\partial_{r}\Psi^{3}\,\frac{dt\,dx}{|t|^{n-d-2}}\right|,$
but that is similar to (5.18). Indeed, we use $|t||\nabla\Psi|\in CM(K)$,
(3.3), and the inequality $ab\leq\delta a^{2}+b^{2}/4\delta$ to obtain
$I=2\left|\iint_{\Omega}(\partial_{r}^{2}u)(\partial_{r}u)\partial_{r}\Psi^{3}\frac{dt\,dx}{|t|^{n-d-2}}\right|\leq\delta\|\widetilde{N}(\partial_{r}u|\Psi)\|_{2}^{2}+CK\delta^{-1}\|S(\partial_{r}u|\Psi^{3})\|_{2}^{2}.$
The lemma follows. ∎
###### Lemma 5.9.
Let $\kappa\in(0,1)$ and let $L$ be an elliptic operator satisfying
($\mathcal{H}$)λ,κ. Take $\Psi\in C^{\infty}_{0}(\Omega)$ satisfying
($\mathcal{COF}$)K. There exist four constants $c_{0},C_{1},C_{2},C_{3}>0$
depending on $\lambda$ and $n$ (the first one being small and the last three
being large) such that, after defining
$|\nabla
u|^{2}_{\kappa}:=C_{1}|\nabla_{x}u|^{2}+C_{2}|\nabla_{\varphi}u|^{2}+\kappa^{1/2}c_{0}K^{-2}|\partial_{r}u|^{2},$
we have, for any weak solution $u\in W^{1,2}_{loc}(\Omega)$ to $Lu=0$, that
(5.19) $\|S(\overline{\nabla}u|\Psi^{3})\|^{2}_{2}\leq
C_{1}\iint_{\Omega}|\nabla_{x}u|^{2}\partial_{r}\Psi^{3}\frac{dtdx}{|t|^{n-d-1}}+\mathcal{B}(|\nabla
u|^{2}_{\kappa},\Psi^{3})\\\ +C_{3}\bigg{(}\kappa\|\widetilde{N}(\nabla
u|\Psi^{3})\|^{2}_{2}+\iint_{\Omega}|\nabla
u|^{2}|\nabla_{x}\Psi|^{2}\Psi\frac{dtdx}{|t|^{n-d-2}}\bigg{)}.$
###### Proof.
By Lemma 5.5, we have three constants
$C^{\prime}_{1},C^{\prime}_{2},C^{\prime}_{3}$ depending only on $\lambda$,
$d$, and $n$ such that
(5.20) $\|S(\overline{\nabla}u|\Psi^{3})\|^{2}_{2}\leq\dotsb$
# Magnetic-field driven evolution of zero-energy mode on Bi islands deposited
on Fe(Te,Se)
Kailun Chen, Chuanhao Wen, Zhiyong Hou, Huan Yang,∗ and Hai-Hu Wen† National
Laboratory of Solid State Microstructures and Department of Physics,
Collaborative Innovation Center of Advanced Microstructures, Nanjing
University, Nanjing 210093, China
###### Abstract
We investigate the magnetic-field-dependent evolution of the zero-bias
conductance peaks (ZBCPs) on nanoscale bismuth islands grown on the
FeTe0.55Se0.45 substrate. The ZBCPs can be observed throughout the entire
region on these islands, and their characteristics align with the signatures
of Majorana zero modes. Remarkably, the evolution of ZBCPs on these islands
exhibits anomalous behavior under varying magnetic fields: The magnitude of
ZBCPs is first enhanced at weak fields lower than 2 T and then suppressed as
the fields further increase. We attribute the non-monotonic evolution of the
ZBCPs to the magnetic-field-enhanced topological edge states on these Bi
islands. Our findings provide valuable insights into the probable origin of
the Majorana zero modes in the Bi-island platform and the magnetic-field
response of topological edge states.
## I Introduction
Majorana zero modes (MZMs) have attracted intensive interest because of their
potential applications in fault-tolerant topological quantum computing
SCZhangReview ; AndoReview ; NonAbelian , and these modes can be realized in
topological superconductors. One of the effective methods for achieving
topological superconductivity is to induce superconductivity in the
topologically insulating layer through the proximity effect from the adjacent
superconductor LFuProximity . For example, zero-energy modes were observed in
vortex cores of heterostructures composed of the topological insulator
Bi2Te3 and the conventional superconductor NbSe2 JiaReview ; Jiaprl or the
iron-based superconductor Fe(Te,Se) ChenMYSA . In addition, MZMs were also
observed at terminals of vortex lines in superconductors with topologically
non-trivial bands, such as some iron-based superconductors FeTeSeMajorana ;
FeTeSeHanaguri ; HuReview ; LiFeOHFeSe ; CaKFe4As4 ; LiFeAs or other
materials WS2LiW ; CVSWangZY .
On the surface of a topologically non-trivial superconductor, topological edge
states appear at some boundaries SCZhangReview ; Hasan , such as the twin
boundary WangZYScience or the step edge JiaoLUTe2 . In one-dimensional cases,
topological edge states can exist in a semiconducting or spin-orbit-coupled
nanowire, as well as in a ferromagnetic atom chain neighboring to a
superconductor AliceaReview , where they appear as MZMs. Experimental
observations of these modes are realized at the ends of one-dimensional
semiconducting nanowires SemiChain ; Quantumdot and magnetic atomic chains
FeChain grown on the surface of an $s$-wave superconductor. As for two-
dimensional heterostructures, MZMs CrBr3 or Majorana edge modes Feisland ;
PbCoSi are observed on ferromagnetic islands grown on $s$-wave
superconductors. In addition, a robust zero-energy mode is observed in a
trilayer heterostructure MnTe/Bi2Te3/Fe(Te,Se) ZhangT .
Bismuth is a semimetal with strong spin-orbit coupling, and it is a good
platform for investigating the topological superconductivity or the MZMs when
it is adjacent to a superconductor. Majorana edge states may exist at the
boundary of the Bi layer in the aforementioned configuration, and experimental
evidence for them has been reported at the edges of Bi bilayers
Bibilayer and of Bi films decorated with magnetic iron clusters Edgechannel
grown on the superconducting substrate. In our previous research, robust zero-
energy modes were observed on specific Bi islands deposited on the iron-based
superconductor Fe(Te,Se) Biisland . The zero-energy modes are likely caused by
the interference of two counter-propagating topological edge states at the
boundary of Bi islands. This kind of edge state in a topologically non-trivial
system is protected by time-reversal symmetry and is impervious to impurity
scattering in the absence of magnetic fields. Therefore, the evolution of the
edge states under applied magnetic fields is also an interesting issue, one
that has scarcely been reported in experiments.
In this work, we examined the evolution of the ZBCP magnitude on Bi islands
grown on the FeTe0.55Se0.45 substrate under applied magnetic fields. Based on
statistics over the measured areas, the ZBCPs exist only on some of the
islands, with diameters of 4-8 nm, and can be observed in the entire region
of these islands. The characteristics of all the ZBCPs are also consistent
with those of MZMs, indicating a topologically non-trivial origin of the
ZBCPs. Notably, we observed an anomalous, yet general behavior of the ZBCPs
upon varying magnetic fields: The intensity of the ZBCPs on Bi islands is
first enhanced at weak fields lower than 2 T, and subsequently decreases as
the fields further increase. The strengthening of the ZBCPs at weak fields may
be attributed to the magnetic-field tuning of the edge states toward the inner
part of the island.
## II Experimental methods
The single crystals of FeTe0.55Se0.45 were synthesized by the self-flux method
samplegrowth . The crystals were annealed at $400^{\circ}$C for 20 h in an O2
atmosphere to eliminate the interstitial Fe atoms and then quenched in
liquid nitrogen. The single crystal was cleaved in an ultrahigh vacuum with a
pressure of about $1\times 10^{-10}$ Torr before the growth of the Bi islands.
High-purity Bi (99.999%) powders were heated to $360^{\circ}$C in the effusion
cell (CreaTec) and then evaporated onto the cleaved surface of Fe(Te,Se) at room
temperature by the molecular beam epitaxy method. The nanoscale bismuth islands
can be grown on the Fe(Te,Se) substrate. Afterwards the sample was transferred
to the scanning tunneling microscopy/spectroscopy (STM/STS) head which was
kept at a low temperature. The STM/STS measurements were carried out in a
USM-1300 system (Unisoku Co. Ltd.) with an ultrahigh vacuum, low temperature,
and high magnetic field. The tunneling spectra were measured by a lock-in
technique with an amplitude of 0.2 mV and a frequency of 938 Hz. The tips
used in the measurements were made of tungsten by electrochemical etching.
All measurements were taken at 0.4 K unless otherwise specified.
The magnetic field was applied along the $c$-axis of Fe(Te,Se) substrate or
equivalently perpendicular to the Bi islands.
## III Results
### III.1 Bi islands with and without ZBCPs
Figure 1: (a) Topography of an area containing 6 Bi islands grown on the
Fe(Te,Se) substrate (setpoint conditions: $V_{\mathrm{set}}=1$ V,
$I_{\mathrm{set}}=20$ pA). (b) Typical tunneling spectra measured on the
islands and the substrate nearby ($V_{\mathrm{set}}=10$ mV,
$I_{\mathrm{set}}=200$ pA). The island #1 is the only one with ZBCP. (c)
Statistics on the number of islands with and without ZBCPs versus the area of
all islands we have measured.
Figure 1(a) shows a typical topography of the Bi islands grown on the
Fe(Te,Se) substrate. The islands are randomly distributed on the flat surface
of the substrate, with the dimensions of several nanometers. The shapes of
these islands are also irregular, and there are even some wrinkles near the
boundaries, indicating lattice distortion there. Although the islands have
different sizes, their heights are all about 7 Å, which is consistent with
the thickness of Bi(110) monolayer islands Biheight . These features of the Bi
islands are consistent with those in our previous work Biisland .
Figure 1(b) shows typical tunneling spectra measured on the six islands in the
field of view of Fig. 1(a). We also present the typical tunneling spectrum
measured on the Fe(Te,Se) substrate in Fig. 1(b), and it shows a fully gapped
feature. The superconducting gap of the Fe(Te,Se) substrate varies from 1.1 to
2.1 meV, as determined from the energy difference between the coherence
peaks ChenMYCdGM . Tunneling spectra measured on Bi islands #2-#6 are similar
to the one measured on the Fe(Te,Se) substrate. In contrast, the tunneling
spectrum measured on island #1 is different, i.e., a ZBCP appears in the
tunneling spectrum measured on this island. It should be noted that the ZBCP
can be observed in the spectra measured on the whole island Biisland . We have
investigated 146 islands with monolayer thickness, and only 23 of them exhibit
ZBCPs. The probability of finding a Bi monolayer island with the ZBCPs is
about 16%. We also note that there are some bilayer Bi islands, but none of
them hold the ZBCPs. The statistics of the island numbers are also carried out
for the monolayer islands, and the result is shown in Fig. 1(c). One can see
that the areas of islands with ZBCPs are mainly distributed from 10 nm$^2$ to
50 nm$^2$, corresponding to diameters of about 4-8 nm. When the areas of Bi
islands exceed 50 nm$^2$, no ZBCPs have been observed on these islands.
Figure 2: (a) Topography of a Bi island (#7) ($V_{\mathrm{set}}=1$ V,
$I_{\mathrm{set}}=20$ pA). (b) Typical tunneling spectra measured at marked
positions shown in (a) ($V_{\mathrm{set}}=10$ mV, $I_{\mathrm{set}}=200$ pA).
(c) Line-profile of tunneling spectra taken along the dashed arrow in panel
(a) ($V_{\mathrm{set}}=10$ mV, $I_{\mathrm{set}}=200$ pA).
Figure 2(a) shows the topography of a nanoscale monolayer Bi island numbered
as #7. The ZBCPs can be observed in the tunneling spectra measured on the
island, as seen in the two spectra measured at the edge and the center of the
island shown in Fig. 2(b). In addition, the energies of the coherence peaks of
these two spectra are similar to those obtained from the spectra measured on
the Fe(Te,Se) substrate, which demonstrates the proximity-induced
superconductivity on the Bi island. It should be noted that ZBCPs can be
observed in the spectra taken all over the island, and this conclusion can be
drawn from the set of tunneling spectra measured across the island shown
in Fig. 2(c). Obvious in-gap peaks can be seen in all the spectra in this
panel, and the peak positions are fixed near zero energy. These observations
are similar to those in our previous work Biisland .
### III.2 Magnetic-field dependent evolution of ZBCPs on Bi islands
Figure 3: (a) Tunneling spectra measured at different magnetic fields and at
the location of red dot in Fig. 2(a) ($V_{\mathrm{set}}=10$ mV,
$I_{\mathrm{set}}=200$ pA). (b)-(d) Zero-energy d$I$/d$V$ mappings recorded in
the same region in Fig. 2(a), and they are measured at different fields
($V_{\mathrm{set}}=40$ mV, $I_{\mathrm{set}}=200$ pA). The edge of the island
is marked out by dashed lines. The color bars are the same in these mappings.
The ZBCPs on some Bi islands are weakened by increasing temperature
Biisland , which can be understood as the thermal suppression of
superconductivity in the Fe(Te,Se) substrate. Since the upper critical field
of Fe(Te,Se) is extremely high FeSeTeHc2 , it is interesting to investigate
the field-dependent evolution of the ZBCPs on the Bi islands. Figure 3(a)
shows tunneling spectra measured at different fields of 0, 2, and 5 T at the
center of the Bi island #7. Surprisingly, increasing the magnetic field
does not suppress the ZBCP monotonically; on the contrary, the peak magnitude
at 2 T is larger than that obtained at 0 T. At a higher field of 5 T,
the magnitude of the ZBCP is significantly suppressed, but the ZBCP does not
show any splitting or broadening features.
Differential conductance mapping is a useful method to obtain information
about the spatial distribution of the density of states (DOS) STMReview1 ;
STMReview2 . Figures 3(b)-3(d) show the recorded spatial distributions of
zero-bias differential conductance of the Bi island #7 in the same area but
under different fields. These three mappings are presented in the same color
scale, thus the brightness directly corresponds to the zero-energy DOS. The
zero-bias differential conductance is almost zero on the Fe(Te,Se) substrate,
reflecting the fully gapped feature. The value is finite on the whole island
at 0 T, which corresponds to the robust zero mode on the island. Some regions
of weak ZBCP magnitude may be due to the surface-lattice distortion of the Bi
islands.
At the magnetic field of 2 T [Fig. 3(c)], the inner part of the island becomes
notably brighter, suggesting an increment of the ZBCP magnitude. However, at 5
T [Fig. 3(d)], the zero-energy differential conductance becomes much weaker.
These observations are consistent with the ZBCP evolution in the tunneling
spectra at different fields shown in Fig. 3(a).
Figure 4: (a) Tunneling spectra measured on another island #8 at different
magnetic fields ($V_{\mathrm{set}}=10$ mV, $I_{\mathrm{set}}=200$ pA). The
topography of the island is shown in the inset, and the spectra are measured
at the marked position in the island. (b) Magnetic field-dependent evolution
of the ZBCP intensity on several islands with ZBCPs. The values of the zero-
energy differential conductance at different fields are normalized by the
value at zero field.
A control experiment is carried out on the Bi island #8, and Fig. 4(a) shows
the tunneling spectra measured at the same position but under different
magnetic fields. The magnitude of the ZBCP increases when the magnetic field
increases from 0 to 2 T, and then the value decreases as the field further
increases. Similar experiments are also carried out on four other islands,
#9-#12, with ZBCPs, and the field-dependent zero-bias differential conductance
is shown in Fig. 4(b). It should be mentioned that these data are recorded on
Bi islands away from vortex cores in the Fe(Te,Se) layer; otherwise, the
tunneling spectra behave very differently from those measured at neighboring
fields. The curves in Fig. 4(b) share similar features with varying
magnetic fields: the intensity of the ZBCP increases with increasing field and
reaches its maximum at about 2 T, and afterward the ZBCP magnitude decreases
rapidly with the increase of the field. Therefore, the non-monotonic field-
dependent evolution of the ZBCP magnitude is a common property on these Bi
islands grown on Fe(Te,Se).
## IV Discussion
As presented above, we have investigated the magnetic-field dependence of the
ZBCP magnitude in some monolayer Bi islands grown on the superconducting
FeTe0.55Se0.45 substrate. On these specific islands with sizes of 4-8 nm,
the zero-energy peak is a common feature of the spectra measured throughout
the entire island. Upon applying magnetic fields, the ZBCPs remain
fixed at zero energy and do not split. Additionally, the width of the ZBCPs
does not broaden. These features exclude the trivial origin of the ZBCPs
caused by Yu-Shiba-Rusinov states or the Andreev bound states ChenXY , and are
consistent with the characteristics of MZMs as reported previously Jiaprl ; IFI
; ChenXY ; Adatom . Thus, the ZBCPs on the Bi islands probably have a
topologically non-trivial origin. As discussed before ChenXY , the ZBCPs may
be due to the magnetic moment just below the particular Bi island. The
magnetic moment can be induced by an interstitial iron atom, and it leads to
the time-reversal symmetry breaking AliceaReview . However, in the present
work, the ZBCP magnitude increases with the increase of magnetic fields and
reaches its maximum at about 2 T; afterwards, the magnitude decreases with
further increase of the fields. This is very different from the situation of
excess iron impurities, in which the ZBCP magnitude is robust at fields as high
as 8 T IFI . In addition, the effective range of an excess iron atom is very
limited, with a radius of about 1 nm IFI , which is much smaller than the size
of the Bi islands. Therefore, the excess iron atom can be excluded as the
origin of the ZBCP, and the ZBCP is likely caused by topological
superconductivity induced on the Bi island with strong spin-orbit coupling.
From our previous study ChenXY , the zero-energy states observed on these Bi
islands may be caused by two counter-propagating topological edge states TSC3D
emerging at the edge of the islands. The edge states usually exhibit spatial
oscillations in the form of a Bessel function, and the real-space period is
approximately equal to $\pi/k_{\mathrm{F}}$ HaoN1 . Since the Fermi
vector $k_{\mathrm{F}}$ is very small in Bi, the real-space period can be on
the scale of several nanometers. When the size of the island is suitable, the
pair of edge states may form an interference resonance that manifests as the
ZBCP. Based on this picture, the applied magnetic fields can tune the real-
space period of the oscillation as well as the decay parameter of the edge
state toward the inner part of the island HaoN1 , which may help to increase
the ZBCP magnitude at fields lower than 2 T. In comparison, the ZBCP magnitude
decreases monotonically with increasing magnetic field in the cases of the
iron impurity IFI ; HaoN2 , the magnetic monolayer film in the trilayer
heterostructure HaoN1 , and the vortex core in Fe(Te,Se) ChenXY .
Another possible origin of the non-monotonic evolution of the ZBCP magnitude
is the vortex cores in the Fe(Te,Se) substrate. Although our tunneling
spectra are recorded when the Bi island is away from the vortex cores in
Fe(Te,Se), there are also some vortex cores nearby; one of them is shown in
the upper right corner of Fig. 3(d). The superconducting current surrounding
the vortex cores may pass through the Fe(Te,Se) underneath, which may
enhance the edge states on the Bi island and induce an increment of the ZBCP
magnitude. At higher magnetic fields, the proximity-induced superconductivity
is strongly suppressed by the fields, and the ZBCP magnitude decreases.
Clearly, further theoretical consideration is highly desired to understand the
non-monotonic relationship between the ZBCP magnitude and the magnetic field
quantitatively, as well as the reason why ZBCPs only appear on specific Bi
islands with particular sizes/shapes.
## V Conclusion
In summary, we have observed zero-energy modes on certain Bi islands with
diameters of 4-8 nm deposited on the FeTe0.55Se0.45 substrate. These zero-
energy modes may be MZMs and can be found throughout the entire region on
these islands. Further measurements on these islands under varying magnetic
fields reveal an unusual behavior of the MZM magnitude. Specifically, the
magnitude is initially enhanced at weak fields lower than 2 T, but then
becomes suppressed as the field strength increases. Concerning the probable
origin of the zero-energy modes discussed above, the anomalous evolution is
probably caused by the magnetic-field tuning of the edge states or influenced
by the superconducting currents surrounding the vortex cores in the Fe(Te,Se)
substrate. Our findings will stimulate theoretical efforts for understanding
the mechanism of the bismuth topological systems and shed new light on the
topological property of Majorana zero modes under varying magnetic fields.
###### Acknowledgements.
We appreciate very useful discussions with Jörg Schmalian, Ning Hao, and
Haijun Zhang. The work was supported by the National Natural Science
Foundation of China (Grants No. 11974171, No. 12061131001, and No. 11927809)
and the National Key R&D Program of China (Grant No. 2022YFA1403201).
∗<EMAIL_ADDRESS>†<EMAIL_ADDRESS>
## References
* (1) X. L. Qi and S. C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
* (2) M. Sato and Y. Ando, Rep. Prog. Phys. 80, 076501 (2017).
* (3) C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. 80, 1083 (2008).
* (4) L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008).
* (5) H.-H. Sun and J.-F. Jia, npj Quantum Mater. 2, 34 (2017).
* (6) J.-P. Xu, M.-X. Wang, Z. L. Liu, J.-F. Ge, X. Yang, C. Liu, Z. A. Xu, D. Guan, C. L. Gao, D. Qian, Y. Liu, Q.-H. Wang, F.-C. Zhang, Q.-K. Xue, and J.-F. Jia, Phys. Rev. Lett. 114, 017001 (2015).
* (7) M. Chen, X. Chen, H. Yang, Z. Du, and H.-H. Wen, Sci. Adv. 4, eaat1084 (2018).
* (8) D. F. Wang, L. Y. Kong, P. Fan, H. Chen, S. Y .Zhu, W. Y. Liu, L. Cao, Y. J. Sun, S. X. Du, J. Schneeloch, R. D. Zhong, G. D. Gu, L. Fu, H. Ding, and H.-J. Gao, Science 362, 333 (2018).
* (9) T. Machida, Y. Sun, S. Pyon, S. Takeda, Y. Kohsaka, T. Hanaguri, T. Sasagawa, and T. Tamegai, Nat. Mater. 18, 811 (2019).
* (10) N. Hao and J. Hu, Natl. Sci. Rev. 6, 213 (2018).
* (11) Q. Liu, C. Chen, T. Zhang, R. Peng, Y. J. Yan, C. H. P. Wen, X. Lou, Y. L. Huang, J. P. Tian, X. L. Dong, G. W. Wang, W. C. Bao, Q. H. Wang, Z. P. Y, Z. X. Zhao, and D. L. Feng, Phys. Rev. X 8, 041056 (2018).
* (12) W. Liu, L. Cao, S. Zhu, L. Kong, G. Wang, M. Papaj, P. Zhang, Y.-B. Liu, H. Chen, G. Li, F. Yang, T. Kondo, S. Du, G.-H. Cao, S. Shin, L. Fu, Z. Yin, H.-J. Gao, and H. Ding, Nat. Commun. 11, 5688 (2020).
* (13) M. Li, G. Li, L. Cao, X. Zhou, X. Wang, C. Jin, C.-K. Chiu, S. J. Pennycook, Z. Wang, and H.-J. Gao, Nature (London) 606, 890 (2022).
* (14) Y. Yuan, J. Pan, X. Wang, Y. Fang, C. Song, L. Wang, K. He, X. Ma, H. Zhang, F. Huang, W. Li, and Q.-K. Xue, Nat. Phys. 15, 1046 (2019).
* (15) Z. Liang, X. Hou, F. Zhang, W. Ma, P. Wu, Z. Zhang, F. Yu, J.-J. Ying, K. Jiang, L. Shan, Z. Wang, and X.-H. Chen, Phys. Rev. X 11, 031026 (2021).
* (16) M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
* (17) Z. Wang, J. O. Rodriguez, L. Jiao, S. Howard, M. Graham, G. D. Gu, T. L. Hughes, D. K. Morr, and V. Madhavan, Science 367, 104 (2020).
* (18) L. Jiao, S. Howard, S. Ran, Z. Wang, J. O. Rodriguez, M. Sigrist, Z. Wang, N. P. Butch, and V. Madhavan, Nature (London) 579, 523 (2020).
* (19) J. Alicea, Rep. Prog. Phys. 75, 076501 (2012).
* (20) V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. Bakkers, and L. P. Kouwenhoven, Science 336, 1003 (2012).
* (21) M. T. Deng, S. Vaitiekenas, E. B. Hansen, J. Danon, M. Leijnse, K. Flensberg, J. Nygård, P. Krogstrup, and C. M. Marcus, Science 354, 1557 (2016).
* (22) S. Nadj-Perge, I. K. Drozdov, J. Li, H. Chen, S. Jeon, J. Seo, A. H. MacDonald, B. A. Bernevig, and A. Yazdani, Science 346, 602 (2014).
* (23) S. Kezilebieke, M. N. Huda, V. Van̆o, M. Aapro, S. C. Ganguli, O. J. Silveira, S. Głodzik, A. S. Foster, T. Ojanen, and P. Liljeroth, Nature (London) 588, 424 (2020).
* (24) A. Palacio-Morales, E. Mascot, S. Cocklin, H. Kim, S. Rache, D. K. Morr, and R. Wiesendanger, Sci. Adv. 5, eaav6600 (2019).
* (25) G. C. Ménard, S. Guissart, C. Brun, R. T. Leriche, M. Trif, F. Debontridder, D. Demaille, D. Roditchev, P. Simon, and T. Cren, Nat. Commun. 8, 2040 (2017).
* (26) S. Ding, C. Chen, Z. Cao, D. Wang, Y. Pan, R. Tao, D. Zhao, Y. Hu, T. Jiang, Y. Yan, Z. Shi, X. Wan, D. Feng, and T. Zhang, Sci. Adv. 8, eabq4578 (2022).
* (27) I. K. Drozdov, A. Alexandradinata, S. Jeon, S. Nadj-Perge, H. Ji, R. J. Cava, B. A. Bernevig, and A. Yazdani, Nat. Phys. 10, 664 (2014).
* (28) B. Jäck, Y. Xie, J. Li, S. Jeon, B. A. Bernevig, and A. Yazdani, Science 364, 1255 (2019).
* (29) X. Chen, M. Chen, W. Duan, H. Yang, and H.-H. Wen, Nano Lett. 20, 2965 (2020).
* (30) Y. Liu and C. T. Lin, J. Supercond. Nov. Magn. 24, 183 (2011).
* (31) Y. Lu, W. Xu, M. Zheng, G. Yao, L. Shen, M. Yang, Z. Luo, F. Pan, K. Wu, T. Das, J. Jiang, J. Martin, Y. P. Feng, H. Lin, and X.-S. Wang, Nano Lett. 15, 80 (2015).
* (32) M. Chen, X. Chen, H. Yang, Z. Du, X. Zhu, E. Wang, and H.-H. Wen, Nat. Commun. 9, 970 (2018).
* (33) H. Lei, R. Hu, E. S. Choi, J. B. Warren, and C. Petrovic, Phys. Rev. B 81, 094518 (2010).
* (34) Ø. Fischer, M. Kugler, I. Maggio-Aprile, C. Berthod, and C. Renner, Rev. Mod. Phys. 79, 353 (2007).
* (35) J. E. Hoffman, Rep. Prog. Phys. 74, 124513 (2011).
* (36) X. Chen, M. Chen, W. Duan, X. Zhu, H. Yang, and H.-H. Wen, arXiv:1909.01686 (2019).
* (37) J. X. Yin, Z. Wu, J. H. Wang, Z. Y. Ye, J. Gong, X. Y. Hou, L. Shan, A. Li, X. J. Liang, X. X. Wu, J. Li, C. S. Ting, Z. Q. Wang, J. P. Hu, P. H. Hor, H. Ding, and S. H. Pan, Nat. Phys. 11, 543 (2015).
* (38) P. Fan, F. Yang, G. Qian, H. Chen, Y. Y. Zhang, G. Li, Z. Huang, Y. Xing, L. Kong, W. Liu, K. Jiang, C. Shen, S. Du, J. Schneeloch, R. Zhong, G. Gu, Z. Wang, H. Ding, and H. J. Gao, Nat. Commun. 12, 1348 (2021).
* (39) A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Phys. Rev. B 78, 195125 (2008).
* (40) R. Song and N. Hao, Phys. Rev. B 108, L100509 (2023).
* (41) R. Song, P. Zhang, X.-T. He, and N. Hao, Phys. Rev. B 106, L180504 (2022).
Valentin Churavy, Computer Science and Artificial Intelligence Laboratory,
Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
# Bridging HPC Communities through the Julia Programming Language
Valentin Churavy$^{1}$, William F. Godoy$^{2}$, Carsten Bauer$^{3}$, Hendrik
Ranocha$^{4}$, Michael Schlottke-Lakemper$^{5,6}$, Ludovic Räss$^{7,8}$,
Johannes Blaschke$^{9}$, Mosè Giordano$^{10}$, Erik Schnetter$^{11,12,13}$,
Samuel Omlin$^{14}$, Jeffrey S. Vetter$^{2}$, Alan Edelman$^{1}$
$^{1}$Massachusetts Institute of Technology, USA
$^{2}$Oak Ridge National Laboratory, USA
$^{3}$Paderborn Center for Parallel Computing, Paderborn University, Germany
$^{4}$Department of Mathematics, University of Hamburg, Germany
$^{5}$Applied and Computational Mathematics, RWTH Aachen University, Germany
$^{6}$High-Performance Computing Center Stuttgart (HLRS), University of Stuttgart, Germany
$^{7}$Laboratory of Hydraulics, Hydrology and Glaciology (VAW), ETH Zurich, Switzerland
$^{8}$Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), Birmensdorf, Switzerland
$^{9}$National Energy Research Scientific Computing Center, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA
$^{10}$Centre for Advanced Research Computing, University College London, Gower Street, London, WC1E 6BT, United Kingdom
$^{11}$Perimeter Institute, 31 Caroline St. N., Waterloo, ON, Canada N2L 2Y5
$^{12}$Department of Physics and Astronomy, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1
$^{13}$Center for Computation & Technology, Louisiana State University, Baton Rouge, LA 70803, USA
$^{14}$Swiss National Supercomputing Centre (CSCS), ETH Zurich, Switzerland
<EMAIL_ADDRESS>
(2022-11-01)
###### Abstract
The Julia programming language has evolved into a modern alternative to fill
existing gaps in scientific computing and data science applications. Julia
leverages a unified and coordinated single-language and ecosystem paradigm and
has a proven track record of achieving high performance without sacrificing
user productivity. These aspects make Julia a viable alternative to high-
performance computing’s (HPC’s) existing and increasingly costly many-body
workflow composition strategy in which traditional HPC languages (e.g.,
Fortran, C, C++) are used for simulations, and higher-level languages (e.g.,
Python, R, MATLAB) are used for data analysis and interactive computing.
Julia’s rapid growth in language capabilities, package ecosystem, and
community make it a promising universal language for HPC. This paper presents
the views of a multidisciplinary group of researchers from academia,
government, and industry that advocate for an HPC software development
paradigm that emphasizes developer productivity, workflow portability, and low
barriers for entry. We believe that the Julia programming language, its
ecosystem, and its community provide modern and powerful capabilities that
enable this group’s objectives. Crucially, we believe that Julia can provide a
feasible and less costly approach to programming scientific applications and
workflows that target HPC facilities. In this work, we examine the current
practice and role of Julia as a common, end-to-end programming model to
address major challenges in scientific reproducibility, data-driven AI/machine
learning, co-design and workflows, scalability and performance portability in
heterogeneous computing, network communication, data management, and community
education. As a result, the diversification of current investments to fulfill
the needs of the upcoming decade is crucial as more supercomputing centers
prepare for the exascale era.
###### keywords:
High-Performance Computing, HPC, Julia, Programming Language, Workflows,
Productivity, Performance Portability
## 1 Introduction
The Julia programming language (Bezanson et al., 2018) was designed in the
last decade to be a novel, high-level, dynamic, and high-performance approach
to numerical computing. Julia programs compile as efficient native code for
several heterogeneous architectures via the open-source LLVM compiler (Lattner
and Adve, 2004). The syntax builds upon the success of Fortran for
multidimensional arrays and mathematical abstractions (Backus and Heising,
1964) and combines with a rich ecosystem that includes high-level interfaces
for data structures, analysis, visualization, AI frameworks, and interactive
computing. Julia was also designed to address aspects that are typically
offloaded to a language ecosystem but are still necessary in the overall
scientific discovery process (e.g., reproducibility, packaging, environment
portability). Julia also includes a powerful macros system for code
instrumentation, interactive computing capabilities, and lightweight
interoperability with existing C and Fortran codes---especially highly
optimized high-performance computing (HPC) software frameworks and libraries.
Julia offers a powerful workflow composition strategy because existing highly
optimized HPC frameworks can be combined seamlessly with high-performance
Julia kernel code for computation and data management on heterogeneous
systems. This creates a powerful synergy for programming HPC systems as more
emphasis is placed on performance portability and programmer productivity in
the overall workflow process, beyond simulations (Ben-Nun et al., 2020).
Software development that targets HPC facilities for scientific discovery is a
nontrivial and highly specialized task (Parashar et al., 1994). Efficient use
of HPC facilities for computational science and engineering (CSE) is a
multidisciplinary orchestration among several stakeholders. This process
requires intimate knowledge of the application’s target domain, the targeted
system’s architecture, and the algorithms in the frameworks and libraries that
handle the scalable computation, communication, and data performance aspects
within the co-design process. As we reach the physical limits of Moore’s Law
in semiconductor technology (Moore, 1998; Shalf and Leland, 2015), several
heterogeneous architectures and programming models have emerged (Vetter et
al., 2018) during a time in which the first exascale systems are being
deployed for the HPC community. On the software technology side, major vendors
have converged around the LLVM open-source project (Lattner and Adve, 2004) as
the back-end technology of choice for their plethora of compilers and
programming models. LLVM’s modularity, reusability, and platform-agnostic
intermediate representation (IR) enables the desired productivity and
performance portability characteristics. At the same time, custom hardware
accelerators are powering the computational demands associated with AI
applications at a wide range of scales. Consequently, the current landscape
offers unique opportunities to rethink traditional HPC aspects such as end-to-
end co-design for performance portability of complex workflows, large-scale
rapid prototyping, and collaboration with dominant cloud and mobile computing
ecosystems (Reed et al., 2022).
The present work outlines our view that Julia can challenge the current status
quo---in which high-level languages designed with productivity in mind cannot
easily achieve the desired levels of performance---while also reducing the
costs associated with the learning curve, implementation, and maintenance of
an infrastructure based on compiled HPC languages. Much of Fortran’s success
can be attributed to providing an answer to the original question (Backus,
1980): ‘‘Can a machine translate a sufficiently rich mathematical language
into a sufficiently economical program at a sufficiently low cost to make the
whole affair feasible?’’ Julia attempts to solve a similar technical and
economical challenge according to the current landscape by expanding on the
traditional HPC focus of simulation performance towards workflow applications.
Just like Fortran has been the dominant language for science in the last
several decades, Julia can be seen as a unifying domain-specific language
(DSL) for science that targets modern HPC requirements for simulations, data
analysis, workflows, and interactive computing. The expected return on
investment for leveraging Julia is an increase in productivity when addressing
the end-to-end co-design needs of multidisciplinary HPC projects, without a
drop in performance portability, while also keeping development in a single
unifying language and ecosystem. The latter is particularly important in the
convergence of AI + HPC workflows for science as AI has been one of the
primary drivers in computational sciences in the past decade (Stevens et al.,
2020).
The rest of the paper describes what makes the Julia language an attractive
investment for scientific discovery with HPC. Section 2 provides background
information on the history and efforts around programming languages for HPC,
including initiatives that led to the proliferation of current programming
models. Section 3 describes the community adoption, interest in leadership
facilities around the world, and the package development and deployment
process to enable reproducible science at those centers. Section 4 outlines
the value of Julia as a first language for teaching HPC concepts. Performance
and scalability, which are key aspects of HPC’s ethos, are described in
Section 5, including experiences in heterogeneous architectures that combine
the power of CPUs and GPUs (graphics processing units). Section 6 presents an
overview of Julia success stories, including recent research studies that
describe performance aspects and community adoption in the broader field of
CSE. Section 7 describes the central aspect of Julia’s interoperability with C
and Fortran that allows access to highly optimized HPC frameworks, along with
reusability with Python’s existing frameworks, for a powerful workflow
composability strategy. Section 8 summarizes our conclusions and vision for
Julia and potential opportunities and investments for the HPC community.
## 2 Background
The development of programming languages for HPC has a rich and varied
history. Early on, the needs of HPC and mainstream computing were mostly
aligned around number crunching for numerical calculations, which led to the
development of Fortran (Backus and Heising, 1964) as the first high-level HPC
language in the 1950s. To this day, Fortran continues strongly as a leading
programming language for HPC owing to its legacy of investments and highly
optimized implementations (Kedward et al., 2022). As computing evolved and
added more requirements at the system level to perform data movement, parallel
processing, analysis, and visualization, C (Kernighan and Ritchie, 1988) and
C++ (Stroustrup, 2013) became the dominant system-level and numerical
computing languages in HPC.
At the beginning of the 21st century, the Defense Advanced Research Projects
Agency’s (DARPA’s) High-Productivity Computing Systems (HPCS) program
(Dongarra et al., 2008) described the common practice for HPC software as
writing kernels in a compiled sequential language (e.g., Fortran, C, C++) and
then parallelizing them in a memory-distributed model based on the standard
Message Passing Interface (MPI) (Gropp et al., 1999). HPCS funded an effort to
develop new programming languages that targeted productivity, and this
resulted in Cray’s Chapel Parallel Programming Language (Chamberlain et al.,
2007), IBM’s X10 (Saraswat et al., 2007), and Sun’s Fortress (Allen et al.,
2005). Other efforts included those based on Fortran and C extensions, such as
Coarray Fortran (Numrich and Reid, 1998) and Unified Parallel C (El-
Ghazawi et al., 2005). In general, these new programming languages offered an
alternative to traditional message passing and multithreaded programming
models by using approaches such as partitioned global address space (El-
Ghazawi et al., 2005; Almasi, 2011).
The past decade has seen several disruptive trends that led to the current
landscape of extreme heterogeneity: (1) the emergence and adoption of GPU
computing as a disruptive technology in HPC (Kindratenko et al., 2009) owing
to its performance, programmability, and energy efficiency (Enos et al.,
2010); (2) the flattening of Moore’s Law in the CMOS technology manufacturing
industry; and (3) the adoption of LLVM as the compiler of choice from major
vendors. These trends have led to the proliferation of new standardized,
vendor-specific, and third-party programming models in the past decade. These
models target HPC languages used to manage the increased heterogeneity of
contemporary systems: OpenCL (Munshi, 2009), CUDA (Buck, 2007), HIP (AMD,
2008), OpenMP (Dagum and Menon, 1998), OpenACC (Wienke et al., 2012), SYCL
(Reyes and Lomüller, 2016), Kokkos (Carter Edwards et al., 2014), and RAJA
(Beckingsale et al., 2019) among others.
Overall, programming languages used in HPC are not specifically designed for
science, with Fortran being the exception. This has been a sustainable model
owing to vendor and community support, especially for C++ and Python as
rapidly evolving general-purpose languages. The HPC software stacks funded by
the US Department of Energy’s (DOE’s) Exascale Computing Project (ECP) (Heroux
et al., 2018; Heroux, 2019; Dongarra et al., 2011) have continued to build
upon the legacy of Fortran, C, and C++, and Python’s high-productivity
ecosystem has been widely adopted for data analysis, AI, and workflow
composition (Straßel et al., 2020). Ousterhout (1998) already observed the
split of programming languages into two distinct groups: _implementation_ and
_scripting_. It was anticipated that scripting language interfaces that glue
together the underlying system components would become a dominant model with
trade-offs and challenges of its own. A major challenge is the bifurcation of
the different communities and the high cost for learning and maintaining
multiple technologies and ecosystems. This is even more noticeable in the era
of AI because frameworks such as TensorFlow (Abadi et al., 2015), PyTorch
(Paszke et al., 2019), JAX (Bradbury et al., 2018), and Firedrake (Bercea et
al., 2016) target end users in high-productivity languages. Closing the gaps
between HPC’s needs and ease of use is a nontrivial effort that adds overhead
costs (Zhu et al., 2021; Lavrijsen and Dutta, 2016).
Julia was designed to prioritize research and development cycles from idea to
performance portability for scientific discovery. Reducing the overhead
development costs in this landscape is crucial as future systems become more
complex and heterogeneous. The unified language approach builds upon the
requirements of the scientific communities that are facing these challenges.
In this regard, Julia has attracted domain scientists and practitioners from
multiple disciplines to create a community that continues to grow and
establish synergistic collaborations. We propose that Julia is a sustainable
investment for HPC software projects as future challenges continue to add
costs to the scientific discovery objectives that drive and justify the large
strategic investments in these systems.
## 3 Community
The Julia language community is made up of many people working in various
scientific and technical domains, and even the original Julia
manifesto111https://julialang.org/blog/2012/02/why-we-created-julia/, accessed
08-16-2022. described the target demographic as including scientific
computing, machine learning, data mining, large-scale linear algebra, and
distributed and parallel computing. The umbrella term for these domains is
technical computing.
The original developers of Julia aimed to design an open-source language to
tackle problems in technical computing, and from there the community has grown
to encompass a wide variety of use cases---from web servers, to databases, to
numerical simulations on HPC systems. Although Julia is now recognized as a
general-purpose programming language, the early focus on technical computing
is still apparent. Common challenges for people working in technical computing
are reproducibility and software distribution, and we will discuss these
problems in Section 3.1. The rest of this section focuses on the HPC
subdemographic of the Julia community (Section 3.2), Julia at the National
Energy Research Scientific Computing Center (NERSC) (Section 3.3), and the HPC
centers around the world (Section 3.4).
### 3.1 Package development and reproducibility
Julia was specifically designed to fulfill the Fortran dream of automating the
translation of formulas into efficient executable code (Bezanson et al.,
2017). Additionally, Julia addresses the two-language problem by closing the
gap between developers and users of scientific software. This is achieved with
an intuitive language and by providing users with tools to more easily follow
good, modern programming practices---including documentation, testing, and
continuous integration. A recent survey of the packages collected in the
General registry showed a strong adoption of these practices: over 95% of
packages had tests and ran them with continuous integration services, and
almost 90% of packages had documentation (Hanson and Giordano, 2021). The
adoption of these practices is also made simpler by package templates such as
those provided by PkgTemplates.jl (de Graaf and contributors, 2022).
Building on the experience of other languages, Julia comes with a built-in
package manager, Pkg.jl, which can install packages and manage package
environments similar to the concept of virtual environments in Python. Julia
package environments are defined by two text files: Project.toml and
Manifest.toml. Project.toml specifies the list of direct dependencies of an
environment and their compatibility constraints. Manifest.toml captures all
direct and indirect dependencies of the environment and uses the appropriate
versions of each software module for the present environment. When both files
are provided, they fully define a computational environment, and this
environment can then be recreated later or on a different machine. We use
these features in the reproducibility repository described in this paper
(Churavy et al., 2022).
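As a minimal sketch of this workflow (the package name Example and the working
directory are placeholders), an environment can be created, populated, and
later reproduced from its two files as follows:
```julia
using Pkg

# Create and activate an environment in the current directory;
# this writes Project.toml (and later Manifest.toml) here.
Pkg.activate(".")

# Add a dependency; Pkg records it in Project.toml and resolves the
# full dependency graph into Manifest.toml.
Pkg.add("Example")

# On another machine (or later in time), the same two files are enough
# to recreate the environment exactly:
Pkg.activate(".")
Pkg.instantiate()   # installs the exact versions recorded in Manifest.toml
```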
Julia packages are set up as Git repositories that can be hosted on any Git
hosting services. Many development tools, including continuous integration
tools and online package documentation solutions, are well integrated with
GitHub and GitLab, which are the two most popular repository hosting services
within the Julia community. All versions of packages recorded in the General
registry are automatically duplicated by the servers used by Pkg.jl to prevent
deleted packages from taking their dependents out with them---an unfortunate
scenario that played out with the left-pad JavaScript package (Williams,
2016).
Julia allows for writing an entire software stack in a single language thanks
to its unique combination of ease-of-use and speed. However, Julia users often
want to use legacy code already written in other languages, such as C, C++,
Fortran, Python, or R. Julia offers the capability to call functions in shared
libraries written in C and Fortran and libraries written in any other
languages that provide a C-like interface. Third-party packages such as
Clang.jl (Norton et al., 2022) and CBinding.jl (Rutkowski, 2022) enable the
automatic creation of Julia bindings for C libraries by parsing their header
files. Some packages enable other languages to be used directly from within a
Julia process, including but not limited to PyCall.jl (Johnson and
contributors, 2022) and PythonCall.jl (Rowley, 2022) for Python, RCall.jl (Lai
and contributors, 2022) for R, and MATLAB.jl (Mohamad and contributors, 2022)
for MATLAB. CxxWrap.jl (Janssens, 2022) makes it possible to interface C++
shared libraries by using a static binding generator.
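As a small illustration of the built-in foreign function interface (a sketch
assuming the C math library is reachable as libm on the system), a C function
can be called directly without any wrapper code:
```julia
# Call cos() from the C math library: (function, library), return type,
# argument types, arguments.
c = ccall((:cos, "libm"), Cdouble, (Cdouble,), 1.0)

# The same call using the more readable @ccall macro (Julia >= 1.5).
c2 = @ccall "libm".cos(1.0::Cdouble)::Cdouble

@assert c ≈ c2 ≈ cos(1.0)
```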
Within the Julia ecosystem, binary libraries and executables are usually
managed with BinaryBuilder.jl (Saba and contributors, 2022). This framework
allows package developers to compile pre-built versions of the binaries for
all Julia-supported platforms and then upload them to GitHub. The
corresponding and automatically generated packages, called JLLs, provide a
programmatic interface to call into libraries or run executables. The JLLs are
regular Julia packages that, when installed, automatically download the
corresponding libraries or executables, thus relieving users from the effort
of installing or compiling external libraries themselves. That the JLLs are
regular Julia packages also means that they can be recorded in the package
environment, thus extending the reproducibility of a computing environment to
libraries and programs in other languages. The BinaryBuilder.jl framework is
usually seen as successful because it provides straightforward handling of
external libraries in the general case. This may cause some friction in HPC
settings in which users would like to leverage the system’s fine-tuned
libraries. However, there are mechanisms to override the pre-built libraries
provided by JLL packages while still using their programmatic interface.
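As a brief sketch of how a JLL package is used (assuming the Zlib_jll package,
which ships a pre-built zlib and exports the library handle libz), the bundled
library can be called like any other shared library:
```julia
using Zlib_jll   # installs a pre-built zlib for the host platform

# Query the version string exposed by the bundled library via ccall.
version = unsafe_string(ccall((:zlibVersion, libz), Cstring, ()))
println("Bundled zlib version: ", version)
```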
### 3.2 Uptake of Julia in the HPC community
As Julia places performance at the core of the language, the HPC community has
been among the early adopters of the Julia language. Notable examples of early
HPC readiness are the petaflop runs at DOE’s NERSC (HPCWire, 2017). The
Celeste Julia code, which analyzes astronomical images, achieved 1.54
petaflops using 1.3 million threads on 9,300 Knights Landing (KNL) nodes of
the Cori supercomputer. At the time, this represented an important milestone
because experimental and observational science workflows are typically coded
using high-productivity interpreted languages that are optimized for rapid
prototyping but not for performance. These scientific domains have some of the
highest adoption rates for Julia and rely on rapid prototyping, complex
workflows, and interactive computing.
### 3.3 A detailed look at Julia use at NERSC
NERSC is a DOE user facility with approximately 8,000 users. Most users are
employed at universities and DOE laboratories, and half are early career
scientists, including graduate students and postdocs. Projects using NERSC’s
HPC systems are funded by DOE program offices: Basic Energy Sciences, High-
Energy Physics, Biological and Environmental Research, Fusion Energy Sciences,
Nuclear Physics, Advanced Computing Research, and Small Business Innovation
Research. Owing to this breadth of research, a survey of NERSC users provides
insights into a broad research community.
NERSC monitors the use of the module load julia command (among many others)
with MODS (Monitoring of Data Systems). MODS captures workflows that use
NERSC’s official Julia install---users that install their own version of Julia
are not tracked. MODS reports that 132 unique, non-staff users loaded a Julia
module at least once in 2021. MODS also shows a gradual increase in Julia
module usage at NERSC, but this view is limited. To see a clearer picture of
the community’s future plans, we surveyed NERSC users and received 415
responses. Most responded within the first 2 days, thereby indicating strong
interest. The survey results showed that 44% of respondents are planning to
use Julia (Figure 1).
Figure 1: NERSC user survey: 44% of all respondents (415 NERSC users) plan to
use Julia in the future. Of those, 44% plan to use Julia at NERSC.
### 3.4 User support and interest at major HPC centers
Julia is supported by several major HPC centers surveyed in the United States
and Europe (see Table 1). Official support at HPC centers takes the form of
(1) inclusion of Julia and possibly packages in the official module tree; (2)
site-specific configurations (e.g., MPI, I/O); (3) official user
documentation; and (4) support for user trouble tickets.
Center Name | System Names | P | U | I | D | CPU Architecture | Accelerators
---|---|---|---|---|---|---|---
Australasia | | | | | | |
NeSI | Mahuika, Māui | ✓ | ✓ | ✓ | ✓ | Intel Broadwell, Intel Cascade Lake, AMD Milan | NVIDIA P100, NVIDIA P100
Europe | | | | | | |
ARC (UCL) | Myriad, Kathleen, Michael, Young | ✓ | ✓ | | ✓ | Various Intel Xeon | Various GPUs
CSC (EuroHPC) | LUMI | ✓ | ✓ | | ✓ | AMD Milan | AMD M250X
CSCS | Piz Daint | ✓ | ✓ | ✓ | ✓ | Intel Broadwell, Intel Haswell | NVIDIA P100
DESY IT | Maxwell | ✓ | | ✓ | ✓ | Various AMD Epyc Various Intel Xeon | Various GPUs
HLRS | Hawk | ✓ | ✓ | ✓ | ✓ | AMD Rome | NVIDIA A100
HPC2N (Umeå) | Kebnekaise | ✓ | ✓ | | ✓ | Intel Broadwell, Intel Skylake | NVIDIA K80, NVIDIA V100
IT4I (EuroHPC) | Karolina | ✓ | ✓ | ✓ | ✓ | AMD Rome | NVIDIA A100
IZUM (EuroHPC) | Vega | ✓ | ✓ | ✓ | ✓ | AMD Rome | NVIDIA A100
LuxProvide (EuroHPC) | MeluXina | ✓ | | ✓ | ✓ | AMD Rome | NVIDIA A100
PC2 (Paderborn) | Noctua 1 | ✓ | ✓ | ✓ | ✓ | Intel Skylake | Various GPUs
PC2 (Paderborn) | Noctua 2 | ✓ | ✓ | ✓ | ✓ | AMD Milan | NVIDIA A100, Xilinx U280, Intel Stratix 10
ULHPC (Luxembourg) | Aion, Iris | ✓ | | ✓ | ✓ | AMD Rome, Intel Broadwell, Intel Skylake | NVIDIA V100
ZDV (Mainz) | MOGON II | ✓ | | | ✓ | Intel Broadwell, Intel Skylake | None
ZIB | HLRN-IV | ✓ | ✓ | | ✓ | Intel Cascade Lake AP | NVIDIA A100, Intel PVC
North America | | | | | | |
Carnegie Mellon College of Engineering | Arjuna, Hercules | ✓ | ✓ | ✓ | ✓ | Intel Xeon, AMD Milan | NVIDIA A100, NVIDIA K80
Dartmouth College | Discovery | ✓ | | ✓ | ✓ | Various Intel Xeon, AMD Rome | NVIDIA V100
FARSC (Harvard) | Cannon | ✓ | | ✓ | ✓ | Intel Cascade Lake | NVIDIA V100, NVIDIA A100
HPC LLNL | Various Systems | ✓ | | ✓ | ✓ | Various Processors | Various GPUs
OLCF | Frontier/Crusher | ✓ | ✓ | ✓ | | AMD Epyc | AMD MI250X
NERSC | Cori | ✓ | ✓ | ✓ | ✓ | Intel Haswell, Intel KNL, Intel Skylake | NVIDIA V100
NERSC | Perlmutter | ✓ | ✓ | ✓ | ✓ | AMD Milan | NVIDIA A100
Open Science Grid | | X | ✓ | | ✓ | Various Processors | Various GPUs
Perimeter Institute for Theoretical Physics | Symmetry | ✓ | ✓ | ✓ | X | AMD Epyc, Intel Xeon, | NVIDIA A100
Pittsburgh Supercomputing Center | Bridges-2 | ✓ | ✓ | ✓ | ✓ | AMD Epyc, Intel Xeon, | NVIDIA V100
Princeton University | Several (including Tiger) | ✓ | ✓ | ✓ | ✓ | Intel Skylake, Intel Broadwell | NVIDIA P100
Table 1: August 8, 2022 snapshot of the Julia support level at different HPC
centers (current list is available at https://github.com/hlrs-tasc/julia-on-
hpc-systems). User support legend: P = official version preinstalled, U =
center provides user support (e.g., center staff answers user questions), I =
support for interactive workflows, and D = center provides documentation.
Current support at Oak Ridge National Laboratory’s Oak Ridge Leadership
Computing Facility (OLCF) (Oak Ridge Leadership Computing Facility, ) for
Summit and Crusher, which is Frontier’s test bed system, include recent Julia
versions in the user modules. Similarly, the OLCF JupyterHub interface
provides custom multithreaded Julia kernels for access to the high-performance
file systems. Although user support is available, gaps exist in the official
documentation and training (Marques and Barker, 2020), and these gaps must be
closed to make Julia a viable option for exascale computing.
## 4 Teaching
Julia’s dynamic characteristics and interactive features make it a powerful
entry-level tool for teaching, and the official Julia
website222https://julialang.org/learning/classes/ offers a selection of online
courses. Examples include the Massachusetts Institute of Technology (MIT)
modern numerical computing course, which has used Julia for a
decade333http://courses.csail.mit.edu/18.337/2018, while ETH Zurich offers a
GPU-based HPC programming class using Julia444https://pde-on-gpu.vaw.ethz.ch.
The high level of abstraction enables classroom experiences comparable to
Python or MATLAB, and the rich collection of scientific libraries spans a
broad spectrum of applications. As an answer to the two-language problem,
Julia can empower domain scientists to dive into HPC development, thereby
removing most of the usual barriers that the endeavor would encounter. As
such, Julia offers a fast track for domain scientists interested in promoting
the development of code on a high level while also offering opportunities for
further optimizations, performance engineering, and native tools for precise
code analysis.
### 4.1 Code introspection and performance engineering
In addition to Julia’s REPL (read-eval-print loop) component, interactive
interfaces such as Jupyter555Although Jupyter supports several languages, it
derives its name from three programming languages: Julia, Python, and R.
(Jupyter Development Team, 2022) and Pluto (van der Plas et al., 2022) provide
an engaging learning environment for students with a low barrier to entry.
Combined with Julia’s high-level syntax, readily available 2D and 3D
visualization packages such Plots.jl (Christ et al., 2022) and Makie.jl
(Danisch and Krumbiegel, 2021), and a built-in package manager---which also
reliably delivers binary dependencies across different operating systems---
these frameworks allow one to dive right into the concepts of interest rather
than dealing with distracting technicalities or working around missing
language features.
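As a small sketch of what such an entry-level session can look like (using the
Plots.jl package mentioned above), a first plot requires only a few lines,
whether in the REPL, Jupyter, or Pluto:
```julia
using Plots

# Plot a function directly over an interval and overlay sampled points.
plot(sin, 0, 2π; label="sin(x)", xlabel="x", ylabel="y")
scatter!(0:0.5:2π, sin.(0:0.5:2π); label="samples")
```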
At the same time, Julia’s just-ahead-of-time compilation delivers fast and
pure native code by leveraging the modular LLVM compiler infrastructure. This
distinguishes Julia from other dynamic high-level languages, which are
typically several orders of magnitude slower, and puts it in the ranks of
traditional HPC programming languages (e.g., C, Fortran) in terms of
performance and low-level interpretability. As for the latter, the built-in
introspection tools, @code_typed, @code_llvm, and @code_native, provide a
unique way to interactively explore the compilation of high-level Julia code
to intermediate LLVM-IR and low-level machine instructions. In particular,
this feature allows one to demonstrate the connection between different
variants of code and their respective performance (e.g., owing to the presence
or absence of Single Instruction Multiple Data [SIMD] vectorization). Given
Julia’s competitive speed, students can readily use the language’s interactive
capabilities to write, analyze, and improve their own domain-specific
production codes, thereby making the effort of learning Julia much more
profitable for their science.
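A minimal sketch of this introspection workflow might look as follows; the
function name axpy! is arbitrary, and the exact LLVM-IR and assembly output
depend on the Julia version and host CPU:
```julia
# A simple kernel whose compilation we want to inspect.
function axpy!(y, a, x)
    @inbounds @simd for i in eachindex(x, y)
        y[i] += a * x[i]
    end
    return y
end

x = rand(Float64, 1024)
y = zeros(Float64, 1024)

@code_typed axpy!(y, 2.0, x)                   # type-inferred Julia IR
@code_llvm debuginfo=:none axpy!(y, 2.0, x)    # LLVM-IR (look for vector instructions)
@code_native debuginfo=:none axpy!(y, 2.0, x)  # native machine code
```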
### 4.2 Transferable knowledge and experience
Teaching can be a challenging endeavor because it requires the instructor
to extract the key concepts from a complex workflow and expose them to
students as clear, simple, and concise incremental steps. Conciseness is
crucial here because reducing complexity and new concepts to the strict
minimum usually enhances focus, which in turn enables faster
learning. Teaching is mostly about introducing, exemplifying, and
exercising new concepts. Julia’s conciseness, performance, and interactive
features enable the instructor to go through all these steps with a single
code. Julia’s high-level syntax permits the instructor to efficiently
prototype new concepts into code, and that code actually executes with optimal
performance. This is important when teaching algorithmic concepts because
users/students usually do not like to wait for their algorithm to complete.
However, the story is dramatically different for HPC. In HPC, one would
ideally have some simple high-level code snippets that demonstrate
performance-oriented, often parallel and accelerator-based implementations
with a strong focus on run-time (or implementation) performance. High-level or
interpreted languages will mostly fail at this stage because the algorithm
design will remain conceptual or require a low-level implementation to fulfill
the performance expectations, thereby introducing a significant barrier in the
teaching workflow owing to the inherent complexity overhead. The same
challenges apply when targeting accelerators such as GPUs. It may be possible
to conceptually design GPU kernels in any language; however, when it comes to
testing the actual implementation in terms of performance, one would obviously
need to have a GPU-compatible code. Julia overcomes the two-language barrier
because a single high-level and concise code can capture the essence of the
algorithm or implementation of interest while still enabling high-performance
execution, be it for demonstration or production purposes. The SAXPY code
(Figure 2) exemplifies this by achieving a
memory throughput of $\sim$1,260 GB/s for a high-level broadcasting
implementation and $\sim$1,350 GB/s for compact CUDA kernel and CUBLAS
variants on an NVIDIA A100 SXM4 GPU.
Ultimately, students and users can learn about and experiment with basic and
advanced HPC concepts within the same interactive language in a portable way.
Teaching material can be prototyped on personal computers or laptops, and the
same codes can be later deployed on GPUs or HPC servers without code
duplication or explicit porting between languages. Moreover, Julia provides a
single language to enable experimenting with HPC that can be readily deployed
in domain sciences.
using CUDA
const dim = 100_000_000
const a = 3.1415
x = CUDA.ones(dim)
y = CUDA.ones(dim)
z = CUDA.zeros(dim)
# (a) SAXPY via high-level broadcasting
CUDA.@sync z .= a .* x .+ y
# (b) SAXPY via CUBLAS
CUDA.@sync CUBLAS.axpy!(dim, a, x, y)
# (c) SAXPY via CUDA kernel
function saxpy_gpu_kernel!(z, a, x, y)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(z)
        @inbounds z[i] = a * x[i] + y[i]
    end
    return nothing
end
# launch configuration
nthreads = 1024
nblocks = cld(dim, nthreads)
# execute the kernel
CUDA.@sync @cuda(
    threads = nthreads,
    blocks = nblocks,
    saxpy_gpu_kernel!(z, a, x, y)
)
Figure 2: Three different SAXPY implementations based on CUDA.jl (Besard et
al., 2018) for NVIDIA GPUs: (a) high-level variant that utilizes broadcasting
and array abstractions, (b) simple call into the cuBLAS vendor library, and
(c) custom SAXPY CUDA kernel written in and launched from Julia.
## 5 Scalability and portability
The ability to efficiently deploy a single HPC code on different architectures
and at different scales is a key feature for productivity in scientific HPC.
Julia offers features that help reduce the complexity of this task, including
multiple dispatch, zero-cost high-level abstractions, and extensive
metaprogramming capabilities. As a result, powerful low- and high-level
packages for performance-portable shared and distributed parallelization have
emerged.
### 5.1 Performance scalability
Julia’s base multithreading support and generic high-level packages (e.g.,
LoopVectorization.jl (Elrod and Lilly, 2019), SIMD.jl (Schnetter and
contributors, 2016)) enable straightforward intranode CPU parallelization.
Packages such as CUDA.jl (Besard et al., 2019), AMDGPU.jl (Samaroo et al.,
2013), and oneAPI.jl (Besard and other contributors, 2020) provide the ability
to run Julia code natively on GPUs. Various domain- and method-specific
packages (e.g., ParallelStencil.jl (Omlin and Räss, 2019), Flux.jl (Innes et
al., 2018; Innes, 2018)) simplify efficient shared-memory parallelization on
GPUs and CPUs for the targeted applications and make it accessible to domain
scientists.
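As a minimal sketch of base multithreading (assuming Julia was started with
several threads, e.g., julia --threads=8), the SAXPY operation from Figure 2
can be parallelized across CPU cores with a single macro:
```julia
using Base.Threads

# Threaded CPU SAXPY: each iteration is independent, so the loop can be
# split across the available Julia threads.
function saxpy_threads!(z, a, x, y)
    @threads for i in eachindex(z, x, y)
        @inbounds z[i] = a * x[i] + y[i]
    end
    return z
end

n = 10_000_000
x = ones(Float32, n); y = ones(Float32, n); z = zeros(Float32, n)
saxpy_threads!(z, 3.1415f0, x, y)
```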
Julia includes a generic approach to distributed computing via the
Distributed.jl module. A convenient and zero-overhead wrapper for MPI is also
available via the MPI.jl package (Byrne et al., 2021). MPI.jl supports CUDA-
and ROCm-aware MPI and enables packages that build on it to leverage remote
direct memory access (RDMA). Similarly, MPI.jl enables wrappers for MPI-based
libraries for scalable parallel I/O, such as
HDF5.jl666https://github.com/JuliaIO/HDF5.jl (Byna et al., 2017; The HDF
Group, 2000-2010) and the more streaming-oriented
ADIOS2.jl777https://github.com/eschnett/ADIOS2.jl (Godoy et al., 2020), for
data storage and streaming at scale. As for shared memory parallelization,
high-level packages can render distributed parallelization simple and
efficient for certain classes of applications. Examples include
ImplicitGlobalGrid.jl (Omlin et al., 2019), which builds on MPI.jl and renders
efficient RDMA-enabled distributed parallelization of stencil-based GPU and
CPU applications on a regular staggered grid almost trivial, and
DistributedArrays.jl (Contributors, 2015), which is a global-array interface
that relies on the Distributed.jl module.
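A minimal distributed sketch using the MPI.jl package mentioned above
(launched with, e.g., mpiexec -n 4 julia script.jl) illustrates how closely
the wrapper follows the familiar MPI interface:
```julia
using MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Each rank contributes one value; Allreduce sums them on every rank.
local_value = Float64(rank + 1)
total = MPI.Allreduce(local_value, +, comm)

rank == 0 && println("Sum over $nranks ranks: $total")
MPI.Finalize()
```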
By combining high-level Julia packages for shared and distributed computing
(e.g., ParallelStencil.jl, ImplicitGlobalGrid.jl), a single high-level HPC
code can be readily deployed on a single CPU core or on thousands of CPUs or
GPUs. The weak scaling of a Julia-based, coupled, hydro-mechanical 3D
multiphysics solver achieves a parallel efficiency of more than 95% on 1--
1,024 NVIDIA Tesla P100 GPUs on the Piz Daint Cray XC50 supercomputer at the
Swiss National Supercomputing Centre (Figure 3, adapted from Omlin et al.
(2020)). These results were confirmed recently by close-to-ideal weak scaling
achievements on up to 2,197 P100 GPUs (Räss et al., 2022). The solver was
written in CUDA C using MPI (blue data) and translated to Julia (red data) by
using ParallelStencil.jl and ImplicitGlobalGrid.jl. On a single node, the
Julia solver achieved 90% of the CUDA C solver’s performance (after the
initial direct translation) without extensive Julia language--specific
optimizations. It should be noted that we apply a strict definition of
parallel efficiency, in which the reference performance for one GPU is given
by the best known serial implementation in CUDA C and Julia. As a result, the
reported parallel efficiency for one GPU is below 100%, and this accounts for
the performance loss caused by splitting boundary and inner-point calculations
to enable communication/computation overlap (see Räss et al. (2019) for
details). This performance loss was more significant for the CUDA C
experiments than for the Julia experiments because less-refined parameters
were used for the definition of the computation splitting. Thus, the results
obtained with CUDA C could certainly be improved by redoing the experiments
with better-suited parameters.
Figure 3: Parallel efficiency of a weak-scaling benchmark using 1 to 1,024
NVIDIA P100 GPUs on the Piz Daint Cray XC50. The blue and orange surfaces
visualize the 95% confidence interval of the reported medians. Adapted from
Omlin et al. (2020). The raw data and plotting script are available in the
reproducibility repository (Churavy et al., 2022).
### 5.2 Performance portability
Julia’s performance portability story unfolds along several main threads.
First, Julia is capable of retargeting the language at a low level for diverse
platforms and accelerators. Second, library writers can use Julia’s
capabilities to build powerful abstractions. Last but not least, a common
array abstraction allows for high-level performance-portable codes.
At the core of Julia’s infrastructure sits a flexible and extensible compiler
design and a multiple-dispatch language feature that enables code
specialization for a given run-time type.
#### Array abstractions.
Julia provides powerful array abstractions (Bezanson et al., 2017) that, when combined with several implementations, allow the user to efficiently express
concepts in linear algebra, access optimized implementations, and retarget
their programs. At the core of the Julia standard library lies a common super-
type, AbstractArray{T,N}, for arrays with element type T and N dimensions.
Many subtypes exist: the dense array type Array{T,N} (the most commonly used
storage type for arrays allocated on the CPU), Tridiagonal{T},
Transpose{T,<:AbstractArray{T,N}} (a behavioral wrapper that transforms A[i,j]
into A[j,i]), SparseMatrixCSC{T}, and CUDA.CuArray{T,N} (for arrays on NVIDIA
GPUs). The LinearOperators.jl (Orban et al., 2020) and LinearMaps.jl (Karrasch
et al., 2022) packages also provide types that implement linear operators
specified as functions without storing any elements (i.e., matrix shell).
All subtypes of AbstractArray{T,N} implement an N-dimensional array with
element type T. The way in which elements are stored, which elements are stored, and how the various operations (e.g., addition, multiplication, element access, iteration) are performed is left to the implementation. Typically,
code that uses arrays (e.g., vectors, matrices, tensors) does not choose a
particular implementation but works with any array type. This leads to the
same freedom that Kokkos provides---storage and iteration implementation
details are decoupled from the algorithms that use these arrays (as much as
possible). New hardware back ends for accelerators can be supported in a
straightforward manner by implementing the appropriate array storage types,
similar to CuArray.
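As a small sketch of this decoupling, a routine written once against the AbstractArray interface runs unchanged on several of the implementations listed above (the CUDA.jl line is commented out and only indicative, since it assumes a GPU is available):

```julia
using LinearAlgebra, SparseArrays

# Written once against the abstract interface; no storage format is assumed.
relative_residual(A::AbstractMatrix, x::AbstractVector, b::AbstractVector) =
    norm(b - A * x) / norm(b)

A = randn(100, 100); x = randn(100); b = A * x .+ 1e-8 .* randn(100)

relative_residual(A, x, b)             # dense Array
relative_residual(sparse(A), x, b)     # SparseMatrixCSC
relative_residual(transpose(A), x, b)  # lazy Transpose wrapper
# With CUDA.jl loaded, CuArray arguments dispatch to GPU methods, e.g.:
# relative_residual(CuArray(A), CuArray(x), CuArray(b))
```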
The user can apply high-level abstractions (e.g., map, reduce, mapreduce,
broadcasting) as well as linear algebra routines and other numerical computing
operations (e.g., Fourier transforms) to solve scientific problems. For
example, the code in Figure 4 implements a simple training loop for a neural
network. Notably, to execute this code on the GPU, the user does not need to
change the code itself---the user only has to move the data to the GPU. One
can achieve this by adding x = CuArray(x), y = CuArray(y), and w = CuArray(w)
before the loop.
loss(w, b, x, y) = sum(abs2, y - (w*x .+ b)) / size(y, 2)
loss∇w(w, b, x, y) = ...
lossdb(w, b, x, y) = ...
function train(w, b, x, y; lr = 0.1)
    w -= lmul!(lr, loss∇w(w, b, x, y))
    b -= lr * lossdb(w, b, x, y)
    return w, b
end
n = 100; p = 10
x = randn(n, p)'
y = sum(x[1:5, :]; dims = 1) .+ randn(n)' * 0.1
w = 0.0001 * randn(1, p)
b = 0.0
for i in 1:50
    w, b = train(w, b, x, y)
end
Figure 4: A neural network training loop that uses Julia’s linear algebra
routines.
These abstractions are all implemented in Julia itself. Most often, they are
dispatched to optimized and specialized operations appropriate for the compute
device as well as libraries that provide optimized BLAS operations.
Because the implementation is primarily in Julia, an enterprising user can
provide a specialized array implementation and leverage the structure in their
own problem. We demonstrate such a scenario in Figure 5. The user can create
a wrapper array to encode mathematical knowledge into the array type. In this
case, the user needs $n$ numbers to represent a matrix that is dense but
structured. The user knows a special algorithm for the largest eigenvalue.
With the higher-level abstractions, essentially the same code works on a
single CPU, in a distributed setting, or on a GPU.
# Build a custom array type
struct DMatrix{T, V<:AbstractVector{T}} <: AbstractMatrix{T}
    v::V
end
Base.size(A::DMatrix) = length(A.v), length(A.v)
Base.getindex(A::DMatrix, i, j) = A.v[i]*(i == j) + A.v[i]*A.v[j]
# Eigensolver for DMatrix
f(A::DMatrix)  = λ -> 1 + mapreduce(v -> v^2 / (v - λ),   +, A.v)
f′(A::DMatrix) = λ ->     mapreduce(v -> v^2 / (v - λ)^2, +, A.v)
import LinearAlgebra: eigmax
function eigmax(A::DMatrix; tol = eps(2.0))
    x0 = maximum(A.v) + maximum(A.v)^2
    δ = f(A)(x0) / f′(A)(x0)
    while abs(δ) > x0 * tol
        x0 -= δ
        δ = f(A)(x0) / f′(A)(x0)
    end
    x0
end
Figure 5: A user-defined array type that only stores a vector, $v$, yet
presents the full matrix $vv^{T}+\textrm{diag}(v)$ to indexing operations. A
custom largest-eigenvalue-solver makes efficient use of this structure via
multiple dispatch. Adapted from Edelman (2019).
using Distributed
addprocs(4)
using CUDA
using DistributedArrays
N = 4_000_000
v = randn(N) * 0.1
A = DMatrix(v)
# Explicit data-movement
distA = DMatrix(distribute(v))
gpuA  = DMatrix(CuArray(v))
# Execute eigmax on the CPU,
# distributed across multiple processes,
# and on a GPU.
eigmax(A)
eigmax(distA)
eigmax(gpuA)
Figure 6: Transparent execution of a program in multiple execution domains.
#### Powerful libraries.
One guiding principle in Julia is that _it is Julia all the way down_.
Packages are implemented mostly in Julia itself, as are the base language, the standard library, and parts of the compiler.
little _special code_. By special code, we mean things that the base language
(i.e., C or C++) can do that one could not instead implement in pure Julia as
a package author. Because of this, there are very few cases in which users
would need to write an extension in C or C++.
That said, Julia does rely on external libraries to interact with the
operating system and hardware, and it leverages these libraries when standard
solutions already exist for common problems.
The combination of Julia’s type system, compiler, efficient execution,
metaprogramming and staged programming allows library authors to implement
powerful libraries that interact with user code and other libraries. As an
example, both KernelAbstractions.jl and ParallelStencil.jl use macros
(metaprogramming) to extend the Julia language with new concepts.
The differential equation ecosystem uses higher-level functions and the
capability of the Julia compiler to specialize these higher-level functions on
the user-defined function, thereby leading to cross-optimization between the
user and the library code.
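A minimal sketch of this pattern with OrdinaryDiffEq.jl: the solver in solve(prob, Tsit5()) is JIT-specialized on the user-supplied right-hand side f, so the call into f is inlined rather than routed through a function pointer as in a typical C or Fortran solver interface (the tolerances here are arbitrary illustration values):

```julia
using OrdinaryDiffEq

# User-defined right-hand side of du/dt = f(u, p, t)
f(u, p, t) = 1.01 * u

prob = ODEProblem(f, 0.5, (0.0, 1.0))
sol  = solve(prob, Tsit5(); reltol = 1e-8, abstol = 1e-8)
```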
#### Compiling code.
Starting at a function call, Julia selects and compiles the most specific
function signature. First, Julia propagates the argument types through the
body of the function by using abstract interpretation. At this level, inlining and constant propagation occur. Afterward, a few optimization passes written in Julia optimize the IR, and the optimized function is translated to LLVM-IR. Julia uses LLVM as a single-function optimizer and to perform scheduling optimizations (e.g., loop vectorization). Then, the function is emitted as a binary and linked in memory using LLVM’s ORC just-in-time compiler.
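Each stage of this host-side pipeline can be inspected interactively with the standard introspection macros, which is a convenient way to verify that a hot function specializes and vectorizes as expected (a minimal example using only the InteractiveUtils standard library):

```julia
using InteractiveUtils  # loaded automatically in the REPL, needed in scripts

f(x) = 2x + 1

@code_typed  f(1.0)   # Julia IR after type inference and optimization
@code_llvm   f(1.0)   # LLVM-IR for the specialization f(::Float64)
@code_native f(1.0)   # machine code emitted for the host CPU
```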
GPUCompiler.jl reuses this infrastructure to collect all statically reachable
functions into one LLVM module, which is then compiled and uploaded to the
accelerators. This approach is shared among the packages that provide support
for accelerators and is flexible enough to support new
accelerators/compilation targets.
GPUArrays.jl provides generic abstractions and implementations of common
functionalities on accelerators, and KernelAbstractions.jl provides an
extension of the Julia language to write GPU kernels that can be retargeted to
different accelerators.
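A minimal sketch of such a kernel, assuming the KernelAbstractions.jl macro API (@kernel, @index, @Const) and the backend/ndrange launch convention of recent package versions (exact launch calls differ between releases):

```julia
using KernelAbstractions

# One backend-agnostic kernel definition; the same code runs on CPUs and GPUs.
@kernel function saxpy_ka!(z, a, @Const(x), @Const(y))
    i = @index(Global, Linear)
    @inbounds z[i] = a * x[i] + y[i]
end

z = zeros(Float32, 1024); x = rand(Float32, 1024); y = rand(Float32, 1024)
backend = KernelAbstractions.get_backend(z)  # CPU() here; a GPU backend for device arrays
saxpy_ka!(backend)(z, 2.0f0, x, y; ndrange = length(z))
KernelAbstractions.synchronize(backend)
```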
### 5.3 A language for both beginners and experts
Considerable resources must be invested to train a scientist or engineer to
make effective use of HPC. This training typically starts with learning how to
program in an undergraduate-level class that is not focused on HPC before
being exposed to more advanced topics such as parallel programming, GPU
programming, or performance optimization. Often, these introductory
programming courses start with a language that is somewhat easy to learn, has
a simple syntax, good support for interactivity and visualization, and a
strong ecosystem with additional packages and learning material (e.g., Python
or MATLAB).
However, this path can be problematic when users eventually switch to a high-
performance language (e.g., C++, Fortran) to achieve the required performance
for scientific or industrial projects that target compute clusters or
supercomputers. As noted before, learning a new programming language is not
trivial because concepts often do not translate one-to-one from one language
to another, and oftentimes the new language’s capabilities are not used to the
fullest extent (Scholtz and Wiedenbeck, 1990; Shrestha et al., 2020).
The Julia programming language has the potential to overcome this division
between easy-to-learn and fast-to-execute languages. Its simple base syntax
allows novice programmers to quickly grasp basic concepts such as variables,
control flow, or data structures with a convenient style that enables the
translation of many mathematical formulae directly into code. Because it
compiles to native code, Julia provides the efficiency and optimization
opportunities required for production-type computations. This means that as
users move to more advanced programming concepts and applications, they
continuously accumulate and extend their experience with their programming
language and do not need to switch between different tools for rapid
prototyping or large-scale application programming. Because Julia provides a
REPL, a compiler, and a package manager in one combined solution, it further
eases the transition of users between their own laptops, a university cluster,
or an extreme-scale machine. Tools, packages, and experience can seamlessly
move between different systems and applications.
### 5.4 Workflow portability and reusability
As demonstrated by NERSC’s Superfacility Project (Bard et al., 2022), HPC
workloads are rapidly expanding beyond the boundaries of a single data center.
At present, efforts to develop multisite workflows are driven by the
increasing need to integrate HPC into the data analysis pipelines of large
experiments. Furthermore, future DOE initiatives (e.g., the AI for science
initiative (Stevens et al., 2020)) emphasize the need for cross-facility
workflows. These developments are gradually shifting the emphasis from the HPC
application, which must be tailored to specific hardware and software
environments, to workflows that incorporate many applications and services at
multiple data centers.
Previous studies of state-of-the-art cross-site workflows (e.g., Antypas et
al. (2021); Giannakou et al. (2021)) provide a rough anatomy of cross-site
workflows, which consist of (1) a data movement layer, (2) portable
executables, (3) a workflow orchestration engine, and (4) a control layer that
coordinates resources across facilities.
As described in Section 5.2, Julia’s syntax provides a natural way to abstract
away details of the system’s hardware. This abstraction method is aided by the
many packages that adopt Preferences.jl (https://juliaparallel.org/tutorials/preferences/), which allows HPC center administrators to configure site-specific settings (e.g.,
MPI). Notably, users do not need to follow a different deployment recipe for
each site. Furthermore, the Julia HPC community is active in developing
packages such as MPItrampoline.jl as well as bindings for Slurm and the Flux
resource manager.
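For example, selecting a center-provided, CUDA-aware MPI library for MPI.jl is typically a one-time, per-project step through the companion MPIPreferences.jl package (a sketch; the recorded preference lands in LocalPreferences.toml, so users need no per-site recipe):

```julia
using MPIPreferences

# Record that MPI.jl should link against the system MPI library (e.g., the
# vendor's CUDA-aware build) instead of the bundled MPI artifact.
MPIPreferences.use_system_binary()
```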
## 6 Julia success stories
We have claimed that Julia is fast and useful for performance-critical
programs. This claim is backed up by the microbenchmarks on Julia’s website (https://julialang.org/benchmarks, accessed 09-28-2021), which show that Julia’s performance is comparable to compiled languages such as C and
Fortran. Here, we corroborate this claim with additional examples that range
from low-level code to high-level libraries and interfaces.
### 6.1 Performance of the same algorithms
Julia can generate efficient machine code for low-level BLAS routines (e.g.,
matrix multiplication), which are used in various scientific workflows,
including machine learning, optimization, statistics, and numerical solution
of differential equations. Elrod (2021) demonstrated that highly optimized
pure Julia packages (e.g., Octavian.jl) can be on par with or even faster than
established BLAS libraries (e.g., OpenBLAS, Intel MKL) on Intel’s CPU hardware
(Figure 7). This is expected because Julia can generate similar LLVM-IR
representations that could match the performance of the assembly code from
these highly optimized libraries.
Figure 7: Benchmark of matrix multiplication using different BLAS libraries on
a single Intel Xeon Gold Skylake 6148 CPU. The raw data and plotting script
are available in the reproducibility repository (Churavy et al., 2022).
Inspired by a similar plot in Octavian.jl (Elrod et al., 2022).
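For readers who want a quick, informal version of such a comparison, a sketch with BenchmarkTools.jl is shown below; the Octavian.jl line is commented out and assumes its matmul! entry point, and the full benchmark harness lives in the reproducibility repository:

```julia
using LinearAlgebra, BenchmarkTools

n = 1024
A, B = randn(n, n), randn(n, n)
C = similar(A)

# gemm through whichever BLAS library LinearAlgebra is configured to use
@btime mul!($C, $A, $B);

# Pure-Julia alternative (assumption: Octavian.jl and its matmul! function):
# using Octavian
# @btime Octavian.matmul!($C, $A, $B);
```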
Similar results were obtained for discretizations of ordinary differential
equations, which are used in biology, chemistry, and pharmacology. Example benchmarks (https://benchmarks.sciml.ai, accessed 09-28-2021) that compare implementations of the same algorithm (Dormand and Prince, 1980) in Fortran (http://www.unige.ch/~hairer/software.html, accessed 09-28-2021) and Julia (Rackauckas and Nie, 2017) show that the Julia versions are at least comparable to the Fortran codes and are sometimes even more efficient owing to enhanced inlining and other optimizations.
comparison of the same numerical methods implemented in different programming
languages, extend to partial differential equations, hyperbolic conservation
laws, and other transport-dominated phenomena used in weather prediction,
climate modeling, and aircraft design. Ranocha et al. (2022) compared the
performance of the Trixi.jl (Schlottke-Lakemper et al., 2021) Julia package with the mature Fortran code FLUXO (https://gitlab.com/project-fluxo/fluxo, accessed 09-28-2021), which implements the same algorithms for hyperbolic conservation laws. The Julia code was at least as fast as the
Fortran code and sometimes up to 2$\times$ faster. More recently, Lin and
McIntosh-Smith (2021) showed that in benchmarks across several HPC systems
equipped with CPUs and GPUs, Julia’s performance either matches or is only
slightly behind existing parallel programming frameworks coded in C, C++, and
Fortran.
### 6.2 Algorithmic improvements
Further evidence of Julia’s performance and strengths is provided by the
Gridap.jl Julia package (Badia and Verdugo, 2020), which can be used for
finite element discretizations in structural engineering, heat transfer
problems, and incompressible fluid flows. Leveraging Julia’s expressiveness
and just-in-time compilation, Verdugo and Badia (2021) reported a finite
element assembly performance comparable to FENICS (Logg and Wells, 2010),
which is based on a DSL and code generation via C/C++. Thus, Julia’s
expressiveness allows one to have a code that is easier to develop and
maintain without sacrificing performance. Furthermore, Julia makes it easier
to develop new algorithms with direct support for parallelism, thereby
enabling significant speedups in applications that benefit from algorithmic
improvements (e.g., pharmaceutical development; https://juliacomputing.com/case-studies/pfizer, accessed 09-28-2021).
### 6.3 Common interfaces
One of Julia’s strengths is the use of common interfaces in libraries enabled
by multiple dispatch. For example, the standard array interface is generic and
allows the use of CPUs and GPUs (Besard et al., 2019). Furthermore, automatic
differentiation and other tasks do not rely on creating a new array type;
instead, they can reuse existing functionality. By using generic programming
based on these common interfaces in Julia, packages can work together
seamlessly without boilerplate glue code (Karpinski, 2019). For example, error
propagation with Measurements.jl can be combined with spatial semi-
discretizations from Trixi.jl and time integration methods from
OrdinaryDiffEq.jl for numerical simulations without special glue code.
Additionally, the results can be visualized directly with Plots.jl.
At a lower level, common interfaces and operator overloading enable automatic
differentiation (Revels et al., 2016), speedups provided by using low- and
mixed-precision arithmetic on modern hardware (Klöwer et al., 2020), and
uncertainty propagation (Giordano, 2016).
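A small sketch of how far plain operator overloading carries: the same generic function propagates measurement uncertainties and yields exact derivatives, with no changes to its definition (illustrative values only):

```julia
using Measurements, ForwardDiff

# An ordinary, generic Julia function.
f(x) = sin(x) / (1 + x^2)

x = 1.50 ± 0.05                 # a value with uncertainty (Measurements.jl)
f(x)                            # uncertainty propagated automatically through f

ForwardDiff.derivative(f, 1.5)  # exact derivative via dual numbers
```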
At a higher level, such common interfaces are useful for algorithms in certain
problem classes: solving linear systems (https://github.com/SciML/LinearSolve.jl, accessed 03-01-2022), differential equations (Rackauckas and Nie, 2019), mathematical optimization
(Legat et al., 2020), and automatic differentiation (Schäfer et al., 2021).
Because the optimal choice of a numerical algorithm depends on the problem,
providing all algorithms via a unified interface enables users to swap
algorithms depending on their needs. There are focused research efforts to
organize such open interfaces to allow seamless interconnection in scientific
computations (e.g., in the Mathematical Research Data Initiative, https://www.mardi4nfdi.de, accessed 03-01-2022). Dunning et
al. (2017) demonstrated how such common interfaces can be used via an open-
source modeling language for optimization in Julia that is competitive with
widely used commercial systems and can even outperform other open-source
alternatives.
### 6.4 Julia’s adoption in CSE
Given its features and performance, Julia has demonstrated its readiness for
the diverse set of applications in the broader CSE field. Furthermore, we see
this readiness as an opportunity for HPC. Working well with CSE applications
is crucial for the success of Julia in HPC because these applications allow
for testing proven technologies and algorithms at different scales with
varying levels of support in a broad community. Success stories in different
CSE fields include algebraic geometry (Breiding and Timme, 2018), astronomy at
petascale (Regier et al., 2018), cancer therapies (Pich et al., 2019),
computer algebra and number theory (Fieker et al., 2017), electrical
engineering (Plietzsch et al., 2022), epidemic modeling (Weitz et al., 2020),
high-performance geophysical simulations (Räss et al., 2022), fluid dynamics
(Ramadhan et al., 2020; Ranocha et al., 2022), semiconductor theory (Frost,
2017), symbolic-numeric computing (Ketcheson and Ranocha, 2021; Iravanian et
al., 2022; Ma et al., 2021), quantum optics (Krämer et al., 2018), quantum
chemistry (Aroeira et al., 2022), quantum physics (Herbst et al., 2021), and
many others.
Typically, the performance of these Julia packages is at least comparable to
existing frameworks in low-level programming languages. Sometimes Julia’s
productivity features even enable improved algorithmic development and simpler
reuse of existing specialized implementations, thereby leading to speedups
compared to established codes. If highly tuned libraries of core routines are
already available with a C interface, then they can be easily accessed from
Julia. Thus, a gradual transition that incorporates old code bases is also
feasible, as described in Section 7.
## 7 Interoperability and composability with preexisting code
Owing to the large investment in creating, optimizing, and maintaining HPC
software infrastructure, developers do not have to throw away or rewrite their
Fortran, C, or C++ codes. Interoperability with preexisting codes has been a
top priority and is at the heart of Julia’s advantage. Furthermore, to be
successful in this space, one must reuse the tremendous work from well-
established HPC frameworks. Although there is interest in writing BLAS
routines in pure Julia (Elrod, 2021) (Figure 7), the ability to call existing
vendor-optimized BLAS libraries was important to kick-start the language
ecosystem. In Section 7.1, we describe how this capability has grown to
integrate preexisting HPC codes into Julia. Section 7.1 describes how these
codes can be enhanced with new capabilities. Additionally, Section 7.2
describes how Julia can be used as an implementation language for new
algorithms, thus requiring Julia to be embedded into preexisting HPC software.
### 7.1 Calling existing codes from Julia
HPC workflows are becoming increasingly complex as a result of increasing
resource heterogeneity as well as a growing need for HPC in traditionally non-
HPC domains. Yet, traditional HPC code bases are written in languages that
prioritize bare-metal performance, and this focus results in low productivity
when developing workflows. As a result, we need a programming language that
can express complex workflows while still making use of existing codes that
encapsulate a large amount (often decades) of institutional and domain
knowledge. A common example is incorporating simulation codes and solvers into
experimental data analysis workflows.
By far the most common approach in HPC has been to adopt Python as the
workflow language and develop high-performance kernels in HPC languages. This
approach has a problem: the workflow orchestration layer is not optimized for
HPC.
Function Signature | Pybind11 | Julia’s ccall | Speedup
---|---|---|---
int fn0() | 132 $\pm$ 14.9 | 2.34 $\pm$ 1.24 | $56\times$
int fn1(int) | 217 $\pm$ 20.9 | 2.35 $\pm$ 1.33 | $92\times$
double fn2(int, double) | 232 $\pm$ 11.7 | 2.32 $\pm$ 0.189 | $100\times$
char* fn3(int, double, char*) | 267 $\pm$ 28.9 | 6.27 $\pm$ 0.396 | $42\times$
Table 2: Round-trip times for calling C functions from Python (using Pybind11)
and Julia (using ccall). All times are in nanoseconds. Round-trip times in
Python include the time to resolve the function symbol, convert Python types
to native C-types, invoke the function call, and return the result (including
the conversion of the returned C-type to native Python types). Because C-types
are binary-compatible with Julia data types, the Julia benchmark does not
require type conversions. The benchmark results were collected by using an
Intel Core i7-1185G7 CPU running at 3.00 GHz with Julia version 1.7.1, Python
version 3.8.10, and Pybind11 version 2.9.1. All scripts required to reproduce
these results are available in the reproducibility repository (Churavy et al.,
2022).
To illustrate this problem, we compare the round-trip time to call a C
function with Pybind11 (Jakob et al., 2017) vs. Julia’s native ccall interface
(see Table 2 for results). The need to convert between Python data types and
native C data types can be seen as an increased round-trip time in the
Pybind11 benchmark results. Therefore, workflows coordinated by using Python
codes will avoid frequent calls to small C functions---instead opting to
combine work in monolithic C kernels. Julia does not have this limitation.
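The zero-conversion path measured above is visible directly in user code; for instance, calling into libc requires no wrapper layer (a minimal sketch on a POSIX system):

```julia
# Classic ccall form: (symbol, return type, argument-type tuple).
t = ccall(:clock, Clong, ())

# The @ccall macro (Julia 1.5+) expresses the same kind of call with inline types.
n = @ccall strlen("Julia in HPC"::Cstring)::Csize_t
```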
#### Adding new capabilities to preexisting code.
Over the last few years, Julia has become a test bed for the development of
new techniques in probabilistic programming (Cusumano-Towner et al., 2019; Ge
et al., 2018) as well as scientific machine learning (Rackauckas et al.,
2020). For these new techniques, the availability of gradients through
automatic differentiation has been key. Similarly, the CESMIX project at MIT is currently building an integrated framework for uncertainty
quantification that greatly benefits from the availability of gradients.
Although Julia has emphasized interoperability with codes written in C, C++,
or Fortran from the very beginning, there is an open question as to whether
these new techniques can be utilized in codes that are a mixture of Julia +
$x$, where $x$ is an HPC application to which one wishes to apply these
techniques. The lynchpin for any attempt at this will be the availability of
gradients and the integration of those gradients into Julia’s automatic-
differentiation frameworks.
Enzyme (Moses and Churavy, 2020), together with its Enzyme.jl Julia front end, is an automatic differentiation framework that operates on LLVM-IR (instead of
operating in operator-overloading or source-rewriting modes) and can thus
synthesize gradients for multiple languages as long as they have an LLVM front
end. This means it supports C, C++, Julia, and Rust with experimental support
for Fortran. Enzyme can be used for differentiating large C++ projects as well
as CUDA and HIP GPU kernels (Moses et al., 2021). Support for additional forms
of parallelism (e.g., OpenMP, MPI) is part of the roadmap.
By leveraging Enzyme, users can perform cross-language automatic-
differentiation and thus integrate newly developed capabilities in Julia with
previously existing HPC libraries.
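A minimal sketch of the Julia-side entry point, assuming Enzyme.jl’s autodiff/Active annotations (return conventions vary across releases):

```julia
using Enzyme

# An ordinary Julia function; Enzyme differentiates the LLVM-IR it lowers to.
rosenbrock(x, y) = (1 - x)^2 + 100 * (y - x^2)^2

# Reverse-mode gradient with respect to both (Active) scalar arguments.
autodiff(Reverse, rosenbrock, Active, Active(1.0), Active(2.0))
```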
### 7.2 Calling Julia from C
Fully featured Julia HPC code can be compiled into C libraries and called from
regular C applications, as shown in a proof of concept with a multi-GPU 2D heat diffusion solver written in Julia (using CUDA, MPI, and graphics) and called from C (https://github.com/omlins/libdiffusion).
The proof of concept shows that variables can be passed from C to Julia in a
straightforward and portable manner. The example passes a GPU array allocated
and initialized in the C code and an MPI communicator created in the C code to
the solver written in Julia. Furthermore, support of CUDA-aware MPI that
leverages RDMA, which is frequently requested in HPC, was successfully
demonstrated.
Straightforward scientific visualization is possible thanks to Julia’s
graphics packages. The proof of concept demonstrated this by producing an
animated GIF using the Plots.jl package from within the generated C library.
For additional productivity in scientific HPC code development, Julia code
that is compiled to a C library (e.g., the heat diffusion solver in the proof
of concept) can also be executed within the Julia run time in an interactive
manner.
Building such libraries is enabled by the PackageCompiler.jl Julia package (Carlsson and contributors, 2022).
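A sketch of the build step, assuming a hypothetical package named Diffusion whose Base.@ccallable functions form the C API (the keyword name follows recent PackageCompiler.jl versions):

```julia
using PackageCompiler

# Bundle the package and a Julia runtime into a shared library plus C header.
create_library("Diffusion", "DiffusionLibBuild"; lib_name = "libdiffusion")
```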
## 8 Now is the time for Julia in HPC
We are seeing a rapid uptake of Julia in technical computing. Consequently,
the interest in scaling up Julia applications for HPC and designing HPC
applications in Julia from the start are also on the rise.
As with every new tool in HPC, the initial adoption must overcome challenges
and to some extent adapt to the unique HPC environments. It is therefore
encouraging that many HPC centers are already providing Julia to their users.
### 8.1 For application developers
The Julia language has reached a level of maturity and stability suitable for
production code. Julia’s language design features native performance tools,
LLVM-based just-in-time compilation, and support for parallelism and hardware
accelerators, and this support makes it convenient for developing high-
performance applications. Furthermore, Julia adopted many tools that enhance
developer productivity, including tools for package management, code
introspection, a powerful REPL, and a module system. This makes Julia one of
the few high-productivity high-performance programming languages.
Historically, the adoption of programming languages in HPC has been driven by
the popularity of software frameworks that are programmed in those languages.
Therefore, as Julia-based frameworks rise in popularity, so will the Julia
language. However, it is not necessary to wait for Julia’s killer app because
HPC frameworks also have a long history of multilanguage development (e.g.,
calling Fortran functions from C, calling C functions from Python). Therefore,
we encourage developers to begin incorporating Julia components within
existing HPC frameworks with the added value of portable access to different
hardware accelerator targets.
### 8.2 For Julia language developers
The Julia language is uniquely suited for high-productivity, high-performance
code development because it already addresses many issues of developing HPC
applications in other high-productivity languages. Therefore, the work for
language developers is not insurmountable. At present, the adoption challenges
described in this work mainly stem from HPC hardware being similar but still
different from consumer-grade hardware. For example, many HPC file systems are
not optimized for loading small files, thereby resulting in slower application
startup times that contribute significantly to a job’s overall wall time.
Also, the software and networking environments are very different at HPC
centers. Vendors often address this issue by requiring the code to be compiled
with their compilers to ensure the use of system drivers---something that
usually does not work out of the box and can require configuration.
The Julia community is already providing many solutions in this area and truly
shines with a variety of successful and documented HPC use cases---including
how deployment challenges were overcome. Julia language developers should
therefore curate these use cases, incorporate solutions into the language
standard (e.g., ahead-of-time compilation for demanding codes, global site
configurations), and add useful examples to the Julia documentation. Finally,
because the Julia language has reached a high level of maturity, the language
developers should now begin to emphasize language stability.
### 8.3 For HPC center operators
One major adoption challenge we have encountered so far is the lack of vendor
support in HPC. This was felt most acutely during the initial deployment of
the OLCF’s Summit supercomputer because Julia lacked support for IBM’s PowerPC
architecture. This is less of an issue now with architectures such as ARM’s
AArch64 being used in consumer devices, which provides more access and
opportunity for the Julia open-source community to develop support for these
architectures early on (Giordano et al., 2022). HPC centers have a history of
pioneering new architectures (e.g., RISC-V) and new accelerator designs, and
it is important to collaborate with vendors to garner Julia support. This will
obviously benefit Julia, but because Julia is based around the open-source
LLVM project, it will also lead to a better open compiler ecosystem for HPC.
China’s Sunway architecture is an interesting data point. Shang et al. (2022) describe a variational quantum eigensolver written in Julia that scales up to 20 million cores. While details are sparse, we can determine that they ported Julia to the Sunway SW26010P architecture. Each SW26010P core group comprises a management processing element (MPE) and 64 compute processing elements (CPEs).
They developed support for running on both the MPE and CPE cores. The CPE
cores are targeted in an offloading style by using the infrastructure built
for Julia’s general accelerator support.
## Conclusion
As described here, our view is that the Julia programming language provides an
excellent investment opportunity for the HPC community. Julia’s value
proposition prioritizes the needs of HPC in the current era: programming
models that closely align with science to make HPC accessible; a coordinated
ecosystem approach for packaging, testing, code instrumentation, and
interactive computing; a growing community; a modern and pragmatic workflow
composition strategy that interoperates with LLVM and existing HPC frameworks
for simulation performance; and a powerful data science and AI unified
ecosystem. Not since Fortran has a programming language been designed
specifically to target the needs of the broader scientific community. Julia
incorporates modern software requirements into the language to enrich the end-
to-end co-design process and lower the cost of the software development cycle
---from idea to performance portability. This is a pivotal time for the HPC
community as it continues to march toward a more heterogeneous computing
landscape in the post-Moore era, in which data-driven AI workflows become
relevant for scientific discovery at scale. We believe that investing in the
Julia language and enriching its ecosystem capabilities will pay dividends in
easing current and future challenges associated with the increasing cost and
complexity of multidisciplinary HPC endeavors.
## Reproducibility
The weak-scaling benchmarks shown in Figure 3 were run on NVIDIA P100 GPUs on the Swiss National Supercomputing Centre’s Piz Daint Cray XC50 and are available in our reproducibility repository (Churavy et al., 2022). The BLAS benchmarks shown
in Figure 7 were run on a single Intel Xeon Gold Skylake 6148 CPU in Noctua 1
at PC2 and are also available in our reproducibility repository (Churavy et
al., 2022).
## Acknowledgments
VC and AE gratefully acknowledge funding from the National Science Foundation
(OAC-1835443, OAC-2103804, AGS-1835860, and AGS-1835881) and DARPA under
agreement number HR0011-20-9-0016 (PaPPa). This research was also made
possible by the generosity of Eric and Wendy Schmidt by recommendation of the
Schmidt Futures program, by the Paul G. Allen Family Foundation, Charles
Trimble, and the Audi Environmental Foundation. This material is based upon
work supported by the DOE’s National Nuclear Security Administration under
award number DE-NA0003965.
LR and SO acknowledge financial support from the Swiss University Conference
and the Swiss Council of Federal Institutes of Technology through the Platform
for Advanced Scientific Computing program. This work was supported by a grant
from the Swiss National Supercomputing Centre under project ID c23 obtained
via the PASC project GPU4GEO.
CB gratefully acknowledges the funding of this project by computing time
provided by the Paderborn Center for Parallel Computing (PC2). This work is
partially funded by Paderborn University’s research award for “GreenIT”, as
well as the Federal Ministry of Education and Research (BMBF) and the state of
North Rhine-Westphalia as part of the NHR Program.
MSL gratefully acknowledges funding by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) project FOR-5409 (SNuBIC).
Parts of this research are supported by the Exascale Computing Project
(17-SC-20-SC), a joint project of the DOE’s Office of Science and the National
Nuclear Security Administration, responsible for delivering a capable exascale
ecosystem, including software, applications, and hardware technology, to
support the nation’s exascale computing imperative. This research used
resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge
National Laboratory, which is supported by the Office of Science of the U.S.
Department of Energy under Contract No. DE-AC05-00OR22725. This research used
resources of the National Energy Research Scientific Computing Center (NERSC),
a U.S. Department of Energy Office of Science User Facility located at
Lawrence Berkeley National Laboratory, operated under Contract No. DE-
AC02-05CH11231.
Research at Perimeter Institute is supported in part by the Government of
Canada through the Department of Innovation, Science and Economic Development
and by the Province of Ontario through the Ministry of Colleges and
Universities.
The views and opinions of authors expressed herein do not necessarily state or
reflect those of the United States government or any agency thereof. The US
government is authorized to reproduce and distribute reprints for government
purposes notwithstanding any copyright notation herein.
Notice: This manuscript has been authored by UT-Battelle LLC under contract
DE-AC05-00OR22725 with DOE. The US government retains and the publisher, by
accepting the article for publication, acknowledges that the US government
retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or
reproduce the published form of this manuscript, or allow others to do so, for
US government purposes. DOE will provide public access to these results of
federally sponsored research in accordance with the DOE Public Access Plan
(http://energy.gov/downloads/doe-public-access-plan).
## References
* Abadi et al. (2015) Abadi M, Agarwal A, Barham P, Brevdo E, Chen Z, Citro C, Corrado GS, Davis A, Dean J, Devin M, Ghemawat S, Goodfellow I, Harp A, Irving G, Isard M, Jia Y, Jozefowicz R, Kaiser L, Kudlur M, Levenberg J, Mané D, Monga R, Moore S, Murray D, Olah C, Schuster M, Shlens J, Steiner B, Sutskever I, Talwar K, Tucker P, Vanhoucke V, Vasudevan V, Viégas F, Vinyals O, Warden P, Wattenberg M, Wicke M, Yu Y and Zheng X (2015) TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. URL https://www.tensorflow.org/. Software available from tensorflow.org.
* Allen et al. (2005) Allen E, Chase D, Hallett J, Luchangco V, Maessen JW, Ryu S, Steele Jr GL, Tobin-Hochstadt S, Dias J, Eastlund C et al. (2005) The Fortress language specification 139(140).
* Almasi (2011) Almasi G (2011) _PGAS (Partitioned Global Address Space) Languages_. Boston, MA: Springer US. ISBN 978-0-387-09766-4, pp. 1539--1545. 10.1007/978-0-387-09766-4_210. URL https://doi.org/10.1007/978-0-387-09766-4_210.
* AMD (2008) AMD (2008) ROCm HIP: Heterogeneous-Computing Interface for Portability. https://github.com/ROCm-Developer-Tools/HIP.
* Antypas et al. (2021) Antypas KB, Bard DJ, Blaschke JP, Shane Canon R, Enders B, Shankar MA, Somnath S, Stansberry D, Uram TD and Wilkinson SR (2021) Enabling discovery data science through cross-facility workflows. In: _2021 IEEE International Conference on Big Data (Big Data)_. pp. 3671--3680. 10.1109/BigData52589.2021.9671421.
* Aroeira et al. (2022) Aroeira GJ, Davis MM, Turney JM and Schaefer III HF (2022) Fermi.jl: A Modern Design for Quantum Chemistry. _Journal of Chemical Theory and Computation_ 10.1021/acs.jctc.1c00719.
* Backus (1980) Backus J (1980) Programming in America in the 1950s—Some Personal Impressions. In: _A History of Computing in the twentieth century_. Elsevier, pp. 125--135.
* Backus and Heising (1964) Backus JW and Heising WP (1964) Fortran. _IEEE Transactions on Electronic Computers_ EC-13(4): 382--385. 10.1109/PGEC.1964.263818.
* Badia and Verdugo (2020) Badia S and Verdugo F (2020) Gridap: An extensible finite element toolbox in Julia. _Journal of Open Source Software_ 5(52): 2520. 10.21105/joss.02520.
* Bard et al. (2022) Bard D, Snavely C, Gerhardt L, Lee J, Totzke B, Antypas K, Arndt W, Blaschke J, Byna S, Cheema R, Cholia S, Day M, Enders B, Gaur A, Greiner A, Groves T, Kiran M, Koziol Q, Lehman T, Rowland K, Samuel C, Selvarajan A, Sim A, Skinner D, Stephey L, Thomas R and Torok G (2022) The lbnl superfacility project report.
* Beckingsale et al. (2019) Beckingsale DA, Burmark J, Hornung R, Jones H, Killian W, Kunen AJ, Pearce O, Robinson P, Ryujin BS and Scogland TR (2019) RAJA: Portable Performance for Large-Scale Scientific Applications. In: _2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC)_. pp. 71--81. 10.1109/P3HPC49587.2019.00012.
* Ben-Nun et al. (2020) Ben-Nun T, Gamblin T, Hollman DS, Krishnan H and Newburn CJ (2020) Workflows are the New Applications: Challenges in Performance, Portability, and Productivity. In: _2020 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC)_. pp. 57--69. 10.1109/P3HPC51967.2020.00011.
* Bercea et al. (2016) Bercea GT, McRae ATT, Ham DA, Mitchell L, Rathgeber F, Nardi L, Luporini F and Kelly PHJ (2016) A structure-exploiting numbering algorithm for finite elements on extruded meshes, and its performance evaluation in Firedrake. _Geoscientific Model Development_ 9(10): 3803--3815. 10.5194/gmd-9-3803-2016.
* Besard et al. (2018) Besard T, Foket C and De Sutter B (2018) Effective extensible programming: unleashing Julia on GPUs. _IEEE Transactions on Parallel and Distributed Systems_ 30(4): 827--841.
* Besard et al. (2019) Besard T, Foket C and De Sutter B (2019) Effective Extensible Programming: Unleashing Julia on GPUs. _IEEE Transactions on Parallel and Distributed Systems_ 30(4): 827--841. 10.1109/TPDS.2018.2872064.
* Besard and other contributors (2020) Besard T and other contributors (2020) oneAPI.jl: Julia support for the oneAPI programming toolkit. https://github.com/JuliaGPU/oneAPI.jl.
* Bezanson et al. (2018) Bezanson J, Chen J, Chung B, Karpinski S, Shah VB, Vitek J and Zoubritzky L (2018) Julia: Dynamism and performance reconciled by design. _Proceedings of the ACM on Programming Languages_ 2(OOPSLA): 1--23. 10.1145/3276490.
* Bezanson et al. (2017) Bezanson J, Edelman A, Karpinski S and Shah VB (2017) Julia: A fresh approach to numerical computing. _SIAM Review_ 59(1): 65--98. 10.1137/141000671.
* Bradbury et al. (2018) Bradbury J, Frostig R, Hawkins P, Johnson MJ, Leary C, Maclaurin D, Necula G, Paszke A, VanderPlas J, Wanderman-Milne S and Zhang Q (2018) JAX: composable transformations of Python+NumPy programs. URL http://github.com/google/jax.
* Breiding and Timme (2018) Breiding P and Timme S (2018) HomotopyContinuation.jl: A package for homotopy continuation in Julia. In: _International Congress on Mathematical Software_. Springer, pp. 458--465. 10.1007/978-3-319-96418-8_54.
* Buck (2007) Buck I (2007) GPU computing with Nvidia CUDA. In: _ACM SIGGRAPH 2007 courses_. pp. 6--es.
* Byna et al. (2017) Byna S, Chaarawi M, Koziol Q, Mainzer J and Willmore F (2017) Tuning hdf5 subfiling performance on parallel file systems URL https://www.osti.gov/biblio/1398484.
* Byrne et al. (2021) Byrne S, Wilcox LC and Churavy V (2021) MPI.jl: Julia bindings for the Message Passing Interface. In: _Proceedings of the JuliaCon Conferences_ , volume 1. p. 68. 10.21105/jcon.00068. https://github.com/JuliaParallel/MPI.jl.
* Carlsson and contributors (2022) Carlsson K and contributors (2022) PackageCompiler.jl: Compile your Julia package. https://github.com/JuliaLang/PackageCompiler.j.
* Carter Edwards et al. (2014) Carter Edwards H, Trott CR and Sunderland D (2014) Kokkos: Enabling manycore performance portability through polymorphic memory access patterns. _Journal of Parallel and Distributed Computing_ 74(12): 3202--3216. https://doi.org/10.1016/j.jpdc.2014.07.003. URL https://www.sciencedirect.com/science/article/pii/S0743731514001257. Domain-Specific Languages and High-Level Frameworks for High-Performance Computing.
* Chamberlain et al. (2007) Chamberlain B, Callahan D and Zima H (2007) Parallel Programmability and the Chapel Language. _The International Journal of High Performance Computing Applications_ 21(3): 291--312. 10.1177/1094342007078442. URL https://doi.org/10.1177/1094342007078442.
* Christ et al. (2022) Christ S, Schwabeneder D, Rackauckas C, Borregaard MK and Breloff T (2022) Plots.jl -- a user extendable plotting api for the julia programming language. 10.48550/ARXIV.2204.08775. URL https://arxiv.org/abs/2204.08775.
* Churavy et al. (2022) Churavy V, Godoy WF, Bauer C, Ranocha H, Schlottke-Lakemper M, Räss L, Blaschke J, Giordano M, Schnetter E, Omlin S, Vetter JS and Edelman A (2022) Reproducibility repository for Bridging HPC communities through the Julia programming language. https://github.com/JuliaParallel/paper-2022-HPC. 10.5281/zenodo.7236016.
* Contributors (2015) Contributors JP (2015) DistributedArrays.jl: Distributed Arrays in Julia. https://github.com/JuliaParallel/DistributedArrays.jl.
* Cusumano-Towner et al. (2019) Cusumano-Towner MF, Saad FA, Lew AK and Mansinghka VK (2019) Gen: A general-purpose probabilistic programming system with programmable inference. In: _Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation_ , PLDI 2019. New York, NY, USA: Association for Computing Machinery. ISBN 9781450367127, p. 221–236. 10.1145/3314221.3314642. URL https://doi.org/10.1145/3314221.3314642.
* Dagum and Menon (1998) Dagum L and Menon R (1998) OpenMP: an industry standard API for shared-memory programming. _IEEE computational science and engineering_ 5(1): 46--55.
* Danisch and Krumbiegel (2021) Danisch S and Krumbiegel J (2021) Makie.jl: Flexible high-performance data visualization for julia. _Journal of Open Source Software_ 6(65): 3349. 10.21105/joss.03349. URL https://doi.org/10.21105/joss.03349.
* de Graaf and contributors (2022) de Graaf C and contributors (2022) PkgTemplates.jl: Create new Julia packages, the easy way. https://github.com/invenia/PkgTemplates.jl.
* Dongarra et al. (2011) Dongarra J, Beckman P, Moore T, Aerts P, Aloisio G, Andre JC, Barkai D, Berthou JY, Boku T, Braunschweig B, Cappello F, Chapman B, Chi X, Choudhary A, Dosanjh S, Dunning T, Fiore S, Geist A, Gropp B, Harrison R, Hereld M, Heroux M, Hoisie A, Hotta K, Jin Z, Ishikawa Y, Johnson F, Kale S, Kenway R, Keyes D, Kramer B, Labarta J, Lichnewsky A, Lippert T, Lucas B, Maccabe B, Matsuoka S, Messina P, Michielse P, Mohr B, Mueller MS, Nagel WE, Nakashima H, Papka ME, Reed D, Sato M, Seidel E, Shalf J, Skinner D, Snir M, Sterling T, Stevens R, Streitz F, Sugar B, Sumimoto S, Tang W, Taylor J, Thakur R, Trefethen A, Valero M, van der Steen A, Vetter J, Williams P, Wisniewski R and Yelick K (2011) The International Exascale Software Project roadmap. _The International Journal of High Performance Computing Applications_ 25(1): 3--60. 10.1177/1094342010391989. URL https://doi.org/10.1177/1094342010391989.
* Dongarra et al. (2008) Dongarra J, Graybill R, Harrod W, Lucas R, Lusk E, Luszczek P, McMahon J, Snavely A, Vetter J, Yelick K et al. (2008) DARPA’s HPCS program: History, models, tools, languages. In: _Advances in Computers_ , volume 72. Elsevier, pp. 1--100.
* Dormand and Prince (1980) Dormand JR and Prince PJ (1980) A family of embedded Runge-Kutta formulae. _Journal of Computational and Applied Mathematics_ 6(1): 19--26. 10.1016/0771-050X(80)90013-3.
* Dunning et al. (2017) Dunning I, Huchette J and Lubin M (2017) JuMP: A modeling language for mathematical optimization. _SIAM Review_ 59(2): 295--320. 10.1137/15M1020575.
* Edelman (2019) Edelman A (2019) IEEE-CS sidney fernbach memorial award. SC ’19: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis.
* El-Ghazawi et al. (2005) El-Ghazawi T, Carlson W, Sterling T and Yelick K (2005) _UPC: distributed shared memory programming_ , volume 40. John Wiley & Sons.
* Elrod (2021) Elrod C (2021) Roadmap to Julia BLAS and linear algebra. https://www.youtube.com/watch?v=KQ8nvlURX4M. Talk presented at JuliaCon.
* Elrod et al. (2022) Elrod C, Aluthge D, Protter M and contributors (2022) Octavian.jl: Multi-threaded BLAS-like library that provides pure Julia matrix multiplication. https://github.com/JuliaLinearAlgebra/Octavian.jl.
* Elrod and Lilly (2019) Elrod C and Lilly E (2019) LoopVectorization.jl: Macro(s) for vectorizing loops. https://github.com/JuliaSIMD/LoopVectorization.jl.
* Enos et al. (2010) Enos J, Steffen C, Fullop J, Showerman M, Shi G, Esler K, Kindratenko V, Stone JE and Phillips JC (2010) Quantifying the impact of GPUs on performance and energy efficiency in HPC clusters. In: _International Conference on Green Computing_. pp. 317--324. 10.1109/GREENCOMP.2010.5598297.
* Fieker et al. (2017) Fieker C, Hart W, Hofmann T and Johansson F (2017) Nemo/Hecke: computer algebra and number theory packages for the Julia programming language. In: _Proceedings of the 2017 acm on international symposium on symbolic and algebraic computation_. pp. 157--164. 10.1145/3087604.3087611.
* Frost (2017) Frost JM (2017) Calculating polaron mobility in halide perovskites. _Physical Review B_ 96(19): 195202. 10.1103/PhysRevB.96.195202.
* Ge et al. (2018) Ge H, Xu K and Ghahramani Z (2018) Turing: a language for flexible probabilistic inference. In: _International Conference on Artificial Intelligence and Statistics, AISTATS 2018, 9-11 April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain_. pp. 1682--1690. URL http://proceedings.mlr.press/v84/ge18b.html.
* Giannakou et al. (2021) Giannakou A, Blaschke JP, Bard D and Ramakrishnan L (2021) Experiences with cross-facility real-time light source data analysis workflows. In: _2021 IEEE/ACM HPC for Urgent Decision Making (UrgentHPC)_. pp. 45--53. 10.1109/UrgentHPC54802.2021.00011.
* Giordano (2016) Giordano M (2016) Uncertainty propagation with functionally correlated quantities.
* Giordano et al. (2022) Giordano M, Klöwer M and Churavy V (2022) Productivity meets Performance: Julia on A64FX. In: _2022 IEEE International Conference on Cluster Computing (CLUSTER)_. pp. 549--555. 10.1109/CLUSTER51413.2022.00072.
* Godoy et al. (2020) Godoy WF, Podhorszki N, Wang R, Atkins C, Eisenhauer G, Gu J, Davis P, Choi J, Germaschewski K, Huck K, Huebl A, Kim M, Kress J, Kurc T, Liu Q, Logan J, Mehta K, Ostrouchov G, Parashar M, Poeschel F, Pugmire D, Suchyta E, Takahashi K, Thompson N, Tsutsumi S, Wan L, Wolf M, Wu K and Klasky S (2020) Adios 2: The adaptable input output system. a framework for high-performance data management. _SoftwareX_ 12: 100561. https://doi.org/10.1016/j.softx.2020.100561.
* Gropp et al. (1999) Gropp W, Gropp WD, Lusk E, Skjellum A and Lusk ADFEE (1999) _Using MPI: portable parallel programming with the message-passing interface_ , volume 1. MIT press.
* Hanson and Giordano (2021) Hanson EP and Giordano M (2021) Code, docs, and tests: what’s in the General registry? URL https://julialang.org/blog/2021/08/general-survey/.
* Herbst et al. (2021) Herbst MF, Levitt A and Cancès E (2021) DFTK: A Julian approach for simulating electrons in solids. _Proc. JuliaCon Conf._ 3: 69. 10.21105/jcon.00069.
* Heroux (2019) Heroux MA (2019) The Extreme-Scale Scientific Software Stack (e4s). Technical report, Sandia National Lab.(SNL-NM), Albuquerque, NM (United States).
* Heroux et al. (2018) Heroux MA, Carter J, Thakur R, Vetter J, McInnes LC, Ahrens J and Neely JR (2018) ECP Software Technology Capability Assessment Report 10.2172/1463232. URL https://www.osti.gov/biblio/1463232.
* HPCWire (2017) HPCWire (2017) Julia Joins Petaflop Club. URL https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-club/.
* Innes (2018) Innes M (2018) Flux: Elegant machine learning with julia. _Journal of Open Source Software_ 10.21105/joss.00602.
* Innes et al. (2018) Innes M, Saba E, Fischer K, Gandhi D, Rudilosso MC, Joy NM, Karmali T, Pal A and Shah V (2018) Fashionable modelling with flux. _CoRR_ abs/1811.01457. URL https://arxiv.org/abs/1811.01457.
* Iravanian et al. (2022) Iravanian S, Martensen CJ, Cheli A, Gowda S, Jain A, Julia Computing, Ma Y and Rackauckas C (2022) Symbolic-numeric integration of univariate expressions based on sparse regression.
* Jakob et al. (2017) Jakob W, Rhinelander J and Moldovan D (2017) pybind11 -- seamless operability between C++11 and Python. https://github.com/pybind/pybind11.
* Janssens (2022) Janssens B (2022) CxxWrap.jl: Package to make C++ libraries available in Julia. https://github.com/JuliaInterop/CxxWrap.jl.
* Johnson and contributors (2022) Johnson SG and contributors (2022) PyCall.jl: Package to call Python functions from the Julia language. https://github.com/JuliaPy/PyCall.jl.
* Jupyter Development Team (2022) Jupyter Development Team (2022) _Jupyter: Free software, open standards, and web services for interactive computing across all programming languages_. URL https://jupyter.org/.
* Karpinski (2019) Karpinski S (2019) The Unreasonable Effectiveness of Multiple Dispatch. https://youtu.be/kc9HwsxE1OY. Talk presented at JuliaCon.
# On the Separability Problem of VASS Reachability Languages
Eren Keskin, TU Braunschweig, and Roland Meyer, TU Braunschweig
###### Abstract.
We show that the regular separability problem of VASS reachability languages
is decidable and $\mathbf{F}_{\omega}$-complete. At the heart of our decision
procedure are doubly-marked graph transition sequences (DMGTS), a new proof object
that tracks a suitable product of the two VASS we wish to separate. We give a
decomposition algorithm for DMGTS that not only achieves perfectness, as known
from marked graph transition sequences (MGTS), but also a new property called faithfulness. Faithfulness allows us
to construct, from a regular separator for the $\mathbb{Z}$-versions of the
VASS, a regular separator for the $\mathbb{N}$-versions. Behind faithfulness
is the insight that, for separability, it is sufficient to track the counters
of one VASS modulo a large number that is determined by the decomposition.
## 1\. Introduction
Regular separability problems for the languages of infinite-state systems have
recently been gaining momentum [16, 3, 7, 4, 8, 5, 10, 1, 14, 15]. These problems
take as input two infinite-state systems with languages $L_{1}$ and $L_{2}$,
and ask whether $L_{1}\mid L_{2}$ holds, that is, whether there is a regular language
$R$ that separates the two in the sense that $L_{1}\subseteq R$ and $R\cap
L_{2}=\emptyset$. What makes regular separability problems interesting is that
they do not seem to admit a reduction to established problems like emptiness.
Instead, the decision procedure has to analyze the gap between $L_{1}$ and
$L_{2}$, and judge whether it is large enough to be described by a regular
language.
Despite this challenge, there is a pleasant number of positive results on
regular separability. It has been shown that disjoint WSTS languages are
always separated by a regular language [8, 14]. For disjoint VASS coverability
languages, matching upper and lower bounds on the size of least separators
have been found [15]. For Parikh automata [3] and Büchi VASS coverability
languages [1], regular separability has been shown to be decidable.
Unfortunately, for the main model in this field, namely VASS reachability
languages, the search has only brought partial results. This includes the
decidability of the regular separability problem for the reachability
languages of one-dimensional VASS [7], for $\mathbb{Z}$-VASS reachability
languages [3], for the commutative closure of VASS reachability languages [4],
and for VASS reachability languages from any of the aforementioned classes
[10]. The study has also led to important new techniques. With the transducer
trick, one can reduce the regular separability problem to a variant where only
one language is taken as input and the second is fixed [3, 10]. For this
variant, the basic separator approach tries to determine a limited set of
regular languages so that, if separability holds, then a finite combination of
these languages will serve as a separator [10]. The techniques turned out
widely applicable [10, 15, 1], and the transducer trick will also play a
central role in our work. Related to regular separability is the separability
of VASS reachability sets [4]. A landmark result in this context shows that
VASS reachability sets admit Presburger-definable invariants [20], which
resulted in a new algorithm for solving VASS reachability [21]. To sum up,
despite more than a decade of efforts, the decidability of regular
separability for VASS reachability languages is still open.
We solve the open problem and show that regular separability for VASS
reachability languages is decidable and $\mathbf{F}_{\omega}$-complete. The
problem is primitive recursive if the dimension of the input VASS is fixed.
The class $\mathbf{F}_{\omega}$ contains the problems that can be solved with
Ackermannian time and space [28, 29], and the master problem is VASS
reachability. The hardness of VASS reachability has been established only
recently [22, 9, 19]. The decidability is a classic result [25, 17, 18], with
[27] an early attempt, and based on the algorithms proposed in these works,
the upper bound has been brought down from $\mathbf{F}_{\omega^{3}}$ [23] over
$\mathbf{F}_{\omega^{2}}$ [29] to $\mathbf{F}_{\omega}$ [24]. The algorithms
reduce the VASS reachability problem to the reachability problem in
$\mathbb{Z}$-VASS, using an iterative decomposition that creates potentially
many and potentially large $\mathbb{Z}$-VASS. We will follow the same
strategy, and reduce the regular separability problem of VASS reachability
languages to the regular separability problem of $\mathbb{Z}$-VASS
reachability languages, also with a decomposition. The latter problem has been
shown to be decidable in [3]. For the precise upper bound, we rely on an
analysis inspired by [24].
While this is the overall strategy, it takes new ingredients to make it work
that go beyond the toolkit of VASS reachability. To explain them, we refer to
the first input VASS as the subject VASS. The second can be fixed to the Dyck VASS
with the transducer trick [3, 10].
#### Ingredients
We define doubly-marked graph transition sequences (DMGTS) as a new proof object. A
DMGTS $\mathcal{W}$ simultaneously tracks both the subject VASS and the Dyck
VASS, like a product construction would. Unlike a product, however, a DMGTS
defines two languages $L_{\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathsf{dy}}(\mathcal{W})$, and the goal is to understand the separability
of the two. To this end, we define a decomposition algorithm for DMGTS that is
inspired by Lambert’s decomposition [18]. What is new is that our
decomposition not only computes one set of perfect DMGTS, but also another set
of DMGTS for which separability is guaranteed to hold. The idea is that our
decomposition does not treat the languages as symmetric, but only tries to
preserve the subject language. If now, as the result of a decomposition step,
the Dyck language becomes empty, then separability will hold and there is no
need to decompose further.
DMGTS have a new property called faithfulness. Faithfulness says that it is
sufficient to track the Dyck language modulo a large number that is determined
in the course of the decomposition. To explain what it means to be sufficient,
note that DMGTS define acceptance not only by reaching a final counter
valuation from an initial one, but require the run to also reach intermediate
valuations. Faithfulness says that if we can reach these intermediate
valuations modulo a large number, then we can reach them precisely. Unlike
perfectness, faithfulness is not established by the decomposition, but it is
preserved as an invariant. The idea why faithfulness holds is this. The
decomposition only introduces intermediate valuations if a counter variable is
bounded. If we then track the counter modulo this bound, then we do not lose
information. For this argument to hold, it is crucial that the input DMGTS is
already faithful.
When the decomposition terminates, it returns a finite set of faithful and
perfect DMGTS (and the second set discussed above). The last ingredient is a
separability transfer result: if the DMGTS $\mathcal{W}$ is faithful, then
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ implies
$L_{\mathsf{sj}}(\mathcal{W})\mid L_{\mathsf{dy}}(\mathcal{W})$; if it is
perfect, the reverse holds. Behind the first implication is a result that
shows how to turn every separator for the $\mathbb{Z}$-approximations of the
languages into a separator for the languages of interest. Faithfulness is
crucial here. It tells us to intersect the given separator with a regular
language that tracks the Dyck counters modulo the large number determined by
the decomposition. The second implication says that if the
$\mathbb{Z}$-approximations are not separable, then this carries over to the
original languages. Behind this is an application of Lambert’s pumping lemma
[18], and the fact that both languages share the same DMGTS.
#### Overview
After an introduction to VASS, reachability languages, and regular
separability, we discuss Lambert’s decision procedure for VASS reachability in
Section 4. It contains a number of concepts that we build on, including MGTS,
characteristic equations, and perfectness. DMGTS and faithfulness are defined
in Section 5. Our decision procedure for regular separability is given in
Section 6. In Section 7, we prove the separability transfer result. The DMGTS
decomposition can be found in Section 8. Details missing in the main text can
be found in the appendix.
## 2\. VASS
A vector addition system with states $\mathcal{U}=(V,\Sigma,I,E)$ consists of
a finite set of nodes $V$, a finite alphabet $\Sigma$, a finite set of
counters $I$, and a finite set of edges $E\subseteq V\times\mathit{Up}\times
V$. We call $\mathit{Up}=\Sigma_{\varepsilon}\times\mathbb{Z}^{I}$ with
$\Sigma_{\varepsilon}=\Sigma\cup\\{\varepsilon\\}$ the set of updates.
We introduce some notation. For sequences $\sigma\in A^{*}$ over a set $A$, we
use $|{\sigma}|$ for the length and $\sigma[i]$ for the $i$-th component. We
use the distinguished indices $\mathsf{first}$ and $\mathsf{last}$ to access
the first resp. last component. When we have a function
$f:A\rightarrow\mathbb{P}(X)$ into a powerset and $B\subseteq A$, we may use
$f(B)$ for $\bigcup_{b\in B}f(b)$.
We will not only work with VASS but also with $\mathbb{Z}$-VASS. To define the
semantics of both models in one go, let $J\subseteq I$ be a subset of the
counters. A $J$-counter valuation
$c\in\mathbb{N}^{J}\times\mathbb{Z}^{I\setminus J}$ gives a non-negative value
to the $J$-counters. A $J$-configuration is a pair $(v,c)$ consisting of a
node $v\in V$ and a $J$-counter valuation $c$. A $J$-run is a sequence
$\rho=(v_{0},c_{0})e_{0}(v_{1},c_{1})\ldots(v_{l},c_{l})$ of
$J$-configurations and edges where for all $i<l$ we have
$e_{i}=(v_{i},a_{i},x_{i},v_{i+1})$ with $x_{i}=c_{i+1}-c_{i}$. We write
$\mathsf{Runs}_{J}(\mathcal{U})$ for the set of all $J$-runs in $\mathcal{U}$.
We also use $\mathsf{Runs}_{\mathbb{N}}(\mathcal{U})$ if $J$ contains all
counters, and $\mathsf{Runs}_{\mathbb{Z}}(\mathcal{U})$ if $J$ is empty. Note
that the configurations in a run are already determined by the initial counter
valuation and the sequence of edges. We may therefore also give a run as
$\rho=c.\sigma$ with $\sigma\in E^{*}$. We may also emphasize the initial and
final configurations and give a run as
$\rho=(v,c).\sigma.(v^{\prime},c^{\prime})$. We use
$\lambda(\rho)\in\Sigma^{*}$ for the sequence of letters on the run. Two runs
are equivalent, $\rho_{1}\approx\rho_{2}$, if they only differ in the nodes
they visit. We use $\Delta(e)\in\mathbb{Z}^{I}$ for the counter update done by
an edge, and $\Delta(\rho)$ for the counter update done by a run. A Parikh
vector $\psi\in\mathbb{N}^{E}$ associates with each edge an occurrence count,
and we define $\Delta(\psi)=\sum_{e\in E}\psi[e]\cdot\Delta(e)$. Note that
$\Delta(\rho)=\Delta(\psi(\rho))$, where $\psi(\rho)$ is the Parikh vector
induced by $\rho$.
We define accepting runs with generalized initial and final configurations.
Let $\mathbb{N}_{\omega}=\mathbb{N}\cup\\{\omega\\}$ and
$\mathbb{Z}_{\omega}=\mathbb{Z}\cup\\{\omega\\}$ extend the natural numbers
and the integers by a top element. We lift this to counter valuations and call
$c\in\mathbb{N}_{\omega}^{J}\times\mathbb{Z}_{\omega}^{I\setminus J}$ a
generalized $J$-counter valuation. We write $\Omega(c)$ for the set of
counters $i$ with $c(i)=\omega$. We call $(v,c)$ a generalized
$J$-configuration if $c$ is a generalized $J$-counter valuation. We define
acceptance parametric in a preorder $\sqsubseteq\
\subseteq\mathbb{Z}_{\omega}\times\mathbb{Z}_{\omega}$. We lift the preorder
to generalized counter valuations by a componentwise comparison. We also lift
it to generalized configurations by $(v_{1},c_{1})\sqsubseteq(v_{2},c_{2})$,
if $v_{1}=v_{2}$ and $c_{1}\sqsubseteq c_{2}$. An important instance is the
specialization preorder $\leq_{\omega}\
\subseteq\mathbb{Z}_{\omega}\times\mathbb{Z}_{\omega}$, which is defined by
$k\leq_{\omega}k$ and $k\leq_{\omega}\omega$ for all
$k\in\mathbb{Z}_{\omega}$.
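For example, lifting the specialization preorder to configurations gives $(v,(2,7))\leq_{\omega}(v,(2,\omega))$, whereas $(v,(2,7))\not\leq_{\omega}(v,(3,\omega))$: concrete entries have to match exactly, and $\omega$-entries impose no constraint.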
An initialized VASS
$\mathcal{V}=(\mathcal{U},(v_{\mathsf{in}},c_{\mathsf{in}}),(v_{\mathsf{out}},c_{\mathsf{out}}))$
enriches a VASS $\mathcal{U}$ with generalized $\mathbb{N}$-configurations
$(v_{\mathsf{in}},c_{\mathsf{in}})$ and $(v_{\mathsf{out}},c_{\mathsf{out}})$
that we call initial and final. We speak of an extremal configuration if it is
initial or final. The runs of $\mathcal{V}$ are the runs of $\mathcal{U}$.
Such a run is $\sqsubseteq$-accepting, if
$\rho[\mathsf{first}]\sqsubseteq(v_{\mathsf{in}},c_{\mathsf{in}})$ and
$\rho[\mathsf{last}]\sqsubseteq(v_{\mathsf{out}},c_{\mathsf{out}})$, that is, the first
configuration is smaller than the initial configuration in the given preorder,
and the last configuration is smaller than the final configuration. We use
$\mathsf{Acc}_{J,\sqsubseteq}(\mathcal{V})$ for the set of all $J$-runs in
$\mathcal{V}$ that are $\sqsubseteq$-accepting. We denote the size of an
initialized VASS by $|{\mathcal{V}}|$. We measure the size in binary, but this
does not matter for the large complexity classes we are concerned with.
With an initialized VASS, we associate the language of all words that label an
$\leq_{\omega}$-accepting run:
$\displaystyle
L_{J}(\mathcal{V})\;\;=\;\;\\{\lambda(\rho)\mid\rho\in\mathsf{Acc}_{J,\leq_{\omega}}(\mathcal{V})\\}\
.$
We use $L_{\mathbb{N}}(\mathcal{V})$ and $L_{\mathbb{Z}}(\mathcal{V})$ if
every counter resp. no counter has to stay non-negative. The former are the
VASS reachability languages and the latter the $\mathbb{Z}$-VASS reachability
languages.
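For illustration, consider the single-counter VASS with nodes $v_{1},v_{2}$, edges $(v_{1},(b,-1),v_{1})$, $(v_{1},(\varepsilon,0),v_{2})$, $(v_{2},(a,+1),v_{2})$, initial configuration $(v_{1},0)$, and final configuration $(v_{2},0)$. Its $\mathbb{Z}$-VASS reachability language is $\\{b^{n}a^{n}\mid n\geq 0\\}$, because only the final counter value is constrained, while its VASS reachability language is $\\{\varepsilon\\}$, because no $b$ can be read without the counter becoming negative.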
## 3\. Regular Separability
We study the regular separability of VASS reachability languages. Languages
$L_{1},L_{2}\subseteq\Sigma^{*}$ are _separable by a regular language_ ,
denoted by $L_{1}\mid L_{2}$, if there is a regular language
$S\subseteq\Sigma^{*}$ that satisfies $L_{1}\subseteq S$ and $S\cap
L_{2}=\emptyset$. The language $S$ is usually called the separator, and the
regular separability problem asks whether a separator exists for given
languages. In the definition of the decision problem, we again make the domain
of counter values a parameter:
> $\mathbb{X}$-REGSEP
> Given: Initialized VASS $\mathcal{V}_{1}$ and $\mathcal{V}_{2}$ over
> $\Sigma$.
> Problem: Does $L_{\mathbb{X}}(\mathcal{V}_{1})\mid
> L_{\mathbb{X}}(\mathcal{V}_{2})$ hold?
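For illustration, $\\{a^{n}b^{n}\mid n\geq 0\\}$ and $\\{b^{n}a^{n}\mid n\geq 1\\}$ are separated by the regular language $a^{*}b^{*}$. In contrast, $\\{a^{n}b^{n}\mid n\geq 0\\}$ and $\\{a^{n}b^{m}\mid n\neq m\\}$ are disjoint but not regularly separable: for any separator $S$, the language $S\cap a^{*}b^{*}$ would be regular and equal to $\\{a^{n}b^{n}\mid n\geq 0\\}$, a contradiction.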
Our main result is the decidability of regular separability for the
reachability languages of ordinary VASS.
###### Theorem 3.1.
$\mathbb{N}$-REGSEP is decidable and $\mathbf{F}_{\omega}$-complete. We can
effectively compute a separator in this time and space bound.
Recall that $\mathbf{F}_{\omega}$ is the class of problems that can be solved
with Ackermannian time and space [28]. It is closed under further calls to
primitive recursive functions [28, Lemma 4.6], and these functions are also
used as reductions to define hardness. Our lower bound for regular
separability is by a reduction from the reachability problem in
$\mathbb{N}$-VASS, whose $\mathbf{F}_{\omega}$-hardness is a recent
achievement [9, 22, 19]. It even holds if we promise the input languages to be
disjoint and the only separator candidate is $\emptyset$. The techniques are
standard and can be found in the appendix.
For the upper bound, we use the transducer trick [3, 10] and reduce
$\mathbb{N}$-REGSEP to a separability problem that only takes one reachability
language as input. The second language is fixed to the Dyck language $D_{n}$,
where $n$ is the number of counters in the second VASS. The Dyck language is
accepted by the initialized VASS
$(\mathcal{D}_{n},(v,\mathbf{0}),(v,\mathbf{0}))$ with
$\mathcal{D}_{n}=(\\{v\\},\Sigma_{n},\mathsf{dy},E)$. The alphabet is
$\Sigma_{n}=\\{a_{i},\bar{a}_{i}\mid 1\leq i\leq n\\}$, the counters
$\mathsf{dy}=\\{1,\ldots,n\\}$, and the edges are
$E=\\{(v,(a_{i},e_{i}),v),(v,(\bar{a}_{i},-e_{i}),v)\mid 1\leq i\leq n\\}$,
where $e_{i}$ is the $i$-th unit vector. This means we increment counter $i$
when seeing $a_{i}$, and decrement $i$ upon $\bar{a}_{i}$. We call a VASS that
sticks with this link between letters and counter updates _Dyck visible_. Note
that the VASS only has one state, and we also speak of a _VAS_.
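For example, with $n=2$ the word $a_{1}a_{2}\bar{a}_{2}\bar{a}_{1}$ belongs to $D_{2}$, whereas $a_{1}\bar{a}_{2}$ does not (counter $2$ would become negative) and $a_{1}a_{1}\bar{a}_{1}$ does not (counter $1$ ends at value $1$ instead of $0$).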
###### Lemma 3.2.
Let $\mathcal{U}$ and $\mathcal{V}$ be initialized VASS over $\Sigma$, and let
$\mathcal{V}$ have $n$ counters. We can compute in elementary time from
$\mathcal{V}$ a transducer $T$ so that
$\mathit{L}(\mathcal{U})\mid\mathit{L}(\mathcal{V})$ if and only if
$T^{-1}(\mathit{L}(\mathcal{U}))\mid D_{n}$.
VASS are effectively closed under inverse transductions.
###### Lemma 3.3.
Let $\mathcal{U}$ be an initialized VASS over $\Sigma$ and let $T$ be a
transducer from $\Sigma_{n}$ to $\Sigma$. We can compute in time elementary in
the size of $\mathcal{U}$ and $T$ a VASS $\mathcal{U}^{\prime}$ so that
$\mathit{L}(\mathcal{U}^{\prime})=T^{-1}(\mathit{L}(\mathcal{U}))$.
With the aforementioned closure of $\mathbf{F}_{\omega}$ under primitive
recursive functions, Theorem 3.1 is a consequence of the following result.
###### Proposition 3.4.
Let $\mathcal{U}$ be an initialized VASS over $\Sigma_{n}$. Then
$\mathit{L}(\mathcal{U})\mid D_{n}$ is decidable and we can compute a
separator in $\mathbf{F}_{\omega}$.
The rest of the paper is devoted to proving Proposition 3.4. Our algorithm is
based on the decision procedure for VASS reachability, whose details we recall
in a moment. A second ingredient is the decidability of regular separability
for $\mathbb{Z}$-VASS. This is a result we can use in black-box fashion, when
it is formulated as follows.
###### Theorem 3.5 ([3]).
$\mathbb{Z}$-REGSEP is decidable and a regular separator can be computed with
elementary resources.
## 4\. VASS Reachability
We recall Lambert’s decision procedure for VASS reachability [18], with the
recent additions due to Leroux and Schmitz [24]. The purpose is to introduce
concepts that we need later.
### 4.1. Overview
The VASS reachability problem takes as input an initialized VASS $\mathcal{V}$
and asks whether
$\mathsf{Acc}_{\mathbb{N},\leq_{\omega}}(\mathcal{V})\neq\emptyset$. The
decision procedure is an abstraction-refinement algorithm that computes a
sequence
$\displaystyle\mathsf{Acc}_{\mathbb{Z},\leq_{\omega}}(S_{0})\supseteq\mathsf{Acc}_{\mathbb{Z},\leq_{\omega}}(S_{1})\supseteq\ldots\supseteq\mathsf{Acc}_{\mathbb{N},\leq_{\omega}}(\mathcal{V})\
.$
Each over-approximation $S_{i}$ is a finite set of $\mathbb{Z}$-VASS
$\mathcal{W}$ and, as agreed, we use
$\mathsf{Acc}_{\mathbb{Z},\leq_{\omega}}(S_{i})$ for $\bigcup_{\mathcal{W}\in
S_{i}}\mathsf{Acc}_{\mathbb{Z},\leq_{\omega}}(\mathcal{W})$. Details on the
shape of $\mathcal{W}$ will follow in a moment.
At each step, the algorithm picks an element $\mathcal{W}\in S_{i}$ and checks
reachability. If reachability does not hold,
$\mathsf{Acc}_{\mathbb{Z},\leq_{\omega}}(\mathcal{W})=\emptyset$, then the
element will not be considered in the future,
$S_{i+1}=S_{i}\setminus\\{\mathcal{W}\\}$. If reachability holds, the
algorithm checks whether $\mathcal{W}$ is perfect, a property that we discuss
below. If so, the algorithm concludes that the $\mathbb{Z}$-run it has just
found can be turned into an $\mathbb{N}$-run, and terminates with the verdict
_reachable_. If $\mathcal{W}$ is not perfect, there is a guarantee that
$\mathcal{W}$ can be refined. Following Leroux and Schmitz [23], the
refinement is called KLMST decomposition after the inventors Kosaraju [17],
Lambert [18], Mayr [25], as well as Sacerdote and Tenney [27]. The
decomposition computes from $\mathcal{W}$ a new and again finite set
$S(\mathcal{W})$ of $\mathbb{Z}$-VASS that replace $\mathcal{W}$ in the
approximation, meaning we have $S_{i+1}=(S_{i}\setminus\\{\mathcal{W}\\})\cup
S(\mathcal{W})$. The decomposition guarantees
$\mathsf{Acc}_{\mathbb{N},\leq_{\omega}}(\mathcal{W})=\mathsf{Acc}_{\mathbb{N},\leq_{\omega}}(S(\mathcal{W}))$
so that we do not lose $\mathbb{N}$-runs but remain over-approximate. If
$S_{i}$ is found to be empty, the algorithm terminates with the verdict
_unreachable_.
What makes the algorithm terminate is a progress guarantee: each
$\mathbb{Z}$-VASS in the KLMST decomposition $S(\mathcal{W})$ is guaranteed to
be strictly smaller than $\mathcal{W}$ in a well-founded order. Since
$S(\mathcal{W})$ is finite, König’s lemma shows termination of the overall
algorithm.
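The following Python-style sketch (ours, for orientation only) summarizes this loop. The parameters `z_reachable`, `is_perfect`, and `klmst_decompose` are placeholders for the checks and the decomposition discussed in this section, not existing implementations.

```python
# Schematic sketch of the abstraction-refinement algorithm described above.
# The three callables are placeholders for the checks discussed in the text.

def vass_reachability(initial_mgts, z_reachable, is_perfect, klmst_decompose):
    """Decide reachability, starting from the over-approximation S_0 = {initial_mgts}."""
    worklist = {initial_mgts}            # the current over-approximation S_i
    while worklist:
        W = worklist.pop()               # pick some element of S_i
        if not z_reachable(W):           # Acc_Z(W) is empty: drop W
            continue
        if is_perfect(W):                # the Z-run found can be turned into an N-run
            return "reachable"
        worklist |= klmst_decompose(W)   # refine: replace W by S(W), each element
                                         # strictly smaller in the well-founded order
    return "unreachable"                 # the over-approximation became empty
```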
We elaborate on the $\mathbb{Z}$-VASS used in the over-approximation. There
are two standard techniques to over-approximate reachability in VASS:
characteristic equations [6, 12] and coverability graphs [13]. The
characteristic equations can track counter values precisely, but they cannot
guarantee that intermediate values remain non-negative. Coverability graphs
are the opposite: they can guarantee that intermediate values remain non-
negative, but they cannot track counter values precisely. The decision
procedure combines the two. The $\mathbb{Z}$-VASS are decorated by generalized
$\mathbb{N}$-counter valuations, like coverability graphs. If a decoration is
$\omega$, and thus not precise enough for reachability, that counter is
treated as a $\mathbb{Z}$-counter and handled by the characteristic equations.
For this combination to solve reachability, we have to be able to transfer the
non-negativity guarantee given by coverability graphs to the characteristic
equations. Behind the non-negativity guarantee given by coverability graphs is
a translation: if we have a run in the coverability graph, we obtain an
$\mathbb{N}$-run in the underlying VASS by repeating intermediate transition
sequences in order to pump counter values arbitrarily high. By transferring
the guarantee, we mean that also the characteristic equations should admit
this pumping behavior: the counter as well as the repetition count for the
edges on the pumping sequences should be unbounded in the solution space of
the characteristic equations. If this is the case, the coverability graph is
called perfect.
It is easy to check unboundedness in the solution space, and therefore also
perfectness: we just have to find a solution to the homogeneous variant of the
characteristic equations where the variable is positive. More precisely, the
characteristic equations will have the shape $Ax=b\wedge x\geq 0$. The
solutions $s$ can have arbitrarily high values $s[i]$ if and only if
$Ay=0\wedge y\geq 0$ has a solution $s^{\prime}$ with $s^{\prime}[i]>0$. To
see that homogeneous solutions are necessary, assume there are solutions with
arbitrarily high values for a variable. The well-quasi order of
$\mathbb{N}^{k}$ will give us comparable solutions that we can subtract to
solve the homogeneous equations.
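As a small illustration, the system $x_{1}-x_{2}=1\wedge x\geq 0$ has the solutions $(k+1,k)$ for every $k\in\mathbb{N}$, so both variables are unbounded, and indeed the homogeneous system $y_{1}-y_{2}=0\wedge y\geq 0$ has the solution $(1,1)$ with both entries positive. For $x_{1}+x_{2}=1\wedge x\geq 0$, in contrast, both variables are bounded by $1$, and $y_{1}+y_{2}=0\wedge y\geq 0$ only admits $y=\mathbf{0}$.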
The KLMST decomposition comes in when perfectness fails, that is, when the
$\omega$-decorations and pumping sequences do not coincide with the
unboundedness in the solution space. For example, the decoration may suggest
the counter value $\omega$, but the counter is bounded in the solution space.
Then the counter can only take finitely many values in runs that solve
reachability. The decomposition now replaces the $\omega$-entry by each of
these values. The result is a potentially large but finite set of new
$\mathbb{Z}$-VASS. The analysis also informs us about transitions that can
only be taken a bounded number of times in the solution space, but that lie on
loops in the coverability graph. To improve the precision, one unwinds the
coverability graph so that the bounded transitions lie on an acyclic path.
This is the second form of decomposition. The last decomposition has to do
with the fact that we need pumping sequences that justify the existence of the
$\omega$-entries. We make the ideas formal.
### 4.2. MGTS
The $\mathbb{Z}$-VASS used for the over-approximation are so-called _marked
graph transition sequences (MGTS)_ that are defined as follows:
$\displaystyle\mathcal{W}\;\;::=\;\;G\;\;\mid\;\;\mathcal{W}_{1}.\mathit{up}.\mathcal{W}_{2}\
.$
An MGTS is an interleaving of precovering graphs $G$ and updates
$\mathit{up}$. Precovering graphs are initialized VASS of a form we define in
a moment. In an MGTS, all precovering graphs share the same alphabet and the
same set of counters $I$, but the sets of nodes are pairwise disjoint.
A _precovering graph_ $G=(\mathcal{V},\varphi)$ is an initialized VASS
$\mathcal{V}$ that is decorated by a consistent assignment $\varphi$. The VASS
should be strongly connected and satisfy
$\mathcal{V}.v_{\mathsf{in}}=\mathcal{V}.v_{\mathsf{out}}$, called the root of
$G$ and denoted by $v_{\mathsf{root}}$. Let $V=\mathcal{V}.V$ be the nodes,
$E=\mathcal{V}.E$ the edges, and $I=\mathcal{V}.I$ the counters in
$\mathcal{V}$. An assignment $\varphi:V\to\mathbb{N}_{\omega}^{I}$ of
generalized counter valuations to nodes is _consistent_ , if there is a set of
counters $J\subseteq I$ so that for all nodes $v\in V$ we have
$\varphi(v)[j]\in\mathbb{N}$ if and only if $j\in J$, we have
$\varphi(v_{\mathsf{root}})[J]=c_{\mathsf{in}}[J]=c_{\mathsf{out}}[J]$, and
$\Delta(\mathit{up})[J]=\varphi(w)[J]-\varphi(v)[J]$ for all
$(v,\mathit{up},w)\in E$. The consistent assignment is the decoration from
above. Consistency means all nodes agree on the set of counters $J$ that are
decorated by $\mathbb{N}$-values. The remaining counters are decorated by
$\omega$ and we use $\Omega(G)=I\setminus J$ to refer to them. For the
counters in $J$, the decoration of the root has to coincide with the initial
and the final valuations. This means the initial and final valuations may only
have less $\omega$-entries. The decoration tracks the updates performed by the
edges.
A counter $i$ may be decorated by $\omega$ in the precovering graph but have a
concrete initial value, $i\in\Omega$ with
$\Omega=\Omega(G)\setminus\Omega(c_{\mathsf{in}})$. Then it should be possible
to pump this counter in the precovering graph to arbitrarily high values while
going from the root back to the root. Pumping means the loop starts in a
counter valuation $c_{1}$ that respects the concrete initial values and ends
in a valuation $c_{2}$ that is strictly larger in the counters from $\Omega$.
For the counters in $\Omega(c_{\mathsf{in}})$, there is no requirement and the
loop may reduce their values. The remaining counters are tracked by the
decoration and every loop will leave their valuation unchanged. We use
$c_{1}<^{\Omega}c_{2}$ for $c_{1}(i)<c_{2}(i)$ for all $i\in\Omega$. That
pumping should be possible means the following set of _covering sequences_
should be non-empty:
$\displaystyle\mathsf{CS}_{\mathsf{up}}(G)=\;\\{\sigma\in$ $\displaystyle\
E^{*}\mid\exists
c_{1},c_{2}\in\mathbb{N}^{I}.c_{1}\leq_{\omega}c_{\mathsf{in}}\wedge
c_{1}<^{\Omega}c_{2}$
$\displaystyle\qquad\wedge(v_{\mathsf{root}},c_{1}).\sigma.(v_{\mathsf{root}},c_{2})\in\mathsf{Runs}_{\mathbb{N}}(G)\\}\
.$
We also need to pump down counters $i$ that are decorated $\omega$ in the
precovering graph but receive a concrete value in the final configuration,
$i\in\Omega$ with $\Omega=\Omega(G)\setminus\Omega(c_{\mathsf{out}})$. We
reuse the above definition and define
$\displaystyle\mathsf{CS}_{\mathsf{down}}(G)\;\;=\;\;{\mathsf{CS}_{\mathsf{up}}({G}^{\mathit{rev}})}^{\mathit{rev}}\
.$
The reverse of a run is defined as expected,
${(\rho_{1}.\rho_{2})}^{\mathit{rev}}={\rho_{2}}^{\mathit{rev}}.{\rho_{1}}^{\mathit{rev}}$.
The reverse of a single edge is ${(v,a,x,w)}^{\mathit{rev}}=(w,a,-x,v)$,
meaning we increment where we have decremented before, and vice versa. The
reverse runs stem from a reversal of the precovering graph,
${G}^{\mathit{rev}}=(({\mathcal{U}}^{\mathit{rev}},(v_{\mathsf{root}},c_{\mathsf{out}}),(v_{\mathsf{root}},c_{\mathsf{in}})),\varphi)$.
Note that the initial and final configurations have changed their roles. The
reversal of the underlying counter system
${\mathcal{U}}^{\mathit{rev}}=(V,\Sigma,I,\\{{e}^{\mathit{rev}}\mid e\in
E\\})$ simply reverses the edges. Hence, the two reversals in the definition
have no effect on the counter updates but just identify the down-pumping runs
that end in $c_{\mathsf{out}}$ from arbitrarily high values in the
$\Omega$-counters.
The emptiness of $\mathsf{CS}_{\mathsf{up}}(G)$ and
$\mathsf{CS}_{\mathsf{down}}(G)$ can be checked using (two different)
unboundedness checks [13, 11], which will also provide covering sequences if
the sets are non-empty.
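As a small example, consider a precovering graph $G$ whose only node is the root, with a single counter $i$ decorated by $\omega$, with $c_{\mathsf{in}}[i]=c_{\mathsf{out}}[i]=0$, and with two self-loops $e_{+}$ and $e_{-}$ that increment respectively decrement $i$. Then $e_{+}\in\mathsf{CS}_{\mathsf{up}}(G)$, witnessed by $c_{1}=0$ and $c_{2}=1$, and $e_{-}\in\mathsf{CS}_{\mathsf{down}}(G)$, because ${e_{-}}^{\mathit{rev}}$ increments $i$ and therefore pumps it up from $c_{\mathsf{out}}$ in the reversed graph.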
In our development, it will be helpful to understand MGTS $\mathcal{W}$ as
initialized VASS. We simply turn the intermediate updates into edges that
connect the final node in one precovering graph with the initial node of the
next. We write $\mathcal{W}.V$ for the set of all nodes in precovering graphs
of $\mathcal{W}$, and similar for the alphabet, the counters, and the edges.
We write $\mathcal{W}.v_{\mathsf{in}}$ for
$\mathcal{W}[\mathsf{first}].v_{\mathsf{in}}$, and
$\mathcal{W}.v_{\mathsf{out}}$ for
$\mathcal{W}[\mathsf{last}].v_{\mathsf{out}}$. We use $|{\mathcal{W}}|$ for
the size. The initial and final configurations of each precovering graph count
towards the size.
Seeing MGTS as VASS, we can use $\mathsf{Acc}_{J,\sqsubseteq}(\mathcal{W})$ to
refer to the accepting runs. MGTS also have their own notion of _intermediate
acceptance_ , where the runs not only have to meet the overall initial and
final configurations, but the initial and final configurations of every
precovering graph. Since we transition through a sequence of precovering
graphs, we also speak of entry and exit rather than initial and final
configurations. We say that a $J$-run $\rho\in\mathsf{Runs}_{J}(\mathcal{W})$
is $\sqsubseteq$-intermediate accepting, if for every precovering graph $G$ in
$\mathcal{W}$ that is traversed by the infix $\rho_{G}$ of $\rho$, we have
$\rho_{G}[\mathsf{first}]\sqsubseteq(v_{\mathsf{root}},c_{\mathsf{in}})$ and
$\rho_{G}[\mathsf{last}]\sqsubseteq(v_{\mathsf{root}},c_{\mathsf{out}})$.
Here, $(v_{\mathsf{root}},c_{\mathsf{in}})$ and
$(v_{\mathsf{root}},c_{\mathsf{out}})$ are the entry and exit configurations
of $G$. We use $\mathsf{IAcc}_{J,\sqsubseteq}(\mathcal{W})$ for the set of all
$\sqsubseteq$-intermediate accepting $J$-runs in $\mathcal{W}$.
### 4.3. Characteristic Equations
The characteristic equations for MGTS are
$\displaystyle\mathsf{Char}(G)$
$\displaystyle\;\;=\;\;\mathsf{RunsEq}(G)\wedge\mathsf{IAccEq}(G,\leq_{\omega})$
$\displaystyle\mathsf{Char}(G.\mathit{up}.\mathcal{W})$
$\displaystyle\;\;=\;\;\mathsf{Char}(G)\wedge\mathsf{Char}(\mathcal{W})$
$\displaystyle\hskip 19.91684pt\wedge
x[G,\mathsf{out}]+\Delta(\mathit{up})-x[\mathcal{W}[\mathsf{first}],\mathsf{in}]=\mathbf{0}\
.$
For each precovering graph $G$ and each counter $i$ in the MGTS, we introduce
the variables $x[G,\mathsf{in},i]$ and $x[G,\mathsf{out},i]$. The idea is that
the vectors
$x[G,\mathsf{in}]=(x[G,\mathsf{in},1],\ldots,x[G,\mathsf{in},|{I}|])$ and
$x[G,\mathsf{out}]$ describe the counter valuations upon entering respectively
exiting the precovering graph. The last system of equations says that the
counter valuation when entering the first precovering graph in $\mathcal{W}$
is the valuation when leaving $G$ plus the counter update
$\Delta(\mathit{up})$. We also have a variable $x[e]$ for every edge in a
precovering graph, which counts how often the edge is taken in a run.
The system of equations $\mathsf{RunsEq}(G)$ captures the runs through the
precovering graph $G$. It consists of the Kirchhoff equations, the marking
equations, and equations that require non-negative values for the edge
variables. The Kirchhoff equations express the fact that a run must enter and
exit every node the same number of times. Since precovering graphs are
strongly connected, this means the edge vector can be turned into a path
provided every single edge is taken at least once. Perfectness will make sure
this is the case. The marking equations say that the counter valuation after
the run is the initial counter valuation plus the updates performed by the
edges. The reader may note that we would only have to track the values for
counters in $\Omega(G)$, but this would clutter the presentation. Let $V=G.V$,
$E=G.E$, and $E^{\mathsf{to}}(v)$ and $E^{\mathsf{from}}(v)$ denote the edges
leading to and originating from node $v$. We have
$\mathsf{RunsEq}(G)$: $\displaystyle\sum_{e\in
E^{\mathsf{to}}(v)}x[e]-\sum_{e\in E^{\mathsf{from}}(v)}x[e]$
$\displaystyle=0\quad\text{for all }v\in V$ $\displaystyle
x[G,\mathsf{in}]+\Delta(x[E])-x[G,\mathsf{out}]$ $\displaystyle=\mathbf{0}$
$\displaystyle x[e]$ $\displaystyle\geq 0\quad\text{for all }e\in E\ .$
The system $\mathsf{IAccEq}(G,\sqsubseteq)$ formulates
$\sqsubseteq$-intermediate acceptance. It says that the initial counter
valuation held by $x[G,\mathsf{in}]$ should be smaller than the initial
counter valuation $G.c_{\mathsf{in}}$ wrt. $\sqsubseteq$, and the final
counter valuation $x[G,\mathsf{out}]$ should be smaller than
$G.c_{\mathsf{out}}$. Moreover, both valuations should be non-negative. We
define
$\mathsf{IAccEq}(G,\sqsubseteq)$: $\displaystyle\mathbf{0}\leq
x[G,\mathsf{in}]$ $\displaystyle\sqsubseteq G.c_{\mathsf{in}}$
$\displaystyle\mathbf{0}\leq x[G,\mathsf{out}]$ $\displaystyle\sqsubseteq
G.c_{\mathsf{out}}\ .$
As explained above, to judge whether a variable is unbounded in the solution
space of $\mathsf{Char}(\mathcal{W})$, we need a homogeneous variant of the
characteristic equations. Since most equations are already homogeneous, all we
have to do is replace the concrete values in $\mathsf{IAccEq}(G,\sqsubseteq)$
by zero. We thus define $\mathbf{0}_{\mathsf{in}}\in\\{0,\omega\\}^{I}$ by
$\mathbf{0}_{\mathsf{in}}[i]=\omega$ if and only if
$c_{\mathsf{in}}[i]=\omega$ for all $i\in I$, and similar for
$\mathbf{0}_{\mathsf{out}}$. With this,
$\mathsf{HomIAccEq}(G,\sqsubseteq)$: $\displaystyle\mathbf{0}\leq
x[G,\mathsf{in}]$ $\displaystyle\sqsubseteq\mathbf{0}_{\mathsf{in}}$
$\displaystyle\mathbf{0}\leq x[G,\mathsf{out}]$
$\displaystyle\sqsubseteq\mathbf{0}_{\mathsf{out}}\ .$
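For instance, with $\sqsubseteq$ instantiated to $\leq_{\omega}$ and $G.c_{\mathsf{in}}=(3,\omega)$, we get $\mathbf{0}_{\mathsf{in}}=(0,\omega)$, so the homogeneous constraint forces $x[G,\mathsf{in},1]=0$ while $x[G,\mathsf{in},2]$ only has to be non-negative; variables for counters with concrete extremal values can therefore never enter the support defined below.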
A support solution of $\mathsf{Char}(\mathcal{W})$ is a vector
$s\in\mathbb{Z}^{\mathcal{W}}$ that satisfies the homogeneous characteristic
equations. The support $\mathsf{supp}(\mathsf{Char}(\mathcal{W}))$ consists of
all (counter and edge) variables $x[c]$ for which there is a support solution
$s$ with $s[c]\geq 1$. We call $s$ a full support solution, if it gives a
positive value to all variables in the support, $s[c]\geq 1$ for all
$x[c]\in\mathsf{supp}(\mathsf{Char}(\mathcal{W}))$. Since support solutions
are stable under addition, we have the following result.
###### Lemma 4.1.
There always is a full support solution of $\mathsf{Char}(\mathcal{W})$.
### 4.4. Perfectness and Reachability
When it comes to reachability, an $\leq_{\omega}$-intermediate accepting
$\mathbb{N}$-run in an MGTS $\mathcal{W}$ immediately yields a solution of the
characteristic equations $\mathsf{Char}(\mathcal{W})$. Lambert’s important
insight is that the reverse also holds [18]: a solution to the characteristic
equations yields an $\leq_{\omega}$-intermediate accepting $\mathbb{N}$-run.
What is remarkable is that the characteristic equations cannot guarantee non-
negativity of the valuations attained within precovering graphs. Instead,
Lambert achieves non-negativity by pumping covering sequences. His result
needs the hypothesis that covering sequences exist and, moreover, the
characteristic equations admit the pumping. This is captured by the notion of
perfectness. The MGTS $\mathcal{W}$ is _perfect_ , if for every precovering
graph $G$ in $\mathcal{W}$,
* (i)
$\mathsf{CS}_{\mathsf{up}}(G)\neq\emptyset\neq\mathsf{CS}_{\mathsf{down}}(G)$,
and
* (ii)
$\mathsf{supp}(\mathsf{Char}(\mathcal{W}))$ justifies the unboundedness in
$G$.
It remains to define what it means to justify the unboundedness. We make the
definition slightly more general so that we can reuse it later. Let $G$ have
counters $I$, edges $E$, initial valuation $c_{\mathsf{in}}$, and final
valuation $c_{\mathsf{out}}$. Let $J\subseteq I$ be a subset of the counters.
We say that $\mathsf{supp}(\mathsf{Char}(\mathcal{W}))$ _justifies the
unboundedness of $J$ in $G$_, if
$\displaystyle x[G,\mathsf{in},j]$
$\displaystyle\in\mathsf{supp}(\mathsf{Char}(\mathcal{W}))\text{ for all $j\in
J$ with $c_{\mathsf{in}}[j]=\omega$}$ $\displaystyle x[G,\mathsf{out},j]$
$\displaystyle\in\mathsf{supp}(\mathsf{Char}(\mathcal{W}))\text{ for all $j\in
J$ with $c_{\mathsf{out}}[j]=\omega$}$ $\displaystyle x[e]$
$\displaystyle\in\mathsf{supp}(\mathsf{Char}(\mathcal{W}))\text{ for all $e\in
E$}\ .$
If $J=I$, we say _$\mathsf{supp}(\mathsf{Char}(\mathcal{W}))$ justifies the
unboundedness in $G$_.
Perfectness of $\mathcal{W}$ is sufficient to construct an $\mathbb{N}$-run
from a solution to the characteristic equations:
$\displaystyle\mathsf{IAcc}_{\mathbb{N},\leq_{\omega}}(\mathcal{W})\neq\emptyset\qquad$
$\displaystyle\text{iff}\qquad\mathsf{IAcc}_{\mathbb{Z},\leq_{\omega}}(\mathcal{W})\neq\emptyset$
$\displaystyle\text{iff}\qquad\mathsf{Char}(\mathcal{W})\text{ is feasible}\
.$
The implication from right to left is Lambert’s famous iteration lemma [18,
Lemma 4.1]. As the key arguments will reappear in our solution to regular
separability, we explain them before stating the result. For simplicity,
assume there is only one precovering graph $G$ whose initial and final
valuations are concrete, $c_{\mathsf{in}},c_{\mathsf{out}}\in\mathbb{N}^{I}$.
Let $s_{\mathit{h}}$ be a full support solution that exists by Lemma 4.1. Let
$s_{\mathit{c}}$ be a solution to the characteristic equations that exists by
feasibility. Then $s_{\mathit{h}}+s_{\mathit{c}}=s$ solves the characteristic
equations and satisfies $s[e]\geq 1$ for every edge. As the solution contains
every edge, we can turn it into a path $\rho$ with $\psi(\rho)=s$.
Unfortunately, the path may not be an $\mathbb{N}$-run, because the
$\omega$-decorated counters may become negative. By perfectness, however,
there is a covering sequence $u\in\mathsf{CS}_{\mathsf{up}}(G)$ that produces
a positive value on all $\omega$-decorated counters (recall that the initial
valuation is concrete). The idea is to repeat $u$ to enable $\rho$.
Unfortunately, we cannot repeat $u$ in isolation, otherwise we may end up with
a run that no longer solves reachability. The way out is to work with
repetitions of the support solution. We also have to involve a sequence
$v\in\mathsf{CS}_{\mathsf{down}}(G)$ for reasons that will become clear in a
moment. Select $m\in\mathbb{N}$ so that
(1) $\displaystyle m\cdot s_{\mathit{h}}[E]-\psi(u)-\psi(v)\geq\mathbf{1}\ .$
The condition says that $m$ copies of the support solution contain enough
transitions to fit in $u$, $v$, and another cycle $w$. We can form $w$ because
we still have every edge at least once. The idea is to embed $\rho$ into a
repetition $u^{k}.\rho.w^{k}.v^{k}$. We first have a sequence that increases
the counter values and in the end a sequence that decreases them. Since $u$
and $v$ are $\mathbb{N}$-runs, we know they are executable once we have their
initial valuations.
Unfortunately, we do not even know that $u.w.v$ forms an $\mathbb{N}$-run. The
word $w$ may have a negative effect on the $\omega$-decorated counters. This
is where $v$ comes in. We know that $u.w.v$ has a zero effect on the
$\omega$-decorated counters by the shape of the homogeneous characteristic
equations. Moreover, we know that $v$ has a strictly negative effect on these
counters by the definition of $\mathsf{CS}_{\mathsf{down}}(G)$. Then $u.w$
must have a strictly positive effect on the $\omega$-decorated counters. This
means there is $k\in\mathbb{N}$ so that $u^{k}.w^{k}.v^{k}$ is an
$\mathbb{N}$-run. We can choose $k$ large enough for $u^{k}.\rho.w^{k}.v^{k}$
to form an $\mathbb{N}$-run.
It is also helpful to consider the case where $c_{\mathsf{out}}[i]=\omega\neq
c_{\mathsf{in}}[i]$, meaning the precovering graph has to provide arbitrarily
large values for counter $i$. By perfectness, we know that
$x[G,\mathsf{out},i]$ is in the support. In the above discussion, this means
$u.w.v$ will have a strictly positive effect on this counter.
To lift the argumentation from precovering graphs to composed MGTS, we have to
deal with $\omega$-entries in the initial valuation of a precovering graph. An
$\omega$-entry means the covering sequence may have a negative impact on the
counter. To be able to execute the sequence, we let the precovering graphs
which are placed earlier in the MGTS produce a high enough value on the
counter as follows. By perfectness, the variable for the $\omega$-decorated
counter is in the support. This means we can scale the support solution
$s_{\mathit{h}}$ by a factor $m\in\mathbb{N}$ that not only achieves Condition
(1) from above, but also satisfies the following: for all precovering graphs
$G$ in $\mathcal{W}$ with $\omega$-decorated counter $i$,
$u\in\mathsf{CS}_{\mathsf{up}}(G)$ and $v\in\mathsf{CS}_{\mathsf{down}}(G)$,
we have
(2) $\displaystyle m\cdot s_{\mathit{h}}[G,\mathsf{in},i]+\Delta(u)\geq
1\;\text{and}\;m\cdot s_{\mathit{h}}[G,\mathsf{out},i]-\Delta(v)\geq 1\ .$
The following is a slightly strengthened variant of Lambert’s iteration lemma
that makes explicit the universal quantification over the cycles that can be
iterated. This gives us freedom for our construction in Section 7. Despite the
stronger formulation, the correctness of the lemma still follows from the
proof in [18].
###### Lemma 4.2 (Lambert’s Iteration Lemma, Lemma 4.1 in [18]).
Let $\mathcal{W}$ be a perfect MGTS. For every $G_{i}$ in $\mathcal{W}$, let
$u_{i}\in\mathsf{CS}_{\mathsf{up}}(G_{i})$ and
$v_{i}\in\mathsf{CS}_{\mathsf{down}}(G_{i})$. Let $s_{\mathit{c}}$ solve
$\mathsf{Char}(\mathcal{W})$. We can compute
* •
a support solution $s_{\mathit{h}}$ satisfying (1) and (2) for every $G_{i}$,
* •
for every $G_{i}$, cycles $\rho_{i}$ and $w_{i}$ originating in the root,
* •
so that $s_{\mathit{h}}[E_{i}]=\psi(u_{i})+\psi(w_{i})+\psi(v_{i})$,
* •
and $s_{\mathit{c}}[E]=\sum_{G_{i}\in\mathcal{W}}\psi(\rho_{i})$.
Moreover, for every $s_{\mathit{c}}$, $s_{\mathit{h}}$, and $(u_{i}$,
$\rho_{i}$, $w_{i}$, $v_{i})_{G_{i}\in\mathcal{W}}$ that satisfy the above
conditions, there is $k_{0}\in\mathbb{N}$ so that for all $k\in\mathbb{N}$
with $k_{0}\leq k$
$\displaystyle
c.u_{0}^{k}\rho_{0}w_{0}^{k}v_{0}^{k}\mathit{up}_{1}\ldots\mathit{up}_{\mathsf{last}}u_{\mathsf{last}}^{k}\rho_{\mathsf{last}}w_{\mathsf{last}}^{k}v_{\mathsf{last}}^{k}$
$\displaystyle\in\mathsf{IAcc}_{\mathbb{N},\leq_{\omega}}(\mathcal{W})\ .$
### 4.5. Decomposition
Our decision procedure for regular separability will modify the KLMST
decomposition. We therefore omit the details here and only discuss the well-
founded relation. To achieve the $\mathbf{F}_{\omega}$ upper bound, we cannot
work with the well-founded relation from [18], but rely on recent ideas from
[24]. We assign each MGTS with $d$ counters a rank in $\mathbb{N}^{d+1}$, and
define the well-founded relation $<_{\mathsf{rnk}}$ to compare the ranks
lexicographically. The rank of an MGTS $\mathcal{W}$ is defined inductively.
For a precovering graph, $\mathsf{rank}(G)$ is a vector with a single non-zero
entry, and this entry holds information about the size of $G$. The entry
itself is related to the dimension of a vector space $\mathsf{V}(G)$ that is
associated with the precovering graph. This is the space spanned by the cycle
effects:
$\displaystyle\mathsf{V}(G)=\mathsf{span}(\\{\Delta(\rho)\mid\rho=(v,c)\ldots(v,c^{\prime})\in\mathsf{Runs}_{\mathbb{Z}}(G)\\})\
.$
Assume $\mathsf{V}(G)$ is $i$-dimensional. We define
$\mathsf{rank}(G)[d-i]=|{G.E}|+|{\Omega(G.c_{\mathsf{in}})}|+|{\Omega(G.\varphi)}|+|{\Omega(G.c_{\mathsf{out}})}|$
and $\mathsf{rank}(G)[j]=0$ for $j\neq d-i$. The inductive case is
$\mathsf{rank}(\mathcal{W}_{1}.\mathit{up}.\mathcal{W}_{2})=\mathsf{rank}(\mathcal{W}_{1})+\mathsf{rank}(\mathcal{W}_{2})$.
The definition allows us to unwind a precovering graph into an MGTS with a
number of precovering graphs. If the cycle spaces of the new precovering
graphs have a smaller dimension, then this makes their non-zero entry in the
rank move to the right. As a consequence, the well-founded order decreases,
even though the new MGTS may have more edges in total.
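For example, with $d=2$ counters, a precovering graph whose cycle space is $2$-dimensional and whose non-zero entry equals $5$ has $\mathsf{rank}(G)=(5,0,0)$; an unwinding into two precovering graphs with $1$-dimensional cycle spaces and entries $17$ and $9$ has rank $(0,26,0)$, which is lexicographically smaller although the MGTS has grown.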
## 5\. DMGTS
We introduce _doubly-marked graph transition sequences (DMGTS)_ as the data
structure behind our decision procedure for regular separability. Recall that
the goal is to separate the language of a subject VASS from the Dyck language.
The idea of DMGTS is to simultaneously track both languages, very much like
an MGTS for the intersection would. The coupling, however, is not as tight as
in the case of intersection. Instead, the notion of perfectness and the
decomposition focus on the language of the subject VASS, and only consider the
Dyck language as a barrier to which the subject language cannot be extended.
A DMGTS $\mathcal{W}=(\mathcal{S},\mu)$ consists of an MGTS $\mathcal{S}$ and
a natural number $\mu\geq 1$. The counters in $\mathcal{S}$ form a disjoint
union $\mathsf{sj}\uplus\mathsf{dy}$ between the counters $\mathsf{sj}$ in the
subject VASS and the counters $\mathsf{dy}$ in the VASS accepting the Dyck
language. We use $\mathsf{sd}$ to refer to the counters from either side,
$\mathsf{sj}$ or $\mathsf{dy}$. The idea is this: when we project the runs in
$\mathcal{S}$ to $\mathsf{sj}$, we obtain behavior of the subject VASS, and
when we project the runs to $\mathsf{dy}$, we obtain the effect that this
behavior has on the Dyck language. To achieve this, we expect that the
counters in $\mathsf{dy}$ are updated in a visible way, meaning a letter
$a_{i}$ leads to an increment of the $i$-th Dyck-counter, and $\bar{a}_{i}$
leads to a decrement. We lift the well-founded relation from MGTS to DMGTS and
define $(\mathcal{S}_{1},\mu_{1})<_{\mathsf{rnk}}(\mathcal{S}_{2},\mu_{2})$ by
$\mathcal{S}_{1}<_{\mathsf{rnk}}\mathcal{S}_{2}$. The size is
$|{\mathcal{W}}|=|{\mathcal{S}}|+|{\mu}|$, where $|{\mu}|$ is the length of
the binary representation. We call $\mathcal{W}$ _zero-reaching_ , if
$\mathcal{W}.c_{\mathsf{in}}[\mathsf{dy}]=\mathbf{0}=\mathcal{W}.c_{\mathsf{out}}[\mathsf{dy}]$.
When it comes to regular separability, we only track the Dyck language in an
approximate way, namely by acceptance modulo $\mu$, and so the number $\mu$ in
the definition of DMGTS is the precision of this approximation. The central
new notion will then be faithfulness: if we have an $\mathbb{N}$-run where we
only know that the Dyck-side is accepting modulo $\mu$, then we are sure the
Dyck-side is already accepting in the normal sense. This will allow us to
construct a regular separator in Section 7. To formulate acceptance modulo
$\mu$, we define the _modulo $\mu$ specialization order_
$\sqsubseteq_{\omega}^{\mu}\
\subseteq\mathbb{Z}_{\omega}\times\mathbb{Z}_{\omega}$ by
$i\sqsubseteq_{\omega}^{\mu}j$, if $j=\omega$ or $i\equiv j$ mod $\mu$. We
extend it to counter valuations and to configurations as we have done for the
specialization order. Given a preorder $\sqsubseteq$ on configurations, we use
$\sqsubseteq\\![\mathsf{sd}]$ for the restriction of the preorder that only
compares the counters in $\mathsf{sd}$, but does not constrain the remaining
counters.
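The following minimal Python sketch (ours; `OMEGA` is merely a marker standing for $\omega$) spells out the modulo $\mu$ specialization order and its componentwise lifting to counter valuations.

```python
OMEGA = "w"  # marker for the symbol omega

def spec_mod(i, j, mu):
    """i ⊑_ω^μ j : holds if j is omega, or i and j agree modulo mu."""
    return j == OMEGA or (i - j) % mu == 0

def spec_mod_valuation(c1, c2, mu):
    """Componentwise lifting of ⊑_ω^μ to counter valuations."""
    return all(spec_mod(a, b, mu) for a, b in zip(c1, c2))

assert spec_mod(7, 1, 3)                         # 7 ≡ 1 mod 3
assert spec_mod(7, OMEGA, 3)                     # omega is above everything
assert not spec_mod(7, 2, 3)
assert spec_mod_valuation((7, 0), (1, OMEGA), 3)
```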
Since DMGTS are MGTS, the definitions of runs, acceptance, and intermediate
acceptance carry over. To account for the fact that a DMGTS is meant to
represent two languages, one for the Dyck-side and one for the VASS-side, we
add the following abbreviations:
$\displaystyle\mathsf{(I)Acc}_{\mathsf{dy}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\
\mathsf{(I)Acc}_{\mathsf{dy},\leq_{\omega}\\![\mathsf{dy}]}(\mathcal{W})$
$\displaystyle\mathsf{(I)Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\
\mathsf{(I)Acc}_{\mathbb{Z},\leq_{\omega}\\![\mathsf{dy}]}(\mathcal{W})$
$\displaystyle\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\
\mathsf{IAcc}_{\mathsf{sj},\leq_{\omega}\\![\mathsf{sj}]}(\mathcal{W})\;\cap\;\mathsf{IAcc}_{\mathsf{dy},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})\
$ $\displaystyle\mathsf{IAcc}_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\
\mathsf{IAcc}_{\mathbb{Z},\leq_{\omega}\\![\mathsf{sj}]}(\mathcal{W})\;\cap\;\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})\
.$
A run is (intermediate) accepting for the Dyck-side, if the counters in the
set $\mathsf{dy}$ remain non-negative and on these counters the run is
(intermediate) accepting in the normal sense, expressed as
$\leq_{\omega}\\![\mathsf{dy}]$. A run is intermediate accepting on the VASS-
side, if the same holds for the counters in $\mathsf{sj}$ and, moreover, the
Dyck-side is accepting modulo $\mu$. The purpose of the intersection is to
limit the words the subject VASS may produce before violating separability.
This will become clear in Section 7. We also introduce
$\mathbb{Z}$-relaxations of these notions. With acceptance in place, we define
the languages
$\displaystyle L_{\mathsf{sd}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\\{\lambda(\rho)\mid\rho\in\mathsf{IAcc}_{\mathsf{sd}}(\mathcal{W})\\}$
$\displaystyle L_{\mathbb{Z},\mathsf{sd}}(\mathcal{W})\;\;$
$\displaystyle=\;\;\\{\lambda_{\\#}(\rho)\mid\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{sd}}(\mathcal{W})\\}\
.$
By $\lambda_{\\#}$, we denote the function that extracts the edge labels
except that it replaces the label $\lambda(\mathit{up})$ of every update
between precovering graphs by $(\lambda(\mathit{up}),\\#)$. This will allow us
to uniquely identify the current precovering graph in a run. The following is
immediate.
###### Lemma 5.1.
If $\mathcal{W}$ is zero-reaching, we have
$L_{\mathsf{dy}}(\mathcal{W})\subseteq D_{n}$. One can construct a
$\mathbb{Z}$-VASS that accepts the language
$L_{\mathbb{Z},\mathsf{sd}}(\mathcal{W})$.
We also define characteristic equations for each side that mimic the notions
of acceptance we have just defined:
$\displaystyle\mathsf{Char}_{\mathsf{dy}}(G,\mu)$
$\displaystyle\;\;=\;\;\mathsf{RunsEq}(G)\wedge\mathsf{IAccEq}(G,\leq_{\omega}\\![\mathsf{dy}])$
$\displaystyle\mathsf{Char}_{\mathsf{sj}}(G,\mu)$
$\displaystyle\;\;=\;\;\mathsf{RunsEq}(G)\wedge\mathsf{IAccEq}(G,\leq_{\omega}\\![\mathsf{sj}])\wedge\mathsf{IAccEq}(G,\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}])$
$\displaystyle\mathsf{Char}_{\mathsf{sd}}(G.\mathit{up}.\mathcal{W},\mu)$
$\displaystyle\;\;=\;\;\mathsf{Char}_{\mathsf{sd}}(G,\mu)\wedge\mathsf{Char}_{\mathsf{sd}}(\mathcal{W},\mu)\wedge x[G,\mathsf{out}]+\Delta(\mathit{up})-x[\mathcal{W}[\mathsf{first}],\mathsf{in}]=\mathbf{0}\ .$
We also define the support
$\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$. As in the case of
reachability, we first define homogeneous variants of the above systems by
replacing concrete values in $\mathsf{IAccEq}(G,\sqsubseteq)$ with zero. Then
we collect the variables that receive a positive value in a solution to the
homogeneous characteristic equations.
The central new notion for regular separability is faithfulness. A DMGTS
$\mathcal{W}$ is _faithful_ , if it is zero-reaching and
(3) $\displaystyle\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\ \cap\
\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})\quad\subseteq\quad\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\
.$
The definition considers runs that take the Dyck counters from zero to zero.
That the initial and final valuations are precisely zero and not just zero
modulo $\mu$ is by the intersection with
$\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. Faithfulness now says
that, for the intermediate precovering graphs in the underlying MGTS, there is
no difference between acceptance modulo $\mu$ and ordinary acceptance. Indeed,
the reverse inclusion is readily checked (but will not be needed).
A DMGTS $\mathcal{W}$ is _perfect_ , if it is faithful and for every
precovering graph $G$ in $\mathcal{W}$, for every
$\mathsf{sd}\in\\{\mathsf{sj},\mathsf{dy}\\}$,
* •
$\mathsf{CS}_{\mathsf{up}}(G)\neq\emptyset\neq\mathsf{CS}_{\mathsf{down}}(G)$
and
* •
$\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$ justifies the
unboundedness of $\mathsf{sd}$ in $G$.
Note that the edge variables are in the support of both the VASS-side and the
Dyck-side.
## 6\. Deciding Regular Separability
Our decision procedure for regular separability decomposes the DMGTS
$\mathcal{W}$ of interest until the regular separability
$L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$ reduces to the regular separability
of the $\mathbb{Z}$-VASS approximations
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. The latter problem can be solved
with the algorithm from [3], as stated in Theorem 3.5 above. Behind our
decision procedure are two key results. The first says that we can decompose
faithful DMGTS into two finite sets of DMGTS.
###### Lemma 6.1.
We can decompose a faithful DMGTS $\mathcal{W}$ in
$\mathbf{F}_{|{\mathsf{sj}\cup\mathsf{dy}}|+4}$ into two finite sets
$\mathsf{Perf}$ and $\mathsf{Fin}$ of DMGTS, where all
$\mathcal{S}\in\mathsf{Perf}$ are perfect, all $\mathcal{T}\in\mathsf{Fin}$
satisfy $L_{\mathsf{sj}}(\mathcal{T})\mid D_{n}$, and
$\displaystyle L_{\mathsf{sj}}(\mathcal{W})\quad=\quad
L_{\mathsf{sj}}(\mathsf{Perf}\cup\mathsf{Fin})\ .$
The second is a transfer result, saying that separability can be checked on
the $\mathbb{Z}$-approximations as soon as we have perfectness.
###### Lemma 6.2.
If $\mathcal{W}$ is faithful, then
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ implies
$L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$. If $\mathcal{W}$ is perfect, also
the reverse holds.
Since perfect DMGTS are faithful, the following is a consequence.
###### Corollary 6.3.
Let $\mathcal{W}$ be perfect. Then $L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$ if
and only if $L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
These insights allow us to decide regular separability.
###### Proof of Proposition 3.4.
To ease the notation, we assume the subject VASS is actually a VAS,
$\mathcal{U}=((\\{v\\},\Sigma_{n},\mathsf{sj},E),(v,c_{1}),(v,c_{2}))$. It is
well-known that any VASS can be turned into a VAS with the same language by
introducing auxiliary counters for the states. One can also adapt our
procedure to directly work with VASS. We first check
$\mathit{L}(\mathcal{U})\cap D_{n}=\emptyset$. If the intersection is non-
empty, our decision procedure returns inseparable. If it is empty, we
construct an initial DMGTS $\mathcal{W}$ with
$\mathit{L}(\mathcal{U})=L_{\mathsf{sj}}(\mathcal{W})$. Checking
$\mathit{L}(\mathcal{U})\mid D_{n}$ then amounts to checking
$L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$.
We define $\mathcal{W}=(G,\mu)$ with $\mu=1$. The precovering graph $G$ uses
the underlying VASS
$\mathcal{V}=(\\{v_{\mathsf{root}}\\},\Sigma_{n},\mathsf{sj}\uplus\mathsf{dy},E^{\prime})$.
It has a single node and both sets of counters. The set of edges $E^{\prime}$
contains a loop $(v_{\mathsf{root}},a,(x,y),v_{\mathsf{root}})$ for every
$(v,a,x,v)\in E$. The vector $y$ modifies the counters in $\mathsf{dy}$ as
required by Dyck visibility. The precovering graph is
$G=(\mathcal{V},(v_{\mathsf{root}},(c_{1},\mathbf{0})),(v_{\mathsf{root}},(c_{2},\mathbf{0})),\varphi)$.
The root node is decorated by $\omega$, expressed as
$\varphi=\mathbb{N}^{\emptyset}$. The initial and final valuations expect the
counters in $\mathsf{sj}$ to behave like in $\mathcal{U}$, and the counters in
$\mathsf{dy}$ to go from zero to zero. For
$L_{\mathsf{sj}}(\mathcal{W})=\mathit{L}(\mathcal{U})$, note that the modulo
$\mu=1$ constraints that $L_{\mathsf{sj}}(\mathcal{W})$ imposes on the Dyck-
side are no restriction at all, since any two values are congruent modulo $1$.
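A minimal Python sketch (ours; the encoding of $a_{i}$ as `("a", i)` and of $\bar{a}_{i}$ as `("abar", i)` is an assumption made only for this illustration) of how the root loops receive their Dyck-visible updates:

```python
def dyck_update(letter, n):
    """Vector y over the n Dyck counters: a_i increments counter i,
    bar{a}_i decrements it, any other letter leaves them unchanged."""
    y = [0] * n
    kind, i = letter
    if kind == "a":
        y[i] += 1
    elif kind == "abar":
        y[i] -= 1
    return tuple(y)

def initial_loops(E, n):
    """Turn each subject edge (v, a, x, v) into a root loop with update (x, y)."""
    return [("root", a, (x, dyck_update(a, n)), "root") for (v, a, x, w) in E]

# Two subject edges over a single Dyck pair (n = 1), with subject updates x:
E = [("v", ("a", 0), (1,), "v"), ("v", ("abar", 0), (-1,), "v")]
print(initial_loops(E, 1))
# [('root', ('a', 0), ((1,), (1,)), 'root'),
#  ('root', ('abar', 0), ((-1,), (-1,)), 'root')]
```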
To decide $L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$, we invoke Lemma 6.1. The
required faithfulness of $\mathcal{W}$ is trivial: the extremal valuations are
zero on $\mathsf{dy}$, and since there are no intermediate precovering graphs,
we have
$\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})=\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
The lemma yields finite sets of DMGTS $\mathsf{Perf}$ and $\mathsf{Fin}$ with
$L_{\mathsf{sj}}(\mathcal{W})=L_{\mathsf{sj}}(\mathsf{Perf})\cup
L_{\mathsf{sj}}(\mathsf{Fin})$. It moreover guarantees
$L_{\mathsf{sj}}(\mathcal{T})\mid D_{n}$ for all $\mathcal{T}\in\mathsf{Fin}$.
To decide $L_{\mathsf{sj}}(\mathcal{W})\mid D_{n}$, it thus remains to check
$L_{\mathsf{sj}}(\mathcal{S})\mid D_{n}$ for all
$\mathcal{S}\in\mathsf{Perf}$. If all checks succeed, our decision procedure
returns separable, and if one check fails, it returns inseparable. Since the
DMGTS in $\mathsf{Perf}$ are perfect, we can apply Corollary 6.3. We compute
$\mathbb{Z}$-VASS for the languages $L_{\mathbb{Z},\mathsf{sd}}(\mathcal{S})$
using Lemma 5.1, and check $L_{\mathbb{Z},\mathsf{sj}}(\mathcal{S})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{S})$ with the algorithm from [3] that is
behind Theorem 3.5.
The decomposition of the DMGTS takes resources
$\mathbf{F}_{|{\mathsf{sj}\cup\mathsf{dy}}|+4}$, followed by an elementary
separability check. By [28, Lemma 4.6], this yields an $\mathbf{F}_{\omega}$
upper bound. ∎
## 7\. Separability Transfer
We prove the separability transfer result in Lemma 6.2.
### 7.1. Regular Separator
For the direction from left to right, we show that every regular separator for
the $\mathbb{Z}$-approximations $L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ can be turned into a regular
separator for $L_{\mathsf{sj}}(\mathcal{W})$ and $D_{n}$. Faithfulness and
modulo reasoning will play an important role.
Let $\mathcal{B}^{\\#}$ separate $L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. We write $\mathcal{B}$ for the NFA
that results from $\mathcal{B}^{\\#}$ by replacing transition labels $(a,\\#)$
with $a$. Our plan is to use $\mathcal{B}$ as a separator for
$L_{\mathsf{sj}}(\mathcal{W})$ and $D_{n}$. For this to work,
$\mathcal{B}^{\\#}$ should be _precise_ as follows:
(4) $\displaystyle\mathit{L}(\mathcal{B}^{\\#})\ \cap\ D_{n}^{\\#}\ \subseteq\
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\ .$
Preciseness says that the language of $\mathcal{B}^{\\#}$ is so small that it
cannot intersect the Dyck language without already intersecting
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. The language $D_{n}^{\\#}$ offers
$a$ and $(a,\\#)$ whenever $D_{n}$ has letter $a$. The following is immediate.
###### Lemma 7.1.
If the NFA $\mathcal{B}^{\\#}$ separates
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ and is precise, then $\mathcal{B}$
separates $L_{\mathsf{sj}}(\mathcal{W})$ and $D_{n}$.
Our main finding is the following lemma, where the product captures language
intersection,
$\mathit{L}(\mathcal{B}^{\\#}\times\mathcal{A}^{\\#})=\mathit{L}(\mathcal{B}^{\\#})\cap\mathit{L}(\mathcal{A}^{\\#})$.
###### Lemma 7.2.
Let $\mathcal{W}$ be faithful. Every separator $\mathcal{B}^{\\#}$ of
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ can be turned into a precise
separator $\mathcal{B}^{\\#}\times\mathcal{A}^{\\#}$. The NFA
$\mathcal{A}^{\\#}$ is independent of $\mathcal{B}^{\\#}$.
A first failure of preciseness may be due to the fact that $\mathcal{B}^{\\#}$
accepts words that do not label a run through $\mathcal{W}$. To overcome the
problem, we understand $\mathcal{W}$ as an NFA
$\mathcal{B}^{\\#}({\mathcal{W}})$, and use this as $\mathcal{A}^{\\#}$ in the
lemma. If $\mathcal{B}^{\\#}({\mathcal{W}})$ accepts a word from
$D_{n}^{\\#}$, then the word labels a run through $\mathcal{W}$ that takes the
Dyck-counters from zero to zero. The latter holds, because the word is in the
Dyck-language and $\mathcal{W}$ is Dyck-visible. Since $\mathcal{W}$ is
faithful, the initial and final configurations are zero on $\mathsf{dy}$, and
so the run belongs to $\mathsf{Acc}_{\mathsf{dy}}(\mathcal{W})$.
Unfortunately, this does not suffice for preciseness.
The problem is that $L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ is not defined
via $\mathsf{Acc}_{\mathsf{dy}}(\mathcal{W})$, but via intermediate acceptance
$\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. This means the run not
only has to reach zero on the Dyck-counters, but it also has to reach certain
values at the entries and exits of the intermediate precovering graphs in
$\mathcal{W}$. This is where the Inclusion (3) in the definition of
faithfulness comes in. It suggests we should define the NFA
$\mathcal{A}^{\\#}$ so that it (i) follows $\mathcal{W}$ like
$\mathcal{B}^{\\#}({\mathcal{W}})$ does, (ii) maintains the counters modulo
the number $\mu$ given by $\mathcal{W}$, and (iii) checks intermediate
acceptance. If then $\mathcal{A}^{\\#}$ accepts a word from the Dyck-language,
we have a run in $\mathsf{Acc}_{\mathsf{dy}}(\mathcal{W})$ as before, but
moreover we know that the run belongs to
$\mathsf{IAcc}_{\mathsf{dy},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})$.
Faithfulness shows that the run is in
$\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
To be a separator, $\mathit{L}(\mathcal{B}^{\\#}\times\mathcal{A}^{\\#})$ has
to cover $L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$. This holds, because
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ is not only defined via ordinary
acceptance on the counters in $\mathsf{sj}$, but also via modulo $\mu$
acceptance on $\mathsf{dy}$. The purpose of the constraint
$\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})$
in the definition of $\mathsf{IAcc}_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ is
thus to support the above restriction of a given separator. The disjointness
$\mathit{L}(\mathcal{B}^{\\#}\times\mathcal{A}^{\\#})\cap
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})=\emptyset$ is by the fact that
$\mathcal{B}^{\\#}$ is a separator. This justifies the product construction
$\mathcal{B}^{\\#}\times\mathcal{A}^{\\#}$ in the sense that
$\mathcal{A}^{\\#}$ alone may not be a separator.
It remains to define the NFA $\mathcal{A}^{\\#}$ that satisfies (i) to (iii)
above:
$\displaystyle\mathcal{A}^{\\#}=(Q\times[0,\mu-1]^{d},\;\Sigma_{n}\times\\{\varepsilon,\\#\\},\;\delta,\;((G_{\mathsf{first}},\mathsf{in}),{(c_{1},\mathbf{0})}\mathop{\text{ mod }}{\mu}),\;((G_{\mathsf{last}},\mathsf{out}),{(c_{2},\mathbf{0})}\mathop{\text{ mod }}{\mu}))\ .$
The set $Q$ contains states $(G,\mathsf{in})$ and $(G,\mathsf{out})$ for every
precovering graph $G$ in $\mathcal{W}$, and moreover all nodes in
$\mathcal{W}$. For every transition $(v,a,y,w)$ in a precovering graph of
$\mathcal{W}$ and for every counter valuation $x\in[0,\mu-1]^{d}$, we have
$\displaystyle(v,x)\xrightarrow{a}(w,{x+y}\mathop{\text{ mod
}}{\mu})\in\delta\ .$
We also have transitions that enter and exit a precovering graph $G$, or move
from $G$ to $G^{\prime}$ via the update $\mathit{up}$:
$\displaystyle((G,\mathsf{in}),x)\xrightarrow{\varepsilon}$
$\displaystyle(G.v_{\mathsf{root}},x)\in\delta,\qquad\text{if }x\sqsubseteq_{\omega}^{\mu}G.c_{\mathsf{in}}$
$\displaystyle(G.v_{\mathsf{root}},x)\xrightarrow{\varepsilon}$
$\displaystyle((G,\mathsf{out}),x)\in\delta,\qquad\text{if }x\sqsubseteq_{\omega}^{\mu}G.c_{\mathsf{out}}$
$\displaystyle((G,\mathsf{out}),x)\xrightarrow{(\lambda(\mathit{up}),\\#)}$
$\displaystyle((G^{\prime},\mathsf{in}),{(x+\Delta(\mathit{up}))}\mathop{\text{ mod }}{\mu})\in\delta\ .$
It is worth noting that this construction could have been done without the
$\\#$ symbol. The symbol only plays a role for the reverse implication in
Lemma 6.2 that we show next.
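The state space and transition rule of $\mathcal{A}^{\\#}$ boil down to tracking counter values modulo $\mu$. A minimal Python sketch (ours, for illustration only):

```python
def step(state, target, y, mu):
    """One transition of A^#: move to the target location and add the
    update y componentwise, reducing every counter modulo mu."""
    loc, vals = state
    new_vals = tuple((v + d) % mu for v, d in zip(vals, y))
    return (target, new_vals)

mu = 4
state = ("v", (3, 1))
print(step(state, "w", (2, -1), mu))   # ('w', (1, 0)): 3+2 ≡ 1, 1-1 ≡ 0 mod 4
```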
### 7.2. Inseparability
We prove the missing direction of Lemma 6.2, formulated as follows.
###### Lemma 7.3.
Consider a perfect DMGTS $\mathcal{W}$ that satisfies
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\not{\mid}L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
Then also $L_{\mathsf{sj}}(\mathcal{W})\not{\mid}D_{n}$ holds.
The proof needs a classic definition [2]. A DFA $\mathcal{A}$ over $\Sigma$
induces the equivalence $\sim_{\mathcal{A}}\
\subseteq\Sigma^{*}\times\Sigma^{*}$ defined by $w\sim_{\mathcal{A}}v$, if for
all states $p,q$ in $\mathcal{A}$, we have
$p\mathop{\raisebox{-1.70709pt}{$\xrightarrow{w}$}}q$ if and only if
$p\mathop{\raisebox{-1.70709pt}{$\xrightarrow{v}$}}q$. The equivalence says
that the words lead to the same state changes in $\mathcal{A}$.
To prove Lemma 7.3, we reason towards a contradiction, and assume there is a
DFA $\mathcal{A}$ that separates $L_{\mathsf{sj}}(\mathcal{W})$ from $D_{n}$.
We use the premise
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\not{\mid}L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$
and Lambert’s iteration lemma to construct words $o_{\mathsf{sj}}\in
L_{\mathsf{sj}}(\mathcal{W})$ and $o_{\mathsf{dy}}\in
L_{\mathsf{dy}}(\mathcal{W})$ with
$o_{\mathsf{sj}}\sim_{\mathcal{A}}o_{\mathsf{dy}}$. Then $\mathcal{A}$ must
accept or reject both words. If $\mathcal{A}$ accepts $o_{\mathsf{dy}}$, we
have a contradiction to $\mathit{L}(\mathcal{A})\cap D_{n}=\emptyset$ due to
Lemma 5.1. If $\mathcal{A}$ does not accept $o_{\mathsf{sj}}$, we have a
contradiction to
$L_{\mathsf{sj}}(\mathcal{W})\subseteq\mathit{L}(\mathcal{A})$. This concludes
the proof.
To construct $o_{\mathsf{sj}}$ and $o_{\mathsf{dy}}$, we use the following
lemma. Here, we need the $\\#$ symbols in the definition of
$L_{\mathbb{Z},\mathsf{sd}}(\mathcal{W})$.
###### Lemma 7.4.
Consider a DFA $\mathcal{A}$ such that for all pairs of words
$w_{0}(a_{1},\\#)\ldots w_{k}\in L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$v_{0}(a_{1},\\#)\ldots v_{k}\in L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$
there is $i\leq k$ with $w_{i}\not\sim_{\mathcal{A}}v_{i}$. Then
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\mid
L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
###### Proof.
It is well-known that the equivalence $\sim_{\mathcal{A}}$ has finite index
and the equivalence classes $[w]_{\sim_{\mathcal{A}}}$ are regular languages
[2]. Then the following is a finite union of regular languages:
$\displaystyle S\quad=\quad\bigcup_{w_{0}(a_{1},\\#)\ldots w_{k}\in
L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})}[w_{0}]_{\sim_{\mathcal{A}}}(a_{1},\\#)\ldots[w_{k}]_{\sim_{\mathcal{A}}}\
.$
We show that the regular language $S$ separates
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. The inclusion
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\subseteq S$ is immediate. Assume
there is $v\in S\cap L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. Then
$v=v_{0}(a_{1},\\#)\ldots v_{k}\in L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$
and there is $w_{0}(a_{1},\\#)\ldots w_{k}\in
L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ so that
$v_{i}\sim_{\mathcal{A}}w_{i}$ for all $i\leq k$. This is the conclusion that
needs the $\\#$ symbols. Without them, the equivalent words may not align with
the precovering graphs. The conclusion contradicts the lemma’s premise, and
$v$ cannot exist. ∎
We proceed with the definition of the words $o_{\mathsf{sj}}$ and
$o_{\mathsf{dy}}$. We apply Lemma 7.4 in contraposition to
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})\not{\mid}L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$.
This yields $c_{0}\ldots c_{k}\in L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ and
$b_{0}\ldots b_{k}\in L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ so that for all
$i\leq k$ we have $c_{i}\sim_{\mathcal{A}}b_{i}$. The membership in
$L_{\mathbb{Z},\mathsf{sj}}(\mathcal{W})$ resp.
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$ gives us loops $g_{i}$ and $h_{i}$
in every precovering graph $G_{i}$ of $\mathcal{W}$ that start in the root and
are labeled by $c_{i}$ resp. $b_{i}$. Moreover, since the languages are
defined via intermediate acceptance, we know that $\sum_{i\leq k}\psi(g_{i})$
and $\sum_{i\leq k}\psi(h_{i})$ solve
$\mathsf{Char}_{\mathsf{sj}}(\mathcal{W})$ resp.
$\mathsf{Char}_{\mathsf{dy}}(\mathcal{W})$. To sum up, the words given by
Lemma 7.4 provide solutions to the characteristic equations as we need them to
apply Lambert’s pumping lemma.
The perfectness of $\mathcal{W}$ yields covering sequences
$u_{i}^{\prime}\in\mathsf{CS}_{\mathsf{up}}(G_{i})$ and
$v_{i}^{\prime}\in\mathsf{CS}_{\mathsf{down}}(G_{i})$ for all $i\leq k$. We
show how to construct new covering sequences $u_{i}$ and $v_{i}$, as well as
further rooted loops $\mathit{w}_{\mathsf{sj},i}$ and
$\mathit{w}_{\mathsf{dy},i}$ in each precovering graph $G_{i}$ so that the
conditions on the homogeneous solutions formulated by Lambert’s pumping lemma
are met for both, $(u_{i},\mathit{w}_{\mathsf{sj},i},v_{i})_{i\leq k}$ and
$(u_{i},\mathit{w}_{\mathsf{dy},i},v_{i})_{i\leq k}$. In addition, we will
guarantee that
$\lambda(\mathit{w}_{\mathsf{sj},i})\sim_{\mathcal{A}}\lambda(\mathit{w}_{\mathsf{dy},i})$.
Applying Lemma 4.2 twice then yields a common $c\in\mathbb{N}$ so that
$\displaystyle
o_{\mathsf{sj}}=\lambda(u_{0}^{c}g_{0}\mathit{w}_{\mathsf{sj},0}^{c}v_{0}^{c}t_{0}\ldots
t_{k-1}u_{k}^{c}g_{k}\mathit{w}_{\mathsf{sj},k}^{c}v_{k}^{c})$
$\displaystyle\in L_{\mathsf{sj}}(\mathcal{W})$ $\displaystyle
o_{\mathsf{dy}}=\lambda(u_{0}^{c}h_{0}\mathit{w}_{\mathsf{dy},0}^{c}v_{0}^{c}t_{0}\ldots
t_{k-1}u_{k}^{c}h_{k}\mathit{w}_{\mathsf{dy},k}^{c}v_{k}^{c})$
$\displaystyle\in L_{\mathsf{dy}}(\mathcal{W})\ .$
Note that $o_{\mathsf{sj}}\in L_{\mathsf{sj}}(\mathcal{W})$ also requires
$\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})$.
This is taken care of by the modulo constraints in
$\mathsf{Char}_{\mathsf{sj}}(\mathcal{W})$.
With this definition, the desired
$o_{\mathsf{sj}}\sim_{\mathcal{A}}o_{\mathsf{dy}}$ is a consequence of
$\lambda(g_{i})\sim_{\mathcal{A}}\lambda(h_{i})$ and
$\lambda(\mathit{w}_{\mathsf{sj},i})\sim_{\mathcal{A}}\lambda(\mathit{w}_{\mathsf{dy},i})$
for all $i\leq k$.
We turn to the construction of $u_{i}$, $v_{i}$, $\mathit{w}_{\mathsf{sj},i}$,
and $\mathit{w}_{\mathsf{dy},i}$. To ease the notation, we fix a precovering
graph $G$ and skip the index $i$. So $u^{\prime}$ and $v^{\prime}$ will be the
pumping sequences for this precovering graph, $E$ will be the edges in this
precovering graph, and $u$, $v$, $\mathit{w}_{\mathsf{sj}}$, and
$\mathit{w}_{\mathsf{dy}}$ are the sequences we want to construct.
With $\mathit{N}$ the number of states in $\mathcal{A}$, we define
(5) $\displaystyle\mathit{w}_{\mathsf{sj}}\ =\
\mathsf{diff}^{\mathit{N}}.\mathsf{rem}\qquad\mathit{w}_{\mathsf{dy}}\ =\
\mathsf{diff}^{\mathit{N}+c\cdot\mathit{N}!}.\mathsf{rem}\ .$
The integer $c\geq 1$ will become clear when we define $\mathsf{rem}$. To see
$\lambda(\mathit{w}_{\mathsf{sj}})\sim_{\mathcal{A}}\lambda(\mathit{w}_{\mathsf{dy}})$,
let $p$ and $q$ be states in $\mathcal{A}$. Since $\mathcal{A}$ is a DFA,
there is a unique run from $p$ on $\lambda(\mathit{w}_{\mathsf{sj}})$.
Consider the part of the run that reads $\lambda(\mathsf{diff}^{\mathit{N}})$.
By the pigeonhole principle, there are $0\leq i<j\leq\mathit{N}$ where the
state after reading $\lambda(\mathsf{diff}^{i})$ and
$\lambda(\mathsf{diff}^{j})$ is the same. This means we can repeat
$\lambda(\mathsf{diff}^{j-i})$ and still arrive at this state. While we repeat
$\mathsf{diff}^{j-i}$ only once in $\mathit{w}_{\mathsf{sj}}$, we repeat it an
additional $c\cdot\mathit{N}!/(j-i)$ times in $\mathit{w}_{\mathsf{dy}}$. The
sole purpose of the factorial $\mathit{N}!$ is to guarantee that this division
by $j-i$ results in an integer: $j-i\leq\mathit{N}$ implies
$\mathit{N}!/(j-i)\in\mathbb{N}$. Since the states reached after
$\lambda(\mathsf{diff}^{\mathit{N}})$ and
$\lambda(\mathsf{diff}^{\mathit{N}+c\cdot\mathit{N}!})$ coincide, we have that
$\lambda(\mathit{w}_{\mathsf{sj}})$ leads from $p$ to $q$ if and only if this
holds for $\lambda(\mathit{w}_{\mathsf{dy}})$.
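A small Python check (ours) of the arithmetic behind the factorial trick: for every possible loop length $j-i$ with $1\leq j-i\leq\mathit{N}$, the factorial $\mathit{N}!$ is divisible by $j-i$, so the extra $c\cdot\mathit{N}!/(j-i)$ repetitions are indeed integral.

```python
from math import factorial

N, c = 5, 3
for length in range(1, N + 1):        # every possible value of j - i
    extra = c * factorial(N) // length
    assert extra * length == c * factorial(N)   # the division is exact
```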
It remains to construct $u$, $v$, $\mathsf{diff}$, and $\mathsf{rem}$ for each
precovering graph. By Lemma 4.2, we can find full support solutions
$\mathit{s}_{\mathsf{sj}}$ and $\mathit{s}_{\mathsf{dy}}$ that satisfy the
Conditions (1) and (2) wrt. $u^{\prime}$ and $v^{\prime}$. We will not only
construct cycles, but also new support solutions
$\mathit{s}_{\mathsf{sj}}^{*}$ and $\mathit{s}_{\mathsf{dy}}^{*}$. Our
construction is guided by the following equations in Lemma 4.2:
(6) $\displaystyle\psi(u)+\psi(v)+\psi(\mathit{w}_{\mathsf{sj}})\ $
$\displaystyle=\ \mathit{s}_{\mathsf{sj}}^{*}\\![E]$ (7)
$\displaystyle\psi(u)+\psi(v)+\psi(\mathit{w}_{\mathsf{dy}})\ $
$\displaystyle=\ \mathit{s}_{\mathsf{dy}}^{*}\\![E]\ .$
By inserting the shape of $\mathit{w}_{\mathsf{sj}}$ and
$\mathit{w}_{\mathsf{dy}}$ required by Equation (5) and subtracting Equation
(6) from (7), we obtain
(8) $\displaystyle c\cdot\mathit{N}!\cdot\psi(\mathsf{diff})\ =\
(\mathit{s}_{\mathsf{dy}}^{*}-\mathit{s}_{\mathsf{sj}}^{*})[E]\ .$
This leads us to define
(9) $\displaystyle\mathit{s}_{\mathsf{sd}}^{*}\ =\
c\cdot\mathit{N}!\cdot\mathit{s}_{\mathsf{sd}}\ .$
We can now divide Equation (8) by $c\cdot\mathit{N}!$ and obtain
$\displaystyle\mathsf{diff}\ =\
\langle\mathit{s}_{\mathsf{dy}}-\mathit{s}_{\mathsf{sj}}\rangle\ .$
Here, we use $\langle v\rangle$ to turn a Parikh vector $v\geq\mathbf{1}$ into
a cycle. We can assume
$(\mathit{s}_{\mathsf{dy}}-\mathit{s}_{\mathsf{sj}})[E]\geq\mathbf{1}$,
because we could have scaled the support solution for the Dyck-side by an
appropriate factor.
The new support solutions in Equation (9) suggest we should define the new
covering sequences by repetition:
(10) $\displaystyle u\ =\ (u^{\prime})^{c\cdot\mathit{N}!}\qquad\qquad v\ =\
(v^{\prime})^{c\cdot\mathit{N}!}\ .$
We insert the Equations (5), (9), and (10) into (6) and get
$\displaystyle
c\cdot\mathit{N}!\cdot(\psi(u^{\prime})+\psi(v^{\prime}))+\mathit{N}\cdot\psi(\mathsf{diff})+\psi(\mathsf{rem})\
=\ c\cdot\mathit{N}!\cdot\mathit{s}_{\mathsf{sj}}[E]\ .$
This yields the missing
$\displaystyle\mathsf{rem}\ =\ \langle
c\cdot\mathit{N}!\cdot(\mathit{s}_{\mathsf{sj}}\\![E]-\psi(u^{\prime})-\psi(v^{\prime}))-\mathit{N}\cdot\psi(\mathsf{diff})\rangle\
.$
This is the moment we need the factor $c$: it has to be large enough so that
$c\cdot\mathit{N}!\cdot(\mathit{s}_{\mathsf{sj}}\\![E]-\psi(u^{\prime})-\psi(v^{\prime}))-\mathit{N}\cdot\psi(\mathsf{diff})\geq\mathbf{1}$,
and hence the vector can be realized as a cycle. Note that $c$ goes into the
definition of the support solution, which is shared by all precovering graphs.
This means the choice of $c$ not only has to satisfy the inequality for $G$,
but for all precovering graphs.
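A minimal numeric sanity check (ours; the Parikh vectors and support solutions are invented for illustration) that the constructed $u$, $v$, $\mathit{w}_{\mathsf{sj}}$, and $\mathit{w}_{\mathsf{dy}}$ indeed satisfy Equations (6) and (7) with the scaled support solutions from Equation (9):

```python
from math import factorial

def add(*vs):    return tuple(map(sum, zip(*vs)))
def scale(k, v): return tuple(k * x for x in v)
def sub(a, b):   return tuple(x - y for x, y in zip(a, b))

N, c = 2, 3
psi_u1, psi_v1 = (1, 0), (0, 1)       # Parikh vectors of u' and v'
s_sj, s_dy = (2, 2), (3, 4)           # support solutions with s_dy - s_sj >= 1
cN = c * factorial(N)

diff = sub(s_dy, s_sj)                                  # definition of diff
rem = sub(scale(cN, sub(s_sj, add(psi_u1, psi_v1))),
          scale(N, diff))                               # definition of rem
u, v = scale(cN, psi_u1), scale(cN, psi_v1)             # Equation (10)
w_sj = add(scale(N, diff), rem)                         # Equation (5)
w_dy = add(scale(N + cN, diff), rem)

assert add(u, v, w_sj) == scale(cN, s_sj)               # Equation (6)
assert add(u, v, w_dy) == scale(cN, s_dy)               # Equation (7)
assert all(x >= 1 for x in rem) and all(x >= 1 for x in diff)
```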
## 8\. Decomposition
We prove Lemma 6.1 by developing a decomposition algorithm that takes a
faithful DMGTS and returns a finite set of perfect DMGTS and a finite set of
DMGTS for which separability holds. At the heart of the algorithm is a single
decomposition step as follows.
###### Lemma 8.1.
There is a function $\textsf{dec}(-)$, computable with elementary resources,
that expects a faithful but imperfect DMGTS $\mathcal{W}$ with
$\mathsf{sol}(\mathsf{Char}_{\mathsf{sj}}(\mathcal{W}))\neq\emptyset$ and
$\mathsf{sol}(\mathsf{Char}_{\mathsf{dy}}(\mathcal{W}))\neq\emptyset$, and
returns finite sets $X,Y$ of DMGTS so that
* (a)
for all $\mathcal{W}^{\prime}\in X$ we have faithfulness and
$\mathcal{W}^{\prime}<_{\mathsf{rnk}}\mathcal{W}$,
* (b)
for all $\mathcal{W}^{\prime}\in Y$ we have
$L_{\mathsf{sj}}(\mathcal{W}^{\prime})\mid D_{n}$, and
* (c)
$L_{\mathsf{sj}}(\mathcal{W})=L_{\mathsf{sj}}(X\cup Y)$.
Lemma 8.1 readily implies Lemma 6.1.
###### Proof of Lemma 6.1.
We formulate the overall decomposition algorithm and afterwards reason about
its correctness. The input to the decomposition is a faithful DMGTS
$\mathcal{W}$. If $\mathcal{W}$ is perfect, then we return
$\mathsf{Perf}=\\{\mathcal{W}\\},\mathsf{Fin}=\emptyset$. If
$\mathsf{sol}(\mathsf{Char}_{\mathsf{sj}}(\mathcal{W}))=\emptyset$, then we
return $\mathsf{Perf}=\mathsf{Fin}=\emptyset$. If
$\mathsf{sol}(\mathsf{Char}_{\mathsf{dy}}(\mathcal{W}))=\emptyset$, then we
return $\mathsf{Perf}=\emptyset$ and $\mathsf{Fin}=\\{\mathcal{W}\\}$. If
these conditions do not apply, we invoke $\textsf{dec}(-)$ from Lemma 8.1 to
generate sets $X$ and $Y$ of DMGTS with the stated properties. We recursively
call our decomposition algorithm on each DMGTS $\mathcal{S}\in X$, which
returns $\mathsf{Perf}_{\mathcal{S}}$ and $\mathsf{Fin}_{\mathcal{S}}$. We
include all DMGTS from $(\mathsf{Perf}_{\mathcal{S}})_{\mathcal{S}\in X}$ in
$\mathsf{Perf}$, and all DMGTS from
$(\mathsf{Fin}_{\mathcal{S}})_{\mathcal{S}\in X}$ as well as from $Y$ in
$\mathsf{Fin}$.
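A minimal Python sketch (ours) of this recursion; `is_perfect`, `char_sj_solvable`, `char_dy_solvable`, and `dec` stand in for the perfectness check, the feasibility checks, and the decomposition step of Lemma 8.1, and are assumptions of the sketch rather than implementations.

```python
def decompose(W, is_perfect, char_sj_solvable, char_dy_solvable, dec):
    """Return the pair (Perf, Fin) of finite sets of DMGTS for input W."""
    if is_perfect(W):
        return {W}, set()
    if not char_sj_solvable(W):
        return set(), set()          # the subject language is empty
    if not char_dy_solvable(W):
        return set(), {W}            # separability already holds (Lemma 6.2)
    X, Y = dec(W)                    # Lemma 8.1: faithful DMGTS of smaller rank
    perf, fin = set(), set(Y)
    for S in X:                      # terminates: the rank decreases on X
        p, f = decompose(S, is_perfect, char_sj_solvable, char_dy_solvable, dec)
        perf |= p
        fin |= f
    return perf, fin
```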
We reason about correctness with an induction on the height of the call tree.
The tree is finite as every recursive call decreases the well-founded order,
each node has a finite outdegree, and hence König’s lemma applies. For a
perfect DMGTS, there is nothing to do. If
$\mathsf{sol}(\mathsf{Char}_{\mathsf{sj}}(\mathcal{W}))=\emptyset$, then
$L_{\mathsf{sj}}(\mathcal{W})=\emptyset$ follows and we are done. If
$\mathsf{sol}(\mathsf{Char}_{\mathsf{dy}}(\mathcal{W}))=\emptyset$, the set
$\mathsf{Fin}=\\{\mathcal{W}\\}$ preserves the language. Since
$\mathsf{sol}(\mathsf{Char}_{\mathsf{dy}}(\mathcal{W}))=\emptyset$ implies
$L_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})=\emptyset$, the required separability
holds by the forward direction of Lemma 6.2.
In the induction step, the induction hypothesis and Lemma 8.1 show that for
the DMGTS in $\mathsf{Fin}$ the required separability holds. The induction
hypothesis moreover shows that the DMGTS in $\mathsf{Perf}$ are perfect. We
have
$L_{\mathsf{sj}}(\mathcal{W})=L_{\mathsf{sj}}(\mathsf{Perf}\cup\mathsf{Fin})$,
because $L_{\mathsf{sj}}(\mathcal{W})=L_{\mathsf{sj}}(X\cup Y)$ by Lemma 8.1
and, by the induction hypothesis, our decomposition yields
$L_{\mathsf{sj}}(\mathcal{S})=L_{\mathsf{sj}}(\mathsf{Perf}_{\mathcal{S}}\cup\mathsf{Fin}_{\mathcal{S}})$
for all $\mathcal{S}\in X$.
Perfectness and infeasibility can be checked with time and space elementary in
the size of the input DMGTS [18]. By Lemma 8.1, also
$\textsf{dec}(\mathcal{W})$ is computable with resources elementary in
$|{\mathcal{W}}|$. The well-founded relation $<_{\mathsf{rnk}}$ is defined as
a lexicographic order on $\mathbb{N}^{|{\mathsf{sj}\cup\mathsf{dy}}|+1}$. This
yields an algorithm in $\mathbf{F}_{|{\mathsf{sj}\cup\mathsf{dy}}|+4}$ by the
same argument as [24, Theorem 5.4]. ∎
Lemma 8.1 deals with faithful but imperfect DMGTS. A faithful DMGTS
$\mathcal{W}$ is not perfect if and only if it contains a precovering graph
$G$ for which one of the following conditions holds.
* (i)
There are $\mathsf{sd}\in\\{\mathsf{sj},\mathsf{dy}\\}$, a counter
$\mathit{j}\in\mathsf{sd}$, and
$c_{\mathsf{io}}\in\\{c_{\mathsf{in}},c_{\mathsf{out}}\\}$ so that
$c_{\mathsf{io}}[\mathit{j}]=\omega$ but
$x[G,\mathsf{io},\mathit{j}]\not\in\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$.
* (ii)
There are a side $\mathsf{sd}\in\\{\mathsf{sj},\mathsf{dy}\\}$ and an edge
$e\in G.E$ so that
$x[e]\not\in\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$.
* (iii)
We have $\mathsf{CS}_{\mathsf{up}}(G)=\emptyset$ or
$\mathsf{CS}_{\mathsf{down}}(G)=\emptyset$.
Case (iii) is part of the perfectness definition. The Cases (i) and (ii)
follow from the fact that
$\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$ should capture the
unboundedness of $\mathsf{sd}$ in $\mathcal{W}$. We break down the proof of
Lemma 8.1 into three arguments, one for each of the cases.
### 8.1. Case (i)
Consider a faithful but imperfect DMGTS $\mathcal{W}=(\mathcal{U},\mu)$ that
satisfies
$\mathsf{sol}(\mathsf{Char}_{\mathsf{sj}}(\mathcal{W}))\neq\emptyset\neq\mathsf{sol}(\mathsf{Char}_{\mathsf{dy}}(\mathcal{W}))$.
In Case (i), there is a precovering graph $G$, a side
$\mathsf{sd}\in\\{\mathsf{sj},\mathsf{dy}\\}$, a counter
$\mathit{j}\in\mathsf{sd}$, and a counter valuation
$c_{\mathsf{io}}\in\\{G.c_{\mathsf{in}},G.c_{\mathsf{out}}\\}$ so that
$c_{\mathsf{io}}[\mathit{j}]=\omega$ but
$x[G,\mathsf{io},\mathit{j}]\notin\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$.
If the variable is not in the support, the set of values
$A_{\mathsf{sd}}=\\{s[G,\mathsf{io},\mathit{j}]\mid
s\in\mathsf{sol}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))\\}$ is finite. We
also know $A_{\mathsf{sd}}\neq\emptyset$, because
$\mathsf{sol}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))\neq\emptyset$.
Finally, we have $A_{\mathsf{sd}}\subseteq\mathbb{N}$ by the shape of
$\mathsf{Char}_{\mathsf{sd}}(\mathcal{W})$. We show how to construct
$(X,Y)=\textsf{dec}(\mathcal{W})$ as required by Lemma 8.1.
#### Case $\mathsf{sd}=\mathsf{sj}$
Let $\mathcal{U}_{a}$ be the MGTS that results from $\mathcal{U}$ by changing
the entry or exit value of counter $\mathit{j}$ in $G$ from $\omega$ to
$a\in\mathbb{N}$. We define
$\displaystyle X\ =\ \\{(\mathcal{U}_{a},\mu)\mid a\in
A_{\mathsf{sj}}\\}\qquad\text{and}\qquad Y\ =\ \emptyset\ .$
###### Proof.
We begin with Property (c) in Lemma 8.1. The difference between $\mathcal{W}$
and $\mathcal{W}_{\mathsf{new}}=(\mathcal{U}_{a},\mu)$ is a single entry or
exit value that changes from $\omega$ to $a$. This makes intermediate
acceptance stricter,
$\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{\mathsf{new}})\subseteq\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})$,
and so we have $L_{\mathsf{sj}}(X\cup Y)\subseteq
L_{\mathsf{sj}}(\mathcal{W})$. For the reverse inclusion, we use that every
run $\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})$ induces a solution to
$\mathsf{Char}_{\mathsf{sj}}(\mathcal{W})$. Then $\rho$ enters or exits the
precovering graph $G$ with a value $a\in A_{\mathsf{sj}}$ on counter
$\mathit{j}$. For Property (b), there is nothing to show as $Y=\emptyset$. For
Property (a), note that we do not modify the edges, nodes, or
$\mathsf{dy}$-valuations when moving from $\mathcal{U}$ to $\mathcal{U}_{a}$.
Hence, the faithfulness of $\mathcal{W}_{\mathsf{new}}=(\mathcal{U}_{a},\mu)$
follows from the faithfulness of $\mathcal{W}$. We reduce the well-founded
order, because we replace, say, an exit value $\omega$ by a concrete value,
while keeping $G.E$, $\Omega(G)$, and $G.c_{\mathsf{in}}$ unchanged. The
complexity follows from the fact that the set $A_{\mathsf{sj}}$ can be
constructed using resources elementary in the size of $\mathcal{W}$ [18]. ∎
#### Case $\mathsf{sd}=\mathsf{dy}$
The construction uses an equivalence
$\mathcal{S}_{1}\simeq_{\mu}\mathcal{S}_{2}$ on MGTS. It is the least
equivalence that satisfies the following. For precovering graphs, we have
$G_{1}\simeq_{\mu}G_{2}$ if the nodes, the edges, and the root coincide, and
moreover
$G_{1}.c_{\mathsf{io}}[\mathsf{sj}]=G_{2}.c_{\mathsf{io}}[\mathsf{sj}]$ and
$G_{1}.c_{\mathsf{io}}[\mathsf{dy}]\equiv
G_{2}.c_{\mathsf{io}}[\mathsf{dy}]\mathop{\text{ mod }}\mu$, for both,
$\mathsf{io}=\mathsf{in}$ and $\mathsf{io}=\mathsf{out}$. For composed MGTS,
we use
$\displaystyle\mathcal{S}_{1}.\mathit{up}.\mathcal{S}_{2}\simeq_{\mu}\mathcal{S}_{1}^{\prime}.\mathit{up}.\mathcal{S}_{2}^{\prime}\quad\text{ if }\quad\mathcal{S}_{1}\simeq_{\mu}\mathcal{S}_{1}^{\prime}\text{ and }\mathcal{S}_{2}\simeq_{\mu}\mathcal{S}_{2}^{\prime}\ .$
Equivalent MGTS may only differ in the entry and exit values of Dyck counters,
and these values still have to coincide modulo $\mu$. As a piece of notation,
we use $0\leq\mathcal{S}<c$ with $c\in\mathbb{N}$ to mean
$G^{\prime}.c_{\mathsf{in}}$ and $G^{\prime}.c_{\mathsf{out}}$ only take
values from $[0,c-1]\cup\\{\omega\\}$, for all $G^{\prime}$ in $\mathcal{S}$.
To define $X$ and $Y$, we choose the least value $l$ that is larger than the
maximal value in $A_{\mathsf{dy}}$ and moreover larger than any entry or exit
value in a precovering graph of $\mathcal{U}$. We set
$\mu_{\mathsf{new}}=l\cdot\mu$ and
$\displaystyle Z\ $ $\displaystyle=\
\\{(\mathcal{S},\mu_{\mathsf{new}})\mid\mathcal{S}\simeq_{\mu}\mathcal{U}_{a},\;0\leq
a,\mathcal{S}<\mu_{\mathsf{new}},\;\mathcal{S}.c_{\mathsf{in}}[\mathsf{dy}]=\mathbf{0}\\}$
$\displaystyle X\ $ $\displaystyle=\ \\{(\mathcal{S},\mu_{\mathsf{new}})\in
Z\mid\mathcal{S}.c_{\mathsf{out}}[\mathsf{dy}]=\mathbf{0}\\}$ $\displaystyle
Y\ $ $\displaystyle=\ Z\setminus X\ .$
We discuss three important points before turning to the proof. We deliberately
replace $\omega$ not only by values from $A_{\mathsf{dy}}$, but by all values
$0\leq a<\mu_{\mathsf{new}}$. The reason is that
$L_{\mathsf{sj}}(\mathcal{W})$ also checks intermediate acceptance for the
Dyck counters, but only modulo $\mu$. The set $A_{\mathsf{dy}}$ is constructed
from runs where the Dyck counters reach intermediate values precisely. This
means $A_{\mathsf{dy}}$ may not offer enough values for
$L_{\mathsf{sj}}(\mathcal{W})\subseteq L_{\mathsf{sj}}(X\cup Y)$ to hold.
The definition of $\mu_{\mathsf{new}}$ addresses the main challenge in this
case, namely the faithfulness of the DMGTS
$\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})\in X$. We have to
show that we reach the value $0\leq a<\mu_{\mathsf{new}}$ that replaces
$\omega$, whenever we reach it modulo $\mu_{\mathsf{new}}$. The idea is this.
The left-hand side of Inclusion (3) will allow us to show that the run belongs
to $\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$. A consequence is
that it reaches $b\in A_{\mathsf{dy}}\subseteq\mathbb{N}$ with
$b<\mu_{\mathsf{new}}$. We thus reach $a$ if we can show $b=a$. We use the
following property of the modulo equivalence.
###### Lemma 8.2.
Consider $\mu_{\mathsf{new}}\in\mathbb{N}$. For all $x,k\in\mathbb{N}$, we
have that $x,k<\mu_{\mathsf{new}}$ and $x\equiv k\mathop{\text{ mod
}}\mu_{\mathsf{new}}$ together imply $x=k$.
The missing $b\equiv a\mathop{\text{ mod }}\mu_{\mathsf{new}}$ is by
$\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$,
which is a premise of faithfulness. To sum up, we let $\mu_{\mathsf{new}}$
exceed the counter values that any run (satisfying the premise of
faithfulness) can take (in the moment we use $a$ for $\omega$), and so we do
not lose information by only tracking counter values modulo
$\mu_{\mathsf{new}}$.
The change from $\mu$ to $\mu_{\mathsf{new}}$ brings $\simeq_{\mu}$ to the
definition of $X$ and $Y$. The purpose of the equivalence is to modify the
entry and exit valuations of the Dyck counters in all precovering graphs. To
see the need for a modification, note that such a valuation is a constraint of
the form $x\equiv k\mathop{\text{ mod }}\mu$. Imagine now we multiply $\mu=3$
by $l=4$ and get $\mu_{\mathsf{new}}=12$. Assume $k=2$. To obtain all
solutions to $x\equiv 2\mathop{\text{ mod }}3$, it is not sufficient to
consider $x\equiv 2\mathop{\text{ mod }}12$. We need to join the solutions to
$x\equiv i\mathop{\text{ mod }}12$ for all $i\in\\{2,5,8,11\\}$. The reason
these values $i$ collect all solutions is the following property.
###### Lemma 8.3.
Let $\mu$ divide $\mu_{\mathsf{new}}$. For all $x,k\in\mathbb{Z}$ with
$x\equiv k\mathop{\text{ mod }}\mu$, there is a $0\leq i<\mu_{\mathsf{new}}$
with $x\equiv i\mathop{\text{ mod }}\mu_{\mathsf{new}}$ and $i\equiv
k\mathop{\text{ mod }}\mu$.
The equivalence $\simeq_{\mu}$ incorporates all and only these choices of $i$.
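The worked example above can be replayed with a few lines of Python (ours): with $\mu=3$, $\mu_{\mathsf{new}}=12$, and $k=2$, the residue class of $k$ modulo $\mu$ splits into exactly the residues $\\{2,5,8,11\\}$ modulo $\mu_{\mathsf{new}}$.

```python
mu, mu_new, k = 3, 12, 2
choices = [i for i in range(mu_new) if i % mu == k % mu]
assert choices == [2, 5, 8, 11]
# Every x with x ≡ 2 (mod 3) hits exactly one of these residues modulo 12:
for x in range(2, 200, 3):
    assert x % mu_new in choices
```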
###### Proof.
We prove Property (c) in Lemma 8.1 and begin with the inclusion
$L_{\mathsf{sj}}(X\cup Y)\subseteq L_{\mathsf{sj}}(\mathcal{W})$. Let
$\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})\in X\cup Y$. By
definition, $\mathcal{S}\simeq_{\mu}\mathcal{U}_{a}$ for some $0\leq
a<\mu_{\mathsf{new}}$. We argue that
$\displaystyle\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{\mathsf{new}})\subseteq\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{S},\mu)\subseteq\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{U}_{a},\mu)\subseteq\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})\
.$
The first inclusion is by the fact that $\mu$ divides $\mu_{\mathsf{new}}$.
The next uses the fact that $\simeq_{\mu}$ preserves the valuations of the
counters in $\mathsf{sj}$ and the valuations of the counters in $\mathsf{dy}$
modulo $\mu$, and so $\mathsf{IAcc}_{\mathsf{sj}}(-)$ is invariant under this
equivalence. The last inclusion is by the fact that concrete values make
intermediate acceptance stricter.
For the inclusion $L_{\mathsf{sj}}(\mathcal{W})\subseteq L_{\mathsf{sj}}(X\cup
Y)$, consider $\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})$. As we do not
change the entry and exit valuations for the counters in $\mathsf{sj}$, we
readily have
$\rho\in\mathsf{IAcc}_{\mathsf{sj},\leq_{\omega}\\![\mathsf{sj}]}(\mathcal{W}_{\mathsf{new}})$
for all $\mathcal{W}_{\mathsf{new}}\in X\cup Y$. What remains is to argue that
$\rho\in\mathsf{IAcc}_{\mathsf{dy},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$
for some $\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})\in X\cup
Y$. Let $\rho$ enter or leave the precovering graph $G$ of interest with
counter valuation $c$. Let $a\in[0,\mu_{\mathsf{new}}-1]$ be such that
$c[\mathit{j}]\equiv a\mathop{\text{ mod }}\mu_{\mathsf{new}}$, where
$\mathit{j}$ is the counter of interest. By Lemma 8.3, there is
$(\mathcal{S},\mu_{\mathsf{new}})\in X\cup Y$ with
$\mathcal{S}\simeq_{\mu}\mathcal{U}_{a}$ for which the run is intermediate
accepting.
For the separability stated in (b), let
$\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})\in Y$. The
definition of $Y$ yields
$\mathcal{S}.c_{\mathsf{in}}[\mathsf{dy}]=\mathbf{0}$. Moreover, we know that
$0<\mathcal{S}.c_{\mathsf{out}}[\mathit{j}]<\mu_{\mathsf{new}}$ for all
$\mathit{j}\in\mathsf{dy}$. The final values are concrete, because
$\mathcal{W}$ is zero-reaching by faithfulness. They are bounded by
$\mu_{\mathsf{new}}$ due to $0\leq\mathcal{S}<\mu_{\mathsf{new}}$. They are
different from zero by the definition of $Y$. The analysis of the initial and
final values shows that every
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{\mathsf{new}})\subseteq\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$
has an effect $c\not\equiv\mathbf{0}\mathop{\text{ mod }}\mu_{\mathsf{new}}$
on the Dyck counters. By Dyck visibility, the word $\lambda(\rho)$ must have
the same effect on $\mathcal{D}_{n}$. Then, a DFA that tracks the Dyck
counters modulo $\mu_{\mathsf{new}}$ and only accepts upon valuations
different from zero modulo $\mu_{\mathsf{new}}$ shows
$L_{\mathsf{sj}}(\mathcal{W}_{\mathsf{new}})\mid D_{n}$.
For Property (a), the argument that the well-founded relation decreases is the
same as in the case $\mathsf{sd}=\mathsf{sj}$. For faithfulness, consider
$\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})\in X$. It is
zero-reaching by definition. The challenge is to prove Inclusion (3). We
reason as follows:
(11)
$\displaystyle\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{\mathsf{new}})\cap\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$
$\displaystyle\subseteq\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$
(12)
$\displaystyle\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\cap\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$
$\displaystyle\subseteq\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{\mathsf{new}}).$
Inclusion (11) is a consequence of the Inclusions (13) and (14), which allow
us to invoke the faithfulness of $\mathcal{W}$:
(13)
$\displaystyle\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{\mathsf{new}})$
$\displaystyle\subseteq\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$
(14)
$\displaystyle\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$
$\displaystyle\subseteq\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W})\
.$
Inclusion (13) holds, as we only change an intermediate valuation from
$\mathcal{W}$ to $\mathcal{W}_{\mathsf{new}}$, and acceptance does not take
the intermediate valuations into account. To see that we only change an
intermediate valuation, note that also $\mathcal{W}$ is zero-reaching by
faithfulness. Inclusion (14) holds with the same argument as Property (c)
above.
For Inclusion (12), let
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\cap\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$.
Consider the moment the run enters or exits the precovering graph of interest,
and the counter $j$ whose value changes from $\omega$ in $\mathcal{W}$ to
$0\leq a<\mu_{\mathsf{new}}$ in $\mathcal{W}_{\mathsf{new}}$. By the
intermediate acceptance
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})$, the run induces
a solution to $\mathsf{Char}_{\mathsf{dy}}(\mathcal{W})$. The consequence is
that, at this moment in the run, counter $j$ has a value $b\in
A_{\mathsf{dy}}\subseteq\mathbb{N}$. Moreover $b<\mu_{\mathsf{new}}$, because
we chose $l$ larger than all values in $A_{\mathsf{dy}}$. With
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$,
we additionally get $b\equiv a\mathop{\text{ mod }}\mu_{\mathsf{new}}$. Lemma
8.2 applies and shows $a=b$.
For the remaining precovering graphs, and for the precovering graph $G$ with a
counter different from $j$, we show intermediate acceptance as follows. Let
$\mathcal{W}$ carry value $b\in\mathbb{N}$ at the moment of interest. Note
that $b<\mu_{\mathsf{new}}$ by the choice of $l$, namely larger than all
values in $\mathcal{W}$. In
$\mathcal{W}_{\mathsf{new}}=(\mathcal{S},\mu_{\mathsf{new}})$, we find a value
$b^{\prime}$ at this moment. As $0\leq\mathcal{S}<\mu_{\mathsf{new}}$, we know
$0\leq b^{\prime}<\mu_{\mathsf{new}}$. We have
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W})\cap\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu_{\mathsf{new}}}\\![\mathsf{dy}]}(\mathcal{W}_{\mathsf{new}})$.
We are thus sure to reach $b$ and reach $b^{\prime}$ modulo
$\mu_{\mathsf{new}}$. This allows us to conclude $b\equiv
b^{\prime}\mathop{\text{ mod }}\mu_{\mathsf{new}}$ with $0\leq
b,b^{\prime}<\mu_{\mathsf{new}}$. Lemma 8.2 applies and yields the desired
$b=b^{\prime}$.
We analyze the complexity. Every counter in every valuation may be replaced by
$\mu_{\mathsf{new}}=l\cdot\mu$ many values. This limits the number of
generated DMGTS to ${\mu_{\mathsf{new}}}^{|{\mathcal{W}}|}$. The DMGTS have a
maximal size of $\mu_{\mathsf{new}}\cdot|{\mathcal{W}}|$. As we argued in the
case $\mathsf{sd}=\mathsf{sj}$, the value $l$ itself is of size elementary in
$|{\mathcal{W}}|$. We conclude that the whole procedure takes elementary
resources. ∎
### 8.2. Reasoning Locally about Faithfulness
In Case (i), we modified the entry and exit valuations of every precovering
graph in a DMGTS. In the remaining two cases, we will decompose a single
precovering graph, in a way that is independent of the context. We now develop
techniques that allow us to reason locally about the one precovering graph,
and lift the results to the overall DMGTS. The focus is on faithfulness.
An _MGTS context_ is an MGTS in which a distinguished variable $\bullet$
occurs precisely once:
$\displaystyle\mathcal{C}[\bullet]\;\;::=\;\;\bullet\;\;\mid\;\;\mathcal{C}[\bullet].\mathit{up}.\mathcal{W}\;\;\mid\;\;\mathcal{W}.\mathit{up}.\mathcal{C}[\bullet]\
.$
We write $\mathcal{C}[\mathcal{W}]$ for the MGTS that is obtained by replacing
$\bullet$ with the MGTS $\mathcal{W}$. When $\mathcal{W}$ is a DMGTS
$(\mathcal{S},\mu)$, we also write $\mathcal{C}[\mathcal{W}]$ to mean
$(\mathcal{C}[\mathcal{S}],\mu)$. A first observation is that the well-founded
relation is preserved when comparable MGTS and DMGTS are inserted into
contexts, $\mathcal{W}_{1}\preceq\mathcal{W}_{2}$ implies
$\mathcal{C}[\mathcal{W}_{1}]\preceq\mathcal{C}[\mathcal{W}_{2}]$.
With contexts at hand, in the remaining cases we will start from
$(\mathcal{C}[G],\mu)$ and decompose $(G,\mu)$ into sets of DMGTS $U$ and $V$.
The sets needed for Lemma 8.1 are then
$X=\mathcal{C}[U]=\\{\mathcal{C}[\mathcal{S}]\mid\mathcal{S}\in U\\}$ resp.
$Y=\mathcal{C}[V]$. We also have $Y=\emptyset$ in one case. To show the
faithfulness of these DMGTS, we use the following arguments.
We define a relation called _consistent specialization_ between DMGTS. We use
an induction on the structure of the underlying MGTS. In the base case,
$(\mathcal{S},\mu)$ is a consistent specialization of $(G,\mu)$ if the
following two conditions hold.
* (1)
We have $\mathcal{S}.c_{\mathsf{in}}\leq_{\omega}G.c_{\mathsf{in}}$,
$\mathcal{S}.c_{\mathsf{out}}\leq_{\omega}G.c_{\mathsf{out}}$, and for all
runs $\rho\in\mathsf{Runs}_{\mathbb{Z}}(\mathcal{S})$ there is
$\sigma\in\mathsf{Runs}_{\mathbb{Z}}(G)$ with $\sigma\approx\rho$.
* (2)
For all
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{S})$
with $\rho[\mathsf{first}][\mathsf{dy}]\leq_{\omega}G.c_{\mathsf{in}}$ and
$\rho[\mathsf{last}][\mathsf{dy}]\leq_{\omega}G.c_{\mathsf{out}}$, we have
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{S})$.
In the inductive step, if $\mathcal{S}_{1}$ is a consistent specialization of
$\mathcal{S}_{2}$, then $\mathcal{C}[\mathcal{S}_{1}]$ is a consistent
specialization of $\mathcal{C}[\mathcal{S}_{2}]$. Note that $\mu$ has to
coincide for DMGTS that are related by consistent specialization.
Condition (1) expects that every run through $\mathcal{S}$ can be mimicked by
$G$. With this, consistent specializations have smaller languages. Together,
Conditions (1) and (2) show that consistent specializations preserve
faithfulness.
###### Lemma 8.4.
Let $\mathcal{W}_{1}$ be a consistent specialization of $\mathcal{W}_{2}$.
Then $L_{\mathsf{sj}}(\mathcal{W}_{1})\subseteq
L_{\mathsf{sj}}(\mathcal{W}_{2})$ holds. Moreover, if $\mathcal{W}_{2}$ is
faithful, so is $\mathcal{W}_{1}$.
We give an intuition as to why the decompositions for the Cases (ii) and (iii)
will guarantee Condition (2). The decompositions unroll the precovering graph
$G$ into DMGTS. The intermediate counter valuations of these DMGTS correspond
to the consistent assignment in $G$. The precovering graph only admits runs
that respect the consistent assignment. As a consequence, every run through
the new DMGTS will satisfy intermediate acceptance.
### 8.3. Case (ii)
This is the case where an edge $e$ belongs to a precovering graph $G$, and
hence can be taken in loops, but the characteristic equations for the DMGTS
$(\mathcal{C}[G],\mu)$ impose an upper bound $l\in\mathbb{N}$ on the number of
times the edge can be taken. The decomposition unrolls $G$ into MGTS where
every copy of $e$ leads to a new precovering graph. The MGTS thus count the
number of times $e$ is taken.
The DMGTS in the set $U$ only admit runs where $e$ is taken at most $l$ times.
As $x[e]$ is not in the support, the precovering graphs we obtain by excluding
$e$ have a smaller dimension, and hence the well-founded preorder decreases.
The DMGTS in $V$ count until $e$ has been taken $l+1$ times, and then admit
all edges while returning to the former root. Any solution to their
characteristic equations can be translated into a solution $s$ for the
characteristic equations of $(G,\mu)$ with $s[e]>l$. When the elements of $V$
are inserted into the context, this makes the characteristic equations
infeasible, as the edge count exceeds the bound $l$.
Lemma 8.5 lists the guarantees. It is [18, Proposition 3.3] with information
about faithfulness and the DMGTS in $V$ added.
###### Lemma 8.5.
Let $(G,\mu)$ contain the edge $e$, let
$\mathsf{sd}\in\\{\mathsf{sj},\mathsf{dy}\\}$, and let $\mathcal{C}[\bullet]$
be a context with
$x[e]\not\in\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{C}[G],\mu))$.
Using resources elementary in $|{(\mathcal{C}[G],\mu)}|$, we can compute sets
$U$ and $V$ of consistent specializations of $(G,\mu)$, where
* •
for all $\mathcal{S}\in U$, we have $\mathcal{S}<_{\mathsf{rnk}}(G,\mu)$,
* •
for all $\rho\in\mathsf{IAcc}_{\mathsf{sj}}(G,\mu)$ there is
$\sigma\in\mathsf{IAcc}_{\mathsf{sj}}(U\cup V)$ with $\rho\approx\sigma$,
* •
for all $\mathcal{T}\in V$, we have that
$\mathsf{Char}_{\mathsf{sd}}(\mathcal{C}[\mathcal{T}])$ is infeasible.
We define the decomposition $(X,Y)=\textsf{dec}(\mathcal{W})$ for the faithful
DMGTS $\mathcal{W}=(\mathcal{C}[G],\mu)$ whose precovering graph $G$ contains
an edge $e$ with
$x[e]\not\in\mathsf{supp}(\mathsf{Char}_{\mathsf{sd}}(\mathcal{W}))$. With
Lemma 8.5, we compute the sets $U$ and $V$. If $\mathsf{sd}=\mathsf{sj}$, we
set $X=\mathcal{C}[U]$ and $Y=\emptyset$. If $\mathsf{sd}=\mathsf{dy}$, we set
$X=\mathcal{C}[U]$ and $Y=\mathcal{C}[V]$. This should be read as follows. If
the subject VASS can only execute the edge a bounded number of times, we use
the usual decomposition and do not create elements in $Y$. If the Dyck-side
can only execute the edge a bounded number of times, we split the runs of the
subject VASS. The set $X$ contains the runs where the edge is bounded. The set
$Y$ contains the runs where the edge may occur more often, and we have the
guarantee to be separable from the Dyck language by Lemma 6.2.
###### Proof.
We prove the properties promised by Lemma 8.1. For (a), we note that not only
the DMGTS in $X$ but also the ones in $Y$ are faithful by Lemmas 8.5 and 8.4.
The well-founded relation decreases by Lemma 8.5. It is stable under forming
contexts as noted above. For (b), if $\mathsf{sd}=\mathsf{sj}$ there is
nothing to do, because $Y=\emptyset$. If $\mathsf{sd}=\mathsf{dy}$, Lemma 8.5
already yields the infeasibility of
$\mathsf{Char}_{\mathsf{dy}}(\mathcal{C}[\mathcal{T}])$ for all
$\mathcal{T}\in V$. With Lemma 6.2, this implies the desired
$L_{\mathsf{sj}}(\mathcal{C}[\mathcal{T}])\mid D_{n}$.
For (c), we have $L_{\mathsf{sj}}(X\cup Y)\subseteq
L_{\mathsf{sj}}(\mathcal{W})$ by Lemmas 8.5 and 8.4. For the reverse inclusion,
consider a word $\lambda(\rho)$ with
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W})$. As
$\mathcal{W}=(\mathcal{C}[G],\mu)$, we have $\rho=\rho_{0}.\rho_{1}.\rho_{2}$,
where $\rho_{1}$ is the part of the run through $G$. Intermediate acceptance
propagates down to the components of the DMGTS, which yields
$\rho_{1}\in\mathsf{IAcc}_{\mathsf{sj}}(G,\mu)$. By Lemma 8.5, there are
$\mathcal{S}\in U\cup V$ and
$\sigma\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{S})$ with
$\rho_{1}\approx\sigma$. The equivalence among runs guarantees that the labels
and counter values coincide, only the visited nodes may differ. Hence, we have
$\rho_{0}.\sigma.\rho_{2}\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{C}[\mathcal{S}])$
and
$\lambda(\rho_{0}.\sigma.\rho_{2})=\lambda(\rho_{0}.\rho_{1}.\rho_{2})=\lambda(\rho)$.
If $\mathsf{sd}=\mathsf{dy}$, we are done. If $\mathsf{sd}=\mathsf{sj}$, we
must additionally show $\mathcal{S}\not\in V$, because the MGTS in $V$ are
dropped by the construction. We reason with infeasibility, like we did for
(b). ∎
### 8.4. Case (iii)
This is the case where a precovering graph $G$ does not have a covering
sequence to arbitrarily increase or decrease the values of $\omega$-decorated
counters. As $\mathsf{CS}_{\mathsf{down}}(G)$ is defined via
$\mathsf{CS}_{\mathsf{up}}(G)$, we focus on
$\mathsf{CS}_{\mathsf{up}}(G)=\emptyset$. We follow the construction by Leroux
and Schmitz [24], which employs a Rackoff argument [26], rather than the one
by Lambert [18], which works with coverability graphs [13].
For each $\mathit{j}\in\mathsf{sj}\cup\mathsf{dy}$ that is decorated by
$\omega$ and has $G.c_{\mathsf{in}}[\mathit{j}]\in\mathbb{N}$, we unroll the
precovering graph into a DMGTS that tracks $\mathit{j}$ up to a bound $B$. The
bound $B$ is of size doubly exponential in $|{G}|$. By the same Rackoff
argument as in [24], we conclude that this captures all words in
$L_{\mathsf{sj}}(G)$, because the opposite would imply
$\mathsf{CS}_{\mathsf{up}}(G)\neq\emptyset$. The graph has one peculiarity
compared to [24]. Assume a Dyck counter has not yet exceeded $B$ and becomes
negative. Then we enter a sink node in which we enable all transitions.
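As an illustration only, the following Python sketch shows the shape of this unrolling for a single tracked counter: states are pairs of a graph node and a tracked value in $\{0,\dots,B\}$ (or a marker once the bound is exceeded), and an update that would make the still-bounded counter negative leads to a sink in which every edge stays enabled. The names (`Edge`, `unroll`) are hypothetical; the actual construction additionally carries the MGTS valuations and the $\omega$-decorations.

```python
from collections import namedtuple

# Hypothetical edge representation: source node, label, effect on the tracked counter, target node.
Edge = namedtuple("Edge", ["src", "label", "delta", "dst"])

def unroll(edges, B):
    """Unroll a graph so that one counter is tracked explicitly up to the bound B.

    States are (node, v) with v in {0, ..., B}, plus (node, 'top') once the counter
    has exceeded B, plus a single sink state. If the tracked counter would become
    negative while still bounded, we move to the sink, where every edge is enabled.
    """
    states, trans = {"sink"}, []
    for e in edges:
        for v in list(range(B + 1)) + ["top"]:
            src = (e.src, v)
            if v == "top":                    # counter already exceeded B: stop tracking it
                dst = (e.dst, "top")
            else:
                w = v + e.delta
                if w < 0:                     # would go negative before exceeding B: sink
                    dst = "sink"
                elif w > B:                   # exceeded the bound: switch to 'top'
                    dst = (e.dst, "top")
                else:
                    dst = (e.dst, w)
            states.update([src, dst])
            trans.append((src, e.label, dst))
        trans.append(("sink", e.label, "sink"))  # in the sink, all transitions are enabled
    return states, trans
```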
The DMGTS in $U$ capture the runs that do not enter the sink and reach
$G.c_{\mathsf{out}}$. Since cycles cannot change counter $\mathit{j}$, the
dimension of the vector space decreases for each precovering graph. This
reduces the well-founded preorder. The details are in the appendix.
The DMGTS in $V$ capture the runs that enter the sink and the runs that end in
a valuation different from $G.c_{\mathsf{out}}$. In both cases, the
characteristic equations for $\mathsf{dy}$ become infeasible. Indeed, when a
Dyck counter becomes negative, the construction forces us to leave a
precovering graph towards the sink. But then we fail to satisfy the non-
negativity requirement for entering the sink.
Lemma 8.6 formalizes the guarantees given by the construction. It is based on
[18, Propositions 3.4 and 3.5].
###### Lemma 8.6.
Let $(G,\mu)$ have $\mathsf{CS}_{\mathsf{up}}(G)=\emptyset$ or
$\mathsf{CS}_{\mathsf{down}}(G)=\emptyset$. Using resources elementary in
$|{(G,\mu)}|$, we can compute sets $U$ and $V$ of consistent specializations
of $(G,\mu)$, where
* •
for all $\mathcal{S}\in U$, we have $\mathcal{S}<_{\mathsf{rnk}}(G,\mu)$,
* •
for all $\rho\in\mathsf{IAcc}_{\mathsf{sj}}(G,\mu)$ there is
$\sigma\in\mathsf{IAcc}_{\mathsf{sj}}(U\cup V)$ with $\rho\approx\sigma$,
* •
for all $\mathcal{T}\in V$, we have that
$\mathsf{Char}_{\mathsf{dy}}(\mathcal{T})$ is infeasible.
We explain the role of the DMGTS in $V$ that enter the sink as a Dyck counter
becomes negative. How can they help with the second property, if the
requirement there is that the Dyck counters stay non-negative? The observation
is that intermediate acceptance modulo $\mu$ satisfies a monotonicity
property: if
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{T})$
then
$\rho+k\cdot\mu\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{T})$.
Here, we use $\rho+k\cdot\mu$ to denote the run obtained from $\rho$ by
raising the values of all counters in all configurations by $k\cdot\mu$ with
$k\in\mathbb{N}$. A consequence of monotonicity is that even though a run
starting from small values has to enter the sink due to a negative counter,
this negativity will disappear in properly scaled runs, and they will still
take the same transitions. The set $V$ makes sure we do not miss the scaled
runs.
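The monotonicity can be made concrete with a small Python sketch (an ad-hoc run representation, for illustration only): raising every configuration of a run by $k\cdot\mu$ leaves the transitions taken untouched and, for large enough $k$, removes any negativity on the counters supported by $\mu$.

```python
def raise_run(valuations, mu, k):
    """Add k*mu to the counter valuation of every configuration along a run.

    `valuations` is a list of counter vectors (one per configuration) and `mu` a
    vector of the same dimension; the edges taken by the run are unchanged.
    """
    return [[v + k * m for v, m in zip(c, mu)] for c in valuations]

# A run that dips below zero on the first counter ...
run = [[0, 2], [-1, 3], [1, 1]]
mu = [1, 0]
# ... is non-negative on that counter after raising by k = 1.
assert all(c[0] >= 0 for c in raise_run(run, mu, k=1))
```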
We define $(X,Y)=\textsf{dec}(\mathcal{W})$ for
$\mathcal{W}=(\mathcal{C}[G],\mu)$ faithful with
$\mathsf{CS}_{\mathsf{up}}(G)=\emptyset$ or
$\mathsf{CS}_{\mathsf{down}}(G)=\emptyset$. Lemma 8.6 yields sets $U$ and $V$
of consistent specializations. We set $X=\mathcal{C}[U]$ and
$Y=\mathcal{C}[V]$. The requirements of Lemma 8.1 are derived like in Case
(ii).
## References
* [1] P. Baumann, R. Meyer, and G. Zetzsche. Regular separability in Büchi VASS. In STACS, volume 254 of LIPIcs, pages 9:1–9:19, 2023.
* [2] J. R. Büchi. On a decision method in restricted second order arithmetic. In International Congress on Logic, Method, and Philosophy of Science, pages 1–12. Stanford University Press, 1962.
* [3] L. Clemente, W. Czerwiński, S. Lasota, and C. Paperman. Regular separability of Parikh automata. In ICALP, volume 80 of LIPIcs, pages 117:1–117:13. Dagstuhl, 2017.
* [4] L. Clemente, W. Czerwiński, S. Lasota, and C. Paperman. Separability of reachability sets of vector addition systems. In STACS, volume 66 of LIPIcs, pages 24:1–24:14. Dagstuhl, 2017.
* [5] L. Clemente, S. Lasota, and R. Piórkowski. Timed games and deterministic separability. In ICALP, volume 168 of LIPIcs, pages 121:1–121:16. Dagstuhl, 2020.
* [6] F. G. Commoner, A. W. Holt, S. Even, and A. Pnueli. Marked directed graphs. JCSS, 5(5):511–523, 1971.
* [7] W. Czerwiński and S. Lasota. Regular separability of one counter automata. In LICS, pages 1–12. IEEE, 2017.
* [8] W. Czerwiński, S. Lasota, R. Meyer, S. Muskalla, K. Narayan Kumar, and P. Saivasan. Regular separability of well-structured transition systems. In CONCUR, volume 118 of LIPIcs, pages 35:1–35:18. Dagstuhl, 2018.
* [9] W. Czerwiński and Ł. Orlikowski. Reachability in vector addition systems is Ackermann-complete. In FOCS, pages 1229–1240. IEEE, 2021.
* [10] W. Czerwiński and G. Zetzsche. An approach to regular separability in vector addition systems. In LICS, pages 341–354. ACM, 2020.
* [11] S. Demri. On selective unboundedness of VASS. JCSS, 79(5):689–713, 2013.
* [12] H. J. Genrich and K. Lautenbach. Synchronisationsgraphen. Acta Informatica, 2:143–161, 1973.
* [13] R. M. Karp and R. E. Miller. Parallel program schemata. JCSS, 3(2):147–195, 1969.
* [14] E. Keskin and R. Meyer. Separability and non-determinizability of WSTS. In CONCUR, volume 279 of LIPIcs, pages 8:1–8:17. Dagstuhl, 2023.
* [15] C. Köcher and G. Zetzsche. Regular separators for VASS coverability languages. In FSTTCS, volume 284 of LIPIcs, pages 15:1–15:19. Dagstuhl, 2023.
* [16] E. Kopczynski. Invisible pushdown languages. In LICS, pages 867–872. ACM, 2016.
* [17] S. R. Kosaraju. Decidability of reachability in vector addition systems (preliminary version). In STOC, pages 267–281. ACM, 1982.
* [18] J.-L. Lambert. A structure to decide reachability in Petri nets. TCS, 99(1):79–104, 1992.
* [19] S. Lasota. Improved Ackermannian lower bound for the Petri nets reachability problem. In STACS, volume 219 of LIPIcs, pages 46:1–46:15. Dagstuhl, 2022.
* [20] J. Leroux. The general vector addition system reachability problem by Presburger inductive invariants. In LICS, pages 4–13. IEEE, 2009.
* [21] J. Leroux. Vector addition system reachability problem: a short self-contained proof. In POPL, pages 307–316. ACM, 2011.
* [22] J. Leroux. The reachability problem for Petri nets is not primitive recursive. In FOCS, pages 1241–1252. IEEE, 2021.
* [23] J. Leroux and S. Schmitz. Demystifying reachability in vector addition systems. In LICS, pages 56–67. IEEE, 2015.
* [24] J. Leroux and S. Schmitz. Reachability in vector addition systems is primitive-recursive in fixed dimension. In LICS. IEEE, 2019.
* [25] E. W. Mayr. An algorithm for the general Petri net reachability problem. In STOC, pages 238–246. ACM, 1981.
* [26] C. Rackoff. The covering and boundedness problems for vector addition systems. TCS, 6(2):223–231, 1978.
* [27] G. S. Sacerdote and R. L. Tenney. The decidability of the reachability problem for vector addition systems (preliminary version). In STOC, pages 61–76. ACM, 1977.
* [28] S. Schmitz. Complexity hierarchies beyond elementary. ACM TOCT, 8(1):3:1–3:36, 2016.
* [29] S. Schmitz. Algorithmic Complexity of Well-Quasi-Orders. Habilitation à diriger des recherches, ENS-Paris-Saclay, 2017.
## Appendix A Appendix: DMGTS
###### Proof of Lemma 5.1.
Let $w\in L_{\mathsf{dy}}(\mathcal{W})$. Then there is a run $\rho$ in
$\mathcal{W}$ labeled by $\lambda(\rho)=w$ that remains non-negative and that
is accepting on $\mathsf{dy}$. By the definition of DMGTS, the effects of
letters on the counters in $\mathsf{dy}$ agree with the effects that these
letters have in the VASS $\mathcal{D}_{n}$. This allows us to construct a run
$\rho^{\prime}\in\mathsf{Acc}_{\mathbb{N},\leq_{\omega}}(\mathcal{D}_{n})$
that has the same counter valuations and updates. Hence, we have $w\in D_{n}$.
∎
## Appendix B Appendix: The Decomposition
In this section, we provide the missing proofs from Section 8, followed by an
analysis of the complexity of the algorithm from the proof of Lemma 6.1. We
preface the missing proofs by adapting the Rackoff-style observations
in [24] to our needs.
As we mention in the main paper, our decomposition is based on the
decomposition by Leroux et al. in [24]. For this reason, many proofs we
feature are adaptations of the ones in [24]. Because our setting differs from
that of [24] non-trivially, we adapt the proofs to our setting instead of
using them as black-boxes. The biggest difference is this. DMGTS define two
languages, one per counter set $\mathsf{sj}$ and $\mathsf{dy}$, where the
language associated with one side does not track the constraints on the other
side. Additionally, we need the faithfulness invariant, which is not needed in
[24].
### B.1. Preliminary Proofs
As alluded to, we first show the following observation, which is an adaptation
of Lemma A.1 from the appendix of [24]. For a set of counters $J$, our version
of the observation states the following. If we have an integer run where each
counter (i) exceeds some doubly exponential bound in $C$, the number of
counters $d$, and the largest effect $l$ of a transition along the run, and
(ii) remains positive until this happens, then there is a $J$-run that
reaches a value that covers $C$ in all counters $i\in J$. Our version
differs from the one used in [24, Lemma A.1] in that we allow
counters to become negative after exceeding the bound. We recall the proof.
###### Lemma B.1.
Let $G$ be a precovering graph with the largest transition effect $l$, $C\geq
2$, $J\subseteq\mathsf{sj}\cup\mathsf{dy}$, and
$\rho\in\mathsf{Runs}_{\mathbb{Z}}(G)$ with
$\rho=(p_{0},c_{0})e_{0}\ldots e_{k-1}(p_{k},c_{k}).$
If for all $\mathit{j}\in J$, there is an $i\leq k$ with
$c_{i}[\mathit{j}]\geq(|{G.V}|\cdot l\cdot C)^{(|{J}|+1)!}$, and
$c_{i^{\prime}}[\mathit{j}]\geq 0$ for all $i^{\prime}\leq i$, then there is a
run $\sigma\in\mathsf{Runs}_{J}(G)$ of size at most $(|{G.V}|\cdot l\cdot
C)^{(|{J}|+1)!}$ where $\sigma[\mathsf{last}]=(p_{k},c)$,
$\sigma[\mathsf{first}]=(p_{0},c_{0})$, and $c[\mathit{j}]\geq C$ for all
$\mathit{j}\in J$.
###### Proof of Lemma B.1.
Let $G$, $J$, $\rho$, and $l$ be defined as stated in the premise of the
lemma, and $\rho$ additionally the shortest run with this property. First, we
define $\alpha_{i}$ for $i\leq|{\mathsf{sj}\cup\mathsf{dy}}|$ inductively. We
define $\alpha_{0}=|{G.V}|$, and
$\alpha_{i}=|{G.V}|\cdot(l\cdot\alpha_{i-1}+C)^{i}+\alpha_{i-1}$ for $i\geq 1$.
Overapproximating, we get
$\alpha_{i}\leq|{G.V}|\cdot(l\cdot\alpha_{i-1}\cdot
C)^{i}+\alpha_{i-1}\leq(|{G.V}|\cdot l\cdot\alpha_{i-1}\cdot C)^{i+1}.$
Applying the inductive definition repeatedly, we get
$\alpha_{i}\leq(|{G.V}|\cdot l\cdot C)^{(i+1)!}.$
Our proof is by induction on $|{J}|$. For the base case, we take
$J=\emptyset$. Then, both the premise and the requirements become trivial.
Since there is a run $\rho$ that starts from $p_{0}$ and ends at $p_{k}$, we
can find a run of size at most $|{G.V}|=\alpha_{0}$ between these nodes.
For the inductive case, let Lemma B.1 hold for all $J^{\prime}\subset J$. The
premise of the lemma gives us $i_{\mathit{j}}\leq k$ for each $\mathit{j}\in
J$ where $c_{i_{\mathit{j}}}[\mathit{j}]\geq\alpha_{|{J}|}\geq
l\cdot\alpha_{|{J}|-1}+C$ and $c_{i^{\prime}}[\mathit{j}]\geq 0$ for all
$i^{\prime}\leq i_{\mathit{j}}$. Wlog. assume $i_{\mathit{j}}$ to be minimal
for each $\mathit{j}\in J$. Let $J^{\prime}$ be the set of counters
$\mathit{j}$ that minimize $i_{\mathit{j}}$. Fix some $\mathit{j}\in
J^{\prime}$. The remainder of the run
$(p_{i_{\mathit{j}}},c_{i_{\mathit{j}}})\ldots(p_{k},c_{k})$ satisfies the
premise of the lemma for $J\setminus J^{\prime}$. Since $i_{\mathit{j}}$ is
minimal and $\rho$ is a $J$-run, we know that for all $\mathit{j}\in J$ and
$i^{\prime}<i_{\mathit{j}}$,
$c_{i^{\prime}}[\mathit{j}]<l\cdot\alpha_{|{J}|-1}+C$. Then, each counter in
$J$ can have at most $l\cdot\alpha_{|{J}|-1}+C$ different values. We have
$i_{\mathit{j}}<|{G.V}|\cdot(l\cdot\alpha_{|{J}|-1}+C)^{|{J}|}$ since $\rho$
is minimal and we could eliminate a loop from $\rho$ if this were to not hold.
Denote the run up to this point as $\rho_{0}$. Observe that the remaining run
$(p_{i_{\mathit{j}}},c_{i_{\mathit{j}}})\ldots(p_{k},c_{k})$ fulfills the
premise of the lemma for the counters $J\setminus J^{\prime}$: for all $m\in
J\setminus J^{\prime}$, we know that there is an $i_{\mathit{j}}<i_{m}\leq k$
where $c_{i_{m}}[m]\geq\alpha_{|{J}|}\geq\alpha_{|{J\setminus
J^{\prime}}|}$ and $c_{i^{\prime}}[m]\geq 0$ for all $i^{\prime}\leq
i_{m}$. Then the induction hypothesis applies to show that there is a
$J\setminus J^{\prime}$-run $\sigma_{1}$ from
$(p_{i_{\mathit{j}}},c_{i_{\mathit{j}}})$ of length at most
$\alpha_{|{J\setminus J^{\prime}}|}$ that reaches $(p_{k},c)$ with
$c[m]\geq C$ for all $m\in J\setminus J^{\prime}$. Let
$\sigma=\ldots(p_{k},c)$ be the run obtained by following up $\rho_{0}$ with
$\sigma_{1}$. We claim that $\sigma$ is a $J$-run with $c[n]\geq C$ for all
$n\in J$. This is already clear for all $n\in J\setminus J^{\prime}$. For
$n\in J^{\prime}$, we observe that $\rho_{0}$ is by definition positive on
these counters, and $c_{i_{n}}[n]\geq l\cdot\alpha_{|{J}|-1}+C\geq
l\cdot\alpha_{|{J\setminus J^{\prime}}|}+C$. Then, since $\sigma_{1}$ takes at
most $\alpha_{|{J\setminus J^{\prime}}|}$ transitions, the counter $n$ remains
above $C\geq 0$ for all prefixes of $\sigma_{1}$. This concludes the proof. ∎
We now show Lemma 8.4. We break down its conclusions into two proofs, one for
the implied inclusion and one for the implied faithfulness. In the following
proofs, we write $\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$ to mean that
$\mathcal{W}_{0}$ is a consistent specialization of $\mathcal{W}_{1}$. We also
write $(p,c).c$ for a configuration $(p,c)$ to denote the counter valuation of
the configuration.
###### Proof Sketch of Lemma 8.4, Inclusion.
The proof is by structural induction over the proof tree for $\leq_{cs}$. By
this we refer to a tree labeled by pairs of DMGTS
$(\mathcal{W}_{0},\mathcal{W}_{1})$ where (i) the node is a leaf where the
properties (1) and (2) have been applied to show
$\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$ or (ii) the node is not a leaf and
the inductive case is used to show $\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$
by inserting the relation shown in its successor into a context. Our inductive
invariant is stronger than the conclusion of Lemma 8.4: If
$\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$, then for all
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{0})$, there is a
$\sigma\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{1})$ with
$\rho\approx\sigma$. Because $\approx$ preserves labels, this clearly implies
the desired inclusion. For the base case, we have $\mathcal{W}_{1}=(G,\mu)$
for some precovering graph $G$, where (1) and (2) hold to show
$\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$. Let
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{0})$. We know
$\rho[\mathsf{first}].c[\mathsf{sj}]\leq_{\omega}\mathcal{W}_{0}.c_{\mathsf{in}}[\mathsf{sj}]$,
$\rho[\mathsf{last}].c[\mathsf{sj}]\leq_{\omega}\mathcal{W}_{0}.c_{\mathsf{out}}[\mathsf{sj}]$,
$\rho[\mathsf{first}].c[\mathsf{dy}]\sqsubseteq_{\omega}^{\mu}\mathcal{W}_{0}.c_{\mathsf{in}}[\mathsf{dy}]$,
and
$\rho[\mathsf{last}].c[\mathsf{dy}]\sqsubseteq_{\omega}^{\mu}\mathcal{W}_{0}.c_{\mathsf{out}}[\mathsf{dy}]$.
The second part of (1) tells us that there is a
$\sigma\in\mathsf{Runs}_{\mathbb{Z}}(\mathcal{W}_{1})$ with
$\sigma\approx\rho$. The relation $\approx$ keeps the counter valuations
intact. So positivity on $\mathsf{sj}$ counters is kept, and the constraints
are kept:
$\displaystyle\sigma[\mathsf{first}].c[\mathsf{sj}]$
$\displaystyle\leq_{\omega}\mathcal{W}_{0}.c_{\mathsf{in}}[\mathsf{sj}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{in}}[\mathsf{sj}]$
$\displaystyle\sigma[\mathsf{last}].c[\mathsf{sj}]$
$\displaystyle\leq_{\omega}\mathcal{W}_{0}.c_{\mathsf{out}}[\mathsf{sj}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{out}}[\mathsf{sj}]$
$\displaystyle\sigma[\mathsf{first}].c[\mathsf{dy}]$
$\displaystyle\sqsubseteq_{\omega}^{\mu}\mathcal{W}_{0}.c_{\mathsf{in}}[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{in}}[\mathsf{dy}]$
$\displaystyle\sigma[\mathsf{last}].c[\mathsf{dy}]$
$\displaystyle\sqsubseteq_{\omega}^{\mu}\mathcal{W}_{0}.c_{\mathsf{out}}[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{out}}[\mathsf{dy}]$
At each line, the first inequality follows from
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{0})$, and the fact that
$\approx$ keeps the counter valuations intact. The second inequality follows
from the first part of (1), which constrains the valuations of
$\mathcal{W}_{0}$ with respect to $G$. For the inductive case, let
$\mathcal{W}_{i}=\mathcal{C}[\mathcal{W}_{i}^{\prime}]$ for some DMGTS
$\mathcal{W}_{i}^{\prime}$ for each $i\in\\{0,1\\}$, where
$\mathcal{W}_{0}^{\prime}\leq_{cs}\mathcal{W}_{1}^{\prime}$. Let
$\rho\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{0})$. We break this run down
into $\rho=\rho_{0}.\rho_{1}.\rho_{2}$ where $\rho_{1}$ is the part of the run
in $\mathcal{W}_{0}^{\prime}$, and $\rho_{0}.\bullet.\rho_{2}$ the remaining
part in the outer context $\mathcal{C}[\bullet]$. Compositionality of
$\mathsf{IAcc}_{\mathsf{sj}}(-)$ tells us
$\rho_{1}\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{0}^{\prime})$. Using the
induction hypothesis, we get a run
$\sigma_{1}\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{1}^{\prime})$ with
$\sigma_{1}\approx\rho_{1}$. Using compositionality and the fact that
$\mathcal{C}[\bullet]$ is the same between $\mathcal{W}_{0}$ and
$\mathcal{W}_{1}$, we deduce
$\sigma=\rho_{0}.\sigma_{1}.\rho_{2}\in\mathsf{IAcc}_{\mathsf{sj}}(\mathcal{W}_{1})$.
Since $\rho_{1}\approx\sigma_{1}$, it is clear that $\rho\approx\sigma$.
This concludes the proof. ∎
###### Proof Sketch of Lemma 8.4, Faithfulness.
We do a structural induction similar to the proof of Lemma 8.4, Inclusion. We maintain
two invariants. If $\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$, then
* (a)
For all
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0})$,
there is $\sigma\approx\rho$ with
$\sigma\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{1})$.
* (b)
If $\mathcal{W}_{1}$ is faithful, for all runs
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0})$
with
$\rho[\mathsf{first}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{in}}[\mathsf{dy}]$,
and
$\rho[\mathsf{last}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{out}}[\mathsf{dy}]$,
we have $\rho\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{0})$.
We also use the fact that
$\mathcal{W}_{0}.c_{\mathsf{in}}\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{in}}$
and
$\mathcal{W}_{0}.c_{\mathsf{out}}\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{out}}$,
which can be shown by standard induction. This fact, applied with (b), shows
the implication from the faithfulness of $\mathcal{W}_{1}$ to the faithfulness
of $\mathcal{W}_{0}$, in Lemma 8.4. For the base case, let
$\mathcal{W}_{1}=(G,\mu)$ for some precovering graph $G$, where (1) and (2)
show $\mathcal{W}_{0}\leq_{cs}\mathcal{W}_{1}$. Let
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0})$.
Because of (1), we know that there is a
$\sigma\in\mathsf{Runs}_{\mathbb{Z}}(G)$ with $\sigma\approx\rho$. Since (1)
tells us that $\mathcal{W}_{0}.c_{\mathsf{in}}$ and
$\mathcal{W}_{0}.c_{\mathsf{out}}$ are stricter than
$\mathcal{W}_{1}.c_{\mathsf{in}}$ and $\mathcal{W}_{1}.c_{\mathsf{out}}$, and
$G$ does not have intermediate valuations,
$\sigma\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{1})$.
This shows (a). Invariant (b) is just (2). This concludes the base case. For
the inductive case, let
$\mathcal{W}_{0}=\mathcal{C}[\mathcal{W}_{0}^{\prime}]$ and
$\mathcal{W}_{1}=\mathcal{C}[\mathcal{W}_{1}^{\prime}]$ with
$\mathcal{W}_{0}^{\prime}\leq_{cs}\mathcal{W}_{1}^{\prime}$. Let
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0})$.
We have $\rho=\rho_{0}.\rho_{1}.\rho_{2}$ where $\rho_{1}$ is the part of the
run in $\mathcal{W}_{0}^{\prime}$, and $\rho_{0}.\bullet.\rho_{2}$ is the part
of the run in the outer context $\mathcal{C}[\bullet]$. For (a), we invoke the
induction hypothesis on $\rho_{1}$ and use compositionality, same as in the
proof of Lemma 8.4, Inclusion. For (b), let $\mathcal{W}_{1}$ be faithful, let
$\rho\in\mathsf{Runs}_{\mathbb{Z}}(\mathcal{W}_{0})$,
$\rho[\mathsf{first}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{in}}[\mathsf{dy}]$,
and
$\rho[\mathsf{last}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}.c_{\mathsf{out}}[\mathsf{dy}]$
also hold. By (a) and compositionality, we obtain
$\sigma=\rho_{0}.\sigma_{1}.\rho_{2}\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{1})$
with $\sigma_{1}\approx\rho_{1}$. Since $\approx$ does not change counter
valuations, and $\rho$ already reaches the initial and final valuations of
$\mathcal{W}_{1}$, we know that
$\sigma\in\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{1})$. Using the
faithfulness of $\mathcal{W}_{1}$,
$\sigma\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{1})$,
and $\sigma\in\mathsf{Acc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{1})$, we
deduce $\sigma\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{1})$. This
yields
$\sigma_{1}\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{1}^{\prime})$,
which implies
$\rho_{1}[\mathsf{first}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}^{\prime}.c_{\mathsf{in}}[\mathsf{dy}]$
and
$\rho_{1}[\mathsf{last}].c[\mathsf{dy}]\leq_{\omega}\mathcal{W}_{1}^{\prime}.c_{\mathsf{out}}[\mathsf{dy}]$
by the properties of $\approx$. We also note that
$\rho\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0})$
implies
$\rho_{1}\in\mathsf{IAcc}_{\mathbb{Z},\sqsubseteq_{\omega}^{\mu}\\![\mathsf{dy}]}(\mathcal{W}_{0}^{\prime})$.
We use induction hypothesis (b) to get
$\rho_{1}\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{0}^{\prime})$.
Since the outer run $\rho_{0}.\bullet.\rho_{2}$ fulfills the remaining
requirements by
$\sigma=\rho_{0}.\sigma_{1}.\rho_{2}\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{1})$,
we get
$\rho=\rho_{0}.\rho_{1}.\rho_{2}\in\mathsf{IAcc}_{\mathbb{Z},\mathsf{dy}}(\mathcal{W}_{0})$.
∎
### B.2. Decomposition Steps
In this section, we prove Lemma 8.5 and Lemma 8.6. They both involve
decomposing a precovering graph $(G,\mu)$ into sets of DMGTS $U$ and $V$ that
track extra information. We do this by enriching the structure of the
precovering graph. To make this enrichment formal and our handling uniform, we
define _observer NFA_. An observer NFA $\mathcal{O}=(Q,I,G.E,\delta,F)$ for
$G$ is an NFA that has the edges $G.E$ as its alphabet. We note
that the language of an observer, $\mathit{L}(\mathcal{O})\subseteq G.E^{*}$,
consists of sequences of edges in $G$. By constructing the product of an
observer and a precovering graph, we get a larger automaton that accepts the
edge sequences of runs based on the information we want to track. We then
decompose this automaton into MGTS, in accordance with the valuations of $G$.
We define the product $G\times\mathcal{O}$ between an observer
$\mathcal{O}=(Q,I,G.E,\delta,F)$ and a precovering graph $G$ to be the
observer $(G.V\times Q,\\{G.v_{\mathsf{root}}\\}\times
I,G.E,\delta_{\times},\\{G.v_{\mathsf{root}}\\}\times F)$, where
$\delta_{\times}=\\{((p_{0},p_{1}),e,(q_{0},q_{1}))\mid e\in G.E,\,e=(p_{0},a,x,q_{0}),\,(p_{1},e,q_{1})\in\delta\\}.$
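For illustration, the product can be rendered directly in Python; the tuple representations of precovering-graph edges $(p_{0},a,x,q_{0})$ and observer transitions $(p_{1},e,q_{1})$ are assumptions of this sketch.

```python
def observer_product(G_V, G_root, G_E, O_states, O_init, O_delta, O_final):
    """Product of a precovering graph G and an observer NFA O over the alphabet G.E.

    Precovering-graph edges are tuples (p0, a, x, q0); observer transitions are
    tuples (p1, e, q1) with e an edge of G. The result is again an observer.
    """
    states = {(p, q) for p in G_V for q in O_states}
    init = {(G_root, q) for q in O_init}
    final = {(G_root, q) for q in O_final}
    delta = set()
    for e in G_E:
        p0, a, x, q0 = e                      # unpack the precovering-graph edge
        for (p1, e1, q1) in O_delta:
            if e1 == e:                       # the observer reads the edge e
                delta.add(((p0, p1), e, (q0, q1)))
    return states, init, G_E, delta, final
```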
Intuitively, $G\times\mathcal{O}$ simulates $\mathcal{O}$ along the edges $G$
can take during a run. We construct MGTS along the runs in
$G\times\mathcal{O}$. Towards this construction, we define the precovering
graphs. For each state $p$ of $G\times\mathcal{O}$, we define the precovering
graph $G_{p}^{\mathcal{O}}$
# Discriminative Feature Representation with Spatio-temporal Cues for Vehicle
Re-identification
Jingzheng Tu, Cailian Chen, , Xiaolin Huang, , Jianping He, , Xinping Guan J.
Tu, C. Chen, X. Huang, J. He and X. Guan are with the Department of
Automation, Shanghai Jiao Tong University, Shanghai 200240, China and also
with the Key Laboratory of System Control and Information Processing, Ministry
of Education of China, Shanghai Jiao Tong University, Shanghai 200240, China
(e-mail<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>xpguan@sjtu.edu.cn).
###### Abstract
Vehicle re-identification (re-ID) aims to discover and match the target
vehicles from a gallery image set taken by different cameras on a wide range
of road networks. It is crucial for lots of applications such as security
surveillance and traffic management. The remarkably similar appearances of
distinct vehicles and the significant changes of viewpoints and illumination
conditions pose grand challenges to vehicle re-ID. Conventional solutions
focus on designing global visual appearances without sufficient consideration
of vehicles’ spatio-temporal relationships in different images. In this paper,
we propose a novel discriminative feature representation with spatio-temporal
clues (DFR-ST) for vehicle re-ID. It is capable of building robust features in
the embedding space by involving appearance and spatio-temporal information.
Based on this multi-modal information, the proposed DFR-ST constructs an
appearance model for a multi-grained visual representation by a two-stream
architecture and a spatio-temporal metric to provide complementary
information. Experimental results on two public datasets demonstrate DFR-ST
outperforms the state-of-the-art methods, which validates the effectiveness of
the proposed method.
###### Index Terms:
Vehicle re-identification, computer vision, deep learning, attention
mechanism, video surveillance.
## I Introduction
With increasing demand for public security and the rapid growth of vehicles,
vehicle re-identification (re-ID) has become one of the most pivotal
technologies for intelligent urban surveillance. It also has a wide range of
potential applications including multi-target multi-camera tracking and traffic
flow modeling [1, 2, 3]. The main task of vehicle re-ID is to locate vehicles
accurately and identify the same vehicle over multiple cameras with particular
perspectives.
A similar topic is person re-identification (re-ID), but it is totally
different from vehicle re-ID in terms of the challenges it faces. Compared to
person re-ID, vehicle re-ID suffers from much smaller inter-class variations
and larger intra-class variations, as shown in Fig. 1. In the top rectangle,
each column represents a near-duplicate pair of vehicles with different
identities. Due to diversified viewpoints, illuminations, and orientations of
vehicle images, they differ only slightly, as highlighted. Concretely,
vehicles with distinct labels can be of the same model and the same color. Two
different vehicles of the same model and color only differ in minor details,
e.g. brands, individual decorations and scratches. Additionally, the same
vehicle’s images with various orientations, such as front, side, and rear,
only share little visual overlap. As human sizes are smaller than vehicles,
the images of one specific person with diverse viewpoints retain more
appearance similarities.
Figure 1: (1) Different vehicles share extremely analogous appearances. (2)
Large intra-class variations. (3) In person re-ID, the same person’s images
from different aspects share more visual overlap than those in vehicle re-ID.
Considering the sophisticated relationship between intra-class and inter-class
discrepancies, conventional methods typically focus on hand-craft features [4,
5, 6], including geometric attributes, color histograms, and textures. Their
major disadvantage lies in their sensitivity to background clutter and large
variations of light conditions and viewpoints, thus leading to restricted
applications in practical situations. Recently, deep features have dominated
many computer vision tasks, such as object detection [7, 8, 9, 10], semantic
segmentation [11] and action recognition [12, 13, 14]. Hence, researchers have
embraced deep features into vehicle re-ID to construct a more efficient
feature representation based on two strategies: 1) visual representation and
2) multi-modal representation.
The first strategy exploits the visual representation of vehicles. Refs. [15,
16, 17] are devoted to establishing a global description of vehicle images. However,
the global features have limited performance when dealing with inevitable
visual ambiguities among different vehicles and dramatic changes of
uncontrolled variations of the same vehicle. This inspired several works [18,
19, 20] to seek helpful visual clues for distinguishing subtle differences
between vehicle images. However, these methods mainly focus on informative
region localization rather than how different degrees of importance are
assigned to local regions.
To solve this problem, we develop a discriminative feature representation
method to explore more fine-grained features by introducing an appearance
module with two streams, i.e., the coarse-grained and the fine-grained feature
streams respectively, to describe visual information of different
granularities. The coarse-grained feature stream extracts deep features from
the global network, presenting a macroscopic impression of images. Besides,
the fine-grained feature stream pulls samples of the same class closer
together while pushing those of different classes apart in the feature
embedding.
The second strategy aims to discover and utilize multi-modal information to
improve the performance of vehicle re-ID algorithms, because visual appearance
is not always reliable, especially in unconstrained environments with
excessively dynamic changes. Such information includes license plates [15] and vehicle
models [15, 21, 22, 23]. As aforementioned, license plate recognition is
vulnerable and involves privacy problems. Moreover, vehicle models require
manually annotated labels, which are laborious and uneconomical. In contrast,
the spatio-temporal information of vehicle re-ID problem is usually available
due to the universalness of video monitoring systems. Refs. [17, 24, 18, 25,
26] employ the spatio-temporal information as the refinement of appearance
features. Ref. [17] defines a spatio-temporal similarity between image pairs
based on the approximate statistics of datasets. Ref. [24] uses Chain of
Markov Random Field to model visual spatio-temporal paths. However, the missed
detections of vehicles on the traversed spatio-temporal paths would degrade the
vehicle re-ID algorithm’s overall performance. Moreover, ref. [25] constructs
spatio-temporal constraints and refines the matching problem by a transfer
time matrix. However, the acquisition of timestamps in [25] requires the
preprocessing of a multi-camera multi-target tracking task, which harms the
method’s adaptability and extendibility.
person re-ID also explore spatio-temporal information, which can be mainly
divided into two manners. One is to excavate implicit spatial-temporal
information in videos [27, 28]. The other is to use explicit spatio-temporal
information as physical constraints to reduce the complexity of the matching
algorithm [29, 30]. The spatio-temporal models for person re-ID cannot be
straightforwardly applied to vehicle re-ID due to severe performance degradation.
In particular, since vehicles move much faster than people, the assumption of
constrained location prediction [30] is no longer reasonable.
To solve the above problem, we propose a spatio-temporal module to obtain a
more robust model for identifying vehicles. Specifically, the distances
between camera pairs and the time intervals are modeled as a distribution
rather than a transfer matrix [25], spatio-temporal constraints [26] or visual
spatio-temporal paths [24]. By modeling the camera locations’ distance and the
discrepancy of timestamps as random variables, we formulate the spatio-
temporal relationship quantitatively in a simple yet effective manner. By adding
the spatio-temporal module, we observe an evident performance improvement.
In summary, we propose a novel discriminative representation with spatio-
temporal information (DFR-ST) to establish a robust feature embedding with
multi-modal cues for vehicle re-ID. The main contributions of DFR-ST are
three-fold:
* •
The proposed DFR-ST constructs the appearance representation by the two-stream
architecture to extract the coarse-grained and the fine-grained features.
Besides, the combination of an attention mechanism and division operations
drives the fine-grained visual representation to focus on more salient and
informative regions.
* •
The spatio-temporal module is proposed to form a complementary representation
with the visual appearance by taking multi-modal cues into sufficient
considerations.
* •
Extensive experiments on two large-scale benchmarks indicate the effectiveness
and robustness of the proposed DFR-ST and it achieves the state-of-the-art
performance.
The rest of this article is organized as follows. Section II refers to the
related works of vehicle re-ID and person re-ID. Section III introduces the
proposed DFR-ST method while Section IV presents the experimental results and
analyses. Section V concludes this article.
## II Related Works
### II-A Vehicle Re-Identification
The earliest vision-based works [4, 5, 6] design hand-craft features
scrupulously to identify vehicles. However, hand-craft features have limited
capability in practice, because heavy occlusions and drastic light changes in
unconstrained situations would degrade the performance when modeling
discriminative features. Since deep features exhibit powerful strength on
multiple vision tasks including image classification [31, 32, 33], object
detection [7, 8, 9, 10] and action recognition [12, 13, 14], researchers
exploit deep features for vehicle re-ID mainly through two approaches as
follows.
The first approach is constructing visual representation to tackle the issue
of identifying the same vehicle. Wang _et al._ [18] aggregate four local
features with different directions to describe an orientation-invariant
feature. However, this work requires a manual classification of the key points
on vehicles. He _et al._ [19] utilize an object detection method to extract
partial features for near-duplicate vehicles. Chen _et al._ [20] propose a
two-branch network with partitions on the height and the width channels to
maximally distinct local features. The experimental results illustrate that
the channel-wise partition improves most. Our work shares a similar idea with
PRN [20], which fuses the global and the local features to establish an
overall embedding. However, observing that the above methods focus on the
local region localization instead of assigning different degrees of importance
to different informative regions, we design a more discriminative
representation by an attention network and the collaboration of divisions
along three dimensions. Moreover, experiments demonstrate that our algorithm
performs better than [20] on public datasets.
The second approach aims to improve vehicle re-ID performance with multi-modal
information. The most intuitive idea is the introduction of license plates
[15]. However, this unique information relies on the performance of character
recognition tasks, which inevitably suffer from low-resolution videos,
frequent occlusions and motion blur. Besides, the model type information of
vehicles has also been explored to benefit matching vehicles. Guo _et al._ [21]
propose a structured feature embedding by defining a vehicle model
classification loss and two ranking losses with distinct granularities.
Although the fine-grained ranking loss has a huge contribution to the final
performance, acquiring vehicle models with manual annotation is uneconomical
and laborious.
In contrast, spatio-temporal information for vehicle re-identification is
usually available because of the rapid popularization of video monitoring
systems. Hence, it is more efficient to incorporate spatio-temporal cues in
the algorithms to complement the visual appearance information with lower
costs. Wang _et al._ [18] propose the orientation-invariant visual
representation to describe the macroscopic embedding of vehicles with
constraints on spatio-temporal relationships. Shen _et al._ [24] establish
candidate paths using Chain of Markov Random Field and they employ a siamese
architecture with long short term memory (LSTM) network units to model the
visual appearance and spatio-temporal information. Lv _et al._ [25] adopt the
combination of three different features learned by various losses to identify
vehicles. The spatio-temporal constraint is realized by a transfer time
matrix, which refines the search space and reduces the computational complexity.
However, we notice that the existing models of spatio-temporal information are
relatively qualitative and intuitive, and lack a more precise
mathematical description. Therefore, we propose a spatio-temporal module to
construct multi-modal representation by modeling camera locations and time
intervals as random variables. Different from [18], which only focuses on
transition time intervals but neglects spatial information, we consider both
temporal and spatial clues simultaneously by a quantitative formulation.
Experiments demonstrate the effectiveness of the proposed spatio-temporal
module.
### II-B Person Re-Identification
Most existing vision-based approaches could be classified into two categories.
The first category focuses on improving feature representation against pose
variations, cluttered backgrounds, and distinct camera viewpoints. The most common
strategy is to train the deep network on multiple local regions and combine
all local branches with the global one [35, 36, 37]. The second category
considers metric learning [38, 39, 40]. Besides, some works utilize multi-
modal information including RGB-D data [41], pose estimation [42] and
segmentation annotations [43], because the collaboration of multi-modal data
can provide further performance gains. However, the cost of data acquisition
is a non-negligible problem. As for vehicle re-ID, camera locations and
timestamps of videos are easy to obtain because of surveillance systems’
prevalence. Hence, taking spatio-temporal information into consideration is a
reasonable proposal and can promote the performance of vehicle re-ID methods
without much higher costs. Note that our proposed spatio-temporal module does
not need camera calibration information, which is usually unavailable and
would introduce additional computing costs and noise.
Figure 2: An overview of our DFR-ST approach. The proposed method contains the
appearance module and the spatio-temporal module—the former aims to establish
a discriminative feature embedding through a two-stream structure by involving
multi-grained features, and the latter uses camera locations and timestamps as
the spatio-temporal cues to construct further refinement.
## III Methodology
In this section, we design an appearance module and a spatio-temporal module,
aiming to empower the capability of DFR-ST for discriminative and robust
feature representation. Detailed descriptions of each module in the proposed
DFR-ST method are given in the following subsections.
### III-A System Architecture
Fig. 2 illustrates the overall framework of the proposed DFR-ST. The
appearance module is responsible for modeling visual features, in which input
images are fed into a backbone network followed by two streams: _Coarse-
Grained Feature Stream_ and _Fine-Grained Feature Stream_ , as shown in Fig.
3. Meanwhile, the spatio-temporal module establishes the spatial and temporal
distances to provide extra information for identifying the same vehicle.
Finally, the combination of visual and spatio-temporal representation measures
the similarity of vehicle images, producing the ranking list of gallery
images.
The appearance module is composed of a coarse-grained feature stream and a
fine-grained feature stream. The coarse-grained feature stream extracts the
general feature representation $\mathbf{x}_{c}$ to deal with the complicated
relationship of the inter-class and intra-class variation. This stream
attempts to enlarge the distances of the samples with distinct identities in
the embedding space. However, only general features could fail to process the
detailed discrepancies of input images, especially dealing with the images
with only slight differences such as private decorations, irregular scratches,
and brands. Hence, $\mathbf{x}_{c}$ is replenished with $\mathbf{x}_{f}$ from
the fine-grained feature stream, which can take account of local regions and
salient parts.
Moreover, we propose a spatio-temporal module to exploit extra minutiae,
considering that images with intricate changes in unconstrained environments
degrade the visual appearance’s effectiveness in practical scenarios. In
particular, we apply the camera location and timestamp information as the
spatio-temporal cues, which are usually easy to acquire due to the prevalence
of the security video systems. Thus, the collaboration of the visual
appearance representation and the spatio-temporal clues enhances the final
vehicle re-ID performance.
Figure 3: The schematic diagram of the appearance module in DFR-ST. The
shallow representation extracted from the backbone is transmitted to the
coarse-grained feature stream for constructing a macroscopic feature embedding
and the fine-grained feature stream for a microscopic representation
simultaneously. The fine-grained feature stream contains an AttentionNet and
four branches including division operations of three dimensions to extract
high-quality part-level representations. Afterward, the aggregation of two-
stream features constructs a discriminative feature representation for
identifying the same vehicle.
### III-B Appearance Module
#### III-B1 Coarse-Grained Feature Stream
The coarse-grained feature stream extracts a macroscopic description of input
images. Features obtained from the backbone are delivered to the coarse-
grained feature stream for further processing. Two residual blocks are placed
successively as the Global Network, followed by an average pooling operation
and a $1\times 1$ convolution operation. To promote the overall performance,
we adopt the BNNeck strategy [44] to alleviate the inconsistency of the
triplet loss and the ID loss in the embedding. Moreover, we set the stride of
down-sampling operations in the last convolutional layer to 1 to maintain more
deep information.
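A minimal PyTorch sketch of this head follows. It assumes a 2048-channel feature map from the backbone and omits the Global Network’s two residual blocks; the BNNeck of [44] is a batch normalization layer between the embedding used by the triplet loss and the classifier used by the ID loss. Layer sizes and names are illustrative.

```python
import torch.nn as nn

class CoarseGrainedHead(nn.Module):
    """Global branch head: average pooling + 1x1 convolution, then a BNNeck classifier."""
    def __init__(self, in_channels=2048, embed_dim=512, num_classes=576):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.reduce = nn.Conv2d(in_channels, embed_dim, kernel_size=1)
        self.bnneck = nn.BatchNorm1d(embed_dim)            # BNNeck between triplet and ID features
        self.classifier = nn.Linear(embed_dim, num_classes, bias=False)

    def forward(self, feat_map):
        x_c = self.reduce(self.pool(feat_map)).flatten(1)  # coarse-grained feature, fed to L_tri
        logits = self.classifier(self.bnneck(x_c))         # normalized feature, fed to L_ce
        return x_c, logits
```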
#### III-B2 Fine-Grained Feature Stream
In contrast to the coarse-grained stream, the fine-grained feature stream captures
the local regions’ microcosmic features to distinguish challenging near-duplicate
vehicles. The AttentionNet receives the backbone network features to construct
an attentive feature representation, followed by four branches for further
processing. The first branch considers assigning reasonable weights to
different local regions. The remaining three branches conduct the divisions of
features along three dimensions, i.e., the height-wise, the width-wise, and
the channel-wise operations. The division strategy is inspired by the height
slicing strategy in the person re-ID approach [45]. But unlike the vertical
partition of a human body, a single-vehicle has semantic partitions along both
vertical and horizontal dimensions. Specifically, the vehicle is composed of
the car roof, the window, the bumper, the license plate, the chassis in the
vertical dimension, and the rearview mirror, the door, the main body in the
horizontal dimension. Hence, we design the channel, the height, and the width
division branches concurrently because different channels carry independent
semantic information.
The architecture of AttentionNet is shown in Fig. 4. The feature
$\mathbf{X}_{0}\in\mathbb{R}^{C\times H\times W}$ is firstly processed with
the channel-domain attention to learn a channel feature map
$\mathbf{g}_{c}\in\mathbb{R}^{C}$, then followed by the spatial-domain
attention with the learned attention map $\mathbf{g}_{s}\in\mathbb{R}^{1\times
H\times W}.$ $C,H,W$ denote the channel, height, and width dimensions of
$\mathbf{X}_{0}$. The above procedure is:
$\mathbf{X}_{1}(i,j,k)=\mathbf{g}_{c}(i)\,\mathbf{X}_{0}(i,j,k),\quad\forall i,$
$\mathbf{X}_{2}(i,j,k)=\mathbf{g}_{s}(j,k)\,\mathbf{X}_{1}(i,j,k),\quad\forall j,k.$
In particular, $\mathbf{X}_{1}$ and $\mathbf{X}_{2}$ denote the output
features after channel and spatial attention operations successively. $i,j,k$
are indexes of the channel, height, and width dimensions.
Fig. 5 illustrates the designed structure of channel attention and spatial
attention. Define the transformation $\mathcal{T}_{c}$:
$\mathbf{X}_{0}\rightarrow\mathbf{g}_{c}$. The input feature $\mathbf{X}_{0}$
is first transmitted to the squeeze operation consisting of a global average
pooling and a maximum pooling. The parallel pooling operations can aggregate
feature maps along the spatial dimension, symbolizing the overall distribution
of channel-domain responses. Second, a multi-layer perceptron with one
hidden layer processes the element-wise summation of the two pooling operations.
Finally, a sigmoid function is applied to grasp channel-domain dependencies.
The above procedure is:
$\mathbf{g}_{c}=\mathcal{T}_{c}(\mathbf{X}_{0})=\sigma\\{\text{MLP}[\text{P}_{\text{avg}}(\mathbf{X}_{0})]+\text{MLP}[\text{P}_{\text{max}}(\mathbf{X}_{0})]\\}=\sigma\\{\mathbf{W}_{2}\delta[\mathbf{W}_{1}(\mathbf{x}^{\text{c}}_{\text{avg}})]+\mathbf{W}_{2}\delta[\mathbf{W}_{1}(\mathbf{x}^{\text{c}}_{\text{max}})]\\},$
where
$\mathbf{x}^{\text{c}}_{\text{avg}}(i)=\frac{1}{W\times H}\sum^{H}_{j=1}\sum^{W}_{k=1}\mathbf{X}_{0}(i,j,k),\qquad\mathbf{x}^{\text{c}}_{\text{max}}(i)=\max_{j,k}\mathbf{X}_{0}(i,j,k).$
In particular, $\sigma$ is a sigmoid function and $\delta$ represents a ReLU
function. MLP denotes a multi-layer perceptron with one hidden layer.
$\text{P}_{\text{avg}}$ and $\text{P}_{\text{max}}$ are average pooling and
maximum pooling operators. $\mathbf{W}_{1}\in\mathbb{R}^{\frac{C}{k}\times C}$
and $\mathbf{W}_{2}\in\mathbb{R}^{C\times\frac{C}{k}}$ refer to the weights of
two convolution layers in MLP, where the reduction ratio $k$ is 16 in the
experiments. $\mathbf{x}^{\text{c}}_{\text{avg}}$ and
$\mathbf{x}^{\text{c}}_{\text{max}}$ are the average and maximum pooling
results of $\mathbf{X}_{0}$.
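The channel attention transform $\mathcal{T}_{c}$ can be sketched in PyTorch as follows: parallel average and maximum pooling, a shared one-hidden-layer MLP ($\mathbf{W}_{1}$, $\mathbf{W}_{2}$ as $1\times 1$ convolutions), element-wise summation, and a sigmoid; the reduction ratio is $k=16$ as in the experiments.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Computes g_c from X0 and returns X1(i,j,k) = g_c(i) * X0(i,j,k)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)      # P_avg
        self.max_pool = nn.AdaptiveMaxPool2d(1)      # P_max
        self.mlp = nn.Sequential(                    # shared MLP with one hidden layer (W1, W2)
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )

    def forward(self, x):                            # x: (B, C, H, W)
        g_c = torch.sigmoid(self.mlp(self.avg_pool(x)) + self.mlp(self.max_pool(x)))
        return x * g_c
```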
Spatial attention concentrates on the location information of salient parts.
Define the transformation
$\mathcal{T}_{s}:\mathbf{X}_{1}\rightarrow\mathbf{g}_{s}$. The squeeze
operation processes $\mathbf{X}_{1}$ with average pooling and maximum pooling
operations across the channel dimension. Moreover, the concatenation of pooled
feature maps is passed through a single convolution layer, followed by a
sigmoid function. Mathematically,
$\mathbf{g}_{s}=\mathcal{T}_{s}(\mathbf{X}_{1})=\sigma\\{\mathbf{W}([\mathbf{x}^{\text{s}}_{\text{max}};\mathbf{x}^{\text{s}}_{\text{avg}}])\\},$
where
$\mathbf{x}^{\text{s}}_{\text{max}}(j,k)=\max_{i}\mathbf{X}_{1}(i,j,k),\qquad\mathbf{x}^{\text{s}}_{\text{avg}}(j,k)=\frac{1}{C}\sum^{C}_{i=1}\mathbf{X}_{1}(i,j,k),\qquad\forall j,k.$
Specifically, $\sigma$ denotes the sigmoid function, and $\mathbf{W}$ is the
weight of the convolution layer. $\mathbf{x}^{\text{s}}_{\text{max}}$ and
$\mathbf{x}^{\text{s}}_{\text{avg}}$ represent the output feature maps of
maximum pooling and average pooling operations across the channel axis.
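Similarly, a sketch of the spatial attention transform $\mathcal{T}_{s}$: channel-wise maximum and average maps are concatenated and passed through a single convolution and a sigmoid. The $7\times 7$ kernel size is an assumption; it is not stated in the text.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Computes g_s from X1 and returns X2(i,j,k) = g_s(j,k) * X1(i,j,k)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                                 # x: (B, C, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)    # x^s_max
        avg_map = torch.mean(x, dim=1, keepdim=True)      # x^s_avg
        g_s = torch.sigmoid(self.conv(torch.cat([max_map, avg_map], dim=1)))
        return x * g_s
```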
#### III-B3 Objective Function
The total loss is the weighted summation of a cross-entropy loss and a triplet
loss. The loss of a batch with $N$ images is written as:
$L_{\text{all}}=L_{\text{ce}}+\lambda L_{\text{tri}},$ (1)
where $L_{\text{ce}}$ is the cross-entropy loss and $L_{\text{tri}}$ is the
triplet loss with the batch-hard sampling strategy. $\lambda$ is a weight
hyperparameter. The label smoothing strategy [46] is employed with the cross-
entropy loss to alleviate the overfitting problem. Hence, the cross-entropy
loss $L_{\text{ce}}$ can be delineated as:
$L_{\text{ce}}=-\frac{1}{N}\sum_{i=0}^{N-1}\sum_{k=0}^{K-1}p_{i,k}\log
q_{i,k}$ (2)
where
$\displaystyle p_{i,k}=\left\\{\begin{aligned} &1-\varepsilon,&\text{if}\
y_{i}=k\\\ &\frac{\varepsilon}{K-1},&\text{if}\ y_{i}\neq
k\end{aligned}\right.,\quad
q_{i,k}=\frac{\exp[\Phi_{k}(\mathbf{I}_{i})]}{\sum_{k^{\prime}=0}^{K-1}\exp[\Phi_{k^{\prime}}(\mathbf{I}_{i})]}.$
In particular, $i\in\\{0,\cdots,N-1\\}$ is the index of images in the batch
and $k\in\\{0,\cdots,K-1\\}$ is the index of $K$ classes. $p_{i,k}$ denotes
the distribution after label smoothing. $y_{i}$ is the ground truth label of
the $i$th image $\mathbf{I}_{i}$. The hyperparameter $\varepsilon\in[0,1]$ is
a weight factor. $q_{i,k}$ is the network prediction probability of
$\mathbf{I}_{i}$ to the $k$th class, and $\Phi_{k}(\cdot)$ denotes the logit that the appearance
module produces for the $k$th class. Furthermore, the triplet loss $L_{\text{tri}}$ is:
$L_{\text{tri}}=\sum_{i}^{N}[||\mathbf{x}_{i}^{a}-\mathbf{x}_{i}^{p}||_{2}^{2}-||\mathbf{x}_{i}^{a}-\mathbf{x}_{i}^{n}||_{2}^{2}+m]_{+},$
(3)
where $\mathbf{x}_{i}^{a}$, $\mathbf{x}_{i}^{p}$, $\mathbf{x}_{i}^{n}$ denote
features of anchors, positive and negative samples respectively. $[\cdot]_{+}$
represents the hinge function, and $m$ controls the margin of distances
between the positive and the negative samples to the anchors.
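A PyTorch sketch of the objective: the label-smoothed cross-entropy of Eq. (2), a batch-hard triplet loss as in Eq. (3) (here averaged over the batch), and their combination with the weight $\lambda$ of Eq. (1). The default values follow Section IV-B.

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, labels, eps=0.1):
    """Cross-entropy with label smoothing, Eq. (2)."""
    K = logits.size(1)
    log_q = F.log_softmax(logits, dim=1)
    p = torch.full_like(log_q, eps / (K - 1))
    p.scatter_(1, labels.unsqueeze(1), 1.0 - eps)
    return -(p * log_q).sum(dim=1).mean()

def batch_hard_triplet(features, labels, margin=1.2):
    """Triplet loss with batch-hard mining over squared Euclidean distances, Eq. (3)."""
    dist = torch.cdist(features, features, p=2) ** 2
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    hardest_pos = (dist * same.float()).max(dim=1).values          # hardest positive per anchor
    hardest_neg = (dist + same.float() * 1e6).min(dim=1).values    # hardest negative per anchor
    return F.relu(hardest_pos - hardest_neg + margin).mean()       # hinge [.]_+

def total_loss(logits, features, labels, lam=0.4):
    """L_all = L_ce + lambda * L_tri, Eq. (1)."""
    return label_smoothing_ce(logits, labels) + lam * batch_hard_triplet(features, labels)
```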
Figure 4: The architecture of AttentionNet. The sequential placement of the
channel and spatial attention are added in the network’s last residual block
to further learn an attentive feature representation from local regions
adaptively. Figure 5: The block diagrams of the channel and spatial attention
operations. $\oplus$ denotes the element-wise summation. $\mathbf{g}_{c}$,
$\mathbf{g}_{s}$ denote the learned channel and spatial attention maps
respectively.
#### III-B4 Backbone
The backbone network establishes a shallow representation of visual
appearance. In the proposed DFR-ST, the backbone network adopts the first
three blocks of ResNet-50 network for its flexible architecture and superior
performance. As shown in Fig. 3, we duplicate subsequent convolutional layers
after the first three residual blocks to split the ResNet-50 network into two
feature branches. Other convolutional network architectures designed for
deep learning can also be adopted as the backbone.
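A sketch of this split, assuming torchvision's ResNet-50: the stem and first three residual stages are shared, the fourth stage is duplicated to seed the two streams, and the down-sampling stride of the copy feeding the coarse-grained stream is set to 1 as described above.

```python
import copy
import torch.nn as nn
from torchvision.models import resnet50

class TwoStreamBackbone(nn.Module):
    """Shared first three ResNet-50 stages, duplicated fourth stage per stream."""
    def __init__(self):
        super().__init__()
        net = resnet50()  # pretrained weights can be loaded as usual
        self.shared = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool,
                                    net.layer1, net.layer2, net.layer3)
        self.coarse_branch = copy.deepcopy(net.layer4)
        self.fine_branch = copy.deepcopy(net.layer4)
        # Keep spatial resolution in the coarse branch: last down-sampling stride set to 1.
        self.coarse_branch[0].conv2.stride = (1, 1)
        self.coarse_branch[0].downsample[0].stride = (1, 1)

    def forward(self, x):
        shared = self.shared(x)
        return self.coarse_branch(shared), self.fine_branch(shared)
```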
### III-C Spatio-temporal Module
In real-world applications, the appearance model is not sufficient to
construct a discriminative representation in the presence of noise from
intricate backgrounds and severe occlusions. Therefore, camera location and
timestamp information, available in urban surveillance and intelligent
transportation systems, can provide extra content on vehicle attributes.
The fundamental assumption is that two images with smaller spatial or temporal
distances are more likely to belong to the same identity, based on the
observation in [15], and vice versa. According to this assumption, we propose
a novel spatio-temporal measurement, in which the spatial similarity $D_{s}$
and the temporal similarity $D_{t}$ between a query image $i$ from camera
$c_{i}$ and a gallery image $j$ from camera $c_{j}$ are as follows:
$\displaystyle D_{s}$
$\displaystyle=\frac{1}{1+\exp(\alpha_{1}[p(\delta|\mu_{\delta},\sigma_{\delta})-\alpha_{2}])},$
(4) $\displaystyle D_{t}$
$\displaystyle=\frac{1}{1+\exp(\beta_{1}[p(\tau|\mu_{\tau},\sigma_{\tau})-\beta_{2}])},$
(5)
where $\delta$ denotes the shortest distance on Google map between camera
$c_{i}$ and $c_{j}$. $\tau$ is the discrepancy of time stamps between $i$ and
$j$. The hyperparameters $\alpha_{1}$, $\alpha_{2}$ and $\beta_{1}$,
$\beta_{2}$ ensure that higher probabilities correspond to smaller spatio-
temporal distances. $p(\delta|\mu_{\delta},\sigma_{\delta})$ and
$p(\tau|\mu_{\tau},\sigma_{\tau})$ describe the estimation of conditional
probabilities of $\delta$ and $\tau$ with parameters
$(\mu_{\delta},\sigma_{\delta})$ and $(\mu_{\tau},\sigma_{\tau})$, which are
modeled as log-normal distributions:
$p(\delta|\mu_{\delta},\sigma_{\delta})=\ln\mathcal{N}(\delta;\mu_{\delta},\sigma_{\delta})=\frac{1}{\delta\sqrt{2\pi\sigma_{\delta}^{2}}}\exp[-\frac{(\ln\delta-\mu_{\delta})^{2}}{2\sigma_{\delta}^{2}}],$
$p(\tau|\mu_{\tau},\sigma_{\tau})=\ln\mathcal{N}(\tau;\mu_{\tau},\sigma_{\tau})=\frac{1}{\tau\sqrt{2\pi\sigma_{\tau}^{2}}}\exp[-\frac{(\ln\tau-\mu_{\tau})^{2}}{2\sigma_{\tau}^{2}}].$
The parameters $(\mu_{\delta},\sigma_{\delta})$ and
$(\mu_{\tau},\sigma_{\tau})$ of the distributions are estimated by maximizing
the following likelihood functions:
$L(\delta|\mu_{\delta},\sigma_{\delta})=\prod_{i=1}^{N}\left(\frac{1}{\delta_{i}}\right)\mathcal{N}(\ln\delta_{i};\mu_{\delta},\sigma_{\delta}),\qquad L(\tau|\mu_{\tau},\sigma_{\tau})=\prod_{i=1}^{N}\left(\frac{1}{\tau_{i}}\right)\mathcal{N}(\ln\tau_{i};\mu_{\tau},\sigma_{\tau}).$
where $N$ is the number of images in the gallery set.
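A NumPy sketch of this module: the maximizers of the likelihoods above are simply the mean and standard deviation of the log-transformed samples, and Eqs. (4)–(5) then turn the densities into similarities. The hyperparameter values follow Section IV-B.

```python
import numpy as np

def fit_lognormal(samples):
    """MLE of (mu, sigma) for a log-normal: mean and std of the log-transformed samples."""
    logs = np.log(np.asarray(samples, dtype=float))
    return logs.mean(), logs.std()

def lognormal_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * np.sqrt(2 * np.pi * sigma ** 2))

def st_similarity(delta, tau, params_delta, params_tau,
                  alpha1=6.0, alpha2=6.0, beta1=0.5, beta2=0.5):
    """Spatial and temporal similarities D_s and D_t of Eqs. (4) and (5)."""
    D_s = 1.0 / (1.0 + np.exp(alpha1 * (lognormal_pdf(delta, *params_delta) - alpha2)))
    D_t = 1.0 / (1.0 + np.exp(beta1 * (lognormal_pdf(tau, *params_tau) - beta2)))
    return D_s, D_t
```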
In this way, we establish the spatio-temporal module to provide an additional
refinement for vehicle re-ID. Among the existing spatio-temporal models [15,
18, 24, 26, 25], the most similar one to ours is [18]. Different
from [18], which considers the transition time intervals of vehicles but does
not model the spatial information, our DFR-ST involves both temporal and
spatial cues simultaneously. Therefore, we can explicitly obtain a
quantitative formulation of spatio-temporal relationships to promote overall
performance.
Hence, the entire retrieval process is as follows. DFR-ST firstly computes an appearance
distance $D_{a}$ from the appearance module, which describes the
distance of the query image $i$ and the gallery set in the feature embedding
space. Secondly, the spatio-temporal module constructs the spatio-temporal
similarity $D_{st}$ to evaluate the similarities in spatial and temporal
domains. The overall similarity between the query image $i$ and the gallery
image $j$ is calculated as:
$D(j)=D_{a}(j)+\omega[D_{s}(i,j)+D_{t}(i,j)],\forall i.$ (6)
$\omega$ is a weighting parameter. $D_{s}(i,j)$ and $D_{t}(i,j)$ model the
spatial and temporal similarities between $i$ and $j$ respectively. Finally,
the ranking list of the proposed DFR-ST is obtained from the weighted summation
of $D_{a}$ and $D_{st}$.
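The final ranking step of Eq. (6) can then be sketched as follows; `appearance_dist` stands for the appearance distance $D_{a}$ between the query embedding and each gallery embedding, and $\omega=0.2$ as in Section IV-B.

```python
import numpy as np

def rank_gallery(appearance_dist, D_s, D_t, omega=0.2):
    """Overall distance D(j) = D_a(j) + omega * (D_s(i,j) + D_t(i,j)), Eq. (6).

    All arguments are arrays over the gallery indexed by j for a fixed query i;
    returns gallery indices sorted from most to least similar (ascending distance).
    """
    D = np.asarray(appearance_dist) + omega * (np.asarray(D_s) + np.asarray(D_t))
    return np.argsort(D)
```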
## IV Experiments
In this section, the proposed DFR-ST is evaluated on public datasets and
compared with state-of-the-art methods. Extensive experiments are conducted to
demonstrate the effectiveness and the robustness of DFR-ST at the end of this
section.
### IV-A Datasets
The experiments are executed on two public large-scale datasets, _i.e._ ,
VeRi-776 [15] and VehicleID [47]. The following subsections introduce the two
datasets and evaluation metrics.
Figure 6: Visualization of the ranking lists of DFR-ST on VeRi-776. The re-
identification results are listed in ascending order of the appearance and the
spatio-temporal distance between the query image and the gallery images. The
green (red) boxes denote the correct (wrong) re-identification.
#### IV-A1 VeRi-776
The VeRi-776 dataset contains 49,357 images of 776 different vehicles captured
by 20 cameras involving various viewpoints, illumination changes, and
background clutters. The training set consists of 37,778 images with 576
identities, and the test set receives the remaining 11,579 images with 200
identities. Moreover, the selected 1,678 images of the testing set constitute
the query set. Followed [15], the evaluation metrics of VeRi-776 are mean
Average Precision (mAP), Top-1, and Top-5 accuracy of Cumulative Match Curve
(CMC) corresponding to the image-to-track search.
#### IV-A2 VehicleID
The VehicleID dataset was released after VeRi-776 and includes 221,763
images with 26,267 identities and 250 models. Researchers annotate 90,196
images with model labels. Among the identities, the training set contains
13,164 identities, and the rest remains for the test set. Three test splits
for different gallery sizes are 800, 1,600 and 2,400, as depicted in Table I.
For each test split, the test set randomly selects images of distinct
identities to form the gallery set, and the same procedure repeats ten times.
The average results of 10-times procedures are the final performance.
Evaluation metrics on VehicleID are mAP, Top-1, and Top-5 accuracy of CMC.
TABLE I: Three test splits of VehicleID dataset.
 | Small | Medium | Large
---|---|---|---
Query Images | 800 | 1,600 | 2,400
Gallery Images | 6,493 | 13,377 | 19,777
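For reference, the sketch below outlines how the CMC Top-k and mAP metrics used on both datasets can be computed from a query-gallery distance matrix; it omits the camera-id filtering and the image-to-track protocol used on VeRi-776, so it is an illustrative simplification rather than the official evaluation code.

```python
import numpy as np

def evaluate(dist, query_ids, gallery_ids, topk=(1, 5)):
    """CMC Top-k and mAP for a distance matrix of shape (num_query, num_gallery)."""
    num_q = dist.shape[0]
    cmc_hits = {k: 0.0 for k in topk}
    average_precisions = []
    for i in range(num_q):
        order = np.argsort(dist[i])                    # best matches first
        matches = (gallery_ids[order] == query_ids[i])
        for k in topk:
            cmc_hits[k] += float(matches[:k].any())
        hit_ranks = np.flatnonzero(matches)            # 0-indexed ranks of correct matches
        if hit_ranks.size:
            precisions = np.arange(1, hit_ranks.size + 1) / (hit_ranks + 1)
            average_precisions.append(precisions.mean())
    cmc = {k: cmc_hits[k] / num_q for k in topk}
    return cmc, float(np.mean(average_precisions))
```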
### IV-B Implementation Details
The backbone network for shallow feature extraction adopts ResNet-50 [48]. In
the coarse-grained feature stream, the average pooling operation is employed
after three residual blocks and is followed by a $1\times 1$ convolutional layer.
$\lambda$ in Eq. (1) is set to 0.4 and $\varepsilon$ in Eq. (2) is set to 0.1.
$m$ in Eq. (3) is set to 1.2. Moreover, the parameters $\alpha_{1}$ and
$\alpha_{2}$ are set to 6 while $\beta_{1}$ and $\beta_{2}$ are set to 0.5 in
Eq. (4) and Eq. (5). The weighted parameter $\omega$ in Eq. (6) is set to 0.2.
As for the training procedure, we apply the Adam optimizer with a weight decay
of $5\times 10^{-4}$ and a synchronous batch normalization strategy. The initial
learning rate is $1\times 10^{-4}$, which is adjusted by a warmup strategy.
Moreover, we utilize the Euclidean distance to compute the similarity between
the query and gallery images during training and testing. The batch size is
32, with eight randomly selected identities and four images per identity.
We train the appearance module of DFR-ST for 135 epochs on two Nvidia GeForce
GTX 1080 Ti GPUs. The overall DFR-ST is implemented on the PyTorch platform.
Moreover, our work adopts the re-ranking strategy [49] to further boost
the performance, since re-ranking benefits performance [50, 49, 21,
51].
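A minimal PyTorch sketch of the optimization setup described above is given below; the placeholder model and the 10-epoch warmup length are assumptions for illustration, since the text does not specify the exact warmup schedule.

```python
import torch

# Placeholder standing in for the appearance module (assumption).
model = torch.nn.Linear(2048, 576)

# Adam with initial learning rate 1e-4 and weight decay 5e-4, as stated above.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-4)

# Linear warmup factor over the first epochs (warmup length is an assumption).
warmup_epochs = 10
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda epoch: min(1.0, (epoch + 1) / warmup_epochs))

# Each batch of 32 images contains 8 randomly selected identities x 4 images each.
P, K = 8, 4
assert P * K == 32
```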
TABLE II: Comparison with state-of-the-art on VeRi-776.
Methods | mAP | Top-1 | Top-5
---|---|---|---
LOMO [4] | 9.64 | 25.33 | 46.48
BOW-CN [5] | 12.20 | 33.91 | 53.69
FACT [16] | 19.92 | 59.65 | 75.27
SCCN-Ft + CLBL-8-Ft [52] | 25.12 | 60.83 | 78.55
OIN [18] | 48.00 | 65.9 | 87.7
VAMI [53] | 50.13 | 77.03 | 90.82
RNN-HA (ResNet) [22] | 56.80 | 74.49 | 87.31
Hard-View-EALN [54] | 57.44 | 84.39 | 94.05
GRF + GGL [55] | 61.7 | 89.4 | 95.0
QD-DLF [56] | 61.83 | 88.50 | 94.46
SPAN w/ CPDM [57] | 68.9 | 94.0 | 97.6
SAVER [58] | 79.6 | 96.4 | 98.6
HPGN [59] | 80.18 | 96.72 | N/A
PRN [20] | 85.84 | 97.14 | $\mathbf{99.40}$
PRN + ReRanking [20] | 90.48 | 97.38 | 98.87
FACT + Plate-SNN + STR [15] | 27.77 | 61.44 | 78.78
PROVID [17] | 53.42 | 81.56 | 95.11
OIN + ST [18] | 51.42 | 68.3 | 89.7
Siamese-CNN + Path-LSTM [24] | 58.27 | 83.49 | 90.04
VAMI [53] \+ STR [16] | 61.32 | 85.92 | 91.84
ReID + query expansion [25] | 70.8 | 93.2 | 98.0
DFR-ST (ours) | 86.00 | 95.67 | $\underline{99.17}$
DFR-ST (ours) + ReRanking | $\mathbf{91.56}$ | $\mathbf{97.74}$ | 98.41
### IV-C Performance Comparison on VeRi-776
The proposed DFR-ST is compared with state-of-the-art methods on a large-scale
public dataset, i.e., VeRi-776, and the results are presented in Table II.
Note that the performance of the state-of-the-art algorithms is taken directly from the
original papers rather than reproduced.
On VeRi-776, we compare our proposed DFR-ST with the following methods. Local
Maximal Occurrence Representation (LOMO) [4] and Bag of Words with Color Name
[60] (BOW-CN) are based on hand-crafted features originally proposed for
person re-ID [4, 5]. FACT [16] is based on the fusion of color and
attribute features. Hard-View-EALN [54] and VAMI [53] employ adversarial
training between a generator and a discriminator to obtain more robust
cross-view features. Moreover, SCCN-Ft + CLBL-8-Ft [52] utilizes two networks
to learn the local and global multi-view features. OIN [18] produces region
masks based on the clustering of key points and establishes overall features
using these region masks and global features. SPAN w/ CPDM [57] detects
different parts of vehicle images and then generates an attentive feature
representation by aggregating the global and three-part attentive features
together. Besides, RNN-HA (ResNet) [22] employs RNN to capture hierarchical
dependencies for vehicle re-ID. HPGN [59] proposes a pyramid of the spatial
graph networks to handle multi-scale spatial features. PRN + ReRanking [20]
explores partition strategies on three dimensions of feature maps to promote
overall performance further. Moreover, GRF+GGL [55] designs an efficient
group-group loss to accelerate feature learning. QD-DLF [56] defines pooling
operations on four directions to compress basic features and concatenates them
together as deep quadruple features. SAVER [58] focuses on self-supervised
attention mechanisms to improve vehicle re-ID. As for methods using
spatio-temporal clues, ReID + query expansion [25] ensembles multiple
informative features from several existing approaches and builds an
acquisition module for vehicle locations and timestamps. FACT+Plate-SNN+STR
[15] and PROVID [17] both use a license plate recognition module and a spatio-
temporal module based on FACT [16]. OIN+ST [18] constructs a spatio-temporal
regularization module in addition to OIN [18] while VAMI [53] \+ STR [16]
integrates a spatio-temporal similarity to the original model.
Figure 7: Visualization of attention maps of DFR-ST on VehicleID. The red
regions represent the attentive areas captured by the proposed network, which
mostly cover car lights, car bumpers, sunroofs, and car edges.
Table II illustrates the comparison results with state-of-the-art methods on
VeRi-776. The methods listed in the upper part of Table II only use visual
appearance information, while those in the lower part of Table II take
both appearance and spatio-temporal cues into consideration. The proposed
DFR-ST obtains the second-best Top-5 metric of 99.17% and achieves the highest mAP of
91.56% and Top-1 of 97.74% with the re-ranking strategy (DFR-ST + ReRanking).
Firstly, compared with the approaches only based on visual appearance
features, the proposed DFR-ST acquires the highest mAP of 91.56% and the
highest Top-1 of 97.74%. This comparison verifies the intuition on the
effectiveness of adding more information from other domains. Specifically, we
notice that the Top-5 of PRN [20] is only slightly higher (by 0.23%) than that of
our proposed DFR-ST. Although this observation indicates the advantage of the
representation from [20], we show in the ablation studies that the unexpected noises
frequent in unconstrained environments degrade the performance of PRN [20], while our
proposed DFR-ST maintains acceptable performance. This
comparison result validates the superiority of the proposed DFR-ST.
Secondly, among six vehicle re-ID methods based on multi-modal information
(_i.e._ , FACT + Plate-SNN + STR [15], PROVID [17], OIN + ST [18], Siamese-CNN
+ Path-LSTM [24], VAMI [53] \+ STR [16], ReID + query expansion [25]), the
proposed DFR-ST consistently outperforms FACT + Plate-SNN+STR [15], PROVID
[17], OIN + ST [18] and Siamese-CNN + Path-LSTM [24] by at least 30% on
mAP, 12% on Top-1, and 5% on Top-5. This comparison demonstrates that
DFR-ST can effectively exploit multi-modal information through the designed
spatio-temporal scheme. Fig. 6 visualizes the vehicle re-ID results on
VeRi-776 dataset qualitatively. The bounding boxes in green describe the true
positives, and those in red are false positives.
### IV-D Ablation Study
Extensive experiments are conducted on two large-scale datasets, i.e.,
VeRi-776 and VehicleID, to thoroughly analyze the effectiveness of each
component of the proposed approach.
TABLE III: Component effectiveness analysis on VeRi-776.
Methods | mAP | Top-1 | Top-5
---|---|---|---
Baseline | 65.72 | 78.10 | 84.63
$\mathbf{x}_{c}$ | 70.53 | 84.55 | 88.76
$\mathbf{x}_{c}+\mathbf{x}_{\text{f1}}$ | 76.90 | 87.24 | 91.77
$\mathbf{x}_{c}+\mathbf{x}_{\text{f1}}+\mathbf{x}_{\text{fdiv}}$ | 84.47 | 93.02 | 97.13
$\mathbf{x}_{c}+\mathbf{x}_{\text{f1}}+\mathbf{x}_{\text{fdiv}}$ \+ ST | $\mathbf{86.00}$ | $\mathbf{95.67}$ | $\mathbf{99.17}$
#### IV-D1 Component Analysis
We conduct ablation experiments on each component in the proposed DFR-ST to
validate each component’s effectiveness in the overall architecture. Table III
shows the experimental results. The baseline adopts a ResNet-50 network with
the cross-entropy loss. $\mathbf{x}_{c}$ denotes the features extracted from
the coarse-grained feature stream. $\mathbf{x}_{\text{f1}}$ and
$\mathbf{x}_{\text{fdiv}}$ represent the features from the first branch and
the rest three-division branches of the fine-grained feature stream,
respectively. Moreover, $ST$ is the spatio-temporal module.
TABLE IV: Effectiveness of spatio-temporal information on VeRi-776.
Methods | mAP | Top-1 | Top-5
---|---|---|---
FACT w/o STR [16] | 19.92 | 59.65 | 75.27
\+ STR [16] | (+6.85%) | (+1.79%) | (+1.02%)
OIN w/o ST [18] | 48.00 | 65.9 | 87.7
\+ ST [18] | (+3.42%) | (+2.40%) | (+2.00%)
VAMI [53] | 50.13 | 77.03 | 90.82
\+ STR [16] | (+11.19%) | (+8.89%) | (+1.02%)
Siamese-CNN [24] | 54.21 | 79.32 | 88.92
\+ Path-LSTM [24] | (+4.06%) | (+4.17%) | (+1.12%)
DFR-ST w/o ST (ours) | 84.47 | 93.02 | 97.13
\+ STR [16] | (+1.72%) | (+1.52%) | (+0.67%)
\+ ST (ours) | (+1.53%) | (+2.65%) | (+2.04%)
TABLE V: Effectiveness of the appearance module on VehicleID.
Methods | Test Size=800 | Test Size=1600 | Test Size=2400
---|---|---|---
 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5
BOW-CN [5] | N/A | 13.14 | 22.69 | N/A | 12.94 | 21.09 | N/A | 10.20 | 17.89
LOMO [4] | N/A | 19.74 | 32.14 | N/A | 18.95 | 29.46 | N/A | 15.26 | 25.63
FACT + Plate-SNN [15] | 49.2 | 43.62 | 64.84 | 44.8 | 39.94 | 62.98 | 38.6 | 35.68 | 56.24
PROVID [17] | N/A | 48.90 | 69.51 | N/A | 43.64 | 65.34 | N/A | 38.63 | 60.72
DRDL [47] | N/A | 49.0 | 73.5 | N/A | 42.8 | 66.8 | N/A | 38.2 | 61.6
DenseNet121 [61] | 68.85 | 66.10 | 77.87 | 69.45 | 67.39 | 75.49 | 65.37 | 63.07 | 72.57
TAMR [62] | N/A | 66.02 | 79.71 | N/A | 62.90 | 76.80 | N/A | 59.69 | 73.87
VAMI [53] | N/A | 63.12 | 83.25 | N/A | 52.87 | 75.12 | N/A | 47.34 | 70.29
GS-TRE [63] | 75.4 | 75.9 | 84.2 | 74.3 | 74.8 | 83.6 | 72.4 | 74.0 | 82.7
QD-DLF [56] | $76.54$ | 72.32 | 92.48 | $74.63$ | 70.66 | 88.90 | $68.41$ | 64.14 | 83.37
RNN-HA (ResNet + 672) [22] | N/A | 83.8 | 88.1 | N/A | $81.9$ | 87.0 | N/A | $81.1$ | 87.4
PRN (Single Height-Channel Branch) [20] | N/A | 78.92 | 94.81 | N/A | 74.94 | $\underline{92.02}$ | N/A | 71.58 | 88.46
GRF + GGL [55] | N/A | 77.1 | 92.8 | N/A | 72.7 | 89.2 | N/A | 70.0 | 87.1
Hard-View-EALN [54] | 77.5 | 75.11 | 88.09 | 74.2 | 71.78 | 83.94 | 71.0 | 69.30 | 81.42
OIN [18] | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 67.0 | 82.9
SAVER [58] | N/A | 79.9 | $\underline{95.2}$ | N/A | 77.6 | 91.1 | N/A | 75.3 | 88.3
HPGN [59] | $\mathbf{89.60}$ | $\mathbf{83.91}$ | N/A | $\mathbf{86.16}$ | $\mathbf{79.97}$ | N/A | $\mathbf{83.60}$ | 77.32 | N/A
DFR-ST w/o ST (ours) | 87.55 | 82.15 | $\mathbf{95.39}$ | 84.94 | 79.33 | $\mathbf{92.76}$ | 83.18 | $\mathbf{77.93}$ | $\mathbf{89.52}$
DFR-ST w/o ST (ours) + Re-Ranking | 87.76 | 82.71 | 95.02 | 83.58 | 77.66 | 91.28 | 82.28 | 76.96 | 88.48
As depicted in Table III, the first observation is that utilizing only
$\mathbf{x}_{c}$ for vehicle re-ID improves the performance by 4.81% on mAP
compared to the baseline, which confirms the usefulness of combining the cross-
entropy loss and the triplet loss with the batch hard sampling strategy. After
adding $\mathbf{x}_{\text{f1}}$, extracted from the first branch of the fine-
grained feature stream, the mAP metric is enhanced by an additional 6.37%, and
the Top-1 and Top-5 metrics increase by 2.69% and 3.01%, respectively. This
improvement indicates that aggregating the coarse-grained features
$\mathbf{x}_{c}$ with $\mathbf{x}_{\text{f1}}$, which focuses more on local regions
via AttentionNet, provides extra useful information to recognize and
identify the same vehicle. Furthermore, adding
$\mathbf{x}_{\text{fdiv}}$ further promotes the overall performance of the
proposed method: the three metrics improve by 7.57%, 5.78%, and 5.36%, respectively. This
analysis confirms the effectiveness of division operations in different
dimensions. Moreover, $ST$ brings a 1.53% improvement on mAP and 2.65%, 2.04% on
Top-1, Top-5, which verifies the complementarity of spatio-temporal
relationships with appearance features.
#### IV-D2 Effectiveness of the Appearance Module
To validate the effectiveness of the proposed appearance module, we further
compare our proposed DFR-ST with several current methods on VehicleID. Table V
illustrates the mAP, Top-1, and Top-5 metrics of the proposed appearance
module and state-of-the-art methods on the large-scale VehicleID dataset. The
bold values denote the best results and the underlined values the second best.
Note that the VehicleID dataset does not provide any information besides
images, hence we cannot utilize the spatio-temporal module in this performance
comparison.
As shown in Table V, the proposed appearance module outperforms all the
existing methods on the Top-5 metric with small, medium, and large test sizes.
Besides, the proposed DFR-ST also achieves the highest Top-1 of 77.93% with
the large test size, which is the most challenging setting among three test
splits. In particular, compared with the traditional hand-crafted methods LOMO [4]
and BOW-CN [5], deep-learning-based approaches achieve significant
improvements, which verifies the strong non-linear representation capability
of deep neural networks.
Furthermore, our proposed appearance module beats the existing methods
including FACT [16], PROVID [17], DRDL [47], DenseNet121 [61], TAMR [62], VAMI
[53], GS-TRE [63], QD-DLF [56], PRN (Single Height-Channel Branch) [20], GRF +
GGL [55], Hard-View-EALN [54], OIN [18] and SAVER [58] on the mAP, Top-1 and
Top-5 metrics with three test splits. Although RNN-HA (ResNet+672) [22]
achieves a 1.65% higher Top-1 with the small test size than the proposed DFR-ST
w/o ST, the proposed appearance module outperforms RNN-HA (ResNet+672) [22] on
the other metrics with three test sizes, which shows the effectiveness of the
proposed approach. Although our DFR-ST without the spatio-temporal module is a
bit inferior to HPGN [59], the proposed DFR-ST can still obtain rather
competitive performance on the challenging VehicleID dataset, which confirms
the competitiveness of our proposed appearance module.
#### IV-D3 Effectiveness of AttentionNet
We conduct qualitative experiments to examine the effectiveness of multi-
domain attention schemes in AttentionNet. Fig. 7 displays typical examples of
the attention maps obtained by the proposed DFR-ST method for interpreting the
multi-domain attention. Most of the activated regions (marked by the red
color) are the car lights, car windows, the headstocks, and car edges. These
regions can be interpreted by humans easily and have semantic meanings, which
validates the fine-grained features’ capability with the multi-domain
attention mechanism to learn salient and informative regions for identifying
near-duplicate vehicle images. This property helps DFR-ST maintain
acceptable performance under image contamination, which is common
in open and unconstrained environments.
Figure 8: Performance comparison of the proposed DFR-ST and PRN [20] with different percentages of image noise. As the percentage of image contamination becomes larger, the proposed DFR-ST shows a slower rate of performance degradation than PRN [20].
TABLE VI: Performance evaluation of DFR-ST (ours) with different contamination percentages on VehicleID.
Contaminated Percentages | Test Size=800 | Test Size=1600 | Test Size=2400 | Average
---|---|---|---|---
 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5
0% | 87.76 | 82.15 | 95.39 | 84.94 | 79.33 | 92.76 | 83.18 | 77.93 | 89.52 | 85.29 | 79.80 | 92.56
5% | 84.47 | 78.83 | 91.41 | 81.50 | 75.99 | 88.32 | 80.09 | 74.77 | 85.48 | 82.02 | 76.53 | 88.40
10% | 82.55 | 76.92 | 89.71 | 79.36 | 73.72 | 86.88 | 78.72 | 73.49 | 85.04 | 80.21 | 74.71 | 87.21
20% | 69.18 | 61.13 | 78.73 | 68.35 | 61.37 | 76.98 | 64.38 | 56.63 | 73.70 | 67.30 | 59.71 | 76.47
30% | 65.75 | 58.62 | 73.69 | 61.53 | 53.59 | 71.10 | 58.24 | 50.46 | 67.29 | 61.84 | 54.22 | 70.69
40% | 61.61 | 54.12 | 70.35 | 59.77 | 51.84 | 69.21 | 54.81 | 46.84 | 63.73 | 58.73 | 50.93 | 67.76
50% | 60.66 | 52.47 | 69.93 | 56.06 | 47.75 | 65.64 | 53.70 | 45.35 | 63.24 | 56.81 | 48.52 | 66.27
TABLE VII: Performance evaluation of PRN [20] with different contamination percentages on VehicleID.
Contaminated Percentages | Test Size=800 | Test Size=1600 | Test Size=2400 | Average
---|---|---|---|---
 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5 | mAP | Top-1 | Top-5
0% | 84.08 | 78.92 | 94.81 | 80.54 | 74.94 | 92.02 | 75.76 | 71.58 | 88.46 | 80.13 | 75.15 | 91.76
5% | 80.53 | 74.38 | 91.28 | 76.90 | 70.86 | 88.46 | 71.49 | 68.10 | 83.61 | 76.31 | 71.11 | 87.78
10% | 73.37 | 69.81 | 84.24 | 71.20 | 64.50 | 81.88 | 66.45 | 61.32 | 79.44 | 70.34 | 65.21 | 81.85
20% | 58.87 | 50.60 | 67.12 | 54.40 | 46.73 | 65.03 | 50.38 | 43.18 | 62.89 | 54.55 | 46.84 | 65.01
30% | 50.47 | 43.91 | 61.54 | 47.62 | 40.86 | 58.17 | 45.59 | 37.63 | 56.40 | 47.89 | 40.80 | 58.70
40% | 43.71 | 38.30 | 55.99 | 38.53 | 35.66 | 52.25 | 35.48 | 31.70 | 50.06 | 39.24 | 35.22 | 52.77
50% | 35.69 | 32.42 | 48.16 | 31.80 | 28.75 | 44.26 | 29.48 | 25.74 | 42.39 | 32.32 | 28.97 | 44.94
#### IV-D4 Performance Evaluation with Image Contamination
In real-world scenarios, images are inevitably contaminated with noise due to
unreliable sensing devices, limited network communication resources, or time-
variant transmission environments. Thus, we conduct additional experiments to
test the performance of the proposed DFR-ST under unexpected noise to
demonstrate its robustness. We simulate three main problems (_i.e._ , mosaics,
color cast, and abnormal brightness) caused by unreliable transmission as
noise categories. The degree of image contamination is measured by the
percentage of noisy pixels relative to the total number of pixels. Note that these
noises are produced by improper decoding of YUV pixel matrices rather than by
decoding elementary streams following the MPEG-2 standard.
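The sketch below illustrates the kind of pixel-level contamination described above (mosaics, color cast, abnormal brightness applied to a given fraction of pixels); it is only an approximation of the effect of such decoding errors, not the authors' exact simulator.

```python
import numpy as np

def contaminate(image, fraction, seed=None):
    """Corrupt a given fraction of pixels of an HxWx3 uint8 image with one of
    three noise types: mosaic-like gray values, a color cast, or abnormal
    brightness. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    out = image.astype(np.float32).copy()
    h, w, _ = out.shape
    n_noisy = int(fraction * h * w)
    idx = rng.choice(h * w, size=n_noisy, replace=False)
    ys, xs = np.unravel_index(idx, (h, w))
    kind = rng.integers(0, 3)
    if kind == 0:                    # mosaic: random gray values
        out[ys, xs] = rng.integers(0, 256, size=(n_noisy, 1))
    elif kind == 1:                  # color cast: shift the red channel
        out[ys, xs, 0] += 80
    else:                            # abnormal brightness
        out[ys, xs] *= 1.8
    return np.clip(out, 0, 255).astype(np.uint8)
```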
As shown in Table VI, firstly, the performance of the proposed DFR-ST only drops by
5.08% on mAP, 5.09% on Top-1, and 5.35% on Top-5 with 10% noisy pixels on
average. This result confirms the robustness of the proposed DFR-ST. Note that
DFR-ST with even 10% image contamination still obtains competitive performance
compared to QD-DLF [56] and outperforms GS-TRE [63], DJDL [64], VAMI [53] and
DenseNet121 [61]. This comparison validates the effectiveness of DFR-ST.
Secondly, we notice that the performance of DFR-ST degrades as the image noise
increases. However, even with the worst performance, i.e., under 50% noisy
pixels, DFR-ST still performs better on VehicleID than several
methods including DRDL [47], FACT [16], LOMO [4] and BOW-CN [5]. This result
verifies that DFR-ST can maintain acceptable performance with a severe degree of
image contamination, which is advantageous for real-world applications.
Besides, we reproduce PRN [20] and compare its capability to resist image
contamination with the proposed DFR-ST on VehicleID. For the fairness of the
comparison, PRN (Single Height-Channel Branch) is compared with
DFR-ST w/o the ST module, because PRN (Single Height-Channel Branch) has better
performance than the complete PRN. The comparison results are shown in Fig. 8
and Table VII. As the image noise grows larger, the performance of PRN [20]
decreases faster than that of the proposed DFR-ST. This comparison validates the
robustness of the proposed DFR-ST against image noise.
## V Conclusion
In this paper, we proposed a discriminative feature representation DFR-ST with
multi-modal cues for vehicle re-ID. The scheme consists of an appearance
module for establishing robust visual features integrating coarse-grained and
fine-grained features, and a spatio-temporal module for constructing a
quantitative description of the transition distances of camera locations and
time snapshots. Extensive experiments on two large-scale datasets validate the
superiority of the proposed DFR-ST and the complementarity of appearance and
spatio-temporal information. In the future, we will explore more discriminative
representations for vehicle re-ID and real-time video
analysis.
## References
* [1] Y. He, X. Wei, X. Hong _et al._ , “Multi-target multi-camera tracking by tracklet-to-target assignment,” _IEEE Trans. Image Processing_ , vol. 29, pp. 5191–5205, 2020.
* [2] R. Bashir, M. Shahzad, and M. Fraz, “Vr-proud: Vehicle re-identification using progressive unsupervised deep architecture,” _Pattern Recognition_ , vol. 90, pp. 52–65, 2019.
* [3] J. O. N. Castaneda, A. F. Velazquez, N. B. Bo _et al._ , “Scalable semi-automatic annotation for multi-camera person tracking,” _IEEE Trans. Image Processing_ , vol. 25, no. 5, pp. 2259–2274, 2016.
* [4] S. Liao, Y. Hu, X. Zhu, and S. Li, “Person re-identification by local maximal occurrence representation and metric learning,” in _Proc. CVPR_ , 2015, pp. 2197–2206.
* [5] L. Zheng, L. Shen, L. Tian _et al._ , “Scalable person re-identification: A benchmark,” in _Proc. ICCV_ , 2015, pp. 1116–1124.
* [6] D. Zapletal and A. Herout, “Vehicle re-identification for automatic video traffic surveillance,” in _Proc. CVPR_ , 2016, pp. 25–31.
* [7] S. Chen, X. Tan, B. Wang _et al._ , “Reverse attention-based residual network for salient object detection,” _IEEE Trans. Image Processing_ , vol. 29, pp. 3763–3776, 2020.
* [8] M. Feng, H. Lu, and Y. Yu, “Residual learning for salient object detection,” _IEEE Trans. Image Processing_ , vol. 29, pp. 4696–4708, 2020.
* [9] H. Lee, S. Eum, and H. Kwon, “Me r-cnn: Multi-expert r-cnn for object detection,” _IEEE Trans. Image Processing_ , vol. 29, pp. 1030–1044, 2020\.
* [10] Y. Liu, J. Han, Q. Zhang, and C. Shan, “Deep salient object detection with contextual information guidance,” _IEEE Trans. Image Processing_ , vol. 29, pp. 360–374, 2020.
* [11] L. Pan, Y. Dai, M. Liu _et al._ , “Joint stereo video deblurring, scene flow estimation and moving object segmentation,” _IEEE Trans. Image Processing_ , vol. 29, pp. 1748–1761, 2020.
* [12] C. Dhiman and D. K. Vishwakarma, “View-invariant deep architecture for human action recognition using two-stream motion and shape temporal dynamics,” _IEEE Trans. Image Processing_ , vol. 29, pp. 3835–3844, 2020.
* [13] Y. Ji, Y. Zhan, Y. Yang _et al._ , “Deep image-to-video adaptation and fusion networks for action recognition,” _IEEE Trans. Image Processing_ , vol. 29, pp. 3168–3182, 2020.
* [14] H. Yang, C. Yuan, and L. Zhang, “Sta-cnn: Convolutional spatial-temporal attention learning for action recognition,” _IEEE Trans. Image Processing_ , vol. 29, pp. 5783–5793, 2020.
* [15] X. Liu, W. Liu, T. Mei _et al._ , “A deep learning-based approach to progressive vehicle re-identification for urban surveillance,” in _Proc. ECCV_ , 2016, pp. 869–884.
* [16] X. Liu, W. Liu, and H. Ma, “Large-scale vehicle re-identification in urban surveillance videos,” in _Proc. ICME_ , 2016, pp. 1–6.
* [17] X. Liu, W. Liu, T. Mei, and H. Ma, “Provid: Progressive and multimodal vehicle reidentification for large-scale urban surveillance,” _IEEE Trans. Multimedia_ , vol. 20, no. 3, pp. 645–658, Mar. 2018.
* [18] Z. Wang, L. Tang, X. Liu _et al._ , “Orientation invariant feature embedding and spatial temporal regularization for vehicle re-identification,” in _Proc. ICCV_ , 2017, pp. 379–387.
* [19] B. He, J. Li, Y. Zhao _et al._ , “Part-regularized near-duplicate vehicle re-identification,” in _Proc. CVPR_ , 2019, pp. 3997–4005.
* [20] H. Chen, B. Lagadec, and F. Bremond, “Partition and reunion: A two-branch neural network for vehicle re-identification,” in _Proc. CVPR_ , 2019, pp. 184–192.
* [21] H. Guo, C. Zhao, Z. Liu _et al._ , “Learning coarse-to-fine structured feature embedding for vehicle re-identification,” in _Proc. AAAI_ , 2018, pp. 6853–6860.
* [22] X. S. Wei, C. L. Zhang, L. Liu _et al._ , “Coarse-to-fine: A rnn-based hierarchical attention model for vehicle re-identification,” in _Proc. ACCV_ , 2018.
* [23] G. Rajamanoharan, A. Kanaci, M. Li _et al._ , “Multi-task mutual learning for vehicle re-identification,” in _Proc. CVPR_ , 2019, pp. 62–70.
* [24] Y. Shen, T. Xiao, H. Li _et al._ , “Learning deep neural networks for vehicle re-id with visual-spatio-temporal path proposals,” in _Proc. ICCV_ , 2017, pp. 1900–1909.
* [25] K. Lv, H. Du, Y. Hou _et al._ , “Vehicle re-identification with location and time stamps,” in _Proc. CVPR_ , 2019, pp. 399–406.
* [26] X. Tan, Z. Wang, M. Jiang _et al._ , “Multi-camera vehicle tracking and re-identification based on visual and spatial-temporal features,” in _Proc. ICCV_ , 2019, pp. 275–284.
* [27] G. Wang, J. Lai, and X. Xie, “P2snet: Can an image match a video for person re-identification in an end-to-end way?” _IEEE Trans. Circuits Syst. Video Technol. (TCSVT)_ , vol. 28, no. 10, pp. 2777–2787, 2018.
* [28] S. Li, S. Bak, P. Carr, and X. Wang, “Diversity regularized spatio-temporal attention for video-based person re-identification,” in _Proc. CVPR_ , 2018, pp. 369–378.
* [29] Y. Cho, S. Kim, J. Park _et al._ , “Joint person re-identification and camera network topology inference in multiple cameras,” _Comput. Vis. Image Underst._ , vol. 180, pp. 34–46, 2019.
* [30] J. Lv, W. Chen, Q. Li, and C. Yang, “Unsupervised cross-dataset person re-identification by transfer learning of spatial-temporal patterns,” in _Proc. CVPR_ , 2018, pp. 7948–7956.
* [31] D. Chang, Y. Ding, J. Xie _et al._ , “The devil is in the channels: Mutual-channel loss for fine-grained image classification,” _IEEE Trans. Image Processing_ , vol. 29, pp. 4683–4695, 2020.
* [32] M. Meng, M. Lan, J. Yu _et al._ , “Constrained discriminative projection learning for image classification,” _IEEE Trans. Image Processing_ , vol. 29, pp. 186–198, 2020.
* [33] X. Sun, N. M. Nasrabadi, and T. D. Tran, “Supervised deep sparse coding networks for image classification,” _IEEE Trans. Image Processing_ , vol. 29, pp. 405–418, 2020.
* [34]
* [35] H. Luo, W. Jiang, X. Zhang _et al._ , “Alignedreid++: Dynamically matching local information for person re-identification,” _Pattern Recognition_ , vol. 94, pp. 53–61, 2019.
* [36] K. Wang, C. Ding, S. J. Maybank, and D. Tao, “Cdpm: Convolutional deformable part models for semantically aligned person re-identification,” _IEEE Trans. Image Processing_ , vol. 29, pp. 3416–3428, 2020.
* [37] W. Zhang, X. He, X. Yu _et al._ , “A multi-scale spatial-temporal attention model for person re-identification in videos,” _IEEE Trans. Image Processing_ , vol. 29, pp. 3365–3373, 2020.
* [38] F. Schroff, D. Kalenichenko, and J. Philbin, “Facenet: A unified embedding for face recognition and clustering,” in _Proc. CVPR_ , 2015, pp. 815–823.
* [39] X. Zhu, X. Y. Jing, X. You _et al._ , “Video-based person re-identification by simultaneously learning intra-video and inter-video distance metrics,” _IEEE Trans. Image Processing_ , vol. 27, no. 11, pp. 5683–5695, 2018.
* [40] Q. Zhu, B. Zhong, X. Lan _et al._ , “Fine-grained spatial alignment model for person re-identification with focal triplet loss,” _IEEE Trans. Image Processing_ , vol. 29, pp. 7578–7589, 2020.
* [41] L. Ren, J. Lu, J. Feng, and J. Zhou, “Uniform and variational deep learning for rgb-d object recognition and person re-identification,” _IEEE Trans. Image Processing_ , vol. 28, no. 10, pp. 4970–4982, 2019.
* [42] Y. J. Cho and K. J. Yoon, “Pamm: Pose-aware multi-shot matching for improving person re-identification,” _IEEE Trans. Image Processing_ , vol. 27, no. 8, pp. 3739–3752, 2018.
* [43] H. Huang, W. Yang, L. J _et al._ , “Improve person re-identification with part awareness learning,” _IEEE Trans. Image Processing_ , vol. 29, pp. 7468–7481, 2020.
* [44] H. Luo, Y. Gu, X. Liao _et al._ , “Bag of tricks and a strong baseline for deep person re-identification,” in _Proc. CVPR_ , 2019.
* [45] H. Luo, W. Jiang, X. Zhang _et al._ , “Alignedreid++: Dynamically matching local information for person re-identification,” _Pattern Recognition_ , vol. 94, pp. 53–61, 2020.
* [46] C. Szegedy, V. Vanhoucke, S. Ioffe _et al._ , “Rethinking the inception architecture for computer vision,” _CoRR_ , vol. abs/1512.00567, 2015. [Online]. Available: http://arxiv.org/abs/1512.00567
* [47] H. Liu, Y. Tian, Y. Wang _et al._ , “Deep relative distance learning: Tell the difference between similar vehicles,” in _Proc. CVPR_ , 2016, pp. 2167–2175.
* [48] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proc. CVPR_ , 2016, pp. 770–778.
* [49] Z. Zhong, L. Zheng, D. Cao, and S. Li, “Re-ranking person re-identification with k-reciprocal encoding,” in _Proc. CVPR_ , 2017, pp. 3652–3661.
* [50] W. Li, Y. Wu, M. Mukunoki, and M. Minoh, “Common-near-neighbor analysis for person re-identification,” in _Proc. ICIP_ , 2019, pp. 1621–1624.
* [51] A. Shankar, A. Poojary, V. Kollerathu _et al._ , “Comparative study of various losses for vehicle re-identification,” in _Proc. CVPR_ , 2019, pp. 256–264.
* [52] Y. Zhou, L. Liu, and L. Shao, “Vehicle re-identification by deep hidden multi-view inference,” _IEEE Trans. Image Processing_ , vol. 27, no. 7, pp. 3275–3284, 2018.
* [53] Y. Zhou and L. Shao, “Viewpoint-aware attentive multi-view inference for vehicle re-identification,” in _Proc. CVPR_ , 2018, pp. 6489–6498.
* [54] Y. Lou, Y. Bai, J. Liu _et al._ , “Embedding adversarial learning for vehicle re-identification,” _IEEE Trans. Image Processing_ , vol. 28, no. 8, pp. 3794–3807, 2019.
* [55] X. Liu, S. Zhang, X. Wang _et al._ , “Group-group loss-based global-regional feature learning for vehicle re-identification,” _IEEE Trans. Image Processing_ , vol. 29, pp. 2638–2652, 2020.
* [56] J. Zhu, H. Zeng, J. Huang _et al._ , “Vehicle re-identification using quadruple directional deep learning features,” _IEEE Trans. on Intell. Transp. Syst._ , vol. 21, no. 1, pp. 410–420, Jan. 2020.
* [57] T. Chen, C. Liu, C. Wu, and S. Chien, “Orientation-aware vehicle re-identification with semantics-guided part attention network,” in _Proc. ECCV_ , 2020.
* [58] P. Khorramshahi, N. Peri, J. Chen, and R. Chellappa, “The devil is in the details: Self-supervised attention for vehicle re-identification,” in _Proc. ECCV_ , 2020.
* [59] F. Shen, J. Zhu, X. Zhu _et al._ , “Exploring spatial significance via hybrid pyramidal graph network for vehicle re-identification,” _CoRR_ , vol. abs/2005.14684, 2020. [Online]. Available: https://arxiv.org/abs/2005.14684
* [60] J. V. D. Weijer, C. Schmid, J. Verbeek, and D. Larlus, “Learning color names for real-world applications,” _IEEE Trans. Image Processing_ , vol. 18, no. 7, pp. 1512–1523, 2009.
* [61] G. Huang, Z. Liu, L. Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in _Proc. CVPR_ , 2017, pp. 2261–2269.
* [62] H. Guo, K. Zhu, M. Tang, and J. Wang, “Two-level attention network with multi-grain ranking loss for vehicle re-identification,” _IEEE Trans. Image Processing_ , vol. 28, no. 9, pp. 4328–4338, 2019.
* [63] Y. Bai, Y. Lou, F. Gao _et al._ , “Group-sensitive triplet embedding for vehicle reidentification,” _IEEE Trans. Multimedia_ , vol. 20, no. 9, pp. 2385–2399, Sept. 2018.
* [64] C. T. Liu, M. Y. Lee, and C. W. Wu, “Supervised joint domain learning for vehicle re-identification,” in _Proc. CVPR_ , 2019, pp. 45–52.
## Appendix A Effect of the Division Branches
Table VIII presents the evaluation results of variants on the division
branches on the VeRi-776 dataset. The first variant refers to the appearance
module without three division branches, which only contains a dual path of the
coarse-grained features stream and the first branch of the fine-grained
feature stream.
The first observation of this comparison is that the division operations along
three dimensions contribute to the overall performance, and the channel
division branch brings the most significant improvement among these three
categories of division branches. A similar result is also observed in PRN
[20], where using only a single channel branch achieves higher performance
on the VehicleID dataset than the fusion of three branches from different
dimensions. Different from PRN [20], we discover that combining
multiple primary division branches can promote the performance further.
Besides, we achieve the best performance by employing all three division
strategies on VeRi-776. This indicates the complementarity of the information
provided by the division strategies along different axes.
## Appendix B Effect of placements in AttentionNet
We conduct extensive experiments to investigate the effect on different
placements of the multi-domain attention scheme in the fine-grained feature
stream of the proposed DFR-ST on VeRi-776 dataset. Table IX shows the
performance with different orders of the channel attention and spatial
attention. Note that the experiments neglect the spatio-temporal module for
simplicity. As observed in Table IX, the experimental results show that combining
awareness of different domains (i.e., the spatial domain and the channel domain)
promotes the overall performance more than single-domain attention.
Moreover, we adopt the placement of sequential
channel attention followed by spatial attention for better performance in further
experiments. However, the empirical results reveal that the order of the
channel attention and spatial attention has little effect on the overall
performance.
TABLE VIII: Performance analysis of the division branches on VeRi-776.
Variants | mAP | Top-1 | Top-5
---|---|---|---
w/o division | 76.90 | 87.24 | 91.77
Height division | 77.81 | 85.29 | 89.59
Width division | 78.28 | 87.43 | 91.31
Channel division | 82.92 | 90.46 | 94.06
Height + Width division | 82.67 | 92.14 | 94.89
Height + Channel division | 84.30 | 92.52 | 96.10
Width + Channel division | 83.82 | 92.35 | 96.96
Height + Width + Channel division | $\mathbf{84.47}$ | $\mathbf{93.02}$ | $\mathbf{97.13}$
TABLE IX: The effect of different placements of attention sub-modules in AttentionNet on VeRi-776.
Placements | mAP | Top-1 | Top-5
---|---|---|---
Channel Attention | 83.69 | 90.17 | 95.04
Spatial Attention | 82.55 | 90.76 | 94.48
Sequential Channel + Spatial Attention | $\mathbf{84.47}$ | $\mathbf{93.02}$ | $\mathbf{97.13}$
Sequential Spatial + Channel Attention | 83.74 | 92.81 | 96.35
Parallel Spatial + Channel Attention | 83.51 | 92.38 | 95.86
## Appendix C Parameter Setting
We first investigate the influence of $\lambda$ in Eq. (1). As shown in Table
X, we set $\lambda=0.4$ in all experiments because adding the triplet loss can
boost the overall performance further.
TABLE X: Evaluation of the influence of $\lambda$ on VeRi-776.
$\lambda$ | mAP | Top-1 | Top-5
---|---|---|---
0 | 83.04 | 91.55 | 96.18
0.1 | 83.56 | 92.32 | 96.70
0.2 | 84.08 | 92.64 | 96.82
0.3 | 84.41 | $\mathbf{93.05}$ | 97.00
0.4 | $\mathbf{84.47}$ | 93.02 | $\mathbf{97.13}$
0.5 | 84.43 | 92.94 | 96.89
0.6 | 84.30 | 92.96 | 96.80
0.7 | 84.15 | 92.74 | 96.53
0.8 | 84.07 | 92.43 | 96.45
0.9 | 83.69 | 92.16 | 96.39
1.0 | 83.35 | 91.86 | 96.24
TABLE XI: Evaluation of the influence of $\omega$ on VeRi-776.
$\omega$ | mAP | Top-1 | Top-5
---|---|---|---
0 | 84.47 | 93.02 | 97.13
0.1 | 85.73 | 94.60 | 98.52
0.2 | $\mathbf{86.00}$ | $\mathbf{95.67}$ | 99.17
0.3 | 85.86 | 95.60 | $\mathbf{99.23}$
0.4 | 85.56 | 95.12 | 98.35
0.5 | 83.40 | 93.66 | 96.67
0.6 | 81.58 | 92.31 | 94.83
0.7 | 80.09 | 91.24 | 93.37
0.8 | 78.35 | 88.28 | 92.54
0.9 | 75.70 | 84.35 | 89.11
1.0 | 73.61 | 82.33 | 86.64
Figure 9: Visualization of the effect of $\alpha$ and $\beta$ on the distribution
shapes. (a) The effect on the distribution shapes of the parameter $\alpha$.
The curves are $D=\frac{1}{1+\exp(\alpha(x-0.5))}$ as
$\alpha=0.5,1,2,\cdots,6$ based on the ascending order of the intersections
across the y axis. (b) The effect on the distribution shapes of the parameter
$\beta$. The curves are $D=\frac{1}{1+\exp(6(x-\beta))}$ as
$\beta=0,0.1,0.2,\cdots,1$ from left to right.
Besides, we study the impact of $\omega$ in Eq. (6), i.e., the weight of the
spatio-temporal similarity between the image pairs in the distance measurement
of the retrieval algorithm. Table XI shows the experimental results. We first
observe that a moderate $\omega$ can increase the performance, which validates
the complementarity of the spatio-temporal information and the appearance
representation. Second, the performance degrades quickly as $\omega$
becomes larger. This shows that placing too much weight on spatio-temporal cues
brings unexpected degradation, and that the appearance features contribute more
to the proposed vehicle re-ID approach. This result is consistent with
human perception of the world, in which vision accounts for
over 80% of the information. Therefore, we set $\omega=0.2$ for all experiments.
Moreover, we investigate the effect of different distribution shapes
controlled by $\alpha_{1}$, $\alpha_{2}$ and $\beta_{1}$, $\beta_{2}$ in the
spatio-temporal module. The distribution in Eq. (5) delineates how quickly
the spatio-temporal similarity declines with the differences of the camera
locations and the timestamps between two vehicle images. The intuition is that
we hope to retain more plausible samples of high spatio-temporal
similarity for the appearance module to make decisions and discard those
relatively dissimilar images to reduce the search complexity. Thus, we set
the parameters $\alpha_{1}$ and $\alpha_{2}$ to 6 and $\beta_{1}$,
$\beta_{2}$ to 0.5 based on Fig. 9.
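A short sketch of the sigmoid-shaped decay used in the spatio-temporal similarity (cf. Fig. 9) is given below, showing how $\alpha$ controls the steepness and $\beta$ the transition point; the sampled x values are illustrative.

```python
import numpy as np

def decay_curve(x, alpha=6.0, beta=0.5):
    """D = 1 / (1 + exp(alpha * (x - beta))): larger alpha gives a steeper
    drop, while beta shifts the transition point (cf. Fig. 9)."""
    return 1.0 / (1.0 + np.exp(alpha * (x - beta)))

x = np.linspace(0.0, 1.0, 101)
d_default = decay_curve(x)              # alpha = 6, beta = 0.5, as adopted above
d_gentle = decay_curve(x, alpha=0.5)    # flatter decline, cf. Fig. 9(a)
d_shifted = decay_curve(x, beta=0.2)    # earlier transition, cf. Fig. 9(b)
```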
# Time-dependent multistate switching of topological antiferromagnetic order
in Mn3Sn
Gunasheel Kauwtilyaa Krishnaswamy Department of Materials, ETH Zurich, 8093
Zurich, Switzerland Giacomo Sala Department of Materials, ETH Zurich, 8093
Zurich, Switzerland Benjamin Jacot Department of Materials, ETH Zurich, 8093
Zurich, Switzerland Charles-Henri Lambert Department of Materials, ETH
Zurich, 8093 Zurich, Switzerland Richard Schlitz Department of Materials,
ETH Zurich, 8093 Zurich, Switzerland Marta D. Rossell Electron Microscopy
Center, Empa, Swiss Federal Laboratories for Material Science and Technology,
Dübendorf, Switzerland Paul Nöel Department of Materials, ETH Zurich, 8093
Zurich, Switzerland Pietro Gambardella Department of Materials, ETH Zurich,
8093 Zurich, Switzerland
###### Abstract
The manipulation of antiferromagnetic order by means of spin-orbit torques
opens unprecedented opportunities to exploit the dynamics of antiferromagnets
in spintronic devices. In this work, we investigate the current-induced
switching of the magnetic octupole vector in the Weyl antiferromagnet Mn3Sn as
a function of pulse shape, magnetic field, temperature, and time. We find that
the switching behavior can be either bistable or tristable depending on the
temporal structure of the current pulses. Time-resolved Hall effect
measurements performed during the current pulsing reveal that Mn3Sn switching
proceeds via a two-step demagnetization-remagnetization process caused by
self-heating over a timescale of tens of ns followed by cooling in the
presence of spin-orbit torques. Single-shot switching measurements with 50 ps
temporal resolution indicate that chiral spin rotation is either damped or
incoherent in polycrystalline Mn3Sn. Our results shed light on the switching
dynamics of Mn3Sn and prove the existence of extrinsic limits on its switching
speed.
## I Introduction
Electric control of magnetic order in antiferromagnets has raised prospects
for realizing high-speed and high-density magnetoelectric devices using
materials with zero net magnetization [1, 2, 3, 4, 5, 6]. The switching of the
order parameter in antiferromagnets is achieved by either injecting spin
currents from an adjacent heavy metal layer or by exploiting current-induced spin-orbit
torques intrinsic to noncentrosymmetric crystals [7]. Electrical readout,
however, is problematic because of the small magnetoresistance [8], resistive
artefacts [9, 10, 11], and absence of Hall effect in most conventional
antiferromagnets. This problem can be elegantly solved by turning to
noncollinear antiferromagnets, which combine topologically nontrivial
electronic properties with chiral magnetic order. In these systems, the broken
time-reversal symmetry and large Berry curvature in momentum space give rise
to strong anomalous Hall effect (AHE) [12, 13] and magneto-optical responses
[14, 15, 16, 17], similar to ferromagnets but in the absence of significant
magnetization. Theoretical work shows that these materials can even exhibit a
large tunneling magnetoresistance [18], whereas the emergence of exotic
phenomena such as the chiral anomaly [19] and magnetic spin Hall effect [20,
21, 22, 23] makes them a very interesting playground for investigating the
interplay of topology, electron transport, and magnetism [24, 25, 26].
A prime candidate of this material class is Mn3Sn, a hexagonal Weyl metal in
which the Mn atoms form kagome lattice planes stacked along the $c$-axis with
an inverse-triangular spin structure and all the spins oriented in-plane [27,
28, 29, 12, 30]. The non-collinear antiferromagnetic order is best described
by the magnetic octupole moment g of the six Mn spins that reside in two
stacked inverted triangles on adjacent kagome layers [green arrow in Fig. 1
(a)]. Magnetic anisotropy defines six possible orientations of the g-vector in
the kagome plane [Fig. 1 (b)]. The almost perfect 120∘ non-collinear spin
alignment is slightly distorted by magnetic anisotropy, which leads to a weak
ferromagnetic moment of $\sim 0.002$ $\mu_{B}$ per Mn atom in the direction of
the g-vector. This conveniently allows for the manipulation of
antiferromagnetic order by external magnetic fields, whereas the large AHE and
anomalous Nernst effect (ANE) of Mn3Sn provide direct information on the
orientation of g [31, 32, 33, 34]. Importantly for applications, the
topological properties of Mn3Sn emerge in both polycrystalline and epitaxial
thin films [35, 36, 37, 33, 38, 39, 34, 40].
Figure 1: (a) Cross-section of the Mn3Sn/Pt bilayer. The inverted triangular
spin structure is shown in the center: white and black arrows represent the Mn
spins and the green arrow the octupole vector g. (b) Possible orientations of
g and corresponding AHE signal. (c) Microscope image of a Hall bar device and
(d) Hall cross used for switching and time-resolved measurements. (e) AHE of
Mn3Sn/Pt as a function of magnetic field along $z$ and (f) current density for
10 $\mu$s-long pulses and $B_{\rm x}=+200$ mT.
Figure 2: (a) Switching loops
of Mn3Sn/Pt as a function of pulse voltage with rise and fall time increasing
from left to right. (b) Corresponding pulse shape. (c) Schematic showing the
$-z$, $+z$ and intermediate states and the possible orientations of the
g-vector in each state.
Pioneering work on Mn3Sn/heavy metal bilayers has demonstrated switching of
antiferromagnetic order by current-induced spin-orbit torques [41, 42, 43,
44]. In these experiments, a change of the AHE as a function of current
reveals the reorientation of g in crystal grains with $c$-axis oriented in-
plane. Switching only occurs in the presence of a symmetry-breaking magnetic
field collinear with the current, with the final state determined by the
relative orientation of current and field and by the sign of the spin Hall
angle in the heavy metal [41, 42, 45]. These observations suggest a switching
mechanism very similar to ferromagnet/heavy metal bilayers [46, 47, 7]. Within
this picture, however, different magnetization dynamics can be expected
depending on whether the torques rotate the moments in or out of the kagome
plane [48, 49, 41, 43]. New effects such as chiral spin rotation have been
proposed, whereby the Mn moments undergo continuous rotation in the kagome
plane with time periods in the tens of ns [48, 43, 50]. Thus far, however,
switching experiments relied on electrical pulses with pulse duration of 100
ms, which yield no information about the fast switching dynamics expected of
antiferromagnets.
In this work, we explore the chiral switching dynamics of Mn3Sn/Pt bilayers.
We observe that the switching behavior varies characteristically with the
pulse length and shape: conventional bistable switching between $\pm z$ states
is observed for pulses with fall times longer than 400 ns whereas tristable
switching is observed for pulses with shorter fall times, leading to a
demagnetized state with zero AHE. By studying the switching dependence on the
temporal shape of the pulses, applied field, temperature, and time we show
that the reversal of the g-vector occurs through two phases, namely current-
induced partial demagnetization lasting several ns followed by cooling in the
presence of spin-orbit torques at the end of a current pulse. This mechanism
is similar to the setting of exchange bias during field cooling in coupled
antiferromagnetic/ferromagnetic systems [51]. However, it differs from the
thermally-activated switching observed in collinear ferromagnets [52] and
antiferromagnets [3], in which Joule heating reduces the magnetic anisotropy
energy barrier while the sample remains magnetic. Time-resolved measurements
during pulsing indicate that the reversal of chiral antiferromagnetic order is
incoherent and that chiral spin rotation is either damped or averaged out in
polycrystalline Mn3Sn. Our measurements also set a limit on the reversal speed
attainable by the interplay of current-induced heating and spin-orbit torques
in chiral antiferromagnets.
## II Methods
Our samples are polycrystalline Mn3Sn(50 nm)/Pt(5 nm) bilayers grown by
magnetron sputtering and patterned into 3 to 6-$\mu$m-wide Hall bars and Hall
crosses [Fig. 1 (c,d)] [53]. High-resolution transmission electron microscopy
reveals the presence of columnar grains of about 250 nm width, different
orientations and excellent crystalline order [53]. Measurements of the
longitudinal ($R_{\rm xx}$) and transverse Hall resistance ($R_{\rm xy}$) are
consistent with previous work on similar samples [14, 41, 42, 45, 50, 53]. We
used a quasi-static pulse-probe protocol for characterizing the switching
properties as a function of pulse shape and field [46] and performed the time-
resolved measurements of the AHE using the split-pulse technique described in
Ref. 54. In the pulse-probe method we inject a current pulse of up to 20 mA to
induce switching, followed by an alternating current of 1 mA, which allows for
probing the first and second harmonic contributions to $R_{\rm xy}$ that are
proportional to the AHE and ANE, respectively [53, 55]. In the time-resolved
measurements, we probe the change in AHE during a current pulse with about 50
ps temporal resolution [54]. Hall bars are used for quasi-static switching and
Hall crosses for the time-resolved measurements. Given the structure of our
samples, the AHE (ANE) reflects the out-of-plane (in-plane) component of g
averaged over different crystal grains in the region sensed by the Hall
resistance [53, 35, 31, 38]. Comparative switching measurements on Mn3Sn/W and
W/Mn3Sn/Pt samples are reported in Ref. 53.
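As an illustration of the harmonic probing step, the sketch below demodulates a recorded Hall-voltage trace at the probe frequency and its second harmonic; it assumes the trace spans an integer number of probe periods and is a generic software lock-in, not the actual acquisition code.

```python
import numpy as np

def harmonic_amplitudes(v_xy, t, f0):
    """Magnitudes of the first- and second-harmonic components of the Hall
    voltage measured with an AC probe current of frequency f0. Both
    quadratures are demodulated so no particular phase is assumed."""
    v = np.asarray(v_xy, dtype=float)
    t = np.asarray(t, dtype=float)

    def demodulate(f):
        x = 2.0 * np.mean(v * np.cos(2 * np.pi * f * t))
        y = 2.0 * np.mean(v * np.sin(2 * np.pi * f * t))
        return float(np.hypot(x, y))

    return demodulate(f0), demodulate(2 * f0)
```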
## III Results
### III.1 Multistate switching determined by the pulse fall time
Figure 3: Field dependence of the current-induced switching for (a) long and
(b) short fall time pulses of 10 $\mu$s length. Switching amplitude $\Delta
R_{\rm xy}$ between $\pm 2$V (black squares) and $\pm 20$V (purple circles) as
a function of $B_{\rm x}$ for (c) long and (d) short fall time.
Figures 1 (e) and (f) show the field- and current-induced switching of the
g-vector, respectively, as measured by the AHE. In agreement with previous
reports [35, 41, 42], we observe switching of about 30% of the total AHE upon
injecting 10-$\mu$s-long current pulses with a fall time $\tau=420$ ns. This
bistable behavior is interpreted as g switching between the $+z$ and $-z$
states. Surprisingly, however, we find that gradually reducing $\tau$ to below
100 ns changes the switching from bistable to tristable, leading to the
appearance of states with high and low AHE at intermediate current values and
zero AHE at high current [Fig. 2]. Because the pulse length is constant, the
gradual shift of the endpoint $R_{\rm xy}$ in Fig. 2 (a) demonstrates that the
fall time determines the switching regime. Importantly, the magnetic state set
by the current pulse and magnetic field remains constant after the pulse. The
occurrence of multistate switching has been reported before in Mn3Sn [43], but
the role of the transient dynamic effects that determine the final orientation
of the g-vector has not been elucidated. These effects can be of two types,
thermal, due to Joule heating, and magnetic due to spin-orbit torques.
### III.2 Switching as a function of in-plane field
To exclude a purely thermal origin of the switching, we study its dependence
on the external in-plane magnetic field $B_{\rm x}$. Figures 3 (a) and (b)
show the current-induced switching loops for 10-$\mu$s-long pulses with
$\tau=35$ ns and $420$ ns, respectively, for increasing values of $B_{\rm x}$.
The reversal of the switching direction upon inversion of $B_{\rm x}$
indicates that switching is due to spin-orbit torques in the entire range of
fall times. We also find that the switching amplitude between $+z$ and $-z$
states, $\Delta R_{\rm xy}=R_{\rm xy}(2~{}{\rm V})-R_{\rm xy}(-2~{}\rm{V})$,
increases up to $B_{\rm x}\approx 100$ mT, consistently with previous reports
[41, 42, 45, 35, 43] and the standard model of spin-orbit torque switching in
ferromagnets [46, 7]. However, $\Delta R_{\rm xy}$ decreases in the high field
limit [black squares in Fig. 3 (c,d)], indicating that another mechanism comes
into play. We also note that the offset of $R_{\rm xy}$ and the sign of the
switching amplitude $\Delta R_{\rm xy}(\pm\rm 20V)$ in the short pulse regime
are very sensitive to the presence of an out-of-plane external field [53].
### III.3 Switching by current-induced heating and cooling in the presence of
spin-orbit torques
To understand the role played by heating we measured the AHE as a function of
temperature [Fig. 4 (a,b)]. The AHE vanishes at $T_{\rm N}=390$ K, close to
the Néel temperature of bulk Mn3Sn (420 K) [30, 43]. The longitudinal
resistance $R_{\rm xx}$ has a nonlinear temperature behavior as it is a
mixture of the resistance due to Pt and Mn3Sn. Measuring $R_{\rm xx}$ as a
function of current allows us to gauge the extent of Joule heating, which
shows that the sample temperature reaches $T_{\rm N}$ for pulse currents
larger than 14 mA (16 V) [53]. We thus propose a model to explain the
multistate switching behaviour in which the interplay of temperature and spin-
orbit torques is governed by $\tau$. Consider a generic voltage pulse that
heats up the sample and provides a current density $j$ to exert a torque, as
shown in Fig. 4 (c). As the pulse starts, the temperature increases
quadratically with the current at a rate determined by the longest between the
pulse rise time and the heat diffusion time. For pulses longer than a few tens
of ns, the sample temperature approaches $T_{\rm N}$, leading to a
demagnetized state until cool down begins at the end of the pulse.
Deterministic switching to a final state $+z$ or $-z$ can be achieved only if
$j$ is larger than a critical current density $j_{\rm c}$ as the temperature
has dropped below $T_{\rm N}$, i.e., for long $\tau$. If, on the other hand,
the current drops abruptly below $j_{\rm c}$ when the temperature is still
close to $T_{\rm N}$, the Mn3Sn grains freeze in a mixed multidomain
configuration, which leads to the intermediate state with no AHE for short
$\tau$. Our simultaneous measurements of the AHE and ANE show that this
intermediate state consists of domains along $\pm z$, which give a net zero
AHE, and grains that are oriented along $+x$ and $-x$ for $B_{\rm x}>0$ and
$B_{\rm x}<0$, respectively [53]. The fraction of grains oriented along $\pm
x$ during cool down increases with $B_{\rm x}$, which explains the non-
monotonic field dependence of the switching amplitude in Fig. 3. Thus, by
gradually modifying $\tau$, we tune the fraction of grains that switch and
those that remain demagnetized at the end of the pulse.
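The interplay described above can be summarized in a toy model: during the pulse fall the current decreases while the sample cools back below $T_{\rm N}$, and the final state depends on whether the current still exceeds $j_{\rm c}$ at that moment. The sketch below implements this picture; the cooling time and critical current density are illustrative assumptions, not measured values.

```python
def switching_regime(j_peak, tau_fall, j_c, t_below_tn=100e-9):
    """Toy model of the fall-time-dependent switching described in the text.

    The current decreases linearly from j_peak to zero over tau_fall, while the
    sample needs roughly t_below_tn (assumption) to cool back below T_N.
    Deterministic +z/-z switching requires the current density to still exceed
    j_c once the temperature has dropped below T_N; otherwise the grains freeze
    into the demagnetized intermediate state.
    """
    t_drop_below_jc = tau_fall * (1.0 - j_c / j_peak)  # time when j falls below j_c
    return "bistable (+z/-z)" if t_drop_below_jc > t_below_tn else "demagnetized"

# Long fall time: the torque still acts after cooling below T_N
print(switching_regime(j_peak=1.3e7, tau_fall=420e-9, j_c=5e6))  # bistable
# Short fall time: the current vanishes before the sample has cooled
print(switching_regime(j_peak=1.3e7, tau_fall=35e-9, j_c=5e6))   # demagnetized
```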
Figure 4: (a) AHE switching as a function of pulse voltage at different
temperatures for 10-$\mu$s-long pulses. (b) Temperature dependence of $R_{\rm
xy}$ (bottom panel) and $R_{\rm xx}$ (middle panel) of Mn3Sn/Pt. $R_{\rm xx}$
vs direct current and calibrated temperature (top panel). (c) Schematic
current pulse with temperature profile indicated by the red shading. (d) Step-
wise switching sequences: $R_{\rm xy}$ vs pulse amplitude starting from $\pm
18$ V in steps of 2 V (black, red), 4 V (blue) and 8 V (green).
This model also explains why the $\pm z$ final states with large/low AHE can
be reached starting from the intermediate state with zero AHE upon reducing
the pulse voltage in small incremental steps, as seen in Fig. 2 even for short
$\tau$. Figure 4 (d) shows $R_{\rm xy}$ recorded by sweeping the pulse
amplitude from +18 V to -18 V and back in steps of 2 V (black dots). Starting
from the intermediate state obtained by pulsing at +18 V with $\tau=35$ ns,
the AHE changes progressively to the low state upon reducing the pulse
amplitude. However, if the pulse amplitude is abruptly decreased from +18 to
+10 V, no switching occurs (green dots). The type of switching thus depends on
the initial state and on the decremental step size, which is different from
the change of switching amplitude as a function of current reported for
bistable switching in Ref. 41. Our observation is consistent with different
Mn3Sn grains having a distribution of $T_{\rm N}$ due to their varying sizes,
which are selectively switched to the $\pm z$ final states upon decreasing the
pulse amplitude from the intermediate state. This is essentially a step-wise
version of the long fall time scenario described above.
Overall, our results show that the switching of antiferromagnetic order in
Mn3Sn occurs due to heat-assisted demagnetization followed by reorientation of
the g-vector induced by spin-orbit torques during cool down. The fall time of
the current pulses determines the final magnetic configuration of the Mn3Sn
domains. Additionally, switching loops measured for 21 V pulses of decreasing
length, from 50 to 5 ns, evidence that the switching amplitude vanishes in the
limit of short pulses [Fig. 5 (a)]. These findings show that switching of
antiferromagnetic order in Mn3Sn by spin-orbit torques has a composite
temporal dependence and a different dynamics relative to ferromagnets [54, 56]
and collinear antiferromagnets [3, 57, 58].
### III.4 Time-resolved measurements
To determine the transient dynamics, we performed time-resolved measurements
of $R_{\rm xy}$ during the current pulses using the setup shown in Fig. 5 (b).
The temporal evolution of the AHE voltage $V_{\rm H}$ during the switching
process is determined by taking the difference of the Hall voltage trace
measured during switching relative to a reference trace in the absence of
switching [54]. Figure 5 (c) shows the average of 20 differential time traces
of $V_{\rm H}$ taken during pulses with +21 V amplitude, 75 ns duration and
$\tau=0.3$ ns, separated by a 1 s delay. The decrease (increase) of $V_{\rm
H}$ following the onset of the pulse at $t=0$ for $B_{\rm x}=+250$ mT ($-250$
mT) reflects the decrease (increase) of the AHE from the initial $-z$ ($+z$)
state to the intermediate state with no AHE. It takes about 35 ns for $\lvert
V_{\rm H}\rvert$ to reduce to 0, after which no further changes of $V_{\rm H}$
are observed until the end of the pulse. Measurements performed for 20 ns-long
pulses as a function of $B_{\rm x}$, reported in Fig. 5 (d), further reveal
that the amplitude of the transient switching signal scales with $B_{\rm x}$
and that the timescale over which $\lvert V_{\rm H}\rvert$ reduces to 0 is
independent of $B_{\rm x}$. We thus associate the decrease of $\lvert V_{\rm
H}\rvert$ with the time it takes for the device to reach a temperature close
to $T_{\rm N}$, in line with the switching mechanism proposed above. This time
depends only on $j$ and not on $B_{\rm x}$, which shows that the switching
speed of Mn3Sn is ultimately limited by the heating rate.
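A minimal sketch of the differential-trace analysis described above is given below: the averaged reference trace (no switching) is subtracted from the averaged switching trace so that only the change in the AHE survives. Array shapes and averaging are illustrative of the procedure, not the actual acquisition code.

```python
import numpy as np

def differential_trace(switching_traces, reference_traces):
    """Average differential Hall-voltage time trace.

    Both inputs have shape (n_pulses, n_samples) and contain V_H(t) recorded
    with and without switching; subtracting the averaged reference removes the
    pulse-induced background common to both."""
    sw = np.asarray(switching_traces, dtype=float).mean(axis=0)
    ref = np.asarray(reference_traces, dtype=float).mean(axis=0)
    return sw - ref
```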
Recent studies propose a coherent chiral spin reversal mechanism in
noncollinear antiferromagnets where the g-vector continuously rotates above a
given current density threshold [48, 43, 50]. The rotation period is estimated
in a range of 1-30 ns, depending on the current density. Indirect evidence for
this effect has been reported both in epitaxial and polycrystalline thin films
[43, 50].
The experimental evidence for such a mechanism, however, lacks insight into
the time-dependent dynamics that is the hallmark of coherent switching. Our
time-resolved traces shown in Fig. 5 (c,d) evidence a monotonic decrease of
$\lvert V_{\rm H}\rvert$ that is not consistent with reproducible oscillations
of $R_{\rm xy}$ due to chiral spin rotation. Because these traces are averaged
over several pulses, they do not provide information on stochastic rotations.
To investigate the occurrence of chiral spin rotation during individual
current pulses, we have thus measured single-shot time traces of $V_{\rm H}$.
Representative examples of such traces are shown in Fig. 5 (e,f) for a series
of 20-ns-long current pulses. Our analysis does not reveal evidence of
periodic oscillations of $V_{\rm H}$ consistent with chiral spin rotation
during single-shot pulses.
Figure 5: (a) Current-induced switching loops for different pulse lengths,
$\tau=0.3$ ns and $B_{\rm x}=-250$ mT. (b) Schematic of the time-resolved AHE
measurements. (c) Differential switching time traces averaged over 20
consecutive +21 V pulses with $B_{\rm x}=\pm 250$ mT. The pulses are 75 ns
long starting at $t=0$. (d) Differential switching time traces averaged over
100 consecutive 20-ns-long voltage pulses of amplitude +21 V vs $B_{\rm x}$.
The gray trace at the top shows the pulse shape. (e) Single shot differential
switching traces for 20-ns-long voltage pulses at $B_{\rm x}=-180$ mT and (f)
+180 mT. The black lines are moving averages over $1.5$ ns.
The absence of oscillations can be ascribed to different factors. First,
chiral spin rotation requires an injected spin current with polarization
parallel to the $c$-axis [43]. Given the polycrystalline nature of our
samples, we estimate that our Hall crosses contain a measurable fraction of
grains with this orientation [53, 43, 50]. On the other hand, chiral spin rotation may take place
in different grains with different phase factors, averaging to zero in the
total Hall signal. According to simulations, however, these coherent effects
should result in visible oscillations also in polycrystalline samples [50].
Another possibility is that the rotation is too fast to be resolved by our
measurements, which have a temporal resolution of about 50 ps [54]. The
current density in the time-resolved measurements is $1.3\times 10^{7}$ A/cm2
when averaged over the entire thickness of Mn3Sn/Pt and about $6.2\times
10^{7}$ A/cm2 in Pt, as estimated using a parallel resistor model. The
rotation frequency corresponding to this current is 1.7 GHz [43], which is
within our time resolution. The continuous decrease of the AHE signal thus
indicates that any oscillation, if present, is strongly damped and that heat-
induced demagnetization dominates over coherent effects.
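The two numerical estimates quoted in this paragraph can be reproduced at the order-of-magnitude level with the sketch below. The layer resistivities and thicknesses are placeholders rather than the measured device parameters; only the parallel-resistor logic and the comparison of the 1.7 GHz rotation period with the 50 ps resolution follow the text.

```python
# Placeholder layer parameters (NOT the values measured for this device).
rho_pt, t_pt = 25e-8, 5e-9            # Pt resistivity (Ohm*m) and thickness (m)
rho_mn3sn, t_mn3sn = 300e-8, 50e-9    # Mn3Sn resistivity (Ohm*m) and thickness (m)

# Parallel-resistor model: a common in-plane field E drives j_i = E / rho_i in each layer.
j_avg = 1.3e7 * 1e4                   # quoted thickness-averaged current density, A/m^2
E = j_avg * (t_pt + t_mn3sn) / (t_pt / rho_pt + t_mn3sn / rho_mn3sn)
j_pt = E / rho_pt                     # current density carried by the Pt layer, A/m^2

# Timescale check: the 1.7 GHz chiral-rotation period vs. the ~50 ps resolution.
period = 1.0 / 1.7e9                  # s
print(f"j_Pt ~ {j_pt / 1e4:.1e} A/cm^2; rotation period {period * 1e9:.2f} ns "
      f"vs. 0.05 ns resolution")
```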
## IV Conclusions
In summary, our work shows that the switching of chiral antiferromagnetic
order in Mn3Sn/Pt is incoherent and determined by the timed interplay of heat
and spin-orbit torques. Both effects are current-induced but heating up to
$T_{\rm N}$ occurs on a timescale of tens of ns whereas the injection of a
spin current from Pt closely follows the temporal profile of the current
pulses. Switching proceeds via a two-step demagnetization-remagnetization
process, whereby the final orientation of the g-vector is deterministic
between $\pm z$ states only if the sample cools down in the presence of a spin
current larger than a critical value. Our results provide insight into the
switching timescale and dynamics of topological antiferromagnets, showing that
it is different from both ferromagnets and collinear antiferromagnets and
limited by the sample heating rate. Additionally, this work shows that time-
resolved Hall effect measurements provide a viable method to investigate the
current-induced dynamics of antiferromagnetic order in topological materials.
_Acknowledgements_. This research was partially supported by the Swiss
National Science Foundation (Grants No. 200020-200465 and PZ00P2-179944).
During the resubmission of this manuscript we became aware of related work
reporting the multistate switching of Mn3Sn [59].
## V References
* Wadley _et al._ [2016] P. Wadley, B. Howells, J. Železný, C. Andrews, V. Hills, R. P. Campion, V. Novák, K. Olejník, F. Maccherozzi, S. S. Dhesi, S. Y. Martin, T. Wagner, J. Wunderlich, F. Freimuth, Y. Mokrousov, J. Kuneš, J. S. Chauhan, M. J. Grzybowski, A. W. Rushforth, K. Edmond, B. L. Gallagher, and T. Jungwirth, Spintronics: Electrical switching of an antiferromagnet, Science 351, 587 (2016).
* Olejník _et al._ [2017] K. Olejník, V. Schuler, X. Marti, V. Novák, Z. Kašpar, P. Wadley, R. P. Campion, K. W. Edmonds, B. L. Gallagher, J. Garces, M. Baumgartner, P. Gambardella, and T. Jungwirth, Antiferromagnetic CuMnAs multi-level memory cell with microelectronic compatibility, Nature Communications 8, 15434 (2017).
* Meinert _et al._ [2018] M. Meinert, D. Graulich, and T. Matalla-Wagner, Electrical Switching of Antiferromagnetic Mn2Au and the Role of Thermal Activation, Physical Review Applied 9, 064040 (2018).
* DuttaGupta _et al._ [2020] S. DuttaGupta, A. Kurenkov, O. A. Tretiakov, G. Krishnaswamy, G. Sala, V. Krizakova, F. Maccherozzi, S. S. Dhesi, P. Gambardella, S. Fukami, and H. Ohno, Spin-orbit torque switching of an antiferromagnetic metallic heterostructure, Nature Communications 11, 5715 (2020).
* Siddiqui _et al._ [2020] S. A. Siddiqui, J. Sklenar, K. Kang, M. J. Gilbert, A. Schleife, N. Mason, and A. Hoffmann, Metallic antiferromagnets, Journal of Applied Physics 128, 040904 (2020).
* Arpaci _et al._ [2021] S. Arpaci, V. Lopez-Dominguez, J. Shi, L. Sánchez-Tejerina, F. Garesci, C. Wang, X. Yan, V. K. Sangwan, M. A. Grayson, M. C. Hersam, G. Finocchio, and P. Khalili Amiri, Observation of current-induced switching in non-collinear antiferromagnetic IrMn3 by differential voltage measurements, Nature Communications 12, 3828 (2021), arXiv:2105.02277 .
* Manchon _et al._ [2019] A. Manchon, J. Železný, I. M. Miron, T. Jungwirth, J. Sinova, A. Thiaville, K. Garello, and P. Gambardella, Current-induced spin-orbit torques in ferromagnetic and antiferromagnetic systems, Reviews of Modern Physics 91, 035004 (2019).
* Marti _et al._ [2014] X. Marti, I. Fina, C. Frontera, J. Liu, P. Wadley, Q. He, R. J. Paull, J. D. Clarkson, J. Kudrnovský, I. Turek, J. Kuneš, D. Yi, J.-H. Chu, C. T. Nelson, L. You, E. Arenholz, S. Salahuddin, J. Fontcuberta, T. Jungwirth, and R. Ramesh, Room-temperature antiferromagnetic memory resistor, Nature Materials 13, 367 (2014).
* Chiang _et al._ [2019] C. C. Chiang, S. Y. Huang, D. Qu, P. H. Wu, and C. L. Chien, Absence of Evidence of Electrical Switching of the Antiferromagnetic Néel Vector, Physical Review Letters 123, 227203 (2019).
* Jacot _et al._ [2020] B. J. Jacot, G. Krishnaswamy, G. Sala, C. O. Avci, S. Vélez, P. Gambardella, and C.-H. Lambert, Systematic study of nonmagnetic resistance changes due to electrical pulsing in single metal layers and metal/antiferromagnet bilayers, Journal of Applied Physics 128, 173902 (2020).
* Matalla-Wagner _et al._ [2020] T. Matalla-Wagner, J.-M. Schmalhorst, G. Reiss, N. Tamura, and M. Meinert, Resistive contribution in electrical-switching experiments with antiferromagnets, Physical Review Research 2, 033077 (2020).
* Nakatsuji _et al._ [2015] S. Nakatsuji, N. Kiyohara, and T. Higo, Large anomalous Hall effect in a non-collinear antiferromagnet at room temperature, Nature 527, 212 (2015).
* Nayak _et al._ [2016] A. K. Nayak, J. E. Fischer, Y. Sun, B. Yan, J. Karel, A. C. Komarek, C. Shekhar, N. Kumar, W. Schnelle, J. Kübler, C. Felser, and S. S. P. Parkin, Large anomalous Hall effect driven by a nonvanishing Berry curvature in the noncolinear antiferromagnet Mn 3 Ge, Science Advances 2, e1501870 (2016), arXiv:1511.03128 .
* Higo _et al._ [2018a] T. Higo, D. Qu, Y. Li, C. L. Chien, Y. Otani, and S. Nakatsuji, Anomalous Hall effect in thin films of the Weyl antiferromagnet Mn3Sn, Applied Physics Letters 113, 1 (2018a), arXiv:1810.11599 .
* Balk _et al._ [2019] A. L. Balk, N. H. Sung, S. M. Thomas, P. F. Rosa, R. D. McDonald, J. D. Thompson, E. D. Bauer, F. Ronning, and S. A. Crooker, Comparing the anomalous Hall effect and the magneto-optical Kerr effect through antiferromagnetic phase transitions in Mn3Sn, Applied Physics Letters 114, 032401 (2019), arXiv:1901.07642 .
* Zhao _et al._ [2021] H. C. Zhao, H. Xia, S. Hu, Y. Y. Lv, Z. R. Zhao, J. He, E. Liang, G. Ni, L. Y. Chen, X. P. Qiu, S. M. Zhou, and H. B. Zhao, Large ultrafast-modulated Voigt effect in noncollinear antiferromagnet Mn3Sn, Nature Communications 12, 5266 (2021).
* Uchimura _et al._ [2022] T. Uchimura, J.-Y. Yoon, Y. Sato, Y. Takeuchi, S. Kanai, R. Takechi, K. Kishi, Y. Yamane, S. DuttaGupta, J. Ieda, H. Ohno, and S. Fukami, Observation of domain structure in non-collinear antiferromagnetic Mn3Sn thin films by magneto-optical Kerr effect, Applied Physics Letters 120, 172405 (2022).
* Dong _et al._ [2021] J. Dong, X. Li, G. Gurung, M. Zhu, P. Zhang, F. Zheng, E. Y. Tsymbal, and J. Zhang, Tunneling Magnetoresistance in Noncollinear Antiferromagnetic Tunnel Junctions, arXiv (2021), arXiv:2112.06568 .
* Kuroda _et al._ [2017] K. Kuroda, T. Tomita, M.-T. Suzuki, C. Bareille, A. A. Nugroho, P. Goswami, M. Ochi, M. Ikhlas, M. Nakayama, S. Akebi, R. Noguchi, R. Ishii, N. Inami, K. Ono, H. Kumigashira, A. Varykhalov, T. Muro, T. Koretsune, R. Arita, S. Shin, T. Kondo, and S. Nakatsuji, Evidence for magnetic Weyl fermions in a correlated metal, Nature Materials 16, 1090 (2017), arXiv:1710.06167 .
* Kimata _et al._ [2019] M. Kimata, H. Chen, K. Kondou, S. Sugimoto, P. K. Muduli, M. Ikhlas, Y. Omori, T. Tomita, A. H. MacDonald, S. Nakatsuji, and Y. Otani, Magnetic and magnetic inverse spin Hall effects in a non-collinear antiferromagnet, Nature 565, 627 (2019).
* Kondou _et al._ [2021] K. Kondou, H. Chen, T. Tomita, M. Ikhlas, T. Higo, A. H. MacDonald, S. Nakatsuji, and Y. Otani, Giant field-like torque by the out-of-plane magnetic spin Hall effect in a topological antiferromagnet, Nature Communications 12, 6491 (2021).
* Hu _et al._ [2021] S. Hu, D.-F. Shao, H. Yang, M. Tang, Y. Yang, W. Fan, S. Zhou, E. Y. Tsymbal, and X. Qiu, Efficient field-free perpendicular magnetization switching by a magnetic spin Hall effect, arXiv (2021), arXiv:2103.09011 .
* Ghosh _et al._ [2022] S. Ghosh, A. Manchon, and J. Železný, Unconventional Robust Spin-Transfer Torque in Noncollinear Antiferromagnetic Junctions, Physical Review Letters 128, 097702 (2022).
* Gomonay and Loktev [2014] E. V. Gomonay and V. M. Loktev, Spintronics of antiferromagnetic systems, Low Temperature Physics 40, 17 (2014).
* Baltz _et al._ [2018] V. Baltz, A. Manchon, M. Tsoi, T. Moriyama, T. Ono, and Y. Tserkovnyak, Antiferromagnetic spintronics, Reviews of Modern Physics 90, 015005 (2018), arXiv:1606.04284 .
* Šmejkal _et al._ [2018] L. Šmejkal, Y. Mokrousov, B. Yan, and A. H. MacDonald, Topological antiferromagnetic spintronics, Nature Physics 14, 242 (2018), arXiv:1706.00670 .
* Tomiyoshi and Yamaguchi [1982] S. Tomiyoshi and Y. Yamaguchi, Magnetic Structure and Weak Ferromagnetism of Mn3Sn Studied by Polarized Neutron Diffraction, Journal of the Physical Society of Japan 51, 2478 (1982).
* Ohmori _et al._ [1987] H. Ohmori, S. Tomiyoshi, H. Yamauchi, and H. Yamamoto, Spin structure and weak ferromagnetism of Mn3Sn, Journal of Magnetism and Magnetic Materials 70, 249 (1987).
* Duan _et al._ [2015] T. F. Duan, W. J. Ren, W. L. Liu, S. J. Li, W. Liu, and Z. D. Zhang, Magnetic anisotropy of single-crystalline Mn3Sn in triangular and helix-phase states, Applied Physics Letters 107, 10.1063/1.4929447 (2015).
* Song _et al._ [2020] Y. Song, Y. Hao, S. Wang, J. Zhang, Q. Huang, X. Xing, and J. Chen, Complicated magnetic structure and its strong correlation with the anomalous Hall effect in Mn3Sn, Physical Review B 101, 144422 (2020).
* Ikhlas _et al._ [2017] M. Ikhlas, T. Tomita, T. Koretsune, M. T. Suzuki, D. Nishio-Hamane, R. Arita, Y. Otani, and S. Nakatsuji, Large anomalous Nernst effect at room temperature in a chiral antiferromagnet, Nature Physics 13, 1085 (2017), arXiv:1710.00062 .
* Li _et al._ [2017] X. Li, L. Xu, L. Ding, J. Wang, M. Shen, X. Lu, Z. Zhu, and K. Behnia, Anomalous Nernst and Righi-Leduc Effects in Mn3Sn: Berry Curvature and Entropy Flow, Physical Review Letters 119, 056601 (2017), arXiv:1612.06128 .
* Ikeda _et al._ [2020] T. Ikeda, M. Tsunoda, M. Oogane, S. Oh, T. Morita, and Y. Ando, Fabrication and evaluation of highly c-plane oriented Mn3Sn thin films, AIP Advances 10, 015310 (2020).
* Nakano _et al._ [2021] T. Nakano, T. Higo, A. Kobayashi, S. Miwa, S. Nakatsuji, and K. Yakushiji, Fabrication of polycrystalline Weyl antiferromagnetic Mn3Sn thin films on various seed layers, Physical Review Materials 5, 054402 (2021).
* Higo _et al._ [2018b] T. Higo, H. Man, D. B. Gopman, L. Wu, T. Koretsune, O. M. J. V. Erve, Y. P. Kabanov, D. Rees, Y. Li, M.-t. Suzuki, S. Patankar, M. Ikhlas, C. L. Chien, R. Arita, R. D. Shull, J. Orenstein, and S. Nakatsuji, Large magneto-optical Kerr effect and imaging of magnetic octupole domains in an antiferromagnetic metal, Nature Photonics 12, 73 (2018b).
* Ikeda _et al._ [2019] T. Ikeda, M. Tsunoda, M. Oogane, S. Oh, T. Morita, and Y. Ando, Improvement of Large Anomalous Hall Effect in Polycrystalline Antiferromagnetic Mn3+xSn Thin Films, IEEE Transactions on Magnetics 55, 1 (2019).
* You _et al._ [2019] Y. You, X. Chen, X. Zhou, Y. Gu, R. Zhang, F. Pan, and C. Song, Anomalous Hall Effect–Like Behavior with In-Plane Magnetic Field in Noncollinear Antiferromagnetic Mn3Sn Films, Advanced Electronic Materials 5, 1800818 (2019).
* Yoon _et al._ [2020] J. Yoon, Y. Takeuchi, R. Itoh, S. Kanai, S. Fukami, and H. Ohno, Crystal orientation and anomalous Hall effect of sputter-deposited non-collinear antiferromagnetic Mn3Sn thin films, Applied Physics Express 13, 013001 (2020).
* Yoon _et al._ [2021] J.-Y. Yoon, Y. Takeuchi, S. DuttaGupta, Y. Yamane, S. Kanai, J. Ieda, H. Ohno, and S. Fukami, Correlation of anomalous Hall effect with structural parameters and magnetic ordering in Mn3+xSn1-x thin films, AIP Advances 11, 065318 (2021).
* Khadka _et al._ [2020] D. Khadka, T. R. Thapaliya, S. Hurtado Parra, X. Han, J. Wen, R. F. Need, P. Khanal, W. Wang, J. Zang, J. M. Kikkawa, L. Wu, and S. X. Huang, Kondo physics in antiferromagnetic Weyl semimetal Mn3+xSn1-x films, Science Advances 6, 10.1126/sciadv.abc1977 (2020).
* Tsai _et al._ [2020] H. Tsai, T. Higo, K. Kondou, T. Nomoto, A. Sakai, A. Kobayashi, T. Nakano, K. Yakushiji, R. Arita, S. Miwa, Y. Otani, and S. Nakatsuji, Electrical manipulation of a topological antiferromagnetic state, Nature 580, 608 (2020).
* Tsai _et al._ [2021a] H. Tsai, T. Higo, K. Kondou, A. Kobayashi, T. Nakano, K. Yakushiji, S. Miwa, Y. Otani, and S. Nakatsuji, Spin–orbit torque switching of the antiferromagnetic state in polycrystalline Mn3Sn/Cu/heavy metal heterostructures, AIP Advances 11, 045110 (2021a).
* Takeuchi _et al._ [2021] Y. Takeuchi, Y. Yamane, J. Y. Yoon, R. Itoh, B. Jinnai, S. Kanai, J. Ieda, S. Fukami, and H. Ohno, Chiral-spin rotation of non-collinear antiferromagnet by spin–orbit torque, Nature Materials 20, 1364 (2021).
* Deng _et al._ [2021] Y. Deng, R. Li, and X. Liu, Thickness dependent anomalous Hall effect in noncollinear antiferromagnetic Mn3Sn polycrystalline thin films, Journal of Alloys and Compounds 874, 159910 (2021).
* Tsai _et al._ [2021b] H. Tsai, T. Higo, K. Kondou, S. Sakamoto, A. Kobayashi, T. Matsuo, S. Miwa, Y. Otani, and S. Nakatsuji, Large Hall Signal due to Electrical Switching of an Antiferromagnetic Weyl Semimetal State, Small Science 1, 2000025 (2021b).
* Miron _et al._ [2011] I. M. Miron, K. Garello, G. Gaudin, P.-J. Zermatten, M. V. Costache, S. Auffret, S. Bandiera, B. Rodmacq, A. Schuhl, and P. Gambardella, Perpendicular switching of a single ferromagnetic layer induced by in-plane current injection, Nature 476, 189 (2011).
* Baumgartner _et al._ [2017] M. Baumgartner, K. Garello, J. Mendil, C. O. Avci, E. Grimaldi, C. Murer, J. Feng, M. Gabureac, C. Stamm, Y. Acremann, S. Finizio, S. Wintz, J. Raabe, and P. Gambardella, Spatially and time-resolved magnetization dynamics driven by spin–orbit torques, Nature Nanotechnology 12, 980 (2017).
* Fujita [2017] H. Fujita, Field‐free, spin‐current control of magnetization in non‐collinear chiral antiferromagnets, physica status solidi (RRL) – Rapid Research Letters 11, 1600360 (2017).
* Yamane _et al._ [2019] Y. Yamane, O. Gomonay, and J. Sinova, Dynamics of noncollinear antiferromagnetic textures driven by spin current injection, Physical Review B 100, 054415 (2019).
* Yan _et al._ [2022] G. Q. Yan, S. Li, H. Lu, M. Huang, Y. Xiao, L. Wernert, J. A. Brock, E. E. Fullerton, H. Chen, H. Wang, and C. R. Du, Quantum Sensing and Imaging of Spin–Orbit‐Torque‐Driven Spin Dynamics in the Non‐Collinear Antiferromagnet Mn3Sn, Advanced Materials , 2200327 (2022).
* Nogués and Schuller [1999] J. Nogués and I. K. Schuller, Exchange bias, Journal of Magnetism and Magnetic Materials 192, 203 (1999).
* Grimaldi _et al._ [2020] E. Grimaldi, V. Krizakova, G. Sala, F. Yasin, S. Couet, G. Sankar Kar, K. Garello, and P. Gambardella, Single-shot dynamics of spin–orbit torque and spin transfer torque switching in three-terminal magnetic tunnel junctions, Nature Nanotechnology 15, 111 (2020).
* [53] See Supplemental Material at [URL] for details on the sample fabrication, crystal structure, time-resolved measurements, switching of Mn3Sn/W and W/Mn3Sn/Pt, temperature calibration, orientation of domains probed by the ANE and dependence of the switching amplitude on applied field.
* Sala _et al._ [2021] G. Sala, V. Krizakova, E. Grimaldi, C. H. Lambert, T. Devolder, and P. Gambardella, Real-time Hall-effect detection of current-induced magnetization dynamics in ferrimagnets, Nature Communications 12, 656 (2021), arXiv:2102.00716 .
* Avci _et al._ [2014] C. O. Avci, K. Garello, M. Gabureac, A. Ghosh, A. Fuhrer, S. F. Alvarado, and P. Gambardella, Interplay of spin-orbit torque and thermoelectric effects in ferromagnet/normal-metal bilayers, Physical Review B 90, 224427 (2014).
* Krizakova _et al._ [2021] V. Krizakova, E. Grimaldi, K. Garello, G. Sala, S. Couet, G. S. Kar, and P. Gambardella, Interplay of Voltage Control of Magnetic Anisotropy, Spin-Transfer Torque, and Heat in the Spin-Orbit-Torque Switching of Three-Terminal Magnetic Tunnel Junctions, Physical Review Applied 15, 054055 (2021).
* Wörnle _et al._ [2019] M. S. Wörnle, P. Welter, Z. Kašpar, K. Olejník, V. Novák, R. P. Campion, P. Wadley, T. Jungwirth, C. L. Degen, and P. Gambardella, Current-induced fragmentation of antiferromagnetic domains, arXiv (2019), arXiv:1912.05287 .
* Kašpar _et al._ [2021] Z. Kašpar, M. Surýnek, J. Zubáč, F. Krizek, V. Novák, R. P. Campion, M. S. Wörnle, P. Gambardella, X. Marti, P. Němec, K. W. Edmonds, S. Reimers, O. J. Amin, F. Maccherozzi, S. S. Dhesi, P. Wadley, J. Wunderlich, K. Olejník, and T. Jungwirth, Quenching of an antiferromagnet into high resistivity states using electrical or ultrashort optical pulses, Nature Electronics 4, 30 (2021).
* Pal _et al._ [2022] B. Pal, B. K. Hazra, B. Göbel, J.-C. Jeon, A. K. Pandeya, A. Chakraborty, O. Busch, A. K. Srivastava, H. Deniz, J. M. Taylor, H. Meyerheim, I. Mertig, S.-H. Yang, and S. S. P. Parkin, Setting of the magnetic structure of chiral kagome antiferromagnets by a seeded spin-orbit torque, Science Advances 8, 10.1126/sciadv.abo5930 (2022).
This follows from the Hurwitz-Chevalley-Weil formula, see [MO13, Proposition
5.9]. Alternatively, see [CT99, Section 5]. ∎
Fix a point $F_{0}\in X_{0}(\mathbb{C})$ and let
$C=\\{z^{5}=F_{0}(x,y)\\}\subset\mathbb{P}^{2}_{\mathbb{C}}$ (6)
be the corresponding cyclic cover of $\mathbb{P}^{1}_{\mathbb{C}}$. Let
$\left(A=J(C)=\text{Pic}^{0}(C),\quad\lambda\colon
A\to{\widehat{A}},\quad\iota\colon\mathcal{O}_{K}=\mathbb{Z}[\zeta]\to\textnormal{End}(A)\right)$
be the Jacobian of $C$, viewed as a principally polarized abelian variety of
dimension six equipped with an $\mathcal{O}_{K}$-action compatible with the
polarization, see (4.43).
Write $\Lambda={\mathrm{H}}_{1}(A(\mathbb{C}),\mathbb{Z})$. We have
$\Lambda\otimes_{\mathbb{Z}}\mathbb{C}={\mathrm{H}}^{-1,0}\oplus{\mathrm{H}}^{0,-1}$,
the Hodge decomposition of $\Lambda\otimes_{\mathbb{Z}}\mathbb{C}$. Define a
CM-type $\Psi\subset\textnormal{Hom}(K,\mathbb{C})$ as follows:
$\displaystyle\tau_{i}:K\to\mathbb{C},\quad\tau_{1}(\zeta)=\zeta^{3},\quad\tau_{2}(\zeta)=\zeta^{4};\quad\quad\Psi=\left\\{\tau_{1},\tau_{2}\right\\}.$
(7)
Since
${\mathrm{H}}^{-1,0}=\textnormal{Lie}(A)={\mathrm{H}}^{1}(C,\mathcal{O}_{C})={\mathrm{H}}^{0,1}(C)$,
Lemma 5.6 implies that
$\dim_{\mathbb{C}}{\mathrm{H}}^{-1,0}_{\tau_{1}}=2,\;\;\;\dim_{\mathbb{C}}{\mathrm{H}}^{-1,0}_{\tau_{1}\sigma}=1,\;\;\;\dim_{\mathbb{C}}{\mathrm{H}}^{-1,0}_{\tau_{2}}=3,\;\;\;\dim_{\mathbb{C}}{\mathrm{H}}^{-1,0}_{\tau_{2}\sigma}=0.$
(8)
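For orientation, a standard Riemann–Hurwitz computation (not part of Lemma 5.6) confirms that these eigenspace dimensions sum to the genus of $C$: the covering $C\to\mathbb{P}^{1}$ has degree $5$ and is totally ramified over the five zeroes of $F_{0}$, so
$2g(C)-2=5\cdot(2\cdot 0-2)+5\cdot(5-1)=10,\qquad g(C)=6=2+1+3+0,$
in agreement with $\dim A=6$.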
Define $\eta=5/(\zeta-\zeta^{-1})$. Then ${\mathfrak{D}}_{K}=(\eta)$ (see
Lemma 4.52). Let
$E:\Lambda\times\Lambda\to\mathbb{Z}$
be the alternating form corresponding to the polarization $\lambda$ of the
abelian variety $A$. For $a\in\mathcal{O}_{K}$ and $x,y\in\Lambda$, we have
$E(\iota(a)x,y)=E(x,\iota(a^{\sigma})y)$. Define
$T\colon\Lambda\times\Lambda\to{\mathfrak{D}}_{K}^{-1},\quad
T(x,y)=\frac{1}{5}\sum_{j=0}^{4}\zeta^{j}E\left(x,\iota(\zeta)^{j}y\right).$
By Example 4.40.2, this is the skew-hermitian form corresponding to $E$ via
Lemma 4.39. We obtain a hermitian form on the free $\mathcal{O}_{K}$-module
$\Lambda$ as follows:
${\mathfrak{h}}:\Lambda\times\Lambda\to\mathcal{O}_{K},\;\;\;{\mathfrak{h}}(x,y)=\eta
T(x,y)=(\zeta-\zeta^{-1})^{-1}\sum_{j=0}^{4}\zeta^{j}E\left(x,\iota(\zeta)^{j}y\right).$
(9)
By Lemma 4.39, the hermitian lattice $(\Lambda,{\mathfrak{h}})$ is unimodular,
because $(\Lambda,E)$ is unimodular. For each embedding
$\varphi:K\to\mathbb{C}$, the restriction of the hermitian form
$\varphi(\eta)\cdot E_{\mathbb{C}}(x,\bar{y})$ on $\Lambda_{\mathbb{C}}$ to
$(\Lambda_{\mathbb{C}})_{\varphi}\subset\Lambda_{\mathbb{C}}$ coincides with
${\mathfrak{h}}^{\varphi}$ by Lemma 4.41. Since
$\Im(\tau_{i}(\zeta-\zeta^{-1}))<0$ for $i=1,2$, the signature of
${\mathfrak{h}}^{\tau_{i}}$ on
$V_{i}=\Lambda\otimes_{\mathcal{O}_{K},\tau_{i}}\mathbb{C}$ is
$\displaystyle{\textnormal{sign}}({\mathfrak{h}}^{\tau_{i}})=\begin{cases}({\mathrm{h}}^{-1,0}_{\tau_{1}},h^{0,-1}_{\tau_{1}})=(2,1)&\textnormal{
for }i=1,\quad\textnormal{ and }\\\
({\mathrm{h}}^{-1,0}_{\tau_{2}},h^{0,-1}_{\tau_{2}})=(3,0)&\textnormal{ for
}i=2.\end{cases}$ (10)
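As a quick dimension count (not needed below, but a useful consistency check): since $\Lambda$ is free of rank three over $\mathcal{O}_{K}$, each $V_{i}=\Lambda\otimes_{\mathcal{O}_{K},\tau_{i}}\mathbb{C}$ is three-dimensional, which matches the signatures $(2,1)$ and $(3,0)$ in (10) and the eigenspace dimensions in (8).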
##### 2 The monodromy representation
Consider the real algebraic variety $X_{0}$ introduced in Section 1. Let
$D\subset\textnormal{GL}_{2}(\mathbb{C})$ be the subgroup
$D=\left\\{\zeta^{i}\cdot
I_{2}\right\\}\subset\textnormal{GL}_{2}(\mathbb{C})$ of scalar matrices
$\zeta^{i}\cdot I_{2}$, where $I_{2}\in\textnormal{GL}_{2}(\mathbb{C})$ is the
identity matrix of rank two, and define
$\displaystyle{\mathbb{G}}(\mathbb{C})=\textnormal{GL}_{2}(\mathbb{C})/D.$
(11)
The group $\mathbb{G}(\mathbb{C})$ acts from the left on $X_{0}(\mathbb{C})$
in the following way: if $F(x,y)\in\mathbb{C}[x,y]$ is a binary quintic, we
may view $F$ as a function $\mathbb{C}^{2}\to\mathbb{C}$, and define $g\cdot
F=F(g^{-1})$ for $g\in\mathbb{G}(\mathbb{C})$. This gives a canonical
isomorphism of complex analytic orbifolds
${\mathcal{M}}_{0}(\mathbb{C})=\mathbb{G}(\mathbb{C})\setminus
X_{0}(\mathbb{C}),$
where ${\mathcal{M}}_{0}$ is the moduli stack of smooth binary quintics.
Consider two families
$\pi:{\mathscr{C}}\to X_{0}\quad{\textnormal{ and }}\quad\phi:J\to X_{0},$
defined as follows. We define $\pi$ as the universal family of cyclic covers
$C\to\mathbb{P}^{1}$ ramified along a smooth binary quintic
$\\{F=0\\}\subset\mathbb{P}^{1}$. We let $\phi$ be the relative Jacobian of
$\pi$. By Section 1, $\phi$ is an abelian scheme of relative dimension six
over $X_{0}$, with $\mathcal{O}_{K}$-action of signature $\\{(2,1),(3,0)\\}$
with respect to $\Psi=\\{\tau_{1},\tau_{2}\\}$.
Let ${\mathbb{V}}=R^{1}\pi_{\ast}\mathbb{Z}$ be the local system of hermitian
$\mathcal{O}_{K}$-modules underlying the abelian scheme $J/X_{0}$. Attached to
${\mathbb{V}}$, we have a representation
$\rho^{\prime}:\pi_{1}(X_{0}(\mathbb{C}),F_{0})\to\Gamma,\quad\Gamma=\textnormal{Aut}_{\mathcal{O}_{K}}(\Lambda,{\mathfrak{h}}),$
whose composition with the quotient map $\Gamma\to P\Gamma=\Gamma/\mu_{K}$
defines a homomorphism
$\rho:\pi_{1}(X_{0}(\mathbb{C}),F_{0})\to P\Gamma.$ (12)
We shall see that $\rho$ is surjective, see Corollary 5.8 below.
##### 3 Marked binary quintics
For $F\in X_{0}(\mathbb{C})$, define $Z_{F}$ as the hypersurface
$Z_{F}=\\{F=0\\}\subset\mathbb{P}^{1}_{\mathbb{C}}.$
A marking of $F$ is a ring isomorphism
$m:{\mathrm{H}}^{0}(Z_{F}(\mathbb{C}),\mathbb{Z})\xrightarrow{\sim}\mathbb{Z}^{5}$.
To give a marking is to give a labelling of the points $p\in
Z_{F}(\mathbb{C})$. Let ${\mathcal{N}}_{0}$ be the space of marked binary
quintics $(F,m)$; this is a manifold, equipped with a holomorphic map
$\displaystyle{\mathcal{N}}_{0}\to X_{0}(\mathbb{C}).$ (13)
Let
$\psi:{\mathcal{Z}}\to X_{0}(\mathbb{C})$
be the universal complex binary quintic, and consider the local system
$H=\psi_{\ast}\mathbb{Z}$ of stalk
$H_{F}={\mathrm{H}}^{0}(Z_{F}(\mathbb{C}),\mathbb{Z})$ for $F\in
X_{0}(\mathbb{C})$. Then $H$ corresponds to a monodromy representation
$\tau:\pi_{1}(X_{0}(\mathbb{C}),F_{0})\to{\mathfrak{S}}_{5}.$ (14)
It can be shown that $\tau$ is surjective using the results of [Bea86a]. This
implies that (13) is a covering space, i.e. that ${\mathcal{N}}_{0}$ is
connected.
If we choose a marking
$m_{0}:{\mathrm{H}}^{0}(Z_{F_{0}}(\mathbb{C}),\mathbb{Z})\cong\mathbb{Z}^{5}$
lying over our base point $F_{0}\in X_{0}(\mathbb{C})$, we obtain an embedding
$\pi_{1}\left({\mathcal{N}}_{0},m_{0}\right)\hookrightarrow\pi_{1}(X_{0}(\mathbb{C}),F_{0})$,
whose composition with $\rho$ in (12) defines a homomorphism
$\mu:\pi_{1}({\mathcal{N}}_{0},m_{0})\to P\Gamma.$ (15)
Define $\theta=\zeta-\zeta^{-1}$ and consider the $3$-dimensional
${\mathbb{F}}_{5}$ vector space $\Lambda/\theta\Lambda$ and the quadratic
space
$W\coloneqq\left(\Lambda/\theta\Lambda,q\right),$
where $q$ is the quadratic form obtained by reducing ${\mathfrak{h}}$ modulo
$\theta\Lambda$. Define two groups $\Gamma_{\theta}$ and $P\Gamma_{\theta}$ as
follows:
$\Gamma_{\theta}=\textnormal{Ker}\left(\Gamma\to\textnormal{Aut}(W)\right),\quad
P\Gamma_{\theta}=\textnormal{Ker}\left(P\Gamma\to
P\textnormal{Aut}(W)\right)\subset\text{PU}(2,1).$
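For context, the fact that $\Lambda/\theta\Lambda$ is a three-dimensional ${\mathbb{F}}_{5}$-vector space follows from the standard ramification of $5$ in $\mathbb{Z}[\zeta]$:
$(5)=(1-\zeta)^{4},\qquad(\theta)=(\zeta-\zeta^{-1})=(1-\zeta),\qquad\mathcal{O}_{K}/\theta\mathcal{O}_{K}\cong{\mathbb{F}}_{5},$
so that $\Lambda/\theta\Lambda\cong(\mathcal{O}_{K}/\theta\mathcal{O}_{K})^{3}\cong{\mathbb{F}}_{5}^{3}$, since $\Lambda$ is free of rank three over $\mathcal{O}_{K}$.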
Remark that the composition ${\mathcal{N}}_{0}\to X_{0}(\mathbb{C})\to
X_{s}(\mathbb{C})$ admits an essentially unique completion
${\mathcal{N}}_{s}\to X_{s}(\mathbb{C})$, see [Fox57] or [DM86, §8]. Here
${\mathcal{N}}_{s}$ is a manifold and ${\mathcal{N}}_{s}\to X_{s}(\mathbb{C})$ is
a ramified covering space.
###### Proposition 5.7.
The image of $\mu$ in (15) is the group $P\Gamma_{\theta}$, and the induced
homomorphism
$\pi_{1}(X_{0}(\mathbb{C}),F_{0})/\pi_{1}\left({\mathcal{N}}_{0},m_{0}\right)={\mathfrak{S}}_{5}\to
P\Gamma/P\Gamma_{\theta}$ is an isomorphism. In other words, we obtain the
following commutative diagram with exact rows:
$\begin{array}{ccccccccc}
0&\longrightarrow&\pi_{1}({\mathcal{N}}_{0},m_{0})&\longrightarrow&\pi_{1}(X_{0}(\mathbb{C}),F_{0})&\xrightarrow{\;\tau\;}&{\mathfrak{S}}_{5}&\longrightarrow&0\\
&&\downarrow{\scriptstyle\mu}&&\downarrow{\scriptstyle\rho}&&\downarrow{\scriptstyle\gamma\,\sim}&&\\
0&\longrightarrow&P\Gamma_{\theta}&\longrightarrow&P\Gamma&\longrightarrow&P\textnormal{Aut}(W)&\longrightarrow&0.
\end{array}$
(16)
###### Proof.
Consider the quotient
$Q=\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{0}=\textnormal{PGL}_{2}(\mathbb{C})\setminus
P_{0}(\mathbb{C}),$
where $P_{0}\subset(\mathbb{P}^{1}_{\mathbb{R}})^{5}$ is the subvariety of
distinct five-tuples, see Section 5. Let $0\in Q$ be the image of
$m_{0}\in{\mathcal{N}}_{0}$. In [DM86], Deligne and Mostow define a hermitian
space bundle $B_{Q}\to Q$ over $Q$ whose fiber over $0\in Q$ is
$\mathbb{C}H^{2}$. Consequently, writing
$V_{1}=\Lambda\otimes_{\mathcal{O}_{K},\tau_{1}}\mathbb{C}$, this gives a
monodromy representation
$\pi_{1}(Q,0)\to\textnormal{PU}(V_{1},{\mathfrak{h}}^{\tau_{1}})\cong\textnormal{PU}(2,1)$
whose image we denote by $\Gamma_{\text{DM}}$. Kondō has shown that in fact,
$\Gamma_{\text{DM}}=P\Gamma_{\theta}$ [Kon07, Theorem 7.1]. Since
${\mathcal{N}}_{0}\to Q$ is a covering space (the action of
$\mathbb{G}(\mathbb{C})$ on ${\mathcal{N}}_{0}$ being free) we have an
embedding $\pi_{1}({\mathcal{N}}_{0},m_{0})\hookrightarrow\pi_{1}(Q,0)$ whose
composition with $\pi_{1}(Q,0)\to\textnormal{PU}(2,1)$ is the map
$\mu:\pi_{1}({\mathcal{N}}_{0},m_{0})\to P\Gamma\subset\textnormal{PU}(2,1)$.
To prove that the image of $\mu$ is $P\Gamma_{\theta}$, it suffices to give a
section of the map ${\mathcal{N}}_{0}\to Q$. Indeed, such a section induces a
retraction of $\pi_{1}({\mathcal{N}}_{0},m_{0})\hookrightarrow\pi_{1}(Q,0)$,
so that the images of these two groups in $\textnormal{PU}(2,1)$ are the same.
To define such a section, observe that if
$\Delta\subset\mathbb{P}^{1}(\mathbb{C})^{5}$ is the union of all hyperplanes
$\\{x_{i}=x_{j}\\}\subset\mathbb{P}^{1}(\mathbb{C})^{5}$ for $i\neq j$, then
$\displaystyle Q=\textnormal{PGL}_{2}(\mathbb{C})\setminus P_{0}(\mathbb{C})$
$\displaystyle=\textnormal{PGL}_{2}(\mathbb{C})\setminus\left(\mathbb{P}^{1}(\mathbb{C})^{5}-\Delta\right)$
$\displaystyle\cong\\{(x_{4},x_{5})\in\mathbb{C}^{2}:x_{i}\neq 0,1\textnormal{ for }i=4,5,\textnormal{ and }x_{4}\neq x_{5}\\}.$
The section $Q\to{\mathcal{N}}_{0}$ may then be defined by sending
$(x_{4},x_{5})$ to the binary quintic
$F(x,y)=x(x-y)y(x-x_{4}\cdot y)(x-x_{5}\cdot y)\in X_{0}(\mathbb{C}),$
marked by the labelling of its roots
$\\{[0:1],[1:1],[1:0],[x_{4}:1],[x_{5}:1]\\}$.
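Indeed, the composition $Q\to{\mathcal{N}}_{0}\to Q$ sends $(x_{4},x_{5})$ to the $\textnormal{PGL}_{2}(\mathbb{C})$-orbit of the ordered five-tuple $([0:1],[1:1],[1:0],[x_{4}:1],[x_{5}:1])$, which is the point of $Q$ we started with, so this does define a section.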
It remains to prove that the homomorphism $\gamma:{\mathfrak{S}}_{5}\to
P\Gamma/P\Gamma_{\theta}$ appearing on the right in (16) is an isomorphism. We
use Theorem 5.34, proven by Shimura in [Shi64], which says that
$(\Lambda,{\mathfrak{h}})\cong\left(\mathcal{O}_{K}^{3},\textnormal{diag}\left(1,1,\frac{1-\sqrt{5}}{2}\right)\right).$
It follows that
$P\Gamma/P\Gamma_{\theta}=P\textnormal{Aut}(W)\cong\textnormal{PO}_{3}({\mathbb{F}}_{5})\cong{\mathfrak{S}}_{5}.$
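As a sanity check on the orders (using the standard formula $\left|\textnormal{O}_{3}({\mathbb{F}}_{q})\right|=2q(q^{2}-1)$ for odd $q$, not needed for the argument):
$\left|P\textnormal{Aut}(W)\right|=\tfrac{1}{2}\left|\textnormal{O}_{3}({\mathbb{F}}_{5})\right|=\tfrac{1}{2}\cdot 2\cdot 5\cdot(5^{2}-1)=120=\left|{\mathfrak{S}}_{5}\right|.$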
Next, consider the manifold ${\mathcal{N}}_{s}$. Remark that
${\mathfrak{S}}_{5}$ embeds into
$\textnormal{Aut}(\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{s})$.
Moreover, there is a natural isomorphism
$p\colon\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{s}\cong P\Gamma_{\theta}\setminus\mathbb{C}H^{2},$
see [DM86] and [Kon07].
(See also (18).) The two compositions
${\mathfrak{S}}_{5}\subset\textnormal{Aut}(\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{s})\cong\textnormal{Aut}(P\Gamma_{\theta}\setminus\mathbb{C}H^{2})\;{\textnormal{
and }}\;{\mathfrak{S}}_{5}\to
P\Gamma/P\Gamma_{\theta}\subset\textnormal{Aut}(P\Gamma_{\theta}\setminus\mathbb{C}H^{2})$
agree, because of the equivariance of $p$ with respect to $\gamma$. Thus,
$\gamma$ is injective. ∎
###### Corollary 5.8.
The monodromy representation $\rho$ in (12) is surjective. $\hfill\qed$
##### 4 Framed binary quintics
By a framing of a point $F\in X_{0}(\mathbb{C})$ we mean a projective
equivalence class $[f]$, where
$f\colon\mathbb{V}_{F}={\mathrm{H}}^{1}(C_{F}(\mathbb{C}),\mathbb{Z})\to\Lambda$
is an $\mathcal{O}_{K}$-linear isometry: two such isometries are in the same
class if and only if they differ by an element in $\mu_{K}$. Let
${\mathcal{F}}_{0}$ be the collection of all framings of all points $x\in
X_{0}(\mathbb{C})$. The set ${\mathcal{F}}_{0}$ is naturally a complex
manifold, by arguments similar to those in [ACT02a]. Note that Corollary 5.8
implies that ${\mathcal{F}}_{0}$ is connected, hence
$\displaystyle{\mathcal{F}}_{0}\to X_{0}(\mathbb{C})$ (17)
is a covering, with Galois group $P\Gamma$.
###### Lemma 5.9.
The spaces $P\Gamma_{\theta}\setminus{\mathcal{F}}_{0}$ and
${\mathcal{N}}_{0}$ are isomorphic as covering spaces of $X_{0}(\mathbb{C})$.
In particular, there is a covering map ${\mathcal{F}}_{0}\to{\mathcal{N}}_{0}$
with Galois group $P\Gamma_{\theta}$.
###### Proof.
We have $P\Gamma/P\Gamma_{\theta}\cong{\mathfrak{S}}_{5}$ as quotients of
$P\Gamma$, see Proposition 5.7. ∎
###### Lemma 5.10.
$\Delta\coloneqq X_{s}(\mathbb{C})-X_{0}(\mathbb{C})$ is an irreducible normal
crossings divisor of $X_{s}(\mathbb{C})$.
###### Proof.
The proof is similar to the proof of Proposition 6.7 in [Bea09]. ∎
###### Lemma 5.11.
The local monodromy transformations of ${\mathcal{F}}_{0}\to
X_{0}(\mathbb{C})$ around every $x\in\Delta$ are of finite order. More
precisely, if $x\in\Delta$ lies on the intersection of $k$ local components of
$\Delta$, then the local monodromy group around $x$ is isomorphic to
$(\mathbb{Z}/10)^{k}$.
###### Proof.
See [DM86, Proposition 9.2] or [CT99, Proposition 6.1] for the generic case,
i.e. when a quintic $Z=\\{F=0\\}\subset\mathbb{P}^{1}_{\mathbb{C}}$ acquires
only one node. In this case, the local equation of the singularity is
$x^{2}=0$, hence the curve $C_{F}$ acquires a singularity of the form
$y^{5}+x^{2}=0$. If the quintic acquires two nodes, then $C_{F}$ acquires two
such singularities; the vanishing cohomology splits as an orthogonal direct
sum, hence the local monodromy transformations commute. ∎
In the following corollary, we let
$D=\left\\{z\in\mathbb{C}\mid\left|z\right|<1\right\\}$ denote the open unit
disc, and $D^{\ast}=D-\\{0\\}$ the punctured open unit disc.
###### Corollary 5.12.
There is an essentially unique branched cover $\pi:{\mathcal{F}}_{s}\to
X_{s}(\mathbb{C})$, with ${\mathcal{F}}_{s}$ a complex manifold, such that for
any $x\in\Delta$, any open $x\in U\subset X_{s}(\mathbb{C})$ with $U\cong
D^{k}\times D^{6-k}$ and $U\cap X_{0}(\mathbb{C})\cong(D^{\ast})^{k}\times
D^{6-k}$, and any connected component $U^{\prime}$ of
$\pi^{-1}(U)\subset{\mathcal{F}}_{s}$, there is an isomorphism
$U^{\prime}\cong D^{k}\times D^{6-k}$ such that the composition
$D^{k}\times D^{6-k}\cong U^{\prime}\to U\cong D^{6}\;\text{ is the map
}\;(z_{1},\dotsc,z_{6})\mapsto(z_{1}^{r_{1}},\dotsc,z_{k}^{r_{k}},z_{k+1},\dotsc,z_{6}).$
###### Proof.
See [Bea09, Lemma 7.2]. See also [Fox57] and [DM86, Section 8]. ∎
The group $\mathbb{G}(\mathbb{C})=\textnormal{GL}_{2}(\mathbb{C})/D$ (see
(11)) acts on ${\mathcal{F}}_{0}$ over its action on $X_{0}$. Explicitly, if
$g\in\mathbb{G}(\mathbb{C})$ and if $([\phi],\phi:\mathbb{V}_{F}\cong\Lambda)$
is a framing of $F\in X_{0}(\mathbb{C})$, then
$\left([\phi\circ g^{\ast}],\phi\circ g^{\ast}:\mathbb{V}_{g\cdot
F}\to\Lambda\right)$
is a framing of $g\cdot F\in X_{0}(\mathbb{C})$. This is a left action. The
group $P\Gamma$ also acts on ${\mathcal{F}}_{0}$ from the left, and the
actions of $P\Gamma$ and $\mathbb{G}(\mathbb{C})$ on ${\mathcal{F}}_{0}$
commute. By functoriality of the Fox completion, the action of
$\mathbb{G}(\mathbb{C})$ on ${\mathcal{F}}_{0}$ extends to an action of
$\mathbb{G}(\mathbb{C})$ on ${\mathcal{F}}_{s}$.
###### Lemma 5.13.
The group $\mathbb{G}(\mathbb{C})=\textnormal{GL}_{2}(\mathbb{C})/D$ acts
freely on ${\mathcal{F}}_{s}$.
###### Proof.
The functoriality of the Fox completion gives an action of
$\mathbb{G}(\mathbb{C})$ on ${\mathcal{N}}_{s}$ such that, by Lemma 5.9, there
is a $\mathbb{G}(\mathbb{C})$-equivariant commutative diagram
$\begin{array}{ccc}
P\Gamma_{\theta}\setminus{\mathcal{F}}_{s}&\xrightarrow{\;\sim\;}&{\mathcal{N}}_{s}\\
&\searrow&\downarrow\\
&&X_{s}(\mathbb{C}).
\end{array}$
In particular, it suffices to show that $\mathbb{G}(\mathbb{C})$ acts freely
on ${\mathcal{N}}_{s}$. Note that ${\mathcal{N}}_{0}$ admits a natural
$\mathbb{G}_{m}$-covering map ${\mathcal{N}}_{0}\to P_{0}(\mathbb{C})$ where
$P_{0}(\mathbb{C})\subset\mathbb{P}^{1}(\mathbb{C})^{5}$ is the space of
distinct ordered five-tuples in $\mathbb{P}^{1}(\mathbb{C})$ introduced in
Section 1. Consequently, there is a $\mathbb{G}_{m}$-quotient map
${\mathcal{N}}_{s}\to P_{s}(\mathbb{C})$, where $P_{s}(\mathbb{C})$ is the
space of stable ordered five-tuples, and this map is equivariant for the
homomorphism
$\textnormal{GL}_{2}(\mathbb{C})\to\textnormal{PGL}_{2}(\mathbb{C})$.
Let $g\in\textnormal{GL}_{2}(\mathbb{C})$ and $x\in{\mathcal{N}}_{s}$ such
that $gx=x$. It is clear that $\textnormal{PGL}_{2}(\mathbb{C})$ acts freely
on $P_{s}(\mathbb{C})$. Therefore, $g=\lambda\in\mathbb{C}^{\ast}$. Let $F\in
X_{s}(\mathbb{C})$ be the image of $x\in{\mathcal{N}}_{s}$; then
$gF(x,y)=F(g^{-1}(x,y))=F(\lambda^{-1}x,\lambda^{-1}y)=\lambda^{-5}F(x,y).$
The equality $gF=F$ implies that $\lambda^{5}=1\in\mathbb{C}$, from which we
conclude that $\lambda\in\langle\zeta\rangle$. Therefore,
$[g]=[\textnormal{id}]\in\mathbb{G}(\mathbb{C})=\textnormal{GL}_{2}(\mathbb{C})/D$.
∎
##### 5 Complex uniformization
Consider the hermitian space
$V_{1}=\Lambda\otimes_{\mathcal{O}_{K},\tau_{1}}\mathbb{C}$; define
$\mathbb{C}H^{2}$ to be the space of negative lines in $V_{1}$. Using
Proposition 4.45 we see that the abelian scheme $J\to X_{0}$ induces a
$\mathbb{G}(\mathbb{C})$-equivariant morphism
${\mathcal{P}}:{\mathcal{F}}_{0}\to\mathbb{C}H^{2}$. Explicitly, if
$(F,[f])\in{\mathcal{F}}_{0}$ is the framing
$[f:{\mathrm{H}}^{1}(C_{F}(\mathbb{C}),\mathbb{Z})\xrightarrow{\sim}\Lambda]$
of the binary quintic $F\in X_{0}(\mathbb{C})$, and $A_{F}$ is the Jacobian of
the curve $C_{F}$, then
${\mathcal{P}}:{\mathcal{F}}_{0}\to\mathbb{C}H^{2},\quad{\mathcal{P}}\left(F,[f]\right)=f\left(H^{0,-1}(A_{F})_{\tau_{1}}\right)=f\left(H^{1,0}(C_{F})_{\zeta^{3}}\right)\in\mathbb{C}H^{2}.$
(18)
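Note that, with the sign conventions of (8) and (10), the subspace $H^{0,-1}(A_{F})_{\tau_{1}}\subset V_{1}$ is a line on which ${\mathfrak{h}}^{\tau_{1}}$ is negative definite, so the right-hand side of (18) is indeed a negative line in $V_{1}$, that is, a point of $\mathbb{C}H^{2}$.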
The map ${\mathcal{P}}$ is holomorphic, and descends to a morphism of complex
analytic spaces
${\mathcal{M}}_{0}(\mathbb{C})=\mathbb{G}(\mathbb{C})\setminus
X_{0}(\mathbb{C})\to P\Gamma\setminus\mathbb{C}H^{2}.$ (19)
By Riemann extension, (18) extends to a $\mathbb{G}(\mathbb{C})$-equivariant
holomorphic map
$\displaystyle\overline{{\mathcal{P}}}:{\mathcal{F}}_{s}\to\mathbb{C}H^{2}.$
(20)
###### Theorem 5.14 (Deligne–Mostow).
The period map (20) induces an isomorphism of complex manifolds
$\displaystyle{\mathcal{M}}_{s}^{f}(\mathbb{C})\coloneqq\mathbb{G}(\mathbb{C})\setminus{\mathcal{F}}_{s}\cong\mathbb{C}H^{2}.$
(21)
Taking $P\Gamma$-quotients gives an isomorphism of complex analytic spaces
${\mathcal{M}}_{s}(\mathbb{C})=\mathbb{G}(\mathbb{C})\setminus
X_{s}(\mathbb{C})\cong P\Gamma\setminus\mathbb{C}H^{2}.$ (22)
###### Proof.
In [DM86], Deligne and Mostow define $\widetilde{Q}\to Q$ to be the covering
space corresponding to the monodromy representation
$\pi_{1}(Q,0)\to\textnormal{PU}(2,1)$; since the image of this homomorphism
is $P\Gamma_{\theta}$ (see the proof of Proposition 5.7), it follows that
$\mathbb{G}(\mathbb{C})\setminus{\mathcal{F}}_{0}\cong\widetilde{Q}$ as
covering spaces of $Q$. Consequently, if $\widetilde{Q}_{\textnormal{st}}$ is
the Fox completion of the spread
$\widetilde{Q}\to Q\to
Q_{\textnormal{st}}:=\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{s}=\textnormal{PGL}_{2}(\mathbb{C})\setminus
P_{s}(\mathbb{C}),$
then there is an isomorphism
$\mathbb{G}(\mathbb{C})\setminus{\mathcal{F}}_{s}\cong\widetilde{Q}_{\textnormal{st}}$
of branched covering spaces of $Q_{\textnormal{st}}$. We obtain commutative
diagrams, where the lower right morphism uses (16):
$\begin{array}{ccccc}
\mathbb{G}(\mathbb{C})\setminus{\mathcal{F}}_{s}&\xrightarrow{\;\sim\;}&\widetilde{Q}_{\textnormal{st}}&\longrightarrow&\mathbb{C}H^{2}\\
\downarrow&&\downarrow&&\downarrow\\
\mathbb{G}(\mathbb{C})\setminus{\mathcal{N}}_{s}&\xrightarrow{\;\sim\;}&Q_{\textnormal{st}}&\longrightarrow&P\Gamma_{\theta}\setminus\mathbb{C}H^{2}\\
\downarrow&&\downarrow&&\downarrow\\
\mathbb{G}(\mathbb{C})\setminus X_{s}(\mathbb{C})&\xrightarrow{\;\sim\;}&Q_{\textnormal{st}}/{\mathfrak{S}}_{5}&\longrightarrow&P\Gamma\setminus\mathbb{C}H^{2}.
\end{array}$
The map $\widetilde{Q}_{\textnormal{st}}\to\mathbb{C}H^{2}$ is an isomorphism
by [DM86, (3.11)]. Therefore, we are done if the composition
$\mathbb{G}(\mathbb{C})\setminus{\mathcal{F}}_{0}\to\widetilde{Q}\to\mathbb{C}H^{2}$
agrees with the period map ${\mathcal{P}}$ of equation (18). This follows from
[DM86, (2.23) and (12.9)]. ∎
###### Proposition 5.15.
The isomorphism (22) induces an isomorphism of complex analytic spaces
${\mathcal{M}}_{0}(\mathbb{C})=\mathbb{G}(\mathbb{C})\setminus
X_{0}(\mathbb{C})\cong
P\Gamma\setminus\left(\mathbb{C}H^{2}-{\mathscr{H}}\right).$ (23)
###### Proof.
We have
$\overline{{\mathcal{P}}}({\mathcal{F}}_{0})\subset\mathbb{C}H^{2}-{\mathscr{H}}$
by Proposition 4.48, because the Jacobian of a smooth curve cannot contain an
abelian subvariety whose induced polarization is principal. Therefore, we have
$\overline{{\mathcal{P}}}^{-1}({\mathscr{H}})\subset{\mathcal{F}}_{s}-{\mathcal{F}}_{0}$.
Since ${\mathcal{F}}_{s}$ is irreducible (it is smooth by Corollary 5.12 and
connected by Corollary 5.8), the analytic space
$\overline{{\mathcal{P}}}^{-1}({\mathscr{H}})$ is a divisor. Since
${\mathcal{F}}_{s}-{\mathcal{F}}_{0}$ is also a divisor by Corollary 5.12, we
have
$\overline{{\mathcal{P}}}^{-1}({\mathscr{H}})={\mathcal{F}}_{s}-{\mathcal{F}}_{0}$
and we are done.
Alternatively, let $H_{0,5}$ be the moduli space of degree $5$ covers of
$\mathbb{P}^{1}$ ramified along five distinct marked points [HM98, §2.G]. The
period map
$H_{0,5}(\mathbb{C})\to P\Gamma\setminus\mathbb{C}H^{2},$
that sends the moduli point of a curve $C\to\mathbb{P}^{1}$ to the moduli
point of the $\mathbb{Z}[\zeta]$-linear Jacobian $J(C)$, extends to the stable
compactification $\overline{H}_{0,5}(\mathbb{C})\supset H_{0,5}(\mathbb{C})$
because the curves in the limit are of compact type. Since the divisor
${\mathscr{H}}\subset\mathbb{C}H^{2}$ parametrizes abelian varieties that are
products of lower dimensional ones by Proposition 4.48, the image of the
boundary is exactly $P\Gamma\setminus{\mathscr{H}}$. ∎
#### 3 Moduli of real binary quintics
Having understood the period map for complex binary quintics, we turn to the
period map of real binary quintics in this Section 3.
##### 1 The period map for stable real binary quintics
Define $\kappa$ as the anti-holomorphic involution
$\kappa\colon X_{0}(\mathbb{C})\to X_{0}(\mathbb{C}),\quad
F(x,y)=\sum_{i+j=5}a_{ij}x^{i}y^{j}\mapsto\overline{F(x,y)}=\sum_{i+j=5}\overline{a_{ij}}x^{i}y^{j}.$
Let ${\mathscr{A}}$ be the set of anti-unitary involutions
$\alpha:\Lambda\to\Lambda$, and
$P{\mathscr{A}}=\mu_{K}\setminus{\mathscr{A}}$, see Section 4. For each
$\alpha\in P{\mathscr{A}}$, there is a natural anti-holomorphic involution
$\alpha~\colon{\mathcal{F}}_{0}\to{\mathcal{F}}_{0}$ such that the following
diagram commutes:
$\begin{array}{ccc}
{\mathcal{F}}_{0}&\xrightarrow{\;\alpha\;}&{\mathcal{F}}_{0}\\
\downarrow&&\downarrow\\
X_{0}(\mathbb{C})&\xrightarrow{\;\kappa\;}&X_{0}(\mathbb{C}).
\end{array}$
It is defined as follows. Consider a framed binary quintic
$(F,[f])\in{\mathcal{F}}_{0}$, where $f:\mathbb{V}_{F}\to\Lambda$ is an
$\mathcal{O}_{K}$-linear isometry. Let $C_{F}\to\mathbb{P}^{1}_{\mathbb{C}}$
be the quintic cover defined by a smooth binary quintic $F\in
X_{0}(\mathbb{C})$. Complex conjugation
${\textnormal{conj}}\colon\mathbb{P}^{2}(\mathbb{C})\to\mathbb{P}^{2}(\mathbb{C})$
induces an anti-holomorphic map
$\sigma_{F}:C_{F}(\mathbb{C})\to C_{\kappa(F)}(\mathbb{C}).$
Consider the pull-back
$\sigma_{F}^{\ast}:{\mathbb{V}}_{\kappa(F)}\to{\mathbb{V}}_{F}$ of $\sigma_{F}$.
The composition
$\mathbb{V}_{\kappa(F)}\xrightarrow{\sigma_{F}^{\ast}}\mathbb{V}_{F}\xrightarrow{f}\Lambda\xrightarrow{\alpha}\Lambda$
induces a framing of $\kappa(F)\in X_{0}(\mathbb{C})$, and we define
$\alpha(F,[f])\coloneqq\left(\kappa(F),[\alpha\circ
f\circ\sigma_{F}^{\ast}]\right)\in{\mathcal{F}}_{0}.$
Although we have chosen a representative $\alpha\in{\mathscr{A}}$ of the class
$\alpha\in P{\mathscr{A}}$, the element $\alpha(F,[f])\in{\mathcal{F}}_{0}$
does not depend on this choice.
Consider the covering map ${\mathcal{F}}_{0}\to X_{0}(\mathbb{C})$ introduced
in (17), and define
$\displaystyle{\mathcal{F}}_{0}(\mathbb{R})=\bigsqcup_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{0}^{\alpha}\subset{\mathcal{F}}_{0}$ (24)
as the preimage of $X_{0}(\mathbb{R})$ in the space ${\mathcal{F}}_{0}$. To
see why the union in (24) is disjoint, observe that
${\mathcal{F}}_{0}^{\alpha}=\left\\{(F,[f])\in{\mathcal{F}}_{0}:\kappa(F)=F\textnormal{
and }[f\circ\sigma^{\ast}_{F}\circ f^{-1}]=\alpha\right\\}.$
Thus, for $\alpha,\beta\in P{\mathscr{A}}$ and
$(F,[f])\in{\mathcal{F}}_{0}^{\alpha}\cap{\mathcal{F}}_{0}^{\beta}$, we have
$\alpha=[f\circ\sigma^{\ast}_{F}\circ f^{-1}]=\beta$.
###### Lemma 5.16.
The anti-holomorphic involution
$\alpha\colon{\mathcal{F}}_{0}\to{\mathcal{F}}_{0}$ defined by $\alpha\in
P{\mathscr{A}}$ makes the period map
${\mathcal{P}}\colon{\mathcal{F}}_{0}\to\mathbb{C}H^{2}$ equivariant with respect to $\alpha$ on both sides, where $\alpha$ acts on $\mathbb{C}H^{2}$ through the anti-holomorphic involution induced by the anti-unitary involution $\alpha$ of $(\Lambda,{\mathfrak{h}})$.
###### Proof.
Indeed, if ${\textnormal{conj}}\colon\mathbb{C}\to\mathbb{C}$ is complex
conjugation, then for any $F\in X_{0}(\mathbb{C})$, the induced map
$\sigma^{\ast}_{F}\otimes{\textnormal{conj}}\colon{\mathbb{V}}_{\kappa(F)}\otimes_{\mathbb{Z}}\mathbb{C}\to{\mathbb{V}}_{F}\otimes_{\mathbb{Z}}\mathbb{C}$
is anti-linear, preserves the Hodge decompositions [Sil89, Chapter I, Lemma
(2.4)] as well as the eigenspace decompositions. ∎
We obtain a real period map
$\displaystyle{\mathcal{P}}_{\mathbb{R}}\colon{\mathcal{F}}_{0}(\mathbb{R})=\bigsqcup_{\alpha\in P{\mathscr{A}}}{\mathcal{F}}_{0}^{\alpha}\longrightarrow\coprod_{\alpha\in P{\mathscr{A}}}\left(\mathbb{R}H^{2}_{\alpha}-{\mathscr{H}}\right).$
(27)
Define $\mathbb{G}(\mathbb{R})=\textnormal{GL}_{2}(\mathbb{R})$. The map (27)
is constant on $\mathbb{G}(\mathbb{R})$-orbits, since the same is true for the
complex period map ${\mathcal{P}}\colon{\mathcal{F}}_{0}\to\mathbb{C}H^{2}$.
By abuse of notation, for $\alpha\in P{\mathscr{A}}$ we write
${\mathbb{R}}H^{2}_{\alpha}-{\mathscr{H}}={\mathbb{R}}H^{2}_{\alpha}-\left({\mathscr{H}}\cap{\mathbb{R}}H^{2}_{\alpha}\right)$.
###### Proposition 5.17.
The real period map (27) descends to a $P\Gamma$-equivariant diffeomorphism
$\displaystyle{\mathcal{M}}_{0}(\mathbb{R})^{f}\coloneqq\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{0}(\mathbb{R})\cong\coprod_{\alpha\in
P{\mathscr{A}}}\mathbb{R}H^{2}_{\alpha}-{\mathscr{H}}.$ (28)
By $P\Gamma$-equivariance, the map (28) induces an isomorphism of real-
analytic orbifolds
${\mathcal{P}}_{\mathbb{R}}\colon{\mathcal{M}}_{0}(\mathbb{R})=\mathbb{G}(\mathbb{R})\setminus
X_{0}(\mathbb{R})\cong\coprod_{\alpha\in
C{\mathscr{A}}}P\Gamma_{\alpha}\setminus\left(\mathbb{R}H^{2}_{\alpha}-{\mathscr{H}}\right).$
(29)
###### Proof.
This follows from [ACT10, proof of Theorem 3.3]. It is crucial that the
actions of $G$ and $P\Gamma$ on ${\mathcal{F}}_{0}$ commute and are free,
which is the case, see Lemma 5.13. ∎
##### 2 The period map for smooth real binary quintics
Our next goal will be to prove the real analogue of the isomorphisms (21) and
(22) in Theorem 5.14. We need a lemma, a definition, and then two more lemmas.
Consider the CM-type $\Psi=\left\\{\tau_{1},\tau_{2}\right\\}$ defined in (7),
the hermitian $\mathcal{O}_{K}$-lattice $(\Lambda,{\mathfrak{h}})$ defined in
(9), and the sets (cf. Definition 4.11):
${\mathcal{H}}=\left\\{H_{r}\subset{\mathbb{C}}H^{2}\mid
r\in{\mathscr{R}}\right\\},~\quad{\textnormal{
and~}}\quad{\mathscr{H}}=\cup_{H\in{\mathcal{H}}}H\subset{\mathbb{C}}H^{2}.$
Here, ${\mathscr{R}}\subset\Lambda$ is the set of short roots (see Section 1).
###### Lemma 5.18.
The hyperplane arrangement ${\mathscr{H}}~\subset{\mathbb{C}}H^{2}$ satisfies
Condition 4.9, that is: any two distinct $H_{1},H_{2}\in{\mathcal{H}}$ either
meet orthogonally, or not at all.
###### Proof.
Condition 4.49.1 holds because $K$ does not contain proper CM-subfields. By
Lemma 4.52, we have that Condition 4.49.2 is satisfied. By equation (10),
Condition 4.49.3 holds. By Theorem 4.50, we obtain the desired result. ∎
###### Definition 5.19.
1. 1.
For $k=1,2$, define
$\Delta_{k}\subset\Delta=X_{s}(\mathbb{C})-X_{0}(\mathbb{C})$ to be the locus
of stable binary quintics with exactly $k$ nodes. Define
$\widetilde{\Delta}={\mathcal{F}}_{s}-{\mathcal{F}}_{0}$, and let
$\widetilde{\Delta}_{k}\subset\widetilde{\Delta}$ be the inverse image of
$\Delta_{k}$ in $\widetilde{\Delta}$ under the map
$\widetilde{\Delta}\to\Delta$.
2. 2.
For $k=1,2$, define ${\mathscr{H}}_{k}\subset{\mathscr{H}}$ as the set
${\mathscr{H}}_{k}=\left\\{x\in{\mathbb{C}}H^{2}\mid\left|{\mathcal{H}}(x)\right|=k\right\\}$.
Thus, this is the locus of points in ${\mathscr{H}}$ where exactly $k$
hyperplanes meet.
###### Lemma 5.20.
1. 1.
The period map $\overline{{\mathcal{P}}}$ of (20) satisfies
$\overline{{\mathcal{P}}}(\widetilde{\Delta}_{k})\subset{\mathscr{H}}_{k}$.
2. 2.
If $f\in\widetilde{\Delta}_{k}$,
$x=\overline{{\mathcal{P}}}(f)\in{\mathbb{C}}H^{2}$, and
${\mathcal{H}}(x)=\left\\{H_{r_{1}},\dotsc,H_{r_{k}}\right\\}$ for
$r_{i}\in{\mathscr{R}}$, then $\overline{{\mathcal{P}}}$ induces a group
isomorphism $P\Gamma_{f}\cong G(x)$. ∎
The naturality of the Fox completion implies that for $\alpha\in
P{\mathscr{A}}$, the anti-holomorphic involution
$\alpha:{\mathcal{F}}_{0}\to{\mathcal{F}}_{0}$ extends to an anti-holomorphic
involution $\alpha:{\mathcal{F}}_{s}\to{\mathcal{F}}_{s}$.
###### Lemma 5.21.
For every $\alpha\in P{\mathscr{A}}$, the restriction of
$\overline{{\mathcal{P}}}:{\mathcal{F}}_{s}\to\mathbb{C}H^{2}$ to
${\mathcal{F}}_{s}^{\alpha}$ defines a diffeomorphism
$\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{s}^{\alpha}\cong\mathbb{R}H^{2}_{\alpha}$.
###### Proof.
See [ACT10, Lemma 11.3]. It is essential that $G$ acts freely on
${\mathcal{F}}_{s}$, which holds by Lemma 5.13. ∎
We arrive at the main theorem of Section 3. Define
${\mathcal{F}}_{s}(\mathbb{R})=\bigcup_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}=\pi^{-1}\left(X_{s}(\mathbb{R})\right).$
This is not a manifold because of the ramification of
$\pi:{\mathcal{F}}_{s}\to X_{s}(\mathbb{C})$, but a union of embedded
submanifolds.
###### Theorem 5.22.
There is a smooth map
$\displaystyle\overline{{\mathcal{P}}}_{\mathbb{R}}:\coprod_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}\to\coprod_{\alpha\in
P{\mathscr{A}}}\mathbb{R}H^{2}_{\alpha}=\widetilde{Y}$ (30)
that extends the real period map (27). The map (30) induces the following
commutative diagram of topological spaces, in which
${\mathscr{P}}_{\mathbb{R}}$ and ${\mathfrak{P}}_{\mathbb{R}}$ are
homeomorphisms:
$\begin{array}{ccc}
\coprod_{\alpha\in P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}&\xrightarrow{\;\overline{{\mathcal{P}}}_{\mathbb{R}}\;}&\widetilde{Y}=\coprod_{\alpha\in P{\mathscr{A}}}\mathbb{R}H^{2}_{\alpha}\\
\downarrow&&\downarrow\\
{\mathcal{F}}_{s}(\mathbb{R})&\xrightarrow{\;\overline{{\mathcal{P}}}_{\mathbb{R}}\;}&Y\\
\downarrow&&\Vert\\
{\mathcal{M}}_{s}(\mathbb{R})^{f}=\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{s}(\mathbb{R})&\xrightarrow[\;\sim\;]{\;{\mathscr{P}}_{\mathbb{R}}\;}&Y\\
\downarrow&&\downarrow\\
{\mathcal{M}}_{s}(\mathbb{R})=\mathbb{G}(\mathbb{R})\setminus X_{s}(\mathbb{R})&\xrightarrow[\;\sim\;]{\;{\mathfrak{P}}_{\mathbb{R}}\;}&P\Gamma\setminus Y.
\end{array}$
###### Proof.
The existence of $\overline{{\mathcal{P}}}_{\mathbb{R}}$ follows from the
compatibility with the involutions $\alpha\in P{\mathscr{A}}$. We first show
that the composition
$\coprod_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}\xrightarrow{\overline{{\mathcal{P}}}_{\mathbb{R}}}\widetilde{Y}\xrightarrow{p}Y$
factors through ${\mathcal{F}}_{s}(\mathbb{R})$. Now $f_{\alpha}$ and
$g_{\beta}\in\coprod_{\alpha\in P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}$
have the same image in ${\mathcal{F}}_{s}(\mathbb{R})$ if and only if
$f=g\in{\mathcal{F}}_{s}^{\alpha}\cap{\mathcal{F}}_{s}^{\beta}$, in which case
$x\coloneqq\overline{{\mathcal{P}}}(f)=\overline{{\mathcal{P}}}(g)\in\mathbb{R}H^{2}_{\alpha}\cap{\mathbb{R}}H^{2}_{\beta},$
so we need to show that $x_{\alpha}\sim x_{\beta}\in\widetilde{Y}$. For
this, note that $\alpha\beta\in P\Gamma_{f}\cong(\mathbb{Z}/10)^{k}$, and
$\overline{{\mathcal{P}}}$ induces an isomorphism $P\Gamma_{f}\cong G(x)$ by
Lemma 5.20. Hence $\alpha\beta\in G(x)$ so that indeed, $x_{\alpha}\sim
x_{\beta}$.
Let us prove the $\mathbb{G}(\mathbb{R})$-equivariance of
$\overline{{\mathcal{P}}}_{\mathbb{R}}$. Suppose that
$f\in{\mathcal{F}}_{s}^{\alpha}$ and $g\in{\mathcal{F}}_{s}^{\beta}$ satisfy $a\cdot f=g\in{\mathcal{F}}_{s}(\mathbb{R})$ for some $a\in\mathbb{G}(\mathbb{R})$.
Then
$x\coloneqq\overline{{\mathcal{P}}}(f)=\overline{{\mathcal{P}}}(g)\in\mathbb{C}H^{2}$,
so we need to show that $\alpha\beta\in G(x)$. The actions of
$\mathbb{G}(\mathbb{C})$ and $P\Gamma$ on $\mathbb{C}H^{2}$ commute, and the
same holds for the actions of $\mathbb{G}(\mathbb{R})$ and $P\Gamma^{\prime}$
on ${\mathcal{F}}_{s}^{\mathbb{R}}$. It follows that
$\alpha(g)=\alpha(a\cdot f)=a\cdot\alpha(f)=a\cdot f=g,$
hence $g\in{\mathcal{F}}_{s}^{\alpha}\cap{\mathcal{F}}_{s}^{\beta}$. This
implies in turn that $\alpha\beta(g)=g$, hence $\alpha\beta\in
P\Gamma_{g}\cong G(x)$, so that indeed, $x_{\alpha}\sim x_{\beta}$.
To prove that ${\mathscr{P}}_{\mathbb{R}}$ is injective, let again
$f_{\alpha},g_{\beta}\in\coprod_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}$ and suppose that they have the same
image in $Y$. This implies that
$x\coloneqq\overline{{\mathcal{P}}}(f)=\overline{{\mathcal{P}}}(g)\in\mathbb{R}H^{2}_{\alpha}\cap{\mathbb{R}}H^{2}_{\beta},$
and that $\beta=\phi\circ\alpha$ for some $\phi\in G(x)$. We have $\phi\in
G(x)\cong P\Gamma_{f}$ (by Lemma 5.20) hence
$\beta(f)=\phi\left(\alpha(f)\right)=\phi(f)=f.$ (31)
Therefore $f,g\in{\mathcal{F}}_{s}^{\beta}$; since
$\overline{{\mathcal{P}}}(f)=\overline{{\mathcal{P}}}(g)$, it follows from
Lemma 5.21 that there exists $a\in\mathbb{G}(\mathbb{R})$ such that $a\cdot
f=g$. This proves injectivity of ${\mathscr{P}}_{\mathbb{R}}$, as desired.
The surjectivity of
${\mathscr{P}}_{\mathbb{R}}:\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{s}(\mathbb{R})\to
Y$ is straightforward, using the surjectivity of
$\overline{{\mathcal{P}}}_{\mathbb{R}}$, which follows from Lemma 5.21.
Finally, we claim that ${\mathscr{P}}_{\mathbb{R}}$ is open. Let
$U\subset\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{s}^{\mathbb{R}}$ be open; since ${\mathscr{P}}_{\mathbb{R}}$ is injective, we have
$U={\mathscr{P}}_{\mathbb{R}}^{-1}{\mathscr{P}}_{\mathbb{R}}\left(U\right)$.
Let $V$ be the preimage of $U$ in $\coprod_{\alpha\in
P{\mathscr{A}}}{\mathcal{F}}_{s}^{\alpha}$. Then
$V=\overline{{\mathcal{P}}}_{\mathbb{R}}^{-1}\left(p^{-1}\left({\mathscr{P}}_{\mathbb{R}}(U)\right)\right)$
and hence
$\overline{{\mathcal{P}}}_{\mathbb{R}}\left(V\right)=p^{-1}\left({\mathscr{P}}_{\mathbb{R}}(U)\right).$
The map $\overline{{\mathcal{P}}}_{\mathbb{R}}$ is open, being the coproduct
of the maps ${\mathcal{F}}_{s}^{\alpha}\to\mathbb{R}H^{2}_{\alpha}$, which are
open since they have surjective differential at each point. Thus
${\mathscr{P}}_{\mathbb{R}}(U)$ is open in $Y$. ∎
###### Corollary 5.23.
There is a lattice $P\Gamma_{\mathbb{R}}\subset\textnormal{PO}(2,1)$, an
inclusion of orbifolds
$\displaystyle\coprod_{\alpha\in
C{\mathscr{A}}}P\Gamma_{\alpha}\setminus\left(\mathbb{R}H^{2}_{\alpha}-{\mathscr{H}}\right)\hookrightarrow
P\Gamma_{\mathbb{R}}\setminus\mathbb{R}H^{2},$ (32)
and a homeomorphism
${\mathcal{M}}_{s}(\mathbb{R})=\mathbb{G}(\mathbb{R})\setminus
X_{s}(\mathbb{R})\cong P\Gamma_{\mathbb{R}}\setminus\mathbb{R}H^{2}$ (33)
such that (33) restricts to (29) with respect to (32).
###### Proof.
This follows directly from Theorems 4.24 and 5.22. ∎
###### Remark 5.24.
The proof of Theorem 5.22 also shows that ${\mathcal{M}}_{s}(\mathbb{R})$ is
homeomorphic to the glued space $P\Gamma\setminus Y$ (see Definition 4.22) if
${\mathcal{M}}_{s}$ is the stack of cubic surfaces or of binary sextics. This
strategy to uniformize the real moduli space differs from the one used in
[ACT10, ACT06, ACT07], since we first glue together the real ball quotients,
and then prove that our real moduli space is homeomorphic to the result.
##### 3 Automorphism groups of stable real binary quintics
Before we can finish the proof of Theorem 5.2, we need to understand the
orbifold structure of ${\mathcal{M}}_{s}(\mathbb{R})$, and how this structure
differs from the orbifold structure of the glued space $P\Gamma\setminus Y$.
In the current Section 3 we start by analyzing the orbifold structure of
${\mathcal{M}}_{s}(\mathbb{R})$, by listing its stabilizer groups. There is a
canonical orbifold isomorphism
${\mathcal{M}}_{s}(\mathbb{R})=\mathbb{G}(\mathbb{R})\setminus
X_{s}(\mathbb{R})=(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R}).$ Therefore, to list
the automorphism groups of binary quintics is to list the elements
$x=[\alpha_{1},\dotsc,\alpha_{5}]\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$
whose stabilizer $\textnormal{PGL}_{2}(\mathbb{R})_{x}$ is non-trivial, and
calculate $\textnormal{PGL}_{2}(\mathbb{R})_{x}$ in these cases. This will be
our next goal.
###### Proposition 5.25.
All stabilizer groups
$\textnormal{PGL}_{2}(\mathbb{R})_{x}\subset\textnormal{PGL}_{2}(\mathbb{R})$
for points $x\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ are among
$\mathbb{Z}/2,D_{3},D_{5}$. For $n\in\\{3,5\\}$, there is a unique
$\textnormal{PGL}_{2}(\mathbb{R})$-orbit in
$(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ of points $x$ with stabilizer
$D_{n}$.
###### Proof.
We have an injection $(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})\hookrightarrow
P_{s}/{\mathfrak{S}}_{5}$ which is equivariant for the embedding
$\textnormal{PGL}_{2}(\mathbb{R})\hookrightarrow\textnormal{PGL}_{2}(\mathbb{C})$.
In particular,
$\textnormal{PGL}_{2}(\mathbb{R})_{x}\subset\textnormal{PGL}_{2}(\mathbb{C})_{x}$
for every $x\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$. The groups
$\textnormal{PGL}_{2}(\mathbb{C})_{x}$ for equivalence classes of distinct
points $x\in P_{0}/{\mathfrak{S}}_{5}$ are calculated in [WX17, Theorem 22],
and such a group is isomorphic to $\mathbb{Z}/2,D_{3},\mathbb{Z}/4$ or
$D_{5}$. None of these have subgroups isomorphic to
$D_{2}=\mathbb{Z}/2\times\mathbb{Z}/2$ or
$D_{4}=\mathbb{Z}/4\rtimes\mathbb{Z}/2$. Define an involution
$\nu\coloneqq(z\mapsto 1/z)\in\textnormal{PGL}_{2}(\mathbb{R}).$
The proof of Proposition 5.25 will follow from the following Lemmas 5.26,
5.27, 5.28 and 5.29.
###### Lemma 5.26.
Let $\tau\in\textnormal{PGL}_{2}(\mathbb{R})$. Consider a subset
$S=\\{x,y,z\\}~\subset\mathbb{P}^{1}(\mathbb{C})$ stabilized by complex
conjugation, such that $\tau(x)=x$, $\tau(y)=z$ and $\tau(z)=y$. There is a
transformation $g\in\textnormal{PGL}_{2}(\mathbb{R})$ that maps $S$ to either
$\\{-1,0,\infty\\}$ or $\\{-1,i,-i\\}$, and that satisfies $g\tau
g^{-1}=\nu=(z\mapsto 1/z)\in\textnormal{PGL}_{2}(\mathbb{R})$. In particular,
$\tau^{2}=\textnormal{id}$.
###### Proof.
Since $S$ is stable under complex conjugation and $\tau$ is defined over $\mathbb{R}$, the point $x$ fixed by $\tau$ must be real, and the points $y,z$ are either both real or complex conjugates of each other. In the first case choose $g\in\textnormal{PGL}_{2}(\mathbb{R})$ with $g(x)=-1$, $g(y)=0$ and $g(z)=\infty$; in the second case choose $g\in\textnormal{PGL}_{2}(\mathbb{R})$ with $g(x)=-1$ and $g(y)=i$, so that $g(z)=-i$. In either case $g\tau g^{-1}$ and $\nu$ agree on the three points of $g(S)$, and two transformations in $\textnormal{PGL}_{2}(\mathbb{C})$ that agree on three different points of $\mathbb{P}^{1}(\mathbb{C})$ are necessarily equal. ∎
###### Lemma 5.27.
There is no $x\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ stabilized by some
$\phi\in\textnormal{PGL}_{2}(\mathbb{R})$ of order $4$.
###### Proof.
By [Bea10, Theorem 4.2], all subgroups
$G\subset\textnormal{PGL}_{2}(\mathbb{R})$ that are isomorphic to
$\mathbb{Z}/4$ are conjugate to each other. Since the transformation
$I:z\mapsto(z-1)/(z+1)$ is of order $4$, it gives a representative
$G_{I}=\langle I\rangle$ of this conjugacy class. Hence, assuming there exists
$x$ and $\phi$ as in the lemma, possibly after replacing $x$ by $gx$ for some
$g\in\textnormal{PGL}_{2}(\mathbb{R})$, we may and do assume that $\phi=I$. On
the other hand, it is easily shown that $I$ cannot fix any
$x\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$. ∎
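One way to carry out this last verification (a sketch of ours, not the argument of the original text): a direct computation gives
$I^{2}(z)=\frac{I(z)-1}{I(z)+1}=\frac{-2/(z+1)}{2z/(z+1)}=-\frac{1}{z},$
so the fixed points of $I$ and of $I^{2}$ are exactly $\pm i$, and every other $\langle I\rangle$-orbit in $\mathbb{P}^{1}(\mathbb{C})$ has four elements. Recalling that stability for a binary quintic means that no root has multiplicity greater than $2$, an $I$-invariant quintic would have to consist either of a single free $4$-orbit together with exactly one of $i,-i$, which is not stable under complex conjugation and hence not real, or of the points $\pm i$ alone, which forces a multiplicity of at least $3$ at one of them; neither defines a point of $(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$.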
Define
$\rho\in\textnormal{PGL}_{2}(\mathbb{R}),\;\;\;\rho(z)=\frac{-1}{z+1}.$
###### Lemma 5.28.
Let $x=(x_{1},\dotsc,x_{5})\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$. Suppose
$\phi(x)=x$ for an element $\phi\in\textnormal{PGL}_{2}(\mathbb{R})$ of order
$3$. There is a transformation $g\in\textnormal{PGL}_{2}(\mathbb{R})$ mapping
$x$ to $z=(-1,\infty,0,\omega,\omega^{2})$ with $\omega$ a primitive third
root of unity, and the stabilizer of $x$ to the subgroup of
$\textnormal{PGL}_{2}(\mathbb{R})$ generated by $\rho$ and $\nu$. In
particular, $\textnormal{PGL}_{2}(\mathbb{R})_{x}$ is isomorphic to $D_{3}$.
###### Proof.
It follows from Lemma 5.26 that there must be three elements
$x_{1},x_{2},x_{3}$ which form an orbit under $\phi$. Since complex
conjugation preserves this orbit, one element in it is real; since $\phi$ is
defined over $\mathbb{R}$, they are all real. Let
$g\in\textnormal{PGL}_{2}(\mathbb{R})$ such that $g(x_{1})=-1$,
$g(x_{2})=\infty$ and $g(x_{3})=0$. Define $\kappa=g\phi g^{-1}$. Then
$\kappa^{3}=\textnormal{id}$, and $\kappa$ preserves $\\{-1,\infty,0\\}$ and
sends $-1$ to $\infty$ and $\infty$ to $0$. Consequently, $\kappa(0)=-1$, and
it follows that $\kappa=\rho$. Hence $x$ is equivalent to an element of the
form $z=(-1,\infty,0,\alpha,\beta)$. Moreover, $\beta=\bar{\alpha}$ and
$\alpha^{2}+\alpha+1=0$. ∎
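To make the last step explicit (a short check of ours): $\kappa=\rho$ has order $3$ and permutes the two remaining points $\alpha$ and $\beta$, hence fixes each of them, and the fixed points of $\rho$ are computed by
$\rho(\alpha)=\alpha\iff\frac{-1}{\alpha+1}=\alpha\iff\alpha^{2}+\alpha+1=0,$
so $\alpha$ and $\beta$ are the two primitive third roots of unity; since $x$ is stable under complex conjugation, $\beta=\bar{\alpha}$.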
Recall that $\zeta_{5}=e^{2i\pi/5}\in\mathbb{P}^{1}(\mathbb{C})$ and define
$\lambda=\zeta_{5}+\zeta_{5}^{-1}\in\mathbb{R},\;\;\;\;\;\;\gamma(z)=\frac{(\lambda+1)z-1}{z+1}\in\textnormal{PGL}_{2}(\mathbb{R}).$
###### Lemma 5.29.
Let $x=(x_{1},\dotsc,x_{5})\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$. Suppose
$x$ is stabilized by a subgroup of $\textnormal{PGL}_{2}(\mathbb{R})$ of order
$5$. There is a transformation $g\in\textnormal{PGL}_{2}(\mathbb{R})$ mapping
$x$ to $z=(0,-1,\infty,\lambda+1,\lambda)$ and identifying the stabilizer of
$x$ with the subgroup of $\textnormal{PGL}_{2}(\mathbb{R})$ generated by
$\gamma$ and $\nu$. In particular, the stabilizer
$\textnormal{PGL}_{2}(\mathbb{R})_{x}$ of $x$ is isomorphic to $D_{5}$.
###### Proof.
Let $\phi\in\textnormal{PGL}_{2}(\mathbb{R})_{x}$ be an element of order $5$.
Using Lemma 5.26 one shows that $x$ must be smooth, i.e. all $x_{i}$ are
distinct, and $x_{i}=\phi^{i-1}(x_{1})$. Since there is one real $x_{i}$ and
$\phi$ is defined over $\mathbb{R}$, all $x_{i}$ are real. Now note that
$z=(0,-1,\infty,\lambda+1,\lambda)$ is the orbit of $0$ under
$\gamma:z\mapsto((\lambda+1)z-1)/(z+1)$. The reflection $\nu:z\mapsto 1/z$
preserves $z$ as well: if $\zeta=\zeta_{5}$ then $\lambda=\zeta+\zeta^{-1}$
hence $\lambda+1=-(\zeta^{2}+\zeta^{-2})=-\lambda^{2}+2$, so that
$\lambda(\lambda+1)=1$. So we have $\textnormal{PGL}_{2}(\mathbb{R})_{z}\cong
D_{5}$. By [WX17, Theorem 22], the point $z$ with its stabilizer
$\textnormal{PGL}_{2}(\mathbb{R})_{z}$ must be equivalent under
$\textnormal{PGL}_{2}(\mathbb{C})$ to the point
$(1,\zeta,\zeta^{2},\zeta^{3},\zeta^{4})$ with its stabilizer $\langle
x\mapsto\zeta x,x\mapsto 1/x\rangle$. Thus, there exists
$g\in\textnormal{PGL}_{2}(\mathbb{C})$ such that $g(x_{1})=0$, $g(x_{2})=-1$,
$g(x_{3})=\infty$, $g(x_{4})=\lambda+1$ and $g(x_{5})=\lambda$, and such that
$g\textnormal{PGL}_{2}(\mathbb{R})_{x}g^{-1}=\textnormal{PGL}_{2}(\mathbb{R})_{z}$.
Since all $x_{i}$ and $z_{i}\in z$ are real, we see that
$\bar{g}(x_{i})=z_{i}$ for every $i$, hence $g$ and $\bar{g}$ coincide on more
than $2$ points, hence $g=\bar{g}\in\textnormal{PGL}_{2}(\mathbb{R})$. ∎
Proposition 5.25 follows. ∎
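For concreteness, here is a direct verification (ours, not part of the argument above) of the claim in the proof of Lemma 5.29 that $z=(0,-1,\infty,\lambda+1,\lambda)$ is the $\gamma$-orbit of $0$; it only uses the relation $\lambda^{2}+\lambda=1$ established there:
$\gamma(0)=-1,\quad\gamma(-1)=\infty,\quad\gamma(\infty)=\lambda+1,\quad\gamma(\lambda+1)=\frac{(\lambda+1)^{2}-1}{\lambda+2}=\lambda,\quad\gamma(\lambda)=\frac{\lambda^{2}+\lambda-1}{\lambda+1}=0,$
while $\nu$ fixes $-1$, swaps $0$ with $\infty$, and swaps $\lambda$ with $\lambda+1$ because $\lambda(\lambda+1)=1$.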
##### 4 Binary quintics with automorphism group of order two
The goal of Section 4 is to prove that there are no cone points in the
orbifold
$\textnormal{PGL}_{2}(\mathbb{R})\setminus(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$,
i.e. orbifold points whose stabilizer group is $\mathbb{Z}/n$ for some $n$
acting on the orbifold chart by rotations. By Proposition 5.25, this fact will
follow from the following:
###### Proposition 5.30.
Let $x=(x_{1},\dotsc,x_{5})\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ such
that $\textnormal{PGL}_{2}(\mathbb{R})_{x}=\langle\tau\rangle$ has order two.
There is a $\textnormal{PGL}_{2}(\mathbb{R})_{x}$-stable open neighborhood
$U\subset(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ of $x$ such that
$\textnormal{PGL}_{2}(\mathbb{R})_{x}\setminus
U\to{\mathcal{M}}_{s}(\mathbb{R})$ is injective, and a homeomorphism
$\phi:(U,x)\to(B,0)$ for $0\in B\subset\mathbb{R}^{2}$ an open ball, such that
$\phi$ identifies $\textnormal{PGL}_{2}(\mathbb{R})_{x}$ with $\mathbb{Z}/2$
acting on $B$ by reflections in a line through $0$.
###### Proof.
Using Lemma 5.26, one checks that the only possibilities for the element
$x=(x_{1},\dotsc,x_{5})\in(P_{s}/{\mathfrak{S}}_{5})(\mathbb{R})$ are
$(-1,0,\infty,\beta,\beta^{-1})$, $(-1,i,-i,\beta,\beta^{-1})$,
$(-1,-1,\beta,0,\infty)$,
$(-1,-1,\beta,i,-i)$, $(0,0,\infty,\infty,-1)$ and $(-1,i,i,-i,-i)$. ∎
##### 5 Comparing the orbifold structures
Consider the moduli space ${\mathcal{M}}_{s}(\mathbb{R})$ of real stable
binary quintics.
###### Definition 5.31.
Let $\overline{{\mathscr{M}}}_{\mathbb{R}}$ be the hyperbolic orbifold with
$\left|{\mathcal{M}}_{s}(\mathbb{R})\right|$ as underlying space, whose
orbifold structure is induced by the homeomorphism of Corollary 5.23 and the
natural orbifold structure of
$P\Gamma_{\mathbb{R}}\setminus{\mathbb{R}}H^{2}$.
There are two orbifold structures on the space
$\left|{\mathcal{M}}_{s}(\mathbb{R})\right|$: the natural orbifold structure
of ${\mathcal{M}}_{s}(\mathbb{R})$, see Proposition 2.12 (i.e. the orbifold
structure of the quotient $\mathbb{G}(\mathbb{R})\setminus
X_{s}(\mathbb{R})$), and the orbifold structure
$\overline{{\mathscr{M}}}_{\mathbb{R}}$ introduced in Definition 5.31.
###### Proposition 5.32.
1. 1.
The orbifold structures of ${\mathcal{M}}_{s}(\mathbb{R})$ and
$\overline{{\mathscr{M}}}_{\mathbb{R}}$ differ only at the moduli point
attached to the five-tuple $(\infty,i,i,-i,-i)$. The stabilizer group of
${\mathcal{M}}_{s}(\mathbb{R})$ at that moduli point is $\mathbb{Z}/2$,
whereas the stabilizer group of $\overline{{\mathscr{M}}}_{\mathbb{R}}$ at
that point is the dihedral group $D_{10}$ of order twenty.
2. 2.
The orbifold $\overline{{\mathscr{M}}}_{\mathbb{R}}$ has no cone points and
three corner reflectors, with angles $\pi/3,\pi/5$ and $\pi/10$.
###### Proof.
The statements can be deduced from Proposition 4.37. The notation of that
proposition was as follows: for $f\in
Y\cong\mathbb{G}(\mathbb{R})\setminus{\mathcal{F}}_{s}(\mathbb{R})$ (see
Theorem 5.22) the group $A_{f}\subset P\Gamma$ is the stabilizer of $f\in K$.
Moreover, if $\tilde{f}\in{\mathcal{F}}_{s}(\mathbb{R})$ represents $f$ and if
$F=[\tilde{f}]\in X_{s}(\mathbb{R})$ has $k=2a+b$ nodes, then the image
$x\in\mathbb{C}H^{2}$ has $k=2a+b$ nodes in the sense of Definition 4.11. If
$F$ has no nodes ($k=0$), then $G(x)$ is trivial by Proposition 4.37.1 and
$G_{F}=A_{f}=\Gamma_{f}$. If $F$ has only real nodes, then $B_{f}=G(x)$ hence
$G_{F}=A_{f}/G(x)=A_{f}/B_{f}=\Gamma_{f}$. Now suppose that $a=1$ and $b=0$:
the equation $F$ defines a pair of complex conjugate nodes. In other words,
the zero set of $F$ defines a $5$-tuple
$\underline{\alpha}=(\alpha_{1},\dotsc,\alpha_{5})\in\mathbb{P}^{1}(\mathbb{C})^{5}$,
well-defined up to the
$\textnormal{PGL}_{2}(\mathbb{R})\times{\mathfrak{S}}_{5}$ action on
$\mathbb{P}^{1}$, where $\alpha_{1}\in\mathbb{P}^{1}(\mathbb{R})$ and
$\alpha_{3}=\bar{\alpha}_{2}=\alpha_{5}=\bar{\alpha}_{4}\in\mathbb{P}^{1}(\mathbb{C})\setminus\mathbb{P}^{1}(\mathbb{R})$.
So we may write
$\underline{\alpha}=(\rho,\alpha,\bar{\alpha},\alpha,\bar{\alpha})$ with
$\rho\in\mathbb{P}^{1}(\mathbb{R})$ and
$\alpha\in\mathbb{P}^{1}(\mathbb{C})\setminus\mathbb{P}^{1}(\mathbb{R})$. Then
there is a unique $T\in\textnormal{PGL}_{2}(\mathbb{R})$ such that
$T(\rho)=\infty$ and $T(\alpha)=i$. But this gives $T(\underline{\alpha})=(\infty,i,-i,i,-i)$
hence $F$ is unique up to isomorphism. As for the stabilizer
$G_{F}=A_{f}/G(x)$, we have $G(x)\cong(\mathbb{Z}/10)^{2}$. Since there are no
real nodes, $B_{f}$ is trivial. By Proposition 4.37.3, $K_{f}$ is the union of
ten copies of ${\mathbb{B}}^{2}(\mathbb{R})$ meeting along a common point
$\mathbb{B}^{0}(\mathbb{R})$. In fact, in the local coordinates
$(t_{1},t_{2})$ around $f$, the
$\alpha_{j}:\mathbb{B}^{2}(\mathbb{C})\to\mathbb{B}^{2}(\mathbb{C})$ are
defined by $(t_{1},t_{2})\mapsto(\bar{t}_{2}\zeta^{j},\bar{t}_{1}\zeta^{j})$,
for $j\in\mathbb{Z}/10$, and so the fixed points sets are given by the
equations
$\mathbb{R}H^{2}_{j}=\\{t_{2}=\bar{t}_{1}\zeta^{j}\\}\subset\mathbb{B}^{2}(\mathbb{C})$,
$j\in\mathbb{Z}/10$. Notice that the subgroup $E\subset G(x)$ that stabilizes
$\mathbb{R}H^{2}_{j}$ is the cyclic group of order $10$ generated by the
transformation $(t_{1},t_{2})\mapsto(\zeta t_{1},\zeta^{-1}t_{2})$. There is
only one non-trivial transformation $T\in\textnormal{PGL}_{2}(\mathbb{R})$
that fixes $\infty$ and sends the subset
$\\{i,-i\\}\subset\mathbb{P}^{1}(\mathbb{C})$ to itself, and $T$ is of order
$2$. Hence $G_{F}=\mathbb{Z}/2$ so that we have an exact sequence
$0\to\mathbb{Z}/10\to\Gamma_{f}\to\mathbb{Z}/2\to 0$ and this splits since
$G_{F}$ is a subgroup of $\Gamma_{f}$. We are done by Propositions 5.25 and
5.30. ∎
##### 6 The real moduli space as a hyperbolic triangle
The goal of Section 6 is to show that $\overline{{\mathscr{M}}}_{\mathbb{R}}$
(see Definition 5.31) is isomorphic, as hyperbolic orbifolds, to the triangle
$\Delta_{3,5,10}$ in the real hyperbolic plane ${\mathbb{R}}H^{2}$ with angles
$\pi/3,\pi/5$ and $\pi/10$. The results in the above Sections 3, 4 and 5 give
the orbifold singularities of $\overline{{\mathscr{M}}}_{\mathbb{R}}$ together
with their stabilizer groups. In order to completely determine the hyperbolic
orbifold structure of $\overline{{\mathscr{M}}}_{\mathbb{R}}$, however, we
also need to know the underlying topological space
$\left|{\mathcal{M}}_{s}(\mathbb{R})\right|$ of
$\overline{{\mathscr{M}}}_{\mathbb{R}}$. The first observation is that
$\left|{\mathcal{M}}_{s}(\mathbb{R})\right|$ is compact. Indeed, it is
classical that the topological space
${\mathcal{M}}_{s}(\mathbb{C})=\mathbb{G}(\mathbb{C})\setminus
X_{s}(\mathbb{C})$, parametrizing complex stable binary quintics, is compact.
This follows from the fact that it is homeomorphic to
$\overline{M}_{0,5}(\mathbb{C})/{\mathfrak{S}}_{5}$, and the stack of stable
five-pointed curves $\overline{M}_{0,5}$ is proper [Knu83], or from the fact
that it is homeomorphic to a compact ball quotient [Shi64]. Moreover, the map
${\mathcal{M}}_{s}(\mathbb{R})\to{\mathcal{M}}_{s}(\mathbb{C})$ is proper,
which proves the compactness of ${\mathcal{M}}_{s}(\mathbb{R})$.
The second observation is that ${\mathcal{M}}_{s}(\mathbb{R})$ is connected,
since $X_{s}(\mathbb{R})$ is obtained from the euclidean space
$\\{F\in\mathbb{R}[x,y]:F\text{ homogeneous of degree }5\\}$ by removing a
subspace of codimension at least two. We can prove more:
###### Lemma 5.33.
The moduli space ${\mathcal{M}}_{s}(\mathbb{R})$ of real stable binary
quintics is simply connected.
###### Proof.
The idea is to show that the following holds:
1. 1.
For each $i\in\\{0,1,2\\}$, the embedding
${\mathscr{M}}_{i}\hookrightarrow\overline{{\mathscr{M}}}_{i}\subset{\mathcal{M}}_{s}({\mathbb{R}})$
of the connected component ${\mathscr{M}}_{i}$ of
${\mathcal{M}}_{0}(\mathbb{R})$ into its closure in
${\mathcal{M}}_{s}({\mathbb{R}})$ is homeomorphic to the embedding
$D\hookrightarrow\overline{D}$ of the open unit disc into the closed unit disc
in $\mathbb{R}^{2}$.
2. 2.
We have
${\mathcal{M}}_{s}(\mathbb{R})=\overline{{\mathscr{M}}}_{0}\cup\overline{{\mathscr{M}}}_{1}\cup\overline{{\mathscr{M}}}_{2}$,
and this glueing corresponds up to homeomorphism to the glueing of three
closed discs $\overline{D}_{i}\subset\mathbb{R}^{2}$ as in Figure 1.
To do this, one considers the moduli spaces of real smooth (resp. stable)
genus zero curves with five real marked points [Knu83], as well as twists of
this space. Define two anti-holomorphic involutions
$\sigma_{i}:\mathbb{P}^{1}(\mathbb{C})^{5}\to\mathbb{P}^{1}(\mathbb{C})^{5}$
by
$\sigma_{1}(x_{1},x_{2},x_{3},x_{4},x_{5})=(\bar{x}_{1},\bar{x}_{2},\bar{x}_{3},\bar{x}_{5},\bar{x}_{4}),$
and
$\sigma_{2}(x_{1},x_{2},x_{3},x_{4},x_{5})=(\bar{x}_{1},\bar{x}_{3},\bar{x}_{2},\bar{x}_{5},\bar{x}_{4}).$
Then define
$P_{0}^{1}(\mathbb{R})=P_{0}(\mathbb{C})^{\sigma_{1}},\;\;\;P_{s}^{1}(\mathbb{R})=P_{s}(\mathbb{C})^{\sigma_{1}},\;\;\;P_{0}^{2}(\mathbb{R})=P_{0}(\mathbb{C})^{\sigma_{2}},\;\;\;P_{s}^{2}(\mathbb{R})=P_{s}(\mathbb{C})^{\sigma_{2}}.$
It is clear that ${\mathscr{M}}_{0}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{0}(\mathbb{R})/{\mathfrak{S}}_{5}$. Similarly, we have:
$\displaystyle{\mathscr{M}}_{1}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{0}^{1}(\mathbb{R})/{\mathfrak{S}}_{3}\times{\mathfrak{S}}_{2}\quad{\textnormal{
and }}\quad{\mathscr{M}}_{2}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{0}^{2}(\mathbb{R})/{\mathfrak{S}}_{2}\times{\mathfrak{S}}_{2}.$
Moreover, we have
$\overline{{\mathscr{M}}}_{0}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{s}(\mathbb{R})/{\mathfrak{S}}_{5}$. We define
$\displaystyle\overline{{\mathscr{M}}}_{1}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{s}^{1}(\mathbb{R})/{\mathfrak{S}}_{3}\times{\mathfrak{S}}_{2},\quad{\textnormal{
and
}}\quad\overline{{\mathscr{M}}}_{2}=\textnormal{PGL}_{2}(\mathbb{R})\setminus
P_{s}^{2}(\mathbb{R})/{\mathfrak{S}}_{2}\times{\mathfrak{S}}_{2}.$
Each $\overline{{\mathscr{M}}}_{i}$ is simply connected. Moreover, the natural
maps $\overline{{\mathscr{M}}}_{i}\to{\mathcal{M}}_{s}(\mathbb{R})$ are closed
embeddings of topological spaces, and one can check that the images glue to
form ${\mathcal{M}}_{s}(\mathbb{R})$ in the prescribed way. We leave the
details to the reader. ∎
###### Proof of Theorem 5.2.
To any closed $2$-dimensional orbifold $O$ one can associate a set of natural
numbers $S_{O}=\\{n_{1},\dotsc,n_{k};m_{1},\dotsc,m_{l}\\}$ by letting $k$ be
the number of cone points of $X_{O}$, $l$ the number of corner reflectors,
$n_{i}$ the order of the $i$-th cone point and $2m_{j}$ the order of the
$j$-th corner reflector. A closed $2$-dimensional orbifold $O$ is then
determined, up to orbifold-structure preserving homeomorphism, by its
underlying space $X_{O}$ and the set $S_{O}$ [Thu80]. By Lemma 5.33,
$\overline{{\mathscr{M}}}_{\mathbb{R}}$ is simply connected. By Proposition
5.32, $\overline{{\mathscr{M}}}_{\mathbb{R}}$ has no cone points and three
corner reflectors with angles $\pi/3,\pi/5$ and $\pi/10$. This implies that
$\overline{{\mathscr{M}}}_{\mathbb{R}}$ and $\Delta_{3,5,10}$ are isomorphic
as topological orbifolds. Consequently, the orbifold fundamental group of
$\overline{{\mathscr{M}}}_{\mathbb{R}}$ is abstractly isomorphic to the group
$P\Gamma_{3,5,10}$ defined in (2).
Let $\phi:P\Gamma_{3,5,10}\hookrightarrow\text{PSL}_{2}(\mathbb{R})$ be any
embedding such that
$X:=\phi\left(P\Gamma_{3,5,10}\right)\setminus\mathbb{R}H^{2}$ is a hyperbolic
orbifold; we claim that there is a fundamental domain $\Delta$ of $X$
isometric to $\Delta_{3,5,10}$. Consider the generator $a\in
P\Gamma_{3,5,10}$. Since $\phi(a)^{2}=1$, there exists a geodesic
$L_{1}\subset\mathbb{R}H^{2}$ such that
$\phi(a)\in\text{Isom}(\mathbb{R}H^{2})$ is the
reflection across $L_{1}$. Next, consider the generator $b\in
P\Gamma_{3,5,10}$. There exists a geodesic $L_{2}\subset\mathbb{R}H^{2}$ such
that $\phi(b)$ is the reflection across $L_{2}$. One easily shows that
$L_{2}\cap L_{1}\neq\emptyset$. Let $x\in L_{1}\cap L_{2}$. Then
$\phi(a)\phi(b)$ is an element of order three that fixes $x$, hence
$\phi(a)\phi(b)$ is a rotation around $x$. Therefore, one of the angles
between $L_{1}$ and $L_{2}$ must be $\pi/3$. Finally, we know that $\phi(c)$
is an element of order $2$ in $\textnormal{PSL}_{2}(\mathbb{R})$, hence a
reflection across a line $L_{3}$. By the previous arguments, $L_{3}\cap
L_{2}\neq\emptyset$ and $L_{3}\cap L_{1}\neq\emptyset$. It also follows that
$L_{1}\cap L_{2}\cap L_{3}=\emptyset$. Consequently, the three geodesics
$L_{i}\subset\mathbb{R}H^{2}$ enclose a hyperbolic triangle; the orders of
$\phi(a)\phi(b)$, $\phi(a)\phi(c)$ and $\phi(b)\phi(c)$ imply that the three
interior angles of the triangle are $\pi/3$, $\pi/5$ and $\pi/10$. ∎
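For the reader's convenience we recall the standard fact used in the angle computations above (a reminder of ours, not an additional ingredient of the proof): if $r_{1},r_{2}$ are reflections of $\mathbb{R}H^{2}$ across geodesics meeting at a point with angle $\theta$, then
$r_{1}r_{2}=\text{the rotation by }2\theta\text{ about that point},$
so that an order-$3$ product such as $\phi(a)\phi(b)$ forces $\theta\in\\{\pi/3,2\pi/3\\}$, i.e. one of the two angles between the corresponding geodesics equals $\pi/3$, and similarly for the products of orders $5$ and $10$.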
#### 4 The monodromy groups
In this section, we describe the monodromy group $P\Gamma$, as well as the
groups $P\Gamma_{\alpha}$ appearing in Proposition 5.17. As for the lattice
$(\Lambda,{\mathfrak{h}})$ (see (9)), we have:
###### Theorem 5.34 (Shimura).
There is an isomorphism of hermitian $\mathcal{O}_{K}$-lattices
$\left(\Lambda,{\mathfrak{h}}\right)\cong\left(\mathcal{O}_{K}^{3},\textnormal{diag}\left(1,1,\frac{1-\sqrt{5}}{2}\right)\right).$
###### Proof.
See [Shi64, Section 6] as well as item (5) in the table on page 1. ∎
Let us write $\Lambda=\mathcal{O}_{K}^{3}$ and
${\mathfrak{h}}=\textnormal{diag}(1,1,\frac{1-\sqrt{5}}{2})$ in the remaining
part of Section 4. Write
$\alpha=\zeta_{5}+\zeta_{5}^{-1}=\frac{\sqrt{5}-1}{2}$. Recall that
$\theta=\zeta_{5}-\zeta_{5}^{-1}$ and observe that
$|\theta|^{2}=\frac{\sqrt{5}+5}{2}$. Define three quadratic forms $q_{0}$,
$q_{1}$ and $q_{2}$ on $\mathbb{Z}[\alpha]^{3}$ as follows:
$\begin{split}q_{0}(x_{0},x_{1},x_{2})&=x_{0}^{2}+x_{1}^{2}-\alpha
x_{2}^{2},\\\ q_{1}(x_{0},x_{1},x_{2})&=|\theta|^{2}x_{0}^{2}+x_{1}^{2}-\alpha
x_{2}^{2},\\\
q_{2}(x_{0},x_{1},x_{2})&=|\theta|^{2}x_{0}^{2}+|\theta|^{2}x_{1}^{2}-\alpha
x_{2}^{2}.\end{split}$ (34)
We consider $\mathbb{Z}[\alpha]$ as a subring of $\mathbb{R}$ via the standard
embedding.
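As a quick check of the constant appearing in (34) (a verification of ours, using only $1+\zeta_{5}+\zeta_{5}^{2}+\zeta_{5}^{3}+\zeta_{5}^{4}=0$ and the fact that $\theta$ is purely imaginary under the standard embedding $\zeta_{5}=e^{2i\pi/5}$):
$|\theta|^{2}=-\theta^{2}=2-(\zeta_{5}^{2}+\zeta_{5}^{-2})=2-(-1-\alpha)=3+\alpha=3+\frac{\sqrt{5}-1}{2}=\frac{\sqrt{5}+5}{2}.$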
###### Theorem 5.35.
Consider the quadratic forms $q_{j}$ defined in (34). There is a union of
geodesic subspaces ${\mathscr{H}}_{j}\subset\mathbb{R}H^{2}$
($j\in\\{0,1,2\\}$) and an isomorphism of hyperbolic orbifolds
${\mathcal{M}}_{0}(\mathbb{R})\cong\coprod_{j=0}^{2}\textnormal{PO}(q_{j},\mathbb{Z}[\alpha])\setminus\left(\mathbb{R}H^{2}-{\mathscr{H}}_{j}\right).$
(35)
###### Proof.
Recall that $\theta=\zeta_{5}-\zeta_{5}^{-1}$; we consider the
${\mathbb{F}}_{5}$-vector space $W$ equipped with the quadratic form
$q={\mathfrak{h}}\mod\theta$. Define three anti-isometric involutions as
follows:
$\begin{split}\alpha_{0}:&(x_{0},x_{1},x_{2})\mapsto(\;\;\;\bar{x}_{0},\;\;\;\bar{x}_{1},\bar{x}_{2})\\\
\alpha_{1}:&(x_{0},x_{1},x_{2})\mapsto(-\bar{x}_{0},\;\;\;\bar{x}_{1},\bar{x}_{2})\\\
\alpha_{2}:&(x_{0},x_{1},x_{2})\mapsto(-\bar{x}_{0},-\bar{x}_{1},\bar{x}_{2}).\end{split}$
(36)
For isometries $\alpha:W\to W$, the dimension and determinant of the fixed
space $(W^{\alpha},q|_{W^{\alpha}})$ are conjugacy-invariant. Using this, one
easily shows that an anti-unitary involution of $\Lambda$ is
$\Gamma$-conjugate to exactly one of the $\pm\alpha_{j}$, hence
$C{\mathscr{A}}$ has cardinality $3$ and is represented by
$\alpha_{0},\alpha_{1},\alpha_{2}$ of (36). By Proposition 5.17, we obtain
${\mathcal{M}}_{0}(\mathbb{R})\cong\coprod_{j=0}^{2}P\Gamma_{\alpha_{j}}\setminus(\mathbb{R}H^{2}_{\alpha_{j}}-{\mathscr{H}})$
where each hyperbolic quotient
$P\Gamma_{\alpha_{j}}\setminus(\mathbb{R}H^{2}_{\alpha_{j}}-{\mathscr{H}})$ is
connected. Next, consider the fixed lattices
$\begin{split}\Lambda_{0}:=\Lambda^{\alpha_{0}}=\mathbb{Z}[\alpha]\oplus\mathbb{Z}[\alpha]\oplus\mathbb{Z}[\alpha]\\\
\Lambda_{1}:=\Lambda^{\alpha_{1}}=\theta\mathbb{Z}[\alpha]\oplus\mathbb{Z}[\alpha]\oplus\mathbb{Z}[\alpha]\\\
\Lambda_{2}:=\Lambda^{\alpha_{2}}=\theta\mathbb{Z}[\alpha]\oplus\theta\mathbb{Z}[\alpha]\oplus\mathbb{Z}[\alpha].\end{split}$
(37)
One easily shows that $P\Gamma_{\alpha_{j}}=N_{P\Gamma}(\alpha_{j})$ for the
normalizer $N_{P\Gamma}(\alpha_{j})$ of $\alpha_{j}$ in $P\Gamma$. Moreover,
if $h_{j}$ denotes the restriction of ${\mathfrak{h}}$ to
$\Lambda^{\alpha_{j}}$, then there is a natural embedding
$\iota:N_{P\Gamma}(\alpha_{j})\hookrightarrow\textnormal{PO}(\Lambda_{j},h_{j},\mathbb{Z}[\alpha]).$
(38)
We claim that $\iota$ is actually an isomorphism. Indeed, this follows from
the fact that the natural homomorphism $\pi:N_{\Gamma}(\alpha_{j})\to
O(\Lambda_{j},h_{j})$ is surjective, where
$N_{\Gamma}(\alpha_{j})=\\{g\in\Gamma:g\circ\alpha_{j}=\alpha_{j}\circ g\\}$
is the normalizer of $\alpha_{j}$ in $\Gamma$. The surjectivity of $\pi$
follows in turn from the equality
$\Lambda={\mathcal{O}}_{K}\cdot\Lambda_{j}+{\mathcal{O}}_{K}\cdot\theta\Lambda_{j}^{\vee}\subset
K^{3}$
which follows from (37). Since
$\textnormal{PO}(\Lambda_{j},h_{j},\mathbb{Z}[\alpha])=\textnormal{PO}(q_{j},\mathbb{Z}[\alpha])$,
we are done. ∎
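Let us also record (an observation of ours, implicit in (35)) that each form $q_{j}$ has signature $(2,1)$ under the standard embedding $\mathbb{Z}[\alpha]\subset\mathbb{R}$: the coefficients $1$ and $|\theta|^{2}=\frac{\sqrt{5}+5}{2}$ are positive, while the coefficient $-\alpha=-\frac{\sqrt{5}-1}{2}$ of $x_{2}^{2}$ is negative. Hence $\textnormal{PO}(q_{j},\mathbb{Z}[\alpha])$ naturally sits inside $\textnormal{PO}(2,1)=\textnormal{Isom}(\mathbb{R}H^{2})$ and the right-hand side of (35) makes sense; compare Corollary 5.23.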
## Part 2 Real algebraic cycles
### Chapter 6 Integral Fourier transforms
This chapter is based on joint work with Thorsten Beckmann.
#### 1 Introduction
In the second part of this thesis, we focus on algebraic cycles on complex and
real abelian varieties. A central role in this study – which takes up Chapters
6, 7 and 8 – is played by a certain correspondence. It has long been
known that for an abelian variety $A$ over a field $k$, with dual abelian
variety ${\widehat{A}}$, the Fourier transform
$\displaystyle{\mathcal{F}}_{A}\colon\textnormal{CH}(A)_{\mathbb{Q}}\xrightarrow{\sim}\textnormal{CH}({\widehat{A}})_{\mathbb{Q}}$
(1)
provides a powerful tool to study the $\mathbb{Q}$-linear algebraic cycles of
$A$. It is used to study the rational Chow ring of $A$, as well as the cycle
class map to rational cohomology. Recently, Moonen and Polishchuk [MP10]
initiated the study of the integrality aspects of the Fourier transform (1).
Indeed, it is natural to ask:
###### Question 6.1.
How does ${\mathcal{F}}_{A}$ interact with integral algebraic cycles?
The goal of the current Chapter 6 is to work on Question 6.1, building on the
results of Moonen–Polishchuk. The applications of _loc.cit._ primarily concern
the structure of the integral Chow rings themselves. We continue with their
study, but we also address the compatibility of Fourier transforms with
integral cycle class maps. Since Question 6.1 was phrased somewhat
imprecisely, let us explain in some detail the steps that we take during our
search for an answer:
1. Chapter 6:
We further develop the theory of _integral Fourier transforms_, on Chow rings
as well as on cohomology. The main result of this chapter (Theorem 6.9) will
provide necessary and sufficient conditions for the Fourier transform (1) to
lift to a motivic homomorphism between integral Chow groups.
2. Chapter 7:
We apply the theory of Chapter 6 to complex abelian varieties. The main
outcome of this project is Theorem 7.1, which says that on a principally
polarized complex abelian variety $A$ whose minimal cohomology class is
algebraic, all integral Hodge classes of degree $2\dim(A)-2$ are algebraic.
3. Chapter 8:
We apply the theory of Chapter 6 (which is developed for abelian varieties
over an arbitrary field $k$) to the case of abelian varieties over
$k=\mathbb{R}$. The principal outcome is that modulo torsion, every real
abelian threefold satisfies the real integral Hodge conjecture (Theorem 8.3).
Other applications of integral Fourier transforms include a detailed analysis
of the Hochschild-Serre filtration on equivariant cohomology of a real abelian
variety (Theorem 8.8).
Having thus outlined what we intend to do with integral Fourier transforms, let us
now make Question 6.1 more precise. Let $g$ be a positive integer and let $A$
be an abelian variety of dimension $g$ over a field $k$. _Fourier transforms_
are correspondences between the derived categories, rational Chow groups and
cohomology of $A$ and ${\widehat{A}}$ attached to the Poincaré bundle
${\mathcal{P}}_{A}$ on $A\times{\widehat{A}}$ [Muk81, Bea82, Huy06]. On the
level of cohomology, the Fourier transform preserves integral $\ell$-adic
étale cohomology when $k=k_{s}$ (separable closure), and integral Betti
cohomology when $k=\mathbb{C}$. It is thus natural to ask whether the Fourier
transform on rational Chow groups preserves integral cycles modulo torsion or,
more generally, lifts to a homomorphism between integral Chow groups. This
question was raised by Moonen–Polishchuk [MP10] and Totaro [Tot21]. More
precisely, Moonen and Polishchuk gave a counterexample for abelian varieties
over non-closed fields and asked about the case of algebraically closed
fields.
The goal of Chapter 6 is to investigate this question further.
#### 2 Integral Fourier transforms
The main result of this chapter gives necessary and sufficient conditions for the
Fourier transform on rational Chow groups or cohomology to lift to a motivic
homomorphism between integral Chow groups. To get there, we need a precise
definition of "integral Fourier transform", which we introduce in this Section
2.
##### 1 Notation and conventions
We let $k$ be a field with separable closure $k_{s}$ and $\ell$ a prime number
different from the characteristic of $k$. For a smooth projective variety $X$
over $k$, we let $\textnormal{CH}(X)$ be the Chow group of $X$ and define
$\textnormal{CH}(X)_{\mathbb{Q}}=\textnormal{CH}(X)\otimes\mathbb{Q}$,
$\textnormal{CH}(X)_{\mathbb{Q}_{\ell}}=\textnormal{CH}(X)\otimes{\mathbb{Q}_{\ell}}$
and
$\textnormal{CH}(X)_{\mathbb{Z}_{\ell}}=\textnormal{CH}(X)\otimes{\mathbb{Z}_{\ell}}$.
We let
${\mathrm{H}}_{\textnormal{\'{e}t}}^{i}(X_{k_{s}},\mathbb{Z}_{\ell}(a))$ be
the $i$-th degree étale cohomology group with coefficients in
$\mathbb{Z}_{\ell}(a)$, $a\in\mathbb{Z}$.
Often, $A$ will denote an abelian variety of dimension $g$ over $k$, with dual
abelian variety ${\widehat{A}}$ and (normalized) Poincaré bundle ${\mathcal{P}}_{A}$ on
$A\times{\widehat{A}}$. The abelian group $\textnormal{CH}(A)$ will in that case
be equipped with two ring structures: the usual intersection product $\cdotp$
as well as the Pontryagin product $\star$. Recall that the latter is defined
as follows:
$\star\colon\textnormal{CH}(A)\times\textnormal{CH}(A)\to\textnormal{CH}(A),\quad
x\star y=m_{\ast}(\pi_{1}^{\ast}(x)\cdot\pi_{2}^{\ast}(y)).$
Here, as well as in the rest of the paper, $\pi_{i}$ denotes the projection
onto the $i$-th factor, $\Delta\colon A\to A\times A$ the diagonal morphism,
and $m\colon A\times A\to A$ the group law morphism of $A$.
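For orientation (a standard example, not needed later in this precise form): for closed points $a,b\in A(k)$ one has
$[a]\star[b]=m_{\ast}\bigl(\pi_{1}^{\ast}[a]\cdot\pi_{2}^{\ast}[b]\bigr)=m_{\ast}[(a,b)]=[a+b],$
so on zero-cycles the Pontryagin product is simply induced by the group law of $A$, and its unit is the class $[0]$ of the origin.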
For any abelian group $M$ and any element $x\in M$, we denote by
$x_{\mathbb{Q}}\in M\otimes_{\mathbb{Z}}\mathbb{Q}$ the image of $x$ in
$M\otimes_{\mathbb{Z}}\mathbb{Q}$ under the canonical homomorphism $M\to
M\otimes_{\mathbb{Z}}\mathbb{Q}$.
##### 2 Integral Fourier transforms and integral Hodge classes
For abelian varieties $A$ over $k=k_{s}$, it is unknown whether the Fourier
transform
${\mathcal{F}}_{A}\colon\textnormal{CH}(A)_{\mathbb{Q}}\to\textnormal{CH}({\widehat{A}})_{\mathbb{Q}}$
preserves the subgroups of integral cycles modulo torsion. A sufficient
condition for this to hold is that ${\mathcal{F}}_{A}$ lifts to a homomorphism
$\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$. In this section we
outline a second consequence of such a lift
$\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ when $A$ is defined over
the complex numbers: the existence of an integral lift of ${\mathcal{F}}_{A}$
implies the integral Hodge conjecture for one-cycles on ${\widehat{A}}$.
Let $A$ be an abelian variety over $k$. The Fourier transform on the level of
Chow groups is the group homomorphism
${\mathcal{F}}_{A}\colon\textnormal{CH}(A)_{\mathbb{Q}}\to\textnormal{CH}({\widehat{A}})_{\mathbb{Q}}$
induced by the correspondence
$\textnormal{ch}({\mathcal{P}}_{A})\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$,
where $\textnormal{ch}({\mathcal{P}}_{A})$ is the Chern character of
${\mathcal{P}}_{A}$. Similarly, one defines the Fourier transform on the level
of étale cohomology:
${\mathscr{F}}_{A}\colon{\mathrm{H}}_{\textnormal{\'{e}t}}^{\bullet}(A_{k_{s}},\mathbb{Q}_{\ell}(\bullet))\to{\mathrm{H}}_{\textnormal{\'{e}t}}^{\bullet}({\widehat{A}}_{k_{s}},\mathbb{Q}_{\ell}(\bullet)).$
In fact, ${\mathscr{F}}_{A}$ preserves the integral cohomology classes and
induces, for each integer $j$ with $0\leq j\leq 2g$, an isomorphism [Bea82,
Proposition 1], [Tot21, page 18]:
${\mathscr{F}}_{A}\colon{\mathrm{H}}_{\textnormal{\'{e}t}}^{j}(A_{k_{s}},\mathbb{Z}_{\ell}(a))\to{\mathrm{H}}_{\textnormal{\'{e}t}}^{2g-j}({\widehat{A}}_{k_{s}},\mathbb{Z}_{\ell}(a+g-j)).$
Similarly, if $k=\mathbb{C}$, then $\textnormal{ch}({\mathcal{P}}_{A})$
induces, for each integer $i$ with $0\leq i\leq 2g$, an isomorphism of Hodge
structures
${\mathscr{F}}_{A}\colon{\mathrm{H}}^{i}(A,\mathbb{Z})\to{\mathrm{H}}^{2g-i}({\widehat{A}},\mathbb{Z})(g-i).$
(2)
In [MP10], Moonen and Polishchuk consider an isomorphism $\phi\colon
A\xrightarrow{\sim}{\widehat{A}}$, a positive integer $d$, and define the
notion of motivic integral Fourier transform of $(A,\phi)$ up to factor $d$.
The definition goes as follows. Let ${\mathcal{M}}(k)$ be the category of
effective Chow motives over $k$ with respect to ungraded correspondences, and
let $h(A)$ be the motive of $A$. Then a morphism
${\mathcal{F}}\colon h(A)\to h(A)$
in ${\mathcal{M}}(k)$ is a motivic integral Fourier transform of $(A,\phi)$ up
to factor $d$ if the following three conditions are satisfied: (i) the induced
morphism $h(A)_{\mathbb{Q}}\to h(A)_{\mathbb{Q}}$ is the composition of the
usual Fourier transform with the isomorphism $\phi^{\ast}\colon
h({\widehat{A}})_{\mathbb{Q}}\xrightarrow{\sim}h(A)_{\mathbb{Q}}$, (ii) one
has $d\cdot{\mathcal{F}}\circ{\mathcal{F}}=d\cdot(-1)^{g}\cdot[-1]_{\ast}$ as
morphisms from $h(A)$ to $h(A)$, and (iii) $d\cdot{\mathcal{F}}\circ
m_{\ast}=d\cdot\Delta^{\ast}\circ{\mathcal{F}}\otimes{\mathcal{F}}\colon
h(A)\otimes h(A)\to h(A)$.
For our purposes, we consider similar homomorphisms
$\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$. To make their existence
easier to verify (cf. Theorem 6.9), we relax the above conditions:
###### Definition 6.2.
Let $A_{/k}$ be an abelian variety and let
${\mathcal{F}}\colon\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ be a
group homomorphism. We call ${\mathcal{F}}$ a weak integral Fourier transform
if the following diagram commutes:
$\xymatrix{\textnormal{CH}(A)\ar[r]^-{{\mathcal{F}}}\ar[d] & \textnormal{CH}({\widehat{A}})\ar[d]\\ \textnormal{CH}(A)_{\mathbb{Q}}\ar[r]^-{{\mathcal{F}}_{A}} & \textnormal{CH}({\widehat{A}})_{\mathbb{Q}}.}$
(3)
We call a weak integral Fourier transform ${\mathcal{F}}$ motivic if it is
induced by a cycle $\Gamma$ in $\textnormal{CH}(A\times{\widehat{A}})$ that
satisfies
$\Gamma_{\mathbb{Q}}=\textnormal{ch}({\mathcal{P}}_{A})\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$.
A group homomorphism
${\mathcal{F}}\colon\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ is an
integral Fourier transform up to homology if the following diagram commutes:
$\xymatrix{\textnormal{CH}(A)\ar[r]^-{{\mathcal{F}}}\ar[d] & \textnormal{CH}({\widehat{A}})\ar[d]\\ \oplus_{r\geq 0}{\mathrm{H}}_{\textnormal{\'{e}t}}^{2r}(A_{k_{s}},\mathbb{Z}_{\ell}(r))\ar[r]^-{{\mathscr{F}}_{A}} & \oplus_{r\geq 0}{\mathrm{H}}_{\textnormal{\'{e}t}}^{2r}({\widehat{A}}_{k_{s}},\mathbb{Z}_{\ell}(r)).}$
(4)
Similarly, a $\mathbb{Z}_{\ell}$-module homomorphism
${\mathcal{F}}_{\ell}\colon\textnormal{CH}(A)_{\mathbb{Z}_{\ell}}\to\textnormal{CH}({\widehat{A}})_{\mathbb{Z}_{\ell}}$
is called an $\ell$-adic integral Fourier transform up to homology if
${\mathcal{F}}_{\ell}$ is compatible with ${\mathscr{F}}_{A}$ and the
$\ell$-adic cycle class maps. Finally, an integral Fourier transform up to
homology ${\mathcal{F}}$ (resp. an $\ell$-adic integral Fourier transform up
to homology ${\mathcal{F}}_{\ell}$) is called motivic if it is induced by a
cycle $\Gamma\in\textnormal{CH}(A\times{\widehat{A}})$ (resp.
$\Gamma_{\ell}\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Z}_{\ell}})$
such that $cl(\Gamma)$ (resp. $cl(\Gamma_{\ell})$) equals
$\textnormal{ch}({\mathcal{P}}_{A})\in\oplus_{r\geq
0}{\mathrm{H}}_{\textnormal{\'{e}t}}^{2r}((A\times{\widehat{A}})_{k_{s}},\mathbb{Z}_{\ell}(r))$.
###### Remark 6.3.
If ${\mathcal{F}}\colon\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ is
a weak integral Fourier transform, then ${\mathcal{F}}$ is an integral Fourier
transform up to homology, the $\mathbb{Z}_{\ell}$-module $\oplus_{r\geq
0}{\mathrm{H}}_{\textnormal{\'{e}t}}^{2r}({\widehat{A}}_{k_{s}},\mathbb{Z}_{\ell}(r))$
being torsion-free. If $k=\mathbb{C}$, then
${\mathcal{F}}\colon\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ is an
integral Fourier transform up to homology if and only if ${\mathcal{F}}$ is
compatible with the Fourier transform
${\mathscr{F}}_{A}\colon{\mathrm{H}}^{\bullet}(A,\mathbb{Z})\to{\mathrm{H}}^{\bullet}({\widehat{A}},\mathbb{Z})$
on Betti cohomology.
The relation between integral Fourier transforms and Hodge classes is as
follows:
###### Lemma 6.4.
Let $A$ be a complex abelian variety and let
${\mathcal{F}}\colon\textnormal{CH}(A)\to\textnormal{CH}({\widehat{A}})$ be an
integral Fourier transform up to homology.
1. 1.
For each $i\in\mathbb{Z}_{\geq 0}$, the integral Hodge conjecture for degree
$2i$ classes on $A$ implies the integral Hodge conjecture for degree $2g-2i$
classes on ${\widehat{A}}$.
2. 2.
If ${\mathcal{F}}$ is motivic, then ${\mathscr{F}}_{A}$ induces a group
isomorphism
${\mathrm{Z}}^{2i}(A)\xrightarrow{\sim}{\mathrm{Z}}^{2g-2i}({\widehat{A}})$
and, therefore, the integral Hodge conjectures for degree $2i$ classes on $A$
and degree $2g-2i$ classes on ${\widehat{A}}$ are equivalent for all $i$.
###### Proof.
We can extend Diagram (4) to the following commutative diagram:
$\xymatrix{\textnormal{CH}^{i}(A)\ar[r]\ar[d]_{cl^{i}} & \textnormal{CH}(A)\ar[r]^-{{\mathcal{F}}}\ar[d] & \textnormal{CH}({\widehat{A}})\ar[r]\ar[d] & \textnormal{CH}_{i}({\widehat{A}})\ar[d]^{cl_{i}}\\ {\mathrm{H}}^{2i}(A,\mathbb{Z})\ar[r] & {\mathrm{H}}^{\bullet}(A,\mathbb{Z})\ar[r]^-{{\mathscr{F}}_{A}} & {\mathrm{H}}^{\bullet}({\widehat{A}},\mathbb{Z})\ar[r] & {\mathrm{H}}^{2g-2i}({\widehat{A}},\mathbb{Z}).}$
The composition
${\mathrm{H}}^{2i}(A,\mathbb{Z})\to{\mathrm{H}}^{2g-2i}({\widehat{A}},\mathbb{Z})$
appearing on the bottom line agrees up to a suitable Tate twist with the map
${\mathscr{F}}_{A}$ of equation (2). Therefore, we obtain a commutative
diagram:
$\xymatrix{\textnormal{CH}^{i}(A)\ar[r]\ar[d]_{cl^{i}} & \textnormal{CH}_{i}({\widehat{A}})\ar[d]^{cl_{i}}\\ \textnormal{Hdg}^{2i}(A,\mathbb{Z})\ar[r]^-{\sim} & \textnormal{Hdg}^{2g-2i}({\widehat{A}},\mathbb{Z}).}$
(5)
Thus the surjectivity of $cl^{i}$ implies the surjectivity of $cl_{i}$.
Moreover, if ${\mathcal{F}}$ is motivic, then replacing $A$ by ${\widehat{A}}$
and ${\widehat{A}}$ by ${\widehat{\widehat{A}\mkern 5.5mu}\mkern-5.5mu}{}$ in
the argument above shows that the images of $cl^{i}$ and $cl_{i}$ are
identified under the isomorphism
${\mathscr{F}}_{A}\colon\textnormal{Hdg}^{2i}(A,\mathbb{Z})\xrightarrow{\sim}\textnormal{Hdg}^{2g-2i}({\widehat{A}},\mathbb{Z})$
in diagram (5). ∎
#### 3 Rational Fourier transforms
The above suggests that to prove Theorem 7.1, one would need to show that for
a complex abelian variety of dimension $g$ whose minimal Poincaré class
$c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z})$
is algebraic, all classes of the form
$c_{1}({\mathcal{P}}_{A})^{i}/i!\in{\mathrm{H}}^{2i}(A\times{\widehat{A}},\mathbb{Z})$
are algebraic. With this goal in mind we shall study Fourier transforms on
rational Chow groups in Section 3, and investigate how these relate to
$\textnormal{ch}({\mathcal{P}}_{A})\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$.
It turns out that the cycles $c_{1}({\mathcal{P}}_{A})^{i}/i!$ in
$\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$ satisfy several relations
that are very similar to those proved by Beauville for the cycles
$\theta^{i}/i!\in\textnormal{CH}(A)_{\mathbb{Q}}$ in case $A$ is principally
polarized, see [Bea82]. Since we will need these results in any characteristic
in order to prove Theorem 7.6, we will work over our general field $k$, see
Section 1.
Let $A$ be an abelian variety over $k$. Define cycles
$\displaystyle\ell$
$\displaystyle=c_{1}({\mathcal{P}}_{A})\in\textnormal{CH}^{1}(A\times{\widehat{A}})_{\mathbb{Q}},$
$\displaystyle{\widehat{\ell}}$
$\displaystyle=c_{1}({\mathcal{P}}_{{\widehat{A}}})\in\textnormal{CH}^{1}({\widehat{A}}\times
A)_{\mathbb{Q}},$ $\displaystyle{\mathscr{R}}_{A}$
$\displaystyle=c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in\textnormal{CH}_{1}(A\times{\widehat{A}})_{\mathbb{Q}},\quad{\textnormal{
and }}$ $\displaystyle{\mathscr{R}}_{{\widehat{A}}}$
$\displaystyle=c_{1}({\mathcal{P}}_{{\widehat{A}}})^{2g-1}/(2g-1)!\in\textnormal{CH}_{1}({\widehat{A}}\times
A)_{\mathbb{Q}}.$
For $a\in\textnormal{CH}(A)_{\mathbb{Q}}$, define
$\mathrm{E}(a)\in\textnormal{CH}(A)_{\mathbb{Q}}$ as the $\star$-exponential
of $a$:
$\mathrm{E}(a)\coloneqq\sum_{n\geq 0}\frac{a^{\star
n}}{n!}\in\textnormal{CH}(A)_{\mathbb{Q}}.$
The key to Theorem 7.1 will be the following:
###### Lemma 6.5.
We have
$\textnormal{ch}({\mathcal{P}}_{A})=e^{\ell}=(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot{\mathscr{R}}_{A})\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$.
###### Proof.
The most important ingredient in the proof is the following:
Claim $(\ast)$: Consider the Fourier transform
${\mathcal{F}}_{A\times{\widehat{A}}}\colon\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}\to\textnormal{CH}({\widehat{A}}\times
A)_{\mathbb{Q}}$. One has
${\mathcal{F}}_{A\times{\widehat{A}}}(e^{\ell})=(-1)^{g}\cdot
e^{-{\widehat{\ell}}}\in\textnormal{CH}({\widehat{A}}\times A)_{\mathbb{Q}}.$
To prove Claim ($\ast$), we lift the desired equality in the rational Chow
group of ${\widehat{A}}\times A$ to an isomorphism in the derived category
${\mathrm{D}}^{b}({\widehat{A}}\times A)$ of ${\widehat{A}}\times A$. For
$X=A\times{\widehat{A}}$ the Poincaré line bundle $\mathcal{P}_{X}$ on
$X\times{\widehat{X}}\cong A\times{\widehat{A}}\times{\widehat{A}}\times A$ is
isomorphic to
$\pi_{13}^{\ast}\mathcal{P}_{A}\otimes\pi_{24}^{\ast}\mathcal{P}_{{\widehat{A}}}$.
Consider
$\Phi_{{{\mathcal{P}}}_{X}}({\mathcal{P}}_{A})\cong\pi_{34,\ast}\left(\pi_{13}^{\ast}{\mathcal{P}}_{A}\otimes\pi_{24}^{\ast}{\mathcal{P}}_{{\widehat{A}}}\otimes\pi_{12}^{\ast}{\mathcal{P}}_{A}\right)\in{\mathrm{D}}^{b}({\widehat{A}}\times
A)$ (6)
whose Chern character is exactly $\mathcal{F}_{X}(e^{\ell})$. Applying the
pushforward along the permutation map
$(123)\colon A\times{\widehat{A}}\times{\widehat{A}}\times
A\cong{\widehat{A}}\times A\times{\widehat{A}}\times A$
the object (6) becomes
$\pi_{14,\ast}\left(\pi_{12}^{\ast}\mathcal{P}_{{\widehat{A}}}\otimes\pi_{23}^{\ast}\mathcal{P}_{A}\otimes\pi_{34}^{\ast}\mathcal{P}_{{\widehat{A}}}\right)$
which is isomorphic to the Fourier–Mukai kernel of the composition
$\Phi_{\mathcal{P}_{{\widehat{A}}}}\circ\Phi_{\mathcal{P}_{A}}\circ\Phi_{\mathcal{P}_{{\widehat{A}}}}.$
Since $\Phi_{\mathcal{P}_{A}}\circ\Phi_{\mathcal{P}_{{\widehat{A}}}}$ is
isomorphic to $[-1_{{\widehat{A}}}]^{\ast}\circ[-g]$ by [Muk81, Theorem 2.2],
we have
$\Phi_{\mathcal{P}_{{\widehat{A}}}}\circ\Phi_{\mathcal{P}_{A}}\circ\Phi_{\mathcal{P}_{{\widehat{A}}}}\cong\Phi_{\mathcal{P}_{{\widehat{A}}}}\circ[-1_{{\widehat{A}}}]^{\ast}\circ[-g].$
This is the Fourier–Mukai transform with kernel
${\mathcal{E}}=\mathcal{P}_{{\widehat{A}}}^{\vee}[-g]\in{\mathrm{D}}^{b}({\widehat{A}}\times
A)$. By uniqueness of the Fourier–Mukai kernel of an equivalence [Orl97,
Theorem 2.2] and the fact that the Chern character of ${\mathcal{E}}$ equals
$(-1)^{g}\cdot e^{-{\widehat{\ell}}}\in\textnormal{CH}({\widehat{A}}\times
A)_{\mathbb{Q}}$, this finishes the proof of Claim ($\ast$).
Next, we claim that $(-1)^{g}\cdot{\mathcal{F}}_{{\widehat{A}}\times
A}(-{\widehat{\ell}})={\mathscr{R}}_{A}$. To see this, recall that for each
integer $i$ with $0\leq i\leq g$, there is a canonical Beauville decomposition
$\textnormal{CH}^{i}(A)_{\mathbb{Q}}=\oplus_{j=i-g}^{i}\textnormal{CH}^{i,j}(A)_{\mathbb{Q}},\quad\quad\quad\textnormal{see \cite{beauvilledecomposition}}.$
Since the Poincaré bundle ${\mathcal{P}}_{A}$ is symmetric, we have
$\ell\in\textnormal{CH}^{1,0}(A\times{\widehat{A}})_{\mathbb{Q}}$ and hence
$\ell^{i}\in\textnormal{CH}^{i,0}(A\times{\widehat{A}})_{\mathbb{Q}}$. In
particular, we have
${\mathscr{R}}_{A}\in\textnormal{CH}^{2g-1,0}(A\times{\widehat{A}})_{\mathbb{Q}}$.
The fact that ${\mathcal{P}}_{A}$ is symmetric also implies - via Claim
($\ast$) - that we have ${\mathcal{F}}_{{\widehat{A}}\times A}((-1)^{g}\cdot
e^{-{\widehat{\ell}}})=e^{\ell}$. Indeed,
${\mathcal{F}}_{{\widehat{A}}\times
A}\circ{\mathcal{F}}_{A\times{\widehat{A}}}=[-1]^{\ast}\cdot(-1)^{2g}=[-1]^{\ast},$
see [DM91, Corollary 2.22]. Since ${\mathcal{F}}_{{\widehat{A}}\times A}$
identifies the group $\textnormal{CH}^{i,0}({\widehat{A}}\times
A)_{\mathbb{Q}}$ with the group
$\textnormal{CH}^{2g-i,0}(A\times{\widehat{A}})_{\mathbb{Q}}$ (see [DM91, Lemma 2.18]), we
must indeed have
$(-1)^{g}\cdot{\mathcal{F}}_{{\widehat{A}}\times
A}(-{\widehat{\ell}})={\mathcal{F}}_{{\widehat{A}}\times
A}((-1)^{g+1}\cdot{\widehat{\ell}})=\frac{{\ell}^{2g-1}}{(2g-1)!}={\mathscr{R}}_{A}.$
(7)
For a $g$-dimensional abelian variety $X$ and any
$x,y\in\textnormal{CH}(X)_{\mathbb{Q}}$, one has ${\mathcal{F}}_{X}(x\cdot
y)=(-1)^{g}\cdot{\mathcal{F}}_{X}(x)\star{\mathcal{F}}_{X}(y)\in\textnormal{CH}({\widehat{X}})_{\mathbb{Q}}$,
see [Bea82, Proposition 3]. This implies (see also [MP10, §3.7]) that if $a$
is a cycle on $X$ such that
${\mathcal{F}}_{X}(a)\in\textnormal{CH}_{>0}({\widehat{X}})_{\mathbb{Q}}$,
then
${\mathcal{F}}_{X}(e^{a})=(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot{\mathcal{F}}_{X}(a))$.
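A short unwinding of this implication (our sketch; it uses ${\mathcal{F}}_{X}([X])=(-1)^{g}\cdot[0]$, which follows from ${\mathcal{F}}_{X}([0])=1$ and ${\mathcal{F}}_{{\widehat{X}}}\circ{\mathcal{F}}_{X}=(-1)^{g}\cdot[-1]^{\ast}$): by induction, ${\mathcal{F}}_{X}(a^{n})=(-1)^{(n-1)g}\cdot{\mathcal{F}}_{X}(a)^{\star n}$ for $n\geq 1$, and since $(-1)^{(n-1)g}=(-1)^{g}\cdot(-1)^{ng}$, summing over $n$ gives
${\mathcal{F}}_{X}(e^{a})=\sum_{n\geq 0}\frac{{\mathcal{F}}_{X}(a^{n})}{n!}=(-1)^{g}\sum_{n\geq 0}\frac{\left((-1)^{g}\cdot{\mathcal{F}}_{X}(a)\right)^{\star n}}{n!}=(-1)^{g}\cdot{\mathrm{E}}\left((-1)^{g}\cdot{\mathcal{F}}_{X}(a)\right).$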
This allows us to conclude that
$\displaystyle e^{\ell}$ $\displaystyle={\mathcal{F}}_{{\widehat{A}}\times
A}((-1)^{g}\cdot
e^{-{\widehat{\ell}}})=(-1)^{g}\cdot{\mathcal{F}}_{{\widehat{A}}\times
A}(e^{-{\widehat{\ell}}})$
$\displaystyle=(-1)^{g}\cdot{\mathrm{E}}({\mathcal{F}}_{{\widehat{A}}\times
A}(-{\widehat{\ell}}))=(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot{\mathscr{R}}_{A}),$
which finishes the proof. ∎
Next, assume that $A$ is equipped with a principal polarization $\lambda\colon
A\xrightarrow{\sim}{\widehat{A}}$, define $\ell=c_{1}({\mathcal{P}}_{A})$, and
let
$\Theta=\frac{1}{2}\cdot(\textnormal{id},\lambda)^{\ast}\ell\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}$
(8)
be the symmetric ample class corresponding to the polarization. Here
$(\textnormal{id},\lambda)$ is the morphism $(\textnormal{id},\lambda)\colon
A\to A\times{\widehat{A}}$. One can understand the relation between
$\Gamma_{\Theta}\coloneqq\Theta^{g-1}/(g-1)!\in\textnormal{CH}_{1}(A)_{\mathbb{Q}}$
and
${\mathscr{R}}_{A}={\ell}^{2g-1}/(2g-1)!\in\textnormal{CH}_{1}(A\times{\widehat{A}})_{\mathbb{Q}}$
in the following way. Define
$\displaystyle j_{1}$ $\displaystyle\colon A\to A\times{\widehat{A}},\quad
x\mapsto(x,0),\quad{\textnormal{ and }}$ $\displaystyle j_{2}$
$\displaystyle\colon{\widehat{A}}\to A\times{\widehat{A}},\quad
y\mapsto(0,y).$
Let ${\widehat{\Theta}}\in\textnormal{CH}^{1}({\widehat{A}})_{\mathbb{Q}}$ be
the dual of $\Theta$, and define a one-cycle $\tau$ on $A\times{\widehat{A}}$
as follows:
$\displaystyle\tau\coloneqq
j_{1,\ast}(\Gamma_{\Theta})+j_{2,\ast}(\Gamma_{{\widehat{\Theta}}})-(\textnormal{id},\lambda)_{\ast}(\Gamma_{\Theta})\in\textnormal{CH}_{1}(A\times{\widehat{A}})_{\mathbb{Q}}.$
###### Lemma 6.6.
One has
$\tau=(-1)^{g+1}\cdot{\mathscr{R}}_{A}\in\textnormal{CH}_{1}(A\times{\widehat{A}})_{\mathbb{Q}}$.
###### Proof.
Identify $A$ and ${\widehat{A}}$ via $\lambda$. This gives
$\ell=m^{\ast}(\Theta)-\pi_{1}^{\ast}(\Theta)-\pi_{2}^{\ast}(\Theta)$, and the
Fourier transform becomes an endomorphism
${\mathcal{F}}_{A}\colon\textnormal{CH}(A)_{\mathbb{Q}}\to\textnormal{CH}(A)_{\mathbb{Q}}$.
We claim that
$\tau=(-1)^{g}\cdot\left(\Delta_{\ast}{\mathcal{F}}_{A}(\Theta)-j_{1,\ast}{\mathcal{F}}_{A}(\Theta)-j_{2,\ast}{\mathcal{F}}_{A}(\Theta)\right).$
For this, it suffices to show that
${\mathcal{F}}_{A}(\Theta)=(-1)^{g-1}\cdot\Theta^{g-1}/(g-1)!\in\textnormal{CH}_{1}(A)_{\mathbb{Q}}$.
Now ${\mathcal{F}}_{A}(e^{\Theta})=e^{-\Theta}$ by Lemma 6.7 below. Moreover,
since $\Theta$ is symmetric, we have
$\Theta\in\textnormal{CH}^{1,0}(A)_{\mathbb{Q}}$, hence
$\Theta^{i}/i!\in\textnormal{CH}^{i,0}(A)_{\mathbb{Q}}$ for each $i\geq 0$.
Therefore,
${\mathcal{F}}_{A}\left(\Theta^{i}/i!\right)\in\textnormal{CH}^{g-i,0}(A)_{\mathbb{Q}}$
by [DM91, Lemma 2.18]. This implies that, in fact,
${\mathcal{F}}_{A}\left(\Theta^{i}/i!\right)=(-1)^{g-i}\cdot\Theta^{g-i}/(g-i)!\in\textnormal{CH}^{g-i,0}(A)_{\mathbb{Q}}$
for every $i$. In particular, the claim follows.
Next, recall that ${\mathcal{F}}_{A\times
A}(\ell)=(-1)^{g+1}\cdot{\mathscr{R}}_{A}$, see Claim ($\ast$). So at this
point, it suffices to prove the identity
${\mathcal{F}}_{A\times
A}(\ell)=(-1)^{g}\cdot\left(\Delta_{\ast}{\mathcal{F}}_{A}(\Theta)-j_{1,\ast}{\mathcal{F}}_{A}(\Theta)-j_{2,\ast}{\mathcal{F}}_{A}(\Theta)\right).$
To prove this, we use the following functoriality properties of the Fourier
transform on the level of rational Chow groups. Let $X$ and $Y$ be abelian
varieties and let $f\colon X\to Y$ be a homomorphism with dual homomorphism
${\widehat{f}}\colon{\widehat{Y}}\to{\widehat{X}}$. We then have the following
equalities [MP10, (3.7.1)]:
$({\widehat{f}})^{\ast}\circ{\mathcal{F}}_{X}={\mathcal{F}}_{Y}\circ
f_{\ast},\quad{\mathcal{F}}_{X}\circ f^{\ast}=(-1)^{\dim X-\dim
Y}\cdot({\widehat{f}})_{\ast}\circ{\mathcal{F}}_{Y}.$ (9)
Since $\ell=m^{\ast}\Theta-\pi_{1}^{\ast}\Theta-\pi_{2}^{\ast}\Theta$, it
follows from Equation (9) that
$\displaystyle{\mathcal{F}}_{A\times A}(\ell)$
$\displaystyle={\mathcal{F}}_{A\times
A}\left(m^{\ast}\Theta\right)-{\mathcal{F}}_{A\times
A}\left(\pi_{1}^{\ast}\Theta\right)-{\mathcal{F}}_{A\times
A}\left(\pi_{2}^{\ast}\Theta\right)$
$\displaystyle=(-1)^{g}\cdot\left(\Delta_{\ast}{\mathcal{F}}_{A}(\Theta)-j_{1,\ast}{\mathcal{F}}_{A}(\Theta)-j_{2,\ast}{\mathcal{F}}_{A}(\Theta)\right).$
∎
###### Lemma 6.7 (Beauville).
Let $A$ be an abelian variety over $k$, principally polarized by
$\lambda\colon A\xrightarrow{\sim}{\widehat{A}}$, and define
$\Theta=\frac{1}{2}\cdot(\textnormal{id},\lambda)^{\ast}c_{1}({\mathcal{P}}_{A})\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}$.
Identify $A$ and ${\widehat{A}}$ via $\lambda$. With respect to the Fourier
transform
${\mathcal{F}}_{A}\colon\textnormal{CH}(A)_{\mathbb{Q}}\xrightarrow{\sim}\textnormal{CH}(A)_{\mathbb{Q}},\quad\text{
one has }\quad{\mathcal{F}}_{A}(e^{\Theta})=e^{-\Theta}.$
###### Proof.
Our proof follows the proof of [Bea82, Lemme 1], but has to be adapted, since
$\Theta$ does not necessarily come from a symmetric ample line bundle on $A$.
Since one still has
$\ell=m^{\ast}\Theta-\pi_{1}^{\ast}\Theta-\pi_{2}^{\ast}\Theta$, the argument
can be made to work: one has
$\displaystyle{\mathcal{F}}_{A}(e^{\Theta})$
$\displaystyle=\pi_{2,\ast}\left(e^{\ell}\cdot\pi_{1}^{\ast}e^{\Theta}\right)$
$\displaystyle=\pi_{2,\ast}\left(e^{m^{\ast}\Theta-\pi_{2}^{\ast}\Theta}\right)=e^{-\Theta}\pi_{2,\ast}(m^{\ast}e^{\Theta})\in\textnormal{CH}(A)_{\mathbb{Q}}.$
For codimension reasons, one has
$\pi_{2,\ast}(m^{\ast}e^{\Theta})=\pi_{2,\ast}m^{\ast}(\Theta^{g}/g!)=\deg(\Theta^{g}/g!)\in\textnormal{CH}^{0}(A)_{\mathbb{Q}}\cong\mathbb{Q}.$
Pull back $\Theta^{g}/g!$ along $A_{k_{s}}\to A$ to see that
$\deg(\Theta^{g}/g!)=1\in\textnormal{CH}^{0}(A)_{\mathbb{Q}}\cong\textnormal{CH}^{0}(A_{k_{s}})_{\mathbb{Q}},$
since over $k_{s}$ the cycle $\Theta$ becomes the cycle class attached to a
symmetric ample line bundle. ∎
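One way to see the "codimension reasons" in the proof above (a sketch of ours): under the automorphism $(x,y)\mapsto(x+y,y)$ of $A\times A$ the morphism $m$ is identified with $\pi_{1}$ while $\pi_{2}$ is unchanged, so
$\pi_{2,\ast}m^{\ast}(\Theta^{j}/j!)=\pi_{2,\ast}\pi_{1}^{\ast}(\Theta^{j}/j!),$
which vanishes for $j<g$ (the pushforward along the $g$-dimensional fibres of $\pi_{2}$ would land in negative codimension) and for $j>g$ (since then $\Theta^{j}=0$ on $A$); only the term $j=g$ of $e^{\Theta}$ survives.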
#### 4 Divided powers of algebraic cycles
It was asked by Bruno Kahn whether there exists a PD-structure on the Chow
ring of an abelian variety over any field with respect to its usual
(intersection) product. There are counterexamples over non-closed fields: see
[Esn04], where Esnault constructs an abelian surface $X$ and a line bundle
${\mathcal{L}}$ on $X$ such that $c_{1}({\mathcal{L}})\cdot
c_{1}({\mathcal{L}})$ is not divisible by $2$ in $\textnormal{CH}_{0}(X)$.
However, the case of algebraically closed fields remains open [MP10, Section
3.2]. What we do know, is the following:
###### Theorem 6.8 (Moonen–Polishchuk).
Let $A$ be an abelian variety over $k$. The ring
$\left(\textnormal{CH}(A),\star\right)$ admits a canonical PD-structure
$\gamma$ on the ideal $\textnormal{CH}_{>0}(A)\subset\textnormal{CH}(A)$. If
$k=\bar{k}$, then $\gamma$ extends to a PD-structure on the ideal generated by
$\textnormal{CH}_{>0}(A)$ and the zero cycles of degree zero.
In particular, for each element $x\in\textnormal{CH}_{>0}(A)$ and each
$n\in\mathbb{Z}_{\geq 1}$, there is a canonical element
$x^{[n]}\in\textnormal{CH}_{>0}(A)$ such that $n!x^{[n]}=x^{\star n}$, see
[Sta18, Tag 07GM]. For $x\in\textnormal{CH}_{>0}(A)$, we may then define
$\mathrm{E}(x)=\sum_{n\geq 0}x^{[n]}\in\textnormal{CH}(A)$
as the $\star$-exponential of $x$ in terms of its divided powers.
Together with the results of Section 3, Theorem 6.8 enables us to provide
criteria for the existence of a motivic weak integral Fourier transform.
Recall that for an abelian variety $A$ over $k$, principally polarized by
$\lambda\colon A\xrightarrow{\sim}{\widehat{A}}$, we defined
$\Theta\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}$ as the symmetric ample class
attached to $\lambda$, see equation (8).
###### Theorem 6.9.
Let $A_{/k}$ be an abelian variety of dimension $g$. The following are
equivalent:
1. 1.
The one-cycle
${\mathscr{R}}_{A}=c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$
lifts to $\textnormal{CH}_{1}(A\times{\widehat{A}})$.
2. 2.
The abelian variety $A$ admits a motivic weak integral Fourier transform.
3. 3.
The abelian variety $A\times{\widehat{A}}$ admits a motivic weak integral
Fourier transform.
Moreover, if $A$ carries a symmetric ample line bundle that induces a
principal polarization $\lambda\colon A\xrightarrow{\sim}{\widehat{A}}$, then
the above statements are equivalent to the following equivalent statements:
1. 4.
The two-cycle
${\mathscr{S}}_{A}=c_{1}({\mathcal{P}}_{A})^{2g-2}/(2g-2)!\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}$
lifts to $\textnormal{CH}_{2}(A\times{\widehat{A}})$.
2. 5.
The one-cycle
$\Gamma_{\Theta}=\Theta^{g-1}/(g-1)!\in\textnormal{CH}(A)_{\mathbb{Q}}$ lifts
to a one-cycle in $\textnormal{CH}(A)$.
3. 6.
The abelian variety $A$ admits a weak integral Fourier transform.
4. 7.
The Fourier transform ${\mathcal{F}}_{A}$ satisfies
${\mathcal{F}}_{A}\left(\textnormal{CH}(A)/(\textnormal{torsion})\right)\subset\textnormal{CH}({\widehat{A}})/(\textnormal{torsion})$.
5. 8.
There exists a PD-structure on the ideal
$\textnormal{CH}^{>0}(A)/(\textnormal{torsion})\subset\textnormal{CH}(A)/(\textnormal{torsion})$.
###### Proof.
Suppose that 1 holds, and let
$\Gamma\in\textnormal{CH}_{1}(A\times{\widehat{A}})$ be a cycle such that
$\Gamma_{\mathbb{Q}}={\mathscr{R}}_{A}$. Then consider the cycle
$(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot\Gamma)\in\textnormal{CH}(A\times{\widehat{A}})$.
By Lemma 6.5, we have
$\displaystyle(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot\Gamma)_{\mathbb{Q}}=(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot\Gamma_{\mathbb{Q}})$
$\displaystyle=(-1)^{g}\cdot{\mathrm{E}}((-1)^{g}\cdot{\mathscr{R}}_{A})=\textnormal{ch}({\mathcal{P}}_{A})\in\textnormal{CH}(A\times{\widehat{A}})_{\mathbb{Q}}.$
Thus 2 holds. We claim that 3 holds as well. Indeed, consider the line bundle
${\mathcal{P}}_{A\times{\widehat{A}}}$ on the abelian variety
$X=A\times{\widehat{A}}\times{\widehat{A}}\times A$; one has that
${\mathcal{P}}_{A\times{\widehat{A}}}\cong\pi_{13}^{\ast}{\mathcal{P}}_{A}\otimes\pi_{24}^{\ast}{\mathcal{P}}_{{\widehat{A}}}$,
which implies that we have the following equality in
$\textnormal{CH}_{1}(X)_{\mathbb{Q}}$ (in the binomial expansion only the two displayed terms survive, since $c_{1}({\mathcal{P}}_{A})^{k}$ and $c_{1}({\mathcal{P}}_{{\widehat{A}}})^{k}$ vanish for $k>2g$):
$\begin{split}{\mathscr{R}}_{A\times{\widehat{A}}}&=\frac{\left(\pi_{13}^{\ast}c_{1}({\mathcal{P}}_{A})+\pi_{24}^{\ast}c_{1}({\mathcal{P}}_{{\widehat{A}}})\right)^{4g-1}}{(4g-1)!}\\\
&=\frac{\pi_{13}^{\ast}c_{1}({\mathcal{P}}_{A})^{2g-1}\cdot\pi_{24}^{\ast}c_{1}({\mathcal{P}}_{{\widehat{A}}})^{2g}+\pi_{13}^{\ast}c_{1}({\mathcal{P}}_{A})^{2g}\cdot\pi_{24}^{\ast}c_{1}({\mathcal{P}}_{{\widehat{A}}})^{2g-1}}{(2g)!(2g-1)!}\\\
&=\frac{\pi_{13}^{\ast}c_{1}({\mathcal{P}}_{A})^{2g-1}\cdot\pi_{24}^{\ast}((2g)!\cdot[0]_{A\times{\widehat{A}}})+\pi_{13}^{\ast}((2g)!\cdot[0]_{{\widehat{A}}\times
A})\cdot\pi_{24}^{\ast}c_{1}({\mathcal{P}}_{{\widehat{A}}})^{2g-1}}{(2g)!(2g-1)!}\\\
&=\pi_{13}^{\ast}(\frac{c_{1}({\mathcal{P}}_{A})^{2g-1}}{(2g-1)!})\cdot\pi_{24}^{\ast}([0]_{A\times{\widehat{A}}})+\pi_{13}^{\ast}([0]_{{\widehat{A}}\times
A})\cdot\pi_{24}^{\ast}(\frac{c_{1}({\mathcal{P}}_{{\widehat{A}}})^{2g-1}}{(2g-1)!}).\end{split}$
(10)
We conclude that ${\mathscr{R}}_{A\times{\widehat{A}}}$ lifts to
$\textnormal{CH}_{1}(X)$ which, by the implication
$[\ref{motivicone}\implies\ref{motivictwo}]$ (that has already been proved),
implies that $A\times{\widehat{A}}$ admits a motivic weak integral Fourier
transform. On the other hand, the implication
$[\ref{motivicthree}\implies\ref{motivicone}]$ follows from the fact that
$(-1)^{g}\cdot{\mathcal{F}}_{{\widehat{A}}\times
A}(-{\widehat{\ell}})={\mathscr{R}}_{A}$ (see Equation (7)) and the fact that
an abelian variety admits a motivic weak integral Fourier transform if and
only if its dual abelian variety does. Therefore, we have
$[\ref{motivicone}\iff\ref{motivictwo}\iff\ref{motivicthree}]$.
From now on, assume that $A$ is principally polarized by $\lambda\colon
A\xrightarrow{\sim}{\widehat{A}}$, where $\lambda$ is the polarization attached to a
symmetric ample line bundle ${\mathcal{L}}$ on $A$. Moreover, in what follows
we shall identify ${\widehat{A}}$ and $A$ via $\lambda$.
Suppose that 4 holds and let $S_{A}\in\textnormal{CH}_{2}(A\times
A)=\textnormal{CH}^{2g-2}(A\times A)$ be such that
$(S_{A})_{\mathbb{Q}}={\mathscr{S}}_{A}\in\textnormal{CH}_{2}(A\times
A)_{\mathbb{Q}}$. Define
$\textnormal{CH}^{1,0}(A)\coloneqq\textnormal{Pic}^{\textnormal{sym}}(A)$ to
be the group of isomorphism classes of symmetric line bundles on $A$. Then
$S_{A}$ induces a homomorphism
${\mathcal{F}}\colon\textnormal{CH}^{1,0}(A)\to\textnormal{CH}_{1}(A)$ defined
as the composition
$\displaystyle\textnormal{CH}^{1,0}(A)\xrightarrow{\pi_{1}^{\ast}}\textnormal{CH}^{1}(A\times
A)\xrightarrow{\cdot S_{A}}$ $\displaystyle\textnormal{CH}^{2g-1}(A\times A)$
$\displaystyle=\textnormal{CH}_{1}(A\times
A)\xrightarrow{\pi_{2,\ast}}\textnormal{CH}_{1}(A).$
Since
${\mathcal{F}}_{A}\left(\textnormal{CH}^{1,0}(A)_{\mathbb{Q}}\right)\subset\textnormal{CH}_{1}(A)_{\mathbb{Q}}$
(see [DM91, Lemma 2.18]) we see that
$\begin{array}{ccc}\textnormal{CH}^{1,0}(A)&\xrightarrow{\;{\mathcal{F}}\;}&\textnormal{CH}_{1}(A)\\ \downarrow&&\downarrow\\ \textnormal{CH}^{1,0}(A)_{\mathbb{Q}}&\xrightarrow{\;{\mathcal{F}}_{A}\;}&\textnormal{CH}_{1}(A)_{\mathbb{Q}}\end{array}$
(11)
commutes. Since the line bundle ${\mathcal{L}}$ is symmetric, we have
$\displaystyle\begin{split}\Theta&=\frac{1}{2}\cdot(\textnormal{id},\lambda)^{\ast}c_{1}({\mathcal{P}}_{A})=\frac{1}{2}\cdot
c_{1}\left((\textnormal{id},\lambda)^{\ast}{\mathcal{P}}_{A}\right)\\\
&=\frac{1}{2}\cdot
c_{1}({\mathcal{L}}\otimes{\mathcal{L}})=c_{1}({\mathcal{L}})\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}.\end{split}$
(12)
The class $c_{1}({\mathcal{L}})\in\textnormal{CH}^{1,0}(A)$ of the line bundle
${\mathcal{L}}$ thus lies above
$\Theta\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}$. Therefore,
${\mathcal{F}}(c_{1}({\mathcal{L}}))\in\textnormal{CH}_{1}(A)$ lies above
$\Gamma_{\Theta}=(-1)^{g-1}{\mathcal{F}}_{A}(\Theta)$ by the commutativity of
(11), and 5 holds.
Suppose that 5 holds. Then 1 follows readily from Lemma 6.6. Moreover, if 2
holds, then $\textnormal{ch}({\mathcal{P}}_{A})\in\textnormal{CH}(A\times
A)_{\mathbb{Q}}$ lifts to $\textnormal{CH}(A\times A)$, hence in particular 4
holds. Since we have already proved that 1 implies 2, we conclude that
$[\ref{motiviczero}\implies\ref{motivicfour}\implies\ref{motivicone}\implies\ref{motivictwo}\implies\ref{motiviczero}]$.
The implications
$[\ref{motivictwo}\implies\ref{motivicfive}\implies\ref{motivicsix}]$ are
trivial. Assume that 7 holds. By Equation (12),
$\Theta\in\textnormal{CH}^{1}(A)_{\mathbb{Q}}$ lifts to
$\textnormal{CH}^{1}(A)$, hence
${\mathcal{F}}_{A}(\Theta)=(-1)^{g-1}\cdot\Gamma_{\Theta}$
lifts to $\textnormal{CH}_{1}(A)$, i.e. 5 holds.
Assume again that 7 holds. The fact that
${\mathcal{F}}_{A}\left(\textnormal{CH}(A)/(\textnormal{torsion})\right)\subset\textnormal{CH}(A)/(\textnormal{torsion})$
implies that
$\displaystyle\textnormal{CH}(A)/(\textnormal{torsion})$
$\displaystyle={\mathcal{F}}_{A}\left({\mathcal{F}}_{A}\left(\textnormal{CH}(A)/(\textnormal{torsion})\right)\right)$
$\displaystyle\subset{\mathcal{F}}_{A}\left(\textnormal{CH}(A)/(\textnormal{torsion})\right)\subset\textnormal{CH}(A)/(\textnormal{torsion}).$
Thus the restriction of the Fourier transform ${\mathcal{F}}_{A}$ to
$\textnormal{CH}(A)/(\textnormal{torsion})$ defines an isomorphism
${\mathcal{F}}_{A}\colon\textnormal{CH}(A)/(\textnormal{torsion})\xrightarrow{\sim}\textnormal{CH}(A)/(\textnormal{torsion}).$
If $R$ is a ring and $\gamma$ a PD-structure on an ideal $I\subset R$, then
$\gamma$ extends to a PD-structure on $I/(\textnormal{torsion})\subset
R/(\textnormal{torsion})$. Thus, the ideal
$\textnormal{CH}_{>0}(A)/(\textnormal{torsion})$ of
$\textnormal{CH}(A)/(\textnormal{torsion})$ admits a PD-structure for the
Pontryagin product $\star$ by Theorem 6.8. Since ${\mathcal{F}}_{A}$ exchanges
Pontryagin and intersection products up to sign [Bea82, Proposition 3(ii)], it
follows that 8 holds. Since 8 trivially implies 5, we are done. ∎
###### Question 6.10 (Moonen–Polishchuk [MP10], Totaro [Tot21]).
Let $A$ be any principally polarized abelian variety over $k=\bar{k}$. Are the
equivalent conditions in Theorem 6.9 satisfied for $A$?
###### Remark 6.11.
For the Jacobian $A=J(C)$ of a hyperelliptic curve $C$, the answer to Question
6.10 is "yes" [MP10].
Similarly, there is a relation between integral Fourier transforms up to
homology and the algebraicity of minimal cohomology classes induced by
Poincaré line bundles and theta divisors.
###### Proposition 6.12.
Let $A$ be an abelian variety of dimension $g$ over $k$.
The following are equivalent:
1. 1.
The class
$\rho_{A}\coloneqq
c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}^{4g-2}_{\textnormal{\'{e}t}}((A\times{\widehat{A}})_{k_{s}},\mathbb{Z}_{\ell}(2g-1))$
is the class of a cycle in $\textnormal{CH}_{1}(A\times{\widehat{A}})$.
2. 2.
The abelian variety $A$ admits a motivic integral Fourier transform up to
homology.
3. 3.
The abelian variety $A\times{\widehat{A}}$ admits a motivic integral Fourier
transform up to homology.
Moreover, if $A$ carries an ample line bundle that induces a principal
polarization $\lambda\colon A\xrightarrow{\sim}{\widehat{A}}$, then the above
statements are equivalent to the following equivalent statements:
1. 4.
The class
$\sigma_{A}\coloneqq
c_{1}({\mathcal{P}}_{A})^{2g-2}/(2g-2)!\in{\mathrm{H}}_{\textnormal{\'{e}t}}^{4g-4}((A\times{\widehat{A}})_{k_{s}},\mathbb{Z}_{\ell}(2g-2))$
is the class of a cycle in $\textnormal{CH}_{2}(A\times{\widehat{A}})$.
2. 5.
The class
$\gamma_{\theta}=\theta^{g-1}/(g-1)!\in{\mathrm{H}}^{2g-2}_{\textnormal{\'{e}t}}(A_{k_{s}},\mathbb{Z}_{\ell}(g-1))$
lifts to a cycle in $\textnormal{CH}_{1}(A)$.
3. 6.
The abelian variety $A$ admits an integral Fourier transform up to homology.
###### Proof.
The proof of Theorem 6.9 can easily be adapted to this situation. ∎
###### Proposition 6.13.
1. 1.
If $k=\mathbb{C}$, then each of the statements $\ref{motivicone-
new}-\ref{motivicfive-new}$ in Proposition 6.12 is equivalent to the same
statement with étale cohomology replaced by Betti cohomology.
2. 2.
Proposition 6.12 remains valid if one replaces integral Chow groups in
statements 1, 4 and 5 by their tensor product with $\mathbb{Z}_{\ell}$ and
‘integral Fourier transform up to homology’ by ‘$\ell$-adic integral Fourier
transform up to homology’ in statements 2, 3 and 6.
###### Proof.
1. 1.
In this case $\mathbb{Z}_{\ell}(i)=\mathbb{Z}_{\ell}$ and the Artin comparison
isomorphism
${\mathrm{H}}^{2i}_{\textnormal{\'{e}t}}(A,\mathbb{Z}_{\ell})\xrightarrow{\sim}{\mathrm{H}}^{2i}(A(\mathbb{C}),\mathbb{Z}_{\ell})$
[AGV71, III, Exposé XI] is compatible with the cycle class map. Since the map
${\mathrm{H}}^{2i}(A(\mathbb{C}),\mathbb{Z})\to{\mathrm{H}}_{\textnormal{\'{e}t}}^{2i}(A,\mathbb{Z}_{\ell})$
is injective, a class $\beta\in{\mathrm{H}}^{2i}(A(\mathbb{C}),\mathbb{Z})$ is
in the image of
$cl\colon\textnormal{CH}^{i}(A)\to{\mathrm{H}}^{2i}(A(\mathbb{C}),\mathbb{Z})$
if and only if its image
$\beta_{\ell}\in{\mathrm{H}}^{2i}_{\textnormal{\'{e}t}}(A,\mathbb{Z}_{\ell})$
is in the image of
$cl\colon\textnormal{CH}^{i}(A)\to{\mathrm{H}}^{2i}_{\textnormal{\'{e}t}}(A,\mathbb{Z}_{\ell})$.
2. 2.
Indeed, for an abelian variety $A$ over $k$, the PD-structure on
$\textnormal{CH}_{>0}(A)\subset(\textnormal{CH}(A),\star)$ induces a PD-
structure on
$\textnormal{CH}_{>0}(A)\otimes\mathbb{Z}_{\ell}\subset(\textnormal{CH}(A)_{\mathbb{Z}_{\ell}},\star)$
by [Sta18, Tag 07H1], because the ring map
$(\textnormal{CH}(A),\star)\to(\textnormal{CH}(A)_{\mathbb{Z}_{\ell}},\star)$
is flat. The latter follows from the flatness of
$\mathbb{Z}\to\mathbb{Z}_{\ell}$.
∎
### Chapter 7 One-cycles on abelian varieties
This chapter is based on joint work with Thorsten Beckmann.
#### 1 Introduction
In this chapter we provide applications of the results developed in the
previous Chapter 6. These applications concern the cycle class map for curves
on an abelian variety $A$. More precisely, we will consider the integral Hodge
conjecture for one-cycles when $A$ is defined over $\mathbb{C}$, and the
integral Tate conjecture for one-cycles when $A$ is defined over the separable
closure of a finitely generated field.
To state the most important results of Chapter 7, let us recall how the
complex cycle class map was defined (see also Section 3). Whenever
$\iota\colon C\hookrightarrow A$ is a smooth curve, the image of the
fundamental class under the pushforward map
$\iota_{\ast}\colon{\mathrm{H}}_{2}(C,\mathbb{Z})\to{\mathrm{H}}_{2}(A,\mathbb{Z})\cong{\mathrm{H}}^{2g-2}(A,\mathbb{Z})$
defines a cohomology class $[C]\in{\mathrm{H}}^{2g-2}(A,\mathbb{Z})$. This
construction extends to one-cycles and factors modulo rational equivalence.
The cycle class map for curves on $A$ is the canonical homomorphism defined in
this way:
$cl\colon\textnormal{CH}_{1}(A)\to\textnormal{Hdg}^{2g-2}(A,\mathbb{Z}).$
It extends to a natural graded ring homomorphism
$cl\colon\textnormal{CH}(A)\to{\mathrm{H}}^{\bullet}(A,\mathbb{Z})$.
The liftability of the Fourier transform that we studied in Chapter 6 turns
out to have important consequences for the image of the cycle class map. An
element $\alpha\in{\mathrm{H}}^{\bullet}(A,\mathbb{Z})$ is called algebraic if
it is in the image of $cl$, and we say that $A$ satisfies the integral Hodge
conjecture for $i$-cycles if all Hodge classes in
${\mathrm{H}}^{2g-2i}(A,\mathbb{Z})$ are algebraic. Although the integral
Hodge conjecture fails in general [AH62, tre92, Tot97], it is an open question
for abelian varieties. The main result of Chapter 7 is as follows.
###### Theorem 7.1 (with T. Beckmann).
Let $A$ be a complex abelian variety of dimension $g$ with Poincaré bundle
${\mathcal{P}}_{A}$. The following three statements are equivalent:
1. 1.
The cohomology class
$c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z})$
is algebraic.
2. 2.
The Chern character
$\textnormal{ch}({\mathcal{P}}_{A})=\exp(c_{1}({\mathcal{P}}_{A}))\in{\mathrm{H}}^{\bullet}(A\times{\widehat{A}},\mathbb{Z})$
is algebraic.
3. 3.
The integral Hodge conjecture for one-cycles holds for $A\times{\widehat{A}}$.
Any of these statements implies that
1. 4.
The integral Hodge conjecture for one-cycles holds for $A$ and
${\widehat{A}}$.
Suppose that $A$ is principally polarized by
$\theta\in\textnormal{Hdg}^{2}(A,\mathbb{Z})$ and consider the following
statements:
1. 5.
The minimal cohomology class
$\gamma_{\theta}\coloneqq\theta^{g-1}/(g-1)!\in{\mathrm{H}}^{2g-2}(A,\mathbb{Z})$
is algebraic.
2. 6.
The cohomology class
$c_{1}({\mathcal{P}}_{A})^{2g-2}/(2g-2)!\in{\mathrm{H}}^{4g-4}(A\times{\widehat{A}},\mathbb{Z})$
is algebraic.
Then statements
$\ref{introitem:minimalpoincare}-\ref{introitem:minimalpoincare2}$ are
equivalent. If they hold, then the class
$\theta^{i}/i!\in{\mathrm{H}}^{2i}(A,\mathbb{Z})$ is algebraic for every
positive integer $i$.
Remark that Condition 5 is stable under products, so a product of principally
polarized abelian varieties satisfies the integral Hodge conjecture for one-
cycles if and only if each of the factors does. More importantly, if $J(C)$ is
the Jacobian of a smooth projective curve $C$ of genus $g$, then every
integral Hodge class of degree $2g-2$ on $J(C)$ is a $\mathbb{Z}$-linear
combination of curve classes:
###### Theorem 7.2.
Let $C_{1},\dotsc,C_{n}$ be smooth projective curves over $\mathbb{C}$. Then
the integral Hodge conjecture for one-cycles holds for the product of
Jacobians $J(C_{1})\times\cdots\times J(C_{n})$.
See Remark 7.9.1 for another approach towards Theorem 7.2 in the case $n=1$. A
second consequence of Theorem 7.1 is that the integral Hodge conjecture for
one-cycles on principally polarized complex abelian varieties is stable under
specialization, see Corollary 7.10. An application of somewhat different
nature is the following density result, proven in Section 2:
###### Theorem 7.3.
Let $\delta=(\delta_{1},\dotsc,\delta_{g})$ be positive integers such that
$\delta_{i}|\delta_{i+1}$ and let ${\mathsf{A}}_{g,\delta}(\mathbb{C})$ be the
coarse moduli space of polarized abelian varieties over $\mathbb{C}$ with
polarization type $\delta$. There is a countable union
$X\subset{\mathsf{A}}_{g,\delta}(\mathbb{C})$ of closed algebraic subvarieties
of dimension at least $g$, that satisfies the following property: $X$ is dense
in the analytic topology, and the integral Hodge conjecture for one-cycles
holds for those polarized abelian varieties whose isomorphism class defines a
point in $X$.
###### Remark 7.4.
The lower bound that we obtain on the dimension of the components of $X$
actually depends on $\delta$ and is often greater than $g$. For instance, when
$\delta=1$ and $g\geq 2$, there is a set $X$ as in the theorem, whose elements
are prime-power isogenous to products of Jacobians of curves. Therefore, the
components of $X$ have dimension $3g-3$ in this case, cf. Remark 7.15.
One could compare Theorem 7.1 with the following statement, proven by
Grabowski [Gra04]: if $g$ is a positive integer such that the minimal class
$\gamma_{\theta}=\theta^{g-1}/(g-1)!$ of every principally polarized abelian
variety of dimension $g$ is algebraic, then every abelian variety of dimension
$g$ satisfies the integral Hodge conjecture for one-cycles. In this way, he
proved the integral Hodge conjecture for abelian threefolds, a result which
has been extended to smooth projective threefolds $X$ with $K_{X}=0$ by Voisin
and Totaro [Voi06, Tot21]. For abelian varieties of dimension greater than
three, not many unconditional statements seem to have been known.
In Section 3, we consider an abelian variety $A_{/\mathbb{C}}$ of dimension
$g$ and ask: if $n\in\mathbb{Z}_{\geq 1}$ is such that $n\cdot
c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z})_{\textnormal{alg}}$,
can we bound the order of ${\mathrm{Z}}^{2g-2}(A)$ in terms of $g$ and $n$?
For a smooth complex projective $d$-dimensional variety $X$,
${\mathrm{Z}}^{2d-2}(X)$ is called the degree $2d-2$ Voisin group of $X$
[Per22], is a stably birational invariant [Voi07, Lemma 15], and is related to
the unramified cohomology groups by Colliot-Thélène–Voisin and Schreieder
[CTV12, Sch20]. We prove that if $n\cdot
c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!$ is algebraic, then
$\gcd(n^{2},(2g-2)!)\cdot{\mathrm{Z}}^{2g-2}(A)=(0)$. In particular,
$(2g-2)!\cdot{\mathrm{Z}}^{2g-2}(A)=(0)$ for any $g$-dimensional complex
abelian variety $A$. Moreover, if $A$ is principally polarized by
$\theta\in\textnormal{NS}(A)$ and if
$n\cdot\gamma_{\theta}\in{\mathrm{H}}^{2g-2}(A,\mathbb{Z})$ is algebraic, then
$n\cdot c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!$ is algebraic. Since it is
well known that for Prym varieties, the Hodge class $2\cdot\gamma_{\theta}$ is
algebraic, these observations lead to the following result (see also Theorem
7.19).
###### Theorem 7.5.
Let $A$ be a $g$-dimensional Prym variety over $\mathbb{C}$. Then
$4\cdot{\mathrm{Z}}^{2g-2}(A)=(0)$.
Recall that in our study of the liftability of the Fourier transform, carried
out in the previous Chapter 6, we considered abelian varieties defined over
arbitrary fields. This generality allows us now to obtain the analogue of
Theorem 7.1 over the separable closure $k$ of a finitely generated field.
A smooth projective variety $X$ of dimension $d$ over $k$ satisfies the
integral Tate conjecture for one-cycles over $k$ if, for every prime number
$\ell$ different from $\textnormal{char}(k)$ and for some finitely generated
field of definition $k_{0}\subset k$ of $X$, the cycle class map
$cl\colon\textnormal{CH}_{1}(X)_{\mathbb{Z}_{\ell}}=\textnormal{CH}_{1}(X)\otimes_{\mathbb{Z}}{\mathbb{Z}_{\ell}}\to\bigcup_{U}{\mathrm{H}}_{\textnormal{\'{e}t}}^{2d-2}(X,\mathbb{Z}_{\ell}(d-1))^{U}$
(1)
is surjective, where $U$ ranges over the open subgroups of
$\textnormal{Gal}(k/k_{0})$.
###### Theorem 7.6.
Let $A$ be an abelian variety of dimension $g$ over the separable closure $k$
of a finitely generated field.
* •
The abelian variety $A$ satisfies the integral Tate conjecture for one-cycles
over $k$ if the cohomology class
$c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}_{\textnormal{\'{e}t}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z}_{\ell}(2g-1))$
is the class of a one-cycle with $\mathbb{Z}_{\ell}$-coefficients for every
prime number $\ell<(2g-1)!$ unequal to $\textnormal{char}(k)$.
* •
Suppose that $A$ is principally polarized and let
$\theta_{\ell}\in{\mathrm{H}}^{2}_{\textnormal{\'{e}t}}(A,\mathbb{Z}_{\ell}(1))$
be the class of the polarization. The map (1) is surjective if
$\gamma_{\theta_{\ell}}\coloneqq\theta_{\ell}^{g-1}/(g-1)!\in{\mathrm{H}}_{\textnormal{\'{e}t}}^{2g-2}(A,\mathbb{Z}_{\ell}(g-1))$
is in its image. In particular, if $\ell>(g-1)!$ then $(g-1)!$ is invertible in $\mathbb{Z}_{\ell}$ and this always holds. Thus
$A$ satisfies the integral Tate conjecture for one-cycles if and only if
$\gamma_{\theta_{\ell}}$ is in the image of (1) for every prime number
$\ell<(g-1)!$ unequal to $\textnormal{char}(k)$.
Theorem 7.6 implies in particular that products of Jacobians of smooth
projective curves over $k$ satisfy the integral Tate conjecture for one-cycles
over $k$. Moreover, for an abelian variety $A_{K}$ over a number field
$K\subset\mathbb{C}$, the integral Hodge conjecture for one-cycles on
$A_{\mathbb{C}}$ is equivalent to the integral Tate conjecture for one-cycles
on $A_{\bar{K}}$ (Corollary 7.21), which in turn implies the integral Tate
conjecture for one-cycles on the geometric special fiber
$A_{\overline{k}({\mathfrak{p}})}$ of the Néron model of $A_{K}$ over
$\mathcal{O}_{K}$ for any prime ${\mathfrak{p}}\subset\mathcal{O}_{K}$ at
which $A_{K}$ has good reduction (Corollary 7.22). Finally, we obtain the
analogue of Theorem 7.3 in positive characteristic as well. The definition for
a smooth projective variety over the algebraic closure $k$ of a finitely
generated field to satisfy the integral Tate conjecture for one-cycles over
$k$ is analogous to the definition above (see e.g. [CP15]).
###### Theorem 7.7.
Let $k$ be the algebraic closure of a finitely generated field of
characteristic $p>0$. Let ${\mathsf{A}}_{g}$ be the coarse moduli space over
$k$ of principally polarized abelian varieties of dimension $g$ over $k$. Let
$X\subset{\mathsf{A}}_{g}(k)$ be the subset of moduli points attached to
principally polarized abelian varieties over $k$ that satisfy the integral
Tate conjecture for one-cycles over $k$. Then $X$ is Zariski dense in
${\mathsf{A}}_{g}$.
#### 2 The integral Hodge conjecture
In this section we use the theory developed in Chapter 6 to prove Theorem 7.1.
We also prove some applications of Theorem 7.1: the integral Hodge conjecture
for one-cycles on products of Jacobians (Theorem 7.2), the fact that the
integral Hodge conjecture for one-cycles on principally polarized complex
abelian varieties is stable under specialization (Corollary 7.10) and density
of polarized abelian varieties satisfying the integral Hodge conjecture for
one-cycles (Theorem 7.3).
##### 1 Proof of the main theorem
Let us prove Theorem 7.1.
###### Proof of Theorem 7.1.
Suppose that 1 holds. Then 2 holds by Propositions 6.12 and 6.13.1. Suppose
that 2 holds. Then 4 follows from Lemma 6.4. So we have
$[\ref{introitem:minimalpoincare}\iff\ref{introitem:integralpoincare}\implies\ref{introitem:IHC}]$.
If 1 holds, then
$\rho_{A}=c_{1}({\mathcal{P}}_{A})^{2g-1}/(2g-1)!\in{\mathrm{H}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z})$
is algebraic, which implies that
$\rho_{{\widehat{A}}}\in{\mathrm{H}}^{4g-2}({\widehat{A}}\times A,\mathbb{Z})$
is algebraic. Therefore,
$\rho_{A\times{\widehat{A}}}\in{\mathrm{H}}^{8g-2}(A\times{\widehat{A}}\times{\widehat{A}}\times
A,\mathbb{Z})$ is algebraic by Equation (10). We then apply the implication
$[\ref{introitem:minimalpoincare}\implies\ref{introitem:IHC}]$ to the abelian
variety $A\times{\widehat{A}}$, which shows that 3 holds. Since
$[\ref{introitem:integralhodgeforproduct}\implies\ref{introitem:minimalpoincare}]$
is trivial, we have proven
$[\ref{introitem:minimalpoincare}\iff\ref{introitem:integralpoincare}\iff\ref{introitem:integralhodgeforproduct}\implies\ref{introitem:IHC}]$.
Next, assume that $A$ is principally polarized by
$\theta\in\textnormal{NS}(A)\subset{\mathrm{H}}^{2}(A,\mathbb{Z})$. The
directions $[\ref{introitem:IHC}\implies\ref{introitem:minimalclass}]$ and
$[\ref{introitem:integralpoincare}\implies\ref{introitem:minimalpoincare2}]$
are trivial and
$[\ref{introitem:minimalclass}\implies\ref{introitem:minimalpoincare}]$
follows from Propositions 6.12 and 6.13.1. We claim that 6 implies 4. Define
$\sigma_{A}=c_{1}({\mathcal{P}}_{A})^{2g-2}/(2g-2)!\in{\mathrm{H}}^{4g-4}(A\times{\widehat{A}},\mathbb{Z})$
and let $S\in\textnormal{CH}_{2}(A\times{\widehat{A}})$ be such that
$cl(S)=\sigma_{A}$. The squares in the following diagram commute:
$\begin{array}{ccccccc}\textnormal{CH}^{1}(A)&\xrightarrow{\;\pi_{1}^{\ast}\;}&\textnormal{CH}^{1}(A\times{\widehat{A}})&\xrightarrow{\;\cdot S\;}&\textnormal{CH}^{2g-1}(A\times{\widehat{A}})&\xrightarrow{\;\pi_{2,\ast}\;}&\textnormal{CH}_{1}({\widehat{A}})\\ \downarrow{\scriptstyle cl}&&\downarrow{\scriptstyle cl}&&\downarrow{\scriptstyle cl}&&\downarrow{\scriptstyle cl}\\ {\mathrm{H}}^{2}(A,\mathbb{Z})&\xrightarrow{\;\pi_{1}^{\ast}\;}&{\mathrm{H}}^{2}(A\times{\widehat{A}},\mathbb{Z})&\xrightarrow{\;\cdot\sigma_{A}\;}&{\mathrm{H}}^{4g-2}(A\times{\widehat{A}},\mathbb{Z})&\xrightarrow{\;\pi_{2,\ast}\;}&{\mathrm{H}}^{2g-2}({\widehat{A}},\mathbb{Z}).\end{array}$
(2)
Since
${\mathscr{F}}_{A}=\pi_{2,\ast}\left(\textnormal{ch}({\mathcal{P}}_{A})\cdot\pi_{1}^{\ast}(-)\right)$
restricts to an isomorphism
${\mathscr{F}}_{A}\colon{\mathrm{H}}^{2}(A,\mathbb{Z})\xrightarrow{\sim}{\mathrm{H}}^{2g-2}({\widehat{A}},\mathbb{Z})$
by [Bea82, Proposition 1], the composition
$\pi_{2,\ast}\circ(-\cdot\sigma_{A})\circ\pi_{1}^{\ast}$ on the bottom row of
(2) is an isomorphism. By the Lefschetz $(1,1)$ theorem, the map
$cl\colon\textnormal{CH}_{1}({\widehat{A}})\to\textnormal{Hdg}^{2g-2}({\widehat{A}},\mathbb{Z})$
is therefore surjective.
It remains to prove the algebraicity of the classes
$\theta^{i}/i!\in{\mathrm{H}}^{2i}(A,\mathbb{Z})$. This follows from Theorem
6.8 and the following equality, see [Bea82, Corollaire 2]:
$\frac{\theta^{i}}{i!}=\frac{\gamma_{\theta}^{\star
j}}{j!},\quad\gamma_{\theta}=\frac{\theta^{g-1}}{(g-1)!}\in{\mathrm{H}}^{2g-2}(A,\mathbb{Z}),\quad
i+j=g.$
Indeed, $\gamma_{\theta}$ admits an integral lift by 5, and Theorem 6.8 provides integral divided powers of this lift whose cycle classes are $\gamma_{\theta}^{\star j}/j!=\theta^{i}/i!$. Therefore, the proof is finished. ∎
###### Corollary 7.8.
Let $A$ and $B$ be complex abelian varieties of respective dimensions
$g_{A},g_{B}$.
* •
The Hodge classes
$\rho_{A}\in{\mathrm{H}}^{4g_{A}-2}(A\times{\widehat{A}},\mathbb{Z})$ and
$\rho_{B}\in{\mathrm{H}}^{4g_{B}-2}(B\times{\widehat{B}},\mathbb{Z})$ are
algebraic if and only if $A\times{\widehat{A}}$, $B\times{\widehat{B}}$,
$A\times B$ and ${\widehat{A}}\times{\widehat{B}}$ satisfy the integral Hodge
conjecture for one-cycles.
* •
If $A$ and $B$ are principally polarized, then the integral Hodge conjecture
for one-cycles holds for $A\times B$ if and only if it holds for $A$ and $B$.
###### Proof.
The first statement follows from Theorem 7.1 and Equation (10). The second
statement follows from the fact that the minimal cohomology class of the
product $A\times B$ is algebraic if and only if the minimal cohomology classes
of the factors $A$ and $B$ are both algebraic. ∎
###### Proof of Theorem 7.2.
By Corollary 7.8 we may assume $n=1$, so let $C$ be a smooth projective curve.
Let $p\in C$ and consider the morphism $\iota\colon C\to J(C)$ defined by
sending a point $q$ to the isomorphism class of the degree zero line bundle
$\mathcal{O}(p-q)$. Then
$cl(\iota(C))=\gamma_{\theta}\in{\mathrm{H}}^{2g-2}(J(C),\mathbb{Z})$ by
Poincaré’s formula [ACGH85], so $\gamma_{\theta}$ is algebraic and the result
follows from Theorem 7.1. ∎
###### Remarks 7.9.
1. 1.
Let us give another proof of Theorem 7.2 in the case $n=1$, i.e. let $C$ be a
smooth projective curve of genus $g$ and let us prove the integral Hodge
conjecture for one-cycles on $J(C)$ in a way that does not use Fourier
transforms. It is classical that any Abel-Jacobi map $C^{(g)}\to J(C)$ is
birational. On the other hand, the integral Hodge conjecture for one-cycles is
a birational invariant, see [Voi07, Lemma 15]. Therefore, to prove it for
$J(C)$ it suffices to prove it for $C^{(g)}$. One then uses [Bn02, Corollary
5] which says that for each $n\in\mathbb{Z}_{\geq 1}$, there is a natural
polarization $\eta$ on the $n$-fold symmetric product $C^{(n)}$ such that for
any $i\in\mathbb{Z}_{\geq 0}$, the map
$\eta^{n-i}\cup(-)\colon{\mathrm{H}}^{i}(C^{(n)},\mathbb{Z})\to{\mathrm{H}}^{2n-i}(C^{(n)},\mathbb{Z})$
is an isomorphism. In particular, the variety $C^{(n)}$ satisfies the integral
Hodge conjecture for one-cycles for any positive integer $n$.
2. 2.
Along these lines, observe that the integral Hodge conjecture for one-cycles
holds not only for symmetric products of smooth projective complex curves but
also for any product $C_{1}\times\cdots\times C_{n}$ of smooth projective
curves $C_{i}$ over $\mathbb{C}$. Indeed, this follows readily from the
Künneth formula.
3. 3.
Let $C$ be a smooth projective complex curve of genus $g$. Our proof of
Theorem 7.1 provides an explicit description of
$\textnormal{Hdg}^{2g-2}(J(C),\mathbb{Z})$ depending on
$\textnormal{Hdg}^{2}(J(C),\mathbb{Z})$. More generally, let $(A,\theta)$ be a
principally polarized abelian variety of dimension $g$, identify $A$ and
${\widehat{A}}$ via the polarization, and let
$\ell=c_{1}({\mathcal{P}}_{A})\in{\mathrm{H}}^{2}(A\times{\widehat{A}},\mathbb{Z})$.
Then $\ell=m^{\ast}(\theta)-\pi_{1}^{\ast}(\theta)-\pi_{2}^{\ast}(\theta)$,
which implies that
$\displaystyle\sigma_{A}=\frac{\ell^{2g-2}}{(2g-2)!}$
$\displaystyle=\sum_{\begin{subarray}{c}i,j,k\geq 0\\\
i+j+k=2g-2\end{subarray}}(-1)^{j+k}\cdot
m^{\ast}\left(\frac{\theta^{i}}{i!}\right)\cdot\pi_{1}^{\ast}\left(\frac{\theta^{j}}{j!}\right)\cdot\pi_{2}^{\ast}\left(\frac{\theta^{k}}{k!}\right).$
Any $\beta\in\textnormal{Hdg}^{2g-2}(A,\mathbb{Z})$ is of the form
$\pi_{2,\ast}\left(\sigma_{A}\cdot\pi_{1}^{\ast}[D]\right)$, where $[D]=cl(D)$
for a divisor $D$ on $A$, as follows from (2). Therefore, any
$\beta\in\textnormal{Hdg}^{2g-2}(A,\mathbb{Z})$ may be written as
$\beta=\sum_{\begin{subarray}{c}i,j,k\geq 0\\\
i+j+k=2g-2\end{subarray}}(-1)^{j+k}\cdot\pi_{2,\ast}\left(m^{\ast}\left(\frac{\theta^{i}}{i!}\right)\cdot\pi_{1}^{\ast}\left(\frac{\theta^{j}}{j!}\right)\cdot\pi_{1}^{\ast}[D]\right)\cdot\frac{\theta^{k}}{k!}.$
(3)
Returning to the case of a Jacobian $J(C)$ of a smooth projective curve $C$ of
genus $g$, the classes $\theta^{i}/i!$ appearing in (3) are effective
algebraic cycle classes. Indeed, for $p\in C$ and $d\in\mathbb{Z}_{\geq 1}$,
the image of the morphism $C^{d}\to J(C)$,
$(x_{i})\mapsto{\mathcal{O}}(\sum_{i}x_{i}-d\cdot p)$ defines a subvariety
$W_{d}(C)\subset J(C)$ and by Poincaré’s formula [ACGH85, §I.5] one has
$cl(W_{d}(C))=\theta^{g-d}/(g-d)!\in{\mathrm{H}}^{2g-2d}(J(C),\mathbb{Z})$.
Apart from Theorem 7.2, we obtain the following corollary of Theorem 7.1:
###### Corollary 7.10.
Let $A\to S$ be a principally polarized abelian scheme over a proper, smooth
and connected variety $S$ over $\mathbb{C}$. Let $X\subset S(\mathbb{C})$ be
the set of $x\in S(\mathbb{C})$ such that the abelian variety $A_{x}$
satisfies the integral Hodge conjecture for one-cycles. Then
$X=\cup_{i}Z_{i}(\mathbb{C})$ for some countable union of closed algebraic
subvarieties $Z_{i}\subset S$. In particular, if the integral Hodge conjecture
for one-cycles holds on $U(\mathbb{C})$ for a non-empty open subscheme $U$ of
$S$, then it holds on all of $S(\mathbb{C})$.
###### Proof.
Write ${\mathcal{A}}=A(\mathbb{C})$ and $B=S(\mathbb{C})$ and let
$\pi\colon{\mathcal{A}}\to B$ be the induced family of complex abelian
varieties. Let $g\in\mathbb{Z}_{\geq 0}$ be the relative dimension of $\pi$
and define, for $t\in S(\mathbb{C})$,
$\theta_{t}\in\textnormal{NS}({\mathcal{A}}_{t})\subset{\mathrm{H}}^{2}({\mathcal{A}}_{t},\mathbb{Z})$
to be the polarization of ${\mathcal{A}}_{t}$. There is a global section
$\gamma_{\theta}\in{\mathrm{R}}^{2g-2}\pi_{\ast}\mathbb{Z}$ such that for each
$t\in B$,
$\gamma_{\theta_{t}}=\theta_{t}^{g-1}/(g-1)!\in{\mathrm{H}}^{2g-2}({\mathcal{A}}_{t},\mathbb{Z}).$
Note that $\gamma_{\theta}$ is Hodge everywhere on $B$. For those $t\in B$ for
which $\gamma_{\theta_{t}}$ is algebraic, write $\gamma_{\theta_{t}}$ as the
difference of effective algebraic cycle classes on ${\mathcal{A}}_{t}$. This
gives a countable disjoint union
$\phi\colon\sqcup_{ij}H_{i}\times_{S}H_{j}\to S$
of products of relative Hilbert schemes $H_{i}/S$. By Lemma 7.11 below,
$\gamma_{\theta_{t}}$ is algebraic precisely for closed points $t$ in the
image $Y\subset S$ of $\phi$. By Theorem 7.1, $X=Y$. ∎
###### Lemma 7.11.
Let $S$ be an integral variety over $\mathbb{C}$, let ${\mathcal{A}}\to S$ be
a principally polarized abelian scheme of relative dimension $g$ over $S$ and
let ${\mathcal{C}}_{i}\subset{\mathcal{A}}$ for $i=1,\dotsc,k$ be relative
# Phenomenology of a Rydberg impurity in an ideal Bose Einstein condensate
Aileen A. T. Durst and Matthew T. Eiles
Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Str. 38, 01187
Dresden, Germany
(September 3, 2024)
###### Abstract
We investigate the absorption spectrum of a Rydberg impurity immersed in and
interacting with an ideal Bose-Einstein condensate. Here, the impurity-bath
interaction can greatly exceed the mean interparticle distance; this
discrepancy in length scales challenges the assumptions underlying the
universal aspects of impurity atoms in dilute bosonic environments. Our
analysis finds three distinct parameter regimes, each characterized by a
unique spectral response. In the low-density regime, we find that the Rydberg
impurity is dressed by the surrounding bath similarly to the known Bose
polaron. Transitioning to intermediate densities, the impurity response, given
by sharp quasiparticle peaks, fragments into an intricate pattern bearing the
hallmarks of a diverse molecular structure. Finally, at high density, a
universal Gaussian response emerges as the statistical nature of the bath
dominates its quantum dynamics. We complement this analysis with a study of an
ionic impurity, which behaves equivalently. Our exploration offers insights
into the interplay between interaction range, density, and many-body behavior
in impurity systems.
The dynamics of strongly-correlated quantum mixtures pose a significant
challenge to theoretical description, even at the level of a single impurity
immersed in a non-interacting bath. The apparent simplicity of the Hamiltonian
of such a mixture,
$\displaystyle\hat{H}$
$\displaystyle=\sum_{\boldsymbol{k}}\frac{\boldsymbol{k}^{2}}{2\mu}\hat{b}_{\boldsymbol{k}}^{\dagger}\hat{b}_{\boldsymbol{k}}+\sum_{\boldsymbol{k},\boldsymbol{q}}V(\boldsymbol{q})\hat{b}_{\boldsymbol{k}+\boldsymbol{q}}^{\dagger}\hat{b}_{\boldsymbol{k}},$
(1)
belies the rich complexity of the phenomena emergent in this many-particle
system [1, 2, 3]. In Equation 1, which is written in a frame centered on the
zero-momentum impurity, $\hat{b}_{\boldsymbol{k}}^{\dagger}$ and
$\hat{b}_{\boldsymbol{k}}$ denote the bath creation and annihilation
operators, $V(\boldsymbol{q})$ is the interspecies interaction, and $\mu$ is
the reduced mass of the impurity and a bath atom [4]. $\hat{H}$ is commonly
used to describe dilute gases in which the mean interparticle distance
$\rho^{-1/3}$, where $\rho$ is the density, greatly exceeds the range of the
potential $V(\boldsymbol{r})$. This justifies its replacement by a zero-range
pseudopotential proportional to the bath-impurity scattering length $a$ [5].
Within this approximation, the physics of the system becomes universal,
depending only on the scattering length, the dimensionality of the system, and
the density and quantum statistics of the bath [6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16]. Measurements of repulsive and attractive polaron quasiparticles
and weakly bound molecules in ultracold gases have provided strong evidence
for this universal behavior [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 16].
However, this universality is not expected to apply when the interaction range
is comparable to the typical interparticle distance. Such is the case for a
Rydberg impurity, where the highly excited Rydberg electron with principal
quantum number $n$ mediates the impurity-bath interaction by scattering off of
the bath particles. The $s$-wave electron-atom scattering length $a_{s}$ (see
Figure 1) determines the overall strength of this interaction [27, 28, 29].
For a Rydberg $\ket{nS}$ state this leads to the isotropic potential
$\displaystyle V_{\mathrm{Ryd}}(r)=2\pi
a_{s}\absolutevalue{\psi_{n00}(r)}^{2},$ (2)
which, unlike a zero-range potential, can in principle support several bound
states [30, 31, 32, 33, 34]. The appearance of the Rydberg wave function,
$\psi_{n00}(r)$, causes the range $R_{0}$ and depth $V_{0}$ of this highly
oscillatory potential to vary as $n^{2}$ and $n^{-6}$, respectively [34].
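For orientation, here is a minimal numerical sketch of Equation 2, assuming a pure hydrogenic $nS$ wave function in atomic units; quantum defects and the $p$-wave contribution to the electron-atom interaction are neglected, and the grid and the value of $a_{s}$ below are purely illustrative.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def psi_n00(r, n):
    """Hydrogenic nS wave function psi_{n00}(r) in atomic units (quantum defects neglected)."""
    # R_{n0}(r) = sqrt((2/n)^3 / (2 n^2)) * exp(-r/n) * L_{n-1}^{(1)}(2 r / n)
    norm = np.sqrt((2.0 / n) ** 3 / (2.0 * n**2))
    radial = norm * np.exp(-r / n) * eval_genlaguerre(n - 1, 1, 2.0 * r / n)
    return radial / np.sqrt(4.0 * np.pi)  # multiply by Y_00 = 1/sqrt(4*pi)

def v_ryd(r, n, a_s):
    """Fermi pseudopotential V_Ryd(r) = 2*pi*a_s*|psi_{n00}(r)|^2 (atomic units)."""
    return 2.0 * np.pi * a_s * np.abs(psi_n00(r, n)) ** 2

# Illustrative (assumed) parameters: n = 50 and a_s = -0.05 a_0.
r = np.linspace(1.0, 2 * 50**2, 5000)          # radial grid out to R_0 ~ 2 n^2
v = v_ryd(r, n=50, a_s=-0.05)
print(f"potential depth ~ {v.min():.2e} a.u.")  # depth scales roughly as n^-6 (see text)
```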
Recently, the ultracold toolbox has been expanded to include other impurity
systems with finite-ranged interactions, such as dipolar atoms [35, 36, 37]
and ion-atom mixtures [38, 39, 40, 41]. These break the zero-range
universality in a similar fashion, and raise the question of whether or not a
unified description of a finite-ranged impurity in a quantum environment
exists.
In this article, we approach this question through an exploration of the
behavior of a Rydberg impurity interacting with an ideal Bose-Einstein
Condensate (BEC) at zero temperature. Previous studies [42, 43, 44, 45] have
treated such an impurity in isolation from the zero-range Bose polaron due to
the large discrepancy in length and energy scales. However, we show that these
different impurities share the same underlying physics determined by the
universal parameter $a\rho^{1/3}$. Even though the spectral response becomes
more complicated in finite-ranged impurity systems, each component can be
understood and generally described. To further support these findings, we also
consider an ionic impurity. Together, this leads to an extension of the
universal description of Bose polarons to a broader class of interactions.
Further, we attempt to unify the disparate interpretations provided by the
many approaches developed for such impurity problems, which include the field-
theoretical quasiparticle methods describing polaron physics, the few-body
picture of molecular physics, and semiclassical methods originating in the
theory of pressure broadening.
Figure 1: Two scattering lengths characterize the interaction of a Rydberg
impurity with its environment. Atoms probing the interior of the Rydberg atom
collide with the highly excited electron directly; these interactions are
characterized by the atom-electron scattering length $a_{\mathrm{s}}$. In
contrast, distant atoms interact with the Rydberg atom as a single entity, and
the atom-impurity scattering length $a_{\mathrm{Ryd}}$ is characteristic of
this interaction.
To investigate and characterize the universal aspects of the Rydberg impurity,
we performed a detailed numerical study of the absorption spectrum
$A(\omega)$. This is obtained from the Fourier transform of the auto-
correlation function
$S(t)=\langle
e^{i\hat{H}_{0}t}e^{-i\hat{H}t}\rangle=\left(\sum_{\alpha}e^{i(\epsilon_{0}-\omega_{\alpha})t}\absolutevalue{\bra{0}\ket{\alpha}}^{2}\right)^{N},$
(3)
where the expectation value is taken with respect to the non-interacting BEC
state (the ground state of $\hat{H}_{0}$) [44]. Since the ideal BEC is in a
pure product state, the final expression of the many-body response only
requires the eigenstates $(\ket{0})$, $\ket{\alpha}$ and energies
$(\epsilon_{0})$, $\omega_{\alpha}$ of the (non)-interacting two-body
Hamiltonian of the Rydberg atom and a single boson. We solve for the two-body
physics by employing the eigenchannel R-matrix method [46], which yields the
energy-dependent logarithmic derivative of the scattering wave function
$\ket{\alpha}$ for $r>R_{0}$. This allows us to efficiently calculate
thousands of box-continuum states, molecular bound states, and the zero-energy
scattering length $a_{\mathrm{Ryd}}$. Figure 1 shows
$a_{\mathrm{Ryd}}(a_{s})$, which carries information about both the
interaction range $R_{0}\sim 2n^{2}$ and the dependence of two-body bound
states on $V_{0}$. We study a Rydberg impurity with $n=50$ as an illustrative
and generic example, and compute $A(\omega)$ as a function of $a_{s}$ (further details about the interaction potential and the numerical parameters for the calculation can be found in the Supplementary Material at: URL to be inserted by publisher). In this way we adjust $V_{0}$ independent of $R_{0}$ (while it is not possible to tune the electron-atom scattering length in experiment, it is a convenient theoretical tool due to this decoupling of $V_{0}$ and $R_{0}$; to change $a_{\mathrm{Ryd}}$ experimentally, $n$ can be varied to explore the different interaction regimes studied here).
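Evaluating Equation 3 is then a post-processing step once the two-body spectrum is known. The following is a minimal sketch, assuming that arrays of two-body energies and overlaps $\absolutevalue{\bra{0}\ket{\alpha}}^{2}$ (here `omega_alpha` and `overlap2`, filled with toy values) are already available from such a two-body calculation; it forms $S(t)$, raises it to the power $N$, and takes the magnitude of its Fourier transform as a simple stand-in for $A(\omega)$.

```python
import numpy as np

def absorption_spectrum(omega_alpha, overlap2, eps0, n_atoms, t_max, n_t):
    """A(omega) from S(t) = (sum_alpha |<0|alpha>|^2 exp(i(eps0-omega_alpha)t))^N."""
    t = np.linspace(0.0, t_max, n_t)
    phases = np.exp(1j * np.outer(t, eps0 - omega_alpha))   # shape (n_t, n_alpha)
    s_single = phases @ overlap2                             # single-boson autocorrelation
    s_many = s_single**n_atoms                               # ideal BEC: pure product state
    damping = np.exp(-(t / t_max) ** 2)                      # mild window for finite resolution
    a_omega = np.fft.fftshift(np.fft.fft(s_many * damping))
    omega = np.fft.fftshift(np.fft.fftfreq(n_t, d=t[1] - t[0])) * 2 * np.pi
    return omega, np.abs(a_omega)                            # |FT| as a proxy for A(omega)

# Toy input (assumed, for illustration only): one bound state and a few continuum states.
omega_alpha = np.array([-2.0e-7, 1.0e-8, 3.0e-8, 6.0e-8])    # two-body energies (a.u.)
overlap2 = np.array([0.05, 0.55, 0.30, 0.10])                # normalized overlaps
omega, a = absorption_spectrum(omega_alpha, overlap2, eps0=0.0,
                               n_atoms=100, t_max=5e8, n_t=4096)
```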
Figure 2: $A(\omega)$ of a $50S$ Rydberg impurity in a BEC with
$\rho=10^{12}\,$cm-3. The mean-field energy shifts $E_{\mathrm{zr}}$ (teal)
and $E_{\mathrm{Ryd}}$ (green), as well as the bare and dressed dimer energies
$E_{\mathrm{b}}$ (solid white) and $E_{\mathrm{b}}+E_{\mathrm{zr}}$ (dashed
white) are overlaid.
Figure 2 shows $A(\omega)$ when $R_{0}\rho^{1/3}\ll 1$. In this regime, we
recover all of the features known from the limit of a zero-range impurity [49,
3, 12]. When $a_{\mathrm{s}}>-0.06a_{0}$, $V_{\mathrm{Ryd}}$ does not support
a two-body bound state, and $A(\omega)$ exhibits a single peak at negative
energy indicating the formation of an attractive polaron. The mean-field
energy of the zero-range approximation of the Rydberg potential,
$E_{\mathrm{zr}}=2\pi a_{\mathrm{Ryd}}\rho/\mu$, describes the position of
this feature. However, this description fails dramatically near a scattering
resonance. Instead, the mean-field description of the full Rydberg potential,
$E_{\mathrm{Ryd}}=\rho\int V_{\mathrm{Ryd}}(r)\mathrm{d}^{3}r={2\pi
a_{s}\rho}/{m_{e}}$ [44, 28, 50], follows the center of spectral weight
smoothly across unitarity, even as the spectral feature diffuses and cannot be
associated with a well-defined quasiparticle [51, 3]. Using the Born
approximation for the Rydberg scattering length,
$a_{\mathrm{Ryd}}^{\mathrm{B}}=a_{s}\mu/m_{e}$, one can rewrite
$E_{\mathrm{Ryd}}=2\pi a_{\mathrm{Ryd}}^{\mathrm{B}}\rho/\mu$ to have the same
structure as $E_{\mathrm{zr}}$.
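As a quick orientation on magnitudes, a short sketch of these two mean-field estimates, assuming $^{87}$Rb for both the impurity core and the bath atoms and purely illustrative values for $\rho$, $a_{\mathrm{Ryd}}$, and $a_{s}$:

```python
import numpy as np

A0_CM = 5.29177210903e-9           # Bohr radius in cm
M_RB = 86.909 * 1822.888486        # 87Rb mass in electron masses (1 u = 1822.888 m_e)
MU = M_RB / 2.0                    # reduced mass of two identical Rb atoms
M_E = 1.0                          # electron mass in atomic units

def mean_field_shifts(rho_cm3, a_ryd_au, a_s_au):
    """Return (E_zr, E_Ryd) in atomic units for a bath density given in cm^-3."""
    rho_au = rho_cm3 * A0_CM**3                      # density in units of a_0^{-3}
    e_zr = 2.0 * np.pi * a_ryd_au * rho_au / MU      # zero-range shift with full a_Ryd
    e_ryd = 2.0 * np.pi * a_s_au * rho_au / M_E      # Born-level shift with a_s
    return e_zr, e_ryd

# Illustrative (assumed) values: rho = 1e12 cm^-3, a_Ryd ~ 1e4 a_0, a_s = -0.05 a_0.
print(mean_field_shifts(1e12, a_ryd_au=1.0e4, a_s_au=-0.05))
```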
To the red of the resonance, $E_{\mathrm{zr}}$ again describes the brightest
spectral feature, which is located at positive energy and therefore identified
as a repulsive polaron. Below this feature lies a series of negative energy
peaks associated with the molecular bound state, which can be multiply
occupied to form dimers, trimers, and the like. These interact with the
residual bath through the same Rydberg interaction as the bare atom, and as a
result they are dressed by bath excitations in the same fashion. Each
ultralong-range Rydberg molecule therefore possesses some quasiparticle
character inherited from the repulsive polaron and forms a "molaron" with a
binding energy shifted from that of the bare dimer, $E_{\mathrm{b}}$, to
$E_{\mathrm{b}}+E_{\mathrm{zr}}$ [52, 53, 54, 55]. Myriad experiments have
observed this vibrational spectrum, confirming its basic structure but not yet
providing conclusive evidence for the many-body shift $E_{\mathrm{zr}}$ [56,
57, 58, 43]. This is not surprising, since theoretical uncertainties [59, 60,
61, 62, 63] would have hidden this small shift.
Figure 3: $A(\omega)$ of a $50S$ Rydberg impurity in a BEC with (a)
$\rho=10^{13}\,$cm-3, (b) $\rho=5\cdot 10^{13}\,$cm-3, and (c) $\rho=5\cdot
10^{14}\,$cm-3. In (a), the mean-field energy $E_{\mathrm{zr}}$ (teal) is
shown alongside the mean-field energies of various molaron states. In panels
(b) and (c) cuts of $A(\omega)$ at fixed $a_{s}$ (white curves) show the
lineshape more clearly. $E_{\mathrm{Ryd}}$ is shown in green.
Figure 3(a) displays $A(\omega)$ over a broader range of $a_{s}$ and at ten
times the density of Figure 2, which causes the molaron peaks to accumulate
appreciable spectral weight as $R_{0}\rho^{1/3}\sim 1$. This reveals an
intriguing internal structure due to the appearance of additional two-body
bound states as the potential deepens. This structure suggests a nomenclature,
the $[n_{0},\dots,n_{i},\dots,n_{M}]_{M}$-molaron, where $M$ is the total
number of two-body bound states supported by the potential and $n_{i}$ is the
bosonic occupation number of the $i$th bound state, with $i=0$ representing
the bare atom. The peak position of the $[0_{0}]_{M}$-molaron, i.e. the
polaron, coincides with $E_{\mathrm{zr}}$. Exemplary molaron peaks are labeled in
Figure 3(a). At every scattering resonance each quasiparticle peak undergoes
the same fragmentation seen in Figure 2. For example, at the second scattering
resonance, each of the $[n_{i}]_{1}$ states broadens and eventually splits
into a multitude of states $[n_{i},m_{i+1}]_{2}$.
An important consequence of the large extent of the Rydberg potential is that
$a_{\mathrm{Ryd}}$ is large and positive except close to a resonance, where it
dips below zero. As a result, the attractive polaron exists only in a very
limited parameter space. At the transition from a repulsive to an attractive
polaron, $a_{\mathrm{Ryd}}$ vanishes at a Ramsauer-Townsend zero [64, 65].
Despite its non-zero interaction potential, the Rydberg atom effectively does
not interact with the bath – the scattering phase shift vanishes. Here, the
molaron features sharpen, losing quasiparticle weight as they more closely
resemble bare molecules.
At higher density (Figure 3b) the spectral weight shifts entirely into molaron
states. Their absorption peaks blur together and $A(\omega)$ takes on a
Gaussian profile with mean value $E_{\mathrm{Ryd}}$, whose emergence is
explained by the central limit theorem [42, 44, 66]. However, the regression
to a Gaussian distribution does not occur at the same density for each
$a_{s}$: the distinct molaron peaks, still resolvable in the vicinity of the
Ramsauer-Townsend zeros in Figure 3(b), merge to form a Gaussian spectral
profile only at higher density (panel (c)).
Figure 4: Effective quasiparticle weight of the Rydberg impurity, calculated
by averaging $S(t)$ over late times. The solid white curve shows
$a_{\mathrm{Ryd}}\rho^{1/3}=1$, which demarcates the two phases: quasiparticle
(molaron or polaron limit with $Z=1$) and semiclassical / statistical ($Z=0$).
The dashed white line shows $R_{0}\rho^{1/3}=1$, the semiclassical transition.
This progression from an individual Lorentzian at zero density through
asymmetric lineshapes and polymer formation to a symmetric Gaussian
distribution at high density is largely consistent with the semi-classical
theory of pressure broadening [67, 68, 69, 50, 70, 71, 72, 73]. This theory
predicts the occurrence of "satellite" peaks at integer multiples of the
extrema of the interaction potential, an obvious parallel to the molaron
structure, as well as their blending together into a Gaussian response as the
number of particles within the range of the potential, $R_{0}^{3}\rho$,
increases [66, 69]. Our calculation shows that a more accurate condition takes
into account the zero-energy scattering length, and thus generalizes the above
condition to $\rho^{-1/3}\ll a_{\mathrm{Ryd}}$.
This more accurate condition is depicted in Figure 4, which shows a
qualitative estimate of the quasiparticle weight given by a temporal average
of $S(t)$ in the long time limit. The curve $a_{\mathrm{Ryd}}\rho^{1/3}=1$
indicates the transition from a spectrum dominated by quasiparticle
features (where $S(t)\gg 0$) to a purely statistical state characterized by
the Gaussian lineshape with no quasiparticle weight. In the extreme limit
$a_{\mathrm{Ryd}}\to 0^{+/-}$, the above condition is never satisfied and
$A(\omega)$ will show distinct peaks no matter the density. This explains the
regions with large quasiparticle weight even at high densities seen in Figure
4 where the impurity has a small but non-zero quasiparticle character due to
its vastly reduced coupling to the dense environment.
In contrast, the region of approximately zero quasiparticle weight extends to
very low densities when $a_{\mathrm{Ryd}}$ diverges. This corresponds to the
parameter regime where the loss of quasiparticle characteristics and the
broadening of the well-defined polaron peak into a diffuse continuum are known
from zero-range impurities as well [6, 52, 74]. Consequently, the spectral
broadening of the attractive polaron peak near resonance in the zero-range
polaron and the emergence of the Gaussian feature in Rydberg polaron studies
share the same underlying physical origin.
This collection of phenomena is not limited to a Rydberg atom, but is shared
by all impurity systems: the quantitative differences between the absorption
spectra of various impurities are a matter of degree rather than of kind. This
can be seen by comparing the spectrum of a neutral impurity [52, 49] or an
ionic impurity [38] with the present results.
Figure 5: $A(\omega)$ of an ionic impurity at $\rho=2\,R_{\mathrm{Ion}}^{-3}$.
The green line shows the mean-field energy shift $E_{\mathrm{Ion}}$. The inset
highlights the fragmentation of molaron states.
In the latter case, the interaction is often taken to be a regularized
polarization potential [75, 38]
$\displaystyle V_{\mathrm{Ion}}(r)$
$\displaystyle=-\frac{\alpha}{(r^{2}+b^{2})^{2}}\frac{r^{2}-c^{2}}{r^{2}+c^{2}},$
(4)
with a characteristic length scale $R_{\mathrm{Ion}}=\sqrt{2\mu\alpha}$. We
calculated $A(\omega)$ for $c=0.0023\,R_{\mathrm{Ion}}$, varying $b$ to adjust
the potential depth, as shown in Figure 5. We observe polaron features and
fragmentation of the molaron states as they cross a scattering resonance,
familiar from Figure 3(a). At the density of $\rho=2\,R_{\mathrm{Ion}}^{-3}$,
the molaron states possess significant spectral weight and hint at the
emerging Gaussian lineshape centered around the mean-field energy shift
$E_{\mathrm{Ion}}=\rho\int V_{\mathrm{Ion}}(r)\mathrm{d}^{3}r$, which becomes
completely dominant at higher densities as in Figure 3(c) [76]. As with the
Rydberg impurity, by writing $E_{\mathrm{Ion}}=2\pi
a_{\mathrm{Ion}}^{B}\rho/\mu$ we see that this feature is simply characterized
by the Born approximation for the scattering length, the reduced mass and the
bath density.
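A brief sketch of how these quantities follow from Equation 4, assuming placeholder values for the polarizability $\alpha$, the reduced mass $\mu$, the regularization parameter $b$, and the density; only $c=0.0023\,R_{\mathrm{Ion}}$ is taken from the text.

```python
import numpy as np
from scipy.integrate import quad

def v_ion(r, alpha, b, c):
    """Regularized polarization potential of Eq. (4), atomic units."""
    return -alpha / (r**2 + b**2) ** 2 * (r**2 - c**2) / (r**2 + c**2)

def born_quantities(alpha, b, c, mu, rho):
    """Born scattering length a^B = (mu/2pi) Int V d^3r and shift E_Ion = rho Int V d^3r."""
    integral, _ = quad(lambda r: v_ion(r, alpha, b, c) * 4.0 * np.pi * r**2, 0.0, np.inf)
    return mu / (2.0 * np.pi) * integral, rho * integral

# Illustrative (assumed) parameters for a generic alkali ion-atom pair, atomic units.
alpha, mu = 320.0, 8.0e4
r_ion = np.sqrt(2.0 * mu * alpha)        # characteristic length R_Ion
b, c = 0.1 * r_ion, 0.0023 * r_ion       # c as in the text; b is an assumed depth knob
rho = 2.0 / r_ion**3                     # density used in Figure 5
print(born_quantities(alpha, b, c, mu, rho))
```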
With the insights provided by these two impurity systems, we have shown that
the universal parameter $a\rho^{1/3}$ determines the qualitative behavior of
their absorption spectra. This unites the phenomena of finite-ranged
impurities with those known from the well-studied zero-range impurity (the
"Bose polaron"). Even deeply bound molecular states, whose energies depend on
the details of the two-body interaction, respond identically to the influence
of the bath. At sufficiently high densities, these details become irrelevant
and the system response is universal, depending only on its scattering length
in the Born approximation within a mean-field description.
The huge length scales of the Rydberg atom are especially appropriate for the
approximations made in $\hat{H}$: the neglect of boson-boson interactions and
kinetic energy correlation. Rydberg impurities therefore may serve as a more
suitable platform for studying molarons and refining their theoretical
description than ground-state atoms, even though the bath-induced density
shift of the molecule peaks, typically on the scale $4\pi
n^{2}\rho/\mu$, is small relative to the binding energies. These subtle many-
body energy shifts, including the shift of the bare atomic line (the repulsive
Rydberg polaron), could be isolated by a careful study of the density-
dependence and asymmetry of the absorption peaks, particularly for light
atoms. Additionally, fermionic environments, which have been partially
explored for Rydberg and ionic impurities, provide another avenue for future
investigation [45, 77].
###### Acknowledgements.
We are grateful for many enlightening discussions with P. Giannakeas, A.
Eisfeld, and S. Wüster.
## References
* Schmidt _et al._ [2018a] R. Schmidt, M. Knap, D. A. Ivanov, J.-S. You, M. Cetina, and E. Demler, Universal many-body response of heavy impurities coupled to a Fermi sea: a review of recent progress, Reports on Progress in Physics 81, 024401 (2018a).
* Alexandrov and Devreese [2009] A. Alexandrov and J. Devreese, _Advances in Polaron Physics_, Springer Series in Solid-State Sciences (Springer Berlin Heidelberg, 2009).
* Shchadilova _et al._ [2016a] Y. E. Shchadilova, R. Schmidt, F. Grusdt, and E. Demler, Quantum Dynamics of Ultracold Bose Polarons, Physical Review Letters 117, 113002 (2016a).
* Lee and Pines [1953] T.-D. Lee and D. Pines, Interaction of a Nonrelativistic Particle with a Scalar Field with Application to Slow Electrons in Polar Crystals, Physical Review 92, 883 (1953).
* Huang and Yang [1957] K. Huang and C. N. Yang, Quantum-mechanical many-body problem with hard-sphere interaction, Physical review 105, 767 (1957).
* Shashi _et al._ [2014] A. Shashi, F. Grusdt, D. A. Abanin, and E. Demler, Radio-frequency spectroscopy of polarons in ultracold Bose gases, Physical Review A 89, 053617 (2014).
* Rath and Schmidt [2013] S. P. Rath and R. Schmidt, Field-theoretical study of the Bose polaron, Physical Review A 88, 053632 (2013).
* Christensen _et al._ [2015] R. S. Christensen, J. Levinsen, and G. M. Bruun, Quasiparticle Properties of a Mobile Impurity in a Bose-Einstein Condensate, Physical Review Letters 115, 160401 (2015).
* Drescher _et al._ [2019] M. Drescher, M. Salmhofer, and T. Enss, Real-space dynamics of attractive and repulsive polarons in Bose-Einstein condensates, Physical Review A 99, 023601 (2019).
* Ardila [2021] L. A. P. Ardila, Dynamical formation of polarons in a Bose-Einstein condensate: A variational approach, Physical Review A 103, 033323 (2021).
* Van Loon _et al._ [2018] S. Van Loon, W. Casteels, and J. Tempere, Ground-state properties of interacting Bose polarons, Physical Review A 98, 063631 (2018).
* Grusdt _et al._ [2017] F. Grusdt, R. Schmidt, Y. E. Shchadilova, and E. Demler, Strong-coupling Bose polarons in a Bose-Einstein condensate, Physical Review A 96, 013607 (2017).
* Tajima _et al._ [2021] H. Tajima, J. Takahashi, S. Mistakidis, E. Nakano, and K. Iida, Polaron Problems in Ultracold Atoms: Role of a Fermi Sea across Different Spatial Dimensions and Quantum Fluctuations of a Bose Medium, Atoms 9, 18 (2021).
* Massignan _et al._ [2014] P. Massignan, M. Zaccanti, and G. M. Bruun, Polarons, dressed molecules and itinerant ferromagnetism in ultracold Fermi gases, Reports on Progress in Physics 77, 034401 (2014).
* Chevy and Mora [2010] F. Chevy and C. Mora, Ultra-cold polarized Fermi gases, Reports on Progress in Physics 73, 112401 (2010).
* Etrych _et al._ [2024a] J. Etrych, G. Martirosyan, A. Cao, C. J. Ho, Z. Hadzibabic, and C. Eigen, Universal quantum dynamics of bose polarons (2024a), arXiv:2402.14816 [cond-mat.quant-gas] .
* Jørgensen _et al._ [2016] N. B. Jørgensen, L. Wacker, K. T. Skalmstang, M. M. Parish, J. Levinsen, R. S. Christensen, G. M. Bruun, and J. J. Arlt, Observation of Attractive and Repulsive Polarons in a Bose-Einstein Condensate, Physical Review Letters 117, 055302 (2016).
* Yan _et al._ [2020] Z. Z. Yan, Y. Ni, C. Robens, and M. W. Zwierlein, Bose polarons near quantum criticality, Science 368, 190 (2020), publisher: American Association for the Advancement of Science.
* Hu _et al._ [2016] M.-G. Hu, M. J. Van De Graaff, D. Kedar, J. P. Corson, E. A. Cornell, and D. S. Jin, Bose Polarons in the Strongly Interacting Regime, Physical Review Letters 117, 055301 (2016).
* Catani _et al._ [2012] J. Catani, G. Lamporesi, D. Naik, M. Gring, M. Inguscio, F. Minardi, A. Kantian, and T. Giamarchi, Quantum dynamics of impurities in a one-dimensional Bose gas, Physical Review A 85, 023623 (2012).
* Schmidt _et al._ [2018b] F. Schmidt, D. Mayer, Q. Bouton, D. Adam, T. Lausch, N. Spethmann, and A. Widera, Quantum Spin Dynamics of Individual Neutral Impurities Coupled to a Bose-Einstein Condensate, Physical Review Letters 121, 130403 (2018b).
* Skou _et al._ [2021] M. G. Skou, T. G. Skov, N. B. Jørgensen, K. K. Nielsen, A. Camacho-Guardian, T. Pohl, G. M. Bruun, and J. J. Arlt, Non-equilibrium quantum dynamics and formation of the Bose polaron, Nature Physics 17, 731 (2021), number: 6 Publisher: Nature Publishing Group.
* Scelle _et al._ [2013] R. Scelle, T. Rentrop, A. Trautmann, T. Schuster, and M. K. Oberthaler, Motional Coherence of Fermions Immersed in a Bose Gas, Physical Review Letters 111, 070401 (2013).
* Ness _et al._ [2020] G. Ness, C. Shkedrov, Y. Florshaim, O. K. Diessel, J. Von Milczewski, R. Schmidt, and Y. Sagi, Observation of a Smooth Polaron-Molecule Transition in a Degenerate Fermi Gas, Physical Review X 10, 041019 (2020).
* Schirotzek _et al._ [2009] A. Schirotzek, C.-H. Wu, A. Sommer, and M. W. Zwierlein, Observation of Fermi Polarons in a Tunable Fermi Liquid of Ultracold Atoms, Physical Review Letters 102, 230402 (2009).
* Chin _et al._ [2010] C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Feshbach resonances in ultracold gases, Reviews of Modern Physics 82, 1225 (2010).
* Greene _et al._ [2000] C. H. Greene, A. S. Dickinson, and H. R. Sadeghpour, Creation of Polar and Nonpolar Ultra-Long-Range Rydberg Molecules, Physical Review Letters 85, 2458 (2000).
* Fermi [1934] E. Fermi, Sopra lo Spostamento per Pressione delle Righe Elevate delle Serie Spettrali, Il Nuovo Cimento 11, 157 (1934).
* Omont [1977] A. Omont, On the theory of collisions of atoms in rydberg states with neutral particles, Journal de Physique 38, 1343 (1977).
* Bendkowsky _et al._ [2009] V. Bendkowsky, B. Butscher, J. Nipper, J. P. Shaffer, R. Löw, and T. Pfau, Observation of ultralong-range Rydberg molecules, Nature 458, 1005 (2009).
* Shaffer _et al._ [2018] J. P. Shaffer, S. T. Rittenhouse, and H. R. Sadeghpour, Ultracold Rydberg molecules, Nature Communications 9, 1965 (2018).
* Fey _et al._ [2020] C. Fey, F. Hummel, and P. Schmelcher, Ultralong-range Rydberg molecules, Molecular Physics 118, e1679401 (2020).
* Saßmannshausen _et al._ [2016] H. Saßmannshausen, J. Deiglmayr, and F. Merkt, Long-range Rydberg molecules, Rydberg macrodimers and Rydberg aggregates in an ultracold Cs gas: Investigation of long-range interactions between atoms in electronically highly excited states, The European Physical Journal Special Topics 225, 2891 (2016).
* Eiles [2019] M. T. Eiles, Trilobites, butterflies, and other exotic specimens of long-range Rydberg molecules, Journal of Physics B: Atomic, Molecular and Optical Physics 52, 113001 (2019).
* Volosniev _et al._ [2023] A. G. Volosniev, G. Bighin, L. Santos, and L. A. Peña Ardila, Non-equilibrium dynamics of dipolar polarons, SciPost Physics 15, 232 (2023).
* Aikawa _et al._ [2012] K. Aikawa, A. Frisch, M. Mark, S. Baier, A. Rietzler, R. Grimm, and F. Ferlaino, Bose-Einstein Condensation of Erbium, Physical Review Letters 108, 210401 (2012).
* Lahaye _et al._ [2007] T. Lahaye, T. Koch, B. Fröhlich, M. Fattori, J. Metz, A. Griesmaier, S. Giovanazzi, and T. Pfau, Strong dipolar effects in a quantum ferrofluid, Nature 448, 672 (2007).
* Christensen _et al._ [2021] E. R. Christensen, A. Camacho-Guardian, and G. M. Bruun, Charged Polarons and Molecules in a Bose-Einstein Condensate, Physical Review Letters 126, 243001 (2021).
* Astrakharchik _et al._ [2021] G. E. Astrakharchik, L. A. P. Ardila, R. Schmidt, K. Jachymski, and A. Negretti, Ionic polaron in a Bose-Einstein condensate, Communications Physics 4, 94 (2021).
* Pessoa _et al._ [2024] R. Pessoa, S. A. Vitiello, and L. A. P. Ardila, Fermi polaron in atom-ion hybrid systems (2024), arXiv:2401.05324 [cond-mat, physics:physics, physics:quant-ph].
* Massignan _et al._ [2005] P. Massignan, C. J. Pethick, and H. Smith, Static properties of positive ions in atomic Bose-Einstein condensates, Physical Review A 71, 023606 (2005).
* Schmidt _et al._ [2016] R. Schmidt, H. Sadeghpour, and E. Demler, Mesoscopic Rydberg Impurity in an Atomic Quantum Gas, Physical Review Letters 116, 105302 (2016).
* Camargo _et al._ [2018] F. Camargo, R. Schmidt, J. Whalen, R. Ding, G. Woehl, S. Yoshida, J. Burgdörfer, F. Dunning, H. Sadeghpour, E. Demler, and T. Killian, Creation of Rydberg Polarons in a Bose Gas, Physical Review Letters 120, 083401 (2018).
* Schmidt _et al._ [2018c] R. Schmidt, J. D. Whalen, R. Ding, F. Camargo, G. Woehl, S. Yoshida, J. Burgdörfer, F. B. Dunning, E. Demler, H. R. Sadeghpour, and T. C. Killian, Theory of excitation of Rydberg polarons in an atomic quantum gas, Physical Review A 97, 022707 (2018c).
* Sous _et al._ [2020] J. Sous, H. R. Sadeghpour, T. C. Killian, E. Demler, and R. Schmidt, Rydberg impurity in a Fermi gas: Quantum statistics and rotational blockade, Physical Review Research 2, 023021 (2020).
* Aymar _et al._ [1996] M. Aymar, C. H. Greene, and E. Luc-Koenig, Multichannel Rydberg spectroscopy of complex atoms, Reviews of Modern Physics 68, 1015 (1996).
* Note [1] Further details about the interaction potential and the numerical parameters for the calculation can be found in the Supplementary Material at: URL to be inserted by publisher.
* Note [2] While it is not possible to tune the electron-atom scattering length in experiment, it is a convenient theoretical tool due to this decoupling of $V_{0}$ and $R_{0}$. To change $a_{\mathrm{Ryd}}$ experimentally, $n$ can be varied to explore the different interaction regimes studied here.
* Drescher _et al._ [2021] M. Drescher, M. Salmhofer, and T. Enss, Quench Dynamics of the Ideal Bose Polaron at Zero and Nonzero Temperatures, Physical Review A 103, 033317 (2021).
* Allard and Kielkopf [1982] N. Allard and J. Kielkopf, The effect of neutral nonresonant collisions on atomic spectral lines, Reviews of Modern Physics 54, 1103 (1982).
* Etrych _et al._ [2024b] J. Etrych, G. Martirosyan, A. Cao, C. J. Ho, Z. Hadzibabic, and C. Eigen, Universal quantum dynamics of Bose polarons (2024b), arXiv:2402.14816 [cond-mat, physics:physics, physics:quant-ph].
* Shchadilova _et al._ [2016b] Y. E. Shchadilova, F. Grusdt, A. N. Rubtsov, and E. Demler, Polaronic mass renormalization of impurities in Bose-Einstein condensates: Correlated Gaussian-wave-function approach, Physical Review A 93, 043606 (2016b).
* Mostaan _et al._ [2023] N. Mostaan, N. Goldman, and F. Grusdt, A unified theory of strong coupling Bose polarons: From repulsive polarons to non-Gaussian many-body bound states (2023), arXiv:2305.00835 [cond-mat, physics:physics, physics:quant-ph].
* Diessel _et al._ [2022] O. K. Diessel, J. von Milczewski, A. Christianen, and R. Schmidt, Probing molecular spectral functions and unconventional pairing using Raman spectroscopy (2022), arXiv:2209.11758 [cond-mat, physics:physics].
* Schirotzek [2010] A. Schirotzek, _Radio-frequency spectroscopy of ultracold atomic Fermi gases_ , Ph.D. thesis (Massachusetts Institute of Technology, 2010).
* Bendkowsky _et al._ [2010] V. Bendkowsky, B. Butscher, J. Nipper, J. B. Balewski, J. P. Shaffer, R. Löw, T. Pfau, W. Li, J. Stanojevic, T. Pohl, and J. M. Rost, Rydberg Trimers and Excited Dimers Bound by Internal Quantum Reflection, Physical Review Letters 105, 163201 (2010).
* Gaj _et al._ [2014] A. Gaj, A. T. Krupp, J. B. Balewski, R. Löw, S. Hofferberth, and T. Pfau, From molecular spectra to a density shift in dense Rydberg gases, Nature Communications 5, 4546 (2014).
* Engel _et al._ [2019] F. Engel, T. Dieterle, F. Hummel, C. Fey, P. Schmelcher, R. Löw, T. Pfau, and F. Meinert, Precision Spectroscopy of Negative-Ion Resonances in Ultralong-Range Rydberg Molecules, Physical Review Letters 123, 073003 (2019).
* Fey _et al._ [2015] C. Fey, M. Kurz, P. Schmelcher, S. T. Rittenhouse, and H. R. Sadeghpour, A comparative analysis of binding in ultralong-range Rydberg molecules, New Journal of Physics 17, 055010 (2015).
* Greene and Eiles [2023] C. H. Greene and M. T. Eiles, Green’s-function treatment of Rydberg molecules with spins, Physical Review A 108, 042805 (2023).
* MacLennan _et al._ [2019] J. L. MacLennan, Y.-J. Chen, and G. Raithel, Deeply bound $(24D_{J}+5S_{1/2})$ $^{87}$Rb and $^{85}$Rb molecules for eight spin couplings, Physical Review A 99, 033407 (2019).
* Peper and Deiglmayr [2020] M. Peper and J. Deiglmayr, Photodissociation of long-range Rydberg molecules, Physical Review A 102, 062819 (2020).
* Giannakeas _et al._ [2020] P. Giannakeas, M. T. Eiles, F. Robicheaux, and J. M. Rost, Generalized local frame-transformation theory for ultralong-range Rydberg molecules, Physical Review A 102, 033315 (2020).
* Ramsauer [1921] C. Ramsauer, Über den Wirkungsquerschnitt der Gasmoleküle gegenüber langsamen Elektronen, Annalen der Physik 369, 513 (1921).
* Townsend and Bailey [1921] J. Townsend and V. Bailey, XCVII. The motion of electrons in gases, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 42, 873 (1921).
* Royer [1980] A. Royer, Shift, width, and asymmetry of pressure-broadened spectral lines at intermediate densities, Physical Review A 22, 1625 (1980).
* Baranger [1958a] M. Baranger, General Impact Theory of Pressure Broadening, Physical Review 112, 855 (1958a).
* Baranger [1958b] M. Baranger, Simplified Quantum-Mechanical Theory of Pressure Broadening, Physical Review 111, 481 (1958b).
* Allard _et al._ [1988] N. F. Allard, Y. G. Biraud, and A. Chevillot, Collision-broadened spectral line profiles in the limit of high perturber density, Physical Review A 37, 1479 (1988).
* Allard [1978] N. F. Allard, Alkali-rare-gas line profiles in a square-well potential approximation. I. Satellites, Journal of Physics B: Atomic and Molecular Physics 11, 1383 (1978).
* Allard and Biraud [1980] N. Allard and Y. Biraud, Alkali rare-gas line profiles in a square-well potential approximation: Width, shift, and asymmetry, Journal of Quantitative Spectroscopy and Radiative Transfer 23, 253 (1980).
* Szudy and Baylis [1975] J. Szudy and W. Baylis, Unified Franck-Condon treatment of pressure broadening of spectral lines, Journal of Quantitative Spectroscopy and Radiative Transfer 15, 641 (1975).
* Szudy and Baylis [1996] J. Szudy and W. E. Baylis, Profiles of line wings and rainbow satellites associated with optical and radiative collisions, Physics Reports 266, 127 (1996).
* Tempere _et al._ [2009] J. Tempere, W. Casteels, M. K. Oberthaler, S. Knoop, E. Timmermans, and J. T. Devreese, Feynman path-integral treatment of the BEC-impurity polaron, Physical Review B 80, 184504 (2009).
* Tomza _et al._ [2019] M. Tomza, K. Jachymski, R. Gerritsma, A. Negretti, T. Calarco, Z. Idziaszek, and P. S. Julienne, Cold hybrid ion-atom systems, Reviews of Modern Physics 91, 035001 (2019).
* [76] Further details about the interaction potential and the numerical parameters for the calculation can be found in the Supplementary Material at: URL to be inserted by publisher.
* Christensen _et al._ [2022] E. R. Christensen, A. Camacho-Guardian, and G. M. Bruun, Mobile ion in a Fermi sea, Physical Review A 105, 023309 (2022).
## Appendix A Supplementary Information
In the following, we provide additional details and parameters for the
calculations presented in the main text.
### A.1 Loschmidt Echo
The Loschmidt-Echo $S(t)$, also known as the autocorrelation function, is the
time-dependent overlap of the bare BEC state and the interacting state
following the sudden introduction of the impurity at $t=0$. Beginning with the
BEC wave function
$\ket{\psi_{\mathrm{BEC}}}=\frac{1}{\sqrt{N!}}\big(b_{0}^{\dagger}\big)^{N}\ket{\mathrm{vac}}$,
where
$\displaystyle N=\frac{\rho_{0}\cdot 2L^{3}}{\pi}$ (5)
is the number of $s$-wave BEC particles, $S(t)$ is given by
$\displaystyle S(t)$
$\displaystyle=\bra{\psi_{\mathrm{BEC}}}e^{i\hat{H}_{0}t}e^{-i\hat{H}t}\ket{\psi_{\mathrm{BEC}}}$
(6)
$\displaystyle=\left(\sum_{\alpha}e^{i(\epsilon_{0}-\omega_{\alpha})t}\absolutevalue{\bra{0}\ket{\alpha}}^{2}\right)^{N}.$
(7)
Here the state $\ket{0}$ is the ground state of the non-interacting two-body
Hamiltonian
$h_{0}(r)=-\frac{\nabla_{r}^{2}}{2\mu}$ (8)
with eigenenergy $\epsilon_{0}$, while $\ket{\alpha}$ are the eigenstates of
the interacting two-body Hamiltonian
$h(r)=-\frac{\nabla^{2}_{r}}{2\mu}+V_{\mathrm{Ryd}}(r)$ (9)
with eigenenergies $\omega_{\alpha}$. Calculating $S(t)$ ultimately involves
determining the interacting eigenstates $\ket{\alpha}$ and their corresponding
eigenenergies $\omega_{\alpha}$. A more complete derivation can be found in
[45]. We perform the time evolution of $S(t)$ up to
$t_{\mathrm{max}}=1000\,\mu$s in order to compute $A(\omega)$ by directly evaluating the Fourier transform of $S$. To obtain a numerically well-defined Fourier transform, we multiply $S(t)$ by an exponential decay
$\exp[-t/(0.4\cdot t_{\mathrm{max}})]$. This decay time is chosen to be large
in the present study to avoid obscuring any interesting results by numerical
broadening of the spectral features. Tests for shorter decay times, which
model the finite Rydberg lifetime, did not show significant deviation from the
calculations performed here.
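As a concrete illustration of this post-processing step, the short sketch below (not the production code) assembles $S(t)$ from a set of eigenenergies $\omega_{\alpha}$ and Franck-Condon overlaps, applies the exponential damping, and Fourier transforms to $A(\omega)$; the eigenvalues, overlaps, and the sign convention of the transform are illustrative placeholders rather than the values used in the main text.

```python
import numpy as np

# Illustrative post-processing of the Loschmidt echo: synthetic eigenenergies
# omega_alpha and Franck-Condon factors |<0|alpha>|^2 stand in for the actual
# two-body spectrum computed in Secs. A.2-A.4.
N = 100                                     # number of s-wave BEC particles (illustrative)
eps0 = 0.0                                  # non-interacting ground-state energy
omega = np.linspace(0.0, 50.0, 400)         # placeholder eigenenergies (arb. units)
fc = np.exp(-omega / 5.0)                   # placeholder overlaps |<0|alpha>|^2
fc /= fc.sum()                              # overlaps sum to one

t_max = 50.0
t = np.linspace(0.0, t_max, 4000)
# single-particle overlap of Eq. (7), before raising it to the N-th power
s1 = np.array([np.sum(fc * np.exp(1j * (eps0 - omega) * ti)) for ti in t])
S = s1**N * np.exp(-t / (0.4 * t_max))      # exponential damping of the echo

# A(omega) from a direct Fourier transform of S(t) (one common sign convention)
w_grid = np.linspace(-5.0, 60.0, 600)
A = np.array([np.trapz(np.real(np.exp(1j * w * t) * S), t) for w in w_grid])
```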
### A.2 Rydberg potential
For an $s$-state Rydberg electron the interaction potential
$V_{\mathrm{Ryd}}(r)$ is isotropic and spherically symmetric. To simplify the
theoretical analysis, we neglect the effect of a finite quantum defect and use
the electronic wave function $\psi_{nlm}(r)$ of the hydrogen atom. Further, we
assume that the electron-atom scattering length $a_{s}$, which sets the
strength of the overall interaction, is energy-independent. These assumptions
yield the two-body interaction potential,
$\displaystyle V_{\mathrm{Ryd}}(r)=2\pi
a_{s}\absolutevalue{\psi_{n00}(r)}^{2},$ (10)
which captures the features of more sophisticated calculations to a semi-
quantitative degree. In the main text our results are computed for an
electronic $n=50$ state of a mass-balanced system with the mass of $^{84}$Sr,
$\mu=83.91342/2\,[\mathrm{au}]$, in a box with radial extent of
$L=550n^{2}a_{0}$.
Within the Born approximation, the zero-energy scattering length is given by
$a_{\mathrm{Ryd}}^{\mathrm{B}}=\frac{m_{e}}{\mu}a_{s}$, from which we obtain
the density shift $E_{\mathrm{Ryd}}=\frac{2\pi a_{s}}{m_{e}}\rho$. This gives the
position of the Gaussian feature in the high density regime.
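As a numerical illustration of Eq. (10), the sketch below evaluates the potential for a hydrogenic $s$-state in atomic units, using the standard textbook normalization of the radial wave function; the principal quantum number and $a_{s}$ are chosen for illustration only (the main text uses $n=50$), and for very high $n$ a direct numerical solution of the radial equation may be preferable to the Laguerre-polynomial evaluation used here.

```python
import numpy as np
from scipy.special import eval_genlaguerre

def R_n0(n, r):
    """Hydrogenic radial wave function R_{n,l=0}(r), r in units of a_0."""
    norm = np.sqrt((2.0 / n)**3 / (2.0 * n**2))
    x = 2.0 * r / n
    return norm * np.exp(-x / 2.0) * eval_genlaguerre(n - 1, 1, x)

def V_ryd(r, n, a_s):
    """Fermi pseudopotential of Eq. (10); |psi_n00|^2 = R_n0(r)^2 / (4 pi)."""
    return 2.0 * np.pi * a_s * R_n0(n, r)**2 / (4.0 * np.pi)

n, a_s = 12, -0.3                           # illustrative values, not those of the main text
r = np.linspace(0.0, 3.0 * n**2, 2000)
V = V_ryd(r, n, a_s)                        # oscillatory and attractive for a_s < 0
# sanity check of the normalization: the integral of R^2 r^2 dr should be ~1
print(np.trapz(R_n0(n, r)**2 * r**2, r))
```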
### A.3 Ion potential
The interaction between an ion and a neutral atom has a long-range tail
$\propto-\alpha/(2r^{4})$, with $\alpha$ the polarizability of the neutral
atom. To avoid problems at short inter-particle distances we include a short-
range regularization, which gives us the ionic interaction potential
$\displaystyle V(r)$
$\displaystyle=-\frac{\alpha/2}{(r^{2}+b^{2})^{2}}\frac{r^{2}-c^{2}}{r^{2}+c^{2}}$
(11)
with characteristic range $R_{\mathrm{Ion}}=\sqrt{2\mu\alpha}$. We fix the
free parameters to be $c=0.0023\,R_{\mathrm{Ion}}$, $\alpha=320$, and
$\mu=86.9092/2\,[\mathrm{au}]$, corresponding to the polarizability and mass
of $^{87}$Rb atoms.
Within the Born approximation, the zero-energy scattering length is
$\displaystyle a_{\mathrm{Ion}}^{\mathrm{B}}$
$\displaystyle=-\frac{R_{\mathrm{ion}}^{2}\pi}{4b}\frac{(b^{2}+2bc-c^{2})}{(b+c)^{2}}.$
(12)
From this, or alternatively from the integral $\rho\int
V_{\mathrm{Ion}}(r)\mathrm{d}^{3}r$, we obtain the density shift
$E_{\mathrm{Ion}}=\frac{2\pi}{\mu}\rho a_{\mathrm{Ion}}^{\mathrm{B}}$ giving
the position of the Gaussian feature in the high density regime.
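A minimal numerical sketch of this estimate (not the production code) is given below: it evaluates the Born scattering length via $a_{\mathrm{Ion}}^{\mathrm{B}}=\frac{\mu}{2\pi}\int V_{\mathrm{Ion}}(r)\,\mathrm{d}^{3}r$, the relation implied by the two routes quoted above, and the corresponding density shift. The regularization parameter $b$ is not quoted in the text, so the value used here is purely illustrative, and the quoted reduced mass is assumed to be given in atomic mass units.

```python
import numpy as np
from scipy.integrate import quad

alpha = 320.0                        # polarizability of the neutral atom (atomic units)
mu = 86.9092 / 2.0 * 1822.888        # reduced mass; the quoted value is assumed to be in amu
R_ion = np.sqrt(2.0 * mu * alpha)    # characteristic range as defined in the text
c = 0.0023 * R_ion                   # regularization parameter from the text
b = 0.05 * R_ion                     # illustrative choice, NOT taken from the text

def V_ion(r):
    """Regularized ion-atom potential of Eq. (11)."""
    return -0.5 * alpha / (r**2 + b**2)**2 * (r**2 - c**2) / (r**2 + c**2)

# Born scattering length a^B = mu/(2 pi) * int V d^3r = 2 mu * int V r^2 dr
integral, _ = quad(lambda r: V_ion(r) * r**2, 0.0, np.inf, limit=200)
a_born = 2.0 * mu * integral
rho = 1.0 / R_ion**3                 # illustrative density of one atom per R_Ion^3
E_ion = 2.0 * np.pi / mu * rho * a_born
print(f"a_Born = {a_born:.4g} a0,  E_Ion = {E_ion:.4g} Hartree")
```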
### A.4 Spectrum of $h$
In the following, we describe our general approach to calculating the spectrum
of a given two-body Hamiltonian $h$ with a generic interaction potential which
can be truncated at some finite distance. When we give details, such as the
number of basis states and dimensions of the quantization volume, they are
specific to the calculation of the Rydberg impurity. The ionic impurity can
require, due to the slower decay of its interaction, a larger basis size and
matching of interacting and free wave functions at a larger radius to achieve
convergence.
We separate the bound state calculation from the continuum state calculation
in order to save computational effort, since we want to avoid diagonalizing a
huge matrix to obtain many hundreds of continuum states.
In our calculations for the continuum of single-particle states
$\ket{\alpha}$, we employ the eigenchannel R-matrix approach. This method
obviates the need to solve the Schrödinger equation numerically over the
entire quantization volume. We partition the space around the impurity into
two distinct regions: one (roughly for $0<r<3n^{2}$) where the Rydberg
potential differs significantly from zero, and another $3n^{2}<r<L$ where the
interaction potential is negligible and the wave function is known
analytically. At the boundary of the interaction volume, we compute the log-
derivative of the wave function at a specific energy $E$,
$-b_{\beta}(E)=\frac{\partial\ln r\psi_{\beta}}{\partial r}.$ (13)
We compute $b_{\beta}$ using the streamlined eigenchannel approach detailed in
Ref. [46], using 500 B-spline functions of order 12 to span the range from the
inner boundary $r_{0}=200a_{0}$ to $r_{1}=3n^{2}$. By matching the analytical log-derivative of the free-particle solutions outside the range of the interaction potential to that of the solutions inside, we compute the energy-
dependent phase shift $\delta(E)$ and the scattering length $a_{\mathrm{imp}}$
of the potential as follows:
$\displaystyle\tan(\delta(E))$
$\displaystyle=\frac{b(E)j_{0}(kr_{1})r_{1}+\partial_{r}j_{0}(kr_{1})}{b(E)y_{0}(kr_{1})r_{1}+\partial_{r}y_{0}(kr_{1})}$
(14) $\displaystyle a_{\mathrm{imp}}$
$\displaystyle=-\mathrm{lim}_{k\rightarrow 0}\tan(\delta(k))/k.$ (15)
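The matching step can be sketched as follows; the snippet uses the textbook form of the $s$-wave matching condition for $u=r\psi$, which corresponds to Eqs. (14)-(15) up to the sign and normalization convention adopted for $b(E)$, and it includes a hard-sphere test case as a consistency check.

```python
import numpy as np

def phase_shift(k, L, r1):
    """s-wave phase shift from matching u ~ sin(k r + delta) to u'/u = L at r = r1."""
    num = k * np.cos(k * r1) - L * np.sin(k * r1)
    den = k * np.sin(k * r1) + L * np.cos(k * r1)
    return np.arctan2(num, den)

# Consistency check with a hard sphere of radius r0 < r1: the interior
# log-derivative is L = k*cot(k*(r1 - r0)) and the exact result is
# delta = -k*r0, i.e. a_imp = r0.
r0, r1 = 2.0, 10.0
k = np.array([1e-3, 1e-2, 1e-1])
L = k / np.tan(k * (r1 - r0))
delta = phase_shift(k, L, r1)
a_imp = -np.tan(delta[0]) / k[0]           # Eq. (15) evaluated at the smallest k
print(delta + k * r0)                       # ~0 if the matching is consistent
print(a_imp)                                # ~r0 = 2.0
```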
In a second step we discretize the continuum by imposing the hard wall
boundary condition at $r=L$. To achieve this, we do an energy search for all
wave functions that have zero amplitude at the box boundary. In total we use
about 10000 interacting states $\ket{\alpha}$ up to an energy cut-off
($E_{\mathrm{max}}=300\,$[MHz]) to represent the continuum. Especially close
to a resonance the continuum couples strongly and a good numerical
representation of the continuum becomes particularly important. The calculated
overlaps of the low-lying box-continuum states with the free BEC wave function
are shown in Figure 6, for two different interaction strengths. In both cases
we see an exponential decay with energy; however, for a value close to resonance, $a_{s}=-0.315a_{0}$ (blue line), the overlaps tend to be one or two
orders of magnitude larger than they are for an interaction strength far from
resonance $a_{s}=-0.2a_{0}$ (red line). This underscores the importance of
considering a comprehensive continuum description, especially in proximity to
a resonance.
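The discretization itself amounts to a root search for the momenta at which the exterior solution vanishes at the box boundary, $\sin[kL+\delta(k)]=0$; the sketch below illustrates this with a zero-range model phase shift, which merely stands in for the R-matrix phase shift of the previous step.

```python
import numpy as np
from scipy.optimize import brentq

L_box = 550.0 * 50**2                       # box radius quoted in the text (units of a_0)
a = 1.0e4                                   # illustrative scattering length

def delta(k):
    """Zero-range model phase shift, used here only as a stand-in."""
    return -np.arctan(k * a)

def boundary(k, n):
    """A node of sin(k L + delta) corresponds to k L + delta(k) = n pi."""
    return k * L_box + delta(k) - n * np.pi

k_free = np.pi * np.arange(1, 51) / L_box   # hard-wall momenta of the free problem
k_box = [brentq(boundary, 0.2 * kf, 2.0 * kf, args=(n,))
         for n, kf in enumerate(k_free, start=1)]
E_box = np.array(k_box)**2 / 2.0            # k^2/(2 mu) with mu = 1 for illustration
```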
Figure 6: The energy-dependent Franck-Condon overlaps of the continuum states of $h$ with the ground state of the non-interacting system $h_{0}$.
To calculate the bound states, we use a basis of around 20000 B-splines
spanning the entire box and standard diagonalization routines designed for
sparse matrices. This step is especially important in order to accurately
obtain bound states close to threshold which decay very slowly at large $r$.
### A.5 Ion in the high density limit
Here we show the absorption spectrum of an ionic impurity for the same
parameters as in the main text, but at a density ten times greater. Here, the
Gaussian lineshape is again clear, and the peak position follows
$E_{\mathrm{Ion}}$. Some deviation from the smooth Gaussian can be seen near one of the Ramsauer-Townsend zeros.
Figure 7: $A(\omega)$ of an ionic impurity at
$\rho=30\,R_{\mathrm{Ion}}^{-3}$. The green line shows the mean-field energy
shift $E_{\mathrm{Ion}}$.
# Generation of arbitrarily polarized GeV lepton beams via nonlinear Breit-
Wheeler process
Kun Xue MOE Key Laboratory for Nonequilibrium Synthesis and Modulation of
Condensed Matter, School of Physics, Xi’an Jiaotong University, Xi’an 710049,
China Ren-Tong Guo MOE Key Laboratory for Nonequilibrium Synthesis and
Modulation of Condensed Matter, School of Physics, Xi’an Jiaotong University,
Xi’an 710049, China Feng Wan MOE Key Laboratory for Nonequilibrium Synthesis
and Modulation of Condensed Matter, School of Physics, Xi’an Jiaotong
University, Xi’an 710049, China Rashid Shaisultanov Max-Planck-Institut für
Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany Helmholtz-Zentrum
Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden, Germany Yue-Yue
Chen Department of Physics, Shanghai Normal University, Shanghai 200234,
China Zhong-Feng Xu MOE Key Laboratory for Nonequilibrium Synthesis and
Modulation of Condensed Matter, School of Physics, Xi’an Jiaotong University,
Xi’an 710049, China Xue-Guang Ren MOE Key Laboratory for Nonequilibrium
Synthesis and Modulation of Condensed Matter, School of Physics, Xi’an
Jiaotong University, Xi’an 710049, China Karen Z. Hatsagortsyan
<EMAIL_ADDRESS>Max-Planck-Institut für Kernphysik,
Saupfercheckweg 1, 69117 Heidelberg, Germany Christoph H. Keitel Max-Planck-
Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg, Germany Jian-
Xing Li<EMAIL_ADDRESS>MOE Key Laboratory for Nonequilibrium Synthesis
and Modulation of Condensed Matter, School of Physics, Xi’an Jiaotong
University, Xi’an 710049, China
###### Abstract
The generation of arbitrarily spin-polarized lepton (here referring specifically to electron and positron) beams is investigated in the single-shot interaction of high-energy polarized $\gamma$ photons with an ultraintense asymmetric laser pulse via nonlinear Breit-Wheeler (BW) pair production. We develop a fully spin-resolved semi-classical Monte Carlo method to describe the pair creation and polarization in the local constant field approximation. In the nonlinear BW process, the polarization of the created pairs is determined simultaneously by the polarization of the parent $\gamma$ photons and by the polarization and asymmetry of the scattering laser field, through the spin angular momentum transfer and the asymmetric spin-dependent pair production probabilities, respectively. In the considered all-optical method, dense GeV lepton beams with an average polarization degree of up to about $80\%$ (adjustable between the transverse and longitudinal components) can be obtained with currently achievable laser facilities; such beams could serve as injectors of polarized $e^{+}e^{-}$ colliders in searches for new physics beyond the Standard Model.
Ultrarelativistic spin-polarized lepton (here referring specifically to electron and positron) beams have many important applications in particle and high-energy physics Wardle _et al._ (1998); Žutić _et al._ (2004); The BESIII Collaboration. (2019), especially at $e^{+}e^{-}$ colliders, such as the
International Linear Collider (ILC) Moortgat-Pick _et al._ (2008); Bear _et
al._ (2013), Compact Linear Collider (CLIC) Usun Simitcioglu _et al._ (2018);
Ari _et al._ (2016) and Circular Electron Positron Collider (CEPC) Duan _et
al._ (2019); Nikitin (2020). In those experiments, the longitudinal
polarization of leptons can change the interaction cross sections and consequently provide high sensitivity Moortgat-Pick _et al._ (2008) through, e.g.,
suppressing background from $WW$ boson and single $Z$ boson production via
$WW$ fusion Moortgat-Pick _et al._ (2008), enhancing different triple gauge
couplings in $WW$ pair production Moortgat-Pick _et al._ (2008); Diehl _et
al._ (2003) and improving top vector coupling in top quark production
Chakraborty _et al._ (2003); meanwhile, the transverse polarization can cause an asymmetric azimuthal distribution of final-state particles Bartl _et al._ (2007) and thus provides a way to study new physics beyond the Standard Model
(BSM) Herczeg (2003); Ananthanarayan and Rindani (2004a, 2005, 2018) through,
e.g., measuring relative phases among helicity amplitudes in $WW$ pair
production Fleischer _et al._ (1994), probing mixture of scalar-electron
states Hikasa (1986) and searching for graviton in extra dimensions Rizzo
(2003). Commonly, longitudinal and transverse polarizations are studied separately, since the corresponding effects are independent of each other Moortgat-Pick _et al._ (2008); Bartl _et al._ (2007). However, it is worth pointing out that arbitrarily spin-polarized (ASP) lepton beams (having both longitudinal and transverse polarization components) also attract broad interest, because they can introduce three mutually orthogonal axes (required for fully reconstructing the density matrix of a spin-1/2 particle), modify effective BSM vertices Dass and Ross (1977); Burgess and Robinson (1991); Ananthanarayan and Rindani (2004a), and thus play a unique role in future BSM experiments at $e^{+}e^{-}$ colliders, e.g., rendering special spin structure functions observable in vector- and axial-vector-type BSM interactions
Ananthanarayan and Rindani (2018), producing polarized top quark pairs as a
probe of new physics Harlander _et al._ (1997); Godbole _et al._ (2006) and
diagnosing spin and chirality structures of new particles in antler-topology
processes Choi _et al._ (2015).
In conventional methods, transversely polarized lepton beams can be directly obtained in a storage ring via the Sokolov-Ternov effect Sokolov and
Ternov (1964, 1968); Baier and Katkov (1967); Baier (1972); Derbenev and
Kondratenko (1973), which demands a long polarization time since large-size
static magnetic fields are relatively weak ($\sim$Tesla), and the
longitudinally polarized ones can be created via high-energy circularly
polarized (CP) $\gamma$ photons interacting with a high-$Z$ target Omori _et
al._ (2006); Alexander _et al._ (2008); Abbott _et al._ (2016) (Bethe-
Heitler $e^{+}e^{-}$ pair production process Heitler (1954)), in which the low
luminosity of $\gamma$ photons requires a large number of repetitions to yield
a dense positron beam for further applications Variola (2014). The transverse
and longitudinal polarizations can be transformed to each other by a spin
rotator, which demands quasi-monoenergetic beams with a risk of beam intensity
reduction Buon and Steffen (1986); Moffeit _et al._ (2005).
Modern ultrashort ultraintense laser pulses Yoon _et al._ (2019); Danson _et
al._ (2019); Gales _et al._ (2018) enable alternative efficient methods to
generate dense polarized lepton beams in femtoseconds via nonlinear quantum
electrodynamics (QED) processes Ritus (1985), e.g. nonlinear Compton
scattering Goldman (1964); Nikishov and Ritus (1964); Brown and Kibble (1964);
CAI and Breit-Wheeler (BW) $e^{+}e^{-}$ pair production Reiss (1962); Ivanov
_et al._ (2005); Seipt and King (2020); Wistisen (2020); Wan _et al._ (2020).
As reported, the leptons can be strongly transversely polarized in a standing-
wave Del Sorbo _et al._ (2017, 2018); Seipt _et al._ (2018), elliptically
polarized (EP) Li _et al._ (2019), or bichromatic laser Seipt _et al._
(2019); Song _et al._ (2019); Chen _et al._ (2019) but not in a
monochromatic symmetric laser Kotkin _et al._ (2003); Ivanov _et al._
(2004); Karlovets (2011). In addition, longitudinally polarized positrons can be
produced by CP $\gamma$ photons through the helicity transfer Li _et al._
(2020a) (similar to the Bethe-Heitler process). Moreover, two-step schemes
have also been proposed: low-energy polarized leptons are first generated by
polarized photocathodes Pierce _et al._ (1980); Kuwahara _et al._ (2012);
Zitzewitz _et al._ (1979), polarized atoms Barth and Smirnova (2013) or
molecular photodissociation Rakitzis _et al._ (2003); Sofikitis _et al._
(2017, 2018), and then accelerated to ultrarelativistic energies via laser-
Wen _et al._ (2019); Wu _et al._ (2019a) or beam-driven Wu _et al._
(2019b); Nie _et al._ (2021) plasma wakefield (conventional accelerators work
as well). All those proposals provide leptons that are either strongly transversely or longitudinally polarized; however, how to generate the above-mentioned ASP lepton beams remains a great challenge.
Figure 1: Scenario of generation of ASP lepton beams via nonlinear BW
process. An LP asymmetric laser pulse (propagating along the $-\hat{z}$ direction and polarized along the $\hat{x}$ axis) collides head-on with polarized $\gamma$
photons ($\gamma_{p}$) to create ASP electron and positron beams. (a1)-(a3)
indicate LP, CP and EP $\gamma$ photons, respectively, and (b1)-(b3) show
average polarizations of created positrons $\overline{\bm{S}}_{+}$
corresponding to (a1)-(a3), respectively. The red-solid arrow and ellipsoid
indicate the direction and amplitude of $\overline{\bm{S}}_{+}$, respectively.
The red-dashed arrows in (b1) ($\parallel\hat{{y}}$) and (b2)
($\parallel\hat{{z}}$) indicate particular cases of neglecting the
polarization of $\gamma$ photons and employing symmetric laser field,
respectively.
In this Letter, the generation of ASP GeV lepton beams has been investigated
in the interaction of polarized $\gamma$ photons with a counter-propagating
ultraintense linearly polarized (LP) asymmetric laser pulse [see the
interaction scenario in Fig. 1]. We develop a fully spin-resolved semi-
classical Monte Carlo algorithm to describe photon-polarization-dependent pair
production and polarization in nonlinear BW process. We find that the
polarization of created pairs is simultaneously determined by the polarization
of parent $\gamma$ photons, the polarization and asymmetry of scattering laser
field, due to the spin angular momentum transfer and the asymmetric spin-
dependent pair production probabilities, respectively [see details in Fig. 2
and Eq. (4)]. When unpolarized $\gamma$ photons are employed, or the polarization of the parent $\gamma$ photons is ignored, the pair polarization relies only on the laser polarization and asymmetry [see Fig. 1(b1)]; when a symmetric laser field is employed, the transverse polarization of the total beam will be
suppressed (since the polarization directions in adjacent half laser cycles
are opposite) and only the longitudinal polarization can be retained [see Fig.
1(b2)]. Our simulations show that dense GeV lepton beams with adjustable
polarization degree up to about 80% can be obtained with currently achievable
laser facilities to the benefit of many unique applications [see details in
Fig. 3].
Figure 2: (a) and (b): Polarization of a sample positron ${\bm{S}}_{+}$ created
by LP and EP $\gamma$ photons, respectively. In our interaction scheme [see
Fig. 1] we employ $\hat{\bf P}_{1}=\hat{\bm{E}}\approx\hat{{\bf a}}_{+}$ and
$\hat{\bf P}_{2}=-\hat{\bm{B}}\approx\hat{{\bm{b}}}_{+}$.
${\overline{\bm{P}}}_{+}$ indicates the direction of the instantaneous SQA
with two transverse components ${\bm{D}}_{mag}$, ${\bm{D}}_{pol}^{(LP)}$ and
one longitudinal component $D_{pol}^{(CP)}$ [see Eq.(3)]. For LP $\gamma$
photon in (a), $D_{pol}^{(CP)}=0$ and $\theta_{\alpha}$ indicates the
polarization angle to $\hat{\bf P}_{1}$. $\theta_{1}$ and $\theta_{2}$ are the
angles of $\hat{\bf P}$ to ${\bm{D}}_{pol.}^{(LP)}$ and ${\bm{D}}_{mag.}$,
respectively. (c) and (d): Average polarization of positrons
$\overline{\bm{S}}_{+}$ created by LP and EP $\gamma$ photons, respectively.
The red arrow and circle (ellipsoid) indicate the direction and amplitude of
$\overline{\bm{S}}_{+}$ [see Eq. (4)], respectively.
In a strong laser field, a rich pair yield via the nonlinear BW process requires the
nonlinear QED parameter
$\chi_{\gamma}\equiv|e|\sqrt{-(F_{\mu\nu}k_{\gamma}^{\nu})^{2}}/m^{3}\gtrsim
1$ Ritus (1985); Baier _et al._ (1998), and the created pairs may further
emit photons via nonlinear Compton scattering, which is non-negligible when the other nonlinear QED parameter
$\chi_{e}\equiv|e|\sqrt{-(F_{\mu\nu}p^{\nu})^{2}}/m^{3}\gtrsim 1$ Ritus
(1985). Here, $e$ and $m$ are the electron charge and mass, respectively,
$k_{\gamma}$ and $p$ the 4-momenta of $\gamma$ photon and positron (electron),
respectively, and $F_{\mu\nu}$ the field tensor. Relativistic units with
$c=\hbar=1$ are used throughout. The photon polarization can be characterized
by the unit vector $\hat{\bf P}={\rm cos}(\theta_{\alpha})\hat{\bf P}_{1}+{\rm
sin}(\theta_{\alpha})\hat{\bf P}_{2}\cdot e^{i\theta_{\beta}}$, and
corresponding Stokes parameters are $(\xi_{1},\xi_{2},\xi_{3})=[{\rm
sin}(2\theta_{\alpha}){\rm cos}(\theta_{\beta})$, ${\rm
sin}(2\theta_{\alpha}){\rm sin}(\theta_{\beta})$, ${\rm
cos}(2\theta_{\alpha})]$ McMaster (1961); Boyarkin (2011). Here $\hat{{\bf
P}}_{1}$ and $\hat{{\bf P}}_{2}$ are two orthogonal basis vectors,
$\theta_{\alpha}$ the polarization angle, $\theta_{\beta}$ the absolute phase,
$\xi_{1}$ and $\xi_{3}$ describe the linear polarizations, and $\xi_{2}$
circular polarization. The fully spin-resolved pair production probability
$W_{pair}$ has been calculated via the QED operator method of Baier-Katkov-
Fadin Baier _et al._ (1973) in the local constant field approximation Ritus
(1985); Baier _et al._ (1998); Di Piazza _et al._ (2018); Ilderton (2019);
Di Piazza _et al._ (2019); Seipt and King (2020) (valid at the invariant
field parameter $a_{0}=|e|E_{0}/(m\omega_{0})\gg 1$, where $E_{0}$ and $\omega_{0}$ are the laser field amplitude and frequency, respectively); see the full expression of $W_{pair}$ in sup .
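For later reference, the Stokes parameters follow directly from $(\theta_{\alpha},\theta_{\beta})$, as the short sketch below encodes (the angles are illustrative).

```python
import numpy as np

def stokes(theta_alpha, theta_beta):
    """Stokes parameters (xi1, xi2, xi3) of the gamma photon."""
    return np.array([np.sin(2 * theta_alpha) * np.cos(theta_beta),
                     np.sin(2 * theta_alpha) * np.sin(theta_beta),
                     np.cos(2 * theta_alpha)])

xi = stokes(np.deg2rad(50.0), np.deg2rad(70.0))   # the case used later in Fig. 3
print(xi, np.linalg.norm(xi))                     # fully polarized photon: |xi| = 1
```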
Let us first summarize our methods of numerical simulation and analytical estimation. Note that in the nonlinear BW process the polarization of electrons is similar to that of positrons; thus, for simplicity, we only discuss the case
of positrons below. To study the positron polarization ${\bm{S}}_{+}$, we
first sum over the electron polarization ${\bm{S}}_{-}$ in $W_{pair}$, and the
probability relying on ${\bm{S}}_{+}$ is simplified as:
$\displaystyle\frac{{\rm d}^{2}W_{pair}^{+}}{{\rm d}\varepsilon_{+}{\rm
d}t}=W_{0}(C+{\bm{S}}_{+}\cdot{\bm{D}}),$ (1)
where $W_{0}=\alpha m^{2}/\left(\sqrt{3}\pi\varepsilon_{\gamma}^{2}\right)$,
$C={\rm
IntK}_{\frac{1}{3}}(\rho)+\frac{\varepsilon_{-}^{2}+\varepsilon_{+}^{2}}{\varepsilon_{-}\varepsilon_{+}}{\rm
K}_{\frac{2}{3}}(\rho)-\xi_{3}{\rm K}_{\frac{2}{3}}(\rho)$,
${\bm{D}}=\left(\xi_{3}\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}-\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}\right){\rm
K}_{\frac{1}{3}}(\rho)\hat{{\bm{b}}}_{+}-\xi_{1}\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}{\rm
K}_{\frac{1}{3}}(\rho)\hat{{\bf
a}}_{+}+\xi_{2}\bigg{[}\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{\varepsilon_{-}\varepsilon_{+}}{\rm
K}_{\frac{2}{3}}(\rho)+\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}{\rm
IntK}_{\frac{1}{3}}(\rho)\bigg{]}\hat{\bf v}_{+}$, $\hat{\bm{b}}_{+}=\hat{\bf
v}_{+}\times\hat{{\bf a}}_{+}$
$\approx-\hat{\bm{k}}_{\gamma}\times\hat{\bm{E}}=-\hat{\bm{B}}$ is a unit
vector anti-parallel to the magnetic field ${\bm{B}}$ (with direction vector
$\hat{\bm{B}}$) in the rest frame of positron, ${\bm{E}}$ the electric field
with direction vector $\hat{\bm{E}}$, $\hat{\bf v}_{+}$ and $\hat{\bf a}_{+}$
the unit vectors of the positron velocity and acceleration, respectively,
$\rho=2\varepsilon_{\gamma}^{2}/\left(3\chi_{\gamma}\varepsilon_{-}\varepsilon_{+}\right)$,
${\rm IntK}_{\frac{1}{3}}(\rho)\equiv\int_{\rho}^{\infty}{\rm d}z{\rm
K}_{\frac{1}{3}}(z)$, ${\rm K}_{n}$ the $n$-order modified Bessel function of
the second kind, $\alpha$ the fine structure constant, $\varepsilon_{\gamma}$,
$\varepsilon_{-}$ and $\varepsilon_{+}$ the energies of parent $\gamma$
photon, created electron and positron, respectively, with
$\varepsilon_{\gamma}=\varepsilon_{+}+\varepsilon_{-}$. When a $\gamma$ photon
decays into a pair, the positron spin state is collapsed into one of its basis
states defined by the instantaneous spin quantization axis (SQA) along the
energy-resolved average polarization vector Baier _et al._ (1973):
$\displaystyle{\rm SQA}\parallel{\overline{\bm{P}}_{+}}={\bm{D}}/C,$ (2)
which can be rewritten as
$\displaystyle{\overline{\bm{P}}}_{+}$ $\displaystyle=$
$\displaystyle\left[{\bm{D}}_{mag.}+{\bm{D}}_{pol.}^{(LP)}+{\bm{D}}_{pol.}^{(CP)}\right]/C$
(3) $\displaystyle=$
$\displaystyle\left[|{\bm{D}}_{mag.}|\hat{\bm{D}}_{mag.}+|{\bm{D}}_{pol.}^{(LP)}|\hat{\bm{D}}_{pol.}^{(LP)}+|{\bm{D}}_{pol.}^{(CP)}|\hat{\bm{D}}_{pol.}^{(CP)}\right]/C,$
where $\hat{\bm{D}}_{mag.}=-\hat{{\bf b}}_{+}\approx\hat{\bm{B}}$,
$\hat{\bm{D}}_{pol.}^{(LP)}=\xi_{3}\hat{{\bm{b}}}_{+}-\xi_{1}\hat{{\bf
a}}_{+}$ and $\hat{\bm{D}}_{pol.}^{(CP)}=\xi_{2}\hat{{\bf v}}_{+}$ rely on the
magnetic field $\hat{\bm{B}}$, linear polarizations $\xi_{1}$ and $\xi_{3}$,
and circular polarization $\xi_{2}$, respectively, with corresponding factors
$|{\bm{D}}_{mag.}|=\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}{\rm
K}_{\frac{1}{3}}(\rho)$,
$|{\bm{D}}_{pol.}^{(LP)}|=\frac{\varepsilon_{\gamma}}{\varepsilon_{-}}{\rm
K}_{\frac{1}{3}}(\rho)$ and
$|{\bm{D}}_{pol.}^{(CP)}|=\frac{\varepsilon_{+}^{2}-\varepsilon_{-}^{2}}{\varepsilon_{-}\varepsilon_{+}}{\rm
K}_{\frac{2}{3}}(\rho)+\frac{\varepsilon_{\gamma}}{\varepsilon_{+}}{\rm
IntK}_{\frac{1}{3}}(\rho)$, respectively. When the photon polarization is
ignored, ${\rm SQA}\parallel\hat{\bm{D}}_{mag.}\parallel\hat{\bm{B}}$, i.e.,
the positrons are polarized along the magnetic field ${\bm{B}}$ Chen _et al._
(2019) [see Fig. 1(b1)]. The polarization geometries of positrons created by
LP and EP $\gamma$ photons are illustrated in Figs. 2(a) and (b),
respectively. For LP $\gamma$ photon, $\theta_{\beta}=0$,
$\hat{\bm{D}}_{pol.}^{(CP)}=0$, $(\xi_{1},\xi_{2},\xi_{3})=[{\rm
sin}(2\theta_{\alpha})$, 0, ${\rm cos}(2\theta_{\alpha})]$, and
$\hat{\bm{D}}_{pol.}^{(LP)}=\xi_{3}\hat{{\bf b}}_{+}-\xi_{1}\hat{{\bf
a}}_{+}={\rm cos}(2\theta_{\alpha})\hat{\bf P}_{2}-{\rm
sin}(2\theta_{\alpha})\hat{\bf P}_{1}$. For general EP $\gamma$ photon with
$\xi_{2}\neq 0$, the longitudinal polarization component must be taken into
account.
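As a numerical illustration of Eqs. (1)-(3), the sketch below evaluates $C$, the components of ${\bm D}$ in the local $(\hat{\bf a}_{+},\hat{\bm b}_{+},\hat{\bf v}_{+})$ basis, and the resulting SQA direction for one set of energies; the energies, $\chi_{\gamma}$ and Stokes parameters are illustrative inputs rather than outputs of the full simulation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def intK13(rho):
    """IntK_{1/3}(rho) = integral of K_{1/3}(z) from rho to infinity."""
    return quad(lambda z: kv(1.0 / 3.0, z), rho, np.inf)[0]

def sqa(eps_gamma, eps_plus, chi_gamma, xi):
    """SQA direction D/C of Eqs. (2)-(3) in the (a_+, b_+, v_+) basis."""
    xi1, xi2, xi3 = xi
    eps_minus = eps_gamma - eps_plus
    rho = 2.0 * eps_gamma**2 / (3.0 * chi_gamma * eps_minus * eps_plus)
    K13, K23, IK13 = kv(1/3, rho), kv(2/3, rho), intK13(rho)
    C = IK13 + (eps_minus**2 + eps_plus**2) / (eps_minus * eps_plus) * K23 - xi3 * K23
    D_a = -xi1 * eps_gamma / eps_minus * K13
    D_b = (xi3 * eps_gamma / eps_minus - eps_gamma / eps_plus) * K13
    D_v = xi2 * ((eps_plus**2 - eps_minus**2) / (eps_minus * eps_plus) * K23
                 + eps_gamma / eps_plus * IK13)
    return np.array([D_a, D_b, D_v]) / C

# 1.8 GeV photon, equal energy sharing, chi_gamma ~ 1, LP photon with
# theta_alpha = 90 deg, i.e. (xi1, xi2, xi3) = (0, 0, -1): the SQA points
# along -b_+, i.e. along the magnetic field, as stated in the text.
print(sqa(1.8, 0.9, 1.0, (0.0, 0.0, -1.0)))
```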
For a positron beam, the average transverse polarizations
$\overline{\bm{S}}_{T}$ in adjacent half laser cycles are opposite and cancel
each other out due to the laser field oscillation, and consequently,
$\overline{\bm{S}}_{T}$ is proportional to the asymmetry of the laser field,
which can be characterized by the relative deviation between the pair
production probabilities in positive and negative half cycles ${\cal
A}_{field}=(W_{pair}^{+,pos.}-W_{pair}^{+,neg.})/(W_{pair}^{+,pos.}+W_{pair}^{+,neg.})$.
Thus, one can estimate
$\displaystyle\overline{{\bm{S}}}_{+}=\left\\{{{\cal
A}_{field}\cdot\left[\overline{\bm{D}}_{mag.}+\overline{\bm{D}}_{pol.}^{(LP)}\right]+\overline{\bm{D}}_{pol.}^{(CP)}}\right\\}/{\overline{C}},$
(4)
where
$\overline{\bm{D}}_{mag.}=\int_{0}^{\varepsilon_{\gamma}}{\bm{D}}_{mag.}{\rm
d}\varepsilon_{+}$,
$\overline{\bm{D}}_{pol.}^{(LP)}=\int_{0}^{\varepsilon_{\gamma}}{\bm{D}}_{pol.}^{(LP)}{\rm
d}\varepsilon_{+}$,
$\overline{\bm{D}}_{pol.}^{(CP)}=\int_{0}^{\varepsilon_{\gamma}}{\bm{D}}_{pol.}^{(CP)}{\rm
d}\varepsilon_{+}$ and $\overline{C}=\int_{0}^{\varepsilon_{\gamma}}C{\rm
d}\varepsilon_{+}$. For LP $\gamma$ photons,
$|\overline{\bm{D}}_{mag.}|=|\overline{\bm{D}}_{pol.}^{(LP)}|$ with
${\theta}_{1}={\theta}_{2}$ results in
$\overline{\bm{S}}_{+}\parallel-\hat{\bf P}$ [see Fig. 2(c)]; for more general
EP ones, as $\theta_{\beta}$ increases, $\overline{\bm{S}}_{+}$ will rotate
anticlockwise by an azimuthal angle $\varphi$, which can be calculated by Eq.
(4) [see Fig. 2(d)]. $\overline{\bm{S}}_{T}={\cal A}_{field}{{\cal A}}_{pol.}$
is dominated by ${\cal A}_{field}$ with ${\cal
A}_{pol.}=\left[\overline{\bm{D}}_{mag.}+\overline{\bm{D}}_{pol.}^{(LP)}\right]/\overline{C}$
[see Fig. 4(a)], while the average longitudinal polarization
$\overline{{\bm{S}}}_{L}$ is solely determined by
$\overline{\bm{D}}_{pol.}^{(CP)}/\overline{C}\propto\xi_{2}$ as expected. In a symmetric laser field with ${\cal A}_{field}=0$, $\overline{\bm{S}}_{T}$ is negligible and only $\overline{{\bm{S}}}_{L}$ can be obtained by employing
longitudinally polarized $\gamma$ photons ($\xi_{2}\neq 0$) Li _et al._
(2020a) [see Fig. 1(b2)]. The momentum and spin dynamics of the pairs
propagating through the laser field are calculated following a Monte Carlo
algorithm Wan _et al._ (2020), including the radiative depolarization effects
and spin precession Thomas (1926, 1927); Bargmann _et al._ (1959). See more
details of our simulation method in sup .
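A rough numerical version of this estimate, re-using the Bessel-function kernels of the previous sketch, integrates the components of ${\bm D}$ and $C$ over the positron energy and weights the transverse part by ${\cal A}_{field}$; because the local basis is treated as fixed and $\chi_{\gamma}$ as constant, it reproduces Eq. (4) only at the level of the analytical estimate.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def d_and_c(eps_gamma, eps_plus, chi_gamma, xi):
    """Components of D in the (a_+, b_+, v_+) basis and C, cf. Eqs. (1)-(3)."""
    xi1, xi2, xi3 = xi
    em = eps_gamma - eps_plus
    rho = 2.0 * eps_gamma**2 / (3.0 * chi_gamma * em * eps_plus)
    K13, K23 = kv(1/3, rho), kv(2/3, rho)
    IK13 = quad(lambda z: kv(1/3, z), rho, np.inf)[0]
    C = IK13 + (em**2 + eps_plus**2) / (em * eps_plus) * K23 - xi3 * K23
    D_a = -xi1 * eps_gamma / em * K13
    D_b = (xi3 * eps_gamma / em - eps_gamma / eps_plus) * K13
    D_v = xi2 * ((eps_plus**2 - em**2) / (em * eps_plus) * K23
                 + eps_gamma / eps_plus * IK13)
    return np.array([D_a, D_b, D_v, C])

eps_gamma, chi_gamma, A_field = 1.8, 0.96, 0.8378    # values quoted in the text
ta, tb = np.deg2rad(50.0), np.deg2rad(70.0)
xi = (np.sin(2*ta)*np.cos(tb), np.sin(2*ta)*np.sin(tb), np.cos(2*ta))
# energy tails, where the yield is negligible, are cut for numerical convenience
eps = np.linspace(0.02 * eps_gamma, 0.98 * eps_gamma, 300)
vals = np.array([d_and_c(eps_gamma, e, chi_gamma, xi) for e in eps])
Da, Db, Dv, C = (np.trapz(vals[:, i], eps) for i in range(4))
S_T = A_field * np.hypot(Da, Db) / C                  # average transverse polarization
S_L = Dv / C                                          # average longitudinal polarization
print(f"S_T ~ {S_T:.3f},  S_L ~ {S_L:.3f}")
```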
Figure 3: (a) and (b): $\overline{\bm{S}}_{T}$ and $\overline{\bm{S}}_{L}$ of
positrons with respect to $\theta_{\alpha}$ and $\theta_{\beta}$,
respectively, analytically estimated by Eq. (4) with ${\cal
A}_{field}=0.8378$. The black points correspond to
($\theta_{\alpha}=50^{\circ}$, $\theta_{\beta}=70^{\circ}$). (c)-(f) are the
numerical simulation results with ($\theta_{\alpha}=50^{\circ}$,
$\theta_{\beta}=70^{\circ}$). (c)-(e): $\overline{\bm{S}}_{T}$,
$\overline{\bm{S}}_{L}$ and the angle-resolved positron density ${\rm
log}_{10}({\rm d}^{2}N_{+}/{\rm d}\theta_{x}{\rm d}\theta_{y})$ (rad$^{-2}$) with
respect to the deflection angles $\theta_{x}$ = arctan($p_{+,x}/p_{+,z}$) and
$\theta_{y}$ = arctan($p_{+,y}/p_{+,z}$), respectively. The black curves
indicate $\overline{\bm{S}}_{T}$, $\overline{\bm{S}}_{L}$ and ${\rm
log}_{10}({\rm d}N_{+}/{\rm d}\theta_{x})$ (mrad$^{-1}$) summing over $\theta_{y}$
vs $\theta_{x}$, respectively. (f) $\overline{\bm{S}}_{T}$ (red),
$\overline{\bm{S}}_{L}$ (blue) and the energy-resolved positron density ${\rm
log}_{10}({\rm d}N_{+}/{\rm d}\varepsilon_{+})$ (GeV$^{-1}$) (green) vs
$\varepsilon_{+}$. Other parameters are given in the text.
We now illustrate typical results for the created ASP positron beams in Fig. 3.
The employed laser and $\gamma$ photon parameters are as follows. A tightly
focused LP bichromatic Gaussian laser pulse Salamin and Keitel (2002); sup (a
frequency-chirped laser pulse Galow _et al._ (2011) can work similarly)
propagates along the $-\hat{z}$ direction and is polarized along the $\hat{x}$ axis [see
Fig. 1], with a phase difference $\Delta\phi=\pi/2$ to obtain the maximal
field asymmetry, peak intensity $I_{0}\approx 1.11\times 10^{22}$ W/cm$^{2}$
(corresponding invariant field parameters $a_{1}=60$ and $a_{2}=15$),
wavelengths $\lambda_{1}=2\lambda_{2}=1\mu$m, pulse durations
$\tau_{1}=\tau_{2}=15T_{1}$ with periods $T_{1}=2T_{2}$, and focal radii
$w_{1}=w_{2}=5\mu$m. This kind of laser pulse is currently feasible in
petawatt laser facilities Yoon _et al._ (2019); Danson _et al._ (2019);
Gales _et al._ (2018). Meanwhile, a cylindrical, polarized $\gamma$ photon beam propagates along the $\hat{z}$ direction, with an initial energy
$\varepsilon_{\gamma}=1.8$GeV, energy spread
$\Delta\varepsilon_{\gamma}/\varepsilon_{\gamma}=0.06$, angular divergence
$\Delta\theta_{\gamma}=0.3$mrad, beam radius $w_{\gamma}=1\mu$m, beam length
$L_{\gamma}=5\mu$m, photon number $N_{\gamma}=10^{6}$ and density
$n_{\gamma}\approx 6.37\times 10^{16}\,$cm$^{-3}$, having a transversely Gaussian and
longitudinally uniform distribution. Such a $\gamma$ photon beam may be
generated via synchrotron radiation Alexander _et al._ (2008); Baynham _et
al._ (2011), bremsstrahlung Abbott _et al._ (2016), linear Omori _et al._
(2006) or nonlinear Compton scattering King and Tang (2020); Tang _et al._
(2020); Li _et al._ (2020b). The pair production is substantial at these
parameters since $\overline{\chi}_{\gamma}\approx 0.96$.
Figure 4: (a) ${\cal A}_{pol.}$ (black-dash-dotted), ${\cal A}_{field}$
(blue-dashed) and $\overline{\bm{S}}_{T}$ (red-solid) analytically calculated
by Eq. (4) vs $a_{1}$ ($a_{2}=a_{1}/4$), and the green points
$\overline{\bm{S}}_{T}$ are our numerical results.
$\theta_{\alpha}=90^{\circ}$. At $a_{1}$ = 60, 80 and 120, respectively, the
corresponding $\overline{\chi}_{\gamma}\approx$ 0.96, 1.20 and 1.63,
respectively. (b) $\overline{\bm{S}}_{T}$ and the yield ratio of positrons
$N_{+}/N_{\gamma}$ (green-solid) vs $\theta_{\alpha}$. The red-solid, blue-
dash-dotted and black-dotted curves indicate the results of
$\overline{\bm{S}}_{T}$ calculated by our Monte Carlo method, artificially
ignoring the radiative depolarization effects of the positrons propagating
through the laser field, and excluding the polarization effects of parent
$\gamma$ photons in nonlinear BW process, respectively. (c) and (d):
${\bm{S}}_{T}$ (red-solid) and
log${}_{10}($d$N_{+}/$d$\varepsilon_{+})$ (GeV$^{-1}$) (blue) vs $\varepsilon_{+}$
at $\theta_{\alpha}=90^{\circ}$ and $\theta_{\alpha}=0^{\circ}$, respectively.
Here $\theta_{\beta}=0^{\circ}$, and other parameters are the same as those in
Fig. 3.
According to Eq. (4), the polarizations of created positron beam
($\overline{\bm{S}}_{T}$ and $\overline{\bm{S}}_{L}$) can be controlled by
adjusting the polarization of parent $\gamma$ photons ($\theta_{\alpha}$ and
$\theta_{\beta}$) [see Figs. 3(a) and (b)], which indicates the spin angular
momentum transfer from parent $\gamma$ photons to created pairs.
$\overline{\bm{S}}_{T}\propto\sqrt{\xi_{1}^{2}+\xi_{3}^{2}}=\sqrt{{\rm
sin}^{2}(2\theta_{\alpha}){\rm cos}^{2}(\theta_{\beta})+{\rm
cos}^{2}(2\theta_{\alpha})}$ is mainly determined by $\theta_{\alpha}$ and can
reach above 80%, while $\overline{\bm{S}}_{L}\propto\xi_{2}={\rm
sin}(2\theta_{\alpha}){\rm sin}(\theta_{\beta})$ periodically varies with
respect to $\theta_{\alpha}$ and $\theta_{\beta}$ and its amplitude can
achieve about 60%. For a specific case with $\theta_{\alpha}=50^{\circ}$ and
$\theta_{\beta}=70^{\circ}$, the analytical estimations are
$\overline{\bm{S}}_{T}\approx 63.3\%$ and $\overline{\bm{S}}_{L}\approx
53.5\%$ [see the black-star points in Figs. 3(a) and (b)], and corresponding
numerical results are $\overline{\bm{S}}_{T}\approx 62.1\%$ and
$\overline{\bm{S}}_{L}\approx 50.3\%$ sup . The small deviations arise because the analytical estimates employ a constant ${\cal A}_{field}$, which actually has spatial and temporal profiles in our numerical simulations.
As the created positrons propagate through the laser field, the average
polarizations slightly decrease to $\overline{\bm{S}}_{T}\approx 59.4\%$ and
$\overline{\bm{S}}_{L}\approx 44.8\%$ [see Figs. 3(c) and (d)] due to the
quantum radiative depolarization effects Li _et al._ (2019) [see sup and
Fig. 4(b)]. $\overline{\bm{S}}_{T}\propto{\cal A}_{field}$ is asymmetric in
angular distribution due to the asymmetric ${\cal A}_{field}$, which does not
affect the symmetry of $\overline{\bm{S}}_{L}\propto\xi_{2}$. The yield rate
of positrons $N_{+}/N_{\gamma}\approx 0.1$ is much higher than that of the
common method employing Bethe-Heitler pair production ($\sim
10^{-3}-10^{-4}$) Omori _et al._ (2006); Alexander _et al._ (2008); Abbott
_et al._ (2016), since $N_{+}\sim\alpha a_{0}$ is rather large in ultraintense
laser field Ritus (1985); Baier _et al._ (1973); see Fig. 3(e). The flux of
the positron beam is approximately $3.0\times 10^{19}$/s with duration
$\tau_{+}\approx\tau_{\gamma}$ (due to the relativistic effect). The emittance
$\epsilon\approx w_{+}\theta_{div.}\sim 10^{-2}$ mm$\cdot$mrad fulfills the
experimental requirements of the beam injectors Artru _et al._ (2008), with
radius $w_{+}\approx w_{\gamma}=1\mu$m and angular divergence
$\theta_{div.}\sim 30$mrad [FWHM in Fig. 3(e)]. Due to the stochastic effects
of the pair production and further radiation, the energy of the positron beam
spreads with a density peak at $\varepsilon_{+}\approx 0.3$GeV [see Fig.
3(f)]. With the increase of $\varepsilon_{+}$, $\overline{\bm{S}}_{T}$
($\overline{\bm{S}}_{L}$) monotonically decreases (increases) from above 90%
(0%) to about $50\%$ (above $85\%$), since the pair polarization is mainly
determined by the polarization of the laser (parent $\gamma$ photons) at low
(high) $\varepsilon_{+}$ (see sup ). For $\varepsilon_{+}=$ 0.4GeV, 0.8GeV and
1.2GeV, respectively, corresponding $\overline{\bm{S}}_{T}$
($\overline{\bm{S}}_{L}$) is about 58.7%, 52.6% and 49.8% (42.6%, 63.4% and
78.8%), respectively, and brilliance of about 4.6$\times 10^{20}$, 1.4$\times 10^{20}$ and 0.5$\times 10^{20}$ positrons/(s mm$^{2}$ mrad$^{2}$ 0.1% BW), respectively, with angular divergence (FWHM) of about 24.9 mrad$^{2}$, 15.9 mrad$^{2}$ and 13.0 mrad$^{2}$,
respectively.
We underline that as the ultra-relativistic laser intensity $a_{0}(\sim
a_{1})\propto\chi_{\gamma}$ continuously increases, $W_{pair}^{+,pos.}$ and
$W_{pair}^{+,neg.}$ both gradually approach saturation,
and thus $\overline{\bm{S}}_{T}\propto{\cal A}_{field}$ decreases continuously
[see Fig. 4(a)]. When unpolarized $\gamma$ photons are employed, or the polarization effects of the $\gamma$ photons in the nonlinear BW process are artificially ignored, $\overline{\bm{S}}_{T}$ is only about 50.8% for the given parameters; employing polarized $\gamma$ photons, however, $\overline{\bm{S}}_{T}$ can reach up to
76.2% with a peak yield rate $N_{+}/N_{\gamma}\approx 0.11$ at
$\theta_{\alpha}=90^{\circ}$. In a broad range of
$45^{\circ}\lesssim\theta_{\alpha}\lesssim 135^{\circ}$ one can obtain
$\overline{\bm{S}}_{T}\gtrsim 60\%$ with $N_{+}/N_{\gamma}\gtrsim 0.08$ [see
Fig. 4(b)]. The energy-resolved $\overline{\bm{S}}_{T}$ and densities at
$\theta_{\alpha}=90^{\circ}$ and $0^{\circ}$ are shown in Figs. 4(c) and (d), respectively. Interestingly, at $\theta_{\alpha}=0^{\circ}$, even though $\overline{\bm{S}}_{T}$ is very small, highly polarized positron beams can still be obtained by energy selection with magnets.
For experimental feasibility, the impact of the laser and $\gamma$ photon
parameters (e.g., the laser intensity, pulse duration, colliding angle, and
the angular spreading, energy and energy spreading of the $\gamma$ photon
beam) on the positron polarization is investigated, and the results remain robust (see sup ). Moreover, in the more general case in which an electron beam collides head-on with an LP bichromatic laser pulse to generate positrons, the positrons are only weakly transversely polarized, since the polarization of
intermediate $\gamma$ photons is nearly parallel to that of the laser field
sup .
In conclusion, we reveal the fully spin-resolved pair polarization mechanism
in nonlinear BW process, which can be observed by the average polarization
vector in an asymmetric (e.g. well known bichromatic and frequency-chirped)
laser field. We further put forward an efficient method to generate dense ASP GeV lepton beams with a polarization degree of up to about $80\%$ with currently achievable petawatt laser facilities Yoon _et al._ (2019); Danson _et al._
(2019); Gales _et al._ (2018), which have unique applications for high-energy
physics and particle physics, in particular, as injectors of the polarized
$e^{+}e^{-}$ colliders for searching for BSM new physics Moortgat-Pick _et
al._ (2008); Ananthanarayan and Rindani (2004b, 2005, 2018); Herczeg (2003);
Dass and Ross (1977); Burgess and Robinson (1991); Ananthanarayan and Rindani
(2004a); Harlander _et al._ (1997); Godbole _et al._ (2006); Rizzo (2003);
Choi _et al._ (2015).
Acknowledgement: This work is supported by the National Natural Science
Foundation of China (Grants Nos. 12022506, 11874295, 11875219 and 11905169),
and the National Key R&D Program of China (Grant No. 2018YFA0404801).
## References
* Wardle _et al._ (1998) J. F. C. Wardle, D. C. Homan, R. Ojha, and D. H. Roberts, “Electron-positron jets associated with the quasar 3c279,” Nature 395, 457–461 (1998).
* Žutić _et al._ (2004) Igor Žutić, Jaroslav Fabian, and S. Das Sarma, “Spintronics: Fundamentals and applications,” Rev. Mod. Phys. 76, 323–410 (2004).
* The BESIII Collaboration. (2019) M. Ablikim _et al._ (BESIII Collaboration), “Polarization and Entanglement in Baryon-Antibaryon Pair Production in Electron-Positron Annihilation,” Nat. Phys. 15, 631–634 (2019).
* Moortgat-Pick _et al._ (2008) G. Moortgat-Pick, T. Abe, G. Alexander, B. Ananthanarayan, A.A. Babich, V. Bharadwaj, D. Barber, A. Bartl, A. Brachmann, S. Chen, J. Clarke, J.E. Clendenin, J. Dainton, K. Desch, M. Diehl, B. Dobos, T. Dorland, H.K. Dreiner, H. Eberl, J. Ellis, K. Flöttmann, H. Fraas, F. Franco-Sollova, F. Franke, A. Freitas, J. Goodson, J. Gray, A. Han, S. Heinemeyer, S. Hesselbach, T. Hirose, K. Hohenwarter-Sodek, A. Juste, J. Kalinowski, T. Kernreiter, O. Kittel, S. Kraml, U. Langenfeld, W. Majerotto, A. Martinez, H.-U. Martyn, A. Mikhailichenko, C. Milstene, W. Menges, N. Meyners, K. Mönig, K. Moffeit, S. Moretti, O. Nachtmann, F. Nagel, T. Nakanishi, U. Nauenberg, H. Nowak, T. Omori, P. Osland, A.A. Pankov, N. Paver, R. Pitthan, R. Pöschl, W. Porod, J. Proulx, P. Richardson, S. Riemann, S.D. Rindani, T.G. Rizzo, A. Schälicke, P. Schüler, C. Schwanenberger, D. Scott, J. Sheppard, R.K. Singh, A. Sopczak, H. Spiesberger, A. Stahl, H. Steiner, A. Wagner, A.M. Weber, G. Weiglein, G.W. Wilson, M. Woods, P. Zerwas, J. Zhang, and F. Zomer, “Polarized positrons and electrons at the linear collider,” Phys. Rep. 460, 131 – 243 (2008).
* Bear _et al._ (2013) Howard Bear, Tim Barklow, Keisuke Fujii, Yuanning Gao, Andre Hoang, Shinya Kanemura, Jenny List, Heather E. Logan, Andrei Nomerotski, Maxim Perelstein, Michael E. Peskin, Roman Pöschl, Jürgen Reuter, Sabine Riemann, Aurore Savoy-Navarro, Geraldine Servant, Tim M. P. Tait, and Jaehoon Yu, “The international linear collider technical design report - volume 2: Physics,” (2013), arXiv:1306.6352 [hep-ph] .
* Usun Simitcioglu _et al._ (2018) Aysegul Usun Simitcioglu, Ilhan Tapan, R. Tomas, J. Barranco, and M. Beckmann, “Beam offset impact on the polarization in the clic beam delivery system,” Can. J. Phys. 96, 1266–1271 (2018).
* Ari _et al._ (2016) V. Ari, A.A. Billur, S.C. İnan, and M. Köksal, “Anomalous $ww\gamma$ couplings with beam polarization at the compact linear collider,” Nucl. Phys. B 906, 211 – 230 (2016).
* Duan _et al._ (2019) Zhe Duan, Jie Gao, Xiaoping Li, Dou Wang, Yiwei Wang, Wenhao Xia, Qingjin Xu, Chenghui Yu, and Yuan Zhang, “Concepts of longitudinally polarized electron and positron colliding beams in the circular electron positron collider,” in _10th International Particle Accelerator Conference_ (2019) p. MOPMP012.
* Nikitin (2020) Sergei Nikitin, “Polarization issues in circular electron positron super-colliders,” Int. J. Mod. Phys. A 35, 2041001 (2020).
* Diehl _et al._ (2003) M. Diehl, O. Nachtmann, and F. Nagel, “Probing triple gauge couplings with transverse beam polarisation in $e^{+}e^{-}\rightarrow W^{+}W^{-}$,” Eur. Phys. J. C 32, 17–27 (2003).
* Chakraborty _et al._ (2003) Dhiman Chakraborty, Jacobo Konigsberg, and David Rainwater, “Top-quark physics,” Annu. Rev. Nucl. Part. Sci. 53, 301–351 (2003).
* Bartl _et al._ (2007) Alfred Bartl, Karl Hohenwarter-Sodek, Thomas Kernreiter, and Olaf Kittel, “Cp asymmetries with longitudinal and transverse beam polarizations in neutralino production and decay into the z0 boson at the ilc,” J. High Energy Phys. 2007, 079–079 (2007).
* Herczeg (2003) Peter Herczeg, “Cp-violating electron-nucleon interactions from leptoquark exchange,” Phys. Rev. D 68, 116004 (2003).
* Ananthanarayan and Rindani (2004a) B. Ananthanarayan and Saurabh D. Rindani, “CP violation at a linear collider with transverse polarization,” Phys. Rev. D 70, 036005 (2004a).
* Ananthanarayan and Rindani (2005) B. Ananthanarayan and Saurabh D. Rindani, “Transverse beam polarization and cp violation in $e^{+}e^{-}\rightarrow\gamma z$ with contact interactions,” Phys. Lett. B 606, 107–115 (2005).
* Ananthanarayan and Rindani (2018) B. Ananthanarayan and Saurabh D. Rindani, “Inclusive spin-momentum analysis and new physics at a polarized electron-positron collider,” Eur. Phys. J. C 78, 1–17 (2018).
* Fleischer _et al._ (1994) J. Fleischer, K. Kołodziej, and F. Jegerlehner, “Transverse versus longitudinal polarization effects in $e^{+}e^{-}\rightarrow W^{+}W^{-}$,” Phys. Rev. D 49, 2174–2187 (1994).
* Hikasa (1986) Ken-ichi Hikasa, “Transverse-polarization effects in ${e}^{+}$${e}^{\mathrm{-}}$ collisions: The role of chiral symmetry,” Phys. Rev. D 33, 3203–3223 (1986).
* Rizzo (2003) Thomas G. Rizzo, “Transverse polarization signatures of extra dimensions at linear colliders,” J. High Energy Phys. 7, 157–172 (2003).
* Dass and Ross (1977) G.V. Dass and G.G. Ross, “Neutral and weak currents in $e^{+}e^{-}$ annihilation to hadrons,” Nucl. Phys. B 118, 284–310 (1977).
* Burgess and Robinson (1991) C. P. Burgess and J. A. Robinson, “Transverse polarization at $e^{+}e^{-}$ colliders and CP violation from new physics,” Int. J. Mod. Phys. A 6, 2707–2728 (1991).
* Harlander _et al._ (1997) R. Harlander, M. Jeżabek, J. H. Kühn, and M. Peter, “Top quark polarization in polarized $e^{+}e^{-}$ annihilation near threshold,” Z. Phys. C: Part. Fields 73, 477–494 (1997).
* Godbole _et al._ (2006) Rohini M Godbole, Saurabh D Rindani, and Ritesh K Singh, “Lepton distribution as a probe of new physics in production and decay of thetquark and its polarization,” J. High Energy Phys. 2006, 021–021 (2006).
* Choi _et al._ (2015) S. Y. Choi, N. D. Christensen, D. Salmon, and X. Wang, “Spin and chirality effects in antler-topology processes at high energy $e^{+}e^{-}$ colliders,” Eur. Phys. J. C 75, 481 (2015).
* Sokolov and Ternov (1964) A. A. Sokolov and I. M. Ternov, “On polarization and spin effects in the theory of synchrotron radiation,” Sov. Phys. Dokl. 8, 1203 (1964).
* Sokolov and Ternov (1968) A. A. Sokolov and I. M. Ternov, _Synchrotron Radiation_ (Akademic, Germany, 1968).
* Baier and Katkov (1967) V. N. Baier and V. M. Katkov, “Radiational polarization of electrons in inhomogeneous magnetic field,” Phys. Lett. A 24, 327–329 (1967).
* Baier (1972) V. N. Baier, “Radiative polarization of electron in storage rings,” Sov. Phys. Usp. 14, 695 (1972).
* Derbenev and Kondratenko (1973) Y. Derbenev and A. M. Kondratenko, “Polarization kinematics of particles in storage rings,” Sov. Phys. JETP 37, 968 (1973).
* Omori _et al._ (2006) T. Omori, M. Fukuda, T. Hirose, Y. Kurihara, R. Kuroda, M. Nomura, A. Ohashi, T. Okugi, K. Sakaue, T. Saito, J. Urakawa, M. Washio, and I. Yamazaki, “Efficient propagation of polarization from laser photons to positrons through compton scattering and electron-positron pair creation,” Phys. Rev. Lett. 96, 114801 (2006).
* Alexander _et al._ (2008) G. Alexander, J. Barley, Y. Batygin, S. Berridge, V. Bharadwaj, G. Bower, W. Bugg, F.-J. Decker, R. Dollan, Y. Efremenko, V. Gharibyan, C. Hast, R. Iverson, H. Kolanoski, J. Kovermann, K. Laihem, T. Lohse, K. T. McDonald, A. A. Mikhailichenko, G. A. Moortgat-Pick, P. Pahl, R. Pitthan, R. Pöschl, E. Reinherz-Aronis, S. Riemann, A. Schälicke, K. P. Schüler, T. Schweizer, D. Scott, J. C. Sheppard, A. Stahl, Z. M. Szalata, D. Walz, and A. W. Weidemann, “Observation of polarized positrons from an undulator-based source,” Phys. Rev. Lett. 100, 210801 (2008).
* Abbott _et al._ (2016) D. Abbott, P. Adderley, A. Adeyemi, P. Aguilera, M. Ali, _et al._ (PEPPo Collaboration), “Production of highly polarized positrons using polarized electrons at mev energies,” Phys. Rev. Lett. 116, 214801 (2016).
* Heitler (1954) W. Heitler, _The Quantum Theory of Radiation_ (Clarendon Press, Oxford, 1954).
* Variola (2014) A. Variola, “Advanced positron sources,” Nucl. Instr. Meth. Phys. Res. A 740, 21–26 (2014).
* Buon and Steffen (1986) Jean Buon and Klaus Steffen, “Hera variable-energy ”mini” spin rotator and head-on ep collision scheme with choice of electron helicity,” Nucl. Instrum. Methods Phys. Res., Sect. A 245, 248–261 (1986).
* Moffeit _et al._ (2005) K. Moffeit, M. Woods, P. Schüler, K. Mönig, and P. Bambade, “Spin rotation schemes at the ilc for two interaction regions and positron polarization with both helicities,” SLAC-TN-05-045, LCC-0159, IPBI-TN-2005-2 (2005).
* Yoon _et al._ (2019) Jin Woo Yoon, Cheonha Jeon, Junghoon Shin, Seong Ku Lee, Hwang Woon Lee, Il Woo Choi, Hyung Taek Kim, Jae Hee Sung, and Chang Hee Nam, “Achieving the laser intensity of $5.5\times 10^{22}$ w/cm2 with a wavefront-corrected multi-pw laser,” Opt. Express 27, 20412–20420 (2019).
* Danson _et al._ (2019) Colin N. Danson, Constantin Haefner, Jake Bromage, Thomas Butcher, Jean-Christophe F. Chanteloup, Enam A. Chowdhury, Almantas Galvanauskas, Leonida A. Gizzi, Joachim Hein, David I. Hillier, and et al., “Petawatt and exawatt class lasers worldwide,” High Power Laser Sci. Eng. 7, e54 (2019).
* Gales _et al._ (2018) S. Gales, K. A. Tanaka, D. L. Balabanski, F. Negoita, D. Stutman, O. Tesileanu, C. A. Ur, D. Ursescu, I. An-drei, S. Ataman, M. O. Cernaianu, L. DAlessi, I. Dancus, B. Diaconescu, N. Djourelov, D. Filipescu, P. Ghenuche, D. G. Ghita, C. Matei, K. Seto, M. Zeng, and N. V. Zamfir, “The extreme light infrastructure nuclear physics (eli-np) facility: new horizons in physics with 10 pw ultra-intense lasers and 20 mev brilliant gamma beams,” Rep. Prog. Phys. 81, 094301 (2018).
* Ritus (1985) V. I. Ritus, “Quantum effects of the interaction of elementary particles with an intense electromagnetic field,” J. Sov. Laser Res. 6, 497 (1985).
* Goldman (1964) I. I. Goldman, “Intensity effects in compton scattering,” Sov. Phys. JETP 19, 954 (1964), [Zh. Eksp. Teor. Fiz. 46, 1412 (1964)].
* Nikishov and Ritus (1964) A. I. Nikishov and V. I. Ritus, “Quantum processes in the field of a plane electromagnetic wave and in a constant field. i,” Sov. Phys. JETP 19, 529 (1964), [Zh. Eksp. Teor. Fiz. 46, 776 (1964)].
* Brown and Kibble (1964) Lowell S. Brown and T. W. B. Kibble, “Interaction of intense laser beams with electrons,” Phys. Rev. 133, A705–A719 (1964).
* (44) K. Yokoya, CAIN2.42 Users Manual., https://ilc.kek.jp/~yokoya/CAIN/Cain242/.
* Reiss (1962) Howard R. Reiss, “Absorption of light by light,” J. Math. Phys. 3, 59–67 (1962).
* Ivanov _et al._ (2005) D. Y. Ivanov, G. L. Kotkin, and V. G. Serbo, “Complete description of polarization effects in $e^{+}e^{-}$ pair production by a photon in the field of a strong laser wave,” Eur. Phys. J. C 40, 27 (2005).
* Seipt and King (2020) D. Seipt and B. King, “Spin- and polarization-dependent locally-constant-field-approximation rates for nonlinear Compton and Breit-Wheeler processes,” Phys. Rev. A 102, 052805 (2020).
* Wistisen (2020) Tobias N. Wistisen, “Numerical approach to the semiclassical method of pair production for arbitrary spins and photon polarization,” Phys. Rev. D 101, 076017 (2020).
* Wan _et al._ (2020) Feng Wan, Yu Wang, Ren-Tong Guo, Yue-Yue Chen, Rashid Shaisultanov, Zhong-Feng Xu, Karen Z. Hatsagortsyan, Christoph H. Keitel, and Jian-Xing Li, “High-energy $\gamma$-photon polarization in nonlinear Breit-Wheeler pair production and $\gamma$ polarimetry,” Phys. Rev. Research 2, 032049 (2020).
* Del Sorbo _et al._ (2017) D. Del Sorbo, D. Seipt, T. G. Blackburn, A. G. R. Thomas, C. D. Murphy, J. G. Kirk, and C. P. Ridgers, “Spin polarization of electrons by ultraintense lasers,” Phys. Rev. A 96, 043407 (2017).
* Del Sorbo _et al._ (2018) D. Del Sorbo, D. Seipt, A. G. R. Thomas, and C. P. Ridgers, “Electron spin polarization in realistic trajectories around the magnetic node of two counter-propagating, circularly polarized, ultra-intense lasers,” Plasma Phys. Control. Fusion 60, 064003 (2018).
* Seipt _et al._ (2018) D. Seipt, D. Del Sorbo, C. P. Ridgers, and A. G. R. Thomas, “Theory of radiative electron polarization in strong laser fields,” Phys. Rev. A 98, 023417 (2018).
* Li _et al._ (2019) Yan-Fei Li, Rashid Shaisultanov, Karen Z. Hatsagortsyan, Feng Wan, Christoph H. Keitel, and Jian-Xing Li, “Ultrarelativistic electron-beam polarization in single-shot interaction with an ultraintense laser pulse,” Phys. Rev. Lett. 122, 154801 (2019).
* Seipt _et al._ (2019) Daniel Seipt, Dario Del Sorbo, Christopher P. Ridgers, and Alec G. R. Thomas, “Ultrafast polarization of an electron beam in an intense bichromatic laser field,” Phys. Rev. A 100, 061402(R) (2019).
* Song _et al._ (2019) Huai-Hang Song, Wei-Min Wang, Jian-Xing Li, Yan-Fei Li, and Yu-Tong Li, “Spin-polarization effects of an ultrarelativistic electron beam in an ultraintense two-color laser pulse,” Phys. Rev. A 100, 033407 (2019).
* Chen _et al._ (2019) Yue-Yue Chen, Pei-Lun He, Rashid Shaisultanov, Karen Z. Hatsagortsyan, and Christoph H. Keitel, “Polarized positron beams via intense two-color laser pulses,” Phys. Rev. Lett. 123, 174801 (2019).
* Kotkin _et al._ (2003) G. L. Kotkin, V. G. Serbo, and V. I. Telnov, “Electron (positron) beam polarization by compton scattering on circularly polarized laser photons,” Phys. Rev. ST Accel. Beams 6, 011001 (2003).
* Ivanov _et al._ (2004) D. Yu. Ivanov, G. L. Kotkin, and V. G. Serbo, “Complete description of polarization effects in emission of a photon by an electron in the field of a strong laser wave,” Eur. Phys. J. C 36, 127–145 (2004).
* Karlovets (2011) Dmitry V. Karlovets, “Radiative polarization of electrons in a strong laser wave,” Phys. Rev. A 84, 062116 (2011).
* Li _et al._ (2020a) Yan-Fei Li, Yue-Yue Chen, Wei-Min Wang, and Hua-Si Hu, “Production of highly polarized positron beams via helicity transfer from polarized electrons in a strong laser field,” Phys. Rev. Lett. 125, 044802 (2020a).
* Pierce _et al._ (1980) D. T. Pierce, R. J. Celotta, G.-C. Wang, W. N. Unertl, A. Galejs, C. E. Kuyatt, and S. R. Mielczarek, “The GaAs spin polarized electron source,” Rev. Sci. Instrum. 51, 478–499 (1980).
* Kuwahara _et al._ (2012) M. Kuwahara, S. Kusunoki, X. G. Jin, T. Nakanishi, Y. Takeda, K. Saitoh, T. Ujihara, H. Asano, and N. Tanaka, “30-kV spin-polarized transmission electron microscope with GaAs-GaAsP strained superlattice photocathode,” Appl. Phys. Lett. 101, 033102 (2012).
* Zitzewitz _et al._ (1979) P. W. Zitzewitz, J. C. Van House, A. Rich, and D. W. Gidley, “Spin polarization of low-energy positron beams,” Phys. Rev. Lett. 43, 1281–1284 (1979).
* Barth and Smirnova (2013) Ingo Barth and Olga Smirnova, “Spin-polarized electrons produced by strong-field ionization,” Phys. Rev. A 88, 013401 (2013).
* Rakitzis _et al._ (2003) T. P. Rakitzis, P. C. Samartzis, R. L. Toomes, T. N. Kitsopoulos, Alex Brown, G. G. Balint-Kurti, O. S. Vasyutinskii, and J. A. Beswick, “Spin-Polarized Hydrogen Atoms from Molecular Photodissociation,” Science 300, 1936–1938 (2003).
* Sofikitis _et al._ (2017) Dimitris Sofikitis, Pavle Glodic, Greta Koumarianou, Hongyan Jiang, Lykourgos Bougas, Peter C. Samartzis, Alexander Andreev, and T. Peter Rakitzis, “Highly nuclear-spin-polarized deuterium atoms from the uv photodissociation of deuterium iodide,” Phys. Rev. Lett. 118, 233401 (2017).
* Sofikitis _et al._ (2018) Dimitris Sofikitis, Chrysovalantis S. Kannis, Gregoris K. Boulogiannis, and T. Peter Rakitzis, “Ultrahigh-density spin-polarized h and d observed via magnetization quantum beats,” Phys. Rev. Lett. 121, 083001 (2018).
* Wen _et al._ (2019) Meng Wen, Matteo Tamburini, and Christoph H. Keitel, “Polarized laser-wakefield-accelerated kiloampere electron beams,” Phys. Rev. Lett. 122, 214801 (2019).
* Wu _et al._ (2019a) Yitong Wu, Liangliang Ji, Xuesong Geng, Qin Yu, Nengwen Wang, Bo Feng, Zhao Guo, Weiqing Wang, Chengyu Qin, Xue Yan, Lingang Zhang, Johannes Thomas, Anna Hützen, Markus Büscher, T Peter Rakitzis, Alexander Pukhov, Baifei Shen, and Ruxin Li, “Polarized electron-beam acceleration driven by vortex laser pulses,” New Journal of Physics 21, 073052 (2019a).
* Wu _et al._ (2019b) Yitong Wu, Liangliang Ji, Xuesong Geng, Qin Yu, Nengwen Wang, Bo Feng, Zhao Guo, Weiqing Wang, Chengyu Qin, Xue Yan, Lingang Zhang, Johannes Thomas, Anna Hützen, Alexander Pukhov, Markus Büscher, Baifei Shen, and Ruxin Li, “Polarized electron acceleration in beam-driven plasma wakefield based on density down-ramp injection,” Phys. Rev. E 100, 043202 (2019b).
* Nie _et al._ (2021) Zan Nie, Fei Li, Felipe Morales, Serguei Patchkovskii, Olga Smirnova, Weiming An, Noa Nambu, Daniel Matteo, Kenneth A. Marsh, Frank Tsung, Warren B. Mori, and Chan Joshi, “ In Situ Generation of High-Energy Spin-Polarized Electrons in a Beam-Driven Plasma Wakefield Accelerator ,” Physical Review Letters 126, 54801 (2021), 2101.10378 .
* Baier _et al._ (1998) V. N. Baier, V. M. Katkov, and V. M. Strakhovenko, _Electromagnetic Processes at High Energies in Oriented Single Crystals_ (World Scientific, Singapore, 1998).
* McMaster (1961) William H. McMaster, “Matrix representation of polarization,” Rev. Mod. Phys. 33, 8–28 (1961).
* Boyarkin (2011) O.M. Boyarkin, _Advanced particle physics volume I: Particles, fields, and quantum electrodynamics_ (2011).
* Baier _et al._ (1973) V. N. Baier, V. M. Katkov, and V. S. Fadin, _Radiation from relativistic electrons_ (Atomizdat, Moscow, 1973).
* Di Piazza _et al._ (2018) A. Di Piazza, M. Tamburini, S. Meuren, and C. H. Keitel, “Implementing nonlinear compton scattering beyond the local-constant-field approximation,” Phys. Rev. A 98, 012134 (2018).
* Ilderton (2019) A. Ilderton, “Note on the conjectured breakdown of qed perturbation theory in strong fields,” Phys. Rev. D 99, 085002 (2019).
* Di Piazza _et al._ (2019) A. Di Piazza, M. Tamburini, S. Meuren, and C. H. Keitel, “Improved local-constant-field approximation for strong-field qed codes,” Phys. Rev. A 99, 022125 (2019).
* (79) See Supplemental Material for details on the employed laser fields, the applied simulation methods, and the simulation results for other laser or $\gamma$ photon parameters.
* Thomas (1926) L. H. Thomas, “The motion of the spinning electron,” Nature (London) 117, 514 (1926).
* Thomas (1927) L. H. Thomas, “The kinematics of an electron with an axis,” Philos. Mag. 3, 1–22 (1927).
* Bargmann _et al._ (1959) V. Bargmann, Louis Michel, and V. L. Telegdi, “Precession of the polarization of particles moving in a homogeneous electromagnetic field,” Phys. Rev. Lett. 2, 435–436 (1959).
* Salamin and Keitel (2002) Yousef I. Salamin and Christoph H. Keitel, “Electron acceleration by a tightly focused laser beam,” Phys. Rev. Lett. 88, 095005 (2002).
* Galow _et al._ (2011) Benjamin J. Galow, Yousef I. Salamin, Tatyana V. Liseykina, Zoltán Harman, and Christoph H. Keitel, “Dense monoenergetic proton beams from chirped laser-plasma interaction,” Phys. Rev. Lett. 107, 185002 (2011).
* Baynham _et al._ (2011) D. E. Baynham, Y. Ivanyushenkov, J. A. Clarke, A. Brummitt, V. Bayliss, T. Bradshaw, D. J. Scott, J. Rochford, S. Carr, G. Burton, O. Taylor, and A. Lintern, “Demonstration of a High-Field Short-Period Superconducting Helical Undulator Suitable for Future TeV-Scale Linear Collider Positron Sources,” Phys. Rev. Lett. 107, 1–5 (2011).
* Abbott et al. (2016) D. Abbott et al. (PEPPo Collaboration), “Production of highly polarized positrons using polarized electrons at mev energies,” Phys. Rev. Lett. 116, 214801 (2016).
* King and Tang (2020) B. King and S. Tang, “Nonlinear compton scattering of polarized photons in plane-wave backgrounds,” Phys. Rev. A 102, 022809 (2020).
* Tang _et al._ (2020) S. Tang, B. King, and H. Hu, “Highly polarised gamma photons from electron-laser collisions,” Phys. Lett. B 809 (2020).
* Li _et al._ (2020b) Yan-Fei Li, Rashid Shaisultanov, Yue-Yue Chen, Feng Wan, Karen Z. Hatsagortsyan, Christoph H. Keitel, and Jian-Xing Li, “Polarized ultrashort brilliant multi-gev $\gamma$ rays via single-shot laser-electron interaction,” Phys. Rev. Lett. 124, 014801 (2020b).
* Artru _et al._ (2008) X. Artru, R. Chehab, M. Chevallier, V.M. Strakhovenko, A. Variola, and A. Vivoli, “Polarized and unpolarized positron sources for electron-positron colliders,” Nucl. Instrum. Methods Phys. Res., Sect. B 266, 3868 – 3875 (2008).
* Leemans and Esarey (2009) Wim Leemans and Eric Esarey, “Laser-driven plasma-wave electron accelerators,” Phys. Today 62, 44–49 (2009).
* Luo _et al._ (2018) J. Luo, M. Chen, W. Y. Wu, S. M. Weng, Z. M. Sheng, C. B. Schroeder, D. A. Jaroszynski, E. Esarey, W. P. Leemans, W. B. Mori, and J. Zhang, “Multistage coupling of laser-wakefield accelerators with curved plasma channels,” Phys. Rev. Lett. 120, 154801 (2018).
* Ananthanarayan and Rindani (2004b) B. Ananthanarayan and Saurabh D. Rindani, “Cp violation at a linear collider with transverse polarization,” Phys. Rev. D 70, 036005 (2004b).
|
Prime Minister Research Fellow; Jibaben Patel Chair in Artificial Intelligence
CVIG Lab, IIT Gandhinagar, Gujarat, India
<EMAIL_ADDRESS>
# DeepPS2: Revisiting Photometric Stereo using Two Differently Illuminated
Images
Ashish Tiwari (0000-0002-4462-6086) and Shanmuganathan Raman (0000-0003-2718-7891)
###### Abstract
Photometric stereo, a problem of recovering 3D surface normals using images of
an object captured under different lightings, has been of great interest and
importance in computer vision research. Despite the success of existing
traditional and deep learning-based methods, it is still challenging due to:
(i) the requirement of three or more differently illuminated images, (ii) the
inability to model unknown general reflectance, and (iii) the requirement of
accurate 3D ground truth surface normals and known lighting information for
training. In this work, we attempt to address an under-explored problem of
photometric stereo using just two differently illuminated images, referred to
as the PS2 problem. It is an intermediate case between a single image-based
reconstruction method like Shape from Shading (SfS) and the traditional
Photometric Stereo (PS), which requires three or more images. We propose an
inverse rendering-based deep learning framework, called DeepPS2, that jointly
performs surface normal, albedo, lighting estimation, and image relighting in
a completely self-supervised manner with no requirement of ground truth data.
We demonstrate how image relighting in conjunction with image reconstruction
enhances the lighting estimation in a self-supervised setting. (This work was supported by the SERB IMPRINT 2 Grant.)
###### Keywords:
Photometric Stereo, Deep Learning, Inverse Rendering, Image Relighting
## 1 Introduction
Inferring the 3D shape of the objects using digital images is a fundamental
and challenging task in computer vision research. It directly extends to
quality control, virtual/augmented reality, medical diagnosis, e-commerce,
etc. The widely used geometric approaches to shape recovery such as binocular
[20, 41] or multi-view stereo [37, 11, 24, 23, 25] methods require images from
different viewpoints to triangulate the 3D points. However, they rely heavily
on the success of image feature matching techniques and fall short of
recovering finer details such as indentations, imprints, and scratches. The
photometric methods for 3D shape reconstruction use shading cues from either a
single image - Shape from Shading (SfS) [14] or at least three images -
Photometric Stereo (PS) [45] to recover surface normals and are known to
better preserve the finer surface details.
What are the bottlenecks? The SfS problem is ill-posed due to the underlying
convex/concave ambiguity and the fact that infinite surface normals exist to
explain the intensity at each pixel [32]. The PS methods are known to handle
such ambiguities and provide a unique surface normal defining the intensity at
each pixel by using three or more differently illuminated images. However, the
well-posed traditional photometric stereo problem (as introduced by Woodham
[45]) assumes the surfaces to be purely Lambertian, which is seldom the case
in the real world. Several recent methods [16, 49, 9, 8, 7, 10] have also
addressed shape estimation for non-Lambertian surfaces with unknown
reflectance properties. However, they require more images ($\sim 50-100$) as
input. While there are methods that require as few as six (or even fewer)
images [27], our goal is to resort to just two images under a photometric
stereo setting, referred to as a PS2 problem.
Scope of the PS2 problem. The scope of this work is to address the photometric
stereo problem in an intermediate setting with two images ($m=2$) between SfS
($m=1$) and the traditional PS ($m\geq 3$). It can essentially be viewed as a
degenerate case of the typical three-source photometric stereo setting in which
shadows deprive the images of meaningful information [13]. Another use case of the
PS2 problem arises in the 3D reconstruction of non-rigid objects [12].
When an object is imaged under three light sources, one could be occluded by
the object, and only the other two would provide meaningful cues. Therefore, a
PS2 problem needs to be solved in such cases. Further, the PS2 problem arises
when $m\geq 3$ and light sources are coplanar. Such a situation typically
occurs when the scene is illuminated by the sun and hence, applies to outdoor
PS as well [36, 18].
Constraints in addressing the PS2 problem. There exist, in general, two formulations of
photometric stereo - the differential and the non-differential formulation.
Several normal fields can offer solutions to the PS2 problem.
Under either setting, a remedy is to perform an exhaustive search
among these normal fields and pick the one that characterizes
the underlying shape most smoothly. In other words, the task is to find the normal field
that best satisfies the smoothness constraint [29]. The differential approach
of PS implicitly enforces such smoothness. However, it requires explicit
knowledge of the surface boundary conditions [28], which is rarely available
or requires regularization [13], which is generally tedious owing to heavy
parameter tuning. A few methods [28, 33] have put forward ways to address the
PS2 problem based on the non-differential formulation by recasting it as a
binary labeling problem. While such optimization problems can be solved using
graph-cut-based algorithms [5], they require the albedo to be known.
Can deep neural networks offer a solution? Owing to its success and
applicability in solving the general PS problem, we intend to address the PS2
problem using deep neural networks. The core idea is to use a deep neural
network to model unknown general surfaces with complex Bidirectional
Reflectance Distribution Functions (BRDFs). The photometric stereo problem
using deep neural networks has been addressed either under a calibrated (known
lightings) or an uncalibrated (unknown lightings) setting. While most of these
methods require 3D ground truth supervision [16, 49, 9, 8, 7, 10, 26, 40], only
limited progress has been made toward addressing PS in a self-supervised manner [19].
However, such self-supervised and uncalibrated methods still require ground
truth supervision for lighting estimation.
In this work, we introduce an inverse-rendering-based deep learning framework,
called DeepPS2, to address the PS2 problem and work towards developing a
completely uncalibrated and self-supervised method. The core idea is to
utilize the shading cues from two differently illuminated images to obtain the
3D surface normals. DeepPS2 is designed to perform albedo estimation, lighting
estimation, image relighting, and image reconstruction without any supervision
using 3D surface normals and/or lighting. While image reconstruction is
commonly adopted in the existing unsupervised/self-supervised approaches, the
appropriate design considerations to perform image relighting using the
estimated lightings bring out several interesting insights about the proposed
framework.
Contributions. The following are the key contributions of this work.
* •
We introduce DeepPS2, an uncalibrated deep learning-based photometric stereo
method that jointly performs surface normal, albedo, and lighting estimation
in a self-supervised setting. The workflow of the proposed framework follows
the principles of inverse-rendering.
* •
We propose a self-supervised lighting estimation through light space
discretization and inclusion of image relighting (using the estimated
lightings) along with image reconstruction.
* •
We propose to explicitly model the specularities through albedo refinement and
estimated illumination.
* •
To the best of our knowledge, ours is the first work to address the PS2
problem in a deep learning setting and lighting estimation in a self-
supervised setting for the task at hand.
## 2 Related Work
This section reviews the literature on the PS2 problem and some recent deep
learning-based photometric stereo methods.
The PS2 Problem. Some earlier works have addressed this problem in a
traditional non-learning setting. Onn and Bruckstein [29] discussed the
ambiguities in determining surface normals using two images and proposed to
use integrability constraint to handle such ambiguities. Sato and Ikeuchi [36]
used their method to solve the problem with $m\geq 3$ images under solar
illumination, which in a sense addresses the PS2 problem [45]. Later, Yang _et
al._ [47] studied the problem, particularly for the convex objects. Kozera
provided an analytical resolution to the differential formulation of PS2 [22].
Since 1995 (for over ten years later), only Ikeda [15] addressed the PS2
problem by essentially considering the second image as an auxiliary to better
solve the SFS problem. Recently, Queau _et al._ [33] addressed the PS2 problem
using a graph cut based optimization method. Further, the problem of outdoor
PS is being re-explored in several works [1, 2]. While these methods attempt
to provide a numerical resolution to the PS problem [28, 33], we intend to
address it using the capacity of deep neural networks.
Deep Learning-based methods. Deep learning has seen great progress in many
areas of computer vision, including photometric stereo [49, 9, 7, 10, 16, 35].
Santo _et al._ [35] were the first to propose a deep learning-based method to
obtain per-pixel surface normals. However, they were limited by the pre-
defined order of pixels at the input. Later, Chen _et al._ in their subsequent
works [9, 7, 10] proposed to model the spatial information using feature-extraction
and feature-pooling based strategies for photometric stereo.
Further, the works by Yao _et al._ [48] and Wang _et al._ [43] proposed to
extract and combine the local and global features for better photometric
understanding. However, all these methods require ground truth surface normals
for supervision which is generally difficult to obtain. Recently, Taniai &
Maehara [40] proposed a self-supervised network to directly output the surface
normal using a set of images and reconstruct them. However, their method
required known lightings as input. Kaya _et al._ [19] expanded their method to
deal with inter-reflections and address photometric stereo in an uncalibrated
setting. However, the lighting estimation was still performed using ground
truth supervision. Other methods such as Lichy _et al._ [27], and Boss _et
al._ [4] predicted shape and material using three or less and two images (one
with and one without flash), respectively. While LERPS [42] infers lighting
and surface normal from a single image, it requires multiple images (one at a
time) for training. We work towards an uncalibrated photometric stereo method
that uses only two differently illuminated images as the input while
estimating lightings, surface normals, and albedos, all in a self-supervised
manner.
## 3 Understanding PS2: Photometric Stereo using Two Images
Before describing the PS2 problem that we are interested to address, we would
like to review some key features of the SfS [14] and the traditional PS
problem [45]. We assume that an orthographic camera images the surface under
uniform directional lighting with viewing direction $\boldsymbol{v}\in\rm
I\\!R^{3}$ pointing along the z-direction and the image plane parallel to the
$XY$ plane of the 3D Cartesian coordinate system $XYZ$.
### 3.1 Shape from Shading (SfS)
Consider an anisotropic non-Lambertian surface $f$ characterised by the
Bidirectional Reflectance Distribution Function (BRDF) $\rho$. For each
surface point $(x,y)$ characterized by the surface normal
$\boldsymbol{n}\in\rm I\\!R^{3}$ and illuminated by the light source in the
direction $\boldsymbol{\ell}\in\rm I\\!R^{3}$, the image formation of the
surface viewed from the direction $\boldsymbol{v}\in\rm I\\!R^{3}$ is given by
Equation 1.
$\boldsymbol{I}(x,y)=\rho(\boldsymbol{n},\boldsymbol{\ell},\boldsymbol{v})\psi_{f,s}(x,y)\left[\boldsymbol{n}(x,y)^{T}\boldsymbol{\ell}\right]+\epsilon$
(1)
Here, $\psi_{f,s}(x,y)$ specifies the attached and the cast shadows. It is
equal to 0, if $(x,y)$ is shadowed and equal to 1, otherwise. $\epsilon$
incorporates the global illumination and noise effect. $\boldsymbol{I}(x,y)$
is the normalized gray level with respect to the light source intensity.
Clearly, with albedo and lightings known a priori, the surface normals
$\boldsymbol{n}(x,y)$ in the revolution cone around the lighting direction
$\boldsymbol{\ell}$ constitute the set of infinite solutions to Equation 1.
Therefore, it becomes an ill-posed problem and is difficult to solve locally.
### 3.2 Photometric Stereo (PS)
The simplest solution to overcome the ill-posedness of SfS is to have $m\geq
2$ differently illuminated images of the object taken from the same viewpoint.
In general, for multiple light sources, the formulation described in Equation
1 extends to the following.
$\boldsymbol{I}_{j}(x,y)=\rho(\boldsymbol{n},\boldsymbol{\ell}_{j},\boldsymbol{v})\psi_{f,s}(x,y)\left[\boldsymbol{n}(x,y)^{T}\boldsymbol{\ell}_{j}\right]+\epsilon_{j}$
(2)
Here, the equation is specific to the $j^{th}$ light source. For $m\geq 3$ and
a Lambertian surface, Equation 2 formulates a photometric stereo problem (the
traditional one for $m=3$). Solving such a system is advantageous as it is
well-posed and can be solved locally, unlike SfS.
### 3.3 The PS2 problem
With such a non-differential formulation (as in Equation 2), the three
unknowns $(n_{x},n_{y},n_{z})$ can be obtained by solving three or more linear
equations. However, such a formulation is tricky to solve under two scenarios:
(i) when the light sources are coplanar (rank-deficient formulation) and (ii)
when $m=2$. These scenarios lead us to the formulation of the PS2 problem -
photometric stereo with two images, as described in Equation 3.
$\rho(\boldsymbol{n},\boldsymbol{\ell}_{1},\boldsymbol{v})\psi_{f,s}(x,y)\left[\boldsymbol{n}(x,y)^{T}\boldsymbol{\ell}_{1}\right]+\epsilon_{1}=\boldsymbol{I}_{1}(x,y)$
$\rho(\boldsymbol{n},\boldsymbol{\ell}_{2},\boldsymbol{v})\psi_{f,s}(x,y)\left[\boldsymbol{n}(x,y)^{T}\boldsymbol{\ell}_{2}\right]+\epsilon_{2}=\boldsymbol{I}_{2}(x,y)$
$n_{x}(x,y)^{2}+n_{y}(x,y)^{2}+n_{z}(x,y)^{2}=1$ (3)
The non-linearity in the third part of Equation 3 could give a non-unique
solution [17]. Adding one more image (under non-coplanar light source
configuration) can straightaway solve the problem. However, it will fail when
the surface is arbitrarily complex in its reflectance properties. Further, the
problem becomes even more difficult to solve when albedo is unknown.
To address the aforementioned issues in the PS2 problem with unknown albedo
and lightings, we introduce a deep learning-based framework that can resolve
such ambiguities by directly learning from images.
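To make the two-fold ambiguity in Equation 3 concrete, the following minimal NumPy sketch considers a single Lambertian surface point with known unit albedo and no shadowing: the two intensity measurements fix only the component of the normal within the plane spanned by the two light directions, and the unit-norm constraint then admits two candidate normals. The specific light and normal values below are arbitrary and purely illustrative.

```python
import numpy as np

# Two (arbitrary, illustrative) light directions on the upper hemisphere.
l1 = np.array([0.3, 0.2, 0.93]); l1 /= np.linalg.norm(l1)
l2 = np.array([-0.4, 0.1, 0.91]); l2 /= np.linalg.norm(l2)

n_true = np.array([0.1, 0.5, 0.86]); n_true /= np.linalg.norm(n_true)
I = np.array([l1 @ n_true, l2 @ n_true])   # noiseless Lambertian intensities (unit albedo)

L = np.stack([l1, l2])                     # 2 x 3 light matrix
n_min = np.linalg.pinv(L) @ I              # minimum-norm solution, lies in span{l1, l2}
n_perp = np.cross(l1, l2)
n_perp /= np.linalg.norm(n_perp)           # direction left unconstrained by the two equations

alpha = np.sqrt(max(1.0 - n_min @ n_min, 0.0))   # magnitude fixed by ||n|| = 1 ...
for sign in (+1.0, -1.0):                        # ... but not the sign: two candidate normals
    n_cand = n_min + sign * alpha * n_perp
    print(n_cand, np.allclose(L @ n_cand, I), np.isclose(n_cand @ n_cand, 1.0))
```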
## 4 Method
In this section, we describe DeepPS2, a deep learning-based solution to the
PS2 problem. Further, we describe several design considerations, light space
sampling and discretization, and share the training strategy.
Figure 1: The proposed inverse rendering framework, called DeepPS2, for shape,
material, and illumination estimation. The encoder-decoder design is inspired
by Hourglass networks [46]. Layer-wise skip connections are avoided for visual
clarity
### 4.1 Network Architecture
Let $I_{1},I_{2}\in\rm I\\!R^{C\times H\times W}$ be the two images
corresponding to the lighting directions $\boldsymbol{\ell}_{1}$ and
$\boldsymbol{\ell}_{2}$, respectively. The two images along with the object
mask $M\in\rm I\\!R^{1\times H\times W}$ are fed to the encoder $f_{enc}$ to
obtain an abstract feature map $\boldsymbol{\phi}_{img}$, as described in
Equation 4.
$\boldsymbol{\phi}_{img}=f_{enc}([I_{1},I_{2},M];\boldsymbol{\theta}_{enc})$
(4)
Here, $[\cdot]$ represents channel-wise concatenation and
$\boldsymbol{\theta}_{enc}$ represents the parameters of the encoder.
Surface Normal and Albedo Estimation. We use $\boldsymbol{\phi}_{img}$ to
obtain an estimate of surface normal map $\hat{N}$ and the albedo $\hat{A}$
through the decoders $f_{n\\_dec}$ and $f_{a\\_dec}$, respectively, as
described in Equation 5.
$\hat{N}=f_{n\_dec}(\boldsymbol{\phi}_{img};\boldsymbol{\theta}_{n\_dec})$
$\hat{A}=f_{a\_dec}(\boldsymbol{\phi}_{img};\boldsymbol{\theta}_{a\_dec})$
(5)
Here, $\hat{A}=[\hat{A}_{1},\hat{A}_{2}]$ represents the albedos of two images
$I_{1}$ and $I_{2}$ together. The design of each encoder-decoder combination is
inspired by that of the Hourglass network [46]. (The detailed layer-wise
architecture can be found in the supplementary material.)
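As a rough illustration of Equations 4 and 5, the following PyTorch sketch wires a shared encoder to separate normal and albedo heads. It is only a minimal stand-in: the channel widths, layer counts, and output activations are placeholder assumptions, and the Hourglass-style blocks and skip connections of the actual architecture are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NormalAlbedoNet(nn.Module):
    """Shared encoder f_enc with decoders f_n_dec / f_a_dec (Eqs. 4-5), heavily simplified."""
    def __init__(self, feat=64):
        super().__init__()
        # Input: two RGB images + binary mask, concatenated channel-wise (3 + 3 + 1 = 7).
        self.enc = nn.Sequential(nn.Conv2d(7, feat, 3, 1, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(feat, feat, 3, 1, 1), nn.ReLU(inplace=True))
        self.n_dec = nn.Conv2d(feat, 3, 3, 1, 1)   # surface-normal head
        self.a_dec = nn.Conv2d(feat, 6, 3, 1, 1)   # albedo head for A_1 and A_2

    def forward(self, i1, i2, mask):
        phi_img = self.enc(torch.cat([i1, i2, mask], dim=1))     # Eq. (4)
        n_hat = F.normalize(self.n_dec(phi_img), dim=1)          # unit normals
        a1_hat, a2_hat = torch.sigmoid(self.a_dec(phi_img)).chunk(2, dim=1)
        return n_hat, (a1_hat, a2_hat)
```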
Lighting Estimation. A straightforward way to estimate lighting directions
could be to use another fully connected branch and train the network to
regress to the desired lightings directly from $\boldsymbol{\phi}_{img}$.
However, fully connected layers require a large number of parameters. Further,
obtaining precise lighting information directly just from the image features
would be difficult since it would not have the explicit knowledge of the
structure and reflectance properties of the underlying surface. With an intent
to keep the entire architecture fully convolutional, we propose an
illumination module ($f_{ill}$) to predict the desired lighting directions by
using the estimated normal map and albedos, as described in Equation 6.
$\hat{\boldsymbol{\ell}}_{i}=f_{ill}([\hat{N},\hat{A}_{i}];\boldsymbol{\theta}_{lem})$
(6)
Here, $i=1,2$ corresponding to two images $I_{1}$ and $I_{2}$, respectively.
At this stage, one straightforward approach could be to use the estimated
normal, albedos, and lightings in order to reconstruct the original images
through the image rendering equation (see Equation 11). However, the estimated
albedo $\hat{A}$ without lighting estimates fails to capture the complex
specularities on the surface (see Figure 4). Also, the estimated lightings
were a little far from the desired ones.
Thus, the question now is - how do we validate the accuracy of the estimated
albedos and lightings, especially when there is no ground truth supervision?
The albedos and lightings go hand-in-hand and are dependent on each other as
far as image rendering is considered, of course, in addition to the surface
normal (see Generalized Bas Relief (GBR) ambiguity [3]). To address the
aforementioned concerns, we propose two crucial resolves: (i) albedo
refinement before image reconstruction and (ii) image relighting using the
estimated lightings.
Albedo Refinement by Specularity Modeling. As discussed earlier, the estimated
albedo $\hat{A}$ failed to represent the specularities directly from the image
features. Most of the existing deep photometric stereo methods have implicitly
handled specularities using multiple differently illuminated images through
max-pooling and global-local feature-fusion. However, it is crucial to
understand that the specularities are essentially the reflections on the
surface, and information about surface geometry can help model such
specularities better. Understanding surface geometry becomes even more crucial
when we have just one or two images to model the surface reflection.
Therefore, we choose to explicitly model these specularities and refine the
albedo estimate using a few reasonable and realistic assumptions.
We assume that the specular BRDF is isotropic and is only the function of the
half-vector $\boldsymbol{h}$ and the surface normal $\boldsymbol{n}$ at any
point on the surface as the BRDF can be re-parameterized to a half-vector
based function [34]. In doing so, we could omit the Fresnel Reflection
coefficients and geometric attenuation associated with modelling BRDFs. The
authors in [30, 6] found that the isotropic BRDF can also be modeled simply by
two parameters $\theta_{h}=cos^{-1}(\boldsymbol{n}^{T}\boldsymbol{h})$ and
$\theta_{d}=cos^{-1}(\boldsymbol{v}^{T}\boldsymbol{h})$. Therefore, we use the
estimated lighting $\ell_{i}$ to compute $cos(\theta_{h})$ and
$cos(\theta_{d})$ to further refine the albedo. Additionally, we use
positional encoding to model the high-frequency specularities in the refined
albedo. In short, we construct the $L_{i}$ as per Equation 7.
$L_{i}=[\boldsymbol{p}_{i},\gamma(\boldsymbol{p}_{i})]$
$\boldsymbol{p}_{i}=[\boldsymbol{n}^{T}\boldsymbol{h}_{i},\boldsymbol{v}^{T}\boldsymbol{h}_{i}]$ (7)
Here,
$\gamma(\eta)=[\sin(2^{0}\pi\eta),\cos(2^{0}\pi\eta),...,\sin(2^{m-1}\pi\eta),\cos(2^{m-1}\pi\eta)]$.
We choose $m=3$ in our method. Further,
$\boldsymbol{h}_{i}=\frac{\hat{\boldsymbol{\ell}}_{i}+\boldsymbol{v}}{||\hat{\boldsymbol{\ell}}_{i}+\boldsymbol{v}||}$.
Following these observations, we use an encoder-decoder based albedo
refinement module ($f_{arm}$) to obtain the refined albedo by considering the
estimated lightings $L_{i}$, albedos $\hat{A}$, surface normal $\hat{N}$, and
the underlying images as its input. Equation 8 describes the information flow.
$\hat{A}_{i(ref)}=f_{arm}([I_{i},\hat{N},\hat{A}_{i},L_{i}];\boldsymbol{\theta}_{arm})$
(8)
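A small sketch of how the inputs of Equations 7 and 8 might be assembled per pixel is given below, assuming a single distant light direction per image and the frontal viewing direction $[0,0,1]^{T}$; the function names and channel layout are illustrative, and the concatenation with $I_{i}$, $\hat{N}$, and $\hat{A}_{i}$ before $f_{arm}$ is omitted.

```python
import math
import torch

def positional_encoding(x, m=3):
    """gamma(x) = [sin(2^0 pi x), cos(2^0 pi x), ..., sin(2^{m-1} pi x), cos(2^{m-1} pi x)]."""
    feats = []
    for k in range(m):
        feats += [torch.sin((2 ** k) * math.pi * x), torch.cos((2 ** k) * math.pi * x)]
    return torch.cat(feats, dim=1)

def specular_features(normal, light, m=3):
    """Builds L_i = [p_i, gamma(p_i)] of Eq. (7) with p_i = [n^T h_i, v^T h_i].
    normal: (B, 3, H, W) unit normals; light: (B, 3) unit lighting directions."""
    B, _, H, W = normal.shape
    view = normal.new_tensor([0.0, 0.0, 1.0]).view(1, 3, 1, 1).expand(B, 3, H, W)
    light = light.view(B, 3, 1, 1).expand(B, 3, H, W)
    half = light + view
    half = half / (half.norm(dim=1, keepdim=True) + 1e-8)        # half-vector h_i
    cos_th = (normal * half).sum(dim=1, keepdim=True)            # n^T h_i
    cos_td = (view * half).sum(dim=1, keepdim=True)              # v^T h_i
    p = torch.cat([cos_th, cos_td], dim=1)
    return torch.cat([p, positional_encoding(p, m)], dim=1)      # (B, 2 + 4m, H, W)
```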
Image Relighting. Generally, at this stage, the existing approaches proceed
further to use the rendering equation and reconstruct the input image(s).
However, the lightings are either known or have been estimated with ground
truth supervision. This allows stable training and offers convincing results.
However, in our case, the lightings are estimated without any explicit
supervision and are expected to produce learning instabilities. So the
question is, how can we ensure that the estimated lightings are close to the
desired ones without any ground truth supervision?
As an additional check on the authenticity of the estimated lightings, we
propose to use them for the image relighting task. We use an image relighting
module $(f_{rel})$ to relight one image into the other using the estimated
lighting as the target lighting and measure the quality of the relit image, as
described in Equation 9.
$\hat{I}_{1(rel)}=f_{rel}(I_{2},\boldsymbol{\phi}(\hat{\boldsymbol{\ell}_{1}});\boldsymbol{\theta}_{rel})$
(9)
Here, $\boldsymbol{\phi}(\hat{\boldsymbol{\ell}_{1}})$ is the lighting feature
extracted from the desired target lighting $\hat{\boldsymbol{\ell}_{1}}$. The
quality of the relit image encourages the lighting estimates to be close to the
desired ones.
Image Reconstruction. Having obtained the estimates of surface normal, albedo,
and lightings, we finally use them to obtain the reflectance map
$\boldsymbol{R}_{i}$ using the encoder-decoder based image reconstruction
module $(f_{recon})$, as described in Equation 10.
$\boldsymbol{R}_{i}=f_{recon}([I_{i},\hat{N},\hat{A}_{i(ref)},\hat{\boldsymbol{\ell}_{i}}];\boldsymbol{\theta}_{recon})$
(10)
The reflectance image $\boldsymbol{R}_{i}$ is then used to reconstruct the
associated image $\hat{I}_{i}$, as described in Equation 11.
$\hat{I}_{i}=\boldsymbol{R}_{i}\odot\max(\hat{\boldsymbol{\ell}_{i}}^{T}\hat{N},0)$ (11)
Here, $\odot$ refers to the element-wise multiplication.
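Once the reflectance map, normals, and lightings are available, the rendering step of Equation 11 reduces to a clamped dot product followed by an element-wise product; a minimal sketch (assuming one distant light per image and the tensor shapes used above) is given below.

```python
import torch

def render_image(reflectance, normal, light):
    """Eq. (11): I_hat = R ⊙ max(l^T n, 0). reflectance: (B, C, H, W),
    normal: (B, 3, H, W) unit normals, light: (B, 3) unit lighting direction."""
    shading = (normal * light.view(-1, 3, 1, 1)).sum(dim=1, keepdim=True)  # l^T n per pixel
    shading = torch.clamp(shading, min=0.0)                                # attached shadows
    return reflectance * shading
```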
In this way, the proposed DeepPS2 produces estimates of surface normal,
albedos, and lightings as well as relights the image under target lightings by
using only two images as input and no additional ground truth supervision.
Based on the network performance, we show that the PS2 problem can be well
addressed using the benefits of a deep learning framework.
Figure 2: (a) Light space discretization into $K=25$ bins. $\delta=180/2K$ is
the maximum angular deviation. (b) Variation of MAE with $K$. (c) Effect of
early stage warm-up
### 4.2 More on Lighting Estimation: The Light Space Sampling
As discussed earlier, an intuitive approach to estimate light source
directions would be to directly regress them from image(s). However,
regressing these values to the exact ones is difficult and can cause learning
difficulties [7]. Further, under the distant light source assumption, it is
easier and better to specify a region in the light space rather than the exact
direction while locating the light source. Additionally, this eases the light
source calibration during data acquisition. Therefore, we choose to formulate
the lighting estimation as a classification problem. A few methods in the
recent past have adopted the classification formulation [7, 10] and weak
calibration setting [27] for lighting and shape estimation and have produced
excellent results.
In this work, we discretize the light space (upper hemisphere) into $K=25$
bins (as shown in Fig. 2(a)) i.e. 5 bins along the azimuth direction
$\phi\in[0^{\circ},180^{\circ}]$ centered at
$[18^{\circ},54^{\circ},90^{\circ},$ $126^{\circ},162^{\circ}]$ and $5$ bins
along the elevation direction $\theta\in[-90^{\circ},90^{\circ}]$ centered at
$[-72^{\circ},-36^{\circ},0^{\circ},36^{\circ},72^{\circ}]$. While each bin
suffers a maximum angular deviation of $18^{\circ}$ along each direction (Fig.
2(a)), they offer a relatively simpler light source configuration during data
acquisition. They can be realized using hand-held lighting devices. Further,
learning under such discretized light space configuration allows the network
to better tolerate errors in the estimated lightings and the subsequent
downstream tasks. During training, the network must select the appropriate bin
in the light space to understand the light source configuration from the input
image, the estimated normal map, and the albedos.
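The discretization above can be realized with a simple bin-indexing routine; the sketch below maps a unit light direction to one of the $K=25$ classes and back to the corresponding bin-center direction. The exact azimuth/elevation parametrization of the hemisphere is an assumption made for illustration.

```python
import numpy as np

AZ_CENTERS = np.deg2rad([18, 54, 90, 126, 162])     # azimuth bin centers (phi)
EL_CENTERS = np.deg2rad([-72, -36, 0, 36, 72])      # elevation bin centers (theta)

def direction_to_bin(l):
    """Maps a unit lighting direction to one of the K = 25 class labels.
    Assumed convention (illustration only): elevation theta measured from the
    x-z plane toward +y, azimuth phi measured in the x-z plane from +x, so phi
    lies in [0, 180] deg for lights on the upper hemisphere (z >= 0)."""
    x, y, z = l
    theta = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
    phi = np.degrees(np.arctan2(z, x))
    az = min(int(phi // 36.0), 4)                    # 5 bins of 36 deg over [0, 180]
    el = min(int((theta + 90.0) // 36.0), 4)         # 5 bins of 36 deg over [-90, 90]
    return el * 5 + az

def bin_to_direction(k):
    """Unit direction pointing to the center of bin k (used, e.g., to build the
    light matrix for the weak supervision of Eq. (12))."""
    theta, phi = EL_CENTERS[k // 5], AZ_CENTERS[k % 5]
    return np.array([np.cos(theta) * np.cos(phi),
                     np.sin(theta),
                     np.cos(theta) * np.sin(phi)])
```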
### 4.3 Network Training
We use the standard DiLiGenT benchmark dataset [38] having the $10$ objects
imaged under $96$ different light directions with complex non-Lambertian
surfaces. We implement DeepPS2 in Pytorch [31] with Adam optimizer [21] and
initial learning rate of $1\times 10^{-4}$ for $25$ epochs and batch size $32$
on NVIDIA RTX $5000$ GPU. The learning rate is reduced to half after every $5$
epochs. It is observed that if the object under consideration has relatively
simple reflectance properties, even a randomly initialized network trained
with the image reconstruction loss can lead to good solutions. However, for
complex scenes, it is better to warm up the network by initializing the
weights through weak supervision only at the early stages of training [40,
19]. In our case, we perform this warming up for normal, albedo, and lighting
estimation through weak supervision using $L_{1}$-loss
($\mathcal{L}_{L_{1}}$), $L_{2}$-loss ($\mathcal{L}_{L_{2}}$), and the
perceptual loss ($\mathcal{L}_{perp}$) for first $2000$ iterations, as
described in Section 4.4. For weak supervision, we randomly sample $10$ images
(preferably, each one from a different lighting bin) and estimate the normal
map using the least-squares formulation [45], as per Equation 12.
$\hat{N}^{\prime}=L^{\dagger}I$ (12)
Here, $L^{\dagger}$ denotes the Moore-Penrose pseudo-inverse of the stacked lighting matrix $L$, i.e., the least-squares solution.
It is important to note that the lighting directions in $L$ are from the
discretized light space setting, where we compute the lighting direction as
the one pointing towards the center of the selected bin. Since we have the
images, the normal map $\hat{N^{\prime}}$, and the discretized lightings $L$,
we compute the diffuse shading ($\boldsymbol{n}^{T}\boldsymbol{\ell}$) and
specular highlights (regions where $\boldsymbol{n}$ is close to the half-angle
$\boldsymbol{h}$ of $\boldsymbol{\ell}$ and viewing direction
$\boldsymbol{v}=[0,0,1]^{T}$). Once we have the shadings (diffuse and
specular), we compute the albedos ($\hat{A^{\prime}}$) to use them for weak
supervision since an image is the product of the albedo and the shading.
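A minimal sketch of this weak-supervision step is given below: it stacks the sampled images, solves Equation 12 per pixel via the pseudo-inverse of the bin-center light matrix, and recovers a rough albedo as the norm of the unnormalized solution. Shadow/highlight handling and the specular-shading computation are omitted.

```python
import numpy as np

def weak_normals_albedo(images, lights):
    """Least-squares normals (Eq. 12) and a rough diffuse albedo from m >= 3 images.
    images: (m, H, W) gray-scale images; lights: (m, 3) bin-center light directions."""
    m, H, W = images.shape
    I = images.reshape(m, -1)                      # m x (H*W)
    G = np.linalg.pinv(lights) @ I                 # 3 x (H*W); G = albedo * normal
    albedo = np.linalg.norm(G, axis=0, keepdims=True)
    N = G / (albedo + 1e-8)
    return N.reshape(3, H, W), albedo.reshape(H, W)
```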
### 4.4 Loss Functions
In this section, we describe the loss function used for training the entire
framework. Equation 13 describes the combination of the $L_{1}$-loss, the
$L_{2}$-loss, and the perceptual loss $\mathcal{L}_{perp}$ used for both image
reconstruction and relighting.
$\mathcal{L}_{T}(X,\hat{X})=\lambda_{1}\mathcal{L}_{1}(X,\hat{X})+\lambda_{2}\mathcal{L}_{2}(X,\hat{X})+\lambda_{perp}\mathcal{L}_{perp}(X,\hat{X})$
(13)
Here,
$\mathcal{L}_{1}(X,\hat{X})=\parallel X-\hat{X}\parallel_{1}$
$\mathcal{L}_{2}(X,\hat{X})=\parallel X-\hat{X}\parallel_{2}^{2}$
$\mathcal{L}_{perp}(X,\hat{X})=\frac{1}{WHC}\sum_{x=1}^{W}\sum_{y=1}^{H}\sum_{z=1}^{C}\parallel\phi(X)_{x,y,z}-\phi(\hat{X})_{x,y,z}\parallel_{1}$ (14)
Here, $\phi$ is the output of VGG-19 [39] network and $W$, $H$, $C$ are the
width, height, and depth of the extracted feature $\phi$, respectively.
$\lambda_{1}=\lambda_{2}=0.5$ and $\lambda_{perp}=1.0$.
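A possible implementation of the combined loss of Equations 13 and 14 is sketched below. The choice of VGG-19 feature depth, the use of ImageNet weights, and the omission of input normalization are assumptions made for brevity.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

try:
    _vgg = vgg19(weights="IMAGENET1K_V1").features[:18].eval()   # feature extractor phi
except TypeError:                                                # older torchvision API
    _vgg = vgg19(pretrained=True).features[:18].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def total_loss(x, x_hat, lam1=0.5, lam2=0.5, lam_perp=1.0):
    """L_T of Eq. (13); the perceptual term averages an L1 distance over the
    W x H x C entries of the VGG-19 features, as in Eq. (14)."""
    l1 = F.l1_loss(x_hat, x)
    l2 = F.mse_loss(x_hat, x)
    perp = F.l1_loss(_vgg(x_hat), _vgg(x))
    return lam1 * l1 + lam2 * l2 + lam_perp * perp
```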
Weak Supervision. We use the $\mathcal{L}_{T}$ and the standard cross-entropy
loss to provide weak supervision (for first $2000$ iterations) for albedos and
lightings, respectively. However, for surface normals, we use Equation 15.
$\mathcal{L}_{norm}(\hat{N},\hat{N}^{\prime})=\frac{1}{M}\sum_{p}\parallel\hat{N}_{p}-\hat{N}^{\prime}_{p}\parallel_{2}^{2}$
(15)
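Putting Sections 4.3 and 4.4 together, one training step might look like the sketch below: the self-supervised reconstruction and relighting terms are always active, while the weak supervision on normals, albedos, and lighting bins is applied only during the first 2000 iterations. All module outputs and batch keys are placeholder names, and `total_loss` refers to the sketch above.

```python
import torch.nn.functional as F

def train_step(model, optimizer, batch, step, warmup_iters=2000):
    out = model(batch["I1"], batch["I2"], batch["mask"])          # normals, albedos, lights, renders
    loss = (total_loss(batch["I1"], out["I1_recon"]) +
            total_loss(batch["I2"], out["I2_recon"]) +
            total_loss(batch["I1"], out["I1_relit"]))             # reconstruction + relighting
    if step < warmup_iters:                                       # early-stage warm-up
        loss = loss + F.mse_loss(out["normal"], batch["N_weak"])             # Eq. (15)
        loss = loss + total_loss(batch["A1_weak"], out["A1"])                # L_T on albedo
        loss = loss + F.cross_entropy(out["light_logits1"], batch["bin1"])   # lighting bins
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```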
Figure 3: Surface normal maps obtained using a randomly chosen input image
pair
## 5 Experimental Results
In this section, we show the qualitative and quantitative comparison of the
DeepPS2 with several baseline approaches. The classical methods [33, 28] have
provided the numerical resolution to the underlying ambiguities in PS2.
However, the code and results on the DiLiGenT benchmark are not available for
comparison. Moreover, since deep learning-based methods have significantly
outperformed the traditional photometric stereo methods (even in handling
ambiguities), we resort to comparing our work only with the state-of-the-art
deep learning-based methods such as UPS-FCN [9], SDPS-Net [7], IRPS [40], Kaya
_et al._ [19], Lichy _et al._ [27], and Boss _et al._ [4]. They have been
chosen carefully as they can be modified to align with our problem setting by
re-training them with two images as input for a fair comparison.
Figure 4: Inverse rendering results on HARVEST and READING objects. The
reconstruction and relighting module yield the SSIM of 0.837 and 0.779,
respectively, when averaged over all the objects on the DiLiGenT Benchmark
Results on Normal Estimation. Table 1 shows a quantitative comparison of the
proposed framework with the other deep learning-based methods. All the methods
have been trained with two images as input, and the Mean Angular Error (MAE)
is reported to quantify the surface normal estimation. Since IRPS [40] is
designed to take two images (one with frontal flash), we evaluate it using
pairs of images where one image is lit frontally i.e., from the bin
corresponding to $\theta=0^{\circ}$ and $\phi=90^{\circ}$. From Table 1, we
observe that the proposed DeepPS2 obtains the best average MAE value and best
(or at least second best) individual scores for eight different objects
(except POT1 and BEAR). Even though our framework performs best in the
calibrated setting, it outperforms the other baselines under the uncalibrated
setting as well. Furthermore, even with no ground truth supervision, our
method outperforms other supervised (row 1-6) and self-supervised (row 7-8)
methods. To appreciate the results qualitatively, we show a visual comparison
of READING, HARVEST, COW, and POT2 with the self-supervised baselines [40,
19], and a two-image based supervised method [4] in Fig. 3. Interestingly,
DeepPS2 performs the best on objects like HARVEST and READING, having complex
shadows and inter-reflections with spatially-varying material.
Table 1: Mean Angular Error (MAE) over 10 randomly chosen image pairs per
object from the DiLiGenT Benchmark [38]. GREEN and YELLOW coloured cells
indicate the best and the second best performing methods, respectively. Rows
1-6 and 7-8 correspond to supervised and self-supervised approaches,
respectively
Type of Method | Method | Ball | Cat | Pot1 | Bear | Pot2 | Buddha | Goblet | Reading | Cow | Harvest | Average
---|---|---|---|---|---|---|---|---|---|---|---|---
Calibrated | PS-FCN [9] | 6.41 | 20.04 | 19.67 | 16.95 | 21.12 | 23.04 | 24.81 | 29.93 | 17.23 | 34.68 | 21.38 $\pm$ 2.05
Uncalibrated | UPS-FCN [9] | 9.71 | 18.97 | 17.85 | 15.12 | 18.62 | 19.77 | 22.14 | 27.36 | 14.83 | 31.25 | 19.56 $\pm$ 1.58
Calibrated | SDPS-Net [7] | 7.97 | 19.88 | 18.12 | 12.51 | 18.25 | 25.12 | 26.36 | 27.47 | 15.21 | 30.59 | 20.14 $\pm$ 1.17
Uncalibrated | SDPS-Net [7] | 7.81 | 21.74 | 19.73 | 13.25 | 20.47 | 27.81 | 29.66 | 31.12 | 18.94 | 34.14 | 22.6 $\pm$ 1.02
Uncalibrated | Boss _et al._ [4] | 7.71 | 14.81 | 10.17 | 8.01 | 12.89 | 15.98 | 18.18 | 21.54 | 11.96 | 27.36 | 14.85 $\pm$ 0.98
Uncalibrated | Lichy _et al._ [27] | 7.42 | 20.34 | 11.87 | 9.94 | 11.12 | 18.75 | 19.38 | 21.51 | 12.93 | 29.52 | 16.27 $\pm$ 1.01
Calibrated | Taniai & Maehara [40] | 7.03 | 10.02 | 11.62 | 8.74 | 12.58 | 18.25 | 16.85 | 21.31 | 14.97 | 28.89 | 15.03 $\pm$ 0.96
Uncalibrated | Kaya _et al._ [19] | 6.97 | 9.57 | 10.14 | 8.69 | 13.81 | 17.57 | 15.93 | 21.87 | 14.81 | 28.72 | 14.81 $\pm$ 0.89
Calibrated | DeepPS2 (Ours) | 6.17 | 9.62 | 10.35 | 8.87 | 12.78 | 14.78 | 13.29 | 18.34 | 10.13 | 25.18 | 12.95 $\pm$ 0.64
Uncalibrated | DeepPS2 (Ours) | 6.28 | 9.87 | 10.73 | 9.67 | 12.09 | 14.51 | 14.22 | 19.94 | 11.08 | 26.06 | 13.44 $\pm$ 0.67
Results on Albedo Estimation. In Fig. 4, we present a qualitative assessment
of the albedos obtained using our method. We observe that the learned albedos
are able to handle the complex shadows and specular highlights, especially
after refinement using the estimated lightings.
Results on Lighting Estimation. The goal of discretized lighting is to remove
the network’s dependence on precise lighting calibration. Therefore, we
attempt to model the illumination using the weakly calibrated lighting
directions such as front, front-right/left, top, top-right/left, bottom,
bottom-right/left, etc. Given that the light space discretization allows a
maximum angular deviation of $18^{\circ}$ along each direction, we intend to establish that the network may
not need precise calibration at all times. A rough and/or abstract
understanding of lighting directions should help guide the network towards
realistic shape estimation. To better evaluate the performance of the
illumination module, we visualize the learned illumination over a sphere in
Fig. 4. It is observed that the illumination module captures the distribution
of light sources essential for modeling the complex specularities in the
refined albedos at the later stage.
Results on Image Relighting and Reconstruction. We report the widely used
Structural Similarity Index (SSIM) [44] to quantify the quality of the
reconstructed and relit images. However, these results are best appreciated
visually. Therefore, we use Fig. 4 to show the quality of the generated
images. The quality of the results establishes that our inverse rendering
results are sufficiently stable for realistic relighting and reconstruction.
### 5.1 Ablation Studies
In this section, we discuss several design choices in DeepPS2 under different
experimental settings.
Ablation 1: What if we do not include lighting estimation in the framework? We
attempt to understand the effect of including the lighting information
explicitly in the surface normal estimation through such an inverse rendering-
based framework. In Table 2, comparing the experiment IDs 1 and 2, we observe
that lighting estimation is crucial for the task at hand. This observation is
in line with the classical rendering equation that requires lighting
directions to understand the reflectance properties and shadows on the
surface. Further, we intended to know the deviation in MAE for surface normal
estimation when actual lightings (calibrated setting) are used. Although the
network performs better under the calibrated setting (see Table 1), the error
difference is not very large ($0.49$ units). This supports our idea of using
weaker calibrations for surface normal estimation under distant lightings.
Table 2: Quantitative comparison of various design choices. LE: Lighting
Estimation, AR: Albedo Refinement, PE: Positional Encoding, and IR: Image
Relighting. Experiments IDs 1-6 include warm-up
ID | LE | AR | PE | IR | Ball | Cat | Pot1 | Bear | Pot2 | Buddha | Goblet | Reading | Cow | Harvest | Average
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1 | ✗ | ✗ | ✗ | ✗ | 9.87 | 36.55 | 19.39 | 12.42 | 14.52 | 13.19 | 20.57 | 58.96 | 19.75 | 55.51 | 26.07
2 | ✓ | ✗ | ✗ | ✗ | 9.32 | 15.62 | 16.41 | 10.96 | 15.77 | 19.93 | 18.37 | 32.34 | 16.17 | 30.26 | 18.51
3 | ✓ | ✓ | ✗ | ✗ | 7.37 | 15.64 | 10.58 | 9.37 | 14.72 | 15.06 | 18.1 | 23.78 | 16.31 | 27.17 | 15.85
4 | ✓ | ✓ | ✓ | ✗ | 6.88 | 12.16 | 11.13 | 9.79 | 15.11 | 14.89 | 16.07 | 20.46 | 11.85 | 27.22 | 14.55
5 | ✓ | ✓ | ✓ | ✓ | 6.28 | 9.87 | 10.73 | 9.67 | 12.09 | 14.51 | 14.22 | 19.94 | 11.08 | 26.06 | 13.44
6 | frontally-lit image | 6.74 | 9.38 | 10.13 | 9.08 | 13.18 | 14.58 | 14.63 | 17.84 | 11.98 | 24.87 | 13.24
7 | w/o warm-up | 12.43 | 25.01 | 22.82 | 15.44 | 20.57 | 25.76 | 29.16 | 52.16 | 25.53 | 44.45 | 27.33
8 | fully supervised | 5.14 | 8.97 | 10.28 | 8.92 | 9.89 | 12.76 | 12.38 | 18.52 | 9.81 | 23.22 | 11.98
Ablation 2: Effect of discretizing the light space on normal estimation. Fig.
2 (b) shows the effect of a different number of bins on the MAE evaluated over
the DiLiGenT benchmark. We resort to choosing $K=25$ bins as the reduction in
the MAE plateaus (roughly) after that point. Further, the light space
discretization not only reduces the computational overhead but also helps the
network understand the lighting dynamics more holistically. This is evident
from the MAE reported in Table 1 and quality of the refined albedos in Fig. 4.
Ablation 3: Do albedo refinement and image relighting help in modeling the
illumination? Qualitative results in Fig. 4 show how well the refined albedos
capture the specularities on the surface. Table 2 (IDs 2 and 3) shows the
performance improvement by including the albedo refinement module. The
explicit specularity modeling is observed to produce realistic albedos. The
performance is further enhanced through the use of positional encoding (Table
2 ID 4) as it helps the module to better capture the high-frequency
characteristics in the refined albedo. Finally, the inclusion of the image
relighting module further reduces the MAE (Table 2 ID 5). Since the relighting
module is solely driven by the estimated lightings, relighting helps in
obtaining better surface normal estimates through better lighting estimation
as an additional task.
Ablation 4: What is the effect of warming up the network with weak supervision
at the early stages of training? We also consider understanding the effect of
weak supervision during the early stage warm-up. Table 2 (IDs 5 and 7) clearly
establishes the benefit of warming up. Fig. 2 (c) shows the convergence
with and without the warm-up. Clearly, an early-stage warm-up provides stable
and faster convergence as the outliers in the images are excluded at the early
stages during weak supervision.
Ablation 5: What if the lighting directions of one image at the input is
known? We evaluate an interesting and practical case where one of the two
input images is captured with collocated light source and camera i.e.,
$\boldsymbol{\ell}=\boldsymbol{v}=[0,0,1]^{T}$. Since the lighting direction
is known, we provide (auxiliary) supervision to the illumination module to
obtain a better lighting estimate for the other image. Table 2 (ID 6) shows
the results obtained over image pairs having one image sampled from the
frontal lighting bin i.e. $\theta=0^{\circ},\phi=90^{\circ}$. Under this
setting, the method performs better than the completely self-supervised
version because frontally-lit (flashed) images offer a better understanding of
specularities on complex surfaces. Finally, we also show the performance of
DeepPS2 under a fully supervised setting (Table 2 (ID 8)) to establish the
upper bound of DeepPS2.
## 6 Conclusion
In this work, we address the PS2 problem (photometric stereo with two images)
using a self-supervised deep learning framework called DeepPS2. In addition to
surface normals, the proposed method also estimates albedos and lightings and
performs image relighting, all without any ground truth supervision.
Interestingly, we demonstrate that weakly calibrated lightings can be enough
for the network to learn the underlying shape of an object. In conjunction
with image reconstruction, image relighting helps in better lighting
estimation. While other uncalibrated methods have used ground truth
supervision for learning to estimate lightings, we do so entirely in a self-
supervised manner. To the best of our knowledge, we are the first to address
photometric stereo using two images in a deep learning setting.
## References
* [1] Abrams, A., Hawley, C., Pless, R.: Heliometric stereo: Shape from sun position. In: European conference on computer vision. pp. 357–370. Springer (2012)
* [2] Ackermann, J., Langguth, F., Fuhrmann, S., Goesele, M.: Photometric stereo for outdoor webcams. In: 2012 IEEE conference on computer vision and pattern recognition. pp. 262–269. IEEE (2012)
* [3] Belhumeur, P.N., Kriegman, D.J., Yuille, A.L.: The bas-relief ambiguity. International journal of computer vision 35(1), 33–44 (1999)
* [4] Boss, M., Jampani, V., Kim, K., Lensch, H., Kautz, J.: Two-shot spatially-varying brdf and shape estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3982–3991 (2020)
* [5] Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on pattern analysis and machine intelligence 23(11), 1222–1239 (2001)
* [6] Burley, B., Studios, W.D.A.: Physically-based shading at disney. In: ACM SIGGRAPH. vol. 2012, pp. 1–7. vol. 2012 (2012)
* [7] Chen, G., Han, K., Shi, B., Matsushita, Y., Wong, K.Y.K.: Self-calibrating deep photometric stereo networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 8739–8747 (2019)
* [8] Chen, G., Han, K., Shi, B., Matsushita, Y., Wong, K.Y.K.: Deep photometric stereo for non-lambertian surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
* [9] Chen, G., Han, K., Wong, K.Y.K.: Ps-fcn: A flexible learning framework for photometric stereo. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 3–18 (2018)
* [10] Chen, G., Waechter, M., Shi, B., Wong, K.Y.K., Matsushita, Y.: What is learned in deep uncalibrated photometric stereo? In: European Conference on Computer Vision. pp. 745–762. Springer (2020)
* [11] Furukawa, Y., Ponce, J.: Accurate, dense, and robust multiview stereopsis. IEEE transactions on pattern analysis and machine intelligence 32(8), 1362–1376 (2009)
* [12] Hernández, C., Vogiatzis, G., Brostow, G.J., Stenger, B., Cipolla, R.: Non-rigid photometric stereo with colored lights. In: 2007 IEEE 11th International Conference on Computer Vision. pp. 1–8. IEEE (2007)
* [13] Hernández, C., Vogiatzis, G., Cipolla, R.: Overcoming shadows in 3-source photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence 33(2), 419–426 (2010)
* [14] Horn, B.K.: Shape from shading: A method for obtaining the shape of a smooth opaque object from one view (1970)
* [15] Ikeda, O.: A robust shape-from-shading algorithm using two images and control of boundary conditions. In: Proceedings 2003 International Conference on Image Processing (Cat. No. 03CH37429). vol. 1, pp. I–405. IEEE (2003)
* [16] Ikehata, S.: Cnn-ps: Cnn-based photometric stereo for general non-convex surfaces. In: Proceedings of the European conference on computer vision (ECCV). pp. 3–18 (2018)
* [17] Ikeuchi, K., Horn, B.K.: Numerical shape from shading and occluding boundaries. Artificial intelligence 17(1-3), 141–184 (1981)
* [18] Jung, J., Lee, J.Y., So Kweon, I.: One-day outdoor photometric stereo via skylight estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4521–4529 (2015)
* [19] Kaya, B., Kumar, S., Oliveira, C., Ferrari, V., Van Gool, L.: Uncalibrated neural inverse rendering for photometric stereo of general surfaces. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3804–3814 (2021)
* [20] Kendall, A., Martirosyan, H., Dasgupta, S., Henry, P., Kennedy, R., Bachrach, A., Bry, A.: End-to-end learning of geometry and context for deep stereo regression. In: Proceedings of the IEEE international conference on computer vision. pp. 66–75 (2017)
* [21] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [22] Kozera, R.: On shape recovery from two shading patterns. International Journal of Pattern Recognition and Artificial Intelligence 6(04), 673–698 (1992)
* [23] Kumar, S.: Jumping manifolds: Geometry aware dense non-rigid structure from motion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 5346–5355 (2019)
* [24] Kumar, S., Dai, Y., Li, H.: Monocular dense 3d reconstruction of a complex dynamic scene from two perspective frames. In: Proceedings of the IEEE international conference on computer vision. pp. 4649–4657 (2017)
* [25] Kumar, S., Dai, Y., Li, H.: Superpixel soup: Monocular dense 3d reconstruction of a complex dynamic scene. IEEE transactions on pattern analysis and machine intelligence 43(5), 1705–1717 (2019)
* [26] Li, J., Robles-Kelly, A., You, S., Matsushita, Y.: Learning to minify photometric stereo. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 7568–7576 (2019)
* [27] Lichy, D., Wu, J., Sengupta, S., Jacobs, D.W.: Shape and material capture at home. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6123–6133 (2021)
* [28] Mecca, R., Durou, J.D.: Unambiguous photometric stereo using two images. In: International Conference on Image Analysis and Processing. pp. 286–295. Springer (2011)
* [29] Onn, R., Bruckstein, A.: Integrability disambiguates surface recovery in two-image photometric stereo. International Journal of Computer Vision 5(1), 105–113 (1990)
* [30] Pacanowski, R., Celis, O.S., Schlick, C., Granier, X., Poulin, P., Cuyt, A.: Rational brdf. IEEE transactions on visualization and computer graphics 18(11), 1824–1835 (2012)
* [31] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch (2017)
* [32] Prados, E., Faugeras, O.: Shape from shading: a well-posed problem? In: 2005 IEEE computer society conference on computer vision and pattern recognition (CVPR’05). vol. 2, pp. 870–877. IEEE (2005)
* [33] Quéau, Y., Mecca, R., Durou, J.D., Descombes, X.: Photometric stereo with only two images: A theoretical study and numerical resolution. Image and Vision Computing 57, 175–191 (2017)
* [34] Rusinkiewicz, S.M.: A new change of variables for efficient brdf representation. In: Eurographics Workshop on Rendering Techniques. pp. 11–22. Springer (1998)
* [35] Santo, H., Samejima, M., Sugano, Y., Shi, B., Matsushita, Y.: Deep photometric stereo network. In: Proceedings of the IEEE international conference on computer vision workshops. pp. 501–509 (2017)
* [36] Sato, Y., Ikeuchi, K.: Reflectance analysis under solar illumination. In: Proceedings of the Workshop on Physics-Based Modeling in Computer Vision. pp. 180–187. IEEE (1995)
* [37] Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4104–4113 (2016)
* [38] Shi, B., Wu, Z., Mo, Z., Duan, D., Yeung, S.K., Tan, P.: A benchmark dataset and evaluation for non-lambertian and uncalibrated photometric stereo. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3707–3716 (2016)
* [39] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
* [40] Taniai, T., Maehara, T.: Neural inverse rendering for general reflectance photometric stereo. In: International Conference on Machine Learning. pp. 4857–4866. PMLR (2018)
* [41] Taniai, T., Matsushita, Y., Sato, Y., Naemura, T.: Continuous 3d label stereo matching using local expansion moves. IEEE transactions on pattern analysis and machine intelligence 40(11), 2725–2739 (2017)
* [42] Tiwari, A., Raman, S.: Lerps: Lighting estimation and relighting for photometric stereo. In: ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 2060–2064. IEEE (2022)
* [43] Wang, X., Jian, Z., Ren, M.: Non-lambertian photometric stereo network based on inverse reflectance model with collocated light. IEEE Transactions on Image Processing 29, 6032–6042 (2020)
* [44] Wang, Z., Simoncelli, E.P., Bovik, A.C.: Multiscale structural similarity for image quality assessment. In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003. vol. 2, pp. 1398–1402. IEEE (2003)
* [45] Woodham, R.J.: Photometric method for determining surface orientation from multiple images. Optical engineering 19(1), 139–144 (1980)
* [46] Yang, J., Liu, Q., Zhang, K.: Stacked hourglass network for robust facial landmark localisation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. pp. 79–87 (2017)
* [47] Yang, J., Ohnishi, N., Sugie, N.: Two-image photometric stereo method. In: Intelligent Robots and Computer Vision XI: Biological, Neural Net, and 3D Methods. vol. 1826, pp. 452–463. SPIE (1992)
* [48] Yao, Z., Li, K., Fu, Y., Hu, H., Shi, B.: Gps-net: Graph-based photometric stereo network. Advances in Neural Information Processing Systems 33, 10306–10316 (2020)
* [49] Zheng, Q., Jia, Y., Shi, B., Jiang, X., Duan, L.Y., Kot, A.C.: Spline-net: Sparse photometric stereo through lighting interpolation and normal estimation networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 8549–8558 (2019)
## 7 Supplementary Material
Although the main paper is self-contained in terms of the main results, we
believe that the supplementary material can be of help to understand the work
in greater detail. Here, we describe the DeepPS2 architecture and remaining
results on surface normal, albedo, shading, and illumination estimation. Also,
we demonstrate qualitatively the results of image reconstruction and
relighting on different objects from the DiLiGenT benchmark [38].
### 7.1 DeepPS2 Architecture
Table 3 describes the detailed network architecture. The design of all the
modules (except the illumination module) is inspired by that of Hourglass
networks [46].
Table 3: Detailed network architecture of DeepPS2
Module | Architecture
---|---
Encoder | | conv(k=6, p=2, s=2, cin = 7, cout = 32), BN, ReLU
---
conv(k=4, p=1, s=2, cin = 32, cout = 64), BN, ReLU
conv(k=4, p=1, s=2, cin = 64, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 256), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout = 512), BN, ReLU
| Decoder
---
(Normal and Albedo)
| conv(k=4, p=1, s=2, cin = 512, cout = 256), BN, ReLU
---
conv(k=4, p=1, s=2, cin = 512, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 256, cout = 64), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 32), BN, ReLU
conv(k=4, p=1, s=2, cin = 64, cout = 64), BN, ReLU
Normal: conv(k=5, p=2, s=1, cin=64, cout = 3), Tanh
Albedo: conv(k=5, p=2, s=1, cin=64, cout=6), Tanh
Illumination | | conv(k=3, p=0, s=1, cin = 9, cout = 64), BN, ReLU
---
conv(k=3, p=0, s=1, cin = 64, cout = 128), BN, ReLU
conv(k=3, p=0, s=1, cin = 128, cout = 256), BN, ReLU
| | Regress $\theta$:
---
Linear(256, 256), ReLU, Dropout(0.25)
Linear(256, 64), ReLU, Dropout(0.25)
Linear(64, 5)
| | Regress $\phi$:
---
Linear(256, 256), ReLU, Dropout(0.25)
Linear(256, 64), ReLU, Dropout(0.25)
Linear(64, 5)
Albedo Refinement | | conv(k=6, p=2, s=2, cin = 44, cout = 128), BN, ReLU
---
conv(k=4, p=1, s=2, cin = 128, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 256), BN, ReLU
conv(k=4, p=1, s=2, cin = 256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin =256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin =256, cout = 64), BN, ReLU
conv(k=5, p=2, s=1, cin =64, cout = 6), Tanh
Image Reconstruction | | conv(k=6, p=2, s=2, cin = 15, cout = 64), BN, ReLU
---
conv(k=4, p=1, s=2, cin = 64, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 256), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout=64), BN, ReLU
conv(k=4, p=1, s=2, cin=128, cout=64), BN, ReLU
conv(k=5, p=2, s=1, cin=64, cout=6), Tanh
Image Relighting | | conv(k=6, p=2, s=2, cin = 7, cout = 64), BN, ReLU
---
conv(k=4, p=1, s=2, cin = 64, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin = 128, cout = 256), BN, ReLU
lighting feature(256)
conv(k=4, p=1, s=2, cin=256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout = 128), BN, ReLU
conv(k=4, p=1, s=2, cin=256, cout=64), BN, ReLU
conv(k=4, p=1, s=2, cin=128, cout=64), BN, ReLU
conv(k=5, p=2, s=1, cin=64, cout=6), Tanh
| | Lighting Feature:
---
conv(k=1, p=0, s=1, cin=3, cout=64)
conv(k=1, p=0, s=1, cin=64, cout=128), BN, Upsample(2)
conv(k=3, p=1, s=1, cin=128, cout=128), BN, Upsample(2)
conv(k=3, p=1, s=1, cin=128, cout=256), BN, Upsample(2)
conv(k=3, p=1, s=1, cin=256, cout=256)
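To make the layer specification in Table 3 concrete, a minimal PyTorch sketch of the encoder column is given below. It is a hypothetical reconstruction from the table alone: the composition of the 7-channel input and the input resolution are assumptions, and the sketch is not the released implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of the DeepPS2 encoder column from Table 3 (7-channel input, 512-channel output)."""
    def __init__(self):
        super().__init__()
        def block(cin, cout, k, s, p):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=k, stride=s, padding=p),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
            )
        self.layers = nn.Sequential(
            block(7,   32, k=6, s=2, p=2),   # conv(k=6, p=2, s=2)
            block(32,  64, k=4, s=2, p=1),
            block(64, 128, k=4, s=2, p=1),
            block(128, 256, k=4, s=2, p=1),
            block(256, 512, k=4, s=2, p=1),
        )

    def forward(self, x):
        return self.layers(x)

# Assumed input: two RGB images plus a mask stacked along channels -> 7 channels.
feat = Encoder()(torch.randn(1, 7, 256, 256))
print(feat.shape)  # each stride-2 convolution halves the resolution: (1, 512, 8, 8)
```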
### 7.2 Results on Normal Estimation
Figure 5 shows the qualitative comparison of the surface normal maps obtained
using DeepPS2 with other baselines [41, 19, 4] over the six remaining objects
on the DiLiGenT benchmark dataset.
Figure 5: Normal estimation results on remaining objects in the DiLiGenT
benchmark
### 7.3 Additional Inverse Rendering Results
Figure 6 shows the qualitative comparison of the estimated illumination,
albedo, and shading through DeepPS2. We also show the image reconstruction and
relighting results along with the SSIM value.
Figure 6: Inverse rendering results on additional objects from DiLiGenT
benchmark
# Orion-14B: Open-source Multilingual
Large Language Models
OrionStar Inc Authors are listed in Appendix A.
(Jan 2024)
###### Abstract
In this study, we introduce Orion-14B, a collection of multilingual large
language models with 14 billion parameters. We utilize a data scheduling
approach to train a foundational model on a diverse corpus of 2.5 trillion
tokens, sourced from texts in English, Chinese, Japanese, Korean, and other
languages. Additionally, we fine-tuned a series of models tailored for
conversational applications and other specific use cases. Our evaluation
results demonstrate that Orion-14B achieves state-of-the-art performance
across a broad spectrum of tasks. We make the Orion-14B model family and its
associated code publicly accessible111https://github.com/OrionStarAI/Orion,
aiming to inspire future research and practical applications in the field.
## 1 Introduction
Three hundred years ago, Gottfried Wilhelm Leibniz’s insightful declaration
that "Language is the mirror of the mind" profoundly resonates in the
contemporary exploration of language. This thought provides a philosophical
foundation for understanding the intricate relationship between language and
intelligence. In the 20th century, language modeling (LM) became a fundamental
approach in artificial intelligence, forming the cornerstone of natural
language processing (NLP). The goal of language modeling is to learn the
probability distribution of word sequences. Despite its simple modeling
procedure, it encapsulates substantial information about languages. Given that
a language contains $N$ words, the potential number of word sequences of the
length of $L$ is $N^{L}$. However, the actual number of sentences commonly
used in the language is far fewer than $N^{L}$. This discrepancy highlights
how language models effectively encode linguistic information.
Traditionally, statistical methods were employed to model word frequency in a
language. Among these, the $N$-gram model has been widely adopted, determining
the probability of a word based on the previous $N-1$ words. Though
straightforward and efficient, the method suffers from the data sparsity
problem. With the advancement of neural networks, a paradigm shift occurred
towards employing neural networks for language modeling. There are many
variations of neural language models, such as multi-layer perceptron (MLP)
(Bengio et al., 2000), recurrent neural networks (RNN) (Mikolov et al., 2010;
Yao et al., 2013), and transformer (Vaswani et al., 2017; Devlin et al.,
2019).
In recent years, the increase of model sizes and the scale of training data
have significantly boosted the capability of language models (Peters et al.,
2018; Radford et al., 2018; Devlin et al., 2019). Large language models (LLMs)
have exhibited remarkable promise in many traditional NLP tasks, such as
dialogue systems, machine translation, and information retrieval. Moreover, LLMs
have proven adept at complex tasks such as reasoning, code generation, and
creative writing. These advancements have inspired both the academic and
industrial sectors to further investigate the underlying principles and
potential applications of LLMs.
The launch of ChatGPT/GPT-3.5 (OpenAI, 2022a) in 2022 captured tremendous
attention from the public, pushing the boundaries of what AI can achieve and
motivating researchers and engineers to delve deeper into their potential.
While GPT-3.5 and its successor, GPT-4 (OpenAI, 2022b), are prime examples of
LLMs, their specific model architectures and training methodologies remain
undisclosed. In contrast, Meta’s release of LLaMA (Touvron et al., 2023a) and
LLaMA 2 (Touvron et al., 2023b) have established a widely-recognized LLM
architecture within the open-source community, with numerous libraries
adopting these models. Despite LLaMA’s impressive performance, its primary
focus on English limits its applicability to other languages. Recently, there
has been a surge in the release of multilingual LLMs such as ChatGLM (THUDM,
2023), Baichuan (Baichuan, 2023a, b), Qwen (Bai et al., 2023a), InternLM
(InternLM, 2023), XVERSE (Yuanxiang, 2023), Skywork (Wei et al., 2023) and Yi
(01-ai, 2023). These models, trained on mainly English and Chinese datasets,
have shown promising results in tasks involving both English and Chinese.
Additionally, there has been a growing trend of LLMs specifically designed to
enhance performance in other languages, such as Japanese (Preferred Networks,
2023; Sasaki et al., 2023; Kojima, 2023; Lee et al., 2023b) and Korean (Kim et
al., 2021; Ko et al., 2023a).
In this technical report, we present Orion-14B, a family of multilingual
language models with 14 billion parameters. The foundation model is trained on
a diverse dataset of 2.5 trillion (2.5T) tokens, containing languages such as
English, Chinese, Japanese, Korean, and others. It has demonstrated superior
performance across a broad spectrum of tasks in multilingual settings.
We also provide a series of fine-tuned models built upon the foundation
model, each trained with a different focus, such as conversation, long-context
text handling, quantization, and specific application requirements.
The remainder of this report describes our data preparation (Section 2),
pretraining methodology (Section 3), fine-tuning methodology (Section 4),
evaluation results (Section 5), extension works (Section 6), and conclusions
(Section 7).
## 2 Data
In the training framework of LLMs, the role of data is crucial in determining
the efficacy and performance of these models. Effective data processing for
pretraining is essential for achieving the desired outcomes. This involves
acquiring data from diverse sources, ensuring the high quality of the data
through thorough filtering, and removing any redundant information. This
section will discuss these processes in detail, outlining the necessary steps
to prepare and refine data to suit the stringent requirements of LLM training.
### 2.1 Data Source
Pretraining of LLMs requires tremendous amounts of data. Hoffmann et al. (2022)
offered guidelines regarding the optimal quantity of training data for models
of varying sizes. For example, an LLM with 10 billion parameters requires 205
billion tokens for pretraining. However, recent works (Touvron et al., 2023b;
Baichuan, 2023b; Wei et al., 2023) on training 10-billion-parameter models
have utilized 2.5 to 3 trillion tokens, substantially exceeding the
recommended data volume. These efforts have yielded notably impressive
results, demonstrating the efficacy of training with significantly larger
datasets than those proposed in the aforementioned study.
In order to obtain such a large amount of data, it is imperative to collect
data from a multitude of diverse, high-quality sources. Our dataset
incorporates texts in multiple languages, with English and Chinese being
predominant, constituting over 90% of the entire dataset. Particular efforts
are also made to include Japanese and Korean texts, which account for more
than 5% of the dataset. The remaining portion comprises texts in various other
languages, such as Spanish, French, German, Arabic, and more.
In terms of content and style, the dataset primarily consists of written
language, with spoken language constituting only a minor portion. The dataset
spans a broad spectrum of topics, including web pages, news articles,
encyclopedic entries, books, source code, and academic publications, among
others. The diversity within the dataset is a crucial factor in achieving
superior performance across a range of tasks. The detailed distribution of the
data sources is shown in Fig. 1. We believe that different types of corpora
exert varying influences on the model training process; for instance, some may
be more effective for language understanding, while others better facilitate
knowledge reasoning. Unlike many existing studies that typically employ random
shuffling of training examples, we strategically feed the model with varied
data sources across different training stages. We believe this method leads to
more efficient data usage. The details of this approach will be elaborated in
Section 3.
Figure 1: Data sources distribution.
### 2.2 Data Quality
Data quality is essential in the training of LLMs. To assure high-quality
data, we have implemented a series of measures for data filtering, detailed as
follows:
* •
Text normalization: The datasets contain a large number of texts from various
sources, such as web pages and ebooks. These texts are often accompanied by
HTML, special characters, or other format tags, which are not useful for LLM
training. We employ a series of tools, such as regular expressions and format
parsers, to effectively eliminate them.
* •
Harmful content removal: The Internet contains harmful and spam content. Our
approach to mitigate this involves a two-stage process: the initial stage
utilizes keywords and regular expressions matching, followed by a deep
learning-based model designed to identify and remove such content. It is
important to note that entirely eliminating harmful text from the training
dataset could lead to a scenario where the trained model lacks the ability to
identify and appropriately respond to harmful information (Touvron et al.,
2023b). Therefore, we intentionally retain a minimal amount of harmful text in
the dataset. This approach ensures that the model remains capable of
recognizing and effectively addressing such content.
* •
Personal information removal: Some of the text data includes personal details
like names, phone numbers, and addresses. We utilize rule-based methods for
detection and either substitute these with placeholders or remove them
entirely.
* •
Quality filtering: This step is crucial in data processing. We first apply a
set of rules to filter the data, such as removing texts with excessive
repetition (a toy sketch of such rule-based checks follows this list).
Additionally, we use $N$-gram perplexity models to exclude texts
with anomalously high perplexity. Lastly, our proprietary data quality models
are employed to select high-quality data. We emphasize that while high quality
is essential for LLM training, achieving a balance between quality and
quantity of training data is a non-trivial task. Our models are optimized to
retain as much data as possible while maintaining high data quality, which is
one of the key factors in the successful training of LLMs.
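As referenced in the quality-filtering item above, the toy sketch below illustrates one of the simpler rule-based checks (document length and $n$-gram repetition). The thresholds are invented for illustration; the $N$-gram perplexity models and the proprietary quality models mentioned in the text are not reproduced here.

```python
from collections import Counter

def repetition_ratio(text, n=3):
    """Fraction of word n-grams that are repeats; high values flag repetitive text."""
    words = text.split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    return sum(c - 1 for c in counts.values()) / len(ngrams)

def keep_document(text, min_words=50, max_repetition=0.2):
    """Toy rule-based filter: drop documents that are too short or too repetitive.

    The thresholds are placeholders; the actual rules used for Orion-14B are not
    described in detail in the report.
    """
    words = text.split()
    return len(words) >= min_words and repetition_ratio(text) <= max_repetition

print(keep_document("buy now " * 100))   # False: highly repetitive
print(keep_document("a short note"))     # False: too short
```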
### 2.3 Deduplication
Given that the training data for LLMs is sourced from a variety of origins,
there is a significant likelihood of encountering duplicate data. Duplicate
data can detrimentally affect the training process, potentially leading to a
model biased towards certain data sources and a decline in performance (Lee et
al., 2021; Nunes et al., 2023; Penedo et al., 2023). To address this, we
develop a deduplication procedure to eliminate redundant data.
In this process, we extract key words and phrases from each document and
compute their corresponding embedding vectors and SimHash vectors (Indyk and
Motwani, 1998; Charikar, 2002). These vectors are then compared to those in
our database. If a vector in the database shows similarity within a certain
threshold, the document is considered a duplicate and is subsequently
discarded.
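A minimal sketch of a SimHash-based duplicate check is given below. The 64-bit fingerprint, the MD5-based token hashing, and the Hamming-distance threshold are common choices assumed for illustration; they are not the exact pipeline used for Orion-14B.

```python
import hashlib

def simhash(tokens, bits=64):
    """Compute a SimHash fingerprint from a list of tokens (e.g. key words/phrases)."""
    votes = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += 1 if (h >> i) & 1 else -1
    fingerprint = 0
    for i in range(bits):
        if votes[i] > 0:
            fingerprint |= 1 << i
    return fingerprint

def is_duplicate(fp_a, fp_b, max_hamming=3):
    """Treat documents as near-duplicates if their fingerprints differ in few bits."""
    return bin(fp_a ^ fp_b).count("1") <= max_hamming

doc_a = "orion is a multilingual large language model trained on web pages".split()
doc_b = "a multilingual large language model orion trained on web pages is".split()
doc_c = "completely different text about tokenizers and byte pair encoding".split()

print(is_duplicate(simhash(doc_a), simhash(doc_b)))          # True: same bag of words
print(bin(simhash(doc_a) ^ simhash(doc_c)).count("1"))       # typically a much larger distance
```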
Importantly, we note that while LLMs have shown significant advancements in
numerous NLP tasks, some studies (Yang et al., 2023; Golchin and Surdeanu,
2023; Wei et al., 2023) indicate that part of this improvement might be
attributed to unintentional inclusion of evaluation data in the training
datasets, potentially leading to overestimated results. To address this, we
adopt our deduplication approach for all evaluation datasets to prevent the
pretraining dataset from containing texts in the evaluation sets, thereby
enhancing the integrity and reliability of our model’s evaluation results. We
will further discuss the data contamination in detail in Section 5.3.
## 3 Pretraining
### 3.1 Tokenizer
A tokenizer is a basic component of an LLM, which needs to represent the text
distribution of the language while maintaining a favorable vocabulary size
for training. For a multilingual tokenizer, statistical methods are typically
employed to generate word-level or subword-level tokens from texts in multiple
languages. We utilize the byte-pair encoding (BPE) algorithm (Shibata et al.,
1999), implemented via SentencePiece (Kudo and Richardson, 2018). Our
configuration ensures a character coverage of 99.99%, with rare characters
defaulting to UTF-8 bytes. To build a diverse corpus and align with our
training data distribution, we curate a broad spectrum of text types from our
training corpus. This includes English, Simplified Chinese, Traditional
Chinese, Japanese, Korean, a few other languages, as well as rare characters.
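A minimal sketch of training such a tokenizer with SentencePiece is shown below. The corpus file name is a placeholder and the remaining options are assumptions; only the BPE model type, the 99.99% character coverage, and the byte fallback for rare characters follow the description above.

```python
import sentencepiece as spm

# Train a BPE tokenizer on a sampled multilingual corpus (one sentence per line).
# "corpus_sample.txt" is a placeholder; a vocabulary this large also requires a
# correspondingly large training corpus.
spm.SentencePieceTrainer.train(
    input="corpus_sample.txt",
    model_prefix="mtok",
    model_type="bpe",
    vocab_size=84608,            # the reported vocabulary size
    character_coverage=0.9999,   # rare characters fall back to UTF-8 bytes
    byte_fallback=True,
)

sp = spm.SentencePieceProcessor(model_file="mtok.model")
ids = sp.encode("多言語モデルのためのトークナイザ")  # Japanese example sentence
print(len(ids), sp.decode(ids))
```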
In Table 1, we provide a detailed comparison of our tokenizer with other open-
source tokenizers. This comparison includes vocabulary size and compression
ratio (CR), the latter calculated by the ratio of the size of the original
data to the size of the tokenized data.
Table 1: Tokenizer comparison with other open-source LLMs. We compare vocabulary sizes and compression ratios for simplified Chinese (zh_cn), traditional Chinese (zh_tw), and English, respectively. Model | Vocab Size | CR (zh_cn) | CR (zh_tw) | CR (en)
---|---|---|---|---
LLaMA2 | 32,000 | 1.377 | 1.589 | 1.153
Yi | 64,000 | 0.606 | 0.785 | 1.084
Baichuan2 | 125,696 | 0.554 | 0.783 | 1.077
ChatGLM3 | 65,024 | 0.582 | 0.703 | 1.081
Skywork | 65,519 | 0.672 | 0.846 | 1.153
Orion-14B | 84,608 | 0.549 | 0.656 | 1.067
### 3.2 Architecture
Given that LLaMA2 has achieved superior performance, its architecture has been
widely adopted by many open-source LLMs. In our approach, we adhere to the
LLaMA2 architecture while implementing several modifications. These include
extending the number of tokens to 84,608 and enlarging the dimensions of the
feed-forward network (FFN) to 15,360. We employ rotary positional embeddings
(RoPE) (Su et al., 2021) for positional encoding to accommodate context
lengths of up to 4096 tokens. The model uses 40 transformer layers with 40
attention heads each. The total number of parameters is 14.4 billion,
slightly exceeding that of LLaMA2-13B. Detailed model parameters are provided
in Table 2.
Table 2: A comparison of model architecture. The table shows comparison of our model and several open-source model with similar model size. Model | seq_len | position embedding | hidden size | FFN size | # layers | # heads
---|---|---|---|---|---|---
LLaMA2-13B | 4096 | RoPE | 5,120 | 13,824 | 40 | 40
Skywork-13B | 4096 | RoPE | 5,120 | 12,288 | 52 | 36
Baichuan2-13B | 4096 | AliBi | 5,120 | 13,696 | 40 | 40
Qwen-14B | 2048 | RoPE | 5,120 | 13,696 | 40 | 40
InternLM-20B | 4096 | RoPE | 5,120 | 13,824 | 60 | 40
Orion-14B | 4096 | RoPE | 5,120 | 15,360 | 40 | 40
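For reference, the modifications above can be expressed as a LLaMA-style configuration. The sketch below uses the Hugging Face `LlamaConfig` fields as a convenient stand-in; this mapping and the rough parameter count are our own illustration, not the actual training configuration.

```python
from transformers import LlamaConfig

# Hypothetical mapping of the reported Orion-14B hyperparameters onto a LLaMA-style config.
config = LlamaConfig(
    vocab_size=84608,              # extended vocabulary (Table 1)
    hidden_size=5120,
    intermediate_size=15360,       # enlarged FFN dimension
    num_hidden_layers=40,
    num_attention_heads=40,
    max_position_embeddings=4096,  # RoPE with a 4096-token context
)

# Rough parameter count: input/output embeddings plus per-layer attention and SwiGLU FFN
# weights; norm layers and embedding tying are ignored, so the figure is approximate.
emb = config.vocab_size * config.hidden_size
per_layer = 4 * config.hidden_size**2 + 3 * config.hidden_size * config.intermediate_size
total = 2 * emb + config.num_hidden_layers * per_layer
print(f"approx. parameters: {total / 1e9:.1f}B")  # close to the reported 14.4B
```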
### 3.3 Infrastructure
For the training of Orion-14B, we employed Megatron-LM (Shoeybi et al., 2020)
on a cluster comprising 11 servers, each equipped with 8 NVIDIA H800 GPUs. To
optimize training efficiency, we integrated FlashAttention2 (Dao, 2023) and
APEX (NVIDIA, 2023) into the Megatron-LM framework, achieving a training speed of
4,000-5,000 tokens/GPU/second.
### 3.4 Training Process
To train Orion-14B, we initiate the model training with a learning rate warm-
up stage spanning 2000 iterations, during which we linearly increase the
learning rate to the maximal learning rate of 3e-4. We then apply a cosine
schedule to gradually decrease the learning rate to 3e-5 throughout the
training process. We employ the AdamW (Loshchilov and Hutter, 2018)
optimizer, setting $\beta_{1}$ to 0.9 and $\beta_{2}$ to 0.95, respectively.
In addition, we apply a weight decay factor of 0.1 and enforce a gradient
clipping threshold of 1.0 to ensure the stability of the training process. The
model is trained using BF16/FP32 mixed precision, with a batch size of 1408,
corresponding to approximately 5.7 million tokens per step.
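The warm-up and cosine decay described above can be sketched as follows. The total step count is a rough placeholder derived from 2.5T tokens at roughly 5.7M tokens per step, and the toy model and loss exist only to make the loop runnable; this is not the Megatron-LM training loop.

```python
import math
import torch

model = torch.nn.Linear(10, 10)          # toy stand-in for the 14B-parameter model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4,
                              betas=(0.9, 0.95), weight_decay=0.1)

warmup_steps = 2000
total_steps = 438_000                     # ~2.5T tokens / ~5.7M tokens per step (assumed)
max_lr, min_lr = 3e-4, 3e-5

def lr_at(step):
    """Linear warm-up to max_lr, then cosine decay down to min_lr."""
    if step < warmup_steps:
        return max_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

for step in range(3):                     # a few illustrative steps
    for group in optimizer.param_groups:
        group["lr"] = lr_at(step)
    loss = model(torch.randn(4, 10)).pow(2).mean()   # placeholder loss
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
```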
### 3.5 Data Scheduling
Training large language models requires hundreds of billions to trillions of
tokens. It is an interesting area to explore scaling laws in LLM training and
literature from Kaplan et al. (2020) through Hoffmann et al. (2022) to Touvron
et al. (2023b) suggests that model training tends to favor an increase in the
number of tokens over model sizes. We use a 2.5T token training dataset for
our 14B parameter model, aiming for a balance between computational efficiency
and cost.
On the other hand, while numerous theoretical and empirical studies have
examined the interplay between model size and training data volume, there is
no universally accepted methodology for scheduling training data. Considering
that humans acquire knowledge in a deliberate order (Evanson et al., 2023), it
is plausible that language models might also benefit from a structured
learning order when processing training data. Curriculum learning (Bengio et
al., 2009) has been suggested as a method to organize the learning process by
progressively increasing the complexity of the training data. However, most
prior studies have concentrated on sample-level data and smaller datasets.
Chen et al. (2023) employed a skills-based framework for training data
selection and continuous pretraining with a 3B-parameter language model. This
approach achieved greater accuracy compared to the baseline method of uniform
data source sampling, suggesting the potential efficacy of strategic data
scheduling.
In training the Orion-14B model, we intentionally develop a data scheduling
strategy that organizes training data to incrementally increase its
complexity. We divide the training data into several stages based on the data
sources and their complexity. These stages are differentiated by the mix
ratios of data sources. Initial stages primarily include data with common
knowledge, such as web pages and news articles. In the subsequent stages, we
gradually augment the proportion of data containing more complex knowledge,
including textbooks, academic papers, and source code. Additionally, the
linguistic diversity of the training data is expanded progressively from
English and Chinese to Japanese and Korean. The brief structure of our
training data schedule is depicted in Table 3.
Table 3: Training data schedule for Orion-14B. Primary data sources and languages refer to data that totally account for more than 90% of the whole training data in each stage. Stages | Tokens (B) | Primary data sources | Primary languages
---|---|---|---
Early stages | 0 ~600 | web pages, news articles | English, Chinese
Middle stages | 600 ~1100 | web pages, news articles, textbooks, academic papers | English, Chinese, Others
Final stages | 1100 ~2000 | web pages, news articles, textbooks, academic papers, source code | English, Chinese, Others
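A toy sketch of this stage-wise sampling is given below. The per-stage mixing weights are invented placeholders; only the token thresholds (600B and 1,100B) and the general progression of data sources follow Table 3.

```python
import random

# Placeholder mixing weights per stage; the real proportions are not published.
STAGE_MIX = {
    "early":  {"web": 0.7, "news": 0.3},
    "middle": {"web": 0.4, "news": 0.2, "textbooks": 0.2, "papers": 0.2},
    "final":  {"web": 0.3, "news": 0.1, "textbooks": 0.2, "papers": 0.2, "code": 0.2},
}

def stage_for(tokens_seen_b):
    """Map the number of tokens seen so far (in billions) to a training stage."""
    if tokens_seen_b < 600:
        return "early"
    if tokens_seen_b < 1100:
        return "middle"
    return "final"

def sample_source(tokens_seen_b):
    """Pick the data source of the next batch according to the current stage's mix."""
    mix = STAGE_MIX[stage_for(tokens_seen_b)]
    sources, weights = zip(*mix.items())
    return random.choices(sources, weights=weights, k=1)[0]

print(sample_source(300), sample_source(800), sample_source(1500))
```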
To assess the effectiveness of the data scheduling approach, we monitor the
loss on a validation set throughout the training process. This validation set
consists of 5,000 documents, each unseen in the training set. It includes a
diverse collection of English and Chinese texts sourced from a variety of data
sources. As shown in Fig. 2, there are significant reductions in validation
loss aligned with shifts in the training data distribution at 600B and 1,100B
tokens. Additionally, the validation loss exhibits initial fluctuations,
stabilizing progressively with continued training. This trend indicates that
the model increasingly adapts to the diversity of data types as training
progresses.
Figure 2: Validation loss during the training process. The validation set
consists of 5,000 documents including a diverse collection of English and
Chinese texts sourced from a variety of data sources.
To our knowledge, most prior LLMs were trained on fully shuffled
training data, which was fed to the model in a random order.
Orion-14B is the first LLM trained with a specific data scheduling strategy.
The evaluation results indicate that this model demonstrates impressive
performance in language understanding tasks at its early stages and rapidly
enhances its capabilities in reasoning and academic tasks in later stages,
aligning with our data scheduling policy. Notably, Orion-14B, trained on 2.5T
tokens, achieves comparable performance to other open-source models trained on
2.6T to 3T tokens, thereby illustrating the efficiency of our data utilization
approach.
## 4 Fine-tuning
During the pretraining stage, an LLM is trained to predict the next token at
each step. However, in many applications, the model needs to generate a
desired response to a given prompt. Thus, in the next stage, LLMs typically
undergo further fine-tuning using supervised learning, where the training data
consists of paired input and output text sequences. Further, to enhance the
quality and safety, approaches like Reinforcement Learning from Human Feedback
(RLHF) (Christiano et al., 2017; Ouyang et al., 2022) or Direct Preference
Optimization (DPO) (Rafailov et al., 2023) can be employed. In this work, our
primary focus is on the supervised fine-tuning (SFT) stage, leaving RLHF and
DPO for future exploration.
### 4.1 SFT Data
High-quality, diverse data has been proven to be crucial to supervised fine-
tuning in previous studies (Touvron et al., 2023b; Zhou et al., 2023). To
construct our SFT training data, we draw from two primary sources: a human-
labeled dataset and an open-source filtered dataset.
For a high-quality human-labeled dataset, we assemble a team of expert
annotators who spend weeks creating precisely annotated data. To ensure data
quality, all annotators adhere to three key principles—helpfulness,
truthfulness, and harmlessness—as outlined in InstructGPT (Ouyang et al.,
2022) and LLaMA2 (Touvron et al., 2023b). In total, we produce approximately
220,000 human-labeled SFT data entries.
While the human-labeled dataset is of high quality, its volume is insufficient
for training a high-performance LLM. Therefore, we also construct a large-
scale, open-source filtered dataset. The original SFT data includes
collections from various open-source datasets, such as COIG (Zhang et al.,
2023a), WildChat (Wenting Zhao, 2023), OpenOrca (Lian et al., 2023), and
UltraChat (Ding et al., 2023). Given the variable quality and occasional
presence of inappropriate content in these open-source datasets, we implement
a cleaning process inspired by methods from Platypus (Lee et al., 2023a) and
MoDS (Du et al., 2023), comprising the following steps:
* •
Rule-based filtering: We use regular expressions and keywords for simple
filtering to remove personal information, temporal-sensitive data, etc.
* •
Quality filtering: A large NLP model scores the data quality on a scale from 1
to 10, retaining only data with a score of 7 or higher.
* •
Semantic deduplication: Text embeddings are used for semantic deduplication,
considering texts with a similarity greater than 0.98 as duplicates.
Using this approach, we construct an open-source filtered dataset of 630,000
samples. Combined with the human-labeled data, we assemble an SFT dataset of
850,000 training pairs for supervised fine-tuning.
### 4.2 Training details
To fine-tune a pretrained LLM, we prepend <human> and <assistant> as headers
to the prompt text and the response text, respectively. The training process
employs the AdamW optimizer, with hyperparameters configured as follows:
$\beta_{1}$ is set to 0.9, $\beta_{2}$ to 0.95, and $\epsilon$ to $1e-8$. We
limit the sequence length to 4096 and use a batch size of 128. Our training
regimen spans three epochs, involving over 500k samples; the learning rate
is incrementally increased over the first 1,500 steps to a maximum of $1e-5$.
To prevent overfitting, we apply a weight decay of 0.1, a dropout rate of 0.1,
and a gradient clipping threshold of 1.0.
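A minimal sketch of how a prompt/response pair might be assembled into one training sequence with the <human> and <assistant> headers is given below. Masking the prompt tokens out of the loss is a common SFT convention assumed here for illustration, not a detail stated in the report.

```python
def build_sft_example(prompt, response, tokenizer, max_len=4096):
    """Build input_ids and labels for one supervised fine-tuning pair.

    Prompt tokens receive label -100 (ignored by the loss); this masking
    convention is an assumption, not a detail given in the report.
    """
    prompt_ids = tokenizer.encode(f"<human>{prompt}")
    response_ids = tokenizer.encode(f"<assistant>{response}")
    input_ids = (prompt_ids + response_ids)[:max_len]
    labels = ([-100] * len(prompt_ids) + response_ids)[:max_len]
    return {"input_ids": input_ids, "labels": labels}

# Usage with any tokenizer exposing .encode(), e.g. a SentencePiece processor:
# example = build_sft_example("What is a language model?", "A model that ...", sp)
```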
## 5 Evaluation
### 5.1 Standard Evaluation
To effectively evaluate the LLM, we categorize the standard evaluation sets
into the examinations and professional knowledge, and language understanding
and common knowledge. We select the most common evaluation sets in each
category as follows:
Professional Knowledge and Reasoning
* •
C-Eval (Huang et al., 2023): A comprehensive Chinese evaluation benchmark
consisting of more than 10,000 multi-choice questions.
* •
CMMLU (Li et al., 2023): A general evaluation benchmark specifically designed
to evaluate the knowledge and reasoning abilities of LLMs within the context
of Chinese language and culture.
* •
MMLU (Hendrycks et al., 2020): A widely used benchmark designed to measure
knowledge acquired during pretraining by evaluating models.
* •
AGIEval (Zhong et al., 2023): A human-centric benchmark crafted to assess the
general capabilities of foundation models in tasks aligned with human
cognition and problem-solving.
* •
Gaokao (Zhang et al., 2023b): A dataset leverages questions from China’s
national college entrance examination to test LLMs.
* •
BBH (Suzgun et al., 2022): A challenging subset of the Big-Bench suite,
covering a wide array of themes, such as linguistics, mathematics, common
sense reasoning, biology, physics, software development, and more.
Language Understanding and Common Knowledge
* •
RACE (Lai et al., 2017): A comprehensive reading comprehension dataset
comprising over 28,000 passages and nearly 100,000 questions. It contains
reading and comprehension materials for both middle school (middle) and high
school (high) academic levels.
* •
HellaSwag (Zellers et al., 2019): A challenge dataset for evaluating
commonsense language inference that is particularly difficult for state-of-
the-art models.
* •
PIQA (Bisk et al., 2020): A dataset introducing the task of physical
commonsense reasoning and a corresponding benchmark dataset.
* •
Lambada (Paperno et al., 2016): A collection of narrative passages where human
subjects can guess the last word if exposed to the whole passage, but not if
they only see the last sentence preceding the target word.
* •
WSC (Levesque et al., 2012): A pronoun disambiguation task, which requires
determining the noun that the pronoun refers to according to the context.
For comparison, we select the most popular LLMs with a parameter range of
10-20 billion: LLaMA 2-13B (Touvron et al., 2023b), Skywork-13B (Wei et al.,
2023), Baichuan 2-13B (Baichuan, 2023b), Qwen-14B (Bai et al., 2023a),
InternLM (InternLM, 2023).
To ensure consistent comparisons, we employ open-source LLM evaluation
frameworks such as OpenCompass (Contributors, 2023) and LM-Eval-Harness (Gao
et al., 2021) for a unified performance evaluation of LLMs. For the models we
compared, we refer to the published scores from OpenCompass or their official
reports.
Table 4: LLM evaluation results on examination and professional knowledge. Bold font denotes the best score in each category, a convention followed in all subsequent tables throughout this paper. Model | C-Eval | CMMLU | MMLU | AGIEval | Gaokao | BBH
---|---|---|---|---|---|---
LLaMA 2-13B | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 45.6
Skywork-13B | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 48.3
Baichuan 2-13B | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 49.0
Qwen-14B | 71.7 | 70.2 | 67.9 | 51.9 | 62.5 | 53.7
InternLM-20B | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 52.5
Orion-14B | 72.9 | 70.6 | 69.9 | 54.7 | 62.1 | 56.5
The evaluation results in Table 4 highlight Orion-14B’s superior performance
across various examinations and professional knowledge evaluation sets,
compared to other LLMs. Orion-14B achieves the highest scores in C-Eval,
CMMLU, MMLU, AGIEval, and BBH, indicating its strong capabilities in
understanding and reasoning within professional contexts. While it excels in
most benchmarks, it is slightly outperformed by Qwen-14B in the Gaokao
evaluation. These results position Orion-14B as a highly competitive and
robust model for complex and professional tasks.
Table 5: LLM evaluation results on language understanding and common knowledge. Model | RACE-middle | RACE-high | HellaSwag | PIQA | Lambada | WSC
---|---|---|---|---|---|---
LLaMA 2-13B | 63.0 | 58.9 | 77.5 | 79.8 | 76.5 | 66.3
Skywork-13B | 87.6 | 84.1 | 73.7 | 78.3 | 71.8 | 66.3
Baichuan 2-13B | 68.9 | 67.2 | 70.8 | 78.1 | 74.1 | 65.4
Qwen-14B | 93.0 | 90.3 | 80.2 | 79.8 | 71.4 | 66.3
InternLM-20B | 86.4 | 83.3 | 78.1 | 80.3 | 71.8 | 68.3
Orion-14B | 93.2 | 91.3 | 78.5 | 79.5 | 78.8 | 70.2
As shown in Table 5, Orion-14B showcases robust performance in language
understanding and common knowledge tasks, outperforming other models in RACE
(mid and high), Lambada, and WSC benchmarks, highlighting its exceptional
comprehension and reasoning abilities. However, on HellaSwag and PIQA it is
slightly outperformed by Qwen-14B and InternLM-20B, respectively. Overall, the
results indicate Orion-14B’s strong capabilities across a spectrum of natural
language understanding benchmarks.
For a comprehensive evaluation, we also utilize all test sets used in
OpenCompass leaderboard (Contributors, 2023) to assess performance. In
OpenCompass leaderboard, the evaluation sets are organized into five
categories. The summarized results for each category are shown in Table 6,
where Orion-14B leads with an average score of 64.4%. Notably, it outperforms
other models across four categories, including Examination, Language,
Understanding, and Reasoning, indicating its excellent analytical and problem-
solving abilities. These results demonstrate Orion-14B’s robust capabilities
in a wide range of cognitive and language tasks. Detailed results for each
testset are included in the Appendix B.
Table 6: LLM evaluation results of OpenCompass testsets Model | Average | Examination | Language | Knowledge | Understanding | Reasoning
---|---|---|---|---|---|---
LLaMA 2-13B | 47.3 | 45.2 | 47.0 | 58.3 | 50.9 | 43.6
Skywork-13B | 53.6 | 61.1 | 51.3 | 52.7 | 64.5 | 45.2
Baichuan 2-13B | 49.4 | 51.8 | 47.5 | 48.9 | 58.1 | 44.2
Qwen-14B | 62.4 | 71.3 | 52.7 | 56.1 | 68.8 | 60.1
InternLM-20B | 59.4 | 62.5 | 55.0 | 60.1 | 67.3 | 54.9
Orion-14B | 64.3 | 71.4 | 55.0 | 60.0 | 71.9 | 61.6
Note that, evaluation scores are not the definitive standard for assessing an
LLM. Given the vast amount of training data, there is a high likelihood that
the dataset includes elements of the evaluation set. To avoid this, we
purposely deduplicate the evaluation datasets from our pretraining corpus,
thereby ensuring that our model’s performance genuinely reflects its
capabilities. Overlooking this critical step could lead to training a model
that is overfitted to the evaluation set, resulting in artificially high
scores. We will delve into this topic more deeply in Section 5.3.
### 5.2 Multilingual
In our training approach, while the majority of the data is in English and
Chinese, we also incorporate additional languages to enhance multilingual
performance. Notably, Japanese and Korean texts are specifically added after
surpassing 600B tokens in the training process. The total amounts of Japanese
and Korean texts are approximately 100B and 50B tokens, respectively. Despite
the lower quantity of Japanese and Korean tokens compared to English and
Chinese, the model exhibits superior performance in these languages. This
indicates a significant transfer of knowledge from the more dominant languages
during the training of the LLM.
To assess the model’s multilingual capabilities, we benchmark it against other
models trained on English+Japanese (Preferred Networks, 2023; Kojima, 2023;
Sasaki et al., 2023; Lee et al., 2023b), English+Korean (Kim et al., 2021; Ko
et al., 2023b), or multilingual datasets (Touvron et al., 2023b; Baichuan,
2023b; Bai et al., 2023a; 01-ai, 2023). We employ the datasets from Gao et al.
(2021) and Kim et al. (2022) for evaluation of Japanese and Korean,
respectively.
Table 7: Comparison of LLM performances on Japanese testsets. The header of each column stands for Japanese CommonsenseQA, Japanese NLI, MARC in Japanese, Japanese SQUAD, Japanese QKET_v2, XLSUM in Japanese, XWinograd in Japanese, MGSM, respectively. Model | Average | JCQA | JNLI | MARC | JSQD | JQK | XLS | XWN | MGSM |
---|---|---|---|---|---|---|---|---|---|---
PLaMo-13B | 52.3 | 56.7 | 42.8 | 95.8 | 70.6 | 71.0 | 8.70 | 70.5 | 2.40 |
WebLab-10B | 50.7 | 66.6 | 53.7 | 82.1 | 62.9 | 56.2 | 10.0 | 72.0 | 2.40 |
ELYZA-jp-7B | 48.8 | 71.7 | 25.3 | 86.6 | 70.8 | 64.1 | 2.50 | 62.1 | 7.20 |
StableLM-jp-7B | 51.1 | 33.4 | 43.3 | 96.7 | 70.6 | 78.1 | 10.7 | 72.8 | 2.80 |
LLaMA 2-13B | 46.3 | 75.0 | 47.6 | 38.8 | 76.1 | 67.7 | 18.1 | 63.2 | 10.4 |
Baichuan 2-13B | 57.1 | 73.7 | 31.3 | 91.6 | 80.5 | 63.3 | 18.6 | 72.2 | 25.2 |
Qwen-14B | 65.8 | 85.9 | 60.7 | 97.0 | 83.3 | 71.8 | 18.8 | 70.6 | 38.0 |
Yi-34B | 67.1 | 83.8 | 61.2 | 95.2 | 86.1 | 78.5 | 27.2 | 69.2 | 35.2 |
Orion-14B | 69.1 | 88.2 | 75.8 | 94.1 | 75.7 | 85.1 | 17.3 | 78.8 | 38.0 |
Table 8: Comparison of LLM performances on Korean testsets. $n=0$ and $n=5$ stand for $n$-shot prompts used in the evaluation. The testsets are originally in English and have been translated to Korean by Kim et al. (2022). | Average | HellaSwag | COPA | BoolQ | SentiNeg
---|---|---|---|---|---
Model | n=0 | n=5 | n=0 | n=5 | n=0 | n=5 | n=0 | n=5 | n=0 | n=5
KoGPT | 53.0 | 70.1 | 55.9 | 58.3 | 73.5 | 72.9 | 45.1 | 59.8 | 37.5 | 89.4
Polyglot-ko-13B | 69.6 | 73.7 | 59.5 | 63.1 | 79.4 | 81.1 | 48.2 | 60.4 | 91.2 | 90.2
LLaMA 2-13B | 46.7 | 63.7 | 41.3 | 44.0 | 59.3 | 63.8 | 34.9 | 73.8 | 51.5 | 73.4
Baichuan 2-13B | 52.1 | 58.7 | 39.2 | 39.6 | 60.6 | 60.6 | 58.4 | 61.5 | 50.3 | 72.9
Qwen-14B | 53.8 | 73.7 | 45.3 | 46.8 | 64.9 | 68.9 | 33.4 | 83.5 | 71.5 | 95.7
Yi-34B | 54.2 | 72.1 | 44.6 | 44.7 | 58.0 | 60.6 | 65.9 | 90.2 | 48.3 | 92.9
Orion-14B | 74.5 | 79.6 | 47.0 | 49.6 | 77.7 | 79.4 | 81.6 | 90.7 | 92.4 | 98.7
Table 9: Multilingual evaluation. Model | Train Lang | Japanese | Korean | Chinese | English
---|---|---|---|---|---
PLaMo-13B | En,Jp | 52.3 | * | * | *
Weblab-10B | En,Jp | 50.7 | * | * | *
ELYZA-jp-7B | En,Jp | 48.8 | * | * | *
StableLM-jp-7B | En,Jp | 51.1 | * | * | *
KoGPT-6B | En,Ko | * | 70.1 | * | *
Polyglot-ko-13B | En,Ko | * | 70.7 | * | *
Baichuan2-13B | Multi | 57.1 | 58.7 | 50.8 | 57.1
Qwen-14B | Multi | 65.8 | 73.7 | 64.5 | 65.4
LLaMA2-13B | Multi | 46.3 | 63.7 | 41.4 | 55.3
Yi-34B | Multi | 67.1 | 72.2 | 58.7 | 68.8
Orion-14B | Multi | 69.1 | 79.5 | 67.9 | 67.3
As shown in Tables 7 and 8, Orion-14B consistently achieves the highest scores
across the majority of the test sets. On average, it outperforms all other
LLMs in Japanese and Korean datasets, surpassing even those models with a
greater number of parameters.
To gain a clearer insight into the multilingual capabilities, we compute the
average scores for the evaluation sets in Japanese, Korean, Chinese, and
English for comparison. The scores for Japanese and Korean are derived
directly from Tables 7 and 8. For the Chinese and English datasets, we
calculate the average scores using the OpenCompass dataset, excluding the
mathematics and programming testsets.
Table 9 demonstrates Orion-14B’s impressive performance in multilingual
evaluations. It leads with top scores in Japanese, Korean, and Chinese,
surpassing other multilingual models. In English, Orion-14B is marginally
outperformed by Yi-34B, which is an LLM with a significantly higher number of
parameters. This data highlights Orion-14B’s robust proficiency in multiple
languages.
### 5.3 Analysis of Data Contamination
The rise of LLMs has led to a surge in performance on evaluation tasks.
Their superior performance is primarily attributed to the massive data
consumed by these billion/trillion-parameter LLMs during training. However,
recent work (Yang et al., 2023; Golchin and Surdeanu, 2023; Wei et al., 2023)
has shown that the performance of LLM on many downstream tasks may be inflated
due to data contamination, i.e., the presence of test data from these
downstream tasks in the pretraining data of LLMs.
As mentioned above, to prevent the pretraining dataset from containing answers
to the evaluation datasets, we apply our deduplication approach using all
evaluation datasets. This process removes text similar to the evaluation data
from the training corpus. On the other hand, to understand the influence of
such data, we also experiment with training a model using previously
deduplicated data. Specifically, we select data that were removed by
deduplication against the evaluation set but that do _not_ contain
the exact same texts as the evaluation data. In other words, this approach
allows us to use data that may be _semantically_ or _partially_ related to the
evaluation set while excluding the exact text from it. We compile a smaller
dataset of 200B tokens, which includes these selected data alongside the
regular training data. We then continue the pretraining process with this 200B
token dataset, resulting in a new pretrained model named Orion-14B-Exam. As
illustrated in Table 10, Orion-14B-Exam demonstrates
significantly higher scores on the evaluation set compared to the baseline.
Table 10: Evaluation for data contamination and overfitting. Model | C-Eval | CMMLU | MMLU | Lambada | HellaSwag
---|---|---|---|---|---
GPT-4 | 69.9 | 71 | 83 | 65.5 | 91.4
Qwen-72B | 83.3 | 61.8 | 77.3 | 76.1 | 85.4
Yi-34B | 81.8 | 82.6 | 76.3 | 73.1 | 82
Orion-14B | 72.9 | 70.6 | 69.9 | 78.8 | 78.5
Orion-14B-Exam | 92.7 | 82.9 | 85.4 | 78.5 | 85.8
The results in Table 10 reveal that manipulating training data can easily lead
to overfitting the evaluation dataset and achieving very high scores. We conduct an
additional experiment to gauge the extent of overfitting. Specifically, we
gather a collection of recent texts from the Internet, ensuring they are
unseen in any model’s training set. We then calculate the loss on this new
dataset $L_{unseen}$ and compare it to the loss on texts drawn from the
evaluation sets $L_{eval}$ mentioned in Tables 4 and 5, including C-Eval,
MMLU, HellaSwag, and others. The loss differential between these two sets
serves as an indicator of overfitting—the smaller the difference, the lower
the likelihood of overfitting to the evaluation set. The results of this
analysis are presented in Table 11. This table illustrates that with the
inclusion of the new training dataset, there is a significant reduction in the
loss on the evaluation set, decreasing from 1.87 to 1.44, clearly showing the
overfitting on the evaluation set. On the other hand, the original Orion-14B
model demonstrates consistent losses on both datasets, with a minimal
difference as expected, indicating a low level of overfitting.
Table 11: Overfitting analysis of the loss of each model. Model | $L_{unseen}$ | $L_{eval}$ | $\Delta(L_{unseen}-L_{eval})$
---|---|---|---
Baichuan2-13B | 2.23 | 1.93 | 0.30
Qwen-14B | 2.19 | 1.73 | 0.46
Qwen-72B | 2.05 | 1.54 | 0.51
Orion-14B | 2.15 | 1.87 | 0.28
Orion-14B-Exam | 2.18 | 1.44 | 0.74
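A sketch of this loss comparison is given below. The checkpoint identifier, the document lists, and the tokenization details are placeholders or assumptions; only the $\Delta(L_{unseen}-L_{eval})$ diagnostic itself follows the text.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_loss(model, tokenizer, texts, device="cuda", max_length=4096):
    """Average next-token cross-entropy loss over a list of documents."""
    losses = []
    model.eval()
    with torch.no_grad():
        for text in texts:
            ids = tokenizer(text, return_tensors="pt", truncation=True,
                            max_length=max_length).input_ids.to(device)
            losses.append(model(ids, labels=ids).loss.item())
    return sum(losses) / len(losses)

# Assumed checkpoint identifier; the document lists below are placeholders.
name = "OrionStarAI/Orion-14B-Base"
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True).to("cuda")

unseen_docs = ["recent web text unseen during training ..."]
eval_docs = ["questions drawn from C-Eval, MMLU, HellaSwag ..."]
delta = mean_loss(model, tok, unseen_docs) - mean_loss(model, tok, eval_docs)
print(f"delta(L_unseen - L_eval) = {delta:.2f}  (small -> little overfitting)")
```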
In light of these results, it is crucial to examine the evaluation methods
used in the LLM community. Since it is possible to achieve high scores
through specific training tactics, such scores may not accurately reflect the
true capabilities of an LLM. An overemphasis on top leaderboard positions can
be misleading and does not guarantee actual model proficiency. The principal
goal should be to develop robust, effective LLMs that prove their utility in a
wide range of real-world applications.
### 5.4 Fine-tuned Model Evaluations
The above evaluation utilizes standard evaluation datasets to test the
performance of the pretrained foundation model (base-model). On the other
hand, evaluating the performance of the fine-tuned model (chat-model) differs
from that of the base-model. This is because the chat-model is designed to
generate responses to given prompts, and determining the goodness of these
responses can be subjective and dependent on the specific task. To
comprehensively evaluate the chat-model’s performance, we conduct tests using
three different approaches: 1) standard evaluation sets, similar to those used
in the base-model evaluation; 2) subjective datasets based on GPT-4 scoring;
and 3) human evaluation.
Table 12: Standard evaluation for chat models. Model | CMMLU | MMLU | BBH | HellaSwag | PIQA | WSC
---|---|---|---|---|---|---
Baichuan2-13B-Chat | 58.4 | 57.0 | 49.9 | 66.9 | 77.6 | 71.2
Qwen-14B-Chat | 70.0 | 66.4 | 58.0 | 65.2 | 74.0 | 66.3
LLaMA2-13B-Chat | 38.7 | 54.6 | 40.2 | 78.2 | 78.8 | 68.3
InternLM-20B-Chat | 52.2 | 52.5 | 35.3 | 69.2 | 76.7 | 61.5
Orion-14B-Chat | 63.9 | 61.7 | 49.1 | 76.7 | 78.4 | 71.2
For the standard evaluation, we use widely recognized benchmarks, including
CMMLU, MMLU, BBH, HellaSwag, PIQA, and WSC. As indicated in Table 12, Orion-14B-Chat
maintains strong performance in HellaSwag, BBH, PIQA, and WSC. However, there
is a slight decline in performance on CMMLU and MMLU compared to the base
model in Tables 4 and 5. This is likely due to the evaluation prompts being
designed more for the base model than for the chat model. Therefore, incorporating
subjective evaluation methods alongside standard metrics could provide a more
comprehensive assessment of the model’s capabilities. We utilize MT-Bench
(Zheng et al., 2023) and AlignBench (Liu et al., 2023) for English and
Chinese, respectively.
Table 13: Subjective evaluation of MT-Bench. Model | First-Turn | Second-Turn | Average
---|---|---|---
Baichuan2-13B-Chat | 7.05 | 6.47 | 6.76
Qwen-14B-Chat | 7.30 | 6.62 | 6.96
LLaMA2-13B-Chat | 7.10 | 6.20 | 6.65
InternLM-20B-Chat | 7.03 | 5.93 | 6.48
Orion-14B-Chat | 7.68 | 7.07 | 7.37
Table 14: Subjective evaluation of AlignBench. The header of each column stands for Mathematics, Logic, Basic tasks, Chinese understanding, Comprehensive Q&A, Writing, Role-playing, and Professional tasks, and Average scores. Model | Math. | Logi. | Basic. | Chi. | Comp. | Writ. | Role. | Prof. | Avg.
---|---|---|---|---|---|---|---|---|---
Baichuan2-13B-Chat | 3.76 | 4.07 | 6.22 | 6.05 | 7.11 | 6.97 | 6.75 | 6.43 | 5.25
Qwen-14B-Chat | 4.91 | 4.71 | 6.90 | 6.36 | 6.74 | 6.64 | 6.59 | 6.56 | 5.72
LLaMA2-13B-Chat | 3.05 | 3.79 | 5.43 | 4.40 | 6.76 | 6.63 | 6.99 | 5.65 | 4.70
InternLM-20B-Chat | 3.39 | 3.92 | 5.96 | 5.50 | 7.18 | 6.19 | 6.49 | 6.22 | 4.96
Orion-14B-Chat | 4.00 | 4.24 | 6.18 | 6.57 | 7.16 | 7.36 | 7.16 | 6.99 | 5.51
The results presented in Tables 13 and 14 highlight Orion-14B-Chat’s
performance in subjective evaluations. In MT-Bench evaluation, Orion-14B-Chat
significantly outperforms other models, achieving the highest scores in both
First-Turn and Second-Turn evaluations, with an average score of 7.37. In the
AlignBench evaluation, Orion-14B-Chat excels notably in Chinese understanding,
Writing, Role-Playing, and Professional tasks. The results demonstrate
competitive performance across diverse conversational contexts.
As the chat model is designed to generate responses to prompts, human
evaluation is a critical measure of its effectiveness. Adopting an approach
similar to the arena method used in Chatbot Arena (LMSYS, 2023), we engage
human annotators to assess responses from two models in a randomized head-to-
head format. Specifically, for a given prompt, responses generated by two
anonymized models are presented to the annotators, who then rate them as
"Win," "Tie," or "Loss" based on their preference. We have 14 human annotators
evaluate a total of 3562 questions. The models compared in this arena battle
are Orion-14B-Chat, Qwen-14B-Chat, and Baichuan2-13B-Chat. As indicated in
Table 15, Orion-14B-Chat received the highest number of "win" votes,
highlighting its exceptional performance in human evaluations.
Table 15: Human arena evaluation for chat models. Model | Win | Tie | Loss
---|---|---|---
Orion-14B-Chat | 1172 | 1491 | 899
Qwen-14B-Chat | 1101 | 1592 | 869
Baichuan2-13B-Chat | 728 | 1601 | 1233
## 6 Extension Works
In practical applications, LLMs have a variety of needs, including extended
context handling, minimizing inference resource requirements, and adapting to
specific applications. To address these challenges, we conduct extension works
and develop several specialized models. Below are the extensions we have
implemented:
* •
Orion-14B-Long: This model is optimized for context lengths of more than
200,000 tokens and demonstrates performance comparable to proprietary models
on long context evaluation sets (Bai et al., 2023b; Dacheng Li and Zhang,
2023).
* •
Orion-14B-INT4: A quantized model utilizing 4-bit integer weights. It
significantly reduces the model size by 70% and increases the inference speed
by 30% while incurring a minimal performance loss of only 1% (a minimal sketch
of 4-bit weight quantization is shown after this list).
* •
Orion-14B-RAG: A chat-model fine-tuned on a custom retrieval augmented
generation dataset, achieving superior performance in retrieval augmented
generation tasks.
* •
Orion-14B-PlugIn: A chat-model specifically tailored for plugin and function
calling tasks, ideal for agent-related scenarios where the LLM acts as a
plugin and function call system.
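The quantization recipe for Orion-14B-INT4 is not described in this report; as a rough, hypothetical illustration of the idea behind 4-bit integer weights, the sketch below applies symmetric per-row int4 quantization and dequantization to a weight matrix. The function names and the per-row scheme are assumptions, not the actual method used for Orion-14B-INT4.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-row int4 quantization (illustrative only)."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0        # int4 symmetric range is [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)   # would be packed into 4 bits in practice
    return q, scale

def dequantize_int4(q, scale):
    """Recover an approximate float weight from int4 codes and per-row scales."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int4(w)
print(np.abs(w - dequantize_int4(q, s)).max())  # small reconstruction error
```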
Due to time constraints, this technical report does not cover the training
details and evaluations of these models. We make all the above models
available for public use. For more information, please refer to our open-
source library https://github.com/OrionStarAI/Orion.
## 7 Conclusion
In this study, we present Orion-14B, a diverse suite of multilingual large
language models with 14 billion (14B) parameters. This family includes a
pretrained base model and a fine-tuned chat model, as detailed in this
technical report. Additionally, we offer several extensions to Orion-14B, such
as a long context model, a quantized model, and several application-oriented
models, enhancing its versatility and applicability. These models have
demonstrated competitive performance against existing open-source models in
the field of LLMs, positioning Orion-14B as a potential strong baseline for
future LLM research.
Training a large language model like Orion-14B poses considerable challenges.
Throughout this endeavor, we faced numerous obstacles and overcame significant
hurdles. We responsibly provide open access to the Orion-14B family and
document our experiences and insights in this technical report, aiming to
assist and inspire other researchers in the community.
The journey of LLMs is more than a technological advancement; it is a
continuous dialogue between human intelligence and artificial intelligence,
constantly evolving and pushing the boundaries of what’s possible. As Ludwig
Wittgenstein insightfully remarked, "The limits of my language mean the limits
of my world." (Wittgenstein, 1922) This interplay of language and machine
learning does more than just reflect our existing world; it unlocks pathways
to previously uncharted realms of understanding.
## References
* 01-ai (2023) 01-ai. https://github.com/01-ai/Yi, 2023.
* Bai et al. (2023a) Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, et al. Qwen technical report. _arXiv preprint arXiv:2309.16609_ , 2023a.
* Bai et al. (2023b) Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang, and Juanzi Li. Longbench: A bilingual, multitask benchmark for long context understanding. _arXiv preprint arXiv:2308.14508_ , 2023b.
* Baichuan (2023a) Baichuan. https://github.com/baichuan-inc/Baichuan-13B, 2023a.
* Baichuan (2023b) Baichuan. Baichuan 2: Open large-scale language models. _arXiv preprint arXiv:2309.10305_ , 2023b. URL https://arxiv.org/abs/2309.10305.
* Bengio et al. (2000) Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. A neural probabilistic language model. _Advances in neural information processing systems_ , 13, 2000.
* Bengio et al. (2009) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In _Proceedings of the 26th annual international conference on machine learning_ , pages 41–48, 2009.
* Bisk et al. (2020) Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 34, pages 7432–7439, 2020.
* Charikar (2002) Moses S Charikar. Similarity estimation techniques from rounding algorithms. In _Proceedings of the thiry-fourth annual ACM symposium on Theory of computing_ , pages 380–388, 2002.
* Chen et al. (2023) Mayee F Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, and Christopher Ré. Skill-it! a data-driven skills framework for understanding and training language models. _arXiv preprint arXiv:2307.14430_ , 2023.
* Christiano et al. (2017) Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep reinforcement learning from human preferences. _Advances in neural information processing systems_ , 30, 2017.
* Contributors (2023) OpenCompass Contributors. Opencompass: A universal evaluation platform for foundation models. https://github.com/open-compass/opencompass, 2023.
* Dacheng Li and Zhang (2023) Dacheng Li, Rulin Shao, Anze Xie, Ying Sheng, Lianmin Zheng, Joseph E. Gonzalez, Ion Stoica, Xuezhe Ma, and Hao Zhang. How long can open-source llms truly promise on context length?, June 2023. URL https://lmsys.org/blog/2023-06-29-longchat.
* Dao (2023) Tri Dao. FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning, 2023.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4171–4186, 2019.
* Ding et al. (2023) Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. _arXiv preprint arXiv:2305.14233_ , 2023.
* Du et al. (2023) Qianlong Du, Chengqing Zong, and Jiajun Zhang. Mods: Model-oriented data selection for instruction tuning, 2023.
* Evanson et al. (2023) Linnea Evanson, Yair Lakretz, and Jean-Rémi King. Language acquisition: do children and language models follow similar learning stages? _arXiv preprint arXiv:2306.03586_ , 2023.
* Gao et al. (2021) Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628.
* Golchin and Surdeanu (2023) Shahriar Golchin and Mihai Surdeanu. Time travel in LLMs: Tracing data contamination in large language models. _arXiv preprint arXiv:2308.08493_ , 2023.
* Hendrycks et al. (2020) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. _arXiv preprint arXiv:2009.03300_ , 2020.
* Hoffmann et al. (2022) Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. _arXiv preprint arXiv:2203.15556_ , 2022.
* Huang et al. (2023) Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese evaluation suite for foundation models. _arXiv preprint arXiv:2305.08322_ , 2023.
* Indyk and Motwani (1998) Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In _Proceedings of the thirtieth annual ACM symposium on Theory of computing_ , pages 604–613, 1998.
* InternLM (2023) InternLM. Internlm: A multilingual language model with progressively enhanced capabilities. https://github.com/InternLM/InternLM-techreport, 2023.
* Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_ , 2020.
* Kim et al. (2022) Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, and Eric Davis. Kobest: Korean balanced evaluation of significant tasks, 2022. URL https://arxiv.org/abs/2204.04541.
* Kim et al. (2021) Ildoo Kim, Gunsoo Han, Jiyeon Ham, and Woonhyuk Baek. Kogpt: Kakaobrain korean(hangul) generative pre-trained transformer. https://github.com/kakaobrain/kogpt, 2021.
* Ko et al. (2023a) Hyunwoong Ko, Kichang Yang, Minho Ryu, Taekyoon Choi, Seungmu Yang, jiwung Hyun, and Sungho Park. A technical report for polyglot-ko: Open-source large-scale korean language models, 2023a.
* Ko et al. (2023b) Hyunwoong Ko, Kichang Yang, Minho Ryu, Taekyoon Choi, Seungmu Yang, Sungho Park, et al. A technical report for polyglot-ko: Open-source large-scale korean language models. _arXiv preprint arXiv:2306.02254_ , 2023b.
* Kojima (2023) Takeshi Kojima. https://huggingface.co/matsuo-lab/weblab-10b, 2023.
* Kudo and Richardson (2018) Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. _CoRR_ , abs/1808.06226, 2018. URL http://arxiv.org/abs/1808.06226.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. _arXiv preprint arXiv:1704.04683_ , 2017.
* Lee et al. (2023a) Ariel N. Lee, Cole J. Hunter, and Nataniel Ruiz. Platypus: Quick, cheap, and powerful refinement of llms, 2023a.
* Lee et al. (2021) Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. Deduplicating training data makes language models better. _arXiv preprint arXiv:2107.06499_ , 2021.
* Lee et al. (2023b) Meng Lee, Fujiki Nakamura, Makoto Shing, Paul McCann, Takuya Akiba, and Naoki Orii. Japanese stablelm base alpha 7b, 2023b. URL https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b.
* Levesque et al. (2012) Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In _Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning_. Citeseer, 2012.
* Li et al. (2023) Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, and Timothy Baldwin. Cmmlu: Measuring massive multitask language understanding in chinese. _arXiv preprint arXiv:2306.09212_ , 2023.
* Lian et al. (2023) Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and "Teknium". Openorca: An open dataset of gpt augmented flan reasoning traces. https://huggingface.co/Open-Orca/OpenOrca, 2023.
* Liu et al. (2023) Xiao Liu, Xuanyu Lei, Shengyuan Wang, Yue Huang, Zhuoer Feng, Bosi Wen, Jiale Cheng, Pei Ke, Yifan Xu, Weng Lam Tam, Xiaohan Zhang, Lichao Sun, Hongning Wang, Jing Zhang, Minlie Huang, Yuxiao Dong, and Jie Tang. Alignbench: Benchmarking chinese alignment of large language models, 2023.
* LMSYS (2023) LMSYS. Chatbot arena leaderboard, 2023. URL https://lmsys.org/blog/2023-05-25-leaderboard/.
* Loshchilov and Hutter (2018) Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in Adam. 2018.
* Mikolov et al. (2010) Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. Recurrent neural network based language model. In _Interspeech_ , volume 2, pages 1045–1048. Makuhari, 2010.
* Nunes et al. (2023) Igor Nunes, Mike Heddes, Pere Vergés, Danny Abraham, Alex Veidenbaum, Alex Nicolau, and Tony Givargis. Dothash: Estimating set similarity metrics for link prediction and document deduplication. In _Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , pages 1758–1769, 2023.
* NVIDIA (2023) NVIDIA. https://github.com/NVIDIA/apex, 2023.
* OpenAI (2022a) OpenAI. Introducing ChatGPT. 2022a.
* OpenAI (2022b) OpenAI. GPT-4 technical report. _arXiv preprint arXiv:2303.08774_ , 2022b.
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744, 2022.
* Paperno et al. (2016) Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernandez. The LAMBADA dataset: Word prediction requiring a broad discourse context. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1525–1534, Berlin, Germany, August 2016. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/P16-1144.
* Penedo et al. (2023) Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb dataset for falcon llm: Outperforming curated corpora with web data, and web data only, 2023.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In _Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL)_ , 2018.
* Preferred Networks (2023) Inc Preferred Networks. Plamo-13b, 2023. URL https://huggingface.co/pfnet/plamo-13b.
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
* Rafailov et al. (2023) Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. _arXiv preprint arXiv:2305.18290_ , 2023.
* Sasaki et al. (2023) Akira Sasaki, Masato Hirakawa, Shintaro Horie, and Tomoaki Nakamura. Elyza-japanese-llama-2-7b, 2023. URL https://huggingface.co/elyza/ELYZA-japanese-LLaMA-2-7b.
* Shibata et al. (1999) Yusuxke Shibata, Takuya Kida, Shuichi Fukamachi, Masayuki Takeda, Ayumi Shinohara, Takeshi Shinohara, and Setsuo Arikawa. Byte pair encoding: A text compression scheme that accelerates pattern matching. 1999.
* Shoeybi et al. (2020) Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism, 2020.
* Su et al. (2021) Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. _arXiv preprint arXiv:2104.09864_ , 2021.
* Suzgun et al. (2022) Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny Zhou, , and Jason Wei. Challenging big-bench tasks and whether chain-of-thought can solve them. _arXiv preprint arXiv:2210.09261_ , 2022.
* THUDM (2023) THUDM. https://github.com/THUDM/ChatGLM3, 2023.
* Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_ , 2023a.
* Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ , 2023b.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Proceedings of the Conference on Neural Information Processing Systems (NIPS 2017)_ , pages 5998–6008, 2017.
* Wei et al. (2023) Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng, Weiwei Lü, Rui Hu, et al. Skywork: A more open bilingual foundation model. _arXiv preprint arXiv:2310.19341_ , 2023.
* Wenting Zhao (2023) Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. (inthe)wildchat: 650k chatgpt interaction logs in the wild, 2023.
* Wittgenstein (1922) Ludwig Wittgenstein. _Tractatus Logico-Philosophicus_. 1922.
* Yang et al. (2023) Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E Gonzalez, and Ion Stoica. Rethinking benchmark and contamination for language models with rephrased samples. _arXiv preprint arXiv:2311.04850_ , 2023.
* Yao et al. (2013) Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. Recurrent neural networks for language understanding. In _Interspeech_ , pages 2524–2528, 2013.
* Yuanxiang (2023) Yuanxiang. https://github.com/xverse-ai/XVERSE-13B, 2023.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? _arXiv preprint arXiv:1905.07830_ , 2019.
* Zhang et al. (2023a) Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wenhao Huang, and Jie Fu. Chinese open instruction generalist: A preliminary release, 2023a.
* Zhang et al. (2023b) Xiaotian Zhang, Chunyang Li, Yi Zong, Zhengyu Ying, Liang He, and Xipeng Qiu. Evaluating the performance of large language models on gaokao benchmark. 2023b.
* Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, 2023.
* Zhong et al. (2023) Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation models, 2023.
* Zhou et al. (2023) Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. Lima: Less is more for alignment, 2023.
## Appendix
## Appendix A Contributions
All contributors sorted alphabetically by last name.
Core Contributors: Du Chen, Yi Huang, Xiaopu Li, Yongqiang Li, Yongqiang Liu,
Haihui Pan, Leichao Xu, Dacheng Zhang, Zhipeng Zhang.
Contributors: Yang Fan, Xuefeng Li, Yuxiang Liu, Haonan Tan, Bingcheng Zhang,
Enmao Zhang, Yinglou Zhao.
Human Annotators: Lixiu Chen, Zhenwei Hu, Ningting Luo, Zikang Ma, Jiali Pan,
Yuping Qin, Qin Shu, Qin Tu, Haiyan Wu, Jiamin Wu, Jingping Wu, Jing Xia,
Simiao Xu, Zhiyong Xue, Chonghuan Yang, Tao Zhu.
Science and Engineering Leadership: Kun Han.
We thank the executive team for their support: Sheng Fu, Mingyan Sun, Ting Li.
## Appendix B Detailed evaluation results of OpenCompass
Table 16: Evaluation results of OpenCompass in the examination category.
Model | Average | C-Eval | CMMLU | MMLU | AGIEval | GaoKao | ARC-c | ARC-e
---|---|---|---|---|---|---|---|---
LLaMA 2-13B | 45.2 | 41.4 | 38.4 | 55.0 | 30.9 | 18.2 | 60.3 | 71.8
Skywork-13B | 61.1 | 59.1 | 61.4 | 62.7 | 43.6 | 56.1 | 65.4 | 79.5
Baichuan 2-13B | 51.8 | 59.0 | 61.3 | 59.5 | 37.4 | 45.6 | 38 | 61.9
Qwen-14B | 71.3 | 71.7 | 70.2 | 67.9 | 51.9 | 62.5 | 84.4 | 90.1
InternLM-20B | 62.5 | 58.8 | 59.0 | 62.1 | 44.6 | 45.5 | 81.7 | 86.1
Orion-14B | 71.4 | 72.9 | 70.6 | 69.9 | 54.7 | 62.1 | 80.7 | 88.9
Table 17: Evaluation results of OpenCompass in the language category.
Model | Average | WiC | CHID | AFQMC | WSC | TyDiQA | Flores
---|---|---|---|---|---|---|---
LLaMA 2-13B | 47.0 | 53.3 | 53.0 | 69.0 | 66.3 | 33.2 | 7.20
Skywork-13B | 51.3 | 51.1 | 88.1 | 69.0 | 66.3 | 27.9 | 5.40
Baichuan 2-13B | 47.5 | 60.2 | 83.2 | 38.0 | 66.3 | 30.8 | 6.40
Qwen-14B | 52.7 | 50.9 | 84.7 | 69.0 | 66.3 | 39.8 | 5.30
InternLM-20B | 55.0 | 61.8 | 81.7 | 69.0 | 68.3 | 43.2 | 6.00
Orion-14B | 55.0 | 60.0 | 90.1 | 69.0 | 70.2 | 32.7 | 8.13
Table 18: Evaluation results of OpenCompass in the knowledge category.
Model | Average | BoolQ | CommonSenseQA | TriviaQA | NaturalQuestions
---|---|---|---|---|---
LLaMA 2-13B | 58.3 | 82.4 | 66.7 | 59.4 | 24.8
Skywork-13B | 52.7 | 80.9 | 64.6 | 48.1 | 17.2
Baichuan 2-13B | 48.9 | 67 | 65.6 | 46.6 | 16.3
Qwen-14B | 56.1 | 86.1 | 70.1 | 48.4 | 19.8
InternLM-20B | 60.1 | 87.5 | 70.6 | 57.3 | 25.2
Orion-14B | 60.0 | 84.9 | 65.7 | 77.2 | 12.4
Table 19: Evaluation results of OpenCompass in the understanding category.
Model | Average | C3 | RACE-middle | RACE-high | OpenbookQA
---|---|---|---|---|---
LLaMA 2-13B | 50.9 | 46.1 | 63.0 | 58.9 | 65.0
Skywork-13B | 64.5 | 64.9 | 87.6 | 84.1 | 83.4
Baichuan 2-13B | 58.1 | 65.6 | 68.9 | 67.2 | 65.0
Qwen-14B | 68.8 | 90.8 | 93.0 | 90.3 | 94.8
InternLM-20B | 67.3 | 73.7 | 86.4 | 83.3 | 87.6
Orion-14B | 71.9 | 80.2 | 93.2 | 91.3 | 89.8
Table 20: Evaluation results of OpenCompass in the understanding category (cont.).
Model | CSL | LCSTS | XSum | EPRSTMT | Lambada
---|---|---|---|---|---
LLaMA 2-13B | 58.8 | 7.80 | 23.4 | 58.8 | 76.5
Skywork-13B | 60.0 | 17.7 | 22.6 | 88.1 | 71.8
Baichuan 2-13B | 63.1 | 6.30 | 25.2 | 87.5 | 74.1
Qwen-14B | 54.4 | 12.5 | 24.7 | 86.9 | 71.4
InternLM-20B | 65.6 | 12.7 | 35.5 | 89.4 | 71.8
Orion-14B | 62.5 | 28.9 | 38.2 | 83.8 | 78.8
Table 21: Evaluation results of OpenCompass in the reasoning category.
Model | Average | CMNLI | OCNLI | AXb | AXg | RTE | COPA | ReCoRD
---|---|---|---|---|---|---|---|---
LLaMA 2-13B | 43.6 | 41.4 | 34.1 | 58.3 | 50.6 | 47.3 | 70.0 | 11.6
Skywork-13B | 45.2 | 32.8 | 30.0 | 59.0 | 53.4 | 56.3 | 72.0 | 1.40
Baichuan 2-13B | 44.2 | 32.7 | 30.0 | 59.7 | 50.6 | 44.8 | 71.0 | 20.7
Qwen-14B | 60.1 | 62.1 | 58.2 | 49.5 | 80.9 | 71.5 | 93.0 | 42.3
InternLM-20B | 54.9 | 43.0 | 42.5 | 62.1 | 75.0 | 57.8 | 83.0 | 63.6
Orion-14B | 61.6 | 72.6 | 68.3 | 71.2 | 86.5 | 83.0 | 82.0 | 87.8
Table 22: Evaluation results of OpenCompass in the reasoning category (cont.).
Model | HellaSwag | PIQA | SIQA | MATH | GSM8K | DROP | HumanEval | MBPP | BBH
---|---|---|---|---|---|---|---|---|---
LLaMA 2-13B | 77.5 | 79.8 | 54.8 | 5.00 | 29.6 | 46.4 | 18.9 | 26.8 | 45.6
Skywork-13B | 73.7 | 78.3 | 70.4 | 9.80 | 54.3 | 41.7 | 15.9 | 25.4 | 48.3
Baichuan 2-13B | 70.8 | 78.1 | 44.3 | 10.1 | 52.6 | 45.0 | 17.1 | 30.8 | 49.0
Qwen-14B | 80.2 | 79.8 | 78.1 | 25.2 | 61.6 | 53.6 | 32.3 | 39.8 | 53.7
InternLM-20B | 78.1 | 80.3 | 72.8 | 7.90 | 52.6 | 46.0 | 25.6 | 35.6 | 52.5
Orion-14B | 78.5 | 79.5 | 69.4 | 7.78 | 51.9 | 40.8 | 20.7 | 29.0 | 56.5
# Ultrahigh electron mobility in suspended graphene
K. I. Bolotin$^{a}$, K. J. Sikes$^{b}$, Z. Jiang$^{a,d}$, M. Klima$^{c}$, G. Fudenberg$^{a}$, J. Hone$^{c}$, P. Kim$^{a}$, H. L. Stormer$^{a,b,e,*}$
Departments of $^{a}$Physics, $^{b}$Applied Physics, and $^{c}$Mechanical Engineering, Columbia University, New York, NY 10027, USA
$^{d}$National High Magnetic Field Laboratory, Tallahassee, FL 32310, USA
$^{e}$Bell Labs, Alcatel-Lucent Technologies, Murray Hill, NJ 07974, USA
###### Abstract
We have achieved mobilities in excess of 200,000 cm$^{2}$V$^{-1}$s$^{-1}$ at electron
densities of $\sim 2\times 10^{11}$ cm$^{-2}$ by suspending single-layer graphene.
Suspension $\sim$150 nm above a Si/SiO2 gate electrode and electrical contacts
to the graphene was achieved by a combination of electron beam lithography and
etching. The specimens were cleaned in situ by employing current-induced
heating, directly resulting in a significant improvement of electrical
transport. Concomitant with large mobility enhancement, the widths of the
characteristic Dirac peaks are reduced by a factor of 10 compared to
traditional, non-suspended devices. This advance should allow for accessing
the intrinsic transport properties of graphene.
A. Graphene; B. Nanofabrication; D. Electronic transport
###### pacs:
73.50.-h; 73.63.-b; 81.07.-b; 81.16.-c
Graphene, the latest addition to the family of two-dimensional (2D) materials,
is distinguished from its cousins by its unusual band structure, rendering the
quasiparticles in it formally identical to massless, chiral fermions. The
experimental realization of graphene thus presents tantalizing opportunities
to study phenomena ranging from the topological phase resulting in exotic
quantum Hall states novoselov ; yuanbo to the famous Klein paradox – the
anomalous tunneling of relativistic particles rise_graphene . However, despite
tremendous interest and concerted experimental efforts [1-23], the presence of
strong impurity scattering – which limits the electron mean free path to less
than a micron – has been a major barrier to progress. At the same time, there
is strong evidence that graphene is a nearly perfect crystal free of the
structural defects elena ; ishigami that characterize most conductors. As a
result, it has been put forth that the scattering of charge carriers stems
from extrinsic sources nomura ; dassarma ; electrostatic ; geim_intr .
Although the exact nature of the scattering that limits the mobility of
graphene devices remains unclear, evidence has mounted that interactions with
the underlying substrate are largely responsible. Surface charge traps
chencharged ; dassarma ; nomura ; electrostatic , interfacial phonons
chen_limits , substrate stabilized ripples suspend_geim ; ishigami ; geim_intr
, and fabrication residues on or under the graphene sheet may all contribute.
Consequently, improving substrate quality or eliminating the substrate
altogether by suspending graphene over a trench seems a promising strategy
towards higher quality samples. While devices suspended over the substrate
were achieved in the past suspend_geim ; bunch , they lacked multiple
electrical contacts thus precluding transport measurements.
In this Letter we report the fabrication of electrically contacted suspended
graphene and achieve a tenfold improvement in mobility as compared to the best
values reported in the literature for traditional devices fabricated on a
substrate. Besides opening new avenues for studying the intrinsic physics of
Dirac fermions, this improvement demonstrates the dominant role played by
extrinsic scattering in limiting the transport properties of unsuspended
graphene samples.
The fabrication of a suspended graphene device starts with optically locating
a single-layer mechanically exfoliated graphene flake on top of a silicon
substrate covered with 300 nm of SiO2. Single-layer graphene flakes are
identified based on their contrast geim_contrast , and later confirmed via
measurements of the half-integer quantum Hall effect yuanbo ; novoselov . We
avoid patterning the flakes using oxygen plasma etching melinda ; novoselov ,
as it may introduce additional defects in the bulk and dangling bonds at the
edges of graphene. Instead, we choose natural flakes of approximately
rectangular shape suitable for fabrication into Hall bars. Electron beam
lithography is employed to pattern the contacts to the flake. The contact
material (3 nm Cr followed by 100 nm of Au) is deposited by thermal
evaporation followed by a liftoff in warm acetone. The large size and
thickness of the electrodes enhances the mechanical rigidity of the device.
Suspension of the graphene flake is achieved by dipping the entire device into
1:6 buffered oxide etch (BOE) for 90 seconds, which uniformly removes
approximately 150 nm of SiO2 across the substrate, including the area below
the flake (SiO2 masked by the gold electrodes remains unetched). Uniform
etching of the substrate directly below the flake is crucial for our process
as it allows the fabrication of large-area suspended graphene, while
maintaining the parallel plate capacitor geometry for our device. To our
knowledge, this unexpected etching anisotropy in the presence of graphene was
not reported before; it is, however, consistent with the rapid propagation of
BOE along the SiO2/graphene interface me_unpublished . Finally, the device is
transferred from BOE to ethanol and dried in a critical-point-drying step to
avoid the surface-tension-induced collapse of the suspended graphene sheet.
Figure 1: (a) SEM image of a typical suspended six-probe graphene device taken
at $15^{\circ}$ with respect to the sample plane. (b) AFM image of the
suspended device #1 before the measurements. (c) AFM image of the device #1
after the measurements with graphene removed by a short oxygen plasma etch
(same z scale). (d) Device schematic, side-view. Degenerately doped silicon
gate (blue), partly etched SiO2 (green), suspended single-layer graphene
(pink) and Au/Cr electrodes (orange).
Figure 1a shows a scanning electron microscope (SEM) image of a finished
device taken at $15^{\circ}$ angle with respect to the sample plane. The
graphene is apparent as a thin sheet suspended above the surface of the
remaining SiO2. The sheet is supported by six gold electrodes attached to
SiO2, which have been slightly undercut during the BOE etching step (see Fig.
1d). Atomic force microscopy (AFM) (Figs. 1b,c) demonstrates convincingly the
integrity of the graphene sheet, its suspension above the oxide and the
flatness of the substrate below it. Fig. 1b clearly indicates a flat graphene
surface $\sim$150 nm above the surface of SiO2. The single layer of carbon
atoms, which makes up graphene, is remarkably robust and is not damaged by
repeated AFM imaging. Fig. 1c shows the same device after completion of the
electrical measurement and after removal of the suspended graphene via an
oxygen plasma etch o2etch . It reveals the previously hidden SiO2 substrate
below the graphene. The height variation of the substrate is less than 20 nm,
with a slight bowing towards the center of the device. We thus conclude that
our fabrication process results in graphene devices suspended $\sim$150 nm
above the SiO2 substrate (Fig. 1d).
Electrical measurements on suspended graphene devices are performed in a
sample-in-vacuum cryostat with a pressure of less than $5\times 10^{-5}$
mtorr. A total of one four-probe and two six-probe devices were measured.
Before cooling the cryostat to its base temperature of $\sim$5 K the devices
are thermally annealed in situ to 400 K, as this has been shown to reduce
spurious doping in unsuspended samples ishigami ; schendin . Four-probe
measurements are performed using standard low-frequency lock-in techniques
with the excitation current less than $I=100$ nA. A typical measurement
consists of sending the current between electrodes labeled 1 and 4 in Fig. 1a
and recording the voltages $V_{xx}$ ($V_{xy}$) between electrodes 2 and 3
(2 and 6), respectively. The resistance is calculated as $R_{xx}=V_{xx}/I$ and
the Hall resistance as $R_{xy}=V_{xy}/I$. To convert resistance to resistivity
we estimate the ratio of sample width to spacing between voltage probes from
images such as shown in Fig. 1. Following the general approach for extended
voltage probes we use the center-to-center distance along the current path
($L$) as the sample length and the distance between voltage probes
perpendicular to the current path as the sample width ($W$). The sheet
resistivity $\rho_{xx}$ is then calculated as $\rho_{xx}=R_{xx}(W/L)$. The
uncertainty in actual current and voltage distribution within our specimens
may place an error on the estimated value of $\rho_{xx}$ of less than 30%.
The resistivity is measured as a function of gate voltage $V_{g}$ applied
between graphene and the degenerately doped silicon substrate. Special care is
taken not to collapse the devices electrostatically, as applying gate voltage
$V_{g}$ of either sign leads to an attractive force between the flexible
suspended graphene bunch ; electrostatic and the gate. The observation of
graphene collapse at $V_{g}=20$ V in similar samples leads us to limit the
range of applied gate voltages to $\pm 5$ V throughout our experiments.
Following Bunch _et al._ bunch , we estimate the force acting on our typical
device #1 at $V_{g}=\pm 5$ V as
$F=\frac{\epsilon_{0}\epsilon^{2}LWV_{g}^{2}}{2(d_{0}+d_{1}\epsilon)^{2}}\sim
3\times 10^{-8}$ N, where $d_{0},d_{1}=150$ nm are thicknesses of the
remaining and etched SiO2 and $L,W\sim 3$ $\mu$m are the length and the width
of the device. Using simple mechanics, we estimate the maximum strain
$\varepsilon$ in graphene over the range $V_{g}=\pm 5$ V as
$\varepsilon\sim 0.5(\frac{F}{EtW})^{2/3}\sim 5\times 10^{-4}$, assuming a
Young's modulus $E=1$ TPa and a thickness $t=0.34$ nm bunch . We deduce that
this strain level does not significantly affect electronic transport in
graphene.
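As a sanity check of the two estimates above, the expressions can be evaluated numerically; the short sketch below uses the stated device parameters and assumes an SiO2 dielectric constant $\epsilon\approx 3.9$, a value not given explicitly in the text.

```python
import numpy as np

eps0, eps_r = 8.854e-12, 3.9     # vacuum permittivity (F/m); SiO2 dielectric constant (assumed)
L = W = 3e-6                     # device length and width (m)
Vg = 5.0                         # gate voltage (V)
d0 = d1 = 150e-9                 # remaining / etched SiO2 thickness (m)

# Electrostatic force on the suspended sheet
F = eps0 * eps_r**2 * L * W * Vg**2 / (2 * (d0 + d1 * eps_r)**2)

# Maximum strain, with Young's modulus E = 1 TPa and thickness t = 0.34 nm
E, t = 1e12, 0.34e-9
strain = 0.5 * (F / (E * t * W)) ** (2 / 3)

print(f"F ~ {F:.1e} N, strain ~ {strain:.1e}")   # ~3e-8 N and ~5e-4
```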
The blue line of Fig. 2a shows the low temperature resistivity $\rho_{xx}$ of
sample #1, measured as a function of the gate voltage $V_{g}$. We observe the
Dirac peak, indicated by a maximum in the resistivity, at a gate voltage $V_{D}$
close to zero. The small reproducible fluctuations in $\rho_{xx}(V_{g})$ are
consistent with universal conductance fluctuations, typically seen in
mesoscopic devices melinda ; ucf .
Hall effect measurements as $n(V_{g})=B/e\rho_{xy}(V_{g},B)$, where $B$ is the
applied magnetic field. The gate capacitance of the device is calculated as
$C_{g}=n(V_{g})e/(V_{g}-V_{D})\sim 60$ aF $\mu$m$^{-2}$ novoselov ; yuanbo . The
measured capacitance is close to the value $C_{g}\sim 47\pm 5$ aF $\mu$m$^{-2}$
expected for graphene suspended 150$\pm$20 nm above 150$\pm$20 nm of residual
SiO2, as calculated using the serial capacitor model. This provides an
independent verification that the device is suspended during the measurements.
Finally, using the above carrier density, we determine the electron mobility
$\mu=1/(ne\rho_{xx})\sim 28,000$ cm$^{2}$V$^{-1}$s$^{-1}$ at $n=2\times 10^{11}$ cm$^{-2}$. This is
comparable to the best reported values for unsuspended devices at the same
density yuanbo ; novoselov ; ong ; yanwen . Thus, despite removing the
substrate, at this stage the scattering in graphene is not significantly
reduced, which leads us to the conclusion that it is caused by residual
impurities adsorbed on the graphene surface.
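The quoted capacitance and mobility follow directly from the serial-capacitor model and $\mu=1/(ne\rho_{xx})$; a brief sketch of the arithmetic is given below. The sheet resistivity used here is an illustrative value chosen to reproduce the quoted mobility, not a measured number reported in this Letter.

```python
import numpy as np

eps0, eps_r = 8.854e-12, 3.9          # SiO2 dielectric constant assumed
d_vac, d_ox = 150e-9, 150e-9          # vacuum gap and residual oxide (m)

# Serial-capacitor model: vacuum gap in series with residual SiO2
Cg = eps0 / (d_vac + d_ox / eps_r)    # F/m^2
print(f"Cg ~ {Cg * 1e6:.0f} aF/um^2") # ~47 aF/um^2

# Mobility from the sheet resistivity and the Hall carrier density
e = 1.602e-19
n = 2e11 * 1e4                        # 2e11 cm^-2 expressed in m^-2
rho_xx = 1.1e3                        # ohms per square (illustrative)
mu = 1 / (n * e * rho_xx)             # m^2 V^-1 s^-1
print(f"mu ~ {mu * 1e4:.0f} cm^2/Vs") # ~28,000
```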
Further mobility enhancement requires removal of the remaining impurities.
This is accomplished by sending a large current through the device. For
unsuspended samples, this current annealing was demonstrated to heat the
graphene sheet locally to an estimated $T\sim 600$ C and to desorb most of the
residues remaining on the surface of the device from the fabrication steps.
While current annealing has been shown to improve the quality of electrical
transport in unsuspended devices, the treatment did not lead to significant
mobility enhancement bachtold . Most likely, impurities permanently trapped at
the interface between graphene and the substrate are responsible for this lack
of improvement. Suspended devices, on the other hand, are not subject to
such limitations, since impurities from both sides of the graphene sheet are
free to desorb. Current annealing is implemented by ramping the current across
the device up to a predefined setpoint, waiting for several minutes,
decreasing the current to zero and remeasuring the electrical transport
properties of the specimen. The procedure is applied repeatedly until changes
appear in the gate response of the device, which start to occur only at very
large current densities of $\sim 2\times 10^{8}$ A/cm$^{2}$, estimated assuming a
graphene thickness of 0.34 nm.
Figure 2: (a) Measured four-probe resistivity $\rho_{xx}$ as a function of
gate voltage $V_{g}$ for the device #1 before (blue) and after (red) current
annealing; data from a traditional high-mobility device on the substrate (gray
dotted line) are shown for comparison. The gate voltage is limited to the $\pm$5 V
range to avoid mechanical collapse. (b) Mobility $\mu=1/(en\rho_{xx})$ as a
function of carrier density $n$ for the same devices.
For every device measured, current annealing leads to a remarkable difference
in the transport properties compared to the initial state, which we illustrate
using device #1 as an example. Upon current annealing, the resistance of
sample #1 decreases by more than a factor of 8 for voltages away from the
Dirac point. At the same time the width of the Dirac peak reduces by about a
factor of 20, while the maximum resistivity of the device hardly changes (Fig.
2a). These large changes reflect a greatly improved sample quality. We
quantify this improvement via three different measures: carrier mobility,
width of the Dirac peak, and the onset field of Shubnikov-de Haas oscillations.
Our first measure of sample quality is carrier mobility $\mu$ evaluated at
high electron density, where $\mu$ saturates. In unsuspended devices, the
mobility ranges between 2,000 and 25,000 cm$^{2}$V$^{-1}$s$^{-1}$, with $\mu\sim 25,000$
cm$^{2}$V$^{-1}$s$^{-1}$ at $n=5\times 10^{12}$ cm$^{-2}$ being the highest value reported in the
literature ong ; yanwen ; novoselov . Due to the gate voltage limitation in
our devices we measure the mobility at a smaller density $n=2\times 10^{11}$ cm$^{-2}$,
where the highest reported $\mu$ is about 30,000 cm$^{2}$V$^{-1}$s$^{-1}$ (Fig. 2b, dotted
line). This value is comparable to the mobility of 28,000 cm$^{2}$V$^{-1}$s$^{-1}$ (Fig. 2b,
blue line) in the suspended sample #1 before current annealing. Upon current
annealing, the resistance decrease in sample #1 translates into an increase of
mobility to 230,000 cm$^{2}$V$^{-1}$s$^{-1}$ (Fig. 2b, red line) measured at our
highest density of $n=2\times 10^{11}$ cm$^{-2}$. Every suspended device exhibits
mobilities higher than 60,000 cm$^{2}$V$^{-1}$s$^{-1}$ after annealing. Our peak mobility of
230,000 cm$^{2}$V$^{-1}$s$^{-1}$ represents an improvement of about a factor of 10 over
values reported in the literature so far, and is the central result of this
work.
In addition to the mobility enhancement, we notice that the Dirac peak of
suspended and annealed samples is very narrow compared to both that of
suspended devices before annealing and traditional substrate supported
devices. We argue that the width of the Dirac peak is related to the charge
inhomogeneity inside the sample. As has been demonstrated recently, at small
charge densities the graphene breaks into mesoscopic puddles of holes and
electrons yacoby . The mechanism causing the formation of puddles is debated
dassarma ; nomura ; geim_intr , but it is accepted that the presence of
puddles changes transport characteristics, resulting in a broadened Dirac
peak. We quantify the changes in sample quality by measuring $\Delta
W_{Dirac}$, defined as twice the carrier density at which the resistivity
decreases by a factor of two from its maximum value. Such $\Delta W_{Dirac}$
provides an upper bound for the charge inhomogeneity due to puddle formation.
In device #1, for example, the Dirac peak narrows to about $2\times 10^{10}$ cm$^{-2}$
(Fig. 2b, red line), an improvement of more than 10 times compared to the same
sample before annealing (Fig. 2b, blue line) and compared to typical high
mobility unsuspended devices (Fig. 2b, black dotted line). We remark that the
reduced charge inhomogeneity is correlated with enhanced carrier mobility
(Fig. 3b). Compared to unsuspended samples (black squares) where a typical
charge inhomogeneity is $2$-$9\times 10^{11}$ cm$^{-2}$ while the mobility ranges from
2,000-30,000 cm$^{2}$V$^{-1}$s$^{-1}$, the suspended and annealed samples (red circles)
exhibit both an order of magnitude higher mobility and an order of magnitude
lower charge inhomogeneity, following the trend seen in the unsuspended
devices.
Finally, we turn to the onset of the Shubnikov-de Haas oscillations as a
measure of sample quality. In a simple model, these oscillations commence at a
magnetic field $B_{SdH}$ strong enough for a charge carrier to complete one
cyclotron orbit without scattering, which is equivalent to $\omega_{c}\tau\sim
1$, where $\omega_{c}$ is the cyclotron frequency and $\tau$ is the scattering
time. In graphene, a semiclassical relation yields
$\omega_{c}=ev_{F}B_{SdH}/[\hbar(\pi n)^{1/2}]$, where $v_{F}=10^{6}$ m/s is the
Fermi velocity. This results in the estimate $\tau\sim\hbar(\pi
n)^{1/2}/(ev_{F}B_{SdH})$. Figure 3a shows the SdH effect in our highest
mobility specimen, sample #1. Oscillations are observed at fields as low as $B_{SdH}\sim
250$ mT (Fig. 3a, red line), while no SdH oscillations are observed before
current annealing (Fig. 3a, blue line). Other suspended devices exhibit
$B_{SdH}$ ranging from 250 to 600 mT, and we estimate $\tau\sim 2\times
10^{-13}$ s for the best device at $n=2\times 10^{11}$ cm$^{-2}$. On the other
hand, in unsuspended devices SdH oscillations at the same density are seen at
fields larger than $\sim$700 mT, corresponding to $\tau\sim 7\times 10^{-14}$
s. Therefore, the early onset of Shubnikov-de Haas oscillations in the suspended
devices is consistent with reduced electron scattering and thus is
indicative of cleaner samples. While the onset of the SdH oscillations is a
qualitative measure for sample quality, we cannot deduce directly a quantum
scattering time $\tau_{q}$, since other factors, such as density
inhomogeneity, also affect the onset.
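The scattering-time estimates quoted above follow from the condition $\omega_{c}\tau\sim 1$; a minimal numerical check is sketched below.

```python
import numpy as np

hbar, e, vF = 1.0546e-34, 1.602e-19, 1e6   # SI units; Fermi velocity vF = 1e6 m/s
n = 2e11 * 1e4                              # carrier density in m^-2

def sdh_tau(B_onset):
    """Scattering time from the SdH onset field via omega_c * tau ~ 1."""
    kF = np.sqrt(np.pi * n)
    return hbar * kF / (e * vF * B_onset)

print(f"suspended (B ~ 0.25 T):   tau ~ {sdh_tau(0.25):.1e} s")  # ~2e-13 s
print(f"unsuspended (B ~ 0.70 T): tau ~ {sdh_tau(0.70):.1e} s")  # ~7e-14 s
```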
Figure 3: (a) $\rho_{xx}$ component of Hall resistance as a function of
magnetic field for the suspended sample #1 before annealing (blue) and after
annealing (red) at $n=2\times 10^{11}$ cm-2 and $T\sim 5$ K. (b) Full width at
half maximum of the Dirac peak $\Delta W_{Dirac}$ plotted as a function of
device mobility $\mu$ for all three measured suspended devices (red circles)
and previously studied devices on the substrate (black squares). (c)
Conductivity $\sigma$ as a function of carrier density $n$ for the sample #1
after current annealing.
Summarizing the results of our transport measurements on in-situ annealed,
suspended graphene samples, we observe a considerable improvement in sample
quality measured by the enhanced mobility, reduced sample inhomogeneity and
increased scattering time. In particular, we observe about an order of
magnitude improvement in carrier mobility and sample homogeneity, while the
improvement in the onset field of the SdH oscillations is about a factor of 3.
Overall, we conclude that our fabrication procedure results in very clean
samples containing far fewer scatterers compared to the previously studied
substrate supported devices. Interestingly, suspended samples prior to current
annealing as well as current annealed but unsuspended samples bachtold do not
exhibit the aforementioned quality improvement. This suggests that impurities
trapped between the SiO2 and graphene are limiting the mobility of the current
generation of unsuspended graphene devices.
Finally, we consider the nature of the residual scatterers in our devices.
Upon current annealing, the carrier mean free path $l$ in our samples
approaches the typical dimensions of the device. Indeed, using a semiclassical
relation between the mobility and the mean free path dassarma
$\sigma=en\mu=\frac{2e^{2}}{h}(k_{F}l)$, where $k_{F}=(\pi n)^{1/2}$, we
estimate $l\sim 1.2~{}\mu$m for the sample #1 at $n=2\times 10^{11}$ cm-2.
Therefore, both the edges of the device and the electrodes may contribute
considerably to scattering. This is consistent with the observed strongly
sublinear dependence of the conductivity $\sigma(n)=1/\rho_{xx}(n)$ as a
function of carrier density $n$ (Fig. 3c). Such behavior was argued to result
from the short-range scattering dassarma ; yanwen , typically associated with
point defects or sample edges. Overall, we speculate that extrinsic sources
of scattering may still be the limiting factor in the present geometry and
that larger area devices may exhibit even higher mobilities.
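For completeness, the mean free path quoted in the preceding paragraph follows from $\sigma=en\mu=\frac{2e^{2}}{h}k_{F}l$; a short sketch of the arithmetic is given below.

```python
import numpy as np

h, e = 6.626e-34, 1.602e-19
n = 2e11 * 1e4                     # carrier density in m^-2
mu = 230_000 * 1e-4                # 230,000 cm^2/Vs expressed in m^2/Vs

sigma = e * n * mu                 # sheet conductivity (S per square)
kF = np.sqrt(np.pi * n)
l = h * sigma / (2 * e**2 * kF)    # from sigma = (2 e^2 / h) kF l
print(f"mean free path ~ {l * 1e6:.1f} um")   # ~1.2 um
```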
## Acknowledgements
We acknowledge fruitful discussions with and experimental help from Erik
Henriksen, Jeffrey Kysar, Andrea Young, Barbaros Özyilmaz, and Pablo Jarillo-
Herrero. This work is supported by the NSF (No. DMR-03-52738), NSEC grant
CHE-0641523, NYSTAR, DOE (No. DE-AIO2-04ER46133 and No. DEFG02-05ER46215), ONR
(No. N000150610138), FENA MARCO, W. M. Keck Foundation, and the Microsoft
Project Q.
## References
* (1) K. S. Novoselov _et al._ , Nature 438 (2005) 197.
* (2) Y. Zhang, Y. -W. Tan, H. L. Stormer and P. Kim, Nature 438 (2005) 201.
* (3) A. K. Geim and K. S. Novoselov, Nature Materials 6 (2007) 183.
* (4) E. Stolyarova _et al._ , PNAS 104 (2007) 9209.
* (5) M. Ishigami, J. H. Chen, W. G. Cullen, M. S. Fuhrer, E. D. Williams, Nano Lett. 7 (2007) 1643.
* (6) J. H. Chen, C. Jang, M. S. Fuhrer, E. D. Williams, M. Ishigami, Nature Physics 4, (2008) 377.
* (7) E. H. Hwang, S. Adam, and S. Das Sarma, Phys. Rev. Lett. 98 (2007) 186806.
* (8) K. Nomura and A. H. MacDonald, Phys. Rev. Lett. 96 (2006) 256602.
* (9) J. Sabio _et al._ , arXiv:cond-mat/0712.2232v2
* (10) S. V. Morozov _et al._ , Phys. Rev. Lett. 100 (2008) 016602.
* (11) J. H. Chen, C. Jang, S. Xiao, M. Ishigami, M. S. Fuhrer, Nature Nanotech. 3, (2008) 206.
* (12) J. C. Meyer _et al._ , Nature 446 (2007) 60.
* (13) J. S. Bunch _et al._ , Science 315 (2007) 490.
* (14) K. I. Bolotin _et al._ , unpublished.
* (15) We use 6 seconds long oxygen plasma etch, with power at 50 W and oxygen pressure 200 mT.
* (16) S. V. Morozov _et al._ , Phys. Rev. Lett. 97 (2006) 016801.
* (17) M. Y. Han, B. Oezyilmaz, Y. Zhang, and P. Kim, Phys. Rev. Lett. 98 (2007) 206805.
* (18) Y. -W. Tan _et al._ Phys. Rev. Lett. 99 (2007) 246803.
* (19) J. G. Checkelsky, L. Li, and N. P. Ong, cond-mat/0708.1959.
* (20) J. Moser, A. Barreiro, A. Bachtold, Appl. Phys. Lett. 91 (2007) 163513.
* (21) F. Schedin _et al._ , Nature Mater. 6 (2007) 652.
* (22) J. Martin _et al._ , Nature Phys. 10.1038/nphys781 (2007).
* (23) P. Blake _et al._ , Appl. Phys. Lett. 91 (2007) 063124.
* (24) C. Berger, Science 312 (2006) 1191.
# RFWave: Multi-band Rectified Flow for Audio Waveform Reconstruction
Peng Liu
<EMAIL_ADDRESS>
&Dongyang Dai
<EMAIL_ADDRESS>
###### Abstract
Recent advancements in generative modeling have led to significant progress in
audio waveform reconstruction from diverse representations. Although diffusion
models have been used for reconstructing audio waveforms, they tend to exhibit
latency issues because they operate at the level of individual sample points
and require a relatively large number of sampling steps. In this study, we
introduce RFWave, a novel multi-band Rectified Flow approach that reconstructs
high-fidelity audio waveforms from Mel-spectrograms. RFWave is distinctive for
generating complex spectrograms and operating at the frame level, processing
all subbands concurrently to enhance efficiency. Thanks to Rectified Flow,
which aims for a flat transport trajectory, RFWave requires only 10 sampling
steps. Empirical evaluations demonstrate that RFWave achieves exceptional
reconstruction quality and superior computational efficiency, capable of
generating audio at a speed 90 times faster than real-time (code is available
at https://github.com/bfs18/rfwave).
## 1 Introduction
The advent of neural network-based generative models has revolutionized the
generation and synthesis of audio, showcasing remarkable advancements in
artificial intelligence. These models are adept at learning the nuances of
training data distributions, enabling them to reconstruct new, similar data
points. In particular, waveform reconstruction has emerged as a key
application of these generative models.
A key focus in speech technology research is on reconstructing high-quality
waveforms from compact representations, such as Mel-spectrograms or phoneme
sequences. WaveNet [1], a convolution-based autoregressive model, marked a
major breakthrough in this domain through the use of neural network-based
generative models for waveform reconstruction. The quality of waveforms
produced by WaveNet significantly exceeds that of previous signal processing
methods like WORLD [2]. However, the sequential, sample-by-sample waveform
reconstruction approach of WaveNet leads to substantial computational demands.
To harness the capabilities of recurrent networks more effectively, WaveRNN
[3] was developed, employing an RNN in an autoregressive model for waveform
reconstruction.
The inherent slowness of autoregressive models, which predict samples one
after another, has prompted interest in parallel waveform reconstruction. This
has led to the development of models like Parallel WaveNet [4] and ClariNet
[5]. Concurrently, the emergence of Generative Adversarial Networks (GANs) [6]
has given rise to parallel GAN-based waveform reconstruction methods, such as
Mel-GAN [7], ParallelWaveGAN [8], and HiFi-GAN [9]. These approaches generally
offer faster reconstruction speeds, attributing to their generators’ simpler
model structures compared to prior parallel methods.
Because speech signals contain a large number of samples per second, modeling
speech waveforms typically requires intricate neural networks that incorporate
upsampling layers. Replacing some of these upsampling layers with the inverse
short-time Fourier transform (ISTFT), so that the model predicts complex
spectrograms rather than reconstructing the waveform directly, simplifies the
model structure and enhances computational efficiency. Models such as iSTFTNet
[10], Vocos [11], and APNet2 [12], which are grounded in GAN frameworks, have
effectively used this method for waveform reconstruction, achieving faster
processing speeds and increased efficiency while maintaining speech quality.
Nevertheless, GAN-based waveform reconstruction models [7, 8, 9, 10, 11] often
require intricate discriminator designs for high-quality output and may face
issues like instability and mode collapse. To address these challenges,
researchers have investigated diffusion models for reconstructing waveforms,
as seen in studies like Diffwave [13], WaveGrad [14], and Multi-Band Diffusion
[15]. However, diffusion-based models typically necessitate multiple sampling
steps for synthesizing high-quality audio, leading to significant
computational costs.
In this paper, we introduce a novel waveform reconstruction model based on
Rectified Flow [16], which ensures high-quality output while also
significantly enhancing overall efficiency. Our main contributions are
summarized as follows:
1. By adopting Rectified Flow and an innovative time-balanced loss, our model
can reconstruct high-quality waveforms with a drastically reduced number of
sampling steps.
2. We implement a multi-band approach that generates different sub-bands in
parallel, ensuring audio quality while avoiding cumulative errors and
enhancing synthesis speed.
3. Our model operates at the level of STFT frames, not individual waveform sample
points. This approach significantly enhances processing speed.
Our experimental results confirm that our model is capable of generating high-
fidelity audio waveforms and performing inference at speeds up to 90 times
faster than real-time.
## 2 Background
#### Rectified Flow
Rectified Flow [16] presents an innovative ODE-based framework for generative
modeling and domain transfer. It introduces a method to learn a transport
mapping that connects two distributions, $\pi_{0}$ and $\pi_{1}$ on
$\mathbb{R}^{d}$, based on empirical observations:
$\frac{\mathrm{d}Z_{t}}{\mathrm{d}t}=v(Z_{t},t),\quad\text{initialized from $Z_{0}\sim\pi_{0}$, such that $Z_{1}\sim\pi_{1}$},$ (1)
where $v\colon\mathbb{R}^{d}\times[0,1]\to\mathbb{R}^{d}$ represents a
velocity field. The learning of this field involves minimizing a mean square
objective function,
$\min_{v}\mathbb{E}_{(X_{0},X_{1})\sim\gamma}\left[\int_{0}^{1}\left\|\frac{\mathrm{d}}{\mathrm{d}t}X_{t}-v(X_{t},t)\right\|^{2}\mathrm{d}t\right],\quad\text{with}\quad X_{t}=\phi(X_{0},X_{1},t),$ (2)
where $X_{t}=\phi(X_{0},X_{1},t)$ represents a time-differentiable
interpolation between $X_{0}$ and $X_{1}$, with
$\frac{\mathrm{d}}{\mathrm{d}t}X_{t}=\partial_{t}\phi(X_{0},X_{1},t)$. The
$\gamma$ represents any coupling of $(\pi_{0},\pi_{1})$. An illustrative
instance of $\gamma$ is the independent coupling
$\gamma=\pi_{0}\times\pi_{1}$, which allows for empirical sampling based on
separately observed data from $\pi_{0}$ and $\pi_{1}$. The authors recommended
a simple choice of
$X_{t}=(1-t)X_{0}+tX_{1}\implies\frac{\mathrm{d}}{\mathrm{d}t}X_{t}=X_{1}-X_{0}.$
(3)
This simplification results in linear trajectories, which are critical for
accelerating the inference process. Typically, the velocity field $v$ is
represented by a deep neural network. The solution to (2) is approximated
through stochastic gradient methods. To approximate the ODE presented in (1),
numerical solvers are commonly employed. A prevalent technique is the forward
Euler method. This approach computes values using the formula
$Z_{t+\frac{1}{n}}=Z_{t}+\frac{1}{n}v(Z_{t},t),\quad\forall t\in\{0,\ldots,n-1\}/n,$ (4)
where the simulation is executed with a step size of $\epsilon=1/n$ over $n$
steps.
The velocity field has the capacity to incorporate conditional information.
This is particularly essential in applications like text-to-image generation,
where a text prompt is a critical factor. Consequently, in such contexts,
$v(Z_{t},t)$ in (2) is modified to $v(Z_{t},t\mid\mathcal{C})$, where
$\mathcal{C}$ represents the conditional information pertinent to the
corresponding $X_{1}$.
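As a minimal illustration of Eqs. (2)-(4), the PyTorch-style sketch below shows the linear-interpolation training target and the $n$-step Euler sampler; the velocity network `v_net` is a stand-in callable, not RFWave's ConvNeXtV2 backbone, and the conditioning interface is an assumption.

```python
import torch

def rf_training_loss(v_net, x1, cond):
    """Rectified Flow loss: regress v(x_t, t | cond) onto x1 - x0, with x_t = (1 - t) x0 + t x1."""
    x0 = torch.randn_like(x1)                                   # noise sample from pi_0
    t = torch.rand(x1.shape[0], device=x1.device)
    t_b = t.view(-1, *([1] * (x1.dim() - 1)))                   # broadcastable time
    xt = (1 - t_b) * x0 + t_b * x1
    return ((v_net(xt, t, cond) - (x1 - x0)) ** 2).mean()

@torch.no_grad()
def euler_sample(v_net, x0, cond, n_steps=10):
    """Forward-Euler integration of dz/dt = v(z, t | cond) from t = 0 to t = 1 (Eq. 4)."""
    z = x0
    for i in range(n_steps):
        t = torch.full((z.shape[0],), i / n_steps, device=z.device)
        z = z + v_net(z, t, cond) / n_steps
    return z
```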
#### Estimating Complex Spectrograms
Waveform reconstruction from complex spectrograms can be effectively achieved
using the Inverse Short-Time Fourier Transform (ISTFT). Notably, Vocos [11]
and APNet2 [12], utilizing GANs as their model framework, estimate magnitude
and phase spectrograms from the input Mel spectrograms, which can be
transformed to complex spectrograms effortlessly. Both models operate at the
frame level, enabling them to achieve significantly faster inference speeds
compared to HiFi-GAN [9], which uses multiple upsampling layers and operates
at the level of waveform sample points. Morever, these models preserve the
quality of the synthesized waveform, demonstrating their superiority in both
speed and fidelity without a trade-off. In this paper, we directly estimate
complex spectrograms using Rectified Flow and focus on frame-level operations,
aiming to enhance both the efficiency and quality of our waveform synthesis
process.
#### Multi-band Audio Waveform Reconstruction
Both Multi-band MelGAN [17] and Multi-band Diffusion [15] employ multi-band
strategies, albeit for different purposes within their respective frameworks.
Multi-band MelGAN, specifically, uses Pseudo-Quadrature Mirror Filters (PQMF)
[18] to divide frequency bands. This division results in each subband’s
waveform being a fraction of the original waveform’s length, based on the
number of subbands. By reshaping these subbands into feature dimensions and
utilizing a unified backbone for modeling, Multi-band MelGAN is able to
operate on considerably shorter signals. This strategy significantly enhances
the efficiency of the model, leading to accelerated training and inference
processes. Multi-band Diffusion utilizes an array of band-pass filters to
separate the frequency bands and models each subband with a distinct model.
This approach ensures that errors in one band do not negatively impact the
others. In our research, we simplify the process of frequency band division by
directly choosing the appropriate dimensions from the complex spectrograms.
Furthermore, we enhance efficiency by modeling all subbands together in
parallel with a single, unified model. This strategy improves the processing
speed and also helps in reducing error accumulation across different subbands.
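The band division described above can be sketched in a few lines: the STFT complex spectrogram is sliced along the frequency axis into equal subbands, which are folded into the batch dimension so that a single model processes all subbands in parallel. The FFT size, the four-way split, and dropping the Nyquist bin are illustrative assumptions, not the exact configuration used in this paper.

```python
import torch

n_fft, hop, n_bands = 1024, 256, 4
wave = torch.randn(1, hop * 100)                       # dummy waveform, shape [1, T]
window = torch.hann_window(n_fft)

spec = torch.stft(wave, n_fft, hop, window=window, return_complex=True)  # [1, n_fft//2 + 1, F]
spec = spec[:, :-1, :]                                 # drop the Nyquist bin so 512 bins split evenly
bands = spec.chunk(n_bands, dim=1)                     # n_bands tensors of shape [1, 128, F]

# Fold the subbands into the batch so one shared model handles them concurrently
batch = torch.cat(bands, dim=0)                        # [n_bands, 128, F]
feats = torch.cat([batch.real, batch.imag], dim=1)     # [n_bands, 256, F] real-valued features
print(feats.shape)
```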
## 3 Method
Our model utilizes a multi-band Rectified Flow to directly predict the complex
spectrogram. It operates at the STFT frame level and incorporates a highly
efficient ConvNeXtV2 [19] backbone. With only 10 steps of sampling, the model
is capable of producing high-quality waveforms.
(a) RFWave
(b) ConvNeXtV2 backbone
Figure 1: The overall structure of RFWave. "Band i" is the subband index,
"Cond" is the conditional input, and "Encodec i" is the Encodec bandwidth
index. The blue boxes represent modules that contain trainable parameters,
while the green boxes symbolize modules without trainable parameters. Modules
enclosed in a dashed box are considered optional.
### 3.1 Multi-band Rectified Flow
In our initial experiments, we observe that attempts to predict the full-band
complex spectrogram result in a compromised quality of waveform
reconstruction. To address this, we shift our approach to a multi-band
structure. Conditioning higher bands on lower bands introduces error
accumulation, whereby inaccuracies in the lower bands adversely affect the
higher bands during the inference stage, as noted in [15]. Consequently, we
design our model so that the lower band is not employed as a conditional input
for the higher band. This structure yields an additional benefit: the ability to
predict all frequency bands concurrently, thereby significantly diminishing
the inference latency. We can batch the different bands of a sample to
facilitate simultaneous training or inference.
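A minimal sketch of this batched, concurrent sampling is given below; `velocity_model` is a hypothetical stand-in for the shared backbone described next, the Euler integrator follows the Rectified Flow formulation, and all shapes are illustrative assumptions.

```python
# A minimal sketch (not the released code) of multi-band Rectified Flow sampling,
# with all subbands batched so they are predicted concurrently in every step.
import torch

def sample_multiband(velocity_model, mel, num_bands=8, d_sub=64, num_frames=128, steps=10):
    z = torch.randn(num_bands, d_sub, num_frames)        # one noise slice per subband
    band_idx = torch.arange(num_bands)                    # subband index for each slice
    cond = mel.unsqueeze(0).expand(num_bands, -1, -1)     # shared conditional input (Mel)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((num_bands,), i * dt)
        v = velocity_model(z, t, band_idx, cond)          # predicted velocity per subband
        z = z + v * dt                                    # Euler step along the straight path
    return z                                              # estimated subband spectrogram features
```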
The model structure is depicted in Figure 1(a). All frequency bands share the
model, distinguished by a unique subband index. For each subband, the
corresponding noise is fed into the ConvNeXtV2 backbone to predict the
velocity conditioned on time $t$, the subband index, conditional input (the
Mel spectrogram or the Encodec [20] embedding), and an optional Encodec
bandwidth index. The detailed structure of the ConvNeXtV2 backbone is shown in
Figure 1(b). We employ Fourier features as described in [21]. The noise,
Fourier features, and conditional inputs are concatenated and then passed
through a linear layer, forming the input that is fed into a series of
ConvNeXtV2 blocks. The sinusoidal time embedding, along with the optional
Encodec bandwidth index embedding, are element-wise added to the input of each
ConvNeXtV2 block. Furthermore, the subband index is incorporated via an
adaptive layer normalization module, which utilizes learnable embeddings as
described in [22, 11]. The other components are identical to those within the
ConvNeXtV2 architecture; details can be found in [19].
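A minimal sketch of how the subband index could be injected through adaptive layer normalization with learnable embeddings is shown below; the class name, shapes, and initialization are our assumptions rather than the released RFWave code.

```python
# A minimal sketch of subband-conditioned adaptive layer normalization.
import torch
import torch.nn as nn

class SubbandAdaLayerNorm(nn.Module):
    def __init__(self, dim, num_bands=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # One learnable scale and shift per subband replaces the usual affine parameters.
        self.scale = nn.Embedding(num_bands, dim)
        self.shift = nn.Embedding(num_bands, dim)
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)

    def forward(self, x, band_idx):
        # x: [B, F, dim] frame-level features; band_idx: [B] integer subband indices.
        s = self.scale(band_idx).unsqueeze(1)
        b = self.shift(band_idx).unsqueeze(1)
        return self.norm(x) * s + b
```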
### 3.2 Operating in Time Domain or Frequency Domain
Our model is designed to function at the STFT frame level, with the
flexibility to operate in either the time or frequency domain. In the time
domain, as shown in Figure 1(a), the signal and noise are both inherently
temporal, necessitating the use of STFT and ISTFT. Conversely, in the
frequency domain, both noise and velocity are represented in this domain,
eliminating the need for STFT and ISTFT. For the time domain, noise and
velocity adhere to dimensions of $[1,T]$, where $T$ represents the waveform
length in sample points (for simplicity, the batch dimension is not included
in the discussion). Notably, only subband $i$ is processed after STFT and
prior to ISTFT in a single feed-through, in line with our prior discussion. In
the frequency domain, the dimensions of noise and velocity shift to
$[d_{s},F]$, with $d_{s}$ denoting the dimension of a subband’s complex
spectrum and $F$ the number of frames. Here, the real and imaginary parts are
concatenated to form the $d_{s}$-dimensional complex spectrum feature. During the
inference stage, the model operating in the time domain includes two
additional operations—STFT and ISTFT—at each step compared to the model
operating in the frequency domain. Despite this, it demonstrates slightly
superior performance, particularly in capturing high-frequency details. The
details of the comparative experiments are provided in Section 5.
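The following shape-level sketch, under our own assumptions about the interfaces, contrasts the per-step cost of the two variants: the time-domain model pays for one STFT and one ISTFT per sampling step, whereas the frequency-domain model stays in the $[d_{s},F]$ representation throughout; `backbone` is a hypothetical stand-in that is assumed to return a complex spectrogram of the same shape in the time-domain case.

```python
# A minimal sketch of one velocity evaluation per variant; shapes and helpers are assumptions.
import torch

def velocity_time_domain(z, t, cond, backbone, n_fft=1024, hop=256):
    # z: [1, T] temporal state; STFT before and ISTFT after the frame-level backbone.
    win = torch.hann_window(n_fft)
    spec = torch.stft(z, n_fft, hop_length=hop, window=win, return_complex=True)
    feat = backbone(spec, t, cond)   # assumed to return a complex spectrogram of the same shape
    return torch.istft(feat, n_fft, hop_length=hop, window=win, length=z.shape[-1])

def velocity_freq_domain(z, t, cond, backbone):
    # z: [d_s, F] spectral state; no STFT/ISTFT is needed inside the sampling loop.
    return backbone(z, t, cond)
```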
### 3.3 Waveform Equalization or STFT Normalization
A white Gaussian noise signal has uniform energy distribution across all
frequency bands. However, the energy profiles of various waveform types vary
markedly among different frequency bands. For instance, the energy in a speech
waveform exhibits an exponential decay with increasing frequency, whereas a
music waveform tends to maintain a more consistent energy distribution across
frequencies. These disparities pose challenges for training diffusion models.
Consequently, it becomes advantageous to equalize the energy across waveform
frequency bands [15].
In the time-domain model, a bank of Pseudo-Quadrature Mirror Filters (PQMF) is
employed to decompose the input waveform into subbands. Subsequently, these
subbands are equalized and then recombined to form the equalized waveform. The
performance of the PQMF bank exhibits a modest enhancement compared to the
array of band-pass filters employed in [15]. In the frequency-domain model,
the waveform is transformed into a complex spectrogram without equalization.
Subsequent processing involves the dimension-wise normalization of the complex
spectrogram feature. Mean-variance normalization, utilizing the running
averages of mean and variance computed during training, is applied for both
waveform equalization and STFT normalization. This approach ensures that the
transformation can be effectively inverted using the same running average
statistics.
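A minimal sketch of such dimension-wise mean-variance normalization with running statistics follows; the class and its momentum value are illustrative assumptions, not the released implementation.

```python
# A minimal sketch of mean-variance normalization with running statistics,
# so the same statistics can invert the transform after sampling.
import torch

class RunningMeanVarNorm:
    def __init__(self, dim, momentum=0.01, eps=1e-5):
        self.mean = torch.zeros(dim)
        self.var = torch.ones(dim)
        self.momentum = momentum
        self.eps = eps

    def normalize(self, x, update=True):
        # x: [dim, F]; statistics are kept per feature dimension.
        if update:
            m = x.mean(dim=1)
            v = x.var(dim=1, unbiased=False)
            self.mean = (1 - self.momentum) * self.mean + self.momentum * m
            self.var = (1 - self.momentum) * self.var + self.momentum * v
        return (x - self.mean[:, None]) / torch.sqrt(self.var[:, None] + self.eps)

    def denormalize(self, x):
        # The same running statistics invert the transform.
        return x * torch.sqrt(self.var[:, None] + self.eps) + self.mean[:, None]
```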
### 3.4 Time-balanced Loss
In our preliminary experiments, we observe the presence of low-volume noise in
regions that are expected to be silent. We attribute this to the property of
mean square error (MSE) used in (2). The MSE measures the absolute distortion
between the predicted values and the ground truth. Since the values in silent
regions are close to zero, even a minor absolute distortion in predictions can
lead to a significant relative error. Consequently, models trained with the
MSE produce small absolute distortions in silent regions, which are then
perceived as noise.
We propose a time-balanced loss to mitigate this problem. Our time-balanced
loss is designed to weight errors differently depending on the region’s volume
across the time axis. Specifically, for each frequency subband, we compute
the standard deviation along the feature dimension of the ground truth
velocity to construct a weighting coefficient of size $[1,F]$. This vector is
reflective of the temporal volume of the respective subband, as depicted in
Figure A.1. Subsequently, both the ground truth and predicted velocity are
divided by this vector before proceeding to the subsequent steps. For the
frequency domain model, the training objective defined in (2) is adjusted as
follows:
$\min_{v}\;\mathbb{E}_{X_{0}\sim\pi_{0},\,(X_{1},\mathcal{C})\sim D}\left[\int_{0}^{1}\left\|(X_{1}-X_{0})/\sigma-v(X_{t},t\mid\mathcal{C})/\sigma\right\|^{2}\mathrm{d}t\right],$ (5)
with $\sigma=\sqrt{\mathrm{Var}_{1}(X_{1}-X_{0})}$ and $X_{t}=tX_{1}+(1-t)X_{0},$
where $D$ represents the dataset with paired $X_{1}$ and $\mathcal{C}$, and
$\text{Var}_{1}$ calculates the variance along the feature dimension. For the
time domain model, this time balancing operation precedes the ISTFT process.
This approach helps to minimize the relative error in low-volume regions. Our
experimental results demonstrate that this method enhances overall
performance, benefiting not just the silent parts. In an alternative
interpretation, the equalization or normalization discussed previously brings
the features closer to a standard normal distribution along the feature
dimension. Simultaneously, the time-balanced loss aligns the features more
closely with a standard normal distribution along the temporal dimension,
prior to the calculation of the original MSE. This approach provides a more
nuanced adjustment of the features, facilitating the training of Rectified Flow.
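A minimal sketch of the time-balanced loss in (5) for a single subband is given below, assuming ground-truth and predicted velocities of shape $[d_{s},F]$; the function name and the epsilon guard are our assumptions.

```python
# A minimal sketch of the time-balanced MSE used as the Rectified Flow objective.
import torch

def time_balanced_mse(v_true, v_pred, eps=1e-5):
    # sigma: std of the ground-truth velocity along the feature dimension, one value per frame [1, F].
    sigma = v_true.std(dim=0, keepdim=True) + eps
    # Dividing both velocities by sigma down-weights loud frames and up-weights quiet ones,
    # reducing the relative error in low-volume regions.
    return ((v_true / sigma - v_pred / sigma) ** 2).mean()
```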
## 4 Experiments
#### Overview
Initially, we evaluate models operating in both the time and frequency domains
on the LJSpeech [23] dataset, assessing their performance variations with and
without the incorporation of time-balanced loss. Following this preliminary
assessment, the configuration yielding the best results undergoes further
evaluation in diverse auditory scenarios, including singing, music, and
extensive speech datasets, to determine its comprehensive applicability and
efficiency.
#### Data
For the singing data, we utilize the Opencpop dataset [24], comprising 100
high-quality Mandarin songs performed by a professional female singer.
Regarding the music data, we employ the MTG-Jamendo dataset [25], featuring
over 55,000 full-length audio tracks, annotated with 195 tags spanning genres,
instruments, and mood/theme categories. For a comprehensive collection of
speech data, we resort to the LibriTTS corpus, a multi-speaker English dataset
encompassing roughly 600 hours of recordings made in diverse environments.
Additionally, for speech data, we expand our training to a broader dataset
encompassing LibriTTS-R[26], Aishell-3[27], VCTK[28], and the HQ-TTS mentioned
in [29]. To ensure audio quality, we evaluate each speaker’s recordings within
this dataset using the WADA[30] tool, excluding speakers whose majority of
tracks exhibit a Signal-to-Noise Ratio (SNR) below 25dB. We refer to this
dataset as Universal in the following discussion. For LJSpeech, we allocate
250 sentences for testing. We set apart 20 segments for the Opencpop test. The
test-clean subset is employed for LibriTTS. The LibriTTS-R test-clean subset is
used for Universal. Lastly, we reserve 1397 audio files for the Jamendo test.
We preserve the original sampling rates: LJSpeech at 22.05 kHz, LibriTTS and
Universal at 24 kHz, and compute the Mel-scaled spectrograms with n_fft =
1024, hop_length = 256, and the number of Mel bins set to 100. For Opencpop
and MTG-Jamendo, we keep the original 44.1 kHz sampling rate and compute
the Mel-scaled spectrograms with n_fft = 2048, hop_length = 512, and again set
the number of Mel bins to 100. When extracting the complex coefficients
utilized by the model, we use the orthonormal Fast Fourier Transform (FFT) and
its inverse (IFFT), with the normalization convention of scaling by
$1/\sqrt{N}$ for both operations, where $N$ is the number of FFT points. This
approach ensures the spectrogram extracted is within a more reasonable range
for modeling.
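One way to obtain such an orthonormally scaled complex spectrogram in PyTorch is sketched below; the choice of a Hann window and the exact feature layout are our assumptions.

```python
# A minimal sketch: extract a complex spectrogram with the orthonormal (1/sqrt(N)) scaling.
import torch

def complex_spectrogram(wav, n_fft=1024, hop_length=256):
    window = torch.hann_window(n_fft)
    # normalized=True applies the win_length**-0.5 (i.e., 1/sqrt(N)) scaling convention.
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop_length, window=window,
                      normalized=True, return_complex=True)
    # Concatenate real and imaginary parts along the frequency dimension for modeling.
    return torch.cat([spec.real, spec.imag], dim=-2)
```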
#### Implementation
The RFWave backbone contains 8 ConvNeXtV2 blocks. Within each ConvNeXtV2
block, the depth-wise convolutional layer featuring a large kernel utilizes a
kernel size of 7 and maintains a channel dimension of 512. The first and last
1x1 point-wise convolutional layers in the sequence possess channel dimensions
of 512 and 1536, respectively. As described in Subsection 3.1, the complex
spectrogram is divided into 8 equally spanned subbands. Those subbands are not
related to the waveform equalization subbands mentioned in Subsection 3.3.
During training, audio samples are randomly cropped to lengths of 32512 and
65024 for 22.05/24 kHz and 44.1 kHz waveforms, respectively. This is
equivalent to a crop window of 128 frames for both sampling rates. We use a
batch size of 64. The model optimization is performed using the AdamW
optimizer with a starting learning rate of 2e-4 and beta parameters of (0.9,
0.999). A cosine annealing schedule is applied to reduce the learning rate to
a minimum of 2e-6. For evaluation purposes, we use 10 sampling steps unless
otherwise stated.
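For reference, the stated optimization setup corresponds roughly to the following sketch; `model` and `total_steps` are placeholders, since the network object and total step count are not part of this description.

```python
# A minimal sketch of the optimization setup; placeholders are marked as such.
import torch

model = torch.nn.Linear(512, 512)   # placeholder for the actual RFWave network
total_steps = 1_000_000             # placeholder; the true step count is not stated here

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps, eta_min=2e-6)
```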
#### Baseline and Evaluation Metrics
We benchmark our RFWave model against Vocos [11]. We adopt the original
training details to retrain Vocos for the LJSpeech and Opencpop datasets. For
the LibriTTS dataset, we utilize the pre-trained model. To assess our models,
we employ the UTMOS [31] for automatic prediction of Mean Opinion Scores
(MOS), which acts as a proxy for subjective human assessments. We incorporate
additional metrics into our evaluation framework as well. These metrics
include the Perceptual Evaluation of Speech Quality (PESQ) and Mel Spectral
Signal-to-Noise Ratio (Mel-SNR) as proposed in [15]. Mel-SNR measures the
fidelity of the mel-spectrogram of the reconstructed signal compared with the
ground truth across multiple frequency bands. The results are presented for
low frequencies (Mel-SNR-L), mid frequencies (Mel-SNR-M), high frequencies
(Mel-SNR-H), and an average of these three ranges is provided as the overall
Mel-SNR-A.
## 5 Results
The performance metrics of the model, both in the frequency and time domains,
with and without the implementation of the time-balanced loss, are summarized
in Table 1. The model that operates in the time domain and incorporates the
time-balanced loss exhibits superior performance. Consequently, only the
outcomes from this particular configuration are reported for the other
datasets.
As observed from Table 1 and Table 2, RFWave consistently outperforms in terms
of PESQ scores, while Vocos routinely excels in UTMOS scores across different
datasets. This might be due to the subtle biases inherent in these metrics.
Additionally, it is observed that Vocos achieves superior performance in terms
of Mel-SNR-M and Mel-SNR-H metrics, even when compared with RFWave(100) on the
LibriTTS dataset, which utilizes 100 sampling steps. Nonetheless, upon
examining the spectrograms in Figure 2, it is evident that RFWave produces
more distinct harmonics at medium and high frequencies. The objective metrics
pertaining to the samples can be found in Table A.1. This observation might be
due to the fact that the Mel-SNR metric primarily assesses the accuracy of
energy distribution across different frequencies without taking into account
the precision of the phase information. Simultaneously, models such as Vocos
perform better in spectral distance metrics due to their specific training
aimed at content reconstruction. Conversely, diffusion-based methods, which do
not employ feature or spectrogram matching, tend to generate samples that are
more representative of the distribution, leading to more natural sounding
audio [15].
We have trained the RFWave model on the Universal dataset employing
100-dimensional Mel spectrograms, as detailed in Section 4. Concurrently, we
have also trained the model with 80-dimensional Mel spectrograms, which were
extracted using the Espnet toolkit, a prevalent setup in Text-to-Speech (TTS)
tasks. As evidenced in Table 2, the configuration of Mel spectrograms is
impactful, with the 100-dimensional configuration demonstrating superior
performance to the 80-dimensional one. Furthermore, the model yields
satisfactory reconstructions for the Jamendo dataset. Online demos are
available for further review at https://bfs18.github.io/rfwave/.
We performed inference speed benchmark tests using an Nvidia GeForce RTX 4090
GPU and an Intel Core i7-12700 CPU. The implementation was done in Pytorch
[32], and no specific hardware optimizations were applied. The inference was
carried out with a batch size of 1 sample, utilizing the LJSpeech test set.
Table 3 displays the synthesis speed and model size of RFWave and Vocos. Vocos
stands as a strong baseline given that it requires only a single forward pass
and operates at the frame level.
Table 1: Objective metrics comparing the model’s performance in frequency and time domains with and without the time-balanced loss (frequency noted as freq, time-balanced loss noted as tbl).

Setting | UTMOS($\uparrow$) | PESQ($\uparrow$) | Mel-SNR-L($\uparrow$) | Mel-SNR-M($\uparrow$) | Mel-SNR-H($\uparrow$) | Mel-SNR-A($\uparrow$)
---|---|---|---|---|---|---
freq w/o tbl | 3.61 | 3.60 | 15.43 | 16.94 | 18.09 | 16.80
freq w/ tbl | 3.59 | 3.64 | 15.72 | 17.05 | 18.19 | 16.97
time w/o tbl | 3.84 | 3.96 | 16.32 | 17.40 | 19.10 | 17.60
time w/ tbl | 3.86 | 4.00 | 17.40 | 17.73 | 19.63 | 18.24
Vocos(ISTFT) | 4.09 | 3.54 | 16.71 | 18.44 | 20.64 | 18.57
groundtruth | 4.39 | - | - | - | - | -
Table 2: Objective evaluation metrics for RFWave and Vocos across various datasets. RFWave(Espnet) utilizes the 80-dimension mel from Espnet as a conditional input. Meanwhile, RFWave(100) conducts sampling for 100 steps.

Dataset | Model | UTMOS($\uparrow$) | PESQ($\uparrow$) | Mel-SNR-L($\uparrow$) | Mel-SNR-M($\uparrow$) | Mel-SNR-H($\uparrow$) | Mel-SNR-A($\uparrow$)
---|---|---|---|---|---|---|---
Opencpop | RFWave | - | 3.30 | 17.69 | 16.42 | 20.54 | 18.19
Opencpop | Vocos(ISTFT) | - | 3.01 | 16.90 | 17.40 | 20.45 | 18.22
LibriTTS | RFWave | 3.41 | 3.67 | 17.06 | 16.43 | 17.96 | 17.14
LibriTTS | RFWave(100) | 3.51 | 3.98 | 19.17 | 17.97 | 20.86 | 19.29
LibriTTS | Vocos | 3.74 | 3.31 | 17.20 | 18.81 | 20.87 | 18.81
Universal | RFWave | 3.87 | 3.80 | 18.45 | 17.88 | 19.90 | 18.73
Universal | RFWave(Espnet) | 3.75 | 3.40 | 16.43 | 16.54 | 17.20 | 16.71
Jamendo | RFWave | - | - | 11.82 | 12.55 | 17.25 | 13.83
Table 3: Model footprint and synthesis speed. xRT stands for the speed at which the model can generate speech in comparison to real-time. A higher xRT value signifies that the model is capable of producing speech quicker than real-time, with a value of 1.0 representing the speed of real-time.

Model | parameters | GPU xRT($\uparrow$) | CPU xRT($\uparrow$)
---|---|---|---
RFWave | 18.1 M | 91.46 | 1.40
Vocos(ISTFT) | 13.5 M | 2078.20 | 143.84
(a) RFWave Opencpop
(b) Vocos Opencpop
(c) RFWave LibriTTS
(d) Vocos LibriTTS
Figure 2: Examples of spectrograms. The differences are emphasized by blue
rectangles. RFWave, in particular, produces clearer harmonics than Vocos,
especially in the higher frequency range.
## 6 Conclusion and Discussion on Text-to-Speech
In this study, we propose RFWave, a multi-band Rectified Flow approach for
audio waveform reconstruction. The model has been carefully designed to
overcome the latency issues associated with diffusion models. RFWave stands
out for its ability to generate complex spectrograms by operating at the frame
level, processing all subbands concurrently. This concurrent processing
significantly enhances the efficiency of the waveform reconstruction process.
The empirical evaluations conducted in this research have demonstrated that
RFWave achieves exceptional reconstruction quality. Moreover, it has shown
superior computational efficiency by generating audio at a speed that is 90
times faster than real-time.
It would be relatively easy to implement the widely-used cascade pipeline to
develop a text-to-speech (TTS) system. This involves mapping text features to
Mel-spectrograms and then Mel-spectrograms to complex spectrograms using
Rectified Flow for both stages. Nevertheless, it is more advantageous to map
text features directly to complex spectrograms, especially in the context of
rapidly evolving large-scale TTS models. Large-scale TTS models typically
incorporate extensive corpora, and eliminating one stage of processing can
significantly reduce computational resource requirements. Additionally, this
direct approach limits the discrepancies that can arise between the two
stages. The infilling capabilities of Rectified Flow equip it to handle
diverse functions similar to those managed by prominent large-scale TTS
models, for example, replicating the speaker’s voice and speech style from a
provided audio prompt. We have conducted preliminary experiments in developing
a TTS system that maps text features directly to complex spectrograms based on
Rectified Flow. Currently, the results do not match those of the cascade
model. The code and model checkpoints are available in the repository. We
believe this approach warrants further investigation and plan to explore it in
future work.
## References
* [1] Dario Rethage, Jordi Pons, and Xavier Serra. A wavenet for speech denoising. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5069–5073. IEEE, 2018.
* [2] Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. World: a vocoder-based high-quality speech synthesis system for real-time applications. IEICE TRANSACTIONS on Information and Systems, 99(7):1877–1884, 2016.
* [3] Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron Oord, Sander Dieleman, and Koray Kavukcuoglu. Efficient neural audio synthesis. In International Conference on Machine Learning, pages 2410–2419. PMLR, 2018.
* [4] Aaron Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George Driessche, Edward Lockhart, Luis Cobo, Florian Stimberg, et al. Parallel wavenet: Fast high-fidelity speech synthesis. In International conference on machine learning, pages 3918–3926. PMLR, 2018.
* [5] Wei Ping, Kainan Peng, and Jitong Chen. Clarinet: Parallel wave generation in end-to-end text-to-speech. arXiv preprint arXiv:1807.07281, 2018.
* [6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27. Curran Associates, Inc., 2014.
* [7] Kundan Kumar, Rithesh Kumar, Thibault De Boissiere, Lucas Gestin, Wei Zhen Teoh, Jose Sotelo, Alexandre De Brebisson, Yoshua Bengio, and Aaron C Courville. Melgan: Generative adversarial networks for conditional waveform synthesis. Advances in neural information processing systems, 32, 2019.
* [8] Ryuichi Yamamoto, Eunwoo Song, and Jae-Min Kim. Parallel wavegan: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6199–6203. IEEE, 2020.
* [9] Jungil Kong, Jaehyeon Kim, and Jaekyoung Bae. Hifi-gan: Generative adversarial networks for efficient and high fidelity speech synthesis. Advances in Neural Information Processing Systems, 33:17022–17033, 2020.
* [10] Takuhiro Kaneko, Kou Tanaka, Hirokazu Kameoka, and Shogo Seki. istftnet: Fast and lightweight mel-spectrogram vocoder incorporating inverse short-time fourier transform. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6207–6211. IEEE, 2022.
* [11] Hubert Siuzdak. Vocos: Closing the gap between time-domain and fourier-based neural vocoders for high-quality audio synthesis. CoRR, abs/2306.00814, 2023.
* [12] Hui-Peng Du, Ye-Xin Lu, Yang Ai, and Zhen-Hua Ling. Apnet2: High-quality and high-efficiency neural vocoder with direct prediction of amplitude and phase spectra. arXiv preprint arXiv:2311.11545, 2023.
* [13] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020.
* [14] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. Wavegrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020.
* [15] Robin San Roman, Yossi Adi, Antoine Deleforge, Romain Serizel, Gabriel Synnaeve, and Alexandre Défossez. From discrete tokens to high-fidelity audio using multi-band diffusion. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
* [16] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
* [17] Geng Yang, Shan Yang, Kai Liu, Peng Fang, Wei Chen, and Lei Xie. Multi-band melgan: Faster waveform generation for high-quality text-to-speech. In 2021 IEEE Spoken Language Technology Workshop (SLT), pages 492–498, 2021.
* [18] J. Johnston. A filter family designed for use in quadrature mirror filter banks. In ICASSP ’80. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 5, pages 291–294, 1980.
* [19] Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, and Saining Xie. Convnext V2: co-designing and scaling convnets with masked autoencoders. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, Vancouver, BC, Canada, June 17-24, 2023, pages 16133–16142. IEEE, 2023.
* [20] Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. CoRR, abs/2210.13438, 2022.
* [21] Diederik Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 21696–21707. Curran Associates, Inc., 2021.
* [22] Jingjing Xu, Xu Sun, Zhiyuan Zhang, Guangxiang Zhao, and Junyang Lin. Understanding and improving layer normalization. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
* [23] Keith Ito and Linda Johnson. The lj speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.
* [24] Yu Wang, Xinsheng Wang, Pengcheng Zhu, Jie Wu, Hanzhao Li, Heyang Xue, Yongmao Zhang, Lei Xie, and Mengxiao Bi. Opencpop: A high-quality open source chinese popular song corpus for singing voice synthesis. In Hanseok Ko and John H. L. Hansen, editors, Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022, pages 4242–4246. ISCA, 2022.
* [25] Dmitry Bogdanov, Minz Won, Philip Tovstogan, Alastair Porter, and Xavier Serra. The mtg-jamendo dataset for automatic music tagging. In Machine Learning for Music Discovery Workshop, International Conference on Machine Learning (ICML 2019), Long Beach, CA, United States, 2019.
* [26] Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Michiel Bacchiani, Yu Zhang, Wei Han, and Ankur Bapna. Libritts-r: A restored multi-speaker text-to-speech corpus. arXiv preprint arXiv:2305.18802, 2023.
* [27] Yao Shi, Hui Bu, Xin Xu, Shaoji Zhang, and Ming Li. Aishell-3: A multi-speaker mandarin tts corpus and the baselines. arXiv preprint arXiv:2010.11567, 2020.
* [28] Junichi Yamagishi, Christophe Veaux, and Kirsten MacDonald. Cstr vctk corpus: English multi-speaker corpus for cstr voice cloning toolkit (version 0.92). 2019\.
* [29] Haohe Liu, Qiuqiang Kong, Qiao Tian, Yan Zhao, DeLiang Wang, Chuanzeng Huang, and Yuxuan Wang. Voicefixer: Toward general speech restoration with neural vocoder. arXiv preprint arXiv:2109.13731, 2021.
* [30] Chanwoo Kim and Richard M Stern. Robust signal-to-noise ratio estimation based on waveform amplitude distribution analysis. In Ninth Annual Conference of the International Speech Communication Association. Citeseer, 2008.
* [31] Takaaki Saeki, Detai Xin, Wataru Nakata, Tomoki Koriyama, Shinnosuke Takamichi, and Hiroshi Saruwatari. UTMOS: utokyo-sarulab system for voicemos challenge 2022. In Hanseok Ko and John H. L. Hansen, editors, Interspeech 2022, 23rd Annual Conference of the International Speech Communication Association, Incheon, Korea, 18-22 September 2022, pages 4521–4525. ISCA, 2022.
* [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
## Appendix A Supplementary Material
Figure A.1: The Volume and $\sigma$ exhibit consistent variation across the frames.

Table A.1: Objective metrics for the two sentences as evaluated across different models.

Data | Model | UTMOS($\uparrow$) | PESQ($\uparrow$) | Mel-SNR-L($\uparrow$) | Mel-SNR-M($\uparrow$) | Mel-SNR-H($\uparrow$) | Mel-SNR-A($\uparrow$)
---|---|---|---|---|---|---|---
Opencpop 2009000326 | RFWave | - | 3.54 | 18.31 | 16.44 | 20.14 | 18.27
Opencpop 2009000326 | Vocos(ISTFT) | - | 3.25 | 17.40 | 17.16 | 19.39 | 17.96
LibriTTS 121_121726_000005_000001 | RFWave | 3.87 | 3.71 | 17.73 | 16.97 | 17.97 | 17.55
LibriTTS 121_121726_000005_000001 | Vocos(ISTFT) | 4.23 | 3.75 | 19.76 | 19.72 | 20.32 | 19.94
# Deep Learning Techniques for Future Intelligent Cross-Media Retrieval
Sadaqat ur Rehman, , Muhammad Waqas, , Shanshan Tu, , Anis Koubaa, Obaid ur
Rehman, Jawad Ahmad, Muhammad Hanif, , Zhu Han S. Rehman is an Assistant
Professor with the Department of Computer Science, Namal Institute - an
associated college of the University of Bradford UK. e-mail:
(engr.sidkhan@gmail.com)S. Tu is with the Faculty of Information Technology,
Beijing University of Technology, Beijing China. e-mail: (sstu@bjut.edu.cn)M.
Waqas is with the Faculty of Information Technology, Beijing University of
Technology, Beijing China, and also with Department of Computer Science and
Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and
Technology, Topi, 23460, Pakistan, e-mail: (engr.waqas2079@gmail.com)A. Koubaa
is with the Robotics and Internet-of-Things research lab, Department of
Computer Science, Prince Sultan University, R&D Gai-tech Robotics, China and
CISTER/INESC TEC and ISEP-IPP, Porto, Portugal.O. Rehman is an Assistant
Professor in the Department of EE, Sarhad University of Science and IT,
Pakistan, e-mail:(obaid.ee@suit.edu.pk)J. Ahmad is a Lecturer in the
Department of Computer Science, Edinburgh Napier University, UK,
email:(jawadkhattak@ieee.org)M. Hanif is with Department of Computer Science
and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and
Technology, Topi, 23460, Pakistan, e-mail: (muhammad.hanif@giki.edu.pk)Z. Han
<EMAIL_ADDRESS>is with the Department of Electrical and Computer Engineering,
University of Houston, Houston, TX 77004, USA.S. Tu is the corresponding
author.
###### Abstract
With the advancement in technology and the expansion of broadcasting, cross-
media retrieval has gained much attention. It plays a significant role in big
data applications and consists in searching and finding data from different
types of media. In this paper, we provide a novel taxonomy according to the
challenges faced by multi-modal deep learning approaches in solving cross-
media retrieval, namely: representation, alignment, and translation. These
challenges are evaluated on deep learning (DL) based methods, which are
categorized into four main groups: 1) unsupervised methods, 2) supervised
methods, 3) pairwise based methods, and 4) rank based methods. Then, we
present some well-known cross-media datasets used for retrieval, considering
the importance of these datasets in the context of deep learning based
cross-media retrieval approaches. Moreover, we also present an extensive
review of the state-of-the-art problems and their corresponding solutions for
encouraging deep learning in cross-media retrieval. The fundamental objective
of this work is to exploit Deep Neural Networks (DNNs) for bridging the “media
gap”, and provide researchers and developers with a better understanding of
the underlying problems and the potential solutions of deep learning assisted
cross-media retrieval. To the best of our knowledge, this is the first
comprehensive survey to address cross-media retrieval under deep learning
methods.
###### Index Terms:
Cross-media retrieval, deep learning.
## I Introduction
Social media websites (e.g., Facebook, Youtube, Instagram, Flickr, and
Twitter) have tremendously increased the volume of multimedia data over the
Internet. Consequently, considering this large volume of data and the
heterogeneity of the data sources, data retrieval becomes more and more
challenging. Generally, multimodal data (i.e., data from sources, e.g., video,
audio, text, images) are used to describe the same events or occasions. For
instance, a web page describes similar contents of an event in different
modalities (image, audio, video, and text). Therefore, with a large amount of
multimodal data, the accuracy of search results concerning the information of
interest decreases. The evolution of different search algorithms for indexing
and searching multimodal data contributed positively to searching for
information of interest efficiently. Nevertheless, they only work in a single-
modality-based search, comprising two main classes: content-based retrieval
and keyword-based retrieval [1].
In the last few years, many cross-media retrieval methods have been proposed
[2, 3, 4, 5, 6, 7, 8]. However, Canonical Correlation Analysis (CCA) [9] and
Partial Least Square (PLS) [10, 11] are usually adopted to explicitly project
different modality data to a common space for similarity measurement. In the
Bilinear Model (BLM) [12], different modality (e.g., text and image) data are
projected to the same coordinates as it learns a common subspace. Generalized
Multiview Analysis (GMA) [13] can be used to combine CCA, BLM, and PLS for
solving the cross-media retrieval task. Gong et al. [14] proposed a variant CCA
model by incorporating the high-level semantic information as a third view.
Ranjan et al. [15] also introduced a variant of CCA called multilabel
Canonical Correlation Analysis (ml-CCA) for learning the weights of shared
subspaces using high-level semantics called multi label annotations. Rasiwasia
et al. [16] proposed a cluster CCA method to learn discriminant isomorphic
representations that maximize the correlation between two modalities while
distinguishing the different categories. Sharma et al. [13] proposed a
variant of Marginal Fisher Analysis (MFA) called Generalized Multiview
Marginal Fisher Analysis (GMMFA).
Table I: Comparison of existing survey articles on deep learning and cross-media retrieval. ✔ represents that the topic is covered, ✘ represents the topic is not covered, and ❊ represents the topic is partially covered. The Supervised, Unsupervised, Pairwise, and Rank columns fall under Deep Learning; the Representation, Alignment, and Translation columns fall under Cross-media Retrieval.

Ref. | Year | Topic | Supervised | Unsupervised | Pairwise | Rank | Representation | Alignment | Translation
---|---|---|---|---|---|---|---|---|---
[17] | 2015 | Deep learning in neural networks: An overview | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[18] | 2015 | Deep Learning | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[19] | 2017 | A survey of deep neural network architectures and their applications. | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[20] | 2019 | Deep learning: methods and applications | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[21] | 2014 | A tutorial survey of architectures, algorithms, and applications for deep learning | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[22] | 2018 | A survey on deep learning: Algorithms, techniques, and applications | ✔ | ✔ | ✔ | ✔ | ✘ | ✘ | ✘
[23] | 2017 | Deep reinforcement learning: A brief survey | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘
[24] | 2017 | Imitation learning: A survey of learning methods | ✔ | ✔ | ✔ | ❊ | ✘ | ✘ | ✘
[25] | 2014 | Big data deep learning: challenges and perspectives | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘
[26] | 2015 | Deep learning applications and challenges in big data analytics | ✔ | ✔ | ✘ | ✘ | ✘ | ✘ | ✘
[27] | 2017 | A systematic literature review on features of deep learning in big data analytics | ✔ | ✔ | ✔ | ✘ | ✘ | ✘ | ✘
[28] | 2017 | An overview of cross-media retrieval: Concepts, methodologies, benchmarks, and challenges | ✔ | ✔ | ✘ | ❊ | ✔ | ✘ | ✘
[29] | 2016 | A comprehensive survey on cross-modal retrieval | ✔ | ✔ | ✔ | ✔ | ✔ | ✘ | ✘
[30] | 2010 | Cross-media retrieval: state-of-the-art and open issues | ✔ | ✔ | ✘ | ✘ | ✔ | ✘ | ✘
Our work | 2020 | Deep Learning Techniques: Evolving Machine Intelligence for Future Intelligent Cross-media Retrieval | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔
Even though each of the aforementioned works provides a vital contribution to
the cross-media retrieval community, these methods still lack satisfactory
performance. The key reason is that conventional feature learning techniques
hardly tackle the problem of image understanding, yet cross-media retrieval is
highly dependent on the visual feature representation shared between images and
text. Recently, deep learning models have made significant progress in
fields such as computer vision [31, 32], engineering [33], health [34] and
hydrology [35]. Donahue et al. [36] proposed a deep eight-layer neural
network called DeCAF, which confirmed that Convolution Neural Network (CNN)
features are helpful for various feature extraction tasks.
In this paper, we investigate different deep learning approaches applied in
the domain of cross-media search, which are indispensable for the adoption and
implementation of cross-media retrieval. DNN is designed to simulate the
neuronal structure of the human brain, and represents a powerful approach to
naturally deal with the correlations of multimedia data. For this purpose, several
researchers have explored DNNs for using it in the search and retrieval of
data from heterogeneous sources. Although the latest research in the field of
DNN-based methods for cross-media retrieval has achieved better performance
[37], there are still significant improvements needed in this area.
We explore the following three main challenges for using deep learning
techniques in cross-media retrieval.
Figure 1: Taxonomy of the proposed work.
1. 1.
Representation. It aims to learn the representation of cross-media data in an
optimal way to mitigate its redundancy. This is a challenging task in cross-
media retrieval since data is heterogeneous. For instance, the text is
normally symbolic while audio and video modalities are represented as signals.
Therefore, learning the representation of individual modality in a common
semantic space is a challenging task.
2. 2.
Alignment. In this procedure, the key objective is to find the correlation
between elements from cross modalities to mitigate the modality-to-modality
mismatch issue. For instance, we want to align each human action image into a
video showing a series of different human actions. To achieve this, we need to
measure the similarity distance between different modalities and deal with
other correlation uncertainties.
3. 3.
Translation. It shows the correlation mapping of data across different
modalities, since data is heterogeneous and the relationship between cross
modalities is hard to identify. For instance, an image can be described in
various different ways, and a single perfect translation may not exist.
Therefore, it is hard to choose an appropriate translation for a particular
task, where multiple parameters are crucial. In particular, there is often no
single correct answer to a query in translation, since there is no common
notion of translation for deciding which answer is right and which is wrong.
For each of the aforementioned problems in cross-media retrieval, we provide a
taxonomy of classes and sub-classes. A detailed taxonomy is provided in Fig.
1. We found out that some key issues of deep learning in cross-media retrieval
on concepts, methodologies and benchmarks are still not clear in the
literature. To tackle the aforementioned challenges, we investigate DNN-
based methods for cross-media retrieval.
### I-A Comparison with Related Survey Articles
Our current survey article is unique in a sense that it comprehensively covers
the area of DNNs-based cross-media retrieval. There is no prior detailed
survey article that jointly considers DNNs and cross-media retrieval, to the
best of our knowledge. Although there is an extensive literature of survey
articles on DNNs or cross-media retrieval, these surveys focus on DNNs or
cross-media retrieval individually.
General surveys regarding deep learning are discussed in [18, 19, 20, 22].
Surveys dealing with only cross-media retrieval domain are presented in [30].
Our work is closely related to [29, 28]; however, they cover the broader
picture of the cross-media retrieval domain, whereas our work focuses more on
DL-based cross-media retrieval. Furthermore, we provide a novel taxonomy
according to the challenges faced by multi-modal deep learning approaches in
solving cross-media retrieval, namely: representation, alignment, and
translation. To the best of our knowledge, this is the only work to date
that provides a detailed survey of DL-based methods for solving cross-media
retrieval challenges (representation, alignment, and translation). A summarized
comparison of survey articles on DL and cross-media retrieval is provided in
Table I.
### I-B Our Contributions
To summarize, our main objectives in this paper are as follows.
Figure 2: A generalized framework of cross-media retrieval system.
* •
Provide an up-to-date survey on the current advancements in cross-modal
retrieval. This provides added value compared to previous surveys and offers
substantial benefits for rapidly understanding the trends in cross-media
retrieval.
* •
Provide a useful categorization of cross-media retrieval under DNN approaches.
The contrasts between various types of techniques are expounded, which are
helpful for readers to better understand various deep learning techniques used
in cross-media retrieval.
* •
A detailed explanation of almost every cross-media dataset is provided.
Furthermore, their advantages and disadvantages are also discussed to help
developers and researchers choose a better dataset for their learning
algorithms.
* •
Present the key challenges and opportunities in the area of cross-media
retrieval and discuss open future research challenges.
## II An Overview of Cross-media Retrieval and Deep Learning
Before delving into the details of this paper, we begin with the fundamental
concepts of cross-media retrieval and deep learning techniques. We divide this
section into several subsections: cross-media retrieval is discussed in
Subsection A; deep learning techniques and the corresponding representation
learning algorithms are covered in Subsection B; finally, Subsection C explains
why DL is important for cross-media retrieval.
### II-A Cross-media Retrieval
Cross-media retrieval represents the search for different modalities (e.g.,
images, texts, videos) by giving any individual modality as an input. The
generic framework of cross-media retrieval is shown in Fig. 2, in which data
is represented in different modalities such as text, image, and video.
Different algorithms (e.g., CNN, SIFT, LDA, TF-IDF, etc.) are applied to learn
the feature vectors of individual modality. Furthermore, in the case of joint
semantic space for multimodal data, cross-media correlation learning is
performed for feature extraction. Finally, the semantic representations allow
the cross-media retrieval to perform search results ranking and summarization.
It is important to note that cross-media retrieval is different from other
correlation matching approaches between various media types (image and text).
For example, correlation matching approaches [38, 39] are used to generate
text descriptions of images/videos only, whereas the cross-media retrieval
approach endeavors to retrieve text from image/video data of different modalities
and vice versa. Methods of image annotation [40] are used to assign the most
relevant tags to images for descriptions, whereas in cross-media retrieval,
the text also represents sentences and paragraph descriptions instead of only
tags.
Cross-media retrieval is an open research issue in real-world applications.
With the popularity of social media platforms (i.e., Facebook, Twitter,
Youtube, Flickr and Instagram) different types of media (images, videos,
texts) are flooding over the Internet. To tackle this issue, different cross-
media retrieval approaches have been proposed [41, 42, 43, 44, 45]. However,
in this paper we only consider DNNs-based cross-media retrieval approaches for
information utilization to learn the common representations. As, DNNs-based
approaches leverage the performance of different learning algorithms in cross-
media retrieval domain. Moreover, to our knowledge this is the only survey
mutually consider DNNs and cross-media retrieval. We categorize the DNN-based
methods for the individual challenge of cross-media retrieval into four
classes: (1) unsupervised methods, (2) supervised methods, (3) pairwise based
methods, and (4) rank based methods.
1. 1.
Unsupervised methods. Unsupervised methods leverage co-occurrence information
instead of label information to learn common representations across data with
different modalities. Specifically, these methods treat different modalities
of data existing in a common multi-modal document as sharing the same semantics.
For instance, a web page contains both text and pictures outlining the same
theme, and users get information from both images and texts to form an idea of
a particular event or topic on that page.
2. 2.
Supervised methods. In supervised methods, label information is used to learn
common representations. These methods increase the correlation among intra-
class samples and decrease the correlation among inter-class samples to obtain
good discriminating representations. However, getting annotated data is costly
and laborious because of manual labelling.
3. 3.
Pairwise based methods. These methods are used to learn common representations
through similar/dissimilar pairs, in which, a semantic metric distance is
learned between data of various modalities.
4. 4.
Rank based methods. These methods are used to learn common representations for
cross-media retrieval through learning to rank. A minimal sketch of typical pairwise and rank-based objectives is given after this list.
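As an illustration of the last two categories, the sketch below shows typical pairwise (contrastive) and rank-based (triplet) objectives for image and text embeddings that already live in a common space; it is a generic example under our own assumptions, not the formulation of any particular surveyed method.

```python
# A minimal, illustrative sketch of pairwise and rank-based objectives for cross-modal embeddings.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(img_emb, txt_emb, is_match, margin=0.5):
    # Pairwise: pull matching image-text pairs together, push non-matching pairs apart
    # beyond a margin (a learned semantic metric distance between modalities).
    d = F.pairwise_distance(img_emb, txt_emb)
    return (is_match * d.pow(2) + (1 - is_match) * F.relu(margin - d).pow(2)).mean()

def triplet_ranking_loss(img_emb, pos_txt, neg_txt, margin=0.2):
    # Rank based: the matching text should rank above a non-matching one by a margin.
    d_pos = F.pairwise_distance(img_emb, pos_txt)
    d_neg = F.pairwise_distance(img_emb, neg_txt)
    return F.relu(margin + d_pos - d_neg).mean()
```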
### II-B Deep Learning Techniques
Deep Learning (DL) is a sub-class of Machine Learning (ML). DL networks are a
kind of neural network that discovers important object features. These
algorithms attempt to learn (multiple levels of) representation by using a
hierarchy of multiple layers. If the system is provided with a large amount of
information, it begins to understand it through feature extraction and respond
in useful ways. Most deep learning algorithms are built on neural
network architectures; for this reason they are often called Deep Neural
Networks (DNNs).
Different DL architectures (Deep Neural Network, Convolution Neural Network,
Deep Belief Networks, Recurrent Neural Network) are successful in solving many
computer vision problems efficiently, where the solutions are difficult to
obtain analytically. These problems include handwritten digit recognition,
optical character recognition, object classification, face detection, image
captioning, and facial expression analysis [46, 17, 18].
Currently, DL algorithms are also tested in interdisciplinary research
domains, such as bio-informatics, drug design, medical image analysis,
material inspection, agriculture and hydrology [35, 47, 48, 49, 50]. The
processing and evolution of these fields are dependent on deep learning, which
is still evolving and in need of creative ideas [51, 52, 53].
#### II-B1 Evolution and Classification of Deep Learning Techniques
Figure 3: An overview of the evolution of deep learning from conventional
Machine Intelligence and Machine Learning paradigms.
Since the early excitement stirred by ML in the 1950s, smaller subsets of
machine intelligence have been impacting a myriad of applications over the
last three decades as shown in Fig. 3. Initially, the term “deep learning” was
presented to the community of machine learning by Rina Dechter in 1986 [54,
18], and Igor Aizenberg and his colleagues to artificial neural networks in
2000, in boolean threshold neurons domain [55, 56]. In 1965, Alexey Ivakhnenko
and Lapa published the primary general learning algorithm for feed-forward,
supervised, multi-layer perceptrons [57]. Moving forward in 1980, Kunihiko
Fukushima introduced Neocognitron in computer vision domain [58]. Furthermore,
Yann LeCun applied standard backpropagation algorithm to deep neural network
for handwritten recognition in 1989 [59, 60, 61, 62].
Although, deep learning has existed for more than three decades however, they
have recently gain interest in the machine learning community. Before 2006,
the deep learning method was a complete failure in training large deep
architectures. In 2006, the revolution to successful training schemes for deep
architectures originated with the algorithms for training Deep Belief Networks
(DBNs) by Hinton et al. [63] and autoencoders by Ranzato et al. [64] and
Bengio et al. [65] based on unsupervised pre-training followed by supervised
fine-tuning. Following the same path, different approaches were proposed to
deal with the aforementioned issues under different circumstances.
Before 2011, CNNs did not succeed in efficiently solving computer vision
problems. However, in 2011, CNNs achieved superhuman performance in a visual
pattern recognition contest. In 2012, the success of deep learning algorithms
in image and object recognition began, although the backpropagation
algorithm had been used for decades to train CNNs, and Graphical Processing
Unit (GPU) implementations of Neural Networks (NNs), including CNNs, had existed
for years [66, 67]. Moreover, in the same year CNNs also won the ICDAR Chinese handwriting
contest. In May 2012, CNNs won the ISBI image segmentation contest [68], which
significantly attracted researchers’ attention. Ciresan et al. showed how max-
pooling CNNs on GPU can dramatically improve several computer vision benchmark
records at CVPR 2012 [69]. Following the same path, in October 2012,
Krizhevsky et al. [52] showed the dominance of DNNs over shallow machine
learning methods by winning the large-scale ImageNet competition by a large
margin.
Researchers believe that the victory in the ImageNet Large Scale Visual
Recognition Challenge (ILSVRC) 2012 marked the beginning of the “deep learning
revolution” that has revolutionized the Artificial Intelligence (AI) industry
[70].
### II-C Why DL for Cross-media Retrieval?
Before going into detail, it is useful to understand the reasons for applying DNNs
to cross-media retrieval. DNNs have several attractive characteristics
that make them unique, such as (1) an end-to-end learning model, (2) an efficiency
boost from back-propagation training, and (3) performance that
increases as the size of the data increases [71, 72, 73]. Furthermore, the
architecture of DNNs is hierarchical and trained end-to-end. The main advantage
of such an architecture appears when dealing with multimedia data. For example, a
webpage contains textual data (reviews [74], tweets [75]), visual data (posts,
scenery images), audio data, and video data. Here, modality-specific feature
extraction would be complex and time consuming. For instance, to process
textual data, we would first need to perform expensive and time-consuming pre-
processing (e.g., keyword extraction, main topic selection). However, DNNs
have the ability to process all the textual information in a sequential end-
to-end manner [74]. Therefore, these advances in the architecture of DNNs make
them very suitable for multi-modal tasks [76] and motivate the use of
indispensable end-to-end neural learning models.
As far as interaction-only settings (i.e., matrix completion) are
concerned, DNNs are necessary for dealing with huge amounts of training data and
enormous complexity. He et al. [77] surpassed the performance of the
conventional Matrix Factorization (MF) method by using a Multi Layer Perceptron
(MLP) to approximate the interaction function. Moreover, typical ML models
(i.e., BPR and MF) also achieve best performance on interaction-only data when
trained with momentum-based gradient descent [78]. Nevertheless, these models
also take the benefit of current DNNs based improvements such as Batch
normalization, Adam, and optimized weight initialization [77, 79]. In fact,
most cross-media retrieval algorithms have adopted DNN-based
structures to improve their performance, such as Deep Canonical Correlation
Analysis (DCCA) [80], Deep Canonically Correlated Auto-Encoder (DCCAE) [81],
and Discriminative Deep Canonical Correlation Analysis (DisDCCA) [82].
Therefore, DL is a significantly useful tool in today’s research and industrial
environments for the advancement of cross-media retrieval methods.
We summarize some of the useful strengths of DNNs based cross-media retrieval
models, which are as follows:
#### II-C1 Flexibility
DNN-based approaches are also known as global learning due to their vast
application domain. Currently, the flexibility of DL methods is further boosted
by the advent of well-known DL frameworks, i.e., Caffe, Tensorflow, Pytorch,
Keras, Theano, and MXnet. Each of the aforementioned frameworks has an active
community and support. This makes development and engineering efficient and
easier. For instance, the concatenation of different neural models becomes
easier and produces more powerful hybrid structures. Hence, hybrid cross-media
retrieval models become easier to implement, capture better features, and
perform well.
#### II-C2 Generalization
This property of DL methods makes them highly desirable and unique. They can be
used in many different applications and with different data types. For example,
in the case of transfer learning, DL-based methods have the ability to share
knowledge across different tasks. Since DL algorithms capture both low- and high-
level features, they may be beneficial for performing other tasks [46]. Andreas
et al. [83] and Perera et al. [84] showed the successful performance of DNN-
based methods in transfer learning.
#### II-C3 Nonlinear Transformation
DNN-based models have the ability to process the non-linearity in data using
non-linear activation functions, i.e., sigmoid, relu, and tanh. This helps the
models to capture complex patterns within the dataset. Traditional cross-media
retrieval methods such as CCA, BLM, and Linear Discriminant Analysis (LDA) are
linear models, which need DNN-based methods to retrieve nonlinear features.
For example, in DCCA, DNNs are first used to extract nonlinear features,
and linear CCA is then applied to calculate the canonical matrices. It is
well-known that neural networks have the ability to approximate any continuous
function by varying the activation functions [85].
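A minimal, illustrative sketch of such a DCCA-style pipeline is given below; the network sizes, regularization, and variable names are our assumptions for exposition and do not reproduce the code of [80].

```python
# A minimal sketch of a DCCA-style pipeline: nonlinear feature extractors per modality,
# followed by a linear CCA step whose canonical correlations are maximized.
import torch
import torch.nn as nn

image_net = nn.Sequential(nn.Linear(4096, 512), nn.ReLU(), nn.Linear(512, 64))
text_net = nn.Sequential(nn.Linear(300, 512), nn.ReLU(), nn.Linear(512, 64))

def canonical_correlations(H1, H2, eps=1e-4):
    # Center each view, whiten with a Cholesky factor, and take the singular values of the
    # whitened cross-covariance: these are the canonical correlations.
    H1, H2 = H1 - H1.mean(0), H2 - H2.mean(0)
    n = H1.shape[0]
    S11 = H1.T @ H1 / (n - 1) + eps * torch.eye(H1.shape[1])
    S22 = H2.T @ H2 / (n - 1) + eps * torch.eye(H2.shape[1])
    S12 = H1.T @ H2 / (n - 1)
    W1 = torch.linalg.inv(torch.linalg.cholesky(S11))
    W2 = torch.linalg.inv(torch.linalg.cholesky(S22))
    return torch.linalg.svdvals(W1 @ S12 @ W2.T)

img_feat, txt_feat = torch.randn(32, 4096), torch.randn(32, 300)
loss = -canonical_correlations(image_net(img_feat), text_net(txt_feat)).sum()  # maximize correlation
```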
#### II-C4 Robust
DL-based methods do not need manual feature extraction algorithms; rather,
features are learned in an end-to-end manner. Hence, the system achieves better
performance despite variations in the input data. The authors of [86] and
[87] showed the robustness of DL against adversarial attacks in visual
recognition application.
## III Cross-media Datasets
Datasets play a critical role in the evaluation of learning algorithms. Their
selection is very important for feature extraction and the training of different
DL algorithms. We summarize some of the well-known cross-media datasets
below, and Table II depicts a comparative evaluation among them.
1. 1.
Wikipedia: this dataset is largely used in cross-media domain to evaluate the
performance of different learning algorithms. The dataset consists of 2866
image-text pairs of 10 distinct classes accumulated from Wikipedia’s articles.
2. 2.
NUS WIDE: A popular dataset in cross-media community after Wikipedia dataset.
This dataset contains 269,648 labeled images of 81 different concepts from
Flickr. Every image in the dataset is aligned with associated user tags called
image-text pair. Overall, the dataset contains 425,059 unique tags that are
associated with these images. Nevertheless, to enhance the quality of tags,
those tags were pruned that appear less than 100 times and do not exist in
WordNet [88]. Hence, 5,018 unique tags are included in this dataset.
3. 3.
Pascal VOC: the dataset consists of 20 distinct classes of image-tag pairs
having 5011 training pairs and 4952 testing pairs. Some images are
labeled more than once; hence, some studies in the literature have selected only
uni-labelled images, which results in 2808 and 2841 training and testing
pairs, respectively [13]. The image features chosen were GIST, color [89],
and histogram, whereas the text features were 399-dimensional tag occurrences.
4. 4.
FB5K: The dataset contains 5,130 image-tag pairs with associated users’
feelings, which is accumulated from Facebook [90]. Furthermore, this dataset
is categorized into 80% and 20% for training and testing image-text pairs.
5. 5.
Twitter100K: This dataset is made up of 100,000 image-text pairs collected
from Twitter. It exploited 50,000 and 40,000 image-text pairs for training and
testing respectively. Moreover, about 1/4 of the images in this dataset
contain text which are highly correlated to the paired tweets.
6. 6.
XMedia: This is the only dataset in the cross-media domain with five different
modalities, such as video, audio, image, text, and 3-Dimensional (3D) model.
It consists of 20 distinct classes, such as elephant, explosion, bird, dog,
etc. Each class contains an overall of 600 media instances: 250 texts, 250
images, 25 videos, 50 audio clips, and 25 3D models. In the dataset’s overall
collection, different popular websites were used to collect data, i.e.,
Flickr, YouTube, Wikipedia, 3D Warehouse, and Princeton 3D model search
engine.
7. 7.
Flickr30K: the dataset is the extended version of Flickr8k datset [91]. It
consists of 31783 images collected from Flickr. Individual image in this
dataset is linked with associated five native English speakers’ descriptive
sentences.
8. 8.
INRIA-Websearch: this dataset contains 353 image search queries, along with
their meta-data and ground-truth annotations. In total, this dataset consists
of 71478 images.
9. 9.
IAPR TC-12: the dataset consists of 20,000 images (plus 20,000 corresponding
thumbnails) taken from locations around the world and comprising a varying
cross-section of still natural images. The time span used for the collection of
images falls within 2001-2005. Moreover, this collection is spatially diverse
as the images were collected from more than 30 countries.
10. 10.
ALIPR: the dataset contains annotation results for more than 54,700 images created by users of flickr.com, which are viewable at alipr.com.
11. 11.
LabelMe: the dataset contains 30,000 images with 183 associated labels. The data were mainly collected through crowd-sourcing via the MIT CSAIL database of objects and scenes (http://web.mit.edu/torralba/www/database.html).
12. 12.
Corel5K: the dataset was collected from 50 Corel Stock Photo CDs. It consists of a total of 5,000 images, with 100 images on the same topic. Each image is linked with 1-5 keywords, drawn from a total of 371 keywords. Before modelling, all images in the dataset are pre-segmented using normalized cuts [92]. Each image is described by a total of 36 features: 18 color features, 12 texture features, and 6 shape features.
13. 13.
Corel30K: the dataset is the extended version of the previously published Corel5K dataset. It contains 31,695 images and 5,587 associated words, with 90% (28,525) of the images used for training and 10% (3,170) for testing. This dataset improves considerably on Corel5K in terms of examples per label and database size, and hence plays a significant role in evaluating learning systems.
14. 14.
AnnoSearch: the dataset contains 2.4 million photos collected from popular websites, such as Google (images.google.com) and the University of Washington (UW) image database (http://www.cs.washington.edu/research/imagedatabase/groundtruth/). The images are of high quality and come with rich associated descriptions, such as title, category, and comments from the photographers, although these descriptions only partially cover the concepts of the associated images.
15. 15.
Clickture: this dataset was built from a one-year click log of a commercial image search website. It contains 212.3 million triads, each mathematically defined as:
$Clickture=\left({i,k,t}\right),$ (1)
where a triad $(i,k,t)$ denotes that image “$i$” was clicked “$t$” times for the query “$k$” within one year by different users at different times. The full Clickture dataset consists of 40 million unique images and 73.6 million unique text queries. Moreover, a lite version, titled “Clickture-Lite”, is also available, consisting of 1 million images and 11.7 million text queries.
16. 16.
ESP: the dataset contains more than 10 million images, collected mainly through crowd-sourcing via the ESP game. The main objective of this cross-media dataset is to label most of the images on the internet. The authors envisioned that if the game were hosted on a popular gaming platform, such as Yahoo! Games, and played as enthusiastically as other games, the labeling of most web images could be completed within weeks. It is further predicted that if 5,000 people played the game regularly for 31 days, they could assign labels to all Google images.
Table II: A summary of datasets in cross-media retrieval. For each dataset we identify the modality used to tackle the problem of cross-media retrieval.
Ref | Dataset | Year | Data size | URL | Image | Text | Tags | Video | Audio | 3D Model
---|---|---|---|---|---|---|---|---|---|---
[93] | Wikipedia | 2010 | 2,866 | http://www.svcl.ucsd.edu/projects/crossmodal/ | ✔ | ✔ | - | - | - | -
[94] | NUS-WIDE | 2009 | 269,648 | http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm | ✔ | - | ✔ | - | - | -
[89] | Pascal VOC | 2015 | 9,963 | http://host.robots.ox.ac.uk/pascal/VOC/ | ✔ | - | ✔ | - | - | -
[95] | Flickr30K | 2014 | 31,783 | http://shannon.cs.illinois.edu/DenotationGraph/ | ✔ | ✔ | - | - | - | -
[96] | INRIA-Websearch | 2010 | 71,478 | http://lear.inrialpes.fr/pubs/2010/KAVJ10/ | ✔ | ✔ | - | - | - | -
[97] | FB5K | 2018 | 5,140 | http://ngn.ee.tsinghua.edu.cn/ | ✔ | - | ✔ | - | - | -
[98] | Twitter100K | 2018 | 100,000 | http://ngnlab.cn/wp-content/uploads/twitter100k.tar | ✔ | ✔ | - | - | - | -
[3] | XMedia | 2018 | 12,000 | http://www.icst.pku.edu.cn/mipl/XMedia | ✔ | ✔ | - | ✔ | ✔ | ✔
[99] | IAPR TC-12 | 2006 | 20,000 | http://imageclef.org/photodata | ✔ | ✔ | - | - | - | -
[100] | ALIPR | 2011 | - | http://alipr.com | ✔ | - | ✔ | - | - | -
[101] | SML | 2007 | - | - | - | - | - | - | - | -
[102] | Corel5K | 2007 | 5,000 | https://rdrr.io/cran/mldr.datasets/man/corel5k.html | ✔ | - | ✔ | - | - | -
[103] | ESP | 2004 | - | - | ✔ | - | ✔ | - | - | -
[104] | LabelMe | 2008 | - | http://www.csail.mit.edu/brussell/research/LabelMe/intro.html | ✔ | - | ✔ | - | - | -
[105] | AnnoSearch | 2006 | - | http://wsm.directtaps.net/default.aspx | ✔ | - | ✔ | - | - | -
[106] | Clickture | 2013 | - | http://www.clickture.info | ✔ | ✔ | - | - | - | -
## IV Challenges in Cross-media Retrieval and Proposed DL based Methods
In this survey paper, we provide a novel taxonomy according to the challenges faced by multi-modal deep learning approaches in solving cross-media retrieval. In subsection A, we explain data representation in cross-modal retrieval, which has always been a difficult task in deep learning. Subsection B describes multimodal alignment, another challenging task in cross-media retrieval, namely finding the relationships and correlations between different instances across modalities. Finally, in subsection C we consider translation, which refers to mapping data from one modality to another. To tackle the aforementioned problems, we present an extensive review of the state-of-the-art problems and their corresponding solutions that leverage deep learning in cross-media retrieval applications. This taxonomy will enable researchers to better understand the state-of-the-art problems and solutions, and to identify future research directions.
### IV-A Representations
Figure 4: An illustration of learning shared-space representations for multimedia using a deep learning model.
Data representation in cross-modal retrieval has always been a difficult task in deep learning. Multi-modal representations deal with representing data from multiple domains. Representations from different modalities face several challenges in learning a common semantic space, such as concatenating data from heterogeneous sources (image, text, video), noise, and handling missing data from various modalities. Semantic data representation tries to learn the correlation across different modalities. Initially, to represent multimodal data in a common semantic space, cross-media correlation learning is performed for feature extraction. Finally, the semantic representations allow cross-media retrieval to perform search result ranking and summarization. Semantic data representation is essential for multi-modal problems, and strongly affects the performance of any cross-media retrieval model.
Semantic representations are non-uniform in a low-level feature space. For example, modeling a broad theme, such as “Asia”, is more challenging than modeling a specific theme, such as “sky”, due to the absence of a single, distinctive visual feature that can characterize the concept of “Asia”. Therefore, neglecting such semantic representations would be inappropriate, and good representation is indispensable for deep learning models. Bengio et al. [46] proposed several criteria for good representations, such as sparsity, smoothness, and spatial and temporal coherence. Representing data in a meaningful way is important to enhance the performance of DNN based cross-media retrieval models.
In recent years, many conventional methods have been replaced by advanced DNN based methods. For instance, bag of visual words (BoVW) and the scale-invariant feature transform (SIFT) were once used to represent an image, whereas CNNs [52] are now used to describe images. Similarly, Mel-frequency cepstral coefficients (MFCC) have been superseded by deep neural networks in the audio domain for speech recognition [107]. An overview of such approaches is visualized in Fig. 4, with representative work summarized in Table III.
#### IV-A1 Unsupervised DNNs based Methods
The major advantage of neural-network-based joint representations comes from their ability to be pre-trained on unlabeled data when labeled data is insufficient for supervised learning. It is also common to fine-tune the resulting representation on the particular task at hand, since representations constructed from unsupervised data are generic and not necessarily optimal for a specific task [108, 109]. Unsupervised methods use co-occurrence information instead of label information to learn common representations across data from different modalities. Srivastava et al. [110] learned representations of multimodal data using a Deep Belief Network (DBN). They first model each media type using a separate DBN, and then join the two networks by learning a joint RBM on top.
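The core idea, modality-specific networks joined by a shared top layer, can be illustrated with a small feed-forward sketch. This is a simplified analogue only; the actual model in [110] uses DBN/RBM pre-training, and the layer sizes and toy data below are assumptions:

```python
# Minimal sketch: two modality-specific encoders whose outputs are concatenated
# and passed through a joint layer, a feed-forward analogue of learning a shared
# representation on top of per-modality networks (not actual DBN/RBM training).
import torch
import torch.nn as nn

class JointMultimodalNet(nn.Module):
    def __init__(self, img_dim=4096, txt_dim=300, hidden=512, joint=256):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.txt_enc = nn.Sequential(nn.Linear(txt_dim, hidden), nn.ReLU())
        # joint layer operating on the concatenated modality representations
        self.joint = nn.Sequential(nn.Linear(2 * hidden, joint), nn.ReLU())

    def forward(self, img_feat, txt_feat):
        h = torch.cat([self.img_enc(img_feat), self.txt_enc(txt_feat)], dim=1)
        return self.joint(h)          # shared multimodal representation

img = torch.randn(8, 4096)            # toy image features
txt = torch.randn(8, 300)             # toy text features
shared = JointMultimodalNet()(img, txt)
print(shared.shape)                    # torch.Size([8, 256])
```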
Chen et al. [111] proposed conditional generative adversarial networks (CGAN) to achieve cross-modal retrieval with audio-visual generation (e.g., sound and image). Unlike traditional Generative Adversarial Networks (GANs), their system handles cross-modality generation, such as sound to image (S2I) and image to sound (I2S). A fully connected layer and several deconvolution layers of a deep convolutional neural network are used as the image encoder and decoder, respectively, and the sound branch is handled analogously. Following the same path, Zhang et al. [112] proposed a novel adversarial model, called HashGAN, which consists of three main modules: (1) a feature learning module for multi-modal data, which uses a CNN to extract high-level semantic information, (2) a generative attention module, which is used to extract foreground and background feature representations, and (3) a discriminative hash coding module, which preserves the similarity between modalities.
The Multi-modal Stacked Auto-Encoders (MSAE) model [113] projects features from different modalities into a common latent space for efficient cross-modal retrieval. This model shows several advantages over earlier approaches. First, its non-linear mapping is more expressive. Second, since it is an unsupervised learning method, its dependence on labeled data is minimal. Third, its memory usage is optimized and independent of the training dataset size. The authors of [114], in contrast, proposed an unsupervised deep learning approach that operates in a text subspace for cross-media retrieval, claiming that the proposed text subspace is more efficient and useful than a conventional latent subspace.
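As a rough illustration of this family of models, the sketch below trains two per-modality autoencoders and additionally pulls the latent codes of paired image/text features together. It is a simplified stand-in for the multi-modal autoencoder idea, not the exact architecture or objective of [113]; all dimensions and the toy data are assumptions:

```python
# Hedged sketch: per-modality autoencoders whose latent codes are aligned so that
# paired image/text features land close to each other in a common latent space.
import torch
import torch.nn as nn

class ModalityAE(nn.Module):
    def __init__(self, in_dim, latent=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.enc(x)
        return z, self.dec(z)

img_ae, txt_ae = ModalityAE(4096), ModalityAE(300)
opt = torch.optim.Adam(list(img_ae.parameters()) + list(txt_ae.parameters()), lr=1e-3)

img, txt = torch.randn(32, 4096), torch.randn(32, 300)   # toy paired features
for _ in range(5):
    zi, ri = img_ae(img)
    zt, rt = txt_ae(txt)
    loss = (nn.functional.mse_loss(ri, img)              # image reconstruction
            + nn.functional.mse_loss(rt, txt)            # text reconstruction
            + nn.functional.mse_loss(zi, zt))            # align paired latent codes
    opt.zero_grad(); loss.backward(); opt.step()
```

At retrieval time, either encoder can map a query into the shared latent space, where nearest-neighbor search is performed against codes of the other modality.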
#### IV-A2 Supervised DNNs based Methods
Ngiam et al. [115] were the first to address multimodal deep learning for audio and video retrieval. They trained deep networks on a series of multimodal learning tasks to learn a shared representation between modalities and tested it on a single modality; for example, the system was trained with video data but tested with audio data, and vice versa.
Deep Cross-modal Hashing (DCMH) [116] efficiently reveals the correlations among modalities. It is an end-to-end learning paradigm that integrates two parts: (1) a feature learning part, and (2) a hash-code learning part. Cao et al. [117] proposed the Deep Visual-Semantic Hashing (DVSH) model, which utilizes two different DNN models, a CNN and a Long Short-Term Memory (LSTM) network, to learn similar representations for visual data and natural language.
Wang et al. [118] proposed a regularized deep neural network (RE-DNN), which utilizes deep CNN features and topic features as the visual and textual semantic representations across modalities. This model is able to capture both intra-modal and inter-modal relationships for cross-media retrieval. They further improved their work in [119, 120] by combining common subspace learning and coupled feature selection into a joint feature learning framework. Unlike previous models, this approach considers both the correlation and the feature selection problems at the same time. They learn projection matrices through linear regression to map cross-modality data into a common subspace, and use the $\ell_{2,1}$-norm to select similar/dissimilar features from the various feature spaces. Furthermore, inter-modality and intra-modality similarities are preserved through a multimodal graph regularization.
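To make the projection-learning step concrete, the following sketch fits a projection matrix for one modality by regressing its features onto a shared label space with an $\ell_{2,1}$ row-sparsity penalty. This is a simplified illustration of the general idea, not the exact optimization of [119, 120]; the data, dimensions, and penalty weight are assumptions:

```python
# Illustrative sketch: learn a projection W that maps modality features X into a
# common (label) subspace Y, with an l2,1 penalty on W's rows for feature selection.
import torch

def l21_norm(W):
    return W.norm(dim=1).sum()                  # sum of row-wise l2 norms

X = torch.randn(200, 128)                       # toy features of one modality
Y = torch.nn.functional.one_hot(torch.randint(0, 10, (200,)), 10).float()
W = torch.zeros(128, 10, requires_grad=True)
opt = torch.optim.Adam([W], lr=0.05)

for _ in range(200):
    loss = ((X @ W - Y) ** 2).mean() + 1e-3 * l21_norm(W)
    opt.zero_grad(); loss.backward(); opt.step()

proj = X @ W                                    # features mapped into the common subspace
```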
#### IV-A3 Pairwise-based DNNs Methods
These methods learn a semantic metric distance between data of different modalities by utilizing similar/dissimilar pairs, which is termed heterogeneous metric learning.
Social media networks, e.g., Flickr, Facebook, YouTube, WeChat, and Twitter, have produced immense amounts of data on the web and have therefore attracted great attention. They play a significant role in multimedia-related applications, including cross-media retrieval. Social media networks are very different from traditional media networks and pose unique challenges for data analysis: 1) the data present on social media websites are varied and noisy; 2) the data are heterogeneous and appear in different modalities, e.g., image, text, video, and audio, on the same platform. To predict links between various instances of social media, Yuan et al. [121] proposed a novel latent feature learning approach and designed a Relational Generative Deep Belief Net (RGDBN). In this model, they learn latent features for social media items by utilizing the relationships between media instances in the network. By integrating the Indian buffet process into an improved DBN, they learn the latent features that best embed both the media content and its relationships. The proposed RGDBN is able to analyze the correlation between homogeneous and heterogeneous data for cross-media retrieval.
Following the same path, Wang et al. [122] proposed the Modality-Specific Deep Structure (MSDS), based on modality-specific feature learning. The MSDS model uses two different types of CNN to represent raw data in the latent space, and the semantic information between images and texts in the latent space is learned with a one-vs-more learning scheme. Deep Cross-Modal Hashing (DCMH) [123] extends traditional deep models for cross-modal retrieval, but it can only capture intra-modal information and ignores inter-modal correlations, which makes the retrieved results suboptimal. To overcome this limitation, Pairwise Relationship guided Deep Hashing (PRDH) [124] adopts deep CNN models to learn feature representations and hash codes for each modality in an end-to-end architecture. Moreover, in this model, decorrelation constraints are integrated into a single deep architecture to enhance the classification performance of the individual hash bits.
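The pairwise supervision common to these hashing models can be sketched as a similarity-preserving loss on relaxed (real-valued) codes. The snippet below is only a simplified illustration in the spirit of such pairwise objectives, with code length and toy data as assumptions:

```python
# Hedged sketch of a similarity-preserving pairwise loss for cross-modal hashing:
# relaxed codes from the two modalities should have large inner products for
# similar pairs and small ones for dissimilar pairs.
import torch

def pairwise_hash_loss(f_img, f_txt, S):
    """f_img, f_txt: (n, k) real-valued relaxed codes; S: (n, n) 0/1 similarity."""
    theta = 0.5 * f_img @ f_txt.t()                     # pairwise inner products
    # negative log-likelihood of the pairwise similarity labels
    return (torch.log1p(torch.exp(theta)) - S * theta).mean()

f_img = torch.tanh(torch.randn(16, 32, requires_grad=True))   # toy image codes
f_txt = torch.tanh(torch.randn(16, 32, requires_grad=True))   # toy text codes
labels = torch.randint(0, 5, (16,))
S = (labels[:, None] == labels[None, :]).float()               # same class => similar
loss = pairwise_hash_loss(f_img, f_txt, S)
binary_codes = torch.sign(f_img.detach())                      # final hash codes
```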
#### IV-A4 Rank-based DNNs Methods
These methods utilize ranked lists to learn semantic representations, in which each candidate is ranked based on the similarity distance between the query and the candidate. In this regard, Hu et al. [98] achieved the highest cross-media retrieval efficiency using a dual-CNN architecture: two CNNs model image and text independently, and their outputs are used to rank the similarity between query and candidates. Frome et al. [125] introduced a deep visual-semantic embedding (DeViSE) approach to leverage information learned in the text domain and transfer it to a system trained for visual recognition. Similarly, Weston et al. [126] employed an online learning-to-rank approach, called WSABIE, to train a joint embedding model of labels and images. The authors of [127] developed a Deep Boltzmann Machine (DBM) to represent a joint cross-modal probability distribution over sentences and images. Different from RNN-based approaches, Socher et al. [128] introduced a Dependency Tree Recursive Neural Network (DT-RNN) model, which embeds one modality (e.g., sentences) into a vector space using dependency trees in order to retrieve the other modality (e.g., images). However, these methods reason about the image only at the global level, using a single, fixed-size representation from the top layer of a CNN as a description of the entire image. In contrast, the model presented in [129] addresses the challenges posed by complex scenes and formulates a max-margin objective for a DNN that learns to embed both images and text into a joint semantic space. The ranking function for joint image-text representations is:
$C_{G}(\theta)=\sum_{k}\left[\sum_{l}\max\left(0,S_{kl}-S_{kk}+\Delta\right)+\sum_{l}\max\left(0,S_{lk}-S_{kk}+\Delta\right)\right],$ (2)
where $\Delta$ is a cross-validated margin hyperparameter. The objective stipulates that the score of a true image-sentence pair $S_{kk}$ should be higher than $S_{kl}$ or $S_{lk}$ for any $l\neq k$ by at least a margin of $\Delta$.
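A minimal PyTorch rendering of Eq. (2) is given below; the margin value, embedding sizes, and the way the score matrix is built are illustrative assumptions, not details from [129]:

```python
# Bidirectional max-margin ranking loss over a (batch, batch) image-sentence score
# matrix S with matching pairs on the diagonal, following Eq. (2).
import torch

def ranking_loss(S, delta=0.1):
    n = S.size(0)
    diag = S.diag().view(n, 1)
    cost_s = (S - diag + delta).clamp(min=0)        # rank sentences for each image
    cost_im = (S - diag.t() + delta).clamp(min=0)   # rank images for each sentence
    mask = ~torch.eye(n, dtype=torch.bool)          # exclude the true pairs (l == k)
    return cost_s[mask].sum() + cost_im[mask].sum()

img_emb = torch.nn.functional.normalize(torch.randn(8, 256), dim=1)
txt_emb = torch.nn.functional.normalize(torch.randn(8, 256), dim=1)
loss = ranking_loss(img_emb @ txt_emb.t())          # scores as cosine similarities
```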
Table III: Summary of DNN based methods for the cross-media representations task.
Reference | Modalities | Representation
---|---|---
[111], [110], [112], [113], [114] | Audio and Images; Text and Images | Unsupervised
[115], [116], [118], [119, 120], [117] | Audio and Video; Text and Images; Images and Audio | Supervised
[124, 121, 122] | Audio and Images; Text and Images | Pairwise
[98], [125], [129], [126], [127], [128] | Text and Images; Label and Images; Sentences and Images | Rank-based
### IV-B Alignment
Figure 5: An example of cross-media multi-level alignment for correlation
learning, which not only explores global alignment between original instances
and local alignment between fine-grained patches, but also captures relation
alignment lying in the context.
Multimodal alignment is a challenging task in cross-media retrieval. It consists in finding the relationships and correlations between instances of different modalities, for example aligning the text and images of a particular website; a reader gains a better understanding from both modalities present on a webpage than from just one. Multimodal alignment is significant for cross-media retrieval because it allows us to retrieve content in a different modality from the input query (e.g., image retrieval for a text query, and vice versa), as shown in Fig. 5. We summarize different DNN based methods for the cross-media alignment task in Table IV.
Table IV: Summary of DNN based methods for the cross-media alignment task.
Reference | Modalities | Alignment
---|---|---
[2, 130, 4, 131, 132] | Image and Text; Speech and Text | Unsupervised
[133], [134] | Image and Text; Image and Gesture | Supervised
[135, 136, 137, 138] | Image and Text | Pairwise
#### IV-B1 Unsupervised DNNs based Methods
Unsupervised methods operate without label information linking instances from different modalities. They instead enforce constraints on the alignment, such as the temporal ordering of sequences or the existence of a similarity metric between the modalities.
To align multi-view time series, Kruskal et al. [130] proposed the Dynamic Time Warping (DTW) approach, which measures the similarity between two sequences and finds an optimal match between them by warping time (frame insertion). DTW can be used directly for multimodal alignment by hand-crafting similarity metrics between modalities; for example, Rehman et al. [2] introduced a similarity measure between texts, images, and users' feelings to align images and texts.
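For reference, the classic DTW recursion can be written in a few lines. This is a plain dynamic-programming sketch with a Euclidean frame distance as an assumed local cost; real multimodal alignment would replace it with a hand-crafted or learned cross-modal similarity:

```python
# Compact dynamic time warping: returns the minimal alignment cost between two
# multi-view time series x (n, d) and y (m, d).
import numpy as np

def dtw(x, y):
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])      # local frame distance
            D[i, j] = cost + min(D[i - 1, j],               # insertion
                                 D[i, j - 1],               # deletion
                                 D[i - 1, j - 1])           # match
    return D[n, m]

a = np.random.randn(40, 16)        # e.g., a visual feature sequence
b = np.random.randn(55, 16)        # e.g., a text or audio feature sequence
print(dtw(a, b))
```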
Canonical correlation analysis (CCA) has been used to extend the original DTW formulation, since DTW requires a pre-defined correlation metric between the different modalities [4, 131]. George et al. [139] proposed a Deep Canonical Time Warping (DCTW) approach to automatically learn non-linear representations of multiple time series that are maximally correlated and temporally aligned. Yan et al. [140] proposed an end-to-end approach based on deep CCA, formulating the objective function as:
$\max_{k_{i},k_{j}}\operatorname{tr}\left(k_{i}^{T}\Sigma_{ij}k_{j}\right)\quad\text{s.t.}\quad k_{i}^{T}\Sigma_{ii}k_{i}=k_{j}^{T}\Sigma_{jj}k_{j}=I,$ (3)
where $\Sigma_{ii}$, $\Sigma_{jj}$, and $\Sigma_{ij}$ are the covariance and cross-covariance matrices of the two views. Defining
$T=\Sigma_{ii}^{-1/2}\Sigma_{ij}\Sigma_{jj}^{-1/2},$
the objective function can be rewritten as follows:
$corr\left(i,j\right)=\operatorname{tr}\left(\left(T^{T}T\right)^{1/2}\right).$ (4)
Furthermore, Yan et al. [140] also optimized the memory consumption and computational cost of the DCCA framework using a GPU implementation based on the CULA libraries, which is significantly more efficient than the CPU implementation.
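The correlation objective of Eqs. (3)-(4) can be computed directly from paired network outputs. The NumPy sketch below builds $T$ from empirical covariances and sums its singular values, which equals $\operatorname{tr}((T^{T}T)^{1/2})$; the ridge term and the toy data are assumptions added for numerical stability and illustration:

```python
# Total canonical correlation between two views via T = Sii^{-1/2} Sij Sjj^{-1/2}.
import numpy as np

def total_correlation(Hi, Hj, r=1e-4):
    n = Hi.shape[0]
    Hi, Hj = Hi - Hi.mean(0), Hj - Hj.mean(0)
    Sii = Hi.T @ Hi / (n - 1) + r * np.eye(Hi.shape[1])   # regularized covariances
    Sjj = Hj.T @ Hj / (n - 1) + r * np.eye(Hj.shape[1])
    Sij = Hi.T @ Hj / (n - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    T = inv_sqrt(Sii) @ Sij @ inv_sqrt(Sjj)
    return np.linalg.svd(T, compute_uv=False).sum()       # tr((T^T T)^{1/2})

Hi = np.random.randn(500, 20)                              # view i (e.g., image branch outputs)
Hj = Hi @ np.random.randn(20, 15) + 0.1 * np.random.randn(500, 15)  # correlated view j
print(total_correlation(Hi, Hj))
```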
Chung et al. [132] proposed an unsupervised cross-modal alignment method to learn the embedding spaces of speech and text. In particular, the approach uses Speech2Vec [141] and Word2Vec [142] to learn the respective speech and text embedding spaces, and then aligns the two spaces through adversarial training, followed by a refinement step.
#### IV-B2 Supervised DNNs based Methods
When aligning an image with text, researchers normally focus not only on the visual regions and keywords but also on the correlation between them. Correlation is very important for cross-media learning; however, it has been ignored in most previous works. For this purpose, Qi et al. [133] proposed the Cross-media Relation Attention Network (CRAN) with multi-level alignment, which efficiently handles the relations between different multimodal domains. In another article, Amin et al. [134] proposed a model combining a CNN regressor and a 3-dimensional deep Markov Model (3DMM) to align faces with pose appearance. Dai et al. [143] proposed a unified framework for the cross-media alignment task with a fused objective function that combines CCA-like correlation capability and LDA-like discriminative capability. Further, Jia et al. [144] proposed an efficient CNN model with three main parts: a visual part responsible for visual feature extraction, a text part responsible for text feature extraction, and a fusion part responsible for fusing the image and the sentence to produce a decisive alignment score for the tweet (image and sentence pair).
#### IV-B3 Pairwise-based DNNs Methods
With the recent advances of deep learning in multimedia applications, such as image classification [52] and object detection [145], researchers have adopted deep neural networks to learn a common space for cross-media retrieval, aiming to fully utilize their considerable ability to model highly nonlinear correlations. Most deep learning based methods construct a multi-pathway network, where each pathway handles data of one media type and the pathways are linked at a joint layer to model cross-media correlation. Ngiam et al. propose bimodal autoencoders (Bimodal AE) to extend the restricted Boltzmann machine (RBM) [115]; they model the correlation by mutual reconstruction between different media types. The multimodal deep belief network [110] adopts two kinds of DBNs to model the distributions over data of different media types, and constructs a joint RBM to learn cross-media correlation. Liu et al. propose deep canonical correlation analysis (DCCA) to combine traditional CCA with deep networks [80], maximizing correlation on top of two subnetworks. Feng et al. jointly model cross-media correlation and reconstruction information in the correspondence autoencoder (Corr-AE) [135]. Furthermore, Yuan et al. propose a recursive pyramid network with joint attention (RPJA) [136], a hierarchical network structure with a stacked learning strategy that aims to fully exploit both inter-media and intra-media correlation. Cross-modal correlation learning (CCL) [137] utilizes fine-grained information and adopts a multi-task learning strategy for better performance. Zheng et al. propose a dual-path convolutional network to learn image-text embeddings [138]; they conduct efficient and effective end-to-end learning directly from the data under supervision. Besides, Plummer et al. provide the first large-scale dataset of region-to-phrase correspondences for image description, based on the Flickr30K dataset [146], where image regions depict the corresponding entities for richer image-to-sentence modelling.
However, the above methods mainly focus on pairwise correlation, which captures global alignment between original instances of different media types. Although some of them attempt to explore local alignment between fine-grained patches, they ignore the relation information lying in the context of these fine-grained patches, which can provide rich complementary hints for cross-media correlation learning. Fully exploiting multi-level cross-media alignment is therefore a promising way to learn a more precise correlation between different media types.
### IV-C Translation
Figure 6: A generalized illustration of example-based multimodal translation: given a query, the system retrieves an appropriate translation.
Translation refers to mapping data from one modality to another: given a query in one modality, the task is to retrieve similar information in a different modality. This task is a critical problem in cross-media retrieval [147], computer vision, and multimedia [148]. An overview of multi-modal translation is given in Fig. 6, and representative work is summarized in Table V.
In recent years, many deep learning based methods have been proposed to address multimodal translation challenges. The task is important because retrieval across modalities has to fully understand the visual scene and produce grammatically correct and concise text describing it. Multimodal translation is a very challenging problem for the deep learning community for several reasons. Foremost, it is often hard to choose an appropriate translation for a particular task in which multiple outputs are plausible; in general there is no single correct answer to a translation query, and no common notion of translation by which to judge which answer is right and which is wrong.
Another important reason is the variety of media, linguistic, regional, and cultural differences, which further requires expertise in each individual domain of translation with image, text, and audio channels. We categorize multimodal-translation deep learning methods into two types: supervised and unsupervised.
#### IV-C1 Unsupervised DNNs based Methods
These approaches normally rely on finding the nearest sample in a dictionary through consensus caption selection and use it as the translated output. Devlin et al. [149] proposed a k-nearest-neighbor retrieval approach to produce translation results.
In [150] the authors projected words and image regions into a common space, and used large unsupervised text corpora to learn semantic word representations for cross-media retrieval. Following the same path, Socher et al. [151] proposed two different deep neural network models for translation. First, a DNN was trained on many images in order to obtain rich features [152]; at the same time, a neural language model [153] was trained to extract embedding representations of text. A linear mapping was then trained between the image features and the text embeddings to reduce the semantic gap and link the two modalities. Lample et al. [154] proposed an unsupervised bilingual translation method that can model a bilingual dictionary between two different languages. The key benefit of the proposed method is that it does not use any cross-lingual annotated data; instead it only uses two monolingual corpora as the source and target languages.
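The linear cross-modal mapping idea can be sketched in a few lines: fit a matrix that sends image features near the word embeddings of their labels, then retrieve nearest words for a new image. This is only an illustration in the spirit of the linear image-to-text mapping in [151]; the ridge solution, dimensions, and random data are assumptions:

```python
# Ridge-regression sketch of a linear map W from image features to a text embedding space.
import numpy as np

d_img, d_txt, n = 512, 300, 1000
X = np.random.randn(n, d_img)                 # image features (toy)
Y = np.random.randn(n, d_txt)                 # word embeddings of the image labels (toy)

lam = 1.0                                     # ridge regularization strength
W = np.linalg.solve(X.T @ X + lam * np.eye(d_img), X.T @ Y)

# map a new image into the text embedding space and retrieve nearest words
query = np.random.randn(d_img) @ W
vocab = np.random.randn(5000, d_txt)          # toy word-embedding table
nearest = np.argsort(-vocab @ query)[:5]      # top-5 nearest word indices
```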
#### IV-C2 Supervised DNNs based Methods
These approaches rely on label information to retrieve cross-modality instances. Yagcioglu et al. [155] used a CNN-based image representation to translate a given visual query into a distributional-semantics-based form. Selecting an intermediate semantic space for correlation measurement during retrieval is another way to tackle the translation problem. Socher et al. [128] used an intermediate semantic space to translate common representations from text to image and vice versa. Similarly, Xu et al. [156] proposed an integrated paradigm that models video and text data simultaneously. Their model contains three fundamental parts: a semantic language model, a video model, and a joint embedding model. The language model embeds sentences into a continuous vector space; in the visual model, a DNN captures semantic correlations from videos; finally, in the joint embedding model, the distance between the outputs of the deep video model and the language model is minimized in the common space to leverage the semantic correlation between the different modalities. Cao et al. [117] proposed the Deep Visual-Semantic Hashing (DVSH) model for cross-media retrieval. It generates compact hash codes of visual and text data in a supervised manner and learns the semantic correlation between image and text data. The architecture fuses joint multimodal embedding and cross-media hashing based on a CNN for images, an RNN for text, and a max-margin objective that incorporates both images and text to enable similarity preservation and standard hash codes. Lebret et al. [157] used a CNN to generate image representations, which allows the system to infer phrases that describe an image; moreover, to predict a set of top-ranked phrases, a trigram-constrained language model is proposed to generate syntactically correct sentences from different subsets of phrases. Wei et al. [158] tackled the cross-media retrieval problem through a novel approach called deep semantic matching (deep-SM), in which images and texts are mapped into a joint semantic space using two independent DNN models.
Popular benchmark multimodal techniques commonly learn a single semantic space for image and text features to find a semantic correlation between them. However, using the same projection into the semantic space for two different tasks, such as image-to-text and text-to-image, may lead to performance degradation. Therefore, Wei et al. [159] proposed Modality-Dependent Cross-media Retrieval (MDCR) to handle the projection into the semantic space efficiently: they learn two pairs of projections for cross-media retrieval instead of a single pair of projections into the semantic space.
Table V: Summary of DNNs based methods for the cross-media translation task.
Reference | Modalities | Translation
---|---|---
[150], [151] | Image and Text | Unsupervised
[155, 128, 157, 158], [156], [117], [159] | Image and Text; Video and Text; Image and Audio | Supervised
## V Discussion
In this section, we provide a summarized overview of each technical challenge, namely representation, alignment, and translation, together with a discussion of future directions and research problems faced by multi-modal deep learning approaches applied to cross-media retrieval, as shown in Fig. 7. We also highlight the lessons and “best practices” obtained from our review of the existing work.
### V-A Lessons Learned and Best Practices
Based on the reviewed papers, we derive a set of lessons learned and “best practices” to be considered when implementing and deploying deep learning based cross-media retrieval and addressing the different challenges of representation, alignment, and translation. The key criteria for each challenge are described as follows.
#### V-A1 Representation
This section described four major types of deep learning approaches for multimodal representation: unsupervised, supervised, pairwise, and rank-based deep learning methods. Unsupervised methods use co-occurrence information instead of label information to learn common representations across data of different modalities; they are commonly used for AVSR, affect recognition, and multimodal gesture recognition. The remaining three categories project each modality into a separate space, which is often preferred in applications where only a single modality is available at retrieval time, such as zero-shot learning. Moreover, for the representation task, networks are currently mostly static; in the future they may switch dynamically between modalities [160, 161].
#### V-A2 Translation
Cross-media translation methods are extremely challenging to evaluate. Tasks such as speech recognition have a single suitable translation, whereas tasks such as speech synthesis and image description do not. It is often hard to choose an appropriate translation for a task in which multiple answers are acceptable; however, probabilistic metrics can be added to help in model evaluation.
Normally, human judgment is used to evaluate such tasks. A group of experts is assigned to evaluate individual translations manually on some rating scale: opinion mining [162, 163], realistic visual speech evaluation [164, 165], media description [166, 167, 168, 169], and correlation and grammatical correctness. Alternatively, preference studies present various translations to participants for comparison [170, 171]. However, human judgment is a slow and expensive process, and judgments are also affected by culture, age, and gender preferences. It is hoped that addressing the evaluation challenge will help advance multimodal translation methods.
#### V-A3 Alignment
Cross-media alignment faces several challenges, which are summarized as follows:
1. 1.
Datasets with clearly annotated alignments are scarce.
2. 2.
It is difficult to develop common similarity metrics between different modalities.
3. 3.
Elements in one modality may have no correspondence in the other modality.
The literature shows that most alignment work in cross-media focuses on aligning sequences in an unsupervised manner using graphical models and dynamic programming methods [172, 173, 174]. Most of these methods use hand-crafted similarity measures between modalities or rely on unsupervised algorithms. However, supervised learning techniques have become popular recently due to the availability of labeled training data.
Figure 7: Open problems and challenges for future directions.
### V-B Challenges and Open Problems
#### V-B1 Dataset Construction
The current state-of-the-art cross-media datasets leave significant gaps to fill. First, datasets such as the Wikipedia dataset (http://www.svcl.ucsd.edu/projects/crossmodal/) [93] consist of only two media types, i.e., images and texts, and the Pascal VOC 2012 dataset (http://host.robots.ox.ac.uk/pascal/VOC/) [89] has only 20 different classes, even though cross-media retrieval spans different domains such as images, texts, audio, video, and 3D models. Handling queries from unknown domains is therefore challenging for a system trained on a small dataset [175]. Second, some current cross-media datasets lack context information, which reduces cross-media retrieval efficiency. Third, a major limitation of benchmark cross-media retrieval datasets, for instance XMedia [3], IAPR TC-12 [99], and Wikipedia, is their small size, which makes learning difficult due to the scarcity of data. Finally, some datasets, such as ALIPR [100] and SML [101], lack proper image labelling aligned with the training set. Furthermore, datasets such as ESP [103], LabelMe [104], and AnnoSearch [105] place no restrictions on the annotation vocabulary, which results in weak linkage between modalities and larger semantic gaps. The above discussion suggests that the performance of a cross-media retrieval method depends directly on the nature of the dataset used for evaluation [176]. Therefore, we propose some significant characteristics of a good cross-media retrieval dataset, as follows:
1. 1.
Social media platforms are a good source for dataset collection as they cover varied domains and informal text language.
2. 2.
There should be no constraint on the modality categorization.
3. 3.
Besides images and texts, the dataset should also contain other modalities, such as video, audio, and 3-dimensional (3D) models, which better reflects real-world scenarios.
4. 4.
The size of the dataset should be kept significantly large to avoid overfitting during network training; a large dataset also helps the learning algorithm understand the underlying patterns in the data and produce good results.
5. 5.
The dataset should aid in reducing the semantic gap for efficient retrieval by providing coherent visual content descriptors. Datasets with structured alignment between distinct modalities also help the learning algorithm to be more robust.
#### V-B2 Scalability on large-scale data
With the advancement of technology and the worldwide expansion of social media websites, large amounts of multimedia data are produced on the internet. Fortunately, deep learning models have exhibited very promising and efficient performance in handling huge amounts of data [26] with the help of Graphical Processing Units (GPUs). Therefore, scalable and robust models for distributed platforms are important, and it is worth further investigating how to effectively organize related modalities of data into a common semantic space. We regard compression procedures [177] as one promising future direction for cross-media retrieval: high-dimensional input data can be compressed into compact embeddings to reduce space and computation time during model learning and retrieval.
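As a simple illustration of this compression direction (a sketch of the general idea rather than a method from the cited work; the sizes are toy assumptions), high-dimensional embeddings can be reduced to compact codes before indexing:

```python
# PCA-based compression of retrieval embeddings to compact codes.
import numpy as np

def pca_compress(X, k):
    mu = X.mean(0)
    cov = np.cov(X - mu, rowvar=False)
    w, V = np.linalg.eigh(cov)
    P = V[:, np.argsort(w)[::-1][:k]]            # top-k principal directions
    return (X - mu) @ P, (mu, P)                  # compact codes + query transform

emb = np.random.randn(20000, 512).astype(np.float32)   # toy database embeddings
codes, (mu, P) = pca_compress(emb, 64)                  # 512-D -> 64-D codes
query = (np.random.randn(512).astype(np.float32) - mu) @ P
top = np.argsort(((codes - query) ** 2).sum(1))[:10]    # 10 nearest items
```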
#### V-B3 Deep Neural Network
Work on deep learning for multimodal research is still scarce. Different multimodal hashing techniques have been introduced for cross-media retrieval [178, 179, 180, 181, 182, 113, 183, 184, 185, 186, 187, 188]; however, these methods are based on shallow architectures, which cannot efficiently learn the semantic information shared between different modalities. Recently, deep learning models [125, 189, 190, 191, 192, 193, 83, 194, 195, 196, 117] have shown that semantic information between different modalities can be extracted more effectively than with shallow methods, but they were often restricted to single-modality retrieval. One promising solution to this problem is transfer learning, which significantly improves learning in a specific domain by using knowledge transferred from a different domain. DNN based models are well suited to transfer learning, as they learn both low- and high-level features that capture the differences between various cross-media domains.
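A minimal sketch of this transfer-learning recipe is given below, with a stand-in backbone; in practice one would load, e.g., an ImageNet-pretrained CNN, and all sizes here are illustrative assumptions:

```python
# Freeze a pretrained backbone and train only a small head that projects into the
# common semantic space used for cross-media retrieval.
import torch
import torch.nn as nn

# stand-in for a CNN pretrained on a source domain (assumption: replace with a
# real pretrained network in practice)
pretrained_backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in pretrained_backbone.parameters():
    p.requires_grad = False                    # freeze the transferred features

head = nn.Linear(16, 256)                      # task-specific head -> common semantic space
opt = torch.optim.Adam(head.parameters(), lr=1e-3)   # only the head is fine-tuned

imgs = torch.randn(4, 3, 64, 64)               # toy image batch
emb = head(pretrained_backbone(imgs))          # embeddings used for cross-media retrieval
```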
#### V-B4 Informal annotations
Social network websites such as YouTube, Facebook, Instagram, Twitter, and Flickr have produced a large amount of multimodal data on the internet. Generally, this data is poorly organized and has scarce and noisy annotations; nevertheless, these annotations provide correlations between the different multimodal data. The key question is how to use the limited and noisy annotations of a large amount of multimodal data to learn semantic information across media.
#### V-B5 Practical Cross-media Retrieval Applications
As a hot topic these days, practical applications of cross-media retrieval will soon become feasible due to the continuous improvement in multimodal performance, providing easy and flexible retrieval from one modality to another. Cross-media retrieval is also important in many industries, such as press companies, television, and the entertainment industry. Currently, people are not only looking to search for text; they want to visualize things. For example, when installing an operating system on a machine, it is easier to follow a few steps in a video than to read through an entire article, because a video explains and visualizes things better than text and is easily understandable. Smart cities will need systems in which people can search not only within the same domain but also across modalities, at their fingertips.
#### V-B6 Evaluation Criteria
In the cross-media community, every time a model is proposed it is expected to show improvements over numerous baselines. However, many authors do not take this seriously and freely choose their own baselines and datasets. This creates several issues in evaluating cross-media models. First, it makes reported prediction scores inconsistent, since each author reports their own assessed results; as a consequence, we sometimes encounter conflicting results. For instance, the original score of the NCF model reported in its pioneering work [77] is ranked very low compared to its modified variant [197]. This makes comparing state-of-the-art neural models very difficult. The main question is how to solve this issue. Other domains, such as Natural Language Processing (NLP) and image processing, have standard benchmark datasets, such as ImageNet and MNIST, for model evaluation; we strongly advocate such a standardized benchmark for the cross-media domain. Second, dataset splits, particularly test sets, must be designed properly; without this, it is difficult to measure model performance reliably. Finally, when using deep learning models it is important to take the dataset size into account, as deep learning performance varies with the amount of available data.
#### V-B7 Requirement Gap and Conflict
Through our review, we found some blind spots in DNN-based approaches, such as pairwise-based and rank-based DL methods, for solving alignment and translation in cross-media retrieval. The purpose of pairwise-based DL methods is to learn common representations from similar/dissimilar pairs, in which a semantic metric distance is learned between data of various modalities, whereas rank-based DL methods learn common representations for cross-media retrieval through learning to rank. These approaches are necessary to solve the aforementioned challenges in cross-media retrieval; however, they have received little attention, and only a few articles, mostly with shallow models, have been published [198, 199, 200].
Moreover, the deep learning models used by most researchers are separate models for each modality. We strongly recommend that researchers apply the recent mathematical theory of deep learning to investigate why a single model has not achieved benchmark results in cross-media retrieval, and encourage work on finding a common semantic space for the features extracted simultaneously from different modalities by DL models. Furthermore, the conflict between service quality and retrieval is also noteworthy. For example, DL methods fulfill multiple requirements of feature extraction and distance computation but can be too heavyweight to meet the real-time constraints of cross-media retrieval. How to strike a balance among contradicting requirements deserves future study; the key is to balance feature extraction, similarity measurement, and service quality.
## VI Conclusion
Multimedia information retrieval is a rapidly growing research field that aims to build models that can relate information from different modalities. This paper reviewed cross-media retrieval in terms of DNN-based algorithms and presented them in a common classification built upon three technical challenges faced by multimodal researchers: alignment, translation, and representation. For each challenge, we introduced different sub-classes of DNN-based methods that bridge the media gap, and provided researchers and developers with a better understanding of the underlying problems and potential solutions in current deep-learning-assisted cross-media retrieval research.
## Acknowledgments
This work was partially supported by The China’s National Key R&D Program (No.
2018YFB0803600), National Natural Science Foundation of China (No.61801008),
Beijing Natural Science Foundation National (No. L172049), Scientific Research
Common Program of Beijing Municipal Commission of Education (No.
KM201910005025).
## References
* [1] R. Gasser, L. Rossetto, and H. Schuldt, “Towards an all-purpose content-based multimedia information retrieval system,” _arXiv preprint arXiv:1902.03878_ , 2019.
* [2] S. U. Rehman, S. Tu, Y. Huang, and O. U. Rehman, “A benchmark dataset and learning high-level semantic embeddings of multimedia for cross-media retrieval,” _IEEE Access_ , vol. 6, pp. 67 176–67 188, 2018.
* [3] Y. Peng, X. Huang, and Y. Zhao, “An overview of cross-media retrieval: Concepts, methodologies, benchmarks, and challenges,” _IEEE Transactions on Circuits and Systems for Video Technology_ , vol. 28, no. 9, pp. 2372–2385, 2018.
* [4] X. Dong, J. Sun, P. Duan, L. Meng, Y. Tan, W. Wan, H. Wu, B. Zhang, and H. Zhang, “Semi-supervised modality-dependent cross-media retrieval,” _Multimedia Tools and Applications_ , vol. 77, pp. 3579–3595, 2018.
* [5] L. Xia and H. Zhang, “Cross—media retrieval via cca—bp neural network,” in _IEEE Conference on Industrial Electronics and Applications_ , 2018, pp. 86–89.
* [6] R. Liu, S. Wei, Y. Zhao, Z. Zhu, and J. Wang, “Multi-view cross-media hashing with semantic consistency,” _IEEE MultiMedia_ , 2018.
* [7] K. Shu, S. Wang, J. Tang, Y. Wang, and H. Liu, “Crossfire: Cross media joint friend and item recommendations,” in _Proceedings of ACM International Conference on Web Search and Data Mining_ , 2018, pp. 522–530.
* [8] X. Xu, L. He, H. Lu, L. Gao, and Y. Ji, “Deep adversarial metric learning for cross-modal retrieval,” _World Wide Web_ , pp. 1–16, 2018.
* [9] D. R. Hardoon, S. Szedmak, and J. Shawe-Taylor, “Canonical correlation analysis: An overview with application to learning methods,” _Neural computation_ , vol. 16, no. 12, pp. 2639–2664, 2004.
* [10] R. Rosipal and N. Krämer, “Overview and recent advances in partial least squares,” in _International Statistical and Optimization Perspectives Workshop” Subspace, Latent Structure and Feature Selection”_. Springer, 2005, pp. 34–51.
* [11] A. Sharma and D. W. Jacobs, “Bypassing synthesis: Pls for face recognition with pose, low-resolution and sketch,” _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , pp. 593–600, 2011.
* [12] J. B. Tenenbaum and W. T. Freeman, “Separating style and content with bilinear models,” _Neural Computation_ , vol. 12, no. 6, pp. 1247–1283, 2000.
* [13] A. Sharma, A. Kumar, H. Daume, and D. W. Jacobs, “Generalized multiview analysis: A discriminative latent space,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2012, pp. 2160–2167.
* [14] Y. Gong, Q. Ke, M. Isard, and S. Lazebnik, “A multi-view embedding space for modeling internet images, tags, and their semantics,” _International journal of computer vision_ , vol. 106, no. 2, pp. 210–233, 2014.
* [15] V. Ranjan, N. Rasiwasia, and C. Jawahar, “Multi-label cross-modal retrieval,” in _IEEE International Conference on Computer Vision_ , 2015, pp. 4094–4102.
* [16] N. Rasiwasia, D. Mahajan, V. Mahadevan, and G. Aggarwal, “Cluster canonical correlation analysis,” in _Artificial Intelligence and Statistics_ , 2014, pp. 823–831.
* [17] J. Schmidhuber, “Deep learning in neural networks: An overview,” _Neural networks_ , vol. 61, pp. 85–117, 2015.
* [18] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” _nature_ , vol. 521, p. 436, 2015.
* [19] W. Liu, Z. Wang, X. Liu, N. Zeng, Y. Liu, and F. E. Alsaadi, “A survey of deep neural network architectures and their applications,” _Neurocomputing_ , vol. 234, pp. 11–26, 2017.
* [20] J. Ahmad, H. Farman, and Z. Jan, “Deep learning methods and applications,” in _Deep Learning: Convergence to Big Data Analytics_. Springer, 2019, pp. 31–42.
* [21] L. Deng, “A tutorial survey of architectures, algorithms, and applications for deep learning,” _APSIPA Transactions on Signal and Information Processing_ , vol. 3, 2014.
* [22] S. Pouyanfar, S. Sadiq, Y. Yan, H. Tian, Y. Tao, M. P. Reyes, M.-L. Shyu, S.-C. Chen, and S. Iyengar, “A survey on deep learning: Algorithms, techniques, and applications,” _ACM Computing Surveys (CSUR)_ , vol. 51, no. 5, p. 92, 2018.
* [23] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath, “Deep reinforcement learning: A brief survey,” _IEEE Signal Processing Magazine_ , vol. 34, no. 6, pp. 26–38, 2017.
* [24] A. Hussein, M. M. Gaber, E. Elyan, and C. Jayne, “Imitation learning: A survey of learning methods,” _ACM Computing Surveys (CSUR)_ , vol. 50, no. 2, p. 21, 2017.
* [25] X.-W. Chen and X. Lin, “Big data deep learning: challenges and perspectives,” _IEEE access_ , vol. 2, pp. 514–525, 2014.
* [26] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, and E. Muharemagic, “Deep learning applications and challenges in big data analytics,” _Journal of Big Data_ , vol. 2, p. 1, 2015.
* [27] N. Hordri, A. Samar, S. Yuhaniz, and S. Shamsuddin, “A systematic literature review on features of deep learning in big data analytics,” _International Journal of Advances in Soft Computing & Its Applications_, vol. 9, no. 1, 2017.
* [28] Y. Peng, X. Huang, and Y. Zhao, “An overview of cross-media retrieval: Concepts, methodologies, benchmarks, and challenges,” _IEEE Transactions on circuits and systems for video technology_ , vol. 28, no. 9, pp. 2372–2385, 2017.
* [29] K. Wang, Q. Yin, W. Wang, S. Wu, and L. Wang, “A comprehensive survey on cross-modal retrieval,” _arXiv preprint arXiv:1607.06215_ , 2016.
* [30] J. Liu, C. Xu, and H. Lu, “Cross-media retrieval: state-of-the-art and open issues,” _International Journal of Multimedia Intelligence and Security_ , vol. 1, no. 1, pp. 33–52, 2010.
* [31] S. Tu, Y. Huang, G. Liu _et al._ , “Csfl: A novel unsupervised convolution neural network approach for visual pattern classification,” _AI Communications_ , vol. 30, pp. 311–324, 2017.
* [32] B. Benjdira, Y. Bazi, A. Koubaa, and K. Ouni, “Unsupervised domain adaptation using generative adversarial networks for semantic segmentation of aerial images,” _Remote Sensing_ , vol. 11, p. 1369, 2019.
* [33] S. U. Rehman, S. Tu, Y. Huang, and Z. Yang, “Face recognition: A novel un-supervised convolutional neural network method,” in _IEEE International Conference of Online Analysis and Computing Science_ , 2016, pp. 139–144.
* [34] Z. Yang, Y. Huang, Y. Jiang, Y. Sun, Y.-J. Zhang, and P. Luo, “Clinical assistant diagnosis for electronic medical record based on convolutional neural network,” _Scientific reports_ , vol. 8, no. 1, pp. 1–9, 2018.
* [35] S. u. Rehman, Z. Yang, M. Shahid, N. Wei, Y. Huang, M. Waqas, S. Tu _et al._ , “Water preservation in soan river basin using deep learning techniques,” _arXiv preprint arXiv:1906.10852_ , 2019.
* [36] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell, “Decaf: A deep convolutional activation feature for generic visual recognition,” in _International conference on machine learning_ , 2014, pp. 647–655.
* [37] Y. Peng, X. Huang, and J. Qi, “Cross-media shared representation by hierarchical learning with multiple deep networks.” in _IJCAI_ , 2016, pp. 3846 – 3853.
* [38] Z. Yang, Y.-J. Zhang, S. ur Rehman, and Y. Huang, “Image captioning with object detection and localization,” in _International Conference on Image and Graphics_ , 2017, pp. 109–118.
* [39] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 3156–3164.
* [40] L. Ballan, T. Uricchio, L. Seidenari, and A. Del Bimbo, “A cross-media model for automatic image annotation,” in _Proceedings of International Conference on Multimedia Retrieval_ , 2014, p. 73.
* [41] C. Li, T. Yan, X. Luo, L. Nie, and X. Xu, “Supervised robust discrete multimodal hashing for cross-media retrieval,” _IEEE Transactions on Multimedia_ , pp. 1–1, 2019.
* [42] J. Chi and Y. Peng, “Zero-shot cross-media embedding learning with dual adversarial distribution network,” _IEEE Transactions on Circuits and Systems for Video Technology_ , pp. 1–1, 2019.
* [43] Y. Peng and J. Qi, “Reinforced cross-media correlation learning by context-aware bidirectional translation,” _IEEE Transactions on Circuits and Systems for Video Technology_ , pp. 1–10, 2019.
* [44] X. Huang and Y. Peng, “Tpckt: Two-level progressive cross-media knowledge transfer,” _IEEE Transactions on Multimedia_ , pp. 1–1, 2019.
* [45] T. Yao, X. Kong, H. Fu, and Q. Tian, “Discrete semantic alignment hashing for cross-media retrieval,” _IEEE Transactions on Cybernetics_ , pp. 1–12, 2019.
* [46] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 35, no. 8, pp. 1798–1828, 2013.
* [47] D. Bošnački, N. van Riel, and M. Veta, “Deep learning with convolutional neural networks for histopathology image analysis,” in _Automated Reasoning for Systems Biology and Medicine_. Springer, 2019, pp. 453–469.
* [48] N. Stephenson, E. Shane, J. Chase, J. Rowland, D. Ries, N. Justice, J. Zhang, L. Chan, and R. Cao, “Survey of machine learning techniques in drug discovery,” _Current drug metabolism_ , vol. 20, pp. 185–193, 2019.
* [49] B. T. Bastian, N. Jaspreeth, S. K. Ranjith, and C. Jiji, “Visual inspection and characterization of external corrosion in pipelines using deep neural network,” _NDT & E International_, pp. 102–134, 2019.
* [50] B. Alhnaity, S. Pearson, G. Leontidis, and S. Kollias, “Using deep learning to predict plant growth and yield in greenhouse environments,” _arXiv preprint arXiv:1907.00624_ , 2019.
* [51] D. Cireşan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” _arXiv preprint arXiv:1202.2745_ , 2012.
* [52] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in _Advances in Neural Information Processing Systems_ , 2012, pp. 1097–1105.
* [53] A. H. Marblestone, G. Wayne, and K. P. Kording, “Toward an integration of deep learning and neuroscience,” _Frontiers in computational neuroscience_ , vol. 10, p. 94, 2016.
* [54] R. Dechter, “Learning while searching in constraint-satisfaction problems.” _University of California, Computer Science Department, Cognitive Systems Laboratory_.
* [55] F. J. Gomez and J. Schmidhuber, “Co-evolving recurrent neurons learn deep memory pomdps,” in _Proceedings of the 7th annual conference on Genetic and evolutionary computation_. ACM, 2005, pp. 491–498.
* [56] I. Aizenberg, N. N. Aizenberg, and J. P. Vandewalle, _Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications_. Springer Science & Business Media, 2013.
* [57] A. Ivakhnenko, “Cybernetic predicting devices,” Tech. Rep.
* [58] K. Fukushima, “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position,” _Biological cybernetics_ , vol. 36, no. 4, pp. 193–202, 1980.
* [59] S. Linnainmaa, “The representation of the cumulative rounding error of an algorithm as a taylor expansion of the local rounding errors,” _Master’s Thesis (in Finnish), Univ. Helsinki_ , pp. 6–7, 1970.
* [60] A. Griewank, “Who invented the reverse mode of differentiation,” _Documenta Mathematica, Extra Volume ISMP_ , pp. 389–400, 2012.
* [61] P. Werbos, “Beyond regression:” new tools for prediction and analysis in the behavioral sciences,” _Ph. D. dissertation, Harvard University_ , 1974.
* [62] P. J. Werbos, “Applications of advances in nonlinear sensitivity analysis,” in _System modeling and optimization_. Springer, 1982, pp. 762–770.
* [63] G. E. Hinton, S. Osindero, and Y.-W. Teh, “A fast learning algorithm for deep belief nets,” _Neural computation_ , vol. 18, no. 7, pp. 1527–1554, 2006.
* [64] C. Poultney, S. Chopra, Y. L. Cun _et al._ , “Efficient learning of sparse representations with an energy-based model,” in _Advances in neural information processing systems_ , 2007, pp. 1137–1144.
* [65] Y. Bengio, Y. LeCun _et al._ , “Scaling learning algorithms towards ai,” _Large-scale kernel machines_ , vol. 34, no. 5, pp. 1–41, 2007.
* [66] U. Chavan and D. Kulkarni, “Performance issues of parallel, scalable convolutional neural networks in deep learning,” in _Computing, Communication and Signal Processing_. Springer, 2019, pp. 333–343.
* [67] T. Chung, B. Xu, Y. Liu, C. Ouyang, S. Li, and L. Luo, “Empirical study on character level neural network classifier for chinese text,” _Engineering Applications of Artificial Intelligence_ , vol. 80, pp. 1–7, 2019.
* [68] T. M. Quan, D. G. Hildebrand, and W.-K. Jeong, “Fusionnet: A deep fully residual convolutional neural network for image segmentation in connectomics,” _arXiv preprint arXiv:1612.05360_ , 2016.
* [69] D. Cireşan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” _arXiv preprint arXiv:1202.2745_ , 2012\.
* [70] R. Parloff, “Why deep learning is suddenly changing your life,” _Fortune. New York: Time Inc_ , 2016.
* [71] G. d. S. P. Moreira, D. Jannach, and A. M. da Cunha, “Contextual hybrid session-based news recommendation with recurrent neural networks,” _arXiv preprint arXiv:1904.10367_ , 2019.
* [72] S. Zhang, Y. Tay, L. Yao, B. Wu, and A. Sun, “Deeprec: An open-source toolkit for deep learning based recommendation,” _arXiv preprint arXiv:1905.10536_ , 2019.
* [73] J. You, Y. Wang, A. Pal, P. Eksombatchai, C. Rosenburg, and J. Leskovec, “Hierarchical temporal convolutional networks for dynamic recommender systems,” in _The World Wide Web Conference_ , 2019, pp. 2236–2246.
* [74] L. Zheng, V. Noroozi, and P. S. Yu, “Joint deep modeling of users and items using reviews for recommendation,” in _Proceedings of the Tenth ACM International Conference on Web Search and Data Mining_. ACM, 2017, pp. 425–434.
* [75] Y. Gong and Q. Zhang, “Hashtag recommendation using attention-based convolutional neural network.” in _IJCAI_ , 2016, pp. 2782–2788.
* [76] Y. Zhang, Q. Ai, X. Chen, and W. B. Croft, “Joint representation learning for top-n recommendation with heterogeneous information sources,” in _Proceedings of ACM on Conference on Information and Knowledge Management_. ACM, 2017, pp. 1449–1458.
* [77] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural collaborative filtering,” in _Proceedings of the International Conference on World Wide Web_ , 2017, pp. 173–182.
* [78] Y. Tay, L. Anh Tuan, and S. C. Hui, “Latent relational metric learning via memory-based attention for collaborative ranking,” in _Proceedings of the World Wide Web Conference_. International World Wide Web Conferences Steering Committee, 2018, pp. 729–739.
* [79] S. Zhang, L. Yao, A. Sun, S. Wang, G. Long, and M. Dong, “Neurec: On nonlinear transformation for personalized ranking,” _arXiv preprint arXiv:1805.03002_ , 2018.
* [80] Y. Liu, Y. Li, Y.-H. Yuan, and H. Zhang, “A new robust deep canonical correlation analysis algorithm for small sample problems,” _IEEE Access_ , 2019.
* [81] W. Wang, R. Arora, K. Livescu, and J. Bilmes, “On deep multi-view representation learning: objectives and optimization,” _arXiv preprint arXiv:1602.01024_ , 2016.
* [82] N. E. D. Elmadany, Y. He, and L. Guan, “Multiview learning via deep discriminative canonical correlation analysis,” in _IEEE International Conference on Acoustics, Speech and Signal Processing_ , 2016, pp. 2409–2413.
* [83] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, “Deep compositional question answering with neural module networks. arxiv preprint,” _arXiv preprint arXiv:1511.02799_ , vol. 2, 2015.
* [84] P. Perera and V. M. Patel, “Deep transfer learning for multiple class novelty detection,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 11 544–11 552.
* [85] Y. Ren and X. Cheng, “Review of convolutional neural network optimization and training in image processing,” in _Tenth International Symposium on Precision Engineering Measurements and Instrumentation_ , vol. 11053. International Society for Optics and Photonics, 2019, pp. 31–36.
* [86] G. Goswami, N. Ratha, A. Agarwal, R. Singh, and M. Vatsa, “Unravelling robustness of deep learning based face recognition against adversarial attacks,” in _Thirty-Second AAAI Conference on Artificial Intelligence_ , 2018.
* [87] W. Matthew and Z. K. Marta, “Robustness of 3d deep learning in an adversarial setting,” 2019.
* [88] G. A. Miller, “Wordnet: a lexical database for english,” _Communications of the ACM_ , vol. 38, pp. 39–41, 1995.
* [89] S. J. Hwang and K. Grauman, “Reading between the lines: Object localization using implicit cues from image tags,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 34, no. 6, pp. 1145–1158, 2012.
* [90] S. ur Rehman, Y. Huang, S. Tu, and O. ur Rehman, “Facebook5k: A novel evaluation resource dataset for cross-media search,” in _International Conference on Cloud Computing and Security_ , 2018, pp. 512–524.
* [91] M. Hodosh, P. Young, and J. Hockenmaier, “Framing image description as a ranking task: Data, models and evaluation metrics,” _Journal of Artificial Intelligence Research_ , vol. 47, pp. 853–899, 2013.
* [92] J. Shi and J. Malik, “Normalized cuts and image segmentation,” _IEEE Transactions on pattern analysis and machine intelligence_ , vol. 22, pp. 888–905, 2000.
* [93] N. Rasiwasia, J. Costa Pereira, E. Coviello, G. Doyle, G. R. Lanckriet, R. Levy, and N. Vasconcelos, “A new approach to cross-modal multimedia retrieval,” in _Proceedings of the 18th ACM international conference on Multimedia_ , 2010, pp. 251–260.
* [94] T.-S. Chua, J. Tang, R. Hong, H. Li, Z. Luo, and Y. Zheng, “Nus-wide: a real-world web image database from national university of singapore,” in _Proceedings of the ACM international conference on image and video retrieval_. ACM, 2009, p. 48.
* [95] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,” _Transactions of the Association for Computational Linguistics_ , vol. 2, pp. 67–78, 2014.
* [96] J. Krapac, M. Allan, J. Verbeek, and F. Juried, “Improving web image search results using query-relative classifiers,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2010, pp. 1094–1101.
* [97] S. ur Rehman, S. Tu, Y. Huang _et al._ , “A benchmark dataset and learning high-level semantic embeddings of multimedia for cross-media retrieval,” _IEEE Access_ , 2018.
* [98] Y. Hu, L. Zheng, Y. Yang, and Y. Huang, “Twitter100k: A real-world dataset for weakly supervised cross-media retrieval,” _IEEE Transactions on Multimedia_ , vol. 20, no. 4, pp. 927–938, 2018.
* [99] M. Grubinger, P. Clough, H. Müller, and T. Deselaers, “The iapr tc-12 benchmark: A new evaluation resource for visual information systems,” in _International workshop ontoImage_ , vol. 5, no. 10, 2006.
* [100] J. Li and J. Z. Wang, “Real-time computerized annotation of pictures,” 2011, uS Patent 7,941,009.
* [101] G. Carneiro, A. B. Chan, P. J. Moreno, and N. Vasconcelos, “Supervised learning of semantic classes for image annotation and retrieval,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 29, no. 3, pp. 394–410, 2007.
* [102] V. Lavrenko, R. Manmatha, and J. Jeon, “A model for learning the semantics of pictures,” in _Advances in neural information processing systems_ , 2004, pp. 553–560.
* [103] L. Von Ahn and L. Dabbish, “Labeling images with a computer game,” in _Proceedings of the SIGCHI conference on Human factors in computing systems_ , 2004, pp. 319–326.
* [104] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman, “Labelme: a database and web-based tool for image annotation,” _International journal of computer vision_ , vol. 77, pp. 157–173, 2008.
* [105] X.-J. Wang, L. Zhang, F. Jing, and W.-Y. Ma, “Annosearch: Image auto-annotation by search,” in _Computer Vision and Pattern Recognition, IEEE Computer Society Conference on_ , vol. 2, 2006, pp. 1483–1490.
* [106] X.-S. Hua, L. Yang, J. Wang, J. Wang, M. Ye, K. Wang, Y. Rui, and J. Li, “Clickage: Towards bridging semantic and intent gaps via mining click logs of search engines,” in _Proceedings of ACM international conference on Multimedia_ , 2013, pp. 243–252.
* [107] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath _et al._ , “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” _IEEE Signal Processing Magazine_ , vol. 29, no. 6, pp. 82–97, 2012.
* [108] D. Wang, P. Cui, M. Ou, and W. Zhu, “Deep multimodal hashing with orthogonal regularization.” in _IJCAI_ , vol. 367, 2015, pp. 2291–2297.
* [109] O. U. Rehman, S. U. Rehman, S. Tu, S. Khan, M. Waqas, and S. Yang, “A quantum particle swarm optimization method with fitness selection methodology for electromagnetic inverse problems,” _IEEE Access_ , vol. 6, pp. 63 155–63 163, 2018.
* [110] N. Srivastava and R. Salakhutdinov, “Learning representations for multimodal data with deep belief nets,” in _International conference on machine learning workshop_ , vol. 79, 2012.
* [111] L. Chen, S. Srivastava, Z. Duan, and C. Xu, “Deep cross-modal audio-visual generation,” in _Proceedings of the on Thematic Workshops of ACM Multimedia 2017_ , 2017, pp. 349–357.
* [112] X. Zhang, S. Zhou, J. Feng, H. Lai, B. Li, Y. Pan, J. Yin, and S. Yan, “Hashgan: Attention-aware deep adversarial hashing for cross modal retrieval,” _arXiv preprint arXiv:1711.09347_ , 2017.
* [113] W. Wang, B. C. Ooi, X. Yang, D. Zhang, and Y. Zhuang, “Effective multi-modal retrieval based on stacked auto-encoders,” _Proceedings of the VLDB Endowment_ , vol. 7, no. 8, pp. 649–660, 2014.
* [114] M. Fan, W. Wang, P. Dong, R. Wang, and G. Li, “Unsupervised concept learning in text subspace for cross-media retrieval,” in _Pacific Rim Conference on Multimedia_ , 2017, pp. 505–514.
* [115] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng, “Multimodal deep learning,” in _Proceedings of international conference on machine learning_ , 2011, pp. 689–696.
* [116] Q.-Y. Jiang and W.-J. Li, “Deep cross-modal hashing,” _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2017.
* [117] Y. Cao, M. Long, J. Wang, Q. Yang, and P. S. Yu, “Deep visual-semantic hashing for cross-modal retrieval,” in _International Conference on Knowledge Discovery and Data Mining_ , 2016, pp. 1445–1454.
* [118] C. Wang, H. Yang, and C. Meinel, “Deep semantic mapping for cross-modal retrieval,” in _Tools with Artificial Intelligence (ICTAI), IEEE International Conference on_ , 2015, pp. 234–241.
* [119] K. Wang, R. He, L. Wang, W. Wang, and T. Tan, “Joint feature selection and subspace learning for cross-modal retrieval,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 38, no. 10, pp. 2010–2023, 2016\.
* [120] K. Wang, R. He, W. Wang, L. Wang, and T. Tan, “Learning coupled feature spaces for cross-modal matching,” in _Proceedings of the IEEE International Conference on Computer Vision_ , 2013, pp. 2088–2095.
* [121] Z. Yuan, J. Sang, Y. Liu, and C. Xu, “Latent feature learning in social media network,” in _Proceedings of ACM international conference on Multimedia_ , 2013, pp. 253–262.
* [122] J. Wang, Y. He, C. Kang, S. Xiang, and C. Pan, “Image-text cross-modal retrieval via modality-specific feature learning,” in _Proceedings of ACM on International Conference on Multimedia Retrieval_ , 2015, pp. 347–354.
* [123] Q.-Y. Jiang and W.-J. Li, “Deep cross-modal hashing,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2017, pp. 3232–3240.
* [124] E. Yang, C. Deng, W. Liu, X. Liu, D. Tao, and X. Gao, “Pairwise relationship guided deep hashing for cross-modal retrieval.” in _Thirty-first AAAI conference on artificial intelligence._ , 2017, pp. 1618–1625.
* [125] A. Frome, G. S. Corrado, J. Shlens, S. Bengio, J. Dean, T. Mikolov _et al._ , “Devise: A deep visual-semantic embedding model,” in _Advances in neural information processing systems_ , 2013, pp. 2121–2129.
* [126] J. Weston, S. Bengio, and N. Usunier, “Large scale image annotation: learning to rank with joint word-image embeddings,” _Machine learning_ , vol. 81, no. 1, pp. 21–35, 2010.
* [127] N. Srivastava and R. R. Salakhutdinov, “Multimodal learning with deep boltzmann machines,” in _Advances in neural information processing systems_ , 2012, pp. 2222–2230.
* [128] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng, “Grounded compositional semantics for finding and describing images with sentences,” _Transactions of the Association of Computational Linguistics_ , vol. 2, no. 1, pp. 207–218, 2014.
* [129] A. Karpathy, A. Joulin, and L. F. Fei-Fei, “Deep fragment embeddings for bidirectional image sentence mapping,” in _Advances in neural information processing systems_ , 2014, pp. 1889–1897.
* [130] J. B. Kruskal, “An overview of sequence comparison: Time warps, string edits, and macromolecules,” _SIAM review_ , vol. 25, no. 2, pp. 201–237, 1983.
* [131] J. Yan, H. Zhang, J. Sun, Q. Wang, P. Guo, L. Meng, W. Wan, and X. Dong, “Joint graph regularization based modality-dependent cross-media retrieval,” _Multimedia Tools and Applications_ , vol. 77, pp. 3009–3027, 2018.
* [132] Y.-A. Chung, W.-H. Weng, S. Tong, and J. Glass, “Unsupervised cross-modal alignment of speech and text embedding spaces,” _arXiv preprint arXiv:1805.07467_ , 2018.
* [133] J. Qi, Y. Peng, and Y. Yuan, “Cross-media multi-level alignment with relation attention network,” _arXiv preprint arXiv:1804.09539_ , 2018.
* [134] A. Jourabloo and X. Liu, “Large-pose face alignment via cnn-based dense 3d model fitting,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2016, pp. 4188–4196.
* [135] F. Feng, X. Wang, and R. Li, “Cross-modal retrieval with correspondence autoencoder,” in _Proceedings of the ACM international conference on Multimedia_. ACM, 2014, pp. 7–16.
* [136] Y. Yuan and Y. Peng, “Recursive pyramid network with joint attention for cross-media retrieval,” in _International Conference on Multimedia Modeling_. Springer, 2018, pp. 405–416.
* [137] Y. Peng, J. Qi, X. Huang, and Y. Yuan, “Ccl: Cross-modal correlation learning with multigrained fusion by hierarchical network,” _IEEE Transactions on Multimedia_ , vol. 20, no. 2, pp. 405–420, 2018.
* [138] Z. Zheng, L. Zheng, M. Garrett, Y. Yang, and Y.-D. Shen, “Dual-path convolutional image-text embedding with instance loss,” _arXiv preprint arXiv:1711.05535_ , 2017.
* [139] G. Trigeorgis, M. A. Nicolaou, B. W. Schuller, and S. Zafeiriou, “Deep canonical time warping for simultaneous alignment and representation learning of sequences,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , no. 5, pp. 1128–1138, 2018.
* [140] F. Yan and K. Mikolajczyk, “Deep correlation for matching images and text,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 3441–3450.
* [141] Y.-A. Chung and J. Glass, “Speech2vec: A sequence-to-sequence framework for learning word embeddings from speech,” _arXiv preprint arXiv:1803.08976_ , 2018.
* [142] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in _Advances in neural information processing systems_ , 2013, pp. 3111–3119.
* [143] X.-m. Dai and S.-G. Li, “Cross-modal deep discriminant analysis,” _Neurocomputing_ , vol. 314, pp. 437–444, 2018.
* [144] Y. Jia, L. Bai, P. Wang, J. Guo, and Y. Xie, “Deep convolutional neural network for correlating images and sentences,” in _International Conference on Multimedia Modeling_. Springer, 2018, pp. 154–165.
* [145] S. Ren, K. He, R. Girshick, and J. Sun, “Faster r-cnn: Towards real-time object detection with region proposal networks,” in _Advances in neural information processing systems_ , 2015, pp. 91–99.
* [146] B. A. Plummer, L. Wang, C. M. Cervantes, J. C. Caicedo, J. Hockenmaier, and S. Lazebnik, “Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models,” in _IEEE international conference on computer vision_ , 2015, pp. 2641–2649.
* [147] J. C. Pereira, E. Coviello, G. Doyle, N. Rasiwasia, G. R. Lanckriet, R. Levy, and N. Vasconcelos, “On the role of correlation and abstraction in cross-modal multimedia retrieval,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 36, no. 3, pp. 521–535, 2014.
* [148] R. Bernardi, R. Cakici, D. Elliott, A. Erdem, E. Erdem, N. Ikizler-Cinbis, F. Keller, A. Muscat, and B. Plank, “Automatic description generation from images: A survey of models, datasets, and evaluation measures,” _Journal of Artificial Intelligence Research_ , vol. 55, pp. 409–442, 2016\.
* [149] J. Devlin, H. Cheng, H. Fang, S. Gupta, L. Deng, X. He, G. Zweig, and M. Mitchell, “Language models for image captioning: The quirks and what works,” _arXiv preprint arXiv:1505.01809_ , 2015.
* [150] R. Socher and L. Fei-Fei, “Connecting modalities: Semi-supervised segmentation and annotation of images using unaligned text corpora,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_. IEEE, 2010, pp. 966–973.
* [151] R. Socher, M. Ganjoo, C. D. Manning, and A. Ng, “Zero-shot learning through cross-modal transfer,” in _Advances in neural information processing systems_ , 2013, pp. 935–943.
* [152] A. Coates and A. Y. Ng, “The importance of encoding versus training with sparse coding and vector quantization,” in _Proceedings of international conference on machine learning_ , 2011, pp. 921–928.
* [153] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, “A neural probabilistic language model,” _Journal of machine learning research_ , vol. 3, pp. 1137–1155, 2003.
* [154] G. Lample, A. Conneau, L. Denoyer, H. Jégou _et al._ , “Word translation without parallel data,” 2018.
* [155] S. Yagcioglu, E. Erdem, A. Erdem, and R. Cakici, “A distributed representation based query expansion approach for image captioning,” in _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing_ , vol. 2, 2015, pp. 106–111.
* [156] R. Xu, C. Xiong, W. Chen, and J. J. Corso, “Jointly modeling deep video and compositional text to bridge vision and language in a unified framework.” in _In Twenty-Ninth AAAI Conference on Artificial Intelligence._ , vol. 5, 2015, p. 6.
* [157] R. Lebret, P. O. Pinheiro, and R. Collobert, “Phrase-based image captioning,” _arXiv preprint arXiv:1502.03671_ , 2015.
* [158] Y. Wei, Y. Zhao, C. Lu, S. Wei, L. Liu, Z. Zhu, and S. Yan, “Cross-modal retrieval with cnn visual features: A new baseline,” _IEEE Transactions on Cybernetics_ , vol. 47, no. 2, pp. 449–460, 2017.
* [159] Y. Wei, Y. Zhao, Z. Zhu, S. Wei, Y. Xiao, J. Feng, and S. Yan, “Modality-dependent cross-media retrieval,” _ACM Transactions on Intelligent Systems and Technology (TIST)_ , vol. 7, p. 57, 2016.
* [160] T. Baltrušaitis, C. Ahuja, and L.-P. Morency, “Multimodal machine learning: A survey and taxonomy,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 41, pp. 423–443, 2019.
* [161] V. Pahuja, J. Fu, S. Chandar, and C. J. Pal, “Structure learning for neural module networks,” _arXiv preprint arXiv:1905.11532_ , 2019.
* [162] A. Van Den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “Wavenet: A generative model for raw audio,” _arXiv preprint arXiv:1609.03499_ , 2016.
* [163] H. Zen, N. Braunschweiler, S. Buchholz, M. J. Gales, K. Knill, S. Krstulovic, and J. Latorre, “Statistical parametric speech synthesis based on speaker and language factorization,” _IEEE transactions on audio, speech, and language processing_ , vol. 20, pp. 1713–1724, 2012.
* [164] S. L. Taylor, M. Mahler, B.-J. Theobald, and I. Matthews, “Dynamic units of visual speech,” in _Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation_ , 2012, pp. 275–284.
* [165] R. Anderson, B. Stenger, V. Wan, and R. Cipolla, “Expressive visual text-to-speech using active appearance models,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2013, pp. 3382–3389.
* [166] X. Chen, H. Fang, T.-Y. Lin, R. Vedantam, S. Gupta, P. Dollár, and C. L. Zitnick, “Microsoft coco captions: Data collection and evaluation server,” _arXiv preprint arXiv:1504.00325_ , 2015.
* [167] G. Kulkarni, V. Premraj, V. Ordonez, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg, “Babytalk: Understanding and generating simple image descriptions,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 35, pp. 2891–2903, 2013.
* [168] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III, “Midge: Generating image descriptions from computer vision detections,” in _Proceedings of Conference of the European Chapter of the Association for Computational Linguistics_ , 2012, pp. 747–756.
* [169] S. Venugopalan, H. Xu, J. Donahue, M. Rohrbach, R. Mooney, and K. Saenko, “Translating videos to natural language using deep recurrent neural networks,” _arXiv preprint arXiv:1412.4729_ , 2014.
* [170] S. S. Sarfjoo, C. Demiroğlu, and S. King, “Using eigenvoices and nearest-neighbors in hmm-based cross-lingual speaker adaptation with limited data,” _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 25, pp. 839–851, 2017.
* [171] S. Taylor, T. Kim, Y. Yue, M. Mahler, J. Krahe, A. G. Rodriguez, J. Hodgins, and I. Matthews, “A deep learning approach for generalized speech animation,” _ACM Transactions on Graphics (TOG)_ , vol. 36, p. 93, 2017.
* [172] O. U. Rehman, S. Yang, S. Khan, and S. U. Rehman, “A quantum particle swarm optimizer with enhanced strategy for global optimization of electromagnetic devices,” _IEEE Transactions on Magnetics_ , vol. 55, no. 8, pp. 1–4, 2019\.
* [173] S. ur Rehman, Y. Huang, S. Tu, and B. Ahmad, “Learning a semantic space for modeling images, tags and feelings in cross-media search,” in _Pacific-Asia Conference on Knowledge Discovery and Data Mining_. Springer, 2019, pp. 65–76.
* [174] O. U. Rehman, S. Tu, S. U. Rehman, S. Khan, and S. Yang, “Design optimization of electromagnetic devices using an improved quantum inspired particle swarm optimizer.” _Applied Computational Electromagnetics Society Journal_ , vol. 33, no. 9, 2018.
* [175] S. ur Rehman, S. Tu, M. Waqas, Y. Huang, O. ur Rehman, B. Ahmad, and S. Ahmad, “Unsupervised pre-trained filter learning approach for efficient convolution neural network,” _Neurocomputing_ , vol. 365, pp. 171–190, 2019.
* [176] S. ur Rehman, S. Tu, M. Waqas, O. Rehman, B. Ahmad, Z. Halim, W. Zhao, and Z. Yang, “Optimization based training of evolutionary convolution neural network for visual classification applications,” _IET Computer Vision_ , 2020\.
* [177] J. Serrà and A. Karatzoglou, “Getting deep recommenders fit: Bloom embeddings for sparse binary input/output networks,” in _Proceedings of the Eleventh ACM Conference on Recommender Systems_. ACM, 2017, pp. 279–287.
* [178] M. M. Bronstein, A. M. Bronstein, F. Michel, and N. Paragios, “Data fusion through cross-modality metric learning using similarity-sensitive hashing,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2010, pp. 3594–3601.
* [179] S. Kumar and R. Udupa, “Learning hash functions for cross-view similarity search,” in _Twenty-Second International Joint Conference on Artificial Intelligence._ , vol. 22, no. 1, 2011.
* [180] Y. Zhen and D.-Y. Yeung, “Co-regularized hashing for multimodal data,” in _Advances in neural information processing systems_ , 2012, pp. 1376–1384.
* [181] ——, “A probabilistic model for multimodal hash function learning,” in _Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining_ , 2012, pp. 940–948.
* [182] J. Song, Y. Yang, Y. Yang, Z. Huang, and H. T. Shen, “Inter-media hashing for large-scale retrieval from heterogeneous data sources,” in _Proceedings of the ACM SIGMOD International Conference on Management of Data_ , 2013, pp. 785–796.
* [183] Z. Yu, F. Wu, Y. Yang, Q. Tian, J. Luo, and Y. Zhuang, “Discriminative coupled dictionary hashing for fast cross-media retrieval,” in _Proceedings of the international ACM SIGIR conference on Research and development in information retrieval_ , 2014, pp. 395–404.
* [184] X. Liu, J. He, C. Deng, and B. Lang, “Collaborative hashing,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2014, pp. 2139–2146.
* [185] D. Zhang and W.-J. Li, “Large-scale supervised multimodal hashing with semantic correlation maximization.” in _In Twenty-Eighth AAAI Conference on Artificial Intelligence.v_ , vol. 1, no. 2, 2014, p. 7.
* [186] B. Wu, Q. Yang, W.-S. Zheng, Y. Wang, and J. Wang, “Quantized correlation hashing for fast cross-modal search.” in _In Twenty-Fourth International Joint Conference on Artificial Intelligence_ , 2015, pp. 3946–3952.
* [187] Z. Lin, G. Ding, M. Hu, and J. Wang, “Semantics-preserving hashing for cross-view retrieval,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 3864–3872.
* [188] M. Long, Y. Cao, J. Wang, and P. S. Yu, “Composite correlation quantization for efficient multimodal retrieval,” in _Proceedings of International ACM SIGIR conference on Research and Development in Information Retrieval_ , 2016, pp. 579–588.
* [189] R. Kiros, R. Salakhutdinov, and R. Zemel, “Multimodal neural language models,” in _International Conference on Machine Learning_ , 2014, pp. 595–603.
* [190] M. Long, Y. Cao, J. Wang, and M. I. Jordan, “Learning transferable features with deep adaptation networks,” _arXiv preprint arXiv:1502.02791_ , 2015\.
* [191] A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 3128–3137.
* [192] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 2625–2634.
* [193] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu, “Are you talking to a machine? dataset and methods for multilingual image question,” in _Advances in neural information processing systems_ , 2015, pp. 2296–2304.
* [194] R. Xia, Y. Pan, H. Lai, C. Liu, and S. Yan, “Supervised hashing for image retrieval via image representation learning.” in _Twenty-eighth AAAI conference on artificial intelligence_ , vol. 1, 2014, p. 2.
* [195] H. Lai, Y. Pan, Y. Liu, and S. Yan, “Simultaneous feature learning and hash coding with deep neural networks,” in _Computer Vision and Pattern Recognition (CVPR), IEEE Conference on_ , 2015, pp. 3270–3278.
* [196] H. Zhu, M. Long, J. Wang, and Y. Cao, “Deep hashing network for efficient similarity retrieval.” in _Thirtieth AAAI Conference on Artificial Intelligence._ , 2016, pp. 2415–2421.
* [197] L. Zheng, C.-T. Lu, L. He, S. Xie, V. Noroozi, H. Huang, and P. S. Yu, “Mars: Memory attention-aware recommender system,” _arXiv preprint arXiv:1805.07037_ , 2018.
* [198] B. Bai, J. Weston, D. Grangier, R. Collobert, K. Sadamasa, Y. Qi, O. Chapelle, and K. Weinberger, “Learning to rank with (a lot of) word features,” _Information retrieval_ , vol. 13, pp. 291–314, 2018.
* [199] D. Grangier and S. Bengio, “A discriminative kernel-based model to rank images from text queries,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 10, no. 2018-010, 2018.
* [200] B. McFee and G. R. Lanckriet, “Metric learning to rank,” in _Proceedings of International Conference on Machine Learning_ , 2018, pp. 775–782.
|
# Analogical Concept Memory for
Architectures Implementing the Common Model of Cognition
Shiwali Mohan and Matthew Klenk
###### Abstract
Architectures that implement the Common Model of Cognition - Soar, ACT-R, and
Sigma - have a prominent place in research on cognitive modeling as well as on
designing complex intelligent agents. In this paper, we explore how
computational models of analogical processing can be brought into these
architectures to enable concept acquisition from examples obtained
interactively. We propose a new analogical concept memory for Soar that
augments its current system of declarative long-term memories. We frame the
problem of concept learning as embedded within the larger context of
interactive task learning (ITL) and embodied language processing (ELP). We
demonstrate that the analogical learning methods implemented in the proposed
memory can quickly learn a diverse types of novel concepts that are useful not
only in recognition of a concept in the environment but also in action
selection. Our approach has been instantiated in an implemented cognitive
system Aileen and evaluated on a simulated robotic domain.
###### keywords:
cognitive architectures , common model of cognition , intelligent agents ,
concept representation and acquisition , interactive learning , analogical
reasoning and generalization , interactive task learning
††journal: Cognitive Systems Research
## 1 Introduction
The recent proposal for the common model of cognition (CMC; Laird et al. 2017)
identifies the central themes in the past $30$ years of research in three
cognitive architectures - Soar (Laird, 2012), ACT-R (Anderson, 2009), and
Sigma (Rosenbloom et al., 2016). These architectures have been prominent not
only in cognitive modeling but also in designing complex intelligent agents.
CMC architectures aim to implement a set of domain-general computational
processes which operate over domain-specific knowledge to produce effective
task behavior. Early research in CMC architectures studied procedural
knowledge - the knowledge of _how_ to perform tasks, often expressed as _if-
else_ rules. It explored the computational underpinnings of a general purpose
decision making process that can apply hand-engineered procedural knowledge to
perform a wide-range of tasks. Later research studied various ways in which
procedural knowledge can be learned and optimized.
While CMC architectures have been applied widely, Hinrichs and Forbus (2017)
note that reasoning in them focuses exclusively on problem solving, decision
making, and behavior. Further, they argue that a distinctive and arguably
signature feature of human intelligence is being able to build complex
conceptual structures of the world. In the CMC terminology, the knowledge of
concepts is _declarative_ knowledge - the knowledge of _what_. An example of
declarative knowledge is the final goal state of the tower-of-hanoi puzzle. In
contrast, procedural knowledge in tower-of-hanoi is the set of rules that
guide action selection in service of achieving the goal state. CMC
architectures agree that conceptual structures are useful for intelligent
behavior. To solve tower-of-hanoi, understanding the goal state is critical.
However, there is limited understanding of how declarative knowledge about the
world is acquired in CMC architectures. In this paper, we study the questions
of declarative concept representation, acquisition, and usage in task
performance in a prominent CMC architecture - Soar. As it is similar to ACT-R
and Sigma in the organization of computation and information, our findings can
be generalized to those architectures as well.
### 1.1 Declarative long-term memories in Soar
In the past two decades, algorithmic research in Soar has augmented the
architecture with declarative long-term memories (dLTMs). Soar has two -
semantic (Derbinsky et al., 2010) and episodic (Derbinsky and Laird, 2009) -
that serve distinct cognitive functions following the hypotheses about
organization of memory in humans (Tulving and Craik, 2005). Semantic memory
enables enriching what is currently observed in the world with what is known
generally about it. For example, if a dog is observed in the environment, for
certain types of tasks it may be useful to elaborate that it is a type of a
mammal. Episodic memory gives an agent a personal history which can later be
recalled to establish reference to shared experience with a collaborator, to
aid in decision-making by predicting the outcome of possible courses of
action, to aid in reasoning by creating an internal model of the environment,
and by keeping track of progress on long-term goals. The history is also
useful in deliberate reflection about past events to improve behavior through
other types of learning such as reinforcement learning or explanation-based
learning. Using dLTMs in Soar agents has enabled reasoning complexity that
wasn’t possible earlier (Xu and Laird, 2010; Mohan and Laird, 2014; Kirk and
Laird, 2014; Mininger and Laird, 2018).
However, a crucial question remains unanswered - how is general world
knowledge in semantic memory acquired? We posit that this knowledge is
acquired in two distinctive ways. Kirk and Laird (2014, 2019) explore the view
that semantic knowledge is acquired through interactive instruction when
natural language describes relevant declarative knowledge. An example concept
is the goal of tower-of-hanoi: _a small block is on a medium block and a large
block is below the medium block._ Here, the trainer provides the definition of
the concept declaratively which is later operationalized so that it can be
applied to recognize the existence of a tower and in applying actions while
solving tower-of-hanoi. In this paper, we explore an alternative view: that
this knowledge is acquired through examples demonstrated as a part of
instruction. We augment Soar dLTMs with a new _concept memory_ that aims at
acquiring general knowledge about the world by collecting and analyzing
similar experiences, functionally bridging episodic and semantic memories.
### 1.2 Algorithms for analogical processing
To design the concept memory, we leverage the computational processes that
underlie analogical reasoning and generalization in the Companions cognitive
architecture - the Structure Mapping Engine (SME; Forbus et al. 2017) and the
Sequential Analogical Generalization Engine (SAGE; McLure et al. 2015).
Analogical matching, retrieval, and generalization are the foundation of the
Companions cognitive architecture. In _Why we are so smart?_, Gentner claims
that what makes human cognition superior to other animals is “First,
relational concepts are critical to higher-order cognition, but relational
concepts are both non-obvious in initial learning and elusive in memory
retrieval. Second, analogy is the mechanism by which relational knowledge is
revealed. Third, language serves both to invite learning relational concepts
and to provide cognitive stability once they are learned” (Gentner, 2003).
Gentner’s observations provide a compelling case for exploring analogical
processing as a basis for concept learning. Our approach builds on the
analogical concept learning work done in Companions (Hinrichs and Forbus,
2017). Previous analogical learning work includes spatial prepositions
(Lockwood, 2009), spatial concepts (McLure et al., 2015), physical reasoning
problems (Klenk et al., 2011), and activity recognition (Chen et al., 2019).
This diversity of reasoning tasks motivates our use of analogical processing
to develop an architectural concept memory. Adding to this line of research,
our work shows that a variety of conceptual knowledge can be learned within a
single system. Furthermore, such a system can be applied not only to learn how
to recognize concepts but also to act on them in the environment within an
interactive task learning session.
### 1.3 Concept formation and its interaction with complex cognitive
phenomena
Our design exploration of an architectural concept memory is motivated by the
interactive task learning problem (ITL; Gluck and Laird 2019) in embodied
agents. ITL agents rely on natural interaction modalities such as embodied
dialog to learn new tasks. Conceptual knowledge, language, and task
performance are inextricably tied - language is a medium through which
conceptual knowledge about the world is communicated and learned. Task
performance is aided by the conceptual knowledge about the world.
Consequently, embodied language processing (ELP) for ITL provides a set of
functional requirements that an architectural concept memory must address.
Embedding concept learning within the ITL and ELP contexts is a significant
step forward from previous explorations in concept formation. Prior approaches
have studied concept formation independently of how they will be used in a
complex cognitive system, often focusing on the problems of recognizing the
existence of a concept in input data and organizing concepts into a
similarity-based hierarchy. We study concept formation within the context of
higher-order cognitive phenomena. We posit that concepts are learned through
interactions with an interactive trainer who structures a learner's
experience. The input from the trainer helps group concrete experiences
together, and a generalization process distills common elements to form a
concept definition.
### 1.4 Theoretical Commitments, Claims, and Contributions
Our work is implemented in Soar and consequently, brings to bear the
theoretical postulates the architecture implements. More specifically, we
build upon the following theoretical commitments:
1. 1.
Diverse representation of knowledge: In the past decade, the CMC architectures
have adopted the view that architectures for general intelligence implement
diverse methods for knowledge representation and reasoning. This view has been
very productive in not only studying an increasing variety of problems but
also in integrating advances in AI algorithmic research in the CMC framework.
We contribute to this view by exploring how algorithms for analogical
processing can be integrated into a CMC architecture.
2. 2.
Deliberate access of conceptual knowledge: Following CMC architectures, we
assume that declarative, conceptual knowledge is accessed through deliberation
over when and how to use that knowledge. The architectures incorporate well-
defined interfaces, i.e., _buffers_ in working memory that contain information
as well as an operation the declarative memory must execute on the
information. Upon reasoning, information may be stored, accessed, or projected
(described in further detail in Section 4).
3. 3.
Impasse-driven processing and learning: Our approach leverages _impasse_ in
Soar, a meta-cognitive signal that can variably indicate uncertainty or
failure in reasoning. Our approach uses impasses (and the corresponding state
stack) to identify and pursue opportunities to learn.
4. 4.
A benevolent interactive trainer: We assume the existence of an intelligent
trainer that adopts a collaborative goal with the learning system: that it
learn correct definitions of concepts. Upon being prompted, the trainer
provides correct information on which the learner can base its concept learning.
Based on these theoretical commitments, our paper contributes an integrative
account of a complex cognitive phenomenon - interactive concept learning.
Specifically, this paper:
1. 1.
defines the concept formation problem within the larger cognitive phenomena of
ELP and ITL;
2. 2.
identifies a set of desiderata for an architectural concept memory;
3. 3.
implements a concept memory for Soar agents using the models of analogical
processing;
4. 4.
introduces a novel process - curriculum of guided participation - for
interactive concept learning;
5. 5.
introduces a novel framework for evaluating interactive concept formation.
Our implementation is a functional (and not an architectural) integration of
analogical processing in Soar’s declarative long-term memory systems. It
characterizes how an analogical concept memory can be interfaced with the
current mechanisms. Through experiments and system demonstration, we show that
an analogical concept memory leads to competent behavior in ITL. It supports
learning of diverse types of concepts useful in ITL. Learned concept
representations support recognition during ELP as well as action based on
those concepts during task performance. The concepts are learned from a few
examples provided interactively.
## 2 Preliminaries - The AILEEN Cognitive System
Aileen is a cognitive system that learns new concepts through interactive
experiences (linguistic and situational) with a trainer in a simulated world.
A system diagram is shown in Figure 1. Aileen lives in a simulated robotic
world built in Webots111https://www.cyberbotics.com/. The world contains a
table-top on which various simple objects can be placed. A simulated camera
above the table captures top-down visual information. Aileen is engaged in a
continuous _perceive-decide-act_ loop with the world. A trainer can set up a
scene in the simulated world by placing simple objects on the scene and
providing instructions to the agent. Aileen is designed in Soar which has been
integrated with a deep learning-based vision module and an analogical concept
memory. It is related to Rosie, a cognitive system that has demonstrated
interactive, flexible learning on a variety of tasks (Mohan et al., 2012,
2014; Mohan and Laird, 2014; Kirk and Laird, 2014; Mininger and Laird, 2018),
and implements a similar organization of knowledge.
Figure 1: System diagram for Advanced cognItive LEarning for Embodied
compreheNsion (Aileen)
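The perceive-decide-act loop can be pictured as a short control cycle. The following sketch is illustrative only; the `world` and `agent` methods it calls (camera_image, visual_module, spatial_module, update_working_memory, decide, apply) are hypothetical names standing in for the actual Webots and Soar interfaces.

```python
def run(agent, world):
    """Continuous perceive-decide-act loop; all method names are illustrative stand-ins."""
    while True:
        image = world.camera_image()                      # top-down view from the simulated camera
        detections = agent.visual_module(image)           # bounding boxes + shape/color percepts
        relations = agent.spatial_module(detections)      # qualitative spatial relations (CDC, RCC8)
        agent.update_working_memory(detections, relations)
        action = agent.decide()                           # Soar decision cycle over procedural knowledge
        if action is not None:
            world.apply(action)                           # point, pick-up, or place
```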
#### Visual Module
The visual module processes the image taken from the simulated camera. It
produces output in two channels: object detections as bounding boxes whose
centroids are localized on the table-top, and two perceptual symbols, or
_percepts_, one each for the object's shape and color. The module is
built using a deep learning framework - You Only Look Once (YOLO; Redmon et
al. (2016)). YOLO is pre-trained with supervision from the ground truth in the
simulator ($12,000$ images). It detects four shapes (error rate $<0.1\%$) -
_box_ (percept - CVBox), _cone_ (CVCone), _ball_ (CVSphere), and _cylinder_
(CVCylinder).
For colors, each detected region containing an object is cropped from the
image, and $K$-means clustering is applied to all color pixel values within the
crop. Next, two weighted heuristics are applied that select the cluster that
likely comprises the detected shape among any background pixels and/or
neighboring objects. The first heuristic selects the cluster with the maximum
number of pixels. The second heuristic selects the cluster with the centroid
that is closest to the image center of the cropped region. The relative
weighted importance of each of these heuristics is then tuned using a simple
grid search over $w_{1}$ and $w_{2}$: $Score=w_{1}R_{s}+w_{2}(1-C_{s}),s\in
D$, where $w_{1}+w_{2}=1$, $D$ is the set of clusters, $R_{s}$ denotes the ratio
between the number of pixels in each cluster and the number of pixels in
the image crop, and $C_{s}$ is the Euclidean distance between the centroid of
the cluster and the image center normalized by the cropped image width. The
average RGB value for all pixels included in the cluster with the highest
score is calculated and compared with the preset list of color values. The
color label associated with the color value that has the smallest Euclidean
distance to the average RGB value is selected. The module can recognize $5$
colors (error rate $<0.1\%$): CVGreen, CVBlue, CVRed, CVYellow, and CVPurple.
Note that the percepts are named so as to be readable for system designers - the
agent does not rely on the percept symbol strings for any reasoning.
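A minimal sketch of the color-scoring heuristic described above. The preset percept palette, the number of clusters, and the weights below are illustrative assumptions; the actual RGB anchors and tuned $w_{1},w_{2}$ values used by Aileen are not given in the text.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical preset palette; the real RGB anchors are not specified in the paper.
PALETTE = {"CVRed": (200, 30, 30), "CVGreen": (30, 160, 60), "CVBlue": (40, 60, 200),
           "CVYellow": (220, 210, 40), "CVPurple": (140, 50, 170)}

def color_percept(crop_rgb, k=3, w1=0.5, w2=0.5):
    """crop_rgb: HxWx3 array cropped around one detected object; k, w1, w2 are illustrative."""
    h, w, _ = crop_rgb.shape
    pixels = crop_rgb.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)

    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    center = np.array([w / 2.0, h / 2.0])

    best_score, best_cluster = -np.inf, None
    for c in range(k):
        mask = labels == c
        r_s = mask.mean()                                                # ratio of pixels in the cluster
        c_s = np.linalg.norm(coords[mask].mean(axis=0) - center) / w     # centroid distance, width-normalized
        score = w1 * r_s + w2 * (1.0 - c_s)                              # Score = w1*R_s + w2*(1 - C_s)
        if score > best_score:
            best_score, best_cluster = score, c

    mean_rgb = pixels[labels == best_cluster].mean(axis=0)
    # Nearest preset color value (Euclidean distance in RGB) gives the percept symbol.
    return min(PALETTE, key=lambda name: np.linalg.norm(mean_rgb - np.array(PALETTE[name])))
```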
#### Spatial Processing Module
The spatial processing module uses QSRLib (Gatsoulis et al., 2016) to process
the bounding boxes and centroids generated by the visual module to generate a
qualitative description of the spatial configuration of objects. For every
pair of objects, the module extracts qualitative descriptions using two
spatial calculi (qsrs): cardinal direction (CDC) and region connection (RCC8).
Additionally, the spatial processing module can also convert a set of calculi
into regions and sample points from them. This enables Aileen to identify
locations in continuous space that satisfy qualitative spatial constraints
when planning actions.
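Aileen uses QSRLib for this step; the sketch below is not QSRLib's API but only illustrates the kind of qualitative facts produced - a cardinal-direction symbol and a coarse RCC-style connectivity symbol per ordered object pair - computed directly from centroids and axis-aligned boxes.

```python
def cardinal_direction(a_centroid, b_centroid, eps=1.0):
    """Coarse cardinal direction of object a relative to object b (image coordinates, y grows downward)."""
    ax, ay = a_centroid
    bx, by = b_centroid
    ew = "e" if ax - bx > eps else ("w" if bx - ax > eps else "")
    ns = "s" if ay - by > eps else ("n" if by - ay > eps else "")
    return (ns + ew) or "eq"

def rcc_relation(a_box, b_box):
    """Very coarse RCC8-style relation between axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a_box
    bx1, by1, bx2, by2 = b_box
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return "dc"      # disconnected
    if ax1 >= bx1 and ay1 >= by1 and ax2 <= bx2 and ay2 <= by2:
        return "ntpp"    # a properly inside b
    return "po"          # partial overlap (remaining RCC8 distinctions omitted)

# For each ordered pair of objects Aileen asserts facts such as (e o1 o2) and (dc o1 o2),
# mirroring the relations column of Table 1.
```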
#### World representation, Intrinsic & Extrinsic Behaviors
The outputs of the visual module and the spatial module are collected into an
object-oriented relational representation of the current state of the world.
Each detected object is asserted and represented with attributes that
indicate its color and shape visual properties, and is assigned a unique
identifier. Qualitative relationships extracted by the spatial processing
module are represented as binary relations between relevant objects. The set
of objects that exist on the scene and qualitative relationships between them
capture the current state of the world and are written to Soar’s working
memory graph.
Figure 2: (left) Simplified, partial working memory graph for the scene in
Figure 1. Green colored symbols are generated in the visual module and yellow
colored symbols are generated in the spatial module. Black colored symbols are
internal to Soar and are used for driving behavior. (right) Concepts in
semantic memory. Map nodes (M1, M2, M3, M4, M5, M6) connect words with their
conceptual definitions in semantic memory.
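A minimal sketch of how the per-frame relational state might be assembled from the outputs of the two modules before it is written to working memory. The detection dictionary layout and the `spatial` helper module (with the functions sketched above) are illustrative assumptions, not Aileen's actual data structures.

```python
from itertools import permutations

def build_world_state(detections, spatial):
    """detections: list of dicts such as {"id": "o1", "shape": "CVCone", "color": "CVBlue",
       "box": (x1, y1, x2, y2), "centroid": (x, y)}; spatial: module exposing the helpers
       cardinal_direction() and rcc_relation() sketched above."""
    facts = []
    for obj in detections:
        facts.append(("isa", obj["id"], obj["color"]))
        facts.append(("isa", obj["id"], obj["shape"]))
    for a, b in permutations(detections, 2):
        facts.append((spatial.cardinal_direction(a["centroid"], b["centroid"]), a["id"], b["id"]))
        facts.append((spatial.rcc_relation(a["box"], b["box"]), a["id"], b["id"]))
    return facts   # written into Soar's working memory graph every cycle
```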
Interactive and learning behaviors in Aileen are driven by its procedural
knowledge encoded as rules in Soar, which, similarly to Rosie (Mohan et al., 2012),
consists of knowledge for:
1. 1.
Interaction: As in Rosie (Mohan et al., 2012), Aileen implements collaborative
discourse theory (Rich et al., 2001) to manage its interactive behavior. It
captures the state of task-oriented interaction and is integrated with
comprehension, task execution, and learning.
2. 2.
Comprehension: Aileen implements the Indexical Model of comprehension (Mohan
et al., 2014) to process language by grounding it in the world and domain
knowledge. This model formulates language understanding as a search process.
It interprets linguistic symbols and their associated semantics as cues to
search the current environment as well as domain knowledge. Formulating
language comprehension in this fashion integrates naturally with interaction
and learning: ambiguities and failures in the search process drive both.
3. 3.
External task execution: Aileen has been programmed with primitive actions
that enable it to manipulate its environment: point(o), pick-up(o), and
place([x, y, z]). Following Mohan and Laird (2014), each primitive action has
a proposal rule that encodes its pre-conditions, a model that captures state
changes expected to occur when the action is applied, and an application rule.
Additionally, given a task goal, Aileen can use iterative-deepening search to
plan a sequence of primitive actions to achieve the goal and execute the task
in the world.
4. 4.
Learning: Learning in Aileen is the focus of this paper and differs significantly
from learning in Rosie. Rosie uses an interactive variation of explanation-based
learning (Mohan and Laird, 2014) to learn representation and execution of
tasks. Aileen uses analogical reasoning and generalization to learn diverse
concepts including those relevant to task performance (Sections 3 and 4). A
crucial distinction is that EBL requires a complete domain theory to correctly
generalize observed examples, while analogical reasoning and generalization can
operate with a partial domain theory by leveraging statistical information in
observed examples.
The ongoing ITL research in Soar demonstrates the strength of this
organization of knowledge in hybrid cognitive systems. Our conjecture is that
an ideal concept memory in an architecture must support complex, integrated,
intelligent behavior such as ELP and ITL.
#### Using Conceptual Knowledge in Aileen
Consider the world in Figure 1 and the corresponding working memory graph in
Figure 2. Semantic memory stores concept definitions corresponding to various
words used to interact with Aileen. _Maps_ (M1, M2, M3, M4, M5, M6) - in
semantic memory (shown in Figure 2) - associate words (_cylinder_) to their
conceptual definition (percept CVCylinder). Maps provide bi-directional access
to the association between words and concept definitions. The semantic memory
can be queried with a word to retrieve its concept definition. The semantic
memory can also we queried with a concept definition to access the word that
describes it.
Phrases (1) _blue cone left of red cylinder_ and (2) _move blue cone right of
red cylinder_ can be understood via indexical comprehension as follows:
1. 1.
_Parse the linguistic input into semantic components_. Both (1) and (2) have
two references to objects: {or1: obj-ref{property:blue, property:cone}} and
{or2: obj-ref {property:red, property: cylinder}}. Additionally, (1) has a
reference to a spatial relationship: {rel1: {rel-name: left of, argument1:
or1, argument2: or2}}. (2) has a reference to an action: {act1: {act-name:
move, argument1: or1, argument2: or2, relation: left of} }. For this paper, we
assume that the knowledge for this step is pre-encoded.
2. 2.
_Create a goal for grounding each reference_. The goal of processing an object
reference is to find a set of objects that satisfy the properties specified.
It starts with first resolving properties. The process queries semantic memory
for a percept that corresponds to various properties in the parse. If the
knowledge in Figure 2 is assumed, property blue resolves to percept CVBlue,
cone to CVCone, red to CVRed, and cylinder to CVCylinder. Using these
percepts, Aileen queries its scene to resolve object references. For or1, it
finds an object that has both CVBlue and CVCone in its description. Let or1
resolve to o1 and or2 to o2, where o1 and o2 are identifiers of objects visible
on the scene. The goal of processing a relation reference is to find a set of
spatial calculi that correspond to the name specified. If knowledge in Figure
2 is assumed, rel1 in (1) is resolved to a conjunction of qsrs
e(a1,a2)$\land$dc(a1,a2), i.e., the object mapping to a1 should be east (in CDC) of
a2 and they should be disconnected. Similarly, act1 in (2) resolves to a task
goal which is a conjunction of qsrs w(a1,a2)$\land$dc(a1,a2).
3. 3.
_Compose all references_ : Use semantic constraints to resolve the full input.
For (1) and (2), a1 is matched to or1 and consequently to o1. Similarly, a2
is resolved to o2 via or2 (a minimal sketch of this reference-resolution process
follows this list).
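A minimal sketch of the object-reference resolution step, assuming the semantic-memory maps behave like a word-to-percept dictionary and the scene like a mapping from object identifiers to percept sets. The raised error stands in for the comprehension failure that later triggers learning (Section 4); none of this is the actual Soar implementation.

```python
def resolve_object_reference(properties, semantic_maps, scene):
    """properties: words such as ["blue", "cone"];
       semantic_maps: dict word -> percept symbol (the maps M1..M6 in Figure 2);
       scene: dict object id -> set of percepts, e.g. {"o1": {"CVBlue", "CVCone"}}."""
    percepts = set()
    for word in properties:
        if word not in semantic_maps:
            raise LookupError(f"no concept definition for '{word}'")   # failure: an opportunity to learn
        percepts.add(semantic_maps[word])
    return [obj for obj, description in scene.items() if percepts <= description]

# resolve_object_reference(["blue", "cone"],
#                          {"blue": "CVBlue", "cone": "CVCone", "red": "CVRed", "cylinder": "CVCylinder"},
#                          {"o1": {"CVBlue", "CVCone"}, "o2": {"CVRed", "CVCylinder"}})  ->  ["o1"]
```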
Tasks are represented in Aileen as goals that it must achieve in its
environment. Upon being asked to execute a task, _move blue cone right of red
cylinder_ , indexical comprehension determines the desired goal state as
w(a1,a2)$\land$dc(a1,a2). Now, Aileen must execute a sequence of actions to
achieve this desired goal state in its environment. Leveraging standard pre-
conditions and effects of actions, Aileen can simulate the results of applying
plausible actions in any state. Through an iterative deepening search
conducted over actions, Aileen can generate and execute a plan that will
achieve a desired goal state in the environment.
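A minimal sketch of iterative-deepening planning over primitive-action models, under the assumption that each action exposes an applicability test and a simulated-effect function; the actual Soar proposal and application rules are not reproduced here.

```python
def plan(state, goal_holds, actions, max_depth=6):
    """Iterative-deepening search over simulated action models.
       state: any world description (e.g. a frozenset of facts);
       goal_holds: function state -> bool;
       actions: list of (name, applicable, apply_model) where the latter two are functions of a state."""
    def dfs(current, depth):
        if goal_holds(current):
            return []
        if depth == 0:
            return None
        for name, applicable, apply_model in actions:
            if applicable(current):
                rest = dfs(apply_model(current), depth - 1)   # simulate the expected state change
                if rest is not None:
                    return [name] + rest
        return None

    for limit in range(max_depth + 1):
        result = dfs(state, limit)
        if result is not None:
            return result                                     # e.g. ["pick-up(o1)", "place([x, y, z])"]
    return None
```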
## 3 The Interactive Concept Learning Problem
With an understanding of how indexical comprehension connects language with
perceptions and actions and how tasks are executed, we can begin to define the
concept learning problem. Our main question is this - where does the
conceptual knowledge in semantic memory (in Figure 2) come from? We study how
this knowledge is acquired through interactions with an intelligent trainer
who demonstrates relevant concepts by structuring the learner’s environment.
In Soar, episodic memory stores contextual experiences while the semantic
memory stores general, context-independent facts. Our approach uses
supervision from an intelligent trainer to group contextual experiences
together. An analogical generalization process distills the common elements in
grouped contextual experience. This process can be seen as mediating knowledge
in Soar’s episodic and semantic memories.
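The distillation step can be caricatured as keeping only the facts that recur across the grouped examples, as in the sketch below. This is a purely frequency-based simplification that omits SAGE's structure-mapping alignment, and it assumes example facts have already been rewritten over shared placeholders (the job the analogical mapping does); the threshold mirrors the probability cutoff discussed later (Table 2).

```python
from collections import Counter

def generalize(examples, probability_threshold=0.6):
    """examples: list of fact sets (tuples) grouped under the same concept label by the trainer.
       Assumes facts are already aligned to shared placeholders across examples.
       Returns the facts whose relative frequency across the examples exceeds the threshold."""
    counts = Counter()
    for facts in examples:
        counts.update(facts)
    n = len(examples)
    return {fact: counts[fact] / n for fact in counts if counts[fact] / n > probability_threshold}

# Every example of "red" contains a fact like (isa obj CVRed), while shape facts vary across
# examples, so only the color fact survives generalization.
```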
To develop our ideas further, we focus on learning three kinds of concepts.
These concepts are crucial for ELP and ITL. Visual concepts correspond to
perceptual attributes of objects and include colors and shapes. They provide
meaning to nouns and adjectives in the linguistic input. Spatial concepts
correspond to configuration of objects and provide grounding to prepositional
phrases in the linguistic input. Action concepts correspond to temporal
changes in object configurations and provide grounding to verb phrases.
### 3.1 A Curriculum of Guided Participation
We introduce a novel interactive process for training Aileen to recognize and
use novel concepts - _guided participation_. Guided participation sequences
and presents lessons - conjoint stimuli (world and language) - to Aileen. A
lesson consists of a scenario setup in Aileen’s world and an interaction with
Aileen. A scenario can be a static scene when training visual and spatial
concepts or a sequence of scenes when training an action concept. An
interaction has a linguistic component (_content_) and a non-linguistic
component (_signal_). The signal component of instruction guides reasoning in
Aileen and determines how it processes and responds to the content. Currently,
Aileen can interpret and process the following types of signals:
1. 1.
inform: Aileen performs active learning. It uses all its available knowledge
to process the content through indexical comprehension (Section 3). If
failures occur, Aileen creates a learning goal for itself. In this goal, it
uses the current scenario to generate a concrete example of the concept
described in the content. This example is sent to its concept memory. If no
failure occurs, Aileen does not learn from the example. Aileen's learning is
deliberate; it evaluates the applicability of its current knowledge in
processing the linguistic content. It learns only when the current knowledge
isn’t applicable, and consequently, Aileen accumulates the minimum number of
examples necessary to correctly comprehend the content.
2. 2.
verify: Aileen analyzes the content through indexical comprehension and
determines if the content refers to specific objects, spatial relationships,
or actions in the accompanying scenario. If Aileen lacks knowledge to complete
verification, Aileen indicates a failure to the instructor.
3. 3.
react: This signal is defined only when the linguistic content contains a
reference to an action. Aileen uses its knowledge to produce an action
instantiation. Upon instantiation, Aileen determines a goal state in the
environment and then plans a sequence of actions to achieve the goal state.
This sequence of actions is executed in the environment.
Incorporating these variations in how Aileen responds to the linguistic
content in a lesson enables flexible interactive learning. A trainer can
evaluate the current state of knowledge in Aileen by assigning it verify and
react lessons. While the verify lesson tests if Aileen can recognize a concept
in the world, the react lesson tests if Aileen can use a known concept to
guide its own behavior in the environment. Observations of failures helps the
trainer in structuring inform lessons that guide Aileen’s learning. In an
inform lesson, Aileen evaluates its own learning and only adds examples when
necessary. Such learning strategy distributes the onus of learning between
both participants. Lessons can be structured in a flexible, reactive way in
real human-robot training scenarios.
Table 1: Predicate calculus representation for the world scene in Figure 1 corresponding to Soar's working memory graph in Figure 2. CVCyl is short for the CVCylinder symbol and H for the holdsIn predicate that encodes which predicates hold in which episodic timepoint.

Current world scene: objects | Current world scene: relations | Episodic trace: T0 | Episodic trace: T1 | Episodic trace: T2
---|---|---|---|---
(isa o1 CVBlue) | (e o1 o2) | (H T0 (dc o1 o2)) | (H T1 (held o1)) | (H T2 (w o1 o2))
(isa o1 CVCone) | (dc o1 o2) | (H T0 (e o1 o2)) | ... | ...
(isa o2 CVRed) | (w o2 o1) | ... | ... | (final T2 T1)
(isa o2 CVCyl) | (dc o2 o1) | (isa T0 start) | (after T1 T0) | (after T2 T1)
### 3.2 Desiderata for a Concept Memory
We extend the concept memory desiderata originally proposed by Langley (1987)
to enable embedding it within larger reasoning tasks, in this case ELP and
ITL:
1. D0
Is (a) architecturally integrated and (b) uses relational representations.
2. D1
Can represent and learn diverse types of concepts. In particular, for
Aileen, the concept memory must be able to learn visual concepts, spatial
concepts, and action concepts.
3. D2
Learn from exemplars acquired through experience in the environment. Aileen is
taught through lessons that have two stimuli - a scenario and linguistic
content that describes it.
4. D3
Enable incremental accumulation of knowledge. Interactive learning is a
distinctive learning approach in which behavior is intertwined with learning.
It has been previously argued that interleaving behavior and learning splits
the onus of learning between the instructor and the learner such that the
instructor can observe the learner’s behavior and provide more
examples/instruction if necessary.
5. D4
Learn from little supervision, as human instructors realistically cannot
provide a large number of examples.
6. D5
Facilitate diverse reasoning over definitions of concepts.
1. (a)
Evaluate existence of a concept in the current environment, including its
typicality. This enables recognizing a concept in the environment.
2. (b)
Envision a concept by instantiating it in the current environment. This
enables action in the environment.
3. (c)
Evaluate the quality of concept definitions. This enables active learning - if
the quality of a concept is poor, more examples can be added to improve it.
## 4 Concept Memory
Concept learning in Aileen begins with a failure during indexical
comprehension in an inform lesson. Assume that Aileen does not know the
meaning of _red_ , i.e., it does not know that _red_ implies the percept CVRed
in the object description. When attempting to ground the phrase _red cylinder_
in our example, indexical comprehension will fail when it tries to look up the
meaning of the word _red_ in its semantic memory. As in Rosie, a failure (or
an impasse) in Aileen is an opportunity to learn. Learning occurs through
interactions with a novel concept memory in addition to Soar’s semantic
memory. Similarly to Soar’s dLTM, the concept memory is accessed by placing
commands in a working memory buffer (a specific sub-graph). The concept memory
interface has $4$ commands: create, store, query, and project. Of these, store
and query are common with other Soar dLTMs. create and project are novel and
explained in the following sections.
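A minimal sketch of this four-command interface is shown below; the class and method signatures are illustrative placeholders rather than the actual Soar or concept-memory API:

```python
class ConceptMemory:
    """Sketch of the four-command interface described above.
    The internal SME/SAGE machinery is elided; names are illustrative."""

    def __init__(self):
        self.contexts = {}   # concept symbol -> generalization context

    def create(self, symbol):
        """Create an empty generalization context for a new concept symbol."""
        self.contexts[symbol] = {"generalizations": [], "examples": []}

    def store(self, facts, concept):
        """Add an example (a list of facts) to the concept's context."""
        self.contexts[concept]["examples"].append(list(facts))

    def query(self, scene, pattern):
        """Return candidate inferences matching `pattern` supported by the scene."""
        raise NotImplementedError  # analogical matching via SME/SAGE

    def project(self, trace, concept):
        """Return facts expected to hold in the next state of an action concept."""
        raise NotImplementedError
```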
Table 2: Terms used in analogical processing, their definitions, and values in Aileen's concept memory

| Term | Definition |
|---|---|
| Similarity | The score representing the quality of an analogical match; degree of overlap |
| Correspondence | A one-to-one alignment between the compared representations |
| Candidate Inference | Inferences resulting from the correspondences of the analogy |

| Threshold | Definition | Value |
|---|---|---|
| Assimilation | Score required to include a new example into a generalization instead of storing it as an example | 0.01 |
| Probability | Only facts exceeding this value are considered part of the concept | 0.6 |
| Match | Score required to consider that an inference is applicable in a given scene | 0.75 |
Aileen’s concept memory is built on two models of cognitive processes: SME
(Forbus et al., 2017) and SAGE (McLure et al., 2015) and can learn visual,
spatial, and action concepts (desiderata D0 and D1). Below we describe how each
function of concept memory is built with these models. The current
implementation of the memory represents knowledge as predicate calculus
statements or _facts_. We have implemented methods that automatically
convert Soar's object-oriented graph description to a list of facts when
needed. Example translations from Soar’s working memory graph to predicate
calculus statements are shown in Table 1. Visual and spatial learning requires
generating facts from the current scene. Examples for action learning are
provided through a demonstration which is automatically encoded in Soar’s
episodic memory. An episodic trace of facts is extracted from the episodic
memory (shown in Table 1). We will rely on examples in Table 1 for
illustrating the operation of the concept memory in the remainder of this
section. We have summarized various terms and parameters used in analogical
processing in Table 2.
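As an illustration of this translation, the following sketch (a hypothetical helper, not the actual implementation) flattens an object-oriented scene description into the fact list of Table 1:

```python
def graph_to_facts(objects, relations):
    """Flatten an object-oriented scene description into predicate-calculus facts.

    objects:   {"o1": ["CVBlue", "CVCone"], "o2": ["CVRed", "CVCylinder"]}
    relations: [("e", "o1", "o2"), ("dc", "o1", "o2")]
    """
    facts = []
    for obj, labels in objects.items():
        for label in labels:
            facts.append(("isa", obj, label))   # e.g. (isa o1 CVBlue)
    for rel, a, b in relations:
        facts.append((rel, a, b))               # e.g. (e o1 o2)
    return facts

# Reproduces the "current world scene" columns of Table 1 as tuples:
scene = graph_to_facts(
    {"o1": ["CVBlue", "CVCone"], "o2": ["CVRed", "CVCylinder"]},
    [("e", "o1", "o2"), ("dc", "o1", "o2"), ("w", "o2", "o1"), ("dc", "o2", "o1")],
)
```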
### 4.1 Creation and Storage
When Aileen identifies a new concept in linguistic content (word _red_), it
creates a new symbol RRed. This new symbol is incorporated into a map in Soar's
semantic memory and is passed on to the concept memory for creation of a new
concept via the create command. The concept memory creates a new reasoning
symbol as well as a new generalization context (shown in Figure 3). A
generalization context is an accumulation of concrete experiences with a
concept. Each generalization context is a set of individual examples and
generalizations.
| Facts | P |
|---|---|
| (isa (GenEntFn 0 RRedMt) RRed) | 1.0 |
| (isa (GenEntFn 0 RRedMt) CVRed) | 1.0 |
| (isa (GenEntFn 0 RRedMt) CVCube) | 0.5 |
| (isa (GenEntFn 0 RRedMt) CVCylinder) | 0.5 |
Figure 3: (left) SAGE maintains a generalization context for each concept. For
each example (circle) of a concept, it is either added to a generalization
(rounded rectangle) or maintained as an independent example for the concept.
(right) Facts and their probabilities in generalization context for RRed
After creating a new concept, Soar stores an example in the concept memory.
The command {store: [(isa o2 CVRed) (isa o2 CVCylinder) (isa o2 RRed)],
concept: RRed} stores that the object o2 in the world is an example of the
concept RRed. This example A is stored in the RRed generalization context as
is - as a set of facts. Assume that at a later time, Soar sends another
example B of RRed concept through the command {store: [(isa o3 CVRed) (isa o3
CVCube) (isa o3 RRed)], concept: RRed}. The concept memory adds the new
example to the RRed generalization context by these two computational steps:
1. 1.
SME performs an analogical match between the two examples. The result of
analogical matching has two components: a correspondence set and a similarity
score. A correspondence set contains alignment of each fact in one example
with at most one fact from the other. The similarity score indicates the degree of
overlap between the two representations. In the two examples A and B, there
are two corresponding facts: (isa o2 CVRed) aligns with (isa o3 CVRed) and
(isa o2 RRed) aligns with (isa o3 RRed). If the similarity score exceeds an
_assimilation threshold_ (Table 2), SAGE continues to the next step to create
a generalization.
2. 2.
SAGE assimilates the two examples A and B into a generalization (e.g., Figure
3). It:
1. (a)
Uses the correspondence to create abstract entities. In the two examples
provided, (isa o2 RRed) aligns with (isa o3 RRed) and (isa o2 CVRed) with (isa
o3 CVRed). Therefore, identifiers o2 and o3 can be replaced with an abstract
entity (GenEntFn 0 RRedMt).
2. (b)
Maintains a probability that a fact belongs in the generalization. Because
(isa (GenEntFn 0 RRedMt) RRed) and (isa (GenEntFn 0 RRedMt) CVRed) are common
in both examples, they are assigned a probability of $1$. Other facts are not
in the correspondences and appear in $1$ of the $2$ examples in the
generalization, resulting in a probability of $0.5$. Each time a new example is
added to this generalization, the probabilities will be updated to reflect
the number of examples for which the facts were aligned with each other.
Upon storage in a generalization context, a generalization becomes available
for matching and possible assimilation with future examples enabling
incremental (D3), example-driven (D2) learning.
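The following sketch is a drastically simplified stand-in for this store behavior: entity identifiers are assumed to be already abstracted away (standing in for SME's correspondences), and normalized fact overlap stands in for the SME similarity score.

```python
ASSIMILATION_THRESHOLD = 0.01   # from Table 2

def similarity(facts_a, facts_b):
    """Crude stand-in for SME's similarity score: normalized fact overlap.
    (The real score comes from structural alignment of relational facts.)"""
    a, b = set(facts_a), set(facts_b)
    return len(a & b) / max(len(a), len(b))

def store(context, example):
    """Add an example to a generalization context (cf. Figure 3)."""
    for gen in context:                       # context: list of generalizations
        if similarity(gen["prob"], example) >= ASSIMILATION_THRESHOLD:
            n = gen["count"]
            for fact in set(gen["prob"]) | set(example):
                # Probability of a fact = fraction of examples containing it.
                seen = gen["prob"].get(fact, 0.0) * n + (fact in example)
                gen["prob"][fact] = seen / (n + 1)
            gen["count"] = n + 1
            return
    context.append({"prob": {f: 1.0 for f in example}, "count": 1})

# The two RRed examples from the text reproduce the probabilities of Figure 3:
ctx = []
store(ctx, {"CVRed", "CVCylinder", "RRed"})
store(ctx, {"CVRed", "CVCube", "RRed"})
# ctx[0]["prob"] -> {CVRed: 1.0, RRed: 1.0, CVCylinder: 0.5, CVCube: 0.5}
```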
### 4.2 Query
During indexical comprehension, Aileen evaluates if a known concept exists in
the current world through the query command. Assume that in an example scene
with two objects, indexical comprehension attempts to find the one that is
referred to by _red_ through {query: {scene: [(isa o4 CVRed) (isa o4 CVBox)
(isa o5 CVGreen) (isa o5 CVCylinder)], pattern: (isa ?o RRed)}}. In response
to this command, the concept memory evaluates if it has enough evidence in the
generalization context for RRed to infer (isa o4 RRed). The concept memory
performs this inference through the following computations.
1. 1.
SME generates a set of candidate inferences. It matches the scene with the
generalization in Figure 3 (right). This match results in a correspondence
between the fact (isa o4 CVRed) in the scene and (isa (GenEntFn 0 RRedMt)
CVRed), which aligns o4 with (GenEntFn 0 RRedMt). Other facts that have
arguments that align, but are not in the correspondences, are added to the set
of candidate inferences. In our example, a candidate inference would be (isa
o4 RRed).
2. 2.
AILEEN filters the candidate inferences based on the pattern in the query
command. It removes all inferences that do not fit the pattern. If the list
has an element, further support is calculated.
3. 3.
Aileen evaluates the support for the inference by comparing the similarity score
of the match to the _match threshold_. That is, the more facts in the
generalization participate in the analogical match, the more likely it is
that the inference is valid. A simplified sketch of these query steps follows.
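The sketch below mirrors the three steps above, using label overlap in place of SME's structural match score; as in the previous sketch, entity identifiers are assumed to be abstracted already, so this is illustrative only.

```python
MATCH_THRESHOLD = 0.75    # from Table 2
PROB_THRESHOLD = 0.6      # facts below this are not considered part of the concept

def query(generalization, scene, concept):
    """Infer which scene entities are instances of `concept`.

    generalization: {"prob": {"RRed": 1.0, "CVRed": 1.0, "CVCube": 0.5, ...}}
    scene:          list of (entity, label) pairs, e.g. [("o4", "CVRed"), ...]
    concept:        the queried concept label, e.g. "RRed"
    """
    # Facts considered part of the concept (probability threshold).
    concept_labels = {l for l, p in generalization["prob"].items() if p >= PROB_THRESHOLD}
    # Observable evidence excludes the concept label itself, which is what we infer.
    evidence = concept_labels - {concept}
    inferred = []
    for entity in {e for e, _ in scene}:
        observed = {l for e, l in scene if e == entity}
        # Steps 1 and 3: score the match by the fraction of evidence facts that
        # find a corresponding fact in the scene for this entity.
        score = len(evidence & observed) / len(evidence) if evidence else 0.0
        # Step 2: the candidate inference fitting the pattern is (isa entity concept).
        if concept not in observed and score >= MATCH_THRESHOLD:
            inferred.append(("isa", entity, concept))
    return inferred

# With the RRed generalization of Figure 3 and the example scene from the text:
rred = {"prob": {"RRed": 1.0, "CVRed": 1.0, "CVCube": 0.5, "CVCylinder": 0.5}}
scene = [("o4", "CVRed"), ("o4", "CVBox"), ("o5", "CVGreen"), ("o5", "CVCylinder")]
print(query(rred, scene, "RRed"))   # -> [("isa", "o4", "RRed")]
```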
Through queries to the concept memory and resultant analogical inferences, the
working memory graph (of the world in Figure 4) is enhanced. This enhanced
working memory graph supports indexical comprehension in Section 3. Note that
the internal concept symbols in blue (such as RBlue) are generalization
contexts in the concept memory that accumulate examples from training.
Consequently, the ‘meaning’ of the word _blue_ will evolve as more examples
are accumulated.
Figure 4: Working memory graph corresponding to scene in Figure 1 now enhanced
with concept symbols (blue). Each concept symbol refers to a generalization
context in the concept memory. The graph is enhanced based on inferences
supported by analogical processing.
(H (:skolem (GenEntFn 0 0 rMoveMt)) (held O1))
(after (:skolem (GenEntFn 0 0 rMoveMt)) T0)
Figure 5: Candidate inferences indicate that the next state of the move action
is to hold object O1. Skolem terms are generated by SME to indicate that the
candidate inference refers to an entity from the concept for which there is no
correspondence in the scene. In this case, the skolem represents the next
temporal state of the action as denoted by the after relation.
### 4.3 Projection
In ITL, simply recognizing that an action has been demonstrated is
insufficient; the agent must also be able to perform the action if directed
(desiderata D5). One of the advantages of analogical generalization is that
the same mechanism is used for recognition and projection. Consider the
example scene Figure 1 in which the trainer asks Aileen to _move the blue cone
to the right of the red cylinder_ using the react signal. Assume that Aileen
has previously seen some other examples of this action that are stored in
concept memory as episodic traces (an example is shown in Table 1).
During indexical comprehension, Aileen performs queries to identify the _blue
cone_ , O1, and _red cylinder_ , O2. Similarly, it maps the verb and the
related preposition to RMove and RRightOf. To act, Aileen uses its concept
memory to project the action through the command {project: {trace: [(H T0 (dc
o1 o2)) (H T0 (e o1 o2)) (isa AileenStartTime T0) ...], concept: RMove}}. A
summary is shown in Figure 5 starting at T0. In response, the concept memory
performs the following computations:
1. 1.
SME generates a set of candidate inferences. SME matches the current scene
expressed as a trace against the generalization context of the action RMove.
SME generates all the candidate inferences that symbolically describe the next
states of the action concept.
2. 2.
Aileen filters the candidate inferences to determine which apply in the
immediate next state (shown in Figure 5). For example, the trace in the
project command contains episode T0 as the AileenStartTime. The filter
computation selects facts that are expected to hold in an episode (t) such that
(after (t) T0) holds.
This retrieval is accepted by Aileen to be the next desired state it must try to
achieve in the environment.
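A small sketch of this next-state filtering follows (the fact encoding is illustrative, not the actual implementation):

```python
def next_state_facts(candidate_inferences, start_episode="T0"):
    """Select candidate inferences describing the state immediately after
    `start_episode` (cf. Figure 5).

    candidate_inferences: facts such as
        ("after", "skolem1", "T0") and ("H", "skolem1", ("held", "O1")),
    where "skolem1" stands for SME's skolem term for the not-yet-existing episode.
    """
    # Episodes asserted to come right after the start episode.
    next_episodes = {f[1] for f in candidate_inferences
                     if f[0] == "after" and f[2] == start_episode}
    # Facts asserted to hold in one of those episodes.
    return [f[2] for f in candidate_inferences
            if f[0] == "H" and f[1] in next_episodes]

goal = next_state_facts([
    ("after", "skolem1", "T0"),
    ("H", "skolem1", ("held", "O1")),
])
# goal -> [("held", "O1")] : the next desired state is to be holding O1
```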
## 5 Evaluation
In this section, we evaluate how the proposed concept memory addresses the
desiderata outlined in Section 3.2. As per desiderata D0, the concept memory
can be integrated into a CMC architecture through its interfaces (defined in
Section 4), and SME & SAGE support inference and learning over relational
representations (in Table 1). For the remaining desiderata, we conducted a set
of empirical experiments and demonstrations.
1. H1
As per D1, can the concept memory learn diverse types of concepts? Our
hypothesis is that because SME & SAGE operate over relational, structured
representations, the concept memory designed with these algorithms can learn a
variety of concepts. We designed our experiments to study how Aileen learns
visual, spatial, and action concepts.
2. H2
As per D2, D3, & D4, can the concepts be learned incrementally through
limited, situated experience? Aileen can learn from a curriculum of guided
participation that incrementally introduces a variety of concepts through
conjoint stimuli of scene information and language. We designed our learning
experiments to reflect how human-like teaching (Ramaraj et al., 2021) would
unfold and report our observations about the memory’s performance especially
focusing on the number of examples needed to learn from.
3. H3
As per D5, does the concept memory support diverse reasoning? The
representations acquired by the concept memory not only support recognition of
a concept in the scene but also guide action selection and identify
opportunities to learn.
#### Method
We performed separate learning experiments for visual, spatial, and action
concepts (D1). We leverage the lessons of guided participation in the design
of our experimental trials. Each trial is a sequence of _inform_ lessons. In
an _inform_ lesson, a concept is randomly selected from a pre-determined set
and shown to Aileen accompanied with linguistic content describing the concept
(D2). The lesson is simplified, i.e., there are no distractor objects (examples
are shown in Figures 6, 7, & 8). The lesson is presented to Aileen and we
record the number of store requests it makes to the concept memory. Recall
that Aileen learns actively; i.e., it deliberately evaluates if it can
understand the linguistic content with its current knowledge and stores
examples only when necessary. The number of store requests made highlights the
impact of such active learning.
Additionally, to measure generality and correctness, we test Aileen's knowledge
after every _inform_ lesson through two exams: generality and specificity
(examples are shown in Figures 6, 7, & 8). Both exams are made up of $5$
_verify_ lessons that are randomly selected at the beginning of the trial. As
Aileen learns, the scores on these tests demonstrate how well Aileen can apply
what it has learned until now. In the generality lessons, Aileen is asked to
verify if the concept in the linguistic input exists on the scene. If Aileen
returns with a success status, it is given a score of $1$ and $0$ otherwise.
In the specificity exam, Aileen is asked to verify the existence of a concept,
however, the scenario does not contain the concept that is referred to in the
linguistic content. If Aileen returns with a failed status, it is given a
score of $1$ and $0$ otherwise. Both types of exam lessons have $0-3$
distractor objects introduced on the scene to evaluate whether the presence of noise
impacts the application of conceptual knowledge.
#### Results
Figure 6 illustrates visual concept learning. Aileen begins without any
knowledge of any concept. As two concepts (_green_ and _cone_) are introduced
in the first lesson, it provides several store commands to its concept memory
(shown in blue bars). The number of commands decreases as the training progresses,
demonstrating that the learning is active and opportunistic (D5(c)). As is
expected, the score on the generality exam is very low in the beginning
because Aileen doesn’t know any concepts. However, this score grows very
quickly with training eventually reaching perfect performance at lesson $15$.
The score on the specificity exam starts at $5$; this is to be expected as
well. This is because if a concept is unknown, Aileen cannot recognize it on
the scene. However, as the trial progresses we see that this score doesn’t drop.
This indicates that conceptual knowledge of one concept doesn’t bleed into
others. Note that the exams have distractor objects while learning occurred
without any distractors - good scores on these exams demonstrate the strength
of relational representations implemented in Aileen. Finally, Aileen learns
from very few examples, indicating that such learning systems can learn online
with human trainers (D3, D4).
Figure 6: (left) Learning curve for visual concepts averaged from $10$
trials. A trial includes lessons from $5$ colors and $4$ shapes ($=20$
unique objects). Lessons include references to shape only or to color and shape.
(right) Examples of an _inform_ lesson (I) and generality (G) and specificity
(S) exam lessons. The blue bars show the average number of create or store
commands executed in the concept memory. The pink and green lines show average
score on the generality and specificity exams respectively.
Figure 7 illustrates spatial concept learning (commenced after all visual
concepts are already known). Spatial relationships are defined between two
objects, each of which can be any of the $20$ possible objects in the domain. Concrete examples
include irrelevant information (e.g., _left of_ does not depend on visual
properties of the objects). Despite this large, complex learning space,
learning is quick and spatial concepts can be learned with few examples. These
results demonstrate the strength of analogical generalization over relational
representations. An interesting observation is that generality scores do not
converge to $5$ as in visual concept learning. A further analysis revealed
that in noisy scenes when the trainer places several distractors on the scene,
sometimes the objects move because they are placed too close and the
environment has physics built into it. The movement shifts objects away
from the intended configuration, leading to apparent errors in Aileen’s
performance. This is a problem with our experimental framework. The learning
itself is robust, as demonstrated by the number of store commands in the trial,
which reduces to $0$ at the end.
Figure 7: (left) Learning curve for spatial concepts averaged from $10$
trials. A trial includes lessons about $4$ types of binary relations defined
over $20$ unique objects. (right) Examples of an _inform_ lesson (I) and
generality (G) and specificity (S) exam lessons. The blue bars show the
average number of create or store commands executed in the concept memory. The
pink line shows the average score on the generality exam and the green bar at the
top shows the average score on the specificity exam.
Figure 8 illustrates action learning (commenced after all visual and spatial
concepts have been learned). Actions are generated through the template _move
<object reference 1> <relation> <object reference 2>_. Similarly to spatial
concepts, the learning space is very large and complex. When Aileen asks, it
is provided a demonstration of action performance as shown in Figure 8 (T0,
T1, T2). Aileen stores the demonstration trace in its episodic memory. For
storing an example in the concept memory, information in Soar’s episodic
memory is translated into an episodic trace as shown in Table 1. Similarly to
visual and spatial learning, inform lessons with simplified scene are used to
teach a concept. Exams made up of positive and negative verify lessons are
used to evaluate learning. As we see in Figure 8, Aileen can quickly learn
action concepts. Errors towards the later part of the experimental trial occur
for the same reason we identified in spatial learning.
Figure 8: (left) Learning curve for action concepts averaged from $5$ trials.
A trial includes lessons about $1$ verb _move_ with $4$ different relations
and two objects chosen from $20$ unique objects. The blue bars show the
average number of create or store commands executed in the concept memory. The
pink line shows the average score on the generality exam and the green bar at the
top shows the average score on the specificity exam. (right) A demonstration.
Figure 9: A simplified view of how Aileen plans a sequence of actions using
its concept memory. The process starts at the current state in T0 that is used
to generate a project command to the concept memory. The memory returns the
predicates to be achieved in the next state. An iterative deepening search
determines the action that will achieve it. This successive projection and
planning continues until the terminal state.
#### Task Demonstration
After visual, spatial, and action concepts were taught, we used a react lesson
to see if Aileen could perform the actions when asked. Consider the time T0 in
Figure 9 when Aileen is asked to _move the blue cone right of the red
cylinder_. It can successfully use methods of analogical processing to guide
action planning through the concept memory interface. First, it uses its
visual concepts during indexical comprehension to resolve _blue cone_ to (O1)
and the _red cylinder_ to (O2). It maps the verb _move_ to a known action
trace indexed by RMove. Then, it projects this action in the future. As
described in Section 4.3, the concept memory returns with a set of predicates
that have to be true in the next state, (held O1). Aileen plans using its
pre-encoded actions models and iterative deepening search. The search results
in pick-up(O1). After executing the pick-up action, Aileen invokes
projection again to determine if RMove requires more steps. In this case, it
does, and the candidate inferences specify that O1 should be located to the west (w)
of O2 and they should be topologically disjoint. Further, these candidate
inferences indicate that this is the last step in the action, and therefore
Aileen marks the action as completed after executing it.
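The overall control loop can be sketched as follows; all agent methods and the shape of the projection result are hypothetical placeholders for the mechanisms described above:

```python
def perform_action(agent, concept, max_steps=10):
    """Successive projection and planning, as sketched in Figure 9."""
    for _ in range(max_steps):
        trace = agent.encode_trace()                       # current state as an episodic trace
        projection = agent.concept_memory.project(trace, concept)
        if projection.get("final"):                        # concept indicates no more steps
            return True
        goal_facts = projection["next_state"]              # e.g. [("held", "O1")]
        plan = agent.iterative_deepening_search(goal_facts)
        for action in plan:                                # e.g. [("pick-up", "O1")]
            agent.execute(action)
    return False
```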
The symbolic actions generated through planning are incrementally transformed
into concrete information required to actuate the robot. pick-up executed on a
specific object can be directly executed using an inverse kinematics solver.
The place action is accompanied by qualitative constraints. For example, to
place o1 to the _right of_ o2, it must be placed in a location that is to the west
and such that their bounding boxes are disconnected. Aileen uses QSRLib to
sample a point that satisfies the constraint. Once a point is identified, the
inverse kinematics solver can actuate the robot to achieve the specified
configuration. The successive projection and their interaction with action
planning is shown in Figure 9.
## 6 Related Work
Diverse disciplines in AI have proposed approaches for concept learning from
examples; however, not all approaches can be integrated into a CMC architecture.
We use the desiderata defined in Section 3.2 to evaluate the utility of
various concept learning approaches. The vast majority study the problem in
isolation and consider only flat representations, violating desiderata D0.
ML-based classification approaches are designed for limited types of concepts
(such as object properties), violating desiderata D1, and require a large
number of examples, violating desiderata D4, which are added in batch-mode,
violating desiderata D3. On the other hand, while EBL and Inductive logic
programming (Muggleton and De Raedt, 1994) can learn from few datapoints, they
require a fully specified domain theory, violating desiderata D2. Bayesian
concept learning (Tenenbaum, 1999) uses propositional representations,
violating D0, and each demonstration has focused on a single type of concept,
violating D1.
There are a few cognitive systems’ approaches to the concept learning problem
that aim toward the desiderata that we delineated in Section 3. In the late
1980s and early 1990s, there was a concerted effort to align machine
learning and cognitive science around concept formation (Fisher, 1987). For
example, Labyrinth (Thompson and Langley, 1991) creates clusters of examples,
summary descriptions, and a hierarchical organization of concepts using a
sequence of structured examples. COBWeb3 (Fisher, 1987) incorporates numeric
attributes and provides a probabilistic definition of differences between
concepts. Building off these ideas, Trestle (MacLellan et al., 2015) learns
concepts that include structural, relational, and numerical information. Our
work can be seen as a significant step in advancing these research efforts.
First, the proposed concept memory leverages the computational models of
analogical processing that have been shown to emulate analogical reasoning in
humans. Second, we place the concept learning problem within the larger
problems of ELP and ITL in a cognitive architecture context. We demonstrate
not only concept formation but also how learned concepts are applied for
recognition, scene understanding, and action reasoning. By integrating with
vision techniques, we demonstrate one way in which concept formation is tied
to sensing.
Another thread of work in the cognitive system’s community that we build upon
is that of analogical learning and problem-solving. Early analogical problem-
solving systems include Cascade (VanLehn et al., 1991), Prodigy (Veloso et
al., 1995), and Eureka (Jones and Langley, 2005). They typically used analogy
in two ways: (1) as analogical search control knowledge where previous
examples were used to guide the selection of which problem-solving operator to
apply at any time, and (2) for the application of example-specific operators
in new situations. Aileen differs in two important ways: (1) it relaxes the
need for explicit goals further in its use of projection to specify the next
subgoal of an action, and (2) it uses analogical generalization on top of
analogical learning to remove extraneous knowledge from the concept.
## 7 Discussion, Conclusions, and Future Work
In this paper, we explored the design and evaluation of a novel concept memory
for Soar (and other CMC cognitive architectures). The computations in the
memory use models of analogical processing - SAGE and SME. This memory can be
used to acquire new situated concepts in interactive settings. The concepts
learned are not only useful in ELP and recognition but also in task execution.
While the results presented here are encouraging, the work described in this
paper is only a small first step towards an architectural concept memory. We
have only explored a functional integration of analogical processing in Soar.
The memory has not been integrated into the architecture but is a separate
module that Soar interacts with. There are significant differences between
representations that Soar employs and those in the memory. For an efficient
integration and a reactive performance that Soar has historically committed
to, several engineering enhancements have to be made.
There are several avenues for extending this work. We are looking at three
broad classes of research: disjunctive concepts, composable concepts, and
expanded mixed-initiative learning. Disjunctive concepts arise from homographs
(e.g., _bow_ as a musical implement versus _bow_ as the part of a ship) as well as
when the spatial calculus does not align with the concept or the functional
aspects of the objects must be taken into account (e.g., a cup is _under_ a
teapot when it is under the spigot, while a saucer is _under_ a cup when it is
directly underneath). One of the promises of relational declarative
representations of the form learned here is that they are composable. This
isn’t fully exploited for learning actions with spatial relations in them. Our
approach ends up with different concepts for move-left and move-above. A
better solution would be to have these in the same generalization such that
Aileen would be able to respond to the command to _move cube below cylinder_
assuming it has been taught a _move_ action previously along with the concepts for
_below_ , _cube_ , and _cylinder_. Another avenue is contextual application of
concepts. For example, _bigger box_ requires comparison between existing
objects. Finally, a cognitive system should learn not only from a structured
curriculum designed by an instructor but also in a semi-supervised fashion
while performing tasks. In our context this means adding additional examples
to concepts when they were used as part of a successful execution. This also
means, when there are false positives that lead to incorrect execution,
revising the learned concepts based on this knowledge. One approach from
analogical generalization focuses on exploiting these near-misses with SAGE
(McLure et al., 2015).
Inducing general conceptual knowledge from observations is a crucial
capability of generally intelligent agents. The capability supports a variety
of intelligent behavior such as operation in partially observable scenarios
(where conceptual knowledge elaborates what is not seen), in language
understanding (including ELP), in commonsense reasoning, as well as in task
execution. Analogical processing enables robust incremental induction from few
examples and has been demonstrated as a key cognitive capability in humans.
This paper explores how analogical processing can be integrated into the Soar
cognitive architecture which is capable of flexible and contextual decision
making and has been widely used to design complex intelligent agents. This
paper paves the way for an exciting exploration of new kinds of intelligent
behavior enabled by analogical processing.
## 8 Acknowledgements
The work presented in this paper was supported in part by the DARPA GAILA
program under award number HR00111990056 and an AFOSR grant on Levels of
Learning, a sub-contract from University of Michigan (PI: John Laird,
FA9550-18-1-0180). The views, opinions and/or findings expressed are those of
the authors and should not be interpreted as representing the official views
or policies of the Department of Defense, AFOSR, or the U.S. Government.
## References
* Anderson (2009) Anderson, J.R., 2009. How Can the Human Mind Occur in the Physical Universe? Oxford University Press.
* Chen et al. (2019) Chen, K., Rabkina, I., McLure, M.D., Forbus, K.D., 2019. Human-Like Sketch Object Recognition via Analogical Learning, in: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 1336–1343.
* Derbinsky and Laird (2009) Derbinsky, N., Laird, J.E., 2009. Efficiently Implementing Episodic Memory, in: International Conference on Case-Based Reasoning, Springer. pp. 403–417.
* Derbinsky et al. (2010) Derbinsky, N., Laird, J.E., Smith, B., 2010. Towards Efficiently Supporting Large Symbolic Declarative Memories, in: Proceedings of the 10th International Conference on Cognitive Modeling, Citeseer. pp. 49–54.
* Fisher (1987) Fisher, D.H., 1987. Knowledge Acquisition via Incremental Conceptual Clustering. Machine Learning.
* Forbus et al. (2017) Forbus, K.D., Ferguson, R.W., Lovett, A., Gentner, D., 2017. Extending SME to Handle Large-Scale Cognitive Modeling. Cognitive Science 41, 1152–1201.
* Gatsoulis et al. (2016) Gatsoulis, Y., Alomari, M., Burbridge, C., Dondrup, C., Duckworth, P., Lightbody, P., Hanheide, M., Hawes, N., Hogg, D., Cohn, A., et al., 2016. QSRlib: A Software for Online Acquisition of QSRs from Video.
* Gentner (2003) Gentner, D., 2003. Why We’re So Smart? Language in Mind: Advances in the Study of Language and Thought .
* Gluck and Laird (2019) Gluck, K.A., Laird, J.E., 2019. Interactive Task Learning: Humans, Robots, and Agents Acquiring New Tasks Through Natural Interactions. volume 26. MIT Press.
* Hinrichs and Forbus (2017) Hinrichs, T.R., Forbus, K.D., 2017. Towards a Comprehensive Standard Model of Human-Like Minds, in: 2017 AAAI Fall Symposium Series.
* Jones and Langley (2005) Jones, R.M., Langley, P., 2005. A Constrained Architecture for Learning and Problem Solving. Computational Intelligence 21, 480–502.
* Kirk and Laird (2014) Kirk, J.R., Laird, J.E., 2014. Interactive Task Learning for Simple Games. Advances in Cognitive Systems.
* Kirk and Laird (2019) Kirk, J.R., Laird, J.E., 2019. Learning Hierarchical Symbolic Representations to Support Interactive Task Learning and Knowledge Transfer, in: Proceedings of the 28th International Joint Conference on Artificial Intelligence, AAAI Press. pp. 6095–6102.
* Klenk et al. (2011) Klenk, M., Forbus, K., Tomai, E., Kim, H., 2011. Using Analogical Model Formulation with Sketches to Solve Bennett Mechanical Comprehension Test problems. Journal of Experimental & Theoretical Artificial Intelligence 23, 299–327.
* Laird (2012) Laird, J.E., 2012. The Soar Cognitive Architecture.
* Laird et al. (2017) Laird, J.E., Lebiere, C., Rosenbloom, P.S., 2017. Toward a Common Computational Framework Across Artificial Intelligence, Cognitive Science, Neuroscience, and Robotics. AI Magazine 38, 13–26.
* Langley (1987) Langley, P., 1987. Machine Learning and Concept Formation. Machine Learning 2, 99–102.
* Lockwood (2009) Lockwood, K., 2009. Using Analogy to Model Spatial Language Use and Multimodal Knowledge Capture. Ph.D. thesis.
* MacLellan et al. (2015) MacLellan, C.J., Harpstead, E., Aleven, V., Koedinger, K.R., 2015. Trestle: Incremental Learning in Structured Domains Using Partial Matching and Categorization, in: Proceedings of the 3rd Annual Conference on Advances in Cognitive Systems-ACS.
* McLure et al. (2015) McLure, M.D., Friedman, S.E., Forbus, K.D., 2015. Extending Analogical Generalization with Near-Misses, in: Twenty-Ninth AAAI Conference on Artificial Intelligence.
* Mininger and Laird (2018) Mininger, A., Laird, J.E., 2018. Interactively Learning a Blend of Goal-Based and Procedural Tasks, in: Thirty-Second AAAI Conference on Artificial Intelligence.
* Mohan and Laird (2014) Mohan, S., Laird, J., 2014. Learning Goal-Oriented Hierarchical Tasks from Situated Interactive Instruction, in: Twenty-Eighth AAAI Conference on Artificial Intelligence.
* Mohan et al. (2014) Mohan, S., Mininger, A., Laird, J., 2014. Towards an Indexical Model of Situated Language Comprehension for Cognitive Agents in Physical Worlds. Advances in Cognitive Systems.
* Mohan et al. (2012) Mohan, S., Mininger, A.H., Kirk, J.R., Laird, J.E., 2012. Acquiring Grounded Representations of Words with Situated Interactive Instruction. Advances in Cognitive Systems.
* Muggleton and De Raedt (1994) Muggleton, S., De Raedt, L., 1994. Inductive Logic Programming: Theory and Methods. The Journal of Logic Programming 19, 629–679.
* Ramaraj et al. (2021) Ramaraj, P., Ortiz, C.L., Klenk, M., Mohan, S., 2021. Unpacking Human Teachers’ Intentions For Natural Interactive Task Learning. arXiv:2102.06755.
* Redmon et al. (2016) Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You Only Look Once: Unified, Real-Time Object Detection, in: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788.
* Rich et al. (2001) Rich, C., Sidner, C.L., Lesh, N., 2001. Collagen: Applying Collaborative Discourse Theory to Human-Computer Interaction. AI magazine 22, 15–15.
* Rosenbloom et al. (2016) Rosenbloom, P.S., Demski, A., Ustun, V., 2016. The Sigma Cognitive Architecture and System: Towards Functionally Elegant Grand Unification. Journal of Artificial General Intelligence 7, 1–103.
* Tenenbaum (1999) Tenenbaum, J.B., 1999. Bayesian Modeling of Human Concept Learning, in: Advances in neural information processing systems, pp. 59–68.
* Thompson and Langley (1991) Thompson, K., Langley, P., 1991. Concept Formation in Structured Domains, in: Concept Formation.
* Tulving and Craik (2005) Tulving, E., Craik, F.I., 2005. The Oxford Handbook of Memory. Oxford University Press.
* VanLehn et al. (1991) VanLehn, K., Jones, R.M., Chi, M., 1991. Modeling the Self-Explanation Effect with Cascade 3, in: Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, Lawrence Earlbaum Associates Hillsdale, NJ. pp. 137–142.
* Veloso et al. (1995) Veloso, M., Carbonell, J., Perez, A., Borrajo, D., Fink, E., Blythe, J., 1995. Integrating Planning and Learning: The Prodigy Architecture. Journal of Experimental & Theoretical Artificial Intelligence 7, 81–120.
* Xu and Laird (2010) Xu, J.Z., Laird, J.E., 2010. Instance-Based Online Learning of Deterministic Relational Action Models, in: Twenty-Fourth AAAI Conference on Artificial Intelligence.
# Magnetic Couplings in Edge-Sharing $d^{7}$ Compounds
Stephen M. Winter Department of Physics and Center for Functional Materials,
Wake Forest University, NC 27109, USA
###### Abstract
High-spin $d^{7}$ Co(II) compounds have recently been identified as possible
platforms for realising highly anisotropic and bond-dependent couplings
featured in quantum-compass models such as the celebrated Kitaev model. In
order to evaluate this potential, we consider all symmetry-allowed
contributions to the magnetic exchange for ideal edge-sharing bonds. Though a
combination of ab-initio and cluster many-body calculations we conclude that
bond-dependent couplings are generally suppressed in favor of Heisenberg
exchange for realistic materials. Consequences for several prominent materials
including Na2Co2TeO6 and BaCo2(AsO4)2 are discussed.
## I Introduction
Pursuit of strongly anisotropic $d$-block magnets has been motivated by the
possibility of material realization of so-called quantum compass
modelsNussinov and Van Den Brink (2015), such Kitaev’s celebrated honeycomb
modelKitaev (2006). In these materials, competition between different bond-
dependent magnetic interactions produces an extensive classical degeneracy
conducive to quantum spin liquid ground statesHermanns _et al._ (2018);
Broholm _et al._ (2020); Zhou _et al._ (2017). Realising these conditions in
real materials requires precise tuning and suppression of the usual isotropic
magnetic exchange. This can be accomplished, in principle, in edge-sharing
compounds with $d^{5}$ filling and strong spin-orbit coupling. Remarkably,
under ideal conditions, the specific spin-orbital composition of the local
moments suppresses all couplings except those bond-dependent Ising couplings
precisely prescribed by Kitaev’s modelJackeli and Khaliullin (2009). This
revelation led to a flurry of studiesSingh and Gegenwart (2010); Plumb _et
al._ (2014); Banerjee _et al._ (2016); Winter _et al._ (2017a); Trebst
(2017); Banerjee _et al._ (2017) in $5d^{5}$ Ir(IV) compounds such as A2IrO3
(A = Na, Li) and the $4d^{5}$ Ru(III) compound $\alpha$-RuCl3. These studies
have revealed clear evidence of dominant bond-dependent anisotropic couplings
in these compoundsHwan Chun _et al._ (2015); Suzuki _et al._ (2021), leading
to a variety of anomalous behaviors from the breakdown of conventional magnon
excitationsWinter _et al._ (2017b) to the possibility of a field-induced
spin-liquidBanerjee _et al._ (2018); Kasahara _et al._ (2018); Yokoi _et
al._ (2021). However, while Kitaev couplings are thought to be the largest
interaction, other couplings of similar magnitude always lift the classical
degeneracy leading to magnetic order at zero-field.
In this context, the seminal work of Liu et al.Liu and Khaliullin (2018); Liu
(2021); Liu _et al._ (2020) and Sano et al. Sano _et al._ (2018) renewed
hope for realizing Kitaev’s spin liquid, by showing that the magnetic exchange
in high-spin $3d^{7}$ Co(II) ions may also produce dominant Kitaev
interactions for ideal considerations. In particular, these studies assumed
the dominant hopping between metals occurs via metal-ligand hybridization,
which suppresses other couplings. While this condition is satisfied for
$5d^{5}$ Ir(IV) compounds such as A2IrO3, the presence of significant direct
metal-metal hopping in $4d^{5}$ $\alpha$-RuCl3 is the primary source of non-
Kitaev interactionsRau _et al._ (2014); Winter _et al._ (2016). It is not
clear that these assumptions are satisfied in $3d$ systems.
The possibility of strong bond-dependent Kitaev interactions also challenges
the conventional view that Co(II) compounds typically have bond-independent
XXZ anisotropy largely driven by the effects of local crystal field
distortions on the $j_{1/2}$ doublets Lines (1963); Oguchi (1965). For
example, CoNb2O6 (CNO) is considered to be a prototypal 1D Ising
ferromagnetScharf _et al._ (1979); Maartense _et al._ (1977); Kobayashi _et
al._ (1999), and has been studied in the context of transverse-field Ising
criticality Lee _et al._ (2010); Coldea _et al._ (2010); Morris _et al._
(2014). The structure consists of zigzag chains of edge-sharing CoO6
octahedra, which can be considered as alternating X- and Y-bonds per Fig.
1(a). While this bonding geometry might be expected to produce large bond-
dependent couplings, the dominant nearest neighbor interaction is known to
have an Ising form $S_{i}^{\alpha}S_{j}^{\alpha}$ with a common $\alpha$-axis
for all bonds. Recent studies have highlighted the importance of small
deviationsFava _et al._ (2020); Morris _et al._ (2021), but it is
nonetheless evident that the Kitaev coupling is not dominant.
More recently, the pursuit of 2D honeycomb materials with large bond-dependent
couplings has drawn attention to Na3Co2SbO6 (NCSO), and Na2Co2TeO6 (NCTO).
Both materials show zigzag antiferromagnetic orderLefrançois _et al._ (2016);
Bera _et al._ (2017); Wong _et al._ (2016); Bera _et al._ (2017); Chen _et
al._ (2021). This ground state is natural for strong bond-dependent
couplingsChaloupka _et al._ (2013); Rau _et al._ (2014), although longer-
range Heisenberg $J_{2}$ and $J_{3}$ may also be invokedFouet _et al._
(2001); Kimchi and You (2011). Indeed, analysis of inelastic neutron scattering
has led to a wide variety of proposed models for the couplingsChen _et al._
(2021); Songvilay _et al._ (2020); Kim _et al._ (2021); Lin _et al._
(2021), which span the entire range from dominant Heisenberg to dominant
Kitaev. Overall, the relative role of nearest neighbor bond-dependent coupling
vs. longer range Heisenberg exchange remains unclear.
Figure 1: (a) Three types of edge-sharing bonds, with definition of global
$(xyz)$ coordinates. (b) Energy level diagram showing the splitting of the
local electronic levels in the absence of spin-orbit coupling.
Two more honeycomb materials of recent interest are BaCo2(AsO4)2 (BCAO) and
BaCo2(PO4)2 (BCPO). Of these, BCPO displays only short-range incommensurate
correlations, suggesting strong frustrationNair _et al._ (2018). BCAO orders
in a state intermediary between zigzag antiferromagnetic and ferromagnetic
states with unconventional magnon dispersionRegnault _et al._ (2006, 2018),
which has been discussed as an incommensurate helimagnetRegnault _et al._
(1977) or double stripe zigzagRegnault _et al._ (2018). Under applied field
in-plane, BCAO undergoes a series of phase transitionsRegnault _et al._
(1977) between magnetization plateaus, and was proposed to host a field-
induced spin-liquidZhong _et al._ (2020); Zhang _et al._ (2021). However,
this was recently called into question due to the appearance of sharp magnon
modes in each of the phasesShi _et al._ (2021). As with NCTO, the relative
role of different couplings is a subject of much discussion; the first ab-
initio studiesDas _et al._ (2021); Maksimov _et al._ (2022) favored a nearly
XXZ model, in contrast with the assumption of large Kitaev interactions.
All of these findings call for a reinvestigation of the magnetic couplings in
edge-sharing Co(II) materials. In this work, we find, in contrast to the
assumptions of the initial theoretical analysis, that ligand-mediated hopping
is not large in these compounds. For this reason the character of the magnetic
couplings is significantly altered from the expected Kitaev form. In
particular, ferromagnetic Heisenberg $J$ typically dominates, while a myriad
of smaller anisotropic couplings may appear depending on the specific details
of the hopping and crystal field distortions.
The paper is organized as follows: We first review the single-ion ground
state, and the effect of crystal field distortions on the spin-orbital
composition of the $j_{1/2}$ moments. We then analyze the full set of relevant
symmetry-allowed hoppings in edge-sharing bonds. On the basis of these
hoppings, we then compute the resulting magnetic couplings. Finally, we
discuss the results in the context of materials of recent interest.
## II Single-Ion Considerations
### II.1 Local Electronic State
At each metal atom, we consider a Hamiltonian that is a sum, respectively, of
Coulomb interactions, crystal-field splitting, and spin-orbit coupling:
$\displaystyle\mathcal{H}_{i}=\mathcal{H}_{U}+\mathcal{H}_{\rm
CFS}+\mathcal{H}_{\rm SOC}$ (1)
The Coulomb interactions are most generally written:
$\displaystyle\mathcal{H}_{U}=\sum_{\alpha,\beta,\delta,\gamma}\sum_{\sigma,\sigma^{\prime}}U_{\alpha\beta\gamma\delta}\
c_{i,\alpha,\sigma}^{\dagger}c_{i,\beta,\sigma^{\prime}}^{\dagger}c_{i,\gamma,\sigma^{\prime}}c_{i,\delta,\sigma}$
(2)
where $\alpha,\beta,\gamma,\delta$ label different $d$-orbitals. The
coefficients $U_{\alpha\beta\gamma\delta}$ may be grouped according to the
number of unique orbital indices, from one to four. For example, the intra-
orbital Hubbard terms $n_{i,\alpha,\uparrow}n_{i,\alpha,\downarrow}$ have one
unique index $\alpha$, while the inter-orbital Hubbard terms
$n_{i,\alpha,\sigma}n_{i,\beta,\sigma^{\prime}}$ have two unique indices
$\alpha,\beta$. In the spherically symmetric approximation Sugano (2012), the
Coulomb coefficients with three and four indices vanish unless at least one of
the orbitals is an $e_{g}$ orbital. For this reason, $t_{2g}$-only (and
$e_{g}$-only) models reduce to the familiar Kanamori form (Georges _et al._, 2013;
Pavarini, 2014), which includes only Hubbard density-density repulsion, Hund’s
exchange, and pair-hopping contributions. However, when both $e_{g}$ and
$t_{2g}$ orbitals are considered together, it is important to include the full
rotationally symmetric Coulomb terms. This is particularly true when computing
anisotropic magnetic exchange, because any approximations to the Coulomb
Hamiltonian are likely to explicitly break rotational symmetry, leading to
erroneous sources of anisotropy. In the spherically symmetric approximation
Sugano (2012), the coefficients $U_{\alpha\beta\gamma\delta}$ are all related
to the three Slater parameters $F_{0},F_{2},F_{4}$. In terms of these, the
familiar $t_{2g}$ Kanamori parameters are, for example:
$\displaystyle U_{t2g}=F_{0}+\frac{4}{49}\left(F_{2}+F_{4}\right)$ (3)
$\displaystyle J_{t2g}=\frac{3}{49}F_{2}+\frac{20}{441}F_{4}$ (4)
We take the approximate ratio $F_{4}/F_{2}=5/8$, following Ref. Pavarini,
2014. The full parameterization is described in Ref. Sugano, 2012. Unless
otherwise stated, we use $U_{t2g}=3.25$ eV, and $J_{t2g}=0.7$ eV to model
Co(II) compounds, following Ref. Das _et al._ , 2021.
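As a consistency check on these parameter choices, Eqs. (3)-(4) together with the ratio $F_{4}/F_{2}=5/8$ can be inverted numerically; the short sketch below (values in eV) recovers the implied Slater parameters.

```python
# Invert Eqs. (3)-(4) with the ratio F4/F2 = 5/8 to recover the Slater
# parameters implied by U_t2g = 3.25 eV and J_t2g = 0.7 eV.
U_t2g, J_t2g, ratio = 3.25, 0.70, 5.0 / 8.0

F2 = J_t2g / (3.0 / 49.0 + (20.0 / 441.0) * ratio)
F4 = ratio * F2
F0 = U_t2g - (4.0 / 49.0) * (F2 + F4)

print(F0, F2, F4)   # roughly 2.21, 7.82, 4.88 eV
```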
For the crystal-field Hamiltonian, we consider an ideal trigonal distortion
within $D_{3d}$ site symmetry. The Hamiltonian can be written:
$\displaystyle\mathcal{H}_{\rm
CFS}=\sum_{\sigma}\mathbf{c}_{i,\sigma}^{\dagger}\ \mathbb{D}\
\mathbf{c}_{i,\sigma}$ (5)
where:
$\displaystyle\mathbf{c}_{i,\sigma}^{\dagger}=\left(c_{i,yz,\sigma}^{\dagger}\
c_{i,xz,\sigma}^{\dagger}\ c_{i,xy,\sigma}^{\dagger}\
c_{i,z^{2},\sigma}^{\dagger}\ c_{i,x^{2}-y^{2},\sigma}^{\dagger}\right)$ (6)
In terms of the global $(xyz)$ coordinates defined in Fig. 1(a), the CFS
matrix can be written:
$\displaystyle\mathbb{D}=\left(\begin{array}[]{ccccc}0&\Delta_{2}&\Delta_{2}&0&0\\\
\Delta_{2}&0&\Delta_{2}&0&0\\\ \Delta_{2}&\Delta_{2}&0&0&0\\\
0&0&0&\Delta_{1}&0\\\ 0&0&0&0&\Delta_{1}\end{array}\right)$ (12)
where $\Delta_{1}$ is the $t_{2g}$-$e_{g}$ splitting, and $\Delta_{2}$ is the
trigonal term. Generally, $\Delta_{2}>0$ corresponds to trigonal elongation,
as shown in Fig. 1(b), although the actual sign is further influenced by the
details of the ligand environments and longer ranged Coulomb potentials.
Without SOC, the $t_{2g}$ levels are split into a doubly degenerate $e$ pair
and a singly degenerate $a$ level, with $E_{a}-E_{e}=3\Delta_{2}$. As
discussed below, the trigonal splitting has a strong impact on the nature of
the local moments.
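The structure of Eq. (12) and the quoted $t_{2g}$ splitting are easily checked numerically; the sketch below (with placeholder values for $\Delta_{1}$ and $\Delta_{2}$) constructs the crystal-field matrix and diagonalizes it.

```python
import numpy as np

def crystal_field_matrix(delta1, delta2):
    """CFS matrix of Eq. (12) in the (yz, xz, xy, z2, x2-y2) basis."""
    D = np.zeros((5, 5))
    D[:3, :3] = delta2 * (np.ones((3, 3)) - np.eye(3))   # trigonal term
    D[3, 3] = D[4, 4] = delta1                           # t2g-eg splitting
    return D

D = crystal_field_matrix(delta1=1.1, delta2=0.05)        # illustrative values in eV
evals = np.linalg.eigvalsh(D)
# t2g block eigenvalues: -delta2 (doubly degenerate "e") and +2*delta2 ("a"),
# so E_a - E_e = 3*delta2, as stated in the text.
```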
For the “high-spin” $d^{7}$ case, the ground state has nominal configuration
$(t_{2g})^{5}(e_{g})^{2}$, with three unpaired electrons ($S=3/2$), as shown
in Fig. 1(b). In the absence of trigonal splitting ($\Delta_{2}=0$), there is
a three-fold orbital degeneracy associated with the $t_{2g}$ levels, leading
to an effective orbital momentum $L_{\rm eff}=1$. Spin-orbit coupling
$\mathcal{H}_{\rm SOC}=\lambda\mathbf{L}\cdot\mathbf{S}$ splits the resulting
multiplets into $J_{\rm eff}=1/2$, 3/2, and $5/2$ states. The $j_{1/2}$
doublet is always the ground state, and furnishes the effective spin-orbital
moment relevant at low energiesLines (1963). The resulting $J_{\rm eff}$
multiplets are composed of many configurations belonging to different orbital
occupancies and spin values. However, $3d$ metals typically satisfy
$J_{H},\Delta_{1}\gg\lambda,\Delta_{2}$, such that configurations belonging
precisely to the $(t_{2g})^{5}(e_{g})^{2}$, $S=3/2$ manifold carry the
dominant weight. When projected into this manifold, the ground state doublet
can be written in terms of $|m_{L},m_{S}\rangle$ states asLines (1963):
$\displaystyle\left|j_{1/2},+\frac{1}{2}\right\rangle=$ $\displaystyle\
c_{1}\left|-1,\frac{3}{2}\right\rangle+c_{2}\left|0,\frac{1}{2}\right\rangle+c_{3}\left|1,-\frac{1}{2}\right\rangle$
(13) $\displaystyle\left|j_{1/2},-\frac{1}{2}\right\rangle=$ $\displaystyle\
c_{1}\left|1,-\frac{3}{2}\right\rangle+c_{2}\left|0,-\frac{1}{2}\right\rangle+c_{3}\left|-1,\frac{1}{2}\right\rangle$
(14)
where the coefficients $c_{n}$ vary with $\Delta_{2},\lambda$. The pure $L,S$
multiplets $|m_{L},m_{S}\rangle$ can be conveniently expressed in terms of the
single-particle levels with precise orbital momentum:
$\displaystyle|e_{a,\sigma}\rangle=$ $\displaystyle\ |d_{z^{2},\sigma}\rangle$
(15) $\displaystyle|e_{b,\sigma}\rangle=$ $\displaystyle\
|d_{x^{2}-y^{2},\sigma}\rangle$ (16) $\displaystyle|t_{+,\sigma}\rangle=$
$\displaystyle\
-\frac{1}{\sqrt{2}}\left(|d_{yz,\sigma}\rangle+i|d_{xz,\sigma}\rangle\right)$
(17) $\displaystyle|t_{0,\sigma}\rangle=$ $\displaystyle\
|d_{xy,\sigma}\rangle$ (18) $\displaystyle|t_{-,\sigma}\rangle=$
$\displaystyle\
\frac{1}{\sqrt{2}}\left(|d_{yz,\sigma}\rangle-i|d_{xz,\sigma}\rangle\right)$
(19)
This leads to:
$\displaystyle\left|-1,\frac{3}{2}\right\rangle=$ $\displaystyle\
\left|e_{a,\uparrow}e_{b,\uparrow}t_{+,\uparrow}t_{0,\uparrow}t_{0,\downarrow}t_{-,\uparrow}t_{-,\downarrow}\right\rangle$
(20) $\displaystyle\left|0,\frac{1}{2}\right\rangle=$ $\displaystyle\
\frac{1}{\sqrt{3}}\left|e_{a,\uparrow}e_{b,\uparrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\downarrow}t_{-,\uparrow}t_{-,\downarrow}\right\rangle$
(21) $\displaystyle\
+\frac{1}{\sqrt{3}}\left|e_{a,\uparrow}e_{b,\downarrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\uparrow}t_{-,\uparrow}t_{-,\downarrow}\right\rangle$
$\displaystyle\
+\frac{1}{\sqrt{3}}\left|e_{a,\downarrow}e_{b,\uparrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\uparrow}t_{-,\uparrow}t_{-,\downarrow}\right\rangle$
$\displaystyle\left|1,-\frac{1}{2}\right\rangle=$ $\displaystyle\
\frac{1}{\sqrt{3}}\left|e_{a,\uparrow}e_{b,\downarrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\uparrow}t_{0,\downarrow}t_{-,\downarrow}\right\rangle$
(22) $\displaystyle\
+\frac{1}{\sqrt{3}}\left|e_{a,\downarrow}e_{b,\uparrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\uparrow}t_{0,\downarrow}t_{-,\downarrow}\right\rangle$
$\displaystyle\
+\frac{1}{\sqrt{3}}\left|e_{a,\downarrow}e_{b,\downarrow}t_{+,\uparrow}t_{+,\downarrow}t_{0,\uparrow}t_{0,\downarrow}t_{-,\uparrow}\right\rangle$
The time-reversed partners can be similarly obtained. For $\Delta_{2}=0$, the
coefficients are $c_{1}=1/\sqrt{2}$, $c_{2}=1/\sqrt{3}$, $c_{3}=1/\sqrt{6}$.
In this same limit, the multiplet energies satisfy:
$\displaystyle E_{3/2}-E_{1/2}=\frac{1}{2}\lambda$ (23) $\displaystyle
E_{5/2}-E_{1/2}=\frac{4}{3}\lambda$ (24)
With $\lambda_{\rm Co}\approx 60$ meV, the $j_{1/2}\to j_{3/2}$ excitation is
expected to appear in the range of $\sim 30$ meV, as has been seen
experimentally in numerous compoundsSarte _et al._ (2018); Ross _et al._
(2017); Songvilay _et al._ (2020); Kim _et al._ (2021).
### II.2 Local Effects of Trigonal Distortion
Figure 2: Evolution of the single-ion properties with trigonal CFS
$\Delta_{2}$. (a) Energy spectrum. (b) Wavefunction coefficients $c_{n}$. (c)
$g$-tensor components. $g_{||}$ refers to the direction parallel to the
trigonal distortion axis, $\hat{x}+\hat{y}+\hat{z}$.
For finite $\Delta_{2}$, the composition of the doublet is significantly
altered. Here, we review similar discussions in Ref. Lines, 1963; Liu, 2021.
In Fig. 2, we show the evolution of the local spectrum as a function of
$\Delta_{2}/\lambda$, as well as the coefficients $c_{n}$ and $g$-tensor for
the lowest doublet.
In the limit of large trigonal elongation $\Delta_{2}>0$, the unpaired hole in
the $t_{2g}$ levels occupies the singly degenerate $a$ level, thus quenching
the orbital moment completely. This corresponds to $c_{2}\to 1$ and
$c_{1},c_{3}\to 0$. The energetic splitting between the lowest two doublets
becomes small, thus restoring the fourfold degeneracy of the nearly pure
$S=3/2$ moment. The $m_{s}=\pm 1/2$ states lie slightly below the $m_{s}=\pm
3/2$ states, due to residual easy-plane single-ion anisotropy. As such, the
$g$-tensor for the lowest doublet satisfies $g_{\perp}>g_{||}$, where $g_{||}$
refers to the component along the trigonal axis. However, a model
incorporating only the lowest doublet remains valid only as long as the
single-ion anisotropy remains large compared to the intersite magnetic
exchange (roughly, if $\Delta_{2}<\lambda/2$).
For the opposite case of trigonal compression $\Delta_{2}<0$, the unpaired
hole in the $t_{2g}$ levels occupies the doubly degenerate $e$ levels, thus
retaining some orbital degeneracy consistent with $L_{\rm eff}=1/2$. This
corresponds to $c_{1}\to 1$, $c_{2},c_{3}\to 0$. The effect of SOC is then to
split the $S=3/2$, $L_{\rm eff}=1/2$ manifold into four doublets. Since the
lowest doublet corresponds to pure $m_{s}=\pm 3/2$, this may be considered as
strong easy-axis single-ion anisotropy. Consistently, the $g$-tensor satisfies
$g_{||}\gg g_{\perp}$ in this limit. The gap between the lowest doublets
converges to $\lambda/3\sim 20$ meV, which should typically exceed the
intersite magnetic coupling. For this reason, a model incorporating only the
lowest doublet may remain valid for large $\Delta_{2}<0$.
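To illustrate the level structure of Fig. 2(a), the sketch below diagonalizes an effective Hamiltonian acting in the $L_{\rm eff}=1\otimes S=3/2$ manifold, $H=\lambda_{\rm eff}\,\mathbf{L}\cdot\mathbf{S}+\delta\,L_{z}^{2}$, with the quantization axis taken along the trigonal direction. The parameters $\lambda_{\rm eff}$ and $\delta$ are treated here as free inputs (their precise relation to $\lambda$ and $\Delta_{2}$ involves the projection onto the ground term and is not reproduced by this sketch); at $\delta=0$ and $\lambda_{\rm eff}=\lambda/3$ the spectrum reproduces Eqs. (23) and (24).

```python
import numpy as np

def angular_momentum(j):
    """Matrices (Jx, Jy, Jz) for angular momentum j in the |j, m> basis (m descending)."""
    m = np.arange(j, -j - 1, -1)
    jp = np.diag(np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1)), k=1)  # raising operator
    jx = (jp + jp.conj().T) / 2
    jy = (jp - jp.conj().T) / (2 * 1j)
    jz = np.diag(m).astype(complex)
    return jx, jy, jz

def single_ion_spectrum(lam_eff, delta_eff):
    """Spectrum of H = lam_eff * L.S + delta_eff * Lz^2 for L_eff = 1, S = 3/2."""
    L = angular_momentum(1.0)
    S = angular_momentum(1.5)
    eyeS = np.eye(4)
    H = sum(lam_eff * np.kron(L[a], S[a]) for a in range(3))
    H = H + delta_eff * np.kron(L[2] @ L[2], eyeS)
    return np.linalg.eigvalsh(H)

lam = 60.0                                   # Co(II) SOC in meV
E = single_ion_spectrum(lam / 3.0, 0.0)      # no trigonal splitting
# E - E.min() -> 0 (x2), 30 (x4), 80 (x6) meV, i.e. lam/2 and 4*lam/3
# as in Eqs. (23)-(24); finite delta_eff splits these multiplets into doublets.
```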
## III Edge-Sharing Bond Hoppings
### III.1 General Form
The effective $d$-$d$ hopping between metal sites is described by:
$\displaystyle\mathcal{H}_{\rm
hop}=\sum_{ij,\sigma}\mathbf{c}_{i,\sigma}^{\dagger}\ \mathbb{T}_{ij}\
\mathbf{c}_{j,\sigma}$ (25)
For an ideal edge-sharing bond, $C_{2v}$ symmetry restricts the form of the
hopping matrices. In terms of the global $(x,y,z)$ coordinates defined in Fig.
3, the matrices are constrained to take the following form, for the Z-bond:
$\displaystyle\mathbb{T}_{Z}=\left(\begin{array}[]{ccccc}t_{1}&t_{2}&0&0&0\\\
t_{2}&t_{1}&0&0&0\\\ 0&0&t_{3}&t_{6}&0\\\ 0&0&t_{6}&t_{4}&0\\\
0&0&0&0&t_{5}\end{array}\right)$ (31)
Of these, $t_{1},t_{3},t_{4}$, and $t_{5}$ are primarily direct hopping
between metal atoms, as shown in Fig. 3. Only $t_{2}$ and $t_{6}$ have
significant contributions from both direct hopping and hybridization with the
ligands.
Figure 3: Summary of symmetry allowed hoppings for ideal Z-bonds with $C_{2v}$
symmetry. $t_{1},t_{3},t_{4}$, and $t_{5}$ arise from direct metal-metal
hopping, while $t_{2}$ and $t_{6}$ have contributions from both direct and
ligand-assisted processes. The global $(xyz)$ and local
$(\hat{e}_{1}\hat{e}_{2}\hat{e}_{3})$ coordinates are shown.
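For reference, a minimal numerical construction of the Z-bond matrix of Eq. (31) is given below; the parameter values in the usage line are purely illustrative and are not fitted results.

```python
import numpy as np

def hopping_matrix_z(t1, t2, t3, t4, t5, t6):
    """Z-bond hopping matrix of Eq. (31), basis (yz, xz, xy, z2, x2-y2)."""
    return np.array([
        [t1, t2, 0., 0., 0.],
        [t2, t1, 0., 0., 0.],
        [0., 0., t3, t6, 0.],
        [0., 0., t6, t4, 0.],
        [0., 0., 0., 0., t5],
    ])

# Illustrative values (eV) in the direct-hopping-dominated regime discussed below:
T_Z = hopping_matrix_z(t1=0.05, t2=-0.03, t3=-0.25, t4=-0.05, t5=-0.05, t6=0.10)
```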
### III.2 Survey of Materials
There are two main factors affecting the balance of direct vs. ligand-assisted
hopping: (i) the degree of hybridization with the ligands, and (ii) the Co-Co
bond lengths. In general, metal-ligand hybridization is typically lower in
$3d$ metal compounds than in their $4d$ and $5d$ counterparts, particularly
for $t_{2g}$ orbitals. It is precisely this effect that reduces $t_{2g}-e_{g}$
splitting $\Delta_{1}$ for $3d$ metals, which is required for stability of the
high-spin state in Co $3d^{7}$ compounds. For this reason, ligand-assisted
hopping is expected to be suppressed overall. Real materials span a wide range of
Co-Co distances in edge-sharing Co(II) compounds, e.g. from $\sim 2.9$ Å in
BaCo2(AsO4)2Dordević (2008) to $\sim 3.9$ Å in CoI2Wyckoff and Wyckoff (1963).
While we leave complete discussion of individual materials for later work, it
is useful to establish realistic ranges of hoppings. In order to do so, we
employed fully relativistic density functional theory calculations performed
with FPLO at the GGA (PBE) level. Hopping integrals were extracted by
formulating Wannier orbitals via projection onto atomic $d$-orbitals and/or
$p$-orbitals.
Figure 4: Evolution of the relevant hoppings as a function of Co-Co distance.
Solid lines correspond to hypothetical stretched cubic CoO (see text). Points
correspond to real materials; BCAO = BaCo2(AsO4)2, NCTO = Na2Co2TeO6, NCSO =
Na3Co2SbO6, CNO = CoNb2O6. (a) Hoppings in the $d$-only scheme. (b) Hoppings
in the $p+d$ scheme.
To get a general idea of the bond-length dependence, we first considered
hypothetical cubic CoO (NaCl type; $Fm\bar{3}m$) structures with symmetrically
stretched unit cells. Hoppings for the 5-band $3d$-only fitting
are shown in Fig. 4(a). This construction maintains 90∘ Co-O-Co bond angles,
which deviates slightly from real materials, but nonetheless provides insight.
In particular, we find that, over the entire range of Co-Co bond lengths,
direct hopping is the largest, leading to $|t_{3}|>|t_{2}|,|t_{6}|$. The same
trend holds for estimates of real materials. We show in Fig. 4(a)
results for several prominent materials based on literature structures:
CoNb2O6 (Ref. Sarvezuk _et al._ , 2011), BaCo2(AsO4)2 (Ref. Dordević, 2008),
Na3Co2SbO6 (Ref. Songvilay _et al._ , 2020), and Na2Co2TeO6 (Ref. Xiao _et
al._ , 2019)222The Na2Co2TeO6 structure contains disorder in the Na position,
in which each Na position has occupancy 2/3. To perform calculations, we
artificially increased the occupancy to 1, which corresponds to Na3Co2TeO6. It
is expected this change in the filling should have minimal impact on the
computed hoppings.. For each case, the cubic projection coordinates were
defined to be orthogonal but minimize the difference with the corresponding
Co-O bond vectors in the (distorted) octahedra. From these results, it is
evident that the physical region corresponds to large $t_{3}$ and subdominant
$t_{6}$. By contrast, $t_{2}$ is suppressed, such that
$|t_{2}|\sim|t_{1}|,|t_{4}|,|t_{5}|\lesssim 0.05$ eV. A similar situationWellm
_et al._ (2021) was recently proposed for Na2BaCo(PO4)2. For materials with
Co-Co bond lengths $\sim$ 3.0 Å, direct hopping almost certainly dominates.
This differs from the previous theoretical works predicting large Kitaev
couplingsLiu and Khaliullin (2018); Liu (2021); Liu _et al._ (2020); Sano
_et al._ (2018), which considered ligand-mediated hopping $t_{2}$ and $t_{6}$
to be the largest. This discrepancy calls for a reexamination of the magnetic
couplings.
Finally, in Fig. 4(b), we show Slater-Koster hoppings extracted from the CoO
calculations by fitting with an 8-band $(3d+2p)$ model including explicitly
the O orbitals. These are relevant for considering some exchange processes
(see below). In terms of these, the $d$-only hoppings are given approximately
by:
$\displaystyle t_{2}\approx$ $\displaystyle\
-\frac{1}{2}t_{dd}^{\pi}+\frac{(t_{pd}^{\pi})^{2}}{\Delta_{pd}}$ (32)
$\displaystyle t_{3}\approx$ $\displaystyle\ t_{dd}^{\sigma}$ (33)
$\displaystyle t_{6}\approx$ $\displaystyle\
\frac{\sqrt{3}}{4}t_{dd}^{\sigma}-\frac{t_{pd}^{\pi}t_{pd}^{\sigma}}{\Delta_{pd}}$
(34)
where $\Delta_{pd}=4.5$ eV is the charge-transfer energy from Co $d$ to O $p$
orbitals. For $t_{2}$ and $t_{6}$, the contributions from ligand-assisted
hopping are positive, while the direct hopping contributions are negative.
Figure 5: (a-c) Evolution of the magnetic couplings in the $d$-only model for
ideal edge-sharing bond with no trigonal distortion ($\Delta_{2}=0$) and
$\Delta_{1}=1.1$ eV, $U=3.25$ eV, $J_{H}=0.7$ eV,
$t_{1}=|t_{3}|/4,t_{4}=t_{5}=-|t_{3}|/4,t_{6}=+0.1$ eV. The ferromagnetic
correction $\delta J$ due to ligand exchange processes is not included (see
text). (d) Computed couplings along the path indicated in (a-c), interpolating
between the direct and ligand-assisted hopping regimes. A correction $\delta
J=-2$ meV has been added.
## IV Magnetic Couplings
### IV.1 General Form
For ideal edge-sharing bonds with $C_{2v}$ symmetry, the magnetic couplings
may be written in the familiar formRau _et al._ (2014):
$\displaystyle\mathcal{H}_{ij}=$ $\displaystyle\ J\
\mathbf{S}_{i}\cdot\mathbf{S}_{j}+K\
S_{i}^{\gamma}S_{j}^{\gamma}+\Gamma\left(S_{i}^{\alpha}S_{j}^{\beta}+S_{i}^{\beta}S_{j}^{\alpha}\right)$
$\displaystyle+\Gamma^{\prime}\left(S_{i}^{\alpha}S_{j}^{\gamma}+S_{i}^{\gamma}S_{j}^{\alpha}+S_{i}^{\beta}S_{j}^{\gamma}+S_{i}^{\gamma}S_{j}^{\beta}\right)$
(35)
where $(\alpha,\beta,\gamma)=(x,y,z)$ for the Z-bonds, $(y,z,x)$ for the
X-bonds, and $(z,x,y)$ for the Y-bonds, in terms of the global $xyz$
coordinates. In order to estimate the couplings in the following sections, we
exactly diagonalize the full $d$-only Hamiltonian
$\mathcal{H}_{U}+\mathcal{H}_{\rm CFS}+\mathcal{H}_{\rm SOC}+\mathcal{H}_{\rm
hop}$ for two neighboring sites. The couplings are extracted by projecting
onto the ideal $j_{1/2}$ doublets defined in eq’n (13, 14). This procedure is
analogous to Ref. Winter _et al._ , 2016, and is guaranteed to yield
couplings that converge to the results of perturbation theory with respect to
$\mathcal{H}_{\rm hop}$.
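As a minimal numerical illustration of eq'n (35) (not the exact-diagonalization machinery itself), the sketch below builds the two-site exchange matrix in the projected pseudospin-1/2 basis for a chosen bond; the coupling values are arbitrary placeholders in meV.

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
S = {"x": sx, "y": sy, "z": sz}

def two_site(a, b):
    """S_i^a (x) S_j^b on the two-site pseudospin Hilbert space."""
    return np.kron(S[a], S[b])

def h_bond(J, K, G, Gp, bond="Z"):
    """Exchange matrix of eq'n (35) for one bond, in global xyz spin components."""
    perm = {"Z": ("x", "y", "z"), "X": ("y", "z", "x"), "Y": ("z", "x", "y")}
    a, b, g = perm[bond]
    H = J * sum(two_site(c, c) for c in "xyz")
    H += K * two_site(g, g)
    H += G * (two_site(a, b) + two_site(b, a))
    H += Gp * (two_site(a, g) + two_site(g, a) + two_site(b, g) + two_site(g, b))
    return H

# Placeholder couplings (meV) for a Z-bond; eigenvalues give the two-site spectrum.
H_Z = h_bond(J=-4.0, K=1.0, G=0.5, Gp=0.0, bond="Z")
print(np.round(np.linalg.eigvalsh(H_Z), 3))
```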
As discussed in Ref. Liu and Khaliullin, 2018, there are several different
electronic processes that contribute to the magnetic couplings at low orders
in the full $p+d$ model. These can be grouped into two categories: (i) those
involving excited states with up to one hole occupying the ligand orbitals,
and (ii) those involving multiple excited ligand holes simultaneously. The
majority of these processes are captured, in principle, in the downfolded
$d$-only hopping model, assuming suitably renormalized hopping and on-site
Coulomb terms. We assume that the hopping integrals extracted from DFT
incorporate this renormalization already. However, our approach does not
capture the subset of processes in category (ii) in which two holes meet on a
ligand in different $p$-orbitals, and interact via Hund’s coupling. Such
processes effectively renormalize the nearest neighbor Coulomb terms when
downfolded, which we have not considered explicitly. We therefore estimate the
effects of the additional contributions. From Ref. Liu and Khaliullin, 2018,
there is a correction to both $J$ and $K$ given approximately by:
$\displaystyle\delta J\approx$ $\displaystyle\
-\frac{\gamma}{\Delta_{pd}^{2}}\left(\frac{5}{2}(t_{pd}^{\sigma})^{4}+\frac{3}{2}(t_{pd}^{\sigma})^{2}(t_{pd}^{\pi})^{2}+(t_{pd}^{\pi})^{4}\right)$
(36) $\displaystyle\delta K\approx$ $\displaystyle\
-\frac{\gamma}{\Delta_{pd}^{2}}\left(\frac{1}{2}(t_{pd}^{\sigma})^{2}(t_{pd}^{\pi})^{2}-\frac{1}{2}(t_{pd}^{\pi})^{4}\right)$
(37) $\displaystyle\gamma=$ $\displaystyle\
\frac{40J_{H}^{p}}{81(\Delta_{pd}+U_{p}/2)^{2}}$ (38)
where $J_{H}^{p}$ is the Hund’s coupling at the ligand, $U_{p}$ is the excess
Coulomb repulsion at the ligand, $\Delta_{pd}\approx 4.5$ eV is the charge-
transfer gap. We take the same approximations as Ref. Liu and Khaliullin, 2018
($U_{p}=0.7\ U_{t2g},\ J_{H}^{p}=0.3\ U_{p}$), and consider
$t_{pd}^{\sigma}\approx 1$ eV, $t_{pd}^{\pi}\approx-0.5$ eV, according to Fig.
4(b). From this, we estimate the correction to the Kitaev coupling to be
negligible $\delta K\sim 0.1$ meV, while the corrections to the Heisenberg
coupling may be typically in the range $\delta J\sim-2$ to $-6$ meV. The
remaining contributions to the exchange are investigated in the next sections.
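As a rough cross-check, eq'ns (36)-(38) can be evaluated directly. The sketch below uses the stated approximations ($U_{p}=0.7\,U_{t2g}$, $J_{H}^{p}=0.3\,U_{p}$) and nominal hoppings; with $t_{pd}^{\sigma}=1$ eV the correction comes out somewhat below 2 meV in magnitude, and the quoted $-2$ to $-6$ meV range is reached for modestly larger $t_{pd}^{\sigma}$, illustrating how sensitively $\delta J$ depends on that parameter.

```python
def ligand_corrections(t_pds, t_pdp, U_t2g=3.25, D_pd=4.5):
    """Evaluate eq'ns (36)-(38): returns (dJ, dK) in eV for hoppings/energies in eV."""
    U_p = 0.7 * U_t2g
    J_Hp = 0.3 * U_p
    gamma = 40.0 * J_Hp / (81.0 * (D_pd + 0.5 * U_p) ** 2)
    dJ = -(gamma / D_pd**2) * (2.5 * t_pds**4 + 1.5 * t_pds**2 * t_pdp**2 + t_pdp**4)
    dK = -(gamma / D_pd**2) * (0.5 * t_pds**2 * t_pdp**2 - 0.5 * t_pdp**4)
    return dJ, dK

# Sweep t_pd_sigma to show the sensitivity of the correction (t_pd_pi fixed at -0.5 eV).
for t_pds in (1.0, 1.2, 1.4):
    dJ, dK = ligand_corrections(t_pds, t_pdp=-0.5)
    print(f"t_pd_sigma = {t_pds:.1f} eV: dJ = {1e3*dJ:+.2f} meV, dK = {1e3*dK:+.3f} meV")
# dJ grows in magnitude from roughly -1.5 meV to several meV across this sweep,
# while dK stays at the ~0.1 meV level, i.e. negligible as stated above.
```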
### IV.2 General Hopping Dependence
In the following, we focus on the contributions to the magnetic exchange from
(downfolded) $d$-$d$ hopping. We first consider the case $\Delta_{2}=0$, for
which $\Gamma^{\prime}=0$ strictly. Up to second order in hopping, the
couplings may be written:
$\displaystyle J=$ $\displaystyle\
\mathbf{t}\cdot\mathbb{M}_{J}\cdot\mathbf{t}^{T}+\delta J$ (39) $\displaystyle
K=$ $\displaystyle\ \mathbf{t}\cdot\mathbb{M}_{K}\cdot\mathbf{t}^{T}+\delta K$
(40) $\displaystyle\Gamma=$ $\displaystyle\
\mathbf{t}\cdot\mathbb{M}_{\Gamma}\cdot\mathbf{t}^{T}$ (41)
where:
$\displaystyle\mathbf{t}=\left(t_{1}\ t_{2}\ t_{3}\ t_{4}\ t_{5}\
t_{6}\right)$ (42)
and $\mathbb{M}$ is a function of $F_{n},\lambda,\Delta_{n}$. We use
$\Delta_{1}=1.1$ eV and $\lambda=0.06$ eV, consistent with the DFT estimates
of the previous sections, together with $U_{t2g}=3.25$ eV and $J_{t2g}=0.7$ eV,
following Ref. Das _et al._ , 2021. To estimate $\mathbb{M}$ for these
parameters, we computed the magnetic couplings for a grid of hoppings
$-0.05<t_{n}<+0.05$ and fit the resulting couplings. This provides an estimate
of the couplings in the perturbative regime:
$\displaystyle\mathbb{M}_{J}=$ $\displaystyle\
\left(\begin{array}[]{c|cccccc}&t_{1}&t_{2}&t_{3}&t_{4}&t_{5}&t_{6}\\\
\hline\cr t_{1}&-55&0&143&10&2&0\\\ t_{2}&0&-76&0&0&0&-77\\\
t_{3}&0&0&-33&2&2&0\\\ t_{4}&0&0&0&260&0&0\\\ t_{5}&0&0&0&0&259&0\\\
t_{6}&0&0&0&0&0&165\end{array}\right)$ (50) $\displaystyle\mathbb{M}_{K}=$
$\displaystyle\
\left(\begin{array}[]{c|cccccc}&t_{1}&t_{2}&t_{3}&t_{4}&t_{5}&t_{6}\\\
\hline\cr t_{1}&128&0&-119&-9&35&0\\\ t_{2}&0&-108&0&0&0&86\\\
t_{3}&0&0&-8&-2&5&0\\\ t_{4}&0&0&0&-4&0&0\\\ t_{5}&0&0&0&0&1&0\\\
t_{6}&0&0&0&0&0&-147\end{array}\right)$ (58)
$\displaystyle\mathbb{M}_{\Gamma}=$ $\displaystyle\
\left(\begin{array}[]{c|cccccc}&t_{1}&t_{2}&t_{3}&t_{4}&t_{5}&t_{6}\\\
\hline\cr t_{1}&0&-34&0&0&0&49\\\ t_{2}&0&0&-116&-2&1&0\\\
t_{3}&0&0&0&0&0&-31\\\ t_{4}&0&0&0&0&0&-67\\\ t_{5}&0&0&0&0&0&0\\\
t_{6}&0&0&0&0&0&0\end{array}\right)$ (66)
in units of 1/eV. Recall, for real materials we generally anticipate
$|t_{3}|>|t_{6}|>|t_{1}|\sim|t_{2}|\sim|t_{4}|\sim|t_{5}|$. Furthermore,
$t_{1}>0$, $t_{3}<0$, $t_{4}<0$, $t_{6}>0$. These results highlight several
key aspects:
Heisenberg $J$: For $J$, there are various contributions of different sign.
Those arising from hopping between $t_{2g}$ orbitals ($t_{1},t_{2},t_{3}$) are
exclusively ferromagnetic. Processes involving hopping between $e_{g}$
orbitals ($t_{4},t_{5}$) are exclusively antiferromagnetic. The terms related
to $e_{g}$-$t_{2g}$ hopping tend to be antiferromagnetic $\propto t_{6}^{2}$
and $t_{2}t_{6}$ given that ab-initio tends to yield $t_{2}<0$ and $t_{6}>0$.
Kitaev $K$: There are also different contributions to $K$ of varying sign.
Hopping between $e_{g}$ orbitals ($t_{4},t_{5}$) makes little contribution to
the anisotropic couplings overall. The sign of the contribution from
$t_{2g}$-$t_{2g}$ hopping depends on the balance of transfer integrals: terms
$\propto t_{2}^{2}$ are ferromagnetic, while terms $\propto t_{1}^{2}$ and
$t_{1}t_{3}$ are antiferromagnetic. Contributions related to $t_{2g}$-$e_{g}$
hopping may take both signs: terms $\propto t_{6}^{2}$ are ferromagnetic,
while terms $\propto t_{2}t_{6}$ depend on the sign of $t_{2}$.
Off-diagonal $\Gamma$: For the off-diagonal couplings, the primary
contribution arises at order $t_{2}t_{3}$, and as a result
$\text{sign}(\Gamma)\approx-\text{sign}(t_{2}t_{3})$. There are no
contributions that are diagonal with respect to the hopping pairs. A similar
result appears in Ref. Liu and Khaliullin, 2018.
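The quadratic forms of eq'ns (39)-(41) can also be evaluated directly from the fitted matrices. The sketch below transcribes eq'ns (50), (58) and (66), assuming the off-diagonal entries are the full coefficients of the cross terms $t_{i}t_{j}$ (upper-triangular convention) and quoting the result in the same units as the matrices; the hopping values are illustrative only. As a consistency check, in the pure ligand-assisted limit (only $t_{2}$ and $t_{6}$ nonzero) it reproduces $\Gamma=0$, $K<0$, and $J>0$ from the $d$-$d$ processes, as described later in this section.

```python
import numpy as np

# Coefficient matrices transcribed from eq'ns (50), (58), (66); row/column order (t1..t6).
M_J = np.array([[-55, 0, 143, 10, 2, 0],
                [0, -76, 0, 0, 0, -77],
                [0, 0, -33, 2, 2, 0],
                [0, 0, 0, 260, 0, 0],
                [0, 0, 0, 0, 259, 0],
                [0, 0, 0, 0, 0, 165]], dtype=float)
M_K = np.array([[128, 0, -119, -9, 35, 0],
                [0, -108, 0, 0, 0, 86],
                [0, 0, -8, -2, 5, 0],
                [0, 0, 0, -4, 0, 0],
                [0, 0, 0, 0, 1, 0],
                [0, 0, 0, 0, 0, -147]], dtype=float)
M_G = np.array([[0, -34, 0, 0, 0, 49],
                [0, 0, -116, -2, 1, 0],
                [0, 0, 0, 0, 0, -31],
                [0, 0, 0, 0, 0, -67],
                [0, 0, 0, 0, 0, 0],
                [0, 0, 0, 0, 0, 0]], dtype=float)

def couplings(t):
    """Evaluate the quadratic forms of eq'ns (39)-(41) (without the delta-J correction)."""
    t = np.asarray(t, dtype=float)
    return t @ M_J @ t, t @ M_K @ t, t @ M_G @ t

# Hypothetical ligand-assisted limit: only t2 and t6 nonzero (values in eV, illustrative).
J, K, G = couplings([0.0, 0.05, 0.0, 0.0, 0.0, 0.10])
print(f"ligand-assisted limit: J={J:+.2f}, K={K:+.2f}, Gamma={G:+.2f} (matrix units)")
# J > 0 (antiferromagnetic), K < 0 (ferromagnetic), Gamma = 0, as stated in the text.

# Hypothetical direct-hopping regime: t3 dominant, t1=|t3|/4, t4=t5=-|t3|/4, t6=0.1 eV.
t3 = -0.15
print(couplings([abs(t3) / 4, -0.02, t3, -abs(t3) / 4, -abs(t3) / 4, 0.10]))
```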
The appearance of hopping combinations such as $t_{2}t_{6}$, which do not
conserve the $t_{2g}$ and $e_{g}$ occupancies, may be surprising at first. If
the ground state doublets have approximate configuration
$(t_{2g})^{5}(e_{g})^{2}$, one might expect terms mixing the occupancy to be
forbidden at low orders, because they do not connect ground states. However,
in reality, neither occupancy is preserved by either spin-orbit coupling or
the full Coulomb terms, which are treated exactly (not perturbatively) in this
approach.
For general parameters, we expect all three couplings to be finite. The
computed hopping-dependence of $K,J,\Gamma$ are shown in Fig. 5 for the choice
$t_{1}=|t_{3}|/4,t_{4}=t_{5}=-|t_{3}|/4,t_{6}=+0.1$ eV, which is compatible
with the ab-initio estimates. With this choice, we interpolate between the
limits of dominant ligand vs. direct hopping.
In the hypothetical regime of pure ligand-mediated hopping ($t_{2}$ and
$t_{6}$), we find $\Gamma=0$, while contributions from $d$-$d$ hopping satisfy
$J>0$ is antiferromagnetic and $K<0$ is ferromagnetic. These findings verify
expectations from perturbation theory for this limitLiu and Khaliullin (2018);
Sano _et al._ (2018). The Kitaev coupling is the largest, with values
$|K/J|\sim 1-10$ depending on the precise balance of hoppings. If we consider
also the ferromagnetic correction $\delta J\sim-2$ to $-6$ meV discussed in
the previous section, the overall sign of $J$ should reverse, and the
magnitude may be suppressed, such that dominant Kitaev coupling is possible
with some tuning.
By contrast, for the physically relevant region of large $t_{3}$ and finite
values of all hoppings, we anticipate that ferromagnetic $J<0$ is the dominant
coupling, particularly due to contributions $\propto t_{1}t_{3}$ and the
correction $\delta J$. In fact, $\delta J$ (which is just the regular
ferromagnetic exchange for 90∘ bondsGoodenough (1963)) is the largest
contribution. All possible combinations of signs of $K$ and $\Gamma$ are
possible depending on the balance of hoppings, but their magnitude is
suppressed relative to $J$. For very short Co-Co bond lengths, where the
direct hopping contribution to $t_{2}$ is the largest ($t_{2}<0$), the
tendency is for $K,\Gamma<0$. For longer bond lengths, where ligand-mediated
contributions are the largest ($t_{2}>0$), then $K,\Gamma>0$.
### IV.3 Effect of Trigonal Distortion
We next consider the effects of trigonal distortion. Given the relatively
small value of the atomic SOC constant $\lambda_{\rm Co}\approx 60$ meV, small
distortions may be relevant for Co(II) compounds. This makes clarifying the
size and sign of $\Delta_{2}$ important for modelling such materials.
Following Ref. Lines, 1963, the alterations to the nature of the local moments
are expected to induce significant uniaxial anisotropy along the trigonal
axis. While the $K,J,\Gamma,\Gamma^{\prime}$ notation is convenient for
discussing the couplings in the limit $\Delta_{2}\to 0$, the axial anisotropy
is more apparent in alternative local XXZ coordinates shown in Fig. 3. In
particular, for each bond we define local coordinates: $\hat{e}_{1}$ is
parallel to the bond and $\hat{e}_{3}=(\hat{x}+\hat{y}+\hat{z})/\sqrt{3}$ is
along the global trigonal axis. Thus, the couplings may be written Ross _et
al._ (2011); Maksimov _et al._ (2019):
$\displaystyle\mathcal{H}_{ij}=$ $\displaystyle\
J_{xy}\left(S_{i}^{1}S_{j}^{1}+S_{i}^{2}S_{j}^{2}\right)+J_{z}S_{i}^{3}S_{j}^{3}$
(67) $\displaystyle\
+2J_{\pm\pm}\left(S_{i}^{1}S_{j}^{1}-S_{i}^{2}S_{j}^{2}\right)+J_{z\pm}\left(S_{i}^{3}S_{j}^{2}+S_{i}^{2}S_{j}^{3}\right)$
where the superscript numbers refer to the local directions. The two
parameterizations may be relatedMaksimov _et al._ (2019) via:
$\displaystyle J_{xy}=$ $\displaystyle\
J+\frac{1}{3}\left(K-\Gamma-2\Gamma^{\prime}\right)$ (68) $\displaystyle
J_{z}=$ $\displaystyle\ J+\frac{1}{3}\left(K+2\Gamma+4\Gamma^{\prime}\right)$
(69) $\displaystyle J_{\pm\pm}=$ $\displaystyle\
-\frac{1}{6}\left(K+2\Gamma-2\Gamma^{\prime}\right)$ (70) $\displaystyle
J_{z\pm}=$ $\displaystyle\
-\frac{\sqrt{2}}{3}\left(K-\Gamma+\Gamma^{\prime}\right)$ (71)
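The conversion between the two parameterizations is a simple linear map; the sketch below implements eq'ns (68)-(71) and checks it against the coupling set quoted later in the text for NCSO/NCTO.

```python
import numpy as np

def jkggp_to_xxz(J, K, G, Gp):
    """Convert (J, K, Gamma, Gamma') to (J_xy, J_z, J_pmpm, J_zpm) via eq'ns (68)-(71)."""
    J_xy = J + (K - G - 2.0 * Gp) / 3.0
    J_z = J + (K + 2.0 * G + 4.0 * Gp) / 3.0
    J_pp = -(K + 2.0 * G - 2.0 * Gp) / 6.0
    J_zp = -np.sqrt(2.0) / 3.0 * (K - G + Gp)
    return J_xy, J_z, J_pp, J_zp

# Check against the NCSO/NCTO values suggested in point (4) of the conclusions:
# J = -3, K = Gamma' = +0.25, Gamma = +0.5 (meV) should give
# (J_xy, J_z, J_pmpm, J_zpm) = (-3.25, -2.25, -0.125, 0).
print(jkggp_to_xxz(J=-3.0, K=0.25, G=0.5, Gp=0.25))
```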
A similar parameterization was also suggested in Ref. Liu, 2021; Liu _et al._
, 2020. In general, for $\Delta_{2}<0$, as the moments become more axial,
components of the exchange along the $\hat{e}_{3}$ direction are expected to
be enhanced compared to the $\hat{e}_{1}$ and $\hat{e}_{2}$ directionsLines
(1963). As a result $J_{xy}$ and $J_{\pm\pm}$ should be suppressed relative to
$J_{z}$ and $J_{z\pm}$. Trigonal elongation $\Delta_{2}>0$ should have the
opposite effect. As the moments become more planar, $J_{xy}$ and $J_{\pm\pm}$
should be relatively enhanced.
Figure 6: Anisotropic ferromagnetic corrections as a function of trigonal
distortion, for $\delta J_{0}=-2$ meV. (a) Local XXZ scheme. (b) Global
$J,K,\Gamma,\Gamma^{\prime}$ scheme. Figure 7: Magnetic couplings for ideal
edge-sharing bond with finite trigonal distortion. The parameters are
otherwise the same as Fig. 5. (a-f): $\Delta_{2}/\lambda=-0.5$, (g-l):
$\Delta_{2}/\lambda=+0.5$. The ferromagnetic corrections $\delta
J,\delta\Gamma,\delta\Gamma^{\prime}$ due to ligand exchange processes are not
included (see text). (e,k): Couplings along the path interpolating between
direct and ligand-assisted hoppings depicted in (a,g) in the
$J,K,\Gamma,\Gamma^{\prime}$ scheme. (f,l): Couplings along the path in the
$J_{z},J_{xy},J_{z\pm},J_{\pm\pm}$ scheme. For (e,f,k,l), solid lines indicate
the results of $d$-$d$ exchange only, and dashed lines indicate corrected
values $J+\delta J$, according to $\delta J_{0}=-2$ meV.
Following Ref. Lines, 1963; Liu _et al._ , 2020, the ferromagnetic
corrections to $J$ resulting from ligand exchange processes are rendered
anisotropic, with:
$\displaystyle\delta J_{xy}\approx$ $\displaystyle\ u_{xy}^{2}\ \delta J_{0}$
(72) $\displaystyle\delta J_{z}\approx$ $\displaystyle\ u_{z}^{2}\ \delta
J_{0}$ (73) $\displaystyle u_{xy}=$ $\displaystyle\
\frac{3}{5}\left(2\sqrt{3}c_{1}c_{3}+2c_{2}^{2}\right)$ (74) $\displaystyle
u_{z}=$ $\displaystyle\ \frac{3}{5}\left(1+2(c_{1}^{2}-c_{3}^{2})\right)$ (75)
with $c_{n}$ given in Fig. 2(b) and $\delta J_{0}$ defined according to eq’n
(36). In the $J,K,\Gamma,\Gamma^{\prime}$ parameterisation, these corrections
correspond to:
$\displaystyle\delta J=$ $\displaystyle\
\frac{1}{3}\left(u_{z}^{2}+2u_{xy}^{2}\right)\delta J_{0}$ (76)
$\displaystyle\delta\Gamma=$ $\displaystyle\
\delta\Gamma^{\prime}=\frac{1}{3}\left(u_{z}^{2}-u_{xy}^{2}\right)\delta
J_{0}$ (77)
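A minimal sketch of eq'ns (72)-(77), useful for tracing how the anisotropy of the ferromagnetic correction follows from the doublet composition; the $c_{n}$ values below are hypothetical placeholders (assumed normalized), since the actual coefficients come from Fig. 2(b).

```python
import numpy as np

def fm_corrections(c1, c2, c3, dJ0=-0.002):
    """Evaluate eq'ns (72)-(77); dJ0 in eV, returns corrections in eV."""
    u_xy = (3.0 / 5.0) * (2.0 * np.sqrt(3.0) * c1 * c3 + 2.0 * c2**2)
    u_z = (3.0 / 5.0) * (1.0 + 2.0 * (c1**2 - c3**2))
    dJ_xy = u_xy**2 * dJ0
    dJ_z = u_z**2 * dJ0
    dJ = (u_z**2 + 2.0 * u_xy**2) / 3.0 * dJ0
    dG = (u_z**2 - u_xy**2) / 3.0 * dJ0  # dGamma = dGamma', eq'n (77)
    return dJ_xy, dJ_z, dJ, dG

# Hypothetical doublet coefficients (placeholders, normalized to 1):
c1, c2 = 0.6, 0.5
c3 = np.sqrt(1.0 - c1**2 - c2**2)
print([f"{1e3*x:+.2f} meV" for x in fm_corrections(c1, c2, c3)])
```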
As with the undistorted case, the corrections to the anisotropic couplings
$J_{\pm\pm}$ and $J_{z\pm}$ are predicted to be small. The evolution of the
ferromagnetic corrections with distortion are shown in Fig. 6. For
$\Delta_{2}<0$, the ratio $\delta J_{z}/\delta J_{xy}>1$ is, in principle,
unbounded (and should increase continuously with trigonal distortion). For
$\Delta_{2}>0$ the degree of anisotropy is restricted, because the distortion-
induced effects are bounded $1/4\lesssim J_{z}/J_{xy}<1$. The lower bound is
reached for large $\Delta_{2}$, where the orbital moment is quenched, thus
restoring the full degeneracy of the $S=3/2$ states. However, a low-energy
model including only the lowest doublet would no longer be sufficient, so this
limit does not represent a physically sensible model. In terms of global
coordinates, the trigonal distortion primarily introduces off-diagonal
couplings, where $\Delta_{2}<0$ tends to be associated with
$\delta\Gamma,\delta\Gamma^{\prime}<0$, and vice versa.
To explore the exchange contributions from (downfolded) $d$-$d$ hopping, we
recomputed the couplings using exact diagonalization with significant
distortion $\Delta_{2}/\lambda=\pm 0.5$ to emphasize the effects. Results are
shown in Fig. 7 for the choice
$t_{1}=|t_{3}|/4,t_{4}=t_{5}=-|t_{3}|/4,t_{6}=+0.1$ eV, which is compatible
with the ab-initio estimates. In Fig. 7(e,f) and (k,l), we also show the
effect of corrections $\delta J$. The results are as follows:
Trigonal compression: For $\Delta_{2}<0$, as shown in Fig. 7(a-e), we find all
four of the couplings $J,K,\Gamma,\Gamma^{\prime}$ may be of similar
magnitude. This is particularly true in the region of large ligand-assisted
hopping. For the physically relevant region of large direct hopping ($t_{3}\gg
t_{2}$), we find that $K$ is still relatively suppressed (same as for
$\Delta_{2}=0$), but large $\Gamma,\Gamma^{\prime}$, with
$\text{sign}(\Gamma,\Gamma^{\prime})\sim\text{sign}(J)$ are induced. These
results are more easily interpreted in the alternative XXZ parameterization
shown in Fig. 7(f). In particular, as the local moments become more axial with
larger trigonal distortion, the coupling becomes dominated by a ferromagnetic
Ising exchange $J_{z}$. Overall, the estimated ferromagnetic correction
$\delta J_{z}$ is quite large compared to the regular $d$-$d$ contributions.
For the physically relevant region, we anticipate $J_{xy}=-2$ to $0$ meV,
$J_{z}=-3$ to $-10$ meV, $J_{\pm\pm}=-0.5$ to $+0.5$ meV, and $J_{z\pm}=-0.5$
to $+1.5$ meV for a significant trigonal distortion of $\Delta_{2}=-\lambda/2$. As
a result, we expect such materials to be described mostly by Ising couplings
with a common axis for every bond.
Trigonal elongation: For $\Delta_{2}>0$, we find that $K$ is less suppressed.
The distortions induce off-diagonal couplings following roughly
$\text{sign}(\Gamma,\Gamma^{\prime})\sim-\text{sign}(J)$. In the XXZ
parameterization, this corresponds to an enhancement of $J_{xy}$. In the
hypothetical ligand-assisted hopping region, we find that $J_{z}$ may be
almost completely suppressed due to different values of the ferromagnetic
shifts $\delta J_{z}$ and $\delta J_{xy}$. While $J_{xy}$ appears to be the
largest coupling in this limit, the bond-dependent couplings $J_{z\pm}$ and
$J_{\pm\pm}$ may also remain significant. For the physically relevant region,
we find that $J_{xy}$ is typically the dominant coupling, with
$J_{xy}/J_{z}\sim 4$, which is the hypothetical limit. Overall, we anticipate
$J_{xy}=-2$ to $-10$ meV, $J_{z}=-0.5$ to $-4$ meV, $J_{\pm\pm}=-2$ to $+1$
meV, and $J_{z\pm}=0$ to $+1$ meV for significant trigonal distortion of
$\Delta_{2}=+\lambda/2$.
### IV.4 Longer Range Couplings
While we have discussed above that $t_{2g}$-ligand hybridization should
generally be small in $3d$ metal oxides (as reflected by small
$t_{pd}^{\pi}$), the $e_{g}$-ligand hybridization may still play a significant
role through the large $t_{pd}^{\sigma}$. This is particularly relevant for
third neighbor bonds in honeycomb materials, because it gives rise to a large
hopping between $d_{x^{2}-y^{2}}$ orbitals shown in Fig. 8 at order
$(t_{pd}^{\sigma})^{2}t_{pp}^{\sigma}/\Delta_{pd}^{2}\sim 0.05$ to 0.1 eV.
This is equivalent to a 3rd neighbor $t_{5}$, which allows the associated
coupling to be readily estimated from the matrices $\mathbb{M}$. In
particular, we estimate (for $\Delta_{2}=0$):
$\displaystyle J_{3}\approx+0.5\text{ to }+2.5\text{ meV}$ (78) $\displaystyle
K_{3}\approx\Gamma_{3}\approx 0$ (79)
This is the only major third neighbor hopping pathway, so there are no
additional terms to compete, and a relatively large antiferromagnetic $J_{3}$
should be expected for all honeycomb materials with partially occupied $e_{g}$
orbitals.
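Since the third-neighbor pathway acts as an effective $t_{5}$, the estimate of eq'ns (78)-(79) can be read off from the $t_{5}$-diagonal entries of the matrices in eq'ns (50)-(66). A quick check, assuming hoppings in eV and coupling coefficients in meV per eV$^{2}$ (the convention that reproduces the quoted range):

```python
# t5-diagonal entries of M_J and M_K from eq'ns (50) and (58); M_Gamma has zero there.
M_J_55, M_K_55 = 259.0, 1.0
for t5 in (0.05, 0.10):  # effective 3rd-neighbor hopping in eV
    print(f"t5 = {t5:.2f} eV: J3 ~ {M_J_55 * t5**2:+.2f} meV, K3 ~ {M_K_55 * t5**2:+.3f} meV")
# J3 spans roughly +0.6 to +2.6 meV, matching eq'n (78), while K3 and Gamma3 are negligible.
```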
Figure 8: 3rd neighbor hopping relevant to $J_{3}$.
## V Conclusions
In this work, we have considered the magnetic couplings in edge-sharing
$d^{7}$ compounds. On this basis, we make several observations:
(1) All of the edge-sharing Co(II) oxides considered in this work appear to
fall outside the regime of primary focus in previous theoretical studiesLiu
and Khaliullin (2018); Liu (2021); Liu _et al._ (2020); Sano _et al._
(2018). In particular, direct hopping likely dominates over ligand-assisted
hopping ($t_{3}\gg t_{2}$). In the realistic regime, we find that $K$ is
generally suppressed compared to $J$, which calls into question models with
dominant $K$ proposed for these materials.
(2) Compared to heavy $d^{5}$ Kitaev materials such as iridates A2IrO3 and
$\alpha$-RuCl3, the weak spin-orbit coupling of Co increases the relative
importance of local distortions. The presence of the $e_{g}$ spins also opens
additional exchange pathways, whose balance depends sensitively on local
parameters such as $J_{H},U$, and $\Delta_{1}$. This makes anticipating the
magnetic Hamiltonian somewhat challenging. For oxides, fortuitous fine-tuning
may result in a different balance of couplings, but we anticipate that
ferromagnetic $J$ (or equivalently $J_{z},J_{xy}$) is likely always the
largest coupling. The signs and magnitudes of the other couplings
$K,\Gamma,\Gamma^{\prime}$ are influenced by the crystal field splitting and
specific details of the hoppings. We find regions with all possible signs and
relative magnitudes. Real materials with small trigonal distortions are likely
described by $|K/J|\sim 0.2$ to 0.5, and $K\approx\Gamma$; specifically:
$J\sim-8$ to $-2$ meV, $K\sim-2$ to $+2$ meV, and $\Gamma\sim-1$ to $+3$ meV.
$\Gamma^{\prime}$ is likely small unless there are significant departures from
ideal symmetry of the bonds. These findings are compatible with the overall
scale of those reported in the literatureRegnault _et al._ (1977); Nair _et
al._ (2018); Regnault _et al._ (2018); Fava _et al._ (2020). It is not clear
that a uniquely dominant $K$ is possible.
(3) For systems with significant crystal field distortions, our findings are
compatible with the historical description of Co(II) magnetic couplings in
terms of XXZ models by M. E. Lines (Ref. Lines, 1963). This is true
particularly because of the importance of ligand exchange processes, which are
responsible for ferromagnetic couplings in materials with 90∘ bond angles in
the Goodenough-Kanamori description Goodenough (1963). We estimate that these
are at least as important as processes involving $d$-$d$ hopping. In this
case, the considerations discussed in Ref. Lines, 1963; Liu _et al._ , 2020
become equivalent to the classic results of Lines. For trigonal crystal fields
with $\Delta_{2}<0$ (corresponding to $g_{||}>g_{\perp}$), the Ising
anisotropy induced by the crystal field may be very large, such that the
couplings are dominated by a ferromagnetic $J_{z}$ with a common Ising axis
for every bond. For positive crystal field $\Delta_{2}>0$ (corresponding to
$g_{\perp}>g_{||}$), XXZ anisotropy is more limited, but may still be large
for significant distortions $\Delta_{2}\sim\lambda/2$. Such materials are
generally more desirable for realising strongly bond-dependent couplings.
(4) Regarding NCSO and NCTO: Some constraints can be placed on the
interactions on the basis of the ordered moment directions in the zigzag
state. Related discussions appear in Ref. Sanders _et al._ , 2021. For NCTO,
it is generally agreed that the ordered moments lie nearly in the honeycomb
plane, oriented along the direction of the ferromagnetic chainsLefrançois _et
al._ (2016). For the zigzag domain with magnetic wavevector parallel to the
Z-bond, this is $\hat{e}_{2}^{Z}=(1,1,-2)/\sqrt{6}$ in cubic coordinates. This
orientation is generally expectedChaloupka and Khaliullin (2016) for
$K,\Gamma>0$, which is compatible with large $t_{3}$ and small $t_{2}>0$, as
we find in ab-initio for NCTO (see Fig. 4). The antiferromagnetic sign of $K$
is driven by a combination of hopping processes $\propto t_{2}t_{6},\
t_{1}^{2}$ and $t_{1}t_{3}$. Most of these were not previously considered in
the literature. For moments precisely along $\hat{e}_{2}^{Z}$, the magnetic
state is left invariant under a 180∘ rotation around (111) followed by time
reversal; it is thus reasonable to assume the couplings bear the same
symmetry. This places the constraint on the couplings
$\Gamma+\delta\Gamma=K+\Gamma^{\prime}+\delta\Gamma^{\prime}$. Experimental
estimatesKim _et al._ (2021); Liu _et al._ (2020); Liu (2021) for NCSO and
NCTO suggest $\Delta_{2}\sim+4$ to $+13$ meV, which corresponds to a
$\delta\Gamma,\delta\Gamma^{\prime}\sim 0.2-0.5$ meV. With these suggestions,
one may then consider the small magnitude of the magnon gap ($\sim 1$ meV)
observed in experiment at both the $\Gamma$-pointLin _et al._ (2021); Chen
_et al._ (2021) and the ordering wavevectorSongvilay _et al._ (2020); Kim
_et al._ (2021); Chen _et al._ (2021); Lin _et al._ (2021). This would be
anomalous for large departures from XXZ-symmetry. It may further be remarked
that the field-evolution of the ESR modesLin _et al._ (2021) follow
expectations for moderate easy-plane XXZ anisotropy. Taken together, we
suggest $J_{xy}=-3.25,J_{z}=-2.25,J_{\pm\pm}=-0.125,J_{z\pm}=0$ meV as an
appropriate starting point for analysis. These correspond to
$J_{1}=-3,J_{3}=+2.5,K=\Gamma^{\prime}=+0.25,\Gamma=+0.5$ meV, which are
essentially consistent with the model of Ref. Lin _et al._ , 2021.
(5) Regarding BCAO: The breadth of experimental data on BCAO, in terms of the
progression of field-induced phases and inelastic neutron data provide a
number of clues towards the magnetic model. While we leave full elaboration
for future studyMaksimov _et al._ (2022), some comments can be made. A recent
reinvestigationRegnault _et al._ (2018) of the zero-field structure suggested
it might better be described by a double stripe
$\uparrow\uparrow\downarrow\downarrow$ analogue of the zigzag antiferromagnet,
with moments oriented nearly along the in-plane $\hat{e}_{2}^{Z}$ direction,
as with NCTO. This orientation points to $K,\Gamma>0$. The large discrepancy
between in-plane critical fields (0.2, 0.5 T) and the out-of-plane critical
field (4T)Zhang _et al._ (2021) suggests significant anisotropy. Indeed, the
$g$-tensor appears to satisfyRegnault _et al._ (2018) $g_{z}\sim 0.5\
g_{xy}$, and within an XXZ model, $J_{z}\sim 0.4\ J_{xy}$. These findings
point to significant crystal field effects, with $\Delta_{2}\sim 0.2$ to
$0.25\ \lambda$, i.e. $\Delta_{2}\sim 15$ meV, implying significant $\Gamma$
and $\Gamma^{\prime}$ in the global coordinate scheme. In the XXZ scheme, the
nearly in-plane moments suggest small $J_{z\pm}$, while an apparently small
anisotropy between in-plane field directionsRegnault _et al._ (2018) may
place restrictions on $J_{\pm\pm}$. In contrast, the authors of Ref. Zhang
_et al._ , 2021 have advocated for small $J$, large $K<0$, and small average
off-diagonal coupling $\bar{\Gamma}=(\Gamma+2\Gamma^{\prime})/3$ on the basis
of THz spectroscopy experiments. It should be emphasized that these conditions
are not mutually compatible: small $J_{z\pm}$ and $J_{\pm\pm}$ implies small
$K$, and large anisotropy between $J_{xy}$ and $J_{z}$ implies large
$\Gamma+2\Gamma^{\prime}>0$. If we consider BCAO to be in the physical regime
of hoppings, our findings tend to contradict the Kitaev-dominant model. We
propose a model similar to NCTO: $K$ is small and likely antiferromagnetic,
$J<0$ is the dominant coupling, and $\Gamma,\Gamma^{\prime}>0$ reflect a
planar XY-anisotropy $|J_{xy}|>|J_{z}|$. The anomalous aspects of the ground
state are then understood as a competition between $J_{1},J_{2}$, and $J_{3}$,
as previously suggestedRegnault _et al._ (1977); Nair _et al._ (2018);
Regnault _et al._ (2018). These suggestions are compatible with the recent
ab-initio estimatesDas _et al._ (2021); Maksimov _et al._ (2022).
(6) From the perspective of chemistry, it is unclear how to access the
desirable ligand-assisted hopping regime where Kitaev coupling is largest. It
is necessary to increase the metal-ligand hybridization relative to direct
hopping between metal atoms. This may typically be achieved by matching the
electronegativity of the metals and ligands, such that $\Delta_{pd}$ is small.
As a general trend, electronegativity of transition metals increases for
heavier atoms, while the opposite is true for $p$-block ligands. The
combination of heavy metals with heavy ligands results in the most covalent
metal-ligand bonds. However, with increased covalency comes increased
$t_{2g}$-$e_{g}$ splitting, and heavier metals tend to have reduced Coulomb
terms $J_{H}$. As a result, the high-spin $(t_{2g})^{5}(e_{g})^{2}$ state is
only typically achievable in Co(II) compounds. By contrast, Rh(II) and Ni(III)
tend to adopt low-spin $(t_{2g})^{6}(e_{g})^{1}$ ground states. The most
promising avenue then appears to be the combination of Co(II) with heavier
ligands. With this in mind, we computed the hopping integrals for the
triangular lattice compounds CoCl2, CoBr2 and CoI2 according to crystal
structures from Ref. Wilkinson _et al._ , 1959; Wyckoff and Wyckoff, 1963.
For these compounds, we still estimate $|t_{3}/t_{2}|\sim 4$. Thus, it is not
clear that Kitaev-dominant exchange can be achieved without extreme fine
tuning.
## VI Acknowledgements
We acknowledge useful discussions with D. Smirnov, Y. Jiang, S. Streltsov, P.
Maksimov, and R. Valenti. We also thank P. Dai for bringing the problem to our
attention. This work was supported by a pilot grant from the Center for
Functional Materials. Computations were performed on the Wake Forest
University DEAC Cluster, a centrally managed resource with support provided in
part by Wake Forest University.
## References
* Nussinov and Van Den Brink (2015) Z. Nussinov and J. Van Den Brink, Rev. Mod. Phys. 87, 1 (2015).
* Kitaev (2006) A. Kitaev, Ann. Phys. 321, 2 (2006).
* Hermanns _et al._ (2018) M. Hermanns, I. Kimchi, and J. Knolle, Annu. Rev. Condens. Matter Phys. 9, 17 (2018).
* Broholm _et al._ (2020) C. Broholm, R. Cava, S. Kivelson, D. Nocera, M. Norman, and T. Senthil, Science 367, eaay0668 (2020).
* Zhou _et al._ (2017) Y. Zhou, K. Kanoda, and T.-K. Ng, Rev. Mod. Phys. 89, 025003 (2017).
* Jackeli and Khaliullin (2009) G. Jackeli and G. Khaliullin, Phys. Rev. Lett. 102, 017205 (2009).
* Singh and Gegenwart (2010) Y. Singh and P. Gegenwart, Phys. Rev. B 82, 064412 (2010).
* Plumb _et al._ (2014) K. Plumb, J. Clancy, L. Sandilands, V. V. Shankar, Y. Hu, K. Burch, H.-Y. Kee, and Y.-J. Kim, Phys. Rev. B 90, 041112 (2014).
* Banerjee _et al._ (2016) A. Banerjee, C. Bridges, J.-Q. Yan, A. Aczel, L. Li, M. Stone, G. Granroth, M. Lumsden, Y. Yiu, J. Knolle, S. Bhattacharjee, D. L. Kovrizhin, R. Moessner, D. A. Tennant, D. G. Mandrus, and S. E. Nagler, Nat. Mater. 15, 733 (2016).
* Winter _et al._ (2017a) S. M. Winter, A. A. Tsirlin, M. Daghofer, J. van den Brink, Y. Singh, P. Gegenwart, and R. Valenti, J. Phys. Condens. Matter 29, 493002 (2017a).
* Trebst (2017) S. Trebst, arXiv preprint arXiv:1701.07056 (2017).
* Banerjee _et al._ (2017) A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Science 356, 1055 (2017).
* Hwan Chun _et al._ (2015) S. Hwan Chun, J.-W. Kim, J. Kim, H. Zheng, C. C. Stoumpos, C. Malliakas, J. Mitchell, K. Mehlawat, Y. Singh, Y. Choi, T. Gog, A. Al-Zein, M. M. Sala, M. Krisch, J. Chaloupka, G. Jackeli, G. Khaliullin, and B. J. Kim, Nat. Phys. 11, 462 (2015).
* Suzuki _et al._ (2021) H. Suzuki, H. Liu, J. Bertinshaw, K. Ueda, H. Kim, S. Laha, D. Weber, Z. Yang, L. Wang, H. Takahashi, K. Fürsich, M. Minola, B. V. Lotsch, B. J. Kim, H. Yavas, M. Daghofer, J. Chaloupka, G. Khaliullin, H. Gretarsson, and B. Keimer, Nat. Commun. 12, 1 (2021).
* Winter _et al._ (2017b) S. M. Winter, K. Riedl, P. A. Maksimov, A. L. Chernyshev, A. Honecker, and R. Valentí, Nat. Commun. 8, 1 (2017b).
* Banerjee _et al._ (2018) A. Banerjee, P. Lampen-Kelley, J. Knolle, C. Balz, A. A. Aczel, B. Winn, Y. Liu, D. Pajerowski, J. Yan, C. A. Bridges, A. T. Savici, B. C. Chakoumakos, M. D. Lumsden, D. A. Tennant, R. Moessner, D. G. Mandrus, and S. E. Nagler, npj Quantum Mater. 3, 1 (2018).
* Kasahara _et al._ (2018) Y. Kasahara, T. Ohnishi, Y. Mizukami, O. Tanaka, S. Ma, K. Sugii, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, T. Shibauchi, and Y. Matsuda, Nature 559, 227 (2018).
* Yokoi _et al._ (2021) T. Yokoi, S. Ma, Y. Kasahara, S. Kasahara, T. Shibauchi, N. Kurita, H. Tanaka, J. Nasu, Y. Motome, C. Hickey, S. Trebst, and Y. Matsuda, Science 373, 568 (2021).
* Liu and Khaliullin (2018) H. Liu and G. Khaliullin, Phys. Rev. B 97, 014407 (2018).
* Liu (2021) H. Liu, Int. J. Mod. Phys. B 35, 2130006 (2021).
* Liu _et al._ (2020) H. Liu, J. Chaloupka, and G. Khaliullin, Phys. Rev. Lett. 125, 047201 (2020).
* Sano _et al._ (2018) R. Sano, Y. Kato, and Y. Motome, Phys. Rev. B 97, 014408 (2018).
* Rau _et al._ (2014) J. G. Rau, E. K.-H. Lee, and H.-Y. Kee, Phys. Rev. Lett. 112, 077204 (2014).
* Winter _et al._ (2016) S. M. Winter, Y. Li, H. O. Jeschke, and R. Valentí, Phys. Rev. B 93, 214431 (2016).
* Lines (1963) M. Lines, Phys. Rev. 131, 546 (1963).
* Oguchi (1965) T. Oguchi, J. Phys. Soc. Japan 20, 2236 (1965).
* Scharf _et al._ (1979) W. Scharf, H. Weitzel, I. Yaeger, I. Maartense, and B. Wanklyn, J. Magn. Magn. Mater. 13, 121 (1979).
* Maartense _et al._ (1977) I. Maartense, I. Yaeger, and B. Wanklyn, Solid State Commun. 21, 93 (1977).
* Kobayashi _et al._ (1999) S. Kobayashi, S. Mitsuda, M. Ishikawa, K. Miyatani, and K. Kohn, Phys. Rev. B 60, 3331 (1999).
* Lee _et al._ (2010) S. Lee, R. K. Kaul, and L. Balents, Nat. Phys. 6, 702 (2010).
* Coldea _et al._ (2010) R. Coldea, D. Tennant, E. Wheeler, E. Wawrzynska, D. Prabhakaran, M. Telling, K. Habicht, P. Smeibidl, and K. Kiefer, Science 327, 177 (2010).
* Morris _et al._ (2014) C. Morris, R. V. Aguilar, A. Ghosh, S. Koohpayeh, J. Krizan, R. Cava, O. Tchernyshyov, T. McQueen, and N. Armitage, Phys. Rev. Lett. 112, 137403 (2014).
* Fava _et al._ (2020) M. Fava, R. Coldea, and S. Parameswaran, Proc. Natl. Acad. Sci. 117, 25219 (2020).
* Morris _et al._ (2021) C. Morris, N. Desai, J. Viirok, D. Hüvonen, U. Nagel, T. Room, J. Krizan, R. Cava, T. McQueen, S. Koohpayeh, R. K. Kaul, and N. P. Armitage, Nat. Phys. 17, 832 (2021).
* Lefrançois _et al._ (2016) E. Lefrançois, M. Songvilay, J. Robert, G. Nataf, E. Jordan, L. Chaix, C. V. Colin, P. Lejay, A. Hadj-Azzem, R. Ballou, and V. Simonet, Phys. Rev. B 94, 214416 (2016).
* Bera _et al._ (2017) A. K. Bera, S. M. Yusuf, A. Kumar, and C. Ritter, Phys. Rev. B 95, 094424 (2017).
* Wong _et al._ (2016) C. Wong, M. Avdeev, and C. D. Ling, J. Solid State Chem. 243, 18 (2016).
* Chen _et al._ (2021) W. Chen, X. Li, Z. Hu, Z. Hu, L. Yue, R. Sutarto, F. He, K. Iida, K. Kamazawa, W. Yu, X. Lin, and Y. Li, Phys. Rev. B 103, L180404 (2021).
* Chaloupka _et al._ (2013) J. Chaloupka, G. Jackeli, and G. Khaliullin, Phys. Rev. Lett. 110, 097204 (2013).
* Fouet _et al._ (2001) J. Fouet, P. Sindzingre, and C. Lhuillier, Eur. Phys. J. B 20, 241 (2001).
* Kimchi and You (2011) I. Kimchi and Y.-Z. You, Phys. Rev. B 84, 180407 (2011).
* Songvilay _et al._ (2020) M. Songvilay, J. Robert, S. Petit, J. A. Rodriguez-Rivera, W. D. Ratcliff, F. Damay, V. Balédent, M. Jiménez-Ruiz, P. Lejay, E. Pachoud, A. Hadj-Azzem, V. Simonet, and C. Stock, Phys. Rev. B 102, 224429 (2020).
* Kim _et al._ (2021) C. Kim, J. Jeong, G. Lin, P. Park, T. Masuda, S. Asai, S. Itoh, H.-S. Kim, H. Zhou, J. Ma, and J.-G. Park, J. Phys. Condens. Matter 34, 045802 (2021).
* Lin _et al._ (2021) G. Lin, J. Jeong, C. Kim, Y. Wang, Q. Huang, T. Masuda, S. Asai, S. Itoh, G. Günther, M. Russina, Z. Lu, J. Sheng, L. Wang, J. Wang, G. Wang, Q. Ren, C. Xi, W. Tong, L. Ling, Z. Liu, J. Mei, Z. Qu, H. Zhou, J.-G. Park, Y. Wan, and J. Ma, Nat. Commun. 12, 1 (2021).
* Nair _et al._ (2018) H. S. Nair, J. Brown, E. Coldren, G. Hester, M. Gelfand, A. Podlesnyak, Q. Huang, and K. Ross, Phys. Rev. B 97, 134409 (2018).
* Regnault _et al._ (2006) L. Regnault, C. Boullier, and J. Henry, Physica B Condens. Matter 385, 425 (2006).
* Regnault _et al._ (2018) L.-P. Regnault, C. Boullier, and J. Lorenzo, Heliyon 4, e00507 (2018).
* Regnault _et al._ (1977) L. Regnault, P. Burlet, and J. Rossat-Mignod, Physica B Condens. Matter 86, 660 (1977).
* Zhong _et al._ (2020) R. Zhong, T. Gao, N. P. Ong, and R. J. Cava, Sci. Adv. 6, eaay6953 (2020).
* Zhang _et al._ (2021) X. Zhang, Y. Xu, R. Zhong, R. Cava, N. Drichko, and N. Armitage, arXiv preprint arXiv:2106.13418 (2021).
* Shi _et al._ (2021) L. Shi, X. Wang, R. Zhong, Z. Wang, T. Hu, S. Zhang, Q. Liu, T. Dong, F. Wang, and N. Wang, Phys. Rev. B 104, 144408 (2021).
* Das _et al._ (2021) S. Das, S. Voleti, T. Saha-Dasgupta, and A. Paramekanti, Phys. Rev. B 104, 134425 (2021).
* Maksimov _et al._ (2022) P. A. Maksimov, A. V. Ushakov, Z. V. Pchelkina, Y. Li, S. M. Winter, and S. V. Streltsov, arXiv preprint (2022).
* Note (1) The coefficients $U_{\alpha\beta\gamma\delta}$ may be grouped according to the number of unique orbital indices, from one to four. For example, the intra-orbital Hubbard terms $n_{i,\alpha,\uparrow}n_{i,\alpha,\downarrow}$ have one unique index $\alpha$, while the inter-orbital Hubbard terms $n_{i,\alpha,\sigma}n_{i,\beta,\sigma^{\prime}}$ have two unique indices $\alpha,\beta$. In the spherically symmetric approximation Sugano (2012), the Coulomb coefficients with three and four indices vanish unless at least one of the orbitals is an $e_{g}$ orbital. For this reason, $t_{2g}$-only (and $e_{g}$-only) models reduce to the familiar Kanamori form Georges _et al._ (2013); Pavarini (2014), which includes only Hubbard density-density repulsion, Hund’s exchange, and pair-hopping contributions. However, when both $e_{g}$ and $t_{2g}$ orbitals are considered together, it is important to include the full rotationally symmetric Coulomb terms. This is particularly true when computing anisotropic magnetic exchange, because any approximations to the Coulomb Hamiltonian are likely to explicitly break rotational symmetry, leading to erroneous sources of anisotropy.
* Sugano (2012) S. Sugano, _Multiplets of transition-metal ions in crystals_ (Elsevier, 2012).
* Pavarini (2014) E. Pavarini, in _Many-Electron Approaches in Physics, Chemistry and Mathematics_ (Springer, 2014) pp. 321–341.
* Sarte _et al._ (2018) P. M. Sarte, R. A. Cowley, E. E. Rodriguez, E. Pachoud, D. Le, V. García-Sakai, J. W. Taylor, C. D. Frost, D. Prabhakaran, C. MacEwen, A. Kitada, A. J. Browne, M. Songvilay, Z. Yamani, W. J. L. Buyers, J. P. Attfield, and C. Stock, Phys. Rev. B 98, 024415 (2018).
* Ross _et al._ (2017) K. A. Ross, J. Brown, R. Cava, J. Krizan, S. E. Nagler, J. Rodriguez-Rivera, and M. B. Stone, Phys. Rev. B 95, 144414 (2017).
* Dordević (2008) T. Dordević, Acta Crystallogr. E 64, i58 (2008).
* Wyckoff and Wyckoff (1963) R. W. G. Wyckoff and R. W. Wyckoff, _Crystal structures_ , Vol. 1 (Interscience publishers New York, 1963).
* Sarvezuk _et al._ (2011) P. W. C. Sarvezuk, E. J. Kinast, C. Colin, M. Gusmão, J. Da Cunha, and O. Isnard, Int. J. Appl. Phys. 109, 07E160 (2011).
* Xiao _et al._ (2019) G. Xiao, Z. Xia, W. Zhang, X. Yue, S. Huang, X. Zhang, F. Yang, Y. Song, M. Wei, H. Deng, and D. Jiang, Crystal Growth & Design 19, 2658 (2019).
* Note (2) The Na2Co2TeO6 structure contains disorder in the Na position, in which each Na position has occupancy 2/3. To perform calculations, we artificially increased the occupancy to 1, which corresponds to Na3Co2TeO6. It is expected this change in the filling should have minimal impact on the computed hoppings.
* Wellm _et al._ (2021) C. Wellm, W. Roscher, J. Zeisner, A. Alfonsov, R. Zhong, R. J. Cava, A. Savoyant, R. Hayn, J. van den Brink, B. Büchner, O. Janson, and V. Kataev, Phys. Rev. B 104, L100420 (2021).
* Goodenough (1963) J. B. Goodenough, _Magnetism and the chemical bond_ , Vol. 1 (Interscience publishers, 1963).
* Ross _et al._ (2011) K. A. Ross, L. Savary, B. D. Gaulin, and L. Balents, Phys. Rev. X 1, 021002 (2011).
* Maksimov _et al._ (2019) P. Maksimov, Z. Zhu, S. R. White, and A. Chernyshev, Phys. Rev. X 9, 021017 (2019).
* Sanders _et al._ (2021) A. L. Sanders, R. A. Mole, J. Liu, A. J. Brown, D. Yu, C. D. Ling, and S. Rachel, arXiv preprint arXiv:2112.12254 (2021).
* Chaloupka and Khaliullin (2016) J. Chaloupka and G. Khaliullin, Phys. Rev. B 94, 064435 (2016).
* Wilkinson _et al._ (1959) M. Wilkinson, J. Cable, E. Wollan, and W. Koehler, Phys. Rev. 113, 497 (1959).
|
# Rinas: Training with Dataset Shuffling Can Be General and Fast
Tianle Zhong, University of Virginia; Jiechen Zhao, University of Toronto; Xindi Guo, University of Virginia; Qiang Su∗, City University of Hong Kong; and Geoffrey Fox∗, University of Virginia
###### Abstract.
Deep learning datasets are expanding at an unprecedented pace, creating new
challenges for data processing in model training pipelines. A crucial aspect
of these pipelines is dataset shuffling, which significantly improves unbiased
learning and convergence accuracy by adhering to the principles of random
sampling. However, loading shuffled data for large datasets incurs significant
overhead in the deep learning pipeline and severely impacts the end-to-end
training throughput. To mitigate this, current deep learning systems often
resort to partial dataset shuffling, sacrificing global randomness to maintain
acceptable training throughput on large datasets, still leaving global
shuffling efficiency issues not fully explored.
In this work, we present Rinas, a data loading framework that systematically
addresses the performance bottleneck of loading global shuffled datasets. Our
key contribution is to offer an intra-batch unordered data fetching approach,
which unleashes unexplored parallelism of data loading. We implement Rinas
under the PyTorch framework for common dataset libraries HuggingFace and
TorchVision. Our experimental results show that Rinas improves the throughput
of general language model training and vision model training by up to 59% and
89%, respectively.
∗Qiang Su and Geoffrey Fox are corresponding authors. The University of
Virginia team thanks NSF Grant 2210266 and DOE Grant DE-SC0023452 for partial
support. We acknowledge the excellent work of the Rivanna HPC Cluster team.
## 1\. Introduction
Shuffling large datasets remains a critical issue in data processing systems,
impacting a wide spectrum of applications (dean2008mapreduce, ;
10.1145/2934664, ; nicolae2016towards, ; shen2020magnet, ; hadoop, ). In the
realm of deep learning, dataset shuffling is not merely a procedural task but
a fundamental aspect of the data loading pipeline. It is instrumental in
mitigating overfitting (ying2019overview, ; li2019research, ) and improving
convergence accuracy, as supported by theoretical analysis
(meng2019convergence, ).
Ideally, the training accuracy improvement derived from data shuffling should
introduce minimal shuffling overheads on overall training throughput.
Unfortunately, as the sizes of deep learning datasets expand rapidly, the
shuffling overhead grows significantly. This is because datasets now reach
tens of TBs, far exceeding system DRAM capacity, so slower disk I/O inevitably
becomes a bottleneck (lobster, ; sun2022solar, ; gu2022fluid, ). Worse, it is
challenging for current systems to effectively manage the shuffling overhead
on training throughput without sacrificing accuracy (nguyen2022globally, ;
exoshuffle, ).
The impact of large dataset shuffling on training throughput is substantial
and far from optimal. For example, we observe that data loading I/O for
shuffling can consume up to 85% of the total training time for models such as
ResNet-152 on the ImageNet dataset ($\sim$140 GB). This phenomenon
corroborates with the prior art (10.1145/3458817.3476181, ). For language
model training, the slowdown caused by shuffled loading can also lead to 30%
to 50% of training throughput degradation. This overhead persists as a
dominant bottleneck in training efficiency, even when deep learning pipelines
are integrated with advanced data processing systems and databases (deeplake,
).
In typical deep learning systems, shuffling a larger-than-memory dataset is
usually abstracted as a shuffled loading operation that fetches batches of
randomly sampled data on demand, avoiding pre-loading the entire dataset into
DRAM for shuffling. To move data from disk to system DRAM, each sample must be
indexed within the dataset, which forces data loading to perform random disk
I/O operations (Index, ; hf_map_vs_iter, ).
Amidst this backdrop, contemporary research has made strides in two
directions: (1) refining the shuffled dataset loading pipeline to conceal its
impact on end-to-end training throughput; and (2) identifying an equilibrium
between shuffling thoroughness and converged accuracy. However, these
advancements come with concessions: (1) hiding data loading overhead is
ineffective when it becomes the predominant factor in training performance;
(2) the complexity of the data loading pipeline demands extensive
modifications to existing software infrastructure; (3) the equilibrium point
is not universally applicable across the spectrum of datasets and learning
models; (4) the scope of trade-offs is constrained by system capabilities,
leaving practitioners to contend with a compromise that sacrifices either
accuracy or speed, since an incomplete shuffle harms convergence
accuracy (xu2022stochastic, ).
The fundamental question is, can we devise a universal framework that
accelerates the loading of large, shuffled datasets without compromising
accuracy and training speed across diverse datasets and learning models?
To address this problem, we introduce Rinas, a comprehensive data loading
framework designed for efficient model training on large shuffled datasets. We
pinpoint the frequent random disk I/O on individual data samples as the
principal bottleneck in loading shuffled datasets. Based on this observation,
we propose a novel data preparation method for model training: intra-batch
unordered data fetching.
Rinas is predicated on a key insight: within a training iteration, the
sequence of computing the average loss from a batch of randomly sampled data
does not influence the learning outcome. This insight suggests that the data
retrieval order for intra-batch samples does not affect the learning process.
Fundamentally, it allows us to shift from a strict global sample order to a
more flexible intra-batch unordered manner.
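To make the insight concrete, the toy sketch below (our own illustration, not Rinas's actual implementation) fetches the samples of one globally shuffled batch with a thread pool and consumes them in completion order; because the batch loss is an average over samples, the arrival order cannot change the computed value.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch_sample(dataset, idx):
    """Stand-in for a random disk read of a single sample (hypothetical helper)."""
    return dataset[idx]

def load_batch_unordered(dataset, indices, workers=8):
    """Fetch one batch without preserving intra-batch sample order."""
    batch = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_sample, dataset, i) for i in indices]
        for fut in as_completed(futures):  # consume samples as soon as they arrive
            batch.append(fut.result())
    return batch

# Toy check that the per-batch average is order-invariant.
data = list(range(1_000_000))
idx = random.sample(range(len(data)), k=32)  # one globally shuffled batch
unordered_mean = sum(load_batch_unordered(data, idx)) / len(idx)
ordered_mean = sum(data[i] for i in idx) / len(idx)
assert unordered_mean == ordered_mean
```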
This insight brings in several benefits. First, relaxing such orders enables
parallel data retrieval without stringent ordering constraints. Second, Rinas
fundamentally negates the need to balance between accuracy and speed. Third,
Rinas is versatile enough to cater to various datasets and learning models
because of the guarantee of equivalent learning outcomes.
We architect Rinas with a data-agnostic control plane and a data plane for on-
demand and parallelized data fetching. The control plane offers an execution
model for loading samples and generating batches with them in an unordered
fashion. The data plane fully exploits the advantages of unordered parallel
data retrieval across diverse dataset structures.
Our implementation showcases Rinas’s applicability across major learning tasks
from computer vision to language models. We demonstrate our prototype within
the PyTorch DataLoader (Index, ), as well as TorchVision (torchvision2016, )
and HuggingFace Datasets libraries (lhoest-etal-2021-datasets, ). Our
experiments evaluate standard training workloads on large datasets, spanning
from computer vision model training to language model pretraining, within
typical deep learning training clusters. Our assessments cover the typical
training setup, revealing Rinas’s minimal loading-related overheads at
different scales, delivering up to 59% and 89% speed increases in training for
computer vision and language models, respectively.
The ensuing sections will delve into the background (§2) and motivation (§3),
review related work and its limitations, and then articulate the design (§4)
and implementation (§5) of Rinas. Finally, we will present our evaluation
findings (§6) and engage in a discussion (§7).
## 2\. Background
Dataset | Size | Description
---|---|---
ImageNet (deng2009imagenet, ) | 140 GB | For image classification
ImageNet-21k (deng2009imagenet, ) | 1.8 TB | Extended version of ImageNet
RedPajama (together2023redpajama, ) | 5 TB | Language modeling
C4 (2019t5, ) | 7 TB | Language modeling
CosmoFlow (mathuriya2018cosmoflow, ) | 10 TB | Cosmological simulations
Table 1. Large datasets for model training in computer vision and language
modeling.
### 2.1. Storing Emerging Very Large Datasets on Disks
ML model training increasingly relies on datasets that exceed the system’s
memory (DRAM) capacity, making it impractical to preload the entire dataset
before training. Data preparation therefore emerges as a
critical path for model training: loading large datasets into memory is often
non-trivial. Table 1 presents typical datasets for computer vision models and
large language models. Computer vision datasets collect huge numbers of
image files, and accessing these images typically requires indexing into a
structured file system; Large language models (devlin2019bert, ;
thoppilan2022lamda, ; touvron2023llama, ) are trained on vast text corpora,
with huge data volumes and complicated structures (together2023redpajama, ;
2019t5, ; OpenOrca, ). These datasets are often segmented into multiple files,
each packed with text entries. Retrieving specific samples requires effective
parsing or database systems with sophisticated indexing for efficient search
and access (lhoest-etal-2021-datasets, ; torchvision2016, ). Specifically,
image datasets are usually composed of individual image files on disk as data
samples, while text datasets are usually composed of a series of large files
on disk, each containing rows of text as data samples.
### 2.2. Dataset Shuffling in Model Training Pipeline
Figure 1. The logical data path of shuffling in end-to-end training.
Figure 1 presents the typical training pipeline upon the training datasets,
where the random sampling (olken1995random, ) on the datasets serves as a
critical operation. The dataset on the storage is randomly sampled into DRAM
space, then the sampled data are partitioned into batches to be fed into model
training iterations.
In practice, the operations to achieve random sampling upon the dataset are
generally described as dataset shuffling (liu2017shuffle_spark, ). By
shuffling the dataset at the beginning of each training epoch, the data sample
order to be exposed to the model training process is randomized. Typically,
there are two ways of dataset shuffling: shuffling the dataset in the storage
space (i.e., in-place shuffling), or loading the dataset into DRAM space with
a random order (i.e., shuffled loading).
In-place shuffling is a common operation in large-scale batch processing
systems (dean2008mapreduce, ; hadoop, ; 10.1145/2934664, ). However, unlike
smaller in-memory collections, shuffling large datasets that reside on
persistent storage devices imposes a considerable overhead (iShuffle, ;
liu2017shuffle_spark, ), and involves more than mere in-place reordering. It
requires careful consideration of the I/O throughput, storage latency, and
computational load on the system (welton2011improving, ; zhang2006storage, ;
deeplake, ; gupta2015amazon, ).
Unfortunately, systems highly optimized for in-place shuffling like Hadoop
(hadoop, ) and Spark (spark, ) prevent reading shuffled results until the
shuffle is fully completed, making it hard for training systems to pipeline
the shuffling process with model training. On the other hand, shuffled loading
avoids expensive and complicated in-place reordering of the dataset in storage
and can be pipelined with the model training process. As a result, shuffled
loading is the focus of this paper, whereas many other works focus on
pipelining in-place shuffling with model training (ray, ; exoshuffle, ).
Typically, there are two ways of shuffled loading: buffered shuffling and
indices mapping. They are both popular choices and already supported by many
frameworks like PyTorch (Index, ; ofeidis2022overview, ).
Buffered shuffling. Figure 2 presents an example workflow of buffered
shuffling, which involves two steps to improve the efficiency of shuffled
loading. First, it leverages partial shuffling, sequentially loading a subset
of the dataset from the disk into a memory buffer allocated in the system
DRAM. Second, it performs the shuffle operation in this constrained space,
followed by the formation of batches from this shuffled subset.
This approach strikes a balance between the need for random access and the
performance limitations imposed by disk-based storage, aiming to provide a
partially shuffled dataset without the prohibitive overhead of random disk I/O
(DeepIO_buffer_shuffle, ). However, buffered shuffling cannot achieve true
random sampling due to the limited shuffling space compared to global
shuffling, which may inadvertently compromise convergence accuracy
(nguyen2022globally, ; xu2022stochastic, ).
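A minimal, framework-agnostic sketch of the buffered-shuffling idea: samples are read sequentially into a bounded buffer and emitted in random order from that buffer, so the achievable randomness is limited by the buffer size. This is our own illustration rather than any specific library's implementation.

```python
import random

def buffered_shuffle(sample_iter, buffer_size=10_000, seed=0):
    """Yield samples from a sequential iterator in partially shuffled order."""
    rng = random.Random(seed)
    buffer = []
    for sample in sample_iter:          # sequential (fast) reads from storage
        if len(buffer) < buffer_size:
            buffer.append(sample)
            continue
        j = rng.randrange(buffer_size)  # emit a random buffered sample, keep the new one
        yield buffer[j]
        buffer[j] = sample
    rng.shuffle(buffer)                 # drain whatever remains in the buffer
    yield from buffer

# Example: a tiny "dataset" shuffled through a 5-element buffer.
print(list(buffered_shuffle(range(20), buffer_size=5)))
```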
Indices mapping. To avoid the loss of convergence accuracy due to the
compromised shuffle quality by partial shuffling, the dataset should be
globally shuffled (meng2019convergence, ). Figure 3 depicts an example of
indices mapping workflow. In this strategy, the dataset indices are shuffled
rather than data, and the data is read into DRAM following the order of
shuffled indices.
This approach fully respects the random sampling principles but creates a
sequence of random I/O operations that storage systems typically handle far
less efficiently than sequential reads.
Since indices mapping guarantees true random sampling to benefit model
training convergence accuracy at most, our work focuses on addressing the
performance issue of indices mapping.
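Indices mapping can be sketched with a map-style dataset: a permutation of all indices is drawn once per epoch and each sample is then fetched by its shuffled index, which turns every access into a random read. The snippet below is a simplified illustration, not PyTorch's actual sampler or DataLoader code.

```python
import random

class OnDiskDataset:
    """Toy map-style dataset; __getitem__ stands in for one random disk read."""
    def __init__(self, n):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, idx):
        return idx  # in practice: seek to and read sample `idx` from storage

def epoch_batches(dataset, batch_size, seed=0):
    """Globally shuffled loading via indices mapping."""
    order = list(range(len(dataset)))
    random.Random(seed).shuffle(order)  # shuffle the indices, not the data itself
    for start in range(0, len(order), batch_size):
        yield [dataset[i] for i in order[start:start + batch_size]]  # random reads

for batch in epoch_batches(OnDiskDataset(10), batch_size=4):
    print(batch)
```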
Figure 2. An example of buffered shuffle workflow. Figure 3. An example of
indices mapping workflow.
## 3\. Motivation
In this section, we begin with analyzing the current problems that indices
mapping incurs. Next, we discuss existing solutions and their limitations.
### 3.1. Inefficiency with Indices Mapping for Very Large Datasets
As the previous section discussed, although indices mapping preserves the
global randomness of the dataset, this approach is typically much slower than
buffered shuffling due to the necessity of non-contiguous data retrieval from
storage. Next, we explain the relationship between such a slowdown and the
dataset size.
Training slowdown. Figure 4 presents the end-to-end training throughput of
training RoBERTa-base model (liu2019roberta, ) at different batch sizes when
dataset size increases on a single NVIDIA A100 GPU. While the cleaned English
branch of C4 dataset tokenized by RoBERTa has $\sim 3.6\times 10^{8}$ rows, we
synthesize four different sizes of its subsets by choosing its first $10^{5}$,
$10^{6}$, $10^{7}$, and $10^{8}$ rows. We benchmark the training throughput
with all five sizes of datasets to show the throughput changes when dataset
size increases. Observe that there is a marked reduction in training
efficiency when the dataset size increases, leading to a 30% to 50% decrease
in speed.
This reveals that the primary cause of the training deceleration is the
overhead associated with shuffling by indices mapping against the large
datasets, as opposed to a scenario without shuffling. Notably, the negative
impact of this overhead is observed across both large and small batch sizes,
underscoring the pervasive influence of shuffling-related delays.
This indicates that in extensive large-scale scenarios, I/O overhead by
indices mapping can become the major factor of the total training time,
leading to substantial GPU idle time and data starvation. This phenomenon has
been corroborated by other research on other training scenarios and systems as
well (10.1145/3458817.3476181, ; sun2022solar, ).
This also indicates why shuffled data loading overhead for the large dataset
cannot be easily hidden by overlapping with computation: under this large
extent of degradation, the data loading overhead has dominated the overall
training time.
Figure 4. Training throughput for the RoBERTa-base model for varying batch
sizes within the C4 subsets of different sizes. A discernible trend emerges,
showing a decrement in end-to-end training throughput as the number of samples
in the dataset increases. Figure 5. Data loading throughput comparison for
varying batch sizes within the C4 subsets of different sizes.
Throughput degradation. The reason for the training slowdown is shuffled data
loading throughput degradation with large datasets. Figure 5 presents the pure
data loading throughput under the same settings as Figure 4 by excluding the
model training process and only performing data loading operations. By further
looking into the measured data loading throughput, a parallel decrease in data
loading throughput, akin to the trend seen in end-to-end training performance,
underscores the impact of data loading efficiency on overall training time.
When employing a batch size of 32, data loading throughput for small datasets
using indices mapping can capitalize on system I/O capabilities, achieving a
data loading throughput of $\sim$1000 samples per second. However, this
throughput significantly diminishes when applied to larger datasets with
indices mapping, with performance dropping to $\sim$50 samples per second,
primarily due to the random dataset indexing overhead, which is positively
correlated with the dataset size.
This relationship indicates that throughput degradation during data loading is
a primary contributor to the observed training slowdown. This observation
underscores the prominence of data loading as a substantial bottleneck in the
training process, particularly when dealing with extensive datasets. This
phenomenon necessitates a reevaluation and optimization of data loading
practices to alleviate the undue time expenditure and enhance overall training
performance.
### 3.2. Existing Solutions
Prior work proposes solutions to mitigate the performance issue with large
shuffled datasets. There are three categories of solutions: balanced trade-off
between shuffle quality and speed, scheduled data prefetching, and taking
advantage of the data-parallel training.
Shuffle balancing. Considering the inefficiency of indices mapping, various
research endeavors have experimented with adjusting the degree of shuffling to
strike a balance between converged accuracy and training efficiency
(nguyen2022globally, ; sun2022solar, ). Nevertheless, the process of
identifying this equilibrium can be both time-consuming and highly dependent
on the specifics of the model and dataset, limiting its applicability to a
wider range of models and datasets. Furthermore, the trade-off space is
usually limited by hardware resources which can leave practitioners to contend
with a compromise that sacrifices either accuracy or speed (exoshuffle, ). For
example, for training the ResNet-50 model on the ImageNet dataset, limited
shuffling can result in an accuracy drop of $\sim$20% compared to globally
shuffled training (nguyen2022globally, ).
Data prefetch scheduling. Given that the sequence of shuffled indices is
predetermined, concurrently with the model’s computation on the current batch,
the read operation for the next batch can be initiated, allowing data
preloading to take place in an overlapped fashion, thereby enhancing
performance. There have been efforts such as NoPFS (10.1145/3458817.3476181, )
and ExoShuffle (exoshuffle, ) which integrate data loading pipeline with data
prefetching scheduling and attempt to overlap data loading with model training
computations. Those solutions are very effective when the shuffled loading
latency is at a relatively low level such as when employing buffered shuffling
or dealing with smaller datasets. However, as discussed in §3.1, when data
loading significantly overshadows training time, the benefits of such
overlapping become marginal and less effective. Moreover, such data prefetch
scheduling usually requires extensive modifications to the underlying software
infrastructure, such as the file system (i.e., NoPFS) and the data processing
system (i.e., ExoShuffle).
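For intuition, the core idea of such prefetch scheduling can be sketched as below; this is our simplification, and NoPFS and ExoShuffle are considerably more sophisticated:

```python
from concurrent.futures import ThreadPoolExecutor

def train_with_prefetch(batches, train_step):
    # `batches` is any iterator of ready-to-use batches; `train_step` consumes one.
    with ThreadPoolExecutor(max_workers=1) as pool:
        it = iter(batches)
        future = pool.submit(next, it, None)       # start loading the first batch
        while (batch := future.result()) is not None:
            future = pool.submit(next, it, None)   # prefetch the next batch ...
            train_step(batch)                      # ... while training on this one
```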
Figure 6. Illustration of distributed training throughput for the RoBERTa-base
model on various subsets of the C4 dataset, differentiated by batch size. 4
NVIDIA A100 (80 GB).
Data-parallel training. By replicating learners across multiple GPUs, each
learner has its own data loading process thus achieving parallel dataset
indexing in the global view (parameter_server, ; li2020pytorch_distributed, ).
To explore the effectiveness of data-parallel training with large shuffled
datasets, we conduct the distributed version of the end-to-end training
throughput experiment, shown as Figure 6. We can see that although data-
parallel training can improve the global end-to-end training throughput
effectively compared to non-distributed training, data-parallel training still
suffers from training throughput degradation when dataset size increases. This
is because each learner’s data loading procedure still degrades with larger
datasets and hence becomes the bottleneck of its training process, resulting
in the same end-to-end throughput degradation as in non-distributed training. Moreover,
utilizing multiple GPUs or hardware resources can be costly, both in terms of
initial investment and operational expenses. Also, distributing data and
aggregating results across GPUs introduce a communication overhead. This can
sometimes offset the training throughput benefits of parallel processing
(nguyen2022globally, ; dp_communication, ).
## 4\. Rinas Design
In this section, we first show the design principle of Rinas as a new paradigm
(4.1). Then, we introduce the design goals of this work (4.2), followed by an
analysis of rethinking the randomness in the general learning process (4.3).
Next, we describe the novel execution model under unordered batch generation
(4.4) and its requirements on dataset representation and indexing interface
(4.5). Finally, this section demonstrates the end-to-end system overview of
Rinas (4.6).
### 4.1. Towards Overcoming Throughput Degradation: A Paradigm Shift
Our approach aims at addressing the critical challenge of loading large
shuffled datasets and the resultant throughput degradation inherent in indices
mapping. By leveraging the deep learning-specific domain knowledge, our
approach unleashes unexplored parallelism in the data loading process.
By accelerating data loading under indices mapping, one of the key benefits
is simplicity: programmers no longer need to work around the issues of the
existing solutions described in 3.2. Practitioners can confidently apply
global shuffling to their model training process to maximize converged
accuracy, without the concern of training slowdown caused by indices mapping.
This key distinction sets our work apart from existing solutions, focusing on
a fundamental shift in how intra-batch data is loaded for batch generation
during the training process.
Moreover, since our method is under the framework of indices mapping, the
integration with existing model training environments is straightforward.
In essence, Rinas proposes a paradigm shift in how data is prepared and
managed at the batch generation stage to achieve dataset shuffling. Rinas paves
the way for an easier-to-manage and more efficient deep learning process with
high accuracy. Next, we discuss Rinas’s design goals in detail.
### 4.2. Design Goals
Our design objectives are twofold:
* •
Performance at scale. Mitigate the data loading bottleneck of indices mapping:
the extensive disk random I/Os due to frequent, non-contiguous data indexing,
which can significantly hamper performance. This is an even more challenging
goal for TB-scale datasets since the random dataset indexing overhead is
positively correlated with dataset size (recall 3.1).
* •
Agnostic to the learning process. Rinas has to preserve and guarantee the
global randomness of shuffling for the model training process, which ensures
that our approach is not specific to particular datasets or models.
The first goal addresses the performance bottleneck challenge we aim to
overcome, while the second goal serves as a constraint to ensure that our
solution remains versatile, and capable of being seamlessly incorporated into
existing deep learning systems for a variety of tasks. To achieve these goals,
Rinas leverages the deep learning-domain knowledge obtained by rethinking the
randomness in the general learning process.
### 4.3. Rethinking Intra-Batch Sample Randomness
A conventional principle in data loading is to adhere strictly to the order of
shuffled datasets. This practice is commonplace in data processing and
analysis, ensuring complete randomness in the data presented to the model. In
model training, the concept of batch size is introduced to specify the number
of samples processed in a single iteration of model training.
The update of the model parameters in one training step can be expressed as:
(1)
$\theta_{\text{new}}=\theta_{\text{old}}-\eta\cdot\nabla_{\theta}\mathcal{L}\left(\frac{1}{N}\sum_{i=1}^{N}\ell(x_{i},y_{i};\theta_{\text{old}})\right)$
In this equation, $\theta$ represents the model parameters, $\eta$ is the
learning rate, $\nabla_{\theta}\mathcal{L}$ denotes the gradient of the loss
function $\mathcal{L}$ with respect to the parameters, and $\ell$ is the per-
sample loss function. $x_{i}$ and $y_{i}$ are the input and target of the
$i$-th sample in the batch, and $N$ is the batch size.
We can see that the loss is computed as the average of individual sample
losses, suggesting that the order of samples within the same batch does not
influence the outcome. This leads us to a deep learning-specific insight: the
intra-batch sample order does not impact the learning outcome, opening
potential avenues for optimization in data loading and training efficiency.
To be specific, this insight frees the design of the data loading pipeline
from strictly respecting the shuffled sample order, allowing an intra-batch
unordered manner instead. This opens additional parallelism in intra-batch
data indexing: with out-of-order retrieval, we can parallelize intra-batch
data indexing without worrying about the order of data arrival.
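This property is easy to check numerically; the toy PyTorch snippet below (ours, not from the paper) permutes the samples within a batch and verifies that the gradient of the averaged loss is unchanged:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

def batch_grad(xb, yb):
    model.zero_grad()
    torch.nn.functional.mse_loss(model(xb), yb).backward()  # mean over the batch
    return model.weight.grad.clone()

perm = torch.randperm(8)
print(torch.allclose(batch_grad(x, y), batch_grad(x[perm], y[perm]), atol=1e-6))
# True: permuting samples within the batch does not change the update.
```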
### 4.4. Unordered Batch Generation
Based on the previous observation of the model training procedure, we can
conceptualize unordered batch generation, a versatile data loading pipeline
that permits batch generation with unordered intra-batch sample retrieval.
Figure 7. A comparative illustration of execution models between conventional
and Rinas’s unordered batch generation.
Figure 7 presents the execution model of both conventional method and
unordered batch generation. There are two major parts of unordered batch
generation: parallel dataset indexing and overlapped preprocessing.
Parallel dataset indexing. By eliminating the necessity to maintain intra-
batch sample order, data retrieval operations for distinct samples within a
batch can be executed in parallel through asynchronous threading. The data
arrival order changes under such execution but does not affect the learning
outcome (recall 4.3).
Overlapped preprocessing. After data is fetched from the disk, it usually
needs to go through the user-defined preprocessing pipeline before being
passed into the model training process. Taking preprocessing as a part of the
batch generation process, we can safely overlap data preprocessing tasks with
data retrieval, thereby achieving additional performance enhancements.
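A condensed sketch of this execution model is shown below; it is illustrative only (the actual Rinas control plane is described in 5), submitting retrieval plus preprocessing of each intra-batch sample to a thread pool and assembling the batch in completion order:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def unordered_batch(dataset, batch_indices, preprocess, pool: ThreadPoolExecutor):
    # Fetch and preprocess each intra-batch sample in parallel; the batch is
    # assembled in arrival order rather than index order.
    futures = [pool.submit(lambda i=i: preprocess(dataset[i])) for i in batch_indices]
    return [f.result() for f in as_completed(futures)]
```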
### 4.5. Dataset Representation and Indexing Interface
The proposed execution model based on unordered batch generation comes with
three requirements on dataset representation and indexing interface.
Indexable dataset representation. Since Rinas is under the framework of
indices mapping, the dataset representation should be indexable, which
distinguishes Rinas from non-indexable, iterable-style dataset
representations like PyTorch iterable datasets and Ray Data (moritz2018ray, ).
This enables the on-demand indexing to retrieve an arbitrary sample of the
dataset at any time.
Interference-free retrieval. The unordered batch generation necessitates the
parallel execution of dataset indexing, which requires the data retrieval
processes to be free of interference with one another. If one dataset indexing
procedure prevents another from running concurrently, the retrieval process is
forced back to a one-by-one manner, eliminating the benefits of unordered batch
generation.
Reusing existing facility. Considering that many dataset abstractions, such as
map-style image datasets in TorchVision, already satisfy the above requirements,
Rinas can employ them directly and seamlessly. For datasets that do not, we
provide a case study in 5 to demonstrate how to convert them into the needed
form.
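For concreteness, a map-style dataset that meets the indexability and interference-free requirements can be as simple as the sketch below (illustrative; the storage-reading logic is a placeholder):

```python
class IndexableDataset:
    """A sketch of an indexable, map-style dataset representation."""

    def __init__(self, sample_locations):
        # index -> location of the sample on storage, known up front
        self.sample_locations = sample_locations

    def __len__(self):
        return len(self.sample_locations)

    def __getitem__(self, index):
        location = self.sample_locations[index]
        # A real implementation would open `location` and decode the sample here;
        # the call must be safe to run concurrently (interference-free retrieval).
        return location
```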
### 4.6. End-to-end View of Rinas
We now describe the end-to-end view of our approach. Figure 8 provides a
system overview of Rinas. The dataset representation against the dataset
storage provides a dataset indexing interface for data retrieval which serves
as a data plane. Rinas’s unordered batch generation serves as a control plane
to parallelize the given data retrieval for intra-batch data.
Figure 8. A system overview of Rinas.
Dataset initialization stage. Data representation needs to contain the mapping
from indices to actual data locations. Such information can be stored
externally and read at the initialization stage or created at runtime,
depending on the storage method of the datasets. The dataset representation
also exposes a dataset indexing interface as the data plane to be leveraged by
the control plane which is the unordered batch generation module.
Batch generation stage. The unordered batch generation module executes
parallel dataset indexing for data within the same batch and pipelines the
preprocessing and IO of different samples. Finally, a batch is generated and
fed into the model training process.
Scope of Rinas. Unlike other existing solutions (ray, ; exoshuffle, ), Rinas
is not designed for general dataset shuffling applications beyond model
training due to the fact that Rinas relies on the deep learning-specific
insight (recall 4.3). Any application that requires the intra-batch sample
order to be aligned with the shuffled indices is out of Rinas’s scope.
## 5\. Implementation
We implement a prototype of Rinas by extending the PyTorch framework and
HuggingFace Datasets library with $\sim$400 lines of Python code. Our
prototype involves an unordered batch generation control plane and an on-
demand, parallelizable data plane, each working as a standalone module.
Specifically, we override the _MapDatasetFetcher class to enforce the
unordered batch generation (4.4), and an asynchronous thread pool is created
to fetch data samples in parallel. Note that the index order is changed
according to the intra-batch fetch scheduling. Once the data sample is
fetched, it is immediately sent to the user-defined preprocessing pipeline,
and different data samples are processed in parallel.
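A rough sketch of this override is given below. It is our approximation rather than the exact Rinas prototype, and PyTorch's internal `_MapDatasetFetcher` (in `torch.utils.data._utils.fetch`) varies across versions:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from torch.utils.data._utils.fetch import _MapDatasetFetcher

class _UnorderedMapDatasetFetcher(_MapDatasetFetcher):
    _pool = ThreadPoolExecutor(max_workers=32)  # shared intra-batch fetch pool

    def fetch(self, possibly_batched_index):
        # Handles only the batched-index (auto-collation) case, for brevity.
        futures = [self._pool.submit(self.dataset.__getitem__, idx)
                   for idx in possibly_batched_index]
        data = [f.result() for f in as_completed(futures)]  # completion order
        return self.collate_fn(data)
```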
### 5.1. A Case Study: Converting HuggingFace Datasets
We provide a case study of how to convert a dataset representation that does
not satisfy our control plane’s requirements into the needed form. The example case
here is the HuggingFace datasets, which are based on Apache Arrow format
(pyarrow, ), a columnar memory format for flat and hierarchical data,
organized for efficient analytic operations on modern hardware like CPUs and
GPUs.
Dataset storage. Specifically, HuggingFace stores datasets as memory-mapped
arrow stream files for efficient stream processing (arrow_flight, ). Arrow
stream format partitions the whole dataset into small data chunks on the
storage and enables users to efficiently iterate through the dataset at the
unit of data chunks. However, the arrow stream file format lacks data chunk
indices, which are necessary for index-based loading. To address this,
HuggingFace chooses to iterate through the entire dataset at dataset
initialization to create a table that contains all the metadata and locations
of data chunks on the storage. Unfortunately, this method leads to two major
drawbacks:
1. (1)
Long dataset initialization time: the iterating procedure at dataset
initialization has a time cost that is linear in the dataset size. For a
dataset that is around 1 TB, it can take up to 10 minutes to finish the
dataset initialization.
2. (2)
Frequent page swaps during shuffled loading. The dataset initialization
process creates a memory-mapped arrow table instance in DRAM, hiding data
chunk access details from dataset developers. Data chunk accessing and
mapping is fully managed by the operating system. Due to the limited DRAM
space compared to the dataset size, most data chunks are paged out during
dataset initialization and need to be re-mapped into DRAM at runtime, which
causes frequent page swaps and blocks the parallel access to data chunks.
Format conversion. Therefore, we need to optimize the HuggingFace
implementation to support the on-demand and parallelized data indexing.
Specifically, we convert the files from the arrow stream format to an arrow
indexable format, utilizing the PyArrow library (pyarrow, ).
Figure 9 presents the difference between accessing data chunks in arrow stream
file format and arrow indexable file format. The arrow stream file format
opens a message stream and sends data chunks in a sequence of messages. The
first message is the data schema, which contains the metadata for data chunks
in the file. The stream reader iterates through data chunks by read_next()
method without knowing the locations of data chunks. On the other hand, when
opening the file, the arrow indexable file format in our prototype reads the
data schema, which contains the metadata, and the file layout, which contains
the data chunk locations. With the file layout loaded at the beginning of file
reading, the indexable format allows accessing data chunks by their index in
the file with get_batch(index).
With this format, the data chunks can be accessed without page swaps or memory
re-mapping since the data chunk access and mapping becomes on-demand and
manageable, facilitating parallel reading of data chunks and aligning with the
efficient and scalable batch generation execution model in Rinas (recall 4.4).
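The conversion itself can be performed in a single streaming pass, as in the hedged sketch below (ours; the actual Rinas conversion code may differ), keeping DRAM usage bounded to one record batch at a time:

```python
import pyarrow as pa

def convert_stream_to_indexable(stream_path, file_path):
    with pa.OSFile(stream_path, "rb") as src, pa.OSFile(file_path, "wb") as sink:
        reader = pa.ipc.open_stream(src)
        with pa.ipc.new_file(sink, reader.schema) as writer:
            for batch in reader:          # iterate data chunks sequentially
                writer.write_batch(batch)

# Reading back: the file format stores chunk locations in its footer, so chunks
# can be accessed on demand, e.g. pa.ipc.open_file(file_path).get_batch(i).
```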
Figure 9. A comparative illustration of arrow stream and indexable formats.
How to implement Rinas over other deep learning frameworks? In addition to
PyTorch, Rinas can be implemented on various deep learning frameworks
(abadi2016tensorflow, ; jax2018github, ; paszke2019pytorch, ). Specifically,
we provide some tips to enforce Rinas on TensorFlow and JAX.
TensorFlow (abadi2016tensorflow, ) utilizes the tf.data module for the data
input pipeline, but it does not support indices mapping. Therefore, to
implement Rinas, first, we need to modify the Dataset.batch function to create
the batched shuffled indices. After the shuffled indices are available for
TensorFlow datasets, we can force the tf.data.Iterator to iterate through the
dataset based on the shuffled indices. When the iterator iterates at the level
of batches, the intra-batch data can be fetched in parallel to achieve Rinas’s
unordered batch generation (4.4).
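As a very rough approximation of these steps (an assumption on our part, not a tested port; a real integration would modify Dataset.batch and the iterator as described above), one can drive a tf.data pipeline with pre-shuffled indices and fetch intra-batch samples in parallel:

```python
import numpy as np
import tensorflow as tf

def fetch_sample(index):
    # Hypothetical indexable lookup; a real port would read the sample at `index`.
    return tf.one_hot(index, depth=10)

num_samples, batch_size = 1_000, 32
shuffled_indices = np.random.permutation(num_samples)
ds = (tf.data.Dataset.from_tensor_slices(shuffled_indices)
        .map(fetch_sample, num_parallel_calls=tf.data.AUTOTUNE)  # parallel fetch
        .batch(batch_size))
```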
Since JAX can directly leverage PyTorch DataLoader for the data input
pipeline, our implementation can be naturally supported by adapting our
modifications to the PyTorch DataLoader.
## 6\. Evaluation
We now present the evaluation of Rinas by answering the following questions.
1. (1)
What improvements does Rinas provide in end-to-end model training
throughput? (6.1)
2. (2)
How do the control and data plane contribute to Rinas’s performance in various
scenarios? (6.2)
3. (3)
How does the global randomness brought by Rinas benefit the learning outcome?
(6.3)
4. (4)
What kinds of overhead does Rinas introduce, and how large are they? (6.4)
Testbed. We set up a testbed on a standard computing node equipped with an AMD
EPYC 7742 CPU with 96 GB RAM and 4 NVIDIA A100 GPUs. Each GPU has 80 GB
memory. In addition, we rely on the cluster-wide WEKA file system for dataset
storage (weka, ), which is commonly adopted in real-world deployments and
supported by high-demand computing platforms like Amazon AWS and Google Cloud.
For software configurations, we utilize CUDA 11.7, PyTorch 2.0.1, HuggingFace
Datasets 2.14.6.dev0, and Transformers 4.34.0.dev0.
Methodology. We evaluate Rinas using two datasets: C4 and ImageNet, which are
typical text and image datasets for large language models and computer vision
models. Correspondingly, we train a typical large language model RoBERTa and a
computer vision model ResNet-152 to demonstrate Rinas’s benefits. To evaluate
the end-to-end training throughput, we conduct a series of model training
iterations, spanning 300 steps. The throughput is calculated by dividing the
total number of samples processed during these iterations by the total time
taken to complete them. The results are averaged over 3 runs. To avoid startup
interference, we run enough warm-up rounds before collecting the results.
Large language model. We train a RoBERTa-base model based on the C4 dataset.
RoBERTa (Robustly Optimized BERT Approach) is an advanced variant of the BERT
model, and the cleaned English branch of C4 dataset has $\sim$305 GB raw text
data and $\sim$1.1 TB after being tokenized by RoBERTa. We compare Rinas
against HuggingFace, which is the default approach to store datasets and
execute language model training (liu2019roberta, ; devlin2019bert, ;
touvron2023llama, ).
Computer vision model. We train the ResNet-152 model using the ImageNet
dataset that has $\sim$140 GB raw image data. The baseline is the training
pipeline that employs the official PyTorch DataLoader and the ImageNet dataset
implementation by TorchVision (torchvision2016, ), which is widely used for
computer vision models (simonyan2015deep, ; he2016deep, ;
dosovitskiy2021image, ).
Additionally, we also test the baselines when supercharged by Ray for both
text and image model training. Notice that we could not convert the used
datasets with Ray Data (ExoShuffle) due to the extensive DRAM usage of such a
transformation (6.4). The only option is to use Ray Train’s wrappers for
HuggingFace and PyTorch DataLoaders.
### 6.1. End-to-end Training Throughput
RoBERTa-base training upon C4. Figure 10 presents the end-to-end distributed
training throughput of the RoBERTa-base model when the batch size increases.
We can see a general trend where Rinas can achieve larger throughput at larger
batch sizes; this is due to the higher parallelism of Rinas’s unordered batch
generation at larger batch sizes.
To further analyze the performance gain provided by Rinas, we measure the
training throughput of the original HuggingFace pipeline as a baseline
and report the speedup of Rinas at different batch sizes in Figure 11.
As Figure 11 shows, Rinas can improve the end-to-end training throughput by
1.54-1.59$\times$, demonstrating the efficiency of Rinas at a wide range of
batch sizes. Rinas achieves higher speedup ratios at larger batch sizes,
suggesting Rinas’s better scalability than HuggingFace in terms of batch
sizes.
In addition, when the HuggingFace baseline is supercharged by Ray, the
training throughput has a slight drop. This is because Ray only takes the data
loading process as its actor’s application workload and does not optimize the
inner pipeline. With additional overhead from Ray’s processes, it is
reasonable to see a slight performance drop.
Figure 10. RoBERTa-base distributed training throughput comparison between
HuggingFace and Rinas. Figure 11. RoBERTa-base distributed training speedup
over HuggingFace at different batch sizes. Figure 12. ResNet-152 distributed
training throughput comparison between using PyTorch dataloader and Rinas.
Figure 13. ResNet-152 distributed training speedup at different batch sizes.
ResNet-152 training on ImageNet. Figure 12 presents the end-to-end distributed
training throughput of the ResNet-152 model when the batch size increases. We
can observe the same trend as with RoBERTa training: Rinas achieves higher
training throughput as the batch size increases. For
comparison, training throughput using PyTorch DataLoader remains at a
relatively low level, suggesting the effectiveness of Rinas in image datasets
like ImageNet.
To further analyze the performance gain by Rinas, Figure 13 shows the speedup
ratios compared to the PyTorch DataLoader baseline at different batch sizes.
Similar to Figure 11, we can also see that Rinas achieves higher speedup
ratios, up to 1.89$\times$, at larger batch sizes, demonstrating Rinas’s
scalability on the image dataset in terms of batch size as well.
We can also notice the slight performance drop when PyTorch DataLoader is
supercharged by Ray which is due to the same reason as explained in the
language model training experiment.
### 6.2. Breakdown Analysis
Figure 14. RoBERTa training throughput improvement breakdown under a batch
size of 32.
In order to analyze the benefits of Rinas’s control plane and data plane, we
analyze their contributions to the aforementioned experiments: On one hand, we
measure the RoBERTa-base end-to-end distributed training throughput at a batch
size of 32 by disabling the control plane to showcase the contribution from
solely data plane. Note that removing the control plane does not require any
modification on the side of the data plane for Rinas. Figure 14 displays the
results. We can see that Rinas’s data plane can improve RoBERTa-base
distributed training throughput by 49.4% when the batch size is 32. When the
control plane is further enforced, we can achieve a 58.9% cumulative
performance improvement, highlighting the great contribution from the Rinas
data plane. On the other hand, in the case of ResNet-152 training on the
ImageNet dataset, both Rinas and PyTorch DataLoader operate on the same
dataset implementation, differing only in their respective control planes
within the DataLoader. This highlights the importance of the Rinas control plane
for image datasets like ImageNet. As for the difference in speedup ratios between
text and image datasets, we would like to
Model | Dataset | Acc. w/ limited shuffle | Acc. w/ global shuffle | Improvement
---|---|---|---|---
TabNet | HIGGS (7.5 GB) | $\sim$70% | $\sim$76% | 1.09$\times$
ResNet-50 | DeepCAM (8.2 TB) | $\sim$78% | $\sim$83% | 1.06$\times$
ResNet-50 | ImageNet-21k (1.1 TB) | $\sim$37% | $\sim$45% | 1.22$\times$
ResNet-50 | ImageNet-1k (140 GB) | $\sim$50% | $\sim$70% | 1.40$\times$
ResNet-18 | criteo (1.3 TB) | $\sim$30% | $\sim$90% | 3.00$\times$
VGG-19 | criteo (1.3 TB) | $\sim$20% | $\sim$90% | 4.50$\times$
Table 2. Model training accuracy discrepancies between limited shuffle and
global shuffle.
### 6.3. Convergence Benefits
As Rinas guarantees global shuffling of the datasets, it benefits all
gradient-descent-based trainers according to the theoretical foundation
(meng2019convergence, ).
Because training a model on large datasets until convergence takes an
extensively long time, Table 2 instead collects and presents the model accuracy
discrepancies reported by other studies (nguyen2022globally, ; exoshuffle, ;
xu2022stochastic, ). The studied models include TabNet (arik2020tabnet, ),
ResNet, and VGG, and the studied datasets include HIGGS (misc_higgs_280, ),
ImageNet, DeepCAM (nguyen2023deepcam, ), and criteo (criteo, ). We can observe
that the accuracy discrepancies vary considerably across models and datasets;
nevertheless, the benefits of global shuffling are always evident. Besides
vision models, in practice tabular data, as well as models such as language
models and time-series models, are even more sensitive to shuffle
quality (tabular_shuffle, ; ray_global_shuffle, ; schluter-
varab-2018-data, ), further highlighting the necessity of global shuffling
in model training.
### 6.4. Resource Overhead
We consider Rinas’s CPU memory usage overhead at dataset preparation and both
CPU and GPU memory overhead at runtime.
Dataset preparation. For PyTorch datasets, Rinas directly optimizes the
shuffled loading performance on top of the existing dataset implementation, so
there is no additional memory usage overhead. As for Rinas’s file
format transformation required for HuggingFace datasets, our transformation
process is based on the PyArrow stream processing which only requires less
than $\sim$100 MB DRAM capacity.
For comparison, the Ray Data module provides a conversion method from PyTorch and
HuggingFace Datasets to Ray Datasets, which can be stored and shuffled in
Ray’s shared-memory object store. However, their implementations do not allow
us to finish such a transformation for either ImageNet or C4 on our testbed,
due to the out-of-memory errors caused by extensive memory usage of such a
transformation. This results from the fact that Ray needs to load the entire
dataset into its local object store before shuffling (exoshuffle, ), while
Rinas follows an indices mapping manner and only loads data into RAM on
demand.
Runtime. At runtime, another CPU memory usage overhead comes from the
additional thread pool employed by Rinas’s control plane implementation, which
relies on the multi-threaded asynchronous intra-batch data fetching.
Since each intra-batch data is mapped and fetched by a separate thread, the
number of threads needs to match the batch size, which requires a large number
of threads to create when the batch size increases. In the case of distributed
ResNet-152 training with a batch size of 256 on 4 GPUs described in Sec. 6.1,
the entire system needs to create a thread pool of 256 threads in each
learner’s process, i.e., 1024 threads in total at a time.
This can potentially stress the CPU and the storage system while other data
loaders without such design typically only require a single thread for each
data loader process. However, considering that the modern training clusters
are typically equipped with many-core CPUs and usually not fully exploited
during model training, Rinas’s additional stress on the CPU can be handled.
Also, since the parallelism degree of Rinas’s unordered batch generation is
positively correlated with batch size, a larger batch size per GPU is
necessary to have a significant speedup which leads to larger CPU and GPU
memory consumption. However, based on the theoretical foundation
(gao2020study, ) and the results of practice (KANDEL2020312, ) , a large batch
size benefits model’s generalization ability. Due to this reason, it has been
a common practice to pursue larger batch sizes for model training with various
memory optimization techniques (korthikanti2022reducing, ; zhao2023fsdp, ;
chen2016training, ).
## 7\. Discussion
This section discusses some immediate concerns one may have about Rinas.
Does Rinas provide high-level dataset representations? Rinas introduces a new
tradeoff between the performance benefits and the dataset representations in
its implementation. Ideally, a unified dataset representation can facilitate
the programming for datasets, such as the memory-mapped Arrow table in
HuggingFace, with which developers don’t need to know where the dataset is
located. However, to gain Rinas’s performance benefits (recall 6), the dataset
developers should manipulate the data retrieval process, majorly involving
locating the data segments on the storage (recall 4.5 and 5 ).
Is Rinas able to be extended to other distributed computing systems? Rinas
introduces a novel shuffled loading paradigm for batch generation models. The
key operation is the unordered batch generation, which mainly relies on
asynchronous random IO to parallelize the dataset indexing (recall 4.4).
Therefore, it is feasible to extend Rinas by enforcing unordered batch
generation, i.e., by integrating the asynchronous random IO into the dataset
iterators supported by a variety of data processing systems, such as Ray,
Spark, and Twister (ray, ; 10.1145/1851476.1851593, ; spark, ). However, since
Rinas is designed with deep learning-specific domain knowledge (recall 4.4),
using Rinas elsewhere requires considering whether the target applications can
also tolerate unordered intra-batch samples.
What performance should be expected from Rinas on even larger datasets? In
general, the size of the dataset used for training should be aligned with the
model size. To investigate Rinas’s real benefits with larger dataset sizes, we
would also need to scale up the model we train. It would be very interesting
to see how Rinas performs on larger datasets used to train a truly large
foundation model, but this is beyond our computational resources. Investigating
this problem is one of our future directions.
## 8\. Related Work
Shuffling in data processing systems. Since MapReduce (dean2008mapreduce, )
and Hadoop (hadoop, ), there are plenty of solutions with a focus on
optimizing disk I/O efficiency and pipelining for in-place shuffle operation
in data processing systems (iShuffle, ; hadoop_shuffle, ; ownership, ).
However, under model training workloads, prior approaches are usually
computationally heavy and hard to pipeline (recall 2).
To address this issue, Ray (ray, ) and ExoShuffle (exoshuffle, ) propose a
distributed futures-based shuffle system to provide better flexibility and
interoperability with model training workloads. However, those two proposals
mandate loading the entire dataset into their object store before they read and
shuffle the data, causing infeasible demands on DRAM resources for large
datasets (recall 6.4).
Shuffling in deep learning systems. Due to the inefficiency of shuffled
loading with indices mapping in large datasets (recall 3.1), the programmers
may have to use buffered shuffling at the cost of model convergence accuracy.
This inspires some deep learning systems to modify the global access order and
the buffer eviction scheme accordingly (sun2022solar, ; DeepIO_buffer_shuffle,
) to maximize the data reuse and the buffer hit rate. However, similar to
partial shuffling, this breaks the true randomness of global shuffling
regardless of whether the inter-batch or inter-epoch order is modified, which
limits the applicable scope to specific models and datasets. Similar to NoPFS
(10.1145/3458817.3476181, ), they usually also involve significant efforts in
modifying the underlying software infrastructure.
Notably, all these solutions for reducing shuffle-related overheads in model
training can leverage Rinas’s unordered batch generation to further enhance
the performance. Our insight is obtained from an observation of the general
model training process.
Shuffle pattern exploration for ML. Some researchers work on exploring
theoretical convergence bounds of different local shuffle patterns
(meng2019convergence, ; Nguyen2020AUC, ; Ahn2020SGDWS, ;
Rajput2021PermutationBasedSI, ), with the hope of finding an alternative to the
expensive global shuffling. However, their analysis is usually limited to
convex problems with low-dimensional data, leaving their effectiveness on
real-world problems and datasets not fully explored.
Data loading parallelism. Previous proposals also leverage multi-core
parallelism to uncover data loading parallelism in different types of systems.
First, while our paper focuses on faster data loading from local SSDs, some
works focus on faster data loading from a remote datastore or file system
through the network. Barclay et al. (barclay1994loading, ) leverage dataflow
parallelism for memory-to-memory data loading via parallel TCP connections.
Yang et al. (yang2019accelerating, ) propose to accelerate data loading for
PyTorch-based distributed training systems from network-attached remote
storage. Second, some other works focus on local data loading, the same as
this work, leveraging asynchronous I/O to unleash unordered parallelism on
in-memory (lim2014mica, ; zhao2022altocumulus, ) or on-disk databases
(cheng2014scanraw, ; dziedzic2017dbms, ). ScanRaw speculatively uses more
cores to load data when additional disk bandwidth is available
(cheng2014scanraw, ). Dziedzic et al. (dziedzic2017dbms, ) share a similar
observation with this paper, that the file format matters for data loading
performance, and draw the conclusion that data loading is CPU-bound. We
inherit previous work’s insights on leveraging asynchronous I/O to solve an
unexplored problem: relaxing the data loading order within each batch in model
training.
Preprocessing acceleration. Some researchers are focusing on the
preprocessing acceleration for higher batch generation efficiency. For
example, NVIDIA Data Loading Library (DALI) (DALI, ) accelerates data loading
for image datasets by moving the image preprocessing workload to the GPU. Since
DALI and our work focus on different stages of the data loading pipeline and are
thus orthogonal to each other, our work can be combined with DALI when working
with image datasets.
## 9\. Conclusions
Loading shuffled large datasets has become the key bottleneck in model
training throughput. However, existing solutions face various limitations
and usually leave programmers with a trade-off between training speed and
convergence accuracy. In this work, we propose Rinas, a data loading framework
with the design of a novel execution model with unordered batch generation. By
leveraging the deep learning-specific domain knowledge, Rinas unleashes the
additional parallelism space in the shuffled loading process. Our evaluation
shows that Rinas accelerates model training substantially under the guarantee
of global randomness in shuffling. This work does not raise any ethical
issues.
## References
* [1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning, 2016.
* [2] Tanveer Ahmad. Benchmarking apache arrow flight - a wire-speed protocol for data transfer, querying and microservices. In Benchmarking in the Data Center: Expanding to the Cloud, BID’22, New York, NY, USA, 2022. Association for Computing Machinery.
* [3] Kwangjun Ahn, Chulhee Yun, and Suvrit Sra. Sgd with shuffling: optimal rates without component convexity and large epoch requirements. arXiv: Optimization and Control, 2020.
* [4] Sercan O. Arik and Tomas Pfister. Tabnet: Attentive interpretable tabular learning, 2020.
* [5] Tom Barclay, Robert Barnes, Jim Gray, and Prakash Sundaresan. Loading databases using dataflow parallelism. ACM Sigmod Record, 23(4):72–83, 1994.
* [6] Koh Bee Hock David, Chin Lim, Hasnae Rahimi, and Wai Lok Woo. Deep temporal convolution network for time series classification. Sensors, 21:603, 01 2021.
* [7] James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018.
* [8] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost, 2016.
* [9] Yu Cheng and Florin Rusu. Parallel in-situ data processing with speculative loading. In Proceedings of the 2014 ACM SIGMOD international conference on Management of data, pages 1287–1298, 2014.
* [10] Together Computer. Redpajama: An open source recipe to reproduce llama training dataset. https://github.com/togethercomputer/RedPajama-Data, 2023. Accessed: 2023-11-27.
* [11] LIBSVM Data. Criteo dataset. https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/., 2011. Accessed: 2023-11-27.
* [12] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
* [13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [14] PyTorch developers and maintainers. PyTorch dataloader. https://pytorch.org/docs/stable/data.html#map-style-datasets. Accessed: 2023-11-08.
* [15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
* [16] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale, 2021.
* [17] Nikoli Dryden, Roman Böhringer, Tal Ben-Nun, and Torsten Hoefler. Clairvoyant prefetching for distributed machine learning i/o. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’21, New York, NY, USA, 2021. Association for Computing Machinery.
* [18] Nikoli Dryden, Tim Moon, Sam Ade Jacobs, and Brian Van Essen. Communication quantization for data-parallel training of deep neural networks. In 2016 2nd Workshop on Machine Learning in HPC Environments (MLHPC), pages 1–8, 2016.
* [19] Adam Dziedzic, Manos Karpathiotakis, Ioannis Alagiannis, Raja Appuswamy, and Anastasia Ailamaki. Dbms data loading: An analysis on modern hardware. In Data Management on New Hardware: 7th International Workshop on Accelerating Data Analysis and Data Management Systems Using Modern Processor and Storage Architectures, ADMS 2016 and 4th International Workshop on In-Memory Data Management and Analytics, IMDM 2016, New Delhi, India, September 1, 2016, Revised Selected Papers 4, pages 95–117. Springer, 2017.
* [20] Jaliya Ekanayake, Hui Li, Bingjing Zhang, Thilina Gunarathne, Seung-Hee Bae, Judy Qiu, and Geoffrey Fox. Twister: A runtime for iterative mapreduce. In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, HPDC ’10, page 810–818, New York, NY, USA, 2010. Association for Computing Machinery.
* [21] Fengli Gao and Huicai Zhong. Study on the large batch size training of neural networks based on the second order gradient, 2020.
* [22] Rong Gu, Kai Zhang, Zhihao Xu, Yang Che, Bin Fan, Haojun Hou, Haipeng Dai, Li Yi, Yu Ding, Guihai Chen, and Yihua Huang. Fluid: Dataset abstraction and elastic acceleration for cloud-native deep learning training jobs. In 2022 IEEE 38th International Conference on Data Engineering (ICDE), pages 2182–2195, 2022.
* [23] Yanfei Guo, Jia Rao, Dazhao Cheng, and Xiaobo Zhou. ishuffle: Improving hadoop performance with shuffle-on-write. IEEE Transactions on Parallel and Distributed Systems, 28(6):1649–1662, 2017.
* [24] Anurag Gupta, Deepak Agarwal, Derek Tan, Jakub Kulesza, Rahul Pathak, Stefano Stefani, and Vidhya Srinivasan. Amazon redshift and the case for simpler data warehouses. In Proceedings of the 2015 ACM SIGMOD international conference on management of data, pages 1917–1923, 2015.
* [25] Sasun Hambardzumyan, Abhinav Tuli, Levon Ghukasyan, Fariz Rahman, Hrant Topchyan, David Isayan, Mikayel Harutyunyan, Tatevik Hakobyan, Ivo Stranic, and Davit Buniatyan. Deep lake: a lakehouse for deep learning. Proceedings of CIDR, 2023.
* [26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [27] HuggingFace. Differences between dataset and iterabledataset. https://huggingface.co/docs/datasets/about_mapstyle_vs_iterable. Accessed: 2023-11-27.
* [28] AnyScale Inc. When to global shuffle. https://docs.ray.io/en/latest/data/performance-tips.html#when-should-you-use-global-per-epoch-shuffling. Accessed: 2023-11-27.
* [29] WEKA Inc. Weka file system. https://docs.weka.io/overview/about, 2023. Accessed: 2023-11-27.
* [30] Ibrahem Kandel and Mauro Castelli. The effect of batch size on the generalizability of the convolutional neural networks on a histopathology dataset. ICT Express, 6(4):312–315, 2020.
* [31] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models, 2022.
* [32] Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175–184, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics.
* [33] Haidong Li, Jiongcheng Li, Xiaoming Guan, Binghao Liang, Yuting Lai, and Xinglong Luo. Research on overfitting of deep learning. In 2019 15th international conference on computational intelligence and security (CIS), pages 78–81. IEEE, 2019.
* [34] Jingui Li, Xuelian Lin, Xiaolong Cui, and Yue Ye. Improving the shuffle of hadoop mapreduce. In 2013 IEEE 5th International Conference on Cloud Computing Technology and Science, volume 1, pages 266–273, 2013.
* [35] Mu Li, David G. Andersen, Jun Woo Park, Alexander J. Smola, Amr Ahmed, Vanja Josifovski, James Long, Eugene J. Shekita, and Bor-Yiing Su. Scaling distributed machine learning with the parameter server. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 583–598, Broomfield, CO, October 2014. USENIX Association.
* [36] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, and Soumith Chintala. Pytorch distributed: Experiences on accelerating data parallel training, 2020.
* [37] Wing Lian, Bleys Goodson, Eugene Pentland, Austin Cook, Chanvichet Vong, and ”Teknium”. Openorca: An open dataset of gpt augmented flan reasoning traces. https://https://huggingface.co/Open-Orca/OpenOrca, 2023. Accessed: 2023-11-27.
* [38] Hyeontaek Lim, Dongsu Han, David G Andersen, and Michael Kaminsky. $\\{$MICA$\\}$: A holistic approach to fast $\\{$In-Memory$\\}$$\\{$Key-Value$\\}$ storage. In Proceedings of the 11th USENIX Symposium on Networked Systems Design and Implementation (NSDI), pages 429–444, 2014.
* [39] Jie Liu, Bogdan Nicolae, and Dong Li. Lobster: Load balance-aware i/o for distributed dnn training. In Proceedings of the 51st International Conference on Parallel Processing, ICPP ’22, New York, NY, USA, 2023. Association for Computing Machinery.
* [40] Shuhao Liu, Hao Wang, and Baochun Li. Optimizing shuffle in wide-area data analytics. In 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), pages 560–571, 2017.
* [41] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach, 2019.
* [42] Frank Sifei Luan, Stephanie Wang, Samyukta Yagati, Sean Kim, Kenneth Lien, Isaac Ong, Tony Hong, Sangbin Cho, Eric Liang, and Ion Stoica. Exoshuffle: An extensible shuffle architecture. In Proceedings of the ACM SIGCOMM 2023 Conference, ACM SIGCOMM ’23, page 564–577, New York, NY, USA, 2023. Association for Computing Machinery.
* [43] TorchVision maintainers and contributors. Torchvision: Pytorch’s computer vision library. https://github.com/pytorch/vision, 2016. Accessed: 2023-11-27.
* [44] Amrita Mathuriya, Deborah Bard, Peter Mendygral, Lawrence Meadows, James Arnemann, Lei Shao, Siyu He, Tuomas Karna, Daina Moise, Simon J. Pennycook, Kristyn Maschoff, Jason Sewall, Nalini Kumar, Shirley Ho, Mike Ringenburg, Prabhat, and Victor Lee. Cosmoflow: Using deep learning to learn the universe at scale, 2018.
* [45] Qi Meng, Wei Chen, Yue Wang, Zhi-Ming Ma, and Tie-Yan Liu. Convergence analysis of distributed stochastic gradient descent with shuffling. Neurocomputing, 337:46–57, 2019.
* [46] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 561–577, Carlsbad, CA, October 2018. USENIX Association.
* [47] Philipp Moritz, Robert Nishihara, Stephanie Wang, Alexey Tumanov, Richard Liaw, Eric Liang, Melih Elibol, Zongheng Yang, William Paul, Michael I. Jordan, and Ion Stoica. Ray: A distributed framework for emerging ai applications, 2018.
* [48] Duy-Thanh Nguyen, Abhiroop Bhattacharjee, Abhishek Moitra, and Priyadarshini Panda. Deepcam: A fully cam-based inference accelerator with variable hash lengths for energy-efficient deep neural networks, 2023.
* [49] Lam M. Nguyen, Quoc Tran-Dinh, D. Phan, Phuong Ha Nguyen, and Marten van Dijk. A unified convergence analysis for shuffling-type gradient methods. J. Mach. Learn. Res., 22:207:1–207:44, 2020.
* [50] Truong Thao Nguyen, François Trahay, Jens Domke, Aleksandr Drozd, Emil Vatai, Jianwei Liao, Mohamed Wahib, and Balazs Gerofi. Why globally re-shuffle? revisiting data shuffling in large scale deep learning. In 2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 1085–1096. IEEE, 2022.
* [51] Bogdan Nicolae, Carlos Costa, Claudia Misale, Kostas Katrinis, and Yoonho Park. Towards memory-optimized data shuffling patterns for big data analytics. In 2016 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), pages 409–412. IEEE, 2016.
* [52] NVIDIA. Nvidia data loading library. https://docs.nvidia.com/deeplearning/dali/user-guide/docs/. Accessed: 2023-11-27.
* [53] Iason Ofeidis, Diego Kiedanski, and Leandros Tassiulas. An overview of the data-loader landscape: Comparative performance analysis. arXiv preprint arXiv:2209.13705, 2022.
* [54] Frank Olken and Doron Rotem. Random sampling from databases: a survey. Statistics and Computing, 5:25–42, 1995.
* [55] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library, 2019.
* [56] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints, 2019.
* [57] Shashank Rajput, Kangwook Lee, and Dimitris Papailiopoulos. Permutation-based sgd: Is random optimal? ArXiv, abs/2102.09718, 2021.
* [58] Neal Richardson, Ian Cook, Nic Crane, Dewey Dunnington, Romain François, Jonathan Keane, Dragoș Moldovan-Grünfeld, Jeroen Ooms, Jacob Wujciak-Jens, and Apache Arrow. arrow: Integration to ’Apache’ ’Arrow’, 2023. R package version 14.0.1, https://arrow.apache.org/docs/r/.
# Optimal verification of entangled states with local measurements
Sam Pallister<EMAIL_ADDRESS>School of Mathematics, University of Bristol, UK; Quantum Engineering Centre for Doctoral Training, University of Bristol, UK
Noah Linden<EMAIL_ADDRESS>School of Mathematics, University of Bristol, UK
Ashley Montanaro<EMAIL_ADDRESS>School of Mathematics, University of Bristol, UK
###### Abstract
Consider the task of verifying that a given quantum device, designed to
produce a particular entangled state, does indeed produce that state. One
natural approach would be to characterise the output state by quantum state
tomography; or alternatively to perform some kind of Bell test, tailored to
the state of interest. We show here that neither approach is optimal amongst
local verification strategies for two qubit states. We find the optimal
strategy in this case and show that quadratically fewer total measurements are
needed to verify to within a given fidelity than in published results for
quantum state tomography, Bell test, or fidelity estimation protocols. We also
give efficient verification protocols for any stabilizer state. Additionally,
we show that requiring that the strategy be constructed from local, non-
adaptive and non-collective measurements only incurs a constant-factor penalty
over a strategy without these restrictions.
Efficient and reliable quantum state preparation is a necessary step for all
quantum technologies. However, characterisation and verification of such
devices is typically a time-consuming and computationally difficult process.
For example, tomographic reconstruction of a state of 8 ions required taking
$\sim 650,000$ measurements over 10 hours, and a statistical analysis that
took far longer Häffner _et al._ (2005); verification of a few-qubit photonic
state is similarly challenging Carolan _et al._ (2014); Laing and O’Brien
(2012). This is also the case in tomography of continuous-variable systems
Lvovsky and Raymer (2009); Bellini _et al._ (2012); Amosov _et al._ (2012).
One may instead resort to non-tomographic methods to verify that a device
reliably outputs a particular state, but such methods typically either: (a)
assume that the output state is within some special family of states, for
example in compressed sensing Flammia _et al._ (2012); Gross _et al._ (2010)
or matrix product state tomography Cramer _et al._ (2010); or (b) extract
only partial information about the state, such as when estimating entanglement
witnesses Tóth and Gühne (2005a, b).
Here, we derive the optimal local verification strategy for common entangled
states and compare its performance to bounds for non-adaptive quantum state
tomography in Sugiyama _et al._ (2013) and the fidelity estimation protocol
in Flammia and Liu (2011). Specifically, we demonstrate non-adaptive
verification strategies for arbitrary two-qubit states and stabilizer states
of $N$ qubits that are constructed from local measurements, and require
quadratically fewer copies to verify to within a given fidelity than for these
previous protocols. Moreover, the requirement that the measurements be local
incurs only a constant factor penalty over the best non-local strategy, even
if collective and adaptive measurements are allowed.
## Premise.
Colloquially, a quantum state verification protocol is a procedure for gaining
confidence that the output of some device is a particular state over any
other. However, for any scheme involving measurements on a finite number of
copies of the output state, one can always find an alternative state within
some sufficiently small distance that is guaranteed to fool the verifier.
Furthermore, the outcomes of measurements are, in general, probabilistic and a
verification protocol collects a finite amount of data; and so any statement
about verification can only be made up to some finite statistical confidence.
The only meaningful statement to make in this context is the statistical
inference that the state output from a device sits within a ball of a certain
small radius (given some metric) of the correct state, with some statistical
confidence. Thus the outcome of a state verification protocol is a statement
like: “the device outputs copies of a state that has $99\%$ fidelity with the
target, with $90\%$ probability”. Note that this is different to the setting
of state tomography; a verification protocol answers the question: “Is the
state ${|{\psi}\rangle}?$” rather than the more involved tomographic question:
“Which state do I have?”. Hence, unlike tomography, a verification protocol
may give no information about the true state if the protocol fails.
We now outline the framework for verification protocols that we consider. Take
a verifier with access to some set of allowed measurements, and a device that
produces states $\sigma_{1},\sigma_{2},\ldots\sigma_{n}$ which are supposed to
all be ${|{\psi}\rangle}$, but may in practice be different from
${|{\psi}\rangle}$ or each other. We have the promise that either
$\sigma_{i}={|{\psi}\rangle}\\!{\langle{\psi}|}$ for all $i$, or
${\langle{\psi}|}\sigma_{i}{|{\psi}\rangle}\leq 1-\epsilon$ for all $i$. The
verifier must determine which is the case with worst-case failure probability
$\delta$.
The protocol proceeds as follows. For each $\sigma_{i}$, the verifier randomly
draws a binary-outcome projective measurement $\\{P_{j},\mathds{1}-P_{j}\\}$
from a prespecified set $\mathcal{S}$ with some probability $\mu^{i}_{j}$.
Label the outcomes “pass” and “fail”; in a “pass” instance the verifier
continues to state $\sigma_{i+1}$, otherwise the protocol ends and the
verifier concludes that the state was not ${|{\psi}\rangle}$. If the protocol
passes on all $n$ states, then the verifier concludes that the state was
${|{\psi}\rangle}$. We impose the constraint that every $P_{j}\in\mathcal{S}$
_always_ accepts when $\sigma_{i}={|{\psi}\rangle}\\!{\langle{\psi}|}$,
$\forall i$ (i.e. that ${|{\psi}\rangle}$ is in the “pass” eigenspace of every
projector $P_{j}\in\mathcal{S}$). This may seem a prohibitively strong
constraint, but we later demonstrate that it is both achievable for the sets
of states we consider and is always asymptotically favourable to the verifier.
The maximal probability that the verifier passes on copy $i$ is
$\text{Pr}[\text{Pass on copy }i]=\max_{\begin{subarray}{c}\sigma\\\
{\langle{\psi}|}\sigma{|{\psi}\rangle}\leq
1-\epsilon\end{subarray}}\operatorname{tr}(\Omega_{i}\sigma),$ (1)
where $\Omega_{i}=\sum_{j}\mu_{j}^{i}P_{j}$. However, the verifier seeks to
minimise this quantity for each $\Omega_{i}$ and hence it suffices to take a
fixed set of probabilities and projectors $\\{\mu_{j},P_{j}\\}$, independent
of $i$. Then the verifier-adversary optimisation is
$\min_{\Omega}\max_{\begin{subarray}{c}\sigma\\\
{\langle{\psi}|}\sigma{|{\psi}\rangle}\leq
1-\epsilon\end{subarray}}\operatorname{tr}(\Omega\sigma)\coloneqq
1-\Delta_{\epsilon},$ (2)
where $\Omega=\sum_{j}\mu_{j}P_{j}$. We call $\Omega$ a strategy.
$\Delta_{\epsilon}$ is the expected probability that the state $\sigma$ fails
a single measurement. Then the maximal worst-case probability that the
verifier fails to detect that we are in the “bad” case that
${\langle{\psi}|}\sigma_{i}{|{\psi}\rangle}\leq 1-\epsilon$ for all $i$ is
$(1-\Delta_{\epsilon})^{n}$, so to achieve confidence $1-\delta$ it is
sufficient to take
$n\geq\frac{\ln\delta^{-1}}{\ln((1-\Delta_{\epsilon})^{-1})}\approx\frac{1}{\Delta_{\epsilon}}\ln\delta^{-1}.$
(3)
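As a concrete illustration of Eq. (3), the copy count can be evaluated directly. The short Python sketch below is only an illustration (the helper name `copies_needed` is ours and is not part of the protocol); it assumes a strategy whose per-copy rejection probability on bad states is $\Delta_{\epsilon}$, and the numbers in the example are chosen arbitrarily.

```python
import numpy as np

def copies_needed(delta_eps: float, delta: float) -> int:
    """Smallest n with (1 - delta_eps)^n <= delta, cf. Eq. (3)."""
    return int(np.ceil(np.log(1 / delta) / np.log(1 / (1 - delta_eps))))

# Example (assumed numbers): if Delta_eps = eps = 0.01 and delta = 0.1,
# roughly eps^{-1} * ln(1/delta) ~ 230 copies are needed.
print(copies_needed(0.01, 0.1))
```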
Protocols of this form satisfy some useful operational properties:
1. A.
_Non-adaptivity_. The strategy is fixed from the outset and depends only on
the mathematical description of ${|{\psi}\rangle}$, rather than the choices of
any prior measurements or their measurement outcomes.
2. B.
_Future-proofing_. The strategy is independent of the infidelity $\epsilon$,
and gives a viable strategy for any choice of $\epsilon$. Thus an
experimentalist is able to arbitrarily decrease the infidelity $\epsilon$
within which verification succeeds by simply taking more total measurements
following the strategy prescription, rather than modifying the prescription
itself. The experimentalist is free to choose an arbitrary $\epsilon>0$ and be
guaranteed that the strategy still works in verifying ${|{\psi}\rangle}$.
One may consider more general non-adaptive verification protocols given
$\mathcal{S}$ and $\\{\sigma_{i}\\}$, where measurements do not output “pass”
with certainty given input ${|{\psi}\rangle}$, and the overall determination
of whether to accept or reject is based on a more complicated estimator built
from the relative frequency of “pass” and “fail” outcomes. However, we show in
the Supplemental Material that these strategies require, asymptotically,
quadratically more measurements in $\epsilon$ than those where
${|{\psi}\rangle}$ is always accepted. We will also see that the protocol
outlined above achieves the same scaling with $\epsilon$ and $\delta$ as the
globally optimal strategy, up to a constant factor, and so any other strategy
(even based on non-local, adaptive or collective measurements) would yield
only at most constant-factor improvements.
Given no constraints on the verifier’s measurement prescription, the optimal
strategy is to just project on to ${|{\psi}\rangle}$. In this case, the fewest
number of measurements needed to verify to confidence $1-\delta$ and fidelity
$1-\epsilon$ is
$n_{opt}=\frac{-1}{\ln\left(1-\epsilon\right)}\ln\frac{1}{\delta}\approx\frac{1}{\epsilon}\ln\frac{1}{\delta}$
(see the Supplemental Material). However, in general the projector
${|{\psi}\rangle}\\!{\langle{\psi}|}$ will be non-local, which has the
disadvantage of being harder to implement experimentally. This is particularly
problematic in quantum optics, for example, where deterministic, unambiguous
discrimination of a complete set of Bell states is impossible Vaidman and
Yoran (1999); Calsamiglia and Lütkenhaus (2001); Ewert and van Loock (2014).
Thus, for each copy there is a fixed probability of the measurement returning
a “null” outcome; hence, regardless of the optimality of the verification
strategy, merely the probability of its successful operation decreases
exponentially with the number of measurements. Instead, we seek optimal
measurement strategies that satisfy some natural properties that make them
both physically realisable and useful to a real-world verifier. We impose the
following properties:
1. 1.
_Locality_. $\mathcal{S}$ contains only measurements corresponding to local
observables, acting on a single copy of the output state.
2. 2.
_Projective measurement_. $\mathcal{S}$ contains only binary-outcome,
projective measurements, rather than more elaborate POVMs.
3. 3.
_Trust_. The physical operation of each measurement device is faithful to its
mathematical description; it behaves as expected, without experimental error.
Thus for multipartite states we only consider strategies where each party
locally performs a projective measurement on a single copy, and the parties
accept or reject based on their collective measurement outcomes. We also
highlight the trust requirement to distinguish from self-testing protocols
Mayers and Yao (2004); McKague _et al._ (2012); Yang and Navascués (2013).
Given this prescription and the set of physically-motivated restrictions, we
now derive the optimal verification strategy for some important classes of
states. To illustrate our approach, we start with the case of a Bell state
before generalising to larger classes of states.
## Bell state verification.
Consider the case of verifying the Bell state
${|{\Phi^{+}}\rangle}=\frac{1}{\sqrt{2}}({|{00}\rangle}+{|{11}\rangle})$. If
we maintain a strategy where all measurements accept ${|{\Phi^{+}}\rangle}$
with certainty, then it must be the case that
$\Omega{|{\Phi^{+}}\rangle}={|{\Phi^{+}}\rangle}$. The optimisation problem
for the verifier-adversary pair is then given by $\Delta_{\epsilon}$:
$\Delta_{\epsilon}=\max_{\Omega}\min_{\begin{subarray}{c}\sigma\\\
{\langle{\psi}|}\sigma{|{\psi}\rangle}\leq
1-\epsilon\end{subarray}}\operatorname{tr}[\Omega({|{\Phi^{+}}\rangle}\\!{\langle{\Phi^{+}}|}-\sigma)].$
(4)
However, we show in the Supplemental Material that it is never beneficial for
the adversary to: (a) choose a non-pure $\sigma$; or (b) to pick a $\sigma$
such that ${\langle{\psi}|}\sigma{|{\psi}\rangle}<1-\epsilon$. Rewrite
$\sigma={|{\psi_{\epsilon}}\rangle}\\!{\langle{\psi_{\epsilon}}|}$, where
${|{\psi_{\epsilon}}\rangle}=\sqrt{1-\epsilon}{|{\Phi^{+}}\rangle}+\sqrt{\epsilon}{|{\psi^{\bot}}\rangle}$
for some state ${|{\psi^{\bot}}\rangle}$ such that
$\braket{\Phi^{+}}{\psi^{\bot}}=0$. Then,
$\displaystyle\Delta_{\epsilon}$
$\displaystyle=\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle}}\epsilon({\langle{\Phi^{+}}|}\Omega{|{\Phi^{+}}\rangle}-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle})$
$\displaystyle-2\sqrt{\epsilon(1-\epsilon)}\text{Re}{\langle{\Phi^{+}}|}\Omega{|{\psi^{\bot}}\rangle}.$
(5)
Given that $\Omega{|{\Phi^{+}}\rangle}={|{\Phi^{+}}\rangle}$, we can simplify
by noting that ${\langle{\Phi^{+}}|}\Omega{|{\Phi^{+}}\rangle}=1$ and
${\langle{\Phi^{+}}|}\Omega{|{\psi^{\bot}}\rangle}=0$. Thus,
$\displaystyle\Delta_{\epsilon}$
$\displaystyle=\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle}}\epsilon(1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle})$
$\displaystyle=\epsilon(1-\min_{\Omega}\max_{{|{\psi^{\bot}}\rangle}}{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}),$
(6)
where the verifier controls $\Omega$ and the adversary controls
${|{\psi^{\bot}}\rangle}$. Given that ${|{\Phi^{+}}\rangle}$ is itself an
eigenstate of $\Omega$, the worst-case scenario for the verifier is for the
adversary to choose ${|{\psi^{\bot}}\rangle}$ as the eigenstate of $\Omega$
with the next largest eigenvalue. If we diagonalise $\Omega$ we can write
$\Omega={|{\Phi^{+}}\rangle}\\!{\langle{\Phi^{+}}|}+\sum_{j=1}^{3}\nu_{j}{|{\psi^{\bot}_{j}}\rangle}\\!{\langle{\psi^{\bot}_{j}}|}$,
where $\braket{\Phi^{+}}{\psi^{\bot}_{j}}=0\;\forall j$. The adversary picks
the state ${|{\psi^{\bot}_{\text{max}}}\rangle}$ with corresponding eigenvalue
$\nu_{\text{max}}=\max_{j}\nu_{j}$. Now, consider the trace of $\Omega$: if
$\operatorname{tr}(\Omega)<2$ then the strategy must be a convex combination
of local projectors, at least one of which is rank 1. However, the only rank 1
projector that satisfies $P^{+}{|{\Phi^{+}}\rangle}={|{\Phi^{+}}\rangle}$ is
$P^{+}={|{\Phi^{+}}\rangle}\\!{\langle{\Phi^{+}}|}$, which is non-local; and
therefore $\operatorname{tr}(\Omega)\geq 2$. Combining this with the
expression for $\Omega$ above gives
$\operatorname{tr}(\Omega)=1+\sum_{j}\nu_{j}\geq 2$. It is always beneficial
to the verifier to saturate this inequality, as any extra weight on the
subspace orthogonal to ${|{\Phi^{+}}\rangle}$ can only increase the chance of
being fooled by the adversary. Thus the verifier is left with the optimisation
$\min\nu_{\text{max}}=\min\max_{k}\nu_{k},\quad\sum_{k}\nu_{k}=1.$ (7)
This expression is optimised for $\nu_{j}=\frac{1}{3},j=1,2,3$. In this case,
$\Omega=\frac{\mathds{1}}{3}$ on the subspace orthogonal to the state
${|{\Phi^{+}}\rangle}$. Then we can rewrite $\Omega$ as
$\Omega=\frac{1}{3}(P^{+}_{XX}+P^{+}_{-YY}+P^{+}_{ZZ}),$ (8)
where $P^{+}_{XX}$ is the projector onto the positive eigensubspace of the
tensor product of Pauli matrices $XX$ (and likewise for $-YY$ and $ZZ$). The
operational interpretation of this optimal strategy is then explicit: for each
copy of the state, the verifier randomly chooses a measurement setting from
the set $\\{XX,-YY,ZZ\\}$, each with probability $\frac{1}{3}$, and accepts only
on receipt of outcome “+1” on all $n$ measurements. Note that we could expand
$\Omega$ differently, for example by conjugating each term in the above
expression by any local operator that leaves ${|{\Phi^{+}}\rangle}$ alone; the
decomposition above is only one of a family of optimal strategies. As for
scaling, we know that
$\Delta_{\epsilon}=\epsilon(1-\nu_{\text{max}})=\frac{2\epsilon}{3}$, and the
number of measurements needed to verify the Bell state ${|{\Phi^{+}}\rangle}$
is then
$n_{opt}=\left[\ln\left(\frac{3}{3-2\epsilon}\right)\right]^{-1}\ln{\frac{1}{\delta}}\approx\frac{3}{2\epsilon}\ln\frac{1}{\delta}$.
Note that this is only worse than the optimal non-local strategy by a factor
of $1.5$.
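The numbers quoted above are straightforward to confirm numerically. The following sketch (our own check, not part of the protocol) builds the strategy of Eq. (8), verifies that ${|{\Phi^{+}}\rangle}$ passes every measurement with certainty, and recovers $\nu_{\text{max}}=\frac{1}{3}$ and hence $\Delta_{\epsilon}=\frac{2\epsilon}{3}$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

def pos_eigenspace(op):
    """Projector onto the +1 eigenspace of a two-qubit Pauli product (op squares to the identity)."""
    return (np.eye(4) + op) / 2

# Eq. (8): equal mixture of the measurements XX, -YY, ZZ
omega = (pos_eigenspace(np.kron(X, X))
         + pos_eigenspace(-np.kron(Y, Y))
         + pos_eigenspace(np.kron(Z, Z))) / 3

phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
assert np.allclose(omega @ phi_plus, phi_plus)      # |Phi+> is accepted with certainty

# Largest eigenvalue of Omega on the subspace orthogonal to |Phi+>
pi_perp = np.eye(4) - np.outer(phi_plus, phi_plus)
nu_max = np.linalg.eigvalsh(pi_perp @ omega @ pi_perp).max()
print(nu_max)   # 1/3, so Delta_eps = eps * (1 - nu_max) = 2 * eps / 3
```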
In comparison, consider instead verifying a Bell state by performing a CHSH
test. Then even in the case of trusted measurements, the total number of
measurements scales like $O\left(\frac{1}{\epsilon^{2}}\right)$ Sugiyama
(2014), which is quadratically worse than the case of measuring the
stabilizers $\\{XX,-YY,ZZ\\}$. This suboptimal scaling is shared by the known
bounds for non-adaptive quantum state tomography with single-copy measurements
in Sugiyama _et al._ (2013) and fidelity estimation in Flammia and Liu
(2011). See da Silva _et al._ (2011); Ferrie and Blume-Kohout (2016);
Struchalin _et al._ (2016) for further discussion of this scaling in
tomography. Additionally, two-qubit tomography potentially requires five times
as many measurement settings. We also note that a similar quadratic
improvement was derived in adaptive quantum state tomography in Mahler _et
al._ (2013), in the sample-optimal tomographic scheme in Haah _et al._ (2016)
and in the quantum state certification scheme in Bădescu _et al._ (2017);
however, the schemes therein assume access to either non-local or collective
measurements.
## Arbitrary states of two qubits.
The goal is unchanged for other pure states of two qubits: we seek strategies
that accept the target state with certainty, and hence achieve the asymptotic
advantage outlined for Bell states above. It is not clear a priori that such a
strategy exists for general states, in a way that is as straightforward as the
previous construction. However, we show that for any two-qubit state not only
does such a strategy exist, but we can optimise within the family of allowable
strategies and give an analytic expression with optimal constant factors.
We first remark that we can restrict to states of the form
${|{\psi_{\theta}}\rangle}=\sin\theta{|{00}\rangle}+\cos\theta{|{11}\rangle}$
without loss of generality, as any state is locally equivalent to a state of
this form, for some $\theta$. Specifically, given any two qubit state
${|{\psi}\rangle}$ with optimal strategy $\Omega_{opt}$, a locally equivalent
state $(U\otimes V){|{\psi}\rangle}$ has optimal strategy $(U\otimes
V)\Omega_{opt}(U\otimes V)^{\dagger}$. The proof of this statement can be
found in the Supplemental Material. Given the restriction to this family of
states, we can now write down an optimal verification protocol.
Figure 1: The number of measurements needed to verify the state
$\Ket{\psi_{\theta}}=\sin\theta\Ket{00}+\cos\theta\Ket{11}$, as a function of
$\theta$, using the optimal strategy. See Eq. 10. Here, $1-\epsilon=0.99$ and
$1-\delta=0.9$.
Figure 2: A comparison of the total number of measurements required to verify
to fidelity $1-\epsilon$ for the strategy derived here, versus the known
bounds for estimation up to fidelity $1-\epsilon$ using non-adaptive
tomography in Sugiyama _et al._ (2013) and the fidelity estimation protocol
in Flammia and Liu (2011), and the globally optimal strategy given by
projecting onto $\Ket{\psi}$. Here, $1-\delta=0.9$ and $\theta=\frac{\pi}{8}$.
###### Theorem 1.
Any optimal strategy for verifying a state of the form
${|{\psi_{\theta}}\rangle}=\sin\theta{|{00}\rangle}+\cos\theta{|{11}\rangle}$
for $0<\theta<\frac{\pi}{2}$, $\theta\neq\frac{\pi}{4}$ that accepts
${|{\psi_{\theta}}\rangle}$ with certainty and satisfies the properties of
locality, trust and projective measurement, can be expressed as a strategy
involving four measurement settings:
$\displaystyle\Omega_{opt}$ $\displaystyle=\alpha(\theta)P^{+}_{ZZ}$
$\displaystyle+\frac{1-\alpha(\theta)}{3}\sum_{k=1}^{3}\left[\mathds{1}-({|{u_{k}}\rangle}\otimes{|{v_{k}}\rangle})({\langle{u_{k}}|}\otimes{\langle{v_{k}}|})\right],$
for $\displaystyle\alpha(\theta)=\frac{2-\sin(2\theta)}{4+\sin(2\theta)},$ (9)
where $P^{+}_{ZZ}$ is the projector onto the positive eigenspace of the Pauli
operator $ZZ$, and the sets of states $\\{{|{u_{k}}\rangle}\\}$ and
$\\{{|{v_{k}}\rangle}\\}$ are written explicitly in the Supplemental Material.
The number of measurements needed to verify to within infidelity $\epsilon$
and with power $1-\delta$ satisfies
$n_{opt}\approx(2+\sin\theta\cos\theta)\epsilon^{-1}\ln\delta^{-1}.$ (10)
The proof of this theorem is included in the Supplemental Material. Note that
the special cases for ${|{\psi_{\theta}}\rangle}$ where $\theta=0$,
$\theta=\frac{\pi}{2}$ and $\theta=\frac{\pi}{4}$ are omitted from this
theorem. In these cases, ${|{\psi_{\theta}}\rangle}$ admits a wider choice of
measurements that accept with certainty. We have already treated the Bell
state case $\theta=\frac{\pi}{4}$ above. In the other two cases, the state
${|{\psi_{\theta}}\rangle}$ is product and hence the globally optimal
measurement, just projecting onto ${|{\psi_{\theta}}\rangle}$, is a valid
local strategy. We note that this leads to a discontinuity in the number of
measurements needed as a function of $\theta$, for fixed $\epsilon$ (as seen
in Fig. 1). This arises since our strategies are designed to have the optimal
scaling $\left(O\left(\frac{1}{\epsilon}\right)\right)$ for fixed $\theta$,
achieved by having strategies that accept ${|{\psi}\rangle}$ with probability
$1$.
As for scaling, in Fig. 2 the number of measurements required to verify a
particular two-qubit state of this form, for three protocols, is shown. The
optimal protocol derived here gives a marked improvement over the previously
published bounds for both tomography Sugiyama _et al._ (2013) and fidelity
estimation Flammia and Liu (2011) for the full range of $\epsilon$, for the
given values of $\theta$ and $\delta$. The asymptotic nature of the advantage
for the protocol described here implies that the gap between the optimal
scheme and tomography only grows as the requirement on $\epsilon$ becomes more
stringent. Note also that the optimal local strategy is only marginally worse
than the best possible strategy of just projecting onto ${|{\psi}\rangle}$.
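The statement of Theorem 1 can likewise be checked numerically. The sketch below (an illustration of ours, not a prescription from the theorem) assembles $\Omega_{opt}$ from Eq. (9) using the explicit product states listed in the Supplemental Material (Eqs. S23 to S25), confirms that ${|{\psi_{\theta}}\rangle}$ is accepted with certainty, and recovers the prefactor $2+\sin\theta\cos\theta$ of Eq. (10).

```python
import numpy as np

theta = np.pi / 8                       # any 0 < theta < pi/2 with theta != pi/4
a = 1 / np.sqrt(1 + np.tan(theta))      # amplitudes appearing in Eqs. (S23)-(S25)
b = 1 / np.sqrt(1 + 1 / np.tan(theta))

def phi(first_phase, second_phase):
    """Product state (a|0> + e^{i first} b|1>) tensor (a|0> + e^{i second} b|1>)."""
    return np.kron([a, b * np.exp(1j * first_phase)], [a, b * np.exp(1j * second_phase)])

phis = [phi(2 * np.pi / 3, np.pi / 3),
        phi(4 * np.pi / 3, 5 * np.pi / 3),
        phi(0.0, np.pi)]

alpha = (2 - np.sin(2 * theta)) / (4 + np.sin(2 * theta))    # Eq. (9)
P_ZZ = np.diag([1.0, 0.0, 0.0, 1.0])                          # projector onto span{|00>, |11>}
omega = alpha * P_ZZ + (1 - alpha) / 3 * sum(np.eye(4) - np.outer(p, p.conj()) for p in phis)

psi = np.array([np.sin(theta), 0, 0, np.cos(theta)])
assert np.allclose(omega @ psi, psi)                          # |psi_theta> is accepted with certainty

pi_perp = np.eye(4) - np.outer(psi, psi)
nu_max = np.linalg.eigvalsh(pi_perp @ omega @ pi_perp).max()
print(1 / (1 - nu_max), 2 + np.sin(theta) * np.cos(theta))    # both ~ 2.354 for theta = pi/8
```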
## Stabilizer states.
Additionally, it is shown in the Supplemental Material that we can construct a
strategy with the same asymptotic advantage for any stabilizer state, by
drawing measurements from the stabilizer group (where now we only claim
optimality up to constant factors). The derivation is analogous to that for
the Bell state above, and given that the Bell state is itself a stabilizer
state, the strategy above is a special case of the stabilizer strategy
discussed below. For a state of $N$ qubits, a viable strategy constructed from
stabilizers must contain at least $N$ independent stabilizers, generating the full stabilizer group of
${|{\psi}\rangle}$. This is because a set of $k<N$ stabilizers stabilizes a
subspace of dimension at least $2^{N-k}$, and so in this case there always
exists at least one orthogonal state to ${|{\psi}\rangle}$ accessible to the
adversary that fools the verifier with certainty. In this minimal case, the
number of required measurements is $n_{opt}^{s.g.}\approx
N\epsilon^{-1}\ln\delta^{-1}$, with this bound saturated by measuring all
stabilizer generators with equal weight. Conversely, constructing a
measurement strategy from the full set of $2^{N}-1$ linearly independent
stabilizers requires a number of measurements
$n_{opt}^{stab}\approx\frac{2^{N}-1}{2^{(N-1)}}\epsilon^{-1}\ln\delta^{-1}$,
again with this bound saturated by measuring each stabilizer with equal
weight. For growing $N$, the latter expression for the number of measurements
is bounded from above by $2\epsilon^{-1}\ln\delta^{-1}$, which implies that
there is a local strategy for any stabilizer state, of an arbitrary number of
qubits, which requires at most twice as many measurements as the optimal non-
local strategy. Note that this strategy may not be exactly optimal; for
example, the state ${|{00}\rangle}$ is also a stabilizer state, and in this
case applying the measurement ${|{00}\rangle}\\!{\langle{00}|}$ is both
locally implementable and provably optimal. Thus, the exactly optimal strategy
may depend more precisely on the structure of the individual state itself.
However, the stabilizer strategy is only inferior by a small constant factor.
In comparison to the latter strategy constructed from every stabilizer, the
former strategy constructed from only the $N$ stabilizer generators of
${|{\psi}\rangle}$ has scaling that grows linearly with $N$. Thus there is
ultimately a trade-off between number of measurement settings and total number
of measurements required to verify within a fixed fidelity.
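Both scaling constants quoted in this section are easy to reproduce numerically. As an example of our own choosing (the text does not single out this state), the sketch below uses the three-qubit GHZ state, building one strategy from its $N$ stabilizer generators and one from all $2^{N}-1$ non-trivial stabilizers, and prints the constant multiplying $\epsilon^{-1}\ln\delta^{-1}$ in each case.

```python
import numpy as np
from itertools import combinations

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron_all(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

# Stabilizer generators of the 3-qubit GHZ state (|000> + |111>)/sqrt(2)
gens = [kron_all([X, X, X]), kron_all([Z, Z, I2]), kron_all([I2, Z, Z])]
N, dim = 3, 8
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

def strategy(stabilizers):
    """Equal mixture of projectors onto the +1 eigenspaces of the given stabilizers."""
    return sum((np.eye(dim) + S) / 2 for S in stabilizers) / len(stabilizers)

def prefactor(omega, psi):
    """The constant multiplying eps^{-1} ln(1/delta), i.e. 1 / (1 - nu_max)."""
    pi_perp = np.eye(dim) - np.outer(psi, psi)
    return 1 / (1 - np.linalg.eigvalsh(pi_perp @ omega @ pi_perp).max())

print(prefactor(strategy(gens), ghz))       # generators only: ~ N = 3
group = [np.linalg.multi_dot(s) if len(s) > 1 else s[0]
         for r in range(1, N + 1) for s in combinations(gens, r)]
print(prefactor(strategy(group), ghz))      # full group: ~ (2^N - 1)/2^(N-1) = 7/4
```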
In principle, the recipe derived here to extract the optimal strategy for a
state of two qubits can be applied to any pure state. However, we anticipate
that deriving this strategy, including correct constants, may be somewhat
involved (both analytically and numerically) for states of greater numbers of
qubits.
Following the completion of this work, we became aware of Dimić and Dakić
(2017) which, among other results, applies a similar protocol to the Bell
state verification strategy in the context of entanglement detection.
###### Acknowledgements.
The authors thank Jeremy Adcock, Sam Morley-Short, Tony Short and Chris
Sparrow for helpful discussions, and thank Borivoje Dakic for pointing out
Dimić and Dakić (2017). SP was supported by the Bristol Quantum Engineering
Centre for Doctoral Training, EPSRC grant EP/L015730/1. AM was supported by
EPSRC Early Career Fellowship EP/L021005/1. No new data were created during
this study.
## References
* Häffner _et al._ (2005) H. Häffner, W. Hänsel, C. F. Roos, J. Benhelm, D. Chek-al Kar, M. Chwalla, T. Körber, U. D. Rapol, M. Riebe, P. O. Schmidt, C. Becher, O. Gühne, W. Dür, and R. Blatt, Nature 438, 643 (2005).
* Carolan _et al._ (2014) J. Carolan, J. D. A. Meinecke, P. J. Shadbolt, N. J. Russell, N. Ismail, K. Wörhoff, T. Rudolph, M. G. Thompson, J. L. O’Brien, J. C. F. Matthews, and A. Laing, Nature Photonics 8, 621 (2014).
* Laing and O’Brien (2012) A. Laing and J. L. O’Brien, arXiv:1208.2868 (2012).
* Lvovsky and Raymer (2009) A. I. Lvovsky and M. G. Raymer, Reviews of Modern Physics 81, 299 (2009).
* Bellini _et al._ (2012) M. Bellini, A. S. Coelho, S. N. Filippov, V. I. Man’ko, and A. Zavatta, Physical Review A 85, 052129 (2012).
* Amosov _et al._ (2012) G. G. Amosov, Y. A. Korennoy, and V. I. Man’ko, Physical Review A 85, 052119 (2012).
* Flammia _et al._ (2012) S. T. Flammia, D. Gross, Y.-K. Liu, and J. Eisert, New Journal of Physics 14, 095022 (2012).
* Gross _et al._ (2010) D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert, Physical Review Letters 105, 150401 (2010).
* Cramer _et al._ (2010) M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D. Bartlett, O. Landon-Cardinal, D. Poulin, and Y.-K. Liu, Nature Communications 1, 149 (2010).
* Tóth and Gühne (2005a) G. Tóth and O. Gühne, Physical Review Letters 94, 060501 (2005a).
* Tóth and Gühne (2005b) G. Tóth and O. Gühne, Physical Review A 72, 022340 (2005b).
* Sugiyama _et al._ (2013) T. Sugiyama, P. S. Turner, and M. Murao, Physical Review Letters 111, 160406 (2013).
* Flammia and Liu (2011) S. T. Flammia and Y.-K. Liu, Physical Review Letters 106, 230501 (2011).
* Vaidman and Yoran (1999) L. Vaidman and N. Yoran, Physical Review A 59, 116 (1999).
* Calsamiglia and Lütkenhaus (2001) J. Calsamiglia and N. Lütkenhaus, Applied Physics B 72, 67 (2001).
* Ewert and van Loock (2014) F. Ewert and P. van Loock, Physical Review Letters 113, 140403 (2014).
* Mayers and Yao (2004) D. Mayers and A. Yao, Quantum Information & Computation 4, 273 (2004).
* McKague _et al._ (2012) M. McKague, T. H. Yang, and V. Scarani, Journal of Physics A: Mathematical and Theoretical 45, 455304 (2012).
* Yang and Navascués (2013) T. H. Yang and M. Navascués, Physical Review A 87, 050102 (2013).
* Sugiyama (2014) T. Sugiyama, _Finite Sample Analysis in Quantum Estimation_ (Springer, 2014).
* da Silva _et al._ (2011) M. P. da Silva, O. Landon-Cardinal, and D. Poulin, Physical Review Letters 107, 210404 (2011).
* Ferrie and Blume-Kohout (2016) C. Ferrie and R. Blume-Kohout, Physical Review Letters 116, 090407 (2016).
* Struchalin _et al._ (2016) G. I. Struchalin, I. A. Pogorelov, S. S. Straupe, K. S. Kravtsov, I. V. Radchenko, and S. P. Kulik, Physical Review A 93, 012103 (2016).
* Mahler _et al._ (2013) D. H. Mahler, L. A. Rozema, A. Darabi, C. Ferrie, R. Blume-Kohout, and A. M. Steinberg, Physical Review Letters 111, 183601 (2013).
* Haah _et al._ (2016) J. Haah, A. W. Harrow, Z. Ji, X. Wu, and N. Yu, in _Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing - STOC 2016_ (ACM Press, New York, New York, USA, 2016) pp. 913–925.
* Bădescu _et al._ (2017) C. Bădescu, R. O’Donnell, and J. Wright, arXiv:1708.06002 (2017).
* Dimić and Dakić (2017) A. Dimić and B. Dakić, arXiv:1705.06719 (2017).
* Gottesman (1997) D. Gottesman, _Stabilizer Codes and Quantum Error Correction_ , Ph.D. thesis, Caltech (1997).
* Gottesman (1996) D. Gottesman, Physical Review A 54, 1862 (1996).
* Nielsen and Chuang (2010) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, 2010) p. 676.
* Hein (2005) M. Hein, _Entanglement in graph states_ , Ph.D. thesis, University of Innsbruck (2005).
* Cover and Thomas (2006) T. M. Cover and J. A. Thomas, _Elements of Information Theory_ (Wiley-Interscience, 2006).
# Supplemental Material: Optimal verification of entangled states with local measurements
The contents of the following supplemental material are as follows: in
Appendix A, we set up a formal framework for state verification protocols. In
Appendix B we simplify the form of the protocol using the set of physically-
motivated strategy requirements outlined in the main body. Appendix C is
concerned with deriving the optimal strategy for states of two qubits, in
particular proving Theorem 1; and in Appendix D we derive efficient
verification strategies for stabilizer states. Finally, Appendix E outlines
the hypothesis testing framework necessary for this paper.
## Appendix A Quantum state verification
We first set up a formal framework for general state verification protocols.
We assume that we have access to a device $\mathcal{D}$ that is supposed to
produce copies of a state ${|{\psi}\rangle}$. However, $\mathcal{D}$ might not
work correctly, and actually produces (potentially mixed) states
$\sigma_{1},\sigma_{2},\dots$ such that $\sigma_{i}$ might not be equal to
${|{\psi}\rangle}\\!{\langle{\psi}|}$. In order to distinguish this from the
case where the device works correctly by making a reasonable number of uses of
$\mathcal{D}$, we need to have a promise that these states are sufficiently
far from ${|{\psi}\rangle}$. So we are led to the following formulation of our
task:
Distinguish between the following two cases:
1. (a).
(Good) $\sigma_{i}={|{\psi}\rangle}\\!{\langle{\psi}|}$ for all $i$;
2. (b).
(Bad) For some fixed $\epsilon$,
$F({|{\psi}\rangle},\sigma_{i}):=\braket{\psi}{\sigma_{i}}{\psi}\leq
1-\epsilon$ for all $i$.
Given a verifier with access to a set of available measurements $\mathcal{S}$,
the protocols we consider for completing this task are of the following form:
Protocol: Quantum state verification
1: for $i=1$ to $n$ do
2:   Perform a two-outcome measurement $M_{i}\in\mathcal{S}$ on $\sigma_{i}$, where $M_{i}$’s outcomes are associated with “pass” and “fail”
3:   if “fail” is returned then
4:     Output “reject” and end the protocol
5: Output “accept”
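The sequential accept/reject structure of this protocol is simple to simulate. The sketch below is purely illustrative (the function `verify` and the chosen noise model are ours); it draws a random “pass” projector for each copy and rejects on the first failure, here using the Bell-state strategy $\\{XX,-YY,ZZ\\}$ derived in the main text.

```python
import numpy as np

def verify(projectors, weights, states, rng):
    """For each copy, draw a 'pass' projector at random, sample the binary outcome
    via the Born rule, and reject on the first 'fail' outcome."""
    for sigma in states:
        P = projectors[rng.choice(len(projectors), p=weights)]
        if rng.random() > np.real(np.trace(P @ sigma)):   # pass with probability tr(P sigma)
            return "reject"
    return "accept"

# Example: 500 copies of a noisy state with fidelity 0.925 to |Phi+>; almost always rejected.
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
projectors = [(np.eye(4) + np.kron(X, X)) / 2,
              (np.eye(4) - np.kron(Y, Y)) / 2,
              (np.eye(4) + np.kron(Z, Z)) / 2]
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)
noisy = 0.9 * np.outer(phi_plus, phi_plus) + 0.1 * np.eye(4) / 4
print(verify(projectors, [1 / 3, 1 / 3, 1 / 3], [noisy] * 500, np.random.default_rng(0)))
```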
We impose the conditions that in the good case, the protocol accepts with
certainty, whereas in the bad case, the protocol accepts with probability at
most $\delta$; we call $1-\delta$ the _statistical power_ of the protocol. We
then aim to find a protocol that minimises $n$ for a given choice of
${|{\psi}\rangle}$, $\epsilon$ and $\mathcal{S}$, such that these constraints
are satisfied. Insisting that the protocol accepts in the good case with
certainty implies that all measurements in $\mathcal{S}$ are guaranteed to
pass in this case. This is a desirable property in itself, but one could
consider more general non-adaptive protocols where measurements do not output
“pass” with certainty on ${|{\psi}\rangle}$, and the protocol determines
whether to accept based on an estimator constructed from the relative
frequency of “pass” and “fail” outcomes across all $n$ copies. We show in
Appendix E that this class of protocols has quadratically worse scaling in
$\epsilon$ than protocols where each measurement passes with certainty on
${|{\psi}\rangle}$.
We make the following observations about this framework:
1. 1.
Given no restrictions on $M_{i}$, the optimal protocol is simply for each
measurement to project onto ${|{\psi}\rangle}$. In fact, this remains optimal
even over the class of more general protocols making use of adaptivity or
collective measurements. One can see this as follows: if a two-outcome
measurement $M$ (corresponding to the whole protocol) is described by
measurement operators $P$ (accept) and $I-P$ (reject), then if $M$ accepts
${|{\psi}\rangle}^{\otimes n}$ with certainty, we must have
$P={|{\psi}\rangle}\\!{\langle{\psi}|}^{\otimes n}+P^{\prime}$ for some
residual positive semidefinite operator $P^{\prime}$. Then replacing $P$ with
${|{\psi}\rangle}\\!{\langle{\psi}|}^{\otimes n}$ gives at least as good a
protocol, as the probability of accepting ${|{\psi}\rangle}$ remains 1, while
the probability of accepting other states cannot increase.
The probability of acceptance in the bad case after $n$ trials is then at most
$(1-\epsilon)^{n}$, so it is sufficient to take
$n\geq\frac{\ln\delta^{-1}}{\ln((1-\epsilon)^{-1})}\approx\epsilon^{-1}\ln\delta^{-1}$
(S1)
to achieve statistical power $1-\delta$. This will be the yardstick against
which we will compare our more restricted protocols below.
2. 2.
We assume that the states $\sigma_{i}$ are independently and adversarially
chosen. This implies that if (as we will consider below) $\mathcal{S}$
contains only projective measurements and does not contain the measurement
projecting onto ${|{\psi}\rangle}\\!{\langle{\psi}|}$, it is necessary to
choose the measurement $M_{i}$ at random from $\mathcal{S}$ and unknown to the
adversary. Otherwise, we could be fooled with certainty by the adversary
choosing $\sigma_{i}$ to have support only in the “pass” eigenspace of $M_{i}$
for each copy $i$.
3. 3.
We can be explicit about the optimisation needed to derive the optimal
protocol in this adversarial setting. As protocols of the above form reject
whenever a measurement fails, the adversary’s goal at the $i$’th step is to
maximise the probability that the measurement $M_{i}$ at that step passes on
$\sigma_{i}$. If the $j$’th measurement setting in $\mathcal{S}$, $M^{j}$, is
picked from $\mathcal{S}$ at step $i$ with probability $\mu_{j}^{i}$, the
largest possible overall probability of passing for copy $i$ is
$\text{Pr}[\text{Pass on copy
}i]=\max_{\sigma_{i},\braket{\psi}{\sigma_{i}}{\psi}\leq
1-\epsilon}\sum_{j}\mu^{i}_{j}\operatorname{tr}(P_{j}\sigma_{i}),$ (S2)
where we denote the corresponding “pass” projectors $P_{j}$. We can write
$\Omega_{i}=\sum_{j}\mu_{j}^{i}P_{j}$, and then
$\text{Pr}[\text{Pass on copy }i]=\max_{\sigma,\braket{\psi}{\sigma}{\psi}\leq
1-\epsilon}\operatorname{tr}(\Omega_{i}\sigma).$ (S3)
As the verifier, we wish to minimise this expression over all $\Omega_{i}$, so
we end up with a final expression that does not depend on $i$. This leads us
to infer that optimal protocols of this form can be assumed to be non-adaptive
in two senses: they do not depend on the outcome of previous measurements
(which is clear, as the protocol rejects if it ever sees a “fail” outcome);
and they also do not depend on the measurement choices made previously.
Therefore, in order to find an optimal verification protocol, our task is to
determine
$\min_{\Omega}\max_{\sigma,\braket{\psi}{\sigma}{\psi}\leq
1-\epsilon}\operatorname{tr}(\Omega\sigma),$ (S4)
where $\Omega$ is an operator of the form $\Omega=\sum_{j}\mu_{j}P_{j}$ for
$P_{j}\in\mathcal{S}$ and some probability $\mu_{j}$. We call such operators
strategies. If $\mathcal{S}$ contained all measurement operators (or even all
projectors), $\Omega$ would be an arbitrary operator satisfying
$0\leq\Omega\leq I$. However, this notion becomes nontrivial when one
considers restrictions on $\mathcal{S}$. Here, we focus on the experimentally
motivated case where $\mathcal{S}$ contains only projective measurements that
can be implemented via local operations and classical postprocessing.
4. 4.
In a non-adversarial scenario, it may be acceptable to fix the measurements in
$\Omega$ in advance, with appropriate frequencies $\mu_{j}$. Then, given $n$,
a strategy $\Omega=\sum_{j}\mu_{j}P_{j}$ corresponds to a protocol where for
each $j$ we deterministically make $\mu_{j}n$ measurements
$\\{P_{j},I-P_{j}\\}$. For large $n$, and fixed $\sigma_{i}=\sigma$, this will
achieve similar performance to the above protocol.
5. 5.
More complicated protocols with adaptive or collective measurements, or
measurements with more than two outcomes, cannot markedly improve on the
strategies derived here. We do not treat these more general strategies
explicitly, but note that the protocols we will describe based on local
projective measurements already achieve the globally optimal bound (S1) up to
constant factors, so any gain from these more complex approaches would be
minor.
## Appendix B Verification strategy optimisation
In this appendix, we simplify the form of the optimisation in S4 using the
strategy requirements outlined previously. We start by making the following
useful observation:
###### Lemma 2.
We can assume without loss of generality that, in (S4), $\sigma$ is pure.
###### Proof.
Assume the adversary chooses a fixed density matrix $\sigma$, which is
globally optimal: it forces the verifier to accept $\sigma$ with the greatest
probability among states $\sigma$ such that
$\braket{\psi}{\sigma}{\psi}\coloneqq r\leq 1-\epsilon$. The probability of
accepting this $\sigma$ given strategy $\Omega$ is then
$\Pr[\text{Accept }\sigma]=\operatorname{tr}(\Omega\sigma).$ (S5)
We have asserted that $\Omega$ accepts ${|{\psi}\rangle}$ with certainty:
${\langle{\psi}|}\Omega{|{\psi}\rangle}=1$. However, for this to be the case
$\Omega$ must have ${|{\psi}\rangle}$ as an eigenstate with eigenvalue $1$;
thus we can write
$\Omega={|{\psi}\rangle}\\!{\langle{\psi}|}+\sum_{j}c_{j}{|{\psi^{\bot}_{j}}\rangle}\\!{\langle{\psi^{\bot}_{j}}|}$
(S6)
where the states $\\{{|{\psi^{\bot}_{j}}\rangle}\\}$ are a set of mutually
orthogonal states orthogonal to ${|{\psi}\rangle}$. Then
$\displaystyle\Pr[\text{Accept }\sigma]$
$\displaystyle={\langle{\psi}|}\sigma{|{\psi}\rangle}+\sum_{j}c_{j}{\langle{\psi^{\bot}_{j}}|}\sigma{|{\psi^{\bot}_{j}}\rangle}$
(S7)
$\displaystyle=r+\sum_{j}c_{j}{\langle{\psi^{\bot}_{j}}|}\sigma{|{\psi^{\bot}_{j}}\rangle}.$
(S8)
We can write
$\sigma=a{|{\psi}\rangle}\\!{\langle{\psi}|}+b\sigma^{\bot}+c{|{\psi}\rangle}\\!{\langle{\Phi^{\bot}}|}+c^{*}{|{\Phi^{\bot}}\rangle}\\!{\langle{\psi}|},$
(S9)
where $\sigma^{\bot}$ is a density matrix entirely supported in the subspace
spanned by the states ${|{\psi^{\bot}_{j}}\rangle}$, and
${|{\Phi^{\bot}}\rangle}$ is a vector in the subspace spanned by
${|{\psi^{\bot}_{j}}\rangle}$. We know that $a=r$ as
${\langle{\psi}|}\sigma{|{\psi}\rangle}=r$, and $b=1-r$ as
$\operatorname{tr}(\sigma)=1$. Now, note that the probability of accepting
$\sigma$ does not depend on the choice of ${|{\Phi^{\bot}}\rangle}$. Thus
$\operatorname{tr}(\Omega\sigma)$ is maximised when
$\sigma^{\bot}={|{\psi^{\bot}_{max}}\rangle}\\!{\langle{\psi^{\bot}_{max}}|}$,
where ${|{\psi^{\bot}_{max}}\rangle}$ is the orthogonal state in the spectral
decomposition of $\Omega$ with largest eigenvalue, $c_{max}$. Thus
$\max_{\sigma}\operatorname{tr}(\Omega\sigma)=r+(1-r)c_{max},$ (S10)
which is achieved by any density matrix of the form
$\sigma=r{|{\psi}\rangle}\\!{\langle{\psi}|}+(1-r){|{\psi^{\bot}_{max}}\rangle}\\!{\langle{\psi^{\bot}_{max}}|}+c{|{\psi}\rangle}\\!{\langle{\Phi^{\bot}}|}+c^{*}{|{\Phi^{\bot}}\rangle}\\!{\langle{\psi}|}.$
(S11)
Note that the pure state $\sigma={|{\phi}\rangle}\\!{\langle{\phi}|}$ for
${|{\phi}\rangle}=\sqrt{r}{|{\psi}\rangle}+\sqrt{1-r}{|{\psi^{\bot}_{max}}\rangle}$
is of this form, and so we can assume that the adversary makes this choice. ∎
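The construction in this proof translates directly into a small numerical routine. The sketch below (our own illustration; the function name is hypothetical) takes a strategy $\Omega$ satisfying $\Omega{|{\psi}\rangle}={|{\psi}\rangle}$ and returns the adversary's optimal pure state of Eq. (S11) together with its acceptance probability $r+(1-r)c_{max}$ for $r=1-\epsilon$.

```python
import numpy as np

def best_adversary_state(omega, psi, epsilon):
    """Worst-case pure state of Eq. (S11): mix |psi> with the eigenvector of Omega,
    orthogonal to |psi>, that has the largest eigenvalue c_max (assumes Omega|psi> = |psi>)."""
    pi_perp = np.eye(len(psi)) - np.outer(psi, psi.conj())
    vals, vecs = np.linalg.eigh(pi_perp @ omega @ pi_perp)
    psi_perp_max = vecs[:, np.argmax(vals)]
    phi = np.sqrt(1 - epsilon) * psi + np.sqrt(epsilon) * psi_perp_max
    accept = np.real(np.vdot(phi, omega @ phi))    # equals (1 - epsilon) + epsilon * c_max
    return phi, accept
```

Applied to the Bell-state strategy of the main text, for example, this returns an acceptance probability of $1-2\epsilon/3$, matching Eq. (S10) with $c_{max}=\frac{1}{3}$.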
Given that the state $\sigma$ can be taken to be pure and that the fidelity
$F({|{\psi}\rangle},\sigma)\leq 1-\epsilon$, we write
$\sigma={|{\psi_{\bar{\epsilon}}}\rangle}\\!{\langle{\psi_{\bar{\epsilon}}}|}$,
where
${|{\psi_{\bar{\epsilon}}}\rangle}:=\sqrt{1-\bar{\epsilon}}{|{\psi}\rangle}+\sqrt{\bar{\epsilon}}{|{\psi^{\bot}}\rangle}$
and $\braket{\psi}{\psi^{\bot}}=0$, for some $\bar{\epsilon}\geq\epsilon$
chosen by the adversary, to be optimised later. Denote
$\min_{\Omega}\max_{\begin{subarray}{c}\sigma\\\
{\langle{\psi}|}\sigma{|{\psi}\rangle}\leq
1-\epsilon\end{subarray}}\operatorname{tr}(\Omega\sigma)\coloneqq
1-\Delta_{\epsilon}.$ (S12)
Then the optimisation problem becomes to determine $\Delta_{\epsilon}$, where
$\displaystyle\Delta_{\epsilon}=\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle},\bar{\epsilon}\geq\epsilon}\bar{\epsilon}(1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle})-2\sqrt{\bar{\epsilon}(1-\bar{\epsilon})}\text{Re}({\langle{\psi}|}\Omega{|{\psi^{\bot}}\rangle})$
(S13) $\displaystyle\text{and }\Omega{|{\psi}\rangle}={|{\psi}\rangle}.$
This expression can be simplified given that
$\Omega{|{\psi}\rangle}={|{\psi}\rangle}$. In particular, we then know that
${\langle{\psi^{\bot}}|}\Omega{|{\psi}\rangle}=0$ for any choice of orthogonal
state ${|{\psi^{\bot}}\rangle}$. Thus the term
$\sqrt{\bar{\epsilon}(1-\bar{\epsilon})}\text{Re}({\langle{\psi}|}\Omega{|{\psi^{\bot}}\rangle})$
automatically vanishes. We are then left with the optimisation
$\displaystyle\Delta_{\epsilon}=\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle},\bar{\epsilon}\geq\epsilon}\bar{\epsilon}(1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}),$
(S14) $\displaystyle\text{where }\Omega{|{\psi}\rangle}={|{\psi}\rangle}.$
As for the optimisation of $\bar{\epsilon}$, note that it is the goal of the
adversary to make $\Delta_{\epsilon}$ as small as possible; and so they are
obliged to set $\bar{\epsilon}=\epsilon$. Then the optimisation becomes
$\displaystyle\Delta_{\epsilon}=\epsilon$
$\displaystyle\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle}}(1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}),$
(S15) $\displaystyle\text{where }\Omega{|{\psi}\rangle}={|{\psi}\rangle}.$
Note that this expression implies that any $\Omega$ where
$\Omega{|{\psi}\rangle}={|{\psi}\rangle}$ automatically satisfies the _future-
proofing_ property: firstly that $\Omega$ is independent of $\epsilon$, but
also that the strategy must be viable for any choice of $\epsilon$ (i.e. there
must not be a choice of $\epsilon$ where $\Delta_{\epsilon}=0$). For an
initial choice $\Delta_{\epsilon}>0$, we have that
$1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}>0$ and so
$\Delta_{\epsilon^{\prime}}>0$ for any $0<\epsilon^{\prime}<\epsilon$. Thus
the verifier is free to decrease $\epsilon$ arbitrarily without fear of the
strategy failing. Note also that this condition may not be automatically
guaranteed if the verifier chooses an $\Omega$ such that
$\Omega{|{\psi}\rangle}\neq{|{\psi}\rangle}$.
Regarding the optimisation problem in S15, for an arbitrary state
${|{\psi}\rangle}$ on $n$ qubits it is far from clear how to: (a) construct
families of viable $\Omega$ (built from local projective measurements) that
accept ${|{\psi}\rangle}$ with certainty; (b) to then solve this optimisation
problem over those families of $\Omega$. For the remainder of this work, we
focus on states of particular experimental interest where we can solve the
problem: arbitrary states of 2 qubits, and stabilizer states.
## Appendix C States of two qubits
We now derive the optimal verification strategy for an arbitrary pure state of
two qubits. We first give the proof of the statement in the main text that
optimal strategies for locally equivalent states are easily derived by
conjugating the strategy with the local map that takes one state to the other.
Hence, we can restrict our consideration to verifying states of the form
${|{\psi}\rangle}=\sin\theta{|{00}\rangle}+\cos\theta{|{11}\rangle}$ without
loss of generality. Specifically:
###### Lemma 3.
Given any two qubit state ${|{\psi}\rangle}$ with optimal strategy
$\Omega_{opt}$, a locally equivalent state $(U\otimes V){|{\psi}\rangle}$ has
optimal strategy $(U\otimes V)\Omega_{opt}(U\otimes V)^{\dagger}$.
###### Proof.
We must show that strategy $\Omega^{\prime}=(U\otimes V)\Omega_{opt}(U\otimes
V)^{\dagger}$ is both a valid strategy, and is optimal for verifying
${|{\psi^{\prime}}\rangle}=(U\otimes V){|{\psi}\rangle}$.
_Validity_ : If $\Omega_{opt}=\sum_{j}\mu_{j}P_{j}$ is a convex combination of
local projectors, then so is $\Omega^{\prime}$:
$\displaystyle\Omega^{\prime}=(U\otimes V)\Omega(U\otimes V)^{\dagger}$
$\displaystyle=\sum_{j}\mu_{j}(U\otimes V)P_{j}(U\otimes V)^{\dagger}$
$\displaystyle=\sum_{j}\mu_{j}P^{\prime}_{j}.$ (S16)
Also, if $\Omega_{opt}{|{\psi}\rangle}={|{\psi}\rangle}$ then
$\Omega^{\prime}{|{\psi^{\prime}}\rangle}={|{\psi^{\prime}}\rangle}$:
$\displaystyle\Omega_{opt}{|{\psi}\rangle}={|{\psi}\rangle}$
$\displaystyle\Rightarrow(U\otimes V)\Omega_{opt}{|{\psi}\rangle}=(U\otimes V){|{\psi}\rangle}$ (S17)
$\displaystyle\Rightarrow(U\otimes V)\Omega_{opt}(U\otimes V)^{\dagger}(U\otimes V){|{\psi}\rangle}=(U\otimes V){|{\psi}\rangle}$
$\displaystyle\Rightarrow\Omega^{\prime}{|{\psi^{\prime}}\rangle}={|{\psi^{\prime}}\rangle}.$
_Optimality_ : The performance of a strategy is determined by the maximum
probability of accepting an orthogonal state ${|{\psi^{\bot}}\rangle}$. For
the strategy-state pairs $(\Omega_{opt},{|{\psi}\rangle})$ and
$(\Omega^{\prime},{|{\psi^{\prime}}\rangle})$, we denote this parameter
$q_{opt}$ and $q^{\prime}$, respectively. Then
$\displaystyle q_{opt}$
$\displaystyle=\max_{{|{\psi^{\bot}}\rangle}}{\langle{\psi^{\bot}}|}\Omega_{opt}{|{\psi^{\bot}}\rangle}=\max_{{|{\phi}\rangle},\braket{\psi}{\phi}=0}{\langle{\phi}|}\Omega_{opt}{|{\phi}\rangle}$
(S18) $\displaystyle=\max_{(U\otimes
V){|{\phi}\rangle},{\langle{\psi}|}(U\otimes V)^{\dagger}(U\otimes
V){|{\phi}\rangle}=0}{\langle{\phi}|}(U\otimes V)^{\dagger}(U\otimes
V)\Omega_{opt}(U\otimes V)^{\dagger}(U\otimes V){|{\phi}\rangle}$ (S19)
$\displaystyle=\max_{{|{\phi^{\prime}}\rangle},\braket{\psi^{\prime}}{\phi^{\prime}}=0}{\langle{\phi^{\prime}}|}\Omega^{\prime}{|{\phi^{\prime}}\rangle}=q^{\prime}.$
(S20)
So applying the same local rotation to the strategy and the state results in
no change in the performance of the strategy. Thus the following simple proof
by contradiction holds: assume that there is a better strategy for verifying
${|{\psi^{\prime}}\rangle}$, denoted $\Omega^{\prime\prime}$. But then the
strategy $(U\otimes V)^{\dagger}\Omega^{\prime\prime}(U\otimes V)$ must have a
better performance for verifying ${|{\psi}\rangle}$ than $\Omega_{opt}$, which
is a contradiction. Thus $\Omega^{\prime}$ must be the optimal strategy for
verifying ${|{\psi^{\prime}}\rangle}$. ∎
We will now prove Theorem 1 from the main body. However, we first prove a
useful lemma - that no optimal strategy can contain the identity measurement
(where the verifier always accepts regardless of the tested state). In the
following discussion, we denote the projector
$\Pi\coloneqq\mathds{1}-{|{\psi}\rangle}\\!{\langle{\psi}|}$. For a strategy
$\Omega$ where $\Omega{|{\psi}\rangle}={|{\psi}\rangle}$, the quantity of
interest which determines $\Delta_{\epsilon}$ in (S15) is the maximum
probability of accepting an orthogonal state ${|{\psi^{\bot}}\rangle}$:
$q\coloneqq\|\Pi\Omega\Pi\|=\max_{{|{\psi^{\bot}}\rangle}}{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}.$
(S21)
If a strategy is augmented with an accent or subscript, the parameter $q$
inherits that accent or subscript.
###### Lemma 4.
Consider an operator $0\leq\Omega\leq 1$,
$\Omega{|{\psi}\rangle}={|{\psi}\rangle}$ of the form
$\Omega=(1-\alpha)\Omega_{1}+\alpha\mathds{1}$ for $0\leq\alpha\leq 1$. Then
$q\geq q_{1}$.
###### Proof.
For arbitrary ${|{\psi^{\perp}}\rangle}$ such that
$\braket{\psi}{\psi^{\perp}}=0$,
$\braket{\psi^{\perp}}{\Omega}{\psi^{\perp}}=(1-\alpha)\braket{\psi^{\perp}}{\Omega_{1}}{\psi^{\perp}}+\alpha$.
This is maximised by choosing ${|{\psi^{\perp}}\rangle}$ such that
$\braket{\psi^{\perp}}{\Omega_{1}}{\psi^{\perp}}=q_{1}$, giving
$q=(1-\alpha)q_{1}+\alpha\geq q_{1}$. ∎
We are now in a position to prove Theorem 1. Note that the special cases where
${|{\psi}\rangle}$ is a product state ($\theta=0$ or $\frac{\pi}{2}$) or a
Bell state ($\theta=\frac{\pi}{4}$) are treated separately.
###### Theorem 1 (restated).
Any optimal strategy for verifying a state of the form
${|{\psi}\rangle}=\sin\theta{|{00}\rangle}+\cos\theta{|{11}\rangle}$ for
$0<\theta<\frac{\pi}{2}$, $\theta\neq\frac{\pi}{4}$ that accepts
${|{\psi_{\theta}}\rangle}$ with certainty and satisfies the properties of
locality, trust and projective measurement, can be expressed as a strategy
involving four measurement settings:
$\Omega^{opt}=\frac{2-\sin(2\theta)}{4+\sin(2\theta)}P^{+}_{ZZ}+\frac{2(1+\sin(2\theta))}{3(4+\sin(2\theta))}\sum_{k=1}^{3}(\mathds{1}-{|{\phi_{k}}\rangle}\\!{\langle{\phi_{k}}|}),$
(S22)
where the states ${|{\phi_{k}}\rangle}$ are
$\displaystyle{|{\phi_{1}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{2\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right),$ (S23)
$\displaystyle{|{\phi_{2}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{4\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{5\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right),$ (S24)
$\displaystyle{|{\phi_{3}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{1}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}-\frac{1}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right).$
(S25)
The number of measurements needed to verify to within fidelity $\epsilon$ and
statistical power $1-\delta$ is
$n_{opt}\approx(2+\sin\theta\cos\theta)\epsilon^{-1}\ln\delta^{-1}.$ (S26)
###### Proof.
The strategy $\Omega$ can be written as a convex combination of local
projectors. We can group the projectors by their action according to two local
parties, Alice and Bob, and then it must be expressible as a convex
combination of five types of terms, grouped by trace:
$\Omega=c_{1}\sum_{i}\mu_{i}(\rho^{i}_{1}\otimes\sigma^{i}_{1})+c_{2}\sum_{j}\nu_{j}(\rho^{j}_{2}\otimes\sigma^{j}_{2}+\rho_{2}^{j\bot}\otimes\sigma_{2}^{j\bot})+c_{3}\sum_{k}\eta_{k}(\mathds{1}-\rho^{k}_{3}\otimes\sigma^{k}_{3})+c_{4}\sum_{l}[\zeta_{l}(\rho^{l}_{4}\otimes\mathds{1})+\xi_{l}(\mathds{1}\otimes\sigma^{l}_{4})]+c_{5}\mathds{1}\otimes\mathds{1},$
(S27)
where $\rho^{k}_{i}$ and $\sigma^{k}_{i}$ are single-qubit pure states and the
subscript denotes the type of term in question. The state $\rho^{j\bot}$ is
the density matrix defined by $\operatorname{tr}(\rho^{j}\rho^{j\bot})=0$.
Qualitatively, given two local parties Alice and Bob with access to one qubit
each, and projectors with outcomes $\\{\lambda,\bar{\lambda}\\}$, the terms
above correspond to the following strategies: (1) Alice and Bob both apply a
projective measurement and accept if both outcomes are $\lambda$; (2) Alice
and Bob both apply a projective measurement and accept if both outcomes agree;
(3) Alice and Bob both apply a projective measurement and accept unless both
outcomes are $\lambda$; (4) Alice or Bob applies a projective measurement and
accepts on outcome $\lambda$, and the other party abstains; and (5) both Alice
and Bob accept without applying a measurement.
We show in Appendix E that strategies that accept ${|{\psi}\rangle}$ with
certainty have a quadratic advantage in scaling in terms of epsilon. Given
this, we enforce this constraint from the outset and then show that a viable
strategy can still be constructed. For the general strategy in Eq. S27 to
accept ${|{\psi}\rangle}$ with certainty, each term in its expansion must
accept ${|{\psi}\rangle}$ with certainty. However, this is impossible to
achieve for some of the terms in the above expansion. In particular, we show
that the terms $(\rho\otimes\sigma)$, $(\rho\otimes\mathds{1})$ and
$(\mathds{1}\otimes\sigma)$ cannot accept ${|{\psi}\rangle}$ with certainty,
and the form of the term $(\rho\otimes\sigma+\rho^{\bot}\otimes\sigma^{\bot})$
is restricted.
$\mathit{(\rho\otimes\sigma)}$: given that $\rho$ and $\sigma$ are pure, write
$\rho\otimes\sigma={|{u}\rangle}\\!{\langle{u}|}\otimes{|{v}\rangle}\\!{\langle{v}|}$,
and so this term only accepts ${|{\psi}\rangle}$ with certainty if
$\|({|{u}\rangle}\\!{\langle{u}|}\otimes{|{v}\rangle}\\!{\langle{v}|}){|{\psi}\rangle}\|=1$.
However, for $0<\theta<\frac{\pi}{2}$ the state ${|{\psi}\rangle}$ is
entangled and this condition cannot be satisfied.
$\mathit{(\rho\otimes\mathds{1})}$ or $\mathit{(\mathds{1}\otimes\sigma)}$:
For the term $(\rho\otimes\mathds{1})$, reexpress $\rho$ in terms of its Pauli
expansion: $\rho\otimes\mathds{1}=\frac{1}{2}(\mathds{1}+\alpha X+\beta
Y+\gamma Z)\otimes\mathds{1}$, for $-1\leq\alpha,\beta,\gamma\leq 1$. Then the
condition that this term accepts with probability $p=1$ is
${\langle{\psi}|}\frac{1}{2}(\mathds{1}+\alpha X+\beta Y+\gamma
Z)\otimes\mathds{1}{|{\psi}\rangle}=1.$ (S28)
By inserting the definition of ${|{\psi}\rangle}$, this becomes
$\frac{1}{2}(1-\gamma\cos(2\theta))=1$, which is unsatisfiable for
$0<\theta<\frac{\pi}{2}$. It is readily checkable that an identical condition
is derived for the term $\mathds{1}\otimes\sigma$, given the symmetry of the
state ${|{\psi}\rangle}$ under swapping.
$\mathit{(\rho\otimes\sigma+\rho^{\bot}\otimes\sigma^{\bot})}$: for this term,
we can expand both $\rho$ and $\sigma$ in terms of Pauli operators:
$\displaystyle\rho$ $\displaystyle=\frac{1}{2}(\mathds{1}+\alpha X+\beta
Y+\gamma Z);\quad$ $\displaystyle\rho^{\bot}=\frac{1}{2}(\mathds{1}-\alpha
X-\beta Y-\gamma Z)$ (S29) $\displaystyle\sigma$
$\displaystyle=\frac{1}{2}(\mathds{1}+\alpha^{\prime}X+\beta^{\prime}Y+\gamma^{\prime}Z);\quad$
$\displaystyle\sigma^{\bot}=\frac{1}{2}(\mathds{1}-\alpha^{\prime}X-\beta^{\prime}Y-\gamma^{\prime}Z).$
(S30)
Inserting these expressions and the definition of ${|{\psi}\rangle}$ into the
condition that $p=1$ gives the constraint
$\gamma\gamma^{\prime}+(\alpha\alpha^{\prime}-\beta\beta^{\prime})\sin(2\theta)=1.$
(S31)
Now, we know from the Cauchy-Schwarz inequality that
$\gamma\gamma^{\prime}+(\alpha\alpha^{\prime}-\beta\beta^{\prime})\sin(2\theta)\leq\sqrt{\alpha^{\prime
2}+\beta^{\prime 2}+\gamma^{\prime
2}}\sqrt{\alpha^{2}\sin^{2}(2\theta)+\beta^{2}\sin^{2}(2\theta)+\gamma^{2}}\leq
1,$ (S32)
where the second inequality follows from the fact that
$\\{\alpha,\beta,\gamma\\}$ and
$\\{\alpha^{\prime},\beta^{\prime},\gamma^{\prime}\\}$ are Bloch-vector
parameterisations of a pair of density matrices, so each vector has norm at
most one. There are two ways that this
inequality can be saturated: (a) $\sin(2\theta)=1$; (b)
$\alpha\alpha^{\prime}-\beta\beta^{\prime}=0$, $\gamma\gamma^{\prime}=1$. In
all other cases, the inequality is strict. Thus the constraint in Eq. S31
cannot be satisfied in general. Exception (a) corresponds to
$\theta=\frac{\pi}{4}$, which is omitted from this proof and treated
separately. In exception (b), we have that $\gamma\gamma^{\prime}=1$ and so
either $\gamma=\gamma^{\prime}=1$ or $\gamma=\gamma^{\prime}=-1$. In both
cases we have that
$\rho\otimes\sigma+\rho^{\bot}\otimes\sigma^{\bot}=\left(\frac{\mathds{1}+Z}{2}\otimes\frac{\mathds{1}+Z}{2}\right)+\left(\frac{\mathds{1}-Z}{2}\otimes\frac{\mathds{1}-Z}{2}\right)=P^{+}_{ZZ},$
(S33)
where $P^{+}_{ZZ}$ is the projector onto the positive eigenspace of $ZZ$. This
is the only possible choice for this particular term that accepts
${|{\psi}\rangle}$ with certainty.
We can also make use of Lemma 4 to remove the term
$\mathds{1}\otimes\mathds{1}$. Given this and the restrictions above from
enforcing that $p=1$, the measurement strategy can be written
$\Omega=\alpha
P^{+}_{ZZ}+(1-\alpha)\sum_{k}\eta_{k}(\mathds{1}-\rho_{k}\otimes\sigma_{k}),$
(S34)
where $\sum_{k}\eta_{k}=1$ and $0\leq\alpha\leq 1$.
We will further narrow down the form of this strategy by _averaging_ ;
i.e. by noting that ${|{\psi}\rangle}$ is an eigenstate of
$M_{\zeta}\otimes M_{-\zeta}$ for every $\zeta$, where
$M_{\zeta}=\begin{pmatrix}1&0\\\ 0&e^{-i\zeta}\end{pmatrix},$ (S35)
so conjugating the strategy by $M_{\zeta}\otimes M_{-\zeta}$ and integrating
over all possible $\zeta$ cannot make the strategy worse; if we consider an
averaged strategy $\langle\Omega\rangle$ such that
$\langle\Omega\rangle=\frac{1}{2\pi}\int_{-\pi}^{\pi}d\zeta(M_{\zeta}\otimes
M_{-\zeta})\Omega(M_{-\zeta}\otimes M_{\zeta}),$ (S36)
then necessarily the performance of $\langle\Omega\rangle$ cannot be worse
than that of $\Omega$. To see this, note that the averaging procedure does not
affect the probability of accepting the state ${|{\psi}\rangle}$. However, for
each particular value of $\zeta$ the adversary’s optimisation may lead to a
different choice of orthogonal state
${|{\psi^{\bot}(\zeta)}\rangle}$, and so averaging over $\zeta$ cannot be
better for the adversary than choosing the optimal ${|{\psi^{\bot}}\rangle}$
at $\zeta=0$.
We can also consider discrete symmetries of the state ${|{\psi}\rangle}$. In
particular, ${|{\psi}\rangle}$ is invariant under both swapping the two
qubits, and complex conjugation (with respect to the standard basis); by the
same argument, averaging over these symmetries (i.e. by considering
$\Omega^{\prime}=\frac{1}{2}(\Omega+(\text{SWAP})\Omega(\text{SWAP}^{\dagger}))$
and $\Omega^{\prime\prime}=\frac{1}{2}(\Omega+\Omega^{*})$) cannot produce
strategies inferior to the original $\Omega$. Therefore we can consider a
strategy averaged over these families of symmetries of $\Omega$, without any
loss in performance.
This averaging process is useful for three reasons. Firstly, it heavily
restricts the number of free parameters in $\Omega$ requiring optimisation.
Secondly, it allows us to be explicit about the general form of $\Omega$.
Thirdly, the averaging procedures are distributive over addition; and so we
can make the replacement
$\displaystyle\Omega=\alpha
P^{+}_{ZZ}+(1-\alpha)\sum_{k}\eta_{k}(\mathds{1}-\rho_{k}\otimes\sigma_{k})\rightarrow\langle$
$\displaystyle\alpha
P^{+}_{ZZ}+(1-\alpha)\sum_{k}\eta_{k}(\mathds{1}-\rho_{k}\otimes\sigma_{k})\rangle$
$\displaystyle=$ $\displaystyle\alpha
P^{+}_{ZZ}+(1-\alpha)\sum_{k}\eta_{k}\langle\mathds{1}-\rho_{k}\otimes\sigma_{k}\rangle.$
(S37)
Note that a single term $\mathds{1}-\rho_{k}\otimes\sigma_{k}$ may, after
averaging, be a convex combination of multiple terms of the form
$\mathds{1}-\rho\otimes\sigma$. To proceed, we will use this averaging
procedure to show that it suffices to only include a single, post-averaging
term of the form $\langle\mathds{1}-\rho_{k}\otimes\sigma_{k}\rangle$ in the
strategy $\Omega$, and that the resulting operator can be explicitly
decomposed into exactly three measurement settings.
Consider a general operator $\Omega$, expressed as a $4\times 4$ matrix.
First, take the discrete symmetries of ${|{\psi}\rangle}$. Averaging over
complex conjugation in the standard basis implies that the coefficients of
$\langle\Omega\rangle$ are real; and averaging over qubit swapping implies
that $\langle\Omega\rangle$ is symmetric with respect to swapping of the two
qubits. Denote the operator after averaging these discrete symmetries as
$\bar{\Omega}$. Then consider averaging over the continuous symmetry of
${|{\psi}\rangle}$:
$\displaystyle\langle\Omega\rangle$
$\displaystyle=\frac{1}{2\pi}\int_{-\pi}^{\pi}d\zeta(M_{\zeta}\otimes
M_{-\zeta})\bar{\Omega}(M_{-\zeta}\otimes M_{\zeta})$ (S38)
$\displaystyle=\frac{1}{2\pi}\int_{-\pi}^{\pi}d\zeta\begin{pmatrix}1&0&0&0\\\
0&e^{i\zeta}&0&0\\\ 0&0&e^{-i\zeta}&0\\\
0&0&0&1\end{pmatrix}\begin{pmatrix}\omega_{00}&\omega_{01}&\omega_{01}&\omega_{03}\\\
\omega_{01}&\omega_{11}&\omega_{12}&\omega_{13}\\\
\omega_{01}&\omega_{12}&\omega_{11}&\omega_{13}\\\
\omega_{03}&\omega_{13}&\omega_{13}&\omega_{33}\end{pmatrix}\begin{pmatrix}1&0&0&0\\\
0&e^{-i\zeta}&0&0\\\ 0&0&e^{i\zeta}&0\\\ 0&0&0&1\end{pmatrix}$ (S39)
$\displaystyle=\begin{pmatrix}\omega_{00}&0&0&\omega_{03}\\\
0&\omega_{11}&0&0\\\ 0&0&\omega_{11}&0\\\
\omega_{03}&0&0&\omega_{33}\end{pmatrix}.$ (S40)
Thus after averaging using the above symmetries of ${|{\psi}\rangle}$,
$\langle\Omega\rangle$ can be written in the standard basis as
$\langle\Omega\rangle=\begin{pmatrix}a&0&0&b\\\ 0&c&0&0\\\ 0&0&c&0\\\
b&0&0&d\end{pmatrix},$ (S41)
for $a,b,c,d\in\mathbb{R}$. Enforcing that the strategy accepts
${|{\psi}\rangle}$ with certainty yields
$\langle\Omega\rangle{|{\psi}\rangle}={|{\psi}\rangle}$, or explicitly that
$\langle\Omega\rangle=\begin{pmatrix}1-b\cot\theta&0&0&b\\\ 0&c&0&0\\\
0&0&c&0\\\ b&0&0&1-b\tan\theta\end{pmatrix}.$ (S42)
The eigensystem of this operator is then completely specified; besides
${|{\psi}\rangle}$, it has the following eigenvectors:
${|{v_{1}}\rangle}=\cos\theta{|{00}\rangle}-\sin\theta{|{11}\rangle};\quad{|{v_{2}}\rangle}={|{01}\rangle};\quad{|{v_{3}}\rangle}={|{10}\rangle},$
(S43)
with corresponding eigenvalues $\lambda_{1}=1-b\csc\theta\sec\theta$ and
$\lambda_{2}=\lambda_{3}=c$. The maximum probability of accepting a state
orthogonal to ${|{\psi}\rangle}$, $q$, can then be written
$q=\|\Pi\langle\Omega\rangle\Pi\|=\max\\{\lambda_{1},\lambda_{2}\\},$ (S44)
where $\Pi=\mathds{1}-{|{\psi}\rangle}\\!{\langle{\psi}|}$. Therefore, any
reasoning about $q$ can be reduced to reasoning about the pair
$(\lambda_{1},\lambda_{2})$.
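As a quick numerical sanity check of the reduction above (not part of the original argument), the following minimal Python sketch builds $\langle\Omega\rangle$ of Eq. S42 for hypothetical values of $\theta$, $b$ and $c$, and confirms that ${|{\psi}\rangle}$ is accepted with certainty, that ${|{v_{1}}\rangle}$ has eigenvalue $1-b\csc\theta\sec\theta$, and that $q$ of Eq. S44 equals $\max\{\lambda_{1},\lambda_{2}\}$.

```python
import numpy as np

theta, b, c = 0.3, 0.05, 0.6   # hypothetical parameters, 0 < theta < pi/2

# <Omega> in the standard basis {|00>, |01>, |10>, |11>}, Eq. S42
O = np.array([[1 - b/np.tan(theta), 0, 0, b],
              [0, c, 0, 0],
              [0, 0, c, 0],
              [b, 0, 0, 1 - b*np.tan(theta)]])

psi = np.array([np.sin(theta), 0, 0, np.cos(theta)])   # |psi>
v1  = np.array([np.cos(theta), 0, 0, -np.sin(theta)])  # |v1>

print(np.allclose(O @ psi, psi))                       # |psi> accepted with certainty
lam1 = 1 - b/(np.sin(theta)*np.cos(theta))             # predicted lambda_1
print(np.allclose(O @ v1, lam1*v1))                    # eigenvalue of |v1>
Pi = np.eye(4) - np.outer(psi, psi)                    # projector onto the psi-orthogonal subspace
q = np.linalg.norm(Pi @ O @ Pi, 2)                     # Eq. S44
print(np.isclose(q, max(lam1, c)))                     # q = max(lambda_1, lambda_2)
```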
Now, we will show that it suffices to only consider a single term of the form
$\langle\mathds{1}-\rho_{k}\otimes\sigma_{k}\rangle$ in the decomposition of
$\Omega$. We write a strategy of this form as
$\Omega=\alpha
P^{+}_{ZZ}+(1-\alpha)\langle\mathds{1}-\rho\otimes\sigma\rangle.$ (S45)
For the term $\langle\mathds{1}-\rho\otimes\sigma\rangle$, we have a
constraint on the trace; if we label the eigenvalues for this term as
$\lambda_{1}^{(3)}$ and $\lambda_{2}^{(3)}$, we have the constraint that
$1+\lambda^{(3)}_{1}+2\lambda_{2}^{(3)}=\operatorname{tr}\langle\mathds{1}-\rho\otimes\sigma\rangle=3\Rightarrow\lambda_{2}^{(3)}=1-\frac{\lambda_{1}^{(3)}}{2}$.
The locus of points satisfying this constraint is plotted in the
$(\lambda_{1},\lambda_{2})$ plane as the thick black line in Fig. S2.
Moreover, we will show that a single term of this form can achieve any valid
choice of $\lambda_{1}^{(3)}$ on this locus (which we defer until we have an
explicit parameterisation of terms of this type; see Eq. S57, below).
However, we also have an additional constraint derived from insisting that the
strategy remains local. For example, the point $(0,1)$ in the
$(\lambda_{1},\lambda_{2})$ plane represents the strategy
$\Omega=\mathds{1}-{|{v_{1}}\rangle}\\!{\langle{v_{1}}|}$, which corresponds
to the strategy where the verifier projects onto ${|{v_{1}}\rangle}$ and
accepts if the outcome is not ${|{v_{1}}\rangle}$. But this type of
measurement is operationally forbidden as ${|{v_{1}}\rangle}$ is entangled.
It can be readily checked that, for an arbitrary $\theta$, it is not possible
to cover the full locus in the range $0\leq\lambda_{1}\leq 1$ with a separable
strategy; instead, there is a fixed lower bound on $\lambda_{1}^{(3)}$. To see
this, write
$\langle\mathds{1}-\rho\otimes\sigma\rangle={|{\psi}\rangle}\\!{\langle{\psi}|}+\lambda_{1}^{(3)}{|{v_{1}}\rangle}\\!{\langle{v_{1}}|}+\frac{2-\lambda_{1}^{(3)}}{2}({|{v_{2}}\rangle}\\!{\langle{v_{2}}|}+{|{v_{3}}\rangle}\\!{\langle{v_{3}}|}).$
(S46)
Then, taking just the $\langle\rho\otimes\sigma\rangle$ part and expressing as
a matrix in the computational basis gives
$\langle\rho\otimes\sigma\rangle=\begin{pmatrix}(1-\lambda_{1}^{(3)})\cos^{2}\theta&0&0&(\lambda_{1}^{(3)}-1)\cos\theta\sin\theta\\\
0&\frac{\lambda_{1}^{(3)}}{2}&0&0\\\ 0&0&\frac{\lambda_{1}^{(3)}}{2}&0\\\
(\lambda_{1}^{(3)}-1)\cos\theta\sin\theta&0&0&(1-\lambda_{1}^{(3)})\sin^{2}\theta\end{pmatrix}.$
(S47)
To enforce separability it is necessary and sufficient to check positivity
under partial transposition, yielding the constraint
$\lambda_{1}^{(3)}-(1-\lambda_{1}^{(3)})\sin(2\theta)\geq 0$. Simple
rearrangement gives a lower bound that must be satisfied for the strategy to
remain separable:
$\lambda_{1}^{(3)}\geq\frac{\sin(2\theta)}{1+\sin(2\theta)}\coloneqq\lambda_{LB}.$
(S48)
This additional locality constraint rules out any point on the black line to
the left of the red point in Fig. S2. The term $P^{+}_{ZZ}$ has parameters
$\lambda_{1}^{ZZ}=1$, $\lambda_{2}^{ZZ}=0$ and so represents a single point in
the $(\lambda_{1},\lambda_{2})$ plane. Thus the parameters
$(\lambda_{1},\lambda_{2})$ for the full strategy $\Omega$ must be represented
by a point in the convex hull of the single point representing the
$P^{+}_{ZZ}$ term and the locus of points representing the trace 3 part - i.e.
in the unshaded region in Fig. S2.
We now show that a strategy that includes more trace 3 terms cannot improve on
the performance of the strategy above. Write this expanded strategy as
$\Omega^{\prime}=\alpha
P^{+}_{ZZ}+(1-\alpha)\langle\sum_{k}\eta_{k}(\mathds{1}-\rho_{k}\otimes\sigma_{k})\rangle,$
(S49)
for $\sum_{k}\eta_{k}=1$. Firstly, we note again that the averaging operations
(SWAP, conjugation via $M_{\zeta}$ and complex conjugation in the standard
basis) are distributive over addition and so we can make the replacement
$\Omega^{\prime}=\alpha
P^{+}_{ZZ}+(1-\alpha)\sum_{k}\eta_{k}\langle\mathds{1}-\rho_{k}\otimes\sigma_{k}\rangle.$
(S50)
Write the composite term
$\sum_{k}\eta_{k}\langle\mathds{1}-\rho_{k}\otimes\sigma_{k}\rangle\coloneqq\Omega_{\text{comp}}$,
with parameters $\lambda_{1}^{\text{comp}}$ and $\lambda_{2}^{\text{comp}}$.
Note that each term in $\Omega_{\text{comp}}$ satisfies both the constraint
from the trace and the constraint from PPT in S48, and hence so does
$\Omega_{\text{comp}}$. Now, each operator in this term shares the same
eigenbasis (namely, the set of states $\\{{|{v_{i}}\rangle}\\}$ in S43). Thus
we know that $\lambda_{1}^{\text{comp}}=\sum_{k}\eta_{k}\lambda_{1,k}$, and
likewise for $\lambda_{2}^{\text{comp}}$; i.e. the strategy parameters for
this composite term are just a convex combination of those for its constituent
parts. A term $\Omega_{\text{comp}}$ is then specified in the
$(\lambda_{1},\lambda_{2})$ plane by a point
$\mathcal{P}_{\text{comp}}=(\lambda_{1}^{\text{comp}},\lambda_{2}^{\text{comp}})\in\text{Conv}(\lambda_{1,k},\lambda_{2,k})$
(i.e. the point $\mathcal{P}_{\text{comp}}$ must lie on the thick black line
bounding the unshaded region in Fig. S2).
Thus we know that $\text{Conv}(\Omega^{\prime})\subseteq\text{Conv}(\Omega)$,
and so any strategy writeable in the form S49 can be replaced by a strategy of
the form S45 with identical parameters $(\lambda_{1},\lambda_{2})$, and hence
identical performance. Thus, we need only consider strategies of the form
$\Omega=\alpha
P^{+}_{ZZ}+(1-\alpha)\langle\mathds{1}-\rho\otimes\sigma\rangle.$ (S51)
We can now be explicit about the form of the above strategy. For $\Omega$ to
accept ${|{\psi}\rangle}$ with certainty, $\rho\otimes\sigma$ must annihilate
${|{\psi}\rangle}$ and so we make the replacement
$\rho\otimes\sigma={|{\tau}\rangle}\\!{\langle{\tau}|}$, where
${|{\tau}\rangle}$ is the most general pure product state that annihilates
${|{\psi}\rangle}$. To be explicit about the form of the state
${|{\tau}\rangle}$, write a general two-qubit separable state as
${|{\tau}\rangle}=(\cos\phi{|{0}\rangle}+e^{i\eta}\sin\phi{|{1}\rangle})\otimes(\cos\xi{|{0}\rangle}+e^{i\zeta}\sin\xi{|{1}\rangle}),$
(S52)
where we take $0\leq\phi,\xi\leq\frac{\pi}{2}$, without loss of generality.
The constraint that this state annihilates
${|{\psi}\rangle}=\sin\theta{|{00}\rangle}+\cos\theta{|{11}\rangle}$ is
$\cos\phi\cos\xi\sin\theta+e^{-i(\eta+\zeta)}\sin\phi\sin\xi\cos\theta=0.$
(S53)
If either $\phi=0$ or $\xi=0$, the second term vanishes and Eq. S53 reduces to
$\cos\phi\cos\xi\sin\theta=0$, implying that $\xi=\frac{\pi}{2}$ or
$\phi=\frac{\pi}{2}$, respectively. This yields the annihilating states
${|{\tau}\rangle}={|{01}\rangle}$ and ${|{\tau}\rangle}={|{10}\rangle}$,
respectively. If $\phi,\xi\neq 0$ then the imaginary part of Eq. S53 forces
$e^{-i(\eta+\zeta)}=\pm 1$, and since every remaining factor is positive the
real part requires $e^{-i(\eta+\zeta)}=-1$. Then we can rearrange to give
$\tan\phi\tan\xi=\tan\theta.$ (S54)
Using this constraint and the identities
$\cos\xi=\frac{1}{\sqrt{1+\tan^{2}\xi}};\quad\sin\xi=\frac{\tan\xi}{\sqrt{1+\tan^{2}\xi}},$
(S55)
we can eliminate $\xi$ to yield
${|{\tau}\rangle}=(\cos\phi{|{0}\rangle}+e^{i\eta}\sin\phi{|{1}\rangle})\otimes\left(\frac{\tan\phi}{\sqrt{\tan^{2}\phi+\tan^{2}\theta}}{|{0}\rangle}-\frac{e^{-i\eta}\tan\theta}{\sqrt{\tan^{2}\phi+\tan^{2}\theta}}{|{1}\rangle}\right).$
(S56)
Note that, for $0<\theta<\frac{\pi}{2}$, taking the limits $\phi\rightarrow 0$
and $\phi\rightarrow\frac{\pi}{2}$ we recover the cases
${|{\tau}\rangle}={|{01}\rangle}$ and ${|{\tau}\rangle}={|{10}\rangle}$, up to
irrelevant global phases. Thus we can proceed without loss of generality by
assuming that $\rho\otimes\sigma={|{\tau}\rangle}\\!{\langle{\tau}|}$, where
${|{\tau}\rangle}$ is given by Eq. S56. Averaging over the symmetries of
${|{\psi}\rangle}$ outlined above then yields the following expression:
$\langle\rho\otimes\sigma\rangle=\frac{1}{t^{2}\phi+t^{2}\theta}\begin{pmatrix}s^{2}\phi&0&0&-s^{2}\phi
t\theta\\\ 0&\frac{1}{2}\left(c^{2}\phi t^{2}\theta+s^{2}\phi
t^{2}\phi\right)&0&0\\\ 0&0&\frac{1}{2}\left(c^{2}\phi t^{2}\theta+s^{2}\phi
t^{2}\phi\right)&0\\\ -s^{2}\phi t\theta&0&0&s^{2}\phi
t^{2}\theta\end{pmatrix},$ (S57)
using the shorthand $s$, $c$, $t$ for $\sin$, $\cos$ and $\tan$, respectively.
Given this explicit parameterisation we can extract the eigenvalue
$\lambda_{1}^{(3)}$:
$\lambda_{1}^{(3)}=1-\frac{\sec^{2}\theta\sin^{2}\phi}{\tan^{2}\theta+\tan^{2}\phi}.$
(S58)
It can be shown by simple differentiation w.r.t. $\phi$ that, for fixed
$\theta$, this expression has a minimum at $\lambda_{1}^{(3)}=\lambda_{LB}$.
Also, this expression is a continuous function of $\phi$ and therefore can
take any value up to its maximum (namely, $1$). Hence a single trace 3 term is
enough to achieve any point in the allowable convex hull in Fig. S2. For
convenience we will denote $\tan^{2}\phi=P,\;\tan^{2}\theta=T$ for $0\leq
P\leq\infty$, $0<T<\infty$. The explicit form for the whole strategy is then
$\Omega=\begin{pmatrix}\frac{T+P(P+T+\alpha)}{(1+P)(P+T)}&0&0&\frac{(1-\alpha)P\sqrt{T}}{(1+P)(P+T)}\\\
0&\frac{(1-\alpha)(T+2P+P^{2}+2PT)}{2(1+P)(P+T)}&0&0\\\
0&0&\frac{(1-\alpha)(T+2P+P^{2}+2PT)}{2(1+P)(P+T)}&0\\\
\frac{(1-\alpha)P\sqrt{T}}{(1+P)(P+T)}&0&0&\frac{T+P(1+P+\alpha
T)}{(1+P)(P+T)}\end{pmatrix}.$ (S59)
We now optimise over the two remaining free parameters, $\\{\alpha,\phi\\}$
(or alternatively, $\\{\alpha,P\\}$) for fixed $\theta$ (or fixed $T$). This
optimisation is rather straightforward from inspection (see Fig. S2), and the
reader may wish to skip to the answer in Eq. S66. However, we include an
analytic proof for the sake of completeness. We have shown that it suffices to
consider the eigenvalues $\lambda_{1}$ and $\lambda_{2}$, given in this case
by the expressions
$\lambda_{1}(\alpha,P,T)=1-\frac{P(1-\alpha)(1+T)}{(1+P)(P+T)};\quad\lambda_{2}(\alpha,P,T)=(1-\alpha)\left[1-\frac{T+P^{2}}{2(1+P)(P+T)}\right].$
(S60)
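As a numerical cross-check (with hypothetical parameter values, not taken from the paper), the sketch below builds the explicit matrix of Eq. S59 and confirms that it fixes ${|{\psi}\rangle}$ and that its spectrum is $\{1,\lambda_{1},\lambda_{2},\lambda_{2}\}$ with $\lambda_{1},\lambda_{2}$ as in Eq. S60.

```python
import numpy as np

theta, alpha, phi = 0.3, 0.2, 0.5          # hypothetical parameters for the check
T, P = np.tan(theta)**2, np.tan(phi)**2
d = (1 + P)*(P + T)

# Explicit strategy of Eq. S59 in the standard basis
Omega = np.array([
    [(T + P*(P + T + alpha))/d, 0, 0, (1 - alpha)*P*np.sqrt(T)/d],
    [0, (1 - alpha)*(T + 2*P + P**2 + 2*P*T)/(2*d), 0, 0],
    [0, 0, (1 - alpha)*(T + 2*P + P**2 + 2*P*T)/(2*d), 0],
    [(1 - alpha)*P*np.sqrt(T)/d, 0, 0, (T + P*(1 + P + alpha*T))/d]])

psi = np.array([np.sin(theta), 0, 0, np.cos(theta)])
print(np.allclose(Omega @ psi, psi))        # accepts |psi> with certainty

# Eigenvalues on the orthogonal complement, Eq. S60
lam1 = 1 - P*(1 - alpha)*(1 + T)/d
lam2 = (1 - alpha)*(1 - (T + P**2)/(2*d))
evals = np.sort(np.linalg.eigvalsh(Omega))
print(np.allclose(evals, np.sort([1.0, lam1, lam2, lam2])))
```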
The parameter $q$ is given by the maximum of these two eigenvalues. Note that,
if $P=0$, the expression $\lambda_{1}(\alpha,0,T)=1$ which implies that the
adversary can pick a state that the verifier always accepts, and hence the
strategy fails. Likewise, taking the limit
$\lim_{P\rightarrow\infty}\lambda_{1}(\alpha,P,T)=1$. Thus we must restrict to
the range $0<P<\infty$ to construct a viable strategy for the verifier. For
fixed $T$, we minimise $q$ over $P$ and $\alpha$ by examining its derivatives
with respect to each parameter, recalling that $q$ is the pointwise maximum of
$\lambda_{1}$ and $\lambda_{2}$. First, we calculate the derivatives w.r.t. $\alpha$:
$\frac{\partial\lambda_{1}}{\partial\alpha}=\frac{P(1+T)}{(1+P)(P+T)};\quad\frac{\partial\lambda_{2}}{\partial\alpha}=\frac{-(2P+P^{2}+T+2PT)}{2(1+P)(P+T)}.$
(S61)
Given that $P>0$ and $T>0$, we have that for any choice of $T$,
$\partial_{\alpha}\lambda_{1}>0$ and $\partial_{\alpha}\lambda_{2}<0$. Thus,
one of three cases can occur: (a) for a given choice of $T$ and $P$, the lines
given by $\lambda_{1}$ and $\lambda_{2}$ intersect in the range
$0\leq\alpha\leq 1$ and hence there is a valid $\alpha$ such that $q$ is
minimised when $\lambda_{1}=\lambda_{2}$; (b) for a given choice of $T$ and
$P$, $\lambda_{1}>\lambda_{2}$ in the range $0\leq\alpha\leq 1$ and hence $q$
is minimised when $\alpha=0$; (c) for a given choice of $T$ and $P$,
$\lambda_{1}<\lambda_{2}$ in the range $0\leq\alpha\leq 1$ and hence $q$ is
minimised when $\alpha=1$. However, we note that this final case cannot occur;
it suffices to check that $\lambda_{1}(\alpha=1)>\lambda_{2}(\alpha=1)$, and
from the expressions in (S60) we have that $\lambda_{1}(\alpha=1)=1$ and
$\lambda_{2}(\alpha=1)=0$. As a visual aid for the remaining two cases, see
Fig. S2. In case (a),
$q=\lambda_{1}=\lambda_{2}=\frac{1}{2}+\frac{1}{2}\left(\frac{T+P^{2}}{T+P^{2}+4P(1+T)}\right).$
(S62)
In case (b), we have that
$q=\lambda_{1}(0,P,T)=\frac{T+P^{2}}{(1+P)(P+T)}.$ (S63)
We must also minimise w.r.t. $\phi$; however, we can safely minimise w.r.t.
$P$ as $\partial_{\phi}P>0$ (unless $\phi=0$, but in this case $q=1$ and the
strategy fails). In case (b), we have
$\frac{\partial q}{\partial P}=\frac{(P^{2}-T)(1+T)}{(1+P)^{2}(P+T)^{2}}.$
(S64)
In this case, consider the two points implicitly defined by the constraint
$\lambda_{1}(0,P,T)=\lambda_{2}(0,P,T)$ (drawn as the black points in Fig.
S2). Denote these points $f^{\pm}(T)$. It can be readily checked that in case
(b), $\partial_{P}q<0$ for any $P<f^{-}(T)$, and $\partial_{P}q>0$ for any
$P>f^{+}(T)$. Thus the minimum w.r.t. $P$ must occur when
$\lambda_{1}(0,P,T)=\lambda_{2}(0,P,T)$ and hence we can restrict our
attention to case (a) (note Fig. S2). In this case, $\partial_{P}q$ becomes
$\frac{\partial q}{\partial P}=\frac{-2(1+T)(T-P^{2})}{[T+4PT+P(4+P)]^{2}}=0,$
(S65)
which implies that $P=\sqrt{T}$. Substituting in the optimal choices for the
parameters $\\{\alpha,P\\}$ and reexpressing solely in terms of $\theta$ gives
the optimal strategy
$\Omega^{opt}=\frac{2-\sin(2\theta)}{4+\sin(2\theta)}P^{+}_{ZZ}+\frac{2(1+\sin(2\theta))}{4+\sin(2\theta)}\Omega_{3}^{opt},$
(S66)
where $\Omega_{3}^{opt}$ is given by
$\Omega_{3}^{opt}=\mathds{1}-\frac{1}{(1+t)^{2}}\begin{pmatrix}1&0&0&-t\\\
0&t&0&0\\\ 0&0&t&0\\\ -t&0&0&t^{2}\end{pmatrix},\quad t=\tan\theta.$ (S67)
This strategy accepts an orthogonal state with probability
$q_{opt}=\frac{2+\sin(2\theta)}{4+\sin(2\theta)},$ (S68)
implying that the number of measurements needed to verify to within accuracy
$\epsilon$ and with statistical power $1-\delta$ under this test is
$n_{opt}=\frac{\ln\delta^{-1}}{\ln((1-\Delta_{\epsilon})^{-1})}=\frac{\ln\delta^{-1}}{\ln((1-\epsilon(1-q^{opt}))^{-1})}\approx(2+\sin\theta\cos\theta)\epsilon^{-1}\ln\delta^{-1}.$
(S69)
The final step is to show that the operator $\Omega_{3}^{opt}$ can be
decomposed into a small set of locally implementable, projective measurements.
We can do so with a strategy involving only three terms:
$\Omega_{3}^{opt}=\frac{1}{3}\left[\sum_{k=1}^{3}(\mathds{1}-{|{\phi_{k}}\rangle}\\!{\langle{\phi_{k}}|})\right],$
(S70)
where the set of separable states $\\{{|{\phi_{k}}\rangle}\\}$ are the
following:
$\displaystyle{|{\phi_{1}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{2\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right),$ (S71)
$\displaystyle{|{\phi_{2}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{4\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{e^{\frac{5\pi
i}{3}}}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right),$ (S72)
$\displaystyle{|{\phi_{3}}\rangle}$
$\displaystyle=\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}+\frac{1}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right)\otimes\left(\frac{1}{\sqrt{1+\tan\theta}}{|{0}\rangle}-\frac{1}{\sqrt{1+\cot\theta}}{|{1}\rangle}\right),$
(S73)
which gives a strategy of the required form. ∎
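The decomposition in Eqs. S70–S73 can also be verified numerically; the sketch below (for an arbitrary illustrative $\theta$) averages the three projective tests and compares the result with the closed form of Eq. S67.

```python
import numpy as np

theta = 0.3                                   # illustrative value, 0 < theta < pi/2
t = np.tan(theta)
a, b = 1/np.sqrt(1 + t), 1/np.sqrt(1 + 1/t)   # 1/sqrt(1+tan) and 1/sqrt(1+cot)

def product_state(w, v):
    """(a|0> + w*b|1>) (x) (a|0> + v*b|1>) as a 4-vector."""
    return np.kron(np.array([a, w*b]), np.array([a, v*b]))

phis = [product_state(np.exp(2j*np.pi/3), np.exp(1j*np.pi/3)),   # Eq. S71
        product_state(np.exp(4j*np.pi/3), np.exp(5j*np.pi/3)),   # Eq. S72
        product_state(1.0, -1.0)]                                # Eq. S73

# Average of the three projective tests, Eq. S70
Omega3 = np.mean([np.eye(4) - np.outer(p, p.conj()) for p in phis], axis=0)

# Closed form of Eq. S67
Omega3_opt = np.eye(4) - np.array([[1, 0, 0, -t],
                                   [0, t, 0, 0],
                                   [0, 0, t, 0],
                                   [-t, 0, 0, t**2]])/(1 + t)**2
print(np.allclose(Omega3, Omega3_opt))   # True
```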
Figure S1: Shaded region: unreachable parameters given a strategy $\Omega$
that is both local and of the form $\Omega=\alpha
P^{+}_{ZZ}+(1-\alpha)\Omega_{3}$, where $\Omega_{3}$ is the trace 3 part.
Here, $\theta=\frac{\pi}{8}$.
Figure S2: A contour map of the function
$q(\alpha,\phi)=\max\\{\lambda_{1}(\alpha,\phi),\lambda_{2}(\alpha,\phi)\\}$
for $\theta=\frac{\pi}{8}$, where the pair $(\lambda_{1},\lambda_{2})$ are
given in S60. The pink curve denotes the minimum w.r.t $\alpha$ given fixed
$\phi$. Above the curve, $\lambda_{1}>\lambda_{2}$; below,
$\lambda_{1}<\lambda_{2}$.
We now briefly treat the special cases that were omitted from the above proof:
$\theta=0,\frac{\pi}{4},\frac{\pi}{2}$.
$\mathit{\theta=0,\theta=\frac{\pi}{2}}$: In these cases, the state
${|{\psi}\rangle}={|{00}\rangle}$ or ${|{\psi}\rangle}={|{11}\rangle}$. Then
the globally optimal strategy, just projecting onto ${|{\psi}\rangle}$, is an
allowed local measurement. Thus in these cases the optimal strategy is to just
apply the projector ${|{00}\rangle}\\!{\langle{00}|}$ or
${|{11}\rangle}\\!{\langle{11}|}$. Given this strategy we have that $p=1$ and
$q=0$, giving a scaling of the number of measurements required as
$n_{opt}\approx\epsilon^{-1}\ln\delta^{-1}.$ (S74)
$\mathit{\theta=\frac{\pi}{4}}$: This case is treated explicitly in the main
body. The optimal strategy is to perform the Pauli measurements $XX$, $-YY$
and $ZZ$ with equal weight; i.e.
$\Omega=\frac{1}{3}(P^{+}_{XX}+P^{+}_{-YY}+P^{+}_{ZZ}),$ (S75)
where $P^{+}_{M}$ is the projector onto the positive eigensubspace of the
operator $M$. In this case, the number of measurements required is
$n_{opt}\approx\frac{3}{2}\epsilon^{-1}\ln\delta^{-1}.$ (S76)
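A short numerical check of this special case: the sketch below builds the strategy of Eq. S75 and confirms that it accepts the Bell state with certainty while any orthogonal state is accepted with probability at most $q=\frac{1}{3}$, which gives the scaling of Eq. S76.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
I4 = np.eye(4)

def pos_proj(M):
    """Projector onto the +1 eigenspace of a two-outcome Pauli observable M."""
    return (I4 + M)/2

Omega = (pos_proj(np.kron(X, X)) + pos_proj(-np.kron(Y, Y)) + pos_proj(np.kron(Z, Z)))/3

phi_plus = np.array([1, 0, 0, 1])/np.sqrt(2)            # |psi> at theta = pi/4
print(np.allclose(Omega @ phi_plus, phi_plus))          # accepted with certainty
Pi = I4 - np.outer(phi_plus, phi_plus)
q = np.linalg.norm(Pi @ Omega @ Pi, 2)                  # worst-case acceptance of orthogonal states
print(np.isclose(q, 1/3))                               # q = 1/3, giving n ≈ (3/2) eps^-1 ln(1/delta)
```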
## Appendix D Stabilizer states
We now discuss verification strategies for stabilizer states. We take
${|{\psi}\rangle}$ to be a stabilizer state of $N$ qubits, namely that there
exists a generating set of $N$ commuting Pauli operators $M_{1},\dots,M_{N}$
on $N$ qubits such that $M_{i}{|{\psi}\rangle}={|{\psi}\rangle}$ for all $i$.
Stabilizer states are ubiquitous in various areas of quantum information, for
example in quantum error correction and measurement-based quantum computing;
for an introduction to the stabilizer formalism, see Gottesman (1997, 1996)
and Nielsen and Chuang (2010) Sec 10.5. We will describe below a strategy
constructed from only stabilizer measurements that accepts ${|{\psi}\rangle}$
with certainty, and hence achieves the same asymptotic scaling in the number
of required measurements with respect to $\epsilon$ as the two-qubit case
above. However, we do not rule out that there may be non-stabilizer strategies
that give a small constant factor improvement over the strategy defined here.
###### Theorem 5.
Write a stabilizer state ${|{\psi}\rangle}$ and strategy
$\Omega=\sum_{j=1}^{K}\mu_{j}P_{j}$, where the set $\\{P_{j}\\}$ are the
projectors onto the positive eigenspace of $K$ linearly independent
stabilizers of ${|{\psi}\rangle}$, for $K\leq 2^{N}-1$. Then the optimal
choice of the parameter $K$ and weights $\mu_{j}$ are
$K=2^{N}-1;\;\mu_{j}=\frac{1}{2^{N}-1}$ for all $j$. The number of
measurements needed to verify to within infidelity $\epsilon$ and statistical
power $1-\delta$ is then
$n_{opt}^{stab}\approx\frac{2^{N}-1}{2^{(N-1)}}\epsilon^{-1}\ln\frac{1}{\delta}.$
(S77)
###### Proof.
Recall that as the verifier accepts ${|{\psi}\rangle}$ with certainty, we are
concerned with the optimisation of $\Delta_{\epsilon}$, which can be written
as
$\displaystyle\Delta_{\epsilon}$
$\displaystyle=\max_{\Omega}\min_{{|{\psi^{\bot}}\rangle}}\epsilon(1-{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle})$
(S78)
$\displaystyle=\epsilon(1-\min_{\Omega}\max_{{|{\psi^{\bot}}\rangle}}{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}),$
(S79)
where the maximisation is over positive matrices $\Omega$ such that
$\Omega{|{\psi}\rangle}={|{\psi}\rangle}$.
Now consider $\Omega$ written as a matrix in the basis
$\\{{|{\psi}\rangle},{|{\psi_{j}^{\bot}}\rangle}\\}$, $j=1\ldots(2^{N}-1)$
where the states ${|{\psi_{j}^{\bot}}\rangle}$ are mutually orthogonal and all
orthogonal to ${|{\psi}\rangle}$. Given that
$\Omega{|{\psi}\rangle}={|{\psi}\rangle}$, we know that
${\langle{\psi_{j}^{\bot}}|}\Omega{|{\psi}\rangle}=0\;\forall j$. Then in this
basis $\Omega$ can be written
$\Omega=\begin{pmatrix}1&\mathbf{0}^{\top}\\\
\mathbf{0}&\mathbf{M}\end{pmatrix},$ (S80)
where $\mathbf{0}$ is the $(2^{N}-1)$-dimensional zero vector and $\mathbf{M}$
is a $(2^{N}-1)\times(2^{N}-1)$ Hermitian matrix. Then $\Omega$ must be
writable as
$\Omega={|{\psi}\rangle}\\!{\langle{\psi}|}+\sum_{j=1}^{2^{N}-1}\nu_{j}{|{\phi_{j}}\rangle}\\!{\langle{\phi_{j}}|}$,
where $\sum_{j}\nu_{j}{|{\phi_{j}}\rangle}\\!{\langle{\phi_{j}}|}$ is the
spectral decomposition of $\mathbf{M}$. Given this decomposition, the
optimisation for the adversary is straightforward – pick
${|{\psi^{\bot}}\rangle}$ to be the eigenstate in the decomposition of
$\mathbf{M}$ with largest eigenvalue:
${|{\psi^{\bot}}\rangle}={|{\phi_{max}}\rangle}$ where
$\nu_{max}=\max_{j}\nu_{j}$. Then
$\Delta_{\epsilon}=\epsilon(1-\min_{\Omega}{\langle{\phi_{max}}|}\Omega{|{\phi_{max}}\rangle})=\epsilon(1-\min_{\Omega}\nu_{max}).$
(S81)
Given this choice by the adversary, the verifier is then forced to set the
strategy such that all the eigenvalues of $\mathbf{M}$ are equal; i.e. that
$\mathbf{M}=a\mathds{1}$ for some constant $a$. To see this, consider an
alternative strategy where the eigenvalues $\nu_{j}$ are not equal. Now,
consider rewriting $\Omega$ in terms of stabilizers of ${|{\psi}\rangle}$. For
any stabilizer (i.e. tensor product of Paulis, perhaps with an overall phase)
$M$ over $N$ qubits, the projector onto the positive eigensubspace has
$\operatorname{tr}(P_{M}^{+})=2^{N-1}$. Given that $\Omega$ is built from a
convex combination of these projectors, and recalling from Lemma 4 that
$\Omega$ does not contain an identity term, we also know that
$\operatorname{tr}(\Omega)=2^{N-1}$. However, we have also expanded $\Omega$
as
$\Omega={|{\psi}\rangle}\\!{\langle{\psi}|}+\sum_{j}\nu_{j}{|{\phi_{j}}\rangle}\\!{\langle{\phi_{j}}|}$,
and so
$\operatorname{tr}(\Omega)=1+\sum_{j}\nu_{j}=2^{N-1}.$ (S82)
Then, it is straightforward to see that decreasing any eigenvalue below $a$
must result in an increase in at least one other eigenvalue in order to
maintain this equality, and hence would increase the value of $\nu_{max}$.
Thus the optimal choice for the verifier is to set
$\Omega={|{\psi}\rangle}\\!{\langle{\psi}|}+a\mathds{1}^{\bot}$, where
$\mathds{1}^{\bot}$ is the identity matrix on the subspace orthogonal to
${|{\psi}\rangle}$. Taking the trace of this expression gives
$\operatorname{tr}[{|{\psi}\rangle}\\!{\langle{\psi}|}+a\mathds{1}^{\bot}]=1+(2^{N}-1)a=2^{N-1}.$
(S83)
This can be rearranged for $a$ and then substituted into the expression for
$\Delta_{\epsilon}$, which gives
$\Delta_{\epsilon}=\frac{2^{(N-1)}}{2^{N}-1}\epsilon,$ (S84)
or that the number of stabilizer measurements required to verify
${|{\psi}\rangle}$ is bounded below by
$n_{opt}^{stab}\approx\frac{2^{N}-1}{2^{(N-1)}}\epsilon^{-1}\ln\delta^{-1}.$
(S85)
The optimal
$\Omega={|{\psi}\rangle}\\!{\langle{\psi}|}+\frac{2^{(N-1)}-1}{2^{N}-1}\mathds{1}^{\bot}$
and the optimal scaling above can be achieved by decomposing $\Omega$ into a
strategy involving a maximal set (excluding the identity) of $2^{N}-1$
linearly independent stabilizers, all with equal weight. To see this note that
for a stabilizer group of a state ${|{\psi}\rangle}$ of $N$ qubits, there are
$2^{N}$ linearly independent stabilizers (including the identity element).
Denote these stabilizers $\\{M_{i},i=1\ldots 2^{N}\\}$. Then, we make use of
the fact that Hein (2005)
$\frac{1}{2^{N}}\sum_{i=1}^{2^{N}}M_{i}={|{\psi}\rangle}\\!{\langle{\psi}|}.$
(S86)
Explicitly extracting the identity element gives
$\sum_{i=1}^{2^{N}-1}M_{i}=2^{N}{|{\psi}\rangle}\\!{\langle{\psi}|}-\mathds{1}.$
(S87)
Now, each stabilizer (for any $N$) is a two outcome measurement and so we can
make use of the fact that $M_{i}$ can be written in terms of the projector
onto the positive eigenspace of $M_{i}$, denoted $P^{+}_{i}$, as
$M_{i}=2P^{+}_{i}-\mathds{1}$. Substituting in this expression and rearranging
gives
$\sum_{i=1}^{2^{N}-1}P^{+}_{i}=2^{(N-1)}{|{\psi}\rangle}\\!{\langle{\psi}|}+(2^{(N-1)}-1)\mathds{1}.$
(S88)
Then normalising this expression over $2^{N}-1$ stabilizers yields
$\displaystyle\frac{1}{2^{N}-1}\sum_{i=1}^{2^{N}-1}P^{+}_{i}$
$\displaystyle=\frac{2^{(N-1)}}{2^{N}-1}{|{\psi}\rangle}\\!{\langle{\psi}|}+\frac{2^{(N-1)}-1}{2^{N}-1}\mathds{1}$
$\displaystyle=\frac{2^{(N-1)}+2^{(N-1)}-1}{2^{N}-1}{|{\psi}\rangle}\\!{\langle{\psi}|}+\frac{2^{(N-1)}-1}{2^{N}-1}\mathds{1}^{\bot}$
$\displaystyle={|{\psi}\rangle}\\!{\langle{\psi}|}+\frac{2^{(N-1)}-1}{2^{N}-1}\mathds{1}^{\bot}=\Omega,$
(S89)
where $\mathds{1}^{\bot}$ is the identity matrix on the subspace orthogonal to
${|{\psi}\rangle}$, as required. ∎
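As an illustration of Theorem 5 and of the decomposition in Eq. S89, the following sketch uses the three-qubit GHZ state as an example stabilizer state (an illustrative choice, not one singled out in the text), enumerates its $2^{N}-1$ non-identity stabilizers, and checks that the equal-weight mixture of their positive projectors reproduces ${|{\psi}\rangle}\!{\langle{\psi}|}+\frac{2^{(N-1)}-1}{2^{N}-1}\mathds{1}^{\bot}$.

```python
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    return reduce(np.kron, ops)

# Example: 3-qubit GHZ state and its stabilizer generators
N = 3
ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1/np.sqrt(2)
gens = [kron(X, X, X), kron(Z, Z, I2), kron(I2, Z, Z)]

# Full stabilizer group: products over all subsets of generators (signs included automatically)
group = [reduce(np.matmul, [g for g, e in zip(gens, exps) if e], np.eye(2**N))
         for exps in product([0, 1], repeat=N)]
nontrivial = [M for M in group if not np.allclose(M, np.eye(2**N))]   # 2^N - 1 elements

Omega = np.mean([(np.eye(2**N) + M)/2 for M in nontrivial], axis=0)   # Eq. S89, equal weights

proj_psi = np.outer(ghz, ghz)
a = (2**(N-1) - 1)/(2**N - 1)
print(np.allclose(Omega, proj_psi + a*(np.eye(2**N) - proj_psi)))     # True
print(a, 2**(N-1)/(2**N - 1))   # q = a, and 1 - q = 2^(N-1)/(2^N - 1) as in Eq. S84
```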
Note that for growing $N$, the quantity $n_{opt}^{stab}$ given in Eq. S85 is
bounded above by $2\epsilon^{-1}\ln\delta^{-1}$, which does not depend on $N$,
and implies that this stabilizer strategy requires at most a factor of two
more measurements than the optimal non-local verification strategy (just
projecting onto ${|{\psi}\rangle}$).
One could also consider a reduced strategy that involves measuring fewer
stabilizers. However, given a state of $N$ qubits and a set of $k$
stabilizers, the dimension of the subspace stabilized by this set is at least
$2^{N-k}$. Thus for any choice of $k<N$, there must always exist at least one
state ${|{\psi^{\bot}}\rangle}$ orthogonal to ${|{\psi}\rangle}$ that is
stabilized by every stabilizer in the set. Then, the adversary can construct a
$\sigma$ that always accepts, implying that the verifier has no discriminatory
power between ${|{\psi}\rangle}$ and $\sigma$ and thus the strategy fails.
Consider instead constructing a strategy from the $N$ stabilizer generators of
${|{\psi}\rangle}$, with corresponding projectors $\\{P^{s.g.}_{j}\\}$. Then,
$\Omega=\sum_{j}\mu_{j}P^{s.g.}_{j}$. The set of projectors
$\\{P^{s.g.}_{j}\\}$ commute and so share a common eigenbasis, denoted
$\\{{|{\lambda_{j}}\rangle}\\}$. To optimise this strategy over the weights
$\mu_{j}$, we first need the following lemma:
###### Lemma 6.
Write the unique sets of $N-1$ independent stabilizer generators of
${|{\psi}\rangle}$, $S_{k}=\\{M_{j},j=1\ldots N\\}\setminus M_{k}$, $k=1\ldots
N$. Then each $S_{k}$ corresponds to a state ${|{\lambda_{k}}\rangle}$,
$\braket{\lambda_{k}}{\psi}=0$, such that
$\braket{\lambda_{k}}{\lambda_{l}}=\delta_{kl}$.
###### Proof.
Each set $S_{k}$ stabilizes a space of dimension two, and so a
${|{\lambda_{k}}\rangle}$ where $\braket{\lambda_{k}}{\psi}=0$ exists.
Moreover, the stabilizer generators define an orthogonal eigenbasis of which
${|{\lambda_{k}}\rangle}$ is an element. To show that two sets $S_{k}$ and
$S_{l}$, $k\neq l$, define distinct eigenvectors, suppose for contradiction that
${|{\lambda_{k}}\rangle}\propto{|{\lambda_{l}}\rangle}$. However, then the set
$S=S_{k}\cup S_{l}$ would stabilize ${|{\lambda_{k}}\rangle}$, which is a
contradiction as $S$ is the full set of stabilizer generators and uniquely
stabilizes ${|{\psi}\rangle}$. ∎
We can now derive the optimal stabilizer generator strategy.
###### Theorem 7.
For a stabilizer state ${|{\psi}\rangle}$ and strategy
$\Omega=\sum_{j=1}^{N}\mu_{j}P^{s.g.}_{j}$, where the set $\\{P^{s.g.}_{j}\\}$
are the projectors onto the positive eigenspace of the stabilizer generators
of ${|{\psi}\rangle}$, the optimal choice of the weights $\mu_{j}$ is
$\mu_{j}=\frac{1}{N}$, for all $j$. The number of measurements needed to
verify to within infidelity $\epsilon$ and statistical power $1-\delta$ is then
$n^{s.g.}_{opt}\approx\frac{N}{\epsilon}\ln\frac{1}{\delta}.$ (S90)
###### Proof.
If we write a state orthogonal to ${|{\psi}\rangle}$ in the stabilizer
eigenbasis as
${|{\psi^{\bot}}\rangle}=\sum_{k}\alpha_{k}{|{\lambda_{k}}\rangle}$, we have
that
$\displaystyle{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}$
$\displaystyle=\sum_{k,m=1}^{2^{N}}\sum_{j=1}^{N}\bar{\alpha}_{k}\alpha_{m}\mu_{j}{\langle{\lambda_{k}}|}P^{s.g.}_{j}{|{\lambda_{m}}\rangle}$
$\displaystyle=\sum_{k,m=1}^{2^{N}}\sum_{j=1}^{N}\bar{\alpha}_{k}\alpha_{m}\mu_{j}\delta_{km}\epsilon_{jk}$
$\displaystyle=\sum_{k=1}^{2^{N}}\sum_{j=1}^{N}|\alpha_{k}|^{2}\mu_{j}\epsilon_{jk}\coloneqq\sum_{k=1}^{2^{N}}|\alpha_{k}|^{2}E_{k},$
(S91)
where $\epsilon_{jk}=1$ if
$P_{j}{|{\lambda_{k}}\rangle}={|{\lambda_{k}}\rangle}$ and zero otherwise.
The matrix $(\epsilon_{jk})$ is the _parity-check matrix_ for the set of stabilizers
$\\{P_{j}^{s.g.}\\}$. The quantity of interest with respect to verification is
$q=\min_{\Omega}\max_{{|{\psi^{\bot}}\rangle}}{\langle{\psi^{\bot}}|}\Omega{|{\psi^{\bot}}\rangle}=\min_{\mu_{j}}\max_{\alpha_{k}}\sum_{j,k}|\alpha_{k}|^{2}\mu_{j}\epsilon_{jk},$
(S92)
where the verifier’s minimisation is over the probabilities $\mu_{j}$ with
which a stabilizer generator indexed by $j$ is drawn in the protocol, and the
adversary maximises over the set of amplitudes $\alpha_{k}$ that describes the
state most likely to fool the verifier. Lemma 6 gives that, from the full set
of $2^{N}$ basis states ${|{\lambda_{k}}\rangle}$, there is a subset of $N$
basis states ${|{\lambda_{\tilde{k}}}\rangle}$, $\tilde{k}\in I$ for $|I|=N$,
stabilized by exactly $N-1$ generators; thus for basis states in this subset,
the quantity $\epsilon_{j\tilde{k}}=1-\delta_{j\tilde{k}}$. Then we can
compute the summation over $j$ as
$E_{\tilde{k}}=\sum_{j}\mu_{j}\epsilon_{j\tilde{k}}=\sum_{j}\mu_{j}(1-\delta_{j\tilde{k}})=1-\mu_{\tilde{k}},$
(S93)
using the fact that $\sum_{j}\mu_{j}=1$. Now, each element of $E_{k}$ for
$k\notin I$ is a summation of at most $N-2$ terms, $\mu_{j}$. Thus there
always exists another element $E_{\tilde{k}}$ for $\tilde{k}\in I$ that is at
least as large; and so it is never detrimental to the adversary to shift any
amplitude on the basis state labelled by $k$ to the basis state labelled by
$\tilde{k}$. Thus the optimal choice for the adversary’s state is
${|{\psi^{\bot}}\rangle}\in\text{span}\\{{|{\lambda_{\tilde{k}}}\rangle}:\tilde{k}\in
I\\}$. Given this choice by the adversary, we have that
$q=\min_{\mu_{\tilde{k}}}\max_{\alpha_{\tilde{k}}}\sum_{\tilde{k}}|\alpha_{\tilde{k}}|^{2}(1-\mu_{\tilde{k}})=\min_{\mu_{\tilde{k}}}\max_{\tilde{k}}(1-\mu_{\tilde{k}}).$
(S94)
It is straightforward to see that the optimal choice for the verifier is to
have $\mu_{\tilde{k}}=\frac{1}{N}$, for all $\tilde{k}$; then
$\Omega=\frac{1}{N}\sum{P_{j}^{s.g.}}$. Thus
$q=1-\frac{1}{N}\Rightarrow
n^{s.g.}_{opt}\approx\frac{N}{\epsilon}\ln\frac{1}{\delta}.$ (S95)
∎
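The same GHZ example illustrates Theorem 7: with only the $N$ stabilizer generators, equally weighted, the worst-case acceptance of an orthogonal state rises to $1-\frac{1}{N}$. A minimal sketch:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(*ops):
    return reduce(np.kron, ops)

N = 3
ghz = np.zeros(2**N); ghz[0] = ghz[-1] = 1/np.sqrt(2)
gens = [kron(X, X, X), kron(Z, Z, I2), kron(I2, Z, Z)]        # stabilizer generators

Omega = np.mean([(np.eye(2**N) + M)/2 for M in gens], axis=0)  # mu_j = 1/N
Pi = np.eye(2**N) - np.outer(ghz, ghz)
q = np.linalg.norm(Pi @ Omega @ Pi, 2)
print(np.isclose(q, 1 - 1/N))   # q = 1 - 1/N  =>  n ≈ (N/eps) ln(1/delta), Eq. S90
```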
Clearly, this scaling is much poorer in $N$ than in the case where the full
set of $2^{N}-1$ linearly independent stabilizers is allowed, indicating a
trade-off between the total number of required measurements and the accessible
number of measurement settings in this case.
## Appendix E Concentration inequalities and the relative entropy
In a binary hypothesis test between hypotheses $H_{0}$ and $H_{1}$, the Type I
and Type II errors are, respectively,
Type I $\displaystyle:\quad$ $\displaystyle\text{Pr}[\text{Guess
}H_{1}|H_{0}]$ (S96) Type II $\displaystyle:$
$\displaystyle\text{Pr}[\text{Guess }H_{0}|H_{1}].$ (S97)
In general, in designing an effective hypothesis test there will be a trade-
off between the relative magnitude of these types of error; they cannot be
arbitrarily decreased simultaneously. In an _asymmetric_ hypothesis test, the
goal is to minimise one of these errors given a fixed upper bound on the
other. In this appendix, we prove the following proposition in the context of
asymmetric hypothesis testing:
###### Proposition 8.
Any strategy $\Omega$ that: (a) accepts ${|{\psi}\rangle}$ with certainty,
$p\coloneqq\operatorname{tr}(\Omega{|{\psi}\rangle}\\!{\langle{\psi}|})=1$;
and (b) does not accept $\sigma$ with certainty ($\Delta_{\epsilon}>0$)
requires asymptotically fewer measurements, as a function of the infidelity
$\epsilon$, to distinguish these states to within a fixed Type II error than the best
protocol based on a strategy $\Omega^{\prime}$ where
$\operatorname{tr}(\Omega^{\prime}{|{\psi}\rangle}\\!{\langle{\psi}|})<1$.
We have inherited notation regarding verification strategies from Appendix A.
Here, hypothesis $H_{0}$ corresponds to accepting the target
${|{\psi}\rangle}$, and hypothesis $H_{1}$ corresponds to accepting the
alternative (that the output was far from ${|{\psi}\rangle}$). Proposition 8
states that, in a framework where we attempt to verify ${|{\psi}\rangle}$ by
repeatedly making two-outcome measurements picked from some set,
asymptotically it is always beneficial to use measurements that accept
${|{\psi}\rangle}$ with certainty. In this case, each measurement is a
Bernoulli trial with some acceptance probability. An example of a protocol
which would not satisfy this property would be estimating the probability of
violating a Bell inequality for a maximally entangled 2-qubit state
${|{\psi}\rangle}$.
In general, the optimum asymptotic rate at which the Type II error can be
minimised in an asymmetric hypothesis test is given by the _Chernoff-Stein
lemma_ :
###### Theorem 9 (Cover and Thomas (2006), Theorem 11.8.3).
Let $X_{1},X_{2}\ldots X_{n}$ be drawn i.i.d. from a probability mass function
$Q$. Then consider the hypothesis test between alternatives $H_{0}$: $Q=P_{0}$
and $H_{1}$: $Q=P_{1}$. Let $A_{n}$ be an acceptance region for the null
hypothesis $H_{0}$; i.e. it is a set consisting of all possible strings of
outcomes with which the conclusion $H_{0}$ is drawn. Denote Type I and Type II
errors after $n$ samples as $\alpha_{n}^{*}$ and $\beta_{n}^{*}$,
respectively. Then for some constraint parameter $0<\chi<\frac{1}{2}$, define
$\delta_{n}^{\chi}=\min_{\begin{subarray}{c}A_{n}\\\
\alpha_{n}^{*}<\chi\end{subarray}}\beta_{n}^{*}.$
Then asymptotically
$\lim_{n\rightarrow\infty}\frac{1}{n}\ln\delta_{n}^{\chi}=-D(P_{0}\ \|P_{1}),$
where $D(P_{0}\|P_{1})$ is the relative entropy between probability
distributions $P_{0}$ and $P_{1}$.
For clarity we drop the sub- and superscript
$\delta_{n}^{\chi}\rightarrow\delta$. The relative entropy typically takes a
pair of probability distributions as arguments, but given that each hypothesis
corresponds to a single Bernoulli distribution specified by one real parameter
(the acceptance probabilities $p$ and $p-\Delta_{\epsilon}$, respectively), we
will use the shorthand $D(p\|q)$ for real numbers
$p$ and $q$. In this case the relative entropy can be expanded as
$D(a\|b)=a\ln\frac{a}{b}+(1-a)\ln\frac{1-a}{1-b}.$ (S98)
Note that in the limit where $a\rightarrow 1$, using that $\lim_{a\rightarrow
1^{-}}(1-a)\ln(1-a)=0$, this expression becomes
$\lim_{a\rightarrow 1^{-}}D(a\|b)=\ln\frac{1}{b}.$ (S99)
After rearranging the expression for the optimal asymptotic Type II error
given by the Chernoff-Stein lemma, we can achieve a test with statistical
power $1-\delta$ by taking a number of measurements
$n>\frac{1}{D\left(p\|p-\Delta_{\epsilon}\right)}\ln\frac{1}{\delta}.$ (S100)
Moreover, this bound is tight in that it gives the correct asymptotic
relationship between $n$, $D$ and $\delta$; generically $\delta$ can be lower
bounded (Cover and Thomas (2006), p666) such that
$\frac{e^{-Dn}}{n+1}\leq\delta\leq e^{-Dn}.$ (S101)
Two important limiting cases of this expression have relevance here. Firstly,
if $p\gg\Delta_{\epsilon}$, then Taylor expanding $n$ for small
$\Delta_{\epsilon}$ gives that it is sufficient to take
$n\geq\frac{2p(1-p)}{\Delta_{\epsilon}^{2}}\ln\frac{1}{\delta}.$ (S102)
Secondly, if $p=1$, then it is sufficient to take
$n\geq\frac{-1}{\ln\left(1-\Delta_{\epsilon}\right)}\ln\frac{1}{\delta}\approx\frac{1}{\Delta_{\epsilon}}\ln\frac{1}{\delta},$
(S103)
which is in agreement with the scaling previously derived in Eq. S1. These are
the limiting cases of the scaling of $n$ with $\Delta_{\epsilon}$. In the
worst case, $n$ scales quadratically in $\Delta_{\epsilon}^{-1}$; however, for
any strategy where the state ${|{\psi}\rangle}$ to be tested is accepted with
certainty, only a total number of measurements linear in
$\Delta_{\epsilon}^{-1}$ are required. Thus asymptotically, a strategy where
$p=1$ is always favourable (i.e. gives a quadratic improvement in scaling with
$\Delta_{\epsilon}$) for any $\Delta_{\epsilon}>0$.
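A minimal sketch of these two limiting regimes, assuming for illustration that $\Delta_{\epsilon}$ is proportional to $\epsilon$ and that the $p<1$ strategy has $p=0.9$ (both hypothetical values): the number of measurements from Eq. S100 grows like $\epsilon^{-1}$ when $p=1$ and like $\epsilon^{-2}$ otherwise.

```python
import numpy as np

def rel_entropy(a, b):
    """Relative entropy D(a||b) between two Bernoulli distributions, Eq. S98."""
    terms = []
    if a > 0:
        terms.append(a*np.log(a/b))
    if a < 1:
        terms.append((1 - a)*np.log((1 - a)/(1 - b)))
    return sum(terms)

def n_required(p, delta_eps, delta):
    """Measurements needed for statistical power 1-delta, Eq. S100."""
    return np.log(1/delta)/rel_entropy(p, p - delta_eps)

delta = 0.05
for eps in [1e-2, 1e-3, 1e-4]:
    delta_eps = eps/2                                 # hypothetical Delta_eps proportional to eps
    n_certain = n_required(1.0, delta_eps, delta)     # p = 1: scales as 1/eps (Eq. S103)
    n_generic = n_required(0.9, delta_eps, delta)     # p < 1: scales as 1/eps^2 (Eq. S102)
    print(eps, n_certain, n_generic)
```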
## References
* Häffner _et al._ (2005) H. Häffner, W. Hänsel, C. F. Roos, J. Benhelm, D. Chek-al Kar, M. Chwalla, T. Körber, U. D. Rapol, M. Riebe, P. O. Schmidt, C. Becher, O. Gühne, W. Dür, and R. Blatt, Nature 438, 643 (2005).
* Carolan _et al._ (2014) J. Carolan, J. D. A. Meinecke, P. J. Shadbolt, N. J. Russell, N. Ismail, K. Wörhoff, T. Rudolph, M. G. Thompson, J. L. O’Brien, J. C. F. Matthews, and A. Laing, Nature Photonics 8, 621 (2014).
* Laing and O’Brien (2012) A. Laing and J. L. O’Brien, arXiv:1208.2868 (2012).
* Lvovsky and Raymer (2009) A. I. Lvovsky and M. G. Raymer, Reviews of Modern Physics 81, 299 (2009).
* Bellini _et al._ (2012) M. Bellini, A. S. Coelho, S. N. Filippov, V. I. Man’ko, and A. Zavatta, Physical Review A 85, 052129 (2012).
* Amosov _et al._ (2012) G. G. Amosov, Y. A. Korennoy, and V. I. Man’ko, Physical Review A 85, 052119 (2012).
* Flammia _et al._ (2012) S. T. Flammia, D. Gross, Y.-K. Liu, and J. Eisert, New Journal of Physics 14, 095022 (2012).
* Gross _et al._ (2010) D. Gross, Y.-K. Liu, S. T. Flammia, S. Becker, and J. Eisert, Physical Review Letters 105, 150401 (2010).
* Cramer _et al._ (2010) M. Cramer, M. B. Plenio, S. T. Flammia, R. Somma, D. Gross, S. D. Bartlett, O. Landon-Cardinal, D. Poulin, and Y.-K. Liu, Nature Communications 1, 149 (2010).
* Tóth and Gühne (2005a) G. Tóth and O. Gühne, Physical Review Letters 94, 060501 (2005a).
* Tóth and Gühne (2005b) G. Tóth and O. Gühne, Physical Review A 72, 022340 (2005b).
* Sugiyama _et al._ (2013) T. Sugiyama, P. S. Turner, and M. Murao, Physical Review Letters 111, 160406 (2013).
* Flammia and Liu (2011) S. T. Flammia and Y.-K. Liu, Physical Review Letters 106, 230501 (2011).
* Vaidman and Yoran (1999) L. Vaidman and N. Yoran, Physical Review A 59, 116 (1999).
* Calsamiglia and Lütkenhaus (2001) J. Calsamiglia and N. Lütkenhaus, Applied Physics B 72, 67 (2001).
* Ewert and van Loock (2014) F. Ewert and P. van Loock, Physical Review Letters 113, 140403 (2014).
* Mayers and Yao (2004) D. Mayers and A. Yao, Quantum Information & Computation 4, 273 (2004).
* McKague _et al._ (2012) M. McKague, T. H. Yang, and V. Scarani, Journal of Physics A: Mathematical and Theoretical 45, 455304 (2012).
* Yang and Navascués (2013) T. H. Yang and M. Navascués, Physical Review A 87, 050102 (2013).
* Sugiyama (2014) T. Sugiyama, _Finite Sample Analysis in Quantum Estimation_ (Springer, 2014).
* da Silva _et al._ (2011) M. P. da Silva, O. Landon-Cardinal, and D. Poulin, Physical Review Letters 107, 210404 (2011).
* Ferrie and Blume-Kohout (2016) C. Ferrie and R. Blume-Kohout, Physical Review Letters 116, 090407 (2016).
* Struchalin _et al._ (2016) G. I. Struchalin, I. A. Pogorelov, S. S. Straupe, K. S. Kravtsov, I. V. Radchenko, and S. P. Kulik, Physical Review A 93, 012103 (2016).
* Mahler _et al._ (2013) D. H. Mahler, L. A. Rozema, A. Darabi, C. Ferrie, R. Blume-Kohout, and A. M. Steinberg, Physical Review Letters 111, 183601 (2013).
* Haah _et al._ (2016) J. Haah, A. W. Harrow, Z. Ji, X. Wu, and N. Yu, in _Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing - STOC 2016_ (ACM Press, New York, New York, USA, 2016) pp. 913–925.
* Bădescu _et al._ (2017) C. Bădescu, R. O’Donnell, and J. Wright, arXiv:1708.06002 (2017).
* Dimić and Dakić (2017) A. Dimić and B. Dakić, arXiv:1705.06719 (2017).
* Gottesman (1997) D. Gottesman, _Stabilizer Codes and Quantum Error Correction_ , Ph.D. thesis, Caltech (1997).
* Gottesman (1996) D. Gottesman, Physical Review A 54, 1862 (1996).
* Nielsen and Chuang (2010) M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, 2010) p. 676.
* Hein (2005) M. Hein, _Entanglement in graph states_ , Ph.D. thesis, University of Innsbruck (2005).
* Cover and Thomas (2006) T. M. Cover and J. A. Thomas, _Elements of Information Theory_ (Wiley-Interscience, 2006).
# The Lévy Flight of Cities: Analyzing Social-Economical Trajectories with
Auto-Embedding
Linfang Tian, Kai Zhao, Jiaming Yin, Huy Vo, Weixiong Rao
School of Software Engineering, Tongji University, Caoan Road, 201804 Shanghai, China
Robinson College of Business, Georgia State University, Gilmer Street, Atlanta, USA
The City College of the City University of New York, and the Center for Urban Science and Progress, New York University, New York, USA
‡Linfang Tian and Kai Zhao contributed equally to the paper
###### Abstract
It has been found that human mobility exhibits random patterns following a
Lévy flight, in which movement consists of many short flights and a few long
flights whose lengths follow a power-law distribution. In this paper, we study
the social-economical development trajectories of urban cities. We observe
that the social-economical movement of cities also exhibits Lévy flight
characteristics. We collect social and economical data, such as population,
the number of students, GDP and personal income, from several cities. We then
map these urban data into social and economical factors through a
deep-learning embedding method, the Auto-Encoder. We find that the movements
of the social-economical factors of these cities approximately follow a
power-law distribution. We use Stochastic Multiplicative Processes (SMP) to
explain such movement: in the presence of a boundary constraint, the SMP leads
to a power-law distribution. This means that the social-economical
trajectories of cities also follow a Lévy flight pattern, in which a few years
show large changes in social-economical development while many years show
little change.
###### keywords:
Lévy Flight, Movement Trajectories, Urban Development
## Introduction
Urban studies seek to understand and explain regularities observed in the
world’s major urban systems. Cities are complex systems [1, 2, 3, 4, 5, 6, 7,
8] with many people living in and complex relationships among various factors.
Previous works have studied the mobility of people [9] and show that the
movement of human society is statistically random. Many studies address the
rank-order of cities [10, 11, 12, 13, 3, 14], the Pareto law [11, 15] and
Zipf’s law [16, 4, 17]. In this paper, we study the development trajectories
of cities to contribute to the sustainability and innovation of cities [2, 18,
19, 20, 18]. This paper will aid policymakers, city planners and government
officials in understanding the nature of urban development and designing
sustainable smart cities using computational social science models.
In this paper, we follow the urban dynamics model above and assume that cities
move in two directions [21]: one is economic growth, the other is the
development of social civilization. We study the datasets of four Asian
cities: Hong Kong and Shanghai in China, Singapore, and Tokyo in Japan (see
Table 1). All of them contain economic factors such as GDP, GDP of the
secondary industry and GDP of the tertiary industry, and social factors such
as population, education and publication, covering the most commonly used data
types for measuring urban development [10]. Firstly, the research object is
urban movement [22], namely the year-to-year change in urban economic and
social development, so we compute the changes of the social and economic
factors. Then, we apply a popular artificial neural network embedding
technique, the Auto-Encoder [23], to all economic factors to extract a
low-dimensional latent vector; min-max normalization is performed on the data
first, and the same is done for the social factors. Next, we determine the
step-size distribution of the city movement: candidate distribution models are
fitted and the optimal one is chosen according to the Akaike Information
Criterion [24]. The results show that the movement of Hong Kong is better
described by a truncated power-law distribution [25], Shanghai and Tokyo move
according to power laws, and Singapore moves in the pattern of an exponential
distribution. To the best of our knowledge, this article is the first work
that examines the movement of urban social-economical developments using
computational social science models and explains the generative model behind it.
The contribution of this paper is as follows. First, we extract the
distribution of the increments of a city’s social and economic factors.
Second, we demonstrate that log-normal processes [26, 27, 28], in the presence
of a boundary constraint, approximately yield a generative process with a
power-law distribution. This result is a step towards explaining the emergence
of Lévy flight patterns in city development. Third, we use stochastic
multiplicative processes [29, 30, 31, 32, 33, 34] to explain the urban
development trajectory, regarding the city as a growing organism [35].
## Results
Power-law fit for city trajectory flight. First, we draw the social-economical
trajectories (see Figure 1) of the cities and obtain the histograms (see
Figure 2) of walk lengths. We fit the walk-length distributions (see Figure 3)
of Shanghai, Hong Kong, Singapore and Tokyo with truncated power-law [9],
log-normal, power-law [30, 26, 36, 25, 37] and exponential [38] distributions
(see Table 2), and then use Akaike weights (see Table 3) to choose the
best-fitting distribution. We find that the urban development step size of
Hong Kong fits a truncated power-law distribution with $\alpha=1.3547$, and
the walk distributions of the other three cities fit power-law distributions;
the exponent $\alpha$ is 2.2829 for Shanghai, 2.6075 for Singapore and 2.6016
for Tokyo. Assuming that urban development satisfies a stochastic
multiplicative process, we draw the walk-length change rate (see Figure 4) and
the logarithm of the change rate (see Figure 5). The exponents $\alpha$
deduced from the SMP are similar to the fitted values of $\alpha$ (see Table 4).
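As an illustration of the fitting and model-selection step (using synthetic walk lengths rather than the cities’ data, and comparing only a power-law and an exponential candidate for brevity), a minimal sketch is:

```python
import numpy as np

rng = np.random.default_rng(0)
steps = (1 - rng.random(500))**(-1/1.6)        # synthetic heavy-tailed walk lengths (exponent ~2.6)

x_min = 1.0
x = steps[steps >= x_min]
n = len(x)

# Power-law MLE for alpha (Eq. 5 in the Methods) and its log-likelihood
alpha = 1 + n/np.sum(np.log(x/x_min))
ll_pl = n*np.log((alpha - 1)/x_min) - alpha*np.sum(np.log(x/x_min))

# Exponential alternative above x_min
lam = 1/np.mean(x - x_min)
ll_exp = n*np.log(lam) - lam*np.sum(x - x_min)

# Akaike weights (one free parameter per model)
aic = np.array([2 - 2*ll_pl, 2 - 2*ll_exp])
w = np.exp(-(aic - aic.min())/2)
w /= w.sum()
print(alpha, dict(zip(['power_law', 'exponential'], w.round(3))))
```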
Mechanisms behind the Power law pattern. A city should be considered an
ever-changing organism instead of a static one. At each step $t$, the organism
may grow or shrink [2] according to a random variable $R_{t}$, so that the
change of the city satisfies $l_{t}=r_{t}l_{t-1}$. This is a stochastic
multiplicative process [31]: $l_{t}=r_{t}r_{t-1}...r_{1}l_{0}$. The idea is
that the random growth at each step is expressed as a percentage of the
current increment and is independent of the increment’s actual size. Then we find
$\displaystyle\ln l_{t}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{t}\ln
r_{i}+\ln l_{0}$ (1)
Assuming the random variables $\ln R_{i}$ are independent and identically
distributed with mean $v$ and variance $D$, the Central Limit Theorem says
that $\ln L_{t}=\sum_{i=1}^{t}\ln R_{i}+\ln l_{0}$ converges to a normal
distribution with mean $vt+\ln l_{0}$ and variance $Dt$ for sufficiently large
$t$, which means $L_{t}$ converges to a log-normal distribution. In this
paper, we use the Kolmogorov-Smirnov test to verify that the datasets of
$\ln r$ for all four cities can reasonably be assumed to follow normal
distributions (see Figure 5 and Table 5). Note here that $l_{t}$ is the length
of the flight between time $t-1$ and time $t$. The probability density
function of the flight length is then log-normal:
$\displaystyle f(l_{t})$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2\pi
Dt}}\frac{1}{l_{t}}exp\left[-\frac{1}{2Dt}(\ln l_{t}-vt)^{2}\right]$ (2)
$\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2\pi
Dt}}l_{t}^{-1+\frac{v}{D}}exp\left[-\frac{1}{2Dt}(\ln^{2}l_{t}+v^{2}t^{2})\right]$
Explicitly,
$\displaystyle f(l_{t})$ $\displaystyle=$ $\displaystyle\frac{1}{l_{t}\sqrt{2\pi Dt}}\exp\left[-\frac{(\ln l_{t}-vt)^{2}}{2Dt}\right]$
$\displaystyle=$ $\displaystyle\frac{1}{l_{t}\sqrt{2\pi Dt}}\exp\left[-\frac{(\ln l_{t})^{2}-2vt\ln l_{t}+v^{2}t^{2}}{2Dt}\right]$
$\displaystyle=$ $\displaystyle\frac{1}{l_{t}\sqrt{2\pi Dt}}\exp\left[-\frac{(\ln l_{t})^{2}+v^{2}t^{2}}{2Dt}\right]\exp\left(\frac{v\ln l_{t}}{D}\right)$
$\displaystyle=$ $\displaystyle\frac{1}{l_{t}\sqrt{2\pi Dt}}\,l_{t}^{\frac{v}{D}}\exp\left[-\frac{(\ln l_{t})^{2}+v^{2}t^{2}}{2Dt}\right]$
$\displaystyle=$ $\displaystyle\frac{1}{\sqrt{2\pi Dt}}\,l_{t}^{-1+\frac{v}{D}}\exp\left[-\frac{(\ln l_{t})^{2}+v^{2}t^{2}}{2Dt}\right].$
This form shows that the log-normal distribution can be mistaken for an
apparent power law. If the variance $Dt$ is large, then
$\frac{(\ln l_{t})^{2}}{2Dt}\to 0$ over the observed range of $l_{t}$, and
$f(l_{t})\to\frac{1}{\sqrt{2\pi Dt}}\exp\left[-\frac{v^{2}t}{2D}\right]l_{t}^{-1+\frac{v}{D}}=Cl_{t}^{-\alpha},\quad\alpha=1-\frac{v}{D},$
so the probability density function of the log-normal distribution becomes
indistinguishable from that of a power-law distribution
$f(l_{t})=Cl_{t}^{-\alpha}$, where $1<\alpha\leq 3$.
Here $v$ and $D$ are the mean and variance of $\ln R$. If there additionally
exists a lower bound $l_{min}$, such that
$l_{t}=\max(l_{min},r_{t}l_{t-1}),$
then the random variable $L_{t}$ converges to a power-law distribution; the
log-normal is easily pushed to a power-law model.
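A minimal simulation sketch of this mechanism (with illustrative values of $v$, $D$ and $l_{min}$): iterating the bounded multiplicative update produces a heavy, approximately power-law tail, whose exponent can be estimated with the maximum-likelihood estimator used later in the Methods (Eq. 5).

```python
import numpy as np

rng = np.random.default_rng(1)
v, D, l_min = -0.05, 0.2, 1e-3       # hypothetical drift, variance and lower bound
n_walkers, T = 5000, 400

# Stochastic multiplicative process with a lower bound:
#   l_t = max(l_min, r_t * l_{t-1}),  with  ln r ~ Normal(v, D)
l = np.ones(n_walkers)
for _ in range(T):
    l = np.maximum(l_min, l*np.exp(rng.normal(v, np.sqrt(D), n_walkers)))

# Tail-exponent estimate via the MLE of Eq. 5 on the upper part of the sample
x_min = np.quantile(l, 0.75)
tail = l[l >= x_min]
alpha = 1 + len(tail)/np.sum(np.log(tail/x_min))
print(alpha)    # the bounded SMP develops an approximately power-law tail
```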
## Discussion
Previous research suggests that power laws widely exist in city population,
financial markets and city size [11, 14]. However, the rank-size distribution
between cities [13] is mostly static: the dynamic urban power-law distribution
focuses on the change of specific indicators over time, while the systematic
change [1] among urban factors has not been studied. By using a recently
popular neural network embedding technique to reduce the dimension of urban
factor data-sets into two dimensions, economy and society, we explore the city
development trajectories of Hong Kong, Shanghai, Singapore and Tokyo. The
urban development of Hong Kong tends to follow a truncated power-law
distribution. This is probably because the rapid development of China’s reform
and opening up has weakened Hong Kong’s status as an important city in
Southeast Asia, and Hong Kong is no longer the uniquely preferred city in the
allocation of various resources in China.
## Methods
Data Sets. We collected the official data-sets of Hong Kong (see Table 6),
Shanghai (see Table 7), Singapore (see Table 8) and Tokyo (see Table 9) in our
work, and the data-sets of the four cities were collated and matched. We use
the embedding technique to reduce those data-sets to two dimensions: economy
and society. Then, we draw the urban development trajectory (see Figure 1)
with economy as the $x$-coordinate and society as the $y$-coordinate, and we
extract the flight lengths from this trajectory.
Obtaining the flight length of each factor. To the best of our knowledge, this
article is the first work that examines the flight-length distribution of
urban development. We first obtain the raw data of each year’s flight length for
each factor. The ranges of the raw values vary widely: the GDP factor ranges from
dozens to thousands, while the proportion of industry lies between 0 and 4. To
avoid the flight length being governed by large-valued factors, we use min-max
normalization to scale each factor to the range [0, 1], as sketched below.
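A minimal sketch of this scaling and of the per-factor flight length (the yearly values below are hypothetical placeholders; the real series come from the sources in Tables 6-9):

```python
import pandas as pd

def min_max_scale(series: pd.Series) -> pd.Series:
    """Scale one urban factor to the range [0, 1]."""
    return (series - series.min()) / (series.max() - series.min())

def factor_flight_lengths(series: pd.Series) -> pd.Series:
    """Year-to-year flight length of a single, min-max scaled factor."""
    return min_max_scale(series).diff().abs().dropna()

# Hypothetical yearly GDP values used only to illustrate the computation.
gdp = pd.Series([120.0, 150.0, 210.0, 330.0, 560.0],
                index=[2015, 2016, 2017, 2018, 2019], name="GDP")
print(factor_flight_lengths(gdp))
```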
Obtaining the flight length of urban development by embedding. The Manifold
Hypothesis states that real-world high-dimensional data lie on low-dimensional
manifolds embedded within the high-dimensional space. In this paper, we obtain
the embedding layer by training an Auto-encoder (AE), a type of artificial
neural network. First, we classify the data into two classes, for example
regarding GDP, per capita GDP and primary GDP as economic variables, and
population and general higher education as social variables. Second, we use
min-max scaling to ensure that variables measured at different scales contribute
equally to the model fitting. Third, in the AE, the input feature (the dataset of
economic or social variables) is transformed into a latent space with the encoder
and then reconstructed from the latent space with the decoder; the encoder is
used as a dimensionality reducer. To train the AE, the Adam algorithm is applied
as the optimizer and mean square error (MSE) as the loss function. We use
two-layer fully connected networks as the encoder and decoder, and the loss function is
$\displaystyle\mathcal{L}(\textbf{x},\textbf{x}^{\prime})$ $\displaystyle=$
$\displaystyle\left\|\textbf{x}-\textbf{x}^{\prime}\right\|^{2}$ (3)
$\displaystyle=$ $\displaystyle\left\|\textbf{x}-f(h(\textbf{x}))\right\|^{2}$
where $\textbf{x}\in\mathbb{R}^{n}$ is the input feature of one year and $n$
is the number of variables. The data from $m$ years construct $m$ training
samples. $\textbf{x}^{\prime}$ is the output of the decoder. The computation
of the encoder and decoder is defined as
$\displaystyle h(\textbf{x})$ $\displaystyle=$
$\displaystyle\sigma(W_{2}\sigma(W_{1}\textbf{x}+b_{1})+b_{2}),$
$\displaystyle f(\textbf{y})$ $\displaystyle=$
$\displaystyle\sigma^{\prime}(W_{2}^{\prime}\sigma^{\prime}(W_{1}^{\prime}\textbf{y}+b_{1}^{\prime})+b_{2}^{\prime})$
(4)
where
$W_{1},W_{2},W_{1}^{\prime},W_{2}^{\prime},b_{1},b_{2},b_{1}^{\prime},b_{2}^{\prime}$
are learnable parameters of the network, and $\sigma,\sigma^{\prime}$ are the
activation functions. We train the AE by minimizing the MSE between the input
feature and the output of the decoder; the output of the encoder
$h(\mathbf{x})$ is the embedding of the original input.
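The following PyTorch sketch illustrates the auto-encoder of Eqs. (3)-(4); the hidden width, the latent size of one per class (economy or society), the activation choice and the training schedule are illustrative assumptions rather than the exact settings used in our experiments.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Two-layer fully connected encoder h(x) and decoder f(y), cf. Eqs. (3)-(4)."""
    def __init__(self, n_vars: int, hidden: int = 8, latent: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_vars, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_vars))   # output activation omitted

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_autoencoder(x: torch.Tensor, epochs: int = 2000, lr: float = 1e-3):
    """x is an (m, n) tensor of m yearly samples of n min-max scaled variables."""
    model = AutoEncoder(n_vars=x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)     # reconstruction MSE, Eq. (3)
        loss.backward()
        opt.step()
    return model.encoder(x).detach()    # h(x): the embedding of the original input
```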
Identifying the Scale Range. To fit a heavy-tailed distribution such as a
power-law distribution, we need to determine which portion of the data to fit,
i.e., the lower bound $x_{min}$, together with the scaling parameter $\alpha$. We
use the method of [39] to determine $x_{min}$ and $\alpha$: we create a power-law
fit starting from each value in the dataset and select, as the optimal $x_{min}$,
the one that results in the minimal Kolmogorov-Smirnov distance between the data
and the fit. The scaling parameter $\alpha$ of the power-law distribution is then
given by
$\displaystyle\alpha$ $\displaystyle=$ $\displaystyle
1+n\left(\sum_{i=1}^{n}\ln\frac{x_{i}}{x_{min}}\right)^{-1}$ (5)
where the $x_{i}$ are the observed values with $x_{i}>x_{min}$ and $n$ is their
number.
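A minimal numpy sketch of this scan over candidate $x_{min}$ values (the dedicated powerlaw Python package implements the same procedure more carefully):

```python
import numpy as np

def fit_power_law(data):
    """Scan candidate x_min values, fit alpha by the MLE of Eq. (5), and keep the
    fit with the smallest Kolmogorov-Smirnov distance, following [39]."""
    data = np.sort(np.asarray(data, dtype=float))
    best = (np.inf, None, None)                     # (KS distance, x_min, alpha)
    for x_min in np.unique(data[:-1]):
        tail = data[data >= x_min]
        n = tail.size
        if n < 10:                                  # require a minimal tail size
            continue
        alpha = 1.0 + n / np.log(tail / x_min).sum()
        cdf_emp = np.arange(1, n + 1) / n           # empirical CDF of the tail
        cdf_fit = 1.0 - (tail / x_min) ** (1.0 - alpha)
        ks = np.abs(cdf_emp - cdf_fit).max()
        if ks < best[0]:
            best = (ks, x_min, alpha)
    return best                                     # (KS distance, x_min, alpha)
```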
Exponential transformation. The probability density function of the exponential
distribution can be transformed into that of a power-law distribution. Let $X$ be
an exponential random variable whose probability density function is given by
$P(X=x)=\lambda e^{-\lambda x},\lambda>0,x>0$; then the cumulative distribution
function is given by
$\displaystyle P(X\leq x)$ $\displaystyle=$ $\displaystyle\int_{0}^{x}\lambda
e^{-\lambda t}dt$ (6) $\displaystyle=$ $\displaystyle 1-e^{-\lambda
x},\lambda>0,x>0$
Let $Y$ be the random variable obtained through the transformation
$Y=ke^{X}$, $k>0$. The cumulative distribution function of $Y$ can be expressed in
terms of that of $X$ as
$\displaystyle P(Y\leq y)$ $\displaystyle=$ $\displaystyle P(ke^{X}\leq y)=P\left[X\leq\ln\left(\frac{y}{k}\right)\right]=1-e^{-\lambda\ln(\frac{y}{k})}=1-\left(\frac{y}{k}\right)^{-\lambda}=1-k^{\lambda}y^{-\lambda},$
and differentiating with respect to $y$ gives the density
$\displaystyle f(y)$ $\displaystyle=$ $\displaystyle\lambda k^{\lambda}y^{-(1+\lambda)},$ (7)
which corresponds to the probability density function of the power-law
distribution with shape factor $\alpha=1+\lambda$.
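A short numerical check of this correspondence, with $\lambda$ set (for illustration) to the value fitted for Singapore and an arbitrary scale $k$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, k = 1.6075, 1.0                      # lambda as fitted for Singapore; k is arbitrary
x = rng.exponential(scale=1.0 / lam, size=100_000)
y = k * np.exp(x)                         # Y = k * e^X

# MLE for the Pareto tail exponent of Y; should be close to alpha = 1 + lambda
alpha_hat = 1.0 + y.size / np.log(y / k).sum()
print(f"alpha_hat = {alpha_hat:.3f}, expected 1 + lambda = {1 + lam:.4f}")
```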
Akaike weights. We use Akaike weights to choose the best-fitted distribution.
An Akaike weight is a normalized model-selection criterion whose value lies
between 0 and 1; a larger value indicates a better-fitting distribution.
Akaike’s information criterion (AIC) is used in combination with maximum
likelihood estimation (MLE): MLE finds the estimator $\hat{\theta}$ that
maximizes the likelihood function $L(\hat{\theta}|data)$ of one distribution, and
AIC is then used to identify the best-fitting one among all fitted distributions,
$\displaystyle AIC$ $\displaystyle=$
$\displaystyle-2log\left(L(\hat{\theta}|data)\right)+2K.$ (8)
Here $K$ is the number of estimable parameters in the approximating model.
After determining the AIC value of each fitted distribution, we normalize
these values as follows. First, we compute the difference of each AIC value from
the minimum, denoted $\Delta_{i}$,
$\displaystyle\Delta_{i}$ $\displaystyle=$ $\displaystyle AIC_{i}-AIC_{min}.$
(9)
Then the Akaike weights $W_{i}$ are calculated as follows,
$\displaystyle W_{i}$ $\displaystyle=$
$\displaystyle\frac{exp(-\Delta_{i}/2)}{\sum_{r=1}^{R}exp(-\Delta_{r}/2)}.$
(10)
The resulting statistics are shown in Table 3.
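A minimal sketch of the computation of Eqs. (8)-(10) from the maximized log-likelihoods of the candidate distributions (the numbers below are hypothetical placeholders, not the values behind Table 3):

```python
import numpy as np

def akaike_weights(log_likelihoods, num_params):
    """AIC (Eq. 8), AIC differences (Eq. 9) and Akaike weights (Eq. 10)."""
    aic = np.array([-2.0 * ll + 2.0 * k for ll, k in zip(log_likelihoods, num_params)])
    delta = aic - aic.min()
    w = np.exp(-delta / 2.0)
    return w / w.sum()

# Hypothetical maximized log-likelihoods for the Exponential, Power-law,
# Lognormal and Truncated power-law fits, with 1, 1, 2 and 2 free parameters.
print(akaike_weights([-120.3, -118.9, -119.5, -118.2], [1, 1, 2, 2]))
```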
A List of abbreviations
HK: Hong Kong
SHA: Shanghai
SG: Singapore
TYO: Tokyo
pdf: probability density function
SMP: Stochastic Multiplicative Processes
AE: Auto-encoder
MSE: mean square error
AIC: Akaike’s information criterion
MLE: maximum likelihood estimation
## Availability of data and material
The data can be collected from official websites, which are listed in Table 6,
Table 7, Table 8 and Table 9.
## Funding
This work is partially supported by National Natural Science Foundation of
China (Grant No. 61972286).
## Competing interests
The authors declare that they have no competing interests.
## Author’s contributions
Weixiong Rao and Kai Zhao conceived the experiments; Linfang Tian and Jiamin
Yin conducted the experiments; Linfang Tian analysed the results. All authors
reviewed the manuscript.
## Acknowledgements
Not applicable.
## References
* [1] Chronéer, D., Ståhlbröst, A., Habibipour, A.: Urban living labs: Towards an integrated understanding of their key components. Technology Innovation Management Review 9(3), 50–62 (2019)
* [2] Slach, O., Bosák, V., Krtička, L., Nováček, A., Rumpel, P.: Urban shrinkage and sustainability: Assessing the nexus between population density, urban structures and urban sustainability. Sustainability 11(15), 4142 (2019)
* [3] Frick, S.A., Rodríguez-Pose, A.: Big or small cities? on city size and economic growth. Growth and Change 49(1), 4–32 (2018)
* [4] Jiao, L., Xu, Z., Xu, G., Zhao, R., Liu, J., Wang, W.: Assessment of urban land use efficiency in china: A perspective of scaling law. Habitat International 99, 102172 (2020)
* [5] Alves, L.G., Ribeiro, H.V., Rodrigues, F.A.: Crime prediction through urban metrics and statistical learning. Physica A: Statistical Mechanics and its Applications 505, 435–443 (2018)
* [6] Ozdemir, O., et al.: Distributional effects of human capital in advanced economies: Dynamics of economic globalization. Business and Economics Research Journal 11(3), 591–607 (2020)
* [7] Topirceanu, A., Udrescu, M., Marculescu, R.: Weighted betweenness preferential attachment: A new mechanism explaining social network formation and evolution. Scientific reports 8(1), 1–14 (2018)
* [8] Beare, B.K., Toda, A.A.: On the emergence of a power law in the distribution of covid-19 cases. Physica D: Nonlinear Phenomena 412, 132649 (2020)
* [9] Corral, Á., Udina, F., Arcaute, E.: Truncated lognormal distributions and scaling in the size of naturally defined population clusters. Physical Review E 101(4), 042312 (2020)
* [10] González-Val, R.: The spanish spatial city size distribution. Environment and Planning B: Urban Analytics and City Science, 2399808320941860 (2020)
* [11] Luckstead, J., Devadoss, S.: Pareto tails and lognormal body of us cities size distribution. Physica A: Statistical Mechanics and its Applications 465, 573–578 (2017)
* [12] Luckstead, J., Devadoss, S., Danforth, D.: The size distributions of all indian cities. Physica A: Statistical Mechanics and its Applications 474, 237–249 (2017)
* [13] Wu, H., Levinson, D., Sarkar, S.: How transit scaling shapes cities. Nature Sustainability 2(12), 1142–1148 (2019)
* [14] Bee, M., Riccaboni, M., Schiavo, S.: Distribution of city size: Gibrat, pareto, zipf. The Mathematics of Urban Morphology, 77 (2019)
* [15] Toda, A.A.: A note on the size distribution of consumption: More double pareto than lognormal. Macroeconomic Dynamics 21(6), 1508–1518 (2017)
* [16] Wei, J., Zhang, J., Cai, B., Wang, K., Liang, S., Geng, Y.: Characteristics of carbon dioxide emissions in response to local development: Empirical explanation of zipf’s law in chinese cities. Science of The Total Environment 757, 143912 (2021)
* [17] Sun, X., Yuan, O., Xu, Z., Yin, Y., Liu, Q., Wu, L.: Did zipf’s law hold for chinese cities and why? evidence from multi-source data. Land Use Policy 106, 105460 (2021)
* [18] Bibri, S.E.: Big data science and analytics for smart sustainable urbanism. Unprecedented Paradigmatic Shifts and Practical Advancements; Springer: Berlin, Germany (2019)
* [19] Visvizi, A., Lytras, M.D., Damiani, E., Mathkour, H.: Policy making for smart cities: Innovation and social inclusive economic growth for sustainability. Journal of Science and Technology Policy Management (2018)
* [20] Durán-Sánchez, A., del Río-Rama, M.C., Sereno-Ramírez, A., Bredis, K., et al.: Sustainability and quality of life in smart cities: Analysis of scientific production. Innovation, Technology, and Knowledge Management, 159–181 (2017)
* [21] Bharath, H., Chandan, M., Vinay, S., Ramachandra, T.: Modelling urban dynamics in rapidly urbanising indian cities. The Egyptian Journal of Remote Sensing and Space Science 21(3), 201–210 (2018)
* [22] Jiang, R., Song, X., Fan, Z., Xia, T., Chen, Q., Miyazawa, S., Shibasaki, R.: Deepurbanmomentum: An online deep-learning system for short-term urban mobility prediction. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32 (2018)
* [23] Bengio, Y., Lamblin, P., Popovici, D., Larochelle, H.: Greedy layer-wise training of deep network. Advances in Neural Information Processing Systems 19, 153–160 (2007)
* [24] Sakamoto, Y., Ishiguro, M., Kitagawa, G.: Akaike information criterion statistics. Dordrecht, The Netherlands: D. Reidel 81(10.5555), 26853 (1986)
* [25] Chen, J., Shiyomi, M.: A power law model for analyzing spatial patterns of vegetation abundance in terms of cover, biomass, density, and occurrence: derivation of a common rule. Journal of plant research 132(4), 481–497 (2019)
* [26] Feng, M., Deng, L.-J., Chen, F., Perc, M., Kurths, J.: The accumulative law and its probability model: an extension of the pareto distribution and the log-normal distribution. Proceedings of the Royal Society A 476(2237), 20200019 (2020)
* [27] Montebruno, P., Bennett, R.J., Van Lieshout, C., Smith, H.: A tale of two tails: Do power law and lognormal models fit firm-size distributions in the mid-victorian era? Physica A: Statistical Mechanics and its Applications 523, 858–875 (2019)
* [28] Newberry, M.G., Savage, V.M.: Self-similar processes follow a power law in discrete logarithmic space. Physical review letters 122(15), 158303 (2019)
* [29] Sornette, D., Cont, R.: Convergent multiplicative processes repelled from zero: power laws and truncated power laws. Journal de Physique I 7(3), 431–444 (1997)
* [30] Mitzenmacher, M.: A brief history of generative models for power law and lognormal distributions. Internet mathematics 1(2), 226–251 (2004)
* [31] Guerrero, F.G., Garcia-Baños, A.: Multiplicative processes as a source of fat-tail distributions. Heliyon 6(7), 04266 (2020)
* [32] Hodgkinson, L., Mahoney, M.W.: Multiplicative noise and heavy tails in stochastic optimization (2020)
* [33] Zanette, D.H., Manrubia, S.: Fat tails and black swans: Exact results for multiplicative processes with resets. Chaos: An Interdisciplinary Journal of Nonlinear Science 30(3), 033104 (2020)
* [34] Fenner, T., Levene, M., Loizou, G.: A multiplicative process for generating the rank-order distribution of uk election results. Quality & Quantity 52(3), 1069–1079 (2018)
* [35] Shultz, A.J., Adams, B.J., Bell, K.C., Ludt, W.B., Pauly, G.B., Vendetti, J.E.: Natural history collections are critical resources for contemporary and future studies of urban evolution. Evolutionary applications 14(1), 233–247 (2021)
* [36] Sakiyama, T.: A power law network in an evolutionary hawk–dove game. Chaos, Solitons & Fractals 146, 110932 (2021)
* [37] Pang, G., Taqqu, M.S.: Nonstationary self-similar gaussian processes as scaling limits of power-law shot noise processes and generalizations of fractional brownian motion. High Frequency 2(2), 95–112 (2019)
* [38] Miyaguchi, T., Uneyama, T., Akimoto, T.: Brownian motion with alternately fluctuating diffusivity: stretched-exponential and power-law relaxation. Physical Review E 100(1), 012116 (2019)
* [39] Clauset, A., Shalizi, C.R., Newman, M.E.: Power-law distributions in empirical data. SIAM review 51(4), 661–703 (2009)
## Figures and Tables
(a) HK Auto-Encoder
(b) SHA Auto-Encoder
(c) SG Auto-encoder
(d) TYO Auto-Encoder
Figure 1: Auto-Encoder. The economic and social trajectory of urban
development. As can be seen from panel (b), before the opening up of the
Pudong District in 1990, the development step size of Shanghai was relatively
small; the 2010 Shanghai World Expo led to rapid economic development. As
can be seen from panel (a), after Hong Kong’s return to China in 1997, it
enjoyed social and economic stability and prosperity, and successfully
fended off the Asian financial crisis in 1998. After 2010, Hong Kong’s
economic and social development was sluggish.
(a) HK histogram
(b) SHA histogram
(c) SG histogram
(d) TYO histogram
Figure 2: Histogram of flight length. As can be seen from panels (a)-(d), all of
the histograms are positively skewed. The statistical analysis of the step size
of the random walk shows an obvious ”heavy tail” feature, which matches the
walking characteristics of frequent short-distance walks and occasional
long-distance jumps.
(a) HK Truncated power-law (b) SHA Power-law (c) SG Exponential (d) TYO Power-
law
Figure 3: Fitted distributions. Hong Kong has experienced rapid development
and sharp decline; its urban development step sizes are relatively rich, with
more long-distance jumps, and the fitted truncated power-law distribution has a
thicker tail, with $\alpha=1.3547$. The development step sizes of Shanghai show
more short-distance walks and fewer long-distance jumps; it is still in a stable
development period, and the $\alpha$ of the fitted power-law distribution is
2.2829. Singapore has been developing with a relatively constant multiplicative
factor; the fitted exponential distribution has $\lambda=1.6075$, and with the
exponential transformation it maps to a power-law distribution with
$\alpha=1+\lambda=2.6075$. Tokyo developed relatively earlier than Shanghai, and
its fitted power-law distribution has a larger exponent, $\alpha=2.6016$.
(a) HK r value
(b) SHA r value
(c) SG r value
(d) TYO r value
Figure 4: The relative change rates. The change rate is defined as the
relative change of length between two consecutive flights. From these figures
we observe that the change rates are uncorrelated from one time interval to the
next.
(a) HK $\ln r$ value
(b) SHA $\ln r$ value
(c) SG $\ln r$ value
(d) TYO $\ln r$ value
Figure 5: $\ln r$ of the four cities.
Table 1: The analysis of the Hong Kong, Shanghai, Singapore and Tokyo datasets.
Class | Description | HK | SHA | SG | TYO
---|---|---|---|---|---
Economy | GDP | 1981-2019 | 1978-2018 | 1960-2019 | 1968-2019
| Primary industry | 1981-2019 | 1978-2018 | NA | NA
| Secondary industry | 1981-2019 | 1978-2018 | 1960-2019 | NA
| Tertiary industry | 1981-2019 | 1978-2018 | 1960-2019 | NA
| Share of Primary industry | 1981-2019 | 1978-2018 | NA | NA
| Share of Secondary industry | 1981-2019 | 1978-2018 | 1960-2019 | NA
| Share of Tertiary industry | 1981-2019 | 1978-2018 | 1960-2019 | NA
| Per capita GDP | 1981-2019 | 1978-2018 | NA | NA
| Government revenue | 1981-2019 | 1978-2018 | NA | NA
| Government expenditure | 1981-2019 | 1978-2018 | 1960-2019 | NA
| Personal income | NA | NA | NA | 1960-2019
| Original insurance income | NA | 1978-2018 | NA | NA
| Original insurance pays out | NA | 1978-2018 | NA | NA
| Total fixed asset investment | NA | 1978-2018 | 1960-2019 | NA
| Industry | NA | 1978-2018 | 1960-2019 | 1968-2019
| GDP per capita(Dollar) | NA | 1978-2018 | NA | NA
| Proportion of industry | NA | 1978-2018 | 1960-2019 | NA
| Gross agricultural production | NA | 1978-2018 | NA | NA
| Gross industrial production | NA | 1978-2018 | 1960-2019 | 1968-2019
Society | Population | 1981-2019 | 1978-2018 | 1960-2019 | 1968-2019
| Labor | NA | NA | NA | 1968-2019
| General Tertiary education | 1981-2019 | 1978-2018 | 1960-2019 | 1968-2019
| Ordinary secondary school | 1981-2019 | 1978-2018 | 1960-2019 | 1968-2019
| Ordinary primary school | 1981-2019 | 1978-2018 | 1960-2019 | 1968-2019
| Book print run | NA | 1978-2018 | NA | NA
| Journal print run | NA | 1978-2018 | NA | NA
| Newspaper print run | NA | 1978-2018 | NA | NA
Table 2: Fitted distributions. With $1<\alpha\leq 3$, the power-law distribution has infinite variance; it has an infinite mean for $1<\alpha\leq 2$ and a finite mean for $2<\alpha\leq 3$.
Distribution | Probability density function (pdf)
---|---
Exponential | $\lambda e^{-\lambda x}$
Power-law | $Cx^{-\alpha}$
Lognormal | $\frac{1}{x\sigma\sqrt{2\pi}}exp[-\frac{(\ln(x)-\mu)^{2}}{2\sigma^{2}}]$
Truncated power-law | $Cx^{-\alpha}e^{-\gamma x}$
Table 3: Akaike weights of the fitted distributions for the four city datasets.
Cities | Exponential | Power-law | Lognormal | Truncated Power-law
---|---|---|---|---
HK | 0.1979 | 0.2568 | 0.2226 | 0.3227
SHA | 0.0401 | 0.4603 | 0.2163 | 0.2814
SG | 0.6717 | 0.0014 | 0.1283 | 0.1985
TYO | 0.1594 | 0.4132 | 0.1979 | 0.2295
Table 4: The calculated and estimated parameters for consecutive flight lengths in the four city datasets, with $\ln R_{i}$ taken in the interval [0.48, 1.48] [29]. The mean is denoted $v^{\prime}$ and the variance $D^{\prime}$; $\hat{\alpha}=1-v^{\prime}/D^{\prime}$ is the calculated exponent and $\alpha$ is the fitted exponent. The walk lengths of Hong Kong are fitted with a truncated power law rather than a power-law distribution.
Cities | $l_{min}$ | $v^{\prime}$ | $D^{\prime}$ | $v^{\prime}/D^{\prime}$ | $\hat{\alpha}$ | $\alpha$
---|---|---|---|---|---|---
HK* | 0.0752 | -0.1904 | 0.1443 | -1.3200 | 2.3200 | 1.3547
SHA | 0.0216 | -0.1010 | 0.0751 | -1.3453 | 2.3453 | 2.2829
SG | 0.0262 | -0.1310 | 0.1145 | -1.1445 | 2.1445 | 2.6075
TYO | 0.0033 | -0.11716 | 0.0875 | -1.3390 | 2.3390 | 2.6016
Table 5: The $p$ values of the Kolmogorov-Smirnov test for the four city datasets.
Cities | $p$ value
---|---
HK | 0.9804
SHA | 0.9477
SG | 0.7399
TYO | 0.9933
Table 6: The URL of HK dataset. Description | URL
---|---
GDP | https://www.censtatd.gov.hk/sc/web_table.html?id=31#
Per capita GDP | https://www.censtatd.gov.hk/sc/web_table.html?id=31#
Primary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=35#
Secondary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=35#
Tertiary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=35#
Proportion of primary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=36#
Proportion of secondary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=36#
Proportion of tertiary industry | https://www.censtatd.gov.hk/sc/web_table.html?id=36#
Government revenue | https://www.censtatd.gov.hk/sc/web_table.html?id=193#
Government expenditure | https://www.censtatd.gov.hk/sc/web_table.html?id=194#
Population | https://www.censtatd.gov.hk/sc/web_table.html?id=1A#
Labour force | https://www.censtatd.gov.hk/sc/web_table.html?id=6#
Primary school | https://www.censtatd.gov.hk/sc/scode370.html#section6
Secondary school | https://www.censtatd.gov.hk/sc/scode370.html#section6
University | https://www.censtatd.gov.hk/sc/scode370.html#section6
Table 7: The URL of SHA dataset. Description | URL
---|---
GDP | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0401.htm
Primary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0401.htm
Secondary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0401.htm
Tertiary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0401.htm
Industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0401.htm
General public budget revenue | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0501.htm
General public budget expenditure | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0501.htm
Proportion of primary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0404.htm
Proportion of Secondary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0404.htm
Proportion of Tertiary industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0404.htm
Proportion of industry | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0404.htm
Total fixed asset investment | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0701.htm
General public budget revenue | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0501.htm
General public budget expenditure | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0501.htm
Gross agricultural production | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C1201.htm
Gross industrial production | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C1301.htm
Original insurance income | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C1801.htm
Original insurance pays out | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C1801.htm
Resident population at year-end | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0201.htm
Registered population at year-end | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C0201.htm
General higher education | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2103.htm
Ordinary secondary school | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2103.htm
Ordinary primary school | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2103.htm
Book print run | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2316.htm
Journal print run | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2317.htm
Newspaper print run | http://tjj.sh.gov.cn/tjnj/nj20.htm?d1=2020tjnj/C2318.htm
Table 8: The URL of SG dataset. Description | URL
---|---
GDP | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Goods Producing Industries | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Services Producing Industries | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Goods Proportion | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Services Proportion | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Government Consumption | https://tablebuilder.singstat.gov.sg/table/TS/M015241
Gross Fixed Capital Formation | https://tablebuilder.singstat.gov.sg/table/TS/M015051
Total Population | https://tablebuilder.singstat.gov.sg/table/TS/M810001#!
Government Expenditure On Edu | https://tablebuilder.singstat.gov.sg/table/TS/M850011
Primary Schools | https://tablebuilder.singstat.gov.sg/table/TS/M850011
Secondary Schools | https://tablebuilder.singstat.gov.sg/table/TS/M850011
Tertiary | https://tablebuilder.singstat.gov.sg/table/TS/M850011
Literacy Rate | https://tablebuilder.singstat.gov.sg/table/TS/M850001
Table 9: The URL of TYO dataset. Description | URL
---|---
Loans | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i015.htm
Manufactured goods | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i016.htm
GDP | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i016.htm
Prefectural income | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i016.htm
Population | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i002.htm
Labor | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i002.htm
Children and students | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i017.htm
Elementary schools | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i017.htm
Junior secondary | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i017.htm
Senior secondary | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i017.htm
Universities | https://www.toukei.metro.tokyo.lg.jp/tnenkan/2019/tn19q3i017.htm
# Input Compression with Positional Consistency for Efficient Training and
Inference of Transformer Neural Networks
Amrit Nagarajan
School of ECE
Purdue University
<EMAIL_ADDRESS>Anand Raghunathan
School of ECE
Purdue University
<EMAIL_ADDRESS>
###### Abstract
Transformers have rapidly increased in popularity in recent years, achieving
state-of-the-art performance in processing text, images, audio and video.
However, Transformers present large computational requirements for both
training and inference, and are prone to overfitting during training. To
address these challenges, we present Input Compression with Positional
Consistency (ICPC), a new data augmentation method that, unlike prior
augmentation techniques, simultaneously improves both generalization and
training efficiency. ICPC applies varying levels of compression to each
training sample in each epoch. This leads to smaller input sequences being
processed by the Transformer, and hence faster training, while also
alleviating overfitting by presenting each input with different compression
levels. We introduce a consistency-aware position selection method in ICPC
that enables accurate processing of compressed inputs without any changes to
the underlying Transformer architecture. We detail compression-based
augmentation methods for four different modalities – insignificant word
pruning for text, resolution modulation for images, spatio-temporal resolution
modulation for videos, and spectrogram size modulation for audio. ICPC also
enables efficient variable-effort inference, where samples are first inferred
at high compression levels, and progressively re-evaluated with lower
compression for more challenging inputs. On 9 diverse tasks spanning 4
different modalities, ICPC improves accuracy by up to 1%, while also
accelerating training and inference by up to 2.9$\times$ and 2.6$\times$,
respectively. Code is available at https://github.com/amrnag/ICPC.
## 1 Introduction
Transformers have recently emerged as the state-of-the-art neural network
architecture for machine learning tasks involving text, images, video and
audio. Remarkably, identical or near-identical Transformer backbones can be
used to create high-performance models across a wide range of input modalities
[4, 5, 16, 6]. This is achieved by simply adding a suitable encoding layer
which converts the input into a set of embedding vectors, and adds a
positional embedding to each vector, thereby outputting a sequence that can be
processed by the Transformer (Fig. 1). These advantages however come at a
cost: Transformers are orders-of-magnitude larger, and hence, more compute-
intensive during both training and inference compared to their predecessors
such as Convolutional Neural Networks (CNNs).
Figure 1: Transformers for different modalities. Similar backbone models are
used for all modalities, but different pre-processing steps are necessary to
generate embedding vectors from inputs of different modalities.
The number of embedding vectors generated from an input (Fig. 1) is the
primary factor that determines the computational effort expended by the
Transformer. In particular, the computational complexity of self-attention
scales quadratically, while all other operations performed by the Transformer
scale linearly with the number of embedding vectors. We find that the number of
embedding vectors generated from an input is directly proportional to the size
of the input for all modalities (Fig. 2). For instance, the number of
embedding vectors generated from a given text sequence is directly
proportional to the number of words in the sequence. On the other hand, the
resolution of an image determines the number of embedding vectors generated
from it. Similarly, the number of embedding vectors generated from a video
depends on the number of frames and the resolution of each frame. Finally, the
number of embedding vectors generated from an audio input depends on the
number of time steps and the number of frequency banks used to represent the
signal at each time step. In addition to the computational challenges of
training Transformers, finding sufficient amounts of data for training them
can also be challenging. Transformers are highly susceptible to overfitting
for small datasets, as evidenced by the consistent increase in generalization
performance with increasing training dataset size [12].
We present Input Compression with Positional Consistency (ICPC), a new
augmentation method that simultaneously addresses the efficiency and
overfitting challenges of training Transformers. ICPC creates augmented views
of each training sample by applying varying levels of compression to it in
each epoch. Compression reduces the number of embedding vectors generated from
each sample, thereby accelerating training. In contrast, current augmentation
methods (such as rotation, translation, CutMix [28], mixup [29], AugMix [11],
Manifold Mixup [21], etc.) are size-preserving, i.e., they do not change the
number of embedding vectors generated per sample.
ICPC utilizes input compression methods for different modalities that take
advantage of their unique characteristics. For text inputs, we propose
insignificant word pruning, which reduces the number of embedding vectors by
pruning a random subset of less important words from an input sequence in each
epoch. For images, we use resolution modulation where images are compressed to
random resolutions in each epoch. For videos, we use spatio-temporal
resolution modulation to reduce both the number of frames and the resolution
of each frame. Finally, for audio signals, we utilize spectrogram size
modulation, which randomly varies the number of time intervals that the signal
is divided into, and the number of frequency banks used to represent the
signal in each time interval.
Since Transformers were initially designed to process text sequences (which
can be arbitrarily long), they are inherently capable of processing variable-
length inputs. However, we find that naïvely providing compressed inputs leads
to a large drop in accuracy, especially for non-text inputs, due to incorrect
encoding of positional information. To overcome this challenge, we propose
consistency-aware position selection where the positions associated with each
input embedding vector are chosen so as to preserve consistency with the
original uncompressed input.
ICPC can also be used to improve the inference efficiency of Transformers
through variable-effort inference, wherein samples are first heavily
compressed and processed. Easy samples terminate at this stage, while more
difficult samples are re-evaluated with lower compression, leading to a net
savings in computational effort.
We summarize our main contributions as follows.
* •
We propose Input Compression with Positional Consistency (ICPC), a new
augmentation method for Transformers that simultaneously improves accuracy and
generalization performance.
* •
We describe input compression methods for different modalities that reduce the
number of embedding vectors generated for each input.
* •
We introduce a consistency-aware position selection method to enable ICPC
without any changes to the underlying model architecture.
* •
We demonstrate that training with ICPC consistently improves both accuracy and
efficiency over prior methods. We also show that ICPC can be used to improve
inference efficiency by modulating computational effort performed based on the
difficulty of the input.
## 2 Data Augmentation through Input Compression
The overarching idea behind ICPC is to compress inputs to achieve dual goals —
computational efficiency and data augmentation. While state-of-the-art
Transformer models for various modalities use similar architectures as their
backbone, they use modality-specific pre-processing steps for encoding the
inputs into sequences of embedding vectors. Hence, we propose input
compression methods for text, images, video and audio, all of which reduce the
number of embedding vectors required to represent a given input (Fig. 2). We
describe these methods in turn below.
Figure 2: Techniques for augmenting data through input compression for
different modalities.
Text: Embedding vectors are generated from text sequences by gathering the
relevant entries for each word in the sequence from an embedding table that
contains entries for all words in the model’s vocabulary (Fig. 1). Thus, the
number of embedding vectors generated from a given sequence is equal to the
number of words in the sequence. We propose insignificant word pruning to
compress text inputs by removing a random subset of less relevant words in
each input sequence in each epoch (Fig. 2). Our procedure for insignificant
word pruning is illustrated in Algorithm 1, lines 1-6. We identify
insignificant words in a given sequence using stopword filters. Stopwords are
words that do not contribute to the meaning of a sentence, but are required to
make it grammatically correct. As a result, pruning stopwords does not
affect the labels associated with sequences, thereby enabling augmentation
without impacting convergence. We first identify the stopwords in every
sequence in the training dataset. Then, the number of stopwords to prune from
each sequence in each epoch is determined by selecting a random number between
0 and the total number of stopwords in the sequence. Finally, we randomly
select and prune the determined number of stopwords from each sequence. In
effect, insignificant word pruning achieves data augmentation by pruning a
different subset of stopwords from the sample in every epoch, thereby ensuring
that the model does not see the same sequences repeatedly over the course of
training. Since pruning stopwords reduces the number of embedding vectors
generated from each sequence, insignificant word pruning also improves
training efficiency.
Images: Embedding vectors are generated from images by first splitting them
into non-overlapping fixed-size regions called patches, and subsequently
extracting an embedding vector from each patch through 2-D convolution.
Therefore, the resolution of an image (height, width) determines the number of
patches generated, which in turn, determines the embedding vectors generated
from the image. We propose resolution modulation for compression-based
augmentation of image data (Fig. 2), illustrated in Algorithm 1, lines 7-11.
We start by choosing two random numbers for each batch, with one number
indicating the height and the other number indicating the width that all
images in the batch will be resized to. Then, all images in the batch are
resized to the chosen (height, width) values through downsampling. We restrict
ourselves to determining resolution at the batch granularity in order to avoid
the padding and ineffectual computations introduced when images in a batch are
not of the same size.
Audio: Audio signals are represented using spectrograms, and embedding vectors
are generated by treating the spectrogram as a 2-D image and following the
method prescribed for images. The number of embedding vectors generated from a
given audio signal depends on two factors: the width of the spectrogram is
determined by the number of time intervals the audio is segmented into (which
we refer to as the sampling rate), and the height of the spectrogram is
determined by the number of frequency banks used to represent the signal. We
propose spectrogram size modulation for compressing audio inputs (Fig. 2), with
the procedure described in Algorithm 1, lines 20-25. Spectrogram size
modulation incorporates both sampling-rate modulation and filterbank-size
modulation. We randomly select a sampling rate and a number of filterbanks for
each batch. Then, each audio sample in the batch is sampled using the selected
sampling rate, and converted to a spectrogram with the selected number of
filterbanks at each time step. The sampling rate and number of filterbanks
are chosen on a per-batch basis to avoid padding.
Video: Videos are represented as a series of images (or frames) ordered in
time, and embedding vectors are generated by applying the same method
prescribed for images to each frame. Consequently, the number of embedding
vectors generated from a given video depends on two factors: the number of
frames used to represent the video, and the resolution of each frame.
Therefore, two forms of compression are possible in videos: (1) spatial
compression, which involves reducing the resolution of each frame, and (2)
temporal compression, which involves reducing the number of frames. We propose
spatio-temporal resolution modulation for augmenting video samples (Fig. 2),
with the procedure described in Algorithm 1, lines 12-19. For each batch in
each epoch, we select a random number of frames and spatial resolution
(height, width) for each frame. Then, all video samples in the batch are
uniformly sampled to generate the selected number of frames, and each frame is
subsequently rescaled to the selected (height, width) values. The number of
frames and resolution are chosen on a per-batch basis to avoid padding and
ineffectual computations.
1 Function Insignificant word pruning(batch):
2   for sequence in batch do
3     stopwords = identify_stopwords(sequence)
4     num_stopwords_to_prune = random(low=0, high=length(stopwords)-1)
5     stopwords_to_prune = random_select(stopwords, length=num_stopwords_to_prune)
6     sequence = sequence - stopwords_to_prune
7 Function Resolution modulation(batch, all_valid_heights, all_valid_widths):
8   batch_height = random_select(all_valid_heights, length=1)
9   batch_width = random_select(all_valid_widths, length=1)
10  for image in batch do
11    image = resize(image, resolution=(batch_height, batch_width))
12 Function Spatio-temporal modulation(batch, all_valid_heights, all_valid_widths, all_valid_frame_rates):
13  batch_height = random_select(all_valid_heights, length=1)
14  batch_width = random_select(all_valid_widths, length=1)
15  batch_frame_rate = random_select(all_valid_frame_rates, length=1)
16  for video in batch do
17    video = generate_frames(video, frame_rate=batch_frame_rate)
18    for frame in video do
19      frame = resize(frame, resolution=(batch_height, batch_width))
20 Function Spectrogram size modulation(batch, all_valid_sampling_rates, all_valid_filterbank_sizes):
21  batch_sampling_rate = random_select(all_valid_sampling_rates, length=1)
22  batch_filterbank_size = random_select(all_valid_filterbank_sizes, length=1)
23  for audio_signal in batch do
24    sampled_audio_signal = sample(audio_signal, sampling_rate=batch_sampling_rate)
25    spectrogram = create_spectrogram(sampled_audio_signal, num_filterbanks=batch_filterbank_size)
Algorithm 1 Data Augmentation through Input Compression for different
modalities
## 3 Consistency-aware position selection: Enabling ICPC in an architecture-
agnostic manner
Transformers are inherently capable of processing variable-length inputs,
i.e., input samples with different numbers of embedding vectors, since they
were originally designed to process text inputs that can be arbitrarily long.
As a result, inputs presented to the Transformer can have different sizes. In
contrast, all inputs presented to CNNs and RNNs are required to be of the same
size, since fully-connected layers used in these models require fixed-size
inputs. However, we find that position embeddings must be carefully selected
to encode inputs whose sizes are smaller than the maximum size supported by
the Transformer. In particular, we find that the relative positions of the
position embeddings selected to encode compressed inputs must be consistent
with the relative positions of vectors generated from the compressed inputs
along all dimensions. Consequently, we propose a consistency-aware position
selection method that finds the correct subset of position embeddings in the
original model for encoding compressed inputs. We describe our position
embedding selection methods for different modalities in turn below.
Figure 3: (a) Consistency-aware position selection for different modalities.
Blue rectangles represent position embeddings, and letters/numbers represent
their positions in the position embedding table. (b) Variable-effort inference
using ICPC.
Figure 4: Impact of position selection scheme on accuracy when processing
compressed inputs. Results are obtained using fine-tuned models downloaded
from the respective repositories. The image model is trained using 224*224
images. The video model is trained using eight 224*224 frames per video. The
audio model is trained using a sampling rate of 16KHz and 128 filterbanks. For
images, we also compare with the ”interpolation” method described in [5] for
fine-tuning at a different resolution than the one used for pre-training.
Text: Text inputs are 1-D arrays of words. Since Transformers were designed to
process variable-length text inputs, they incorporate a position embedding
selection mechanism for inputs that are shorter than the maximum length
supported by the Transformer that is designed to maintain 1-D consistency
between words in the sequence. For an input sequence of length $n$, 1-D
consistency is achieved by selecting the first $n$ entries from the position
embedding table (corresponding to the first $n$ positions in a sequence with
length equal to the maximum length supported by the Transformer), and encoding
the words in the order in which they appear (Fig. 3). For instance, if three
words (A, B, C) appear in that order in a sequence, they are encoded with
embeddings corresponding to the following positions: position(B) = 1 +
position(A), and position(C) = 1 + position(B).
Images: Patches derived from images are arranged into a 1-D stream and fed to
the Transformer (Fig. 1). Here, we find that simply selecting the first $n$
entries from the position embedding table (as done for text) does not
adequately capture the relative positions of patches (Fig. 4). We find that
images must be viewed as 2-D grids of patches for accurately selecting
position embeddings, since the position of a patch relative to other patches
cannot be uniquely determined in 1-D. For instance, the first-$n$ selection
method described above cannot encode the fact that two patches are adjacent to
each other along the y-axis in the 2-D grid. To address this challenge, we
propose a position embedding selection method designed to maintain 2-D
consistency between patches in compressed images (Fig. 3). In particular, if
patch A is adjacent to patch B along the x-axis and adjacent to patch C along
the y-axis in the 2-D grid, we encode these patches with embeddings
corresponding to the following positions – position(B) = 1 + position(A) and
position(C) = width of 2-D grid + position(A) – thereby encoding adjacency
information along both the x- and y-dimensions.
Audio: Audio signals are represented using spectrograms. Since spectrograms are
treated as 2-D images during pre-processing, the method described above for
encoding images also works for encoding patches generated from spectrograms
(Fig. 3, Fig. 4). In particular, if patch A is adjacent to patch B along the
time-axis and adjacent to patch C along the frequency-axis in the 2-D grid, we
encode these patches with embeddings corresponding to the following positions:
position(B) = 1 + position(A) and position(C) = width of 2-D grid +
position(A).
Video: Patches derived from videos are also arranged into a 1-D stream and fed
to the Transformer (Fig. 1), similar to images. However, we find that the
position of each patch relative to other patches can only be accurately
captured in 3-D. In particular, encoding compressed videos with a set of 1-D
consistent position embeddings (as done with text) only captures the relative
positions of patches along the x-axis; adjacency of patches along the y- and
time-axes are not captured. 2-D consistent position embeddings (used for
encoding images) can capture the relative positions of patches along the x-
and y-axes, but not along the time axis (Fig. 4). Consequently, we propose a
position embedding selection method designed to maintain 3-D consistency by
viewing videos as 3-D grids of patches (Fig. 3). If patch A is adjacent to
patch B along the x-axis, adjacent to patch C along the y-axis and adjacent to
patch D along the time-axis in the 3-D grid, our method encodes these patches
with embeddings corresponding to the following positions – position(B) = 1 +
position(A), position(C) = width of 2-D grid representing each frame +
position(A) and position(D) = (width of 2-D grid * height of 2-D grid
representing each frame) + position(A) – thereby encoding adjacency
information along the x- and y-, and time-axes.
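A minimal numpy sketch of the index arithmetic behind consistency-aware position selection; it returns the indices of the original (flattened) position-embedding table to gather for a compressed input, and assumes that any special classification-token embedding is handled separately.

```python
import numpy as np

def positions_2d(orig_w: int, comp_h: int, comp_w: int) -> np.ndarray:
    """2-D consistent positions for a compressed image of comp_h x comp_w patches,
    indexed into the original grid of width orig_w.  Guarantees position(B) = 1 +
    position(A) for x-neighbours and position(C) = orig_w + position(A) for y-neighbours."""
    rows, cols = np.meshgrid(np.arange(comp_h), np.arange(comp_w), indexing="ij")
    return (rows * orig_w + cols).reshape(-1)

def positions_3d(orig_h: int, orig_w: int,
                 comp_t: int, comp_h: int, comp_w: int) -> np.ndarray:
    """3-D consistent positions for a compressed video of comp_t frames of
    comp_h x comp_w patches; the offset per frame is orig_h * orig_w."""
    t, rows, cols = np.meshgrid(np.arange(comp_t), np.arange(comp_h),
                                np.arange(comp_w), indexing="ij")
    return (t * orig_h * orig_w + rows * orig_w + cols).reshape(-1)

# Example: ViT-Base-224 uses a 14x14 patch grid; a 112x112 input yields a 7x7 grid.
idx = positions_2d(orig_w=14, comp_h=7, comp_w=7)
# compressed_pos_embed = pos_embed_table[idx]   # gather the selected embeddings
```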
## 4 Efficient variable-effort inference using ICPC
When Transformers are deployed for inference, existing methods reshape all
input samples to the same shape. As a result, all inputs are represented using
the same number of embedding vectors, leading to the same amount of compute
time and energy being expended on all samples. However, we observe that many
samples can be accurately processed even when they are heavily compressed
(represented using only a small number of embedding vectors), and hence, these
”easy” samples can be processed at substantially lower computational cost. We
find that this is especially true in models trained with ICPC, since training
with compressed inputs substantially improves resilience to input compression
during inference. Consequently, we propose a variable-effort inference
framework that uses ICPC to modulate the computational effort based on the
difficulty of each sample (Fig. 3). When a sample is presented during
inference, it is first heavily compressed and presented to the Transformer.
Only if the Transformer is not confident in predicting the compressed sample
($confidence<T_{c}$), a less compressed version of the sample is presented to
the model. Here, $confidence$ denotes the class probability of the predicted
class after softmax and $T_{c}$ is a hyperparameter that controls the level of
confidence required to terminate execution. We describe our variable-effort
inference strategies for different modalities in the following subsections.
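A minimal sketch of the confidence cascade underlying variable-effort inference; here compression_levels stands for a list of modality-specific compression callables ordered from heaviest to no compression, and $T_{c}$ is the threshold tuned on the validation set as described in Section 5.

```python
import torch

@torch.no_grad()
def variable_effort_predict(model, sample, compression_levels, t_c=0.9):
    """Re-evaluate a sample at progressively lower compression until the softmax
    confidence of the predicted class reaches the threshold T_c (batch size 1)."""
    pred = None
    for compress in compression_levels:        # ordered from heavy to no compression
        logits = model(compress(sample))
        probs = torch.softmax(logits, dim=-1)
        conf, pred = probs.max(dim=-1)
        if conf.item() >= t_c:
            break                              # "easy" sample: terminate early
    return pred
```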
Images, Video and Audio: Images are first inferred at low resolution, and are
subsequently processed at higher resolution only when necessary, i.e., when
the confidence of the Transformer in predicting the low resolution image is
less than the confidence threshold. Similarly, videos are first processed
using small numbers of frames and low frame resolutions. Higher numbers of
frames and resolutions are used only for difficult inputs. Audio signals are
initially sampled with low sampling rates and represented using a small number
of filterbanks. Both quantities are then progressively increased only for
samples that cannot be confidently predicted by the Transformer when
compressed.
Text: Text sequences are first heavily compressed by pruning all stopwords
from each sequence. If the model is not confident in processing the heavily
compressed sequence, the amount of compression applied is reduced. One
approach to reducing compression is to prune a random subset of stopwords from
each sequence, instead of pruning all stopwords. However, we observe that not
all stopwords are equally unimportant. For instance, the word ”an” is a
context-independent stopword, i.e., ”an” is irrelevant irrespective of the
context it appears in. On the other hand, words such as ”beyond” are context-
dependent stopwords, i.e., they are irrelevant in most contexts, but are
meaningful when the relative positions between certain objects is important
for accurately processing the sequence. Based on this observation, we create
an ordered list of stopwords based on their relative significance, which we
call the Word Importance Hierarchy (WIH). The WIH is created by analyzing the
impact of dropping each stopword on the accuracy of a pre-trained model. The
stopwords are then arranged in increasing order of accuracy loss incurred by
their pruning. Subsequently, different compression levels are created during
inference by pruning the first-$n$ stopwords from the WIH from each sequence.
In effect, the use of WIH substantially improves the probability of achieving
high-confidence predictions when processing compressed inputs compared to
random stopword pruning.
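A minimal sketch of how such a Word Importance Hierarchy can be built; eval_fn and the dataset format are placeholders for the task-specific accuracy evaluation.

```python
def build_word_importance_hierarchy(model, eval_fn, sequences, stopwords):
    """Order stopwords by the accuracy drop a pre-trained model suffers when each
    one is pruned from every sequence (least harmful stopwords come first)."""
    base_acc = eval_fn(model, sequences)
    drops = {}
    for word in stopwords:
        pruned = [[tok for tok in seq if tok != word] for seq in sequences]
        drops[word] = base_acc - eval_fn(model, pruned)
    return sorted(stopwords, key=lambda w: drops[w])   # increasing accuracy loss
```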
## 5 Experiments and Results
We implement ICPC in PyTorch, and perform experiments on 4 NVIDIA A40 GPUs,
each with 48 GB memory. We use a batch size of 1 during inference, similar to
prior works on variable-effort inference [20]. We randomly sample 5% of the
training dataset with class balance, and use this as the validation set for
determining the confidence threshold ($T_{c}$) for variable-effort inference.
Experiments on text: We use the Roberta-Base model [17] along with the
stopword list from NLTK [1]. We create the WIH by testing a pre-trained
Roberta-Base model (downloaded from [26]) on MNLI. During inference, heavy
compression is achieved by pruning all words from the stopword list from each
sequence. Medium compression is achieved by pruning only those stopwords that
lead to a $<=$1% accuracy drop on the pre-trained model.
Experiments on images: We use the ViT-Base-224 [5] model for ImageNet, and the
ViT-Base-384 model [5] for CIFAR10 and CIFAR100 (both models are pre-trained
on ImageNet-21K). During training, the height and width of each image are
randomly chosen from [96, 112, 128, …, 224/384]. Images are resized to
(112*112, 176*176) and (192*192, 304*304) for inference with (high, medium)
compression on ImageNet and CIFAR, respectively.
Experiments on video: We use the UMT-Base-patch16-224 model [16] pre-trained
on Kinetics710. During training, the number of frames is randomly chosen from
[4, 5, 6, 7, 8], and videos are uniformly sampled to generate the selected
number of frames. The height and width of each frame is then randomly chosen
from [96, 112, 128, 144, 160, 176, 192, 208, 224]. Videos are represented
using (5, 7) frames and frames are resized to (112*112, 176*176) for inference
with high and medium compression, respectively.
Experiments on audio: We use the AST model [6] pre-trained on ImageNet for
SpeechCommandsv2, and the AST model pre-trained on AudioSet for ESC50. During
training, the sampling rate is randomly chosen from [8, 9, 10, 11, 12, 13, 14,
15, 16]KHz, and the number of filterbanks is chosen randomly from [65, 75, 85,
95, 105, 115, 125, 128]. Audio signals are sampled at (10, 14)KHz and
represented using (75, 105) filterbanks for inference with high and medium
compression, respectively.
### 5.1 Primary Results
We present results of training and inference with ICPC on classification tasks
spanning multiple modalities in Table 1. For text, we present results on
sentiment analysis (SST-2 [22]) and text categorization (Reuters-21578 [9]).
We present results of image classification on CIFAR-10 [15], CIFAR-100 [15]
and ImageNet [3]. We present results on video action recognition using the
SomethingSomethingV2 [7] and Kinetics400 [13] datasets, and on speech
recognition and environment sound classification using the SpeechCommandsV2
[24] and ESC50 [19] datasets, respectively. We find that using ICPC during
both training and inference improves accuracy by up to 1%, while also
accelerating training and inference by up to 2.9$\times$ and 2.6$\times$,
respectively.
We observe two complementary sources of accuracy improvement: (1) The
additional augmentation from ICPC during training leads to a 0.15% average
accuracy gain across the 9 tasks when all samples are processed without any
compression during inference. (2) Applying ICPC during inference leads to an
additional 0.2% average accuracy gain. The accuracy improvement from using
ICPC for inference is surprising, since it indicates that for a given sample,
the largest possible size is not always optimal. In fact, some inputs are
processed more accurately when they are compressed. We hypothesize that this
is because the pre-processing steps that generate embedding vectors from
inputs can be seen as a form of feature extraction, where each embedding
vector represents some feature(s) of the input. When embedding vectors
generated from compressed inputs capture the salient features of the input
better than the embedding vectors generated from non-compressed inputs, input
compression also improves accuracy. During variable-effort inference with
ICPC, the confidence of the Transformer in predicting a sample can be viewed
as an assessment of the quality of features extracted from the sample. In
effect, our method greedily identifies the ideal compression level for each
input by progressively reducing compression until sufficiently good features
are obtained, thereby simultaneously improving accuracy and efficiency. In
fact, we find that $>$75% of samples have confidence $\geq T_{c}$ at high
compression, and $<$15% of samples need to be processed with no compression in
all studied tasks.
Table 1: Results of training and inference with Transformers for different
modalities using ICPC. For the baselines, we follow the exact hyperparameter
settings suggested by the authors. ICPC entries are generated by using ICPC
during both training and inference. During training, ICPC is used as an
additional augmentation method (in addition to the augmenters used in the
baseline).
Modality | Dataset | Baseline | ICPC | Training Speedup | Inference Speedup
---|---|---|---|---|---
Text | SST-2 | 94.16 | 94.78 | 1.3$\times$ | 1.7$\times$
 | Reuters | 84.1 | 84.9 | 1.5$\times$ | 1.6$\times$
Image | CIFAR-10 | 98.84 | 99.21 | 2.4$\times$ | 2.2$\times$
 | CIFAR-100 | 92.36 | 93.31 | 2.3$\times$ | 1.9$\times$
 | ImageNet | 85.84 | 86.28 | 2.4$\times$ | 1.9$\times$
Video | SomethingSomethingV2 | 70.76 | 71.07 | 2.9$\times$ | 2.6$\times$
 | Kinetics400 | 87.42 | 87.63 | 2.8$\times$ | 2.2$\times$
Audio | SpeechCommandsV2 | 98.12 | 98.22 | 1.5$\times$ | 1.3$\times$
 | ESC50 | 95.75 | 95.89 | 1.6$\times$ | 1.3$\times$
### 5.2 Ablation: Evaluation of ICPC as an augmenter
We compare ICPC with MixUp [29], a popular augmentation strategy that is used
for image, video and audio inputs, in Fig. 5. When input compression is not
applied during inference, we find that models trained with ICPC are iso-
accurate to models trained with MixUp (difference in accuracy is $<$0.5% for
all tasks, with ICPC achieving higher accuracy on 5 out of the 9 studied
tasks). However, ICPC simultaneously accelerates training, while MixUp does
not improve training efficiency since all composite inputs are resized to
fixed shapes. In addition, we find that training with ICPC substantially
improves the resilience of models to test-time input compression (Fig. 5). The
extent of input compression performed during inference can be tuned to operate
at different points in the accuracy-efficiency trade-off space based on user
constraints, and ICPC-trained models are significantly more accurate than iso-
efficient MixUp-trained models under all levels of compression.
Figure 5: Impact of input compression on models trained with and without ICPC.
MixUp is not used when training with ICPC, and vice-versa in this experiment.
Our consistency-aware position embedding selection method is used for both
cases.
### 5.3 Further improving accuracy with Hardware-aware Test-time Augmentation
When a sample is presented during inference, multiple ”views” of the sample
can be generated using Test-time Augmentation, i.e., by applying the
augmentation methods used during training to the sample. Then, predictions on
different views of the sample can be combined using an ensembling function
(such as averaging, majority voting, etc.) to obtain the final prediction,
thereby improving accuracy and robustness. However, the time taken to process
each sample increases linearly with the number of augmented views generated
from the sample. To address this challenge, we propose Hardware-aware Test-
time Augmentation, which takes advantage of hardware under-utilization during
inference to enable Test-time Augmentation with minimal increase in latency.
In particular, hardware is under-utilized when small batch sizes are used
(Fig. 6), and increasing the batch size does not increase latency until the batch size is high enough to fully utilize the available compute resources. We term the smallest batch size at which the hardware is fully utilized the "ideal batch size". Latency typically remains constant (or changes very
minimally) for all batch sizes less than the ideal batch size. We implement
Test-time Augmentation by creating as many views of each sample as possible so
that the batch expands to the ideal batch size.
Our procedure for Hardware-aware Test-time Augmentation is as follows: (1) We
use ICPC to augment samples. Since inputs at different compression levels
generate different numbers of embedding vectors, padding is used to equalize
the lengths of all inputs for batching. Subsequently, attention masks are
applied in attention layers to prevent padding vectors from interfering with
the processing of valid vectors. (2) The ideal batch size varies with input
resolution (Fig. 6), with larger ideal batch sizes for smaller inputs.
Therefore, there is a trade-off between the number of augmented views that can be
used for inference at iso-latency, and the maximum resolution of the augmented
samples. To find the configuration with the best trade-off, we randomly create
K different configurations, with each configuration having a different set of
resolutions (number of resolutions in each configuration is equal to the ideal
batch size for the maximum resolution in the configuration). For instance,
[176, 160, 144, 128] is a valid configuration for ViT-Base on ImageNet, since
the ideal batch size is 4 when the resolution is 176*176 (Fig. 6). All inputs
are evaluated at resolutions of 176, 160, 144 and 128, and the predictions are
combined to produce the final prediction when this configuration is used. (3)
All K configurations are evaluated on our validation set (5% of the training
set randomly sampled with class balance), and the configuration with the best
validation accuracy is evaluated on the test set (Table 2). We find that Test-
time Augmentation using ICPC leads to an average accuracy gain of 1.4 absolute
points, which is 0.6 absolute points higher than the average accuracy gain
from Test-time Augmentation through other augmenters used to train the
baseline models. We also find that configurations with lower maximum
resolution and higher ideal batch sizes (better for efficiency since the
processing time for a batch is primarily dependent on the maximum resolution
of samples in the batch) achieve higher average accuracy than configurations
with higher resolutions and lower ideal batch sizes.
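The sketch below illustrates the two ingredients of this procedure: estimating the ideal batch size by timing forward passes at increasing batch sizes, and ensembling one view per resolution of a configuration by averaging the predicted probabilities. The 5% latency tolerance, the padding in image space (the method described above instead pads embedding sequences and masks them in attention), and all numeric values are illustrative assumptions.

```python
# Sketch of Hardware-aware Test-time Augmentation: (1) find the ideal batch size,
# (2) batch one view per resolution of a configuration and average the predictions.
# The latency tolerance, image-space padding and numeric values are assumptions.
import time
import torch
import torch.nn.functional as F

@torch.no_grad()
def ideal_batch_size(model, resolution, max_batch=64, tolerance=1.05):
    """Largest batch size whose latency stays within `tolerance` of batch size 1."""
    def latency(batch):
        x = torch.rand(batch, 3, resolution, resolution)
        model(x)                              # warm-up pass
        start = time.perf_counter()
        model(x)
        return time.perf_counter() - start

    reference, best, batch = latency(1), 1, 2
    while batch <= max_batch and latency(batch) <= tolerance * reference:
        best, batch = batch, batch * 2
    return best

@torch.no_grad()
def tta_predict(model, image, resolutions):
    """Ensemble one view per resolution in a single batch by averaging probabilities."""
    r_max = max(resolutions)
    views = []
    for r in resolutions:
        view = F.interpolate(image, size=(r, r), mode="bilinear", align_corners=False)
        # Pad to the largest resolution so all views fit in one batch; the actual
        # method pads embedding sequences and applies attention masks instead.
        views.append(F.pad(view, (0, r_max - r, 0, r_max - r)))
    probs = model(torch.cat(views, dim=0)).softmax(dim=-1)
    return probs.mean(dim=0).argmax().item()

# Example with a toy classifier and the configuration [176, 160, 144, 128] from the text.
toy_model = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                                torch.nn.Linear(3, 10))
print("ideal batch size at 176x176:", ideal_batch_size(toy_model, 176))
print("ensembled class:", tta_predict(toy_model, torch.rand(1, 3, 224, 224),
                                      [176, 160, 144, 128]))
```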
Figure 6: Impact of increasing batch size on inference latency for the ViT-Base-224 model on an NVIDIA A40 GPU.
Table 2: Results of Hardware-Aware Test-time Augmentation using ICPC. We create K=100 configurations, and choose the best one using the validation set. Speedups and accuracy gains (absolute points) are reported over the original (baseline) models.
Dataset | Accuracy Gain | Speedup
---|---|---
SST-2 | 1.7 | 1.4$\times$
Reuters | 2.0 | 1.2$\times$
CIFAR-10 | 0.6 | 1.8$\times$
CIFAR-100 | 1.7 | 1.6$\times$
ImageNet | 1.2 | 1.6$\times$
SomethingSomethingV2 | 1.8 | 1.9$\times$
Kinetics400 | 1.0 | 2.0$\times$
SpeechCommandsV2 | 0.6 | 1.2$\times$
ESC50 | 1.6 | 1.1$\times$
## 6 Related Work
Data augmentation: Data augmentation is a popular technique for preventing
overfitting during training. For text inputs, popular augmentation methods
include synonym replacement, shuffling, random insertion and deletion [25],
etc. On the other hand, image datasets are commonly augmented through
translation, rotation, noise addition, etc. In addition, techniques such as
MixUp [29], CutMix [28] and AugMix [10] achieve data augmentation by mixing
different training samples to create composite inputs. Since videos are
represented as sets of images ordered in time, augmentation techniques
designed for images have been shown to work well for videos also. Finally,
audio datasets are commonly augmented by adding background noise, and by
randomly masking out parts of the spectrogram [18, 14]. ICPC, which applies
varying levels of compression to create augmented views of each sample, is
complementary to and can be used in conjunction with the aforementioned
augmentation methods. In addition, the vast majority of prior augmentation
methods are size-preserving, i.e., the transformations do not change the shape
of the input, and hence, they do not have any impact on efficiency.
Transformers for variable-length inputs: Prior works have proposed
modifications to position embeddings in Transformers to enable processing of
variable length inputs. SegFormer [27] enables semantic segmentation on
variable-resolution inputs through a position-embedding-free model design.
NaViT [2] uses fractional embeddings to process images at their native aspect
ratios. Patchout [14] uses two different sets of position embeddings – one
capturing time information, and the other capturing frequency information –
for encoding variable-size spectrograms. Since these methods require
specialized architectures, they are not broadly applicable to all
Transformers.
Variable-effort inference: Variable-effort inference modulates computational
effort on a per-sample basis [8] by spending less computational effort in
processing easy samples compared to difficult samples. The most popular
example is early exit [20], which modulates network depth based on sample
difficulty. While early exit modulates model complexity for each sample, ICPC
takes a complementary data-centric approach and modulates input sizes. [23]
varies patch sizes based on input difficulty, but is not applicable to
modalities that do not involve patches (such as text).
## 7 Conclusion
We proposed Input Compression with Positional Consistency (ICPC), a new data
augmentation method that applies varying levels of compression to each sample
in every epoch. We introduced a consistency-aware position selection method
for encoding compressed inputs. We demonstrated that ICPC improved both
generalization performance and training efficiency. We also showed that ICPC
can be used to accelerate inference by modulating compression based on input
complexity.
## 8 Acknowledgement
This work was supported in part by the Center for the Co-Design of Cognitive
Systems (CoCoSys), a JUMP2.0 center sponsored by the Semiconductor Research
Corporation (SRC) and DARPA, and in part by SRC under the AIHW program.
## References
* Bird [2006] Steven Bird. NLTK: the natural language toolkit. In _ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Sydney, Australia, 17-21 July 2006_. The Association for Computer Linguistics, 2006.
* Dehghani et al. [2023] Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey A. Gritsenko, Mario Lucic, and Neil Houlsby. Patch n’ pack: Navit, a vision transformer for any aspect ratio and resolution. _CoRR_ , abs/2307.06304, 2023.
* Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20-25 June 2009, Miami, Florida, USA_ , pages 248–255. IEEE Computer Society, 2009.
* Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers)_ , pages 4171–4186. Association for Computational Linguistics, 2019.
* Dosovitskiy et al. [2021] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net, 2021.
* Gong et al. [2021] Yuan Gong, Yu-An Chung, and James R. Glass. AST: audio spectrogram transformer. _CoRR_ , abs/2104.01778, 2021.
* Goyal et al. [2017] Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michalski, Joanna Materzynska, Susanne Westphal, Heuna Kim, Valentin Haenel, Ingo Fründ, Peter Yianilos, Moritz Mueller-Freitag, Florian Hoppe, Christian Thurau, Ingo Bax, and Roland Memisevic. The ”something something” video database for learning and evaluating visual common sense. In _IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017_ , pages 5843–5851. IEEE Computer Society, 2017.
* Han et al. [2022] Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey. _IEEE Trans. Pattern Anal. Mach. Intell._ , 44(11):7436–7456, 2022.
* Hayes and Weinstein [1990] Philip J. Hayes and Steven P. Weinstein. CONSTRUE/TIS: A system for content-based indexing of a database of news stories. In _Proceedings of the The Second Conference on Innovative Applications of Artificial Intelligence (IAAI-90), Washington, DC, USA, May 1-3, 1990_ , pages 49–64. AAAI, 1990.
* Hendrycks et al. [2020a] Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net, 2020a.
* Hendrycks et al. [2020b] Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to improve robustness and uncertainty, 2020b.
* Kaplan et al. [2020] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _CoRR_ , abs/2001.08361, 2020.
* Kay et al. [2017] Will Kay, João Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The kinetics human action video dataset. _CoRR_ , abs/1705.06950, 2017.
* Koutini et al. [2021] Khaled Koutini, Jan Schlüter, Hamid Eghbal-zadeh, and Gerhard Widmer. Efficient training of audio transformers with patchout. _CoRR_ , abs/2110.05069, 2021.
* Krizhevsky [2009] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, Univ. Toronto, 2009.
* Li et al. [2023] Kunchang Li, Yali Wang, Yizhuo Li, Yi Wang, Yinan He, Limin Wang, and Yu Qiao. Unmasked teacher: Towards training-efficient video foundation models. _CoRR_ , abs/2303.16058, 2023.
* Liu et al. [2019] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretraining approach. _CoRR_ , abs/1907.11692, 2019.
* Park et al. [2019] Daniel S. Park, William Chan, Yu Zhang, Chung-Cheng Chiu, Barret Zoph, Ekin D. Cubuk, and Quoc V. Le. Specaugment: A simple data augmentation method for automatic speech recognition. In _Interspeech 2019, 20th Annual Conference of the International Speech Communication Association, Graz, Austria, 15-19 September 2019_ , pages 2613–2617. ISCA, 2019.
* Piczak [2015] Karol J. Piczak. ESC: dataset for environmental sound classification. In _Proceedings of the 23rd Annual ACM Conference on Multimedia Conference, MM ’15, Brisbane, Australia, October 26 - 30, 2015_ , pages 1015–1018. ACM, 2015.
* Teerapittayanon et al. [2016] Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. Branchynet: Fast inference via early exiting from deep neural networks. In _23rd International Conference on Pattern Recognition, ICPR 2016, Cancún, Mexico, December 4-8, 2016_ , pages 2464–2469. IEEE, 2016.
* Verma et al. [2019] Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, and Yoshua Bengio. Manifold mixup: Better representations by interpolating hidden states, 2019.
* Wang et al. [2018] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355, Brussels, Belgium, 2018. Association for Computational Linguistics.
* Wang et al. [2021] Yulin Wang, Rui Huang, Shiji Song, Zeyi Huang, and Gao Huang. Not all images are worth 16x16 words: Dynamic transformers for efficient image recognition. In _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , pages 11960–11973, 2021.
* Warden [2018] Pete Warden. Speech commands: A dataset for limited-vocabulary speech recognition. _CoRR_ , abs/1804.03209, 2018.
* Wei and Zou [2019] Jason W. Wei and Kai Zou. EDA: easy data augmentation techniques for boosting performance on text classification tasks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019_ , pages 6381–6387. Association for Computational Linguistics, 2019.
* Wolf et al. [2019] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface’s transformers: State-of-the-art natural language processing. _CoRR_ , abs/1910.03771, 2019.
* Xie et al. [2021] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, José M. Álvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. In _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , pages 12077–12090, 2021.
* Yun et al. [2019] Sangdoo Yun, Dongyoon Han, Sanghyuk Chun, Seong Joon Oh, Youngjoon Yoo, and Junsuk Choe. Cutmix: Regularization strategy to train strong classifiers with localizable features. In _2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019_ , pages 6022–6031. IEEE, 2019.
* Zhang et al. [2018] Hongyi Zhang, Moustapha Cissé, Yann N. Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net, 2018.
# Structuring in Thin Films during Meniscus-Guided Deposition
René de Bruijn<EMAIL_ADDRESS>Department of Applied Physics and
Science Education, Eindhoven University of Technology, P.O. Box 513, 5600 MB
Eindhoven, The Netherlands Institute for Complex Molecular Systems, Eindhoven
University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Anton A. Darhuber Department of Applied Physics and Science Education,
Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The
Netherlands Jasper J. Michels Max Planck Institute for Polymer Research,
Mainz, Germany Paul van der Schoot Department of Applied Physics and Science
Education, Eindhoven University of Technology, P.O. Box 513, 5600 MB
Eindhoven, The Netherlands
###### Abstract
We study theoretically the evaporation-driven phase separation of a binary
fluid mixture in a thin film deposited on a moving substrate, as occurs in
meniscus-guided deposition for solution-processed materials. Our focus is on
rapid substrate motion, where phase separation takes place far removed
from the coating device under conditions where the mixture is essentially
stationary with respect to the substrate. We account for the hydrodynamic
transport of the mixture within the lubrication approximation. In the early
stages of demixing, diffusive and evaporative mass transport predominates,
consistent with earlier studies on evaporation-driven spinodal decomposition.
By contrast, in the late-stage coarsening of the demixing process, the
interplay of solvent evaporation, diffusive, and hydrodynamic mass transport
results in a number of distinct coarsening mechanisms. The effective
coarsening rate is dictated by the (momentarily) dominant mass transport
mechanism and therefore depends on the material properties, evaporation rate
and time: slow solvent evaporation results in initially diffusive coarsening
that for sufficiently strong hydrodynamic transport transitions to
hydrodynamic coarsening, whereas rapid solvent evaporation can preempt and
suppress either or both hydrodynamic and diffusive coarsening. We identify a
novel hydrodynamic coarsening regime for off-critical mixtures, arising from
the interaction of the interfaces between solute-rich and solute-poor regions
in the film with the solution-gas interface. This interaction induces
directional motion of solute-rich droplets along gradients in the film
thickness, from regions where the film is relatively thick to where it is
thinner. The solute-rich domains subsequently accumulate and coalesce in the
thinner regions, enhancing domain growth.
## I Introduction
Solution-processed thin films are an essential component in the production of
organic electronics with applications ranging from organic photovoltaics to
sensors, transistors and many more Di Carlo Rasi and Janssen (2019); Mei _et
al._ (2013); Janasz _et al._ (2022). The films are commonly manufactured by
dissolving the constituents in a solution containing one or more volatile
solvents, which is subsequently deposited onto a substrate where the solvent is
removed by drying Chen _et al._ (2020). In the course of the drying of the
(liquid) film a very complex microscopic morphology emerges Bornside _et al._
(1989); Diao _et al._ (2014), which typically forms via phase separation,
crystallization or a combination of both Diao _et al._ (2014); Chen _et al._
(2020). This morphology is believed to be crucial for the (efficient)
functioning of the devices Chen _et al._ (2020); Gu _et al._ (2016); Wang
_et al._ (2021) and therefore the ability to control the emergent morphology
is of paramount importance for the rational design of organic electronic
devices Chen _et al._ (2020); Schaefer _et al._ (2015); Peng _et al._
(2023).
One of the key factors affecting the final dry film morphology are the
processing settings, which are often specific to any particular deposition
technique Schaefer _et al._ (2015); Negi _et al._ (2018); Yildiz _et al._
(2022); van Franeker _et al._ (2015a, b, c). For instance, in spin coating
the rate of solvent evaporation, controlled by the so-called spin speed,
crucially governs the morphology Schaefer _et al._ (2015, 2016), impacting
both the initial demixing dynamics Schaefer _et al._ (2015, 2016); de Bruijn
_et al._ (2024a) and late-stage coarsening Schaefer _et al._ (2016); Negi
_et al._ (2018). Another frequently used family of deposition techniques is
meniscus-guided deposition, where the solution is deposited from a stationary
dispensing unit onto a moving substrate Diao _et al._ (2014). For this
technique the substrate velocity and hydrodynamic transport processes in the
film and meniscus that are present due to the directional motion of the
substrate also become important control variables de Bruijn _et al._ (2024b);
Yildiz _et al._ (2022); Michels _et al._ (2021); Rogowski _et al._ (2011).
In our recent study of the meniscus-guided deposition of a binary phase-
separating mixture, we show that these factors can significantly affect the
morphology but only if the substrate moves sufficiently slowly. Indeed, the
substrate should move more slowly than the growth rate of the demixed
structures de Bruijn _et al._ (2024b). At high velocities that typically are
within the so-called Landau-Levich regime, the solution dries significantly
only far removed from the dispensing unit under conditions that arguably
resemble those during spin coating Schaefer _et al._ (2015); Negi _et al._
(2018); Schaefer _et al._ (2016); Ronsin and Harting (2022a); van Franeker
_et al._ (2015b). Under these conditions, hydrodynamic transport processes
related to the deposition technique itself should not impact the demixed
morphology.
Even if hydrodynamics related to the deposition process is negligible,
hydrodynamic transport processes due to solvent evaporation, the evolution of
the free solution-gas surface and demixing do remain important irrespective of
the deposition technique. For non-volatile mixtures the impact of
hydrodynamics on the morphological evolution of a demixing solution, whether
in bulk or confined between parallel plates, has been studied extensively
theoretically Siggia (1979); Tanaka (1996); Bray (2002); Zoumpouli and
Yiantsios (2016), numerically Tanaka (1996); Chen and Chakrabarti (1997);
Tanaka (2001); Zoumpouli and Yiantsios (2016) and by means of experiments
Tanaka (2001); Bouttes _et al._ (2015); Sung _et al._ (1996); Song and
Torkelson (1995); Haas and Torkelson (1997). In general, it seems that
hydrodynamics is relevant only in the coarsening stage of the demixed
morphology. As is often observed in experiments and well understood
theoretically, coarsening is characterized by a characteristic feature size
$\langle L\rangle$ that adheres to a power law relation $\langle
L\rangle\propto t^{\alpha}$ with $\alpha$ the coarsening exponent that depends
on the dominant mass transport mechanism Tanaka (1996); Siggia (1979); Bray
(2002). For connected or bicontinuous morphologies, coarsening transitions
have been predicted and observed, from diffusive coarsening with an exponent
of either $\alpha=1/3$ or $1/4$, depending on the diffusive mobilities of the
solute and the solvent molecules, to viscous coarsening with an exponent of
$\alpha=1/2$ in two dimensions and $\alpha=1$ in three dimensions Siggia
(1979); Bray (2002); Lifshitz and Slyozov (1961); Wagner (1961). A second
transition exists from viscous to inertial coarsening with for the latter an
exponent $\alpha=2/3$ in both two and three dimensions Siggia (1979); Bray
(2002). These coarsening regimes are, however, absent if the morphology is
disconnected, as is the case for off-critical mixtures wherein one of the
phases forms droplets. Short-ranged hydrodynamic interactions between the
droplets still operate, which tend to facilitate their coalescence either via
attractive Marangoni-like interactions Shimizu and Tanaka (2015) or via a
cascade of coalescence events, because coalescing droplets result in motion of
the surrounding fluid Tanaka (1996); Chen and Chakrabarti (1997); Tanaka
(2001). Such short-ranged interactions give rise to a coarsening exponent of
$\alpha=1/3$ and are therefore in this sense often indistinguishable from
diffusive coarsening Tanaka (1996); Shimizu and Tanaka (2015).
In contrast to the studies on non-volatile mixtures, many theoretical and
numerical studies on volatile solutions have neglected both the phase-
separation hydrodynamics and the hydrodynamics caused by solvent evaporation
itself Schaefer _et al._ (2015); Negi _et al._ (2018); Schaefer _et al._
(2016). Only recently attention has shifted to include these transport
processes in order to better mimic the conditions and transport processes
taking place during the solution processing of thin (polymeric) films
Zoumpouli and Yiantsios (2016); Ronsin and Harting (2022a, b); Cummings _et
al._ (2018). These studies are, however, limited to the case of a stationary
film, whereas films are frequently fabricated via deposition on a moving
substrate. Moreover, we are not aware of any systematic studies on the effect
of hydrodynamics on the phase-separation kinetics in a volatile thin film,
during either the early-stage demixing or the late-stage coarsening. In this
work, we investigate by means of numerical calculations the effect of
hydrodynamic transport processes in a binary mixture confined to a thin film,
undergoing evaporation-driven spinodal decomposition on a moving substrate in
a meniscus-guided deposition setup. We focus in particular on the limit of
rapid substrate motion in the so-called Landau-Levich regime, where phase
separation occurs far from the meniscus, under conditions where the mixture is
for all intends and purposes stationary with respect to the substrate and the
solution-gas interface is parallel to the substrate. Following our earlier
work in the slow-coating evaporative regime de Bruijn _et al._ (2024b), we
here also treat the (hydrodynamic) transport processes within a height-
averaged approximation, suppressing stratification in the film de Bruijn _et
al._ (2024b); Thiele _et al._ (2016); Clarke (2005); Náraigh and Thiffeault
(2007, 2010). This is reasonable for films that are sufficiently thin, in
absence of preferential interactions of the components in solution with the
substrate or solution-gas interface and for sufficiently slow solvent
evaporation Clarke (2005); Náraigh and Thiffeault (2010); Thiele _et al._
(2016); Larsson and Kumar (2022).
Our findings show that during the early stages of phase separation,
hydrodynamic and diffusive transport modes decouple. During these early stages
the phase separation kinetics is dictated by diffusive and evaporative mass
transport, in agreement with the findings of Schaefer and collaborators who
neglect hydrodynamics altogether Schaefer _et al._ (2015, 2016). Demixing
typically occurs under off-critical conditions and the emergent morphology
just after demixing resembles a dispersion of solute-rich droplets in a
solvent-rich majority phase. This is a consequence of solvent evaporation
gradually destabilizing the solution starting from a (very) low solute
concentration. The morphology remains off-critical during the late stages of
the demixing process where several coarsening regimes present themselves.
These we illustrate schematically in Fig. 1, showing a side view of the film
with the solution-gas interface in blue, the solute-rich phase in gray and the
interfaces separating the solute-rich and solute-poor domains in orange. Each
coarsening mechanism shown is associated with one or more of the mass
transport processes present in our model description. If hydrodynamic
transport and solvent evaporation are slow relative to diffusive transport, we
find a diffusive (Ostwald-type) coarsening mode as depicted in Fig. 1A. If
evaporation is rapid, an evaporative coarsening mode emerges depicted in Fig.
1B. The decreasing height of the film results in lateral redistribution of the
material in the solute-rich domains. This, in combination with the effect of
hydrodynamic interactions between the solute-rich domains that promote
coalescence, produces an evaporation-induced coalescence pathway. One of the
(attractive) hydrodynamic interactions that promotes coalescence of nearby
droplets is the “compositional” Marangoni effect that is illustrated in Fig.
1C. This effect is a result of gradients in the liquid-liquid interfacial
tension that itself appears to originate from diffusive mass fluxes between
droplets due to Ostwald-type transport Shimizu and Tanaka (2015).
Surprisingly, we also find what is, as far as we are aware, a novel hydrodynamic coarsening mechanism for off-critical mixtures, which we refer to as confluent coarsening and illustrate in Fig. 1D. The physical origins of this
coarsening mode lie in the interaction between the liquid-liquid phase
boundaries and the solution–gas surface. This interaction induces a
directional motion of the solute-rich domains aligned with gradients in the
height of the film, where the low-lying regions act as focal points for the
droplets to accumulate, again promoting domain coalescence.
Figure 1: A schematic representation of the four main coarsening mechanisms
that we find in our numerical calculations. Shown is a side view of the phase
separating film (not to scale). The solute-rich domains are shown in gray with
the fluid phase boundaries indicated in orange. The blue line represents the
free solution-gas interface. In panel A we show the classical Ostwald-ripening
that originates from differences in the Laplace pressure between small and
large domains. In panel B we depict evaporative coarsening where the
decreasing thickness of the thin film results in the lateral redistribution of
solute mass. Panel C illustrates the short-ranged attractive hydrodynamic
interactions known as the compositional Marangoni effect Shimizu and Tanaka
(2015), which originates from gradients in the solute-solvent surface tension
that themselves find their origin in the Ostwaldian mass transport from small to
large droplets. In panel D we highlight what we refer to as confluent
coarsening, wherein droplets move advectively along gradients in the height of
the film towards low-lying regions of the film.
The remainder of this Chapter is structured as follows. In Section II, we
present our model and in Section III the results of our numerical
calculations. The early-time behavior that emerges from our model we discuss
in detail in Section IV, showing that for an initially homogeneous film the
compositional evolution is in essence not affected by hydrodynamic transport.
We subsequently return to the late-stage coarsening dynamics in Section V,
unveiling a novel hydrodynamic coarsening pathway originating from the
coupling of the hydrodynamics due to the (curved) solution–gas surface and
those due to bulk demixing. Finally, we discuss and conclude our work in Sec.
VII.
## II Theory
We consider the isothermal, evaporation-driven spinodal decomposition of an
incompressible binary solution comprising of a solute and a volatile solvent.
We focus on the deposition conditions present during the meniscus-guided
deposition of a fluid at high substrate velocities deep in the so-called
Landau-Levich regime, complementing our earlier work on phase separation in
the evaporative regime de Bruijn _et al._ (2024b). The film dries far removed
from the capillary zone near the fluid inlet where the solution-gas interface
is (nearly) parallel with the substrate and the fluid stationary with respect
to the (moving) substrate. We therefore use the equivalent situation of a
(stationary) solution confined between a stationary substrate and an initially
flat solution-gas interface.
In our model we neglect inertial effects, which is justified as Reynolds
numbers in thin films are typically much smaller than unity. Both the solute
and solvent are assumed to be neutral with respect to both the substrate and
the solution-gas interface, implying that (i) we ignore preferential
interactions with either surface, and (ii) the solution-gas surface tension is
independent of the composition. Hence, we also neglect Marangoni effects
associated with gradients in the surface tension of the free surface.
Additionally, we suppress stratification in the film itself. This is
reasonable if (i) the height of the film $h\equiv h(x,y,t)$, defined as the
distance between the substrate and the fluid-gas interface, is smaller than
the characteristic size of the demixed structures and (ii) evaporation is
sufficiently slow, that is, if $hk/D_{\mathrm{coop}}\ll 1$, where $k$ is the
velocity with which the height of the fluid-gas interface decreases due to
solvent evaporation and $D_{\mathrm{coop}}$ the cooperative or mutual
diffusion coefficient of the solute Schaefer _et al._ (2017); Larsson and
Kumar (2022); Náraigh and Thiffeault (2010).
If the slope of the height of the film remains small, $|\nabla h|\ll 1$,
hydrodynamic mass transport can be described invoking a height-averaged
approach also known as the lubrication approximation Oron _et al._ (1997). In
this approximation, the Stokes equations reduce to a (generalized) diffusion-
or Cahn-Hilliard-type equation for the height of the film, simplifying the
description considerably Oron _et al._ (1997). Within this framework, the
height-averaged advective and diffusive mass currents can be expressed via
gradients in the pressure $p$ and the density in the exchange chemical
potential $\Delta\mu$ (in units of energy per volume) Thiele _et al._ (2016).
We obtain these driving forces as the functional derivatives of a free energy
functional $\mathcal{F}[h,\psi]$ with respect to the height $h$ and the
“solute height” $\psi=h\phi$, where $\phi\equiv\phi(x,y,t)$ is the solute
volume fraction Mitlin (1993); Náraigh and Thiffeault (2007, 2010); Thiele
_et al._ (2016). Note that the correct expression for the exchange chemical
potential density $\Delta\mu$ cannot be obtained via the functional derivative
with respect to the solute volume fraction $\phi$ because $\phi$ implicitly
depends on the height as $\phi\propto h^{-1}$. Hence, $\phi$ and $h$ cannot be
varied independently in the variational sense. Defining $\Delta\mu$ via the
functional derivative with respect to the solute height $\psi$ effectively
addresses this issue Thiele _et al._ (2016). Parenthetically, since $\psi$ is
a height, we may also interpret the exchange chemical potential density
$\Delta\mu$ as the partial pressure of the solute.
The free energy functional $\mathcal{F}[h,\psi]$ for our model system can be
expressed as the sum of two contributions: one associated with the free
solution-gas interface, which is also known as the effective interfacial
Hamiltonian de Gennes _et al._ (2004); Thiele _et al._ (2016), and the other
describing the bulk solution. It is given by
$\begin{split}\mathcal{F}[h,\psi]=\int\mathrm{d}{\mathbf{r}}\bigg{[}\frac{\gamma}{2}|\nabla
h|^{2}+g(h)+\\\ \Delta
fh\left(f(\phi)+\frac{\kappa}{2}|\nabla\phi|^{2}\right)\bigg{]},\end{split}$
(1)
where we integrate over the substrate area in the $x$–$y$ plane and
$\nabla=(\partial_{x},\partial_{y})^{T}$ is the two-dimensional lateral
gradient operator. We express Eq. (1) in terms of the solute volume fraction
$\phi=\psi/h$ instead of the solute height $\psi$ for notational convenience.
The first two terms in Eq. (1) describe the free surface of the liquid
solution. The first term represents the work required to deform the interface,
where $\gamma$ is the surface tension, assumed to be independent of
temperature and solute concentration de Gennes _et al._ (2004); Thiele _et
al._ (2016). Hence, we suppress the thermal and so-called solutal Marangoni
effects associated with the free surface. The compositional Marangoni effect
that finds its origin in gradients in the liquid-liquid interfacial tension,
illustrated in Fig. 1C, remains active. The second term in Eq. (1) accounts
for the disjoining pressure, arising from the differences in the Van der Waals
interactions between the solution and the substrate on the one hand and the
gas phase and the substrate on the other. We define it as
$g(h)=-\frac{A_{\mathrm{H}}}{12\pi h^{2}},$ (2)
with $A_{\mathrm{H}}$ Hamaker’s constant. For the sake of simplicity, we
ignore slope and curvature corrections to the disjoining pressure Dai _et
al._ (2008). Contributions to Eq. (2) that would allow for the formation of a
(stable) precursor film are not included. This also means that our model
cannot correctly account for dewetting, which we deem to be outside the scope
of this work.
The remaining terms in Eq. (1) account for the properties of the bulk mixture,
assuming that the composition remains vertically uniform. We introduce $\Delta
f=k_{\mathrm{B}}T/b^{3}$ as the unit of (free) energy density, where
$k_{\mathrm{B}}$ is Boltzmann’s constant, $T$ the absolute temperature and
$b^{3}$ a microscopic volume that enters our model for dimensional
consistency. For the (dimensionless) bulk free energy density $f(\phi)$, we
adopt the Flory-Huggins model
$f(\phi)=\phi\ln\phi+(1-\phi)\ln(1-\phi)+\phi(1-\phi)\chi,$ (3)
where $\chi$ is the well-known Flory interaction parameter Flory (1942);
Huggins (1942). The solution phase separates if the solvent quality is
sufficiently low, that is, for $\chi>2$, and if the volume fraction of solute
is somewhere between the low and high-concentration branches of the binodal
Flory (1942).
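These stability criteria are easy to verify numerically. The short sketch below (our own illustration, using $\chi=4$ as in Table 1 further down) evaluates Eq. (3) and the two spinodal branches, which for $\chi=4$ lie at $\phi\approx 0.1464$ and $\phi\approx 0.8536$; the low-concentration branch is the initial composition used in the calculations reported below.

```python
# Numerical check of the Flory-Huggins stability criteria quoted above (our own
# illustration; chi = 4 matches the value used later in Table 1).
import numpy as np

def f_fh(phi, chi):
    """Dimensionless Flory-Huggins free energy density, Eq. (3)."""
    return phi * np.log(phi) + (1.0 - phi) * np.log(1.0 - phi) + chi * phi * (1.0 - phi)

def spinodal_branches(chi):
    """Roots of d2f/dphi2 = 1/phi + 1/(1-phi) - 2*chi = 0; none exist for chi <= 2."""
    if chi <= 2.0:
        return None
    root = np.sqrt(1.0 - 2.0 / chi)
    return (1.0 - root) / 2.0, (1.0 + root) / 2.0

phi_lo, phi_hi = spinodal_branches(chi=4.0)
print(f"spinodal branches at chi = 4: {phi_lo:.7f} and {phi_hi:.7f}")
print("free energy density at the low branch:", f_fh(phi_lo, 4.0))
```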
The square-gradient contribution $\kappa/2|\nabla\phi|^{2}$ in Eq. (1)
penalizes the formation of interfaces between the phases in solution. The
“interfacial stiffness” $\kappa$ we take as a free and constant parameter for
simplicity albeit that in reality it may be a function of the volume fraction
$\phi$, the Flory interaction parameter $\chi$ and the molecular weight of the
solute for polymeric solutions de Gennes (1980); Debye (1959). Interfaces
between the solute and solvent-rich domains carry an interfacial tension
$\sigma\propto\Delta
f\sqrt{\kappa}(\Delta\phi)^{3/2}\left(\chi-\chi_{\mathrm{s}}(\phi)\right)^{2/3}$,
where $\Delta\phi$ is the difference between the equilibrium concentrations in
the solute-rich and solvent-rich phases, and
$\chi_{\mathrm{s}}=\frac{1}{2}\langle\phi\rangle(1-\langle\phi\rangle)$ the
Flory interaction parameter at the spinodal for a given (mean) solute
concentration $\langle\phi\rangle$ Cahn and Hilliard (1958, 1959); Anderson
_et al._ (1998); König _et al._ (2021). Strictly speaking, this expression
for the interfacial tension is valid only near the critical point, although it
seems to remain very accurate far from it König _et al._ (2021). In volatile
mixtures the mean solute concentration increases with time, rendering the
interfacial tension between solute-rich and solute-poor domains in the film,
$\sigma$, time dependent as well. Notably, it vanishes when the mean
concentration equals the concentration of either branch of the spinodal.
Following the standard approach, we next utilize Onsager’s reciprocal
relations to relate the time evolution equations for the order parameters $h$
and $\psi$ to the diffusive and hydrodynamic mass currents Onsager (1931a, b).
This yields
$\frac{\mathrm{\partial}{h}}{\mathrm{\partial}{t}}=\nabla\cdot\frac{h^{3}}{3\eta}\left(\nabla
p+\phi\nabla\Delta\mu\right)+f_{\mathrm{evap}}(\phi),$ (4)
for the height of the film and
$\frac{\mathrm{\partial}{\psi}}{\mathrm{\partial}{t}}=\nabla\cdot\frac{h^{2}\psi}{3\eta}\left(\nabla
p+\phi\nabla\Delta\mu\right)+\nabla\cdot hM(\phi)\nabla\Delta\mu+\zeta,$ (5)
for the solute height. Here, we introduce the known expressions for Onsager’s
mobility coefficients with $\eta$ the viscosity of the solution that we assume
to be constant for simplicity Mitlin (1993); Xu _et al._ (2015); Thiele _et
al._ (2016). The diffusive mobility for an incompressible binary mixture reads
Doi (2011)
$M=\Delta f^{-1}D\phi(1-\phi),$ (6)
which is also known as the “double degenerate” mobility Dai and Du (2016),
with $D$ the tracer (self) diffusivity that we assume to be constant. The
evaporation flux $f_{\mathrm{evap}}(\phi)$ and thermal noise $\zeta$ we return
to below. The pressure is given by $p=\delta\mathcal{F}/\delta
h=-\gamma\nabla^{2}h+\partial_{h}g(h)+p_{\mathrm{b}}$ with
$p_{\mathrm{b}}=\Delta f(f(\phi)+\kappa/2|\nabla\phi|^{2})-\phi\Delta\mu$ the
osmotic pressure of the solution. The exchange chemical potential density is
defined as $\Delta\mu=\delta\mathcal{F}/\delta\psi=\Delta
f\partial_{\phi}f(\phi)-\Delta fh^{-1}\kappa\nabla\cdot h\nabla\phi$. In
principle, we can now also derive from Eqs. (4) and (5) an evolution equation
for the solute volume fraction $\phi$. We opt to not do so as the equation for
the solute height Eq. (5) is simpler to implement numerically.
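As a brief consistency check of this definition, varying the bulk contribution to Eq. (1) with respect to $\psi$ at fixed $h$, using $\delta\phi=\delta\psi/h$ and integrating by parts (boundary terms vanish for the periodic boundary conditions employed below), gives
$\delta\int\mathrm{d}{\mathbf{r}}\,\Delta f\,h\left(f(\phi)+\frac{\kappa}{2}|\nabla\phi|^{2}\right)=\int\mathrm{d}{\mathbf{r}}\,\Delta f\left[\partial_{\phi}f(\phi)\,\delta\psi+\kappa\,h\,\nabla\phi\cdot\nabla\!\left(\frac{\delta\psi}{h}\right)\right]=\int\mathrm{d}{\mathbf{r}}\,\Delta f\left[\partial_{\phi}f(\phi)-\frac{\kappa}{h}\nabla\cdot\left(h\nabla\phi\right)\right]\delta\psi,$
which reproduces the expression for $\Delta\mu$ quoted above.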
As is usual in the lubrication theory of thin films, we interpret
$\begin{split}\mathbf{u}&\equiv-\frac{h^{2}}{3\eta}\left(\nabla
p+\phi\nabla\Delta\mu\right)\\\
&=-\frac{h^{2}}{3\eta}\left[\nabla(p-p_{\mathrm{b}})+\Delta
f\kappa(\nabla|\nabla\phi|^{2}+(h^{-1}\nabla
h\cdot\nabla\phi)\nabla\phi)\right]\end{split}$ (7)
as the height-averaged fluid velocity Thiele _et al._ (2016); Oron _et al._
(1997). For the second equality sign we use $\nabla
p_{\mathrm{b}}=-\phi\nabla\Delta\mu+\Delta
f\kappa[\nabla|\nabla\phi|^{2}+(h^{-1}\nabla h\cdot\nabla\phi)\nabla\phi]$,
which follows by taking the gradient of the osmotic pressure $p_{\mathrm{b}}$
where we make use of the expression for the exchange chemical potential
$\Delta\mu$ Thiele _et al._ (2016). We are thus led to conclude that the
fluid velocity must be independent of the osmotic pressure $p_{\mathrm{b}}$
but that it does depend on the presence of interfaces between the solute-rich
and solvent-rich phases. As we shall show at a later stage of this work, the
final contribution $\Delta f\kappa(h^{-1}\nabla h\cdot\nabla\phi)\nabla\phi$
to Eq. (7) can result in directional motion of droplets if the height of the
film has a gradient.
Next, for the solvent evaporation flux $f_{\mathrm{evap}}(\phi)$, we use the
simple ansatz of a linear relation between the solvent concentration at the
solution-gas surface and the evaporation flux
$f_{\mathrm{evap}}(\phi)=-k(1-\phi).$ (8)
Here, $k$ is a phenomenological mass-transfer coefficient that depends on the
partial pressure of the solvent, the solvent quality, and so on Bornside _et
al._ (1989).
Finally, as usual the thermal noise $\zeta$ is delta-correlated with zero mean
$\langle\zeta(x,y,t)\rangle=0$ and covariance
$\langle\zeta(x,y,t)\zeta(x^{\prime},y^{\prime},t^{\prime})\rangle=-2k_{\mathrm{B}}T\omega^{2}\nabla\cdot
hM(\psi,h)\nabla\delta(x-x^{\prime})\delta(y-y^{\prime})\delta(t-t^{\prime})$.
Here, $\omega\leq 1$ is an ad hoc scaling parameter Cook (1970); Ronsin and
Harting (2022b) that has no physical origin but dampens the intensity of the
thermal fluctuations. This allows us to take larger steps in our time
integrator. For $\omega\neq 1$ the magnitude (in some sense) of the noise
violates the fluctuation-dissipation theorem, yet we find justification for
setting $\omega<1$ in the observation that it affects our results only
quantitatively, not qualitatively, and that thermal fluctuations can generally
anyway be neglected in the late times of coarsening Puri and Oono (1988);
König _et al._ (2021). We do not account for any thermal fluctuations in the
height of the film in Eq. (4) Clarke (2005); Davidovitch _et al._ (2005);
Grün _et al._ (2006) nor for any cross-correlated thermal fluctuations, and
discuss the consequences of these approximations at a later stage in this
Chapter.
To make our model description as generic as possible, we nondimensionalize our
model using the initial height of the film $h_{0}$ as the characteristic scale
for the height and also to nondimensionalize the lateral lengths. For the
velocity scale we take $u=\Delta f\sqrt{\kappa}/\eta\propto\sigma/\eta$,
because $\Delta f\sqrt{\kappa}$ is a measure for the interfacial tension
between solute-rich and solute-poor domains, $\sigma$ Cahn and Hilliard (1958,
1959); König _et al._ (2021); see also our discussion earlier in this
section. The pressure scale we define as $p_{0}=\eta u/h_{0}$ and the
diffusive time scale as $t_{0}=h_{0}^{2}/D$. The relevant dimensionless groups
are (i) the Capillary number $\mathrm{Ca}\equiv
u\eta/\gamma\propto\sigma/\gamma$, which also acts as a measure for the ratio
of the interfacial tension between the solute-rich and solute-poor regions and
that of the fluid-gas interface, (ii) what we call the disjoining number
$\mathrm{G}=A_{\mathrm{H}}/6\pi h_{0}^{2}\eta u$, which measures the strength
of the disjoining forces relative to the capillary forces of the solute-
solvent interfaces, (iii) the Peclet number $\mathrm{Pe}=uh_{0}/D=\Delta
f\sqrt{\kappa}h_{0}/\eta D$, (iv) the Biot number $\mathrm{Bi}=kh_{0}/D$,
which measures the strength of evaporation relative to diffusion, and (v) the
Cahn number $\mathrm{Cn}=\kappa/h_{0}^{2}$. For the remainder of this work we
treat the Peclet number, the Biot number and the Capillary number as freely
adjustable parameters. We insert the dimensionless variables and operators
$h=h/h_{0}$, $\psi=\psi/h_{0}$, $p=p/p_{0}$, $\Delta\mu=\Delta\mu/\Delta f$,
$\nabla=h_{0}\nabla$, $t=t/t_{0}$, $M=M\Delta f/D$ and $\zeta=\zeta\Delta
f/Dh_{0}$ in the governing equations Eqs. (4)-(8), producing the following
dimensionless equations for the film height
$\frac{\mathrm{\partial}{h}}{\mathrm{\partial}{t}}=\mathrm{Pe}\nabla\cdot\frac{h^{3}}{3}\left(\nabla
p+\frac{1}{\sqrt{\mathrm{Cn}}}\phi\nabla\Delta\mu\right)-\mathrm{Bi}(1-\phi),$
(9)
with the pressure
$p=-\frac{1}{\mathrm{Ca}}\nabla^{2}h+G/h^{3}+\frac{1}{\sqrt{\mathrm{Cn}}}p_{\mathrm{b}},$
(10)
and for the solute height
$\frac{\mathrm{\partial}{\psi}}{\mathrm{\partial}{t}}=\mathrm{Pe}\nabla\cdot\frac{h^{2}\psi}{3}\left(\nabla
p+\frac{1}{\sqrt{\mathrm{Cn}}}\phi\nabla\Delta\mu\right)+\nabla\cdot
hM\nabla\Delta\mu+\zeta,$ (11)
with the exchange chemical potential
$\Delta\mu=\partial_{\phi}f(\phi)-\frac{\mathrm{Cn}}{h}\nabla\cdot
h\nabla\phi.$ (12)
We solve our model equations numerically using for the physical parameters and
dimensionless numbers the values listed in Table 1. For the concentration
gradient stiffness $\kappa$ we use that for organic semiconductors $\Delta
f\kappa$ is generally estimated to be on the order of $10^{-10}-10^{-12}$ J/m
Saylor _et al._ (2007); Wodo and Ganapathysubramanian (2014); Clarke (2005).
Using an order-of-magnitude estimate for the microscopic volume
$b^{3}=10^{-28}$ m$^{3}$ and the thermal energy $k_{\mathrm{B}}T=10^{-21}$ J we find $\kappa\approx\mathcal{O}(10^{-1}-10^{1})$ nm$^{2}$. Our choice of $\kappa=25$ nm$^{2}$ ensures a sufficient number of grid points in the phase boundaries, while still being within the range of reasonable values. We discretize
the gradient operators using second-order central finite differences and the
contribution of the thermal noise using the method of Schaefer et al. Schaefer
_et al._ (2016). Time is integrated making use of a semi-implicit Euler time
integrator, integrating the thermal noise $\zeta$ explicitly and all other
terms implicitly. Our method conserves the solute mass up to negligible
numerical errors of the order of $<10^{-7}\%$ between the initial and final
solute mass. Adaptive time steps are employed following the approach outlined
by Wodo and Ganapathysubramanian Wodo and Ganapathysubramanian (2011). We
invoke periodic boundary conditions and initialize our calculations with a
homogeneous and flat thin film, setting the initial volume fraction equal to
the low concentration branch of the spinodal. This is reasonable as the
metastable region is typically traversed in experimental situations due to
fast evaporation. The model is implemented in parallel using the PETSc library
Abhyankar _et al._ (2018); Balay _et al._ (2024).
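To convey the structure of such a time stepper in compact form, the sketch below integrates a drastically simplified, one-dimensional analogue of Eqs. (9)-(12): it retains only diffusive and evaporative mass transport at a laterally uniform film height, uses a constant mobility instead of Eq. (6), omits thermal noise, and replaces the finite-difference discretization by a pseudo-spectral semi-implicit step for brevity. The grid, time step, interfacial stiffness and Biot number are illustrative choices, not those used for the results reported in this Chapter.

```python
# A drastically simplified 1D sketch of the semi-implicit update of Eqs. (9)-(12):
# diffusion and evaporation only (no hydrodynamic flux, no thermal noise), a
# laterally uniform film height, a constant mobility instead of Eq. (6), and a
# pseudo-spectral rather than finite-difference discretization. Parameter values
# are illustrative, not those used for the results in this Chapter.
import numpy as np

N, L = 256, 100.0                       # grid points, (dimensionless) domain size
Cn, Bi, chi = 1.0, 1e-2, 4.0            # illustrative values
dt, steps = 1e-3, 200_000

k = 2.0 * np.pi * np.fft.rfftfreq(N, d=L / N)   # wave numbers of the periodic box
k2, k4 = k**2, k**4

rng = np.random.default_rng(1)
h = 1.0                                               # film height in units of h0
phi = 0.1464467 + 1e-3 * rng.standard_normal(N)       # start at the low spinodal branch
psi = h * phi                                         # "solute height" psi = h * phi

def dfdphi(phi):
    """Local part of the exchange chemical potential, df/dphi for Eq. (3)."""
    return np.log(phi) - np.log(1.0 - phi) + chi * (1.0 - 2.0 * phi)

for step in range(steps):
    # explicit: local chemical potential; implicit: the stiff Cn * k^4 gradient term
    rhs = np.fft.rfft(psi) - dt * h * k2 * np.fft.rfft(dfdphi(phi))
    psi = np.fft.irfft(rhs / (1.0 + dt * Cn * k4), n=N)   # conserves total solute
    h -= dt * Bi * (1.0 - phi.mean())                     # film thins by solvent loss
    phi = np.clip(psi / h, 1e-6, 1.0 - 1e-6)
    if h <= 0.16:                                         # stop when nearly dry
        break

print(f"t = {(step + 1) * dt:.0f}: <phi> = {phi.mean():.3f}, "
      f"std(phi) = {phi.std():.3f}, h = {h:.3f}")
```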
In the next sections, we first present and discuss the phenomenology of our
numerical calculations and discuss how the effective evaporation rate depends
on the Peclet number. Subsequently, we discuss in detail the early stages of
demixing in our model, demonstrating that hydrodynamic and diffusive transport
modes decouple. Consequently, during the early stages of demixing, diffusive
and evaporative transport dominate. Finally, we examine the late stage
coarsening of the mixture across a range of values of the Peclet, Biot and
Capillary numbers. As already advertised, we identify a novel coarsening
mechanism driven by the interaction of the fluid-fluid interfaces with the
fluid-gas interface.
Table 1: A list of parameter values used in this Chapter.
Parameter | Units | Value
---|---|---
$h_{0}$ | [nm] | $30$
$\phi_{0}$ | [ - ] | $0.1464467\dots$
$\gamma$ | [mN/m] | $25$
$D$ | [m$^{2}$/s] | $10^{-10}$
$k$ | [m/s] | $\mathcal{O}(10^{-6}-10^{-3})$
$A_{\mathrm{H}}$ | [J] | $10^{-19}$
$\kappa$ | [nm$^{2}$] | $25$
$\chi$ | [ - ] | $4$
$\omega$ | [ - ] | $10^{-2}$
$\mathrm{Ca}$ | [ - ] | $\mathcal{O}(10^{-3}-10^{-1})$
$\mathrm{Pe}$ | [ - ] | $\mathcal{O}(10^{-2}-10^{2})$
$\mathrm{G}$ | [ - ] | $7\times 10^{-2}$
$\mathrm{Cn}$ | [ - ] | $2.66\times 10^{-2}$
$\mathrm{Bi}$ | [ - ] | $\mathcal{O}(10^{-3}-10^{-1})$
## III Model calculations
In this section we present and discuss our numerical results for demixing
taking place in a binary fluid film containing a solute and a volatile
solvent. As already alluded to, our results apply to (i) stationary films and
(ii) films deposited on a rapidly moving substrate, i.e., deep in the Landau-
Levich regime during meniscus-guided deposition, wherein the mixture is, for
all intends and purposes, stationary with respect to the moving substrate. We
solve Eqs. (9)–(12) for a host of parameter values listed in Table 1. Because
the fluid-gas surface can freely respond to the formation of domains, and
because of the difference in evaporation rates in the solute and solvent-rich
phases, the effective evaporation rate depends not only on the Biot number but
also on the other parameters of our model. Hence, we also investigate how the
other parameters affect the evaporation and demixing kinetics.
Fig. 2 shows representative snapshots of the local volume fraction and height
of the film relative to the mean height in a square domain using periodic
boundary conditions, for different times and with a Peclet number
$\mathrm{Pe}=2\times 10^{-1}$ in Fig. 2A and Fig. 2B and with
$\mathrm{Pe}=2\times 10^{2}$ in Fig. 2C and Fig. 2D. We set
$\mathrm{Bi}=3\times 10^{-3}$, $\mathrm{Ca}=2\times 10^{-2}$, and the other
parameters as listed in Table 1. The indicated times $t$ are scaled to the
spinodal lag time $\tau_{\mathrm{L}}$, also known as the spinodal
amplification time Binder (1983). This is the waiting time between crossing
the spinodal at $t=0$ and the moment in time that the mixture phase separates
appreciably Schaefer _et al._ (2015, 2016). Note that we use the same seed
for our random number generator for both values of the Peclet number shown for
the sake of comparison. This results in (nearly) identical integrated thermal
noise for both values of the Peclet number, although the one-to-one correspondence
is eventually destroyed due to the adaptivity of our numerical time stepper.
Figure 2: Snapshots of the volume fraction (A and C) and the height of the
film (B and D) for two different values of the Peclet number at different
moments in time $t/\tau_{\mathrm{L}}$. The time is scaled to the spinodal lag
time $\tau_{\mathrm{L}}$, see the main text. Panels A and B are for
$\mathrm{Pe}=2\times 10^{-1}$, Panels C and D for $\mathrm{Pe}=2\times
10^{2}$. The left color bar indicates the volume fractions for panels A and C,
the right color bar is $h-\langle h\rangle$ for panels B and D. $\Delta
h=100\times\text{max}(|h-\langle h\rangle|/\langle h\rangle)$ is a measure for
the relative roughness of the film surface. Other model parameters are
$\mathrm{Cn}=0.0267$, $\mathrm{Ca}=2\times 10^{-2}$, $\mathrm{Bi}=3\times
10^{-3}$, $\chi=4$, and $G=2.3\times 10^{-4}$.
After crossing the spinodal at $t=0$, concentration fluctuations remain small
for times up to the spinodal lag time $\tau_{\mathrm{L}}$, as can be seen in
Fig. 2A and 2C Schaefer _et al._ (2015). For $t>\tau_{\mathrm{L}}$, the
solution phase separates into solute-rich droplets dispersed in a solvent-rich
majority phase, which is typical for off-critical phase separation. In our
calculations phase separation tends to occur under off-critical conditions
because the mixture is gradually destabilized by solvent evaporation starting
from the (very off-critical) low concentration branch of the spinodal. The
domains ripen due to solvent evaporation, as well as due to diffusive and
hydrodynamic coarsening. The late-stage morphological evolution and coarsening
rate appear to be quite sensitive to the Peclet number. We obtain a different
morphology and a larger characteristic feature size for the high Peclet
number, shown in Fig. 2C and D, than for small Peclet numbers, shown in Fig.
2A and B. This is actually somewhat surprising considering that hydrodynamic
coarsening is generally believed to be of minor importance for off-critical
mixtures Tanaka (1996); Shimizu and Tanaka (2015). We return to this in our
discussion of Sections V and VI. Under the action of solvent evaporation the
morphology eventually inverts from solute-rich droplets dispersed in a solvent
majority phase to a dispersion of solvent-rich droplets and the solution
subsequently redissolves. Redissolution commences upon crossing the high-
concentration branch of the binodal and is, of course, a property of the
binary solution, whereas for two or more non-volatile components the film
typically remains demixed even after the solvent is removed Negi _et al._
(2018); van Franeker _et al._ (2015b). Note that the solution actually
already redissolves slightly before crossing the binodal because, for the very small solvent-rich droplets, the free-energetic cost of the liquid-liquid interface is not outweighed by the gain in free energy due to phase separation.
Accompanying the morphological evolution of the bulk solution is that of the
free surface of the film, as shown in Fig. 2B for Peclet number
$\mathrm{Pe}=2\times 10^{-1}$ and Fig. 2D for $\mathrm{Pe}=2\times 10^{2}$.
The times associated with the sequence of panels in A and B, and of C and D,
are the same. Indicated in the panels of B and D are the quantities $\Delta
h=100\times\text{max}(|h-\langle h\rangle|/\langle h\rangle)$ that measure the
roughness of the film relative to the mean height. The surface only deforms
after the solution demixes for $t>\tau_{\mathrm{L}}$, on the one hand due to
the forces exerted on it by the fluid-fluid interfaces that develop in the
film and on the other due to the spatial gradients in the rate of solvent
evaporation at the film surface. The only concentration-dependent (downward)
force exerted on the solution-gas interface is the interfacial tension of the
liquid-liquid interfaces in the film itself, resulting at the free surface in
three-phase liquid-liquid-gas contact lines. The contact angles at the three-
phase contact line are always small because the interfacial tension between
demixing liquid domains is much smaller than that of the solution-gas
interface. This is to be expected de Gennes _et al._ (2004). Different values
of the interfacial tension between the solute-rich and solute-poor region and
of the fluid-gas interfacial tension, which we achieve by varying the Cahn or
Capillary numbers, yield qualitatively comparable results albeit with
different solute-solvent-gas contact angles (not shown).
In the later stages of the drying process before the mixture redissolves, the
regions rich in solute decrease less rapidly in height than the regions rich
in solvent do, as Fig. 2 shows for $t/\tau_{\mathrm{L}}\gtrapprox 10$. This is caused by the difference in the evaporation rate between those regions. On the other hand,
the Laplace pressure that is a result of the curved liquid-gas interface tends
to counteract this, yet cannot immediately compensate for it. We read off from
Eqs. (9) and (10) that the contribution of evaporation to variations in the
film thickness relative to that of the material distribution driven by the
Laplace pressure of the solution-gas surface – the first term in Eq. (10) –
must scale as $\mathrm{Bi}\times(\mathrm{Ca}/\mathrm{Pe})$ in terms of the
Biot number $\mathrm{Bi}$, Peclet number $\mathrm{Pe}$ and Capillary number
$\mathrm{Ca}$. Hence, for constant evaporation rate the magnitude of
evaporation-induced surface roughness should increase with decreasing Peclet
number and increasing Capillary number. This is in agreement with our results
summarized in the snapshots of Figs. 2B and 2D. In fact, the bottom right
panel of Fig. 2B shows that the surface inhomogeneities may persist even
after the solute redissolves and the solution is again of uniform composition.
These surface inhomogeneities are in our model not actually frozen in the dry
film, because we assume the viscosity to be independent of the composition.
Consequently, the liquid-gas interface in the end relaxes to become flat over
a wave number $q$ dependent time scale $\tau_{\mathrm{h}}(q)$ that we estimate
to obey the relation
$\tau_{\mathrm{h}}(q)\sim\frac{1}{\mathrm{Pe}}\left[\frac{G}{h_{\mathrm{dry}}}q^{2}+\frac{h_{\mathrm{dry}}^{3}}{\mathrm{Ca}}q^{4}\right]^{-1}$
(13)
with $h_{\mathrm{dry}}$ the dry height of the film, $q$ the (dimensionless)
wave number and $G$ the disjoining number. This estimate can be obtained from
Eq. (9) by calculating the linear response of the height of the film to a
periodic perturbation of wave number $q$ around the dry (solvent-free) height
of the film $h_{\mathrm{dry}}$. Apparently, the lifetime of the free-surface
roughness increases with decreasing Peclet number, a prediction that is in
agreement with our findings of Fig. 2.
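Equation (13) is straightforward to evaluate numerically; the sketch below does so for a few wave numbers. The parameter values, in particular the dry film height, are placeholders rather than the values used in our calculations.

```python
import numpy as np

def surface_relaxation_time(q, Pe, Ca, G, h_dry):
    """Relaxation time of a surface perturbation of wave number q, Eq. (13):
    tau_h(q) ~ (1/Pe) * [ (G/h_dry) q^2 + (h_dry^3/Ca) q^4 ]^(-1)."""
    q = np.asarray(q, dtype=float)
    return 1.0 / (Pe * (G / h_dry * q**2 + h_dry**3 / Ca * q**4))

# Placeholder parameter values; h_dry in particular is hypothetical
Pe, Ca, G, h_dry = 2.0e-1, 2.0e-2, 7.0e-2, 0.2
print(surface_relaxation_time(np.array([1.0, 5.0, 20.0]), Pe, Ca, G, h_dry))
```

In line with the text, the estimate returns longer lifetimes for smaller Peclet numbers and for smaller wave numbers.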
Figure 3: The drying of a demixing solution compared to the drying of a non-
demixing, homogeneous film for four values of the Peclet number indicated in
the legend of panel B). In Panel A we show the deviation of the area-averaged
volume fraction $\langle\phi\rangle$ from the volume fraction
$\phi_{\mathrm{hom}}$ of a drying film that remains homogeneous, as a function
of the scaled time $t/\tau_{\mathrm{L}}$ with $t=\tau_{\mathrm{L}}$ the moment
in time that the film starts to demix. The dots on the curves indicate the
time at which the binary mixture redissolves. The horizontal dashed line is a
guide for the eye. B) The deviation of the area-averaged height $\langle
h\rangle$ of the film relative to that of a film that remains homogeneous
$h_{\mathrm{hom}}$, as a function of the scaled time. The values for the model
parameters are $\mathrm{Cn}=0.0267$, $\mathrm{Ca}=2\times 10^{-2}$,
$\mathrm{Bi}=3\times 10^{-3}$, $\chi=4$, $\phi(t=0)=\psi(t=0)=0.1464$ and
$G=7\times 10^{-2}$. The curves in A) and B) with the largest variation of the
volume fraction and height are those with the lowest Peclet number, $\mathrm{Pe}=2\times 10^{-1}$.
Upon comparing the final snapshots in Fig. 2A and Fig. 2C, it transpires that
the solution redissolves earlier for $\mathrm{Pe}=2\times 10^{-1}$ than for
$\mathrm{Pe}=2\times 10^{2}$. Hence, the effective drying or evaporation rate
depends not only on the Biot number but also on the other dimensionless
numbers entering our model. How precisely it does so should be somewhat sensitive to the
evaporation model used. For our particular evaporation model, expressed in Eq.
(8), the reason that the drying rate depends on the other dimensionless
numbers entering our model is that the overall drying rate of the phase-
separating film is the surface average of that of the solvent- and solute-rich
domains. The former contribute more per unit area on account of the lower
volume fraction of solute. The fraction of the substrate that is covered by
the solvent-rich and poor phases also depends on the difference in their
height. The film turns out to be thicker in the solute-rich regions than in
the solvent-rich regions, and this difference increases with decreasing Peclet
number and with increasing Capillary number. Consequently, the fraction of the
substrate covered by the solvent-rich phase and therefore the effective
evaporation rate must also increase with decreasing Peclet number and
increasing Capillary number.
While this explains why the drying time not only depends on the Biot number
but also on the other dimensionless groups of our model, this does not explain
why we find that the actual time-resolved drying kinetics exhibits intervals
where the evaporation either speeds up or slows down. To quantify this, we
compare the time-dependent area-averaged concentration
$\langle\phi\rangle(t)$, and the area-averaged height $\langle h\rangle(t)$ to
those in a film of homogeneous composition and same initial volume fraction of
the solute wherein solvent evaporates according to Eq. (8). In Fig. 3A we
compare the area-averaged volume fraction $\langle\phi\rangle(t)$ with the
concentration $\phi_{\mathrm{hom}}(t)$ in a film that remains homogeneous,
i.e., with Flory interaction parameter $\chi=0$, for four different values of
the Peclet number between $\mathrm{Pe}=2\times 10^{-1}$ and
$\mathrm{Pe}=2\times 10^{2}$ for a Biot number of $\mathrm{Bi}=3\times
10^{-3}$ and a capillary number of $\mathrm{Ca}=5\times 10^{-2}$. The (color-
matched) circles on the curves indicate the moment in time that the solution
redissolves, which indeed shifts to earlier times for decreasing Peclet
number; again, this is a consequence of the effect of the Peclet number on the
difference in the height of the regions rich in either solute or solvent.
Positive values indicate that solvent evaporation is slower in the demixed
film than in a corresponding homogeneous film and negative values that it is
faster. For $t<\tau_{\mathrm{L}}$ both films are homogeneous and therefore dry
at an identical rate. For $t>\tau_{\mathrm{L}}$, we find that the evaporation
rate first increases a little with time, then decreases significantly, subsequently increases again and finally decreases once more. So, there seem to be two
maxima separated by a minimum. The under- and overshoot for the smallest Peclet number, $\mathrm{Pe}=2\times 10^{-1}$, decay much more rapidly than those of the other curves, which we explain below. Note also that the primary maximum
increases with increasing Peclet number, which is a consequence of the
variations in the height being somewhat larger at high than at low Peclet
number during the early stages of demixing, as can be read off Fig. 2B and
Fig. 2D.
In Fig. 3B, we compare the mean height of the film to the height of a
corresponding homogeneous film. In agreement with our observations from Fig.
3A, we conclude that the height of the film is initially larger than that of the homogeneous film, but subsequently drops below it and remains smaller at later times, consistent with our argument that the effective evaporation
rate is faster for a demixed film with a deformed surface than for a
homogeneous film with a flat solution-gas surface. The curve for
$\mathrm{Pe}=2\times 10^{-1}$ decreases much more rapidly than for the other
values of the Peclet number. The reason is that for small values of the ratio
$\mathrm{Pe}/\mathrm{Ca}$ the effect of solvent evaporation is stronger than
that of hydrodynamic material redistribution. This results in larger
differences in the height of the solute-rich and solute-poor domains and
consequently also in larger differences in the mean volume fraction in
agreement with our findings presented in Fig. 3A. For even lower values of the
ratio $\mathrm{Pe}/\mathrm{Ca}$, we in fact find an evaporation-induced
dewetting transition. We deem this outside the scope of the present Chapter
and therefore do not study this in detail.
Having discussed the phenomenology of the demixing and the drying of the film,
we next investigate, separately, the early and late stages of demixing. First,
in the following section, we study the early stages of demixing up to the
moment in time that the solution phase separates, at $t=\tau_{\mathrm{L}}$.
Here, we interestingly find that solvent evaporation has a strong impact on
the early stage temporal evolution of the volume fractions. Following this, we
discuss the late-stage coarsening and put forward an explanation for the
differences in structural evolution observed in Fig. 2 for low and high Peclet
numbers, and identify a, as far as we are aware, novel coarsening mechanism.
This mechanism, which we earlier in this work refer to as confluent coarsening, is the result of a coupling between the bulk and surface hydrodynamic transport modes.
## IV Early stage behavior
Let us now focus on the growth of density fluctuations in the pre-demixing
stage, and investigate the linear response of the height and volume fraction
fields to a thermal excitation. For non-volatile mixtures this pre-demixing
stage has already been investigated by Clarke Clarke (2005) and Náraigh et al.
Náraigh and Thiffeault (2010), who show that the height and volume fractions
evolve independently if the disjoining pressure and liquid-vapor surface
tension are independent of solute concentration. Moreover, the temporal
evolution of the volume fraction was found to be diffusive and unaffected by
hydrodynamic transport, which is consistent with predictions for bulk models
where hydrodynamic transport becomes important only after the liquid-liquid
phase boundaries become sufficiently sharp Chen and Chakrabarti (1998); Tanaka
(1996). As we show next, volatile mixtures differ considerably from non-
volatile mixtures, because the height and volume fraction fields couple via
solvent evaporation. Nevertheless, we argue that because we neglect thermal
fluctuations in the height field, this coupling turns out to be weak and can
be disregarded in our numerical calculations.
In order to characterise the pre-demixing stage, we seek to extract the delay
in time $\tau_{\mathrm{L}}$ between crossing the spinodal at $t=0$ and the
moment in time at $t=\tau_{\mathrm{L}}$ that the solution actually phase
separates, as well as the characteristic feature size of the phase separated
solution measured in terms of the associated emergent wave number $q_{*}$. We
apply our analysis to the volume fraction field $\phi=\psi/h$ instead of the
partial height $\psi$, since the former is the order parameter that best
describes the demixing kinetics. To do this, we first recast the equation for
the solute height (11) into an evolution equation for the solute volume
fraction $\phi$ and subsequently linearize both the height and the volume
fraction field around a homogeneous but drying thin film with time-dependent
composition $\phi_{\mathrm{hom}}(t)$ and height $h_{\mathrm{hom}}(t)$. The set
of linearized equations read in Fourier space
$\frac{\partial}{\partial t}\begin{pmatrix}\delta\phi_{q}\\ \delta h_{q}\end{pmatrix}=\mathbf{Q}\cdot\begin{pmatrix}\delta\phi_{q}\\ \delta h_{q}\end{pmatrix}+\mathbf{\zeta}_{q}$ (14)
with $t$ again the dimensionless time, $\delta\phi_{q}$ and $\delta h_{q}$ the
Fourier transforms of the volume fraction and height fluctuations around
$\phi_{\mathrm{hom}}(t)$ and $h_{\mathrm{hom}}(t)$ with $q$ the wave number of
the fluctuation and $\mathbf{\zeta}_{q}$ the thermal fluctuations. For
simplicity, we assume in our analysis an initial thermal excitation only, and
neglect thermal fluctuations for $t>0$. The matrix of coefficients reads
$\mathbf{Q}=\begin{pmatrix}Q_{\mathrm{\phi\phi}}&Q_{\mathrm{\phi h}}\\ Q_{\mathrm{h\phi}}&Q_{\mathrm{hh}}\end{pmatrix}=\begin{pmatrix}R(q,t)+\mathrm{Bi}\left(1-2\phi_{\mathrm{hom}}\right)h_{\mathrm{hom}}^{-1}&-\mathrm{Bi}\,\phi_{\mathrm{hom}}\left(1-\phi_{\mathrm{hom}}\right)h_{\mathrm{hom}}^{-2}\\ \mathrm{Bi}&\frac{1}{3}\mathrm{Pe}\,h_{\mathrm{hom}}^{3}\,q^{2}\left(3\mathrm{G}\,h_{\mathrm{hom}}^{-4}+\mathrm{Ca}^{-1}q^{2}\right)\end{pmatrix},$ (15)
with $R(q,t)=M(t)q^{2}\left[\phi_{\mathrm{hom}}^{-1}+(1-\phi_{\mathrm{hom}})^{-1}-2\chi+\mathrm{Cn}\,q^{2}\right]$, where $q$ is again the (dimensionless) wave number and $G$ the disjoining number. Note
that the kinetic matrix $Q$ depends on time because the reference state dries
too, and is described by a time-dependent volume fraction and film height
$\\{\phi_{\mathrm{hom}}(t),h_{\mathrm{hom}}(t)\\}$. Hence, we cannot proceed
by the usual linear stability analysis.
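To make the structure of Eqs. (14) and (15) explicit, the following sketch assembles $\mathbf{Q}(q,t)$ and reports its instantaneous eigenvalues, i.e., the rates one would obtain by naively freezing the reference state at a given time. The homogeneous reference state, the mobility and all parameter values are placeholders, and the signs and prefactors are transcribed verbatim from Eq. (15).

```python
import numpy as np

# Hypothetical homogeneous reference state and mobility; in the full model these
# follow from the drying of a homogeneous film, Eq. (8), and the mobility of Eq. (6).
def phi_hom(t):
    return 0.1464 + 1.0e-3 * t

def h_hom(t):
    return 1.0 - 3.0e-3 * t

def mobility(t):
    return phi_hom(t) * (1.0 - phi_hom(t))   # assumed double-degenerate form

def Q_matrix(q, t, Pe, Ca, Bi, Cn, G, chi):
    """Kinetic matrix of Eq. (15); signs and prefactors transcribed from the text."""
    p, h = phi_hom(t), h_hom(t)
    R = mobility(t) * q**2 * (1.0 / p + 1.0 / (1.0 - p) - 2.0 * chi + Cn * q**2)
    return np.array([
        [R + Bi * (1.0 - 2.0 * p) / h, -Bi * p * (1.0 - p) / h**2],
        [Bi, (Pe / 3.0) * h**3 * q**2 * (3.0 * G / h**4 + q**2 / Ca)],
    ])

params = dict(Pe=2.0e-1, Ca=5.0e-2, Bi=3.0e-3, Cn=0.0267, G=7.0e-2, chi=4.0)
Q = Q_matrix(q=10.0, t=0.0, **params)
print(np.linalg.eigvals(Q))   # instantaneous rates at this (q, t)
```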
To make headway, let us first note that the first term in the diagonal
$Q_{\mathrm{\phi\phi}}$ component in Eq. (15) accounts for diffusive mass
transport via $R(q,t)$ and the second one in $Q_{\mathrm{\phi\phi}}$ accounts
for the effect of the concentration dependence of the solvent evaporation
rate. Both off-diagonal terms $Q_{\mathrm{\phi h}}$ and $Q_{\mathrm{h\phi}}$
that couple the local volume fraction and the height of the film originate
from solvent evaporation only. Hence, in agreement with earlier work that
include hydrodynamics, we conclude that hydrodynamic transport modes do not
contribute to the initial amplification of the primary unstable spinodal
density wave Chen and Chakrabarti (1998); Náraigh and Thiffeault (2010);
Clarke (2005); Siggia (1979); Tanaka (1996); Shimizu and Tanaka (2015). The
second diagonal contribution $Q_{\mathrm{hh}}$, which describes the evolution of
fluctuations in the height of the film, accounts for hydrodynamic
redistribution of the bulk material. It is interesting to note that the
kinetic matrix diagonalizes only for non-volatile mixtures for which
$\mathrm{Bi}=0$, which have been analyzed by Clarke Clarke (2005) and Náraigh
et al. Náraigh and Thiffeault (2010). This, perhaps surprisingly, also
suggests that (thermal) fluctuations in the height of the film must have a
different effect on the initial phase separation kinetics of volatile and of non-volatile mixtures.
We next seek a solution to Eq. (14) to obtain the spinodal lag time
$\tau_{\mathrm{L}}$ and the emergent wave number $q_{*}$. This is actually not
quite straightforward because the kinetic matrix $Q$ is time dependent, as
already announced. The standard approach to diagonalize the kinetic matrix $Q$
does not yield the exact solution to Eq. (14), but only provides a zeroth
order contribution to the solution in a so-called Magnus expansion Magnus
(1954). Higher order corrections can then be calculated in terms of the
commutator of the kinetic matrix with itself, evaluated at different moments
in time. For our model, this commutator is non-zero and therefore higher order
terms do not vanish. Instead, we opt to first simplify the problem at hand to
reflect our numerical calculations and subsequently solve the remaining
equations. First, we reiterate that we neglect in our calculations thermal
fluctuations in the height of the film and that the height is initially
constant. Fluctuations in the height of the film are therefore excited
indirectly via the thermal fluctuations in the solute height (or volume
fraction). In our numerical calculations, the magnitude of the fluctuations in
the height of the film remains many orders of magnitude smaller than the
fluctuations in the volume fractions. Hence, we argue that we may neglect the
off-diagonal contribution $Q_{\phi h}$. In practice, this means that the
volume fraction field evolves independently of the height field, whereas the
height field remains affected by and is subservient to the local volume
fraction.
Using these simplifications, we only need to solve the equation for the solute
volume fraction to extract the spinodal lag time $\tau_{\mathrm{L}}$ and the
emergent wave number $q_{*}$. This equation was already analyzed by Schaefer
et al. Schaefer _et al._ (2015), albeit for a different evaporation model
wherein the volume fraction increases linearly with time, in which case the
second term in $Q_{\phi\phi}$ in Eq. (15) drops out of the equation. Setting
this term to zero is actually also justified in our case because
$\mathrm{Bi}\ll 1$ is a necessary condition for our height-averaged model to
be valid. Following Schaefer et al. Schaefer _et al._ (2015), we introduce a
spinodal diffusion time $\tau_{\mathrm{d}}=\mathrm{Cn}/M(\phi_{\mathrm{s}})$
and an evaporative destabilization time
$\tau_{\mathrm{e}}=h_{0}/\left[|f_{\mathrm{\phi\phi\phi}}|\,\mathrm{Bi}\,\phi_{\mathrm{s}}(1-\phi_{\mathrm{s}})\right]$,
with $\mathrm{Bi}\phi_{\mathrm{s}}(1-\phi_{\mathrm{s}})/h_{0}$ the
(dimensionless) rate of change of the volume fraction due to evaporation, as
the two characteristic time scales that define the spinodal lag time Schaefer
_et al._ (2015)
$\tau_{\mathrm{L}}\approx
2^{5/3}r^{1/3}\left(\frac{\tau_{\mathrm{d}}}{\tau_{\mathrm{e}}^{2}}\right)^{1/3}\propto\mathrm{Bi}^{-2/3},$
(16)
and the emergent wave number $q_{*}$
$q_{*}\approx\left(\frac{1}{4\mathrm{Cn}}\frac{\tau_{\mathrm{L}}}{\tau_{\mathrm{e}}}\right)^{1/2}\approx\mathrm{Cn}^{-1/2}\left(\frac{r}{2}\frac{\tau_{\mathrm{d}}}{\tau_{\mathrm{e}}}\right)^{1/6}\propto\mathrm{Bi}^{1/6}.$
(17)
Here, $f_{\phi\phi\phi}=(1-\phi_{\mathrm{s}})^{-2}-\phi_{\mathrm{s}}^{-2}$ is
the third derivative of the local free energy density Eq. (3) evaluated at the
low volume fraction spinodal,
$r=\ln\left[\delta\phi_{q_{*}}(\tau_{\mathrm{L}})/\delta\phi_{q_{*}}(0)\right]$ is a
measure for the amplification of the fluctuation amplitude that we associate
with the spinodal lag time $\tau_{\mathrm{L}}$, which can in practice be
treated as a fitting parameter, $M(\phi)$ the mobility defined in Eq. (6), and
$\phi_{\mathrm{s}}$ the volume fraction at the low concentration spinodal
Schaefer _et al._ (2015). The factor
$\phi_{\mathrm{s}}(1-\phi_{\mathrm{s}})/h_{\mathrm{0}}$ with $h_{0}$ the
initial film height in the evaporation time scale $\tau_{\mathrm{e}}$ finds
its origin in our solvent evaporation model. Our numerical calculations are in
agreement with these predictions (not shown). Hence, we conclude that during
early times the demixing kinetics in our quasi two-dimensional model is
identical to that in a two-dimensional model without hydrodynamics, and
depends non-trivially on diffusion and the rate of solvent evaporation that
enter via two emergent time scales in Eqs. (16) and (17).
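The two time scales and the resulting estimates of Eqs. (16) and (17) can be evaluated directly, as the following sketch illustrates. The amplification factor $r$ and the assumed mobility $M(\phi)=\phi(1-\phi)$ are placeholders, and the spinodal composition follows from the Flory-Huggins form implied by $f_{\phi\phi\phi}$.

```python
import numpy as np

def spinodal_low(chi):
    """Low-concentration spinodal branch of the symmetric Flory-Huggins free
    energy implied by f_phiphiphi: 1/phi + 1/(1 - phi) = 2*chi."""
    return 0.5 * (1.0 - np.sqrt(1.0 - 2.0 / chi))

def lag_time_and_wavenumber(chi, Bi, Cn, h0=1.0, r=5.0):
    """Spinodal lag time tau_L, Eq. (16), and emergent wave number q_*, Eq. (17).
    The amplification factor r is a fit parameter (value here is arbitrary) and
    the mobility at the spinodal is assumed to be M = phi_s*(1 - phi_s)."""
    phi_s = spinodal_low(chi)
    M_s = phi_s * (1.0 - phi_s)
    f3 = abs((1.0 - phi_s)**-2 - phi_s**-2)            # |f_phiphiphi|
    tau_d = Cn / M_s                                    # spinodal diffusion time
    tau_e = h0 / (f3 * Bi * phi_s * (1.0 - phi_s))      # evaporative destabilization time
    tau_L = 2.0**(5.0 / 3.0) * r**(1.0 / 3.0) * (tau_d / tau_e**2)**(1.0 / 3.0)
    q_star = np.sqrt(tau_L / (4.0 * Cn * tau_e))
    return tau_L, q_star

print(lag_time_and_wavenumber(chi=4.0, Bi=3.0e-3, Cn=0.0267))
```

For $\chi=4$ the low-concentration spinodal evaluates to $\phi_{\mathrm{s}}\approx 0.1464$, which is the initial composition used in our calculations.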
Next, we discuss the late-stage coarsening and how this is affected by the
hydrodynamic transport, solvent evaporation and the coupling of the bulk with
the fluid–gas interface.
## V Late stage coarsening
As can be concluded from Fig. 2 and as discussed in more detail in the
preceding Sections III and IV, the hydrodynamics of flow appears to mainly
influence the morphology and associated characteristic feature size during the
coarsening stage. While this is to be expected for critical or near-critical
mixtures that show a bicontinuous demixed morphology Siggia (1979); Bray
(2002), hydrodynamic coarsening for the typically off-critical dispersions of
droplets that form in the context of our calculations is often believed to be
of minor importance Tanaka (1996). This, of course, is not to say that
hydrodynamic interactions between droplets do not play a role, e.g., in the
compositional Marangoni effect associated with gradients in the solute-solvent
interfacial tension Shimizu and Tanaka (2015) or via the pumping action that
coalescing droplets exert on the surrounding fluid Tanaka (1996), but these
effects are relatively subtle. In this section, we show that in our model,
hydrodynamics in combination with evaporation does have a strong impact on the
coarsening under off-critical conditions in the sense that it speeds up the
process in comparison to diffusive coarsening, starting at a time that
decreases with increasing Peclet number. This resembles the effect of
hydrodynamics on coarsening in bulk mixtures of critical composition, although
the underlying mechanism turns out to be different Bray (2002). By
investigating how the coarsening dynamics depends on the Biot, Peclet and
Capillary numbers, we are able to explain the origins of this kind of rapid
coarsening.
We characterize the coarsening kinetics by focusing attention on a
characteristic compositional length scale $\langle L\rangle(t)$ that in the
literature is generally assumed to obey the scaling relation $\langle
L\rangle\propto t^{\alpha}$ with $\alpha$ an exponent. The value that this
exponent takes depends on the predominant coarsening mechanism Tanaka (1996);
Mullins (1986). Following standard procedure, we calculate the characteristic
length from a mean characteristic wave number $\langle q\rangle(t)$, where
$\langle L\rangle(t)=2\pi/\langle q\rangle(t)$, with $\langle
q\rangle(t)\equiv\int\mathrm{d}{q}qS(q)/\int\mathrm{d}{q}S(q)$ and
$S(q)=\langle|\delta\phi_{q}(t)|^{2}\rangle$ the ensemble-averaged structure
factor and $\delta\phi_{q}(t)$ the Fourier transform of the fluctuation in the
volume fraction defined in the previous section Bray (2002). Fig. 4 shows for
fixed values of the Biot number $\mathrm{Bi}=3\times 10^{-3}$ and the
Capillary number $\mathrm{Ca}=5\times 10^{-2}$ for initial concentration
$\phi_{0}=\psi_{0}=0.1464$ the characteristic length $\langle L\rangle$ as a
function of the scaled time for Peclet numbers ranging in value from $2\times
10^{-1}$ to $2\times 10^{2}$. Time is scaled to the spinodal lag time and the
characteristic length scale to the initial height $h_{0}$ of the film. The
“spike” in mean length just before the re-dissolution originates from the
brief moment in time that only a single domain is present in our calculations
and therefore is a finite-size effect.
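A minimal sketch of how such a measurement could be implemented for a single snapshot of the volume fraction field is given below; it stands in for the ensemble average used in the text, and the fast Fourier transform, the radial binning, the box size and the synthetic test field are all implementation choices of the sketch rather than part of the model.

```python
import numpy as np

def mean_length(phi, box_size=62.8):
    """Characteristic length <L> = 2*pi/<q>, with
    <q> = Int dq q S(q) / Int dq S(q) and S(q) the radially averaged
    structure factor of a single snapshot."""
    n = phi.shape[0]
    dphi = phi - phi.mean()
    Sk = np.abs(np.fft.fftn(dphi))**2
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    q = np.sqrt(k[:, None]**2 + k[None, :]**2)
    dq = 2.0 * np.pi / box_size                      # radial bin width
    nbins = int(q.max() / dq) + 1
    bins = (q / dq).astype(int)
    S_sum = np.bincount(bins.ravel(), weights=Sk.ravel(), minlength=nbins)
    counts = np.bincount(bins.ravel(), minlength=nbins)
    S_q = np.where(counts > 0, S_sum / np.maximum(counts, 1), 0.0)
    q_axis = dq * np.arange(nbins)
    q_mean = np.sum(q_axis[1:] * S_q[1:]) / np.sum(S_q[1:])   # exclude q = 0
    return 2.0 * np.pi / q_mean

# Synthetic snapshot (illustrative only)
rng = np.random.default_rng(1)
phi = 0.15 + 0.01 * rng.standard_normal((128, 128))
print(mean_length(phi))
```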
What is immediately clear from the figure, is that late-stage coarsening
strongly depends on the Peclet number, in particular for $\mathrm{Pe}\gg 1$.
As a guide to the eye, we have inserted a dotted line to indicate the scaling
exponent $\alpha=1$, a dash-dotted line for $\alpha=1/4$ and a dashed line
$\alpha=1/3$. The solution demixes very swiftly at $t/\tau_{\mathrm{L}}=1$, after which the characteristic length increases relatively slowly with time: the fluid film coarsens. For a while, the coarsening rate is independent of the Peclet number, with a coarsening exponent close to, albeit slightly larger than, $\alpha=1/4$. This is the
expected coarsening exponent for a diffusive mobility that is of a “double-
degenerate” form, i.e., large only in the solute-solvent interfaces, but
(very) small in both solute- and solvent-rich phases. The customary value of
$\alpha=1/3$ that Lifshitz-Slyozov-Wagner theory predicts holds only for
constant or so-called one-sided mobilities Lifshitz and Slyozov (1961); Wagner
(1961); Dai and Du (2016).
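The coarsening exponents we quote are, in effect, local slopes of $\langle L\rangle(t)$ in a log-log representation. A minimal sketch of such a power-law fit, applied here to synthetic data with a known exponent rather than to our actual simulation output, reads:

```python
import numpy as np

def coarsening_exponent(t, L, t_min, t_max):
    """Local coarsening exponent alpha from a power-law fit <L> ~ t^alpha
    over the window [t_min, t_max] (least-squares fit in log-log space)."""
    t, L = np.asarray(t), np.asarray(L)
    sel = (t >= t_min) & (t <= t_max)
    alpha, _ = np.polyfit(np.log(t[sel]), np.log(L[sel]), 1)
    return alpha

# Synthetic data mimicking diffusive coarsening with exponent 1/4
t = np.logspace(0.1, 1.0, 50)
L = 0.3 * t**0.25
print(coarsening_exponent(t, L, 2.0, 10.0))   # ~0.25
```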
For small Peclet numbers, the coarsening rate remains approximately constant
during the coarsening stage until the morphology changes and reverses from a
droplet phase with high solute concentration to one with relatively low solute
concentration and subsequently redissolves. For Peclet numbers larger than
about $\mathrm{Pe}=20$, we find a transition of the coarsening exponent from
$1/4$ to $\alpha\approx 0.9$ – we speculate that for larger
Peclet numbers it actually approaches unity. The time of the transition shifts
to earlier times with increasing Peclet number. Note that during this second
coarsening regime the morphology remains that of a dispersion of droplets.
Interestingly, the coarsening rate approaches that of viscous coarsening in
three dimensions with $\alpha=1$ albeit that viscous coarsening is only
possible for bicontinuous and not for the droplet-like morphologies present in
our calculations Siggia (1979); Tanaka (1996). Even though the coarsening
exponent becomes similar to that of viscous coarsening, the underlying
mechanism turns out to be different. We return to this issue after discussing
how the Biot and Capillary numbers impact the demixed morphology.
Figure 4: The mean compositional length scale $\langle L\rangle$ scaled to the
initial film height $h_{0}$ as a function of the scaled time
$t/\tau_{\mathrm{L}}$ for different values of the Peclet number between
$\mathrm{Pe}=2\times 10^{-1}$ and $\mathrm{Pe}=2\times 10^{2}$ (bottom to top).
The other model parameter values are $\mathrm{Ca}=5\times 10^{-2}$,
$\mathrm{Bi}=3\times 10^{-3}$, $\chi=4$, $G=2.3\times 10^{-4}$ and initial
volume fraction $\phi_{0}=0.1464$. For $t/\tau_{\mathrm{L}}<1$, the solution
is still homogeneous and the mean length is of the order of the size of the
spatial discretization, which is the length scale implicit in the discretized
thermal noise. For $t/\tau_{\mathrm{L}}>1$ the solution phase separates and
subsequently coarsens. The dotted line indicates a scaling exponent of
$\alpha=1$, the dash-dotted lines $\alpha=1/4$ and the dashed line
$\alpha=1/3$.
In Fig. 5, we show for two values of the Peclet number $\mathrm{Pe}=2\times
10^{-1}$ and $\mathrm{Pe}=2\times 10^{2}$ how different rates of solvent
evaporation affect our results. The blue curves for $\mathrm{Bi}=3\times
10^{-3}$ are also shown in Fig. 4. To simplify comparison of the data for
different Biot numbers, we shift the curves for $\mathrm{Bi}=3\times 10^{-2}$
and $\mathrm{Bi}=3\times 10^{-1}$ vertically, such that the curves overlap at
$t/\tau_{\mathrm{L}}\approx 1.2$. Near $t/\tau_{\mathrm{L}}=1$ we find that
the mean length overshoots, an effect that appears to be more conspicuous at
higher Biot numbers. This overshoot hints at the presence of a secondary
length scale, which has already been observed and discussed by Schaefer and
collaborators Schaefer _et al._ (2015) in the context of volatile solutions
and we therefore do not discuss it here. Fig. 5 shows that the time available for
coarsening decreases with increasing Biot number. Recall that
$\tau_{\mathrm{L}}\propto\mathrm{Bi}^{-2/3}$ and that the drying time is
proportional to $\mathrm{Bi}^{-1}$. Hence, the time available for coarsening
differs by about a factor ten between the data, or in the units of scaled time
$t/\tau_{\mathrm{L}}$ by a factor of $\mathrm{Bi}^{-1/3}$, so
$10^{-1/3}\approx 0.46$. For $\mathrm{Pe}=2\times 10^{-1}$ shown in Fig. 5A,
we again find the same coarsening exponent of about $1/4$, irrespective of the
Biot number.
For $\mathrm{Pe}=2\times 10^{2}$, shown in Fig. 5B, increasing the Peclet
number has a different effect depending on the value for the Biot number. For
$\mathrm{Bi}=3\times 10^{-2}$, the coarsening exponent initially attains a
value of about $0.2$, so below $1/4$, but since the scaling regime represents
much less than a decade, we should perhaps not read too much into this. The
crossover to a power law of unity sets in subsequently, but again survives
only for a fraction of a decade, after which coalescence of solute-rich
droplets takes place, induced also by the decreasing distance between the
solute-rich domains in response to the decreasing height of the film. For the
fastest evaporation rate shown (green) the time available for coarsening is
short and evaporation-induced material redistribution is faster than both
diffusive or hydrodynamic transport. The coarsening exponent is approximately
unity but applies again over a small period of time before coalescence and re-
dissolution take over. All in all, it seems that for small Peclet numbers, the
Biot number has no significant impact other than to shorten the period in time
over which coarsening can take place, at least if $\mathrm{Bi}<1$. This is not
so for large Peclet number, in which case an increasing Biot number leads to a
crossover to hydrodynamic behavior that depends non-monotonically on the Biot
number. For $\mathrm{Bi}\gg 1$ and irrespective of the Peclet number the
drying time eventually becomes shorter than the spinodal lag time
$\tau_{\mathrm{L}}$, which prevents the solution from phase separating.
Figure 5: The (shifted) mean compositional length scale $\langle L\rangle$
scaled to the initial film height $h_{0}$ plotted as a function of the scaled
time $t/\tau_{\mathrm{L}}$ for different values of the Biot number. The curves
for $\mathrm{Bi}=3\times 10^{-2}$ and $3\times 10^{-1}$ have been shifted
vertically for the sake of comparability by a factor of, respectively, $1.6$
and $2.1$ such that they overlap at $t/\tau_{\mathrm{L}}\approx 1.2$. The
model parameter values are $\mathrm{Ca}=2\times 10^{-2}$, $\chi=4$,
$G=2.3\times 10^{-4}$ and initial volume fraction $\phi_{0}=0.1464$ for two
different values of the Peclet number $\mathrm{Pe}=2\times 10^{-1}$ (A) and
$\mathrm{Pe}=2\times 10^{2}$ (B). The dash-dotted lines are a guide for the
eye representing a coarsening exponent of $\alpha=1/4$. See also the caption
to Fig. 4.
Finally, in Fig. 6, we show the coarsening rate for a fixed value of the Biot
number $\mathrm{Bi}=3\times 10^{-3}$ for two values of the Capillary number
$\mathrm{Ca}=5\times 10^{-3}$ in Fig. 6A and $\mathrm{Ca}=5\times 10^{-1}$ in
Fig. 6B, for Peclet numbers ranging between $2\times 10^{-1}$ and $2\times
10^{2}$. The dashed-dotted, dashed and dotted lines are guides for the eye to
indicate coarsening exponents of $\alpha=1/4$, of $\alpha=1/2$ and of
$\alpha=1$. For very small Capillary number shown in Fig. 6A, we find that for
$t/\tau_{\mathrm{L}}<10$ the coarsening is independent of the Peclet number
with an exponent equal to approximately $\alpha=1/4$ for about a decade in
time, indicating that coarsening is diffusive. For a larger Capillary number,
shown in Fig. 6B, we obtain what resembles diffusive coarsening with
$\alpha\approx 1/4$ for $\mathrm{Pe}<2\times 10^{1}$. For $\mathrm{Pe}=2\times
10^{1}$, we find a transition in the coarsening rate similar to what we found
earlier for $\mathrm{Ca}=5\times 10^{-2}$ in Fig. 4. However, there seems to
be a second transition to a slower coarsening corresponding to an exponent of
$\alpha\approx 1/2$ albeit that we do not quite understand the physics
underlying this transition nor that of the slower rate of coarsening. The data
for $\mathrm{Ca}=0.5$ and $\mathrm{Pe}=2\times 10^{2}$ are not shown in Fig.
6B, as in this case hydrodynamic transport already becomes important during
the demixing stage, resulting in a much more rapid increase in the
characteristic feature size and finite-size effects are large, preventing us
from interpreting the results.
Figure 6: The mean compositional length scale $\langle L\rangle$ as a function
of the scaled time $t/\tau_{\mathrm{L}}$. The model parameter values are
$\mathrm{Bi}=3\times 10^{-3}$, $\chi=4$, $G=2.3\times 10^{-4}$ and initial
volume fraction $\phi_{0}=0.1464$ and (A) $\mathrm{Ca}=2\times 10^{-1}$ and
(B) $\mathrm{Ca}=2\times 10^{-3}$, for the Peclet number $\mathrm{Pe}=2\times
10^{-1}$ in blue, $\mathrm{Pe}=2\times 10^{0}$ in orange and
$\mathrm{Pe}=2\times 10^{1}$ in green and $\mathrm{Pe}=2\times 10^{2}$ in red.
For $t/\tau_{\mathrm{L}}>1$ the solution phase separates and subsequently
coarsens. The dash-dotted, dashed and dotted lines are guides for the eye for
the coarsening exponents $\alpha=1/4$, $\alpha=1/2$ and $\alpha=1$.
Based on the results shown in Figs. 4, 5 and 6, we conclude that
the transition from diffusive to any of the faster coarsening modes depends on
the predominance of hydrodynamic transport (described by the Peclet number)
and that of the fluid-gas surface tension relative to fluid-fluid interfacial
tension (described by the Capillary number). While we obtain similar
coarsening rates for the non-diffusive coarsening mode for low and high Biot
number, we actually identify two distinct coarsening mechanisms. For high Biot
numbers, the drying time is very short and so the available time for
coarsening is short also. At high Biot and high Peclet number, the inter-
droplet distance decreases rapidly, which facilitates the merging of the
solute-rich domains, a process aided by hydrodynamic interactions.
For small Biot and large Peclet numbers, we discover in Fig. 4 and Fig. 6B a
similar transition in the coarsening rate, with a coarsening exponent changing
from $\alpha=1/4$ to about $\alpha=0.9$. We associate this with a different
and, as far as we are aware, novel coarsening mechanism that we refer to as
confluent coarsening. Since this mechanism only emerges at sufficiently high
Peclet and Capillary numbers, we argue that it is related to the hydrodynamic
transport processes originating from the (curved) solution-gas and the solute-
solvent interfaces. In the next section, we focus attention on the flow fields
and associated hydrodynamic transport mechanisms to unveil the physical
origins of confluent coarsening. We find that at its root is the coupling of
hydrodynamic transport in the phase-separating solution to gradients in the
height of the liquid-gas interface. This coupling results in the directional
motion of solute-rich droplets that accumulate in regions of the film where
the film is relatively thin. These domains subsequently coalesce, resulting in
enhanced domain growth.
## VI Flow fields and transport mechanisms
The rapid coarsening that we find for the combination of sufficiently large
Peclet and Capillary numbers in Figs. 4 and 6B, indicates that hydrodynamic
transport can have a strong influence on the late-stage morphology. In stark
contrast with bulk mixtures, this is true even for off-critical mixtures as
Fig. 2 also illustrates. In this section, we analyze the hydrodynamic
transport processes by visualizing the flow fields that we calculate using Eq.
(7). From our analysis, we find that the solute-rich droplets tend to move
advectively and that the droplet motion aligns with gradients in the height of
the film. We explain why this motion is appreciable only if the Peclet and
Capillary numbers are sufficiently high, or, equivalently, if (i) hydrodynamic
transport is sufficiently rapid and (ii) the solution-vapor surface is
(relatively) easily deformed at the three-phase contact lines. The regions
where the film is relatively thin act in some sense as focal points for the
droplets to accumulate and coalesce. This increases the rate of growth of
solute-rich domains and therefore results in enhanced coarsening.
To highlight the directional motion of the solute-rich droplets, we show the
velocity field in the laboratory frame in Fig. 7 for $\mathrm{Pe}=2\times
10^{2}$, $\mathrm{Bi}=3\times 10^{-3}$ and $\mathrm{Ca}=5\times 10^{-2}$ for
$t/\tau_{\mathrm{L}}=3.15$. The corresponding compositional snapshot is shown
in the bottom left panel of Fig. 2C. We superimpose the velocity field on the
local solute concentration field, where high solute concentration is colored
red and low solute concentration is colored blue. As Fig. 7A and B show, where
in the latter we zoom in on a single solute-rich domain, the fluid velocity
field inside the solute-rich domains is approximately uniform in both
direction and magnitude. See also Fig. 9 in the supplemental material for a
comparison of the fluid velocity of the droplet shown in Fig. 7B in the
laboratory and centre-of-mass reference frame. The direction of motion of the
droplets suggests that the droplets move towards a common region in the domain
shown, the reason for which we explain below. In the solvent-rich phase,
indicated in blue, the velocity field circles around the solute-rich domains,
highlighted in the closeup image of Fig. 7B. It shows that the droplet pushes
away solvent on the right side of the droplet and that this solvent is
transported to the wake of the droplet, on the left side of it in Fig. 7B. At
the phase boundaries perpendicular to the direction of motion of the droplet,
vortices can be seen with a clockwise and counterclockwise direction,
reminiscent of a vortex dipole.
Figure 7: Rendering of the quasi two-dimensional flow field in the late stages
of demixing of our binary fluid. The black arrows indicate the direction and
magnitude of the fluid velocity field in the laboratory frame for
$\mathrm{Pe}=2\times 10^{2}$, $\mathrm{Ca}=5\times 10^{-2}$,
$\mathrm{Bi}=3\times 10^{-3}$ and $t/\tau_{\mathrm{L}}=3.15$, equivalent to
the bottom left snapshot of Figs. 2B and D. For clarity, we do not indicate
local fluid velocities smaller than 1% of the maximum fluid velocity. Panel A:
overview. Panel B: enlarged flow field around one of the droplets of A. In
both panels, we superimpose the fluid-velocity field with the volume fraction
field $\phi$ in red indicating the solute-rich phase and in blue the solute-
poor phase. The domains translate from left to right.
While the fluid velocity shown in Fig. 7 correlates with the presence of
solute-rich domains, the direction of motion does not; that is, there is no
discernible gradient in the concentration of solute that correlates with it.
Instead, it is correlated with the slope of the height of the liquid-gas
interface. This we show in Fig. 8, presenting in 8A the fluid velocity field
superimposed on the concentration field and in 8B the height of the
corresponding solution-gas interface. The volume fraction and height color
bars are shown below the figures. As we deduce from Fig. 8B, the direction of
motion of the solute-rich droplets clearly aligns with the gradients in the
height of the film. The domains move deterministically from regions where the
film is thick to regions of space where the film is thin. Hence, regions where
the film is thin appear to represent areas where droplets accumulate. This
facilitates the coalescence of droplets, which eventually leads to an increase
in the average domain size.
Figure 8: The quasi two-dimensional flow field in the laboratory frame (black
arrows) for $\mathrm{Pe}=2\times 10^{2}$, $\mathrm{Ca}=5\times 10^{-2}$,
$\mathrm{Bi}=3\times 10^{-3}$ at $t/\tau_{\mathrm{L}}=3.15$, equivalent to the
bottom left panel of Figs. 2B and D. For clarity, we do not show the local
fluid velocities smaller than 1% of the maximum fluid velocity. In panel A, we
superimpose the fluid-velocity field with the volume fraction field $\phi$,
red representing the solute-rich phase and blue the solute-poor phase. In
panel B, we superimpose the fluid-velocity field with the height field $h$,
red indicating relatively high regions and blue relatively low-lying regions.
The white box indicates the enlarged flow field shown in Fig. 7B.
We need to explain three things: (i) why the droplet motion couples to
gradients in the height of the film, (ii) why gradients in the height of the
film emerge in the first place and (iii) how our model parameters affect
droplet motion. The latter we explain while answering the first two questions.
To answer the first question, we draw the attention of the reader to the
expression for the velocity field in Eq. (7). Only the last term in Eq. (7)
that in dimensionless units reads $-\mathrm{Pe}\sqrt{\mathrm{Cn}}h(\nabla
h\cdot\nabla\phi)\nabla\phi/3$ couples the bulk hydrodynamics of the liquid-
liquid phase boundaries to gradients in the height of the film. With this in
mind, let us take the single droplet highlighted with the white box in Fig. 8,
which is the same droplet as shown in Fig. 7B, as an example to investigate
how precisely this contribution affects the fluid motion. In the region within
the white box the height of the film decreases with increasing $x$-coordinate,
and the motion of the droplet is also in that direction. We assume for our
argument that the droplet shown is perfectly circular. We only need to focus
on the fluid-fluid phase boundaries, because $\nabla\phi$ is negligibly small
outside of these phase boundaries. For the fluid-fluid interface on the left
hand side of the center of mass the term $(\nabla h\cdot\nabla\phi)$ is negative and $\nabla\phi$ is positive, whereas for the phase boundaries on the other side $(\nabla h\cdot\nabla\phi)$ is positive and $\nabla\phi$ is negative; in both cases the contribution to the velocity points in the positive $x$-direction.
Hence, we expect the droplet highlighted within the white box in Fig. 8 to
move from left to right, which is indeed what we observe. Since the magnitude
of the velocity is proportional to the Peclet number, this also explains why
this motion and therefore confluent coarsening is only noticeable for
sufficiently large Peclet numbers.
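This sign argument can be checked numerically. The sketch below evaluates the quoted coupling term on a grid for a hypothetical circular solute-rich droplet sitting on a film whose height decreases linearly with $x$; neither the droplet profile nor the parameter values are taken from our actual calculations.

```python
import numpy as np

def confluent_drift(phi, h, dx, Pe, Cn):
    """The height-gradient coupling term quoted in the text,
    u = -Pe*sqrt(Cn)*h*(grad h . grad phi)*grad phi / 3, evaluated on a uniform
    grid with spacing dx (central differences via numpy.gradient)."""
    dphi_y, dphi_x = np.gradient(phi, dx)
    dh_y, dh_x = np.gradient(h, dx)
    dot = dh_x * dphi_x + dh_y * dphi_y
    pref = -Pe * np.sqrt(Cn) * h * dot / 3.0
    return pref * dphi_x, pref * dphi_y        # (u_x, u_y)

# Hypothetical circular droplet on a film that thins towards +x
n, dx = 128, 0.1
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
phi = 0.1 + 0.7 / (1.0 + np.exp((np.sqrt(X**2 + Y**2) - 2.0) / 0.2))
h = 1.0 - 0.05 * X
ux, uy = confluent_drift(phi, h, dx, Pe=2.0e2, Cn=0.0267)
print(ux.sum(), uy.sum())   # net push in +x, essentially zero in y
```

Both interfaces of the droplet are pushed towards the thinner part of the film, consistent with the directional motion discussed above.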
What remains is an explanation for the origin of (long-ranged) gradients in
the height of the film. These gradients do not originate from the three-phase
contact line of a single droplet because this results in a relatively short-
ranged and isotropic deformation of the solution-gas surface. Instead, the
gradients in the height of the film form during the initial stages of
demixing. While the demixed morphology becomes that of solute-rich droplets
dispersed isotropically in a solvent-rich majority phase, our numerical
calculations indicate that the kinetics of the initial demixing process is not
spatially homogeneous: the solute-rich domains tend to emerge somewhat
clustered. Hence, for a very brief period of time the phase-separated
morphology is that of a collection of clustered solute-rich domains, while
some regions in the film have not yet fully phase separated. The downward
force exerted on the solution-gas interface by these clusters of domains then
causes a collective, larger scale deformation of the film surface. These
deformations persist even after the solution is phase-separated everywhere in
the film, resulting in the gradients in the height of the film required for
the directional motion of the droplets. This process turns out to be regulated
by the Capillary number $\mathrm{Ca}$.
To show that this must be so, we take as characteristic measure of the slope
$\nabla h=\Delta h/\Delta L$, where $\Delta h$ is the magnitude of the
deformation of the height of the film and $\Delta L$ the typical length scale
of the deformation. We are able to get an estimate of $\Delta L$ from the
Laplace pressure $\Delta P=\Delta F/A=\gamma/\Delta L$, with $\Delta F$ the
force exerted by a cluster of solute-rich domains on the solution-gas
interface, $A$ the area over which the force is exerted and $\gamma$ the
solution-gas surface tension. (See also Eq. 1.) Hence $\Delta
L\propto\gamma\propto\mathrm{Ca}^{-1}$. For $\Delta h$ we assume that the
solution-gas interface has Hookean elasticity, suggesting $\Delta h\propto
1/\gamma\propto\mathrm{Ca}$. We deduce that $\nabla h\propto\Delta h/\Delta
L\propto\mathrm{Ca}^{2}$. All of this suggests that we can estimate the
droplet velocity as $\mathbf{u}=-\mathrm{Pe}\sqrt{\mathrm{Cn}}\hskip
2.84544pth(\nabla h\cdot\nabla\phi)\nabla\phi/3\propto\mathrm{Pe}\hskip
2.84544pt\mathrm{Ca}^{2}/\mathrm{Cn}^{1/2}$, using the fact that
$\sqrt{\mathrm{Cn}}$ is a measure for the width of the liquid-liquid
interface, and therefore that $\nabla\phi\propto\mathrm{Cn}^{-1/2}$. Hence, we
find that the relevant dimensionless parameters that set the droplet motion
are the Peclet, Capillary and Cahn numbers. For confluent coarsening to be
dominant, the droplet velocity must be sufficiently high for the motion to be
perceivable within the time window of our numerical experiment. In other
words, if the Peclet and Capillary number are sufficiently large and the Cahn
number sufficiently small. We now expect a transition from diffusive to
confluent coarsening if the droplets have translated a sufficiently large
distance, hence the time of the transition must be inversely proportional to
the droplet velocity; it decreases with increasing Peclet and Capillary numbers and with decreasing Cahn number (the latter we have not verified). This
is in agreement with our findings presented in Figs. 4 and 6.
While this discussion explains the origin of confluent coarsening and how it
depends on the parameters of our model, we have not attempted to theoretically
predict the value for the coarsening exponent that is associated with it.
Further, in the light of the above discussion where the Biot number does not
play a role, we expect that confluent coarsening should occur irrespective of
the solvent evaporation rate, at least if the drying time is not shorter than
the typical time required for the droplets to move a sufficient distance.
Indeed, we find by means of calculations on non-volatile off-critical mixtures
(not shown) that the kind of directional transport required for confluent
coarsening is present also. Interestingly, we do not observe this coarsening
mechanism in our calculations on non-volatile mixtures of critical composition
(not shown), even though similar longer-ranged gradients in the height of the
film are present in the calculations. Hence, we conclude that for non-volatile
(near-)critical mixtures the morphology ripens via another hydrodynamic
coarsening mechanism, i.e., viscous coarsening, which suppresses confluent
coarsening. In other words, confluent coarsening appears to emerge only for
off-critical mixtures.
## VII Discussion and conclusion
In summary, we have theoretically studied the evaporation-driven phase
separation of an incompressible binary fluid in a thin film. It is a model for
the fabrication of solution-processed thin films in which typically the
deposition of the film on a solid support is so fast that phase separation
occurs far removed from the deposition apparatus and associated capillary
zone, i.e., the zone close to the deposition apparatus where mass transport is
dictated by the deposition technique itself, e.g., in meniscus-guided
deposition. Away from the capillary zone the film is for all intents and
purposes flat, so no longer influenced by the curvature of the film in the
capillary zone, and the fluid velocity is uniform and equal to that of the
substrate. This means that the problem reduces to that of a flat, stationary
film.
In our model, the solution is bounded by a non-deformable, flat and neutral
substrate and by a free interface with the gas phase. The solvent is volatile
and evaporates with a rate that is proportional to its volume fraction. We
focus on conditions where vertical stratification induced by the evaporation
or phase-separation cannot occur, and allow for both diffusive and advective
mass transport within the so-called lubrication approximation. The three main
dimensionless groups in our model are the Peclet number, the Capillary number
and the Biot number. The first describes the importance of hydrodynamic
transport of material relative to diffusive transport, the second measures the
relative strength of the liquid-gas and the liquid-liquid interfacial tensions
and the third expresses the strength of evaporation relative to diffusion. We
define two additional dimensionless groups, being the disjoining number and
the Cahn number, which describe the strength of the liquid-liquid capillary
forces relative to the van der Waals forces and the width of the liquid-liquid
interfaces. In our calculations we keep the magnitude of these two
dimensionless numbers fixed.
The demixing of the solution tends to occur under off-critical conditions,
which is a result of solvent evaporation gradually destabilizing the solution
starting from a solute concentration equal to the low concentration branch of
the spinodal. Hence, irrespective of the values of the dimensionless groups,
the morphology initially is that of solute-rich droplets dispersed in a
solvent-rich majority phase. The morphology eventually reverses to that of
solvent-rich droplets in a solute-rich majority phase and subsequently
redissolves due to ongoing evaporation. Associated with the compositional
morphology is structure formation in the height of the film. This structure
emerges due to the disparity in the rate of evaporation in the regions rich in
either solute or solvent and the downward force exerted on it at the three-
phase solute-solvent-gas contact lines. The resulting roughness of the free
surface affects the drying kinetics, which we find to be more rapid at a
higher degree of surface roughness, i.e., for small Peclet and large Capillary
numbers.
During the early stages of demixing the temporal evolution of the height and
volume fraction fields are, in principle, coupled on account of solvent
evaporation, irrespective of the values for the Peclet and Capillary numbers.
Mimicking the setup of our numerical calculations, wherein (thermal)
fluctuations in the height are only excited indirectly via the thermal
fluctuations in the volume fractions, we argue that in that case this coupling
is weak and can actually be neglected. The initial stages of demixing are
therefore dictated by diffusion and solvent evaporation only, and we reproduce
the work of Schaefer et al. that assumes a perfectly flat solution-gas
interface and ignores the presence of hydrodynamic flow fields altogether
Schaefer _et al._ (2015, 2016). The relevant dimensionless groups that set
the kinetics of the early stages of demixing are the Cahn and Biot numbers.
These results seemingly need to be modified for the case where thermal
fluctuations in the height of the film are excited directly. Interestingly,
this appears to be true only for volatile mixtures because the coupling of
bulk and surface fluctuation modes appears to be mediated via solvent
evaporation, whereas for non-volatile mixtures the bulk and surface modes
remain decoupled. How exactly these fluctuations affect the early stages of
demixing we leave for future work.
In the late stages of demixing, we discern a number of different coarsening
modes as summarized in Fig. 1. Each mode is associated with one or more of the
transport mechanisms at play in our model calculations and accompanied by a
different coarsening exponent. Apart from the well-known Ostwald-type ripening
with in our case a coarsening exponent of one-fourth and an earlier predicted
evaporative coarsening regime Schaefer _et al._ (2015); Negi _et al._
(2018), we find that hydrodynamics has a strong impact on the coarsening
behavior for high Peclet and Capillary numbers. For volatile mixtures that
phase separate under off-critical conditions, we identify two coarsening modes
that originate from the interplay between different hydrodynamic effects. The
first only emerges for fast evaporation and high Peclet numbers. Here, the
balance between evaporation and Laplace-pressure-driven material
redistribution rapidly decreases the inter-droplet distance, which in turn
promotes the coalescence of domains and hence the coarsening process. The
coarsening exponent appears somewhat larger than unity, but only persists for
a short period of time and we therefore cannot accurately determine it. The
second coarsening mode emerges for high Peclet and Capillary numbers, so if
the ratio of the liquid-liquid and liquid-gas interfacial tensions is
sufficiently large.
This second mode is, as far as we are aware, a novel coarsening mechanism,
which we find to be present only if the morphology is a dispersion of droplet-
like phase-separated domains, rather than bicontinuous. We refer to this
coarsening mechanism as confluent coarsening. It finds its origin in the
interplay of the three phase liquid-liquid-gas contact lines with the
gradients in the thickness of the liquid film. The gradient in the film height
emerges during the initial stages of demixing. By analyzing the fluid flow
fields, we show that this results in directional motion of these domains
towards regions of space where the film is relatively thin, which facilitates
the coalescence of droplets and results in fast coarsening. Indeed, we find a
coarsening rate with an exponent of approximately unity. Interestingly, this
coarsening exponent is similar to that observed for a three-dimensional
bicontinuous bulk morphology that evolves via viscous coarsening albeit that
this latter process is governed by a completely different mechanism. We do not
provide a theoretical explanation for the observed coarsening rate, which we
also leave for future work.
As far as we are aware, confluent coarsening has not yet been identified
experimentally. The reason for this may be that it is suppressed by a
potentially strong increase in the viscosity of the solution caused by solvent
evaporation and demixing, suppressing advective transport in the film. It
might also be that other mechanisms, not part of our model description, become
important, such as Marangoni effects originating from gradients in the liquid-
gas surface tension, caused by either gradients in the composition or the
temperature. In future work, we aim to address the limitations in our model by
including thermal and solutal Marangoni effects, as well as a concentration-dependent viscosity. Moreover, a comparable study wherein we relax the
lubrication approximation may be required in the presence of strong Marangoni
fluxes Oron _et al._ (1997); Náraigh and Thiffeault (2010).
## VIII Supplemental Material
Figure 9: Rendering of the quasi two-dimensional flow field in the late stages
of demixing of our binary fluid around a single solute-rich droplet. The black
arrows indicate the direction and magnitude of the fluid velocity field. We
superimpose the fluid-velocity field with the volume fraction field $\phi$ in
red indicating the solute-rich phase and in blue the solute-poor phase. Panel
A is identical to Fig. 7B, and the black arrows indicate the direction and
magnitude of the fluid velocity field in the laboratory frame. In panel B the
black arrows indicate the flow field in the centre-of-mass frame of the
droplet. The parameter values are $\mathrm{Pe}=2\times 10^{2}$,
$\mathrm{Ca}=5\times 10^{-2}$, $\mathrm{Bi}=3\times 10^{-3}$ and
$t/\tau_{\mathrm{L}}=3.15$, equivalent to the bottom left snapshot of Figs. 2B
and 2D. For clarity, we do not indicate local fluid velocities smaller than 1%
of the maximum fluid velocity.
## References
* Di Carlo Rasi and Janssen (2019) D. Di Carlo Rasi and R. A. J. Janssen, Advanced Materials 31, 1806499 (2019).
* Mei _et al._ (2013) J. Mei, Y. Diao, A. L. Appleton, L. Fang, and Z. Bao, Journal of the American Chemical Society 135, 6724 (2013).
* Janasz _et al._ (2022) L. Janasz, M. Borkowski, P. W. M. Blom, T. Marszalek, and W. Pisula, Advanced Functional Materials 32, 2105456 (2022).
* Chen _et al._ (2020) M. Chen, B. Peng, S. Huang, and P. K. L. Chan, Advanced Functional Materials 30, 1905963 (2020).
* Bornside _et al._ (1989) D. E. Bornside, C. W. Macosko, and L. E. Scriven, Journal of Applied Physics 66, 5185 (1989).
* Diao _et al._ (2014) Y. Diao, L. Shaw, Z. Bao, and S. C. B. Mannsfeld, Energy Environ. Sci. 7, 2145 (2014).
* Gu _et al._ (2016) X. Gu, H. Yan, T. Kurosawa, B. C. Schroeder, K. L. Gu, Y. Zhou, J. W. F. To, S. D. Oosterhout, V. Savikhin, F. Molina‐Lopez, C. J. Tassone, S. C. B. Mannsfeld, C. Wang, M. F. Toney, and Z. Bao, Advanced Energy Materials 6, 1601225 (2016).
# An overview of optimization approaches for scheduling and rostering
resources in public transportation
Lucas Mertens, Lena-Antonia Wolbeck, David Rößler, Lin Xie, Natalia Kliewer
###### Abstract
Public transport is an essential component in satisfying people’s growing need
for mobility. Thus, providers are required to organize their services well in
order to meet the high demand for service quality at low operational costs. In
practice, optimized planning can lead to considerable improvements for
providers, customers, and municipalities. The planning process related to
public transport consists of various decision problems, of which the providers
are usually responsible for vehicle and crew planning. There is a growing body
of literature that recognizes the shift from sequential and iterative to
integrated solution approaches for these problems. Integrated optimization of
several planning phases enables higher degrees of freedom in planning, which
allows for operational cost savings and increased service quality. This paper
provides an overview of solution approaches for integrated optimization based
on operations research techniques for the vehicle scheduling, crew scheduling,
and crew rostering problem, extended by a selected number of relevant related
approaches from other industries. To this end, existing optimization approaches
are analyzed with regard to different aspects such as mathematical modeling,
optimization objective, and method, as well as the source and scope of the
data used for evaluation. Additionally, we analyze the problem dimensions that
are usually required in practical applications. In doing so, we are able to
point out directions for future research, such as a stronger focus on
objectives beyond cost minimization, like robustness, schedule regularity, or
fairness.
## 1 Introduction
Urbanization in developed and developing countries leads to rapidly growing needs for urban mobility. Cities and municipalities face severe challenges in providing the necessary infrastructure to satisfy these needs. Individual motorized traffic is part of the problem rather than the solution: traffic jams leading to adverse effects such as long commuting times, frequent accidents, and air pollution are just some of the issues that arise from cities being overrun with cars [136]. An efficient public mass
transportation system can remedy these problems [115]. In addition to the
advantage of accommodating the growing mobility demand with lower external
effects and costs, public mass transport is safer as well as more resource-
efficient than individual transport [122].
Traditionally, public transport was provided solely by the public
sector. However, it has been deregulated in many countries. Nowadays, the
transport services offered by private companies are substantial, and
competition in this area is increasing [112]. For a public transport provider,
an effective and efficient operation is crucial for managing the trade-off between
operating costs and service quality. Thus, in each phase of the planning
process, this trade-off is considered. Moreover, further goals like schedule
robustness, regularity, travel satisfaction, and fairness for employees have
gained more attention in recent years [88, 93, 76].
Since the underlying decision problems are not trivial to solve (to
optimality), public transport planning has been extensively studied in the
literature. Usually, the public transport planning process is divided into
planning steps that have to be performed subsequently. However, recent
advances in optimization methods allow a gradual integration of the
optimization subproblems arising from subsequent planning steps. While better
network design, line planning, and timetabling affect both customer
satisfaction and cost structure, vehicle scheduling, crew scheduling, and crew
rostering mainly influence the provider’s profit as well as operational
timeliness. Superior vehicle and crew schedules lead to lower investments due to fewer required vehicles and personnel, and to lower variable
costs due to decreased deadheading distances and improved duty allocation.
Furthermore, crew rostering impacts costs and employee satisfaction alike,
as crew members desire a fair distribution of duties and workload. Integrating
two or three subproblems increases the degree of freedom for these decision
problems, and thus, schedule and roster quality may improve. Other industries
such as rail or air transport face similar challenges; therefore, their solution
approaches might be transferable to public transport planning.
The last decade has witnessed an enormous increase in publications on
integrated optimization approaches for public transport planning problems. In
2015, [115] conducted a literature review on solution approaches for bus
transport systems. In order to extend and update this overview, in this paper we analyze state-of-the-art approaches that follow different variants of integration and pursue different objectives. We first introduce the operational decision problems in public transport and outline the contribution of the sequential approach in Section 2. Second, we describe the ongoing shift from sequential to integrated approaches in Section 3.
## 2 Decision problems within the operational public transport planning
process
The planning process in public transport comprises various decision problems,
which can be grouped according to their planning horizons (see Figure 1). On a
strategic level, public transport providers plan long-term, e.g., the
network design and the planning of lines (routes, frequencies). Tactical
decisions, however, aim to provide timetables and to reduce the operational
costs in the medium term [115]. In practice, such decisions are most likely
made by the principal, e.g., the municipality [113]. We consider strategic
and tactical planning decisions regarding routes, frequencies, and timetables
as input for the operational planning tasks of vehicle scheduling, crew
scheduling, and crew rostering. Therefore, in the scope of this paper, we
assume that public transport providers focus on minimizing costs concerning
vehicles and staff in the short term when operationally deciding on their
transport and employee schedules [115]. Following this, we look at three decision problems: the Vehicle Scheduling Problem (VSP), the Crew Scheduling Problem (CSP), and the Crew Rostering Problem (CRP), which are introduced below.
Figure 1: The sequential planning process in public bus transit, as
illustrated in [143].
### 2.1 Vehicle scheduling problem
Given a timetable with specified service trips, the VSP relates to generating
an optimal vehicle schedule that covers all service trips and achieves the
lowest operational costs or optimizes further objectives [95]. A service trip
is defined by the line it belongs to, a departure and arrival time, as well as
the corresponding locations. To achieve a sequence of compatible trips,
additional deadhead trips can be added to connect subsequent service trips.
These deadhead trips comprise all unloaded trips, including the departure from
(pull-out) and arrival at (pull-in) the depot. A solution to the VSP
corresponds to a vehicle schedule consisting of vehicle blocks, each
representing a feasible sequence of trips for one vehicle [84]. Thus, a
vehicle block comprises one or several vehicle rotations starting from a
depot, executing one or more service trips, and returning to a depot.
Solving a VSP is not a trivial task and can vary greatly depending on
practical requirements and circumstances. The fundamental VSP is characterized
by a single depot, a homogeneous fleet, and the objective to minimize costs
only [95]. One of the first approaches to solve such a VSP optimally originates from [134]. However, the modern VSP has evolved to cover a more complex environment. As opposed to originating from one depot only, the Multi Depot Vehicle Scheduling Problem (MDVSP) considers multiple depots as well as multiple vehicle types and vehicle type groups. This extension strongly affects the way the problem is solved. Whereas the single-depot VSP is described as a polynomially
solvable minimum cost flow problem, the MDVSP is considered to be NP-hard
[83]. By utilizing a linear programming approach with column generation, [123]
exactly solve the MDVSP. Considering multiple depots as well as a
heterogeneous fleet, [118] present a Time Space Network (TSN) to efficiently
model a network associated with the MDVSP. By modeling the MDVSP as a TSN, the model size can be reduced significantly. As a result, optimally solving the multicommodity min-cost flow MIP formulation of the MDVSP for real-world instances becomes possible. However, not only has the underlying problem shifted to a more complex model, but the objectives themselves have also changed. Whereas the focus was initially primarily on cost-related objectives, other goals such as schedule robustness have increasingly been considered in recent years [119, 128]. Different dimensions regarding constraints, such as the limited range of electric buses [77, 131], and objectives shape each VSP individually. Several
modeling approaches, as well as specialized solution strategies for the VSP
and its extensions, have been developed over the last decades. For an overview of vehicle scheduling and corresponding solution approaches, we refer to [90]
and [129].
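To make the network-flow structure described above more tangible, the following compact sketch shows a generic multicommodity min-cost flow formulation of the MDVSP; the notation is ours and purely illustrative, not the exact model of any of the cited references.
$\min \sum_{d\in D}\sum_{a\in A^{d}} c^{d}_{a}\,x^{d}_{a}$
$\text{s.t.}\quad \sum_{d\in D}\sum_{a\in A^{d}(t)} x^{d}_{a} = 1 \quad \forall\,t\in T \quad \text{(each service trip is covered exactly once)}$
$\sum_{a\in\delta^{+}(v)} x^{d}_{a} = \sum_{a\in\delta^{-}(v)} x^{d}_{a} \quad \forall\,v\in V^{d},\ d\in D \quad \text{(flow conservation)}$
$\sum_{a\in\delta^{+}(o^{d})} x^{d}_{a} \le \kappa^{d} \quad \forall\,d\in D \quad \text{(depot capacity)}$
$x^{d}_{a}\in\{0,1\},$
where $D$ denotes the depot/vehicle-type combinations, $A^{d}$ the arcs of the network for commodity $d$ (with $A^{d}(t)$ the arcs representing service trip $t\in T$), $o^{d}$ the depot node with capacity $\kappa^{d}$, and $c^{d}_{a}$ the fixed plus variable cost of arc $a$; on the aggregated waiting arcs of a TSN, the variables may be general integers. For a single depot ($|D|=1$), the trip-covering requirements can be absorbed into the flow structure, the formulation reduces to a pure min-cost flow problem, and the LP relaxation is integral, which explains the complexity gap to the NP-hard multi-depot case noted above.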
### 2.2 Crew scheduling problem
In sequential planning, the decision problem of crew scheduling arises
subsequent to vehicle scheduling. The CSP (also known as driver, duty, or
shift scheduling) aims at finding a daily cost-optimal duty allocation that
encompasses all trips of the vehicle blocks [86]. These duties are not
assigned to specific drivers yet. Each anonymous duty is associated with a
predefined generic duty type. These heterogeneous duty types are characterized
by different lengths and attributes. A duty type, e.g., considers legal
requirements on working and break times as well as company-specific
regulations such as the kind of qualifications required [103].
Due to the vast number of possible solutions based on the predefined duty
types covering the vehicle schedule’s trips, solving the CSP is considered to
be NP-hard [100]. The complexity of finding a solution to the CSP correlates
with the number of trips and especially with the quantity and diversity of the
generic duty types. Depending on practical requirements, each duty type at
least considers legal, union-related, and company-defined regulations. These
characteristics vary significantly across problem specifications. When solving the CSP, it has become common practice to split all vehicle blocks into segments according to predefined relief points [96]. Relief points indicate locations, at specific times, at which drivers can be exchanged. The task between two relief points represents the smallest unit of work that has to be covered by the same driver and is called a duty element. Combining consecutive duty elements and adding sign-on and sign-off tasks results in a possible shift, which is called a piece of work. Final duties are composed of one or more pieces
of work, where usually two pieces of work are separated by a break [96].
Since the emerging duties are not associated with specific drivers yet,
cost criteria commonly shape the objective of solving the CSP [99, 113]. Depending on the practical application, both minimizing the total number of daily duties and minimizing the total required work time are possible objectives. Whereas the former determines the minimum daily demand for employees, the latter aims at an optimal duty structure by avoiding unnecessary breaks or waiting times. Utilizing fixed costs for duties and an hourly rate for the working time, these objectives are usually transformed into one that minimizes the total costs [96]. Commonly, the CSP is solved with a column generation approach based on a set covering or set partitioning formulation, in combination with Lagrangian or LP relaxation. Solution
approaches for crew scheduling are reviewed in detail in [113] and [99].
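As an illustration of the set covering/partitioning view mentioned above, the core of the CSP is often written in the following generic form (our notation, indicative of the common structure rather than of a specific reference):
$\min \sum_{j\in\Omega} c_{j}\,y_{j} \quad \text{s.t.}\quad \sum_{j\in\Omega} a_{ij}\,y_{j} = 1 \ \ \forall\,i\in I, \qquad y_{j}\in\{0,1\}\ \ \forall\,j\in\Omega,$
where $I$ is the set of tasks (duty elements or pieces of work) induced by the vehicle schedule, $\Omega$ the set of all duties that are feasible with respect to the generic duty types, $a_{ij}=1$ if duty $j$ covers task $i$, and $c_{j}$ a duty cost composed of a fixed component and an hourly rate for the working time. Replacing "$=$" by "$\ge$" yields the set covering variant. Since $\Omega$ is far too large to enumerate explicitly, duties are typically generated on demand by column generation, which also explains the prominence of this technique in the integrated approaches discussed later.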
### 2.3 Crew rostering problem
The crew rostering (or driver rostering) is concerned with assigning
anonymized duties to specific drivers. The results are individual schedules
for every crew member, so-called crew rosters [103]. As opposed to crew
scheduling, where the foremost objective is to minimize operating costs, crew rostering takes into account crew welfare, such as balancing workload and individual characteristics of each crew member, as well as efficiency objectives, e.g., minimizing layovers and crew deadheading. Complementing the legal daily duty requirements, already respected within the CSP, further legal and labor union rules have to be considered when solving a CRP. These
additional requirements range from minimal break times between two consecutive
shifts to a maximum weekly workload for a single driver. In constructing
personalized schedules, two different kinds of crew rosters can be
distinguished, namely cyclic and non-cyclic rosters [148]. The cyclic roster
is the less sophisticated approach and is developed for a group of drivers with similar qualifications and preferences. A regular, repeating working pattern is established for the entirety of drivers. This pattern is constructed such that all legal requirements are met and the workload is allocated evenly. However, such a roster does not feature a high degree of individuality. Non-cyclic rosters, on the other hand, offer the possibility to
develop personalized schedules for a medium to long period of time [148].
Depending on the extent to which individual preferences and shift requests are considered, constructing a non-cyclic pattern requires sophisticated techniques. A multicommodity network flow formulation is developed in [148] to deal with both cyclic and non-cyclic rostering, and in [127] for non-cyclic rostering. In order to deal with the complexity, (meta-)heuristics are applied to solve non-cyclic rostering, such as in [147] and [127]. [99] and [141] cover the crew rostering problem in their literature reviews and elaborate on approaches to
solve the CRP utilizing both cyclic and non-cyclic rosters. As a first step
towards more robustness in crew rostering, [145] consider a simplified version
of rostering but incorporate possible reserve shifts to cover the absences of
drivers.
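To fix ideas, the core of a non-cyclic CRP can be sketched as an assignment-type model; the notation is ours and deliberately simplified (actual models, such as the multicommodity flow formulations cited above, are considerably richer):
$\sum_{k\in K} z_{kd} = 1 \quad \forall\,d\in\mathcal{D} \quad \text{(every duty is assigned to exactly one driver)}$
$z_{kd} + z_{kd'} \le 1 \quad \forall\,k\in K \text{ and duty pairs } (d,d') \text{ violating the minimum rest time}$
$\sum_{d\in\mathcal{D}_{w}} h_{d}\,z_{kd} \le H^{\max} \quad \forall\,k\in K,\ \text{weeks } w \quad \text{(maximum weekly workload)}$
$z_{kd}\in\{0,1\},$
with $\mathcal{D}$ the anonymous duties from the CSP, $K$ the drivers, $h_{d}$ the working time of duty $d$, and $\mathcal{D}_{w}$ the duties of week $w$. The objective is typically not a pure cost function: it may, for instance, minimize the maximum deviation of a driver's assigned working time from the average, or penalize violated preferences, reflecting the workload-balancing and satisfaction goals discussed above.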
## 3 Partial integration and integrated approaches
The three decision problems – more precisely the VSP, CSP, and CRP – have been
extensively studied by scholars. Various methods have been proposed to find
optimal or close-to-optimal solutions to each of these problems [90, 99, 141].
These problems constitute consecutive phases [96] within the operational
public transport planning process. Thus, choosing a sequential approach to
solving the entirety of these problems is straightforward. In such an
approach, the output of the previous phase is used as an input for the
subsequent planning problem. However, this traditionally utilized approach may
not lead to a globally optimal solution. A slightly adjusted timetable, e.g.,
might lead to more freedom for solving the VSP and hence a lower demand for
buses. Here, the gain in subsequent steps can outweigh the loss in the adjusted prior phase, or, by choosing among indifferent solutions of a previous step, a Pareto-efficient improvement might even be possible. As a result,
iteratively solving the three sequential phases in order to leverage knowledge
gained in every iteration can improve the overall solution. However, repeated
executions of each phase might lead to prohibitively long run times or, due to
a fixed number of iterations, to local optima. As opposed to sequential or
iterative approaches, which solve each of the problems separately, integrated
approaches solve the VSP, CSP, or CRP conjointly. As a result, superior
solutions are attainable within acceptable computation time, even for problem
instances of realistic size.
We distinguish between the integration of the first two phases (VSP + CSP, in the following referred to as VCSP) and the last two phases (CSP + CRP, in the following referred to as CSRP). The highest level of integration is achieved by simultaneously considering all three decision problems (VSP + CSP + CRP, in the following referred to as VCSRP).
The number of publications for integrated solution approaches is unevenly
distributed. In contrast to the wide range of publications considering the
VCSP, there are only three approaches for the VCSRP. The main objective in
either integrated approach is usually minimizing costs – while in recent
years, additional objectives such as robustness, regularity, and fairness have
become increasingly important. Similar to publications covering the VSP only,
there exists an evident trend towards modeling the underlying problem as a TSN instead of a Connection-Based Network (CBN). Due to the integration of the planning phases, many approaches use column generation and (meta-)heuristics (such as genetic algorithms, simulated annealing, and ant colony algorithms) to handle the remaining complexity. More than two-thirds of the evaluated
solution approaches use real data for evaluation and thus examine the
applicability of the methods in practice. It is noteworthy that the majority
of approaches employ combinations of solution methods instead of individual exact or heuristic methods. Looking at the methods, special attention is paid to column generation, which is prevalent in the sample, as well as to the non-exact heuristics that are used. The right choice of model and combination of
solution algorithms facilitates solving problem instances of realistic size.
However, within the regarded sample, only a few publications from the bus
industry solve VSP instances of realistic urban size (e.g., Kliewer et al. 2012; Amberg et al. 2018; Steinzen et al. 2010). Many rely on
evaluation using the random benchmark instances published in [113].
Similar decision problems occur in several industries. Three are identified as the major industries: airline, railway, and public bus transport. Regarding vehicle scheduling, similarities as well as differences are evident. All three industries share the goal of minimizing operational costs and utilizing the least possible number of vehicles. However, the details differ greatly between the industries. Due to high initial costs for
rolling stock and railroads, as well as long construction times for the
latter, planning in the railway industry is highly constrained by its
infrastructure. In contrast, vehicle scheduling for a public bus provider
offers more decision-making possibilities. Various existing roads can be used,
and different vehicle types offer higher degrees of freedom in planning. Given
a fixed number of vehicles, scheduling for the airline industry is the least
restricted one. Changing the route of an airplane is usually only restricted
by costs, but not by infrastructural conditions. Depending on the
preconditions of each unique industry, mathematical modeling might be more
challenging given increased infrastructural requirements. The number of
constraints correlates strongly with the model’s degrees of freedom. More
flexibility in planning leads to an increased solution space. Both the
quantity of constraints and the size of the solution space enable different
solution approaches and might lead to different expedient ways of solving the
specific planning problem. Similar to vehicle scheduling, both crew scheduling
and crew rostering share similarities but differ in detail. As previously
described, labor law and other legal provisions, as well as collective and
individual agreements, restrict the CSP and CRP within the mentioned
industries [103, 111, 148]. However, buses, e.g., need only one driver, while
airplanes and trains must have a crew. Crews typically consist of more than
two members who have to fulfill specific tasks and functions, and are thus
typically planned as teams [103]. Compared to public bus transport, the
railway and airline industries may cover huge distances. Thus, crew rostering has to consider individual home bases, take lodging into account, and return each crew member to their origin at some point [142].
Most studies in our sample deal with the public bus transport industry (in
either urban or rural environments). There are some important exceptions from
other industries, especially concerning the CSRP, such as [135], [103], [111], [124], and [138] in the airline industry, and [103] as well as [87] in the railway industry.
In the following sections, we discuss the solution approaches from the
literature concerning the pairwise integrated problems (i.e., the VCSP and the
CSRP), and the “fully” integrated problem (i.e., the VCSRP) in more detail.
## 4 Pairwise integrated optimization
### 4.1 Integrated vehicle and crew scheduling
The majority of solution approaches for the VCSP in our sample follow a column
generation scheme to generate vehicle schedules and anonymous duties for a
given timetable and corresponding service trips. The VCSP is the master
problem, and duties are generated as columns by solving the pricing problem as
a constrained shortest path problem. All approaches investigated have in
common that minimizing costs is the central objective criterion. In recent
years, further optimization objectives such as robustness [114, 78, 117, 79]
and schedule regularity [140, 81, 80] have been considered.
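In generic terms (our notation, not that of a particular paper), the interplay between master and pricing problem is the following. Let $\pi_{i}$ denote the dual price of the covering constraint of task $i$ in the restricted master LP. A candidate duty, represented as a path $p$ through the duty-generation network, then has reduced cost
$\bar{c}_{p} = c_{p} - \sum_{i\in p} \pi_{i}$
(possibly minus further dual values, e.g., of a constraint bounding the number of duties). The pricing problem searches for a feasible path of minimum reduced cost while respecting the duty-type resources such as maximum working time and break rules, i.e., a resource-constrained shortest path problem. Columns with $\bar{c}_{p}<0$ are added to the master problem; if none exist, the LP relaxation is solved, and branching (branch-and-price) is used to recover integer solutions.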
For the corresponding VSP, the underlying network is usually explicitly
modeled. Historically, integrated optimization approaches have focused on
using a CBN with depots and stops as nodes, in which all possible connections,
including pull-ins and pull-outs, are enumerated as arcs, such as in [82],
[104], [105], [106], [101] and [102]. This approach might be most intuitive
and was used mostly in the last century. Recent network modeling approaches
shift towards a TSN where time-space nodes represent possible arrivals and
departures at a location and where only feasible connections are modeled as
arcs, such as in [110], [116], [139], [81], [78], [117] and [79]. The TSN method has the advantage that far fewer connections are included, which
reduces the model complexity tremendously – especially for larger instances.
[109] report that the number of arcs in the TSN amounts to 1-3% of all arcs in
an equivalent CBN. Thus, the problem size could be reduced significantly
without reducing the solution space because all compatible trips are
implicitly connected.
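The arc-count argument can be illustrated with a small, self-contained sketch on hypothetical toy data (a real TSN additionally contains depot, pull-in/pull-out, and selected deadhead arcs, which are omitted here for brevity):

```python
from collections import defaultdict
from itertools import product

# Hypothetical toy timetable: 200 trips shuttling between stops "A" and "B".
# Each service trip = (id, departure stop, departure time, arrival stop, arrival time).
trips = [(f"t{i}",
          "A" if i % 2 == 0 else "B", 10 * i,
          "B" if i % 2 == 0 else "A", 10 * i + 8)
         for i in range(200)]

def deadhead(s, t):
    # Simplistic assumption: 15 minutes of empty driving between different stops.
    return 0 if s == t else 15

# Connection-based network (CBN): one arc for every ordered pair of compatible trips.
cbn_arcs = sum(1 for a, b in product(trips, trips)
               if a is not b and a[4] + deadhead(a[3], b[1]) <= b[2])

# Time-space network (TSN): one arc per trip plus waiting arcs that chain the
# consecutive arrival/departure events at each stop; compatible trips are then
# connected implicitly via these waiting arcs instead of being enumerated.
events = defaultdict(set)
for _, dep_stop, dep_time, arr_stop, arr_time in trips:
    events[dep_stop].add(dep_time)
    events[arr_stop].add(arr_time)
tsn_arcs = len(trips) + sum(len(times) - 1 for times in events.values())

print(f"CBN arcs: {cbn_arcs}, TSN arcs: {tsn_arcs}")
# The CBN grows roughly quadratically in the number of trips, the TSN roughly
# linearly, which is the effect behind the 1-3% figure reported above.
```

On this toy instance the CBN contains thousands of connection arcs, whereas the TSN needs only a few hundred trip and waiting arcs, without losing any compatible trip connection.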
### 4.2 Integrated crew scheduling and rostering
In the airline and railway industries, crew scheduling and crew rostering are
usually considered sequentially (see [91, 120, 149, 124]) since it is not yet
possible to find an optimal solution for one of the two planning steps with
current state-of-the-art technologies for realistically sized models. An
overview of the developments until 1998 for air and rail transport was
presented in [98].
Integrated planning has received increasing attention since the 2000s, with a
focus on airlines and railways. Due to the high combinatorial complexity of
integrated planning, approaches to partial or iterative integration were first
published. In [97] the number of paired personnel crews in crew scheduling is
taken into account.
Most integrated crew scheduling and crew rostering approaches deal with
airline optimization [103, 111, 124, 138], and only a few tackle public bus transit (e.g., Xie et al. 2012; Xie et al. 2013; Xie and Suhl 2015; Xie et al. 2017) and the railway industry (e.g., Borndörfer et al. 2014; Lin and Tsai 2019).
An iterative method through a feedback mechanism between the CSP and CRP is
implemented in [92]. All duties are generated in the first phase, and the
number of duties is reduced by heuristics in the second phase, such that
instances of various compositions with real-world characteristics can be
solved. [111] focus on partial integration based on the aggregated TSN. In the
first step, instead of a single duty, a chain of duties is generated, taking
into account the individual activities of the crew members planned in advance.
In addition, the number of crews is also taken into account in this step
[111]. The approach can even solve instances of up to 1977 tasks, considering 188 crew members, in acceptable time (approximately 15.5 minutes).
In [150], the integration problem is formulated as an integer linear program, and a new heuristic method is developed, which is used in a subtree search procedure based on a rounding strategy. A decision support system for integrated crew scheduling in the airline and railway sectors is developed in [103]; a general set partitioning model is formulated, and a state-of-the-art branch-and-price solver is implemented. In [132] and [133], a column generation approach is used to reduce the computing time of the sub-problem.
In further research projects regarding the complete integration of the two
planning phases, meta-heuristics, in particular specialized genetic
algorithms, are successfully used to solve the integrated problem (see Souai and Teghem 2009; Chen et al. 2013).
Because of lower operational costs, optimization approaches for public
transport were developed only about ten years later than integrated planning approaches for the airline and railway industries. A Benders decomposition approach is used in [85], where the crew rostering was simplified in such a
way that the duty sequences are anonymous and shift and duty templates were
used instead of services. In [144] it is shown that in practice, it is often
critical to underlay shifts with concrete duties.
[121] propose a Branch-and-Price-and-Cut (BPC) algorithm for solving the CSRP
for the Taiwanese railway system with regard to standby personnel. They
compare the results with solution approaches using expert knowledge or rules
of thumb, commercial standard solvers for the associated Mixed Integer Linear
Program (MILP), and a sequential Depth-First Search (DFS) based algorithm for
several instances reaching real-world problem sizes regarding the number of
tasks to be performed. The employed DFS first enumerates all potential duties,
then identifies the minimum required duties to cover all tasks as a set
partitioning problem, and finally solves the shift-assignment to optimality.
Only the BPC algorithm was capable of solving all instances, whereas Gurobi
and the DFS-based algorithm are only tractable for the smallest and second-
smallest instance, respectively. For the smallest instances, the BPC recovers the optimal solution in less time.
In addition to cost minimization, more recent approaches aim at optimizing further goals such as maximizing the fairness of the drivers' shift allocation and the regularity of duty rosters to increase satisfaction (e.g., Borndörfer et al. 2017; Quesnel et al. 2020).
## 5 Integrated vehicle and crew scheduling and rostering
Few publications look into integrating all three phases, all of which address
the bus industry. [137] consider several data sets and circumstances. They use
data from the Beijing Bus group to point out the practical constraints that
derive from Chinese law and culture. These include built-in meal periods,
multi-type bus scheduling, and restricting drivers to one or two particular
buses. The authors develop an iterative sequential heuristic algorithm that
consists of three steps: Firstly, the VSP is solved with a local search based
on $n$-opt operators. Then, the CSP is solved using a tabu-search heuristic.
Finally, driver rosters are proposed to the user and can be modified through
an interface. According to the authors, it is possible to find feasible
solutions for instances of up to 107 buses and 164 duties within an appropriate time frame of a few minutes. The authors report savings in vehicle costs of close to 4.5% and in driver wages of approximately 9.9% when compared with manually built solutions.
[125] use data from a bus company in Lisbon to demonstrate their preemptive
goal programming-based heuristic approach that prioritizes the Vehicle Crew
Scheduling Problem (VCSP) over the CRP part. Their approach is able to
generate optimal solutions within a short computing time for most instances.
When considering all costs, however, some instances could not be solved within
a reasonable time limit. Their integer formulation consists of a preemptive
goal programming framework that prioritizes the integrated vehicle-crew-
scheduling goals over the driver rostering goals. The problem is first
decomposed to solve one VSP + CSP per day and then establish a roster for a
longer time horizon.
Two years later, [126] manage to outperform the traditional sequential
approach by integrating the VSP, CSP, and CRP with a Benders decomposition problem
formulation using a multicommodity network flow formulation, set covering and
covering-assignment elements. They tackle the integrated problem by dividing
it into a master problem that contains the VSP and CSP and a sub-problem for
the CRP. Information from the sub-problem and its dual solution is used to
find better duties for the CSP. The authors minimize vehicle and driver costs
and take into account constraints regarding roster balancing and coverage of
all daily duties. Using data from two bus companies in Portugal, the authors require the rosters to match predefined days-off patterns based on the requirements of these companies. The planning horizon for rosters is seven weeks long.
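Schematically, and in our own generic notation rather than the exact model of [126], such a Benders decomposition separates the scheduling decisions $x$ (vehicle blocks and duties) from the rostering decisions $y$:
$\min_{x\in X}\ c^{\top}x + \theta \quad \text{s.t.}\quad \theta \ \ge\ (b - Bx)^{\top}u^{k} \quad \text{for all generated dual solutions } u^{k},$
where $X$ collects the vehicle- and crew-scheduling constraints and the rostering subproblem $\min_{y\ge 0}\{d^{\top}y : Dy \ge b - Bx\}$ evaluates the roster cost of a fixed schedule $x$. Each dual solution $u^{k}$ of the subproblem yields an optimality cut (or, if the subproblem is infeasible, a dual ray yields a feasibility cut) that is added to the master problem, so that information about roster balance and days-off feasibility is fed back into the generation of duties, as described above.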
All three papers, [137], [125], and [126], evaluate their proposed algorithms
on real-world instances. However, these instances are too small (108 to 238 timetable trips) to represent a larger, realistic urban bus system.
In summary, public transport providers recognize the necessity to organize
their services efficiently. Because of increasing urbanization, the demand for
public transport is rising, and thus competition and the need for efficiency
in the public transport planning process rise as well. Integrating the
operational phases of VSP, CSP, and CRP gives public transport providers more
degrees of freedom and can lead to better schedules and rosters.
Our findings show that three forms of integrated problems are solved in the literature. While most authors focus on the VCSP, some
approaches consider the CSRP and a few recent publications tackle the
challenge of solving the VCSRP.
All approaches to solving the VCSRP deal with the bus industry. This might be
because crew rostering is much easier when considering only one driver, as
compared to multiple-person crews in airline or railway planning, which would complicate the integration even more.
Moreover, the majority of scholars focus on minimizing costs in their
approaches. However, a few authors have considered other objectives such as
robustness (see an overview of different robustness approaches in public
transit in [108]), regularity, and fairness of schedules in recent years. This
indicates that diverse objective functions are becoming more common over time.
In a recent study, [107] showed that adding a robustness objective to the
VCSRP model of [126] does not take much additional computational time in this
application, which is an interesting result. Many standard combinatorial
optimization models are used for decision problems, including the minimum cost
flow problem and related problems (e.g., the resource-constrained
shortest path problem, the multicommodity flow problem, and the linear
assignment problem). Set partitioning formulations are often used to solve the
CSP. Some authors prefer the easier set covering formulation where crew
members become passengers in the case of overlapping assignments. The TSN
formulation is a powerful tool to reduce network size and thus computing time
compared to a CBN, where all deadhead trips are
explicitly modeled.
In terms of solution techniques, column generation stands out as the most
powerful operations research method to solve integrated decision problems. It
is usually accompanied by relaxation techniques. Lagrangian relaxation seems
to work best for quickly finding reasonable bounds for the integer solution.
Branch-and-bound and branch-and-price techniques are popular to find feasible
integer solutions. Heuristics and meta-heuristics are used to speed up the
solution process. They include tabu-search, simulated annealing, ant colony
algorithms, genetic algorithms, and smaller-scale greedy heuristics. In
conclusion, we can say that there exists no single best approach to solving the VSP, CSP, and CRP in an integrated manner. Which solution approach yields the best results
is always subject to the specific problem settings. Among others, it is
important to take into account the source and nature of data that is used, the
size of the instances, and the relevant constraints. Every new approach can be
a game-changer for some situations, while in others, it might prove less
useful.
## 6 Future research
For future research, we propose to focus on further integrating the public
transport planning process. Since there are only three approaches towards a
threefold integration of all operational phases and the first results are
promising, more effort is needed in this direction. In addition, strategic
decision problems may also be included in integrated planning. In the
literature, the first approaches towards integrating timetabling or vehicle
routing with vehicle scheduling can be found. The long-term aim of an
integrated public transport planning process, in which all sub-problems are solved simultaneously and all degrees of freedom can thus be exploited, is still a long way off.
At the same time, the existing approaches should be enhanced in order to make
them suitable for real-world use with larger instances and more complex data
sets, such as those of public transport providers in larger cities. Furthermore,
schedule robustness, regularity, crew or driver preferences, and fairness as
optimization objectives should be examined more closely. Some first steps have been taken in individual publications on integrated scheduling. The crew
scheduling literature offers many more starting points that could be
incorporated into integrated planning as well. More publications that
explicitly deal with these topics are desirable.
## References
* [76] Roberto F Abenoza, Oded Cats and Yusak O Susilo “Travel satisfaction with public transport: Determinants, user classes, regional disparities and their evolution” In _Transportation Research Part A: Policy and Practice_ 95 Elsevier, 2017, pp. 64–84
* [77] Jonathan D Adler “Routing and scheduling of electric and alternative-fuel vehicles”, 2014
* [78] Bastian Amberg, Boris Amberg and Natalia Kliewer “Increasing delay-tolerance of vehicle and crew schedules in public transport by sequential, partial-integrated and integrated approaches” In _Procedia-Social and Behavioral Sciences_ 20, 2011, pp. 292–301
* [79] Bastian Amberg, Boris Amberg and Natalia Kliewer “Robust Efficiency in Urban Public Transportation: Minimizing Delay Propagation in Cost-Efficient Bus and Driver Schedules” In _Transportation Science_ 53.1 Institute for Operations Researchthe Management Sciences (INFORMS), 2018, pp. 89–112 DOI: 10.1287/trsc.2017.0757
* [80] Bastian Amberg, Natalia Kliewer and Michael Beck “Robuste Effizienz bei der Einsatzplanung für Bus und Fahrer: -Aktuelle Entwicklungen von Optimierungskomponenten für verspätungstolerante Umlauf-und Dienstplanung im Busverkehr” In _Der Nahverkehr_ 30.1, 2012, pp. 15–19
* [81] Boris Amberg, Bastian Amberg and Natalia Kliewer “Approaches for increasing the similarity of resource schedules in public transport” In _Procedia-Social and Behavioral Sciences_ 20, 2011, pp. 836–845
* [82] Michael Ball, Lawrence Bodin and Robert Dial “A matching based heuristic for scheduling mass transit crews and vehicles” In _Transportation Science_ 17.1, 1983, pp. 4–31
* [83] Alan A Bertossi, Paolo Carraresi and Giorgio Gallo “On some matching problems arising in vehicle scheduling models” In _Networks_ 17.3 Wiley Online Library, 1987, pp. 271–281
* [84] Lawrence Bodin and Bruce Golden “Classification in vehicle routing and scheduling” In _Networks_ 11.2 Wiley Online Library, 1981, pp. 97–108
* [85] Ralf Borndörfer et al. “Integrierte Dienst-und Dienstreihenfolgeplanung zur Erhöhung der Fahrerzufriedenheit”, 2014
* [86] Ralf Borndörfer, Martin Grötschel and Andreas Löbel “Duty scheduling in public transit”, 2001
* [87] Ralf Borndörfer et al. “Rapid Branching” In _Public Transport_ 5.1-2 Springer Berlin Heidelberg, 2012, pp. 3–23
* [88] Ralf Borndörfer et al. “Duty rostering in public transport-facing preferences, fairness, and fatigue”, 2015
* [89] Ralf Borndörfer, Christof Schulz, Stephan Seidl and Steffen Weider “Integration of duty scheduling and rostering to increase driver satisfaction” In _Public Transport_ 9.1-2, 2017, pp. 177–191 DOI: 10.1007/s12469-017-0153-3
* [90] Stefan Bunte and Natalia Kliewer “An overview on vehicle scheduling models” In _Public Transport_ 1.4 Springer, 2009, pp. 299–317 DOI: 10.1007/s12469-010-0018-5
* [91] Alberto Caprara et al. “Solution of large-scale railway crew planning problems: The Italian experience” In _Computer-aided Systems in Public Transport_ Springer, 1999, pp. 1–18 DOI: 10.1007/978-3-642-85970-0\\_1
* [92] Alberto Caprara, Michele Monaci and Paolo Toth “A global method for crew planning in railway applications” In _Computer-aided Systems in Public Transport_ Springer, 2001, pp. 17–36 DOI: 10.1007/978-3-642-56423-9\\_2
* [93] Oded Cats “The robustness value of public transport development plans” In _Journal of Transport Geography_ 51 Elsevier, 2016, pp. 236–246
* [94] Chiu-Hung Chen, Tung-Kuan Liu and Jyh-Horng Chou “Integrated short-haul airline crew scheduling using multiobjective optimization genetic algorithms” In _IEEE Transactions On Systems, Man, and Cybernetics: Systems_ 43.5 IEEE, 2013, pp. 1077–1090
* [95] Joachim R Daduna and José M Pinto Paixão “Vehicle scheduling for public mass transit—an overview” In _Computer-aided transit scheduling_ Springer, 1995, pp. 76–90
* [96] Guy Desaulniers and Mark D Hickman “Public transit” In _Handbooks in operations research and management science_ 14 Elsevier, 2007, pp. 69–127
* [97] Andreas T. Ernst et al. “An integrated optimization model for train crew management” In _Annals of Operations Research_ 108.1-4, 2001, pp. 211–224 URL: https://link.springer.com/article/10.1023/A:1016019314196
* [98] Andreas T. Ernst et al. “Rail crew scheduling and rostering optimization algorithms” In _Computer-aided Systems in Public Transport_ Springer, 2001, pp. 53–71 DOI: 10.1007/978-3-642-56423-9\\_4
* [99] Andreas T. Ernst, Houyuan Jiang, Mohan Krishnamoorthy and David Sier “Staff scheduling and rostering: A review of applications, methods and models” In _European Journal of Operational Research_ 153.1, 2004, pp. 3–27
* [100] Matteo Fischetti, Silvano Martello and Paolo Toth “The fixed job schedule problem with spread-time constraints” In _Operations Research_ 35.6 INFORMS, 1987, pp. 849–858
* [101] Richard Freling, Dennis Huisman and Albert P.. Wagelmans “Applying an integrated approach to vehicle and crew scheduling in practice” In _Computer-aided Systems in Public Transport_ Springer, 2001, pp. 73–90 DOI: 10.1007/978-3-642-56423-9\\_5
* [102] Richard Freling, Dennis Huisman and Albert P.. Wagelmans “Models and algorithms for integration of vehicle and crew scheduling” In _Journal of Scheduling_ 6.1, 2003, pp. 63–85
* [103] Richard Freling, Ramon M. Lentink and Albert P.. Wagelmans “A decision support system for crew planning in passenger transportation using a flexible branch-and-price algorithm” In _Annals of Operations Research_ 127.1-4, 2004, pp. 203–222
* [104] Richard Freling, Albert P.. Wagelmans and José Paixão “An overview of models and techniques for integrating vehicle and crew scheduling” In _Computer-aided Systems in Public Transport_ 471 Springer, 1999, pp. 441–460
* [105] Christian Friberg and Knut Haase “An exact branch and cut algorithm for the vehicle and crew scheduling problem” In _Computer-aided Systems in Public Transport_ Springer, 1999, pp. 63–80 DOI: 10.1007/978-3-642-85970-0\\_4
* [106] A. Gaffi and M. Nonato “An integrated approach to ex-urban crew and vehicle scheduling” In _Computer-aided Systems in Public Transport_ Springer, 1999, pp. 103–128
* [107] Liping Ge et al. “Revisiting the richness of integrated vehicle and crew scheduling” In _Public Transport_ Springer, 2022, pp. 1–27 DOI: 10.1007/s12469-022-00292-6
* [108] Liping Ge, Stefan Voß and Lin Xie “Robustness and disturbances in public transport” In _Public Transport_ 14.1 Springer, 2022, pp. 191–261
* [109] Vitali Gintner, Natalia Kliewer and Leena Suhl “Solving large multiple-depot multiple-vehicle-type bus scheduling problems in practice” In _OR Spectrum_ 27.4 Springer, 2005, pp. 507–523
* [110] Vitali Gintner, Natalia Kliewer and Leena Suhl “A crew scheduling approach for public transit enhanced with aspects from vehicle scheduling” In _Computer-aided Systems in Public Transport_ Springer, 2008, pp. 25–42 DOI: 10.1007/978-3-540-73312-6\\_2
* [111] Yufeng Guo, Taieb Mellouli, Leena Suhl and Markus P. Thiel “A partially integrated airline crew scheduling approach with time-dependent crew capacities and multiple home bases” In _European Journal of Operational Research_ 171.3, 2006, pp. 1169–1181
* [112] Robert Hrelja, Tom Rye and Caroline Mullen “Partnerships between operators and public transport authorities. Working practices in relational contracting and collaborative partnerships” In _Transportation Research Part A: Policy and Practice_ 116 Elsevier, 2018, pp. 327–338
* [113] Dennis Huisman “Integrated and dynamic vehicle and crew scheduling: Geïntegreerde en dynamische voertuig- en personeelsplanning: Zugl.: Rotterdam, Erasmus Univ., Diss., 2003” 325, Research series / Erasmus University Rotterdam Amsterdam: Thela Thesis, 2004
* [114] Dennis Huisman, Richard Freling and Albert PM Wagelmans “A robust solution approach to the dynamic vehicle scheduling problem” In _Transportation Science_ 38.4 INFORMS, 2004, pp. 447–458 DOI: 10.1287/trsc.1030.0069
* [115] Omar J. Ibarra-Rojas, F. Delgado, R. Giesen and J.. Muñoz “Planning, operation, and control of bus transport systems: A literature review” In _Transportation Research Part B: Methodological_ 77, 2015, pp. 38–75 DOI: 10.1016/j.trb.2015.03.002
* [116] Andras Keri and Knut Haase “Simultaneous Vehicle and Crew Scheduling with Trip Shifting” In _OPERATIONS RESEARCH PROCEEDINGS 2007_ , Operations Research Proceedings Springer-Verlag Berlin, 2008, pp. 467–472
* [117] Natalia Kliewer, Bastian Amberg and Boris Amberg “Multiple depot vehicle and crew scheduling with time windows for scheduled trips” In _Public Transport_ 3.3, 2012, pp. 213–244 DOI: 10.1007/s12469-011-0049-6
* [118] Natalia Kliewer, Taieb Mellouli and Leena Suhl “A time–space network based exact optimization model for multi-depot bus scheduling” In _European journal of operational research_ 175.3 Elsevier, 2006, pp. 1616–1627 DOI: 10.1016/j.ejor.2005.02.030
* [119] Stefan Kramkowski, Natalia Kliewer and Christian Meier “Increasing delay-tolerance of vehicle schedules in public bus transport” In _Proceedings of the Metaheuristic International Conference VIII_ , 2009 Hamburg Germany DOI: 10.1016/J.SBSPRO.2011.08.035
* [120] Chi-Kong Lee and Chao-Hui Chen “Scheduling of train driver for Taiwan railway administration” In _Journal of Eastern Asia Society of Transportation Studies_ 5, 2003, pp. 292–306
* [121] Dung-Ying Lin and Meng-Rung Tsai “Integrated crew scheduling and roster problem for trainmasters of passenger railway transportation” In _IEEE Access_ 7 IEEE, 2019, pp. 27362–27375
* [122] Todd Litman “The hidden traffic safety solution: public transportation”, 2016
* [123] Andreas Löbel “Solving large-scale multiple-depot vehicle scheduling problems” In _Computer-Aided Transit Scheduling_ Springer, 1999, pp. 193–220 DOI: 10.1007/978-3-642-85970-0\\_10
* [124] Claude P. Medard and Nidhi Sawhney “Airline crew scheduling from planning to operations” In _European Journal of Operational Research_ 183.3, 2007, pp. 1013–1027 DOI: 10.1016/j.ejor.2005.12.046
* [125] Marta Mesquita et al. “A new model for the integrated vehicle-crew-rostering problem and a computational study on rosters” In _Journal of Scheduling_ 14.4, 2011, pp. 319–334 DOI: 10.1007/s10951-010-0195-8
* [126] Marta Mesquita, Margarida Moz, Ana Paias and Margarida Pato “A decomposition approach for the integrated vehicle-crew-roster problem with days-off pattern” In _European Journal of Operational Research_ 229.2, 2013, pp. 318–331 DOI: 10.1016/j.ejor.2013.02.055
* [127] Marta Mesquita, Margarida Moz, Ana Paias and Margarida Pato “A decompose-and-fix heuristic based on multi-commodity flow models for driver rostering with days-off pattern” In _European Journal of Operational Research_ 245.2 Elsevier, 2015, pp. 423–437
* [128] Marc Naumann, Leena Suhl and Stefan Kramkowski “A stochastic programming approach for robust vehicle scheduling in public bus transport” In _Procedia Social and Behavioral Sciences_ 20 Elsevier, 2011, pp. 826–835 DOI: 10.1016/j.sbspro.2011.08.091
* [129] Ann-Sophie Pepin, Guy Desaulniers, Alain Hertz and Dennis Huisman “A comparison of five heuristics for the multiple depot vehicle scheduling problem” In _Journal of Scheduling_ 12.1, 2009, pp. 17–30 DOI: 10.1007/s10951-008-0072-x
* [130] Frédéric Quesnel, Guy Desaulniers and François Soumis “Improving air crew rostering by considering crew preferences in the crew pairing problem” In _Transportation Science_ 54.1 INFORMS, 2020, pp. 97–114
* [131] Josephine Reuer, Natalia Kliewer and Lena Wolbeck “The Electric Vehicle Scheduling Problem A study on time-space network based and heuristic solution” In _Proceedings of the Conference on Advanced Systems in Public Transport_ , 2015
* [132] Mohammed Saddoune, Guy Desaulniers, Issmail Elhallaoui and François Soumis “Integrated airline crew scheduling: A bi-dynamic constraint aggregation method using neighborhoods” In _European Journal of Operational Research_ 212.3, 2011, pp. 445–454 DOI: 10.1016/j.ejor.2011.02.009
* [133] Mohammed Saddoune, Guy Desaulniers, Issmail Elhallaoui and François Soumis “Integrated Airline Crew Pairing and Crew Assignment by Dynamic Constraint Aggregation” In _Transportation Science_ 46.1, 2012, pp. 39–55 DOI: 10.1287/trsc.1110.0379
* [134] JL Saha “An algorithm for bus scheduling problems” In _Journal of the Operational Research Society_ 21.4 Springer, 1970, pp. 463–474
* [135] Rivi Sandhu and Diego Klabjan “Integrated Airline Fleeting and Crew-Pairing Decisions” In _Operations Research_ 55.3, 2007, pp. 439–456 DOI: 10.1287/opre.1070.0395
* [136] David Schrank, Tim Lomax and Bill Eisele “2019 urban mobility report” In _Texas Transportation Institute. Available: http://mobility. tamu.edu/ums/report_ , 2019
* [137] Yindong Shen and Jiahong Xia “Integrated bus transit scheduling for the Beijing bus group based on a unified mode of operation” In _International Transactions in Operational Research_ 16.2, 2009, pp. 227–242 DOI: 10.1111/j.1475-3995.2009.00673.x
* [138] Nadia Souai and Jacques Teghem “Genetic algorithm based approach for the integrated airline crew-pairing and rostering problem” In _European Journal of Operational Research_ 199.3, 2009, pp. 674–683 DOI: 10.1016/j.ejor.2007.10.065
* [139] Ingmar Steinzen, Vitali Gintner, Leena Suhl and Natalia Kliewer “A Time-Space Network Approach for the Integrated Vehicle- and Crew-Scheduling Problem with Multiple Depots” In _Transportation Science_ 44.3 INFORMS, 2010, pp. 367–382 URL: http://www.jstor.org/stable/25769505
* [140] Ingmar Steinzen, Leena Suhl and Natalia Kliewer “Branching strategies to improve regularity of crew schedules in ex-urban public transit” In _OR Spectrum_ 31.4, 2009, pp. 727–743
* [141] Jorne van den Bergh et al. “Personnel scheduling: A literature review” In _European Journal of Operational Research_ 226.3, 2013, pp. 367–385
* [142] Xin Wen, Xuting Sun, Yige Sun and Xiaohang Yue “Airline crew scheduling: Models and algorithms” In _Transportation research part E: logistics and transportation review_ 149 Elsevier, 2021, pp. 102304
* [143] Lin Xie “Decision Support for Crew Rostering in Public Transit: Web-Based Optimization System for Cyclic and Non-Cyclic Rostering” Springer, 2014
* [144] Lin Xie “Decision Support for Crew Rostering in Public Transit – Web-based Optimization System for Cyclic and Non-Cyclic Rostering” Wiesbaden: Springer Fachmedien Wiesbaden, 2015 DOI: 10.1007/978-3-658-08167-6
* [145] Lin Xie, Natalia Kliewer and Leena Suhl “Integrated Driver Rostering Problem in Public Bus Transit” In _Procedia-Social and Behavioral Sciences_ 54, 2012, pp. 656–665 DOI: 10.1016/j.sbspro.2012.09.783
* [146] Lin Xie, Natalia Kliewer and Leena Suhl “A column generation approach for the cyclic and non-cyclic bus driver rostering problem.”, 2013
* [147] Lin Xie, Marius Merschformann, Natalia Kliewer and Leena Suhl “Metaheuristics approach for solving personalized crew rostering problem in public bus transit” In _Journal of Heuristics_ 23.5, 2017, pp. 321–347 DOI: 10.1007/s10732-017-9348-7
* [148] Lin Xie and Leena Suhl “Cyclic and non-cyclic crew rostering problems in public bus transit” In _OR Spectrum_ 37.1, 2015, pp. 99–136 DOI: 10.1007/s00291-014-0364-9
* [149] Tallys H. Yunes, Arnaldo V. Moura and Cid C. Souza “Hybrid Column Generation Approaches for Urban Transit Crew Management Problems” In _Transportation Science_ 39.2, 2005, pp. 273–288 DOI: 10.1287/trsc.1030.0078
* [150] F.. Zeghal and M. Minoux “Modeling and solving a Crew Assignment Problem in air transportation” In _European Journal of Operational Research_ 175.1, 2006, pp. 187–209 DOI: 10.1016/j.ejor.2004.11.028
|
# Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural
Language Understanding
Shuwen Deng1, Paul Prasse1, David R. Reich1, Tobias Scheffer1, Lena A.
Jäger1,2
1 Department of Computer Science, University of Potsdam, Germany
2 Department of Computational Linguistics, University of Zurich, Switzerland
{deng, prasse, david.reich<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
Human gaze data offer cognitive information that reflects natural language
comprehension. Indeed, augmenting language models with human scanpaths has
proven beneficial for a range of NLP tasks, including language understanding.
However, the applicability of this approach is hampered because the abundance
of text corpora is contrasted by a scarcity of gaze data. Although models for
the generation of human-like scanpaths during reading have been developed, the
potential of synthetic gaze data across NLP tasks remains largely unexplored.
We develop a model that integrates synthetic scanpath generation with a
scanpath-augmented language model, eliminating the need for human gaze data.
Since the model’s error gradient can be propagated throughout all parts of the
model, the scanpath generator can be fine-tuned to downstream tasks. We find
that the proposed model not only outperforms the underlying language model,
but achieves a performance that is comparable to a language model augmented
with real human gaze data. Our code is publicly
available at https://github.com/aeye-lab/EMNLP-SyntheticScanpaths-NLU-PretrainedLM.
## 1 Introduction and Related Work
When humans read, they naturally engage in the cognitive process of
comprehending language, which, in turn, is reflected in their gaze behavior
(Just and Carpenter, 1980). In a nutshell, a scanpath (i.e., sequence of
consecutive fixations) on a stimulus text approximates the reader’s attention,
which can be exploited to inform Natural Language Processing (NLP) tasks.
Figure 1: Synthetic scanpath-augmented language model: the Scanpath Generation
Model predicts a sequence of fixations for an input sentence; token embeddings
are rearranged according to the order of fixations.
Gaze data has been shown to be beneficial in various NLP tasks, such as part-
of-speech-tagging Barrett et al. (2016), named entity recognition (Hollenstein
and Zhang, 2019), generating image captions (Takmaz et al., 2020) and question
answering (Sood et al., 2021). Researchers have explored the use of aggregated
word-level gaze features to regularize neural attention mechanisms (Barrett et
al., 2018; Sood et al., 2020). Moreover, non-aggregated scanpaths, which
capture the complete sequential ordering of the reader’s gaze behavior, have
also demonstrated promise in NLP tasks (Mishra et al., 2017, 2018a; Yang and
Hollenstein, 2023).
However, collecting gaze data is a resource-intensive endeavor, even for very
small text corpora. Hence, human gaze data is scarce, and NLP task-specific
gaze recordings are even scarcer. Moreover, applying a language model that
additionally consumes gaze data requires gaze data to be available for the
input text at deployment time—which is unrealistic for most use cases. To
overcome these limitations, researchers have proposed a multi-task learning
approach for NLP tasks such as sentence compression (Klerke et al., 2016),
sentiment analysis (Mishra et al., 2018b), and predicting text readability
(González-Garduño and Søgaard, 2017). In this approach, labeled data for the
specific NLP task is used as the primary task, while a separate eye-tracking
corpus is utilized as an auxiliary task. While this approach helps mitigate
the need for task-specific gaze data during training and testing, the problem
of general scarcity of gaze samples remains and hinders effective supervision
for data-intensive architectures.
In this paper, we propose an alternative approach by using synthetic gaze
data, which can be generated easily for any given text, to provide cognitive
signals across NLP tasks. The seminal work of Sood et al. (2020), which
integrates eye movement data generated by a computational cognitive model of
eye-movement control during reading for tasks such as sentence compression and
paraphrase generation, demonstrated the potential of synthetic eye-gaze data.
Khurana et al. (2023) explored a proof-of-concept model that integrated
synthetic gaze data across multiple NLP tasks, but their results did not reach
the performance of a fine-tuned BERT model Devlin et al. (2019) without eye
gaze on the General Language Understanding Evaluation (GLUE) benchmark. In our
work, we build on recent advances in the development of machine-learning
models for generating human-like scanpaths during reading Deng et al. (2023);
Bolliger et al. (2023); Khurana et al. (2023); Nilsson and Nivre (2011).
We develop a model that combines synthetic scanpath generation with a
scanpath-augmented language model, eliminating the need for human gaze data.
The model allows for fine-tuning the scanpath generator to downstream tasks by
propagating the error gradient through the entire model. Our approach not only
outperforms the underlying language model in multiple tasks on the GLUE,
especially in low-resource settings, but even reaches a performance comparable
to an eye-gaze augmented model that uses real, rather than synthetic, eye
movement data in sentiment classification.
## 2 Model
We develop a model that combines a scanpath generation model with a scanpath-
augmented language model to perform NLP downstream tasks. Figure 1 depicts the
proposed model architecture.
#### Scanpath Generation Model
We adopt Eyettention Deng et al. (2023), an open-source state-of-the-art model
for scanpath generation over text. Eyettention predicts consecutive fixation
locations, represented as word indices, based on a stimulus sentence and the
preceding fixations. It consists of two encoders, one for embedding the
stimulus sentence, and the other for embedding the scanpath history. A cross-
attention layer aligns the outputs of the two encoders, and a decoder produces
a probability distribution over saccade ranges at each timestep. The next
fixated word index is determined by sampling from this distribution.
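To make the architecture concrete, the following is a minimal, heavily simplified sketch of such a dual-sequence generator. It is not the released Eyettention implementation; the use of LSTM encoders, the embedding sizes, and the class name `DualSequenceScanpathModel` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualSequenceScanpathModel(nn.Module):
    """Simplified dual-encoder scanpath generator (illustrative, not the released code)."""
    def __init__(self, vocab_size: int, n_saccade_ranges: int, dim: int = 128, max_len: int = 512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)             # stimulus word embeddings
        self.pos_emb = nn.Embedding(max_len, dim)                 # fixated-position embeddings
        self.word_encoder = nn.LSTM(dim, dim, batch_first=True)   # sentence encoder
        self.fix_encoder = nn.LSTM(dim, dim, batch_first=True)    # scanpath-history encoder
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.decoder = nn.Linear(dim, n_saccade_ranges)           # scores over saccade ranges

    def forward(self, word_ids: torch.Tensor, fixated_positions: torch.Tensor) -> torch.Tensor:
        sent, _ = self.word_encoder(self.word_emb(word_ids))            # (B, L, dim)
        hist, _ = self.fix_encoder(self.pos_emb(fixated_positions))     # (B, F, dim)
        # Align the scanpath history with the sentence via cross-attention.
        aligned, _ = self.cross_attn(query=hist, key=sent, value=sent)  # (B, F, dim)
        return self.decoder(aligned)   # (B, F, n_saccade_ranges) logits per timestep
```

At each timestep, these logits can be normalized with a softmax and sampled to obtain the next fixated word index, as described above.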
#### Scanpath-Augmented Language Model
We adopt the PLM-AS framework Yang and Hollenstein (2023), which augments pre-
trained language models with human scanpaths for sentiment classification.
This framework uses a language model to extract token embeddings for a
sentence, associating each embedding with its position index. By utilizing a
human scanpath (fixation index sequence) as input, the model rearranges the
token embedding sequence based on the order in which the words are fixated by
the reader. The transformed sequence is then fed into a scanpath encoder,
implemented as a layer of gated recurrent units (GRU), where the output of the
last step is used as the final feature for sentiment classification. This
framework allows for the use of different language models and achieves high
performance through fine-tuning. In this work, we employ BERTBASE Devlin et al. (2019) as the language model, following Yang and Hollenstein (2023); note that BERT can be substituted with other advanced pre-trained language models, potentially leading to further enhancements in task performance.
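A minimal sketch of this scanpath-augmented classifier is given below. It is not the authors' released code: the class name, tensor shapes, and the assumption that `fixation_token_idx` already holds token positions in fixation order are illustrative.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class ScanpathAugmentedClassifier(nn.Module):
    """Illustrative sketch of a PLM-AS-style model (not the authors' released code)."""
    def __init__(self, num_classes: int, hidden: int = 768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")       # pre-trained LM
        self.scanpath_encoder = nn.GRU(hidden, hidden, batch_first=True)  # scanpath encoder
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask, fixation_token_idx):
        # fixation_token_idx: (B, F) token positions listed in the order they are fixated
        token_emb = self.bert(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state  # (B, L, H)
        # Rearrange token embeddings according to the fixation order.
        idx = fixation_token_idx.unsqueeze(-1).expand(-1, -1, token_emb.size(-1))
        fixated = token_emb.gather(1, idx)                                      # (B, F, H)
        # Initialize the GRU hidden state with the [CLS] output (cf. Appendix A).
        h0 = token_emb[:, 0, :].unsqueeze(0).contiguous()                       # (1, B, H)
        _, h_last = self.scanpath_encoder(fixated, h0)
        return self.classifier(h_last.squeeze(0))   # pre-softmax logits, (B, num_classes)
```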
#### Joint Modeling for NLP Tasks
To eliminate the need for human gaze data, we integrate the synthetic scanpath
generated by the Eyettention model consisting of a fixation index sequence
into the PLM-AS framework. Before integration, the word index sequence
generated by Eyettention is converted into a token index sequence. During
training, the error gradient of the scanpath-augmented language model can be
back-propagated through the Eyettention model, allowing its parameters to be
adapted for a specific NLP task. To handle the non-differentiable sampling
from a categorical distribution involved in scanpath generation, we employ the
Gumbel-softmax distribution Jang et al. (2017) as a fully differentiable
approximation. The training process consists of two phases. First, we pre-
train the Eyettention model on a natural reading task. Second, we train the
entire model, which includes fine-tuning the language model and the
Eyettention model, as well as training the scanpath encoder from scratch. For
the Eyettention model, we add residual connections in both encoders to enhance
its performance.
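A minimal sketch of the differentiable sampling step is shown below. The function and tensor names (`sample_next_fixation`, `saccade_logits`, `word_emb`) are illustrative assumptions; the temperature follows the setting reported in Appendix B, and the soft one-hot vector is used here for a differentiable embedding lookup, which is one way such a sample can be consumed downstream.

```python
import torch
import torch.nn.functional as F

def sample_next_fixation(saccade_logits: torch.Tensor,
                         word_emb: torch.Tensor,
                         tau: float = 0.5) -> torch.Tensor:
    """saccade_logits: (B, n_words) unnormalized scores over candidate fixation targets.
    word_emb: (B, n_words, dim) embeddings of the stimulus words.
    Returns a differentiable embedding of the next fixated word."""
    # hard=True yields a one-hot sample in the forward pass while gradients flow
    # through the underlying soft probabilities (straight-through estimator).
    one_hot = F.gumbel_softmax(saccade_logits, tau=tau, hard=True, dim=-1)
    # Differentiable lookup of the selected word's embedding.
    return torch.bmm(one_hot.unsqueeze(1), word_emb).squeeze(1)
```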
## 3 Experiments
In this section, we describe the data and present the evaluation results of
our model for a wide range of NLP tasks. Further details about training and
hyper-parameter tuning can be found in Appendix B.
### 3.1 Data Sets
CELER Berzak et al. (2022): We pre-train the scanpath generation model
Eyettention on the L1 subset of CELER, which contains eye-tracking recordings
collected from 69 native speakers of English during natural reading of 5,456
sentences.
ETSA Mishra et al. (2016) contains task-specific gaze recordings for sentiment
classification of 7 subjects who each read 383 positive and 611 negative
sentences, including sarcastic quotes, short movie reviews, and tweets.
GLUE Wang et al. (2018) includes sentiment analysis (SST-2), linguistic
acceptability (CoLA), similarity and paraphrase tasks (MRPC, STS-B, QQP), and
natural language inference tasks (MNLI, QNLI, RTE). No gaze data are
available.
### 3.2 Sentiment Classification
Table 1 presents the results of our model on the sentiment classification task
ETSA (Mishra et al., 2016), in comparison to BERT and previous state-of-the-
art eye-gaze augmented models. We follow a 10-fold cross-validation regime. In
each iteration, BERT is fine-tuned on the training portion of the ETSA text
corpus, and PLM-AS is fine-tuned on the training portion of the ETSA text
corpus and gaze data. Our model is fine-tuned on the training portion of the
ETSA text corpus and, instead of the ETSA gaze data, synthetic gaze data
generated by Eyettention. Since each sentence is associated with multiple
scanpaths, we compute the final prediction by averaging the pre-softmax logits
obtained from the models across all scanpaths for the PLM-AS baseline. Our
model averages equally many synthetic scanpaths. We make multiple notable
observations in Table 1:
(a) Our model outperforms both BERT and the state-of-the-art ScanTextGAN
Khurana et al. (2023) augmented with gaze data.
(b) Our model, augmented with _synthetic_ scanpaths, achieves comparable
performance to the PLM-AS model augmented with _human_ scanpaths, eliminating
the need for human scanpaths.
(c) Ablation experiments (bottom two rows) show that when the Eyettention
model is frozen or not pre-trained, the performance decreases. This
demonstrates the importance of both pre-training and task-specific fine-tuning
of the scanpath generator.
Model | Scanpath (#) | F1 | AUC
---|---|---|---
BERT$\star$ | - | 82.93±2.26 | 92.42±1.62
ScanTextGAN | real | 83.34 | -
ScanTextGAN | synthetic | 84.77 | -
PLM-AS$\star$ | real (7) | 85.81±1.16 | 94.79±1.02
Ours$\star$ | synthetic (7) | 85.35±1.77 | 94.90±0.94
Eyettention (frozen)$\star$ | synthetic (7) | 84.52±1.79 | 94.50±1.03
Eyettention (scratch)$\star$ | synthetic (7) | 85.03±1.6 | 94.77±1.03
Table 1: Results for sentiment classification on ETSA, reported as mean±standard error. Results obtained from our experiments are marked with $\star$; other results are from the respective papers for recapitulation.
#### Varying the number of scanpaths
We analyze the impact of the number of scanpaths sampled both at training and
at application time on model performance. Figure 2 shows the F1 score as a
function of the number of scanpaths used by BERT without eye gaze, PLM-AS with
human scanpaths, and our model with synthetic scanpaths. We observe that the
performance of scanpath-augmented models improves as the number of scanpaths
increases, reaching its peak at seven scanpaths (the optimal number of scanpaths to be used by the model is considered a hyperparameter for the subsequent experiments). Importantly, our model outperforms BERT and, when
being augmented with five or more synthetic scanpaths, approaches the
performance of PLM-AS augmented with human scanpaths.
Figure 2: Sentiment classification performance on ETSA with varying numbers of
scanpaths at training and application time. Error bars show the standard
error.
#### Low-Resource Performance
We hypothesize that eye gaze might be most beneficial in low-resource
settings. To test this hypothesis, we sample a small subset of the training
sentences K = {200, 400, 600} from the total number of around 800 training
instances, and evaluate the performance of our model augmented with seven
synthetic scanpaths (the best-performing configuration from the previous
experiments). The performance comparison between our model and the baseline
model BERT is shown in Figure 3. Our model consistently outperforms BERT, with
larger improvements observed when using less training data.
Figure 3: Sentiment classification performance on ETSA in the low-resource
setting. Error bars represent the standard error.
### 3.3 GLUE Benchmark
K | Model | Gaze | MNLI (392k) | QQP (363k) | QNLI (108k) | SST-2 (67k) | CoLA (8.5k) | STS-B (5.7k) | MRPC (3.5k) | RTE (2.5k) | Avg.
---|---|---|---|---|---|---|---|---|---|---|---
200 | BERT | $\times$ | 42.90±1.51 | 57.42±2.03 | 73.07±0.16 | 78.78±1.10 | 16.95±2.74 | 79.43±0.69 | 81.18±0.04 | 54.30±1.50 | 60.50
200 | Ours | ✓ | 48.97±0.83 | 61.63±1.78 | 70.46±0.62 | 80.76±0.74 | 24.08±3.55 | 74.94±1.20 | 81.85±0.17 | 59.35±1.47 | 62.75
500 | BERT | $\times$ | 52.09±1.05 | 65.13±0.37 | 77.04±0.19 | 82.55±0.47 | 35.61±1.74 | 83.14±0.41 | 81.53±0.29 | 60.72±0.61 | 67.23
500 | Ours | ✓ | 56.48±0.38 | 67.81±0.23 | 77.60±0.26 | 84.63±0.50 | 36.41±1.39 | 81.99±0.58 | 82.32±0.52 | 61.88±1.24 | 68.64
1000 | BERT | $\times$ | 58.97±0.58 | 67.35±0.49 | 78.88±0.36 | 85.80±0.55 | 39.89±1.64 | 85.42±0.21 | 84.18±1.00 | 63.39±0.99 | 70.49
1000 | Ours | ✓ | 61.28±0.25 | 70.65±0.14 | 80.74±0.10 | 86.06±0.29 | 41.19±0.50 | 85.13±0.43 | 84.61±0.68 | 64.55±1.18 | 71.78
all | BERT | $\times$ | 82.9 | 69.7 | 90.1 | 93.1 | 53.9 | 84.8 | 87.7 | 66.1 | 78.54
all | Ours | ✓ | 83.6 | 69.6 | 90.1 | 93.8 | 50.2 | 85.8 | 87.7 | 67.3 | 78.51
Table 2: Results on the GLUE benchmark with K = {200, 500, 1000, all} training samples. The number in parentheses after each task name is the total number of training samples in that dataset. We use F1 for QQP and MRPC, Spearman correlation for STS-B, Matthews correlation for CoLA, and accuracy for the remaining tasks. Results are reported as mean±standard error.
In contrast to the small, single-task ETSA data set, we extended
our evaluation to assess whether gaze data could enhance language models
across different tasks, including scenarios with substantial text data. To
achieve this, we evaluate our model on the GLUE benchmark, a comprehensive
collection of 8 diverse NLP tasks with a large number of text samples. As no
eye gaze data is available for GLUE, we focus on the comparison with the BERT
baseline, and investigate both high- and low-resource settings.
#### High-Resource Performance
The results of our model on the GLUE test set using all training samples (K =
all) are reported in the bottom two rows of Table 2. The results are obtained
from the GLUE leaderboard. Our model outperforms BERT in 4 out of 8 tasks, and
achieves comparable performance in 3 tasks. However, our model’s performance
is notably poor in the CoLA task, possibly due to the model’s emphasis on gaze
sequence ordering, potentially overshadowing the importance of the original
word order, which is critical to determine linguistic acceptability of
sentences.
#### Low-Resource Performance
We present the results on the GLUE benchmark with K = {200, 500, 1000}
training samples in Table 2. We take an additional 1,000 samples from the
original training set as the development set used for early stopping. The
original development set is utilized for testing. We perform 5 runs with
different random seeds to shuffle the data and report the average results.
Overall, our model consistently outperforms BERT across tasks, except for the
STS-B task. In terms of average score, our model shows performance gains of
2-4% compared to BERT.
## 4 Discussion and Conclusion
We developed a model that integrates synthetic scanpath generation into a
scanpath-augmented language model. We observe that the model achieves results
that are comparable to a language model augmented with human scanpaths, which
eliminates the need for human scanpaths during both training and testing.
Human gaze data are only available for a very limited number of NLP tasks and
data sets. At application time, under any standard use case scenario of NLP
tasks, no gaze recordings are available. Synthetic gaze data not only open the
possibility to train high-capacity gaze-augmented models across tasks, which
would otherwise require the collection of an impractically large volume of gaze
data, but also allow for the exploitation of eye gaze signals as model input
at application time.
Using the GLUE benchmark, we observe that gaze signals show benefits not only
for sentiment classification tasks (SST-2), as reported in previous research,
but also for entailment classification tasks (MNLI, RTE) and a sentence
similarity task (STS-B). This highlights the potential of integrating
cognitive signals from eye gaze into a wider range of NLP tasks in the future.
Nevertheless, it is evident that not all tasks derive equal benefits from gaze
data. It remains up to future research to explore which types of tasks benefit
most from gaze signals.
Our results further show that the potential benefit of augmenting language
models with gaze data is higher for low-resource settings. Hence, we believe
that the augmentation with gaze data might be particularly interesting for
low-resource languages. Two ongoing multi-lab efforts to collect large
multilingual eye-tracking-while-reading corpora (MECO, https://meco-read.com, and MultiplEYE, https://multipleye.eu) include a range of low-resource
languages, which will allow for training scanpath generators and augmenting
language models with synthetic eye gaze for these languages in the near
future.
## Limitations
One limitation of our work is that the scanpath generation model Eyettention
was pre-trained on eye-tracking data recorded on isolated sentences (single
sentence reading paradigm). Since the majority of tasks in the GLUE benchmark
involve two-sentence classification, future work could involve pre-training
the model on an eye-tracking data set specifically designed for two-sentence
reading tasks to enhance its performance. Additionally, scanpath augmentation
turned out to be detrimental to the language model’s performance for the task
of identifying linguistically acceptable sentences (CoLA). This finding was to
be expected as the actual word order is more relevant for linguistic
acceptability of a sentence than the order in which the words are fixated.
Pre-training the scanpath generator on an eye-tracking corpus that includes
both acceptable and unacceptable sentences may be beneficial for improving the
model’s performance.
Furthermore, in our proposed framework, the sampling process involved in
scanpath generation during training and at inference time is not conducive to
a high model efficiency. Future work could explore alternative scanpath
generation models that do not rely on auto-regressive architectures to improve
efficiency.
## Ethics Statement
It is crucial to acknowledge potential privacy risks in collecting, sharing,
and processing human gaze data. Since eye movements are highly individual, it
can be possible to extract a participant’s identity from gaze data Jäger et
al. (2020); Makowski et al. (2021). Other personal information such as gender
Sammaknejad et al. (2017) and ethnicity Blignaut and Wium (2014) that can be
detected to some degree today may turn out to be extractable accurately in the
future, which incurs a risk of leakage of personal information from gaze data.
Synthetic gaze data can reduce the need for large-scale experiments with human
subjects, even though some amount of human gaze data is still necessary to
train generative models.
## Acknowledgements
This work was partially funded by the German Federal Ministry of Education and
Research under grant 01|S20043.
## References
* Barrett et al. (2018) Maria Barrett, Joachim Bingel, Nora Hollenstein, Marek Rei, and Anders Søgaard. 2018. Sequence classification with human attention. In _Proceedings of the 22nd Conference on Computational Natural Language Learning (CoNLL)_ , pages 302–312, Brussels, Belgium.
* Barrett et al. (2016) Maria Barrett, Frank Keller, and Anders Søgaard. 2016. Cross-lingual transfer of correlations between parts of speech and gaze features. In _Proceedings of the 26th International Conference on Computational Linguistics (COLING): Technical Papers_ , pages 1330–1339, Osaka, Japan.
* Berzak et al. (2022) Yevgeni Berzak, Chie Nakamura, Amelia Smith, Emily Weng, Boris Katz, Suzanne Flynn, and Roger Levy. 2022. CELER: A 365-participant corpus of eye movements in L1 and L2 English reading. _Open Mind_ , pages 1–10.
* Blignaut and Wium (2014) Pieter Blignaut and Daniël Wium. 2014. Eye-tracking data quality as affected by ethnicity and experimental design. _Behavior Research Methods_ , 46:67–80.
* Bolliger et al. (2023) Lena S. Bolliger, David R. Reich, Patrick Haller, Deborah N. Jakobi, Paul Prasse, and Lena A. Jäger. 2023. ScanDL: A Diffusion Model for Generating Synthetic Scanpaths on Texts. In _Proceedings of Empirical Methods in Natural Language Processing (EMNLP)_ , Singapore, Singapore.
* Cho et al. (2014) Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder–decoder approaches. In _Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation_ , pages 103–111, Doha, Qatar.
* Deng et al. (2023) Shuwen Deng, David R Reich, Paul Prasse, Patrick Haller, Tobias Scheffer, and Lena A Jäger. 2023. Eyettention: An attention-based dual-sequence model for predicting human scanpaths during reading. _Proceedings of the ACM on Human-Computer Interaction_ , 7(ETRA):1–24.
* Devlin et al. (2019) Jacob Devlin, Ming Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)_ , pages 4171–4186, Minneapolis, MN, USA.
* González-Garduño and Søgaard (2017) Ana Valeria González-Garduño and Anders Søgaard. 2017. Using gaze to predict text readability. In _Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, EMNLP_ , pages 438–443, Copenhagen, Denmark.
* Hollenstein and Zhang (2019) Nora Hollenstein and Ce Zhang. 2019. Entity recognition at first sight: Improving NER with eye movement information. In _Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)_ , pages 1–10, Minneapolis, Minnesota.
* Jäger et al. (2020) Lena A. Jäger, Silvia Makowski, Paul Prasse, Liehr Sascha, Maximilian Seidler, and Tobias Scheffer. 2020. Deep Eyedentification: Biometric identification using micro-movements of the eye. In _Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2019_ , volume 11907 of _Lecture Notes in Computer Science_ , pages 299–314, Cham, Switzerland. Springer International Publishing.
* Jang et al. (2017) Eric Jang, Shixiang Gu, and Ben Poole. 2017. Categorical reparameterization with gumbel-softmax. In _Proceedings of the 5th International Conference on Learning Representations (ICLR)_ , Toulon, France.
* Just and Carpenter (1980) Marcel A Just and Patricia A Carpenter. 1980. A theory of reading: From eye fixations to comprehension. _Psychological Review_ , 87(4):329.
* Khurana et al. (2023) Varun Khurana, Yaman Kumar, Nora Hollenstein, Rajesh Kumar, and Balaji Krishnamurthy. 2023. Synthesizing human gaze feedback for improved NLP performance. In _Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL)_ , pages 1895–1908, Dubrovnik, Croatia.
* Klerke et al. (2016) Sigrid Klerke, Yoav Goldberg, and Anders Søgaard. 2016. Improving sentence compression by learning to predict gaze. In _Proceedings of North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)_ , pages 1528–1533, San Diego, California.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _Proceedings of International Conference on Learning Representations (ICLR)_ , New Orleans, Louisiana, United States.
* Makowski et al. (2021) Silvia Makowski, Paul Prasse, David R Reich, Daniel Krakowczyk, Lena A Jäger, and Tobias Scheffer. 2021. Deepeyedentificationlive: Oculomotoric biometric identification and presentation-attack detection using deep neural networks. _IEEE Transactions on Biometrics, Behavior, and Identity Science_ , 3(4):506–518.
* Mao et al. (2022) Yuning Mao, Lambert Mathias, Rui Hou, Amjad Almahairi, Hao Ma, Jiawei Han, Scott Yih, and Madian Khabsa. 2022. UniPELT: A unified framework for parameter-efficient language model tuning. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 6253–6264, Dublin, Ireland.
* Mishra et al. (2018a) Abhijit Mishra and Pushpak Bhattacharyya. 2018a. Scanpath complexity: modeling reading/annotation effort using gaze information. _Cognitively Inspired Natural Language Processing: An Investigation Based on Eye-tracking_ , pages 77–98.
* Mishra et al. (2017) Abhijit Mishra, Kuntal Dey, and Pushpak Bhattacharyya. 2017. Learning cognitive features from gaze data for sentiment and sarcasm classification using convolutional neural network. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL)_ , pages 377–387, Vancouver, Canada.
* Mishra et al. (2016) Abhijit Mishra, Diptesh Kanojia, and Pushpak Bhattacharyya. 2016. Predicting readers’ sarcasm understandability by modeling gaze behavior. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 30, Phoenix, Arizona, USA.
* Mishra et al. (2018b) Abhijit Mishra, Srikanth Tamilselvam, Riddhiman Dasgupta, Seema Nagar, and Kuntal Dey. 2018b. Cognition-cognizant sentiment analysis with multitask subjectivity summarization based on annotators’ gaze behavior. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 32, New Orleans, Lousiana, USA.
* Nilsson and Nivre (2011) Mattias Nilsson and Joakim Nivre. 2011. Entropy-driven evaluation of models of eye movement control in reading. In _Proceedings of the 8th International NLPCS Workshop_ , pages 201–212, Copenhagen, Denmark.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. In _Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS)_ , pages 8024–8035, Vancouver, Canada.
* Sammaknejad et al. (2017) Negar Sammaknejad, Hamidreza Pouretemad, Changiz Eslahchi, Alireza Salahirad, and Ashkan Alinejad. 2017. Gender classification based on eye movements: A processing effect during passive face viewing. _Advances in Cognitive Psychology_ , 13(3):232.
* Sood et al. (2021) Ekta Sood, Fabian Kögel, Philipp Müller, Dominike Thomas, Mihai Bace, and Andreas Bulling. 2021. Multimodal integration of human-like attention in visual question answering. _Computing Research Repository_.
* Sood et al. (2020) Ekta Sood, Simon Tannert, Philipp Müller, and Andreas Bulling. 2020. Improving natural language processing tasks with human gaze-guided neural attention. In _Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS)_ , pages 6327–6341, Online.
* Takmaz et al. (2020) Ece Takmaz, Sandro Pezzelle, Lisa Beinborn, and Raquel Fernández. 2020. Generating image descriptions via sequential cross-modal alignment guided by human gaze. In _Proceedings of Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4664–4677, Online.
* Wang et al. (2018) Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In _Proceedings of EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP_ , pages 353–355, Brussels, Belgium.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of Empirical Methods in Natural Language Processing (EMNLP): System Demonstrations_ , pages 38–45, Online.
* Yang and Hollenstein (2023) Duo Yang and Nora Hollenstein. 2023. PLM-AS: Pre-trained language models augmented with scanpaths for sentiment classification. In _Proceedings of the Northern Lights Deep Learning Workshop_ , Tromsø, Norway.
## Appendix A Model Details
#### PLM-AS Framework
For the PLM-AS framework, we adhere to the design of the original paper Yang
and Hollenstein (2023). The scanpath encoder consists of a single-direction
GRU layer Cho et al. (2014) with a hidden size of 768 and a dropout rate of
0.1. We initialize the hidden state of the scanpath encoder using the [CLS]
token outputs from the final layer of BERT.
## Appendix B Training Details
We train all neural networks using the PyTorch Paszke et al. (2019) library on
an NVIDIA A100-SXM4-40GB GPU using the NVIDIA CUDA platform. For training, we
use the AdamW optimizer Loshchilov and Hutter (2019), and a batch size of 32.
We train 20 epochs and select the model with the best validation performance
for evaluation. The training is early stopped if the validation performance
does not increase for 3 consecutive epochs. During the training of our model,
we employ the Gumbel-softmax distribution with a temperature hyperparameter
set to 0.5. We use the pre-trained checkpoints from the HuggingFace repository
Wolf et al. (2020) for the language model BERTBASE.
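A minimal sketch of this training loop under the stated settings (AdamW, up to 20 epochs, early stopping with patience 3) is given below; the data loaders, the `evaluate` function, and the assumption that the model returns its training loss are placeholders.

```python
import copy
from torch.optim import AdamW

def train(model, train_loader, val_loader, evaluate, lr, max_epochs=20, patience=3):
    optimizer = AdamW(model.parameters(), lr=lr)
    best_score, best_state, stale = float("-inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for batch in train_loader:
            loss = model(**batch)              # assumed to return the training loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        score = evaluate(model, val_loader)    # validation performance
        if score > best_score:
            best_score = score
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
            if stale >= patience:              # early stopping after 3 stale epochs
                break
    model.load_state_dict(best_state)          # keep the best validation checkpoint
    return model
```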
#### Sentiment Classification
During training, each scanpath associated with one sentence is treated as a
separate instance. However, during evaluation, the pre-softmax logits obtained
from multiple scanpaths associated with the same sentence are averaged to
generate a single prediction for this sentence. We use a learning rate of 1e-5
for training all investigated models.
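The evaluation rule can be sketched as follows; `model` and the scanpath tensors are illustrative placeholders, and the model is assumed to return pre-softmax logits as in the sketch of Section 2.

```python
import torch

@torch.no_grad()
def predict_sentence(model, input_ids, attention_mask, scanpaths):
    """scanpaths: list of (B, F) fixation-index tensors for the same batch of sentences."""
    logits = torch.stack(
        [model(input_ids, attention_mask, sp) for sp in scanpaths]
    ).mean(dim=0)                    # average pre-softmax logits over scanpaths
    return logits.argmax(dim=-1)     # one prediction per sentence
```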
#### GLUE Benchmark
K | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE
---|---|---|---|---|---|---|---|---
200 | 5 | 3 | 3 | 7 | 3 | 7 | 3 | 7
500 | 7 | 5 | 3 | 3 | 3 | 7 | 3 | 7
1000 | 3 | 5 | 5 | 7 | 3 | 5 | 7 | 7
all | 2 | 2 | 4 | 2 | 3 | 3 | 3 | 3
Table 3: Optimal number of scanpaths used by our model on the GLUE benchmark with K = {200, 500, 1000, all} training sentences.
We evaluate each GLUE data set using the metric specified in the benchmark. We
use the code provided in the HuggingFace repository (https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) to train the BERT model and compute the metrics.
In the high-resource setting, we fine-tune the BERT model using the
hyperparameter tuning procedure outlined in the original paper (Devlin et al.,
2019). We select the best learning rate from {5e-5, 4e-5, 3e-5, 2e-5} for each
task based on the performance on the development set. The same learning rate
is used for training our model.
Additionally, for our model, we perform a hyperparameter search on the
development set to determine the optimal number of scanpaths to be used by the
model for each task. We explore different numbers of scanpaths from {2, 3, 4}
and select the configuration that achieves the best performance on the
development set. The optimal configuration for each task can be found in Table
3.
In the low-resource setting, we use the same learning rate that was found
optimal in the high-resource setting for each task. Besides, we perform a
hyperparameter search on the development set, investigating different numbers
of scanpaths from {3, 5, 7} to be used by our model. The optimal
configurations for each task can be found in Table 3.
To reduce variance, we apply shuffling to the training data using 5 different
random seeds. We use the first K samples as the new training set, and the
subsequent 1,000 samples as the development set. The data seeds used for
shuffling are {111,222,333,444,555}, while the seed s=42 is consistently used
for model training across all models. The procedure was adapted from Mao et
al. (2022).
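A minimal sketch of this splitting procedure, with illustrative variable names, is given below.

```python
import random

def low_resource_split(train_examples, k, data_seed):
    rng = random.Random(data_seed)        # data seeds: 111, 222, 333, 444, 555
    shuffled = list(train_examples)
    rng.shuffle(shuffled)
    new_train = shuffled[:k]              # first K samples form the new training set
    new_dev = shuffled[k:k + 1000]        # next 1,000 samples form the development set
    return new_train, new_dev
```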
# HetGPT: Harnessing the Power of Prompt Tuning in Pre-Trained Heterogeneous
Graph Neural Networks
Yihong Ma (University of Notre Dame, Notre Dame, Indiana, USA, <EMAIL_ADDRESS>), Ning Yan (Futurewei Technologies Inc., Santa Clara, California, USA, <EMAIL_ADDRESS>), Jiayu Li (Syracuse University, Syracuse, New York, USA, <EMAIL_ADDRESS>), Masood Mortazavi (Futurewei Technologies Inc., Santa Clara, California, USA, <EMAIL_ADDRESS>), and Nitesh V. Chawla (University of Notre Dame, Notre Dame, Indiana, USA, <EMAIL_ADDRESS>)
###### Abstract.
Graphs have emerged as a natural choice to represent and analyze the intricate
patterns and rich information of the Web, enabling applications such as online
page classification and social recommendation. The prevailing “ _pre-train,
fine-tune_ ” paradigm has been widely adopted in graph machine learning tasks,
particularly in scenarios with limited labeled nodes. However, this approach
often exhibits a misalignment between the training objectives of pretext tasks
and those of downstream tasks. This gap can result in the “negative transfer”
problem, wherein the knowledge gained from pre-training adversely affects
performance in the downstream tasks. The surge in prompt-based learning within
Natural Language Processing (NLP) suggests the potential of adapting a “ _pre-
train, prompt_ ” paradigm to graphs as an alternative. However, existing graph
prompting techniques are tailored to homogeneous graphs, neglecting the
inherent heterogeneity of Web graphs. To bridge this gap, we propose HetGPT, a
general post-training prompting framework to improve the predictive
performance of pre-trained heterogeneous graph neural networks (HGNNs). The
key is the design of a novel prompting function that integrates a virtual
class prompt and a heterogeneous feature prompt, with the aim to reformulate
downstream tasks to mirror pretext tasks. Moreover, HetGPT introduces a multi-
view neighborhood aggregation mechanism, capturing the complex neighborhood
structure in heterogeneous graphs. Extensive experiments on three benchmark
datasets demonstrate HetGPT’s capability to enhance the performance of state-
of-the-art HGNNs on semi-supervised node classification.
## 1\. Introduction
The Web, an ever-expanding digital universe, has transformed into an
unparalleled data warehouse. Within this intricate web of data, encompassing
diverse entities and patterns, graphs have risen as an intuitive
representation to encapsulate and examine the Web’s multifaceted content, such
as academic articles (Fu et al., 2020), social media interactions (Cao et al.,
2021), chemical molecules (Guo et al., 2023), and online grocery items (Tian
et al., 2022). In light of this, graph neural networks (GNNs) have emerged as
the state of the art for graph representation learning, which enables a wide
range of web-centric applications such as online page classification (Qi and
Davison, 2009), social recommendation (Fan et al., 2019), pandemic trends
forecasting (Ma et al., 2022), and dynamic link prediction (Wang et al., 2020,
2021c).
A primary challenge in traditional supervised graph machine learning is its
heavy reliance on labeled data. Given the magnitude and complexity of the Web,
obtaining annotations can be costly and often results in data of low quality.
To address this limitation, the “ _pre-train, fine-tune_ ” paradigm has been
widely adopted, where GNNs are initially pre-trained with some self-supervised
pretext tasks and are then fine-tuned with labeled data for specific
downstream tasks. Yet, this paradigm faces the following challenges:
* •
(C1) Fine-tuning methods often overlook the inherent gap between the training
objectives of the pretext and the downstream task. For example, while graph
pre-training may utilize binary edge classification to draw topologically
proximal node embeddings closer, the core of a downstream node classification
task would be to ensure nodes with the same class cluster closely. Such
misalignment makes the transferred node embeddings sub-optimal for downstream
tasks, _i.e.,_ negative transfer (Wang et al., 2021b; Zhang et al., 2022). The
challenge arises: _how to reformulate the downstream node classification task
to better align with the contrastive pretext task?_
* •
(C2) In semi-supervised node classification, there often exists a scarcity of
labeled nodes. This limitation can cause fine-tuned networks to highly overfit
these sparse (Tan et al., 2023) or potentially imbalanced (Ma et al., 2023)
nodes, compromising their ability to generalize to new and unlabeled nodes.
The challenge arises: _how to capture and generalize the intricate
characteristics of each class in the embedding space to mitigate this
overfitting?_
* •
(C3) Given the typically large scale of pre-trained GNNs, the attempt to
recalibrate all their parameters during the fine-tuning phase can considerably
slow down the rate of training convergence. The challenge arises: _how to
introduce only a small number of trainable parameters in the fine-tuning stage
while keeping the parameters of the pre-trained network unchanged?_
One potential solution that could partially address these challenges is to
adapt the “ _pre-train, prompt_ ” paradigm from natural language processing
(NLP) to the graph domain. In NLP, prompt-based learning has effectively
generalized pre-trained language models across diverse tasks. For example, a
sentiment classification task like “ _The WebConf will take place in the
scenic city of Singapore in 2024_ ” can be reframed by appending a specific
textual prompt “ _I feel so_ [MASK]” to the end. It is highly likely that a
language model pre-trained on next word prediction will predict “[MASK]” as “
_excited_ ” instead of “ _frustrated_ ”, without necessitating extensive fine-
tuning. With this methodology, certain downstream tasks can be seamlessly
aligned with the pre-training objectives. While few prior work (Sun et al.,
2023, 2022; Fang et al., 2022a; Liu et al., 2023a; Tan et al., 2023) has
delved into crafting various prompting templates for graphs, their emphasis
remains strictly on homogeneous graphs. This narrow focus underscores the last
challenge inherent to the heterogeneous graph structures typical of the Web:
* •
(C4) Homogeneous graph prompting techniques typically rely on the pre-trained
node embeddings of the target node or the aggregation of its immediate
neighbors’ embeddings for downstream node classification, which ignores the
intricate neighborhood structure inherent to heterogeneous graphs. The
challenge arises: _how to leverage the complex heterogeneous neighborhood
structure of a node to yield more reliable classification decisions?_
To comprehensively address all four aforementioned challenges, we propose
HetGPT, a general post-training prompting framework tailored for heterogeneous
graphs. Represented by the acronym Heterogeneous Graph Prompt Tuning, HetGPT
serves as an auxiliary system for HGNNs that have undergone constrastive pre-
training. At the core of HetGPT is a novel _graph prompting function_ that
reformulates the downstream node classification task to align closely with the
pretext contrastive task. We begin with the _virtual class prompt_, which
generalizes the intricate characteristics of each class in the embedding
space. Then we introduce the _heterogeneous feature prompt_ , which acts as a
task-specific augmentation to the input graph. This prompt is injected into
the feature space and the prompted node features are then passed through the
pre-trained HGNN, with all parameters in a frozen state. Furthermore, a
_multi-view neighborhood aggregation_ mechanism that encapsulates the
complexities of the heterogeneous neighborhood structure, is applied to the
target node, generating a node token for classification. Finally, Pairwise
similarity comparisons are performed between the node token and the class
tokens derived from the virtual class prompt via the contrastive learning
objectives established during pre-training, which effectively simulates the
process of deriving a classification decision. In summary, our main
contributions include:
* •
To the best of our knowledge, this is the first attempt to adapt the “ _pre-
train, prompt_ ” paradigm to heterogeneous graphs.
* •
We propose HetGPT, a general post-training prompting framework tailored for
heterogeneous graphs. By coherently integrating a virtual class prompt, a
heterogeneous feature prompt, and a multi-view neighborhood aggregation
mechanism, it elegantly bridges the objective gap between pre-training and
downstream tasks on heterogeneous graphs.
* •
Extensive experiments on three benchmark datasets demonstrate HetGPT’s
capability to enhance the performance of state-of-the-art HGNNs on semi-
supervised node classification.
## 2\. Related Work
Heterogeneous graph neural networks. Recently, there has been a surge in the
development of heterogeneous graph neural networks (HGNNs) designed to learn
node representations on heterogeneous graphs (Wang et al., 2022; Yang et al.,
2020; Lv et al., 2021). For example, HAN (Wang et al., 2019) introduces
hierarchical attention to learn the node-level and semantic-level structures.
MAGNN (Fu et al., 2020) incorporates intermediate nodes along metapaths to
encapsulate the rich semantic information inherent in heterogeneous graphs.
HetGNN (Zhang et al., 2019) employs random walk to sample node neighbors and
utilizes LSTM to fuse heterogeneous features. HGT (Hu et al., 2020a) adopts a
transformer-based architecture tailored for web-scale heterogeneous graphs.
However, a shared challenge across these models is their dependency on high-
quality labeled data for training. In real-world scenarios, obtaining such
labeled data can be resource-intensive and sometimes impractical. This has
triggered numerous studies to explore pre-training techniques for
heterogeneous graphs as an alternative to traditional supervised learning.
Heterogeneous graph pre-training. Pre-training techniques have gained
significant attention in heterogeneous graph machine learning, especially
under the scenario with limited labeled nodes (Liu et al., 2022; Xie et al.,
2022). Heterogeneous graphs, with their complex types of nodes and edges,
require specialized pre-training strategies. These can be broadly categorized
into generative and contrastive methods. Generative learning in heterogeneous
graphs primarily focuses on reconstructing masked segments of the input graph,
either in terms of the underlying graph structures or specific node attributes
(Hu et al., 2020b; Fang et al., 2022b; Tian et al., 2023). On the other hand,
contrastive learning on heterogeneous graphs aims to refine node
representations by magnifying the mutual information of positive pairs while
diminishing that of negative pairs. Specifically, representations generated
from the same data instance form a positive pair, while those from different
instances constitute a negative pair. Some methods emphasize contrasting
node-level representations (Jiang et al., 2021a; Yang et al., 2022; Wang et
al., 2021a; Jiang et al., 2021b), while another direction contrasts node-level
representations with graph-level representations (Park et al., 2020; Jing et
al., 2021; Ren et al., 2019). In general, the efficacy of contrastive methods
surpasses that of generative ones (Tian et al., 2023), making them the default
pre-training strategies adopted in this paper.
Prompt-based learning on graphs. The recent trend in Natural Language
Processing (NLP) has seen a shift from traditional fine-tuning of pre-trained
language models (LMs) to a new paradigm: “ _pre-train, prompt_ ” (Liu et al.,
2023b). Instead of fine-tuning LMs through task-specific objective functions,
this paradigm reformulates downstream tasks to resemble pre-training tasks by
incorporating textual prompts to input texts. This not only bridges the gap
between pre-training and downstream tasks but also instigates further research
integrating prompting with pre-trained graph neural networks (Sun et al.,
2023). For example, GPPT (Sun et al., 2022) and GraphPrompt (Liu et al.,
2023a) introduce prompt templates to align the pretext task of link prediction
with downstream classification. GPF (Fang et al., 2022a) and VNT-GPPE (Tan et
al., 2023) employ learnable perturbations to the input graph, modulating pre-
trained node representations for downstream tasks. However, all these
techniques cater exclusively to homogeneous graphs, overlooking the distinct
complexities inherent to the heterogeneity in real-world systems.
## 3\. Preliminaries
###### Definition 0: Heterogeneous graph.
A heterogeneous graph is defined as
${\mathcal{G}}=\\{{\mathcal{V}},{\mathcal{E}}\\}$, where ${\mathcal{V}}$ is
the set of nodes and ${\mathcal{E}}$ is the set of edges. It is associated
with a node type mapping function $\phi:{\mathcal{V}}\rightarrow{\mathcal{A}}$
and an edge type mapping function
$\varphi:{\mathcal{E}}\rightarrow{\mathcal{R}}$. ${\mathcal{A}}$ and
${\mathcal{R}}$ denote the node type set and edge type set, respectively. For
heterogeneous graphs, we require $|{\mathcal{A}}|+|{\mathcal{R}}|>2$. Let
${\mathcal{X}}=\\{{\bm{X}}_{A}\mid A\in{\mathcal{A}}\\}$ be the set of all
node feature matrices for different node types. Specifically,
${\bm{X}}_{A}\in{\mathbb{R}}^{\left|{\mathcal{V}}_{A}\right|\times d_{A}}$ is
the feature matrix where each row corresponds to a feature vector
${\bm{x}}_{i}^{A}$ of node $i$ of type $A$. All nodes of type $A$ share the
same feature dimension $d_{A}$, and nodes of different types can have
different feature dimensions.
Figure 1(a) illustrates an example heterogeneous graph with three types of
nodes: author (A), paper (P), and subject (S), as well as two types of edges:
“write” and “belong to”.
###### Definition 0: Network schema.
The network schema is defined as
${\mathcal{S}}=({\mathcal{A}},{\mathcal{R}})$, which can be seen as a meta
template for a heterogeneous graph ${\mathcal{G}}$. Specifically, network
schema is a graph defined over the set of node types ${\mathcal{A}}$, with
edges representing relations from the set of edge types ${\mathcal{R}}$.
Figure 1(b) presents the network schema for a heterogeneous graph. As per the
network schema, we learn that a paper is written by an author and that a paper
belongs to a subject.
###### Definition 0: Metapath.
A metapath $P$ is a path defined by a pattern of node and edge types, denoted
as
$A_{1}\xrightarrow{R_{1}}A_{2}\xrightarrow{R_{2}}\cdots\xrightarrow{R_{l}}A_{l+1}$
(abbreviated as $A_{1}A_{2}\cdots A_{l+1}$), where $A_{i}\in{\mathcal{A}}$ and
$R_{i}\in{\mathcal{R}}$.
Figure 1(c) shows two metapaths for a heterogeneous graph: “PAP” represents
that two papers are written by the same author, while “PSP” indicates that two
papers share the same subject.
###### Definition 0: Semi-supervised node classification.
Given a heterogeneous graph ${\mathcal{G}}=\\{{\mathcal{V}},{\mathcal{E}}\\}$
with node features ${\mathcal{X}}$, we aim to predict the labels of the target
node set ${\mathcal{V}}_{T}$ of type $T\in{\mathcal{A}}$. Each target node
$v\in{\mathcal{V}}_{T}$ corresponds to a class label $y_{v}\in{\mathcal{Y}}$.
Under the semi-supervised learning setting, while the node labels in the
labeled set ${\mathcal{V}}_{L}\subset{\mathcal{V}}_{T}$ are provided, our
objective is to predict the labels for nodes in the unlabeled set
${\mathcal{V}}_{U}={\mathcal{V}}_{T}\setminus{\mathcal{V}}_{L}$.
Figure 1. An example of a heterogeneous graph.
###### Definition 0: Pre-train, fine-tune.
We introduce the “ _pre-train, fine-tune_ ” paradigm for heterogeneous graphs.
During the pre-training stage, an encoder $f_{\theta}$ parameterized by
$\theta$ maps each node $v\in{\mathcal{V}}$ to a low-dimensional
representation ${\bm{h}}_{v}\in{\mathbb{R}}^{d}$. Typically, $f_{\theta}$ is
an HGNN that takes a heterogeneous graph
${\mathcal{G}}=\\{{\mathcal{V}},{\mathcal{E}}\\}$ and its node features
${\mathcal{X}}$ as inputs. For each target node $v\in{\mathcal{V}}_{T}$, we
construct its positive ${\mathcal{P}}_{v}$ and negative sample sets
${\mathcal{N}}_{v}$ for contrastive learning. The contrastive head $g_{\psi}$,
parameterized by $\psi$, discriminates the representations between positive
and negative pairs. The pre-training objective can be formulated as:
(1)
$\theta^{*},\psi^{*}=\operatorname*{arg\,min}_{\theta,\psi}{\mathcal{L}}_{con}\left(g_{\psi},f_{\theta},{\mathcal{V}}_{T},{\mathcal{P}},{\mathcal{N}}\right),$
where ${\mathcal{L}}_{con}$ denotes the contrastive loss. Both
${\mathcal{P}}=\left\\{{\mathcal{P}}_{v}\mid v\in{\mathcal{V}}_{T}\right\\}$
and ${\mathcal{N}}=\left\\{{\mathcal{N}}_{v}\mid
v\in{\mathcal{V}}_{T}\right\\}$ can be nodes or graphs. They may be direct
augmentations or distinct views of the corresponding data instances,
contingent on the contrastive learning techniques employed.
In the fine-tuning stage, a prediction head $h_{\eta}$, parameterized by
$\eta$, is employed to optimize the learned representations for the downstream
node classification task. Given a set of labeled target nodes
${\mathcal{V}}_{L}$ and their corresponding label set ${\mathcal{Y}}$, the
fine-tuning objective can be formulated as:
(2)
$\theta^{**},\eta^{*}=\operatorname*{arg\,min}_{\theta^{*},\eta}{\mathcal{L}}_{sup}\left(h_{\eta},f_{\theta^{*}},{\mathcal{V}}_{L},{\mathcal{Y}}\right),$
where ${\mathcal{L}}_{sup}$ is the supervised loss. Notably, the parameters
$\theta$ are initialized with those obtained from the pre-training stage,
$\theta^{*}$.
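To make the two stages concrete, the following is a minimal sketch (assuming PyTorch) of the generic "pre-train, fine-tune" pipeline in Equations (1)-(2). The encoder `encoder` ($f_{\theta}$), contrastive head `con_head` ($g_{\psi}$), prediction head `pred_head` ($h_{\eta}$), the contrastive loss `con_loss`, and the graph/sample objects are placeholders for user-supplied components, not a specific library API.

```python
import torch

def pretrain(encoder, con_head, con_loss, graph, pos, neg, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(list(encoder.parameters()) + list(con_head.parameters()), lr=lr)
    for _ in range(epochs):
        h = encoder(graph)                        # node embeddings h_v
        loss = con_loss(con_head, h, pos, neg)    # contrastive loss L_con, Eq. (1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder, con_head                      # theta*, psi*

def finetune(encoder, pred_head, graph, labeled_idx, labels, epochs=100, lr=1e-3):
    # encoder is initialized with the pre-trained parameters theta*
    opt = torch.optim.Adam(list(encoder.parameters()) + list(pred_head.parameters()), lr=lr)
    for _ in range(epochs):
        logits = pred_head(encoder(graph)[labeled_idx])
        loss = torch.nn.functional.cross_entropy(logits, labels)  # supervised loss L_sup, Eq. (2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder, pred_head                     # theta**, eta*
```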
## 4\. Method
Figure 2. Overview of the HetGPT architecture: Initially, an HGNN is pre-
trained alongside a contrastive head using a contrastive learning objective,
after which their parameters are frozen. Following this, a _heterogeneous
feature prompt_ (Sec. 4.3) is injected into the input graph’s feature space.
These prompted node features are then processed by the pre-trained HGNN,
producing the prompted node embeddings. Next, a _multi-view neighborhood
aggregation_ mechanism (Sec. 4.4) captures both local and global heterogeneous
neighborhood information of the target node, generating a node token. Finally,
pairwise similarity comparisons are performed between this node token and
class tokens derived from the _virtual class prompt_ (Sec. 4.2) via the same
contrastive learning objective from pre-training. As an illustrative example
of employing HetGPT for node classification: consider a target node $P_{2}$
associated with class $1$, its positive samples during prompt tuning are
constructed using the class token of class $1$, while negative samples are
drawn from class tokens of classes $2$ and $3$ (_i.e.,_ all remaining
classes).
In this section, we introduce HetGPT, a novel graph prompting technique
specifically designed for heterogeneous graphs, to address the four challenges
outlined in Section 1. In particular, HetGPT consists of the following key
components: (1) _prompting function design_ ; (2) _virtual class prompt_ ; (3)
_heterogeneous feature prompt_ ; (4) _multi-view neighborhood aggregation_ ;
(5) _prompt-based learning and inference_. The overall framework of HetGPT is
shown in Figure 2.
### 4.1. Prompting Function Design (C1)
Traditional fine-tuning approaches typically append an additional prediction
head and a supervised loss for downstream tasks, as depicted in Equation 2. In
contrast, HetGPT pivots towards leveraging and tuning prompts specifically
designed for node classification.
In prompt-based learning for NLP, a prompting function employs a pre-defined
template to modify the textual input, ensuring its alignment with the input
format used during pre-training. Meanwhile, within graph-based pre-training,
contrastive learning has overshadowed generative learning, especially in
heterogeneous graphs (Park et al., 2020; Jing et al., 2021; Wang et al.,
2021a), as it offers broader applicability and harnesses overlapping task
subspaces, which are optimal for knowledge transfer. Therefore, these findings
motivate us to reformulate the downstream node classification task to align
with contrastive approaches. Subsequently, a good design of graph prompting
function becomes pivotal in matching these contrastive pre-training
strategies.
Central to graph contrastive learning is the endeavor to maximize mutual
information between node-node or node-graph pairs. In light of this, we
propose a graph prompting function, denoted as $l(\cdot)$. This function
transforms an input node $v$ into a pairwise template that encompasses a node
token ${\bm{z}}_{v}$ and a class token ${\bm{q}}_{c}$:
(3) $l(v)=[{\bm{z}}_{v},{\bm{q}}_{c}].$
Within the framework, ${\bm{q}}_{c}$ represents a trainable embedding for
class $c$ in the downstream node classification task, as explained in Section
4.2. Concurrently, ${\bm{z}}_{v}$ denotes the latent representation of node
$v$, derived from the pre-trained HGNN, which will be further discussed in
Section 4.3 and Section 4.4.
### 4.2. Virtual Class Prompt (C2)
Instead of relying solely on direct class labels, we propose the concept of a
virtual class prompt, a paradigm shift from traditional node classification.
Serving as a dynamic proxy for each class, the prompt bridges the gap between
the abstract representation of nodes and the concrete class labels they are
affiliated with. By leveraging the virtual class prompt, we aim to reformulate
downstream node classification as a series of mutual information calculation
tasks, thereby refining the granularity and adaptability of the classification
predictions. This section delves into the design and intricacies of the
virtual class prompt, illustrating how it can be seamlessly integrated into
the broader contrastive pre-training framework.
#### 4.2.1. Class tokens.
We introduce class tokens, the building blocks of the virtual class prompt,
which serve as representative symbols for each specific class. Distinct from
discrete class labels, these tokens can capture intricate class-specific
semantics, providing a richer context for node classification. We formally
define the set of class tokens, denoted as ${\mathcal{Q}}$, as follows:
(4)
${\mathcal{Q}}=\left\\{{\bm{q}}_{1},{\bm{q}}_{2},\dots,{\bm{q}}_{C}\right\\},$
where $C$ is the total number of classes in ${\mathcal{Y}}$. Each token
${\bm{q}}_{c}\in{\mathbb{R}}^{d}$ is a trainable vector and shares the same
embedding dimension $d$ with the node representations from the pre-trained
network $f_{\theta^{*}}$.
#### 4.2.2. Prompt initialization.
Effective initialization of class tokens facilitates a smooth knowledge
transfer from pre-trained heterogeneous graphs to the downstream node
classification. We initialize each class token, ${\bm{q}}_{c}$, by computing
the mean of embeddings for labeled nodes that belong to the respective class.
Formally,
(5)
${\bm{q}}_{c}=\frac{1}{N_{c}}\sum_{\begin{subarray}{c}v\in{\mathcal{V}}_{L}\\\
y_{v}=c\end{subarray}}{\bm{h}}_{v},\quad\forall c\in\\{1,2,\dots,C\\},$
where $N_{c}$ denotes the number of nodes with class $c$ in the labeled set
${\mathcal{V}}_{L}$, and ${\bm{h}}_{v}$ represents the pre-trained embedding
of node $v$. This initialization aligns each class token with the prevalent
patterns of its respective class, enabling efficient prompt tuning afterward.
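A minimal sketch (assuming PyTorch) of this initialization is given below: each virtual class token ${\bm{q}}_{c}$ starts as the mean pre-trained embedding of the labeled nodes of class $c$, as in Equation (5). Tensor names are illustrative, not taken from a released codebase.

```python
import torch

def init_class_tokens(h, labels, labeled_idx, num_classes):
    # h: [N, d] pre-trained node embeddings; labels: [N] class ids (valid on labeled_idx)
    q = torch.zeros(num_classes, h.size(1))
    for c in range(num_classes):
        idx = labeled_idx[labels[labeled_idx] == c]
        q[c] = h[idx].mean(dim=0)          # mean embedding of labeled nodes of class c, Eq. (5)
    return torch.nn.Parameter(q)           # trainable virtual class prompt Q
```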
### 4.3. Heterogeneous Feature Prompt (C3)
Inspired by recent progress with visual prompts in the vision domain (Jia et
al., 2022; Bahng et al., 2022), we propose a heterogeneous feature prompt.
This approach incorporates a small amount of trainable parameters directly
into the feature space of the heterogeneous graph ${\mathcal{G}}$. Throughout
the training phase of the downstream task, the parameters of the pre-trained
network $f_{\theta^{*}}$ remain unchanged. The key insight behind this feature
prompt lies in its ability to act as task-specific augmentations to the
original graph. It implicitly tailors the pre-trained node representations for
an effective and efficient transfer of the learned knowledge from pre-training
to the downstream task.
Prompting techniques fundamentally revolve around the idea of augmenting the
input data to better align with the pretext objectives. This makes the design
of a graph-level transformation an important factor for the efficacy of
prompting. To illustrate, let us consider a homogeneous graph ${\mathcal{G}}$
with its adjacency matrix ${\bm{A}}$ and node feature matrix ${\bm{X}}$. We
introduce $t_{\xi}$, a graph-level transformation function parameterized by
$\xi$, such as changing node features, adding or removing edges, _etc_. Prior
research (Fang et al., 2022a; Sun et al., 2023) has proved that for any
transformation function $t_{\xi}$, there always exists a corresponding feature
prompt ${\bm{p}}^{*}$ that satisfies the following property:
(6) $f_{\theta^{*}}({\bm{A}},{\bm{X}}+{\bm{p}}^{*})\equiv
f_{\theta^{*}}(t_{\xi}({\bm{A}},{\bm{X}}))+O_{{\bm{p}}\theta},$
where $O_{{\bm{p}}\theta}$ represents the deviation between the node
representations from the graph augmented by $t_{\xi}$ and those from the graph
prompted by ${\bm{p}}^{*}$. This discrepancy is primarily contingent on
the quality of the learned prompt ${\bm{p}}^{*}$ as the parameters
$\theta^{*}$ of the pre-trained model are fixed. This perspective further
implies the feasibility and significance of crafting an effective feature
prompt within the graph’s input space, which emulates the impact of learning a
specialized augmentation function tailored for downstream tasks.
However, in heterogeneous graphs, nodes exhibit diverse attributes based on
their types, and each type has unique dimensionalities and underlying semantic
meanings. Take a citation network for instance: while paper nodes have
features represented by word embeddings derived from their abstracts, author
nodes utilize one-hot encoding as features. Given this heterogeneity, the
approach used in homogeneous graph prompting methods may not be effective or
yield optimal results when applied to heterogeneous graphs, as it uniformly
augments node features for all node types via a single and all-encompassing
feature prompt.
#### 4.3.1. Type-specific feature tokens
To address the above challenge, we introduce type-specific feature tokens,
which are a set of designated tokens that align with the diverse input
features inherent to each node type. Given the diversity in scales and
structures across various graphs, equating the number of feature tokens to the
node count is often sub-optimal. This inefficiency is especially obvious in
large-scale graphs, as this design demands extensive storage due to its
$O(|{\mathcal{V}}|)$ learnable parameters. In light of this, for each node
type, we employ a feature prompt consisting of a limited set of $K$ independent
basis vectors, _i.e.,_ ${\bm{f}}_{k}^{A}\in{\mathbb{R}}^{d_{A}}$,
with $d_{A}$ as the feature dimension associated with node type
$A\in{\mathcal{A}}$:
(7) $\displaystyle{\mathcal{F}}$ $\displaystyle=\left\\{{\mathcal{F}}_{A}\mid
A\in{\mathcal{A}}\right\\},$ $\displaystyle{\mathcal{F}}_{A}$
$\displaystyle=\left\\{{\bm{f}}^{A}_{1},{\bm{f}}^{A}_{2},\dots,{\bm{f}}^{A}_{K}\right\\},$
where $K$ is a hyperparameter and its value can be adjusted based on the
specific dataset in use.
#### 4.3.2. Prompted node features
For each node $i$ of type $A\in{\mathcal{A}}$, its node feature vector
${\bm{x}}_{i}^{A}$ is augmented by a linear combination of feature token
${\bm{f}}_{k}^{A}$ through an attention mechanism, where the attention weights
are denoted by $w_{i,k}^{A}$. Consequently, the prompted node feature vector
evolves as:
(8)
$\displaystyle\tilde{{\bm{x}}}_{i}^{A}={\bm{x}}_{i}^{A}+\sum_{k=1}^{K}w_{i,k}^{A}\cdot{\bm{f}}_{k}^{A},$
(9) $\displaystyle
w_{i,k}^{A}=\frac{\exp\left(\sigma\left(({\bm{f}}_{k}^{A})^{\top}\cdot{\bm{x}}_{i}^{A}\right)\right)}{\sum_{j=1}^{K}\exp\left(\sigma\left(({\bm{f}}_{j}^{A})^{\top}\cdot{\bm{x}}_{i}^{A}\right)\right)},$
where $\sigma(\cdot)$ represents a non-linear activation function.
Subsequently, we utilize these prompted node features, represented as
$\tilde{{\mathcal{X}}}$, together with the heterogeneous graph,
${\mathcal{G}}$. They are then passed through the pre-trained HGNN
$f_{\theta^{*}}$ during the prompt tuning phase to obtain a prompted node
embedding matrix $\tilde{{\bm{H}}}$:
(10)
$\tilde{{\bm{H}}}=f_{\theta^{*}}({\mathcal{G}},\tilde{{\mathcal{X}}})\in{\mathbb{R}}^{|{\mathcal{V}}|\times
d}.$
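The following is a minimal sketch (assuming PyTorch) of the heterogeneous feature prompt in Equations (7)-(10): for each node type, $K$ basis tokens are combined per node via attention and added to the raw features, and the prompted features are then fed to the frozen pre-trained HGNN. The `hgnn` call and the per-type feature dictionary are placeholders, not a specific library API.

```python
import torch
import torch.nn.functional as F

class FeaturePrompt(torch.nn.Module):
    def __init__(self, feat_dims, K):
        super().__init__()
        # one set of K basis tokens f_k^A per node type A, Eq. (7)
        self.tokens = torch.nn.ParameterDict(
            {A: torch.nn.Parameter(torch.empty(K, d_A)) for A, d_A in feat_dims.items()})
        for p in self.tokens.values():
            torch.nn.init.kaiming_uniform_(p)

    def forward(self, feats):
        prompted = {}
        for A, X in feats.items():                          # X: [n_A, d_A]
            scores = F.leaky_relu(X @ self.tokens[A].t())   # (f_k^A)^T x_i, Eq. (9)
            w = torch.softmax(scores, dim=1)                # attention weights w_{i,k}^A
            prompted[A] = X + w @ self.tokens[A]            # prompted features, Eq. (8)
        return prompted

# usage (hgnn, graph, feats are placeholders; hgnn parameters stay frozen):
# H_tilde = hgnn(graph, FeaturePrompt(feat_dims, K=5)(feats))   # Eq. (10)
```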
### 4.4. Multi-View Neighborhood Aggregation (C4)
In prompt-based learning for homogeneous graphs, the node token ${\bm{z}}_{v}$
in Equation 3 for a given node $v\in{\mathcal{V}}$ is directly equated to
${\bm{h}}_{v}$, which is the embedding generated by the pre-trained network
$f_{\theta^{*}}$ (Wen et al., 2023). Alternatively, it can also be derived
from an aggregation of the embeddings of its immediate neighboring nodes (Sun
et al., 2022). However, in heterogeneous graphs, such aggregations are
complicated due to the inherent heterogeneity of neighboring structures. For
example, given a target node with the type “paper”, connections can be
established either with other “paper” nodes through different metapaths
(_e.g.,_ PAP, PSP) or with nodes of varied types (_i.e.,_ author or subject)
based on the network schema. Furthermore, it is also vital to leverage the
prompted pre-trained node embeddings $\tilde{{\bm{H}}}$ (as detailed in
Section 4.3) in the aggregation. Taking all these into consideration, we
introduce a multi-view neighborhood aggregation mechanism. This strategy
incorporates both type-based and metapath-based neighbors, ensuring a
comprehensive representation that captures both local (_i.e.,_ network schema)
and global (_i.e.,_ metapath) patterns.
#### 4.4.1. Type-based aggregation
Based on the network schema outlined in Definition 3.2, a target node
$i\in{\mathcal{V}}_{T}$ can directly connect to $M$ different node types
$\\{A_{1},A_{2},\dots,A_{M}\\}$. Given the variability in contributions from
different nodes of the same type to node $i$ and the diverse influence from
various types of neighbors, we utilize a two-level attention mechanism (Wang
et al., 2019) to aggregate the local information of node $i$. For the first
level, the information ${\bm{h}}_{i}^{A_{m}}$ is fused from the neighbor set
${\mathcal{N}}^{A_{m}}_{i}$ for node $i$ using node attention:
(11)
$\displaystyle{\bm{h}}_{i}^{A_{m}}=\sigma\left(\sum_{j\in{\mathcal{N}}^{A_{m}}_{i}\cup\\{i\\}}\alpha_{i,j}^{A_{m}}\cdot\tilde{{\bm{h}}}_{j}\right),$
(12)
$\displaystyle\alpha_{i,j}^{A_{m}}=\frac{\exp\left(\sigma\left({\mathbf{a}}_{A_{m}}^{\top}\cdot[\tilde{{\bm{h}}}_{i}\|\tilde{{\bm{h}}}_{j}]\right)\right)}{\sum_{k\in{\mathcal{N}}^{A_{m}}_{i}\cup\\{i\\}}\exp\left(\sigma\left({\mathbf{a}}_{A_{m}}^{\top}\cdot[\tilde{{\bm{h}}}_{i}\|\tilde{{\bm{h}}}_{k}]\right)\right)},$
where $\sigma(\cdot)$ is a non-linear activation function, $\|$ denotes
concatenation, and ${\mathbf{a}}_{A_{m}}\in{\mathbb{R}}^{2d\times 1}$ is the
node attention vector shared across all nodes of type $A_{m}$. For the second
level, the type-based embedding of node $i$, denoted as
${\bm{z}}_{i}^{\text{TP}}$, is derived by synthesizing all type
representations
$\\{{\bm{h}}_{i}^{A_{1}},{\bm{h}}_{i}^{A_{2}},\dots,{\bm{h}}_{i}^{A_{M}}\\}$
through semantic attention:
(13) $\displaystyle\begin{aligned}
{\bm{z}}_{i}^{\text{TP}}&=\sum_{m=1}^{M}\beta_{A_{m}}\cdot{\bm{h}}_{i}^{A_{m}},&\beta_{A_{m}}&=\frac{\exp(w_{A_{m}})}{\sum_{k=1}^{M}\exp(w_{A_{k}})},\end{aligned}$
(14) $\displaystyle
w_{A_{m}}=\frac{1}{|{\mathcal{V}}_{T}|}\sum_{i\in{\mathcal{V}}_{T}}{\mathbf{a}}_{\text{TP}}^{\top}\cdot\text{tanh}({\bm{W}}_{\text{TP}}\cdot{\bm{h}}_{i}^{A_{m}}+{\bm{b}}_{\text{TP}}),$
where ${\mathbf{a}}_{\text{TP}}\in{\mathbb{R}}^{d\times 1}$ is the type-based
semantic attention vector shared across all node types,
${\bm{W}}_{\text{TP}}\in{\mathbb{R}}^{d\times d}$ is the weight matrix, and
${\bm{b}}_{\text{TP}}\in{\mathbb{R}}^{d\times 1}$ is the bias vector.
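A minimal dense-matrix sketch (assuming PyTorch) of this two-level attention is given below; the metapath-based view described next follows the same pattern. The dense $O(n^{2})$ formulation, the LeakyReLU choice for $\sigma(\cdot)$, and all names are illustrative only.

```python
import torch
import torch.nn.functional as F

def node_attention(h, nbr_mask, a):
    # h: [n, d] prompted embeddings; nbr_mask: [n, n] bool adjacency for one node type
    # (self-loops included); a: [2d] shared attention vector a_{A_m}
    n, d = h.shape
    pair = torch.cat([h.unsqueeze(1).expand(n, n, d),
                      h.unsqueeze(0).expand(n, n, d)], dim=-1)       # [h_i || h_j]
    e = F.leaky_relu(pair @ a).masked_fill(~nbr_mask, float("-inf"))
    alpha = torch.softmax(e, dim=1)                                  # Eq. (12)
    return F.leaky_relu(alpha @ h)                                   # h_i^{A_m}, Eq. (11)

def semantic_attention(views, W, b, a_tp):
    # views: list of M type-specific embeddings h^{A_m}, each [n, d]
    H = torch.stack(views)                                           # [M, n, d]
    w = (torch.tanh(H @ W.t() + b) @ a_tp).mean(dim=1)               # per-type score, Eq. (14)
    beta = torch.softmax(w, dim=0)                                   # Eq. (13)
    return (beta[:, None, None] * H).sum(dim=0)                      # z^{TP}: [n, d]
```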
#### 4.4.2. Metapath-based aggregation
In contrast to type-based aggregation, metapath-based aggregation provides a
perspective to capture global information of a target node
$i\in{\mathcal{V}}_{T}$. This is attributed to the nature of metapaths, which
encompass connections that are at least two hops away. Given a set of defined
metapaths $\\{P_{1},P_{2},\dots,P_{N}\\}$, the information from neighbors of
node $i$ connected through metapath $P_{n}$ is aggregated via node attention:
(15)
$\displaystyle{\bm{h}}_{i}^{P_{n}}=\sigma\left(\sum_{j\in{\mathcal{N}}^{P_{n}}_{i}\cup\\{i\\}}\alpha_{i,j}^{P_{n}}\cdot\tilde{{\bm{h}}}_{j}\right),$
(16)
$\displaystyle\alpha_{i,j}^{P_{n}}=\frac{\exp\left(\sigma\left({\mathbf{a}}_{P_{n}}^{\top}\cdot[\tilde{{\bm{h}}}_{i}\|\tilde{{\bm{h}}}_{j}]\right)\right)}{\sum_{k\in{\mathcal{N}}^{P_{n}}_{i}\cup\\{i\\}}\exp\left(\sigma\left({\mathbf{a}}_{P_{n}}^{\top}\cdot[\tilde{{\bm{h}}}_{i}\|\tilde{{\bm{h}}}_{k}]\right)\right)},$
where ${\mathbf{a}}_{P_{n}}\in{\mathbb{R}}^{2d\times 1}$ is the node attention
vector shared across all nodes connected through metapath $P_{n}$. To compile
the global structural information from various metapaths, we fuse the node
embeddings
$\\{{\bm{h}}_{i}^{P_{1}},{\bm{h}}_{i}^{P_{2}},\dots,{\bm{h}}_{i}^{P_{N}}\\}$
derived from each metapath into a single embedding using semantic attention:
(17) $\displaystyle\begin{aligned}
{\bm{z}}_{i}^{\text{MP}}&=\sum_{n=1}^{N}\beta_{P_{n}}\cdot{\bm{h}}_{i}^{P_{n}},&\beta_{P_{n}}&=\frac{\exp(w_{P_{n}})}{\sum_{k=1}^{N}\exp(w_{P_{k}})},\end{aligned}$
(18) $\displaystyle
w_{P_{n}}=\frac{1}{|{\mathcal{V}}_{T}|}\sum_{i\in{\mathcal{V}}_{T}}{\mathbf{a}}_{\text{MP}}^{\top}\cdot\text{tanh}({\bm{W}}_{\text{MP}}\cdot{\bm{h}}_{i}^{P_{n}}+{\bm{b}}_{\text{MP}}),$
where ${\mathbf{a}}_{\text{MP}}\in{\mathbb{R}}^{d\times 1}$ is the metapath-
based semantic attention vector shared across all metapaths,
${\bm{W}}_{\text{MP}}\in{\mathbb{R}}^{d\times d}$ is the weight matrix, and
${\bm{b}}_{\text{MP}}\in{\mathbb{R}}^{d\times 1}$ is the bias vector.
Integrating the information from both aggregation views, we obtain the final
node token, ${\bm{z}}_{i}$, by concatenating the type-based and the metapath-
based embedding:
(19)
${\bm{z}}_{i}=\sigma\left({\bm{W}}[{\bm{z}}_{i}^{\text{MP}}\|{\bm{z}}_{i}^{\text{TP}}]+{\bm{b}}\right),$
where $\sigma(\cdot)$ is a non-linear activation function,
${\bm{W}}\in{\mathbb{R}}^{2d\times d}$ is the weight matrix, and
${\bm{b}}\in{\mathbb{R}}^{d\times 1}$ is the bias vector.
### 4.5. Prompt-Based Learning and Inference
Building upon our prompt design detailed in the preceding sections, we present
a comprehensive overview of the prompt-based learning and inference process
for semi-supervised node classification. This methodology encompasses three
primary stages: (1) _prompt addition_ , (2) _prompt tuning_ , and (3) _prompt-
assisted prediction_.
#### 4.5.1. Prompt addition.
Based on the graph prompting function $l(\cdot)$ outlined in Equation (3), we
parameterize it using the trainable virtual class prompt ${\mathcal{Q}}$ and
the heterogeneous feature prompt ${\mathcal{F}}$. To ensure compatibility
during the contrastive loss calculation, which we detail later, we use a
single-layer Multilayer Perceptron (MLP) to project both ${\bm{z}}_{v}$ and
${\bm{q}}_{c}$ onto the same embedding space. Formally:
(20) $\displaystyle{\bm{z}}^{\prime}_{v}$
$\displaystyle=\text{MLP}({\bm{z}}_{v}),$ $\displaystyle{\bm{q}}^{\prime}_{c}$
$\displaystyle=\text{MLP}({\bm{q}}_{c}),$ $\displaystyle
l_{{\mathcal{Q}},{\mathcal{F}}}(v)$
$\displaystyle=[{\bm{z}}^{\prime}_{v},{\bm{q}}^{\prime}_{c}].$
#### 4.5.2. Prompt tuning.
Our prompt design allows us to reuse the contrastive head from Equation 1 for
downstream node classification without introducing a new prediction head.
Thus, the original positive ${\mathcal{P}}_{v}$ and negative samples
${\mathcal{N}}_{v}$ of a labeled node $v\in{\mathcal{V}}_{L}$ used during pre-
training are replaced with the virtual class prompt corresponding to its given
class label $y_{v}$.
(21) $\displaystyle{\mathcal{P}}_{v}$
$\displaystyle=\left\\{{\bm{q}}_{y_{v}}\right\\},$
$\displaystyle{\mathcal{N}}_{v}$
$\displaystyle={\mathcal{Q}}\setminus\left\\{{\bm{q}}_{y_{v}}\right\\}.$
Consistent with the contrastive pre-training phase, we employ the InfoNCE
(Oord et al., 2018) loss to replace the supervised classification loss
${\mathcal{L}}_{sup}$:
(22)
${\mathcal{L}}_{con}=-\sum_{v\in{\mathcal{V}}_{L}}\log\left(\frac{\exp(\text{sim}({\bm{z}}^{\prime}_{v},{\bm{q}}^{\prime}_{y_{v}})/\tau)}{\sum_{c=1}^{C}\exp(\text{sim}({\bm{z}}^{\prime}_{v},{\bm{q}}^{\prime}_{c})/\tau)}\right).$
Here, $\text{sim}(\cdot)$ denotes a similarity function between two vectors,
and $\tau$ denotes a temperature hyperparameter. To obtain the optimal
prompts, we utilize the following prompt tuning objective:
(23)
${\mathcal{Q}}^{*},{\mathcal{F}}^{*}=\operatorname*{arg\,min}_{{\mathcal{Q}},{\mathcal{F}}}{\mathcal{L}}_{con}\left(g_{\psi^{*}},f_{\theta^{*}},l_{{\mathcal{Q}},{\mathcal{F}}},{\mathcal{V}}_{L}\right)+\lambda{\mathcal{L}}_{orth},$
where $\lambda$ is a regularization hyperparameter. The orthogonal
regularization (Brock et al., 2016) loss ${\mathcal{L}}_{orth}$ is defined to
ensure the label tokens in the virtual class prompt remain orthogonal during
prompt tuning, fostering diversified representations of different classes:
(24)
${\mathcal{L}}_{orth}=\left\|{\bm{Q}}{\bm{Q}}^{\top}-{\bm{I}}\right\|^{2}_{F},$
where
${\bm{Q}}=\left[{\bm{q}}_{1},{\bm{q}}_{2},\dots,{\bm{q}}_{C}\right]^{\top}\in{\mathbb{R}}^{C\times
d}$ is the matrix form of the virtual class prompt ${\mathcal{Q}}$, and
${\bm{I}}\in{\mathbb{R}}^{C\times C}$ is an identity matrix.
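A minimal sketch (assuming PyTorch and cosine similarity for $\text{sim}(\cdot)$) of the prompt-tuning objective in Equations (22)-(24) is shown below: InfoNCE between projected node tokens and class tokens, plus the orthogonality regularizer on the virtual class prompt. All names are illustrative.

```python
import torch
import torch.nn.functional as F

def prompt_tuning_loss(z, q_proj, q_raw, labels, tau=0.5, lam=0.01):
    # z: [B, d] projected node tokens of labeled nodes; q_proj: [C, d] projected class
    # tokens; q_raw: [C, d] class prompt matrix Q used in the regularizer; labels: [B]
    logits = F.normalize(z, dim=-1) @ F.normalize(q_proj, dim=-1).t() / tau
    l_con = F.cross_entropy(logits, labels)                       # InfoNCE, Eq. (22) (mean over batch)
    gram = q_raw @ q_raw.t() - torch.eye(q_raw.size(0), device=q_raw.device)
    l_orth = (gram ** 2).sum()                                    # squared Frobenius norm, Eq. (24)
    return l_con + lam * l_orth                                   # Eq. (23)
```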
#### 4.5.3. Prompt-assisted prediction
During the inference phase, for an unlabeled target node
$v\in{\mathcal{V}}_{U}$, the predicted probability of node $v$ belonging to
class $c$ is given by:
(25)
$P(y_{v}=c)=\frac{\exp(\text{sim}({\bm{z}}^{\prime}_{v},{\bm{q}}^{\prime}_{c}))}{\sum_{k=1}^{C}\exp(\text{sim}({\bm{z}}^{\prime}_{v},{\bm{q}}^{\prime}_{k}))}.$
This equation computes the similarity between the projected node token
${\bm{z}}^{\prime}_{v}$ and each projected class token
${\bm{q}}^{\prime}_{c}$, using the softmax function to obtain class
probabilities. The class with the maximum likelihood for node $v$ is
designated as the predicted class $\hat{y}_{v}$:
(26) $\hat{y}_{v}=\operatorname*{arg\,max}_{c}P(y_{v}=c).$
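A minimal sketch (assuming PyTorch and cosine similarity for $\text{sim}(\cdot)$) of prompt-assisted prediction in Equations (25)-(26): a softmax over the similarities between each node token and every class token, followed by an argmax.

```python
import torch
import torch.nn.functional as F

def predict(z, q):
    # z: [M, d] projected node tokens of unlabeled nodes; q: [C, d] projected class tokens
    sims = F.normalize(z, dim=-1) @ F.normalize(q, dim=-1).t()   # sim(z'_v, q'_c)
    probs = torch.softmax(sims, dim=1)                           # class probabilities, Eq. (25)
    return probs.argmax(dim=1), probs                            # predicted class, Eq. (26)
```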
## 5\. Experiments
Table 1. Detailed statistics of the benchmark datasets. Italicized node types
are the target nodes for classification.

| Dataset | # Nodes | # Edges | Metapaths | # Classes |
| --- | --- | --- | --- | --- |
| ACM | _Paper_: 4,019; Author: 7,167; Subject: 60 | P-A: 13,407; P-S: 4,019 | PAP, PSP | 3 |
| DBLP | _Author_: 4,057; Paper: 14,328; Term: 7,723; Conference: 20 | P-A: 19,645; P-T: 85,810; P-C: 14,328 | APA, APCPA, APTPA | 4 |
| IMDB | _Movie_: 4,278; Director: 2,081; Actor: 5,257 | M-D: 4,278; M-A: 12,828 | MAM, MDM | 3 |
Table 2. Experimental results on three semi-supervised node classification
benchmark datasets. We report the average performance over 10 repetitions. The
best results are highlighted in bold. The “+” symbol indicates the integration
of HetGPT with the corresponding original models as an auxiliary system.
| Dataset | Metric | # Train | HAN | HGT | MAGNN | HGMAE | GPPT | DMGI | +HetGPT | HeCo | +HetGPT | HDMI | +HetGPT |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ACM | Ma-F1 | 1 | 27.08±2.05 | 49.74±9.38 | 38.62±2.87 | 28.00±7.21 | 21.85±1.09 | 47.28±0.23 | 52.07±3.28 | 54.24±8.42 | 55.90±8.42 | 65.58±7.45 | **71.00±5.32** |
| | | 5 | 84.84±0.95 | 84.40±7.48 | 84.45±0.79 | 87.34±1.62 | 71.77±6.73 | 86.12±0.45 | 87.91±0.77 | 86.55±1.36 | 87.03±1.15 | 88.88±1.73 | **91.08±0.37** |
| | | 20 | 84.37±1.25 | 84.40±5.31 | 85.13±1.58 | 88.61±1.10 | 80.90±0.88 | 86.64±0.65 | 88.65±0.81 | 88.09±1.21 | 88.63±0.88 | 90.76±0.79 | **92.15±0.25** |
| | | 40 | 86.33±0.66 | 86.17±6.26 | 86.26±0.67 | 88.31±1.09 | 81.78±1.46 | 87.52±0.46 | 87.88±0.69 | 87.03±1.40 | 86.88±0.95 | 90.62±0.21 | **91.31±0.39** |
| | | 60 | 86.31±2.16 | 86.15±6.05 | 86.56±1.96 | 88.81±0.72 | 84.15±0.47 | 88.71±0.59 | 90.33±0.41 | 88.95±0.85 | 89.13±0.59 | 91.29±0.57 | **92.09±0.35** |
| ACM | Mi-F1 | 1 | 49.76±0.35 | 58.52±6.75 | 51.27±0.45 | 40.82±7.26 | 34.32±3.87 | 49.63±0.25 | 54.29±4.49 | 54.81±9.88 | 63.01±9.61 | 64.89±8.20 | **73.41±2.51** |
| | | 5 | 84.96±1.12 | 85.11±4.06 | 85.31±1.14 | 87.47±1.53 | 75.41±3.66 | 86.16±0.47 | 88.05±0.77 | 86.85±1.33 | 87.26±1.09 | 89.01±1.69 | **91.09±0.37** |
| | | 20 | 83.33±1.58 | 83.05±3.62 | 83.88±1.60 | 88.31±1.15 | 81.20±0.63 | 85.94±0.64 | 88.40±0.79 | 87.87±1.24 | 88.60±0.79 | 90.55±0.82 | **91.85±0.26** |
| | | 40 | 86.24±0.67 | 86.21±3.68 | 86.39±0.69 | 88.29±1.04 | 82.02±1.49 | 87.09±0.47 | 87.78±0.79 | 86.56±1.56 | 86.64±1.05 | 90.41±0.23 | **91.11±0.39** |
| | | 60 | 85.56±2.48 | 85.49±4.74 | 86.03±2.40 | 88.59±0.71 | 84.16±0.45 | 88.34±0.63 | 90.13±0.43 | 88.48±0.94 | 88.91±0.62 | 91.16±0.56 | **91.94±0.33** |
| DBLP | Ma-F1 | 1 | 50.28±8.41 | 70.86±6.82 | 52.52±8.67 | 82.75±7.96 | 39.17±1.25 | 76.00±3.27 | 81.33±1.90 | 88.79±0.44 | 89.44±0.54 | 88.28±0.58 | **90.25±0.29** |
| | | 5 | 82.85±8.60 | 82.70±5.28 | 82.24±0.85 | 83.47±4.57 | 54.13±1.06 | 81.12±1.20 | 81.85±1.89 | 91.56±0.23 | **91.87±0.43** | 91.00±0.38 | 91.39±0.46 |
| | | 20 | 89.41±0.61 | 89.61±5.70 | 89.36±0.58 | 89.31±1.47 | 71.06±0.31 | 84.03±1.20 | 84.41±1.32 | 89.90±0.37 | 91.17±0.52 | 91.30±0.17 | **91.64±0.33** |
| | | 40 | 89.25±0.55 | 89.59±6.69 | 89.42±0.53 | 89.99±0.45 | 73.39±0.59 | 85.43±1.09 | 85.91±0.91 | 90.45±0.31 | 91.48±0.41 | 90.77±0.28 | **91.84±0.34** |
| | | 60 | 89.77±0.55 | 88.99±8.69 | 89.15±0.52 | 91.30±0.28 | 72.99±0.44 | 86.54±0.95 | 87.09±0.70 | 90.25±0.29 | 91.27±0.17 | 90.67±0.33 | **91.39±0.14** |
| DBLP | Mi-F1 | 1 | 51.72±8.02 | 73.71±5.74 | 51.23±0.76 | 84.34±7.02 | 41.84±1.11 | 78.62±2.53 | 82.83±1.63 | 89.59±0.37 | 90.15±0.52 | 89.71±0.41 | **91.02±0.22** |
| | | 5 | 83.35±8.43 | 84.03±3.44 | 83.45±0.89 | 83.59±4.57 | 54.82±0.82 | 81.12±1.20 | 81.85±1.89 | 91.83±0.25 | **92.12±0.42** | 91.25±0.39 | 91.68±0.45 |
| | | 20 | 90.49±0.56 | 90.29±2.90 | 90.60±0.54 | 90.38±1.36 | 72.49±0.30 | 84.03±1.20 | 84.41±1.32 | 91.01±0.36 | 92.05±0.50 | 92.16±0.14 | **92.46±0.29** |
| | | 40 | 90.11±0.42 | 90.85±5.67 | 90.80±0.47 | 90.99±0.41 | 74.56±0.64 | 85.43±1.09 | 85.91±0.91 | 91.35±0.28 | 92.19±0.36 | 91.72±0.26 | **92.53±0.31** |
| | | 60 | 91.70±0.42 | 90.25±6.22 | 91.58±0.48 | 92.13±0.27 | 73.63±0.42 | 86.54±0.95 | 87.09±0.70 | 91.30±0.25 | 92.22±0.16 | 91.80±0.23 | **92.35±0.13** |
| IMDB | Ma-F1 | 1 | 23.26±1.59 | 28.99±3.21 | 35.75±1.85 | 29.87±2.28 | 31.08±0.96 | 37.70±2.21 | 40.22±2.50 | 28.00±1.65 | 32.51±3.86 | 38.29±2.44 | **40.28±2.83** |
| | | 5 | 39.79±2.21 | 35.72±4.29 | 39.59±1.08 | 37.17±2.79 | 37.47±1.13 | 45.58±3.05 | 49.63±1.04 | 35.92±2.60 | 37.66±2.28 | 48.82±1.40 | **51.87±1.69** |
| | | 20 | 45.76±1.87 | 48.75±2.56 | 48.77±0.46 | 45.85±1.62 | 44.08±0.53 | 47.30±5.01 | 49.56±1.07 | 42.16±2.17 | 43.75±1.43 | 50.87±1.69 | **52.14±2.27** |
| | | 40 | 45.58±0.78 | 47.98±1.57 | 46.37±0.40 | 44.40±1.73 | 42.47±0.71 | 45.25±3.14 | 48.77±1.30 | 45.94±1.74 | 46.48±1.50 | 51.18±1.57 | **52.81±1.36** |
| | | 60 | 49.51±0.72 | 51.53±1.06 | 48.97±0.38 | 46.60±2.30 | 44.78±0.89 | 47.14±7.22 | 51.14±1.25 | 48.12±1.27 | 49.19±1.42 | 52.17±1.67 | **53.83±1.36** |
| IMDB | Mi-F1 | 1 | 38.23±0.40 | 39.33±1.31 | 40.28±0.96 | 37.97±1.18 | 36.16±1.42 | 37.99±1.85 | 39.95±2.51 | 33.02±2.44 | 35.45±2.11 | 40.19±1.70 | **41.99±2.26** |
| | | 5 | 42.92±1.00 | 40.25±1.80 | 44.01±1.08 | 39.23±2.21 | 41.54±0.96 | 45.48±2.99 | 49.39±0.98 | 37.77±1.33 | 38.74±2.16 | **51.77±1.17** | 51.36±1.30 |
| | | 20 | 45.80±1.74 | 50.29±2.04 | 48.78±0.42 | 46.65±1.62 | 44.85±0.58 | 48.58±2.99 | 49.22±1.12 | 42.61±2.13 | 44.33±1.57 | 52.08±1.36 | **52.72±1.22** |
| | | 40 | 45.55±0.84 | 48.68±1.50 | 46.39±0.35 | 44.90±1.62 | 43.36±0.71 | 46.11±2.65 | 48.52±1.31 | 46.31±1.05 | 47.24±1.63 | 52.14±1.16 | **52.71±1.18** |
| | | 60 | 49.46±0.73 | 53.05±0.95 | 49.00±0.41 | 47.10±2.24 | 45.52±0.91 | 49.38±2.90 | 50.86±1.31 | 48.53±1.25 | 49.92±1.43 | 52.41±1.25 | **53.72±1.94** |
In this section, we conduct a thorough evaluation of our proposed HetGPT to
address the following research questions:
* •
(RQ1) Can HetGPT improve the performance of pre-trained heterogeneous graph
neural networks on the semi-supervised node classification task?
* •
(RQ2) How does HetGPT perform under different settings, _i.e.,_ ablated models
and hyperparameters?
* •
(RQ3) How does the prompt tuning efficiency of HetGPT compare to its fine-
tuning counterpart?
* •
(RQ4) How interpretable is the learned prompt in HetGPT?
### 5.1. Experiment Settings
#### 5.1.1. Datasets
We evaluate our methods using three benchmark datasets: ACM (Zhao et al.,
2020), DBLP (Fu et al., 2020), and IMDB (Fu et al., 2020). Detailed statistics
and descriptions of these datasets can be found in Table 1. For the semi-
supervised node classification task, we randomly select 1, 5, 20, 40, or 60
labeled nodes per class as our training set. Additionally, we set aside 1,000
nodes for validation and another 1,000 nodes for testing. Our evaluation
metrics include Macro-F1 and Micro-F1.
#### 5.1.2. Baseline models
We compare our approach against methods belonging to three different
categories:
* •
Supervised HGNNs: HAN (Wang et al., 2019), HGT (Hu et al., 2020a), MAGNN (Fu
et al., 2020);
* •
HGNNs with “ _pre-train, fine-tune_ ”:
* –
Generative: HGMAE (Tian et al., 2023);
* –
Contrastive (our focus): DMGI (Park et al., 2020), HeCo (Wang et al., 2021a), HDMI (Jing et al., 2021);
* •
GNNs with “ _pre-train, prompt_ ”: GPPT (Sun et al., 2022).
#### 5.1.3. Implementation details
For the homogeneous method GPPT, we evaluate using all the metapaths and
present the results with the best performance. Regarding the parameters of
other baselines, we adhere to the configuration specified in their original
papers.
In our HetGPT model, the heterogeneous feature prompt is initialized using
Kaiming initialization (He et al., 2015). During the prompt tuning phase, we
employ the Adam optimizer (Kingma and Ba, 2014) and search within a learning
rate ranging from 1e-4 to 5e-3. We also tune the patience for early stopping
from 20 to 100. The regularization hyperparameter $\lambda$ is set to 0.01. We
experiment with the number of feature tokens $K$, searching values from { 1,
5, 10, 15, 20 }. Lastly, for our non-linear activation function
$\sigma(\cdot)$, we use LeakyReLU.
### 5.2. Performance on Node Classification (RQ1)
Experimental results for semi-supervised node classification on three benchmark
datasets are detailed in Table 2. Compared to the pre-trained DMGI, HeCo, and
HDMI models, our post-training prompting framework, HetGPT, exhibits superior
performance in 88 out of the 90 comparison pairs. Specifically, we observe a
relative improvement of 3.00% in Macro-F1 and 2.62% in Micro-F1. The standard
deviation of HetGPT aligns closely with that of the original models,
indicating that the improvement achieved is both substantial and robust. It is
crucial to note that the three HGNNs with “ _pre-train, fine-tune_ ”, namely DMGI,
HeCo, and HDMI, are already among the state-of-the-art methods for semi-
supervised node classification. By integrating them with HetGPT, we push the
envelope even further, setting a new performance pinnacle. Furthermore,
HetGPT’s edge becomes even more significant in scenarios where labeled nodes
are extremely scarce, achieving an improvement of 6.60% in Macro-F1 and 6.88%
in Micro-F1 under the 1-shot setting. Such marked improvements in few-shot
performance strongly suggest HetGPT’s efficacy in mitigating the overfitting
issue. The strategic design of our prompting function, especially the virtual
class prompt, effectively captures the intricate characteristics of each
class, which can potentially obviate the reliance on costly annotated data.
Additionally, GPPT lags considerably on all datasets, which further
underscores the value of HetGPT’s effort in tackling the unique challenges
inherent to heterogeneous graphs.
### 5.3. Performance under Different Settings (RQ2)
#### 5.3.1. Ablation study
To further demonstrate the effectiveness of each module in HetGPT, we conduct
an ablation study to evaluate our full framework against the following three
variants:
* •
w/o VCP: the variant of HetGPT without the virtual class prompt from Section
4.2;
* •
w/o HFP: the variant of HetGPT without the heterogeneous feature prompt from
Section 4.3;
* •
w/o MNA: the variant of HetGPT without the multi-view neighborhood aggregation
from Section 4.4.
Experimental results on ACM and IMDB, shown in Figure 3, highlight the
substantial contributions of each module to the overall effectiveness of
HetGPT. Notably, the virtual class prompt emerges as the most pivotal
component, indicated by the significant performance drop when it’s absent.
This degradation mainly stems from the overfitting issue linked to the
negative transfer problem, especially when labeled nodes are sparse. The
virtual class prompt directly addresses this issue by generalizing the
intricate characteristics of each class within the embedding space.
(a) ACM
(b) IMDB
Figure 3. Ablation study of HetGPT on ACM and IMDB.
#### 5.3.2. Hyper-parameter sensitivity
We evaluate the sensitivity of HetGPT to its primary hyperparameter: the
number of basis feature tokens $K$ in Equation (7). As depicted in Figure 4,
even a small value of $K$ (_i.e.,_ 5 for ACM, 20 for DBLP, and 5 for
IMDB) can lead to satisfactory node classification performance. This suggests
that the prompt tuning effectively optimizes performance without the need to
introduce an extensive number of new parameters.
(a) ACM, DBLP
(b) IMDB
Figure 4. Performance of HetGPT with the different number of basis feature
vectors on ACM, DBLP, and IMDB.
### 5.4. Prompt Tuning Efficiency Analysis (RQ3)
Our HetGPT, encompassing the virtual class prompt and the heterogeneous
feature prompt, adds only a few new trainable parameters (_i.e.,_ comparable
to a shallow MLP). Concurrently, the parameters of the pre-trained HGNNs and
the contrastive head remain unchanged during the entire prompt tuning phase.
Figure 5 illustrates that HetGPT converges notably faster than its traditional
“ _pre-train, fine-tune_ ” counterpart, which both updates the parameters of
the pre-trained HGNNs and introduces a new prediction head. This further
demonstrates the efficiency benefits of our proposed framework, allowing for
effective training with minimal tuning iterations.
### 5.5. Interpretability Analysis (RQ4)
To gain a clear understanding of how the design of the virtual class prompt
facilitates effective node classification without relying on the traditional
classification paradigm, we employ a t-SNE plot to visualize the node
representations and the learned virtual class prompt on ACM and DBLP, as shown
in Figure 6. Within this visualization, nodes are depicted as colored circles,
while the class tokens from the learned virtual class prompt are denoted by
colored stars. Each color represents a unique class label. Notably, the
embeddings of these class tokens are positioned in close vicinity to clusters
of node embeddings sharing the same class label. This immediate spatial
proximity between a node and its respective class token validates the efficacy
of similarity measures inherited from the contrastive pretext for the
downstream node classification task. This observation further reinforces the
rationale behind our node classification approach using the virtual class
prompt, _i.e.,_ a node is labeled as the class that its embedding is most
closely aligned with.
(a) DBLP
(b) IMDB
Figure 5. Comparison of training losses over epochs between HetGPT and its
fine-tuning counterpart on DBLP and IMDB.
(a) ACM
(b) DBLP
Figure 6. Visualization of the learned node tokens and class tokens in virtual
class prompt on ACM and DBLP.
## 6\. Conclusion
In this paper, we propose HetGPT, a general post-training prompting framework
to improve the node classification performance of pre-trained heterogeneous
graph neural networks. Recognizing the prevalent issue of misalignment between
the objectives of pretext and downstream tasks, we craft a novel prompting
function that integrates a virtual class prompt and a heterogeneous feature
prompt. Furthermore, our framework incorporates a multi-view neighborhood
aggregation mechanism to capture the complex neighborhood structure in
heterogeneous graphs. Extensive experiments on three benchmark datasets
demonstrate the effectiveness of HetGPT. For future work, we are interested in
exploring the potential of prompting methods in tackling the class-imbalance
problem on graphs or broadening the applicability of our framework to diverse
graph tasks, such as link prediction and graph classification.
## References
* (1)
* Bahng et al. (2022) Hyojin Bahng, Ali Jahanian, Swami Sankaranarayanan, and Phillip Isola. 2022. Exploring visual prompts for adapting large-scale models. _arXiv:2203.17274_ (2022).
* Brock et al. (2016) Andrew Brock, Theodore Lim, James M Ritchie, and Nick Weston. 2016. Neural photo editing with introspective adversarial networks. _arXiv:1609.07093_ (2016).
* Cao et al. (2021) Yuwei Cao, Hao Peng, Jia Wu, Yingtong Dou, Jianxin Li, and Philip S Yu. 2021\. Knowledge-preserving incremental social event detection via heterogeneous gnns. In _TheWebConf_.
* Fan et al. (2019) Wenqi Fan, Yao Ma, Qing Li, Yuan He, Eric Zhao, Jiliang Tang, and Dawei Yin. 2019\. Graph neural networks for social recommendation. In _TheWebConf_.
* Fang et al. (2022a) Taoran Fang, Yunchao Zhang, Yang Yang, Chunping Wang, and Lei Chen. 2022a. Universal Prompt Tuning for Graph Neural Networks. _arXiv:2209.15240_ (2022).
* Fang et al. (2022b) Yang Fang, Xiang Zhao, Yifan Chen, Weidong Xiao, and Maarten de Rijke. 2022b. PF-HIN: Pre-Training for Heterogeneous Information Networks. _IEEE Transactions on Knowledge and Data Engineering_ (2022).
* Fu et al. (2020) Xinyu Fu, Jiani Zhang, Ziqiao Meng, and Irwin King. 2020\. Magnn: Metapath aggregated graph neural network for heterogeneous graph embedding. In _TheWebConf_.
* Guo et al. (2023) Zhichun Guo, Kehan Guo, Bozhao Nan, Yijun Tian, Roshni G Iyer, Yihong Ma, Olaf Wiest, Xiangliang Zhang, Wei Wang, Chuxu Zhang, et al. 2023\. Graph-based molecular representation learning. In _IJCAI_.
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015\. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In _ICCV_.
* Hu et al. (2020b) Ziniu Hu, Yuxiao Dong, Kuansan Wang, Kai-Wei Chang, and Yizhou Sun. 2020b. Gpt-gnn: Generative pre-training of graph neural networks. In _KDD_.
* Hu et al. (2020a) Ziniu Hu, Yuxiao Dong, Kuansan Wang, and Yizhou Sun. 2020a. Heterogeneous graph transformer. In _TheWebConf_.
* Jia et al. (2022) Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, and Ser-Nam Lim. 2022. Visual Prompt Tuning. _arXiv:2203.12119_ (2022).
* Jiang et al. (2021a) Xunqiang Jiang, Tianrui Jia, Yuan Fang, Chuan Shi, Zhe Lin, and Hui Wang. 2021a. Pre-training on large-scale heterogeneous graph. In _KDD_.
* Jiang et al. (2021b) Xunqiang Jiang, Yuanfu Lu, Yuan Fang, and Chuan Shi. 2021b. Contrastive pre-training of GNNs on heterogeneous graphs. In _CIKM_.
* Jing et al. (2021) Baoyu Jing, Chanyoung Park, and Hanghang Tong. 2021\. Hdmi: High-order deep multiplex infomax. In _WWW_.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In _ICLR_.
* Liu et al. (2023b) Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023b. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. _Comput. Surveys_ (2023).
* Liu et al. (2022) Yixin Liu, Ming Jin, Shirui Pan, Chuan Zhou, Yu Zheng, Feng Xia, and S Yu Philip. 2022. Graph self-supervised learning: A survey. _IEEE Transactions on Knowledge and Data Engineering_ (2022).
* Liu et al. (2023a) Zemin Liu, Xingtong Yu, Yuan Fang, and Xinming Zhang. 2023a. GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks. In _WWW_.
* Lv et al. (2021) Qingsong Lv, Ming Ding, Qiang Liu, Yuxiang Chen, Wenzheng Feng, Siming He, Chang Zhou, Jianguo Jiang, Yuxiao Dong, and Jie Tang. 2021\. Are we really making much progress? revisiting, benchmarking and refining heterogeneous graph neural networks. In _KDD_.
* Ma et al. (2022) Yihong Ma, Patrick Gerard, Yijun Tian, Zhichun Guo, and Nitesh V Chawla. 2022. Hierarchical spatio-temporal graph neural networks for pandemic forecasting. In _CIKM_.
* Ma et al. (2023) Yihong Ma, Yijun Tian, Nuno Moniz, and Nitesh V Chawla. 2023\. Class-Imbalanced Learning on Graphs: A Survey. _arXiv:2304.04300_ (2023).
* Oord et al. (2018) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. _arXiv:1807.03748_ (2018).
* Park et al. (2020) Chanyoung Park, Donghyun Kim, Jiawei Han, and Hwanjo Yu. 2020\. Unsupervised attributed multiplex network embedding. In _AAAI_.
* Qi and Davison (2009) Xiaoguang Qi and Brian D Davison. 2009. Web page classification: Features and algorithms. _ACM computing surveys (CSUR)_ (2009).
* Ren et al. (2019) Yuxiang Ren, Bo Liu, Chao Huang, Peng Dai, Liefeng Bo, and Jiawei Zhang. 2019\. Heterogeneous deep graph infomax. _arXiv:1911.08538_ (2019).
* Sun et al. (2022) Mingchen Sun, Kaixiong Zhou, Xin He, Ying Wang, and Xin Wang. 2022. GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks. In _KDD_.
* Sun et al. (2023) Xiangguo Sun, Hong Cheng, Jia Li, Bo Liu, and Jihong Guan. 2023. All in One: Multi-Task Prompting for Graph Neural Networks. In _KDD_.
* Tan et al. (2023) Zhen Tan, Ruocheng Guo, Kaize Ding, and Huan Liu. 2023\. Virtual Node Tuning for Few-shot Node Classification. In _KDD_.
* Tian et al. (2023) Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, and Nitesh V Chawla. 2023. Heterogeneous graph masked autoencoders. In _AAAI_.
* Tian et al. (2022) Yijun Tian, Chuxu Zhang, Zhichun Guo, Yihong Ma, Ronald Metoyer, and Nitesh V Chawla. 2022\. Recipe2vec: Multi-modal recipe representation learning with graph neural networks. In _IJCAI_.
* Wang et al. (2021c) Daheng Wang, Zhihan Zhang, Yihong Ma, Tong Zhao, Tianwen Jiang, Nitesh Chawla, and Meng Jiang. 2021c. Modeling co-evolution of attributed and structural information in graph sequence. _IEEE Transactions on Knowledge and Data Engineering_ (2021).
* Wang et al. (2020) Daheng Wang, Zhihan Zhang, Yihong Ma, Tong Zhao, Tianwen Jiang, Nitesh V Chawla, and Meng Jiang. 2020. Learning attribute-structure co-evolutions in dynamic graphs. In _DLG_.
* Wang et al. (2021b) Liyuan Wang, Mingtian Zhang, Zhongfan Jia, Qian Li, Chenglong Bao, Kaisheng Ma, Jun Zhu, and Yi Zhong. 2021b. Afec: Active forgetting of negative transfer in continual learning. In _NeurIPS_.
* Wang et al. (2022) Xiao Wang, Deyu Bo, Chuan Shi, Shaohua Fan, Yanfang Ye, and S Yu Philip. 2022\. A survey on heterogeneous graph embedding: methods, techniques, applications and sources. _IEEE Transactions on Big Data_ (2022).
* Wang et al. (2019) Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Yanfang Ye, Peng Cui, and Philip S Yu. 2019. Heterogeneous graph attention network. In _TheWebConf_.
* Wang et al. (2021a) Xiao Wang, Nian Liu, Hui Han, and Chuan Shi. 2021a. Self-supervised heterogeneous graph neural network with co-contrastive learning. In _KDD_.
* Wen et al. (2023) Zhihao Wen, Yuan Fang, Yihan Liu, Yang Guo, and Shuji Hao. 2023. Voucher Abuse Detection with Prompt-based Fine-tuning on Graph Neural Networks. In _CIKM_.
* Xie et al. (2022) Yaochen Xie, Zhao Xu, Jingtun Zhang, Zhengyang Wang, and Shuiwang Ji. 2022. Self-supervised learning of graph neural networks: A unified review. _IEEE transactions on pattern analysis and machine intelligence_ (2022).
* Yang et al. (2020) Carl Yang, Yuxin Xiao, Yu Zhang, Yizhou Sun, and Jiawei Han. 2020. Heterogeneous network representation learning: A unified framework with survey and benchmark. _IEEE Transactions on Knowledge and Data Engineering_ (2020).
* Yang et al. (2022) Yaming Yang, Ziyu Guan, Zhe Wang, Wei Zhao, Cai Xu, Weigang Lu, and Jianbin Huang. 2022\. Self-supervised heterogeneous graph pre-training based on structural clustering. In _NeurIPS_.
* Zhang et al. (2019) Chuxu Zhang, Dongjin Song, Chao Huang, Ananthram Swami, and Nitesh V Chawla. 2019. Heterogeneous graph neural network. In _KDD_.
* Zhang et al. (2022) Wen Zhang, Lingfei Deng, Lei Zhang, and Dongrui Wu. 2022\. A survey on negative transfer. _IEEE/CAA Journal of Automatica Sinica_ (2022).
* Zhao et al. (2020) Jianan Zhao, Xiao Wang, Chuan Shi, Zekuan Liu, and Yanfang Ye. 2020. Network schema preserving heterogeneous information network embedding. In _IJCAI_.
# The Quadratic Wasserstein Metric With Squaring Scaling For Seismic Velocity
Inversion
Zhengyang Li Department of Mathematical Sciences, Tsinghua University,
Beijing, China 100084. Yijia Tang School of Mathematical Sciences, Shanghai
Jiao Tong University, Shanghai, China 200240. Jing Chen Hao Wu
###### Abstract
The quadratic Wasserstein metric has shown its power in measuring the
difference between probability densities: it endows the optimization objective
function with better convexity and is insensitive to data noise.
Nevertheless, how to make seismic signals suitable for comparison under the
quadratic Wasserstein metric remains an important question. The squaring
scaling is worth exploring since it guarantees convexity with respect to data
shift. However, as mentioned in [Commun. Inf. Syst., 2019, 19:95-145], the
squaring scaling may lose uniqueness and introduce additional local minima in the
misfit function. In our previous work [J. Comput. Phys., 2018, 373:188-209],
the quadratic Wasserstein metric with squaring scaling was successfully
applied to the earthquake location problem. But it only discussed the inverse
problem with few degrees of freedom. In this work, we will present a more in-
depth study on the combination of squaring scaling technique and the quadratic
Wasserstein metric. By discarding some inapplicable data, picking seismic
phases, and developing a new normalization method, we successfully invert the
seismic velocity structure based on the squaring scaling technique and the
quadratic Wasserstein metric. The numerical experiments suggest that this
newly proposed method is an efficient approach to obtain more accurate
inversion results.
Keywords: Optimal Transport, Wasserstein metric, Waveform inversion, Seismic
velocity inversion, Squaring Scaling.
∗ Corresponding author.
## 1 Introduction
Full waveform inversion (FWI) has been receiving wide attention in recent
years [9, 14, 22, 32, 36, 37] due to its high-resolution imaging of
geophysical properties. Generally, it can be formulated as a PDE constrained
optimization problem in mathematics, which consists of two parts [31]: the
forward modeling of seismic wavefield, and the optimization problem searching
for suitable model parameters to minimize the mismatch between the predicted
and observed seismic signals. In previous decades, limited by the computing
power, most tomography methods were based on the ray theory, which ignores
finite frequency phenomena such as wave-front healing and scattering [15], and
thus yields low-resolution inversion results. With the rapid development of
computing power and the forward modeling method, more accurate synthetic
signals could be computed by directly simulating seismic wave propagation.
This makes it possible to obtain high-resolution results by FWI, which could
provide important information for seismic hazard assessment [28] and
exploration geophysics [31].
FWI based on the $L^{2}$ metric is the simplest and most common formulation. However,
it suffers from the well-known cycle skipping problem [31] that the solution
may be trapped in the local minima during the iteration, leading to incorrect
inversion results. The quadratic Wasserstein metric ($W_{2}$) from the Optimal
transport (OT) theory [29, 30] seems to be a solution to the above problem. It
measures the difference between two probability distributions by minimizing
the transport cost from one distribution to the other, which is insensitive to
the data noise and keeps convexity to the data shift, dilation, and partial
amplitude change [10, 11]. The number of local minima of the FWI model based
on this metric is therefore significantly reduced. Thus, it is favored by
researchers and has been widely applied to earthquake location and seismic
tomography [5, 10, 11, 12, 13, 36, 37, 38]. Applying the quadratic
Wasserstein metric to the seismic inverse problem, however, raises a critical
issue: the metric compares normalized, nonnegative data, whereas seismic
signals do not satisfy these requirements.
Thus, various techniques are developed to deal with this problem, e.g., linear
scaling [36], squaring scaling [5], and exponential scaling [26]. Among all
these methods, squaring scaling is considered to maintain the convexity of the
optimization objective function. However, this method may lose uniqueness and
introduce additional local minima. This may explain why the combination of
squaring scaling and the quadratic Wasserstein metric has not yet been applied to the
velocity inversion problem. Moreover, there are also some other metrics based
on the OT theory, e.g., the WFR metric and the KR norm, which have been
successfully applied to the seismic inverse problem [23, 24, 38].
In our previous work [5], the quadratic Wasserstein metric with squaring
scaling is successfully applied to the earthquake location problem. The
squaring scaling ensures the differentiability and nice convexity property,
leading to a large convergent domain and accurate inversion results. However,
it is still a challenging problem for velocity inversion with a large number
of degrees of freedom since the squaring scaling may lose uniqueness and
result in additional local minima in the misfit function [12]. In this work,
we would like to provide a comprehensive approach to the seismic velocity
inversion based on squaring scaling and the quadratic Wasserstein metric. The
key ingredient of this work consists of two parts. First, for seismic velocity
inversion, the fundamental geophysical characteristic of seismic signals
should be taken into account. For example, certain erroneous seismic signals
and multi-arrival seismic signals, which have destructive effects on the
inverse process, should be deleted in the preprocessing stage. Moreover, a
more accurate optimal transport map can be obtained by picking appropriate
seismic phases. Secondly, a new normalization method is developed to obtain a
more accurate optimal transport map for the squared seismic signals. From
this, we can calculate better sensitivity kernels, which are more consistent
with physical intuition.
The rest of the paper is organized as follows. In Section 2, we briefly review
the mathematical formula of seismic velocity inversion and the basics of the
quadratic Wasserstein metric. We discuss important issues in the inversion and
present detailed implementations in Section 3. Meanwhile, we illustrate the
necessity of our method by some toy models. In Section 4, the numerical
experiments are provided to demonstrate the effectiveness and efficiency of
our method. Finally, we conclude the paper in Section 5.
## 2 The quadratic Wasserstein metric and seismic velocity inversion
We review the full waveform seismic tomography and the adjoint state method in
this section. The mathematical formulation of seismic velocity inversion can
be written as the PDE constrained optimization problem,
$c_{T}(\boldsymbol{x})=\operatorname*{argmin}_{c(\boldsymbol{x})}\Xi(c(\boldsymbol{x})),\quad\Xi(c(\boldsymbol{x}))=\sum_{i=1}^{N}\sum_{j=1}^{M}\chi_{ij}(c(\boldsymbol{x})),$
(2.1)
where the index $(i,j)$ indicates the source-receiver pair. We use $N$ seismic
events and consider $M$ seismic signals for each event. Correspondingly,
the misfit function $\chi_{ij}$ is defined as
$\chi_{ij}(c(\boldsymbol{x}))=\mathcal{D}(s_{ij}(t;c(\boldsymbol{x})),d_{ij}(t)).$
(2.2)
Here, $\mathcal{D}$ is the distance function that measures the difference
between the real seismic signal $d_{ij}(t)$ and the synthetic signal
$s_{ij}(t;c(\boldsymbol{x}))$, which can be regarded as the solution
$d_{ij}(t)=u_{i}(\boldsymbol{\eta}_{j},t;c_{T}(\boldsymbol{x})),\quad
s_{ij}(t;c(\boldsymbol{x}))=u_{i}(\boldsymbol{\eta}_{j},t;c(\boldsymbol{x})),$
(2.3)
of the following acoustic wave equation with initial and boundary conditions
$\displaystyle\frac{\partial^{2}u_{i}(\boldsymbol{x},t;c(\boldsymbol{x}))}{\partial
t^{2}}=\nabla\cdot\left(c^{2}(\boldsymbol{x})\nabla
u_{i}(\boldsymbol{x},t;c(\boldsymbol{x}))\right)+R(t-\tau_{i})\delta(\boldsymbol{x}-\boldsymbol{\xi}_{i}),\quad\boldsymbol{x}\in\Omega,t>0,$
(2.4) $\displaystyle u_{i}(\boldsymbol{x},0;c(\boldsymbol{x}))=\frac{\partial
u_{i}(\boldsymbol{x},0;c(\boldsymbol{x}))}{\partial
t}=0,\quad\boldsymbol{x}\in\Omega,$ (2.5)
$\displaystyle\boldsymbol{n}\cdot\left(c^{2}(\boldsymbol{x})\nabla
u_{i}(\boldsymbol{x},t;c(\boldsymbol{x}))\right)=0,\quad\boldsymbol{x}\in\partial\Omega,t>0.$
(2.6)
Here, the locations of the earthquake and the receiver station are
$\boldsymbol{\xi}_{i}$ and $\boldsymbol{\eta}_{j}$, respectively, and the origin
time of the earthquake is $\tau_{i}$. The seismic rupture is modeled by the
point source $\delta(\boldsymbol{x}-\boldsymbol{\xi}_{i})$ since its scale is
much smaller than the scale of seismic wave propagation [1, 20]. The source
time function is simplified as the Ricker wavelet
$R(t)=A\left(1-2\pi^{2}f_{0}^{2}t^{2}\right)e^{-\pi^{2}f_{0}^{2}t^{2}},$ (2.7)
where $f_{0}$ denotes the dominant frequency, and $A$ is the normalization
factor. The outward unit normal vector to the simulation domain boundary
$\partial\Omega$ is $\boldsymbol{n}$. In practice, the perfectly matched layer
absorbing boundary condition [17] is used to absorb waves leaving the
computational domain. In this section, we use the reflection boundary
condition (2.6) to simplify the derivation.
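For concreteness, a minimal Python sketch of the Ricker wavelet (2.7) is given below; the amplitude, the dominant frequency, and the centring of the time axis in the example are illustrative choices of ours, not values prescribed by the paper.

```python
import numpy as np

def ricker(t, f0=2.0, amplitude=1.0):
    """Ricker wavelet R(t) of equation (2.7) with dominant frequency f0 (Hz)."""
    arg = (np.pi * f0 * t) ** 2
    return amplitude * (1.0 - 2.0 * arg) * np.exp(-arg)

# Example: sample the wavelet with the time step used in Section 4 (dt = 0.01 s);
# centring the time axis at zero is purely for illustration.
t = np.arange(-1.0, 1.0, 0.01)
wavelet = ricker(t, f0=2.0)
```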
###### Remark 1.
Here, we adopt the trace-by-trace strategy [36] to apply the 1-D quadratic
Wasserstein metric to the waveform inversion. Since receiver stations are
located far from each other on the geological scale, this approach is more in
line with physical reality and is also mathematically simpler.
### 2.1 The adjoint method
Below, we briefly review the adjoint method [11, 25] for solving the
optimization problem (2.1)-(2.7). A small perturbation $\delta c$ of the
seismic velocity structure causes a perturbation of the wavefield
$\delta
u_{i}(\boldsymbol{x},t;c(\boldsymbol{x}))=u_{i}(\boldsymbol{x},t;c+\delta
c)-u_{i}(\boldsymbol{x},t;c).$ (2.8)
For the sake of brevity, we will omit the parameter $c(\boldsymbol{x})$ of the
wavefield and the signals in the following. The perturbation $\delta
u_{i}(\boldsymbol{x},t)$ satisfies the equations
$\displaystyle\frac{\partial^{2}\delta u_{i}(\boldsymbol{x},t)}{\partial
t^{2}}=\nabla\cdot\left(c^{2}(\boldsymbol{x})\nabla\delta
u_{i}(\boldsymbol{x},t)\right)$ (2.9) $\displaystyle\quad\quad\quad\quad\quad\
\ +\nabla\cdot\left(\left(2c(\boldsymbol{x})+\delta
c(\boldsymbol{x})\right)\delta c(\boldsymbol{x})\nabla(u_{i}+\delta
u_{i})(\boldsymbol{x},t)\right),\quad\boldsymbol{x}\in\Omega,$
$\displaystyle\delta u_{i}(\boldsymbol{x},0)=\frac{\partial\delta
u_{i}(\boldsymbol{x},0)}{\partial t}=0,\quad\boldsymbol{x}\in\Omega,$ (2.10)
$\displaystyle\boldsymbol{n}\cdot\left(c^{2}(\boldsymbol{x})\nabla\delta
u_{i}(\boldsymbol{x},t)+\left(2c(\boldsymbol{x})+\delta
c(\boldsymbol{x})\right)\delta c(\boldsymbol{x})\nabla(u_{i}+\delta
u_{i})(\boldsymbol{x},t)\right)=0,\quad\boldsymbol{x}\in\partial\Omega.$
(2.11)
Multiplying equation (2.9) by a test function $w_{i}(\boldsymbol{x},t)$,
integrating over $\Omega\times[0,t_{f}]$ for sufficiently large time $t_{f}$,
and using integration by parts yields
$\int_{0}^{t_{f}}\int_{\Omega}\frac{\partial^{2}w_{i}}{\partial t^{2}}\delta
u_{i}\mathrm{d}\boldsymbol{x}\mathrm{d}t-\int_{\Omega}\left.\frac{\partial
w_{i}}{\partial t}\delta
u_{i}\right|_{t=t_{f}}\mathrm{d}\boldsymbol{x}+\int_{\Omega}\left.w_{i}\frac{\partial\delta
u_{i}}{\partial t}\right|_{t=t_{f}}\mathrm{d}\boldsymbol{x}\\\
=\int_{0}^{t_{f}}\int_{\Omega}\nabla\cdot(c^{2}\nabla w_{i})\delta
u_{i}\mathrm{d}\boldsymbol{x}\mathrm{d}t-\int_{0}^{t_{f}}\int_{\partial\Omega}\boldsymbol{n}\cdot(c^{2}\nabla
w_{i})\delta
u_{i}\mathrm{d}\zeta\mathrm{d}t-\int_{0}^{t_{f}}\int_{\Omega}\left(2c+\delta
c\right)\delta c\nabla w_{i}\cdot\nabla(u_{i}+\delta
u_{i})\mathrm{d}\boldsymbol{x}\mathrm{d}t\\\
\approx\int_{0}^{t_{f}}\int_{\Omega}\nabla\cdot(c^{2}\nabla w_{i})\delta
u_{i}\mathrm{d}\boldsymbol{x}\mathrm{d}t-\int_{0}^{t_{f}}\int_{\partial\Omega}\boldsymbol{n}\cdot(c^{2}\nabla
w_{i})\delta
u_{i}\mathrm{d}\zeta\mathrm{d}t-\int_{0}^{t_{f}}\int_{\Omega}2c\delta c\nabla
w_{i}\cdot\nabla u_{i}\mathrm{d}\boldsymbol{x}\mathrm{d}t,$ (2.12)
where the higher-order terms are ignored in the last step since we can
naturally assume that $\left\|\delta u_{i}\right\|\ll\left\|u_{i}\right\|$ and
$\left\|\delta c(\boldsymbol{x})\right\|\ll\left\|c(\boldsymbol{x})\right\|$.
The perturbation of the misfit $\delta\chi_{ij}$ results from the wave speed
perturbation $\delta c(\boldsymbol{x})$ and reads
$\delta\chi_{ij}(c)=\mathcal{D}\big{(}s_{ij}(t)+\delta
s_{ij}(t),d_{ij}(t)\big{)}-\mathcal{D}\big{(}s_{ij}(t),d_{ij}(t)\big{)}\\\
\approx\langle Q_{ij}(t),\;\delta
s_{ij}(t)\rangle=\int_{0}^{t_{f}}Q_{ij}(t)\delta s_{ij}(t)\mathrm{d}t.$
Here, $Q_{ij}(t)$ indicates the Fréchet gradient of the distance $\mathcal{D}$
with respect to the synthetic data $s_{ij}(t)$:
$Q_{ij}(t)=\nabla_{s}\mathcal{D}(s,d)\big{|}_{s=s_{ij}(t),d=d_{ij}(t)},$
(2.13)
which will be specified later. Let $w_{i}(\boldsymbol{x},t)$ satisfy the
adjoint equation
$\displaystyle\frac{\partial^{2}w_{i}(\boldsymbol{x},t)}{\partial
t^{2}}=\nabla\cdot\left(c^{2}(\boldsymbol{x})\nabla
w_{i}(\boldsymbol{x},t)\right)+\sum_{j=1}^{M}Q_{ij}(t)\delta(\boldsymbol{x}-\boldsymbol{\eta}_{j}),\quad\boldsymbol{x}\in\Omega,$
(2.14) $\displaystyle w_{i}(\boldsymbol{x},t_{f})=\frac{\partial
w_{i}(\boldsymbol{x},t_{f})}{\partial t}=0,\quad\boldsymbol{x}\in\Omega,$
(2.15) $\displaystyle\boldsymbol{n}\cdot\left(c^{2}(\boldsymbol{x})\nabla
w_{i}(\boldsymbol{x},t)\right)=0,\quad\boldsymbol{x}\in\partial\Omega.$ (2.16)
Multiplying equation (2.14) by $\delta u_{i}(\boldsymbol{x},t)$, integrating
over $\Omega\times[0,t_{f}]$, and subtracting (2.12), we obtain
$\sum_{j=1}^{M}\int_{0}^{t_{f}}Q_{ij}(t)\delta
s_{ij}(t)\mathrm{d}t=\sum_{j=1}^{M}\int_{0}^{t_{f}}\int_{\Omega}Q_{ij}(t)\delta(\boldsymbol{x}-\boldsymbol{\eta}_{j})\delta
u_{i}(\boldsymbol{x},t)\mathrm{d}t\\\
=-\int_{0}^{t_{f}}\int_{\Omega}2c(\boldsymbol{x})\delta
c(\boldsymbol{x})\nabla w_{i}(\boldsymbol{x},t)\cdot\nabla
u_{i}(\boldsymbol{x},t)\mathrm{d}\boldsymbol{x}\mathrm{d}t.$
The linear relationship between $\delta\Xi$ and $\delta c(\boldsymbol{x})$ is
established as
$\delta\Xi(c)=\sum_{i=1}^{N}\sum_{j=1}^{M}\delta\chi_{ij}(c)=\sum_{i=1}^{N}\int_{\Omega}K_{i}(\boldsymbol{x})\delta
c(\boldsymbol{x})\mathrm{d}\boldsymbol{x},$ (2.17)
where the sensitivity kernel of the $i$-th source for $c(\boldsymbol{x})$ is
defined as
$K_{i}(\boldsymbol{x})=-\int_{0}^{t_{f}}2c(\boldsymbol{x})\nabla
w_{i}(\boldsymbol{x},t)\cdot\nabla u_{i}(\boldsymbol{x},t)\mathrm{d}t.$ (2.18)
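To make the kernel assembly concrete, the Python sketch below accumulates (2.18) from stored snapshots of the forward and adjoint wavefields. It is only a schematic illustration under the assumption that both wavefields are available on the same regular grid and at matching time samples; in practice the adjoint field is computed backwards in time, and the random arrays in the example are placeholders for actual simulation output.

```python
import numpy as np

def sensitivity_kernel(u_snaps, w_snaps, c, dx, dt):
    """Accumulate K(x) = -int_0^{t_f} 2 c grad(w) . grad(u) dt, cf. (2.18).

    u_snaps, w_snaps : arrays of shape (n_t, nz, nx) with forward and adjoint
                       wavefield snapshots at matching time samples.
    c                : velocity model of shape (nz, nx).
    """
    kernel = np.zeros_like(c)
    for u, w in zip(u_snaps, w_snaps):
        uz, ux = np.gradient(u, dx)   # spatial gradients of the forward field
        wz, wx = np.gradient(w, dx)   # spatial gradients of the adjoint field
        kernel -= 2.0 * c * (uz * wz + ux * wx) * dt
    return kernel

# Toy call with random snapshots standing in for real wavefields.
rng = np.random.default_rng(0)
u_snaps = rng.standard_normal((10, 30, 40))
w_snaps = rng.standard_normal((10, 30, 40))
c = np.full((30, 40), 5.8)  # km/s, crustal velocity used in Section 4
K = sensitivity_kernel(u_snaps, w_snaps, c, dx=2.0, dt=0.01)
```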
### 2.2 The quadratic Wasserstein metric
As discussed at the beginning of this section, the synthetic signal
$s_{ij}(t)$ and the real seismic signal $d_{ij}(t)$ are time series. The
quadratic Wasserstein metric between 1-D probability density functions
has an analytic form [5, 29, 30, 36], i.e.,
$W_{2}^{2}(f,g)=\int_{0}^{t_{f}}\left|t-T(t)\right|^{2}f(t)\mathrm{d}t,\quad
T(t)=G^{-1}\left(F(t)\right).$ (2.19)
Here $f(t),\;g(t)$ are probability density functions defined on $[0,t_{f}]$
and $F(t),\;G(t)$ are the corresponding cumulative distribution functions on $[0,t_{f}]$,
$F(t)=\int_{0}^{t}f(\tau)\mathrm{d}\tau,\quad
G(t)=\int_{0}^{t}g(\tau)\mathrm{d}\tau.$
Note that the seismic signals are not probability density functions. We need
to transform them into nonnegative and normalized functions for the quadratic
Wasserstein metric comparison. In other words, the misfit function defined in
(2.2) can be written as
$\chi_{ij}=\mathcal{D}(s_{ij}(t),d_{ij}(t))=W^{2}_{2}(\mathcal{P}(s_{ij}(t)),\mathcal{P}(d_{ij}(t))).$
(2.20)
The operator $\mathcal{P}$ converts the seismic signals into probability
density functions, including processing them into nonnegative and normalized
time series. We will discuss this in detail later. Thus, we can
obtain the expression of the Fréchet gradient [5, 36] mentioned in (2.13),
$\nabla_{s}\mathcal{D}(s,d)=\nabla_{f}W^{2}_{2}(f,g)|_{f=\mathcal{P}(s),g=\mathcal{P}(d)}\cdot\nabla_{s}\mathcal{P}(s)=\left\langle
2\int_{0}^{t}\left(\tau-T(\tau)\right)\mathrm{d}\tau,\nabla_{s}\mathcal{P}(s)\right\rangle.$
(2.21)
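As a sketch of how (2.19) and (2.21) can be evaluated numerically, the Python snippet below approximates the 1-D quadratic Wasserstein distance and the running integral entering the Fréchet gradient, replacing the integrals with Riemann sums and inverting the cumulative distribution by linear interpolation. It assumes the inputs are already nonnegative, normalized densities on a uniform time grid (the operator $\mathcal{P}$ of Section 3 guarantees this); the function names are ours, not part of any established library.

```python
import numpy as np

def w2_squared(f, g, t):
    """Discrete approximation of W_2^2(f, g) in (2.19) on a uniform grid t."""
    dt = t[1] - t[0]
    F = np.cumsum(f) * dt                  # cumulative distribution of f
    G = np.cumsum(g) * dt                  # cumulative distribution of g
    T = np.interp(F, G, t)                 # optimal map T(t) = G^{-1}(F(t))
    return np.sum((t - T) ** 2 * f) * dt

def w2_gradient_integral(f, g, t):
    """Running integral 2 * int_0^t (tau - T(tau)) dtau entering (2.21)."""
    dt = t[1] - t[0]
    F = np.cumsum(f) * dt
    G = np.cumsum(g) * dt
    T = np.interp(F, G, t)
    return 2.0 * np.cumsum(t - T) * dt
```

The interpolation step requires $G$ to be strictly increasing, which is exactly the role of the small positive shift $\varepsilon$ introduced in Section 3.2.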
## 3 Data preprocessing and new normalization
In this section, we discuss two important issues when carrying out seismic
velocity inversion. First, when using real data for inversion, we do not use
all the data in each iteration. Some data, e.g., records in which the direct
wave and the reflected wave arrive simultaneously, are difficult to use and
can be discarded. In order to avoid the mismatch between different types of
seismic phases, we only retain the direct waves in the real seismic signals
and the synthetic signals. This processing procedure ensures reasonable
optimal transport maps and accurate sensitivity kernels. Second, we will
carefully design the operator $\mathcal{P}$ to get a better OT map $T$. In the
following, we will present detailed implementations and discussions.
### 3.1 Selecting source-receiver pairs and picking seismic phases
The complex subsurface structures, such as the velocity discontinuity
interfaces, may lead to different types of seismic phases, including the
direct wave and the reflected wave. These seismic waves propagate along
different wave paths and carry distinct underground structure information.
Sometimes, the direct wave and the reflected wave arrive simultaneously and
cannot be distinguished, a situation known as the multipath phenomenon [27]. It is not
trivial to extract robust information from this kind of constraint. In
practice, these source-receiver pairs are always manually excluded to avoid
interference caused by unreliable constraints [3, 16]. We will also use this
strategy in this study.
From the perspective of signal processing, different phases of the real
seismic signal and the synthetic signal should be matched separately. If there
is a matching error, for example, part of the direct wave of the synthetic
signal is matched with part of the reflected wave of the real seismic signal,
it would lead to the optimal transport map being inconsistent with basic
seismic knowledge and further result in the artifacts in the sensitivity
kernel [10]. In particular, for the squaring scaling and quadratic Wasserstein
metric based seismic velocity inversion, this problem is more prominent. The
reason is that the quadratic Wasserstein metric requires mass conservation and
a global match. When the masses of the real seismic signal and the synthetic
signal are unbalanced within the same phase, mass is transported between
different phases, making the OT map inconsistent with seismic reality.
Moreover, the squaring scaling further magnifies the problem. The remedy is
straightforward. By picking the
phases, we only match the same phases of the real seismic signals and the
synthetic signals. This is a common strategy in seismic inversion [21, 6], and
it can be achieved simply by calculating the arrival time of the direct phase
and the reflected phase [7, 33].
Figure 1: Illustration of the two-layer model. Left: the real seismic velocity
model with a high-velocity anomaly; Right: the initial velocity model. The
green inverted triangles indicate the receiver stations and the white stars
indicate the earthquakes. The specific source-receiver pair is highlighted by
the black star and inverted triangle. The cyan and tan dashed lines are the
direct wave path and the reflected wave path, respectively.
Next, we explain the necessity of the above-mentioned data preprocessing
method. The initial and real seismic velocity models are shown in Figure 1,
and the parameter settings can be found in Section 4.1. The main goal is to
detect the high-velocity anomaly above the Moho discontinuity.
In both the initial and the real seismic velocity models, there are at least two paths
from the earthquake hypocenter to the receiver station: the direct wave (cyan
dashed lines) and the reflected wave (tan dashed lines). In the real seismic
velocity model, the wave amplitude of the direct wave signal is slightly
smaller since it partially reflects when passing through the high-velocity
anomaly. On the other hand, the reflected wave signal should be the same since
the velocity structure on the reflected wave path is the same in the initial
and real seismic velocity models, see Figure 1 for illustration.
Figure 2: Illustration of the Optimal Transport map between the real seismic
signal and synthetic signal (left) and the sensitivity kernel (right). The
mass transportation from the direct wave of the synthetic signal to the
reflected wave of the real seismic signal (within the green box of the upper
left subgraph) will cause artifacts in the sensitivity kernel, which arise
around the reflected wave path (the blue dashed lines of the upper right
subgraph). In the lower subgraphs, we can obtain the satisfactory OT map and
sensitivity kernel since only direct waves are picked.
The above difference between the real seismic signal and the synthetic signal
is further magnified by the squaring scaling. It leads to unreasonable mass
transportation from the direct wave of the synthetic signal to the reflected
wave of the real seismic signal (upper left subgraph of Figure 2). Therefore,
there will be artifacts in the sensitivity kernel $K_{i}(\boldsymbol{x})$, as
we illustrate in the upper right subgraph of Figure 2. On the other hand, if
we only consider the direct waves for inversion, the above-mentioned
difficulties will be easily solved, as we illustrate in the lower subgraphs of
Figure 2.
###### Remark 2.
In fact, the reflected wave signals are also important to constrain the
underground velocity structures [16]. The reflection phases can also be
similarly picked, processed, and used for inversion by our approach. However,
the utilization of the reflected wave is not trivial, and more technical
details are required in practice [2, 35, 39]. Thus, we will not discuss the
issues of the reflected wave in the following sections.
### 3.2 New normalization method
As is well known, the quadratic Wasserstein metric measures the difference
between two probability density functions, which is not directly suitable for
seismic signals. Thus, some processing procedures, i.e., choosing an
appropriate operator $\mathcal{P}$ in (2.20) are required to convert the
seismic signals into probability density functions. Several different
approaches, e.g., linear scaling [36], squaring scaling [5], and exponential
scaling [26], have been proposed to address this issue. Among these methods,
the squaring scaling preserves convexity particularly well and merits further
discussion.
The normalization operator with squaring scaling consists of two ingredients:
squaring the seismic signal to ensure non-negativity and normalization to
guarantee the same mass. A natural approach is
$\mathcal{P}_{1}(s(t))=\frac{s^{2}(t)}{\left\|s^{2}(t)\right\|},$ (3.1)
in which
$\left\|s(t)\right\|=\int_{0}^{t_{f}}s(t)\mathrm{d}t.$
Substituting the above formula into equation (2.20), the misfit function takes
the form
$\chi=\mathcal{D}(s(t),d(t))=W^{2}_{2}\left(\frac{s^{2}(t)}{\left\|s^{2}(t)\right\|},\frac{d^{2}(t)}{\left\|d^{2}(t)\right\|}\right).$
Here the subscript indices $i$ and $j$ are dropped for simplicity. According
to the discussions in Section 2.2, we need to compute the inverse of the
following cumulative distribution function
$G(t)=\int_{0}^{t}\frac{d^{2}(\tau)}{\left\|d^{2}(t)\right\|}\mathrm{d}\tau.$
However, $G^{-1}$ is not well defined when the real seismic signal $d(t)=0$
on some interval. Correspondingly, there will be difficulties in the
computation of the misfit function.
In order to avoid the above-mentioned problem, we can make a slight upward
shift on the squared signal before the normalization, i.e.,
$\mathcal{P}_{2}(s(t))=\frac{s^{2}(t)+\varepsilon}{\left\|s^{2}(t)+\varepsilon\right\|}.$
(3.2)
Here $\varepsilon>0$ is a small parameter. However, the misfit function in
(2.20) with this normalization operator
$\chi=\mathcal{D}(s(t),d(t))=W^{2}_{2}\left(\frac{s^{2}(t)+\varepsilon}{\left\|s^{2}(t)+\varepsilon\right\|},\frac{d^{2}(t)+\varepsilon}{\left\|d^{2}(t)+\varepsilon\right\|}\right)$
still leads to unreasonable mass transportation (green box in the upper left
subgraph of Figure 3) since the additional masses added to the two signals are not equal,
$\frac{\varepsilon}{\left\|s^{2}(t)+\varepsilon\right\|}\neq\frac{\varepsilon}{\left\|d^{2}(t)+\varepsilon\right\|}.$
This again leads to artifacts in the sensitivity kernel
$K_{i}(\boldsymbol{x})$ (upper right subgraph of Figure 3).
With a simple trick, we can solve the problem of unequal additional masses by
modifying the normalization operator as
$\mathcal{P}_{3}(s(t))=\frac{\frac{s^{2}(t)}{\left\|s^{2}(t)\right\|}+\varepsilon}{1+t_{f}\varepsilon}.$
(3.3)
We can clearly see that regardless of the values of $s(t)$ and $d(t)$, the
additional mass is $\frac{\varepsilon}{1+t_{f}\varepsilon}$. As a result, we
can avoid all the mentioned troubles. Both the OT map and the sensitivity
kernel are satisfactory, as we illustrate in the lower subgraphs of Figure 3.
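The following Python sketch illustrates the two normalization operators (3.2) and (3.3) with the integrals replaced by Riemann sums; the helper names are ours, and the toy trace in the usage example is purely illustrative.

```python
import numpy as np

def normalize_p2(s, eps, dt):
    """Operator P_2 of (3.2): shift the squared trace by eps, then normalize."""
    q = s ** 2 + eps
    return q / (np.sum(q) * dt)

def normalize_p3(s, eps, dt, t_f):
    """Operator P_3 of (3.3): normalize the squared trace first, then shift."""
    q = s ** 2
    q = q / (np.sum(q) * dt)
    return (q + eps) / (1.0 + t_f * eps)

# Usage: with P_3 the added mass is eps/(1 + t_f*eps), independent of the trace.
t_f, dt = 21.0, 0.01
t = np.arange(0.0, t_f, dt)
s = np.sin(2.0 * np.pi * t) * np.exp(-0.5 * (t - 10.0) ** 2)  # toy trace
p = normalize_p3(s, eps=1e-3, dt=dt, t_f=t_f)
assert abs(np.sum(p) * dt - 1.0) < 1e-8  # output integrates to one
```

Combined with the w2_squared sketch of Section 2.2, these operators reproduce the single-trace misfit (2.20).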
###### Remark 3.
In the squaring scaling, a parameter $\varepsilon$ is added to avoid the
singularity. It is noted that large $\varepsilon$ could destroy the convexity
property. On the other hand, there will still be numerical singularities when
$\varepsilon$ is small. In practice, $\varepsilon$ can be chosen from a relatively
large range, e.g., $10^{-4}\sim 10^{-2}$. In the following numerical
experiments, we select $\varepsilon=10^{-3}$.
Figure 3: Illustration of the Optimal Transport map between the real seismic
signal and synthetic signal (left) and the sensitivity kernel (right). In the
upper subgraphs, the newly created mass by the operator $\mathcal{P}_{2}$
could not be balanced, which leads to unreasonable mass transportation (upper
left) and artifacts in the sensitivity kernel (upper right). In the lower
subgraphs, we can obtain the satisfactory OT map and sensitivity kernel since
a new operator $\mathcal{P}_{3}$ is used.
## 4 Numerical Experiments
In this section, we present two numerical experiments to investigate the
validity of our inversion method based on the quadratic Wasserstein metric
with squaring scaling. We use the finite difference method to solve the
acoustic wave equation [8, 19, 36]. The perfectly matched layer boundary
condition [17] is applied to absorb the outgoing wave. The delta source
function is discretized by the piecewise polynomial given in [34]
$\delta_{h}(x)=\left\\{\begin{array}[]{ll}\frac{1}{h}\left(1-\frac{5}{4}\left|\frac{x}{h}\right|^{2}-\frac{35}{12}\left|\frac{x}{h}\right|^{3}+\frac{21}{4}\left|\frac{x}{h}\right|^{4}-\frac{25}{12}\left|\frac{x}{h}\right|^{5}\right),&\left|x\right|\leq
h,\\\
\frac{1}{h}\left(-4+\frac{75}{4}\left|\frac{x}{h}\right|-\frac{245}{8}\left|\frac{x}{h}\right|^{2}+\frac{545}{24}\left|\frac{x}{h}\right|^{3}-\frac{63}{8}\left|\frac{x}{h}\right|^{4}+\frac{25}{24}\left|\frac{x}{h}\right|^{5}\right),&h<\left|x\right|\leq
2h,\\\
\frac{1}{h}\left(18-\frac{153}{4}\left|\frac{x}{h}\right|+\frac{255}{8}\left|\frac{x}{h}\right|^{2}-\frac{313}{24}\left|\frac{x}{h}\right|^{3}+\frac{21}{8}\left|\frac{x}{h}\right|^{4}-\frac{5}{24}\left|\frac{x}{h}\right|^{5}\right),&2h<\left|x\right|\leq
3h,\\\ 0,&\left|x\right|>3h.\end{array}\right.$
Here $h$ is related to the mesh size.
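A direct Python transcription of this piecewise polynomial is given below; the sanity check at the end (the discrete weights on a grid of spacing $h$ summing to approximately one) is our own illustrative test rather than a result quoted from [34].

```python
import numpy as np

def delta_h(x, h):
    """Piecewise polynomial approximation of the delta function used above."""
    r = np.abs(np.asarray(x, dtype=float)) / h
    out = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    m3 = (r > 2.0) & (r <= 3.0)
    out[m1] = (1.0 - 5.0 / 4.0 * r[m1] ** 2 - 35.0 / 12.0 * r[m1] ** 3
               + 21.0 / 4.0 * r[m1] ** 4 - 25.0 / 12.0 * r[m1] ** 5)
    out[m2] = (-4.0 + 75.0 / 4.0 * r[m2] - 245.0 / 8.0 * r[m2] ** 2
               + 545.0 / 24.0 * r[m2] ** 3 - 63.0 / 8.0 * r[m2] ** 4
               + 25.0 / 24.0 * r[m2] ** 5)
    out[m3] = (18.0 - 153.0 / 4.0 * r[m3] + 255.0 / 8.0 * r[m3] ** 2
               - 313.0 / 24.0 * r[m3] ** 3 + 21.0 / 8.0 * r[m3] ** 4
               - 5.0 / 24.0 * r[m3] ** 5)
    return out / h

# Sanity check on the forward-simulation grid step of Section 4 (h = 0.2 km).
h = 0.2
x = np.arange(-4.0, 4.0 + h, h)
print(delta_h(x, h).sum() * h)  # expected to be close to 1
```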
### 4.1 The Two-Layer Model
Consider the two-layer model in a bounded domain
$\Omega=[0,80\;km]\times[0,60\;km]$, which consists of the crust, the
uppermost mantle, and the Moho discontinuity at a depth of $30\;km$, see
Figure 1 for illustration. The real seismic velocity model includes a $+15\%$
high-velocity anomaly in the crust, given by
$c_{T}(x,z)=\left\\{\begin{array}[]{ll}6.67\ km/s,&(x,z)\in[35\ km,45\
km]\times[10\ km,20\ km],\\\ 8.1\ km/s,&z>30\ km,\\\ 5.8\
km/s,&others.\end{array}\right.$
Our goal is to perform the seismic velocity inversion to detect this high-
velocity anomaly. Correspondingly, the initial velocity model without high-
velocity anomaly is as follows
$c_{0}(x,z)=\left\\{\begin{array}[]{ll}5.8\ km/s,&z\leq 30\ km,\\\ 8.1\
km/s,&z>30\ km.\end{array}\right.$
The computational time interval is $[0\;s,21\;s]$. The inversion grid step is
$2\;km$ and the number of degrees of freedom amounts to $1200$. The space and
time steps in the forward simulation are $0.2\ km$ and $0.01\ s$,
respectively. The dominant frequency of the earthquakes in (2.7) is
$f_{0}=2\;Hz$. We randomly choose $25$ receiver stations deployed on the
surface and $80$ earthquakes distributed in the study region.
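As a minimal sketch, the real and initial velocity models of this test can be assembled on the $2\;km$ inversion grid as follows; the cell-centred sampling is an illustrative assumption on our part.

```python
import numpy as np

dx = 2.0                                  # inversion grid step (km)
x = np.arange(0.0, 80.0, dx) + dx / 2.0   # cell centres (illustrative choice)
z = np.arange(0.0, 60.0, dx) + dx / 2.0
X, Z = np.meshgrid(x, z)

c_0 = np.where(Z > 30.0, 8.1, 5.8)        # initial model: crust over mantle
c_T = c_0.copy()                          # real model: add the +15% anomaly
c_T[(np.abs(X - 40.0) <= 5.0) & (np.abs(Z - 15.0) <= 5.0)] = 6.67
print(c_T.shape)                          # (30, 40): 1200 degrees of freedom
```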
We then perform the seismic velocity inversion by using the quadratic
Wasserstein metric with squaring scaling. As a comparison, the inversion is
also performed with the traditional $L^{2}$ metric.
To quantitatively compare the results of different methods, we also compute
the relative model error
$RME=\frac{\int_{\Omega}|c_{k}(\boldsymbol{x})-c_{T}(\boldsymbol{x})|^{2}\mathrm{d}\boldsymbol{x}}{\int_{\Omega}|c_{0}(\boldsymbol{x})-c_{T}(\boldsymbol{x})|^{2}\mathrm{d}\boldsymbol{x}},$
and the relative misfit function
$RMF=\frac{\Xi(c_{k}(\boldsymbol{x}))}{\Xi(c_{0}(\boldsymbol{x}))},$
where $c_{k}(\boldsymbol{x})$ indicates the velocity model in the $k$-th
iteration.
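A short Python helper for these two diagnostics is sketched below; the grid-cell area cancels in the ratio but is kept for clarity, and the function names are ours.

```python
import numpy as np

def relative_model_error(c_k, c_0, c_T, cell_area=1.0):
    """RME: squared model misfit of iterate c_k relative to the initial model."""
    num = np.sum(np.abs(c_k - c_T) ** 2) * cell_area
    den = np.sum(np.abs(c_0 - c_T) ** 2) * cell_area
    return num / den

def relative_misfit_function(xi_k, xi_0):
    """RMF: total misfit at iteration k divided by the initial misfit."""
    return xi_k / xi_0
```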
Figure 4: The inversion results of the two-layer model. Upper subgraphs: the
result for $L^{2}$ metric after 20 steps (upper left); the convergent
trajectories of the relative model error (upper middle); the convergent
trajectories of the relative misfit function (upper right). In the middle and
the lower subgraphs, we present the results for the $W_{2}$ metric with the
operators $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$, respectively. From left to
right, the inversion iteration steps are $20$, $40$, and $80$. All the results
are shown in the same color bar.
In Figure 4, we present the inversion results of $L^{2}$ metric and $W_{2}$
metric. Obviously, the $L^{2}$-based inversion could not capture the $+15\%$
high-velocity anomaly (upper left subgraph of Figure 4). Although the misfit
function decreases during the iterations (upper right subgraph of Figure 4),
the model error increases (upper middle subgraph of Figure 4).
In Figure 4 and Table 1, we also compare the inversion results of the
quadratic Wasserstein metric with different operators $\mathcal{P}_{2}$ and
$\mathcal{P}_{3}$. From the convergent trajectories (upper middle and upper
right subgraphs of Figure 4), we can see the relative model error and the
relative misfit function of the operator $\mathcal{P}_{3}$ both have a faster
descent rate than those of the operator $\mathcal{P}_{2}$. Quantitatively, we
can see from Table 1 that the operator $\mathcal{P}_{3}$ only needs half of
the iteration steps of the operator $\mathcal{P}_{2}$ to achieve almost the
same relative model error and relative misfit function. This significantly
saves the expensive computational cost of the seismic velocity inversion
problem. Finally, it can be seen from the middle and lower subgraphs of Figure
4 that the velocity inversion results of the operator $\mathcal{P}_{3}$ are
significantly better than those of the operator $\mathcal{P}_{2}$ under the
same iteration steps. The above discussions show that our approach has higher
efficiency and better inversion results.
Table 1: The two-layer model. Relative Model Error and Relative Misfit Function of $W_{2}$ with the operators $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$ in $20$, $40$ and $80$ iteration steps, respectively.

Iteration Steps | RME, $W_{2}$ with $\mathcal{P}_{2}$ | RME, $W_{2}$ with $\mathcal{P}_{3}$ | RMF, $W_{2}$ with $\mathcal{P}_{2}$ | RMF, $W_{2}$ with $\mathcal{P}_{3}$
---|---|---|---|---
$20$ | $3.69\times 10^{-1}$ | $2.15\times 10^{-1}$ | $4.99\times 10^{-3}$ | $6.90\times 10^{-4}$
$40$ | $2.23\times 10^{-1}$ | $1.04\times 10^{-1}$ | $3.61\times 10^{-4}$ | $9.41\times 10^{-5}$
$80$ | $8.35\times 10^{-2}$ | $3.04\times 10^{-2}$ | $1.25\times 10^{-5}$ | $2.75\times 10^{-6}$
### 4.2 The Crustal Root Model
Let us consider the crustal root model, a kind of subsurface structure usually
found along orogens. This model consists of a two-layered crust divided by the
Conrad discontinuity. A dipping and discontinuous Moho interface separates the
crust and the mantle. The depiction of these tectonic features helps us better
understand the formation of old mountain belts. Mathematically, we consider
this three-layer model in the bounded domain
$\Omega=[0,80\;km]\times[0,80\;km]$. The three layers are separated by the
Conrad discontinuity at $20\;km$ depth and by the Moho discontinuity, whose
location $(x,L(x))$ is described by the quadratic function
$L(x)=\left\\{\begin{array}[]{ll}36+\frac{25}{1600}x^{2}\ km,&0\ km\leq x\leq
40\ km,\\\ 36\ km,&40\ km<x\leq 80\ km.\end{array}\right.$
The seismic wave speed in each layer is taken from the AK135 model [18],
generating the real seismic velocity model (Figure 5, left)
$c_{T}(x,z)=\left\\{\begin{array}[]{ll}5.8\ km/s,&z\leq 20\ km,\\\ 6.5\
km/s,&20\ km<z\leq L(x),\\\ 8.04\ km/s,&others.\end{array}\right.$
Our goal is to perform the seismic velocity inversion to detect this crustal
root. Correspondingly, the initial velocity model (Figure 5, right) without
crustal root anomaly is as follows
$c_{0}(x,z)=\left\\{\begin{array}[]{ll}5.8\ km/s,&z\leq 20\ km,\\\ 6.5\
km/s,&20\ km<z\leq 36\ km\\\ 8.04\ km/s,&others.\end{array}\right.$
The computational time interval is $[0\;s,21\;s]$. The inversion grid step is
$2\;km$ and the number of degrees of freedom amounts to $1600$. The space and
time steps in the forward simulation are $0.2\ km$ and $0.01\ s$,
respectively. The dominant frequency of the earthquakes in (2.7) is
$f_{0}=2\;Hz$. We randomly choose $40$ receiver stations deployed on the
surface and $80$ earthquakes distributed in the study region.
Figure 5: Illustration of the crustal root model. Left: the real seismic
velocity model. Right: the initial velocity model. The green inverted
triangles and the white stars indicate the receiver stations and the
earthquakes, respectively.
Similar to subsection 4.1, we present the inversion results of $L^{2}$ metric
and $W_{2}$ metric with the operators $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$
in Figure 6. Obviously, the $L^{2}$-based inversion could not capture the
crustal root structure. The relative model error and the relative misfit
function with respect to different normalization operators are given in Table
2. Correspondingly, the convergent trajectories are shown in the upper middle
and upper right subgraphs of Figure 6. In the middle and lower subgraphs of
Figure 6, the inversion results are also presented, from which we can draw
the same conclusions as in subsection 4.1.
Figure 6: The inversion results of the crustal root model. Upper subgraphs: the result for $L^{2}$ metric after 40 steps (upper left); the convergent trajectories of the relative model error (upper middle); the convergent trajectories of the relative misfit function (upper right). In the middle and the lower subgraphs, we present the results for the $W_{2}$ metric with the operators $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$, respectively. From left to right, the inversion iteration steps are $40$, $80$, and $160$. All the results are shown in the same color bar.

Table 2: The crustal root model. Relative Model Error and Relative Misfit Function of $W_{2}$ with the operators $\mathcal{P}_{2}$ and $\mathcal{P}_{3}$ in $40$, $80$ and $160$ iteration steps, respectively.

Iteration Steps | RME, $W_{2}$ with $\mathcal{P}_{2}$ | RME, $W_{2}$ with $\mathcal{P}_{3}$ | RMF, $W_{2}$ with $\mathcal{P}_{2}$ | RMF, $W_{2}$ with $\mathcal{P}_{3}$
---|---|---|---|---
$40$ | $6.43\times 10^{-1}$ | $5.59\times 10^{-1}$ | $5.47\times 10^{-3}$ | $6.35\times 10^{-4}$
$80$ | $5.37\times 10^{-1}$ | $4.68\times 10^{-1}$ | $7.83\times 10^{-4}$ | $1.74\times 10^{-4}$
$160$ | $4.32\times 10^{-1}$ | $3.99\times 10^{-1}$ | $1.33\times 10^{-4}$ | $6.11\times 10^{-5}$
## 5 Conclusion
In this work, we have addressed the difficulty of seismic velocity inversion
based on squaring scaling and the quadratic Wasserstein metric, as pointed out
in [Commun. Inf. Syst., 2019, 19:95-145] and [Meth. Appl. Anal., 2019,
2:133-148]. Not only can we solve the seismic velocity inversion problem with
a large number of degrees of freedom, but, by introducing a better
normalization operator, the convergence efficiency is also significantly
improved. We would like to combine the above techniques with double-difference
traveltime adjoint tomography [4], which has significant advantages on real
seismic data. This may result in a more robust and reliable
seismic velocity inversion method. We are currently investigating this
interesting topic and hope to report this in an independent publication.
## Acknowledgments
This work was supported by National Natural Science Foundation of China (Grant
No. 11871297) and Tsinghua University Initiative Scientific Research Program.
## References
* [1] K. Aki and P.G. Richards, Quantitative Seismology: Theory and Methods volume II, W.H. Freeman & Co (Sd), 1980.
* [2] R. Brossier, S. Operto and J. Virieux, Velocity model building from seismic reflection data by full-waveform inversion, Geophysical Prospecting, 63(2), 354-367, 2015.
* [3] S.-J. Chang and C.-E. Baag, Crustal Structure in Southern Korea from Joint Analysis of Regional Broadband Waveforms and Travel Times, Bulletin of the Seismological Society of America, 96(3), 856–870, 2006.
* [4] J. Chen, G.X. Chen, H. Wu, J.Y. Yao and P. Tong, Adjoint tomography of NE Japan revealed by common-source double-difference traveltime data, preprint.
* [5] J. Chen, Y.F. Chen, H. Wu and D.H. Yang, The quadratic Wasserstein metric for Earthquake Location, J. Comput. Phys., 373, 188-209, 2018.
* [6] Y. Chen, J. Hill, W. Lei, M. Lefebvre, J. Tromp, E. Bozdag and D. Komatitsch, Automated time-window selection based on machine learning for full-waveform inversion, SEG Technical Program Expanded Abstracts, 1604-1609, 2017.
* [7] R. Chu, S. Ni, A. Pitarka and D.V. Helmberger, Inversion of Source Parameters for Moderate Earthquakes Using Short-Period Teleseismic P Waves, Pure and Applied Geophysics, 171(7), 1329–1341, 2014.
* [8] M.A. Dablain, The application of high-order differencing to the scalar wave equation, Geophysics, 51(1), 54-66, 1986.
* [9] M. Dunlop and Y.N. Yang, New likelihood functions and level-set prior for Bayesian full-waveform inversion, In SEG Technical Program Expanded Abstracts 2020, pages 825-829. Society of Exploration Geophysicists, 2020.
* [10] B. Engquist and B.D. Froese, Application of the Wasserstein metric to seismic signals, Commun. Math. Sci., 12(5), 979-988, 2014.
* [11] B. Engquist, B.D. Froese and Y.N. Yang, Optimal transport for seismic full waveform inversion, Commun. Math. Sci., 14(8), 2309-2330, 2016.
* [12] B. Engquist and Y.N. Yang, Seismic imaging and optimal transport, Communications in Information and Systems, 19(2), 95-145, 2019.
* [13] B. Engquist and Y.N. Yang, Seismic inversion and the data normalization for optimal transport, Methods and Applications of Analysis, 26(2), 133-148, 2019.
* [14] B. Engquist and Y.N. Yang, Optimal Transport Based Seismic Inversion: Beyond Cycle Skipping, Comm. Pure Appl. Math., https://doi.org/10.1002/cpa.21990, 2021.
* [15] S.-H. Hung, F.A.Dahlen and G. Nolet, Wavefront healing: a banana–doughnut perspective, Geophys. J. Int., 146(2), 289-312, 2001.
* [16] X. Huang, D. Yang, P. Tong, J. Badal and Q. Liu, Wave equation-based reflection tomography of the 1992 Landers earthquake area, Geophysical Research Letters, 43(5), 1884-1892, 2016.
* [17] D. Komatitsch and J. Tromp, A perfectly matched layer absorbing boundary condition for the second-order seismic wave equation, Geophys. J. Int., 154(1), 146-153, 2003.
* [18] B.L.N. Kennett, E.R. Engdahl and R. Buland, Constraints on seismic velocities in the Earth from traveltimes, Geophys. J. Int., 122(1), 108-124, 1995.
* [19] J.S. Li, D.H. Yang, H. Wu and X. Ma, A low-dispersive method using the high-order stereo-modelling operator for solving 2-D wave equations, Geophys. J. Int., 210(3), 1938-1964, 2017.
* [20] R. Madariaga, Seismic Source Theory, in Treatise on Geophysics (Second Edition), pp. 51-71, ed. Gerald, S., Elsevier B.V., 2015.
* [21] A. Maggi, C. Tape, M. Chen, D. Chao and J. Tromp, An automated time-window selection algorithm for seismic tomography, Geophys. J. Int., 178(1), 257–281, 2009.
* [22] L. Métivier, R. Brossier, J. Virieux and S. Operto, Full waveform inversion and the truncated Newton method, SIAM Journal on Scientific Computing, 35(2), B401-B437, 2013.
* [23] L. Metivier, R. Brossier, Q. Merigot, E. Oudet and J. Virieux, Measuring the misfit between seismograms using an optimal transport distance: application to full waveform inversion, Geophys. J. Int., 205(1), 345–377, 2016.
* [24] L. Metivier, R. Brossier, Q. Merigot, E. Oudet and J. Virieux, An optimal transport approach for seismic tomography: application to 3D full waveform inversion, Inverse Problems, 32(11), 115008, 2016.
* [25] R.-E. Plessix, A review of the adjoint-state method for computing the gradient of a functional with geophysical applications, Geophys. J. Int., 167(2), 495–503, 2006.
* [26] L. Qiu, J. Ramos-Martínez, A. Valenciano, Y. Yang and B. Engquist, Full-waveform inversion with an exponentially encoded optimal-transport norm, In SEG Technical Program Expanded Abstracts 2017, pages 1286–1290. Society of Exploration Geophysicists, 2017.
* [27] N. Rawlinson, M. Sambridge and J. Hauser, Multipathing, reciprocal traveltime fields and raylets, Geophys. J. Int., 181(2), 1077-1092, 2010.
* [28] C. Tape, Q.Y. Liu, A. Maggi and J. Tromp, Seismic tomography of the southern California crust based on spectral-element and adjoint methods, Geophys. J. Int., 180(1), 433-462, 2010.
* [29] C. Villani, Topics in Optimal transportation, Graduate Studies in Mathematics, American Mathematical Society, 2003.
* [30] C. Villani, Optimal Transport: Old and New, Springer Science $\&$ Business Media, 2008.
* [31] J. Virieux and S. Operto, An overview of full-waveform inversion in exploration geophysics, Geophysics, 74(6), WCC1-WCC26, 2009.
* [32] J. Wang, D.H. Yang, H. Jing and H. Wu, Full waveform inversion based on the ensemble Kalman filter method using uniform sampling without replacement, Science Bulletin, 64(5), 321-330, 2019.
* [33] S. Wang and H. Tkalčić, Seismic event coda-correlation: Toward global coda-correlation tomography, Journal of Geophysical Research: Solid Earth, 125, e2019JB018848, 2020.
* [34] X. Wen, High Order Numerical Quadratures to One Dimensional Delta Function Integrals, SIAM J. Sci. Comput., 30(4), 1825-1846, 2008.
* [35] S. Xu, D.L. Wang, F. Chen, G. Lambaré and Y. Zhang, Inversion on Reflected Seismic Wave, SEG Technical Program Expanded Abstracts, 1-7, 2012.
* [36] Y.N. Yang, B. Engquist, J.Z. Sun and B.D. Froese, Application of Optimal Transport and the Quadratic Wasserstein Metric to Full-Waveform Inversion, Geophysics, 83(1), R43-R62, 2018.
* [37] Y.N. Yang and B. Engquist, Analysis of optimal transport and related misfit functions in full-waveform inversion, Geophysics, 83(1), A7-A12, 2018.
* [38] D. T. Zhou, J. Chen, H. Wu, D.H. Yang and L.Y. Qiu, The Wasserstein-Fisher-Rao metric for waveform based earthquake location, J. Comp. Math., accepted.
* [39] W. Zhou, R. Brossier, S. Operto and J. Virieux, Full waveform inversion of diving & reflected waves for velocity model building with impedance inversion based on scale separation, Geophys. J. Int., 202(3), 1535-1554, 2015.
# Delocalization-localization dynamical phase transition of random walks on
graphs
Giorgio Carugno<EMAIL_ADDRESS>Department of Mathematics, King’s
College London, Strand, London WC2R 2LS, UK Pierpaolo Vivo
<EMAIL_ADDRESS>Department of Mathematics, King’s College London,
Strand, London WC2R 2LS, UK Francesco Coghi<EMAIL_ADDRESS>Nordita,
KTH Royal Institute of Technology and Stockholm University, Hannes Alfvéns väg
12, SE-106 91 Stockholm, Sweden
###### Abstract
We consider random walks evolving on two models of connected and undirected
graphs and study the exact large deviations of a local dynamical observable.
We prove, in the thermodynamic limit, that this observable undergoes a first-
order dynamical phase transition (DPT). This is interpreted as a ‘co-
existence’ of paths in the fluctuations that visit the highly connected bulk
of the graph (delocalization) and paths that visit the boundary
(localization). The methods we used also allow us to characterize analytically
the scaling function that describes the finite-size crossover between the
localized and delocalized regimes. Remarkably, we also show that the DPT is
robust with respect to a change in the graph topology, which only plays a role
in the crossover regime. All results support the view that a first-order DPT
may also appear in random walks on infinite-size random graphs.
## I Introduction
Random walks on graphs are versatile tools to model real-world noisy dynamical
processes embedded in spatial structures Hughes1995 ; Noh2004 ; Barrat2008 ;
Newman2010 ; Latora2017 ; Masuda2017 . These processes describe both natural
and man-made phenomena such as the spreading of infectious diseases Barrat2008
; Pastor-Satorras2015 , the transport of vesicles in cell cytoskeletons
Julicher1997 , the propagation of information in communication networks
Castellano2009 ; Liu2014 , and the robustness of networks to random failures
Barrat2008 , to name just a few examples. Often, the focus in these
applications is on time-averaged quantities such as stationary distributions
and energy and particle currents. Indeed, these are the observables commonly
used to gather information on the average state occupation and mobility in
network structures Barrat2008 ; Masuda2017 . On the other hand, fluctuations
are also fundamental to understanding the behavior of physical systems living
in unstable environments, as rare events are often responsible for the
evolution dynamics Albeverio2006 ; Kishore2011 . However,
much less is known about them and in the last decades many research efforts
have been deployed towards the development of theoretical frameworks that
allow for their study, e.g., large deviation theory DenHollander2000 ;
Touchette2009 ; Dembo2010 ; Chetrite2015 ; Jack2020a ; Carugno2022 .
Recently, signatures of a dynamical phase transition (DPT), viz. a transition
between different fluctuation mechanisms, have been identified in the study of
the mean degree (connectivity) visited by unbiased random walks evolving on
sparse random graphs DeBacco2016 ; Coghi2019 ; Gutierrez2021 . There are good
grounds to consider it as a first-order DPT where we observe the coexistence
of two ‘phases’ characterized by random walk paths that visit the whole graph,
and paths localized in dangling chains, i.e., lowly connected structures of
the graph. However, a rigorous proof for ensembles of random graphs is still
lacking and, in fact, the community still debates the real nature and
interpretation of DPTs Whitelam2018 ; Whitelam2021 .
In this paper, we contribute to the debate by analyzing two exactly solvable
models where the transition appears to be first-order and characterized by an
absorbing dynamics. This sees, on the one hand, the random walk fully
localized in dangling structures, and on the other hand, the random walk fully
absorbed by the bulk of the graph, which acts as an entropic basin and allows
the random walk to be fully delocalized. We make use of a theoretical
framework for the calculation of large deviations that we developed in
Carugno2022 and that allows us to: (i) consider general time-additive
observables, (ii) analytically characterize the behavior of random walks on
finite-size graphs, and (iii) rigorously study the scaling (with respect to
the size of the graph) of fluctuations around the critical value of the DPT.
Remarkably, in agreement with Whitelam2018 ; Whitelam2021 we notice that an
important ingredient for the appearance of a first-order DPT in both models is
the presence of absorbing dynamics, generated by different scalings of the
hopping probabilities in the graph. Furthermore, although the first-order DPT
appears in both models we investigated, the scaling of the fluctuations around
the transition is different, and we argue that it depends both on the dynamical
process and on the inherent topology of the network.
A brief outline of the paper follows. In Section II we set up a general model
of an URW hopping on a graph, discuss the general form of observables that we
consider in this manuscript, and introduce the theory of large deviations in
this setting. In Section III we collect our results related to two exactly
solvable models. In Section IV we conclude the paper by summarizing the
results obtained and briefly discussing open questions.
## II Setting and large deviations
We consider an unbiased discrete-time random walk (URW)
$X=\left(X_{\ell}\right)_{\ell=1}^{n}=(X_{1},X_{2},\dots,X_{n})$ evolving on a
finite connected graph $G=(V,E)$, with $V$ denoting the set of $N$ vertices
(or nodes) and $E$ the set of edges (or links). The topology of the graph is
encoded in the symmetric adjacency matrix $A$, which has components
$A_{ij}=\begin{cases}1&i\in\partial j\\\ 0&\text{otherwise}\,,\end{cases}$ (1)
where $\partial j$ denotes the set of neighbors of node $j$. Notice that we
choose to consider an unweighted symmetric graph for simplicity, but our
methods can be easily generalized to more structured cases. The dynamics of
the random walk is defined by the transition matrix $\Pi$ having components
$\Pi_{ij}=\frac{A_{ij}}{k_{i}}\,,$ (2)
where $k_{i}=\sum_{j\in V}A_{ij}$ is the degree of node $i$, viz. the number
of edges in which node $i$ participates. The matrix $\Pi$ characterizes the
uniform probability of going from a vertex $X_{\ell}=i$ at time $\ell$ to a
vertex $X_{\ell+1}=j$ at time $\ell+1$ – that is, the probability of
transitioning from $i$ to $j\in\partial i$ does not depend on $j$.
Furthermore, for simplicity we restrict the random walk to be ergodic, viz.
$\Pi$ is irreducible and aperiodic. In the rest of the manuscript we use the
index $\ell$ to refer to time and the indices $i$ and $j$ to refer to nodes of
the graph.
The long-time behavior of the URW is well understood. Thanks to ergodicity,
the random walk has a unique stationary distribution
$\rho_{i}=\frac{k_{i}}{\sum_{j\in V}k_{j}}\,,$ (3)
which is found to be proportional to the degree of each node. Furthermore, the
URW is also reversible, viz. it is an equilibrium process, as it satisfies the
detailed balance condition
$\rho_{i}\Pi_{ij}=\rho_{j}\Pi_{ji}\,$ (4)
for each pair of nodes in $V$.
In this setting, we assume that the URW $X$ accumulates a cost in time given
by
$C_{n}=\frac{1}{n}\sum_{\ell=1}^{n}f(X_{\ell})\,,$ (5)
where $f$ is any function of the vertex state. In nonequilibrium statistical
mechanics, this cost is also called a dynamical observable Touchette2009 and,
depending on $f$, it may represent interesting physical quantities, such as
occupation times Chetrite2015 , internal energy Sekimoto2010 , chemical
concentrations Dykman1998 , activities Gutierrez2021 , and entropy production
rates Coghi2019 . Because of the ergodicity of the URW, in the long-time limit
the observable $C_{n}$ converges with probability $1$ to the ergodic average
$\sum_{i\in V}\rho_{i}f(i)\eqqcolon c^{*}\,.$ (6)
This convergence property is often used to estimate properties of large graphs
such as degree distributions or centrality measures, by running random walks
(or, generally speaking, agents) on the graphs for long times Newman2010 .
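As an illustration of the definitions above, the short Python sketch below builds the transition matrix (2), the stationary distribution (3), and the ergodic average (6); the toy graph and observable in the example are ours and serve purely as an illustration.

```python
import numpy as np

def transition_matrix(A):
    """Unbiased random-walk transition matrix (2): Pi_ij = A_ij / k_i."""
    k = A.sum(axis=1)
    return A / k[:, None]

def stationary_distribution(A):
    """Stationary distribution (3): rho_i = k_i / sum_j k_j."""
    k = A.sum(axis=1)
    return k / k.sum()

def ergodic_average(A, f):
    """Typical value c* of the observable (6) for a vertex function f."""
    return stationary_distribution(A) @ f

# Toy example: a triangle with one dangling node, rescaled degree as observable.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)
print(ergodic_average(A, f=k / k.mean()))
```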
Following the introduction, here we study fluctuations of $C_{n}$ around the
typical value $c^{*}$ by calculating its probability distribution
$\mathbb{P}(C_{n}=c)$ in the $n\rightarrow\infty$ limit. The probabilistic
theory of large deviations DenHollander2000 ; Touchette2009 ; Dembo2010 tells
us that this distribution has an exponentially decaying form
$\mathbb{P}(C_{n}=c)=e^{-nI(c)+o(n)}\,,$ (7)
described by the rate function $I$ given by the following limit
$I(c)=-\lim_{n\rightarrow\infty}\frac{1}{n}\log\mathbb{P}(C_{n}=c)\,.$ (8)
The rate function $I$ is a pivotal object in the theory of large deviations as
it characterizes the fluctuations of $C_{n}$ to leading order in $n$; it is a
non-negative function and it is equal to $0$ for ergodic random walks only at
$c^{*}$ (where the probability concentrates exponentially fast with time).
Much effort has been devoted to the development of methods that allow one to
calculate $I$ in (8) efficiently Touchette2009 . Spectral and variational
techniques can both be implemented and, depending on the particular model
studied, some may work better than others Coghi2021PhD . Spectral techniques
based on moment generating functions have the merit of reformulating the
problem in a different setting, similarly to a microcanonical-canonical change
of ensemble, whereas variational techniques based on the
contraction principle DenHollander2000 ; Touchette2009 are useful to find
probabilistic bounds Hoppenau2016 . In the following, we will base our large
deviation study on the techniques discussed in Carugno2022 which try to merge
the pros of spectral and variational methods.
In line with Carugno2022 and previous works Whittle1955 ; Dawson1957 ;
Goodman1958 ; Billingsley1961 ; CsiszaR1987 ; DenHollander2000 ; Polettini2015
; Dembo2010 , in order to calculate the rate function $I$ associated with the
observable $C_{n}$ in (5) we move the focus on to the study of the higher-
dimensional pair-empirical occupation measure
$L^{(2)}_{n}(i,j)=\frac{1}{n}\sum_{\ell=1}^{n}\delta_{X_{\ell},i}\delta_{X_{\ell+1},j}=\nu_{ij}\hskip
28.45274pt\forall i,j\in V\,,$ (9)
which counts the fraction of jumps $\nu_{ij}$ that the URW makes between each
pair of nodes in the graph – see Carugno2022 . Remarkably, the value of
$C_{n}$ can be deduced via the formula
$C_{n}=\sum_{i,j\in V}f(i)L_{n}^{(2)}(i,j)\,.$ (10)
We can calculate the rate function (8) by means of the Gärtner–Ellis theorem
Touchette2009 ; Dembo2010 . To do so we need to introduce the scaled cumulant
generating function (SCGF) of $C_{n}$, which is defined as
$\lambda_{s,N}[\nu^{*}]=\lim_{n\rightarrow\infty}\frac{1}{n}\log\mathbb{E}\left[e^{nsC_{n}}\right]\,,$ (11)
and calculate its Legendre–Fenchel transform, i.e.,
$I(c)=\sup_{s\in\mathbb{R}}\\{sc-\lambda_{s,N}[\nu^{*}]\\}\,.$ (12)
In Carugno2022 , we showed that $\lambda_{s,N}[\nu^{*}]$ can be obtained
by minimizing the following action
$\displaystyle\lambda_{s,N}$
$\displaystyle[\nu]=\lambda_{1,N}[\nu]+\lambda_{2,N}[\nu]+\lambda_{3,N}[\nu]+\lambda_{4,N}[\nu]$
(13) $\displaystyle\lambda_{1,N}$
$\displaystyle[\nu]=\sum_{i=1}^{N}\sum_{j=1}^{N}\nu_{ij}\left(\log\left(\sum_{k=1}^{N}\nu_{ik}\right)-\log(\nu_{ij})\right)$
(14) $\displaystyle\lambda_{2,N}$
$\displaystyle[\nu]=\sum_{i=1}^{N}\sum_{j=1}^{N}\log(\Pi_{ij})\ \nu_{ij}$ (15)
$\displaystyle\lambda_{3,N}$
$\displaystyle[\nu]=s\sum_{i=1}^{N}f(i)\sum_{j=1}^{N}\ \nu_{ij}$ (16)
$\displaystyle\lambda_{4,N}$
$\displaystyle[\nu]=\epsilon\left(\sum_{i=1}^{N}\sum_{j=1}^{N}\nu_{ij}-1\right)+\sum_{i=1}^{N}\eta_{i}\left(\sum_{j=1}^{N}\nu_{ij}-\sum_{j=1}^{N}\nu_{ji}\right)\
,$ (17)
with respect to $\nu_{ij}$, $\epsilon$ and $\eta_{i}$, which are respectively
the fraction of jumps from node $i$ to node $j$, and the Lagrange multipliers
fixing the normalization constraint and the global balance. We remark that
these formulas are valid for any finite $N$. In our setting the rate function
$I$ in (12) reduces to
$I(c)=-\lambda_{1,N}[\nu^{*}]-\lambda_{2,N}[\nu^{*}]-\lambda_{4,N}[\nu^{*}]\,,$
(18)
where the dependence on the fluctuation $c$ enters through the minimizer
$\nu^{*}$, which depends on the optimized tilting parameter $s^{*}(c)$, i.e.,
$\nu^{*}\equiv\nu^{*}(s^{*}(c))$. For further details on the methods we used
to derive (13)-(17) and on a useful physical characterization of the action
(13) we refer the reader to Carugno2022 and related bibliography.
Although the equations for the minimum of (13) may be complicated to solve
analytically for complex models, the proposed approach has several advantages
that will also be highlighted in the next sections when studying simplified
scenarios. Firstly, the explicit form of the action (13) for any finite $N$
allows us to study directly the scaling of the fluctuations with varying graph
size. As we will see in the following, this is an important feature that will
help us in characterizing fluctuations around critical points. Furthermore,
the tilting parameter $s$ is responsible for biasing the dynamics of the URW
via (16) to realize a fluctuation $c$ for the observable $C_{n}$ fixed by the
Legendre duality equation
$c=\frac{d\lambda_{s,N}[\nu]}{ds}\,.$ (19)
Therefore, the minimizer $\nu^{*}$ of the action $\lambda_{s,N}$ characterizes
the typical configuration of jumps on the graph $G$ that give rise to the
fluctuation $c$ defined by (19) (similarly to Gutierrez2021 ).
It is natural to introduce a biased dynamics for which $C_{n}=c$ is realized
in the typical state: this biased dynamics has been thoroughly characterized
in its general form in Chetrite2015 ; Chetrite2015a and also in the setting
of random walks on graphs in Coghi2019 . The process characterized by the
biased dynamics is known as driven (or effective/auxiliary) process and in
this context is a locally-biased version of the URW whose transition
probability matrix has been described in Coghi2019 . Within our approach, we
can also fully define the driven process and its transition matrix, which
reads
$\left(\Pi_{s}\right)_{ij}=\frac{\nu^{*}_{ij}(s)}{\sum_{k\in
V}\nu^{*}_{ik}(s)}\,.$ (20)
In other words, the driven process is the effective biased random walk that
explains how a fluctuation $C_{n}=c$ is created up to time $n$.
We remark that the method used here to calculate the SCGF
$\lambda_{s,N}[\nu^{*}]$ via the minimization of (13) is equivalent to
spectral methods based on the so-called tilted matrix Touchette2009 ;
Dembo2010 ; Touchette2018 ; Coghi2021PhD . In particular, in Carugno2022 we
show that the Euler–Lagrange equations for the minima of (13) are a useful re-
writing of the dominant eigenvalue problem associated with the tilted matrix.
The main pros of the method reviewed in Carugno2022 are (i) to give a clear
physical interpretation of all the terms in the action and SCGF and (ii) to
express the driven process in terms of the minimizers of (13), which are the
optimal jumps that create a fluctuation $C_{n}=c$.
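To complement the variational formulation, the following Python sketch computes the SCGF and the driven process numerically through the equivalent tilted-matrix route mentioned above. We adopt the common convention of tilting on the departure node, consistent with the term (16); this is a numerical illustration of ours rather than the procedure of Carugno2022 .

```python
import numpy as np

def tilted_scgf(Pi, f, s):
    """SCGF and driven process from the dominant eigenvalue of the tilted matrix.

    The tilted matrix is taken as (Pi_s)_ij = exp(s f(i)) Pi_ij (departure-node
    convention); its Perron eigenvalue zeta gives lambda(s) = log(zeta), and the
    Doob transform with the right Perron eigenvector r yields a row-stochastic
    driven matrix, in the spirit of (20).
    """
    tilted = np.exp(s * f)[:, None] * Pi
    vals, vecs = np.linalg.eig(tilted)
    idx = np.argmax(vals.real)              # Perron root is real and positive
    zeta = vals[idx].real
    r = np.abs(vecs[:, idx].real)           # Perron eigenvector, made positive
    driven = tilted * r[None, :] / (zeta * r[:, None])
    return np.log(zeta), driven

def rate_function(Pi, f, c_values, s_grid):
    """Numerical Legendre-Fenchel transform (12) of the SCGF on a grid of s."""
    lambdas = np.array([tilted_scgf(Pi, f, s)[0] for s in s_grid])
    return np.array([np.max(s_grid * c - lambdas) for c in c_values])
```

The driven matrix returned here is row-stochastic by construction, so it can be used directly to simulate the biased paths that realize a given fluctuation.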
## III Delocalization-localization dynamical phase transition
Recently, it has been pointed out DeBacco2016 ; Coghi2019 ; Gutierrez2021
that an URW that accumulates a cost proportional to the degree of each visited
node, e.g., $f(X_{\ell})=k_{X_{\ell}}$ in (5), and that runs on the largest
connected component of an Erdős–Rényi random graph seems to undergo a DPT.
This transition is localized in the fluctuations of the mean degree visited
when this is lower than the mean connectivity of the graph and is interpreted
as a ‘co-existence’ of paths that visit nodes with low degree and paths that
visit the whole graph DeBacco2016 ; Coghi2019 . [Noticeably, another DPT may
arise when the random walk visits more often highly connected regions of the
graph and localizes around the highest-degree node of the graph. Although
equally interesting, we will not focus on this behavior in this manuscript.] The
Erdős–Rényi random graph in question is picked from a ‘canonical’ ensemble of
random graphs having a fixed number of nodes $N$ and edges randomly placed
with a small probability $p$ between each pair of nodes such that $Np=\bar{c}$
is fixed to be the mean degree of the graph. Noticeably, signatures of the DPT
disappear when $\bar{c}$ is large, revealing that a fundamental topological
ingredient for the appearance of the transition is the presence of lowly
connected structures in the graph (such as trees and dangling chains of
nodes). These structures carry strong spatial correlations (a node of degree
one is likely to be connected to a node of degree two in a dangling chain),
and these correlations are responsible for the dynamics of the random walk
when it visits low-degree nodes. The overall picture is that of a random walk
whose behavior fluctuates between two distinct phases characterized by (i)
being localized in lowly-connected regions of the graph and (ii) being spread
over the bulk (most connected region) of the graph which acts as an ergodic
basin absorbing the dynamics (see model in Appendix A of Coghi2019 and also
Whitelam2018 ).
As far as the current state of the art is concerned, it is not clear whether
such a DPT appears in the infinite-size limit of ensembles of random graphs.
However, as mentioned in the previous paragraph, various numerical studies
indicate an abrupt change in the mechanisms that generate fluctuations,
endorsing the idea of a dynamical phase transition DeBacco2016 ; Coghi2019 ;
Gutierrez2021 .
In the following, by applying the theory discussed in Section II, we
analytically characterize the DPT in two models, which capture what we believe are
the most relevant physical features of this phenomenon. We believe that these
characteristics are shared by the dynamics of URWs on Erdős–Rényi random
graphs: in particular, the heterogeneity of the scaling of the degree and the
presence of lowly connected regions such as dangling chains. We show that (i)
the DPT is first order – that is, $\lambda_{s,N}[\nu^{*}]$ as a function of
$s$ has a non-differentiable point $s_{c}$ in which the first derivative is
discontinuous; (ii) the behavior around $s_{c}$ is characterized by a scaling
function which is not universal, and depends on both the dynamics of the URW
and the topology of the graph; (iii) the driven process can be fully
characterized, allowing us to understand how fluctuations arise in time. These
results give further evidence of the presence of a first-order DPT in random
walks exploring random graphs.
### III.1 Bulk-dangling model
The first model we look at is based on an URW with transition matrix (2)
collecting a cost (5) of the form
$C_{n}=\frac{1}{n}\sum_{\ell=1}^{n}\frac{k_{X_{\ell}}}{\bar{k}}\,,$ (21)
by visiting a graph of $N$ nodes composed of a fully connected bulk of $N-2$
nodes and a single dangling chain of $2$ nodes (see Fig. 1). The structure of
this graph incorporates two relevant features: (i) the presence of a spatially
correlated dangling chain and (ii) a fully connected bulk that allows the URW
to uniformly spread over the network. Given this structure, the mean degree of
the graph is $\bar{k}=((N-3)(N-3)+(N-2)+2+1)/N$, which evidently scales
linearly with $N$ for large-size graphs. This feature allows us to deduce the
behavior of the observable $C_{n}$ in two opposite scenarios in the
$N\rightarrow\infty$ limit. Evidently, if the random walk is uniformly spread
over the bulk, $C_{n}\sim 1$, whereas if the random walk is localized in the
chain, $C_{n}\sim 0$. We argue that this behavior does not depend on the length
$L$ of the dangling chain – the choice $L=2$ is made to simplify calculations.
Figure 1: Bulk-dangling model graph for $N=11$. Node $1$ has degree $k_{1}=1$,
node $2$ has degree $k_{2}=2$ and node $3$ has degree $k_{3}=N-2=9$: the first
two nodes represent the dangling chain, while node $3$ bridges the chain with
the bulk, being part of the latter. All the remaining $N-3=8$ nodes are of
type $4$, having degree $k_{4}=N-3=8$. Together with node $3$, they form the
fully connected bulk.
Following Section II we calculate the action $\lambda_{s,N}[\nu]$ in (13).
Because the bulk of the graph is highly symmetric, i.e., every link in
the bulk is equivalent, and global balance is imposed on the dynamics, i.e., the
incoming and outgoing flows of each node are equal, we are only left with four
degrees of freedom (variables) $\nu_{ij}$ that determine the action. We name
the fraction of jumps
$\displaystyle\nu_{12}$ for both directions: $1\rightarrow 2$ and $2\rightarrow 1$, (22)
$\displaystyle\nu_{23}$ for both directions: $2\rightarrow 3$ and $3\rightarrow 2$, (23)
$\displaystyle\nu_{34}$ for both directions of each link: $3\rightarrow 4$ and $4\rightarrow 3$, (24)
$\displaystyle\nu_{44}$ for both directions of each link in the bulk. (25)
Notice that if at this stage we also imposed the normalization constraint,
i.e., $\sum_{i,j\in V}\nu_{ij}=1$, we would be left with only three degrees of
freedom. However, for practical reasons in the calculation of the minimum of
the action $\lambda_{s,N}[\nu]$, we leave this last constraint as an implicit
parametrization with a Lagrange multiplier $\epsilon$ entering the action.
The action can be explicitly written as
$\lambda_{s,N}[\nu]=h(s,N,\nu_{12},\nu_{23},\nu_{34},\nu_{44})+\epsilon(1-2\nu_{12}-2\nu_{23}-2(N-3)\nu_{34}-(N-3)(N-4)\nu_{44})\,,$
(26)
with the long form of the function $h$ postponed to the Appendix A. The
minimum and minimizers of the action (26) can be found by solving the saddle-
point equations that can be cast in the following linear system:
$\displaystyle\nu_{12}=\nu_{23}\frac{a(s,N,\epsilon)}{1-a(s,N,\epsilon)}$ (27)
$\displaystyle\nu_{23}=\nu_{34}\frac{b(s,N,\epsilon)}{1-b(s,N,\epsilon)}$ (28)
$\displaystyle\nu_{34}=\nu_{23}\frac{c(s,N,\epsilon)}{1-(N-3)c(s,N,\epsilon)}$ (29)
$\displaystyle\nu_{44}=\nu_{34}\frac{d(s,N,\epsilon)}{1-(N-4)d(s,N,\epsilon)}$ (30)
$\displaystyle 2\nu_{12}+2\nu_{23}+2(N-3)\nu_{34}+(N-3)(N-4)\nu_{44}=1\,,$ (31)
with
$\displaystyle a(s,N,\epsilon)=e^{3\frac{s}{\bar{k}}-\log 2+2\epsilon}$ (32)
$\displaystyle b(s,N,\epsilon)=\frac{e^{N\frac{s}{\bar{k}}-\log 2-\log(N-2)+2\epsilon}}{1-e^{3\frac{s}{\bar{k}}-\log 2+2\epsilon}}$ (33)
$\displaystyle c(s,N,\epsilon)=\frac{e^{(2N-5)\frac{s}{\bar{k}}-\log(N-2)-\log(N-3)+2\epsilon}}{1-e^{(N-3)\frac{s}{\bar{k}}-\log(N-3)+\epsilon}}$ (34)
$\displaystyle d(s,N,\epsilon)=e^{(N-3)\frac{s}{\bar{k}}-\log(N-3)+\epsilon}\,.$ (35)
We report the form of the minimizers
$\nu^{*}=(\nu^{*}_{12},\nu^{*}_{23},\nu^{*}_{34},\nu^{*}_{44})$ in the
Appendix A. It is important to remark that these minimizers are not yet in a
fully explicit form as they depend on the Lagrange parameter $\epsilon$
(which, in Carugno2022 , has also been proved to be the negative SCGF
$\lambda_{s,N}[\nu^{*}]$). However, the Lagrange parameter $\epsilon$—hence,
also the SCGF we are after—can be determined by solving the normalization
constraint in (31) after having replaced the form of the minimizers
$\nu^{*}=(\nu^{*}_{12},\nu^{*}_{23},\nu^{*}_{34},\nu^{*}_{44})$. The equation
reads
$\begin{split}2(N-2)(N-3)-2&\tau(N-2)(N-4)e^{(N-3)\frac{s}{\bar{k}}}-\tau^{2}(N-3)\left((N-2)e^{3\frac{s}{\bar{k}}}+e^{N\frac{s}{\bar{k}}}+2e^{(2N-5)\frac{s}{\bar{k}}}\right)+\\\
&\tau^{3}(N-4)\left((N-2)e^{N\frac{s}{\bar{k}}}+e^{(2N-3)\frac{s}{\bar{k}}}\right)+\tau^{4}(N-3)e^{(2N-2)\frac{s}{\bar{k}}}=0\,,\end{split}$
(36)
with $\tau=e^{\epsilon}$. Noticeably, (36) can also be derived by imposing
that the matrix of the coefficients of the four linear equations (27)–(30) has
a nullspace of dimension greater than zero—that is, when its determinant is
zero. Equation (36) is fourth order in $\tau$ and hence it admits four
solutions of which only one is physical. This can be selected by noticing
that, since $\nu^{*}$ must be positive, the right hand side of the four
equations (27)–(30) is also positive (we postpone the exact form of the
inequality constraints to the Appendix A). In this way, we obtained the SCGF
$\lambda_{s,N}[\nu^{*}]$ valid for any finite-size graph.
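As a cross-check of this construction, the finite-size SCGF can also be evaluated by the standard spectral route for additive observables of Markov chains, namely as the logarithm of the dominant eigenvalue of the URW transition matrix tilted by the per-step reward $k_{y}/\bar{k}$; this should coincide with the physical root of (36). The sketch below is our code rather than the authors' (the function name and graph construction are ours) and can be used to reproduce the finite-$N$ curves of Fig. 2.

```python
# Minimal numerical sketch (not the authors' code): instead of solving the
# quartic (36), evaluate the finite-size SCGF as the logarithm of the dominant
# eigenvalue of the tilted transition matrix P(x,y) * exp(s * k_y / kbar) of
# the unbiased random walk on the bulk-dangling graph of Fig. 1.
import numpy as np

def scgf_bulk_dangling(s, N):
    # Adjacency matrix: chain 1-2-3 plus a fully connected bulk on nodes 3..N.
    A = np.zeros((N, N))
    A[0, 1] = A[1, 0] = 1          # node 1 (degree 1) -- node 2 (degree 2)
    A[1, 2] = A[2, 1] = 1          # node 2 -- node 3 (bridge to the bulk)
    A[2:, 2:] = 1 - np.eye(N - 2)  # fully connected bulk of N-2 nodes
    k = A.sum(axis=1)              # degrees
    kbar = k.mean()                # mean degree of the graph
    P = A / k[:, None]             # URW transition matrix, cf. (2)
    T = P * np.exp(s * k / kbar)[None, :]   # tilted matrix for the cost (21)
    return np.log(np.max(np.linalg.eigvals(T).real))

# Example: as N grows the SCGF approaches max(s, -log(2)/2), cf. (37).
for N in (15, 25, 100):
    print(N, scgf_bulk_dangling(-1.0, N), scgf_bulk_dangling(0.5, N))
```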
(a) SCGF $\lambda_{s,N}$
(b) Derivative of SCGF $\lambda^{\prime}_{s,N}$
(c) Rate function $I(c)$
Figure 2: Large deviation study for the bulk-dangling model. In all three
figures, different colors correspond to a different number $N$ of nodes: i)
light blue is $N=15$; ii) orange is $N=25$; iii) green is $N=100$; iv) black
is $N\to\infty$. All finite $N$ curves were obtained by solving (36)
numerically, while analytical expressions for the black curves are presented
in (37) for figure (a), (38) for figure (b) and (39) for figure (c).
By carefully taking the limit $N\rightarrow\infty$ in the polynomial equation
(36) we can also explicitly obtain the SCGF in the infinite-size limit of the
graph, which is
$\lambda_{s,\infty}=\begin{cases}-\frac{\log 2}{2}&s\leq-\frac{\log 2}{2}\\\
s&s>-\frac{\log 2}{2}\end{cases}$ (37)
and highlights a non-differentiable point at $s_{c}=-\log 2/2$. The derivative
of $\lambda_{s,\infty}$, according to (19), is
$\frac{d\lambda_{s,\infty}}{ds}=\begin{cases}0&s\leq-\frac{\log 2}{2}\\\
1&s>-\frac{\log 2}{2}\,,\end{cases}$ (38)
which explicitly describes the fluctuation $C_{n}=c$ happening with varying
tilting parameter $s$. This confirms our expectations: on the left of the
critical point $s_{c}$ the random walk is localized in the dangling chain—the
only region of the graph where the cost accumulated $C_{n}$ (see (5)) does not
scale with the size $N$—whereas on the right of $s_{c}$ the random walk is
spread in the bulk where it accumulates a cost that scales linearly with $N$.
Furthermore, the value of $s_{c}$—and with it all the left branch of
$\lambda_{s,\infty}$ in (37)—can easily be interpreted as the mean entropy of
the random walk that, localized in the dangling chain, keeps going back and
forth from the node of degree $1$ to the node of degree $2$ (see also
Carugno2022 ). Eventually, the rate function $I$ can easily be obtained by
Legendre transforming the two analytical branches of (37) and by connecting
them with a linear section or by implementing directly (8); in both cases we
obtain
$I(c)=\begin{cases}\frac{\log 2}{2}-c\frac{\log 2}{2}&0\leq c\leq 1\\\
\infty&\text{otherwise}\,,\end{cases}$ (39)
and we remark that the non-differentiable point $s_{c}$ for the SCGF
$\lambda_{s,\infty}$ is mapped onto the linear section characterizing the rate
function. We graphically show in Fig. 2 the SCGF, its derivative, and the rate
function for finite-size graphs and in the infinite-size limit.
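For completeness, the Legendre transform leading to (39) can be written out explicitly. Assuming the usual Legendre-Fenchel convention $I(c)=\sup_{s}\\{sc-\lambda_{s,\infty}\\}$ (which we take to be the content of (8)), the two branches of (37) give, for $0\leq c\leq 1$,
$\sup_{s\leq s_{c}}\Big[sc+\frac{\log 2}{2}\Big]=s_{c}\,c+\frac{\log 2}{2}=\frac{\log 2}{2}(1-c)\,,\qquad\sup_{s>s_{c}}\,s(c-1)=s_{c}(c-1)=\frac{\log 2}{2}(1-c)\,,$
with both suprema attained (or approached) at $s=s_{c}=-\log 2/2$; for $c<0$ the first supremum diverges and for $c>1$ the second one does, which recovers (39).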
The non-differentiability of the SCGF can be physically related to a first-
order DPT that is interpreted here as a coexistence of paths that either visit
predominantly the bulk of the graph ($C_{n}\sim 1$) or are localized in the
dangling chain ($C_{n}\sim 0$). A further characterization of this DPT is
given by identifying the mechanisms that give rise to the fluctuations around
the critical point $s_{c}$.
As it appears from formula (38) and Fig. 2(b), the normalized mean-degree
visited by the URW (21) computed from (19) is a piece-wise constant function
of the tilting parameter $s$ in the infinite-size limit of the graph. We refer
to the region $s<s_{c}$ ($s>s_{c}$) corresponding to the localized
(delocalized) phase as $s^{-}$ ($s^{+}$) and we study in these two regions the
scaling with $N$ of the transition probabilities of the driven process (20).
The calculation can be done by properly taking the $N\rightarrow\infty$ limit
of the minimizer
$\nu^{*}=(\nu^{*}_{12},\nu^{*}_{23},\nu^{*}_{34},\nu^{*}_{44})$ and inserting
the result in (20). We get the following two transition matrices that
characterize the probability of stepping from a node to another one in the
graph of Fig. 1:
$\Pi_{s^{-}}=\left(\begin{array}[]{cccccc}0&1&0&\cdots&0&\cdots\\\
1+O(N^{-1})&0&O(N^{-1})&\cdots&0&\cdots\\\
0&-\frac{e^{s}}{3s}+O(N^{-1})&0&O(N^{-1})&\cdots&\cdots\\\
\vdots&&\ddots&&\ddots&\cdots\\\
0&0&1-\sqrt{2}e^{s}&0&O(N^{-1})&\cdots\end{array}\right)$ (40)
$\Pi_{s^{+}}=\left(\begin{array}[]{cccccc}0&1&0&\cdots&0&\cdots\\\
\frac{e^{-2s}}{2}+O(N^{-1})&0&\left(1-\frac{e^{-2s}}{2}\right)+O(N^{-1})&\cdots&0&\cdots\\\
0&O(N^{-1})&0&O(N^{-1})&\cdots&\cdots\\\ \vdots&&\ddots&&\ddots&\cdots\\\
0&0&O(N^{-1})&0&O(N^{-1})&\cdots\end{array}\right)\,.$ (41)
Evidently, for fluctuations obtained by fixing $s<s_{c}$ the random walk is
biased towards localizing in the dangling chain, e.g., if the random walk is
on node two, the probability of hopping onto node one is one order of
magnitude (with respect to the system size) bigger than moving towards the
fully connected bulk. For $s>s_{c}$ instead, the bulk behaves as an entropic
basin absorbing the random walk and allowing it to be fully spread over the
graph.
Figure 3: Crossover regime of the SCGF of the bulk-dangling model around
$s_{c}$ as a function of the scaling variable $\tilde{s}$ for different values
of $N$. As $N$ increases, the colored curves collapse into the limiting curve
predicted theoretically (45).
We conclude the study of the bulk-dangling model showing how fluctuations
scale with the graph size locally around the critical point $s_{c}$ (in
analogy with the study carried out in Whitelam2018 ). This can be done by
centering and rescaling the tilting variable $s$ as
$s=-\frac{\log 2}{2}+\frac{\tilde{s}}{N}\,,$ (42)
and the Lagrange parameter (or negative SCGF)
$\epsilon=\frac{\log 2}{2}-\frac{\tilde{\epsilon}_{\tilde{s}}}{N}\,,$ (43)
in the polynomial equation (36). Using this scaling and expanding the
polynomial to leading order in $N$ we obtain
$\lambda_{\tilde{s},N}\approx-\frac{\log
2}{2}+\frac{\tilde{\epsilon}_{\tilde{s}}}{N}\,,$ (44)
with
$\tilde{\epsilon}_{\tilde{s}}=\frac{1}{8}\left(\sqrt{2}+4\tilde{s}-7\log
2+\sqrt{2+16\sqrt{2}+2\sqrt{2}\log 2-\log^{2}2-8\sqrt{2}\tilde{s}-8\log
2\tilde{s}+16\tilde{s}^{2}}\right)\,,$ (45)
which explains how the SCGF locally scales as a function of the graph size
around $s_{c}$. We report in Fig. 3 the function
$\tilde{\epsilon}_{\tilde{s}}$ (translated so as to be centered at $(0,0)$ rather
than at $(-\log 2/2,-\log 2/2)$). Evidently, the function continuously joins the two
branches of fluctuations separated by the critical point $s_{c}$ in Fig. 2(a):
on the left, for $\tilde{s}\ll 0$, $\tilde{\epsilon}_{\tilde{s}}$ tends to $0$
(hence, the SCGF tends to $-\log 2/2$); on the right, for $\tilde{s}\gg 0$, it behaves linearly
with respect to $\tilde{s}$.
In conclusion, the critical point $s_{c}$ marks a first-order DPT for the
observable $C_{n}$ in (21); however, thanks to the rescaling shown in
(42) and (43), we can extract more precise information on how fluctuations scale
with the system size around the critical point $s_{c}$.
### III.2 Two-state Markov chain
In this Subsection we analyze another model which takes its cue from the
findings in the previous model and is also inspired by the works of
Whitelam2018 ; Coghi2019 . The model is a two-state Markov chain as
represented in Fig. 4. If the Markov chain is in the state on the left,
namely $1$, at a certain time $\ell$, it collects a unit reward $1$,
whereas if it is in the state on the right, namely $b$, it collects a reward $b\geq 1$
(eventually taking $b\rightarrow\infty$). Further, although the probability of moving
from the left to the right is totally unbiased, the probability of moving from
the right to the left inversely scales with the reward $b$.
Figure 4: Sketch of the two-state Markov chain model, composed of state $1$
and state $b$. The transition probabilities – depicted in the figure above the
arrows – read explicitly: $p_{11}=1/2$, $p_{1b}=1/2$, $p_{b1}=1/b$,
$p_{bb}=1-1/b$.
The observable we focus on has the general form in (5) and in this particular
scenario it reduces to
$C_{n}=\frac{1}{n}\sum_{\ell=1}^{n}\frac{X_{\ell}}{b}\,,$ (46)
which is the mean reward collected over time renormalized by the maximum
reward $b$.
This model once again tries to capture the most relevant physical ingredients
that may lead to a delocalization-localization first-order DPT. In doing so,
however, we make a further simplification: we strip away the graph topology as
much as possible, replacing bulk and dangling contributions with two
single states which respectively give a reward of $b$ and $1$ to the observed
cost in (46). In comparison with the previous model, the reward $b$ should be
analogous to the graph size $N$—a random walk lost in the bulk of a graph
observes nodes with a degree that scales with $N$ in (21)—whereas the reward
$1$ should mimic the observed degree in the dangling chain. Furthermore, the
inverse scaling with $b$ of the probability of moving from the right state to
the left one, should give rise to an absorbing dynamics in the right state for
$b\rightarrow\infty$. As we will see in the following, the topology of the
graph does not play a pivotal role in the appearance of the first-order DPT,
but it may play an important role in determining the exact fluctuations in the
crossover regime around the critical point.
Once again, following Section II we calculate the action $\lambda_{s,b}[\nu]$
in (13). Because the global balance is imposed on the dynamics, we only have
to deal with three degrees of freedom (variables) $\nu_{ij}$ that determine
the action. These are
$\displaystyle\nu_{11}$ fraction of jumps from $1$ to $1$, (47)
$\displaystyle\nu_{1b}$ for both directions: $1\rightarrow b$ and $b\rightarrow 1$, (48)
$\displaystyle\nu_{bb}$ fraction of jumps from $b$ to $b$. (49)
Notice that if we also imposed the normalization constraint, i.e.,
$\sum_{i,j\in\left\\{1,b\right\\}}\nu_{ij}=1$, we would be left with only two
degrees of freedom. However, analogously to the previous Subsection, we leave
this last constraint as an implicit parametrization with a Lagrange multiplier
$\epsilon$ entering the action.
The action can be explicitly written as
$\begin{split}\lambda_{s,b}[\nu]&=-\nu_{11}\log\nu_{11}-\nu_{bb}\log\nu_{bb}-2\nu_{1b}\log\nu_{1b}+(\nu_{1b}+\nu_{bb})\log(\nu_{1b}+\nu_{bb})+(\nu_{1b}+\nu_{11})\log(\nu_{1b}+\nu_{11})+\\\ &\quad+\frac{s}{b}(\nu_{11}+\nu_{1b})+s(\nu_{1b}+\nu_{bb})-\log 2\,\nu_{11}+\log\frac{(b-1)}{b}\,\nu_{bb}+\nu_{1b}(-\log 2-\log b)+\epsilon(2\nu_{1b}+\nu_{11}+\nu_{bb}-1)\,.\end{split}$ (50)
The minimum and minimizers of the action (50) can be found by solving the
saddle-point equations and imposing the normalization constraint. We get
$\displaystyle\nu_{11}=\frac{e^{\frac{s}{b}+\epsilon}(-b+(b-1)e^{s+\epsilon})}{-4b+2(b-1)e^{s+\epsilon}+be^{\frac{s}{b}+\epsilon}}$
(51)
$\displaystyle\nu_{bb}=\frac{(b-1)e^{s+\epsilon}(-2+e^{\frac{s}{b}+\epsilon})}{-4b+2(b-1)e^{s+\epsilon}+be^{\frac{s}{b}+\epsilon}}$
(52)
$\displaystyle\nu_{1b}=\frac{1}{\frac{b}{b-(b-1)e^{s+\epsilon}}-\frac{2}{-2+e^{\frac{s}{b}+\epsilon}}}\,,$
(53)
as still functions of the Lagrange multiplier $\epsilon$. The latter can be
determined, for instance, by using the equation for the minimum of
$\lambda_{s,b}[\nu]$ with respect to $\nu_{1b}$ and by replacing the values of
$\nu_{11}$ and $\nu_{bb}$ with those in (51) and (52). We find that the SCGF
$\lambda_{s,b}[\nu^{*}]$ is analytically given by
$\lambda_{s,b}=-\epsilon=\log\left[\frac{1}{4b}\left(be^{\frac{s}{b}}+2(b-1)e^{s}+\sqrt{4(b-1)^{2}e^{2s}+b^{2}e^{\frac{2s}{b}}-4(b-3)be^{\frac{bs+s}{b}}}\right)\right]\,.$
(54)
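A quick numerical cross-check of (54) (a sketch in our own notation, not the authors' code): for a finite Markov chain and an additive observable such as (46), the SCGF can equivalently be computed as the logarithm of the dominant eigenvalue of the tilted transition matrix; for the chain of Fig. 4 this reproduces (54) to machine precision.

```python
# Sketch (ours): cross-check the closed form (54) against the logarithm of the
# dominant eigenvalue of the tilted transition matrix of the chain in Fig. 4,
# with per-step rewards f(1) = 1/b and f(b) = 1 as in (46).
import numpy as np

def scgf_closed_form(s, b):
    root = np.sqrt(4 * (b - 1) ** 2 * np.exp(2 * s)
                   + b ** 2 * np.exp(2 * s / b)
                   - 4 * (b - 3) * b * np.exp((b * s + s) / b))
    return np.log((b * np.exp(s / b) + 2 * (b - 1) * np.exp(s) + root) / (4 * b))

def scgf_spectral(s, b):
    P = np.array([[0.5, 0.5],          # p_11, p_1b
                  [1 / b, 1 - 1 / b]]) # p_b1, p_bb
    f = np.array([1 / b, 1.0])         # rewards of states 1 and b
    T = P * np.exp(s * f)[None, :]     # tilted matrix
    return np.log(np.max(np.linalg.eigvals(T).real))

for b in (15, 50, 250):
    for s in (-1.5, 0.5):
        assert abs(scgf_closed_form(s, b) - scgf_spectral(s, b)) < 1e-10
        print(b, s, scgf_closed_form(s, b))  # approaches max(s, -log 2) as b grows
```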
By carefully taking the limit $b\rightarrow\infty$ of (54) we explicitly
obtain the SCGF in the infinite size limit of the reward, which reads
$\lambda_{s,\infty}=\begin{cases}-\log 2&s\leq-\log 2\\\ s&s>-\log
2\end{cases}$ (55)
and highlights the appearance of a non-differentiable point at $s_{c}=-\log 2$
(this value can always be interpreted as the mean entropy of the random walk
localized in $1$). The derivative of $\lambda_{s,\infty}$, as in (19),
explicitly describes the fluctuation $C_{n}=c$ happening with varying tilting
parameter $s$ and analogously to (37) we obtain
$\frac{d\lambda_{s,\infty}}{ds}=\begin{cases}0&s\leq-\log 2\\\ 1&s>-\log
2\,.\end{cases}$ (56)
This says that on the left of the critical point $s_{c}$ the Markov chain is
localized in the left state where it accumulates a cost that does not scale
with the reward $b$, whereas on the right of $s_{c}$ the Markov chain is
localized in the right state where it accumulates a cost $b$ at every step. We
remark that at finite $b$ the critical point is absent, replaced by a
crossover region where the Markov chain visits both states for a finite
fraction of time.
The rate function $I$ can also be easily obtained as explained in the previous
Subsection and reads
$I(c)=\begin{cases}\log 2-c\log 2&0\leq c\leq 1\\\
\infty&\text{otherwise}\,.\end{cases}$ (57)
We graphically show in Fig. 5 the SCGF, its derivative, and the rate function
for the finite reward case and in the infinite-reward $b$ limit. Noticeably,
the rate function obtained in (39) is exactly half the rate function obtained
here. This is a consequence of the mean entropy
$\lambda_{1,N}+\lambda_{2,N}$ (14), (15) that the random walk has in the
localized state, which in the bulk-dangling model is half that of the two-state
model presented here.
(a) SCGF $\lambda_{s,b}$
(b) Derivative of SCGF $\lambda^{\prime}_{s,b}$
(c) Rate function $I(c)$
Figure 5: Large deviation study for the two-state model. In all three figures,
different colors correspond to a different value $b$ of the reward: i) light
blue is $b=15$; ii) orange is $b=50$; iii) green is $b=250$; iv) black is
$b\to\infty$. All finite $b$ curves were obtained from (54), while analytical
expressions for the black curves are presented in (55) for figure (a), (56)
for figure (b) and (57) for figure (c).
The non-differentiability of the SCGF can be physically related to a first-
order DPT also in this case. Once again, this is interpreted as a coexistence
of paths that are either absorbed by the state $b$ ($C_{n}\sim 1$) or are
localized in the state $1$ ($C_{n}\sim 0$). We can further characterize this
DPT by writing the driven process (20) that leads to fluctuations for
$s^{-}\equiv s<s_{c}$ or for $s^{+}\equiv s>s_{c}$. This can be done by
properly taking the $b\rightarrow\infty$ limit of the minimizer
$\nu^{*}=(\nu^{*}_{11},\nu^{*}_{1b},\nu^{*}_{bb})$ and inserting the results
in (20). We get the following two transition matrices:
$\Pi_{s^{-}}=\left(\begin{array}[]{cc}1+O(b^{-1})&O(b^{-1})\\\
1-2e^{s}+O(b^{-1})&2e^{s}+O(b^{-1})\end{array}\right)$ (58)
$\Pi_{s^{+}}=\left(\begin{array}[]{cc}\frac{1}{2e^{s}}+O(b^{-1})&1-\frac{1}{2e^{s}}+O(b^{-1})\\\
O(b^{-1})&1+O(b^{-1})\end{array}\right)\,.$ (59)
For $s<s_{c}$ the Markov chain is biased towards localizing in the state $1$,
whereas for $s>s_{c}$, the state $b$ absorbs the Markov chain. This is very
similar to what we have seen in the bulk-dangling model, with the only
difference that now the role of the topology has been replaced by different
rewards on the two states of the chain.
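The limits (58) and (59) are easy to reproduce numerically if one assumes that the driven process (20) coincides with the usual generalized Doob transform built from the tilted matrix and its right Perron eigenvector. The sketch below is written in our notation; the Doob construction, though standard, is our assumption here rather than a transcription of (20).

```python
# Sketch (ours): driven process of the two-state chain, assuming the standard
# generalized Doob transform Pi_s(x, y) = T(x, y) r(y) / (e^{lambda} r(x)),
# where T is the tilted matrix and r its right Perron eigenvector.
import numpy as np

def driven_process(s, b):
    P = np.array([[0.5, 0.5], [1 / b, 1 - 1 / b]])
    f = np.array([1 / b, 1.0])
    T = P * np.exp(s * f)[None, :]
    vals, vecs = np.linalg.eig(T)
    i = np.argmax(vals.real)                    # Perron eigenvalue
    lam, r = vals[i].real, np.abs(vecs[:, i].real)
    return T * r[None, :] / (lam * r[:, None])  # rows sum to one

b = 10 ** 4
print(driven_process(-1.5, b))  # s < -log 2: localized phase, compare with (58)
print(driven_process(0.5, b))   # s > -log 2: absorbed by state b, compare with (59)
```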
Although this structural change in the model does not seem to affect the
appearance of a first-order DPT, we notice that fluctuations scale differently
around the critical point $s_{c}=-\log 2$. This is made evident by rescaling
the tilting parameter $s$ and the SCGF $\lambda_{s,b}$ as in the
previous Subsection; we obtain
$\lambda_{\tilde{s},b}\approx-\log
2+\frac{\tilde{\epsilon}_{\tilde{s}}}{2\sqrt{b}}\,,$ (60)
with
$\tilde{\epsilon}_{\tilde{s}}=\tilde{s}+\sqrt{4+\tilde{s}^{2}}\,.$ (61)
Eq. (60) describes fluctuations locally around $s_{c}$ for large (but finite)
reward $b$. Also in this case, we plot in Fig. 6 the function
$\tilde{\epsilon}_{\tilde{s}}$ along with the finite-$b$ curves.
Figure 6: Crossover regime of the SCGF of the two-state Markov chain model
around $s_{c}$ as a function of the scaling variable $\tilde{s}$ for different
values of $b$. As $b$ increases, the colored curves collapse into the limiting
curve predicted theoretically (61).
The scaling of fluctuations is markedly different from what we found in (44) for
the bulk-dangling model. We argue that the exact form of the scaling is not
only determined by the dynamics of the model, but also by the topology of the
graph considered. Indeed, the two-state Markov chain is only composed of two
nodes playing the role of bulk and dangling chain, whereas in the bulk-
dangling model, as previously mentioned, we count four key nodes (see Fig. 1)
and, among these, differently from the two-state Markov chain, the orange and
red ones play the role of a gate between the bulk and the yellow node of degree
one.
To further corroborate this argument, we also studied generalizations of the
two-state Markov chain analyzed so far. These are obtained by taking the
probability of escaping the rightmost state to be $b^{-\gamma}$, with
$\gamma\geq 1$, and by rescaling, or not, the reward $b$ by $b^{-\gamma}$. In
all cases considered (not shown here), the scaling of fluctuations around the
critical point of the DPT is different from that of the bulk-dangling
model. These results support the argument that changing the dynamics does not
make up for having different graph topologies.
To investigate the robustness of the aforementioned DPT, we also studied
other variants of the two-state model, where the rewards of the two nodes are
set to $1$ and $k$ respectively. In this version of the model $k$ does not
depend on $b$, and $1/b$ is only the transition probability to remain in state
$k$. Interestingly, the DPT appears also in this case when $b$ goes to
infinity, but both the critical tilting parameter $s_{c}$ and the behavior for
$s<s_{c}$ depend on the value of $k$. This is in line with previous works on
two-state models Whitelam2018 , although with a remarkable difference. In
these works, the author considers models where the transition matrix is
symmetric, so that both nodes become absorbing in the appropriate limit. As a
consequence, $s_{c}$ in those models is exactly $0$. In our work, instead, we
are naturally led to consider non-symmetric transition matrices, such that
only one of the two states becomes absorbing. To overcome this asymmetric
absorbing dynamics, an infinitesimal $s$ is not sufficient, hence $s_{c}\neq 0$.
## IV Conclusions
In this manuscript, we have shown the appearance of a first-order dynamical
phase transition in two models that catch the relevant physical aspects of the
dynamics of random walks on random graphs, for which the dynamical phase
transition has hitherto only been argued. In both models, the random walk
collects a cost—with general form given in (5)—which scales differently in
different regions of the graph. In the bulk-dangling model, very much
similarly to a random walk on a random graph, the cost scales proportionally
to the size of the graph in the bulk, whereas it gives only a constant
contribution in the dangling chain. In the two-state Markov chain instead, we
greatly simplified the topology of the graph and made the cost scale with a
reward rather than keeping it linked to the graph structure. As a consequence,
to keep the analogy with random walks on random graphs, we also suitably
rescaled the transition probabilities inversely with the reward. We analyzed
both models by applying a large deviation framework Carugno2022 that allowed
us to obtain analytical results.
Remarkably, regardless of the precise details of the model, a first-order
dynamical phase transition in the cost accumulated by the random walk always
appears (see Fig. 2 and 5). This is interpreted as a coexistence of paths that
visit regions of the graph where the cost scales proportionally with the
relevant physical parameter of the model (size $N$ or reward $b$) and paths
that visit regions that only contribute to the cost with constant increments.
We gave further evidence for this interpretation by also calculating the
relevant driven process, which explains how fluctuations arise in time (see
Eqs. (40) and (41), and (58) and (59)). Furthermore, by zooming around the
critical value for the transition, we exactly determined how fluctuations
scale either with the system size $N$ or the reward $b$. Since the scaling
turns out to be different in the two models investigated, we argue that
although the dynamical phase transition is robust to topological changes in
the model in the thermodynamic limit, the exact structure of the graph plays a
role—along with the dynamics—for finite systems.
These results support the idea that random walks on sparse random graphs also
undergo first-order phase transitions in the fluctuations of the mean-degree
visited DeBacco2016 ; Coghi2019 for infinite-size graphs. However, a full
proof has yet to be advanced. We believe that by implementing the large
deviation framework discussed in Carugno2022 and in this manuscript one
should be able to average the relevant action over the infinite realizations
of the random graph ensemble, obtaining the scaled cumulant generating
function in the thermodynamic-size limit. Furthermore, it would also be
interesting to study transient regimes in time—and not only the asymptotics
given by the large deviation theory—of the URW fluctuations. This could be
done in principle by following ideas presented in Polettini2015 ; Causer2022 ;
Carugno2022 .
Evidently, there is much scope for future work, both theoretical, as just
mentioned, and applied. Related to the latter, for instance, by appropriately
tuning the tilting parameter one could exploit the driven processes to
generate optimal explorers of networks, a topic that has recently gained much
attention Adam2019 ; Carletti2020 .
## References
* (1) B. D. Hughes, Random walks and random environments: Random walks. Oxford University Press, 1995.
* (2) J. D. Noh and H. Rieger, “Random walks on complex networks,” Physical Review Letters, vol. 92, no. 11, p. 118701, 2004.
* (3) A. Barrat, M. Barthélemy, and A. Vespignani, Dynamical processes on complex networks. Cambridge University Press, 2008.
* (4) M. E. J. Newman, Networks: An Introduction. Oxford University Press, 2010.
* (5) V. Latora, V. Nicosia, and G. Russo, Complex Networks: principles, methods and applications. Cambridge University Press, 2017.
* (6) N. Masuda, M. A. Porter, and R. Lambiotte, “Random walks and diffusion on networks,” Physics Reports, vol. 716-717, pp. 1–58, 2017.
* (7) R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani, “Epidemic processes in complex networks,” Reviews of Modern Physics, vol. 87, no. 3, p. 925, 2015.
* (8) F. Jülicher, A. Ajdari, and J. Prost, “Modeling molecular motors,” Reviews of Modern Physics, vol. 69, no. 4, pp. 1269–1281, 1997.
* (9) C. Castellano, S. Fortunato, and V. Loreto, “Statistical physics of social dynamics,” Reviews of Modern Physics, vol. 81, no. 2, pp. 591–646, 2009.
* (10) C. Liu and Z. K. Zhang, “Information spreading on dynamic social networks,” Communications in Nonlinear Science and Numerical Simulation, vol. 19, no. 4, pp. 896–904, 2014.
* (11) S. Albeverio, V. Jentsch, and H. Kantz, eds., Extreme Events in Nature and Society. The Frontiers Collection, Berlin, Heidelberg: Springer Berlin Heidelberg, 2006.
* (12) V. Kishore, M. S. Santhanam, and R. E. Amritkar, “Extreme events on complex networks,” Physical Review Letters, vol. 106, no. 13, p. 188701, 2011.
* (13) F. den Hollander, Large Deviations. American Mathematical Society, 2000.
* (14) H. Touchette, “The large deviation approach to statistical mechanics,” Physics Reports, vol. 478, no. 1-3, pp. 1–69, 2009.
* (15) A. Dembo and O. Zeitouni, Large Deviations Techniques and Applications, vol. 38 of Stochastic Modelling and Applied Probability. Springer Berlin Heidelberg, 2010.
* (16) R. Chetrite and H. Touchette, “Nonequilibrium Markov processes conditioned on large deviations,” Annales Henri Poincare, vol. 16, no. 9, pp. 2005–2057, 2015.
* (17) R. L. Jack, “Ergodicity and large deviations in physical systems with stochastic dynamics,” European Physical Journal B, vol. 93, no. 4, p. 74, 2020.
* (18) G. Carugno, P. Vivo, and F. Coghi, “Graph-combinatorial approach for large deviations of Markov chains,” arXiv:2201.00582, 2022.
* (19) C. De Bacco, A. Guggiola, R. Kühn, and P. Paga, “Rare events statistics of random walks on networks: localisation and other dynamical phase transitions,” Journal of Physics A: Mathematical and Theoretical, vol. 49, no. 18, p. 184003, 2016.
* (20) F. Coghi, J. Morand, and H. Touchette, “Large deviations of random walks on random graphs,” Physical Review E, vol. 99, no. 2, p. 022137, 2019.
* (21) R. Gutierrez and C. Perez-Espigares, “Generalized optimal paths and weight distributions revealed through the large deviations of random walks on networks,” Physical Review E, vol. 103, no. 2, p. 022319, 2021.
* (22) S. Whitelam, “Large deviations in the presence of cooperativity and slow dynamics,” Physical Review E, vol. 97, p. 62109, 2018.
* (23) S. Whitelam and D. Jacobson, “Varied phenomenology of models displaying dynamical large-deviation singularities,” Physical Review E, vol. 103, no. 3, p. 032152, 2021.
* (24) K. Sekimoto, Stochastic Energetics, vol. 799 of Lecture Notes in Physics. Springer Berlin Heidelberg, 2010.
* (25) M. I. Dykman, E. Mori, J. Ross, and P. M. Hunt, “Large fluctuations and optimal paths in chemical kinetics,” The Journal of Chemical Physics, vol. 100, no. 8, p. 5735, 1998.
* (26) F. Coghi, Large deviation methods for the study of nonequilibrium systems: variational and spectral approaches. PhD thesis, Queen Mary, University of London, 2021.
* (27) J. Hoppenau, D. Nickelsen, and A. Engel, “Level 2 and level 2.5 large deviation functionals for systems with and without detailed balance,” New Journal of Physics, vol. 18, no. 8, p. 083010, 2016.
* (28) P. Whittle, “Some distribution and moment formulae for the Markov chain,” Journal of the Royal Statistical Society: Series B (Methodological), vol. 17, no. 2, pp. 235–242, 1955.
* (29) R. Dawson and I. J. Good, “Exact Markov probabilities from oriented linear graphs,” The Annals of Mathematical Statistics, vol. 28, no. 4, pp. 946–956, 1957.
* (30) L. A. Goodman, “Exact probabilities and asymptotic relationships for some statistics from $m$-th order Markov chains,” The Annals of Mathematical Statistics, vol. 29, no. 2, pp. 476–490, 1958.
* (31) P. Billingsley, “Statistical methods in Markov chains,” The Annals of Mathematical Statistics, vol. 32, no. 1, pp. 12–40, 1961.
* (32) I. Csiszár, T. M. Cover, and B. S. Choi, “Conditional Limit theorems under Markov conditioning,” IEEE Transactions on Information Theory, vol. 33, no. 6, pp. 788–801, 1987.
* (33) M. Polettini, “BEST statistics of Markovian fluxes: a tale of Eulerian tours and Fermionic ghosts,” Journal of Physics A: Mathematical and Theoretical, vol. 48, no. 36, p. 365005, 2015.
* (34) R. Chetrite and H. Touchette, “Variational and optimal control representations of conditioned and driven processes,” Journal of Statistical Mechanics: Theory and Experiment, vol. 2015, no. 12, p. P12001, 2015.
* (35) H. Touchette, “Introduction to dynamical large deviations of Markov processes,” Physica A: Statistical Mechanics and its Applications, vol. 504, pp. 5–19, 2018.
* (36) L. Causer, M. C. Bañuls, and J. P. Garrahan, “Finite time large deviations via matrix product states,” Physical Review Letters, vol. 128, no. 9, p. 090605, 2022.
* (37) I. Adam, D. Fanelli, T. Carletti, and G. Innocenti, “Reactive explorers to unravel network topology,” The European Physical Journal B, vol. 92, no. 5, pp. 1–8, 2019.
* (38) T. Carletti, M. Asllani, D. Fanelli, and V. Latora, “Nonlinear walkers and efficient exploration of congested networks,” Physical Review Research, vol. 2, no. 3, p. 033012, 2020.
## Appendix A Details on ‘Bulk-dangling model’
The exact form of the function $h$ appearing in (26) reads
$\begin{split}h(s,N,\nu_{12}&,\nu_{23},\nu_{34},\nu_{44})=\nu_{12}\left(\frac{3s}{\bar{k}}-\log
2\right)+\nu_{23}\left(\frac{Ns}{\bar{k}}-\log(2(N-2))\right)+\nu_{34}(N-3)\left(\frac{(2N-5)s}{\bar{k}}-\log((N-3)(N-2))\right)+\\\
&+\nu_{44}(N-3)(N-4)\left(\frac{(N-3)s}{\bar{k}}-\log(N-3)\right)+\nu_{12}\log\left(\frac{\nu_{12}+\nu_{23}}{\nu_{12}}\right)+\nu_{23}\left(\log\left(\frac{\nu_{12}+\nu_{23}}{\nu_{23}}\right)+\log\left(\frac{\nu_{23}+(N-3)\nu_{34}}{\nu_{23}}\right)\right)+\\\
&+\nu_{34}(N-3)\left(\log\left(\frac{\nu_{23}+(N-3)\nu_{34}}{\nu_{34}}\right)+\log\left(\frac{\nu_{34}+(N-4)\nu_{44}}{\nu_{34}}\right)\right)+\nu_{44}(N-3)(N-4)\log\left(\frac{\nu_{34}+(N-4)\nu_{44}}{\nu_{44}}\right)+\\\
&+\epsilon\left(1-2\nu_{12}-2\nu_{23}-2(N-3)\nu_{34}-(N-3)(N-4)\nu_{44}\right)\,.\end{split}$
(62)
The minimizers $\nu^{*}=(\nu^{*}_{12},\nu^{*}_{23},\nu^{*}_{34},\nu^{*}_{44})$
of the action (26) are explicitly given by
$\displaystyle\begin{split}\nu_{12}(s,N,\epsilon)&=\frac{a(s,N,\epsilon)}{(a(s,N,\epsilon)-1)}(N-3)\frac{b(s,N,\epsilon)}{(b(s,N,\epsilon)-1)}\times\\\
&\times\frac{((-1+a(s,N,\epsilon))(-1+b(s,N,\epsilon))(-1+c(s,N,\epsilon)(-4+N)))}{((-2+a(s,N,\epsilon)(-1+b(s,N,\epsilon))(-2+c(s,N,\epsilon)(-4+N))+(1+b(s,N,\epsilon))c(s,N,\epsilon)(-4+N))(-3+N))}\end{split}$
(63)
$\displaystyle\begin{split}\nu_{23}(s,N,\epsilon)&=(N-3)\frac{b(s,N,\epsilon)}{(b(s,N,\epsilon)-1)}\times\\\
&\times\frac{((-1+a(s,N,\epsilon))(-1+b(s,N,\epsilon))(-1+c(s,N,\epsilon)(-4+N)))}{((-2+a(s,N,\epsilon)(-1+b(s,N,\epsilon))(-2+c(s,N,\epsilon)(-4+N))+(1+b(s,N,\epsilon))c(s,N,\epsilon)(-4+N))(-3+N))}\end{split}$
(64)
$\displaystyle\begin{split}\nu_{34}(s,N,\epsilon)&=\frac{((-1+a(s,N,\epsilon))(-1+b(s,N,\epsilon))(-1+c(s,N,\epsilon)(-4+N)))}{((-2+a(s,N,\epsilon)(-1+b(s,N,\epsilon))(-2+c(s,N,\epsilon)(-4+N))+(1+b(s,N,\epsilon))c(s,N,\epsilon)(-4+N))(-3+N))}\end{split}$
(65)
$\displaystyle\begin{split}\nu_{44}(s,N,\epsilon)&=\frac{c(s,N,\epsilon)}{((N-4)c(s,N,\epsilon)-1)}\times\\\
&\times\frac{((-1+a(s,N,\epsilon))(-1+b(s,N,\epsilon))(-1+c(s,N,\epsilon)(-4+N)))}{((-2+a(s,N,\epsilon)(-1+b(s,N,\epsilon))(-2+c(s,N,\epsilon)(-4+N))+(1+b(s,N,\epsilon))c(s,N,\epsilon)(-4+N))(-3+N))}\end{split}$
(66)
The inequality constraints that select the physical solution of (36) are
$\displaystyle\epsilon>\frac{3s}{2\bar{k}}-\frac{\log 2}{2}$ (67)
$\displaystyle\epsilon>-\frac{1}{2}\log\left(\frac{2(N-2)}{(N-2)e^{3\frac{s}{\bar{k}}}+e^{N\frac{s}{\bar{k}}}}\right)$ (68)
$\displaystyle 0>\tau^{2}(N-3)+\tau(N-2)(N-4)e^{-(N-2)\frac{s}{\bar{k}}}-(N-2)(N-3)e^{-(2N-5)\frac{s}{\bar{k}}}$ (69)
$\displaystyle\epsilon>(N-3)\frac{s}{\bar{k}}-\log(N-3)+\log(N-4).$ (70)
# In Nonparametric and High-Dimensional Models, Bayesian Ignorability is an
Informative Prior
Antonio R. Linero Department of Statistics and Data Sciences, University of
Texas at Austin, email<EMAIL_ADDRESS>
###### Abstract
In problems with large amounts of missing data one must model two distinct
data generating processes: the outcome process which generates the response
and the missing data mechanism which determines the data we observe. Under the
_ignorability_ condition of Rubin, (1976), however, likelihood-based inference
for the outcome process does not depend on the missing data mechanism so that
only the former needs to be estimated; partially because of this
simplification, ignorability is often used as a baseline assumption. We study
the implications of Bayesian ignorability in the presence of high-dimensional
nuisance parameters and argue that ignorability is typically incompatible with
sensible prior beliefs about the amount of selection bias. We show that, for
many problems, ignorability directly implies that the prior on the selection
bias is tightly concentrated around zero. This is demonstrated on several
models of practical interest, and the effect of ignorability on the posterior
distribution is characterized for high-dimensional linear models with a ridge
regression prior. We then show both how to build high-dimensional models which
encode sensible beliefs about the selection bias and also show that under
certain narrow circumstances ignorability is less problematic.
## 1 Introduction
Dealing with missing data is a fundamental problem in data analysis; for
example, missingness complicates inference in clinical trials (National
Research Council,, 2010) and is inherent in the potential outcomes framework
for causal inference (Rubin,, 2005). A common starting point for addressing
missingness is to assume that the mechanism which generated the missingness is
_ignorable_ (Rubin,, 1976). Ignorability allows likelihood-based inference to
proceed without modeling the missing data mechanism, which can greatly
simplify an analysis.
In this paper we consider the Bayesian approach to account for missingness.
For generality, we consider a potential outcome $Y_{i}(a)$ for some exposure
level $a\in\mathscr{A}$ such that we observe both the exposure level $A_{i}$
and its associated potential outcome $Y_{i}(A_{i})$ ($Y_{i}(a)$ is regarded as
missing for all $a\neq A_{i}$). Let $X_{i}$ be a vector of confounders which
are predictive of both $A_{i}$ and $Y_{i}(a)$. By defining $Y_{i}(1)$ as the
outcome of interest, this framework subsumes the standard missing data
problem, where $A_{i}$ is now a missing data indicator such that we observe
the outcome when $A_{i}=1$. We say that the _missing data mechanism_
$f_{\phi}(A_{i}\mid X_{i})$ is Bayesian-ignorable, or simply ignorable, if the
following conditions hold.
1. IG.1
The potential outcomes $\\{Y_{i}(a):a\in\mathscr{A}\\}$ are conditionally
independent of $A_{i}$ given $X_{i}$.
2. IG.2
The parameters $\beta$ and $\phi$ are a-priori independent, where $\beta$
parameterizes the model for the potential outcomes and $\phi$ parameterizes
the missing data mechanism. That is, the prior factors as
$\pi(\beta,\phi)=\pi_{\beta}(\beta)\,\pi_{\phi}(\phi)$.
Condition IG.1 constrains the data generating mechanism and is a (type of)
_missing at random_ (MAR) assumption (Seaman et al.,, 2013); in the causal
inference literature, assumptions like IG.1 are sometimes themselves referred
to as strong ignorability assumptions (Rosenbaum and Rubin,, 1983; Imai et
al.,, 2010), and it is sometimes conflated with ignorability in the missing data
sense of Rubin, (1976) as well (see Seaman et al.,, 2013, for a thorough
discussion of MAR and ignorability). Condition IG.2, which constrains the
prior, is also key to ignorability: it guarantees that the posterior
distribution of $\beta$ given the observed data is proportional to
$\pi_{\beta}(\beta)\ \prod_{i}f_{\beta}\\{Y_{i}(A_{i})\mid X_{i}\\}$, which
does not depend on the missing data mechanism. Without IG.2 we are still
obligated to model the missing data mechanism even when the missing data is
MAR, as $A_{i}$ provides information about $\beta$ through $\phi$.
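Concretely, writing the observed-data posterior under IG.1 and IG.2 makes this explicit:
$\pi(\beta,\phi\mid\text{data})\propto\pi_{\beta}(\beta)\,\pi_{\phi}(\phi)\,\prod_{i}f_{\beta}\\{Y_{i}(A_{i})\mid X_{i}\\}\,f_{\phi}(A_{i}\mid X_{i}),$
which factorizes over $(\beta,\phi)$, so that marginalizing over $\phi$ leaves the stated posterior for $\beta$. Dropping IG.1 couples the two likelihood factors, while dropping IG.2 couples the two prior factors; in either case the missing data mechanism can no longer be ignored.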
It has been argued before that, from a Frequentist perspective, IG.2 is highly
problematic in high-dimensional problems (Robins and Ritov,, 1997). We
complement this Frequentist view and study the implications of IG.2 from a
Bayesian perspective. In particular we will argue that, while IG.2 is
seemingly innocuous, it actually is highly informative about the selection
bias in high-dimensional problems to the degree that the data has no
reasonable chance at overcoming the prior. We refer to this as _prior
dogmatism_ about the selection bias. We make the following three points.
1. 1.
Priors which impose IG.2 are dogmatic about the amount of selection bias. This
is particularly true in models which require the use of informative priors,
such as high-dimensional or Bayesian nonparametric models. We conclude that
IG.2 does not reflect substantive prior knowledge in most cases; in the case
of a causal ridge regression model, we are able to quantify these problems
explicitly using random matrix theory (see Dobriban and Wager,, 2018 and
Dicker,, 2016 for related applications of random matrix theory).
2. 2.
By understanding this induced prior on the selection bias, we are able to
identify several highly effective ways of correcting this problem and unify
several approaches proposed in the Bayesian causal inference literature which
were not motivated by Bayesian considerations. Our remedies take the form of
propensity score adjustments, which have typically been recommended in applied
Bayesian analysis on the grounds of pragmatism and robustness (see, e.g.,
Rubin,, 1985) rather than subjective Bayesian principles.
3. 3.
We study some relatively narrow settings in which prior dogmatism does not
occur, even in high dimensional problems. For example, strong dependence
structure in $X_{i}$ can act as a shield against dogmatism; in the case of
causal ridge regression, we again use random matrix theory to quantify this
behavior. Despite this, we find little payoff for failing to correct for
dogmatism in these settings.
###### Remark 1.
While we will consider the Frequentist properties of the posterior
distribution, our goal at the outset is not to construct priors specifically
to attain ideal Frequentist properties. It is often quite easy to construct
priors which are doubly robust and attain some semiparametric efficiency bound
if that is our goal from the outset, and various complete class theorems
(Robert,, 2007, Chapter 8) suggest that we can usually construct _some_ Bayes
estimator which is at-least-as-good as any given Frequentist estimator.
Rather, we (i) show that priors of the form IG.2 are inherently dangerous in
purely-Bayesian terms, (ii) explain in which situations the problem is most
acute, and (iii) use dogmatism to show where corrections are needed.
### 1.1 Notation
We let $Y_{i}(\cdot)\in\mathbb{R}$ denote an outcome, $X_{i}\in\mathbb{R}^{P}$
a covariate/confounder, and $A_{i}\in\mathbb{R}$ a treatment/missing data
indicator for $i=1,\ldots,N$. When considering causal inference problems, we
define $Y_{i}=Y_{i}(A_{i})$ to be the observed outcome; when $A_{i}$ is a
missingness indicator, we instead define $Y_{i}=Y_{i}(1)$ so that $Y_{i}$ is
missing when $A_{i}\neq 1$. We set $\bm{Y}=(Y_{1},\ldots,Y_{N})^{\top}$,
$\bm{A}=(A_{1},\ldots,A_{N})$, and let $\bm{X}$ denote an $N\times P$ matrix
obtained by stacking the row vectors $X_{i}^{\top}$. Let $\beta$ parameterize
the distribution of $[Y_{i}(\cdot)\mid X_{i}]$, let $\phi$ parameterize the
distribution of $[A_{i}\mid X_{i}]$, and let $\theta=(\beta,\phi)$. We invoke
IG.1 throughout. We let $\mathbb{E}_{\theta}(\cdot)$ denote the expectation
operator conditional on $\theta$. If the subscript $\theta$ is omitted then
$\mathbb{E}(\cdot)$ is the expectation operator with respect to a prior
distribution on $\theta$, e.g.,
$\mathbb{E}(Y_{i})=\int\mathbb{E}_{\theta}(Y_{i})\,\pi(\theta)\ d\theta$. We
use the Big-O notation $W=O_{p}(V)$ to mean that $|W|/|V|$ is bounded in
probability as $P\to\infty$ (with the dependence of $W$ and $V$ on $P$
suppressed).
### 1.2 Three Illustrative Problems
We consider three problems to illustrate the existence of dogmatism and how to
correct for it. The first two concern causal inference with a continuous
exposure, while the third is a missing data problem. We assume
$X_{i}\sim\operatorname{Normal}(0,\Sigma)$ for some
$\Sigma\in\mathbb{R}^{P\times P}$ to simplify our analysis. All proofs are
deferred to the Supplementary Material.
##### Ridge regression in causal inference
Let $Y_{i}(a)$ denote the outcome observed when individual $i$ receives the
level $a$ of some continuous exposure and let $A_{i}$ denote the value of the
exposure which is actually received. We observe $(A_{i},Y_{i})$ where
$Y_{i}=Y_{i}(A_{i})$. We posit the linear models
$Y_{i}(a)=X_{i}^{\top}\beta+\gamma\,a+\epsilon_{i}(a)$ and
$A_{i}=X_{i}^{\top}\phi+\nu_{i}$ with
$\epsilon_{i}(a)\sim\operatorname{Normal}(0,\sigma^{2}_{y})$ and
$\nu_{i}\sim\operatorname{Normal}(0,\sigma^{2}_{a})$. The Bayesian ridge
regression prior, which satisfies IG.2, takes
$\beta\sim\operatorname{Normal}(0,\tau^{2}_{\beta}\ \mathrm{I})$ and
$\phi\sim\operatorname{Normal}(0,\tau^{2}_{\phi}\ \mathrm{I})$. The parameter
of interest is the mean response at a given exposure
$\mathbb{E}_{\theta}\\{Y_{i}(a)\\}=\gamma\,a$. We assume that $X_{i}$ is high-
dimensional in the sense that $P$ is potentially larger than $N$.
##### Sparsity priors in causal inference
This is the same problem as the ridge regression problem, except that $\beta$
and $\phi$ are sparse. We consider independent spike-and-slab priors for the
coefficients, i.e.,
$\beta_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}(1-p_{\beta})\,\delta_{0}+p_{\beta}\,\operatorname{Normal}(0,\tau^{2}_{\beta})$
and
$\phi_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}(1-p_{\phi})\,\delta_{0}+p_{\phi}\,\operatorname{Normal}(0,\tau^{2}_{\phi})$
where $\delta_{0}$ denotes a point-mass distribution at $0$.
##### Semiparametric regression with missing data
An outcome $Y_{i}=Y_{i}(1)$ is observed if $A_{i}=1$ and missing if $A_{i}=0$,
and our goal is to estimate $\mathbb{E}_{\theta}(Y_{i})$. We consider the
model $Y_{i}=\beta(X_{i})+\epsilon_{i}$ with
$\epsilon_{i}\sim\operatorname{Normal}(0,\sigma^{2}_{y})$ and
$A_{i}\sim\operatorname{Bernoulli}\\{\phi(X_{i})\\}$. We assume that
$\beta(\cdot)$ has a _Gaussian process_ prior (Rasmussen and Williams,, 2006)
with covariance function $\kappa(\cdot,\cdot)$, written
$\operatorname{GP}(0,\kappa)$.
## 2 The Induced Prior on the Selection Bias
The fundamental difficulty with missingness is _selection bias_. When
estimating $\mathbb{E}_{\theta}\\{Y_{i}(a)\\}$ this amounts to the fact that
$\Delta(a)=\mathbb{E}_{\theta}\\{Y_{i}(a)\mid
A_{i}=a\\}-\mathbb{E}_{\theta}\\{Y_{i}(a)\\}\neq 0$. That the selection bias
parameter $\Delta(a)$ is non-zero is the only feature of the problem which
makes estimation of $\mathbb{E}_{\theta}\\{Y_{i}(a)\\}$ non-trivial, as
otherwise we could ignore the covariates $X_{i}$ and directly estimate
$\mathbb{E}_{\theta}\\{Y_{i}(a)\\}$ by estimating
$\mathbb{E}_{\theta}\\{Y_{i}(a)\mid A_{i}=a\\}$ nonparametrically. The
following proposition gives an expression for $\Delta$ in each of our
problems.
###### Proposition 1.
In the ridge and spike-and-slab regression problems, the selection bias is
given by
$\Delta(a)=a\,\dfrac{\phi^{\top}\Sigma\beta}{\sigma^{2}_{a}+\phi^{\top}\Sigma\phi}=a\,\dfrac{\sum_{j}\lambda_{j}\,W_{j}\,Z_{j}}{\sigma^{2}_{a}+\sum_{j}\lambda_{j}\,Z_{j}^{2}}$
where $W=\Gamma^{\top}\beta$, $Z=\Gamma^{\top}\phi$, and
$\Sigma=\Gamma\Lambda\Gamma^{\top}$ is the spectral decomposition of $\Sigma$
with $\Lambda=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{P})$. In the
semiparametric regression problem with missing data, we instead have
$\Delta\equiv\Delta(1)={\operatorname{Cov}_{\theta}\\{\beta(X_{i}),\phi(X_{i})\\}}/{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}.$
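For the linear models, the first expression is just the conditional-mean formula for jointly Gaussian variables (a one-line sketch; the full arguments are in the Supplementary Material). Since $\epsilon_{i}(a)$ is independent of $(X_{i},\nu_{i})$ given $\theta$ and $\mathbb{E}_{\theta}\\{Y_{i}(a)\\}=\gamma a$,
$\Delta(a)=\mathbb{E}_{\theta}\\{X_{i}^{\top}\beta\mid X_{i}^{\top}\phi+\nu_{i}=a\\}=a\,\frac{\operatorname{Cov}_{\theta}(X_{i}^{\top}\beta,A_{i})}{\operatorname{Var}_{\theta}(A_{i})}=a\,\frac{\phi^{\top}\Sigma\beta}{\sigma^{2}_{a}+\phi^{\top}\Sigma\phi},$
and the second expression follows by rotating to the eigenbasis of $\Sigma$.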
Given the importance of $\Delta$ and the working assumption that selection bias
is non-negligible, one would hope that the prior distribution of $\Delta$ is
relatively diffuse. Using Proposition 1 we can gain insight into how the prior
on the selection bias changes as the dimension $P$ increases. For example, for
the ridge regression problem we have the following result.
###### Proposition 2.
Assume the setup of Proposition 1 for the ridge regression problem and suppose
$\beta\sim\operatorname{Normal}(0,\tau^{2}_{\beta}\,\mathrm{I})$ and
$\phi\sim\operatorname{Normal}(0,\tau_{\phi}^{2}\,\mathrm{I})$ independently.
Assume $\frac{1}{P}\sum_{j=1}^{P}\lambda_{j}^{k}$ converges to a positive
constant as $P\to\infty$ for $k=1,2,2+\epsilon$ for some $\epsilon$, and let
$\widetilde{\lambda}$ and $\bar{\lambda}^{2}$ be the limits with $k=1,2$. Then
$\Delta(a)\stackrel{{\scriptstyle{}_{\bullet}}}{{\sim}}\operatorname{Normal}(0,c/P)$
where
$c=a^{2}\,(\tau^{2}_{\beta}/\tau^{2}_{\phi})\,(\bar{\lambda}^{2}/\widetilde{\lambda}^{2})$.
We will return to the conditions on $\Sigma$ (which are moment conditions on
the spectral distribution of $\Sigma$) later and focus on the conclusion
$\Delta(a)\stackrel{{\scriptstyle{}_{\bullet}}}{{\sim}}\operatorname{Normal}(0,c/P)$
for some constant $c$. If selection bias is a-priori of concern for us then it
seems unwise to posit a $\operatorname{Normal}(0,c/P)$ prior for it when $P$
is large. This behavior becomes even more suspect when one considers that the
definition of $\Delta(a)$ is completely free of the $X_{i}$’s, and that
logically the act of measuring additional covariates should not change our
beliefs about $\Delta(a)$. In Section 2.1 we follow up on the inferential
consequences of this.
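A small simulation (ours) makes the concentration of Proposition 2 tangible for $\Sigma=\mathrm{I}$: the prior standard deviation of $\Delta(a)$ tracks $\sqrt{c/P}$ already at moderate $P$.

```python
# Small illustration (ours): under independent ridge priors with Sigma = I, the
# prior draws of Delta(a) concentrate as Normal(0, c/P), c = a^2 tau_b^2 / tau_p^2.
import numpy as np

rng = np.random.default_rng(0)
a, sigma_a, tau_beta, tau_phi, draws = 1.0, 1.0, 1.0, 1.0, 4000

for P in (10, 100, 1000):
    beta = rng.normal(0.0, tau_beta, size=(draws, P))
    phi = rng.normal(0.0, tau_phi, size=(draws, P))
    delta = a * np.sum(phi * beta, axis=1) / (sigma_a ** 2 + np.sum(phi ** 2, axis=1))
    print(P, delta.std(), np.sqrt(a ** 2 * tau_beta ** 2 / (tau_phi ** 2 * P)))
```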
At a high level, the source of the problem in our three illustrative examples
is the following well-known phenomenon which we refer to as the _orthogonality
principle_.
###### Principle 1 (The Orthogonality Principle).
Let $\widetilde{\beta}$ and $\widetilde{\phi}$ be random unit vectors with
mean $0$ taking values in some high/infinite dimensional Hilbert space
$\mathcal{H}$ with inner product $\langle\cdot,\cdot\rangle$. Then, if
$\widetilde{\phi}$ and $\widetilde{\beta}$ are independent and there is no
other special structure in the problem, with high probability we have
$\langle\widetilde{\beta},\widetilde{\phi}\rangle\approx 0$.
For a concrete example, by the law of large numbers and the central limit
theorem, if
$\beta_{j},\phi_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,1)$
then $\beta^{\top}\phi/\sqrt{P}\to\operatorname{Normal}(0,1)$ but
$\|\beta\|\,\|\phi\|/P\to 1$ so that
$\langle\beta/\|\beta\|,\phi/\|\phi\|\rangle=O_{p}(P^{-1/2})$ with respect to
the Euclidean inner product. The statement of the orthogonality principle is
intentionally vague as to what it means for the dimension to be “high,” what
$\approx$ means, and what constitutes “special structure.” Nevertheless, it
provides immediate intuition for what to expect: unless one has reason to
believe otherwise, expect high-dimensional unit vectors to be nearly
orthogonal.
The orthogonality principle becomes important when $P$ is large (or in
nonparametric problems) because $\Delta$ is quantifiable in terms of
$\langle\beta,\phi\rangle$ for some suitable inner product (c.f. Proposition
1). If IG.2 holds then the orthogonality principle immediately suggests
$\langle\beta,\phi\rangle\approx 0$ with high probability, implying that our
prior is dogmatic about the selection bias.
### 2.1 Asymptotics for High-Dimensional Ridge Regression
While the dogmatism implied by Proposition 2 is troubling, one might hope that
the informative prior on $\Delta$ is a theoretical curiosity which is
nevertheless swamped by the data. By analogy, a strict prior analysis of the
flat prior $\mu\sim\operatorname{Normal}(0,10^{100})$ in the normal model
$Y_{i}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(\mu,1)$
would similarly suggest that we ought to be concerned about the implication of
the prior on the magnitude of $\mu$; instead the data quickly swamps the
diffuse prior to produce a sensible posterior
$\mu\approx\operatorname{Normal}(\bar{Y},1/N)$. That is, in terms of
consequences for inference, what matters is the impact of IG.2 on the
posterior rather than the prior.
We now show that this hopeful scenario is not borne out, and that the prior
concentration on $\Delta$ leads to heavily biased inferences if $P$ grows
sufficiently quickly with $N$. We summarize our main results as follows:
* •
In the regime $P/N\to r$ for some $r\in(0,\infty)$ (i.e., $P$ grows at the
same rate as $N$), the Bayes estimator which takes a flat prior on $\gamma$
and a Gaussian prior
$\beta\sim\operatorname{Normal}(0,\tau^{2}\,P^{-1}\,\mathrm{I})$ is heavily
biased. Specifically, when selection bias is present through the auxiliary
covariate $\widehat{A}_{i}=X_{i}^{\top}\phi$, the Bayes estimate will have
bias of order $\Delta(1)$.
* •
In some sense, the setting $\Sigma=\mathrm{I}$ is inherently difficult, and
the problem is generally easier when the components of $X_{i}$ are highly
correlated. We return to this point in Section 5.
We make two sets of assumptions. The first (high-dimensional asymptotics, or
HDA) is used to describe the distribution of the $X_{i}$’s as $N\to\infty$.
The second (random effects model, or REM) describes a particular random
effects model for the regression coefficients. This framework modifies the
framework of Dobriban and Wager, (2018) so that it is suitable for our aims.
1. HDA.1
The covariates are multivariate normal with
$X_{i}\sim\operatorname{Normal}(0,\Sigma)$.
2. HDA.2
As $N\to\infty$ we have $P/N\to r$ for some $r\in(0,\infty)$.
3. HDA.3
The spectral distribution $\sum_{p=1}^{P}\delta_{\lambda_{p}}/P$ associated to
$\Sigma$ converges to some limiting distribution $H$ on $[0,\infty)$, where
$\lambda_{1},\ldots,\lambda_{P}$ are the eigenvalues of $\Sigma$ and
$\delta_{\lambda}$ denotes a point-mass distribution at $\lambda$.
HDA is a standard assumption for understanding the case where $P$ grows like
$N$, though HDA.1 may be replaced with a moment condition on $X_{i}$. HDA.3
allows us to use results from random matrix theory to compute
$\lim_{P\to\infty}\operatorname{tr}\\{(\bm{X}^{\top}\bm{X}+N\lambda\,\mathrm{I})^{-k}\\}$
for $k\in\mathbb{N}$. Under HDA, the empirical distribution of the eigenvalues
of $\underline{S}=\bm{X}\bm{X}^{\top}/N$, namely
$\widehat{F}(dx)=\sum_{i=1}^{N}\delta_{\widehat{\lambda}_{i}}/N$, converges to
a distribution $F(dx)$ called the _empirical spectral distribution_.
Next, we describe a random effects model (REM) for $\beta$ and $\phi$ we will
base our analysis on. Similar models have been used to study both the
prediction risk and minimax-optimality of ridge regression (Dobriban and
Wager,, 2018; Dicker,, 2016). REM is a fruitful assumption for us as it allows
exact formulas for the bias to be derived which are free of the particular
values of $\beta$ and $\phi$.
1. REM.1
The coefficient vector $\phi$ is randomly sampled as
$\phi\sim\operatorname{Normal}(0,\tau^{2}\,P^{-1}\,\mathrm{I})$.
2. REM.2
The coefficient vector $\beta$ is randomly sampled as
$\beta\sim\operatorname{Normal}(\omega_{0}\,\phi,\tau^{2}\,P^{-1}\,\mathrm{I})$.
3. REM.3
Given $\beta$ and $\phi$,
$Y_{i}\sim\operatorname{Normal}(X_{i}^{\top}\beta+A_{i}\,\gamma_{0},1)$ and
$A_{i}\sim\operatorname{Normal}(X_{i}^{\top}\phi,1)$.
To motivate REM.2, note that it is equivalent to setting
$Y_{i}\sim\operatorname{Normal}(X_{i}^{\top}b+\omega_{0}\,\widehat{A}_{i}+\gamma_{0}\,A_{i},1)$
where $\widehat{A}_{i}=X_{i}^{\top}\phi=\mathbb{E}(A_{i}\mid X_{i},\phi)$ and
$b\sim\operatorname{Normal}(0,\tau^{2}\,P^{-1}\,\mathrm{I})$. REM.2 allows for
non-negligible selection bias to enter the model, and priors based on this
parameterization have been used to account for selection bias by other
researchers (Zigler et al.,, 2013; Hahn et al.,, 2018). The parameter
$\omega_{0}$ is intimately connected to the selection bias.
###### Proposition 3.
Suppose that HDA and REM hold and that $\Sigma$ satisfies the conditions of
Proposition 2. Then
$\Delta(1)\to\omega_{0}\frac{\tau^{2}\,\widetilde{\lambda}}{1+\tau^{2}\,\widetilde{\lambda}}$
in probability as $P\to\infty$.
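A short heuristic for this limit (the formal argument is in the Supplementary Material): under REM, $\beta=\omega_{0}\phi+b$ with $b$ independent of $\phi$ and $\sigma^{2}_{a}=1$, so Proposition 1 gives
$\Delta(1)=\frac{\phi^{\top}\Sigma\beta}{1+\phi^{\top}\Sigma\phi}=\frac{\omega_{0}\,\phi^{\top}\Sigma\phi+\phi^{\top}\Sigma b}{1+\phi^{\top}\Sigma\phi},\qquad\phi^{\top}\Sigma\phi\to\tau^{2}\widetilde{\lambda},\qquad\phi^{\top}\Sigma b\to 0,$
in probability, since $\mathbb{E}(\phi^{\top}\Sigma\phi)=\tau^{2}\operatorname{tr}(\Sigma)/P\to\tau^{2}\widetilde{\lambda}$ while $\phi^{\top}\Sigma b$ has mean zero and variance of order $1/P$.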
Theorem 1 explicitly computes the bias of the ridge regression estimator under
IG.2 when the prior
$\beta\sim\operatorname{Normal}(0,N^{-1}\lambda^{-1}\mathrm{I})$ is used,
i.e., when we apply the usual ridge regression estimator. We sketch a proof of
Theorem 1 and verify it numerically in the Supplementary Material.
###### Theorem 1.
Suppose HDA and REM hold. Let
$(\widetilde{\gamma},\widetilde{\beta}^{\top})^{\top}$ denote the Bayes
estimate of $(\gamma,\beta^{\top})^{\top}$ under a prior which takes
$\beta\sim\operatorname{Normal}(0,N^{-1}\,\lambda^{-1}\mathrm{I})$ and places
a flat prior on $\gamma$ under IG.2. Then the asymptotic bias of
$\widetilde{\gamma}$ is given by
$\displaystyle\lim_{N,P\to\infty}\mathbb{E}(\widetilde{\gamma}-\gamma_{0})=\frac{\omega_{0}\int
x/(x+\lambda)\ F(dx)}{\int(x+\eta)/(x+\lambda)\
F(dx)}=\omega_{0}\times\frac{1-\lambda\,v(-\lambda)}{1-(\lambda-\eta)\,v(-\lambda)}$
(1)
where $v(z)=\int_{0}^{\infty}\frac{F(dx)}{x-z}$ is the Stieltjes transform of
$F(dx)$ and $\eta=r/\tau^{2}$.
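The right-hand side of (1) is easy to evaluate on a single simulated design by replacing $F(dx)$ with the empirical spectral distribution of $\bm{X}\bm{X}^{\top}/N$, in the spirit of the simulated points shown in Fig. 1. The sketch below is ours (the function name, sample size, and seed are arbitrary choices):

```python
# Sketch (ours): evaluate the asymptotic bias (1) with F(dx) replaced by the
# empirical spectral distribution of S = X X^T / N for one Gaussian design
# with Sigma = I (cf. the simulated points in Fig. 1).
import numpy as np

def ridge_bias(lam, eta, r, N=400, omega0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, int(r * N)))   # HDA.1 with Sigma = I
    x = np.linalg.eigvalsh(X @ X.T / N)        # eigenvalues entering F(dx)
    num = np.mean(x / (x + lam))               # int x/(x+lam) F(dx)
    den = np.mean((x + eta) / (x + lam))       # int (x+eta)/(x+lam) F(dx)
    return omega0 * num / den

for r in (0.5, 2.0):
    for lam in (0.1, 1.0, 10.0):
        print(r, lam, ridge_bias(lam, eta=r, r=r))  # eta = r / tau^2 with tau = 1
```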
Ideally we would like the bias to be close to $0$ for moderate-to-large values
of $\lambda$ so that we have both small variance and bias; the approach
outlined in Section 4.1 _does_ accomplish this goal for a properly-chosen
$\lambda$. Figure 1 contrasts this alternative method with standard ridge
regression when $\Sigma=\mathrm{I}$ and we see that the bias is quite large
for ridge regression unless $\lambda$ is close to $0$ and $r\leq 1$; this
latter case corresponds to OLS, which (while unbiased) defeats the purpose of
using ridge regression.
Figure 1: Comparison of the bias of naive ridge regression (dashed, blue) to
the direct Z-prior (solid, orange) of Section 4.1 for different values of
$\eta$ and $r$ with $\omega_{0}\equiv 1$. Estimated values of $\lambda$ based
on a single simulated dataset for each combination of $\eta$ and $r$ are given
by the points.
A qualitative observation based on (1) is that smaller bias is obtained when
most of the eigenvalues of $\underline{S}$ are small. For example,
unbiasedness is possible if $F(dx)$ assigns _any_ mass to $0$, since taking $\lambda\to 0$ keeps the numerator bounded (by bounded convergence) while $\eta\,v(-\lambda)\to\infty$ in the denominator. This occurs naturally
when $N>P$, and incidentally would also occur if duplicate rows of $\bm{X}$
were possible even if $P\gg N$; this latter observation makes intuitive sense,
as we could then identify $\gamma_{0}$ using exact-matching on the $X_{i}$’s.
When $P>N$ the only hope for negligible bias is for the eigenvalues of $\underline{S}$ to be heavily concentrated near $0$. As $\underline{S}$ has the same non-zero eigenvalues as the sample covariance $S=\bm{X}^{\top}\bm{X}/N$, this means we should hope for strong collinearities among the covariates. A particularly unfavorable setting is
$\Sigma=\mathrm{I}$, where the Marchenko-Pastur theorem (see, e.g., Couillet
and Debbah,, 2011, Theorem 2.13) states that if $r\geq 1$ then $F(dx)$ has
density
$q(\lambda)=\frac{\sqrt{(b-\lambda)(\lambda-a)}}{2\,\pi\,\lambda}I(a<\lambda<b)$
where $(a,b)=(1\pm\sqrt{r^{-1}})^{2}$; this places the bulk of the eigenvalues
rather far from $0$. In Section 5 we show that much better results are
obtained when the $X_{i}$’s follow a latent factor model.
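For concreteness, the Marchenko–Pastur bulk quoted above is easy to evaluate numerically. The snippet below is a sketch (with $r=5$ chosen arbitrarily) that computes the support endpoints $(a,b)=(1\pm\sqrt{r^{-1}})^{2}$ and the density $q(\lambda)$, confirming that essentially no mass lies near $0$ when $r\geq 1$.

```python
import numpy as np

def mp_density(lam, r):
    """Marchenko-Pastur density quoted above, with support (a, b) = (1 +/- sqrt(1/r))^2."""
    a = (1.0 - np.sqrt(1.0 / r)) ** 2
    b = (1.0 + np.sqrt(1.0 / r)) ** 2
    lam = np.asarray(lam, dtype=float)
    q = np.zeros_like(lam)
    inside = (lam > a) & (lam < b)
    q[inside] = np.sqrt((b - lam[inside]) * (lam[inside] - a)) / (2.0 * np.pi * lam[inside])
    return q, (a, b)

grid = np.linspace(1e-3, 4.0, 4000)
q, (a, b) = mp_density(grid, r=5.0)
print("support:", (round(a, 3), round(b, 3)))                   # about (0.306, 2.094) for r = 5
print("mass below 0.25:", q[grid < 0.25].sum() * (grid[1] - grid[0]))  # essentially zero
```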
### 2.2 Selection Bias Dogmatism for Semiparametric Regression
In the semiparametric regression problem with missing data the selection bias
parameter is given by
$\Delta=\frac{\operatorname{Cov}_{\theta}\\{\beta(X_{i}),\phi(X_{i})\\}}{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}.$
Figure 2 gives a sense of what to expect for nonparametric priors. In this
figure, $\beta(x)$ and $\Phi^{-1}\\{\phi(x)\\}$ are given independent BART
priors (Chipman et al.,, 2010; Hill,, 2011) where $\Phi(\cdot)$ is the standard normal distribution function. We see that as $P$ increases the variance of $\Delta$ decreases
substantially. As in the setting of ridge regression, this is troubling both
because (i) it will typically violate our prior beliefs about $\Delta$ for
large $P$ and (ii) given the definition of $\Delta$ as
$\mathbb{E}_{\theta}(Y_{i}\mid A_{i}=1)-\mathbb{E}_{\theta}(Y_{i})$ there is
no reason for our prior beliefs to be dependent on the number of confounders
we happen to have measured.
Figure 2: Prior distribution of $\Delta$ for the BART model in Section 2.2 for
$P\in\\{1,10,50\\}$.
For convenience we will assume that $\phi(x)$ has a point-mass prior at some
$\phi_{0}$ and that $\beta$ has a Gaussian process prior (Rasmussen and
Williams,, 2006). Recall that $\beta\sim\operatorname{GP}(m,\kappa)$ means
that, for any finite collection $(x_{1},\ldots,x_{M})$, we have
$\big{(}\beta(x_{1}),\ldots,\beta(x_{M})\big{)}^{\top}\sim\operatorname{Normal}(\bm{m},\bm{K})$
where $\bm{m}=\big{(}m(x_{1}),\ldots,m(x_{M})\big{)}^{\top}$ and $\bm{K}$ has
$(j,k)^{\text{th}}$ entry $\kappa(x_{j},x_{k})$. Gaussian processes have been
proposed as priors for causal inference by several authors (Ray and van der
Vaart,, 2020; Ren et al.,, 2021) and they are particularly easy to study
theoretically.
The relevant Hilbert space for applying the orthogonality principle is
$\mathscr{L}_{2}(F_{X})$, the space of square-integrable functions $\\{g:\int
g^{2}\ dF_{X}<\infty\\}$ under the usual inner product
$\langle\beta,\phi\rangle=\int\beta(x)\,\phi(x)\ F_{X}(dx)$, with $F_{X}$
denoting the distribution of $X_{i}$. Let
$\bar{\beta}(x)=\beta(x)-\int\beta(x)\ F_{X}(dx)$ and
$\bar{\phi}(x)=\phi(x)-\int\phi(x)\ F_{X}(dx)$, and define the normalizations
of these functions by $\widetilde{\beta}(x)=\bar{\beta}(x)/\|\bar{\beta}\|$
and $\widetilde{\phi}(x)=\bar{\phi}(x)/\|\bar{\phi}\|$. The following
proposition shows that the selection bias is controlled by
$\langle\widetilde{\beta},\widetilde{\phi}\rangle$, implying that the
orthogonality principle is in effect.
###### Proposition 4.
Suppose $\beta\sim\operatorname{GP}(m,\kappa)$ such that
$\sup_{P}\mathbb{E}\\{\beta(X_{i})^{2}\\}<\infty$ and that there exists
$\delta>0$ such that $\mathbb{E}\\{\phi(X_{i})\\}\geq\delta$ as $P\to\infty$.
Then
$\Delta=\frac{\|\bar{\beta}\|\,\|\bar{\phi}\|}{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}\
\langle\widetilde{\beta},\widetilde{\phi}\rangle=O_{p}(\langle\widetilde{\beta},\widetilde{\phi}\rangle).$
The question now is how quickly
$\langle\widetilde{\beta},\widetilde{\phi}\rangle$ tends to $0$. In the
nonparametric case, in addition to the dimension ($P$) and distribution of the
covariates ($F_{X}$), this will also depend on the _smoothness_ of
$\widetilde{\beta}$ dictated by the covariance function $\kappa(\cdot,\cdot)$.
Note that $\bar{\beta}=\beta-\int\beta\ dF_{X}$ is also a Gaussian process with covariance function
$\displaystyle\bar{\kappa}(x,x^{\prime})=\kappa(x,x^{\prime})-\int\kappa(x,z)\ F_{X}(dz)-\int\kappa(x^{\prime},z)\ F_{X}(dz)+\iint\kappa(z,z^{\prime})\ F_{X}(dz)\ F_{X}(dz^{\prime}).$
The following proposition explicitly calculates the prior distribution of
$\Delta$.
###### Proposition 5.
Let $\beta\sim\operatorname{GP}\\{0,\tau^{2}_{\beta}\,\rho(\cdot,\cdot)\\}$
where $\rho(\cdot,\cdot)$ is a correlation function. Then
$\Delta\sim\operatorname{Normal}(0,c)$ where $c$ is
$\displaystyle\frac{\tau^{2}_{\beta}}{\mathbb{E}\\{\phi(X_{i})\\}^{2}}\iint\bar{\phi}(x)\,\bar{\phi}(x^{\prime})\,\bar{\rho}(x,x^{\prime})\
F_{X}(dx)\
F_{X}(dx^{\prime})=\frac{\tau^{2}_{\beta}}{\mathbb{E}\\{\phi(X_{i})\\}^{2}}\sum_{j=1}^{\infty}\lambda_{j}\operatorname{Cov}\\{\phi(X_{i}),v_{j}(X_{i})\\}^{2}$
and
$\rho(x,x^{\prime})=\sum_{j=1}^{\infty}\lambda_{j}\,v_{j}(x)\,v_{j}(x^{\prime})$
is the Karhunen–Loève expansion of $\rho(x,x^{\prime})$ in
$\mathscr{L}_{2}(F_{X})$.
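The constant $c$ in Proposition 5 can also be approximated by Monte Carlo from the double-integral form, without computing the Karhunen–Loève expansion. The sketch below is our own illustration: the Gaussian kernel with $H=\mathrm{I}$, the standard normal $F_{X}$, and the particular $\phi$ are all assumptions, chosen so that the output also previews the exponential decay in $P$ established in Proposition 6.

```python
import numpy as np

def delta_prior_variance(phi, P, tau2_beta=1.0, M=2000, seed=0):
    """Monte Carlo estimate of c in Proposition 5 for the Gaussian kernel with H = I
    and X_i ~ Normal(0, I_P).  `phi` maps an (M, P) array to a length-M vector."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((M, P))
    f = phi(X)
    f_bar = f - f.mean()                               # centered phi
    x2 = (X ** 2).sum(1)
    sq = x2[:, None] + x2[None, :] - 2.0 * X @ X.T     # pairwise squared distances
    K = np.exp(-sq / 2.0)                              # rho(x, x') with H = I
    np.fill_diagonal(K, 0.0)                           # drop i = j terms from the double integral
    return tau2_beta * (f_bar @ K @ f_bar) / (M * (M - 1) * f.mean() ** 2)

# phi chosen (arbitrarily) to depend only on the first coordinate, so E{phi} stays away from 0
phi = lambda X: 1.0 / (1.0 + np.exp(-X[:, 0]))
for P in (1, 5, 20, 50):
    print(P, delta_prior_variance(phi, P))
```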
We see that the only way for $c$ to be non-negligible is for $\phi$ to be highly correlated with some of the leading eigenfunctions of $\rho$; this suggests, at a minimum, that the kernel $\rho(x,x^{\prime})$ should not be chosen without reference to $\phi(\cdot)$. The next
proposition shows that, should we not incorporate $\phi$ into
$\rho(x,x^{\prime})$, the resulting kernel can be universally poor: the
selection bias can decay exponentially in $P$ irrespective of $\phi$.
###### Proposition 6.
Consider the setup of Proposition 5 where the covariance function
$\rho(x,x^{\prime})$ is given by the Gaussian kernel
$\rho(x,x^{\prime})=\exp\\{-(x-x^{\prime})^{\top}H^{-1}(x-x^{\prime})/2\\}$
for some bandwidth matrix $H$ and suppose (i)
$X_{i}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,\Sigma)$,
(ii) $\det(\Sigma)^{1/P}/\det(H)^{1/P}$ is bounded away from $0$, and (iii)
there exists a $\delta>0$ such that $\mathbb{E}\\{\phi(X_{i})\\}\geq\delta$
for all $P$. Then $\Delta\sim\operatorname{Normal}(0,c)$ where
$c\leq\exp\\{-CP\\}$ for some constant $C>0$ which is independent of $\phi$.
In particular, this occurs if either:
1. (a)
$H=k\,\Sigma$ for some $k>0$ with $k^{1/P}$ bounded and $\Sigma$ full-rank; or
2. (b)
$H=\xi\,\mathrm{I}$ and $\xi^{-1}\prod_{j}\lambda_{j}^{1/P}$ is bounded away
from $0$.
Proposition 6 shows that the Gaussian kernel is dogmatic in a _uniform_ sense:
no matter how favorably $\phi(x)$ is selected, the Gaussian kernel makes the
prior variance on $\Delta$ decrease exponentially in $P$. Moreover, this
exponential decay holds for some potentially-desirable choices of the
bandwidth matrix $H$ (both when $H$ is aligned with $\Sigma$ and when the
kernel is isotropic).
### 2.3 Selection Bias Dogmatism for Spike-and-Slab Priors
A common strategy for dealing with the $N\ll P$ setting in linear regression
is to use a sparsity-inducing spike-and-slab prior like the one described in
Section 1.2. Even when we impose sparsity, however, serious problems occur for
the selection bias prior. Suppose that $\Sigma=\sigma^{2}_{x}\mathrm{I}$ and
let $\mathfrak{d}_{j}^{\beta}=I(\beta_{j}\neq 0)$ and
$\mathfrak{d}^{\phi}_{j}=I(\phi_{j}\neq 0)$. Then we can write the selection
bias as
$\Delta(a)=a\frac{\sum_{j:\mathfrak{d}^{\beta}_{j}=\mathfrak{d}^{\phi}_{j}=1}\sigma^{2}_{x}\,\phi_{j}\,\beta_{j}}{\sigma^{2}_{a}+\sum_{j:\mathfrak{d}^{\phi}_{j}=1}\sigma^{2}_{x}\,\phi_{j}^{2}}.$
In this case, the denominator (by the law of large numbers) will be of order
$\sum_{j}\mathfrak{d}_{j}^{\phi}\equiv D_{\phi}$ while the numerator (by the
central limit theorem) will be of order $D_{\phi\cap\beta}^{1/2}$ where
$\sum_{j}\mathfrak{d}_{j}^{\phi}\,\mathfrak{d}_{j}^{\beta}\equiv
D_{\phi\cap\beta}$. If we now use independent spike-and-slab priors for
$\beta$ and $\phi$ which are calibrated to have on average $Q$ variables, we
expect $D_{\phi}\approx Q$ while $D_{\phi\cap\beta}\approx Q/P$, so that the
selection bias will be a-priori negligible in high-dimensional sparse settings
in which spike-and-slab priors are applied. Hence, even if sparsity is
expected (but IG.2 is otherwise in effect) we run into essentially the same
problem as with ridge regression: the prior on $\Delta(a)$ regularizes it
towards zero.
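This order-of-magnitude argument is easy to check by simulation: draw $(\beta,\phi)$ from independent spike-and-slab priors and evaluate the induced prior on $\Delta(1)$. The sketch below is an illustration with arbitrarily chosen hyperparameters ($Q=10$, unit slab variances, $\sigma^{2}_{x}=\sigma^{2}_{a}=1$); the printed prior standard deviation of $\Delta(1)$ shrinks as $P$ grows.

```python
import numpy as np

def delta_prior_draws(P, Q=10, sigma2_x=1.0, sigma2_a=1.0, n_draws=2000, seed=0):
    """Prior draws of Delta(1) under independent spike-and-slab priors on beta and phi,
    each calibrated to include Q variables on average (unit slab variances assumed)."""
    rng = np.random.default_rng(seed)
    p_inc = Q / P
    draws = np.empty(n_draws)
    for k in range(n_draws):
        beta = np.where(rng.random(P) < p_inc, rng.standard_normal(P), 0.0)
        phi = np.where(rng.random(P) < p_inc, rng.standard_normal(P), 0.0)
        num = sigma2_x * np.sum(phi * beta)        # only coordinates with both nonzero contribute
        den = sigma2_a + sigma2_x * np.sum(phi ** 2)
        draws[k] = num / den
    return draws

for P in (50, 200, 1000, 5000):
    print(P, delta_prior_draws(P).std())
```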
## 3 Specifying Priors Which Violate IG.2: Z-Priors
Part of the appeal of IG.2 is that the treatment $A_{i}$ plays no role in
posterior sampling of the parameter of interest $\beta$. This is
computationally convenient because the updates for $\beta$ and $\phi$ in (say)
a Markov chain Monte Carlo experiment can be carried out independently. It
also prevents the phenomenon of _model feedback_ from occurring, wherein
misspecification of the $A_{i}$-model can result in inconsistent estimation of
$\beta$ even when the $Y_{i}$-model is correctly specified (Robins and Ritov,,
1997; Zigler et al.,, 2013).
Arguably the most natural way to specify a Bayesian model violating IG.2 is to
use the factorization
$\displaystyle\pi(\phi)\,\pi(\beta\mid\phi)\,f_{\phi}(\bm{A}\mid\bm{X})\,f_{\beta}(\bm{Y}\mid\bm{A},\bm{X}).$
(2)
That is, we specify the model in the usual way, but make $\beta$ dependent on
$\phi$. An alternative approach, which is less natural but far more
convenient, is to use the model specification
$\displaystyle\pi(\phi,\beta,\bm{A},\bm{Y}\mid\bm{X})=\pi(\phi)\,f_{\phi}(\bm{A}\mid\bm{X})\,\pi(\beta\mid\bm{A},\bm{X})\,f_{\beta}(\bm{Y}\mid\bm{A},\bm{X}).$
(3)
This approach is argued for by Hahn et al., (2020) who refer to specifications
like (3) as _Zellner priors_ due to the fact that the prior for $\beta$ is
allowed to depend on the design matrix $(\bm{A},\bm{X})$ of $\bm{Y}$ (as is
the case for Zellner’s famous $g$-prior). In order to avoid confusion with
Zellner’s $g$-prior, we refer to priors of this form as Z-priors. Figure 3
shows a schematic comparing these two factorizations with IG.2. This prior
makes $\beta$ and $\phi$ _conditionally independent_ given $(\bm{A},\bm{X})$
so that it retains the principal advantage of IG.2: feedback between the
$Y_{i}$ and $A_{i}$ models is severed, and the updates for $(\beta,\phi)$ are
no longer coupled. Note that (3) induces a dependent prior of the form
$\pi(\phi,\beta)=\pi(\phi)\,\int\pi(\beta\mid\bm{A},\bm{X})\,f_{\phi}(\bm{A}\mid\bm{X})\
d\bm{A}\ dF_{X},$ so that IG.2 is violated.
Figure 3: Directed acyclic graphs showing different conditional independence
structures for model and prior specification; (a) shows the graph implied by
IG.2, (b) shows the graph implied by (2), and (c) shows the graph implied by
(3).
To illustrate the point, in Section 4.1 we will consider a prior for the ridge
regression problem which is of the form
$\beta\sim\operatorname{Normal}(\omega\,\phi,\tau^{2}_{\beta}\,\mathrm{I})$
where $\omega$ is given a diffuse prior. This prior conforms to (2). In our
actual experiments, however, we use the prior
$\beta\sim\operatorname{Normal}(\omega\,\widehat{\phi},\tau^{2}_{\beta}\,\mathrm{I})$
where $\widehat{\phi}=\mathbb{E}(\phi\mid\bm{A},\bm{X})$ is a data-adaptive
ridge estimator. This prior is of the form (3) and is trivial to implement in
a two-stage fashion: fit the model for $A_{i}$, compute $\widehat{\phi}$, and
plug this into the prior for $\beta$ when fitting the $Y_{i}$-model.
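In code, this two-stage recipe amounts to two ridge-type fits. The sketch below is a minimal implementation of our own (not the authors' software) under conjugate normal priors: a ridge fit of $\bm{A}$ on $\bm{X}$ produces $\widehat{\phi}$ and the clever covariate $\widehat{A}_{i}$, and the $Y$-model is then fit with a flat prior on $(\gamma,\omega)$ and a ridge prior on the remaining coefficients. The penalty values and the synthetic data are assumptions.

```python
import numpy as np

def two_stage_z_prior(X, A, Y, lam_a=1.0, lam_b=1.0):
    """Two-stage Z-prior ridge fit: (1) ridge regression of A on X gives phi_hat,
    (2) Bayes fit of Y on [A, A_hat, X] with a flat prior on (gamma, omega) and a
    Normal(0, (N * lam_b)^{-1} I) prior on the remaining coefficients (noise variance 1)."""
    N, P = X.shape
    phi_hat = np.linalg.solve(X.T @ X + N * lam_a * np.eye(P), X.T @ A)
    A_hat = X @ phi_hat                                     # plug-in clever covariate
    W = np.column_stack([A, A_hat, X])
    D = np.diag(np.r_[0.0, 0.0, N * lam_b * np.ones(P)])    # no shrinkage on gamma or omega
    coef = np.linalg.solve(W.T @ W + D, W.T @ Y)
    return coef[0], coef[1], coef[2:]                       # gamma_hat, omega_hat, b_hat

# synthetic check in the spirit of REM (all values assumed for illustration)
rng = np.random.default_rng(1)
N, P, gamma0, omega0 = 200, 1000, 2.0, -0.5
X = rng.standard_normal((N, P))
phi = rng.standard_normal(P) / np.sqrt(P)
beta = omega0 * phi + rng.standard_normal(P) / np.sqrt(P)
A = X @ phi + rng.standard_normal(N)
Y = X @ beta + gamma0 * A + rng.standard_normal(N)
gamma_hat, omega_hat, _ = two_stage_z_prior(X, A, Y)
print(round(gamma_hat, 3), round(omega_hat, 3))
```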
In our experience, some Bayesians feel uneasy about using priors like
$\beta\sim\operatorname{Normal}(\omega\widehat{\phi},\tau^{2}_{\beta})$
because it “understates the uncertainty” in $\beta$ due to the fact that it
appears to use a plug-in estimate of $\phi$ rather than $\phi$ itself. While
it is true that using the actual value of $\phi$ rather than an estimate
typically (although not always! see Hirano et al.,, 2003) results in improved
Frequentist performance, the justification of this prior as being of the form
(3) shows that there is no explicit violation of the Bayesian calculus in
using this prior. For example, for the ridge regression Z-prior discussed
above we can explicitly derive the induced prior on
$\pi(\beta,\phi,\omega\mid\bm{X})$ as
$\pi(\beta,\phi,\omega\mid\bm{X})=\pi(\phi,\omega)\,\operatorname{Normal}\big{(}\beta\mid\omega\,\phi,\tau^{2}_{\beta}\,\mathrm{I}+\sigma^{2}_{a}(\bm{X}^{\top}\bm{X})^{-1}\big{)}.$
## 4 Correcting for Dogmatism
### 4.1 Direct Priors for Ridge Regression
A simple approach to addressing dogmatism in the ridge regression setting is
to note that we can make $\beta^{\top}\Sigma\phi$ large by encouraging $\beta$
to align with $\phi$. For example, we might center $\beta$ on $\phi$ by taking
$\beta\sim\operatorname{Normal}(\omega\phi,\tau^{2}_{\beta}\,\mathrm{I}).$
Doing this, we now have $\Delta(a)=a\frac{\phi^{\top}\Sigma
b}{\sigma^{2}_{a}+\phi^{\top}\Sigma\phi}+a\,\omega\frac{\phi^{\top}\Sigma\phi}{\sigma^{2}_{a}+\phi^{\top}\Sigma\phi},$
where $b\sim\operatorname{Normal}(0,\tau^{2}_{\beta}\,\mathrm{I})$. By the
same argument as in Proposition 2, the first term is $O_{p}(P^{-1/2})$; the
second term, however, does not tend to $0$ as $P\to\infty$, preventing prior
dogmatism from taking hold. By centering the prior for $\beta$, we can now
specify a _direct_ prior on $\Delta(a)$ by placing a prior on $\omega$. For
example, we can express prior ignorance about the degree of selection bias by
placing a flat prior on $\omega$.
This approach is related to the targeted maximum likelihood estimation
strategy of introducing a “clever covariate” into the outcome model to account
for selection (see, e.g., van der Laan and Rose,, 2011, Section 4.2.1). The
parameterization $\beta=b+\omega\phi$ gives
$Y_{i}(a)=\beta_{0}+X_{i}^{\top}b+\omega(X_{i}^{\top}\phi)+\gamma\,a+\epsilon_{i}(a),$
which effectively introduces the new covariate
$\widehat{A}_{i}=X_{i}^{\top}\phi$ into the model. A related idea proposed by
(Hahn et al.,, 2018) is to replace $a$ in the outcome model with the residual
$(a-\widehat{A}_{i})$, which is equivalent to setting $\omega=-\gamma$.
#### Bias of the Z-prior Estimate under HDA and REM
In practice, we will not usually use the direct prior described above;
instead, we will use a Z-prior which plugs in a point-estimate of the clever
covariate $\widehat{A}_{i}=X_{i}^{\top}\widehat{\phi}$. Assuming HDA and REM,
the bias of the Bayes estimator under the Z-prior can be shown to be
$\displaystyle\mathbb{E}(\widetilde{\gamma}-\gamma_{0})=\mathbb{E}\left\\{\frac{\bm{A}^{\top}(\mathrm{I}-\widehat{\Psi}\bar{H}\widehat{\Psi}^{\top})\Psi\binom{\omega_{0}}{b}}{\bm{A}^{\top}(\mathrm{I}-\widehat{\Psi}\bar{H}\widehat{\Psi}^{\top})\bm{A}}\right\\}$
where $\widehat{\Psi}=[\bm{X}\widehat{\phi},\bm{X}]$,
$\Psi=[\bm{X}\phi,\bm{X}]$ and
$\bar{H}^{-1}=\widehat{\Psi}^{\top}\widehat{\Psi}+N\lambda\left(\begin{smallmatrix}0&0\\\
0&\mathrm{I}\end{smallmatrix}\right)$. If one happened to know the exact value
of $\phi$ and set $\widehat{\phi}=\phi$ then it is easy to show that the
resulting Bayes estimator $\widetilde{\gamma}$ is unbiased for $\gamma$,
irrespective of the prior for $b$. It is also easy to show that if we take
$\widehat{\phi}=\widetilde{\phi}$ where $\widetilde{\phi}$ is the Bayes
estimate under the correct prior
$\widetilde{\phi}=\mathbb{E}(\phi\mid\bm{A},\bm{X})$ then $\widetilde{\gamma}$
remains unbiased.
The selling point now is that there are moderate values of $\lambda$ for which
ridge regression will have $0$ bias, a situation which was not possible with
the naive prior. In practice, under REM we will know neither $\phi$ nor
$\widetilde{\phi}$ (because we will not know the signal level
$\tau_{\phi}^{2}$). Data is typically quite informative about
$\tau^{2}_{\phi}$, however, and we have had success placing a prior on
$\tau^{2}_{\phi}$ to obtain nearly unbiased estimates. This is seen in Figure
1 ($\Sigma=\mathrm{I}$), where the point on the solid line corresponds to the
bias if we plug in a Bayes estimate of $\tau^{2}_{\phi}$ to construct a ridge
estimator.
It is also possible to show that the asymptotic bias of the Z-prior when
$\widehat{\phi}$ is estimated with ridge regression is
$\displaystyle\left(\omega_{0}\,\lambda\left[\frac{\psi_{11}}{\eta}-\frac{\psi_{22}}{\eta}\frac{(\psi_{21}+\psi_{22}/\eta)}{(\psi_{32}+\psi_{33}/\eta)}\right]\right)/\left({\frac{1-r}{r}+\lambda\left[\psi_{10}+\frac{\psi_{11}}{\eta}-\frac{(\psi_{21}+\psi_{22}/\eta)^{2}}{(\psi_{32}+\psi_{33}/\eta)}\right]}\right)$
where $\psi_{jk}=\int_{0}^{\infty}\frac{x^{k}}{(x+\lambda)^{j}}\ G(dx)$ and
$G(dx)$ is the empirical spectral distribution corresponding to
$\bm{X}^{\top}\bm{X}/N$ (also known as the companion spectral distribution to
$F(dx)$). Each of the $\psi_{jk}$’s can be computed by noting the recursive
identity $\psi_{jk}=\psi_{j-1,k-1}-\lambda\psi_{j,k-1}$ and the fact that
$\psi_{j0}=m^{(j-1)}(-\lambda)/(j-1)!$ where $m(z)$ is the Stieltjes transform
of $G(dx)$.
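The $\psi_{jk}$'s are straightforward to compute from the eigenvalues of $\bm{X}^{\top}\bm{X}/N$; the sketch below (our illustration, with arbitrary dimensions and $\tau^{2}=1$ assumed) checks the recursion against direct evaluation and then plugs the $\psi_{jk}$'s into the asymptotic bias expression above.

```python
import numpy as np

def psi_direct(eigs, j, k, lam):
    """psi_{jk} = integral of x^k / (x + lam)^j under the empirical spectrum `eigs`."""
    return np.mean(eigs ** k / (eigs + lam) ** j)

def psi_recursive(eigs, j, k, lam):
    """Same quantity via psi_{jk} = psi_{j-1,k-1} - lam * psi_{j,k-1} (valid here since j >= k)."""
    if k == 0:
        return np.mean(1.0 / (eigs + lam) ** j)     # equals m^{(j-1)}(-lam)/(j-1)! for G
    return psi_recursive(eigs, j - 1, k - 1, lam) - lam * psi_recursive(eigs, j, k - 1, lam)

def z_prior_bias(eigs, lam, eta, omega0, r):
    """Asymptotic bias of the Z-prior estimate, transcribed from the display above."""
    p = lambda j, k: psi_direct(eigs, j, k, lam)
    top = omega0 * lam * (p(1, 1) / eta
                          - (p(2, 2) / eta) * (p(2, 1) + p(2, 2) / eta) / (p(3, 2) + p(3, 3) / eta))
    bot = (1 - r) / r + lam * (p(1, 0) + p(1, 1) / eta
                               - (p(2, 1) + p(2, 2) / eta) ** 2 / (p(3, 2) + p(3, 3) / eta))
    return top / bot

rng = np.random.default_rng(0)
N, P = 200, 400
X = rng.standard_normal((N, P))
eigs = np.linalg.eigvalsh(X.T @ X / N)              # spectrum of X^T X / N (the companion G)
lam, r = 1.5, P / N
eta, omega0 = r / 1.0, 1.0                          # eta = r / tau^2 with tau^2 = 1 assumed
for jk in [(1, 1), (2, 2), (3, 3)]:
    print(jk, psi_direct(eigs, *jk, lam), psi_recursive(eigs, *jk, lam))
print("asymptotic Z-prior bias:", z_prior_bias(eigs, lam, eta, omega0, r))
```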
A plot of the bias is given in Figure 1 when $\Sigma=\mathrm{I}$, and we verify our bias formula numerically in the Supplementary Material. Rather curiously,
for $r>1$ we see that the bias of the Z-prior estimate is ill-behaved near
$0$; fortunately, we can estimate $\tau^{2}$ accurately enough to avoid this
region.
#### Evaluation of the Direct Z-Prior via Simulation
In order to determine if there is any benefit to using the Z-prior with the
prior on $\omega$ being $\operatorname{Uniform}(-\infty,\infty)$ relative to
either (i) the naive ridge regression prior or (ii) the approach of (Hahn et
al.,, 2018) which we call the “debiased” approach (equivalent to fixing
$\omega=-\gamma$), we conducted a simulation study. In all cases we set
$N=200$ and $P=1000$ so that $N\ll P$. We consider a dense model with
$\phi=(1,\ldots,1)/\sqrt{P}$, a randomly chosen $\beta$ vector
$\beta_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,P^{-1})$,
and $\sigma^{2}_{a}=\sigma^{2}_{y}=1$. The settings differ in the treatment effect size $\gamma$ and the degree to which the coefficients are shifted in the direction of $\phi$. We considered four simulation settings.
Random
Both $\gamma$ and $\omega$ are $\operatorname{Normal}(0,1)$ random variables
and differ for each replication of the experiment.
Fixed
We set $\gamma=2$ and $\omega=-\gamma/4$ so that $\beta$ is shifted in the
direction of $\phi$, but not by the amount implied by the debiased approach.
Debiased
We set $\gamma=2$ and $\omega=-2$ so that $\beta$ is shifted in the direction
of $\phi$ by exactly the amount implied by the debiased approach.
Naive
We set $\gamma=2$ and $\omega=0$ so that the model corresponds precisely to
the naive ridge model.
The simulation was replicated 200 times for each setting. We evaluated each
procedure according to the following criteria. Coverage: The proportion of
nominal 95% credible intervals which capture the true value of $\gamma$.
Width: The average width of the nominal 95% credible interval. Avg SE: The
average estimated standard error from the model, i.e., the posterior
standard deviation of $\gamma$ averaged over all replications. RMSE: The root
mean squared error in estimating $\gamma$ with the Bayes estimator
$\widehat{\gamma}$.
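These four criteria are simple functionals of the simulation output; a small helper of our own (with hypothetical array shapes) might compute them from posterior draws as follows.

```python
import numpy as np

def summarize(gamma_draws, gamma_true):
    """gamma_draws: (n_replications, n_posterior_draws) array of posterior samples of gamma."""
    lo = np.percentile(gamma_draws, 2.5, axis=1)
    hi = np.percentile(gamma_draws, 97.5, axis=1)
    est = gamma_draws.mean(axis=1)                     # posterior-mean (Bayes) estimate per replication
    return {
        "coverage": np.mean((lo <= gamma_true) & (gamma_true <= hi)),
        "width": np.mean(hi - lo),
        "avg_se": np.mean(gamma_draws.std(axis=1)),    # average posterior standard deviation
        "rmse": np.sqrt(np.mean((est - gamma_true) ** 2)),
    }

# hypothetical posterior draws, only to show the expected shapes
draws = np.random.default_rng(0).normal(2.0, 0.1, size=(200, 1000))
print(summarize(draws, gamma_true=2.0))
```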
Setting | Method | Coverage | Width | Avg SE | RMSE
---|---|---|---|---|---
$\omega\sim\operatorname{Normal}(0,1)$ | Direct | 0.94 | 0.39 | 0.10 | 0.10
Debiased | 0.94 | 0.51 | 0.13 | 0.12
Naive | 0.19 | 0.31 | 0.08 | 0.49
$\omega=\gamma/4$ | Direct | 0.95 | 0.39 | 0.10 | 0.11
Debiased | 0.94 | 0.54 | 0.13 | 0.14
Naive | 0.12 | 0.29 | 0.07 | 0.25
$\omega=-\gamma$ | Direct | 0.96 | 0.39 | 0.10 | 0.11
Debiased | 0.95 | 0.39 | 0.10 | 0.11
Naive | 0.00 | 0.40 | 0.10 | 0.96
Naive ($\omega=0$) | Direct | 0.95 | 0.39 | 0.10 | 0.11
Debiased | 0.93 | 0.63 | 0.16 | 0.16
Naive | 0.90 | 0.28 | 0.07 | 0.08
Table 1: Comparison of different approaches for estimating $\gamma$ under
differing levels of selection bias. Direct denotes the approach which sets
$\omega\sim\operatorname{Flat}$, Debiased denotes the approach of Hahn et al.,
(2018).
Results are compiled in Table 1. The direct and debiased approaches always
attain the nominal coverage level, while the naive approach does not come
close when the selection bias is non-negligible. We also see that the debiased
approach will generally require substantially larger intervals than the direct
approach to cover at the appropriate rate; the only exception is when
$\omega=-\gamma$, which is to be expected as this setting agrees exactly with
the debiased prior. When the naive ridge model actually holds (i.e.,
$\omega=0$) we see that the naive ridge model unsurprisingly performs
substantially better, and is the best in terms of RMSE, with the direct prior
still outperforming the debiased prior.
### 4.2 Variable Sharing for Spike-and-Slab Priors
For the variable selection prior it remains valid to include the “clever
covariate” $X_{i}^{\top}\widehat{\phi}$ in the model to correct for dogmatism.
However, there are other approaches one can take which make specific use of
the variable selection aspect of the model. One possibility is to use _shared
variable selection_ for the two models; in particular, we want to ensure that any variable appearing in the selection model also appears in the outcome model. To implement this, we might set
$\phi_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}(1-p_{\phi})\,\delta_{0}+p_{\phi}\,\operatorname{Normal}(0,\tau^{2}_{\phi})$
and conditionally set
$\beta_{j}\stackrel{{\scriptstyle\textnormal{indep}}}{{\sim}}\\{1-p_{\beta}(\phi_{j})\\}\,\delta_{0}+p_{\beta}(\phi_{j})\,\operatorname{Normal}(0,\tau^{2}_{\beta})$.
Setting $p_{\beta}(\phi_{j})=1$ if $\phi_{j}\neq 0$ guarantees that
$\beta_{j}$ will be included whenever $\phi_{j}$ is included.
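A draw from this shared-selection prior might look as follows; the hyperparameters mirror the simulation later in this section and are otherwise arbitrary.

```python
import numpy as np

def draw_shared_prior(P, p_phi=5/200, p_beta=5/200, tau2_phi=1.0, tau2_beta=1.0, seed=0):
    """Draw (phi, beta) with shared variable selection: beta_j is included with
    probability 1 whenever phi_j is included, and with probability p_beta otherwise."""
    rng = np.random.default_rng(seed)
    phi_in = rng.random(P) < p_phi
    phi = np.where(phi_in, rng.normal(0.0, np.sqrt(tau2_phi), P), 0.0)
    p_inc = np.where(phi != 0.0, 1.0, p_beta)               # conditional inclusion probability
    beta_in = rng.random(P) < p_inc
    beta = np.where(beta_in, rng.normal(0.0, np.sqrt(tau2_beta), P), 0.0)
    return phi, beta

phi, beta = draw_shared_prior(200)
print((phi != 0).sum(), (beta != 0).sum(), ((phi != 0) & (beta == 0)).sum())  # last count is 0
```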
We conduct a small simulation experiment to justify our claim that shared
variable selection is an effective strategy for combating dogmatism. For our
ground truth we consider $N=P=200$, $\sigma^{2}_{a}=\sigma^{2}_{y}=1$, and
$\gamma=1$. We then sample $\phi_{j}$ from the spike-and-slab prior with
$\tau^{2}_{\phi}=1$ and $p_{\phi}=5/200$. We then consider four different
schemes for sampling $\beta$. Naive: we sample $\beta_{j}$ from the spike-and-
slab prior with $p_{\beta}=5/200$ and $\tau^{2}_{\beta}=1$; Shared: we sample
$\beta_{j}$ from the spike-and-slab prior with $p_{\beta}=5/200$ if
$\phi_{j}=0$ and $p_{\beta}=1$ if $\phi_{j}\neq 0$; Direct: we sample
$\beta_{j}$ according to the Naive prior and then add $-\phi_{j}$ to it; and
Both: we sample $\beta_{j}$ according to the Shared prior then we add
$-\phi_{j}$ to it.
We compare the following prior specifications.
Naive
$\beta_{j}$ has a spike-and-slab prior with $p_{\beta}\equiv 5/200$ and which
is independent of $\phi_{j}$.
Shared
We use a spike-and-slab Z-prior with $p_{\beta,j}=5/200$ if
$\Pr(\phi_{j}=0\mid\bm{A},\bm{X})<0.5$, and $p_{\beta,j}=1$ otherwise.
Direct
We use the Naive prior with the additional covariate
$\widehat{A}_{i}=X_{i}^{\top}\widehat{\phi}$ where
$\widehat{\phi}=\mathbb{E}(\phi\mid\bm{A})$. The variable $\widehat{A}_{i}$ is
included with probability $1$.
Setting | Method | Coverage | Width | Std. Err | RMSE | Bias
---|---|---|---|---|---|---
Shared | Direct | 0.95 | 0.29 | 0.07 | 0.07 | 0.00
Naive | 0.65 | 0.25 | 0.07 | 0.13 | -0.00
Shared | 0.94 | 0.28 | 0.07 | 0.07 | 0.00
Direct | Direct | 0.95 | 0.29 | 0.07 | 0.08 | -0.03
Naive | 0.71 | 0.30 | 0.08 | 0.34 | -0.17
Shared | 0.93 | 0.29 | 0.07 | 0.08 | -0.03
Naive | Direct | 0.96 | 0.28 | 0.07 | 0.07 | -0.00
Naive | 0.94 | 0.14 | 0.04 | 0.04 | -0.00
Shared | 0.97 | 0.28 | 0.07 | 0.07 | -0.00
Both | Direct | 0.94 | 0.29 | 0.07 | 0.07 | 0.00
Naive | 0.77 | 0.27 | 0.07 | 0.12 | -0.01
Shared | 0.94 | 0.28 | 0.07 | 0.07 | 0.00
Table 2: Results for the simulation experiment described in Section 4.2.
Results for this simulation are given in Table 2. Each simulation setting
was replicated $200$ times. We see that the Direct and Shared methods perform
essentially the same despite correcting for dogmatism in different ways — both
methods have virtually identical coverage, root-mean-squared error, interval
lengths, and bias. The Naive approach, which just applies the spike-and-slab
prior for $\beta$ under IG.2, performs extremely poorly by contrast, unless
the data was generated under the Naive prior.
### 4.3 Semiparametric Regression with Clever Covariates
Mimicking our strategy in Section 4.1 we set
$\beta(x)=\beta^{\star}(x)+g\\{\phi(x)\\}$ for some choice of $g(\cdot)$, with
$\beta^{\star}(x)$ given (say) a Gaussian process prior independent of
$\phi(\cdot)$. The selection bias for this model is given by
$\displaystyle\Delta=\frac{\operatorname{Cov}_{\theta}\\{\beta^{\star}(X_{i}),\phi(X_{i})\\}}{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}+\frac{\operatorname{Cov}_{\theta}[g\\{\phi(X_{i})\\},\phi(X_{i})]}{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}\approx\frac{\operatorname{Cov}_{\theta}[g\\{\phi(X_{i})\\},\phi(X_{i})]}{\mathbb{E}_{\theta}\\{\phi(X_{i})\\}}$
by the orthogonality principle. Even if we model $g(\cdot)$ nonparametrically
using (say) a Gaussian process the orthogonality principle will not kick in
because $g(\phi)$ is a function on $\mathbb{R}$ rather than $\mathbb{R}^{P}$.
There are many different choices one can make for the function $g(\cdot)$. If
we are concerned strictly with obtaining good Frequentist properties, an
appropriate choice is to take $g(\phi)=\phi^{-1}$; when $\phi$ is known, this
guarantees $\sqrt{n}$-consistency. Alternatively, we can set
$g\sim\operatorname{GP}(0,\kappa_{g})$ with the covariance function
$\kappa(\phi,\phi^{\prime})=\tau^{2}_{g}\exp\\{-(\phi-\phi^{\prime})^{2}/(2s^{2}_{g})\\}$.
This choice of covariance function was noted by Ren et al., (2021) to induce
matching on the propensity score: individuals with similar propensity scores
have their values of $g(\phi)$ shrunk together. The penalized-spline-of-
propensity approach of Zhou et al., (2019) is similar, except that $g(\phi)$
is chosen to be a spline instead of a Gaussian process. Another approach to
incorporating $\phi(x)$ is to plug it in as a regular covariate, i.e., we
replace $\beta(x)$ with $\beta\\{x,\phi(x)\\}$.
#### Simulation Experiment
We consider the simulation setting of Hahn et al., (2020, Section 6.1) to
evaluate several different approaches to correcting a Gaussian process prior
for dogmatism using the usual evaluation criteria (RMSE, interval width, and
bias, and coverage). We consider the model
$Y_{i}(a)=\mu(X_{i})+a\,\tau(X_{i})+\epsilon_{i},\epsilon_{i}\sim\operatorname{Normal}(0,1)$
with the $X_{ij}$’s being iid $\operatorname{Normal}(0,1)$ random variables
with the exception of $X_{i2}$ which is $\operatorname{Bernoulli}(1/2)$ and
$X_{i4}$ which is uniform on $\\{1,2,3\\}$. We let $\mu(x)$ and $\tau(x)$ be given by
$\displaystyle\tau(x)=\begin{cases}3\quad&\text{homogeneous},\\\\ 1+2\,x_{2}\,x_{5}&\text{heterogeneous},\end{cases}\quad\text{and}\quad\mu(x)=\begin{cases}1+g(x_{4})+x_{1}\,x_{3}\quad&\text{linear},\\\\ -6+g(x_{4})+6\,|x_{3}-1|&\text{nonlinear},\end{cases}$ (4)
where $g(1)=2,g(2)=-1$, and $g(3)=-4$. We then set
$A_{i}\sim\operatorname{Bernoulli}\\{\phi(X_{i})\\}$ with
$\phi(x)=0.8\,\Phi\\{3\,\mu(x)/s-0.5\,x_{1}\\}+0.1$, where $s$ is the
empirical standard deviation of the $\mu(X_{i})$’s. In total, we consider 16
possible simulation settings, corresponding to a factorial design with
$N\in\\{250,500\\}$, $P\in\\{5,20\\}$, with four combinations of
linear/nonlinear and homogeneous/heterogeneous.
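A sketch of this data-generating process in code (our transcription of the quantities above; the random seed and minor details such as the column ordering are assumptions):

```python
import numpy as np
from scipy.stats import norm

def g(x4):
    return {1: 2.0, 2: -1.0, 3: -4.0}[int(x4)]

def simulate(N=250, P=5, linear=True, homogeneous=True, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((N, P))
    X[:, 1] = rng.binomial(1, 0.5, N)                    # X_{i2} ~ Bernoulli(1/2)
    X[:, 3] = rng.integers(1, 4, N)                      # X_{i4} uniform on {1, 2, 3}
    g4 = np.array([g(v) for v in X[:, 3]])
    mu = 1.0 + g4 + X[:, 0] * X[:, 2] if linear else -6.0 + g4 + 6.0 * np.abs(X[:, 2] - 1.0)
    tau = np.full(N, 3.0) if homogeneous else 1.0 + 2.0 * X[:, 1] * X[:, 4]
    s = mu.std()                                         # empirical sd of the mu(X_i)'s
    pscore = 0.8 * norm.cdf(3.0 * mu / s - 0.5 * X[:, 0]) + 0.1
    A = rng.binomial(1, pscore)
    Y = mu + A * tau + rng.standard_normal(N)
    return X, A, Y, pscore

X, A, Y, pscore = simulate(N=250, P=20, linear=False, homogeneous=False)
print(X.shape, A.mean().round(2))
```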
We consider modeling $\mathbb{E}\\{Y_{i}(a)\mid X_{i}=x\\}=\beta(a,x)$ using a
Gaussian process $\beta\sim\operatorname{GP}(0,\kappa)$ where the kernel
function $\kappa\big{(}(a,x),(a^{\prime},x^{\prime})\big{)}$ is given by the
following choices.
Naive
A kernel which makes no correction for the propensity score:
$\kappa\big{(}(a,x),(a^{\prime},x^{\prime})\big{)}=100(1+a\,a^{\prime})+\lambda\,\exp\\{-b\|(a,x)-(a^{\prime},x^{\prime})\|^{2}_{2}\\}$.
IPW-GP
A kernel which incorporates the inverse propensity score as a “clever
covariate” which enters the model linearly:
$\kappa\big{(}(a,x),(a^{\prime},x^{\prime})\big{)}=100(1+a\,a^{\prime}+w\,w^{\prime}+z\,z^{\prime})+\lambda\,\exp\\{-b\|(a,x)-(a^{\prime},x^{\prime})\|^{2}_{2}\\}$
where $w=a/\phi(x)$ and $z=(1-a)/(1-\phi(x))$.
Spline-of-propensity-GP
A kernel which incorporates the propensity score using a spline basis function
expansion:
$\kappa\big{(}(a,x),(a^{\prime},x^{\prime})\big{)}=100(1+a\,a^{\prime}+\sum_{k}\psi_{k}\,\psi_{k}^{\prime})+\lambda\exp\\{-b\|(a,x)-(a^{\prime},x^{\prime})\|^{2}_{2}\\}$
where $\psi_{k}=\psi_{k}(x)$, $\psi^{\prime}_{k}=\psi_{k}(x^{\prime})$, and
$\\{\psi_{1},\ldots,\psi_{K}\\}$ are natural cubic spline basis functions
using $10$ knots.
Spline-of-propensity
Same as Spline-of-propensity-GP but without the Gaussian kernel.
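The three propensity-aware kernels above differ only in which extra features enter the linear part of the kernel. The sketch below is our own illustration of how they could be assembled; the truncated-power basis is a stand-in for the natural cubic splines, and the hyperparameter values are assumptions.

```python
import numpy as np

def rbf(u, v, b):
    return np.exp(-b * np.sum((u - v) ** 2))

def naive_kernel(a, x, a2, x2, lam=1.0, b=1.0):
    return 100.0 * (1.0 + a * a2) + lam * rbf(np.r_[a, x], np.r_[a2, x2], b)

def ipw_gp_kernel(a, x, a2, x2, pscore, lam=1.0, b=1.0):
    """Adds the inverse-propensity 'clever covariates' w = a/phi(x) and z = (1-a)/(1-phi(x))."""
    w, z = a / pscore(x), (1.0 - a) / (1.0 - pscore(x))
    w2, z2 = a2 / pscore(x2), (1.0 - a2) / (1.0 - pscore(x2))
    return 100.0 * (1.0 + a * a2 + w * w2 + z * z2) + lam * rbf(np.r_[a, x], np.r_[a2, x2], b)

def sop_kernel(a, x, a2, x2, pscore, knots, lam=1.0, b=1.0, with_gp=True):
    """Spline-of-propensity kernels: a basis expansion of phi(x) enters the linear part.
    (A truncated-power basis stands in for the natural cubic splines of the text.)"""
    psi = np.maximum(pscore(x) - knots, 0.0) ** 3
    psi2 = np.maximum(pscore(x2) - knots, 0.0) ** 3
    out = 100.0 * (1.0 + a * a2 + psi @ psi2)
    if with_gp:
        out += lam * rbf(np.r_[a, x], np.r_[a2, x2], b)
    return out
```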
In order to separate the issue of accurately estimating the propensity scores from the benefit of using them, we assume that $\phi(x)$ is known a priori. For all methods, the kernel hyperparameters and the standard deviation of $\epsilon_{i}$ are estimated via empirical Bayes, i.e., by maximizing
the marginal likelihood of the data.
Our main goals are to (i) determine the extent to which the Naive kernel
suffers due to dogmatism, (ii) determine which of the IPW or spline approaches
perform better in this case, and (iii) determine whether the propensity score
alone is sufficient to produce a good estimator. A subset of the results
corresponding to the nonlinear heterogeneous setting with $N=250$ are given in
Figure 4, with the remaining results deferred to the Supplementary Material.
Summarizing these results, we find (i) that the Naive kernel performs well
when $P=5$ where dogmatism is mild, but breaks down completely when $P=20$;
(ii) that the IPW-GP and spline-of-propensity-GP approaches perform comparably
in terms of coverage, but that the spline-of-propensity-GP generally produces
smaller standard errors and RMSEs, suggesting that the spline-of-propensity
approach is more stable while accomplishing the same goals as IPW methods; and
(iii) that the spline-of-propensity-GP produces smaller standard errors and
RMSEs than the spline-of-propensity approach, indicating that there is some
benefit to going beyond simply adjusting for the propensity score.
Figure 4: Results for the simulation study of Section 4.3 in the nonlinear
heterogeneous setting. Naive denotes the naive approach, IPW denotes the IPW-
GP approach, SOP denotes the spline-of-propensity approach, and SOP-GP denotes
the spline-of-propensity-GP approach.
## 5 Factors Mitigating Dogmatism
In demonstrating the issue of dogmatism we made use of the orthogonality
principle, which we noted only holds when “other structures” are not present.
For example, we introduced additional structure into the model by making
$\beta$ and $\phi$ dependent in Section 4, giving us direct control of the
prior on $\Delta(a)$.
Another possible source of structure is dependence structure in $X_{i}$. The
benefit of this is evident in Proposition 2 and Theorem 1, where the spectral
distribution of $\Sigma$ figures prominently. We examine the role of
dependence structure in the ridge and semiparametric regression problems.
### 5.1 Dependence Structure and Ridge Regression
To understand the role of $\Sigma$ in ridge regression, we consider a _latent
factor model_ which takes $X_{i}=\Lambda\eta_{i}+\sigma_{x}\,\nu_{i}$ where
$\Lambda\in\mathbb{R}^{P\times L}$ is a matrix of factor loadings and
$\eta_{i}\in\mathbb{R}^{L}$ is an $L$-dimensional vector of latent factors for
observation $i$. If $\sigma_{x}=0$ in this model then $X_{i}$ is restricted to
be in the $L$-dimensional subspace $\operatorname{span}(\Lambda)$; similarly,
if $\sigma_{x}$ is small, then $X_{i}$ lies very close to
$\operatorname{span}(\Lambda)$.
We first examine the induced prior on the selection bias parameter for such a
$\Sigma$. Assuming $\eta_{i}\sim\operatorname{Normal}(0,\mathrm{I})$ and
$\nu_{i}\sim\operatorname{Normal}(0,\mathrm{I})$, we have
$\operatorname{Var}(X_{i})\equiv\Sigma=\Lambda\Lambda^{\top}+\sigma_{x}^{2}\mathrm{I}$.
Letting $\kappa_{1},\ldots,\kappa_{L}$ denote the $L$ non-zero eigenvalues of
$\Lambda\Lambda^{\top}$, Proposition 1 gives
$\displaystyle\Delta(a)$
$\displaystyle=a\frac{\sum_{j=1}^{L}(\kappa_{j}+\sigma^{2}_{x})\,W_{j}\,Z_{j}}{\sigma^{2}_{a}+\sum_{j=1}^{L}(\kappa_{j}+\sigma^{2}_{x})\,Z_{j}^{2}}+a\frac{\sum_{j=L+1}^{P}\sigma^{2}_{x}\,W_{j}\,Z_{j}}{\sigma^{2}_{a}+\sum_{j=L+1}^{P}\sigma_{x}^{2}\,Z_{j}^{2}}\approx
a\frac{\sum_{j=1}^{L}\kappa_{j}\,W_{j}\,Z_{j}}{\sigma^{2}_{a}+\sum_{j=1}^{L}\kappa_{j}\,Z_{j}^{2}}$
where
$W_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,\tau^{2}_{\beta})$
and
$Z_{j}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,\tau_{\phi}^{2})$.
The approximation holds when $\sigma_{x}$ is near zero so that $\Sigma$ is
approximately low-rank. Because the approximating expression depends only on $L$ rather than $P$, we expect $\Delta(a)$ to be roughly of order $L^{-1/2}$ rather than
$P^{-1/2}$ for the ridge regression prior. Hence, even if $P\gg N$, we may
still avoid dogmatism if $L\ll N$.
Revisiting Theorem 1 we can characterize the effect of $\Sigma$ on the bias.
Recall that Theorem 1 relates the bias of the naive ridge estimator to the
empirical spectral distribution $F(dx)$ of $\bm{X}\bm{X}^{\top}/N$ through the
Stieltjes transform $v(-\lambda)=\int_{0}^{\infty}\frac{F(dx)}{x+\lambda}$.
Smaller values of $\lambda$ such that $v(-\lambda)\approx\lambda^{-1}$ result
in small bias, which occurs when $F(dx)$ places substantial mass near $0$. In
Figure 5 we plot both $v(-\lambda)$ and the bias for the latent factor model
with $L=5$ and the entries of $\Lambda$ being iid
$\operatorname{Normal}(0,1)$. We see that as $\sigma_{x}$ decreases we have
substantially less bias. The ridge regression estimator is able to leverage
the fact that $X_{i}$ is approximately in $\operatorname{span}(\Lambda)$
without this being explicitly encoded into the model. In the extreme case
where $\sigma_{x}=0$ this is easy to see, as we can write
$\bm{X}=\bm{E}\bm{D}^{1/2}\Gamma^{\top}$ where $\bm{E}\in\mathbb{R}^{N\times
L}$ has iid $\operatorname{Normal}(0,1)$ entries,
$\Gamma\in\mathbb{R}^{P\times L}$ is semi-orthogonal,
$\bm{D}\in\mathbb{R}^{L\times L}$ is diagonal, and
$\Sigma=\Gamma\bm{D}\Gamma^{\top}$. We can then rewrite
$\bm{X}\beta=\bm{E}\bm{D}^{1/2}\Gamma^{\top}\beta=\bm{E}\zeta$ where
$\zeta\sim\operatorname{Normal}(0,\bm{D}\,\tau^{2}_{\beta})$. Hence performing
ridge regression on the $N\times P$-dimensional $\bm{X}$ is exactly equivalent
to performing ridge regression on the lower-dimensional matrix $\bm{E}$, with
$\zeta_{\ell}$ having variance proportional to the $\ell^{\text{th}}$ non-zero
eigenvalue of $\Sigma$.
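The effect described here is easy to reproduce numerically: simulate $X_{i}=\Lambda\eta_{i}+\sigma_{x}\nu_{i}$, compute the spectrum of $\bm{X}\bm{X}^{\top}/N$, and evaluate the Theorem 1 bias through $v(-\lambda)$. The sketch below is an illustration with assumed dimensions, $\tau^{2}=1$, and a single value of $\lambda$; the bias shrinks as $\sigma_{x}$ decreases, consistent with Figure 5.

```python
import numpy as np

def factor_model_bias(sigma_x, N=200, P=200, L=5, lam=0.5, omega0=1.0, tau2=1.0, seed=0):
    """Theorem 1 bias evaluated on the spectrum of X X^T / N under the latent factor model."""
    rng = np.random.default_rng(seed)
    Lam = rng.standard_normal((P, L))                  # factor loadings, iid Normal(0, 1)
    X = rng.standard_normal((N, L)) @ Lam.T + sigma_x * rng.standard_normal((N, P))
    eigs = np.linalg.eigvalsh(X @ X.T / N)
    v = np.mean(1.0 / (eigs + lam))                    # Stieltjes transform v(-lambda)
    eta = (P / N) / tau2
    return omega0 * (1.0 - lam * v) / (1.0 - (lam - eta) * v)

for sx in (1.0, 0.3, 0.1, 0.03):
    print(sx, round(factor_model_bias(sx), 4))
```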
Figure 5: Top: $v(-\lambda)$ for the latent factor model for different
$(r,\sigma_{x})$ with a dashed line giving the ideal
$v(-\lambda)=\lambda^{-1}$. Bottom: the associated bias of the ridge
regression estimator.
Figure 6 shows the root mean squared error of the direct approach we proposed
in Section 4.1 and the naive ridge regression estimator. To generate this
figure, we applied the two approaches under the following conditions:
$\sigma^{2}_{a}=\sigma^{2}_{y}=1$; $L=5$; $N=P=200$;
$\Lambda_{p\ell}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,1)$;
$\gamma=1$; and both $\beta$ and $\phi$ chosen so that
$X_{i}^{\top}\beta=X_{i}^{\top}\phi=\sum_{\ell=1}^{L}\mathbb{E}(\eta_{i\ell}\mid
X_{i})$.
Figure 6: Root mean squared error in estimating $\gamma$ for direct and naive
methods as a function of $\sigma_{x}$.
### 5.2 Dependence Structure and Semiparametric Regression
We present evidence that the results for ridge regression are part of a much more general phenomenon which also holds in nonparametric problems: namely,
that low-dimensional structure can shield us from the effects of prior
dogmatism. Rather than assuming that $X_{i}$ is near a hyperplane, we now
assume that $X_{i}$ is concentrated near a smooth manifold of intrinsic
dimension $L$ in $\mathbb{R}^{P}$. Specifically, we take
$\widetilde{X}_{i}=\Lambda(\eta_{i})+\sigma_{x}\,\epsilon_{i},\epsilon_{i}\sim\operatorname{Normal}(0,1)$
where $\Lambda:\mathbb{R}^{L}\to\mathbb{R}^{P}$ is nonlinear; we then scale
$\widetilde{X}_{ij}$ by its standard deviation to get $X_{ij}$, so that
$\sigma_{x}$ indexes how close $X_{i}$ is to
$\mathscr{M}=\\{x:x=\Lambda(\eta),\eta\in\mathbb{R}^{L}\\}$. We randomly generated
$\Lambda(\eta)=\big{(}\Lambda_{1}(\eta),\ldots,\Lambda_{P}(\eta)\big{)}^{\top}$
by generating $P$ independent Gaussian processes using the kernel function
$\rho(\eta,\eta^{\prime})=\exp\\{-\|\eta-\eta^{\prime}\|^{2}_{2}\\}$.
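Covariates concentrated near a one-dimensional manifold of this kind can be generated by drawing each coordinate function $\Lambda_{j}(\cdot)$ from a Gaussian process evaluated at the latent positions. A minimal sketch, in which the jitter term and the latent distribution are assumptions:

```python
import numpy as np

def manifold_covariates(N=300, P=10, sigma_x=0.05, seed=0):
    """X_i = Lambda(eta_i) + sigma_x * noise, with each coordinate Lambda_j an independent
    draw from a GP with kernel exp(-|eta - eta'|^2) evaluated at latent points eta_i."""
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal(N)                              # latent positions (L = 1)
    K = np.exp(-(eta[:, None] - eta[None, :]) ** 2)           # GP kernel on the latent points
    chol = np.linalg.cholesky(K + 1e-6 * np.eye(N))           # jitter for numerical stability
    Lam = chol @ rng.standard_normal((N, P))                  # column j holds Lambda_j(eta_1..eta_N)
    X = Lam + sigma_x * rng.standard_normal((N, P))
    return X / X.std(axis=0)                                  # scale each coordinate by its sd

X = manifold_covariates()
print(X.shape)
```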
After generating the covariates $X_{i}$, we consider a continuous exposure
$A_{i}=r_{a}(X_{i})+\nu_{i}$ and a continuous outcome
$Y_{i}(a)=r_{y}(X_{i})+a\,\gamma+\epsilon_{i}(a)$ with
$\epsilon_{i}(a),\nu_{i}\stackrel{{\scriptstyle\textnormal{iid}}}{{\sim}}\operatorname{Normal}(0,1)$
where $r_{y}(x)=r^{\star}_{y}(x)+r_{a}(x)$. We then generate $r^{\star}_{y}$
and $r_{a}$ as independent Gaussian processes with kernel
$\rho(x,x^{\prime})=\exp\\{-\|x-x^{\prime}\|^{2}_{2}\\}$. Our parameter of
interest is $\gamma$, which represents the causal effect of the exposure on
the outcome. We consider two priors.
Naive
We impose IG.2, but otherwise use the “true” prior for $r^{\star}(x)$ using
the kernel $2\rho(x,x^{\prime})$. We specify a
$\operatorname{Normal}(0,10^{2})$ prior for $\gamma$.
Direct
We use the model
$Y_{i}(a)=r_{y}(X_{i})+\omega\,\widehat{r}_{a}(X_{i})+\gamma\,a$ where
$\widehat{r}_{a}(x)$ is a pilot estimate of $r_{a}(x)$ obtained from fitting a
Gaussian process to the relationship $A_{i}=r_{a}(X_{i})+\nu_{i}$. We specify
a $\operatorname{Normal}(0,10^{2})$ prior for both $\gamma$ and $\omega$.
For the ground truth, we set $\gamma=1$, $L=1$, and consider
$P\in\\{10,200\\}$, $N=300$, and $\sigma_{x}=2^{j}$ where the $j$’s are evenly spaced between $-7$ and $-2$. For each $\sigma_{x}$ and $P$ we
generated $200$ simulated datasets and applied the Direct and Naive methods to
estimate $\gamma$ and construct a 95% credible interval.
Figure 7: Results for the semiparametric regression on a manifold problem of
Section 5. Bias denotes the average bias of $\widehat{\gamma}$, coverage
denotes the coverage of nominal 95% intervals, RMSE denotes the root-mean-
squared-error in estimating $\gamma$, and SE denotes the average posterior
standard deviation of $\gamma$.
Results are summarized in Figure 7, which displays the bias, coverage, RMSE,
and average standard error. For $P=10$, we see similar behavior as we did with
the ridge regression problem: the Direct approach performs uniformly well and
the naive approach performs much better as $\sigma_{x}$ is decreased. While
the naive approach never does catch up to the direct approach, we do see that
its deficiencies are attenuated as the $X_{i}$’s are generated closer and
closer to $\mathscr{M}$. For $P=200$, while the naive approach does perform
better as $\sigma_{x}$ is decreased, the problem appears to be too difficult
for methods which do not explicitly account for dogmatism. The behavior of the
direct approach is now very interesting, however. We note a sharp phase
transition around $\sigma_{x}=0.07$ where the problem essentially goes from
infeasible to feasible — the bias, RMSE, and standard error all decrease
dramatically at this point. We also see that the direct approach is much more
honest in terms of its uncertainty: when the problem is infeasible, the model
correctly gives a large posterior standard error, whereas the naive model is
always overconfident about its predictions.
## 6 Discussion
The main concrete recommendation we make in this article is that, for both
causal inference and missing data problems, Bayesian ignorability (and in
particular IG.2) corresponds to an informative prior on the degree of
selection bias. This causes selection bias parameters to be regularized towards $0$, introducing substantial bias in high-dimensional or nonparametric problems; consequently, IG.2 should not be imposed in most situations. Instead, Bayesians
should reject IG.2 by default in favor of a prior which allows for more direct
control over the selection bias, and we have illustrated how to do this in
several problems of interest. Of secondary interest, we have noted that
certain features of the design can mitigate prior dogmatism about the
selection bias, and showed that both ridge regression priors and Gaussian
process priors possess some degree of adaptivity towards low-dimensional
structures in $X_{i}$. But this does not change our general recommendation, as
we consistently have observed improved performance of priors which reject IG.2
even when such low-dimensional structures exist.
Dogmatism about other features of the model may also have deleterious effects
on our inferences. In future work, we will extend our results to other
problems in causal inference. Two of potential interest are estimation of the
conditional average treatment effect (CATE) in observational studies and
estimation of the natural direct and indirect effects in mediation analysis.
In the latter setting, one must control for two different selection
mechanisms: the effect of the confounders both on the treatment received and
on the mediating variable.
While we have presented a number of corrections for dogmatism, we have not
presented any coherent framework for _deriving_ corrections. This presents an
important question: are there any objective Bayes principles which
automatically lead to priors which adequately account for dogmatism? Certain
strategies, such as using Jeffreys priors, cannot work because they usually
_imply_ that IG.2 holds. By contrast, other objective principles which are not
parameterization invariant and do not necessarily imply IG.2, such as priors constructed from decision-theoretic principles, entropy maximization, and reference priors, have some chance of working (see Kass and Wasserman,, 1996,
for a review). With the exception of entropy-maximizing priors, the
computational difficulty of implementing these priors makes numeric
experimentation difficult. Interestingly, entropy maximization with respect to
the distribution of the observed data can be used to generate models which
possess very strong Frequentist properties, but these models are (in our
opinion) lacking a satisfying justification.
## References
* Chipman et al., (2010) Chipman, H. A., George, E. I., and McCulloch, R. E. (2010). BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1):266–298.
* Couillet and Debbah, (2011) Couillet, R. and Debbah, M. (2011). Random Matrix Methods for Wireless Communications. Cambridge University Press.
* Dicker, (2016) Dicker, L. H. (2016). Ridge regression and asymptotic minimax estimation over spheres of growing dimension. Bernoulli, 22(1):1–37.
* Dobriban and Wager, (2018) Dobriban, E. and Wager, S. (2018). High-dimensional asymptotics of prediction: Ridge regression and classification. The Annals of Statistics, 46(1):247–279.
* Hahn et al., (2018) Hahn, P. R., Carvalho, C. M., Puelz, D., and He, J. (2018). Regularization and confounding in linear regression for treatment effect estimation. Bayesian Analysis, 13(1):163–182.
* Hahn et al., (2020) Hahn, P. R., Murray, J. S., and Carvalho, C. M. (2020). Bayesian regression tree models for causal inference: Regularization, confounding, and heterogeneous effects (with discussion). Bayesian Analysis, 15(3):965–1056.
* Hill, (2011) Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20(1):217–240.
* Hirano et al., (2003) Hirano, K., Imbens, G. W., and Ridder, G. (2003). Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71(4):1161–1189.
* Imai et al., (2010) Imai, K., Keele, L., and Tingley, D. (2010). A general approach to causal mediation analysis. Psychological Methods, 15(4):309.
* Kass and Wasserman, (1996) Kass, R. E. and Wasserman, L. (1996). The selection of prior distributions by formal rules. Journal of the American Statistical Association, 91(435):1343–1370.
* National Research Council, (2010) National Research Council (2010). The Prevention and Treatment of Missing Data in Clinical Trials. The National Academies Press.
* Rasmussen and Williams, (2006) Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). MIT Press, Cambridge.
* Ray and van der Vaart, (2020) Ray, K. and van der Vaart, A. (2020). Semiparametric Bayesian causal inference. The Annals of Statistics, 48(5):2999 – 3020.
* Ren et al., (2021) Ren, B., Wu, X., Braun, D., Pillai, N., and Dominici, F. (2021). Bayesian modeling for exposure response curve via Gaussian processes: Causal effects of exposure to air pollution on health outcomes. arXiv preprint arXiv:2105.03454.
* Robert, (2007) Robert, C. (2007). The Bayesian choice: from decision-theoretic foundations to computational implementation. Springer Science & Business Media.
* Robins and Ritov, (1997) Robins, J. M. and Ritov, Y. (1997). Toward a curse of dimensionality appropriate (CODA) asymptotic theory for semi-parametric models. Statistics in medicine, 16(3):285–319.
* Rosenbaum and Rubin, (1983) Rosenbaum, P. R. and Rubin, D. B. (1983). The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55.
* Rubin, (1976) Rubin, D. B. (1976). Inference and missing data. Biometrika, 63:581–592.
* Rubin, (1985) Rubin, D. B. (1985). The use of propensity scores in applied Bayesian inference. Bayesian Statistics, 2:463–472.
* Rubin, (2005) Rubin, D. B. (2005). Causal inference using potential outcomes. Journal of the American Statistical Association, 100(469).
* Seaman et al., (2013) Seaman, S., Galati, J., Jackson, D., and Carlin, J. (2013). What is meant by “missing at random”? Statistical Science, 28(2):257–268.
* van der Laan and Rose, (2011) van der Laan, M. J. and Rose, S. (2011). Targeted Learning: Causal Inference for Observational and Experimental Data. Springer Science & Business Media.
* Zhou et al., (2019) Zhou, T., Elliott, M. R., and Little, R. (2019). Penalized spline of propensity methods for treatment comparison. Journal of the American Statistical Association, 114(525):1–19.
* Zigler et al., (2013) Zigler, C. M., Watts, K., Yeh, R. W., Wang, Y., Coull, B. A., and Dominici, F. (2013). Model feedback in Bayesian propensity score estimation. Biometrics, 69(1):263–273.
Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation
Mimansa Jaiswal
Doctor of Philosophy
Computer Science and Engineering
2023
Professor Emily Mower Provost, Chair
Professor Nikola Banovic
Professor Benjamin Fish
Douwe Kiela, Contextual AI
Professor V.G. Vinod Vydiswaran
Mimansa Jaiswal
<EMAIL_ADDRESS>
ORCID iD: 0009-0001-8290-7743
Dedicated to my parents and my late granddad, B.P. Bhagat, and my late
grandmom, Smt. Radha Devi for their boundless love and support
ACKNOWLEDGEMENTS
I would like to start by expressing my profound gratitude towards my PhD
advisor, Emily (Emily Mower Provost). She has been my go-to person for any
research brainstorming and has shown me tremendous patience, support, and
guidance throughout my PhD journey. Without her persistence and suggestions,
completing this PhD would not have been possible. I am also incredibly
grateful for my thesis committee members: Vinod (VG Vinod Vydiswaran), Nikola
(Nikola Banovic), Benjamin (Benjamin Fish), and Douwe (Douwe Kiela). Their
valuable insights during my thesis proposal helped shape the final version of
the thesis.
For putting up with me during my PhD journey, my immense gratitude goes
towards my family, especially my parents, grandparents, and Bert. My parents
have been a rock for me over the past six years. Though I could not visit them
often or talk to them much, they were always there when I needed someone. They
seem to have aged fifteen years in the six years of my PhD, stressed about me,
but their support never wavered. My mom (Archana Kumari) received her own PhD
in 2021, and my dad (Abhay Kumar) became the Vice-Chancellor of a new IIIT—two
accomplishments that were their lifelong dreams and inspired me immensely. I
unfortunately lost two of my grandparents during the PhD program, and I will
never forget their blessings and excitement for me embarking on my higher
education journey. During early 2020, in the midst of COVID, I adopted a cat
named Bert—yes, named after the language model. Without him, I would not have
maintained my sanity during the dark, lonely nights and tiring, long work
days. His purring loudly into my ear calmed me down on the worst of nights.
I was lucky enough to secure three internships and have amazing research
mentors for all of them. Ahmad (Ahmad Beirami) taught me how to approach
Conversational AI, how to create effective presentations, and how to write
research proposals. Adina (Adina Williams) taught me how to work with
linguistics mixed in with NLP, and how subjectivity can infiltrate seemingly
objective parts (like NLI) of NLP. Ana (Ana Marasović) was the first person I
worked with on really large language models (foundational models), and she
taught me how to approach evaluation and benchmarking for generative models—a
major part of my current research path.
I want to thank my lab members, starting with Zak (Zakaria Aldeneh). Zak
exemplifies what all senior PhD mentors should be, helping me with code,
brainstorming, and working with me on papers. He has been an amazing research
collaborator. I also want to thank Minxue (Minxue Sandy Niu) for being the
junior research collaborator anyone would be proud of. She has not only been
an amazing collaborator but was also always willing to discuss interesting
research problems. I want to thank Matt (Matthew Perez) for being the
batchmate who has always been there to help, to vent, to advise, and to
collaborate, serving as my go-to person for any speech-based research
questions. Finally, I want to thank Amrit (Amrit Romana) for being an amazing
lab member; her observant questions helped me immensely during lab
presentations.
I also want to thank my friends, without whom this journey would not have been
possible. I will start with Abhinav (Abhinav Jangda), who has been my support
system throughout my PhD journey, starting from the application process.
Diksha (Diksha Dhawan) was the best PhD roommate one could ask for during the
first four years of my PhD. She shared laughter and tears with me, cooked with
me, and supported me through all the highs and lows. Without her, I could not
have survived my PhD. She taught me the value of being proud of my interests
in both my personal and professional life, and how friends can sometimes be
family, which is the best gift anyone could have given me. Eesh (Sudheesh
Srivastava), for all the conversations at the intersection of machine
learning, physics, and philosophy, has taught me about areas and theories that
I would have otherwise not encountered in any way or format. Conversations
with him have always left me rejuvenated, happy, and feeling peppier—a
testament to how amazing a best friend he is. Sagarika (Sagarika Srishti), for
all her support, both in India and when she came to the US. Her move to the US
during my PhD was a major personal highlight. Ariba (Ariba Javed), thanks for
all the discussions, talks, and emotional conversations, and for always being
up for anything interesting, including a pottery class. Shobhit (Shobhit
Narain) has been an amazing companion, helping me with job applications and
always being the sarcastic, serious, yet most helpful guy I have had the
pleasure of calling a friend. And finally, Sai (Sairam Tabibu) helped me fill
out the PhD application for UMich on the exact deadline, without which, I
would not be here at all.
This is probably an unconventional paragraph in acknowledgments, but these
were unconventional times during COVID. For the two years of lockdown, I
turned to Among Us when I felt lonely or lost in my research. I am really
thankful for the streamers whose broadcasts provided some semblance of social
interaction. For almost three years, I watched them stream at least 8 hours a
day while I worked, to simulate a social environment. And when my research
progress stalled, I turned to anonymous Discord communities, playing Among Us
and golf for hours, which helped alleviate feelings of depression and sadness,
providing a much-needed uplift.
My PhD journey wasn’t easy, and a lot happened over the six years, but I made
it through. The credit for that goes to all the people mentioned here, to whom
I am forever indebted.
ABSTRACT
Emotion recognition is a complex task due to the inherent subjectivity in both
the perception and production of emotions. The subjectivity of emotions poses
significant challenges in developing accurate and robust computational models.
This thesis examines critical facets of emotion recognition, beginning with
the collection of diverse datasets that account for psychological factors in
emotion production. To address these complexities, the thesis makes several
key contributions.
To handle the challenge of non-representative training data, this work
collects the Multimodal Stressed Emotion dataset, which introduces controlled
stressors during data collection to better represent real-world influences on
emotion production. To address issues with label subjectivity, this research
comprehensively analyzes how data augmentation techniques and annotation
schemes impact emotion perception and annotator labels. It further handles
natural confounding variables and variations by employing adversarial networks
to isolate key factors like stress from learned emotion representations during
model training. For tackling concerns about leakage of sensitive demographic
variables, this work leverages adversarial learning to strip sensitive
demographic information from multimodal encodings. Additionally, it proposes
optimized sociological evaluation metrics aligned with cost-effective, real-
world needs for model testing.
The findings from this research provide valuable insights into the nuances of
emotion labeling, modeling techniques, and interpretation frameworks for
robust emotion recognition. The novel datasets collected help encapsulate the
environmental and personal variability prevalent in real-world emotion
expression. The data augmentation and annotation studies improve label
consistency by accounting for subjectivity in emotion perception. The
stressor-controlled models enhance adaptability and generalizability across
diverse contexts and datasets. The bimodal adversarial networks aid in
generating representations that avoid leakage of sensitive user information.
Finally, the optimized sociological evaluation metrics reduce reliance on
extensive expensive human annotations for model assessment.
This research advances robust, practical emotion recognition through
multifaceted studies of challenges in datasets, labels, modeling, demographic
and membership variable encoding in representations, and evaluation. The
groundwork has been laid for cost-effective, generalizable emotion recognition
models that are less likely to encode sensitive demographic information.
## Chapter I Introduction
In human communication, perceiving and responding to others’ emotions in
interpersonal conversations play a crucial role [76]. To create systems that
can aid in human-centered interpersonal situations, it is necessary for these
systems to possess the capability to recognize emotions effectively [220].
Robust Emotion Recognition (ER) models can be beneficial in various
situations, such as crisis text lines or passive mental health monitoring
[160]. However, these ML models often lack robustness when faced with unseen
data, making it challenging to deploy them in high-risk settings such as
healthcare [241].
Recognizing emotion is a challenging task because it is subjective in both
perception and production [178]. The labels used to train emotion recognition
models are perceptually subjective [28]. The same emotion can be perceived
differently by different people, depending on their cultural background,
personal experiences, and other factors [147]. Additionally, there is
production subjectivity. The same emotion can be expressed differently by
different people, depending on their individual personality, cultural
background, physiological state, and other factors [13]. The subjectivity of emotion
recognition makes it difficult to develop accurate and robust models that
account for these numerous variations [221].
In addition to the challenges posed by subjectivity, there are challenges
related to the information that models learn beyond the expression of emotion
itself. The manner in which emotions are expressed is correlated with a
person’s demographic and identifying features. Hence,
systems trained to recognize emotion can often learn implicit associations
between an individual’s demographic factors and emotion [208]. When used as a
component in larger systems, these implicit associations can lead to either
the leakage of demographic information, or can bias the larger system’s output
based on demographic information, even when not explicitly trained to do so.
Training any robust machine learning model necessitates having access to large
amounts of diverse and labelled data. Training models for emotion recognition
faces the challenge of not having access to large quantities of diverse data.
Scraping data from the internet, as is done in other areas, leads to a dataset
that is often demographically biased and often exaggerated for entertainment
purposes. On the other hand, data collected in laboratory environments is
intentionally cleaner and often exaggerated in the case of scripted sessions.
Therefore, both of these data collection methods fail to
encapsulate possible environmental and personal factors, which leads to models
often being trained on either highly skewed or non-representative data. The
resulting models are either fragile or biased, and ultimately unable to handle
real-world variability.
In this dissertation, critical facets of emotion recognition are thoroughly
explored, beginning with the collection of datasets, which take into account
psychological factors in producing emotions. This is followed closely by
examining the influence that alterations in data augmentation processes have
on emotion labels, while also challenging and interrogating the validity of
previously established labels. Alterations in labeling techniques and the
resulting effects on annotator-assigned labels are also scrutinized.
Simultaneously, the research develops robust models specifically trained to
disregard certain physiological emotion production factors. Integral to the
research is the creation of bimodal models that generate representations aimed
at reducing the leakage of sensitive demographic variables.
The concluding portion of the study involves an in-depth evaluation of the
robustness and impartiality of these models, carried out in a human-centric
manner, ensuring an emphasis on minimal costs for data annotation. From this
extensive research, valuable insights are gained into the complexities of
emotion recognition, which pave the way for more nuanced and robust labeling,
modeling, and interpretation techniques. It also lays the groundwork for
future efforts in the development of robust and cost-effective emotion
recognition models.
### 1.1 Emotion Theories and the Impact on Emotion Recognition Model
Development
To better understand the subjectivity inherent in emotion recognition and its
correlation with the research gaps and challenges, we must first explore the
contrasts between emotion production and emotion perception theories. These
theories elucidate the distinct factors related to the subjectivity of
emotions in both production and recognition processes and offer valuable
insights for developing robust and unbiased emotion recognition models.
#### 1.1.1 Emotion Production and Emotion Perception
Emotion production refers to experiencing and generating emotional responses,
encompassing several factors, including cognitive appraisal, physiological
response, behavior and expression, and subjective experience. These components
work together to create the unique process of producing emotions within each
person.
Emotion perception, conversely, focuses on recognizing and interpreting
others’ emotional signals, influenced by factors such as emotional cues,
context and environment, past experiences and learning, and individual
differences. This process involves making sense of others’ emotions based on
various internal and external factors.
#### 1.1.2 Emotion Theories, Research Challenges, and Implications for
Emotion Recognition
Various theories of emotion provide insights into the challenges faced in
developing computational models for emotion recognition in speech or text.
Below, we discuss the relevance and implications of some prominent theories in
the context of speech or text-based (bimodal) emotion recognition.
* •
James-Lange Theory and Cannon-Bard Theory [204]: Both theories emphasize
physiological responses’ importance in emotion. In speech or text-based
recognition, it is vital to consider correlations between observable features
(e.g., vocal tonality, speech patterns) and underlying physiological
responses. Accounting for these correlations can help capture emotions, even
though the relationship might be subjective due to personal and cultural
differences.
* •
Schachter-Singer Two-Factor Theory [204]: This theory stresses the importance
of both physiological arousal and cognitive appraisal for experiencing
emotions. In speech or text-based emotion recognition, cognitive appraisal
aspects such as semantic content, contextual factors, and discourse patterns
can be extracted. However, the subjectivity of cognitive appraisal processes
presents challenges given personal experiences’ impact on interpretation.
* •
Lazarus Cognitive-Mediational Theory [204]: Centered around the role of
cognitive appraisal, this theory highlights the need for emotion recognition
systems to account for individuals’ interpretations of situations through cues
that may suggest appraisal (e.g., word choice, phrase structure,
conversational context). Advanced models might need to factor in users’
personal and demographic features to better understand cognitive appraisal
processes. This approach introduces more subjectivity and potential privacy
concerns, as individual perspectives and experiences can vary significantly.
Integrating insights from these theories can aid in unraveling the complexities
and subjective nature of emotions expressed through language, as speech or
text-based emotion recognition relies primarily on linguistic patterns, tone,
and content analysis.
#### 1.1.3 Addressing Challenges Through Thesis Contributions
The thesis contributions align with and address the subjectivity challenges in
emotion production and perception, thus tackling the complexities involved in
developing robust and unbiased emotion recognition models.
* •
Collecting datasets that account for psychological factors in emotion
production: By considering psychological factors influencing unique emotional
experiences, more diverse datasets are created, allowing models to account for
subjectivity in emotion production and generalize across emotions.
* •
Examining the influence of data augmentation processes on emotion perception
labels: This contribution seeks to understand data augmentation’s impact on
ground truth labels, creating better representations of emotions in the
datasets, accounting for subjectivity in emotion perception.
* •
Analyzing labeling setups’ impact on annotators’ emotion perception labels:
This investigates how labeling setups influence emotion perception, aiming to
improve label consistency and reduce inter-annotator disagreement, thus better
representing subjectivity in emotion perception.
* •
Training robust models by explicitly disregarding emotion production factors:
This minimizes the impact of subjective elements associated with emotion
production, enabling models to focus on core emotional cues.
* •
Developing bimodal models that generate debiased emotion representations and
reduce the encoding of demographic and membership information: This
creates models that consider multiple emotional cues while disregarding
sensitive features, addressing subjectivity challenges in both emotion
production and perception.
* •
Evaluating models in a human-centric manner: Designing evaluation methods
aligned with real-world expectations and without incurring significant
annotation costs ensures the models effectively tackle subjectivity challenges
in a practical way.
By focusing on these contributions, the thesis emphasizes the connection
between emotion production and perception’s subjectivity and its influence on
model development, advancing the creation of more robust and unbiased emotion
recognition models.
### 1.2 Emotion Recognition
Emotion recognition models are customarily trained using laboratory-collected
data encompassing video, audio, and corresponding text. These algorithms
strive to capture the speaker’s underlying emotional state either autonomously
or as part of a larger pipeline, such as response generation. Supervised
learning techniques predominantly train these models. Obtaining ground truth
labels for the dataset samples is crucial for successfully training a
supervised learning model. The emotion theories presented earlier are
intrinsically linked with the complexity of emotion recognition. Understanding
the interplay between these theories and model development is essential.
#### 1.2.1 Emotion Labels
Emotion labels typically fall into two categories: categorical and
dimensional. Categorical variables aim to discretely categorize emotion
attributes, such as excitement, happiness, anger, or sadness. These labels’
limitations align with the James-Lange and Cannon-Bard theories—emotions are
subjective, making it difficult to define universal emotions across cultures.
This subjectivity is intensified by both personal physiological responses to
stimuli and cultural context.
Dimensional emotional labels describe emotions across two dimensions, valence
(sad to happy) and arousal (calm to excited). The dimensional approach is more
consistent with the James-Lange and Cannon-Bard theories, addressing the
physiological components of emotions, as well as the cognitive components
emphasized by the Schachter-Singer Two-Factor Theory and Lazarus Cognitive-
Mediational Theory. However, these dimensional labels also face the challenge
of cultural and personal influences on the perception and expression of
emotions.
#### 1.2.2 Emotion Features
Three primary modalities are used in combination to train emotion recognition
models: text, audio, and video. This thesis focuses predominantly on audio and
its corresponding text as the feature set for these models.
Mel-filterbanks (MFBs) are often used as inputs to neural network models in
speech. MFBs can capture correlations between vocal tonality, speech patterns,
and underlying physiological responses. Nevertheless, factors like pitch,
volume, or other nuances of speech may be affected by cultural and linguistic
contexts. Furthermore, personal characteristics can influence these features,
further complicating emotion recognition in cross-cultural or highly diverse
settings.
Language features, which provide contextualized representations for words,
capture the cognitive appraisal aspects (semantic content, contextual factors,
and discourse patterns). The Lazarus Cognitive-Mediational Theory further
highlights the need for models that account for user demographics. More
advanced models may need to balance the understanding of individual emotions
with ethical considerations.
#### 1.2.3 Emotion Recognition Models
Audio-based emotion recognition models initially relied on Hidden Markov
Models (HMMs) or Gaussian Mixture Models (GMMs) and later shifted focus to
LSTMs and RNNs. These models aim to capture the dynamic and time-varying
nature of speech, reflecting the James-Lange Theory and Cannon-Bard Theory’s
emphasis on physiological responses. However, these models must also account
for the inherent cultural and linguistic differences in the way emotions are
expressed through speech.
Language-based models, like recent advances in transformer architectures,
address long and indirect contextual information challenges, in line with the
Schachter-Singer Two-Factor Theory’s cognitive appraisal aspects. These models
strive to understand the nuances of language, cultural expressions, and
individual semantic and contextual differences in recognizing emotions.
Multi-modal models exploit relevant information from text, audio, or video to
form powerful emotion recognition models. Informed by the emotion theories,
these models take into account the subjectivity of emotions by leveraging
different modalities to discern the nuances of emotion expression. By
combining these modes, models can better account for the emotional complexity
that arises from intercultural and personal differences in perception,
expression, and context.
### 1.3 Challenges in Emotion Recognition
The variable and subjective nature of emotions makes it challenging to train
models that can accurately identify emotion in any given scenario. Addressing
three major challenges is necessary for any emotion recognition model deployed
in a real-world setting: (a) Non-representative training data, (b) Subjective
labels, (c) Unintentional encoding and leakage of sensitive information.
Previous work has explored various ways to counter these challenges, discussed
in detail in Chapter II, Section 2.1.
#### 1.3.1 Non-representative data
Emotion production in real-world settings is influenced by various factors,
including data collection settings, demographics, and personal factors.
Addressing these confounding factors aligns with the implications of the
earlier-discussed emotion theories. Researchers can tackle this challenge by
developing more robust models, incorporating real-world variability through
dataset augmentation or mitigating confounding factors.
#### 1.3.2 Label Subjectivity
As highlighted in the emotion theories, emotions are inherently subjective and
deeply influenced by personal experiences, culture, and context. This
subjectivity leads to difficulty in pinpointing an objective and universal
ground truth for training emotion recognition models. Researchers should
account for label subjectivity by using diverse and representative datasets,
annotations from multiple sources, and considering multiple emotion theories
during the model design process.
#### 1.3.3 Unintentional encoding and leakage of sensitive information
Variability can lead to unintentional encoding and leakage of sensitive
information, particularly in human-centered tasks such as emotion recognition,
because the associative nature of the task with sensitive demographic variables
may inadvertently lead models to encode personal information.
### 1.4 Proposed Methods
A robust and effective emotion recognition system must successfully navigate a
range of challenges, including addressing subjectivity in emotion production
and perception, handling natural variations and confounding variables,
reducing encoded sensitive information, and providing relevant evaluation
metrics. Here, we present a series of proposed methods aligned with the
outlined contributions to address these challenges.
#### 1.4.1 Dataset Collection for Emotion Recognition
Tackling the challenge of subjectivity in emotion production, it’s essential
that we consider the issues in widely used emotion recognition datasets that
arise due to design choices, methodology of data collection, and inherent
subjectivity. Emotion datasets traditionally aim for minimal variation to
ensure generalizability. However, this can result in non-robust models that
struggle with unexpected variability. We propose the construction and
validation of a new dataset called Multimodal Stressed Emotion (MuSE), which
introduces a controlled situational confounder (stress) to better account for
subjectivity. In addition, we discuss the use of domain adversarial networks
to achieve more stable and reliable cross-corpus generalization while avoiding
undesired characteristics in encodings.
#### 1.4.2 Data Augmentation with Noise in Emotion Datasets
Addressing the challenge of subjectivity in emotion perception, we examine
data augmentation with noise in emotion datasets, focusing on the Interactive
Emotional Dyadic Motion Capture (IEMOCAP) dataset, which features dyadic
interactions with text, video, and audio modalities. Introducing realistic
noisy samples through environmental and synthetic noise, we evaluate how
ground truth and predicted labels change due to noise sources. We discuss the
effects of commonly used noisy augmentation techniques on human emotion
perception, potential inaccuracies in model robustness testing, and provide
recommendations for noise-based augmentation and model deployment.
#### 1.4.3 Annotations of Emotion Datasets
To further address subjectivity in emotion perception, we investigate how
design choices in the annotation collection process impact the performance of
trained models. Focusing on contextual biasing, we examine how annotators
perceive emotions differently in the presence or absence of context. Commonly-
used emotion datasets often involve annotators who have knowledge of previous
sentences, but models are frequently evaluated on individual utterances. We
explore the implications of this discrepancy on model evaluation, and its
potential for generating errors.
#### 1.4.4 Methods for Handling Natural Variations and Confounding Variables
As mentioned earlier, we collect a dataset of differences in similar emotion
production under varying levels of stress. Emotion recognition models may
spuriously correlate these stress-based factors to perceived emotion labels,
which could limit generalization to other datasets. Consequently, we
hypothesize that controlling for stress variations can improve the models’
generalizability. To achieve this, we employ adversarial networks to
decorrelate stress modulations from emotion representations, examining the
impact of stress on both acoustic and lexical emotion predictions. By
isolating stress-related factors from emotion representations, we aim to
enhance the model’s ability to generalize across different stress conditions.
Furthermore, we analyze the transferability of these refined emotion
recognition models across various domains, assessing their adaptability to
evolving contexts and scenarios. Ultimately, our approach aims to improve
emotion recognition model robustness by addressing the inherent variability of
emotional expression due to stress and ensuring greater applicability across
multiple domains.
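This approach is explored in Chapter VII; as a rough sketch of one common way such decorrelation is realized, the PyTorch fragment below attaches a gradient-reversal branch to a shared encoder so that the representation stays predictive of emotion while gradients from a stress classifier push stress information out of it. All module names, layer sizes, and the loss weighting are illustrative assumptions, not the exact architecture used in this thesis.
```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class StressAdversarialEmotionModel(nn.Module):
    """Hypothetical sketch: a shared encoder feeds an emotion head directly and a
    stress head through gradient reversal, so training discourages the encoder
    from carrying stress information."""
    def __init__(self, input_dim=40, hidden_dim=128, n_emotion=3, n_stress=3, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.emotion_head = nn.Linear(hidden_dim, n_emotion)
        self.stress_head = nn.Linear(hidden_dim, n_stress)

    def forward(self, x):                      # x: (batch, frames, input_dim)
        _, h = self.encoder(x)                 # h: (1, batch, hidden_dim)
        rep = h.squeeze(0)
        emotion_logits = self.emotion_head(rep)
        stress_logits = self.stress_head(GradReverse.apply(rep, self.lamb))
        return emotion_logits, stress_logits

# Training step (sketch): minimizing both cross-entropy terms trains the stress
# head as a discriminator, while the reversed gradients strip stress cues from rep:
# loss = ce(emotion_logits, y_emotion) + ce(stress_logits, y_stress)
```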
#### 1.4.5 Approaches for Tackling Sensitive Information Leakage in Trained
Emotion Recognition Models
Emotions are inherently related to demographic factors such as gender, age,
and race. Consequently, emotion recognition models often learn these latent
variables even if they are not explicitly trained to do so. This learning
behavior poses a risk to user privacy, as the models inadvertently capture
sensitive demographic information. Storing representations instead of raw data
does not fully mitigate this issue, as latent variables can still compromise
user privacy. To address this challenge, we present approaches for mitigating
the learning of certain demographic factors in emotion recognition embeddings.
Furthermore, we tackle the issue of user-level membership identification by
employing an adversarial network that strips this information from the final
encoding, reducing the leakage of sensitive information from the generated
representations.
#### 1.4.6 Methods for Model Evaluation and Perception
Large language models face limitations in subjective tasks like emotion
recognition due to inadequate annotation diversity and data coverage.
Acquiring comprehensive annotations and evaluations is often costly and time-
consuming. To address these challenges, we propose cost-effective sociological
metrics for emotion generalization and reduced demographic variable leakage.
These metrics reduce reliance on expensive human-based feedback while still
capturing the nuances of human emotions. By evaluating model performance and
demographic variables encoded in generated representations, the proposed
metrics improve cross-corpus results and allow for the development of
accurate, relevant emotion recognition models in a more economical manner.
### 1.5 Contributions
This dissertation proposes several investigations and novel solutions to
address various concerns related to real-world emotion recognition model
deployment.
The contributions of the works in this dissertation can be summarized as
follows:
* •
Chapter IV:
* –
Introduction of Multimodal Stressed Emotion (MuSE) dataset.
* –
Detailed data collection protocol.
* –
Potential uses and emotion content annotations.
* –
Performance measuring baselines for emotion and stress classification.
* •
Chapter V:
* –
The impact of various factors, such as noise, on speech emotion recognition.
* –
Investigation of noise-altered annotation labels and their aftermath.
* –
Consequences on evaluation of ML models considering noise.
* –
Specific recommendations for noise augmentations in emotion recognition
datasets.
* •
Chapter VI:
* –
Crowdsourced experiments to study the subjectivity in emotion expression and
perception.
* –
Contextual and randomized annotation schemes of the MuSE dataset.
* –
Comparative analysis revealing contextual scheme’s closeness to speaker’s
self-reported labels.
* •
Chapter VII:
* –
Examination of emotion expressions under stress variations.
* –
Utilization of adversarial networks to separate stress modulations from
emotion representations.
* –
Exploration of stress’s impact on acoustic and lexical emotional predictions.
* –
Evidence of improved generalizability with stress control during model
training.
* •
Chapter VIII:
* –
Highlighting the unintentional leak of sensitive demographic information in
multimodal representations.
* –
Use of adversarial learning paradigm to improve sensitive information
reduction metric.
* –
Maintenance of primary task performance, despite improvements to privacy.
* •
Chapter IX:
* –
New template formulation to derive human-centered, optimizable and cost-
effective metrics.
* –
Correlation establishment between emotion recognition performance, biased
representations and derived metrics.
* –
Employment of metrics for training an emotion recognition model with increased
generalizability and decreased bias.
* –
Finding of positive correlation between proposed metrics and user preference.
### 1.6 Outline of the dissertation
The dissertation begins with Chapter II, which delves into a comprehensive
review of pertinent literature spanning emotion recognition, privacy
preservation, adversarial networks, model interpretability, and crowdsourcing
designs. Chapter III then introduces the common datasets and features employed
throughout this research.
Subsequent chapters, from Chapter IV to IX, engage in a thorough exploration
and discussion of the research work undertaken, characterized in the
Contributions section. Lastly, Chapter X serves as a conclusive summary
encapsulating the primary contributions made, elaborating on the proposed
future works.
## Chapter II Related Work: Modeling Emotions
Emotion recognition is a complex, multifaceted field drawing on various
research areas. This chapter explores the various methods and considerations
in this field, from the use of crowdsourcing to the importance of context, and
from handling confounding factors to the impact of noise on machine learning
models. We explore the ethical considerations of unintentional encoding of
sensitive variables in data collection and neural networks, the role of
interpretability in model trustworthiness, and the importance of automating
human in the loop feedback. We also delve into the challenge of
generalizability in emotion recognition.
### 2.1 Concerns with Emotion Recognition Datasets
Some aspects of the above-mentioned datasets limit their applicability,
including: a lack of naturalness, unbalanced emotion content, unmeasured
confounding variables, small size, small number of speakers, and presence of
background noise. These datasets are also limited in the number of modalities
they use, usually relying on visual and acoustic/lexical information.
#### 2.1.1 Recorded Modalities
As shown in Table 2.1, the most common modalities are video, acoustics, and
text. In addition to these modalities, we chose to record two more modalities:
thermal and physiological. Previous research has shown that thermal recordings
perform well as non-invasive measurements of physiological markers such as
cardiac pulse and skin temperature [173, 172, 80]. They have been shown to be
correlated to stress symptoms, among other physiological measures. We used the
physiological modality to measure stress responses [234, 210] to psychological
stressors. This modality has been previously noted in literature for measuring
stress [96], usually measured in polygraph tests. We perform baseline
experiments to show that the modalities collected in the dataset are indeed
informative for identifying stress and emotion.
#### 2.1.2 Lack of Naturalness
A common data collection paradigm for emotion is to ask actors to portray
particular emotions. These are usually either short snippets of information
[36], a single sentence in a situation [38], or obtained from sitcoms and
rehearsed broadcasts [47]. A common problem with this approach is that the
resulting emotion display is not natural [113]. These are more exaggerated
versions of singular emotion expression rather than the general, and messier,
emotion expressions that are common in the real world [12, 21, 72]. Further,
expressions in the real world are influenced by both conversation setting and
psychological setting. While some datasets have also collected spontaneous
data [36, 38], these utterances, though emotionally situated, are often
neutral in content when annotated. The usual way to get natural emotional data
is to either collect data using specific triggers that have been known to
elicit a certain kind of response or to rely completely on in-the-wild data,
which, however, often leads to unbalanced emotional content in the dataset
[183].
#### 2.1.3 Unbalanced Emotion Content
In-the-wild datasets are becoming more popular [47, 118, 138]. The usual
limitation to this methodology is that, firstly, for most people, many
conversations are neutral in emotion expression. This leads to a considerable
class imbalance [183]. To counter this issue, MSP-Podcast [143] deals with
unbalanced content by pre-selecting segments that are more likely to have
emotional content. Secondly, data collected in particular settings, e.g.,
therapy [162], or patients with clinical issues [130], comprise mostly
negative emotions because of the recruitment method used in the collection
protocol.
#### 2.1.4 Presence of Interactional Variables
The common way of inducing emotions involves either improvisation prompts or
scripted scenarios. Emotion has been shown to vary with a lot of factors that
are different from the intended induction [198, 240, 156]. These factors in
general can be classified into: (a) recording environment confounders and (b)
collection confounders. Recording environment-based variables hamper the
models’ ability to learn the emotion accurately. These can be environmental
noise [16], placement of sensors, or simply ambient temperature [31].
Table 2.1: Summary of some of the existing emotion corpora. Lexical modality
is mentioned for manually transcribed datasets. A - Audio, L - Lexical, T -
Thermal, V - Visual, P - Physiological.
| | Corpus | Size | Speakers | Rec. Type | Language | Modality | Annotation Type |
|---|---|---|---|---|---|---|---|
| 1. | IEMOCAP | 12h26m | 10 | improv/acted | English | A, V, L | Ordinal, Categorical |
| 2. | MSP-Improv | 9h35m | 12 | improv/acted | English | A, V | Ordinal |
| 3. | VAM | 12h | 47 | spontaneous | German | A, V | Ordinal |
| 4. | SEMAINE | 6h21m | 20 | spontaneous | English | A, V | Ordinal, Categorical |
| 5. | RECOLA | 2h50m | 46 | spontaneous | French | A, V, P | Ordinal |
| 6. | FAU-AIBO | 9h12m | 51 | spontaneous | German | A, L | Categorical |
| 7. | TUM AVIC | 0h23m | 21 | spontaneous | English | A, V, L | Categorical |
| 8. | Emotion Lines | 30k samples | - | spont/scripted | English | A, L | Categorical |
| 9. | OMG-Emotion | 2.4k samples | - | spontaneous | English | A, V, L | Ordinal |
| 10. | MSP-Podcast | 27h42m | 151 | spontaneous | English | A | Ordinal, Categorical |
| 11. | MuSE | 10h | 28 | spontaneous | English | A, V, L, T, P | Ordinal (Random, Context) |
#### 2.1.5 Demographics in Dataset Collection Recruitment
The data collection variations influence both the data generation and data
annotation stages. The most common confounders are gender, i.e., ensuring an
adequate mix of male vs female, and culture, i.e., having a representative
sample to train a more general classifier. Another confounding factor is
personality traits [242], which influence how a person both produces [242] and
perceives [158] emotion. Another confounder that can occur at the collection
stage is the familiarity between the participants, as in RECOLA [183], where
most of the samples ended up being positive due to the colloquial interaction
between the participants. Existing datasets also do not account for the
psychological state of the participant. Psychological factors such as stress
[132], anxiety [229], and fatigue [26] have previously been shown to have a
significant impact on the display of emotion. However, the relationship between
these psychological factors and the performance of models trained to classify
emotions in these situations has not been studied.
### 2.2 Crowdsourcing and Context in Emotion Recognition
Crowdsourcing has emerged as a highly efficient approach for gathering
dependable emotion labels, as extensively investigated by Burmania et al.
[33]. In addition to this, previous studies have concentrated on enhancing the
dependability of annotations by employing quality-control methods. For
instance, Soleymani et al. [201] have proposed the utilization of
qualification tests to weed out spammers from the crowd, thereby ensuring the
quality of collected data. Furthermore, Burmania et al. [35] have explored the
use of gold-standard samples to continuously monitor the reliability and
fatigue levels of annotators.
The interpretation of emotions is heavily influenced by the context in which
they are expressed. Various factors such as tone, choice of words, and facial
expressions can significantly impact how individuals perceive and understand
emotions [129]. It is noteworthy that this contextual information is
implicitly incorporated in the labeling schemes of commonly used emotion
datasets like IEMOCAP [36] and MSP-Improv [38]. However, a notable disparity
often exists between the information available to human annotators and that
accessible to emotion classification systems. This discrepancy arises because
emotion recognition systems are typically trained on individual utterances
[10, 3, 157, 190].
### 2.3 Handling Confounding Factors
#### 2.3.1 Singularly Labeled or Unlabeled Factors
To address confounding factors that are either labeled singularly or cannot be
labeled, researchers have devised specific methods. For instance, Ben-David et
al. [23] conducted a study wherein they showed that a sentiment classifier,
trained to predict the sentiment expressed in reviews, could also implicitly
learn to predict the category of the products being reviewed. This finding
highlights the potential of classifiers to capture additional information
beyond their primary task. In a similar vein, Shinohara [196] employed an
adversarial approach to train noise-robust networks for automatic speech
recognition. By leveraging this technique, Shinohara aimed to enhance the
network’s ability to handle noisy and distorted speech signals.
#### 2.3.2 Explicitly Labeled Factors
In addition to addressing confounding factors that are singularly or
unlabeled, researchers have also developed methods to handle confounding
factors that are explicitly labeled during the data collection process. One
such approach involves the use of adversarial multi-task learning, which aims
to mitigate variances caused by speaker identity [153]. By incorporating this
technique, researchers can reduce the influence of speaker-specific
characteristics on the emotion recognition system, thereby enhancing its
generalizability. Furthermore, a similar approach has been employed to prevent
networks from learning publication source characteristics, which could
introduce biases in the classification process [149].
### 2.4 Noise and Approaches to Dealing with it in Machine Learning Models
The impact of noise on machine learning models has been the subject of
extensive research, which can be broadly classified into three main
directions: robustness in automatic speech recognition, noise-based
adversarial example generation, and performance improvement through model
augmentation with noise.
One area of focus is the robustness of models in automatic speech recognition
(ASR) when exposed to noisy environments. Researchers have explored various
techniques to enhance the performance of ASR systems in the presence of noise.
This includes the development of noise-robust feature extraction methods, such
as mel-frequency cepstral coefficients (MFCCs) and perceptual linear
prediction (PLP) features [135]. These techniques aim to minimize the impact
of noise on the accuracy of speech recognition systems, enabling them to
effectively operate in real-world, noisy conditions.
Another line of research involves the generation of noise-based adversarial
examples, which are intentionally crafted to deceive machine learning models.
Adversarial attacks exploit vulnerabilities in models by adding imperceptible
noise to input samples, causing the models to misclassify or produce incorrect
outputs. Carlini and Wagner [42] and Gong et al. [89] have proposed
methodologies for generating adversarial audio examples that can fool ASR
systems. These techniques highlight the importance of understanding and
addressing the susceptibility of machine learning models to adversarial noise.
Furthermore, researchers have explored the potential benefits of incorporating
noise during the training and augmentation process of machine learning models.
By augmenting the training data with various types of noise, models can become
more robust and adaptable to real-world conditions. For instance, Sohn et al.
[200] and Wallace et al. [224] have investigated the effectiveness of noise
augmentation techniques in improving the performance of models across
different tasks. These methods aim to enhance model generalization and reduce
overfitting, ultimately leading to better model performance in noise-affected
scenarios.
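As a concrete, generic illustration of this kind of augmentation (a sketch only, not necessarily the procedure used in the cited works), the snippet below mixes a noise clip into a speech waveform at a chosen signal-to-noise ratio:
```python
import numpy as np

def add_noise_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a speech clip at a target signal-to-noise ratio (in dB)."""
    # Tile or truncate the noise so it matches the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10 * log10(speech_power / scaled_noise_power) == snr_db.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```
Sweeping the snr_db parameter (for example from 20 dB down to 0 dB) then yields progressively harder augmented copies of the same utterance.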
While evaluating model robustness to noise or adversarial attacks, researchers
commonly introduce noise into the dataset and assess the model’s performance
[5]. However, when it comes to emotion recognition, introducing noise while
ensuring that the perception of emotions remains intact can be highly
challenging. It is crucial to strike a balance between adding noise for
robustness evaluation purposes and preserving the original emotional content.
This ensures that the introduced noise does not distort or alter the true
emotional expression, enabling accurate and reliable emotion recognition
systems.
### 2.5 Unintentional Sensitive Variable Encoding, and Ethical
Considerations in Data Collection and Neural Networks
The preservation of privacy in data collection has been a key area of focus in
early research. Various methods such as rule-based systems and the
introduction of background noise have been explored in order to achieve this
goal [88, 69]. However, more recent studies have shifted their attention
towards privacy preservation in the context of neural networks. In particular,
researchers have primarily concentrated on ensuring that the input data used
in these networks are not memorized and cannot be retrieved even when the
model is deployed [41, 2].
Another crucial consideration in the field of privacy preservation is fair
algorithmic representation. The objective here is to develop networks that are
invariant to specific attributes, often related to demographic information, in
order to ensure fairness [29, 59, 61]. Although certain methods have
demonstrated promise in achieving fairness, they may still inadvertently lead
to privacy violations [108].
### 2.6 The Role of Interpretability in Model Trustworthiness
The aspect of interpretability plays a crucial role in establishing
trustworthiness of models. Studies have indicated that individuals are more
inclined to trust the decisions made by a model if its explanations align with
their own decision-making processes [203, 73, 195]. In addition,
interpretability methods can be employed by model designers to evaluate and
debug a trained model [68]. These methods provide insights into the inner
workings of the model and facilitate a better understanding of its decision-
making process.
### 2.7 Automating Human in the Loop Feedback
In order to automate human in the loop feedback, several approaches have been
proposed. One such approach involves the utilization of a teacher-student
feedback model, where feedback from human teachers is used to improve the
performance of the model [179]. Another avenue of research focuses on
enhancing active learning techniques, which aim to select the most informative
data points for annotation by human experts, thereby reducing the overall
labeling effort required [115].
These methods often incorporate a combination of fine-tuning and prompt-based
learning techniques, which further enhance the model’s ability to learn from
human feedback and adapt its performance accordingly [217]. By fine-tuning the
model based on the feedback received and utilizing prompts as guiding cues,
these approaches enable the model to continually improve its performance,
making it more effective in addressing the specific task or problem at hand.
### 2.8 Generalizability in Emotion Recognition
Achieving generalizability in emotion recognition poses a significant
challenge for researchers. To address this challenge, various methods have
been explored in order to obtain models that can generalize well across
different datasets and scenarios. One approach is the use of combined and
cross-dataset training, where multiple datasets are combined during the
training process to create a more comprehensive and diverse training set. This
helps the model learn a wider range of emotion patterns and improves its
ability to generalize to unseen data [137].
Another technique that has been investigated is transfer learning, which
involves leveraging knowledge acquired from pre-trained models on a related
task and applying it to the emotion recognition task. By transferring the
learned representations and weights from a pre-trained model, the model can
benefit from the general knowledge and feature extraction capabilities it has
acquired, leading to improved generalizability in emotion recognition [137].
Furthermore, researchers have also explored the concept of generalizability
from the perspective of noisy signals. Emotion recognition often deals with
noisy data, such as speech with background noise or facial expressions with
occlusions. By developing models that are robust to such noise and can
effectively extract emotion-related information from imperfect signals, the
generalizability of the models can be enhanced [93].
### 2.9 Conclusion
The field of emotion recognition is complex, with many factors and
considerations influencing the development and deployment of effective models.
This chapter has explored some of the key areas in this field, highlighting
the importance of crowdsourcing, context, handling confounding factors,
dealing with noise, and ensuring that the representations do not inadvertently
encode sensitive demographic or membership information. The role of
interpretability in model trustworthiness and the challenge of automating
human in the loop feedback were also discussed. Although progress has been
made in many of these areas, the challenge of generalizability in emotion
recognition remains, and future research will need to continue to address this
issue.
## Chapter III Datasets and Pre-processing
This thesis focuses on emotion recognition as a task. For this purpose, we
use a standard set of datasets and features as described in this chapter. This
allows us to perform experiments with a set of known and commonly used
datasets, keeping them uniform across experimental variables.
### 3.1 Datasets Used In Thesis
In the past years, there have been multiple emotional databases collected and
curated to develop better emotion recognition systems. Table 2.1 shows the
major corpora that are used for emotion recognition.
#### 3.1.1 IEMOCAP
The IEMOCAP dataset was created to explore the relationship between emotion,
gestures, and speech. Pairs of actors, one male and one female (five males and
five females in total), were recorded over five sessions. Each session
consisted of a pair performing either a series of given scripts or
improvisational scenarios. The data were segmented by speaker turn, resulting
in a total of 10,039 utterances (5,255 scripted turns and 4,784 improvised
turns).
#### 3.1.2 MSP-Improv
The MSP-Improv dataset was collected to capture naturalistic emotions from
improvised scenarios. It partially controlled for lexical content by including
target sentences with fixed lexical content that are embedded in different
emotional scenarios. The data were divided into 652 target sentences, 4,381
improvised turns (the remainder of the improvised scenario, excluding the
target sentence), 2,785 natural interactions (interactions between the actors
in between recordings of the scenarios), and 620 read sentences for a total of
8,438 utterances.
#### 3.1.3 MSP-Podcast
The MSP-Podcast dataset was collected to build a naturalistic, emotionally
balanced speech corpus by retrieving emotional speech from existing podcast
recordings. This was done using machine learning algorithms, which, along with
a cost-effective crowdsourced annotation process, led to a vast and balanced
dataset. We use a pre-split part of the dataset for which the gender of the
speakers has been identified, comprising 13,555 utterances. The dataset as a
whole contains audio recordings only.
#### 3.1.4 MuSE
The MuSE dataset consists of recordings of 28 University of Michigan college
students, 9 female and 19 male, in two sessions: one in which they were
exposed to an external stressor (final exams period at University of Michigan)
and one during which the stressor was removed (after finals had concluded).
Each recording is roughly 45 minutes long. We expose each subject to a series
of emotional stimuli: short videos and emotionally evocative monologue questions.
These stimuli are different across each session to avoid the effect of
repetition, but capture the same emotion dimensions. At the start of each
session, we record a short segment of the user in their natural stance without
any stimuli, to establish a baseline. We record their behavior using four main
recording modalities: 1) video camera, both close-up on the face and wide-
angle to capture the upper body, 2) thermal camera, close-up on the face, 3)
lapel microphone, 4) physiological measurements, in which we choose to measure
heart rate, breathing rate, skin conductance and skin temperature (Figure
4.1). The data include self-report annotations for emotion and stress
(Perceived Stress Scale, PSS) [56, 57], as well as emotion annotations
obtained from Amazon Mechanical Turk (AMT). To understand the influence of
personality on the interaction of stress and emotion, we obtain Big-5
personality scores [87]; the corresponding survey was filled out by 18 of the
participants, as participation was voluntary.
### 3.2 Data Pre-Processing
We use these features consistently across the thesis to have a standardized
set of inputs, aiming to avoid variability that comes from different labelling
or pre-processing schemas. Our preprocessing corresponds to converting Likert
scale emotion annotations to classes based on quartiles. The feature
processing has two components, acoustic and lexical, used for training,
testing, or fine-tuning speech-only, text-only, or bimodal models.
#### 3.2.1 Emotion Labels
Each utterance in the MuSE dataset was labeled for activation and valence on a
nine-point Likert scale by eight crowd-sourced annotators [105], who observed
the data in random order across subjects. We average the annotations to obtain
a mean score for each utterance, and then bin the mean score into one of three
classes, defined as {“low”: [min, 4.5], “mid”: (4.5, 5.5], “high”: (5.5,
max]}. The resulting distribution for activation is: {“high”: $24.58\%$,
“mid”: $40.97\%$ and “low”: $34.45\%$} and for valence is {“high”: $29.16\%$,
“mid”: $40.44\%$ and “low”: $30.40\%$}. Utterances in IEMOCAP and MSP-Improv
were annotated for valence and activation on a five-point Likert scale. The
annotated activation and valence values were averaged for an utterance and
binned as: {“low”: [1, 2.75], “mid”: (2.75, 3.25], “high”: (3.25, max]}.
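As an illustration of this binning, the sketch below (hypothetical helper names; thresholds taken from the text, with the MuSE nine-point cut-offs as defaults) averages the annotator ratings for an utterance and maps the mean onto the three classes:
```python
import numpy as np

def bin_emotion(ratings, low_hi=4.5, mid_hi=5.5):
    """Average annotator ratings and bin into low/mid/high.
    For IEMOCAP/MSP-Improv (five-point scale), use low_hi=2.75 and mid_hi=3.25."""
    mean_score = float(np.mean(ratings))
    if mean_score <= low_hi:
        return "low"
    elif mean_score <= mid_hi:
        return "mid"
    return "high"

# Example: eight annotators on the nine-point MuSE scale.
print(bin_emotion([5, 6, 4, 5, 7, 6, 5, 5]))  # -> "mid" (mean = 5.375)
```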
#### 3.2.2 Stress Labels
Utterances in the MuSE dataset include stress annotations, in addition to the
activation and valence annotations. The stress annotations for each session
were self-reported by the participants using the Perceived Stress Scale (PSS)
[58]. We perform a paired t-test on subject-wise PSS scores and find that the
scores are significantly different between the two sets (16.11 vs 18.53) at
$p<0.05$. This is especially true for question three (3.15 vs 3.72), and hence
we double the weight of the score for this question when obtaining the final
sum. We bin the original nine-point adjusted stress scores into three classes,
{“low”: (min, mean$-2$], “mid”: (mean$-2$, mean$+2$], “high”: (mean$+2$,
max]}. We assign the same stress label to all utterances from the same session.
The distribution of our data for stress is “high”: $40.33\%$, “mid”:
$25.78\%$, and “low”: $38.89\%$.
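A minimal sketch of this per-session stress labeling is shown below; the item scores are invented for illustration, while the double weighting of question three and the mean plus/minus 2 binning follow the text:
```python
import numpy as np

def adjusted_pss(item_scores):
    """Sum the PSS item scores with question three (0-indexed item 2) double-weighted."""
    scores = list(item_scores)
    scores[2] *= 2
    return sum(scores)

def bin_stress(score, mean_score):
    """Bin an adjusted session score into low/mid/high around the mean of all sessions."""
    if score <= mean_score - 2:
        return "low"
    elif score <= mean_score + 2:
        return "mid"
    return "high"

# Invented item scores for two sessions, for illustration only.
sessions = [[2, 1, 3, 2, 1, 2, 3, 1, 2, 1], [3, 3, 4, 2, 3, 3, 2, 3, 3, 2]]
adjusted = [adjusted_pss(s) for s in sessions]
mean_adjusted = np.mean(adjusted)
print([bin_stress(s, mean_adjusted) for s in adjusted])  # -> ['low', 'high']
```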
Improvisation Labels. Utterances in the IEMOCAP dataset were recorded in
either a scripted scenario or an improvised one. We label each utterance with
a binary value {“scripted”, “improvised”} to reflect this information.
### 3.3 Lexical and Acoustic Feature Extraction
#### 3.3.1 Acoustic
We use Mel Filterbank (MFB) features, which are frequently used in speech
processing applications, including speech recognition and emotion recognition
[116, 126]. We extract the 40-dimensional MFB features using a 25-millisecond
Hamming window with a step size of 10 milliseconds. As a result, each
utterance is represented as a sequence of 40-dimensional feature vectors. We
$z$-normalize the acoustic features by session for each speaker.
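A minimal sketch of this feature extraction is shown below, assuming librosa as the toolkit (the thesis does not name one); the file path and sampling rate are placeholders, and the z-normalization is shown per utterance rather than per session for brevity:
```python
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)          # placeholder path and sampling rate
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_mels=40,
    n_fft=int(0.025 * sr),                                # 25 ms analysis window
    win_length=int(0.025 * sr),
    hop_length=int(0.010 * sr),                           # 10 ms step size
    window="hamming",
)
log_mel = np.log(mel + 1e-6).T                            # one 40-dimensional vector per frame

# z-normalize (per session/speaker in the thesis; per utterance here for brevity).
log_mel = (log_mel - log_mel.mean(axis=0)) / (log_mel.std(axis=0) + 1e-8)
```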
#### 3.3.2 Lexical
We have human-transcribed data available for MuSE and IEMOCAP. We use the
word2vec representation based on these transcriptions, which has shown success
in sentiment and emotion analysis tasks [121]. We represent each word in the
text input as a 300-dimensional vector using a pre-trained word2vec model
[155], replacing out-of-vocabulary words with the $\langle unk\rangle$ token.
In addition, we incorporate BERT embeddings for enhanced contextual
understanding. These embeddings, generated from the pre-trained BERT model,
provide deep, bidirectional representations by understanding the text context
from both directions. Each utterance is eventually represented as a sequence
of 768-dimensional feature vectors. We use just acoustic inputs for MSP-Improv
because human transcriptions are not available.
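The sketch below illustrates both lexical representations; the specific pre-trained model names and the zero-vector stand-in for the $\langle unk\rangle$ token are assumptions, not necessarily what was used in this thesis:
```python
import numpy as np
import torch
from gensim.models import KeyedVectors
from transformers import BertModel, BertTokenizer

# 300-dimensional word2vec vectors (the model path is an assumption).
w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

def word2vec_sequence(tokens):
    """One 300-dim vector per word; OOV words map to a stand-in <unk> (zero) vector."""
    unk = np.zeros(300, dtype=np.float32)
    return [w2v[t] if t in w2v else unk for t in tokens]

# 768-dimensional contextual BERT embeddings (the model name is an assumption).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

def bert_sequence(utterance):
    """One 768-dim contextual vector per word-piece token."""
    inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)            # (n_tokens, 768)
```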
## Chapter IV Emotion Recognition Dataset: MuSE
### 4.1 Motivation and Contributions
Endowing automated agents with the ability to provide support, entertainment
and interaction with human beings requires sensing of the users’ affective
state. These affective states are impacted by a combination of emotion
inducers, current psychological state, and various contextual factors.
Although emotion classification in both singular and dyadic settings is an
established area, the effects of these additional factors on the production
and perception of emotion are understudied. This chapter presents a dataset,
Multimodal Stressed Emotion (MuSE), to study the multimodal interplay between
the presence of stress and expressions of affect. We describe the data
collection protocol, the possible areas of use, and the annotations for the
emotional content of the recordings. The chapter also presents several
baselines to measure the performance of multimodal features for emotion and
stress classification.
### 4.2 Introduction
Virtual agents have become more integrated into our daily lives than ever
before [144]. For example, Woebot is a chatbot developed to provide cognitive
behavioral therapy to a user [74]. For this chatbot agent to be effective, it
needs to respond differently when the user is stressed and upset versus when
the user is calm and upset, which is a common strategy in counselor training
[213]. While virtual agents have made successful strides in understanding the
task-based intent of the user, social human-computer interaction can still
benefit from further research [54]. Successful integration of virtual agents
into real-life social interaction requires machines to be emotionally
intelligent [27, 238].
But humans are complex in nature, and emotion is not expressed in isolation
[90]. Instead, it is affected by various external factors. These external
factors lead to interleaved user states, which are a culmination of
situational behavior, experienced emotions, psychological or physiological
state, and personality traits. One of the external factors that affects
psychological state is stress. Stress can affect everyday behavior and
emotion, and in severe states, is associated with delusions, depression and
anxiety due to its impact on emotion regulation mechanisms [122, 193, 216, 225].
Virtual agents can respond in accordance to users’ emotions only if the
machine learning systems can recognize these complex user states and correctly
perceive users’ emotional intent. We introduce a dataset designed to elicit
spontaneous emotional responses in the presence or absence of stress to
observe and sample complex user states.
There has been a rich history of visual [235, 111], speech [143], linguistic
[207], and multimodal emotion datasets [38, 36, 183]. Vision datasets have
focused both on facial movements [111] and body movement [131]. Speech
datasets have been recorded to capture both stress and emotion separately but
do not account for their inter-dependence [185, 97, 127, 243]. Stress datasets
often include physiological data [234, 210].
Existing datasets are limited because they are designed to elicit emotional
behavior, while neither monitoring external psychological state factors nor
minimizing their impact by relying on randomization. However, emotions
produced by humans in the real world are complex. Further, our natural
expressions are often influenced by multiple factors (e.g., happiness and
stress) and do not occur in isolation, as typically assumed under laboratory
conditions. The primary goal of this work is to collect a multimodal
stress+emotion dataset – Multimodal Stressed Emotion (MuSE) – to promote the
design of algorithms that can recognize complex user states.
For MuSE, the extracted features for each modality and the anonymized dataset
(other than video) will be released publicly along with all the corresponding
data and labels. We present baseline results for recognizing both emotion and
stress in the chapter, in order to validate that the presence of these
variables can be computationally extracted from the dataset, hence enabling
further research.
### 4.3 MuSE Dataset
#### 4.3.1 Experimental Protocol
Figure 4.1: Experimental Protocol For Recording
We collect a dataset that we refer to as Multimodal Stressed Emotion (MuSE) to
facilitate the learning of the interplay between stress and emotion. The
protocol for data collection is shown in Figure 4.1. There were two sections
in each recording: monologues and watching emotionally evocative videos. We
measure the stress level at the beginning and end of each recording. The
monologue questions and videos were specifically chosen to cover all
categories of emotions. At the start of each recording, we also recorded a
short one-minute clip without any additional stimuli to register the baseline
state of the subject.
Previous research has elicited situational stress such as public speaking
[123, 85, 8], mental arithmetic tasks [139], or the Stroop Word Test [215].
However, these types of stress are often momentary and fade rapidly within two
minutes [139]. We alleviate this concern by recording both during and after
final exams (we anticipate that these periods of time are associated with high
stress and low stress, respectively) in April 2018. We measure stress using
Perceived Stress Scale [57] for each participant. We measure their self-
perception of the emotion using Self-Assessment Manikins (SAM) [30]. The
recordings and the survey measures were coordinated using
Qualtrics (umich.qualtrics.com), enabling us to ensure minimal intervention
and limit the effect of the presence of another person on the emotion
production.
Each monologue section comprised five questions, broken into sections meant to
elicit a particular emotion (Table 4.1). These questions were shown in prior
work [11] to elicit thoughtful and emotional responses and to generate
interpersonal closeness.
ensure cool off periods between change in recording section, i.e., from
neutral to monologues, and from monologues to videos, hence decreasing the
amount of carry-over emotion from the previous monologue to the next. Each
subject was presented with a different set of questions over the two
recordings to avoid repetition effects. We also shuffle the order of the other
three questions to account for order effects [133]. Each subject was asked to
speak for a minimum of two minutes. After their response to each question, the
subjects marked themselves on two emotion dimensions: activation and valence
on a Likert Scale of one to nine using self-assessment manikins [30].
For the second part of the recording, the subjects were asked to watch videos
in each of the four quadrants i.e., the combination of {low, high} $\times$
{activation, valence} of emotion. These clips were selected from corpora
[140, 20] that tested for the emotions elicited from people when watching
these clips (Table 4.2). The subjects were monitored for their reactions to the
clips. After viewing a clip, subjects were asked to speak for thirty seconds
about how the video made them feel. After their response, they marked an
emotion category (e.g., angry, sad) for the same clip. When switching
videos, the subjects were asked to view a one-minute neutral clip to set their
physiological and thermal measures back to the baseline [189].
The 28 participants were also asked to fill out an online survey used for
personality measures on the big-five scale [87], participation being
voluntary. This scale has been validated to measure five different dimensions
named OCEAN (openness, conscientiousness, extraversion, agreeableness, and
neuroticism) using fifty questions and has been found to correlate with
passion [60], ambition [19], and emotion mechanisms [181]. We received
responses for this survey from 18 participants. These labels can be used in
further work to evaluate how these personality measures interact with the
effects of stress in emotion production, as previously studied in [242].
Table 4.1: Emotion elicitation questions.
| Category | Questions |
|---|---|
| Icebreaker | 1. Given the choice of anyone in the world, whom would you want as a dinner guest? 2. Would you like to be famous? In what way? |
| Positive | 1. For what in your life do you feel most grateful? 2. What is the greatest accomplishment of your life? |
| Negative | 1. If you could change anything about the way you were raised, what would it be? 2. Share an embarrassing moment in your life. |
| Intensity | 1. If you were to die this evening with no opportunity to communicate with anyone, what would you most regret not having told someone? 2. Your house, containing everything you own, catches fire. After saving your loved ones and pets, you have time to safely make a final dash to save any one item. What would it be? Why? |
| Ending | 1. If you were able to live to the age of 90 and retain either the mind or body of a 30-year old for the last 60 years of your life, which would you choose? 2. If you could wake up tomorrow having gained one quality or ability, what would it be? |
Table 4.2: Emotion elicitation clips. Movie | Description
---|---
Low Valence, Low Activation (Sad)
City of Angels | Maggie dies in Seth’s arms
Dangerous Minds | Students find that one of their classmates has died
Low Valence, High Activation (Anger)
Sleepers | Sexual abuse of children
Schindler’s List | Killing of Jews during WWII
High Valence, Low Activation (Contentment)
Wall-E | Two robots dance and fall in love
Love Actually | Surprise orchestra at the wedding
High Valence, High Activation (Amusement)
Benny & Joon | Actor plays the fool in a coffee shop
Something About Mary | Ben Stiller fights with a dog
Neutral
A display of zig-zag lines across the screen
Screen-saver pattern of changing colors
#### 4.3.2 Equipment Setup
The modalities considered in our setup are: thermal recordings of the
subject’s face, audio recordings of the subject, color video recording of the
subject’s face, a wide-angle color video recording the subject from the waist
up and physiological sensors measuring skin conductance, breathing rate, heart
rate and skin temperature. For these modalities we have set up the following
equipment:
1.
FLIR Thermovision A40 thermal camera for the close-up thermal recording of the
subject's face. This camera provides a 640x512 image in the thermal infrared
spectrum.
2.
Raspberry Pi with camera module V2 and a wide-angle lens, used for the waist-up
shot of the subject. We chose Raspberry Pis for their low price and support for
Linux OS, which integrates easily into a generic setup.
3.
Raspberry Pi with camera module V2, used for the close-up color video recording
of the subject's face.
4.
TASCAM DR-100 mk II used to record audio. We chose this product for its high
fidelity. It can record 24-bit audio at 48 kHz.
5.
ProComp Infiniti 8-channel biofeedback and neurofeedback system v6.0, used to
measure blood volume pulse (BVP sensor), skin conductance (SC sensor), skin
temperature (T sensor), and abdominal respiration (BR sensor).
Figure 4.2: Close-up view of the thermal and video recording equipment.
The equipment operator started and marked the synchronization point between
video and audio recordings using a clapper. Subsequent time stamps are
recorded by the Qualtrics survey using subject click timings.
#### 4.3.3 Post-processing
Splitting of the Recordings. Each modality is split into a one-minute neutral
recording, five question monologues, and four video recordings with their
associated monologues, resulting in fourteen recordings per session and thus 28
recordings per subject. In total we have 784 distinct recordings over 28
subjects and two stress states; across the five modalities, this yields 3920
recording events. Temperatures are clamped to between $0^{\circ}$C and
$50^{\circ}$C, which helps reduce the size of the thermal recording files after
they are zipped.
Utterance Construction. The five monologues extracted above were divided into
utterances. However, since the monologues are a form of spontaneous speech,
there are no clear sentence boundaries marking the end of an utterance. We
manually created utterances by identifying prosodic or linguistic boundaries in
spontaneous speech as defined by [125]. The boundaries used for this work are:
(a) a clear ending such as a full stop or exclamation, (b) a change in context
after filler words or a complete revision of the sentence that changes its
meaning, or (c) a very long pause in thought. This method has been previously
shown to be effective in creating utterances that mostly maintain a single
level of emotion [118].
The dataset contains 2,648 utterances with a mean duration of 12.44 $\pm$ 6.72
seconds (Table 4.3). The mean length of stressed utterances ($11.73\pm 5.77$
seconds) is significantly different (using two-sample t-test) from that of the
non-stressed utterances ($13.30\pm 6.73$ seconds). We remove utterances that
are shorter than $3$ seconds or longer than $35$ seconds and end up retaining
$97.2\%$ of our dataset. This allows us to avoid short segments that may not
have enough information to capture emotion, and long segments that can have
variable emotion, as mentioned in [118]. Because our dataset is comprised of
spontaneous utterances, the mean utterance length is larger than that of a
scripted dataset [38] due to more corrections and speech overflow.
Stress State Verification. We perform a paired t-test on subject-wise PSS
scores and find that the mean scores are significantly different between the
two sets (16.11 vs. 18.53) at $p<0.05$. This supports our hypothesis that exams
elicit persistently higher stress than normal. In our dataset, we also provide
levels of stress, binned into three categories based on a weighted average
(using the questions for which the t-test score was significant).
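The verification step above is a standard paired comparison; the following minimal sketch shows how such a per-subject paired t-test can be run with SciPy. The PSS values below are illustrative placeholders, not the dataset's actual scores.

```python
# A minimal sketch of the paired stress-state check described above; the PSS
# values are illustrative, not the dataset's actual scores.
import numpy as np
from scipy import stats

pss_exam = np.array([19, 21, 17, 22, 18, 20, 16, 23])   # during final exams (stressed)
pss_after = np.array([15, 18, 16, 19, 14, 17, 15, 18])  # after final exams (non-stressed)

t_stat, p_value = stats.ttest_rel(pss_exam, pss_after)  # paired t-test over subjects
print(f"means: {pss_exam.mean():.2f} vs {pss_after.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```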
### 4.4 Emotional Annotation
#### 4.4.1 Crowdsourcing
Crowdsourcing has previously been shown to be an effective and inexpensive
method for obtaining multiple annotations per segment [99, 34]. We posted our
experiments as Human Intelligence Tasks (HITs) on Amazon Mechanical Turk and
used selection and training mechanisms to ensure quality [106]. HITs were
defined as sets of utterances in a monologue. The workers were presented with
a single utterance and were asked to annotate the activation and valence
values of that utterance using Self-Assessment Manikins [30]. Unlike the
strategy adopted in [47], the workers could not go back and revise the
previous estimate of the emotion. We did this to ensure similarity to how a
human listening into the conversation might shift their perception of emotion
in real time. These HITs were presented in either the contextual or the random
presentation condition defined below.
In the contextual experiment, we posted each HIT as a collection of ordered
utterances from each section of a subject’s recording. Because each section’s
question was designed to elicit an emotion, to randomize the carry-over effect
in perception, we posted the HITs in a random order over the sections from all
the subjects in our recording. For example, a worker might see the first HIT
as Utterance 1…N from Section 3 of Subject 4’s stressed recording and see the
second HIT as Utterance 1…M from Section 5 of Subject 10’s non-stressed
recording where N, M are the number of utterances in those sections
respectively. This ensures that the annotator adapts to the topic and
fluctuations in speaking patterns over the monologue being annotated.
In the randomized presentation, each HIT is an utterance from any section, by
any speaker, in random order. So, a worker might see the first HIT as
Utterance 11 from Section 2 of Subject 1’s stressed recording monologue and
see the second HIT as Utterance 1 from Section 5 of Subject 10’s non-stressed
monologue recording. We use this method of randomization to ensure lack of
adaptation to both speaker specific style and the contextual information. The
per-utterance and the contextual labels can be used to train different machine
learning models that are apt for either singular one-off instances or for
holding multiple turn natural conversation, respectively.
Table 4.3: Data summary (R: random, C: context, F: female, M: male).
Monologue Subset |
---|---
Mean no. of utterances/monologue | $9.69\pm 2.55$
Mean duration of utterances | $12.44\pm 6.72$ seconds
Total no. of utterances | 2,648
Selected no. of utterances | 2,574
Gender distribution | 19 (M) and 9 (F)
Total annotated speech duration | $\sim 10$ hours
Crowdsourced Data |
Num. of workers | 160 (R) and 72 (C)
Blocked workers | 8
Mean activation | 3.62$\pm$0.91 (R), 3.69$\pm$0.81 (C)
Mean valence | 5.26$\pm$0.95 (R), 5.37$\pm$1.00 (C)
Figure 4.3: Distribution of the activation and valence ratings in random
labeling scheme (on left) and contextual labeling scheme (on right).
#### 4.4.2 Emotion Content Analysis
We show the distribution of the annotations received in both the random and
contextual setting in Table 4.3 and Figure 4.3. The labels obtained for our
dataset form a distribution that mostly covers low and neutral levels of
activation, and all but the extremities for valence. This can also be seen in
the data summary in Table 4.3. We compared the labels obtained from the random
vs. contextual presentation and found that they are significantly different
(paired t-test at $p<0.05$ for both activation and valence for utterances in
the non-stressed situation). Although
the obtained labels are significantly different for valence in the stressed
category using the same method as above, the same does not hold true for the
activation annotations in this category.
Figure 4.4: An overview of the instructions provided to the annotators for
annotating an utterance.
Figure 4.5: Annotation scale used by MTurk workers to annotate the emotional
content of the corpus. They annotate valence and activation for each
utterance.
### 4.5 Experiments
In this section, we describe our baseline experiments for predicting emotion
and stress in the recorded modalities. Emotion is annotated at a more granular
level, i.e., over each utterance, as compared to stress, which is annotated
over the complete monologue. Hence, we extract features for each modality over
continuous one-second frame intervals for predicting stress, and over the
complete utterance for emotion. Audio and lexical features are still extracted
over a complete utterance for stress because their variation unfolds over
longer intervals of time.
#### 4.5.1 Evaluation of Emotion Recognition
We use the following set of features for our baseline models:
1.
Acoustic Features. We extract acoustic features using OpenSmile [71] with the
eGeMAPS configuration [70]. The eGeMAPS feature set consists of $88$
utterance-level statistics over the low-level descriptors of frequency,
energy, spectral, and cepstral parameters. We perform speaker-level
$z$-normalization on all features.
2.
Lexical Features. We extract lexical features using Linguistic Inquiry and
Word Count (LIWC) [174]. These features have been shown to be indicative of
stress, emotion, veracity and satisfaction [86, 161, 164]. We normalize all
the frequency counts by the total number of words in the sentence accounting
for the variations due to utterance length.
3.
Thermal Features. For each subject a set of four regions were selected in the
thermal image: the forehead area, the eyes, the nose and the upper lip as
previously used in [172, 80, 6]. These regions were tracked for the whole
recording and a 150-bin histogram of temperatures was extracted from the four
regions per frame, i.e., 30 frames a second for thermal recordings. We further
reduced the histograms to their first four statistical moments: mean, standard
deviation, skewness, and kurtosis. We combined these features
over the utterance using first delta measures (min, max, mean, SD) of all the
sixteen extracted measures per frame, resulting in 48 measures in total.
4.
Close-up Video Features. We use OpenFace [15] to extract the subject’s facial
action units. The AUs used in OpenFace for this purpose are AU1, AU2, AU4,
AU5, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26, AU28
and AU45, covering the eyebrows, eyes, and mouth. These features have been
previously shown to be indicative of emotion [227, 64] and have been shown to
be useful for predicting deception [110]. We summarize all frames into a
feature using summary statistics (maximum, minimum, mean, variance, quantiles)
across the frames and across delta between the frames resulting in a total of
144 dimensions.
Network Setup. We train and evaluate multiple unimodal Deep Neural Networks
(DNN) models for predicting valence and activation using Keras [91]. [106]
have shown that a match between the context provided to the classifier and the
annotator leads to better classification performance. Because we are
performing single utterance classification, for all further experiments, we
use the annotations obtained in a random manner as mentioned above. In all
cases, we predict the continuous annotation using regression.
We also use an ensemble of these four networks (audio, lexical, visual and
thermal) to measure multimodal performance. For each network setup, we follow
a five-fold subject independent evaluation scheme and report the average RMSE
across the folds. For each test-fold, we use the previous fold for hyper-
parameter selection and early stopping. The hyper-parameters include: number
of layers $\\{2,3,4\\}$ and layer width $\\{64,128,256\\}$. We use ReLU
activation and train the networks with MSE loss using the Adam optimizer.
We train our networks for a maximum of 50 epochs and monitor the validation
loss after each epoch. We perform early stopping if the loss does not decrease
for 15 consecutive epochs. We save the weights that achieved the lowest
validation loss during training. We train each network five times with
different seeds and average the predictions to account for variations due to
random initialization.
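The training recipe above can be summarized as follows. This is a minimal sketch, not the exact implementation; it assumes pre-computed utterance-level feature matrices (X_train/X_val/X_test) and continuous labels, and uses one setting from the stated hyper-parameter grid.

```python
# A minimal sketch of the training and seed-averaging procedure described above,
# assuming feature matrices X_train/X_val/X_test and continuous labels y_* exist.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def build_model(input_dim, n_layers=3, width=128):
    # Fully connected regression network with ReLU activations (MSE loss, Adam).
    model = keras.Sequential()
    model.add(keras.layers.Dense(width, activation="relu", input_shape=(input_dim,)))
    for _ in range(n_layers - 1):
        model.add(keras.layers.Dense(width, activation="relu"))
    model.add(keras.layers.Dense(1))  # continuous activation or valence prediction
    model.compile(optimizer="adam", loss="mse")
    return model

def train_and_predict(X_train, y_train, X_val, y_val, X_test, n_seeds=5):
    preds = []
    for seed in range(n_seeds):
        tf.random.set_seed(seed)
        model = build_model(X_train.shape[1])
        early_stop = keras.callbacks.EarlyStopping(
            monitor="val_loss", patience=15, restore_best_weights=True)
        model.fit(X_train, y_train, validation_data=(X_val, y_val),
                  epochs=50, verbose=0, callbacks=[early_stop])
        preds.append(model.predict(X_test, verbose=0).ravel())
    return np.mean(preds, axis=0)  # average over seeds to reduce init variance
```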
Table 4.4: RMSE for emotion classification models using multiple modalities.
Significance established at $p<0.05$.
| Activation | Valence
---|---|---
Unimodal Models | |
Acoustic (A) | 1.004∗ | 1.122
Lexical (L) | 1.343 | 0.980
Close Video (V) | 1.111 | 0.879∗∗
Thermal (T) | 2.012 | 1.565
Ensemble | |
A+L | 0.987 | 0.981
A+V | 0.970 | 0.899
L+V | 0.981 | 0.901
A+L+V | 0.972 | 0.856∗
A+L+V+T (All) | 0.961∗ | 0.868
Results. We show our results in Table 4.4. We find that between acoustic and
lexical modalities, the acoustic modality carries more information about
activation and the lexical for valence. This is in line with previous research
[232, 39]. We also note that the visual modality significantly outperforms
both the speech and lexical modalities for valence prediction.
When we merge these networks using late voting on each modality (decision
fusion), we find that the combination of all modalities performs the best for
predicting activation, while for predicting valence the best performance is
obtained by the combination of the acoustic, lexical, and visual modalities.
We believe this is the case because previous work has shown that thermal
features are mostly indicative of intensity and discomfort [94] and hence
improve performance on activation prediction, while visual expressions are most
informative about valence [186].
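For the regression setting used here, decision-level fusion reduces to combining per-modality outputs; the sketch below averages per-utterance predictions and is an illustration with hypothetical prediction arrays, not the exact fusion code.

```python
# A minimal sketch of decision-level (late) fusion for the regression setting:
# per-utterance predictions from several unimodal networks are averaged.
import numpy as np

def late_fusion(*modality_predictions):
    """Average per-utterance predictions from several unimodal models."""
    return np.mean(np.stack(modality_predictions, axis=0), axis=0)

# e.g., fused = late_fusion(pred_acoustic, pred_lexical, pred_visual)  # hypothetical arrays
```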
#### 4.5.2 Evaluation of Presence of Stress
We use the following set of features for our baseline models. Given that the
stressed vs. non-stressed state is classified for the complete section
(monologue or neutral recording), we extract visual features differently to use
the sequential information over the whole segment, i.e., a monologue. We also
use physiological features for our network, since we found that even though
they are highly variable over shorter segments (utterances), they are
informative for recognizing the physiological state over a whole section.
1.
Acoustic, Lexical, and Thermal Features. We use the same features as extracted
for predicting emotion.
2.
Wide-angle Video Features. We extract the subject’s pose using OpenPose [40,
199, 228] at 25 frames per second. For each frame, we extract 14 three-
dimensional points representing anchor points for the upper body. For
classification, each 3D point is interpolated over one second using a
$5^{th}$ order spline [167, 102]. The parameters of the splines are then used
as features for classification.
3.
Close-up Video Features. We use OpenFace to extract the subject’s action units
[15]. The features are extracted for every frame. In each frame, features
include the gaze direction vectors, gaze angles, 2D eye region landmarks, head
locations, rotation angles of the head, landmark locations, and facial action
units. Landmarks locations offset by the nose location. We window the data
into segments of one-second windows with 0.5 second overlap and calculate
summary statistics (maximum, minimum, mean, variance). We retain the top 300
features based on the F values between the training features and corresponding
labels (stressed vs non-stressed).
4.
Physiological Features. While the physiological features vary too greatly from
second to second to be informative for emotion, they are informative for
recognizing the presence or absence of stress. We consider the raw measurements
for heart rate, breathing rate, skin conductance, and skin temperature and
compute their first four statistical moments: mean, standard deviation,
skewness, and kurtosis.
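The F-value based feature selection referenced in the close-up video item above can be realized with scikit-learn; the sketch below uses random stand-in data so it runs on its own, and is only an illustration of the selection step.

```python
# A minimal sketch of selecting the top 300 features by ANOVA F-value, as
# mentioned in the list above; X/y below are random stand-ins, not real data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 400))      # windowed summary-statistic features
y_train = rng.integers(0, 2, size=200)     # stressed (1) vs non-stressed (0)
X_test = rng.normal(size=(50, 400))

selector = SelectKBest(score_func=f_classif, k=300)
X_train_sel = selector.fit_transform(X_train, y_train)  # fit on training data only
X_test_sel = selector.transform(X_test)                 # keep the same 300 features at test time
```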
Table 4.5: Baseline results for classifying stressed and non-stressed
situations per time unit, unless specified otherwise. A - Accuracy, P -
Precision, R - Recall.
Recording Parts | $A$ | $P$ | $R$ | $F_{1}$
---|---|---|---|---
| Thermal
Neutral | 0.61 | 0.67 | 0.62 | 0.64
Questions | 0.50 | 0.64 | 0.52 | 0.57
| Wide-angle Video
Neutral | 0.66 | 0.41 | 0.96 | 0.58
Questions | 0.69 | 0.45 | 0.82 | 0.58
| Close-up Video
Neutral | 0.61 | 0.78 | 0.33 | 0.46
Questions | 0.65 | 0.65 | 0.69 | 0.67
| Physiological
Neutral | 0.66 | 0.47 | 0.89 | 0.64
Questions | 0.70 | 0.55 | 0.88 | 0.67
| Audio - Per utterance
Questions | 0.67 | 0.70 | 0.69 | 0.69
| Text - Per utterance
Questions | 0.60 | 0.74 | 0.61 | 0.67
| Late Fusion - Voting
Questions | 0.60 | 0.74 | 0.61 | 0.67
Network. We train a DNN to perform binary classification, i.e., to recognize
the stressed vs. non-stressed situation, using ReLU activations in the hidden
layers and a softmax activation in the final classification layer. We train six
different networks for the thermal, wide-angle video, close-up video,
physiological, audio, and lexical modalities. Each network is trained in a
subject-independent manner. We train the networks to recognize the stress vs.
non-stress situation both in the neutral recording, i.e., when the subject is
not speaking at the beginning of the recording, and during the emotional
monologue questions. To do so, we decide the final prediction by a majority
vote over one-second predictions for the complete section of the recording (a
minimal sketch of this aggregation is shown below). For the lexical and
acoustic modalities, we train the network on the question monologues and decide
the final prediction based on a majority vote over the predictions for each
utterance.
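The section-level decision is a simple majority vote over per-second (or per-utterance) predictions; the following is an illustrative sketch rather than the exact aggregation code.

```python
# A minimal sketch of the majority-vote aggregation described above: per-second
# binary predictions (1 = stressed, 0 = non-stressed) for one recording section
# are reduced to a single section-level decision.
import numpy as np

def section_prediction(per_second_preds):
    preds = np.asarray(per_second_preds)
    return int(preds.mean() >= 0.5)  # majority vote (ties resolved as stressed)

# e.g., section_prediction([1, 0, 1, 1, 0, 1]) -> 1
```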
Results. We report our results for prediction of stress vs non-stress
situation using various modalities in Table 4.5. We see that the captured
modalities are indeed informative for recognizing stress vs non-stressed
situations. We find that for recognizing this distinction when the subjects
are speaking, audio and physiological features perform the best. This is in
agreement with previous related work [131, 234, 96]. Interestingly, we also
find that the thermal and physiological modalities are apt at recognizing
differences in stress, even in the neutral recording, i.e., when the subject
is not speaking. This advantage of thermal modality has been previously
documented by researchers [7, 173, 172, 80]. We find that answering emotional
monologue questions interferes with the recorded thermal modality, leading to
a poorer performance at stress recognition.
### 4.6 Conclusions and Future Work
In this chapter, we introduced a dataset that aims to capture the interplay
between psychological factors such as stress and emotion. While various other
datasets have explored the relationship between gender or personality measures
and emotion production and perception, the relationship between psychological
factors and emotion is understudied from a data collection point of view, and
hence from an automated modeling perspective.
We verified that the presence of emotion and stress can be detected in our
dataset. Our baseline results for emotion classification using DNNs with
acoustic, linguistic and visual features on our dataset are similar to
reported results on other datasets such as IEMOCAP [36] and MSP-Improv [38].
## Chapter V Best Practices for Noise Based Augmentation in Emotion Datasets
### 5.1 Motivation and Contributions
Speech emotion recognition is an important component of any human-centered
system. But the speech characteristics produced and perceived by a person can
be influenced by a multitude of factors, both desirable, such as emotion, and
undesirable, such as noise. To train robust emotion recognition models, we need
a large, yet realistic, data distribution, but emotion datasets are often small
and hence are augmented with noise. Noise augmentation often makes one
important assumption: that the prediction label should remain the same in the
presence or absence of noise, which is true for automatic speech recognition
but not necessarily true for perception-based tasks. In this chapter we make
three novel contributions. We validate through crowdsourcing that the presence
of noise does change the annotation label and hence may alter the original
ground truth label. We then show how disregarding this knowledge and assuming
consistency in ground truth labels propagates to downstream evaluation of ML
models, both for performance evaluation and robustness testing. We end the
chapter with a set of recommendations for noise augmentations in speech
emotion recognition datasets.
### 5.2 Introduction
Speech emotion recognition is increasingly included as a component in many
real-world human-centered machine learning models. Modulations in speech can
be produced for a multitude of reasons, both desirable and undesirable. In our
case desirable modulations encode information that we want our model to learn
and be informed by, such as speaker characteristics or emotion. Undesirable
modulations encode information from extrinsic factors that change with the
environment, such as noise. In order to handle these modulations, we need
large datasets that capture the range of possible speech variations and their
relationship to emotion expression. But, such datasets are generally not
available for emotion tasks. To bridge this gap, researchers have proposed
various methods to generate larger datasets. One of the most common is noise
augmentation. The baseline assumption of noise augmentation is that the labels
of the emotion examples do not change once noise has been added [169]. While
this assumption can be confidently made for tasks such as automatic speech
recognition (ASR), the same cannot be said for perception-based tasks, such as
emotion recognition.
In this chapter, we question the assumption that the annotation label remains
the same in the presence of noise. We first create a noise-augmented dataset
and conduct a perception study to label the emotion of these augmented samples,
focusing on which types of noise change or preserve the perceived emotion after
augmentation. We use the results from this
study to classify the complete set of augmentation noises into two categories,
perception-altering (i.e., noises that may change the perception of emotion)
and perception-retaining (i.e., noises that do not change the perception of
emotion). We propose that the perception-altering noises should not be used in
supervised learning or evaluation frameworks because we cannot confidently
maintain that the original annotation holds for a given sample. We evaluate
the effects of disregarding emotion perception changes by examining how the
performance of emotion recognition models and analyses of their robustness
change in unpredictable manners when we include samples that alter human
perception in the training of these models. Lastly, we provide a set of
recommendations for noise based augmentation of speech emotion recognition
datasets based on our results.
Researchers have considered the impact of noise on emotion perception and
thereby the annotation of emotions. [X] looked at how pink and white noises in
varying intensities change the perception of emotion. Another line of research
has concentrated on training and validating noise-robust models under the
assumption that the label to be predicted remains consistent in the presence of
noise. For example, [X] have looked at training student-teacher models that aim
to ignore the effect of noise introduced to the model. On the other hand, [X]
have proposed copy-pasting various emotion segments together along with neutral
noise to balance the classes in an emotion dataset, thus improving performance.
In this chapter, we claim that the standard assumption about perception, and
hence about label retention of emotion in the presence of noise, may not hold
true for multiple noise categories. To understand which noises impact emotion
perception, we use a common emotion dataset, IEMOCAP, and introduce various
kinds of noise to it, at varying signal-to-noise ratio (SNR) levels as well as
at different positions in the sample. We then perform a crowdsourcing
experiment that asks workers to annotate their perception of emotion for both
the clean and the corresponding noise-augmented sample. This enables us to
divide noise augmentation options into groups characterized by their potential
to either influence or not influence human perception.
The results of the crowdsourcing experiments inform a series of empirical
analyses focused on model performance and model robustness. We first present an
empirical evaluation of the effects of including perception-altering noises in
training. It allows us to observe how the inclusion of perception-altering
noises creates an impression of performance improvement. We then discuss how
this improvement is illusory: the new model will have learned to predict labels
that are not truly associated with a given sample due to the perceptual effects
of these noises. We consider both a general recurrent
neural network (RNN) model and an end-to-end model for this purpose. We
evaluate conditions in which novel augmentation noises are either introduced
during training (matched) or seen for the first time during testing
(mismatched). The second empirical evaluation analyzes whether the gap in
performance between the matched and mismatched conditions can be bridged using
noise robust modeling techniques. The third and final evaluation is focused on
the robustness of the model. It will allow us to observe how the inclusion of
these perception altering noises ultimately leads to a model that is more
susceptible to attack compared to a model that does not include these noises.
We train an attack model for robustness testing. It considers a pool of noises
and picks the best noise with a minimal SNR degradation that is able to change
a model’s prediction. We consider a condition in which the attack model has
black-box access to the trained model. The attack has a fixed number of
allowed queries to the trained model, but not the internal gradients or
structure (i.e., the attack model can only provide input and can only access
the trained model’s prediction). We test and monitor the difference in the
observed robustness of these aforementioned models.
We find that the crowdsourced labels do change in the presence of some kinds
of noise. We then verify that the models perform worse on noisy samples when
trained only on clean datasets. But, we show that this decrease in performance
is different when using the complete set of noises for augmenting the test set
vs. when only using the perception-retaining noises for augmentation. We show
similar patterns for noise-robust models, specifically showing how there is an
increased drop in performance for the end-to-end noise-robust model when
excluding perception-altering noises during augmentation. We then discuss how
our conventional metrics, those that look only at model performance, may be
incorrectly asserting improvements as the model is learning to predict an
emotion measure that is not in line with human perception. Troublingly, we
find that the attack model is generally more effective when it has access to
the set of all noises as compared to when excluding perception-altering noises
for allowed augmentations. We also specifically find that given just a pool of
carefully crafted reverberation modulations, the attack model can be
successful in almost 65% of the cases with minimal degradation in SNR and in
less than ten queries to the trained model. We end the chapter with a general
set of recommendations for noise augmentations in speech emotion recognition
datasets.
### 5.3 Research Questions
In this chapter, we investigate five research questions:
Purpose 1: Premise Validation through Crowdsourcing
RQ1: Does the presence of noise affect emotion perception as evaluated by
_human raters_? Is this effect dependent on the utterance length, loudness,
the type of the added noise, and the original emotion?
Reason: Noise is known to have a masking effect on humans in specific
situations. Hence, humans can often understand verbalized content even in the
presence of noise. Our goal is to understand whether the same masking effect
extends to paralinguistic cues such as emotion, and to what extent. Our
continuing claim from here on is that only noises that do not change human
perception should be used for the training and evaluation of machine learning
models. Not doing so can lead to gains or drops in performance measurement that
may not actually extend to real-world settings. We call these changes
"unverified" because we cannot, with certainty, be sure that the model should
have predicted the original label (i.e., the label of the sample before noise
was added), because the human did not necessarily label the noisy instance with
that same label.
Purpose 2: Noise Impact Quantification
RQ2: Can we verify previous findings that the presence of noise affects the
performance of _emotion recognition models_? Does this effect vary based on
the type of the added noise?
Reason: It is known that the presence of noise in data shifts the data
distribution [50]. This shift often leads to poor performance by machine
learning models. We aim to quantify the amount of performance drop based on the
type of noise in these systems, first for any kind of noise and then
specifically for noises that do not change human perception (perception-
retaining).
Purpose 3: Denoising and Augmentation Benefits Evaluation
RQ3: Does dataset augmentation (Q3a) and/or sample denoising (Q3b) help
improve the robustness of emotion recognition models to unseen noise?
Reason: We test whether the commonly-used methods for improving the
performance of these models under distribution shifts are helpful. We focus on
two main methods, augmentation and denoising. We specifically look at how
performance changes when we augment with noises that include those that are
perception-altering vs. when we exclude such noises.
Purpose 4: Model Robustness Testing Conditions
RQ4: How does the robustness of a model to attacks compare when we use test
samples that are augmented with perception-retaining noise vs. samples that are
augmented with all types of noise, regardless of their effect on perception?
Reason: Another major metric for any deployable machine learning algorithm is
its performance on "unseen situations" or handling incoming data shifts (i.e.,
robustness testing). We test robustness using a noise augmentation algorithm
that aims to forcefully and efficiently change a model's output by augmenting
test samples with noise. We look at how often this algorithm is unsuccessful in
being able to "fool" a model with its augmented samples. We look at the changes
in frequency with which a model is successfully able to defend itself when the
attack algorithm uses a set that includes all types of noises vs. when it only
uses perception-retaining noises.
Purpose 5: Recommendations
RQ5: What are the recommended practices for speech emotion dataset
augmentation and model deployment?
Reason: We then provide a set of recommendations based on our empirical
studies for deploying emotion recognition models in real world situations.
### 5.4 Noise
We investigate the effects of two types of noise, environmental and signal
distortion. Environmental noises are additive, while signal distortion noise
involves other types of signal manipulation.
#### 5.4.1 Environmental Noise
We define environmental noises (ENV) as additive background noise, obtained
from the ESC-50 dataset [176] (https://github.com/karoldvl/ESC-50). ESC-50 is
generally used for noise contamination and environmental sound classification
[231]. These environmental sounds are representative of many types of noise
seen in real world deployments, especially in the context of virtual and smart
home conversational agents. We use the following categories:
* •
Natural soundscapes (Nat), e.g., rain, wind.
* •
Human, non-speech sounds (Hum), e.g., sneezing, coughing, laughing or crying
in the background etc.
* •
Interior/domestic sounds (Int), e.g., door creaks, clock ticks etc.
We manipulate the following factors when adding the noise sources:
* •
Position: The sound is introduced either (i) at the start of the utterance,
fading out in loudness, or (ii) over the entire duration of the utterance. In
the second case, the complete additive background represents a consistent
real-world noise source (e.g., fan rotation).
* •
Quality Degradation: The decrease in the signal-to-noise ratio (SNR) caused by
the addition of the additive background noise, at levels of 20 dB, 10 dB, and
0 dB. This is used only when noise is added over the entirety of the utterance;
a minimal mixing sketch follows this list.
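The sketch below shows one way to mix an additive background noise into an utterance at a target SNR, such as the 20 dB, 10 dB, and 0 dB levels above. It is an illustration under the stated assumptions, not the exact augmentation code used for the dataset.

```python
# A minimal sketch of additive-noise mixing at a target signal-to-noise ratio;
# `speech` and `noise` are assumed to be 1-D float arrays at the same sample rate.
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    # Tile or truncate the noise so it spans the whole utterance.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[:len(speech)]

    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = speech_power / (10.0 ** (snr_db / 10.0))
    return speech + noise * np.sqrt(target_noise_power / noise_power)
```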
#### 5.4.2 Signal Distortion
We define signal distortion noise as modulations that are not additive in the
background. These kinds of noise in the audio signal can occur from
linguistic/paralinguistic factors, room environment, internet lags, or the
physical locomotion of the speaker.
We use the following nine categories:
* •
SpeedUtt: The utterance is sped up by either 1.25$\times$ or 0.75$\times$.
* •
SpeedSeg: A random segment within an utterance is sped up by 1.25$\times$. The
package pyAudio (https://people.csail.mit.edu/hubert/pyaudio/) that we used to
speed up a segment did not permit slowing a segment down. Thus, the
0.75$\times$ was not used here.
* •
Fade: The loudness of the utterance is faded by 2% every second, which
emulates the scenario of a user moving away from the speaker. The loudness is
increased for fade in, and decreased for fade out.
* •
Filler: Non-verbal short fillers such as ‘uh’, ‘umm’ (from the same speaker)
are inserted in the middle of a sentence. The insertion is either just the
filler or one preceded and succeeded by a long pause. (Fillers are obtained by
parsing audio files for a given speaker and finding occurrences of any of the
options from the above-mentioned set. We will release the extracted fillers per
speaker for IEMOCAP.)
* •
DropWord: A randomly selected set of non-essential words belonging to the set:
{a, the, an, so, like, and} are dropped from an utterance using word-aligned
boundaries and stitching the audio segments together.
* •
DropLetters: Following the same approach as drop word, letters are dropped in
accordance with various linguistic styles chosen from the set: {/h/+vowel,
vowel+/nd/+consonant(next word), consonant+/t/+consonant(next word),
vowel+/r/+consonant, /ihng/}. This is supported by research that has studied
phonological deletion or dropping of letters in the native US-English dialect
[1, 237].
* •
Laugh/Cry: “Sob” and “short-laughter” sounds are added to the utterance. They
are obtained from AudioSet [82].
* •
Pitch: The pitch is changed by $\pm$ 3 half octaves using the pyAudio library.
* •
Rev: Room reverberation is added to the utterance using py-audio-effects
(pysndfx, https://github.com/carlthome/python-audio-effects). We vary metrics
such as reverberation ratio or room size to vary the type and intensity of
reverberation added.
#### 5.4.3 Sampling and Noise-Perturbations
We randomly select 900 samples from the IEMOCAP dataset, which is far larger
than the ones used for previous perception studies [170, 191]. We select 100
samples from each activation and valence pair bin, i.e., 100 samples from the
bin with activation: _low_ , valence: _low_ ; 100 samples from the bin with
activation: _low_ , and valence: _mid_ , and so on. This ensures that the
chosen 900 samples cover the range of emotions expressed. We impose another
constraint on these 100 samples from each bin: 30 of them lie outside the
interquartile range of utterance length (in seconds), covering both extremes of
the duration spectrum, and the remaining 70 lie in the middle. We also ensure
that the selected samples have an even 50-50 split across gender. We introduce
noise to the 900 samples (Section 5.4). Each sample is modulated in ten ways:
four randomly chosen types of environmental noise and six randomly chosen
signal distortion noise modulations, giving us a total of 9,000 noisy samples.
(We will release the script to create these files.)
### 5.5 User study
We first analyze the effects of noise on human perception by relabeling the
noise-enhanced data using the Amazon Mechanical Turk (AMT) platform. We use
insights from this experiment to guide the machine learning analyses that
follow.
#### 5.5.1 Crowdsourcing Setup
We recruited 147 workers using Amazon Mechanical Turk who self-identify as
being from the United States and as native English speakers, to reduce the
impact of cultural variability. We ensured that each worker had $>98$%
approval rating and more than 500 approved Human Intelligence Tasks (HITs). We
ensured that all workers understood the meaning of activation and valence
using a qualification task that asked workers to rank emotion content similar
to [105]. The qualification task has two parts: (i) we explain the difference
between valence and activation and how to identify those, and, (ii) we ask
them to identify which of the two samples has a higher/lower valence and a
higher/lower activation, to ensure that they have understood the concept of
activation and valence annotations. All HIT workers were paid a minimum wage
($\$9.45/$hr), pro-rated to the minute. Each HIT was annotated by three
workers.
For our main task, we created pairs that contained one original and one
modulated sample. We then asked each worker to annotate whether or not they
perceived the pair to have the same emotion. If they said yes for both
activation and valence, the noisy sample was labeled same and they could
directly move to the next HIT. If they said no, the noisy sample was labeled
different. In this case, they were asked to assess the activation and valence
of the noisy sample using Self Assessment Manikins [30] on a scale of [1, 5]
(similar to the original IEMOCAP annotation).
We also include three kinds of attention checks:
1.
We show two samples that have not been modified and ask them to decide if the
emotion represented was different. If the person says yes, then the experiment
ends.
2.
We observe the time spent on the task. If the time spent on the task is less
than the combined length of both samples, then the user’s qualification to
annotate the HITs is rescinded and their responses are discarded.
3.
We show two samples: one that has a gold-standard label, and another that has
been contaminated with significant noise (SNR degradation $>$30 dB), such that
the resulting sample is incomprehensible. If people do not mark this pair of
samples as being different, the experiment ends.
The failure rate based on the above criteria was 8%. We ensured the quality of
the annotations by paying bonuses based on time spent, not just number of
HITs, and by disqualifying annotators if they annotated any sample (including
those outside of the attention checks) more quickly than the combined length
of the audio samples.
We then created two sets of labels for each noise-augmented clip. The _first
type_ of label compared a noise-augmented clip to its original. The noise-
augmented clip was labeled the _same_ if the modified and original clip were
perceived to have the same valence and activation, otherwise it was labeled
_different_. We created this label by taking the majority vote over all
evaluations. The _second type_ of label included valence and activation. A
noise-augmented clip was given the average valence and activation over all
evaluations.
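The two label types can be derived from the three worker evaluations with a majority vote and a mean, respectively. The sketch below uses illustrative ratings for one clip and assumes, for simplicity, that every worker supplied a SAM rating.

```python
# A minimal sketch of the two label types described above, with illustrative
# values for one noise-augmented clip rated by three workers.
import numpy as np

same_votes = [True, False, True]                     # per-worker "same emotion?" judgements
label_same = sum(same_votes) > len(same_votes) / 2   # first label type: majority vote

sam_ratings = np.array([[3, 4], [2, 4], [3, 5]])     # per-worker (valence, activation), 1-5 SAM scale
mean_valence, mean_activation = sam_ratings.mean(axis=0)  # second label type: averages
```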
The inter-annotator agreement was measured using Cohen’s kappa.
Conventionally, when estimating Cohen’s kappa, annotators are not considered
as individuals, instead reducing annotators to the generic 1, 2, and 3. The
challenge is that this often leads to artificially inflated inter-annotator
agreement because individual characteristics and behavior of a particular
worker are not taken under consideration [95]. We take a different approach,
creating a table for the calculation of the statistic that considers
annotators as individuals with separate entries for each clip, following the
approach of [95]. If an annotator didn’t evaluate a given clip, the cell has a
null (missing data) value. We found that the Cohen’s kappa was 79% for
activation and 76% for valence. (The sample names, the code to create the
paired noisy examples, and the resulting annotations will be made available for
further research.)
Randomly sample 1 noise variation from each category mentioned in Section
5.4.;
$numAttempts=0$;
for _each noise in selected random noises:_ do
Add noise to the sample such that the decrease in SNR is 1.;
Get the classifier output with this new sample variation.;
$numAttempts+=1$;
if _$numAttempts >k$_ then
return _Exit Code = Failure_
end if
if _classifier output changes_ then
return _Exit Code = Success_
end if
end for
for _each noise in selected random noises:_ do
Add noise to the sample such that the decrease in SNR is 5.;
Get the classifier output with this new sample variation.;
$numAttempts+=1$;
if _$numAttempts >k$_ then
return _Exit Code = Failure_
end if
if _classifier output changes_ then
while _classifier output does not change_ do
Iterate over all SNR decreases from 2-5;
Get classifier output for the modified sample;
$numAttempts+=1$;
if _$numAttempts >k$_ then
return _Exit Code = Failure_
end if
if _classifier output changes_ then
return _Exit Code = Success_
end if
end while
end if
end for
for _each noise in selected random noises:_ do
Add noise to the sample such that the decrease in SNR is 10.;
Get the classifier output with this new sample variation.;
$numAttempts+=1$;
if _$numAttempts >k$_ then
return _Exit Code = Failure_
end if
if _classifier output changes_ then
while _classifier output does not change_ do
Iterate over all SNR decreases from 6-10;
Get classifier output for the modified sample;
$numAttempts+=1$;
if _$numAttempts >k$_ then
return _Exit Code = Failure_
end if
if _classifier output changes_ then
return _Exit Code = Success_
end if
end while
end if
end for
return _Exit Code = Failure_
Algorithm 1 Pseudo-code for testing model robustness. Exit Code is Success when the algorithm finds a noise-augmented version of the sample for which the model changes its prediction. Exit Code is Failure when the model maintains its prediction over all of the noise-augmented versions tried.
Table 5.1: The ratio of samples marked by human evaluators as perceptibly different in emotion after the addition of noise. V: Valence, A: Activation, $\delta$V: average change in valence on addition of noise, $\delta$A: average change in activation on addition of noise.
| V | A | $\delta$V | $\delta$A
---|---|---|---|---
Environmental Noise
NatSt | | 0.01 | 0.00 | |
NatdB (Co) | -10dB | 0.01 | 0.00 | |
| Same | 0.02 | 0.00 | |
| +10dB | 0.03 | 0.01 | |
HumSt | | 0.01 | 0.00 | |
HumdB (Co) | -10dB | 0.03 | 0.00 | |
| Same | 0.02 | 0.00 | |
| +10dB | 0.04 | 0.01 | |
IntSt | | 0.05 | 0.01 | |
IntdB (Co) | -10dB | 0.02 | 0.01 | |
| Same | 0.02 | 0.00 | |
| +10dB | 0.04 | 0.01 | |
Signal Distortion
SpeedSeg | | 0.01 | 0.0 | |
Fade | In | 0.04 | 0.01 | |
| Out | 0.04 | 0.00 | |
DropWord | | 0.01 | 0.00 | |
DropLetters | | 0.01 | 0.00 | |
Reverb | | 0.04 | 0.01 | |
Filler | L | 0.10 | 0.06 | |
| S | 0.06 | 0.03 | |
Laugh | | 0.16 | 0.17 | +0.11 | +0.26
Cry | | 0.20 | 0.22 | -0.20 | -0.43
SpeedUtt | 1.25x | 0.13 | 0.03 | -0.10 | -0.13
| 0.75x | 0.28 | 0.06 | -0.18 | -0.23
Pitch | 1.25x | 0.22 | 0.07 | -0.11 | +0.19
| 0.75x | 0.29 | 0.10 | -0.07 | -0.15
Table 5.2: State-of-the-art model performance in terms of UAR when using the general versions of the traditional deep learning and end-to-end deep learning models. No noise refers to clean speech; all noises refers to the combined set of perception-retaining and perception-altering noise. The environmental and signal distortion categories shown include only the perception-retaining noises. As a reminder, samples in the all noises category have an uncertain ground truth; the row is marked with two stars ($\ast\ast$). V: Valence, A: Activation, Clean: training on the clean dataset, Clean$+$Noise — Mismatch: training on the noise-augmented dataset and testing on a mismatched noisy partition, Clean$+$Noise — Match: training on the noise-augmented dataset and testing on a matched noisy partition. Random chance UAR is 0.33.
| | | | | Traditional Deep Neural Network | | End-To-End Deep Neural Network
---|---|---|---|---|---|---|---
| | | | | Clean | | Clean+Noise | | Clean | | Clean+Noise
| | | | | | Mismatch | Match | | | Mismatch | Match
| | | | | A | V | | A | V | A | V | | A | V | | A | V | A | V
No Noise | | 0.67 | 0.59 | | - | - | - | - | | 0.70 | 0.63 | | - | - | - | -
All Noises** | | 0.40 | 0.38 | | 0.55 | 0.42 | 0.66 | 0.59 | | 0.70 | 0.63 | | 0.44 | 0.38 | 0.67 | 0.60
Perception Retaining Noises | | 0.50 | 0.42 | | 0.57 | 0.48 | 0.60 | 0.52 | | 0.53 | 0.45 | | 0.60 | 0.50 | 0.62 | 0.54
Environmental Category | Nature | At Start | | | 0.50 | 0.45 | | 0.61 | 0.53 | 0.63 | 0.55 | | 0.56 | 0.48 | | 0.64 | 0.55 | 0.66 | 0.58
| Cont. | -5dB | | 0.45 | 0.39 | | 0.55 | 0.44 | 0.59 | 0.50 | | 0.49 | 0.42 | | 0.58 | 0.47 | 0.62 | 0.53
| -10dB | | 0.42 | 0.35 | | 0.56 | 0.46 | 0.59 | 0.49 | | 0.47 | 0.38 | | 0.57 | 0.48 | 0.61 | 0.51
| -20dB | | 0.40 | 0.35 | | 0.51 | 0.44 | 0.55 | 0.47 | | 0.47 | 0.39 | | 0.52 | 0.46 | 0.56 | 0.50
Interior | At Start | | | 0.53 | 0.44 | | 0.61 | 0.52 | 0.64 | 0.57 | | 0.57 | 0.47 | | 0.64 | 0.56 | 0.66 | 0.59
| Cont | -5dB | | 0.46 | 0.36 | | 0.55 | 0.44 | 0.58 | 0.49 | | 0.49 | 0.39 | | 0.59 | 0.49 | 0.62 | 0.51
| -10dB | | 0.44 | 0.36 | | 0.54 | 0.43 | 0.57 | 0.48 | | 0.49 | 0.39 | | 0.56 | 0.45 | 0.59 | 0.52
| -20dB | | 0.40 | 0.35 | | 0.52 | 0.44 | 0.55 | 0.49 | | 0.46 | 0.37 | | 0.56 | 0.45 | 0.58 | 0.51
Human | At Start | | | 0.52 | 0.45 | | 0.60 | 0.51 | 0.63 | 0.55 | | 0.58 | 0.47 | | 0.62 | 0.52 | 0.66 | 0.57
| Cont | -5dB | | 0.45 | 0.37 | | 0.52 | 0.43 | 0.55 | 0.48 | | 0.49 | 0.40 | | 0.56 | 0.44 | 0.57 | 0.50
| -10dB | | 0.42 | 0.34 | | 0.51 | 0.43 | 0.53 | 0.47 | | 0.49 | 0.38 | | 0.53 | 0.45 | 0.55 | 0.49
| -20dB | | 0.40 | 0.34 | | 0.50 | 0.41 | 0.53 | 0.46 | | 0.46 | 0.38 | | 0.54 | 0.43 | 0.56 | 0.48
Signal Distortion | Speed Segment | | 0.61 | 0.52 | | 0.63 | 0.53 | 0.64 | 0.55 | | 0.63 | 0.55 | | 0.64 | 0.55 | 0.67 | 0.58
Fade | In | | 0.62 | 0.53 | | 0.65 | 0.55 | 0.67 | 0.58 | | 0.64 | 0.55 | | 0.67 | 0.58 | 0.68 | 0.59
| | Out | | 0.61 | 0.51 | | 0.62 | 0.54 | 0.64 | 0.57 | | 0.63 | 0.54 | | 0.64 | 0.56 | 0.66 | 0.59
DropWord | | 0.64 | 0.56 | | 0.65 | 0.56 | 0.67 | 0.59 | | 0.65 | 0.58 | | 0.67 | 0.58 | 0.69 | 0.61
DropLetters | | 0.65 | 0.58 | | 0.69 | 0.60 | 0.71 | 0.62 | | 0.66 | 0.59 | | 0.72 | 0.60 | 0.74 | 0.63
Reverb | | 0.43 | 0.37 | | 0.50 | 0.43 | 0.53 | 0.45 | | 0.35 | 0.34 | | 0.51 | 0.42 | 0.55 | 0.46
Table 5.3: Noise-robust (NR) state-of-the-art model performance in terms of UAR when using the noise-robust versions of the traditional deep learning and end-to-end deep learning models. No noise refers to clean speech; all noises refers to the combined set of perception-retaining and perception-altering noise. The environmental and signal distortion categories shown include only the perception-retaining noises. As a reminder, samples in the all noises category have an uncertain ground truth; the row is marked with two stars ($\ast\ast$). V: Valence, A: Activation, Clean: training on the clean dataset, Clean$+$Noise — Mismatch: training on the noise-augmented dataset and testing on a mismatched noisy partition, Clean$+$Noise — Match: training on the noise-augmented dataset and testing on a matched noisy partition, NR: noise-robust versions of the corresponding models. Random chance UAR is 0.33.
| | | | | NR-Traditional Deep Neural Network | | NR-End-To-End Deep Neural Network
---|---|---|---|---|---|---|---
| | | | | Clean | | Clean+Noise | | Clean | | Clean+Noise
| | | | | | Mismatch | Match | | | Mismatch | Match
| | | | | A | V | | A | V | A | V | | A | V | | A | V | A | V
No Noise | | 0.67 | 0.59 | | - | - | - | - | | 0.70 | 0.63 | | - | - | - | -
All Noises** | | 0.44 | 0.40 | | 0.58 | 0.44 | 0.68 | 0.60 | | 0.50 | 0.40 | | 0.50 | 0.40 | 0.72 | 0.61
Perception Retaining Noises | | 0.52 | 0.44 | | 0.59 | 0.50 | 0.61 | 0.51 | | 0.55 | 0.48 | | 0.61 | 0.52 | 0.63 | 0.54
Environmental Category | Nature | At Start | | | 0.54 | 0.49 | | 0.63 | 0.55 | 0.63 | 0.55 | | 0.57 | 0.50 | | 0.66 | 0.55 | 0.67 | 0.56
| Cont. | -5dB | | 0.50 | 0.42 | | 0.58 | 0.50 | 0.60 | 0.52 | | 0.49 | 0.42 | | 0.61 | 0.51 | 0.63 | 0.53
| -10dB | | 0.48 | 0.38 | | 0.59 | 0.48 | 0.61 | 0.51 | | 0.50 | 0.46 | | 0.63 | 0.52 | 0.65 | 0.53
| -20dB | | 0.44 | 0.38 | | 0.55 | 0.49 | 0.59 | 0.51 | | 0.50 | 0.43 | | 0.53 | 0.46 | 0.58 | 0.47
Interior | At Start | | | 0.53 | 0.44 | | 0.61 | 0.52 | 0.65 | 0.54 | | 0.58 | 0.52 | | 0.63 | 0.56 | 0.66 | 0.58
| Cont | -5dB | | 0.46 | 0.36 | | 0.55 | 0.44 | 0.59 | 0.47 | | 0.51 | 0.42 | | 0.58 | 0.48 | 0.62 | 0.52
| -10dB | | 0.44 | 0.36 | | 0.54 | 0.43 | 0.57 | 0.43 | | 0.49 | 0.44 | | 0.57 | 0.48 | 0.61 | 0.50
| -20dB | | 0.40 | 0.35 | | 0.52 | 0.44 | 0.55 | 0.46 | | 0.46 | 0.40 | | 0.55 | 0.48 | 0.58 | 0.49
Human | At Start | | | 0.52 | 0.45 | | 0.60 | 0.51 | 0.63 | 0.52 | | 0.59 | 0.49 | | 0.63 | 0.53 | 0.66 | 0.56
| Cont | -5dB | | 0.45 | 0.37 | | 0.52 | 0.43 | 0.55 | 0.46 | | 0.49 | 0.42 | | 0.56 | 0.48 | 0.58 | 0.50
| -10dB | | 0.42 | 0.34 | | 0.51 | 0.43 | 0.54 | 0.44 | | 0.47 | 0.38 | | 0.54 | 0.49 | 0.55 | 0.45
| -20dB | | 0.40 | 0.34 | | 0.50 | 0.41 | 0.52 | 0.43 | | 0.43 | 0.38 | | 0.54 | 0.45 | 0.58 | 0.49
Signal Distortion | Speed Segment | | 0.63 | 0.55 | | 0.65 | 0.57 | 0.67 | 0.58 | | 0.66 | 0.58 | | 0.67 | 0.59 | 0.67 | 0.60
Fade | In | | 0.64 | 0.55 | | 0.66 | 0.57 | 0.68 | 0.59 | | 0.65 | 0.58 | | 0.67 | 0.59 | 0.69 | 0.60
| | Out | | 0.64 | 0.56 | | 0.65 | 0.57 | 0.67 | 0.58 | | 0.66 | 0.57 | | 0.68 | 0.59 | 0.69 | 0.63
DropWord | | 0.67 | 0.60 | | 0.66 | 0.58 | 0.66 | 0.59 | | 0.68 | 0.60 | | 0.69 | 0.60 | 0.69 | 0.60
DropLetters | | 0.67 | 0.60 | | 0.65 | 0.62 | 0.66 | 0.60 | | 0.68 | 0.64 | | 0.69 | 0.60 | 0.69 | 0.60
Reverb | | 0.48 | 0.41 | | 0.56 | 0.47 | 0.58 | 0.48 | | 0.52 | 0.40 | | 0.55 | 0.45 | 0.60 | 0.45
Table 5.4: Success of misclassification attempts on different models with varying number of allowed attempts (lower is better). As a reminder, samples in the all noises category have an uncertain ground truth; the row is marked with two stars ($\ast\ast$). Reverberation (reverb) is a perception-retaining noise that is also analyzed separately. Trad: Traditional deep learning model, E2E: End to End deep learning model, NR: Noise Robust version of the deep learning model.
| Noise Set | No. of Attempts | Activation | Valence
---|---|---|---|---
| Trad | E2E | NR-Trad | NR-E2E | Trad | E2E | NR-Trad | NR-E2E
Performance Impact Per Noise Is Unknown | All Noises** | 5 | 0.29 | 0.22 | 0.15 | 0.10 | 0.11 | 0.15 | 0.13 | 0.05
15 | 0.31 | 0.33 | 0.31 | 0.32 | 0.23 | 0.24 | 0.22 | 0.22
25 | 0.40 | 0.28 | 0.40 | 0.33 | 0.22 | 0.19 | 0.21 | 0.20
inf | 0.43 | 0.41 | 0.44 | 0.35 | 0.25 | 0.20 | 0.26 | 0.18
Perception-Retaining | 5 | 0.18 | 0.11 | 0.07 | 0.05 | 0.11 | 0.05 | 0.02 | 0.02
15 | 0.25 | 0.12 | 0.25 | 0.15 | 0.14 | 0.08 | 0.18 | 0.10
25 | 0.32 | 0.24 | 0.32 | 0.20 | 0.13 | 0.10 | 0.11 | 0.09
inf | 0.40 | 0.26 | 0.36 | 0.23 | 0.19 | 0.14 | 0.17 | 0.14
Reverb | 5 | 0.33 | 0.15 | 0.22 | 0.18 | 0.20 | 0.14 | 0.22 | 0.09
15 | 0.40 | 0.23 | 0.30 | 0.21 | 0.30 | 0.13 | 0.34 | 0.16
Performance Impact Per Noise Is Known | All Noises** | 5 | 0.33 | 0.28 | 0.24 | 0.22 | 0.15 | 0.14 | 0.12 | 0.12
15 | 0.38 | 0.38 | 0.32 | 0.33 | 0.20 | 0.21 | 0.18 | 0.16
25 | 0.52 | 0.32 | 0.44 | 0.37 | 0.25 | 0.16 | 0.18 | 0.16
inf | 0.54 | 0.42 | 0.46 | 0.41 | 0.24 | 0.20 | 0.22 | 0.22
Perception-Retaining | 5 | 0.29 | 0.16 | 0.22 | 0.13 | 0.14 | 0.10 | 0.15 | 0.08
15 | 0.32 | 0.32 | 0.32 | 0.28 | 0.14 | 0.15 | 0.16 | 0.12
25 | 0.47 | 0.30 | 0.47 | 0.29 | 0.22 | 0.18 | 0.23 | 0.16
inf | 0.51 | 0.36 | 0.50 | 0.32 | 0.22 | 0.19 | 0.25 | 0.17
Reverb | 5 | 0.38 | 0.22 | 0.33 | 0.22 | 0.28 | 0.20 | 0.31 | 0.21
15 | 0.47 | 0.29 | 0.41 | 0.28 | 0.30 | 0.23 | 0.35 | 0.26
### 5.6 Methods
Table 5.5: Hyper-parameters used to select the best performing model on the validation subset whilst training the traditional deep learning model.
Hyper-parameter | Values
---|---
Traditional |
No. of Convolution Kernels | {64, 128}
Convolution Kernels Width | {2}
Number of Convolution Layers | {5}
Number of GRU layers | {1, 2, 3}
Pooling Kernel Width | {2, 4}
GRU Layers Width | {32, 64}
Number of Dense Layers | {1, 2, 3}
End to End |
No. of Dense Layers | {1, 2}
We now describe the emotion recognition approaches, presenting two separate
pipelines, one that relies upon direct feature extraction (Section 5.6.2) and
the other that is end-to-end (Section 5.6.3). This allows us to investigate
whether noise has a consistent effect. We discuss approaches to improve noise
robustness by training models with noise-augmented data or denoised data
(Section 5.6.4). Finally, we describe the setup and evaluation of the model
robustness using an untargeted model misclassification test, which measures a
model’s fragility in terms of how likely it is that the model’s decisions will
change when specific types of noise are observed at test time (Section 5.6.5).
#### 5.6.1 Creation of Data Partitions
We use a subject-independent five-fold cross validation scheme to select our
train, test and validation sets. In the first iteration, sessions 1-3 are used
for training, session 4 is used as validation, and session 5 is used for
testing. This is repeated in a round-robin fashion, resulting in each session
serving as a validation and a test fold. We also divide the possible noises
into two different categories based on the results of the crowdsourcing study
(see Section 5.7.1). The first category is _perception-altering_: noises that
changed human perception and hence cannot be used for model training or
evaluation with the old annotations. The second category is
_perception-retaining_: noises that did not change human perception, and hence
the model should produce no change in predictions when using those noise
categories for sample augmentation.
We use the noise categories (see Section 5.4) in two varying circumstances. The
first category is matched, where both the training and testing sets are
augmented with same kinds of noise (e.g., both have nature-based sounds in
them). The second category is mismatched, where the testing set is augmented
with a noise _category_ not used for augmenting the training set (e.g., only
the test set is augmented with nature-based noise while the train set is
augmented with human or interior noises).
#### 5.6.2 Traditional Deep Learning Network
We first explore a common “traditional” deep learning network that is used in
speech emotion recognition. In this method we extract Mel Filterbank (MFB)
features as input to a model composed of convolutional and gated recurrent
unit (GRU) layers.
##### 5.6.2.1 Features
We extract 40-dimensional Mel Filterbank (MFB) features using a 25-millisecond
Hamming window with a step-size of 10 milliseconds using python-speech-features
(https://github.com/jameslyons/python_speech_features). Each
utterance is represented as a sequence of 40-dimensional feature vectors. We
$z$-normalize the acoustic features using parameters extracted from the
training dataset. During each cross-validation fold, the parameters are chosen
from the training data and are applied to both the validation and testing
data.
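The following is a minimal sketch, assuming the python_speech_features library named above, of this feature pipeline; the file handling, the 16 kHz mono assumption, and the normalization helpers are illustrative rather than taken from the original implementation.

```python
# A minimal sketch of the MFB pipeline described above. The file path handling,
# the 16 kHz mono assumption, and the normalization helpers are illustrative.
import numpy as np
import scipy.io.wavfile as wav
from python_speech_features import fbank

def extract_mfb(wav_path):
    """Return a (num_frames, 40) array of log Mel filterbank features."""
    rate, signal = wav.read(wav_path)            # assumed 16 kHz mono audio
    feats, _ = fbank(signal, samplerate=rate,
                     winlen=0.025,               # 25 ms window
                     winstep=0.01,               # 10 ms step
                     nfilt=40,                   # 40 Mel filters
                     winfunc=np.hamming)         # Hamming window
    return np.log(feats + 1e-8)

def znorm_params(train_feature_list):
    """Mean/std computed on the training fold only."""
    stacked = np.concatenate(train_feature_list, axis=0)
    return stacked.mean(axis=0), stacked.std(axis=0) + 1e-8

def apply_znorm(feats, mean, std):
    """Apply the training-fold statistics to any partition."""
    return (feats - mean) / std
```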
##### 5.6.2.2 Network
Our baseline network is a state-of-the-art single-utterance emotion classification
model which has been used in previous research [9, 116, 126]. The extracted
MFBs are processed using a set of convolution layers and GRUs (see Table 5.5
for the hyperparameters used for these layers). The output of these layers is
then fed through a mean pooling layer to produce an acoustic representation
which is then fed into a set of dense layers to classify activation or
valence.
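A minimal Keras sketch of this architecture is shown below, instantiated with one hyper-parameter setting from Table 5.5; the layer ordering, activation functions, and placement of the pooling layer are assumptions, since the text does not fix them.

```python
# A minimal sketch of the traditional CNN-GRU classifier, using one
# hyper-parameter setting from Table 5.5 (128 kernels of width 2, 5
# convolution layers, pooling width 2, 2 GRU layers of width 64, 2 dense
# layers). The exact layer ordering is an assumption based on the text.
import tensorflow as tf

def build_model(num_mfb=40, num_classes=3):
    inp = tf.keras.Input(shape=(None, num_mfb))            # MFB sequence input
    x = inp
    for _ in range(5):                                      # convolution layers
        x = tf.keras.layers.Conv1D(128, 2, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)        # pooling layer
    for _ in range(2):                                       # GRU layers
        x = tf.keras.layers.GRU(64, return_sequences=True)(x)
    x = tf.keras.layers.GlobalAveragePooling1D()(x)          # mean pooling over time
    x = tf.keras.layers.Dense(64, activation="relu")(x)      # dense layers
    out = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```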
##### 5.6.2.3 Training
We implement the models using the Keras library [51]. We use a cross-entropy
loss function for each task (e.g., valence or activation). We learn the model
parameters using the RMSProp optimizer. We train our networks for a maximum of
50 epochs and use early stopping if the validation loss does not improve after
five consecutive epochs. Once the training process ends, we revert the
network’s weights to those that achieved the lowest validation loss. We repeat
the experiment five times. We report the results in terms of Unweighted
Average Recall (UAR, chance is 0.33), averaged over all test samples and five
repetitions. We compare the performance of different models or the same model
in different noisy conditions/partitions using a paired t-test using the
Bonferroni correction, asserting significance when $p\leq 0.05$.
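A minimal sketch of this training and evaluation protocol, assuming the model builder from the previous sketch, is shown below; the dummy data, fixed sequence length, and per-fold result vectors are placeholders standing in for the real cross-validation folds.

```python
# A minimal sketch of the training and evaluation protocol described above.
# The dummy data and per-fold UAR vectors are placeholders.
import numpy as np
import tensorflow as tf
from sklearn.metrics import recall_score
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(64, 300, 40)), rng.integers(0, 3, 64)
x_val, y_val = rng.normal(size=(16, 300, 40)), rng.integers(0, 3, 16)
x_test, y_test = rng.normal(size=(16, 300, 40)), rng.integers(0, 3, 16)

model = build_model()  # from the sketch above
model.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=50, callbacks=[early_stop], verbose=0)

# Unweighted Average Recall = macro-averaged recall (chance is 0.33).
y_pred = model.predict(x_test).argmax(axis=-1)
uar = recall_score(y_test, y_pred, average="macro")

# Paired t-test comparing per-fold UARs of two models; significance is
# asserted at a Bonferroni-corrected p <= 0.05.
uar_model_a, uar_model_b = rng.random(5), rng.random(5)  # placeholders
t_stat, p_value = ttest_rel(uar_model_a, uar_model_b)
```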
#### 5.6.3 End-to-End Deep Learning Networks
Next, we explore a transformer-based model. In this method the raw audio
signal is used as input to a pre-trained and fine-tuned network and the
emotion prediction is directly obtained as an output. These models do not
require us to perform manual or domain knowledge-based extraction of features.
They instead have a feature encoder component inside the model, which is
dynamic in nature, and hence, can change its output for the same signal based
on the dataset and nature of the task.
##### 5.6.3.1 Features
For the end-to-end deep learning models, we do not need to extract audio
features. Instead, we rely on the network itself to both normalize and extract
features, which are later passed on to the deeper layers of the network. The
input here is the original wav files, which are not modified in any
way. The eventual representations are of size 512, reproducing the setup
in the state-of-the-art implementation [175].
##### 5.6.3.2 Network
Our baseline network is the state-of-the-art wav2vec2.0 emotion recognition
model [175]. The wav2vec model comprises three parts: (i) a
convolutional neural network (CNN) that acts as a feature encoder, (ii) a
quantizer module, and (iii) a transformer module. The input to the model is
raw audio data (16kHz) that is passed to a multi-block 1-d CNN to generate
audio representations (25ms). The quantizer is similar to a variational
autoencoder that encodes and extracts features using a contrastive loss. The
transformer is used for masked sequence prediction and encodes the bi-
directional temporal context of the features.
We use the base model, which has not been fine-tuned for ASR (wav2vec2.0-PT).
We then fine-tune the base model to predict the binned emotion labels. We use
the final representation of the output as an input to dense layers to produce
the final output.
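As an illustration, a minimal fine-tuning sketch using the Hugging Face transformers library is shown below; the checkpoint name, the sequence-classification head, and the single training step are our assumptions and may differ from the implementation in [175].

```python
# A minimal sketch of fine-tuning a pre-trained (not ASR-fine-tuned) wav2vec 2.0
# model for binned emotion classification. The checkpoint name and the use of
# Wav2Vec2ForSequenceClassification are illustrative choices.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=3)

# Raw 16 kHz waveform in; logits over the three emotion bins out.
waveform = torch.randn(16000)                      # 1 s of placeholder audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
logits = model(**inputs).logits                    # shape: (1, 3)

# Standard fine-tuning step (cross-entropy on the binned labels).
label = torch.tensor([1])
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()
```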
##### 5.6.3.3 Training
# The Data Conversion Bottleneck in Analog Computing Accelerators
James T. Meech1 Vasileios Tsoutsouras1,2 Phillip Stanley-Marbell1,2
1Department of Engineering, University of Cambridge 2Signaloid
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Most modern computing tasks have digital electronic input and output data. Due
to these constraints imposed by real-world use cases of computer systems, any
analog computing accelerator, whether analog electronic or optical, must
perform a digital-to-analog conversion on its input data and a subsequent
analog-to-digital conversion on its output data. The energy and latency costs
incurred by data conversion place performance limits on analog computing
accelerators. To avoid this overhead, analog hardware must replace the full
functionality of traditional digital electronic computer hardware. This is not
currently possible for optical computing accelerators due to limitations in
gain, input-output isolation, and information storage in optical hardware.
This article presents a case study that profiles 27 benchmarks for an analog
optical Fourier transform and convolution accelerator which we designed and
built. The case study shows that an ideal optical Fourier transform and
convolution accelerator can produce an average speedup of $9.4\times$ and a
median speedup of $1.9\times$ for the set of benchmarks. The optical Fourier
transform and convolution accelerator only produces significant speedup for
pure Fourier transform ($45.3\times$) and convolution ($159.4\times$)
applications.
## 1 Introduction
Most modern computing tasks are constrained to having digital electronic input
and output data. Mass-produced digital electronic memory is the only off-the-
shelf option for data storage. This constrains the input data to be digital
electronic signals. Plotting and data visualization software is only widely
available for programming languages designed to run on off-the-shelf digital
electronic hardware. The traditional digital electronic computer architecture
is better suited to most applications than current application-specific analog
computing accelerators. Directly substituting analog computer architectures
for digital computer architectures would therefore be unproductive: For the
time being, analog computing accelerators must efficiently compute partial or
full results for applications dominated by the type of computing operations
the accelerators are designed to accelerate.
Any analog computing accelerator operating on digital input data to produce
digital output data must perform a digital-to-analog conversion on its input
data and a subsequent analog-to-digital conversion on its output data because
of the input and output constraints imposed by modern computer systems. The
only alternative would be to develop an entire software stack to allow the
analog hardware to perform all the functions of the traditional digital
electronic computer hardware. This is not currently possible for optical
computing accelerators due to limitations in gain, input-output isolation, and
memory. Modern digital electronic computers spend 62.7 % of their energy
moving data [12]. Adding computing accelerators that cannot accelerate the
entire application exacerbates this existing data movement bottleneck [12].
Trends in power delivery requirements are placing even more constraints on
available pins and memory bandwidth, making the problem worse still [63].
Figure 7 in Appendix B shows a prototype analog optical accelerator we
designed and built while studying the data movement and data conversion
bottlenecks. Appendix C contains an optical Fourier transform and convolution
computing accelerator case study which shows that the best possible speedup
for optical Fourier transform and convolution accelerators is orders of
magnitude smaller than that of other popular accelerator architectures. This
limited upside on possible speedup will continue to be the case even after
research advances overcome the data movement bottleneck.
## 2 The Cost of Digitally Interfacing With Analog Computing Accelerators
Most modern computing systems are constrained to having digital electronic
input and output data. These constraints are imposed by the storage of input
data in off-the-shelf digital electronic memory and the data processing and
visualization software tools which are exclusively designed to run on digital
electronic computer systems. To avoid data conversions, computer systems can
only use digital computing devices for applications that have digital input
and output data. Figure 1 block ➀ shows that a computer system must incur the
latency and energy cost of a digital-to-analog and analog-to-digital
conversion to use analog computing devices for problems with digital input and
output data. Figure 1 block ➁ shows that a computer system does not need any
data conversion to perform the same computation using digital computing
devices. Using analog computing device-based accelerators is therefore only
worthwhile when the energy and latency saved by using analog computing devices
far outweigh the energy and latency costs of the data conversion.
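Stated as a simple criterion (in our notation, not the paper's): offloading to the analog device is only beneficial when $E_{\mathrm{DAC}}+E_{\mathrm{analog}}+E_{\mathrm{ADC}}<E_{\mathrm{digital}}$ and $t_{\mathrm{DAC}}+t_{\mathrm{analog}}+t_{\mathrm{ADC}}<t_{\mathrm{digital}}$, where $E$ and $t$ denote the energy and latency of each stage of the analog path in Figure 1 block ➀ and of the all-digital path in block ➁.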
Figure 1: Architectures for computational problems with digital input and
output data. Let $a(x)$ be an analog function and $d(x)$ be a digital
function, both computed using the input data $x$.
Figure 2a shows a plot of the sampling speed and power consumption of 96
digital-to-analog converter designs published in various venues including the
International Solid-State Circuits Conference (ISSCC) and Symposium on Very
Large Scale Integration (VLSI) Technology and Circuits conference between 1996
and 2021. Figure 2b shows a plot of the sampling speed and power consumption
of 647 analog-to-digital converter designs published in ISSCC and VLSI since
1997. The Pareto frontier (black stepped line) shows that there is a tradeoff
between power consumption and sampling speed for both digital-to-analog and
analog-to-digital converters [14, 49].
Anderson et al. [6] use values from existing digital-to-analog (Kim et al.
[37]) and analog-to-digital (Liu et al. [42]) converters which are above the
Pareto frontiers of Figure 2a and 2b to predict an energy efficiency
improvement of $100\times$ over digital electronic hardware for an optical
computing accelerator which uses existing technology. Anderson et al. [6]
predict a greater than $100,000\times$ energy advantage when performing
multiply-accumulate (MAC) operations for an analog optical multiply-accumulate
computing accelerator over existing 300fJ/MAC digital electronic hardware
(NVIDIA A100 GPU). The greater than $100,000\times$ predicted energy advantage
relies on the availability of analog-to-digital and digital-to-analog
converters which use $32\times$ fewer joules per bit than Kim et al. [37] and
Liu et al. [42], respectively. Using fewer bits of precision to reduce analog-
to-digital and digital-to-analog converter requirements is promising, but any
precision reduction tradeoff to reduce power consumption and increase optical
hardware speed can be made more easily with digital hardware due to the
mature, low-cost manufacturing processes [60, 24]. Computer architects should
therefore avoid converting a given signal from digital to analog and vice
versa unless it is completely necessary.
Figure 2a shows that reaching the $32\times$ smaller digital-to-analog
converter energy by reducing converter power consumption, increasing sampling
speed, or some combination of the two requires a design significantly below
and in some cases more than an order of magnitude below the Pareto frontier.
Figure 2b shows that reaching the $32\times$ smaller analog-to-digital
converter energy by reducing converter power consumption, increasing sampling
speed, or some combination of the two requires a design more than an order of
magnitude below the Pareto frontier. Halving the analog-to-digital converter
energy target requires moving to a design space that is entirely below the
Pareto frontier. Implementing a design more than an order of magnitude below
the Pareto frontier may not be possible.
Reducing digital-to-analog converter power consumption and increasing sampling
speed are well-researched topics [34, 14, 49]. Researchers designing and
building novel analog computing accelerators should collaborate with digital-
to-analog and analog-to-digital converter designers to determine the
feasibility of reaching this design point with existing technology.
Fundamentally new methods and devices could be required to meet this goal.
Jang et al. [34] state that the best-reported analog-to-digital converter
efficiency has improved by nearly six orders of magnitude over the past 40
years. Jang et al. [34] however also state that energy-efficient analog-to-
digital converters have low bandwidth. This is problematic for analog
computing accelerator designers as they require high bandwidth analog-to-
digital converters to avoid the data movement bottleneck. Analog computing
accelerator designers should collaborate with the ISSCC and VLSI communities
to produce faster and more efficient digital-to-analog and analog-to-digital
converter designs and implementations.
(a) The power consumption and speed tradeoff for 96 different digital to
analog converter designs published in various venues since 1996 [14]. (b) The
power consumption and speed tradeoff for 647 different analog to digital
converter designs published in the ISSCC and VLSI conferences since 1996 [49].
Figure 2: The digital-to-analog and analog-to-digital converter speed and
power consumption tradeoff.
A research implementation of an optical computing accelerator that mitigates
the analog-to-digital and digital-to-analog conversion bottleneck exists [35].
The implementation minimizes the data conversions required by iteratively
solving entire optimization problems in the analog domain without repeated
digital-to-analog and analog-to-digital conversions. The architecture only
converts the input data from digital to analog once at the start of the
problem and from analog to digital once at the end of the problem. Their
approach has the weakness described in Section 3 that the accelerator has to
replace more of the functionality that is traditionally implemented using
digital electronic hardware. This leads to limits on the problem sizes that
their implementation can solve imposed by the off-the-shelf optical hardware
they used. The speed at which circuits can operate is determined by their
resistance $R$, capacitance $C$, and inductance $L$. As the implementation
needs to convert optical analog signals to electronic analog signals there
will be a new bottleneck imposed by the $RC$ and $LC$ delays in the conversion
circuitry. These conversion costs will be lower than analog-to-digital and
digital-to-analog conversion costs but we leave a detailed analysis as future
work.
## 3 Digital Hardware is Required to Facilitate Analog Computing Accelerators
For the example of optical transformers the data conversion bottleneck is
exacerbated because the technology to efficiently implement non-linear
activation functions optically does not exist [6, 35]. This makes it necessary
to transduce the optical signal to an electronic signal, perform an analog to
digital conversion, compute the activation function on digital electronic
hardware, perform a digital to analog conversion, and finally an electronic to
optical signal conversion [6]. Performing these conversions for every layer of
a neural network makes the resulting computer architecture slow and
inefficient, only producing energy savings for greater than ten billion
parameter models with current conversion technology [6, 49, 14]. Keyes [36]
stated in 1985 and Tucker [66] stated again in 2010 that a good computer
device requires:
1. Gain: The ability to produce an output signal larger than the input signal.
2. Input-output isolation: The output of the computing device does not affect the input.
3. Information storage: An efficient and reliable memory cell design.
At the time of writing, digital electronic transistors are the only mass-
produced computing devices satisfying these criteria. Therefore, analog
computing accelerators need digital hardware to interface with the digital
computer system providing the input data, postprocessing, and storing the
output data.
### 3.1 Case Study: Analog Optical Fourier Transform and Computing
Accelerator
It is unclear whether or not existing optical computing hardware can
efficiently implement gain. Currently, optoelectronic gain (using electronic
transistors) is the most common choice in optical neural network research
prototypes [61, 45]. It is unclear whether or not existing optical computing
hardware can implement input-output isolation [6], but some ongoing efforts
attempt to address this challenge by solving an optimization problem that
quantifies light leakage [70]. Spatial light modulators do not have input-
output isolation. Anderson et al. [6] report that spatial light modulator
pixels have crosstalk if their neighbors have significantly different values.
Anderson et al. [6] remedied this crosstalk by aggregating $3\times 3$ blocks
of pixels together as macro pixels. This aggregation is undesirable as it
reduces the total number of pixels available for computation by a factor of
nine. Research implementations of integrated optical static random access
memory exist with footprints close to those of electronic memories, lower
access times, and total energy costs per bit [3, 45]. These research
implementations of a few bytes must be scaled and integrated into memory
architectures as cells that work reliably despite thermal instability and
crosstalk. Optical computer systems are therefore application-specific
computing accelerators until they can optically implement gain, input-output
isolation, and information storage at scale. Optical computing hardware
currently cannot replace the entire functionality of the digital electronic
processor and therefore will only offload selectively-chosen parts of
application programs to the optical computing accelerator.
## 4 Accelerators Should Target High Complexity Computational Problems
Because the cost of getting data into an analog computing accelerator is high
(Section 2), analog computing accelerators should target computationally-
complex operations. If the operation an accelerator can accelerate has the
same computational cost (for example $\mathcal{O}(N)$) as getting the data
into and out of the accelerator (for example $\mathcal{O}(N)$) then such an
accelerator will be constrained to produce a limited speedup over digital
electronic hardware alone. Promising examples for acceleration are matrix-
vector multiply accumulate operations $\mathcal{O}(N^{2})$ and Ising problems
$\mathcal{O}(2^{N})$. Even when the computational complexity of a problem is
larger than its input costs, it is still required that the problem is large
enough to make the speedup worthwhile. The compute-centric computational
complexity metric does not capture the large data movement costs required to
move data into computing accelerators. The community of researchers
investigating machine learning with new compute paradigms should instead adopt
existing metrics for computational complexity that account for communications
and data movement costs [40]. Figure 3 shows a plot to illustrate the
computational overhead introduced by the data conversions required to
interface an analog computing accelerator with a digital electronic computer
system. Figure 3 assumes that the conversion complexity $C=2N$ as all $N$ data
require a digital-to-analog conversion and then a subsequent analog-to-digital
conversion. In reality, the relationship between the computational complexity
and the conversion complexity will depend upon the type of operations that are
being accelerated and the implementation of the conversion and computing
hardware.
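The following small Python sketch illustrates the assumption behind Figure 3: with an idealized conversion cost of $C=2N$, the ratio of compute operations to data conversions only grows for problems whose complexity exceeds $\mathcal{O}(N)$; the operation counts are order-of-magnitude idealizations, not measurements.

```python
# A small sketch of the idea behind Figure 3: with conversion cost C = 2N
# (one digital-to-analog and one analog-to-digital conversion per datum), the
# ratio of compute operations to conversions grows with problem class.
# Operation counts are idealized order-of-magnitude estimates.
import math

def ops_per_conversion(n):
    conversions = 2 * n                       # C = 2N data conversions
    return {
        "FFT, O(N log N)":       n * math.log2(n) / conversions,
        "matrix-vector, O(N^2)": n ** 2 / conversions,
        "Ising, O(2^N)":         2 ** n / conversions,
    }

for n in (16, 64, 256):
    print(n, {k: f"{v:.3g}" for k, v in ops_per_conversion(n).items()})
```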
Figure 3: The computational and conversion complexity of problem classes on a
logarithmic scale.
## 5 Bespoke Hardware Accelerators Require $10\times$ Theoretical Improvement
Designing and building a computing accelerator is time-consuming, expensive,
and risky [10]. Therefore, accelerators should provide at least $10\times$
improvement of some metric that users care about for a large family of
applications to be a commercial success [64]. In addition, the theoretical
improvements produced by the accelerator must be large enough to absorb
reductions in performance from the theoretical maximum caused by compromises
in the design of the accelerator. The data conversion bottleneck in analog
computing accelerators which Meech et al. [47] originally identified has
recently been discussed in work on optical [6, 45, 72, 71], thermodynamic
[19, 1, 23], and neuromorphic [2] computing accelerators.
This article is the first to describe the data conversion bottleneck generally
and its applicability for all analog computing accelerators.
### 5.1 Theoretical Case Study: Analog Optical Fourier Transform and
Convolution Accelerator
Table 1 and Figure 9 (Appendix C.2) show that an ideal optical accelerator in
which Fourier transform and convolution operations cost zero time can only
provide greater than $10\times$ speedup for two of the benchmarked
applications (pure convolutions and pure Fourier transforms). We found that
the median end-to-end speedup achievable by an optical accelerator for 27
benchmark applications is $1.94\times$, limited primarily by Amdahl’s law
(Appendix C.2). This median speedup is small compared to the speedup
achievable by other accelerators. The average speedup is $9.39\times$, which
is close to the $10\times$ requirement to make the accelerator worthwhile
(Section 5). The high speedup values of $159.41\times$ for convolutions and
$45.32\times$ for Fourier transforms skew the average. Our benchmarking study
assumed zero cost for data movement, therefore our results are for the
theoretical best case.
Popular accelerators in the literature report average speedups of $60\times$
for convolutional neural networks on GPUs [41], $1.6\times 10^{9}$ $\times$
for a quantum accelerator [7], and $2076\times$ fewer instructions executed
compared to a Monte Carlo simulation for Laplace, an uncertainty
quantification accelerator [65]. These improvements are orders of magnitude
larger than those theoretically possible with an optical accelerator.
Therefore, developing an analog optical Fourier transform and convolution
accelerator is not worthwhile unless we are targeting applications that
consist almost entirely of Fourier transforms and convolutions, with less than $10$ % of
execution time spent performing other operations; otherwise, by Amdahl’s law,
the acceleration is limited to less than 10-fold, the threshold below which it
is not worth investing the time and capital in building an accelerator. A
multiply-accumulate accelerator for neural network applications is a
potentially more promising target for a commercial optical computing
accelerator. An optical physical computing accelerator implementation exists that
accelerates the end-to-end inference latency of the LeNet deep neural network
by $9.4\times$ and $6.6\times$ compared to Nvidia P4 and A100 graphics
processing units, respectively [71]. The research article [71] which
reports the inference speedup does not report an energy efficiency comparison.
## 6 What Class of Computing Problems Suit Analog Computing Accelerators?
Analog computing devices are best suited for performing computing problems
with analog input and output data. Figure 4 block ➀ shows that two data
conversions are required to use digital computing devices for a problem with
analog input and output data. Figure 4 block ➁ shows that no data conversions
are required to use analog computing devices to solve a computing problem with
analog input and output data. Therefore, using analog computing devices for
computing problems with analog input and output data removes the data
conversion overhead required to use digital computing devices. For example, a
well-known computing application with analog input and output is optically
processing analog synthetic aperture radar images and then exposing analog
camera film using the light output by the optical system [26].
When an application has analog input data and digital output data or vice
versa we can choose to use analog or digital computing devices without
incurring the penalty of an additional analog-to-digital or digital-to-analog
conversion. Figure 4 blocks ➂, ➃, ➄, and ➅ show that we can choose to perform
the computation before or after the conversion stage. For this reason,
researchers developing novel analog computing devices should focus on
accelerator architectures that follow the structure shown in Figure 4 blocks
➁, ➂, and ➃. Sensor data processing applications that have the architecture
shown in Figure 4 block ➃ are promising examples of applications where novel
analog computing devices could have a high impact. For example, an analog
vision sensor data processing research implementation prevents the analog-to-
digital conversion bottleneck by performing all processing on analog signals
and converting the final output to digital [18]. Waveform synthesis or control
signal generation applications that have the architecture shown in Figure 4
block ➂ are promising examples of applications where novel analog computing
devices could have a high impact. Figure 4 blocks ➄ and ➅ show architectures
suitable for applications of novel digital computing devices.
Figure 4: Architectures for computational problems with a variety of input and
output data. Let $a(x)$ be an analog function, and $d(x)$ be a digital
function both computed using the input data $x$.
## Conclusion
Modern computing tasks are constrained to having digital electronic input and
output data. Because mass-produced digital electronic memory is the only off-the-shelf
storage option, input data must be stored as digital electronic
signals. Support for plotting and data visualization
software is only available for programming languages designed to run on off-
the-shelf digital electronic hardware. Therefore, any analog computing
accelerator must perform a digital-to-analog conversion on its input data and
a subsequent analog-to-digital conversion on its output data. The only
alternative to this situation would be to develop an entire software and
hardware stack to allow the analog computing devices to perform all the
functions of the traditional digital electronic computer hardware. The
traditional digital electronic computer architecture is better suited for the
majority of applications than an application-specific analog computing
accelerator and therefore substituting them would be unproductive. In a case
study on an optical computing accelerator for Fourier transforms and
convolutions we performed the first large-scale benchmarking of applications
that rely on Fourier transform and convolution operations and found that the
median end-to-end speedup achievable by an optical accelerator for 27
benchmark applications is $1.94\times$, limited primarily by Amdahl’s law
(Appendix C.2). This median speedup is small compared to the speedup
achievable by other popular types of accelerators. The average speedup is
$9.39\times$, which is close to the $10\times$ requirement to make the
accelerator worthwhile (Section 5). The high speedup values of $159.41\times$
for convolutions and $45.32\times$ for Fourier transforms skew the average.
Our benchmarking study assumed zero cost for data movement, therefore our
results are for the theoretical best case. For optical accelerators to produce
a worthwhile speedup we must overcome the data movement bottleneck. Once we
have overcome the bottleneck, most applications will only be able to achieve a
speedup of less than 10-fold. Our results show that building an analog optical
Fourier transform and convolution accelerator is not worthwhile unless it will
be applied to applications for which more than $90$ % of the execution time is
Fourier transforms or convolutions. Even with faster light-modulating devices
and camera detectors, the data movement bottleneck will continue to be a show-
stopping problem for analog optical computing accelerators.
## References
* [1] Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon, Thomas Ahle, Daniel Simpson, Gavin E. Crooks, and Patrick J. Coles. Thermodynamic linear algebra, 2023.
* [2] James B. Aimone and Shashank Misra. Will stochastic devices play nice with others in neuromorphic hardware?: There’s more to a probabilistic system than noisy devices. IEEE Electron Devices Magazine, 1(2):50–56, 2023.
* [3] Theoni Alexoudi, George Theodore Kanellos, and Nikos Pleros. Optical ram and integrated optical memories: a survey. Light: Science & Applications, 9(1):91, 2020.
* [4] Pierre Ambs. Optical computing: A 60-year adventure. Advances in Optical Technologies, 2010:372652, May 2010.
* [5] Gene M. Amdahl. Validity of the single processor approach to achieving large scale computing capabilities. In Proceedings of the April 18-20, 1967, Spring Joint Computer Conference, AFIPS ’67 (Spring), page 483–485, New York, NY, USA, 1967. Association for Computing Machinery.
* [6] Maxwell G. Anderson, Shi-Yuan Ma, Tianyu Wang, Logan G. Wright, and Peter L. McMahon. Optical transformers, 2023.
* [7] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando GSL Brandao, David A Buell, et al. Quantum supremacy using a programmable superconducting processor. Nature, 574(7779):505–510, 2019.
* [8] Alexei Baevski, Yuhao Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations. Advances in Neural Information Processing Systems, 33:12449–12460, 2020.
* [9] M.W. Beijersbergen, L. Allen, H.E.L.O. van der Veen, and J.P. Woerdman. Astigmatic laser mode converters and transfer of orbital angular momentum. Optics Communications, 96(1):123–132, 1993.
* [10] Mark Bergen. Nvidia Is Soaring. AI Chip Rival Graphcore Can Barely Get Off the Ground, November 2023. [Online]. Available: https://www.bloomberg.com/news/newsletters/2023-05-31/nvidia-is-soaring-ai-chip-rival-graphcore-can-barely-get-off-the-ground.
* [11] Ashwin Bhandare, Maithili Bhide, Pranav Gokhale, and Rohan Chandavarkar. Applications of convolutional neural networks. International Journal of Computer Science and Information Technologies, 7(5):2206–2215, 2016.
* [12] Amirali Boroumand, Saugata Ghose, Youngsok Kim, Rachata Ausavarungnirun, Eric Shiu, Rahul Thakur, Daehyun Kim, Aki Kuusela, Allan Knies, Parthasarathy Ranganathan, and Onur Mutlu. Google workloads for consumer devices: Mitigating data movement bottlenecks. In Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS ’18, page 316–331, New York, NY, USA, 2018. Association for Computing Machinery.
* [13] S Cabrini, C Liberale, D Cojoc, A Carpentiero, M Prasciolu, S Mora, V Degiorgio, F De Angelis, and E Di Fabrizio. Axicon lens on optical fiber forming optical tweezers, made by focused ion beam milling. Microelectronic engineering, 83(4-9):804–807, 2006.
* [14] Pietro Caragiulo, Clayton Daigle, and Boris Murmann. DAC Performance Survey 1996-2020, November 2023. [Online]. Available: https://github.com/pietro-caragiulo/survey-DAC.
* [15] Webster Cash. The aragoscope: Ultra-high resolution optics at low cost. Technical report, NASA, 2014.
* [16] H John Caulfield and Shlomi Dolev. Why future supercomputing requires optics. Nature Photonics, 4(5):261–263, 2010.
* [17] H.J. Caulfield. Perspectives in optical computing. Computer, 31(2):22–25, 1998.
* [18] Yitong Chen, Maimaiti Nazhamaiti, Han Xu, Yao Meng, Tiankuang Zhou, Guangpu Li, Jingtao Fan, Qi Wei, Jiamin Wu, Fei Qiao, et al. All-analog photoelectronic chip for high-speed vision tasks. Nature, pages 1–10, 2023.
* [19] Patrick J. Coles, Collin Szczepanski, Denis Melanson, Kaelan Donatella, Antonio J. Martinez, and Faris Sbahi. Thermodynamic ai and the fluctuation frontier, 2023.
* [20] Edward Cottle, Florent Michel, Joseph Wilson, Nick New, and Iman Kundu. Optical convolutional neural networks–combining silicon photonics and fourier optics for computer vision. arXiv preprint arXiv:2103.09044, 2020.
* [21] Mauricio Delbracio, Damien Kelly, Michael S Brown, and Peyman Milanfar. Mobile computational photography: A tour. Annual Review of Vision Science, 7:571–604, 2021.
* [22] Brandon Dube and Erik Busby. Prysym, May 2022. [online] https://github.com/brandondube/prysm.
* [23] Samuel Duffield, Maxwell Aifer, Gavin Crooks, Thomas Ahle, and Patrick J. Coles. Thermodynamic matrix exponentials and thermodynamic parallelism, 2023.
* [24] R Timothy Edwards. Google/skywater and the promise of the open pdk. In Workshop on Open-Source EDA Technology, 2020.
* [25] Zhuoran Fang, Rui Chen, Albert Ryou, and Arka Majumdar. 1d self-healing beams in integrated silicon photonics. ACS Photonics, 8(7):2139–2147, 2021.
* [26] Dror G Feitelson. Optical computing: A survey for computer scientists. pages 138–143, 1988.
* [27] Jean Baptiste Joseph Fourier. The Analytical Theory of Heat. Cambridge Library Collection - Mathematics. Cambridge University Press, 1878.
* [28] R. W. Gerchberg and W. O. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik (Stuttgart), 35:237–246, 1972.
* [29] John L. Gustafson. Amdahl’s law. In David Padua, editor, Encyclopedia of Parallel Computing, pages 53–60, Boston, MA, 2011. Springer US.
* [30] Nicholas Harris. Passage—a wafer-scale, programmable photonic communication substrate. In Hot Chips 34 Symposium Talks, 2022.
* [31] Eugene Hecht. Optics, pages 465–492. Pearson Education India, 5 edition, 2017.
* [32] Chao-Wei Hsu, Yung-Feng Chen, and Yan-Kuin Su. Nanoepitaxy of gaas on a si (001) substrate using a round-hole nanopatterned sio2 mask. Nanotechnology, 23(49):495306, 2012.
* [33] Christiaan Huygens. Treatise on Light: In which are Explained the Causes of that which Occurs in Reflexion, & in Refraction. And Particularly in the Strange Refraction of Iceland Crystal. MacMillan and Company, limited, 1912.
* [34] Moonhyung Jang, Xiyuan Tang, Yong Lim, John G. Kauffman, Nan Sun, Maurits Ortmanns, and Youngcheol Chae. Design techniques for energy-efficient analog-to-digital converters. IEEE Open Journal of the Solid-State Circuits Society, 3:145–161, 2023.
* [35] Kirill P. Kalinin, George Mourgias-Alexandris, Hitesh Ballani, Natalia G. Berloff, James H. Clegg, Daniel Cletheroe, Christos Gkantsidis, Istvan Haller, Vassily Lyutsarev, Francesca Parmigiani, Lucinda Pickup, and Antony Rowstron. Analog iterative machine (aim): using light to solve quadratic optimization problems with mixed variables, 2023.
* [36] Robert W. Keyes. What makes a good computer device? Science, 230(4722):138–144, 1985.
* [37] Woo-Cheol Kim, Dong-shin Jo, Yi-Ju Roh, Ye-Dam Kim, and Seung-Tak Ryu. A 6b 28gs/s four-channel time-interleaved current-steering dac with background clock phase calibration. In 2019 Symposium on VLSI Circuits, pages C138–C139, 2019.
* [38] Laurent Koechlin, Denis Serre, and Paul Duchon. High resolution imaging with fresnel interferometric arrays: suitability for exoplanet detection. Astronomy & Astrophysics, 443(2):709–720, 2005.
* [39] S.H. Kong, D.D.L. Wijngaards, and R.F. Wolffenbuttel. Infrared micro-spectrometer based on a diffraction grating. Sensors and Actuators A: Physical, 92(1):88–95, 2001. Selected Papers for Eurosensors XIV.
* [40] Eyal Kushilevitz. Communication complexity. Volume 44 of Advances in Computers, pages 331–360. Elsevier, 1997.
* [41] Seyyed Salar Latifi Oskouei, Hossein Golestani, Matin Hashemi, and Soheil Ghiasi. Cnndroid: Gpu-accelerated execution of trained deep convolutional neural networks on android. In Proceedings of the 24th ACM international conference on Multimedia, pages 1201–1205, 2016.
* [42] Juzheng Liu, Mohsen Hassanpourghadi, and Mike Shuo-Wei Chen. A 10gs/s 8b 25fj/c-s 2850um2 two-step time-domain adc using delay-tracking pipelined-sar tdc with 500fs time step in 14nm cmos technology. In 2022 IEEE International Solid- State Circuits Conference (ISSCC), volume 65, pages 160–162, 2022.
* [43] Alexander J Macfaden, George SD Gordon, and Timothy D Wilkinson. An optical fourier transform coprocessor with direct phase determination. Scientific reports, 7(1):1–8, 2017.
* [44] David McGloin and Kishan Dholakia. Bessel beams: diffraction in a new light. Contemporary physics, 46(1):15–28, 2005.
* [45] Peter L. McMahon. The physics of optical computing, 2023.
* [46] James T Meech. Is this computing accelerator evaluation full of hot air? arXiv preprint arXiv:2304.01012, 2023.
* [47] James T. Meech, Vasileios Tsoutsouras, and Phillip Stanley-Marbell. The data movement bottleneck: Theoretical shortcomings of analog optical fourier transform and convolution computing accelerators, 2023.
* [48] Albert A Michelson and Edward W Morley. On the relative motion of the earth and of the luminiferous ether. Sidereal Messenger, vol. 6, pp. 306-310, 6:306–310, 1887.
* [49] Boris Murmann. ADC Performance Survey 1997-2023, November 2023. [Online]. Available: https://github.com/bmurmann/ADC-survey.
* [50] James New Nicholas. Reconfigurable optical processing system, 2017. U.S. Patent: US10289151B2.
* [51] NumPy. Numpy.fft.fft2, Jun 2022. [online] https://numpy.org/doc/stable/reference/generated/numpy.fft.fft2.html.
* [52] Optalysys. Optical Computing, the hardware solution for Cryptography: Fully Homomorpgic Encryption, Oct 2022.
* [53] Optalysys History, Oct 2022. [online] https://optalysys.com/our-history/.
* [54] Ben C Platt and Roland Shack. History and principles of shack-hartmann wavefront sensing. Journal of Refractive Surgery, 17(5):S573–S577, 2001.
* [55] PyTorch. Audio Resampling, Jun 2022. [online] https://pytorch.org/tutorials/beginner/audio_resampling_tutorial.html?highlight=audio%20convolution.
* [56] PyTorch. Training a Classifier, Jun 2022. [online] https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html.
* [57] Christoph Schulien Ranovus. Enabling scalable application-specific optical engines (asoe) by monolithic integration of photonics and electronics. In Hot Chips 34 Symposium Talks, 2022.
* [58] SciPy. Scipy.signal.convolve2d, Jun 2022. [online] https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.convolve2d.html.
* [59] SciPy. Scipy.signal.wiener, Jun 2022. [online] https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.wiener.html.
* [60] Mohamed Shalan and Tim Edwards. Building openlane: A 130nm openroad-based tapeout- proven flow : Invited paper. In 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD), pages 1–6, 2020.
* [61] Bhavin J Shastri, Alexander N Tait, Thomas Ferreira de Lima, Wolfram HP Pernice, Harish Bhaskaran, C David Wright, and Paul R Prucnal. Photonics for artificial intelligence and neuromorphic computing. Nature Photonics, 15(2):102–114, 2021.
* [62] A.E. Siegman. Unstable optical resonators for laser applications. Proceedings of the IEEE, 53(3):277–287, 1965.
* [63] Phillip Stanley-Marbell, Victoria Caparrós Cabezas, and Ronald P. Luijten. Pinned to the walls — impact of packaging and application properties on the memory and power walls. In IEEE/ACM International Symposium on Low Power Electronics and Design, pages 51–56, 2011.
* [64] Peter Thiel and Blake Masters. Zero to one: Notes on startups, or how to build the future. Virgin Books, 2014.
* [65] Vasileios Tsoutsouras, Orestis Kaparounakis, Bilgesu Bilgin, Chatura Samarakoon, James Meech, Jan Heck, and Phillip Stanley-Marbell. The laplace microarchitecture for tracking data uncertainty and its implementation in a risc-v processor. In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, MICRO ’21, page 1254–1269, New York, NY, USA, 2021. Association for Computing Machinery.
* [66] Rodney S Tucker. The role of optics in computing. Nature Photonics, 4(7):405–405, 2010.
* [67] Gleb Vdovin, Fred van Goor, Kangde Huan, and Leonard Doyle. Lightpipes, May 2022. [online] https://github.com/opticspy/lightpipes.
* [68] Takeshi Watanabe, Masaaki Fujii, Yoshi Watanabe, Nobuhito Toyama, and Yoshinori Iketaki. Generation of a doughnut-shaped beam using a spiral phase plate. Review of scientific instruments, 75(12):5131–5135, 2004.
* [69] J Wilson. The multiply and fourier transform unit: A micro-scale optical processor. Optalysys, 2020. [online] https://optalysys.com/wp-content/uploads/2022/04/Multiply_and_Fourier_Transform_white_paper_12_12_20.pdf.
* [70] Benjamin Wolba. Akhetonics: Shaping the Future of Optical Computers, November 2023. [Online]. Available: https://www.future-of-computing.com/akhetonics-shaping-the-future-of-optical-computers/.
* [71] Zhizhen Zhong, Mingran Yang, Jay Lang, Dirk Englund, and Manya Ghobadi. Demo: First demonstration of real-time photonic-electronic dnn acceleration on smartnics. In Proceedings of the ACM SIGCOMM 2023 Conference, ACM SIGCOMM ’23, page 1173–1175, New York, NY, USA, 2023. Association for Computing Machinery.
* [72] Zhizhen Zhong, Mingran Yang, Jay Lang, Christian Williams, Liam Kronman, Alexander Sludds, Homa Esfahanizadeh, Dirk Englund, and Manya Ghobadi. Lightning: A reconfigurable photonic-electronic smartnic for fast and energy-efficient inference. In Proceedings of the ACM SIGCOMM 2023 Conference, ACM SIGCOMM ’23, page 452–472, New York, NY, USA, 2023. Association for Computing Machinery.
## Appendix A Analog Optical Fourier Transform and Convolution Accelerators
Optical computing has been a popular research topic since the 1950s, but there
are still no commercially available optical accelerators and no large-scale
analysis of benchmark performance. Research implementations of optical
computing accelerators, and predictions of the performance of an application-
specific integrated circuit implementation, do exist [71, 72,
6]. The physics of light lends itself to fast and efficient Fourier
transform and convolution operations [50, 43]: Optical Fourier transform and
convolution accelerators use diffraction, the interference of Huygens wavelets
of light to perform Fourier transform operations [33]. This is in contrast to
digital electronic processors which break the high-level Fourier transform
down into individual additions, multiplications, and other component
operations, compute the results, and then recombine the results to calculate
the Fourier transform [17]. Having the light perform the computation is faster
and more efficient than using digital electronics if we do not consider the
time required for data movement [43].
Despite these benefits, researchers in academic institutions and industry have
struggled for 70 years to implement practically useful optical accelerators
[16, 4, 69]. Startup companies repeatedly pivot to applying optical
accelerators to new problems. They do this because the optical accelerator
does not provide a large enough improvement in a metric that users care about
for the target application [20, 69, 53, 52, 46]. As of today, there is still
no commercially available computer architecture that includes an optical
accelerator, despite the growing popularity of optical interconnects [30, 57].
### A.1 How Does an Analog Optical Fourier Transform and Convolution
Accelerator Work?
Figure 5 shows the typical 4$f$ optical setup for Fourier transform and
convolution operations. Let $\mathcal{F}$ be the Fourier transform operator
and $\mathcal{F}^{-1}$ be the inverse Fourier transform operator. Let $A$ and
$B$ be two-dimensional arrays and $\mathcal{F}^{-1}{C}$ be the convolution of
$A$ and $B$ where
$C=\mathcal{F}(A\circledast B)=\mathcal{F}(A)\cdot\mathcal{F}(B).$ (1)
Figure 5: The 4$f$ setup for optical convolution where $A$ and $B$ are
programmable apertures and $C$ is a camera detector. Each optical component is
spaced a distance $f$ from the previous one where $f$ is the focal length of
the convex lenses [4].
Equation 1 shows that an analog optical accelerator can perform the
convolution operation by taking the Fourier transform of both input datasets,
calculating their dot product, and finally, inverse Fourier transform the
result. The optical setup cannot perform the final inverse Fourier transform
step. Instead, the digital electronic processor interfacing with the optical
setup performs this step. Figure 5 shows how the lenses in the setup Fourier
transform the input data programmed into the aperture (spatial light
modulator). The programmable aperture encodes information into the light at
each of its pixels by manipulating the phase of the light between $0$ and
$2\pi$ according to the programmed digital value for that particular pixel. An
analog optical accelerator that uses a camera to transduce the output light
pattern to electronic signals can only calculate the magnitude component of
the right-hand side of Equation 1 and then the computer hardware must read the
detector pixels and use a digital inverse Fourier transform to calculate the
final result of Equation 1. The light can only compute the Fourier transform
when the condition (that $D\gg a$ and that $D\gg a^{2}/\lambda$, where $D$ is
the distance between the programmable aperture and the camera detector, $a$ is
the width of the programmable aperture, and $\lambda$ is the wavelength of the
light [31]) for Fraunhofer diffraction is met [31].
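The following minimal NumPy sketch emulates, entirely in software, the computation of Equation 1 that the 4$f$ setup performs optically, together with the inverse transform left to the host processor; the array sizes and contents are illustrative.

```python
# A minimal NumPy sketch of the math in Equation 1: the optical setup would
# produce C = F(A) * F(B) (element-wise), and the host digital processor
# applies the inverse Fourier transform. This emulates the accelerator
# entirely in software; array contents are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((256, 256))
B = rng.random((256, 256))

C = np.fft.fft2(A) * np.fft.fft2(B)          # what the optics would compute
conv = np.fft.ifft2(C).real                  # the step left to the digital host

# Sanity check against a direct circular convolution of one output pixel.
direct_00 = sum(A[i, j] * B[-i % 256, -j % 256]
                for i in range(256) for j in range(256))
assert np.isclose(conv[0, 0], direct_00)
```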
### A.2 Analog Optical Fourier Transform and Convolution Accelerator Computer
Architecture
Figure 6 shows the changes required at each abstraction layer of a software
and hardware stack required to use the physics of light to accelerate a user-
specified high-level computational problem (the Fourier transform). A computer
systems architect has to make changes at every abstraction level in the
software and hardware stack to take advantage of the physics of light to
perform computation. Required changes include a new software application
programming interface to load data into the accelerator, processor
architecture changes to allow store word and load word instructions to access
the optical accelerator and digital electronic processor memory, and the close
integration of optical hardware with digital electronic hardware that uses
incompatible process technologies. This is just as generations of engineers
and scientists designed the modern digital electronic computer stack to
realize the full potential of semiconductor transistors in digital electronic
processors. Row one of Figure 6 is the transition from the abstract idea of
the Fourier transform through the abstraction layers to the digital electronic
hardware that we wish to use to perform the computation. Row two of Figure 6
requires changes at every level of the software and hardware stack. If we
tried to use the physics of light to replace panel ➄ of row one, the
accelerator would not be able to use the Fourier transform properties of light
and we would not see performance increases. Row two of Figure 6 shows the
missing implementations that have made such optical accelerators unnecessarily
inefficient due to a lack of computer systems knowledge in the optical
computing community and vice-versa.
The optical accelerator takes advantage of the physics of light to skip all of
the component multiplication, division, and addition instructions shown in
Figure 6, row one, block ➂. Instead, we load the data into the optical
accelerator and the physics of light performs the Fourier transform
computation in one analog step. The optical accelerator performs the transform
using physics shown in Figure 6, row two, panel ➄. The optical field at point
$P$ is the superposition of the optical field at each elemental area $dS$ of
the total area, $S$, of the aperture. Every single point in the optical
accelerator output contains information from every single point in the optical
accelerator aperture input. Each point in the wavefront at the aperture
produces Huygens wavelets and the optical field beyond the aperture is the
superposition of all of the wavelets. The similarities to the equation in
block ➀ of both rows of Figure 6 are that the sum symbols use the value of
each pixel in the input once per output pixel to compute the pixel-by-pixel
result of the Fourier transform. This skipping of steps provides an
opportunity for the acceleration of Fourier transform and convolution
operations provided that the cost of moving data into and out of the optical
accelerator does not outweigh the speedup we gain by using the Fourier
transform and convolution properties of light. Unfortunately, Section 2 shows
that the cost of moving data into and out of the optical accelerator will
always be the bottleneck in analog optical Fourier transform accelerator
designs. Appendix C.3 shows that even the best-case speedup we can gain by
using the analog Fourier transform and convolution properties of light is
often small.
Figure 6: The steps required to perform a Fourier transform on data using an
optical accelerator instead of a digital electronic processor diverge at the
first abstraction level below the mathematical equation for the Fourier
transform. The optical accelerator requires changes at every level of the
software and hardware stack to use Maxwell’s equations for electromagnetic
waves to perform the Fourier transform. This figure captures the idea that
inspired 70 years of research into optical accelerators [31, 27]. The lumped
circuit abstraction shown in row one confines the resistance, capacitance, and
inductance of transistors within idealized circuit components. This allows the
designer to ignore the effects of electromagnetic waves. In contrast, row two
directly uses the physics of electromagnetic waves to perform the computation.
## Appendix B Optical Computing Accelerator Prototype Design and Construction
(a) The minimum architecture for an optical Fourier transform and convolution
computing accelerator. This architecture uses slow communication interfaces to
move camera data into the processor and data from the processor into the
spatial light modulator. These slow communications interfaces were designed
for updating displays at 60 $\mathrm{Hz}$ and therefore bottleneck our
accelerator.
(b) A side view of the prototype optical accelerator on an optical breadboard.
The variable aperture is not programmable, only the spatial light modulator is
programmable and controls the input data for the Fourier transform
computation. (c) A top view with components from left to right being the
laser, polarizer, spatial light modulator, a lens to bring the far-field
diffraction pattern closer to the laser, a second polarizer crossed with the
first and the Raspberry Pi 4 mounted on the Raspberry Pi high-quality camera
module.
Figure 7: The optical accelerator architecture diagram and the hardware
prototype that we built to analyze the data-movement bottleneck. The Raspberry
Pi 4 is an interface that we remotely connect to from a workstation computer
using a secure shell and does not perform any computation other than
programming the spatial light modulator and reading the camera.
Figure 7a shows a block diagram of the typical interface between a digital
electronic processor and an optical accelerator built using off-the-shelf
optical hardware modules. Typically these off-the-shelf optical hardware
modules use a communication interface to allow a digital electronic processor
to control the optical module as a peripheral input/output device. Figure 7a
shows the local memory and digital-to-analog converter inside a spatial light
modulator that allows an external digital electronic processor to program the
light-modulating pixels over the communications interface. The camera provides
a similar interface to allow the digital electronic processor to read values
from the camera pixels. It uses an analog-to-digital converter to convert the
analog signal from the camera detector pixels to a digital signal for the
processor to read from the local device memory over the communication
interface. Spatial light modulators and digital micro-mirror devices are
essentially a set of memory locations spatially arranged in large two-
dimensional arrays. Moving data from a processor into these memory locations
and back costs time and energy. This time and energy spent moving data
outweigh the speed and efficiency benefits gained by using the properties of
light to perform computation.
Figures 7b and 7c show our prototype implementation of a Fourier transform
accelerator. We included the lenses, polarizers, and mechanical variable
aperture to improve the resolution of the hardware prototype but they are not
a fundamental requirement for performing Fourier transforms and convolutions
using light. We conduct experiments to show the data-movement bottleneck using
our hardware prototype.
### B.1 Execution Time Experiment Methodology
We benchmark Python code to perform a $1024\times 768$ pixel two-dimensional
Fourier transform against the optical hardware setup performing the same
calculation. The hardware setup is an end-to-end system controlled by a
Raspberry Pi 4 that runs Python scripts to activate the optical hardware. For
this reason, profiling the Python code quantifies the digital electronic
processor, data movement, and analog optical accelerator computation time.
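For reference, a minimal sketch of the software baseline measurement is shown below; the timing harness is our choice and not necessarily the one used on the Raspberry Pi 4.

```python
# A minimal sketch of the software baseline: time a 1024x768 two-dimensional
# FFT with NumPy. The timing harness (timeit, best of 100 repeats) is our
# choice, not necessarily the one used in the original experiment.
import numpy as np
import timeit

data = np.random.random((1024, 768))
seconds = min(timeit.repeat(lambda: np.fft.fft2(data), number=1, repeat=100))
print(f"2D FFT of 1024x768 array: {seconds * 1e3:.2f} ms")
```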
Figure 8: The hardware Fourier transform (left) is $23.8\times$ slower than
the NumPy software fast Fourier transform (right). The hardware Fourier
transform takes negligible time compared to moving data into and out of the
optical components. The total time required to run the software and hardware
Fourier transform is 0.219 s and 5.209 s respectively.
### B.2 Execution Time Experiment Results
Figure 8 shows that our off-the-shelf hardware prototype optical accelerator
is $23.8\times$ slower than a software fast Fourier transform of the same
dimensions. We used the same Raspberry Pi 4 to benchmark the software fast
Fourier transform and control the optical components (with no effort to
optimize the code) alone to perform the Fourier transform. As the Fourier
transform computation happens at the speed of light, the only fixed
computation that prevents infinite speedup (from Amdahl’s law) is the time
required to produce the input data, load it into the spatial light modulator,
and then read out the output from the camera detector. The fast Fourier
transform benchmark has the second-greatest theoretical speedup of all the
applications in Table 1 when using an optical accelerator. Therefore, none of the
applications in Table 1 will see a speedup when running on our prototype
optical accelerator. Figure 8 shows that the majority of the computation time
in the prototype optical accelerator is spent on data movement (programming
the spatial light modulator and imaging the diffraction pattern using a
camera). Boroumand et al. [12] state that 62.7 % of energy is spent on moving
data in modern computing systems. In our optical computing accelerator
prototype 99.599 % of the time is spent moving data between the digital
electronic processor and the analog optical accelerator. Cameras that can
capture images significantly faster than the camera we used in our experiment
exist [21]. Nevertheless, the Fourier transform computation happens at the
speed of light, so the data movement bottleneck will always dominate the
computation time required by an optical Fourier transform and convolution
computing accelerator.
## Appendix C Convolution and Fourier Transform Benchmarking Case Study
### C.1 Convolution and Fourier Transform Application Benchmarking
Methodology
We profiled 27 benchmark applications (which we describe in Appendix C.3) to
estimate the maximum theoretical speedup that an optical Fourier transform and
convolution accelerator could provide for each application. We provide a short
description of each benchmark that we profiled on a 2.8 GHz Intel Core i7 CPU
with 16 GB of 2133 MHz LPDDR3 RAM. All benchmarks are Python 3.8.9 code
applications, not developed by the authors, which use well-optimized Python
libraries, and are available online. We used cProfile to profile each
benchmark using Python 3.8.9 on MacOS Monterey Version 12.0.1. We profiled
each benchmark assuming that the time taken by functions with Fourier
transform or convolution-related names was negligible. We used the results to
estimate the speedup gained by offloading the optical Fourier transform and
convolution functions to an accelerator that completes the operation in
negligible time. This assumption will provide results showing the best-case
speedup for an optical Fourier transform and convolution accelerator.
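A minimal sketch of this profiling approach is shown below; the stand-in benchmark and the matching of functions by name are simplifications of the methodology described above.

```python
# A minimal sketch of the profiling approach: run a benchmark under cProfile,
# sum the time spent inside functions whose names mention "fft" or "conv",
# and treat that as the accelerable fraction. The benchmark is a stand-in and
# name matching is a simplification of the actual methodology.
import cProfile
import pstats
import numpy as np

def benchmark():
    x = np.random.random((1024, 768))
    for _ in range(10):
        x = np.abs(np.fft.ifft2(np.fft.fft2(x) * 0.5))

profiler = cProfile.Profile()
profiler.runcall(benchmark)
stats = pstats.Stats(profiler)

total = stats.total_tt
accelerable = sum(tt for (_, _, name), (_, _, tt, _, _) in stats.stats.items()
                  if "fft" in name.lower() or "conv" in name.lower())
print(f"FFT/Conv fraction: {100 * accelerable / total:.1f} %")
```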
### C.2 Convolution and Fourier Transform Application Benchmarking Results:
Amdahl’s Law
We benchmarked the applications described in Appendix C.3 using Python and
cProfile and applied Amdahl’s law to the results [29, 5]. We benchmarked each
application one hundred times to take into account any variation. Let $P$ be
the degree of acceleration a computer system applies to an application,
$f_{\mathrm{fixed}}$ be the portion of the program we cannot accelerate, and
$f_{\mathrm{accelerate}}$ be the portion of the program that we can
accelerate, then Amdahl’s law states that the speedup $S$ we can achieve is
$S=\frac{1}{f_{\mathrm{fixed}}+\frac{f_{\mathrm{accelerate}}}{P}}.$ (2)
Using an optical accelerator to accelerate $f_{\mathrm{accelerate}}$ to the
point that $\frac{f_{\mathrm{accelerate}}}{P}\ll f_{\mathrm{fixed}}$ produces
$S\approx\frac{1}{f_{\mathrm{fixed}}}.$ (3)
$S$ is the best case speedup we can achieve by accelerating the Fourier
transform and convolution operations in a program. Figure 9 shows the
potential speedup that we could get if we accelerated all Fourier transform
and convolution operations in the benchmarks to the point where they were
negligible. In practice, the speedup achieved by a real optical accelerator
would be smaller because all optical accelerators require time for a digital
electronic processor to write to the programmable aperture and read from the
camera detector. Our benchmarking study has the unrealistic assumption that
this writing and reading takes zero time. Table 1 includes the names and
descriptions of the benchmarks included in Figure 9.
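A minimal sketch of applying Eqs. (2) and (3) to profiled fractions, assuming an effectively infinite acceleration factor $P$, is shown below; the example fractions are taken from the FFT/Conv fraction column of Table 1 and approximately reproduce the corresponding speedup entries.

```python
def amdahl_speedup(f_accelerate, P=float("inf")):
    """End-to-end speedup from Eq. (2); P -> infinity gives the bound of Eq. (3)."""
    f_fixed = 1.0 - f_accelerate
    return 1.0 / (f_fixed + f_accelerate / P)

# Fractions taken from two rows of Table 1 (FFT/Conv fraction of total time).
for name, frac in [("Fourier Transform", 0.9779), ("Wiener Filter", 0.6751)]:
    print(f"{name}: at most {amdahl_speedup(frac):.2f}x end-to-end speedup")
```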
Figure 9: The potential end-to-end speedup for each application in Table 1
according to Amdahl’s law. The speedups are small unless almost 100 % of end-
to-end benchmark execution time is spent on Fourier transforms or
convolutions. The accelerator must speed up close to 100% of the application
code to produce a large end-to-end speedup. All the box and whisker plots that
show the run-to-run variation in the benchmark applications show small
variation. Box plot definitions: center line, median; box limits, upper and
lower quartiles; whiskers, 1.5x interquartile range; points, outliers.
Table 1: The maximum end-to-end speedup achievable by an analog optical Fourier
transform and convolution computing accelerator for a range of 27 different
benchmark applications according to Amdahl’s law. We ran each benchmark one
hundred times and calculated the average for each column in the table. The
average speedup is $9.39\times$, close to the $10\times$ requirement (Section
5). The average is heavily skewed by the high speedup values of $159.41\times$
and $45.32\times$ for the convolution and Fourier transform benchmarks. The
median speedup is $1.94\times$, which is less than one-fifth of the $10\times$
requirement.
Application | FFT/Conv Time (s) | Total Time (s) | FFT/Conv Fraction (%) | End-to-End Speed Up ($\times$) | Lines
---|---|---|---|---|---
Convolution [58] | 0.158 | 0.159 | 99.37 | 159.41 | 1
Fourier Transform [51] | 0.912 | 0.933 | 97.79 | 45.32 | 1
Wiener Filter [59] | 1.164 | 1.724 | 67.51 | 3.08 | 1
Self-healing Airy beam [67] | 51.718 | 81.778 | 63.24 | 2.72 | 18
Young’s Experiment [67] | 0.0671 | 0.109 | 61.70 | 2.61 | 12
From Poisson Spot to a Non-Diffractive Bessel Beam [67] | 2.817 | 4.593 | 61.33 | 2.59 | 20
Generation of a Bessel Beam With a Lens and an Annular Slit [67] | 3.146 | 5.173 | 60.82 | 2.55 | 22
Generation of a Bessel Beam With an Axicon [67] | 2.839 | 4.677 | 60.71 | 2.55 | 18
Multi- holes and slits [67] | 0.200 | 0.328 | 60.70 | 2.55 | 21
Diffraction From a Circular Aperture [67] | 2.193 | 3.615 | 60.65 | 2.54 | 14
Shack Hartmann Sensor [67] | 2.142 | 4.051 | 52.88 | 2.12 | 25
Spot of Poisson [67] | 1.930 | 3.983 | 48.44 | 1.94 | 12
Fresnel Zone Plate [67] | 0.665 | 1.405 | 47.34 | 1.90 | 24
Unstable Laser Resonator [67] | 0.0645 | 0.163 | 39.43 | 1.65 | 41
Interference of a Doughnut Laser Beam: Collinear Beams [67] | 0.0604 | 0.198 | 30.54 | 1.44 | 16
Michelson Interferometer [67] | 0.0139 | 0.0472 | 29.45 | 1.42 | 25
Phase Recovery [67] | 0.296 | 1.580 | 18.75 | 1.23 | 16
Transformation of a Fundamental Gauss Mode into a Doughnut Mode With a Spiral Phase Plate [67] | 0.296 | 1.230 | 18.75 | 1.23 | 13
Transformation of High Order Gauss Modes From Hermite to Laguerre [67] | 0.0386 | 0.211 | 18.29 | 1.22 | 42
Interference of a Doughnut Laser Beam: Tilted Beams [67] | 0.00506 | 0.0692 | 7.31 | 1.08 | 15
Double-Slit Experiment [22] | 0.0519 | 0.0929 | 55.91 | 2.27 | 12
Your First Diffraction Model [22] | 0.0787 | 0.164 | 47.80 | 1.92 | 20
Image Simulation [22] | 1.882 | 17.195 | 10.95 | 1.12 | 45
Convolutional Neural Network Inference [56] | 0.263 | 0.416 | 63.17 | 2.71 | 1
Convolutional Neural Network Training [56] | 8.428 | 78.936 | 10.68 | 1.12 | 16
Audio Resampling Transforms [55] | 0.0513 | 0.135 | 37.94 | 1.61 | 22
Pre-Trained Model Wave2Vec2 Speech Recognition Inference [8] | 0.179 | 0.519 | 34.53 | 1.53 | 4
### C.3 Convolution and Fourier Transform Benchmark Application Descriptions
#### Convolution (Application 0):
The SciPy implementation of convolution run over pre-generated $100\times 100$
NumPy arrays.
#### Fourier Transform (Application 1):
The NumPy fast Fourier transform implementation run over pre-generated
$5000\times 5000$ NumPy arrays.
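As an illustration only (the actual benchmark scripts are the publicly available ones cited in Table 1), the two operations being timed might look like the following, assuming SciPy's `convolve` and NumPy's `fft2`:

```python
import time
import numpy as np
from scipy.signal import convolve

# Application 0: SciPy convolution over pre-generated 100x100 arrays.
a, b = np.random.rand(100, 100), np.random.rand(100, 100)
t0 = time.perf_counter()
convolve(a, b)
print(f"convolution: {time.perf_counter() - t0:.3f} s")

# Application 1: NumPy fast Fourier transform over a pre-generated 5000x5000 array.
x = np.random.rand(5000, 5000)
t0 = time.perf_counter()
np.fft.fft2(x)
print(f"fft: {time.perf_counter() - t0:.3f} s")
```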
#### Wiener Filter (Application 2):
The SciPy implementation of the Wiener Filter run over a pre-generated
$4000\times 4000$ NumPy array.
#### Self-healing Airy Beam (Application 3):
The LightPipes implementation of a self-healing Airy diffraction simulation.
Airy beams have applications including laser micromachining and particle and
cell micro manipulation [25].
#### Young’s Experiment (Application 4):
The LightPipes implementation of a simulation of Young’s double slit
experiment. In the experiment, a monochromatic plane wave illuminates two
narrow slits, producing a diffraction pattern on a screen placed in the far
field that illustrates the wave properties of light. The diffraction pattern
is the Fourier transform of the slit function. It is possible to
construct arbitrary far-field diffraction patterns by constructing the
corresponding slit.
#### From Poisson Spot to a Non-Diffractive Bessel Beam (Application 5):
The LightPipes implementation of a simulation showing the proportionality of
the width of a Bessel beam to the distance $z$ from the Huygens light point
source. Bessel beams have applications in encryption, optical atom trapping,
and optical tweezers [44].
#### Generation of a Bessel Beam with a Lens and an Annular Slit (Application
6):
The LightPipes implementation of a simulation of a Bessel beam. Bessel beams
have applications in encryption, optical trapping of atoms, and optical
tweezers [44].
#### Generation of a Bessel Beam with an Axicon (Application 7):
Generating a Bessel beam with an annular slit is inefficient because most of
the laser beam is unused. This benchmark is the LightPipes implementation of
generating a Bessel beam with an axicon lens, which uses more of the total
optical beam power than the annular slit method and is therefore more
efficient [13].
#### Multi- Holes and Slits (Application 8):
The LightPipes implementation of a simulation of an extension of Young’s
experiment where multiple slits or holes are present. Changing the spacing and
geometry of the holes allows the user to construct apertures that produce
arbitrary diffraction patterns and to simulate the result. A multi-slit
diffraction grating has applications as a spectrometer
[39].
#### Diffraction from a Circular Aperture (Application 9):
The LightPipes implementation of a simulation of an extension of Young’s slit
experiment where the aperture is circular instead of a slit. Diffraction
through circular holes is used for simulating masks in epitaxy for
semiconductors [32].
#### Shack Hartmann Sensor (Application 10):
The LightPipes implementation of a Shack Hartmann sensor. The Shack-Hartmann
sensor is an array of lenses used to measure the phase distribution of a
wavefront. The US Air Force used such sensors to improve images of satellites
taken from Earth [54].
#### Spot of Poisson (Application 11):
The LightPipes implementation of a simulation of a laser beam illuminating a
disk. The result of the experiment is a bright spot of light directly behind
the round disk. Poisson predicted the existence of the spot from the wave
theory of light, and Arago later observed it experimentally. This was one of
the first real-world demonstrations of the wave-like nature of light.
The Arago spot has applications in the design of telescopes [15].
#### Fresnel Zone Plate (Application 12):
The LightPipes implementation of the simulation of a Fresnel zone plate. The
Fresnel zone plate acts as a focusing lens for a plane wave. The Fresnel zone
plate has applications in exoplanet detection [38].
#### Unstable Laser Resonator (Application 13):
The LightPipes implementation of the simulation of an unstable laser
resonator. Unstable laser resonators build energy to create laser beams [62].
#### Interference of a Doughnut Laser Beam Collinear Beams (Application 14):
The LightPipes doughnut laser with collinear beams interference simulation
implementation.
#### Michelson Interferometer (Application 15):
The LightPipes implementation of a Michelson interferometer. The Michelson
interferometer has applications in spectrometers, measuring the diameter of
stars, and detecting gravitational waves [48].
#### Phase Recovery (Application 16):
The LightPipes implementation of the Gerchberg Saxton phase recovery
algorithm. Phase recovery is the act of recovering electric field phase
information that produces a diffraction pattern using only the light intensity
of the diffraction pattern. It iteratively performs forward and backward
Fourier transforms and applies the constraints of the target intensity image
until the algorithm converges to the phase of the electric field that produced
the original image [28]. Phase recovery has applications in holography,
electron microscopy, X-ray crystallography, and characterizing telescopes.
#### Transformation of a Fundamental Gauss Mode into a Doughnut Mode with a
Spiral Phase Plate (Application 17):
The LightPipes implementation of a spiral phase plate simulation to produce a
doughnut-shaped beam with applications in super-resolution microscopy, optical
tweezers, and cell capture [68].
#### Transformation of High Order Gauss Modes From Hermite to Laguerre
(Application 18):
The LightPipes implementation of a simulation that transforms Hermite Gauss
into Laguerre Gauss laser modes using two cylindrical lenses. Laguerre Gauss
laser modes have applications in optical communication, micromanipulation, and
quantum information [9].
#### Interference of a Doughnut Laser Beam Tilted Beams (Application 19):
The LightPipes doughnut laser with tilted beams interference simulation
implementation.
#### Double-Slit Experiment (Application 20):
The Prysm implementation of the simulation of Young’s Experiment. The speedup
value is similar to that of the LightPipes implementation.
#### Your First Diffraction Model (Application 21):
The Prysm implementation of diffraction through a circular aperture. The
speedup value is similar to that of the LightPipes implementation.
#### Image Simulation (Application 22):
The Prysm implementation of an end-to-end image simulation of a Siemens star
including all optical and electrical noise.
#### Convolutional Neural Network Inference (Application 23):
A PyTorch tutorial implementation of inference over a convolutional neural
network for classifying images from the CIFAR10 dataset. We benchmarked the
training and inference separately as they have significantly different
potential for acceleration. Convolutional neural networks have a
wide range of applications [11].
#### Convolutional Neural Network Training (Application 24):
A PyTorch tutorial implementation of training a convolutional neural network
for classifying images from the CIFAR10 dataset. The speedup achieved for the
training is less than half of the speedup achieved for the inference.
#### Audio Resampling Transforms (Application 25):
A PyTorch tutorial implementation of audio resampling using convolution. These
transforms are used to resample audio before passing it through larger neural
networks for training and inference.
#### Pre-Trained Model Wave2Vec2 Speech Recognition Inference (Application
26):
A PyTorch implementation of speech recognition inference with the pre-trained
Wave2Vec2 model.
# LiqD: A Dynamic Liquid Level Detection Model under Tricky Small Containers
1st Yukun Ma School of Electronic and Information Engineering
Beijing Jiaotong University
Beijing, China
<EMAIL_ADDRESS>2nd Zikun Mao School of Electronic and Information
Engineering
Beijing Jiaotong University
Beijing, China
<EMAIL_ADDRESS>
###### Abstract
In daily life and industrial production, it is crucial to accurately detect
changes in liquid level in containers. Traditional contact measurement methods
have some limitations, while emerging non-contact image processing technology
shows good application prospects. This paper proposes a container dynamic
liquid level detection model based on U²-Net. This model uses the SAM model to
generate an initial data set, and then evaluates and filters out high-quality
pseudo-label images through the SemiReward framework to build an exclusive
data set. The model uses U²-Net to extract mask images of containers from the
data set, and uses morphological processing to compensate for mask defects.
Subsequently, the model calculates the grayscale difference between adjacent
video frame images at the same position, segments the liquid level change area
by setting a difference threshold, and finally uses a lightweight neural
network to classify the liquid level state. This approach not only mitigates
the impact of intricate surroundings, but also reduces the demand for training
data, showing strong robustness and versatility. A large number of
experimental results show that the proposed model can effectively detect the
dynamic liquid level changes of the liquid in the container, providing a novel
and efficient solution for related fields.
###### Index Terms:
Detection, data augmentation, semi-supervised learning, image processing.
## I Introduction
Liquid level detection technology for containers plays a vital role in daily
life. It not only prevents liquid overflow in home kitchens and ensures
cooking safety, but also monitors the amount of liquid in storage tanks and
reactors in industry to keep production processes smooth and safe. In
construction, liquid level detection is used to monitor liquid levels in
tunnels and underground facilities to prevent flooding and structural damage.
Such scenarios are widespread.
To accurately monitor liquid levels, traditional contact measurement methods
like float gauges and pressure transmitters[1] offer high measurement accuracy
but have certain limitations. These methods require the measuring element to
be directly immersed in the liquid, making them unsuitable for harsh
environments involving highly corrosive substances, extreme temperatures, or
high pressures.
In recent years, some non-contact remote measurement technologies have rapidly
advanced, such as liquid level measurement systems based on radar and sonar
principles[2]. These techniques eliminate the need for physical contact with
the liquid being measured, offer a wide measurement range, and adapt well to
different environments. However, they also face challenges
like relatively high system costs and strict requirements on environmental
conditions (e.g., temperature, pressure).
With continuous advancements in computer vision and image processing, image-
based liquid level detection methods are increasingly emerging and attracting
widespread industry attention. Numerous liquid level detection methods based
on traditional image algorithms have been proposed, using image capture and
processing to obtain the liquid level through spatial mathematical
relationships[3].
These methods achieved convincing results over a decade ago. However, the
application of deep learning in image processing has ushered in a new era. For
liquid level detection in large scenes like lakes and reservoirs, substantial
advancements have been achieved[4][5][6][7][8]. For example, Fang et al.[6]
used YOLOv4 to accurately locate liquid gauge scale characters, then
DeepLabv3+ to precisely segment the junction area between the gauge and liquid
body, and finally extracted liquid levels and calculated actual values using
image processing techniques. Sun et al.[4] achieved high-precision, real-time
liquid level monitoring through steps like image preprocessing, edge
detection, affine transformation correction, keyword positioning, and edge
projection. Xia et al.[5] improved the superpixel and graph cutting algorithm,
then performed liquid level detection based on the semantic segmentation
network technology of U-net. Zhang et al.[8] proposed a liquid level height
difference prediction method based on digital image processing by using a
digital camera to capture a top view of the container, then performing image
preprocessing, edge detection, and ellipse fitting to calculate the liquid
level and distance from the container top.
These methods have improved accuracy, generalization ability, and
environmental adaptability but still face challenges and bottlenecks. Firstly,
existing research mainly focuses on large liquid bodies, with little
accumulated work on small-container scenarios. Secondly, most
algorithms have high training data requirements, resulting in poor
generalization capabilities when applied to different environments.
Furthermore, complex environments introduce interference like lighting and
occlusion, affecting detection accuracy. Mitigating the influence of
environmental factors remains a critical challenge. Finally, for dynamically
changing liquid levels, accurate and stable detection is challenging due to
factors like fluctuations, and existing methods lack modeling and analysis of
dynamic processes. All these challenges await further breakthroughs and
research.
Based on the above analysis, we proposed a new visual processing method for
dynamic liquid level changes in containers, greatly addressing issues of high
sample requirements, complex environmental influences, and limited detection
scene sizes. Our main contributions are threefold:
* •
We construct a dedicated dataset using the SAM model and evaluate it through
the SemiReward framework to obtain a standardized and specialized dataset.
* •
By employing U²-Net for salient object extraction, we obtain the container
mask, focusing the analysis solely on the liquid surface within the container
image. This not only greatly mitigates interference from external environments
but also shifts the detection emphasis toward subtle changes in small-scale
features within the image.
* •
We adopt image morphological methods to significantly improve the quality of
suboptimal masks, resulting in more distinct and smooth boundaries.
## II Related Works
### II-A SAM Model
SAM [9] represents an innovative deep learning architecture designed to
efficiently segment arbitrary image content through a prompt-based
segmentation task. This model can generate precise segmentation masks in real-
time, without the need for specific task training, by utilizing flexible
prompts such as points, bounding boxes, and text. SAM relies on a large-scale
dataset named SA-1B, which includes over 1.1 billion auto-generated masks,
ensuring the model’s generalization across diverse scenes. The zero-shot
transfer learning capabilities of SAM have demonstrated remarkable performance
across multiple downstream tasks, marking a significant breakthrough in the
field of image segmentation.
It is noteworthy that SAM has learned a universal ability for object
recognition and segmentation, so its performance is not confined to specific
object categories. Whether dealing with a single target or
multiple targets of the same or different categories, SAM accurately segments
them. This versatility positions SAM for a wide range of applications, such as
interactive image editing, general object segmentation, and visual question
answering, among others. Beyond segmentation quality, another major advantage
of the SAM model is its computational efficiency. With no need for time-
consuming task-specific fine-tuning, SAM can respond to user prompts in real-
time, rapidly producing segmentation outcomes, thereby facilitating downstream
visual tasks and offering an excellent user interaction experience.
SAM’s image segmentation capabilities and prompt adaptability guide our
container mask creation, creating a foundational dataset for model training.
Its scene generalization lets us tackle various container types, broadening
our method’s scope. While SAM presents real-time interaction, we use it for
data creation, not full liquid level detection. To improve dataset
reliability, we also integrate SemiReward for mask quality refinement.
### II-B U²-Net
The U²-Net[10] architecture is a deep learning framework specifically tailored
for salient object detection (SOD) tasks. Its core innovation lies in the
unique nested U-shaped structure, which effectively captures rich contextual
information at different scales. The architecture utilizes Residual U-blocks
(RSUs) at each stage to extract multi-scale features while maintaining high-
resolution feature maps. The clever design of the RSUs enhances the network’s
depth without significantly increasing computational costs, allowing U²-Net to
be trained from scratch without relying on pre-trained image classification
backbones. This design not only improves SOD performance but also
computational efficiency, providing a novel and efficient solution for the SOD
domain. Unlike traditional methods that depend on pre-trained backbones, U²-
Net’s ability to train from zero showcases performance comparable to or even
better than the current state-of-the-art. The training loss $L$ from [10]
is defined as:
$L=\sum_{m=1}^{M}w_{side}^{(m)}l_{side}^{(m)}+w_{fuse}l_{fuse}$ (1)
where $M$ is the number of side-output saliency maps, $w_{side}^{(m)}$ is the
weight of the $m$th side-output loss, $l_{side}^{(m)}$ is the loss of the
$m$th side-output saliency map, $w_{fuse}$ is the weight of the fusion output
loss, and $l_{fuse}$ is the loss of the final fusion output saliency map. Each
side-output loss $l_{side}^{(m)}$ is computed using the binary cross-entropy
loss from [10] as shown below:
$l=-\sum_{(r,c)}^{(H,W)}\left[P_{G(r,c)}\log P_{S(r,c)}+\left(1-P_{G(r,c)}\right)\log\left(1-P_{S(r,c)}\right)\right]$ (2)
where $(r,c)$ are the pixel coordinates, $(H,W)$ is the image size in height
and width, $P_{G}(r,c)$ denotes the pixel values of the ground truth, and
$P_{S}(r,c)$ denotes the pixel values of the predicted saliency probability
map. The training process minimizes the overall loss $L$ of (1). At test time,
the fused output saliency map is used as the final saliency map.
U²-Net’s hierarchical U-shaped architecture and RSUs inform our approach,
allowing us to enhance container segmentation precision without increasing
computational demands. Its train-from-zero approach enables us to create
models tailored for specific container data, deviating from U²-Net’s general
SOD focus. We’ve adapted U²-Net for container segmentation by adjusting
training data, loss functions, and adding morphological processing to better
suit liquid level detection tasks.
### II-C Bottleneck in Hand-crafted Design
#### II-C1 Morphological Compensation
In the process of image analysis, defective images are commonly encountered.
To address this issue, Vizilter et al.[11] employed morphological image
analysis to solve the problems of change detection and shape matching in
images, which is similar to the idea of using morphological operations for
image restoration as described by Raid et al.[12]. By adopting this method,
defects can be compensated for by filling holes and connecting broken regions
in the image.
Firstly, a structuring element needs to be defined, which specifies the shape
and size of the morphological operation. In this study, we chose to use an
elliptical structuring element with a size of $5\times 5$ pixels. Morphological
closing operation, which consists of dilation followed by erosion, is then
applied to the current binary image to fill small holes and connect broken
regions[1]. Based on this theory, the following equation from[12] can be
derived:
$A\oplus B=\left\\{(x,y)\mid\left(B\right)_{xy}\cap A\neq\emptyset\right\\}$ (3)
where $(B)_{xy}$ denotes the translation of the structuring element B such
that its origin is at $(x,y)$. The output pixel $(x,y)$ is set to 1 if the
intersection of the translated $B$ with the set A is non-empty, otherwise it
is set to 0.
Erosion can “shrink” the target region, essentially causing the image
boundaries to contract. It can be used to eliminate small, insignificant
targets. The equation for erosion from[12] is expressed as:
$A\ominus B=\left\\{(x,y)\mid\left(B\right)_{xy}\subseteq A\right\\}$ (4)
where $(B)_{xy}$ denotes the translation of the structuring element $B$ such
that its origin is at $(x,y)$. The output pixel $(x,y)$ is set to 1 if the
translated $B$ is completely contained within the set $A$, otherwise it is set
to 0. This equation represents the erosion of A by the structuring element
$B$.
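A minimal OpenCV sketch of the closing operation described above, assuming the mask is stored as an 8-bit binary image and using a hypothetical file name, might look like:

```python
import cv2

# Binary container mask produced by the segmentation network (values 0/255);
# the file name is illustrative only.
mask = cv2.imread("container_mask.png", cv2.IMREAD_GRAYSCALE)

# Elliptical 5x5 structuring element, as described above.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Closing = dilation followed by erosion: fills small holes and joins broken regions.
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
cv2.imwrite("container_mask_closed.png", closed)
```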
#### II-C2 Grayscale Value Conversion
Most of the images in this study are in color format, but the color
information is not highly relevant. Therefore, it is crucial to introduce
grayscale conversion to obtain meaningful numerical values. In terms of
grayscale conversion methods, Saravanan[13] proposed a novel algorithm that
addresses the contrast, sharpness, shadows, and structure of the image. This
algorithm approximates, reduces, and adds to the chromaticity and luminosity
of the RGB values. The formula from[13] is as follows:
$\displaystyle Y=0.299R+0.587G+0.114B$ $\displaystyle U=0.565(B-Y)$
$\displaystyle V=0.713(R-Y)$ $\displaystyle I_{1}=(R/3+G/3+B/3+U+V)/4$ (5)
where $Y$ represents luminance, while $U$ and $V$ represent chrominance. The
calculation of $Y$ is based on the weighted sum of RGB components, while the
calculation of $U$ and $V$ is based on the differences between red, green,
blue, and luminance. The intensity value ($I_{1}$) is computed by taking the
average of the RGB components, adding the $U$ and $V$ components, and dividing
the sum by 4.
Traditional grayscale image algorithms are not specifically tailored for
classification purposes. In the context of image classification, Güneş et
al.[14] proposed a novel color-to-grayscale conversion method based on Genetic
Algorithm (GA). By utilizing GA, the conversion coefficients for color images
are optimized to generate grayscale images with enhanced discriminative
features, aiming to reduce errors in image classification problems. The
formula from[14] is as follows:
$\displaystyle r^{\prime}=r/(r+g+b)$ $\displaystyle g^{\prime}=g/(r+g+b)$
$\displaystyle b^{\prime}=b/(r+g+b)$ $\displaystyle I_{2}=r^{\prime}\times R+g^{\prime}\times G+b^{\prime}\times B$ (6)
Integrating the above two methods, the final intensity value $I$ is obtained
by adding $I_{1}$ and $I_{2}$ through the weighted proportional coefficients
$\alpha$ and $\beta$ using the equation:
$I=\alpha\cdot I_{1}+\beta\cdot I_{2}$ (7)
where $\alpha$ and $\beta$ are weighting coefficients satisfying
$\alpha+\beta=1$. $I_{1}$ takes into account visual factors such as
brightness, chromaticity, and contrast, while $I_{2}$ emphasizes
discriminative power for classification. The two methods are complementary to
each other. By employing a weighted fusion approach, the visual quality can be
enhanced while simultaneously taking classification performance into account.
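A minimal NumPy sketch of the fused conversion in Eqs. (5)-(7) is given below; the choice $\alpha=\beta=0.5$ is illustrative only, since the weighting coefficients are not fixed here.

```python
import numpy as np

def fused_grayscale(rgb, alpha=0.5, beta=0.5):
    """Fuse the luminance/chrominance-based intensity I1 (Eq. 5) with the
    normalised-weight intensity I2 (Eq. 6) as I = alpha*I1 + beta*I2 (Eq. 7)."""
    rgb = rgb.astype(np.float64)
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]

    # Eq. (5): YUV-style components and intensity I1.
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    U = 0.565 * (B - Y)
    V = 0.713 * (R - Y)
    I1 = (R / 3 + G / 3 + B / 3 + U + V) / 4

    # Eq. (6): channel weights normalised by the per-pixel channel sum.
    s = R + G + B + 1e-12            # avoid division by zero on black pixels
    I2 = (R / s) * R + (G / s) * G + (B / s) * B

    return alpha * I1 + beta * I2    # Eq. (7)
```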
## III Method
Based on the algorithm analysis mentioned above, we propose an overall
framework workflow as illustrated in Fig. 1. The algorithm consists of four
core modules: Data engine construction, prominent object extraction from the
container, morphological completion of the container shape, and calculation of
the height difference in the container for liquid level detection.
Figure 1: Overall framework
### III-A Construct Data Engine
For the data engine, the core questions are how to evaluate and filter labels
and how to generate more label candidates of varying quality. SemiReward (SR)
[15] previously proposed an effective pseudo-label screening method for
classification and regression tasks. We adapted this method so that it can
evaluate our masks: common mask-quality metrics serve as the learnable reward,
allowing the trained SR model to score and filter candidate masks. In
parallel, data augmentation is performed with methods such as noise addition
and mix-up [16] to cover the space of plausible masks as broadly as possible.
We found this data engine to be a resource-efficient way to improve training;
combined with these techniques, it greatly improves sample quality during
training.
### III-B Salient Target Extraction
Using the U²-Net-based prominent object extraction algorithm, we focused on
container images. Initially, the SAM (Segment Anything Model) was employed for
image collection and processing, resulting in a substantial dataset of
container images along with their corresponding mask images for subsequent
analysis. These images, along with their masks, were fed into the U²-Net for
training, resulting in a prominent object detection model specifically
designed for extracting containers from images.
### III-C Container Morphology Compensation
Following the application of the U²-Net model, certain images exhibited
containers with colors closely resembling the surrounding environment, making
them difficult to separate, as shown in Fig. 2. This resulted in
discontinuities in the segmented masks. To address this issue, morphological
operations were applied to fill in the gaps and obtain complete masks,
ensuring a stable and continuous segmentation, as shown in Fig. 3.
Figure 2: Before the Completion
Figure 3: After the Completion
After processing by the trained U²-Net salient target detection model, a mask
marking only the container location is obtained; this mask is then fused with
the original image to obtain an image containing only the container.
### III-D Dynamic Liquid Level Detection
In motion object detection using frame differencing, the goal is to detect the
changing parts by eliminating the static regions and retaining the areas with
variations in the difference image. Zhan et al. [6] divided the edge
difference image into several small blocks and determined whether they were
motion regions by comparing the number of non-zero pixels with a threshold. By
applying this method, it is possible to extract the information about the
changing liquid levels within the container.
#### III-D1 Threshold Division
After converting the obtained container-only images to grayscale, the
grayscale value difference between adjacent frames at corresponding pixel
positions was calculated. A threshold value was established to partition the
images according to these variations. Pixels with differences greater than the
threshold were marked as white, while pixels with differences below the
threshold were marked as black. This process captured subtle changes in the
liquid level within the container, as shown in Fig. 4, and assigned different
labels to represent different liquid level states: no change in the lower
level, rising level, no change in the higher level, falling level, and
container movement. The labeled images were then fed into a neural network for
image classification.
Figure 4: Threshold Division
It is crucial to set a reasonable threshold that can clearly distinguish
neighboring differences. Initially, we set the threshold range between 20 and
60 and experimented with the resulting difference images using different
threshold values. The data in Fig. 5 shows the comparative results. The
threshold value of 50 achieved the best performance, reaching 92.19%.
Figure 5: Threshold Data
#### III-D2 Liquid level difference calculation
The images of adjacent video frames are first converted from RGB to grayscale.
While keeping the external environment and the container unchanged, variations
in the grayscale values at corresponding positions between consecutive frames
indicate subtle dynamic changes in the video sequence. We set a threshold for
the magnitude of these value differences and determined the optimal threshold
through experimental comparisons. Pixels at positions where the difference
exceeds the threshold are marked. By processing these consecutive video
frames, we obtain the dynamic changes in the liquid level within the
container.
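A minimal sketch of this step, assuming OpenCV's built-in grayscale conversion in place of the fused conversion of Eq. (7) and the threshold of 50 found above, might look like:

```python
import cv2
import numpy as np

THRESHOLD = 50  # best-performing grayscale-difference threshold found above

def level_change_mask(frame_prev, frame_curr, container_mask):
    """Binarise the per-pixel grayscale difference between two consecutive
    container-only frames; white pixels mark candidate liquid-level changes."""
    g_prev = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY).astype(np.int16)
    g_curr = cv2.cvtColor(frame_curr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    diff = np.abs(g_curr - g_prev).astype(np.uint8)
    diff[container_mask == 0] = 0                  # ignore pixels outside the container
    _, binary = cv2.threshold(diff, THRESHOLD, 255, cv2.THRESH_BINARY)
    return binary
```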
#### III-D3 Liquid level detection
Due to the training images being binary and the target object being relatively
homogeneous, we selected a lightweight model such as EfficientNet-B0 for
training. We fed these images shown in Fig. 7 and Fig. 7 along with their
corresponding labels into the neural network for image classification,
resulting in a network capable of detecting images for this specific task. By
utilizing this network, we ultimately achieved the detection of dynamic
changes in liquid level.
Figure 6: Increase
Figure 7: Decrease
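A minimal PyTorch sketch of this classification stage, assuming the five liquid-level states listed above and that the single-channel difference images are replicated to three channels before entering the network, might look like:

```python
import torch
import torch.nn as nn
from torchvision import models

# Five liquid-level states described above: lower-level static, rising,
# higher-level static, falling, and container movement.
NUM_CLASSES = 5

model = models.efficientnet_b0(weights=None)   # lightweight backbone, trained from scratch
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(images, labels):
    """One supervised step on a batch of thresholded difference images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```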
## IV Experiment and result analysis
Following our implementation of U²-Net for dynamic liquid level detection, we
compared its performance with several well-known semantic segmentation models
to benchmark its effectiveness. These models include U-Net[17],
DeepLabV3+[18], Mask R-CNN[19], F3Net[20], HRNet[21], and PSPNet[22]. The
evaluation metrics employed in our comparison were Accuracy (Acc), Precision
(P), Recall (R), F1-score, Mean Absolute Error (MAE), and Mean Squared Error
(MSE).
TABLE I: Model Comparison
Model | Acc | P | R | F1-score | MAE | MSE
---|---|---|---|---|---|---
U-Net | 0.977 | 0.694 | 0.784 | 0.736 | 10.529 | 49.121
DeepLabV3+ | 0.979 | 0.757 | 0.817 | 0.767 | 10.956 | 29.384
Mask R-CNN | 0.968 | 0.763 | 0.836 | 0.788 | 0.852 | 0.781
F3Net | 0.974 | 0.748 | 0.823 | 0.771 | 9.874 | 10.321
HRNet | 0.981 | 0.774 | 0.809 | 0.785 | 0.874 | 0.974
PSPNet | 0.984 | 0.769 | 0.813 | 0.786 | 9.923 | 10.219
U²-Net | 0.991 | 0.794 | 0.848 | 0.812 | 0.287 | 0.002
As indicated in Table I, U²-Net shows superior performance compared to the
other models evaluated. It achieves the highest accuracy at 0.991,
significantly higher than the next best-performing model, PSPNet, which has an
accuracy of 0.984. U²-Net’s precision and recall scores, 0.794 and 0.848
respectively, highlight its effectiveness in correctly classifying salient
areas in the images.
The F1-score for U²-Net is 0.812, confirming its robustness and the effective
balance it strikes between precision and recall. In terms of error metrics,
U²-Net records the lowest values with a mean absolute error of 0.287 and a
mean squared error of 0.002, emphasizing its precision and reliability in
predicting liquid level changes.
These comparative results underscore the potential of U²-Net for practical
deployment in scenarios where accurate liquid level detection is paramount,
such as in industrial control systems. The evaluation suggests that U²-Net
could serve as a reliable model for similar segmentation tasks that demand
high accuracy and consistency.
## V Conclusion
In this study, we developed a novel approach for liquid level state detection
by combining image differencing and binarization techniques. Our model
demonstrated strong robustness against variations in container types and
environmental conditions. By simplifying the input images into binary
representations focusing on the target object, we were able to achieve
accurate classification using a straightforward neural network architecture,
without the need for complex network designs.
One of the key advantages of our model is its reduced reliance on large
training datasets, which is a common challenge in many computer vision tasks.
This was made possible by leveraging the SemiReward framework to generate and
filter high-quality pseudo-labeled images using the SAM model. The resulting
dedicated dataset enabled efficient training and generalization of our model.
The prospective uses of our methodology surpass the domain of liquid level
state detection. The underlying principles can be adapted to a wide range of
tasks that involve identifying small changes in static object environments.
This versatility opens up opportunities for solving diverse problems across
various domains, such as quality control in manufacturing, anomaly detection
in surveillance systems, and monitoring of infrastructure conditions.
Integrating image differencing and object-focused binarization presents a
potent approach for simplifying complex visual information into more
manageable representations. By focusing on the essential features of the
target object, our model can effectively capture and analyze the relevant
changes while being resilient to variations in the background. This approach not
only enhances the robustness of the model but also reduces the computational
complexity and data requirements, making it more practical for real-world
deployments.
Furthermore, our model’s ability to generate high-quality pseudo-labeled data
using the SemiReward framework presents an opportunity for self-supervised
learning. By iteratively refining the dataset and retraining the model, we can
continuously improve its performance and adapt to new scenarios without the
need for extensive manual labeling efforts. This self-supervised learning
paradigm has the potential to greatly accelerate the development and
deployment of computer vision models in various domains.
In conclusion, our liquid level state detection model, based on image
differencing and binarization, offers a robust, efficient, and generalizable
approach for analyzing small changes in static object environments. By
simplifying complex images into binary representations and leveraging high-
quality pseudo-labeled data, we have demonstrated the potential for solving a
wide range of similar problems with reduced data requirements and
computational complexity. As we continue to explore and refine this approach,
we anticipate its application in diverse fields, contributing to advancements
in computer vision and automation technologies.
## References
* [1] Yadvendra Singh, Kumar Sanjeev Raghuwanshi, and Soubir Kumar. Review on liquid-level measurement and level transmitter using conventional and optical techniques. IETE Technical Review, 2018.
* [2] Pankaj Mohindru. Development of liquid level measurement technology: A review. Flow Measurement and Instrumentation, 89:102295, 2023.
* [3] Ti-Ho Wang, Ming-Chih Lu, Chen-Chien Hsu, Cheng-Chuan Chen, and Jia-Dong Tan. Liquid-level measurement using a single digital camera. Measurement, 42(4):604–610, 2009.
* [4] Weiya Sun, Da Wang, Shuai Xu, Jingye Wang, and Zhanyu Ma. Water level detection algorithm based on computer vision. Journal of Applied Sciences, 40(03):434–447, 2022.
* [5] Ping Xia, Feng Wang, Bangjun Lei, and Dongxia Shi. Intelligent visual water level recognition based on super pixel and graph cut algorithm. computer simulation, 38(03):430–436+441, 2021.
* [6] Aiyin Fang, Yongxian Wang, Ximeng Yin, Peng Wang, Zhongyi Li, and Zhi Liu. Detsegnet: A high-precision water level detection network based on detection and segmentation. In Proceedings of the 2023 (11th) China Symposium on Water Conservancy Information Technology, pages 56–73, 2023.
* [7] Yan-Ting Lin, Yi-Chun Lin, and Jen-Yu Han. Automatic water-level detection using single-camera images with varied poses. Measurement, 127:167–174, 2018.
* [8] Xiaoyuan Zhang, Tangyou Liu, and Futing Yu. Prediction of height difference between liquid level and container top based on ellipse detection. Computer and Information Technology, 30(03):19–23, 2022.
* [9] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, C Alexander Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023.
* [10] Xuebin Qin, Zichen Zhang, Chenyang Huang, Masood Dehghan, R Osmar Zaiane, and Martin Jagersand. U2-net: Going deeper with nested u-structure for salient object detection. Pattern recognition, 106:107404, 2020.
* [11] V Yuriy Vizilter, P Yuriy Pyt’ev, I Alexey Chulichkov, and M Leonid Mestetskiy. Morphological image analysis for computer vision applications. Computer Vision in Control Systems-1: Mathematical Theory, pages 9–58, 2015.
* [12] AM Raid, WM Khedr, MA El-Dosuky, and Aoud Mona. Image restoration based on morphological operations. International Journal of Computer Science, Engineering and Information Technology, 4(3):9–21, 2014.
* [13] C Saravanan. Color image to grayscale image conversion. In 2010 Second International Conference on Computer Engineering and Applications, volume 2, pages 196–199. IEEE, 2010.
* [14] Ali Güneş, Habil Kalkan, and Efkan Durmuş. Optimizing the color-to-grayscale conversion for image classification. Signal, Image and Video Processing, 10:853–860, 2016.
* [15] Siyuan Li, Weiyang Jin, Zedong Wang, Fang Wu, Zicheng Liu, Cheng Tan, and Stan Z. Li. Semireward: A general reward model for semi-supervised learning. In The Twelfth International Conference on Learning Representations, 2024.
* [16] Zicheng Liu, Siyuan Li, Di Wu, Zihan Liu, Zhiyuan Chen, Lirong Wu, and Stan Z. Li. Unveiling the power of mixup for stronger classifiers. In European Conference on Computer Vision, 2022.
* [17] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical image computing and computer-assisted intervention–MICCAI 2015: 18th international conference, Munich, Germany, October 5-9, 2015, proceedings, part III 18, pages 234–241. Springer, 2015.
* [18] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018.
* [19] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961–2969, 2017.
* [20] Jun Wei, Shuhui Wang, and Qingming Huang. F3net: fusion, feedback and focus for salient object detection. In Proceedings of the AAAI conference on artificial intelligence, volume 34, pages 12321–12328, 2020.
* [21] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5693–5703, 2019.
* [22] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia. Pyramid scene parsing network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2881–2890, 2017.
# Score-Based Generative Models for PET Image Reconstruction
Imraj R D Singh https://orcid.org/0000-0003-2186-0977
<EMAIL_ADDRESS>
Department of Computer Science, University College London, 66-72 Gower St,
WC1E 6EA, London, United Kingdom. Alexander Denker*
https://orcid.org/0000-0002-7265-261X<EMAIL_ADDRESS>
Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5,
28359 Bremen, Germany. Riccardo Barbano*
https://orcid.org/0000-0003-1863-2092<EMAIL_ADDRESS>
Department of Computer Science, University College London, 66-72 Gower St,
WC1E 6EA, London, United Kingdom. Željko Kereta
https://orcid.org/0000-0003-2805-0037<EMAIL_ADDRESS>
Department of Computer Science, University College London, 66-72 Gower St,
WC1E 6EA, London, United Kingdom. Bangti Jin
https://orcid.org/0000-0002-3775-9155<EMAIL_ADDRESS>
Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T.,
Hong Kong. Kris Thielemans https://orcid.org/0000-0002-5514-199X
<EMAIL_ADDRESS>
Institute of Nuclear Medicine, University College London, London, United
Kingdom. Peter Maass https://orcid.org/0000-0003-1448-8345 pmaass@uni-
bremen.de
Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5,
28359 Bremen, Germany. Simon Arridge https://orcid.org/0000-0003-1292-0210
<EMAIL_ADDRESS>
Department of Computer Science, University College London, 66-72 Gower St,
WC1E 6EA, London, United Kingdom. *Equal contribution.
###### Abstract
Score-based generative models have demonstrated highly promising results for
medical image reconstruction tasks in magnetic resonance imaging or computed
tomography. However, their application to Positron Emission Tomography (PET)
is still largely unexplored. PET image reconstruction involves a variety of
challenges, including Poisson noise with high variance and a wide dynamic
range. To address these challenges, we propose several PET-specific
adaptations of score-based generative models. The proposed framework is
developed for both 2D and 3D PET. In addition, we provide an extension to
guided reconstruction using magnetic resonance images. We validate the
approach through extensive 2D and 3D in-silico experiments with a model
trained on patient-realistic data without lesions, and evaluate on data
without lesions as well as out-of-distribution data with lesions. This
demonstrates the proposed method’s robustness and significant potential for
improved PET reconstruction.
Keywords: Positron emission tomography, score-based generative models, image
reconstruction
## 1 Introduction
Positron Emission Tomography (PET) is a functional medical imaging technique
for quantifying and visualising the distribution of a radio-tracer within the
body, and is vital in clinical practice for accurate diagnosis, treatment
planning, and monitoring of diseases. In a PET scan, radio-tracers are
injected to interrogate a specific biological pathway of interest. Through the
decay of the radio-tracer a positron is released, which upon annihilating with
an electron produces a pair of coincident photons that travel in approximately
anti-parallel directions. These emitted photons are detected and are then used
to reconstruct the underlying radio-tracer distribution. The relationship
between the measured emissions and the radio-tracer can be approximated with
the Poisson noise model as
${\mathbf{y}}\sim{\text{Pois}}(\mathbf{\bar{y}}),\quad\mathbf{\bar{y}}={\mathbf{A}}{\mathbf{x}}+\mathbf{\bar{b}},$
(1)
where $\mathbf{\bar{y}}\in\mathbb{R}^{m}$ is the expected value of the
measurements ($m$ is the number of detector bins) and
${\mathbf{x}}\in\mathbb{R}^{n}$ is the discrete (voxel) basis representation
of the tracer distribution ($n$ is the number of voxels). The system matrix
${\mathbf{A}}\in\mathbb{R}^{m\times n}$ includes approximate line integrals
between detectors as well as physical phenomena such as photon attenuation,
positron range, and detector sensitivity. It should be noted that 3D
measurements detect pairs of photons between detector rings, i.e. they are not
a stack of 2D measurements. The expected background
$\mathbf{\bar{b}}\in\mathbb{R}^{m}$ is an estimate of scatter and random
events (Qi and Leahy, 2006). The unique challenges that distinguish PET from
other imaging modalities, e.g. Magnetic Resonance Imaging (MRI) and Computed
Tomography (CT), include Poisson noise with low mean number of counts, and
widely varying dynamic range of images due to functional differences between
patients.
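As a toy illustration of the noise model in Eq. (1), with a small dense matrix standing in for a realistic PET projector, measurements could be simulated as follows:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 64, 128                       # toy sizes; real PET problems are far larger
A = rng.uniform(size=(m, n)) * 0.1   # stand-in for the system matrix
x_true = rng.uniform(size=n) * 5.0   # non-negative tracer distribution
b_bar = np.full(m, 0.5)              # expected background (scatter + randoms)

y_bar = A @ x_true + b_bar           # expected counts, Eq. (1)
y = rng.poisson(y_bar)               # measured counts with Poisson noise
```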
Most inverse problems in imaging are ill-posed, in the sense that the solution
may not exist, may not be unique, or may not depend continuously on the data
(Engl et al., 1996; Ito and Jin, 2015). To stabilise the reconstruction
process, prior knowledge is often leveraged through a penalising functional
that promotes solutions from a desirable image subset. The priors are
typically hand-crafted to promote desired features in the reconstructed image,
such as sparsity of edges (Rudin et al., 1992) or smoothness. Furthermore, if
an additional image is available, e.g. with high resolution structural
information, a suitable prior can promote common features between the two
images (Ehrhardt, 2021). This is often referred to as guided reconstruction.
In recent years, deep learning approaches have shown state-of-the-art
performance in PET image reconstruction, see surveys (Reader et al., 2021;
Pain et al., 2022). Existing approaches include post-processing (Kaplan and
Zhu, 2019), to synthesise high-dose images from low-dose ones (which is akin
to denoising), and deep unrolled optimisation (Mehranian and Reader, 2021;
Guazzo and Colarieti-Tosti, 2021). However, these supervised approaches
require large volumes of paired data that is often hard to acquire.
In contrast, generative models are unsupervised, requiring a dataset only of
images of the target domain. These can, for example, be high-quality
reconstructions acquired from prior scans. The aim of generative modelling is
to approximate the image manifold of a given dataset (Bengio et al., 2013).
There are a variety of methods for this task, e.g. generative adversarial
networks (Goodfellow et al., 2014), variational autoencoders (Kingma and
Welling, 2014) and recently Score-based Generative Models (SGMs), which aim to
generate high-quality samples, sample quickly, and have adequate mode coverage
(Xiao et al., 2022). Over recent years, SGMs have become the de facto method
for image generation due to the quality and diversity of generated images
(Dhariwal and Nichol, 2021). Generative models can be integrated into the
reconstruction process as data-driven priors that are independent of the
forward model, cf. (Dimakis, 2022). This modularity separates the classical
forward modelling problem from the generative image modelling problem: the
generative model is trained to generate images specific to the task at hand,
e.g. PET images of brains, and can then be used across scanners and noise
levels provided the learnt image manifold is still relevant.
SGMs have been applied to CT and MR image reconstruction (Song et al., 2022).
These reconstructions condition the SGM image generation on measurements, and
balance the consistency with measurements versus consistency with the SGM
learnt image manifold. From this perspective the SGM acts as a prior (Kobler
and Pock, 2023). There are different methods to enforce measurement
consistency of the reconstructions, which can be broadly classified into
gradient based methods (Jalal et al., 2021; Chung et al., 2023a) and
projection based methods (Song et al., 2022; Chung and Ye, 2022; Chung et al.,
2023b). Recently, denoising diffusion models (discrete variants of SGMs) were
used for PET image denoising (Gong et al., 2022). Instead, our work focuses on
PET image reconstruction, and we present the following contributions:
* •
We develop a novel algorithmic framework building upon SGMs that carefully
addresses the challenges inherent to PET. To do so, we modify the conditional
sampling method (Chung et al., 2023b; Zhu et al., 2023), recently proposed for
inverse problems with Gaussian noise, for PET image reconstruction. This is
achieved with a penalised Maximum A Posteriori (MAP) estimator computed with
an accelerated algorithm that evaluates subsets of the measurements.
* •
We leverage additional MR images to enhance the proposed framework, leading to
improved image quality that better agrees with the measured data.
* •
We scale the approach to 3D PET reconstruction.
The proposed method is tested on multiple noise levels, radio-tracers, and in
both 2D and 3D settings with an SGM trained on patient-realistic BrainWeb data
without lesions (Collins et al., 1998). In addition to data without lesions,
we test on out-of-distribution (OOD) data with lesions to validate method
robustness. The rest of the paper is structured as follows. In Section 2 we
provide the background on PET reconstruction and SGMs. In particular, we
present different methods for using SGMs in image reconstruction. In Section 3
we propose modifications needed to apply SGMs for PET reconstruction. We
describe the experimental setting in Section 4, and present and discuss the
results in Section 5. The code is publicly available at
https://github.com/Imraj-Singh/Score-Based-Generative-Models-for-PET-Image-
Reconstruction
## 2 Background
### 2.1 Fundamentals of Positron Emission Tomography Reconstruction
PET measurements are the result of a low-count photon-counting process. The
true forward process, from tracer-distribution to photon detection, is
approximated by the forward model defined in Eq. (1). The likelihood of the
measured photon counts, for an unknown tracer distribution, can be modelled by
an independent Poisson distribution. One of the first methods developed for
estimating the tracer distribution through a Poisson model was maximum
likelihood. This selects an image ${\mathbf{x}}\in\mathbb{R}_{\geq 0}^{n}$ by
maximising the Poisson Log-Likelihood (PLL) function, given by
$L({\mathbf{y}}|{\mathbf{x}})=\sum_{i=1}^{m}y_{i}\log([{\mathbf{A}}{\mathbf{x}}+\mathbf{\bar{b}}]_{i})-[{\mathbf{A}}{\mathbf{x}}+\mathbf{\bar{b}}]_{i}-\log(y_{i}!).$
(2)
By maximising the PLL, the Maximum Likelihood Estimate (MLE) is obtained. A
particularly important algorithm for computing the MLE is Expectation
Maximisation (EM) (Shepp and Vardi, 1982). However, due to its slow
convergence, acceleration is sought through splitting the PLL into a sum of
$n_{\mathrm{sub}}\geq 1$ sub-objectives. This gives rise to the much faster
Ordered Subset Expectation Maximisation (OSEM) (Hudson and Larkin, 1994)
algorithm. Because of the ill-conditioning of PET reconstruction, the MLE
tends to overfit to measurement noise. To address the ill-conditioning and
improve reconstruction quality, it is common practice to regularise the
reconstruction problem via the use of an image-based prior. This gives rise to
the MAP objective
$\Phi(\mathbf{x})=L(\mathbf{y}|\mathbf{x})+\lambda R(\mathbf{x}),$ (3)
where $R({\mathbf{x}})$ is the log of a chosen image-based prior with penalty
strength $\lambda$. Block-Sequential Regularised Expectation Maximisation
(BSREM) (Pierro and Yamagishi, 2001; Ahn and Fessler, 2003) is an iterative
algorithm, globally convergent under mild assumptions, that applies the subset
idea of OSEM to the MAP objective. The objective is split as
$\Phi({\mathbf{x}})=\sum_{j=1}^{n_{\mathrm{sub}}}\Phi_{j}({\mathbf{x}})$,
where $\Phi_{j}({\mathbf{x}})=L_{j}({\mathbf{y}}|{\mathbf{x}})+\lambda
R({\mathbf{x}})/n_{\mathrm{sub}}$ is the sub-objective and $L_{j}$ is the
log-likelihood for a subset of the measurements. The BSREM update iterations
are given by
$\mathbf{x}^{i+1}=P_{\mathbf{x}\geq
0}\left[\mathbf{x}^{i}+\alpha_{i}\mathbf{D}(\mathbf{x}^{i})\nabla\Phi_{j}(\mathbf{x}^{i})\right]\quad
i\geq 0,$ (4)
where $P_{\mathbf{x}\geq 0}[\cdot]$ denotes the non-negativity projection, $i$
is the iteration number, and index $j=(i\mod n_{\mathrm{sub}})+1$ cyclically
accesses sub-objectives. The preconditioner is
$\mathbf{D}({\mathbf{x}}^{i})=\mathrm{diag}\left\\{{\max({\mathbf{x}}^{i},\delta)}/{\mathbf{A}^{\top}\mathbf{1}}\right\\}$,
where $\delta$ is a small positive constant to ensure positive definiteness,
$\mathbf{A}^{\top}\mathbf{1}$ is referred to as the sensitivity image, and
$\mathbf{A}^{\top}$ the matrix transpose. The step-sizes are
$\alpha_{i}=\alpha_{0}/(\zeta\lfloor i/n_{\mathrm{sub}}\rfloor+1)$, where
$\alpha_{0}=1$ and $\zeta$ is a relaxation coefficient. A common regulariser
for PET reconstruction is the Relative Difference Prior (RDP) (Nuyts et al.,
2002), see Appendix A.3 for details. The gradient of the RDP is scale-
invariant as it is computed using the ratio of voxel values. This partially
overcomes the issue with the wide dynamic range observed in emission
tomography images, helping to simplify the choice of the penalty strength
across noise levels.
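A minimal dense-matrix sketch of the BSREM iteration in Eq. (4) is given below; `grad_prior` stands in for the gradient of the chosen log-prior (e.g. the RDP), the relaxation value is illustrative, and real implementations use matrix-free projectors rather than explicit subset matrices.

```python
import numpy as np

def bsrem(y_subs, A_subs, b_subs, x0, grad_prior, lam,
          n_epochs=10, zeta=0.1, alpha0=1.0, delta=1e-9):
    """Minimal sketch of the BSREM update in Eq. (4).
    y_subs, A_subs, b_subs: per-subset measurements, dense system sub-matrices
    and expected backgrounds; grad_prior(x) returns the gradient of R(x)."""
    n_sub = len(A_subs)
    sens = sum(A.T @ np.ones(A.shape[0]) for A in A_subs)   # sensitivity image A^T 1
    x, i = x0.copy(), 0
    for _ in range(n_epochs):
        for j in range(n_sub):
            A, y, b = A_subs[j], y_subs[j], b_subs[j]
            ybar = A @ x + b
            grad = A.T @ (y / ybar - 1.0) + lam * grad_prior(x) / n_sub
            D = np.maximum(x, delta) / sens                  # diagonal preconditioner
            alpha = alpha0 / (zeta * (i // n_sub) + 1.0)     # relaxed step size
            x = np.maximum(x + alpha * D * grad, 0.0)        # ascent step + projection
            i += 1
    return x
```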
### 2.2 Score-based Generative Models
SGMs have emerged as a state-of-the-art method for modelling, and sampling
from, high-dimensional image distributions (Song et al., 2021c). They
reinterpret denoising diffusion probabilistic modelling (Sohl-Dickstein et
al., 2015; Ho et al., 2020) and score-matching Langevin dynamics (Song and
Ermon, 2019) through the lens of Stochastic Differential Equations (SDE). SGMs
are often formulated by prescribing a forward diffusion process defined by an
Itô SDE
$d{\mathbf{x}_{t}}={\mathbf{f}}({\mathbf{x}_{t}},t)dt+g(t)d\mathbf{w}_{t},\quad{\mathbf{x}}_{0}\sim
p_{0}:={\pi},$ (5)
where $\\{{\mathbf{x}}_{t}\\}_{t\in[0,T]}$ is a stochastic process indexed by
time $t$ and ${\pi}$ is the image distribution. Each random vector
${\mathbf{x}}_{t}$ has an associated time-dependent density
$p({\mathbf{x}}_{t})$. To emphasise that the density is a function of $t$ we
write $p_{t}({\mathbf{x}}_{t}):=p({\mathbf{x}}_{t})$. The multivariate Wiener
process $\\{{\mathbf{w}}_{t}\\}_{t\geq 0}$ is the standard Brownian motion.
Starting at the image distribution ${\pi}$, the drift function
${\mathbf{f}}(\cdot,t):{\mathbb{R}}^{n}\to{\mathbb{R}}^{n}$ and the diffusion
function $g:[0,T]\to{\mathbb{R}}$ are chosen such that the terminal
distribution at $t=T$ approximates the standard Gaussian,
$p_{T}\approx\mathcal{N}(0,I)$. Thus, the forward diffusion process maps the
image distribution ${\pi}$ to a simple, tractable distribution. The aim of
SGMs is to invert this process, i.e. start at the Gaussian distribution and go
back to the image distribution ${\pi}$. Under certain conditions on
${\mathbf{f}}$ and $g$, a reverse diffusion process can be defined (Anderson,
1982)
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}_{t}})]dt+g(t)d\bar{\mathbf{w}}_{t},$
(6)
that runs backwards in time. The Wiener process
$\\{\bar{\mathbf{w}}_{t}\\}_{t\geq 0}$ is time-reversed Brownian motion, and
the term $\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}_{t}})$ is the score
function. Denoising Score Matching (DSM) (Vincent, 2011) provides a
methodology for estimating
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{x}_{t}})$ by matching the
transition densities $p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})$ with a time-
conditional neural network $s_{\theta}({\mathbf{x}_{t}},t)$, called the score
model, parametrised by $\theta$. The resulting optimisation problem is given
by
$\min_{\theta}\left\\{L_{\text{DSM}}(\theta)=\mathbb{E}_{t\sim
U[0,T]}\mathbb{E}_{{\mathbf{x}_{0}}\sim{\pi}}\mathbb{E}_{{\mathbf{x}_{t}}\sim
p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})}\left[\omega_{t}\|s_{\theta}({\mathbf{x}_{t}},t)-\nabla_{{\mathbf{x}}}\log
p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})\|_{2}^{2}\right]\right\\},$ (7)
where $\omega_{t}>0$ are weighting factors, balancing the scores at different
time steps. For general SDEs, the loss $L_{\text{DSM}}(\theta)$ may still be
intractable, since it requires access to the transition density
$p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})$. However, for SDEs with an affine
linear drift function, $p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})$ is a
Gaussian and thus can be given in closed form (Särkkä and Solin, 2019).
Throughout the paper, we use $T=1$ and the variance preserving SDE (Ho et al.,
2020) given by
$d{\mathbf{x}_{t}}=-\frac{\beta(t)}{2}{\mathbf{x}_{t}}dt+\sqrt{\beta(t)}d\mathbf{w}_{t},$
(8)
where $\beta(t):[0,1]\to{\mathbb{R}}_{>0}$ is an increasing function defining
the noise schedule. We use
$\beta(t)=\beta_{\text{min}}+t(\beta_{\text{max}}-\beta_{\text{min}})$ giving
the transition kernel
$p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})=\mathcal{N}({\mathbf{x}_{t}};\gamma_{t}{\mathbf{x}_{0}},\nu_{t}^{2}I)$
with coefficients $\gamma_{t},\nu_{t}\in{\mathbb{R}}$ computed from drift and
diffusion coefficients, see Appendix A.1 for details. Generating samples with
the score model $s_{\theta}({\mathbf{x}_{t}},t)$ as a surrogate requires
solving the reverse SDE (6), with the score model
$s_{\theta}({\mathbf{x}_{t}},t)$ in place of
$\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}_{t}})$:
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}s_{\theta}({\mathbf{x}_{t}},t)]dt+g(t)d\bar{\mathbf{w}}_{t}.$
(9)
Drawing samples from the resulting generative model thus involves two steps.
First, drawing a sample from the terminal distribution
${\mathbf{x}}_{1}\sim\mathcal{N}(0,I)\approx p_{1}$, and second, initialising
the reverse SDE (9) with ${\mathbf{x}}_{1}$ and simulating backwards in time
until $t=0$. The latter can be achieved by Euler-Maruyama schemes or
predictor-corrector methods (Song et al., 2021c).
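As an illustration, the following sketch implements the linear noise schedule, the transition-kernel coefficients, and one backwards Euler-Maruyama step of the reverse SDE (9) for the variance preserving SDE. The closed forms for $\gamma_{t}$ and $\nu_{t}$ are the standard ones for the VP-SDE and are assumed here (the paper defers the exact expressions to Appendix A.1); the schedule endpoints and `score_model` are placeholders.

```python
import numpy as np

beta_min, beta_max = 0.1, 20.0       # placeholder endpoints of the linear noise schedule

def beta(t):
    return beta_min + t * (beta_max - beta_min)

def gamma_nu(t):
    """Transition-kernel coefficients for the VP-SDE (standard closed form, assumed here):
    gamma_t = exp(-0.5 * int_0^t beta(s) ds),  nu_t^2 = 1 - gamma_t^2."""
    integral = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2
    gamma_t = np.exp(-0.5 * integral)
    return gamma_t, np.sqrt(1.0 - gamma_t ** 2)

def reverse_em_step(x_t, t, dt, score_model):
    """One backwards Euler-Maruyama step of the reverse SDE (9) with dt > 0."""
    s = score_model(x_t, t)                          # s_theta(x_t, t)
    drift = -0.5 * beta(t) * x_t - beta(t) * s       # f(x_t, t) - g(t)^2 s_theta(x_t, t)
    z = np.random.randn(*x_t.shape)
    return x_t - drift * dt + np.sqrt(beta(t) * dt) * z
```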
#### 2.2.1 Denoising diffusion implicit models
Simulating the reverse SDE can be computationally expensive as a fine time
grid is often necessary to produce realistic samples. Denoising Diffusion
Implicit Models (DDIMs) (Song et al., 2021a) were introduced to allow faster
sampling, and build upon a result by Tweedie (Efron, 2011) to approximate the
expectation ${\mathbb{E}}[{\mathbf{x}_{0}}|{\mathbf{x}_{t}}]$ via the score
model $s_{\theta}({\mathbf{x}_{t}},t)$ as
${\mathbb{E}}[{\mathbf{x}_{0}}|{\mathbf{x}_{t}}]=\frac{{\mathbf{x}_{t}}+\nu_{t}^{2}\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}_{t}})}{\gamma_{t}}\approx\frac{{\mathbf{x}_{t}}+\nu_{t}^{2}s_{\theta}({\mathbf{x}_{t}},t)}{\gamma_{t}}:={\hat{\mathbf{x}}_{0}}({\mathbf{x}_{t}}).$
(10)
DDIM defines a non-Markovian sampling rule, which uses both the current sample
${\mathbf{x}_{t}}$ and Tweedie’s estimate
${\hat{\mathbf{x}}_{0}}({\mathbf{x}_{t}})$ to create an accelerated sampler.
Let $0=t_{k_{1}}\leq\dots\leq t_{k_{N}}=1$ be the time discretisation. The
DDIM sampling update can be written as
$\begin{split}{\mathbf{x}}_{t_{k-1}}&=\gamma_{t_{k-1}}{\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})+\text{Noise}({\mathbf{x}}_{t_{k}},s_{\theta})+\eta_{t_{k}}{\mathbf{z}},\quad{\mathbf{z}}\sim\mathcal{N}(0,I)\\\
&\text{ with
}\text{Noise}({\mathbf{x}}_{t_{k}},s_{\theta}):=-\nu_{t_{k}}\sqrt{\nu_{t_{k-1}}^{2}-\eta_{t_{k}}^{2}}s_{\theta}({\mathbf{x}}_{t_{k}},t_{k}).\end{split}$
(11)
The sampling rule can be split into a denoising step (predicting
${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$ using the score model), and
adding an appropriate amount of noise back. Thus, the sampling mimics an
iterative refinement process, as the prediction of the denoised estimate
${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$ will be more accurate for
smaller $t_{k}$. Different choices of $\eta_{t}$ result in different sampling
schemes. We choose $\eta_{t_{k}}=\eta\beta_{t_{k}}$ with a hyperparameter
$\eta\in[0,1]$, controlling the amount of stochasticity in the sampling, and
$\beta_{t_{k}}=\nu_{t_{k-1}}/\nu_{t_{k}}\sqrt{1-\gamma_{t_{k}}/\gamma_{t_{k-1}}}$
(Song et al., 2021a).
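A minimal sketch of one DDIM step, combining Tweedie's estimate (Eq. (10)) with the update rule (Eq. (11)); `gamma`, `nu` and `score_model` are placeholders for the transition-kernel coefficients and the trained score model.

```python
import numpy as np

def ddim_step(x_tk, t_k, t_km1, gamma, nu, score_model, eta):
    """One unconditional DDIM step from t_k to t_{k-1} (sketch of Eqs. (10)-(11))."""
    g_k, n_k = gamma(t_k), nu(t_k)
    g_km1, n_km1 = gamma(t_km1), nu(t_km1)
    s = score_model(x_tk, t_k)

    x0_hat = (x_tk + n_k ** 2 * s) / g_k                     # Tweedie's estimate, Eq. (10)

    beta_k = (n_km1 / n_k) * np.sqrt(1.0 - g_k / g_km1)      # beta_{t_k} as chosen in the paper
    eta_k = eta * beta_k                                     # eta_{t_k} = eta * beta_{t_k}

    noise = -n_k * np.sqrt(n_km1 ** 2 - eta_k ** 2) * s      # Noise(x_{t_k}, s_theta)
    z = np.random.randn(*x_tk.shape)
    return g_km1 * x0_hat + noise + eta_k * z                # Eq. (11)
```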
### 2.3 Using Score-based Generative Models for Inverse Problems
The goal of the Bayesian framework of inverse problems is to estimate the
posterior ${p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})$, i.e. the conditional
distribution of images ${\mathbf{x}}$ given noisy measurements ${\mathbf{y}}$.
Using Bayes’ theorem the posterior can be factored into
${p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})\propto{p^{\text{lkhd}}}({\mathbf{y}}|{\mathbf{x}}){\pi}({\mathbf{x}}),\quad\nabla_{{\mathbf{x}}}\log{p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})=\nabla_{{\mathbf{x}}}\log{p^{\text{lkhd}}}({\mathbf{y}}|{\mathbf{x}})+\nabla_{{\mathbf{x}}}\log{\pi}({\mathbf{x}}),$
(12)
where ${p^{\text{lkhd}}}$ denotes the likelihood and ${\pi}$ the prior given
by the image distribution. We can set up a generative model for the posterior
in the same way as for the prior ${\pi}$ in Section 2.2 by defining a forward
SDE which maps the posterior to random noise. To generate a sample from the
posterior ${p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})$, we can simulate the
corresponding reverse SDE
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}\nabla_{{\mathbf{x}}}\log
p_{t}({\mathbf{x}_{t}}|{\mathbf{y}})]dt+g(t)d\bar{\mathbf{w}}_{t},$ (13)
where we need access to the score of the time-dependent posterior
$\nabla_{{\mathbf{x}}}\log p_{t}({\mathbf{x}_{t}}|{\mathbf{y}})$. Similar to
Eq. (12), we use Bayes’ theorem for the score of the posterior and decompose
$\nabla_{{\mathbf{x}}}\log p_{t}({\mathbf{x}_{t}}|{\mathbf{y}})$ into a prior
and a likelihood term, where the former is approximated with the trained score
model
$\begin{split}\nabla_{{\mathbf{x}}}\log
p_{t}({\mathbf{x}_{t}}|{\mathbf{y}})&=\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{x}_{t}})+\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})\\\
&\approx s_{\theta}({\mathbf{x}_{t}},t)+\nabla_{{\mathbf{x}}}\log
p_{t}({\mathbf{y}}|{\mathbf{x}_{t}}).\end{split}$ (14)
Substituting the above approximation into (13), we obtain
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}(s_{\theta}({\mathbf{x}_{t}},t)+\nabla_{{\mathbf{x}_{t}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}}))]dt+g(t)d\bar{\mathbf{w}}_{t}.$
(15)
We can recover approximate samples from the posterior
${p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})$, by simulating the reverse SDE
(15). Through iterative simulation of the reverse SDE with varying noise
initialisations, we can estimate moments of the posterior distribution. As is
common practice in the field (Song et al., 2022; Chung and Ye, 2022; Jalal et
al., 2021) we use one sample for the reconstruction, due to computational
costs of repeatedly solving the reverse SDE. In addition to the score model
$s_{\theta}({\mathbf{x}_{t}},t)$, we need the score of the time-dependent
likelihood $\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})$.
At the start of the forward SDE (for $t=0$), it is equal to the true
likelihood ${p^{\text{lkhd}}}$. However, for $t>0$ the score
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})$ is
intractable to compute exactly and different approximations have been
proposed. In (Jalal et al., 2021; Ramzi et al., 2020), this term was
approximated with the likelihood ${p^{\text{lkhd}}}$ evaluated at the noisy
sample ${\mathbf{x}_{t}}$ with time-dependent penalty strength $\lambda_{t}$
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})\approx\lambda_{t}\nabla_{{\mathbf{x}}}\log{p^{\text{lkhd}}}({\mathbf{y}}|{\mathbf{x}_{t}}).$
(16)
We refer to Eq. (16) as the Naive approximation. The Diffusion Posterior
Sampling (DPS) (Chung et al., 2023a) uses Tweedie’s formula to obtain
$\hat{{\mathbf{x}}}_{0}({\mathbf{x}_{t}})\approx{\mathbb{E}}[{\mathbf{x}_{0}}|{\mathbf{x}_{t}}]$
and approximates
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})$ by
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})\approx\nabla_{{\mathbf{x}}}\log{p^{\text{lkhd}}}({\mathbf{y}}|{\hat{\mathbf{x}}_{0}}({\mathbf{x}_{t}})),$
(17)
where $\nabla_{{\mathbf{x}}}$ denotes taking the derivative with respect to ${\mathbf{x}}_{t}$
(instead of $\hat{{\mathbf{x}}}_{0}$). It was shown that this approximation
leads to improved performance for several image reconstruction tasks (Chung et
al., 2023a). However, DPS comes with a higher computational cost, due to the
need to back-propagate the gradient through the score model.
Recently, several works proposed modifying the DDIM sampling rule in Eq. (11)
for conditional generation (Zhu et al., 2023; Chung et al., 2023b). These
methods generally consist of three steps: (1) estimating the denoised image
${\mathbf{x}_{0}}$ using Tweedie’s estimate
${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$; (2) updating
${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$ with a data consistency step
using the measurements ${\mathbf{y}}$; and (3) adding the noise back,
according to the DDIM update rule, in order to get a sample for the next time
step $t_{k-1}$. Importantly, with this approach there is no need to estimate
the gradient of the time-dependent likelihood
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})$ as data
consistency is only enforced on Tweedie’s estimate at $t=0$. These conditional
DDIM samplers differ most greatly in the implementation of the data
consistency update. Decomposed Diffusion Sampling (DDS) (Chung et al., 2023b)
proposes to align Tweedie’s estimate with the measurements by running $p$
steps of a Conjugate Gradient (CG) scheme for minimising the negative log-
likelihood at each sampling step. Let
$\text{CG}^{(p)}({\hat{\mathbf{x}}_{0}})$ denote the $p$-th CG update
initialised with ${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$. This can be
seen as an approximation to the conditional expectation, i.e.
${\mathbb{E}}[{\mathbf{x}_{0}}|{\mathbf{x}_{t}},{\mathbf{y}}]\approx\text{CG}^{(p)}({\hat{\mathbf{x}}_{0}})$
(Ravula et al., 2023). Using this approximation, the update step for DDS can
be written as
${\mathbf{x}}_{t_{k-1}}=\gamma_{t_{k-1}}\text{CG}^{(p)}({\hat{\mathbf{x}}_{0}})+\text{Noise}({\mathbf{x}}_{t_{k}},s_{\theta})+\eta_{t_{k}}{\mathbf{z}},\text{
with }{\mathbf{z}}\sim\mathcal{N}(0,I),$ (18)
where the introduction of the conditional expectation offers us the
possibility to explore different approximations specific to PET image
reconstruction.
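The following sketch illustrates the DDS-style update in Eq. (18) for a generic linear Gaussian data term, where the data-consistency step is $p$ conjugate-gradient iterations on the normal equations initialised at Tweedie's estimate. The operator `A` and measurements `y` are placeholders (the image is assumed flattened to a vector); the PET-specific replacement of this data term is introduced in Section 3.

```python
import numpy as np

def cg_data_consistency(x0_hat, A, y, p):
    """p conjugate-gradient iterations on the normal equations A^T A x = A^T y,
    initialised at Tweedie's estimate (generic Gaussian data term; placeholder
    for the data-consistency step of DDS)."""
    x = x0_hat.copy()
    r = A.T @ (y - A @ x)
    d = r.copy()
    for _ in range(p):
        Ad = A.T @ (A @ d)
        step = (r @ r) / (d @ Ad + 1e-12)
        x = x + step * d
        r_new = r - step * Ad
        d = r_new + ((r_new @ r_new) / (r @ r + 1e-12)) * d
        r = r_new
    return x

def dds_step(x_tk, t_k, t_km1, gamma, nu, score_model, eta, A, y, p):
    """One DDS step (Eq. (18)): Tweedie's estimate, CG data consistency, DDIM renoising."""
    g_k, n_k = gamma(t_k), nu(t_k)
    s = score_model(x_tk, t_k)
    x0_hat = (x_tk + n_k ** 2 * s) / g_k                     # Tweedie's estimate
    x0_dc = cg_data_consistency(x0_hat, A, y, p)             # approx. E[x_0 | x_t, y]
    beta_k = (nu(t_km1) / n_k) * np.sqrt(1.0 - g_k / gamma(t_km1))
    eta_k = eta * beta_k
    noise = -n_k * np.sqrt(nu(t_km1) ** 2 - eta_k ** 2) * s  # Noise(x_{t_k}, s_theta)
    z = np.random.randn(*x_tk.shape)
    return gamma(t_km1) * x0_dc + noise + eta_k * z
```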
## 3 PET-specific Adaptations for SGMs
To apply SGMs to PET reconstruction, several key components of the pipeline in
Section 2.2 have to be modified in order to incorporate PET-specific
constraints. Namely, we introduce measurement-based normalisation of the input
to the score model, and explain how to apply a score model trained on 2D
slices for 3D reconstruction. Additionally, we adapt the sampling methods from
Section 2.3 to incorporate the Poisson noise model. Finally, we demonstrate
that the SGM framework allows for the incorporation of additional information,
e.g. MR images, by using classifier-free guidance (Ho and Salimans, 2022).
### 3.1 Measurement-based Normalisation
The intensity of the unknown tracer distribution in emission tomography can
significantly vary across different scans, resulting in a high dynamic range
that poses challenges for deep learning approaches. Neural networks may
exhibit bias toward intensity levels that appear more frequently in the
training set. Consequently, the network might struggle to handle new images
with unseen intensity levels, leading to instability in the learning and
evaluation process (Tran et al., 2020). For SGMs the intensity range of images
must be predefined to ensure that the forward diffusion process converges to a
standard Gaussian distribution and to stabilise the sampling process (Lou and
Ermon, 2023). Input normalisation is a standard deep learning methodology to
deal with intensity shifts and normalise the inputs to the network. In a
similar vein, we propose a PET-specific normalisation method to ensure that
the score model $s_{\theta}({\mathbf{x}_{t}},t)$ is able to estimate the score
function of images with arbitrary intensity values. Namely, we normalise each
training image ${\mathbf{x}_{0}}$ to ensure voxel intensities are within a
certain range. To do this we introduce a training normalisation factor
$c_{\text{train}}$ that, when applied, ensures the average emission per emission
voxel (a voxel with non-zero intensity value) is 1. This is computed as
$c_{\text{train}}=c({\mathbf{x}_{0}}):=\frac{\sum_{j=1}^{m}[{\mathbf{x}_{0}}]_{j}}{\\#\\{j:[{\mathbf{x}_{0}}]_{j}>0\\}},$
(19)
where the numerator captures the total emission in the image, and the
denominator is the number of emission voxels. The normalisation factor is
incorporated into the DSM training objective function by rescaling the initial
image, yielding the objective
$\mathbb{E}_{t\sim
U[0,T]}\mathbb{E}_{{\mathbf{x}_{0}}\sim{\pi}}\mathbb{E}_{{\mathbf{z}}\sim\mathcal{N}(0,I)}\mathbb{E}_{c\sim
U[\frac{c_{\rm train}}{2},\frac{3c_{\rm
train}}{2}]}\left[\omega_{t}\left\|s_{\theta}\left(\tilde{\mathbf{x}}_{t},t\right)-\nabla_{{\mathbf{x}}}\log
p_{t}(\tilde{\mathbf{x}}_{t}|{\mathbf{x}_{0}}/c)\right\|_{2}^{2}\right],$ (20)
with
$\tilde{\mathbf{x}}_{t}=\gamma_{t}{\mathbf{x}_{0}}/c+\nu_{t}{\mathbf{z}}$.
Compared with Eq. (7), the scale-factor in range $c\sim U[\frac{c_{\rm
train}}{2},\frac{3c_{\rm train}}{2}]$ is used to encourage the score model to
be more robust with respect to misestimations of the normalisation constant
during sampling.
An analogue of Eq. (19) is unavailable during the sampling, and thus a
surrogate is required. This is obtained through an approximate reconstruction,
computed using a single epoch of OSEM from a constant non-negative
initialisation. The resulting sampling normalisation factor is given by
$c_{\text{OSEM}}=\frac{\sum_{j=1}^{m}[{\mathbf{x}}_{\text{OSEM}}]_{j}}{\\#\\{j:[{\mathbf{x}}_{\rm
OSEM}]_{j}>Q_{0.01}\\}},$ (21)
where $Q_{0.01}$ denotes the 1st percentile of ${\mathbf{x}}_{\text{OSEM}}$
values. This threshold is heuristically chosen to ensure that noise and
reconstruction artefacts do not cause an over-estimation of the number of
emission voxels. In the reconstruction process the normalisation constant
$c_{\rm OSEM}$ is applied as a factor scaling the time-dependent likelihood,
giving
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}(s_{\theta}({\mathbf{x}_{t}},t)+\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|c_{\text{OSEM}}{\mathbf{x}_{t}}))]dt+g(t)d\bar{\mathbf{w}}_{t}.$
(22)
At the final time step $t=0$, the output ${\mathbf{x}}$ is rescaled by $c_{\rm
OSEM}$ to recover the correct intensity level.
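A small sketch of the normalisation factors in Eqs. (19) and (21) and of the rescaled training input of Eq. (20); `x_osem` is assumed to be a one-epoch OSEM reconstruction, and the scale-factor augmentation draws $c\sim U[c_{\rm train}/2, 3c_{\rm train}/2]$.

```python
import numpy as np

def c_train(x0):
    """Training normalisation factor (Eq. (19)): total emission divided by the
    number of emission voxels (voxels with non-zero intensity)."""
    return x0.sum() / np.count_nonzero(x0 > 0)

def c_osem(x_osem, q=0.01):
    """Sampling-time surrogate (Eq. (21)) from a one-epoch OSEM reconstruction;
    voxels below the q-quantile are not counted as emission voxels, guarding
    against noise and reconstruction artefacts."""
    threshold = np.quantile(x_osem, q)
    return x_osem.sum() / np.count_nonzero(x_osem > threshold)

def rescaled_training_input(x0, gamma_t, nu_t, rng=None):
    """Rescaled, noised training input x_tilde_t of Eq. (20), with the scale
    drawn from U[c_train/2, 3 c_train/2]."""
    rng = np.random.default_rng() if rng is None else rng
    c = rng.uniform(0.5, 1.5) * c_train(x0)
    return gamma_t * x0 / c + nu_t * rng.standard_normal(x0.shape)
```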
### 3.2 Scaling to 3D Reconstruction
While some SGM studies deal with fully 3D image generation (Pinaya et al.,
2022), the majority of work is restricted to 2D images. This is largely due to
the fact that the learning of full 3D volume distributions is computationally
expensive and requires access to many training volumes. Therefore, we propose
to train the score model on 2D axial slices and use a specific decomposition
of the conditional sampling rules to apply the model for 3D reconstruction.
Upon simulating the conditional reverse SDE in Eq. (15) using the Euler-
Maruyama approach, we arrive at the iteration rule:
$\displaystyle\tilde{\mathbf{x}}_{t_{k-1}}={\mathbf{x}}_{t_{k}}+\big{[}{\mathbf{f}}({\mathbf{x}}_{t_{k}},t_{k})-g(t_{k})^{2}s_{\theta}({\mathbf{x}}_{t_{k}},t_{k})\big{]}\Delta
t+g(t_{k})\sqrt{|\Delta
t|}{\mathbf{z}},\quad{\mathbf{z}}\sim\mathcal{N}(0,I),$ (23)
$\displaystyle{\mathbf{x}}_{t_{k-1}}=\tilde{\mathbf{x}}_{t_{k-1}}-g(t_{k})^{2}\nabla_{{\mathbf{x}}}\log
p_{t_{k}}({\mathbf{y}}|{\mathbf{x}}_{t_{k}})\Delta t,$ (24)
using an equidistant time discretisation $0=t_{k_{1}}\leq\dots\leq
t_{k_{N}}=1$ for $N\in{\mathbb{N}}$, with a time step $\Delta t=-1/N$. We
split the Euler-Maruyama update into two equations to highlight the influences
of the score model and the measurements ${\mathbf{y}}$. First, Eq. (23) is the
Euler-Maruyama discretisation for the unconditional reverse SDE, see Eq. (9).
This update is independent of the measurements ${\mathbf{y}}$ and can be
interpreted as a prior update, increasing the likelihood of
${\mathbf{x}}_{t_{k}}$ under the SGM. The second step in Eq. (24) is a data
consistency update, pushing the current iterate to better fit the
measurements. Notably, this step is fully independent of the score model. This
strategy was developed for 3D reconstruction, focusing on sparse view CT and
MRI (Chung et al., 2023c). It was proposed to apply the prior update in Eq.
(23) to all slices in the volume independently and use the 3D forward model in
the data consistency step. Further, a regulariser in the direction orthogonal
to the slice was introduced, to improve consistency of neighbouring slices.
However, applying this approach to the Euler-Maruyama discretisation results
in slow sampling times as a small time step $|\Delta t|$ is necessary. To
accelerate the sampling of high quality samples, we propose to use the DDS
update in Eq. (18) that uses a similar decomposition of independent score
model updates to axial slices, and 3D data consistency updates. Additionally,
we accelerate data consistency updates by splitting the measurement data into
ordered subsets and applying the forward model block-sequentially. The details
are explained below.
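A sketch of the split Euler-Maruyama iteration in Eqs. (23)-(24) for a volume, assuming the variance preserving SDE (so $f(\mathbf{x},t)=-\beta(t)\mathbf{x}/2$ and $g(t)^{2}=\beta(t)$): the prior update applies the 2D score model independently to each axial slice, while the data-consistency update uses the gradient of the time-dependent log-likelihood under the 3D forward model. `score_model_2d` and `grad_loglik_3d` are placeholders.

```python
import numpy as np

def split_em_step(x_vol, t_k, dt, beta, score_model_2d, grad_loglik_3d):
    """One split Euler-Maruyama step (Eqs. (23)-(24)) for a volume of shape
    (n_slices, H, W), with dt = |Delta t| > 0."""
    g2 = beta(t_k)                                            # g(t_k)^2 for the VP-SDE
    # Prior update (Eq. (23)): the 2D score model is applied to each axial slice
    s = np.stack([score_model_2d(sl, t_k) for sl in x_vol])
    drift = -0.5 * beta(t_k) * x_vol - g2 * s                 # f - g^2 s_theta
    z = np.random.randn(*x_vol.shape)
    x_tilde = x_vol - drift * dt + np.sqrt(g2 * dt) * z
    # Data-consistency update (Eq. (24)): 3D likelihood gradient, no score model;
    # the sign accounts for Delta t = -dt in Eq. (24)
    return x_tilde + g2 * grad_loglik_3d(x_vol) * dt
```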
### 3.3 Modifications of Sampling Methods
The sampling schemes and approximations in Section 2.3 were originally
proposed for inverse problems with Gaussian noise. The work on DPS (Chung et
al., 2023a) also considers inverse problems with Poisson noise, but utilises a
Gaussian approximation to the Poisson noise, which is not appropriate for PET
reconstruction due to the low photon count. To apply the Naive approximation
or the DPS approach to Poisson inverse problems, one could simply replace the
Gaussian log-likelihood with the PLL in Eq. (16) and Eq. (17). However, PLL
and its gradient are only defined for non-negative values. Therefore, we have
to introduce a non-negativity projection into the sampling to ensure that the
gradient of the PLL can be evaluated. In the context of guided diffusion, it
was proposed to project the iterates ${\mathbf{x}}_{t_{k}}$ to a specified
domain after each sampling step (Li et al., 2022; Saharia et al., 2022). In
our case this would require thresholding all negative values. However, this
creates a mismatch between the forward and reverse SDEs. It was observed that
this mismatch results in artefacts in the reconstructions and may even lead to
divergence of the sampling (Lou and Ermon, 2023). Therefore, we propose to
threshold only the input to the PLL. With $L$ denoting the PLL (see Eq. (2)),
for the PET-Naive approximation we use
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})\approx\lambda_{t}^{\text{{Naive}{}}}\nabla_{{\mathbf{x}}}L({\mathbf{y}}|c_{\text{OSEM}}P_{{\mathbf{x}}\geq
0}[{\mathbf{x}}_{t}]),$ (25)
and likewise for PET-DPS
$\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})\approx\lambda_{t}^{\text{DPS}}\nabla_{{\mathbf{x}}}L({\mathbf{y}}|c_{\text{OSEM}}P_{{\mathbf{x}}\geq
0}[{\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t})]).$ (26)
Note that this leads to a perturbed likelihood gradient that is not computed
on the true iterate ${\mathbf{x}}_{t}$, but only on the projection. In order
to reconstruct the PET image we have to solve the reverse SDE using the
specific approximation (PET-Naive or PET-DPS) as the likelihood term. This
usually requires around $1000$ sampling steps to produce an appropriate
reconstruction and results in impractically long reconstruction times for 3D
volumes.
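A minimal PyTorch-style sketch of the PET-DPS likelihood term in Eq. (26): the Poisson log-likelihood is evaluated on the projected, rescaled Tweedie estimate, and the gradient with respect to ${\mathbf{x}}_{t}$ is obtained by back-propagating through the score model. The function and parameter names are placeholders, and the log-likelihood is written up to constants.

```python
import torch

def poisson_logl(y, x, A, b):
    """Poisson log-likelihood L(y | x) up to constants, with ybar = A x + b."""
    ybar = A @ x + b
    return (y * torch.log(ybar + 1e-12) - ybar).sum()

def pet_dps_likelihood_score(x_t, t, y, A, b, score_model, gamma_t, nu_t, c_osem, lam_t):
    """Sketch of Eq. (26): lambda_t * grad_x L(y | c_OSEM P_{x>=0}[x0_hat(x_t)])."""
    x_t = x_t.detach().requires_grad_(True)
    s = score_model(x_t, t)
    x0_hat = (x_t + nu_t ** 2 * s) / gamma_t        # Tweedie's estimate
    x0_proj = torch.clamp(x0_hat, min=0.0)          # non-negativity projection
    logl = poisson_logl(y, c_osem * x0_proj.flatten(), A, b)
    (grad,) = torch.autograd.grad(logl, x_t)        # back-propagate through the score model
    return lam_t * grad
```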
To reduce the reconstruction times we propose to modify the conditional DDIM
sampling rule, which we call PET-DDS, similar to the DDS framework (Zhu et
al., 2023; Chung et al., 2023b), cf. Section 2.3. This circumvents the usage
of $\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}_{t}})$, instead
enforcing data consistency for Tweedie’s estimate
${\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$. For PET reconstruction we
propose to implement this data consistency with a MAP objective, leading to
the PET-DDS update
$\displaystyle{\mathbf{x}}^{0}_{t_{k}}$
$\displaystyle={\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})$ (27)
$\displaystyle{\mathbf{x}}^{i+1}_{t_{k}}$ $\displaystyle=P_{\mathbf{x}\geq
0}\left[{\mathbf{x}}^{i}_{t_{k}}+\mathbf{D}({\mathbf{x}}^{i}_{t_{k}})\nabla_{\mathbf{x}}\Phi_{j}({\mathbf{x}}^{i}_{t_{k}})\right]$
(28) $\displaystyle\quad\quad i=0,\dots,p-1$
$\displaystyle{\mathbf{x}}_{t_{k-1}}$
$\displaystyle=\gamma_{t_{k-1}}{\mathbf{x}}^{p}_{t_{k}}+\text{Noise}({\mathbf{x}}_{t_{k}},s_{\theta})+\eta_{t_{k}}{\mathbf{z}},\quad{\mathbf{z}}\sim\mathcal{N}(0,I),$
(29)
where the sub-objective is
$\Phi_{j}({\mathbf{x}}^{i})=L_{j}({\mathbf{y}}|c_{\text{OSEM}}{\mathbf{x}}^{i})+(\lambda^{\text{RDP}}R_{z}({\mathbf{x}}^{i})-\lambda^{\text{DDS}}\|{\mathbf{x}}^{i}-{\hat{\mathbf{x}}_{0}}\|_{2}^{2})/n_{\mathrm{sub}}.$
(30)
The sub-objective index $j=j(i)$ is given by $j=(p(N-k)+i\mod
n_{\rm{sub}})+1$, which cyclically accesses sub-objectives. The RDP used for
3D data $R_{z}$ is applied in the $z$-direction, perpendicular to the axial
slice, see Appendix A.3 for more details. The prior in Eq. (30) consists of
two components: one anchoring to Tweedie's estimate,
$\|{\mathbf{x}}-{\hat{\mathbf{x}}_{0}}\|_{2}^{2}$, and the other the RDP in the
$z$-direction, $R_{z}({\mathbf{x}})$. The components have associated penalty
strengths $\lambda^{\text{DDS}}$ and $\lambda^{\text{RDP}}$, respectively.
In a PET-DDS update we first independently compute Tweedie’s estimate based on
${\mathbf{x}}_{t_{k}}$ for each axial slice (Eq. 27). Tweedie’s estimate
${\hat{\mathbf{x}}_{0}}$ impacts the reconstruction in two ways: first through
the Tikhonov regulariser scaled with $\lambda^{\text{DDS}}$, and second as the
initial value for the projected gradient descent in Eq. (28). Running
$p$ steps of projected gradient descent balances consistency between the PLL
on the measurements, the RDP in the $z$-direction, and Tweedie's estimate (Eq. 30). To
speed up computation of the objective gradient, the objective is split into
sub-objectives and the gradient of the log-likelihood is evaluated using only
subsets of the measurements ${\mathbf{y}}$, similar to the BSREM update in Eq.
(4). The subsets are partitioned in a staggered configuration and are ordered
with a Herman-Meyer order (Herman and Meyer, 1993). Eq. (29) is the DDIM
update applied to the conditioned Tweedie estimate ${\mathbf{x}}^{p}_{t_{k}}$, where
the score update is again applied independently for each axial slice; here the
notation $\text{Noise}({\mathbf{x}}_{t_{k}},s_{\theta})$ is overloaded to denote
this slice-wise application. The DDIM update gives ${\mathbf{x}}_{t_{k-1}}$, and
these PET-DDS updates repeat until $t_{0}=0$, giving the reconstruction
$\hat{{\mathbf{x}}}$.
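A condensed sketch of one PET-DDS step (Eqs. (27)-(29)); `grad_subobjective` stands in for the preconditioned sub-objective gradient $\mathbf{D}(\mathbf{x}^{i})\nabla\Phi_{j}(\mathbf{x}^{i})$ of Eq. (30) (including the subset log-likelihood, the RDP in the $z$-direction, and the anchor to Tweedie's estimate), the score model is applied slice-wise, and all names are placeholders.

```python
import numpy as np

def pet_dds_step(x_tk, t_k, t_km1, gamma, nu, score_model_2d, grad_subobjective, p, eta):
    """One PET-DDS step: Tweedie's estimate (Eq. (27)), p preconditioned
    updates of the MAP sub-objective (Eq. (28)), and DDIM renoising (Eq. (29))."""
    g_k, n_k = gamma(t_k), nu(t_k)
    s = np.stack([score_model_2d(sl, t_k) for sl in x_tk])    # slice-wise score
    x0_hat = (x_tk + n_k ** 2 * s) / g_k                      # Eq. (27)

    x = x0_hat
    for i in range(p):                                        # Eq. (28)
        step = grad_subobjective(x, x0_hat, i)                # D(x^i) * grad Phi_j(x^i)
        x = np.maximum(x + step, 0.0)                         # non-negativity projection

    beta_k = (nu(t_km1) / n_k) * np.sqrt(1.0 - g_k / gamma(t_km1))
    eta_k = eta * beta_k
    noise = -n_k * np.sqrt(nu(t_km1) ** 2 - eta_k ** 2) * s   # slice-wise Noise(.)
    z = np.random.randn(*x_tk.shape)
    return gamma(t_km1) * x + noise + eta_k * z               # Eq. (29)
```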
### 3.4 MR Image Guided Reconstruction
In recent years, several regularisation methods have been proposed which
leverage the availability of additional MR images to improve PET image
reconstruction (Ehrhardt et al., 2016; Bai et al., 2013; Somayajula et al.,
2011). These studies often encode anatomical features of the MR image as edges
or level sets and build hand-crafted regularisers based on these encoded
features. This is commonly referred to as guided reconstruction (Ehrhardt,
2021), where the MR image is first reconstructed and is then used in the PET
reconstruction pipeline. The SGM approach can be modified for guided
reconstruction. In this setting we can use Bayes’ theorem to express the
posterior
$\nabla_{\mathbf{x}}\log{p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}},{\mathbf{x}_{\text{MR}}})=\nabla_{\mathbf{x}}\log{p^{\text{lkhd}}}({\mathbf{y}}|{\mathbf{x}})+\nabla_{\mathbf{x}}\log{\pi}({\mathbf{x}}|{\mathbf{x}_{\text{MR}}}),$
(31)
assuming that ${\mathbf{y}}$ and ${\mathbf{x}_{\text{MR}}}$ are conditionally
independent given ${\mathbf{x}}$. Here, the likelihood
${p^{\text{lkhd}}}({\mathbf{y}}|{\mathbf{x}})$ is given by the Poisson noise
model and ${\pi}({\mathbf{x}}|{\mathbf{x}_{\text{MR}}})$ is a prior
conditioned on the MR image ${\mathbf{x}_{\text{MR}}}$, which will be learned
via a score model. Using this decomposition, the reverse SDE, given the MR
image, can be written as
$d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}\left(\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}})+\nabla_{{\mathbf{x}}}\log{\pi}({\mathbf{x}}|{\mathbf{x}_{\text{MR}}})\right)]dt+g(t)d\bar{\mathbf{w}}_{t}.$
(32)
We can use PET-Naive or PET-DPS to approximate the score of the time-dependent
likelihood $\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{y}}|{\mathbf{x}})$.
However, we have to train a score model, conditioned on the MR image, to
estimate the conditional score function
$s_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}})\approx\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}}|{\mathbf{x}_{\text{MR}}}),$
(33)
where ${\mathbf{x}_{\text{MR}}}$ is an additional input to the score model. In
order to train such a score model, we need a dataset of PET images with
corresponding MR images. Learning such a conditional score model,
$s_{\theta}({\mathbf{x}};t,{\mathbf{x}_{\text{MR}}})\approx\nabla_{\mathbf{x}}\log{p_{t}}({\mathbf{x}}|{\mathbf{x}_{\text{MR}}})$,
was recently proposed and applied for PET image denoising (Gong et al., 2022).
However, learning a conditional score model requires a paired dataset
$\\{({\mathbf{x}}^{i},\mathbf{x}_{\text{MR}}^{i})\\}_{i=1}^{m}$ of PET images
and corresponding MR images. In contrast, using the Classifier Free Guidance
(CFG) framework (Ho and Salimans, 2022), we only need a partly paired dataset,
i.e. besides paired data
$\\{({\mathbf{x}}^{i},\mathbf{x}_{\text{MR}}^{i})\\}_{i=1}^{m_{1}}$ we can
also make use of unpaired data $\\{{\mathbf{x}}^{i}\\}_{i=1}^{m_{2}}$. In
particular, CFG trains both a conditional and unconditional score model
simultaneously and utilises their combination during the sampling process. CFG
uses a zero image $\mathbf{0}$ to distinguish between the conditional and
unconditional score model
$s_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}})\approx\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{x}_{t}}|{\mathbf{x}_{\text{MR}}})\quad\mbox{and}\quad
s_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}}=\mathbf{0})\approx\nabla_{{\mathbf{x}}}\log{p_{t}}({\mathbf{x}_{t}}),$
(34)
yielding a conditional DSM objective
$\begin{split}{\mathbb{E}}_{t\sim
U[0,T]}{\mathbb{E}}_{{\mathbf{x}_{0}},{\mathbf{x}_{\text{MR}}}\sim{\pi}}{\mathbb{E}}_{{\mathbf{x}_{t}}\sim
p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})}{\mathbb{E}}_{\rho\sim
B(q)}\left[\omega_{t}\|s_{\theta}({\mathbf{x}_{t}},t;\rho~{}{\mathbf{x}_{\text{MR}}})-\nabla_{{\mathbf{x}}}\log
p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})\|_{2}^{2}\right],\end{split}$
(35)
where $B(q)$ is a Bernoulli distribution with parameter $q$. Thus, if the
additional MR input is set to zero, the conditional DSM loss matches the
unconditional DSM loss defined in Eq. (7). After training, CFG defines a
combined score model
$\tilde{s}_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}})=(1+w)s_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}})-ws_{\theta}({\mathbf{x}_{t}};t,\mathbf{0}),$
(36)
as a linear combination with $w$ as the guidance strength. This combined score
model $\tilde{s}_{\theta}({\mathbf{x}_{t}};t,{\mathbf{x}_{\text{MR}}})$ can then be
used for any of the presented sampling methods.
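The CFG combination in Eq. (36) reduces to a few lines; the following sketch assumes the score model accepts the (possibly zeroed) MR image as an extra input.

```python
import torch

def cfg_score(score_model, x_t, t, x_mr, w):
    """Classifier-free guidance combination of Eq. (36):
    (1 + w) * conditional score - w * unconditional score."""
    s_cond = score_model(x_t, t, x_mr)                       # conditioned on the MR image
    s_uncond = score_model(x_t, t, torch.zeros_like(x_mr))   # MR input set to the zero image
    return (1.0 + w) * s_cond - w * s_uncond
```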
## 4 Experimental Setup
### 4.1 Dataset and Evaluation Metrics
We use the BrainWeb dataset consisting of $20$ patient-realistic volumes
(Aubert-Broche et al., 2006). The tracer simulated was 18F-Fluorodeoxyglucose
(FDG) and the volumes were further perturbed by three realisations of random
distortions (Schramm, 2021). $19$ out of the $20$ volumes were used for
training. Axial slices with non-zero intensity were extracted, resulting in a
training dataset of $4569$ slices. For 2D evaluation, we used $20$ equidistant
axial slices from the remaining volume (subject 04). An additional OOD dataset was
created by simulating ellipsoid hot lesions of random size and location within
soft-tissue. The noise level of simulated measurements was set by re-scaling
forward-projected ground truth images, where the scale ensured that the total
counts divided by the emission volume was 2.5 or 10. These rescaled measurements
are the clean measurements, which were then corrupted with Poisson noise and
constant background contamination. In addition, $10$ noise realisations were
obtained. Herein, we refer to the noise levels as $2.5$ and $10$, where the
total true counts averaged over the evaluation dataset were $122\,808$ and
$491\,232$, respectively. Resolution modelling, attenuation, sensitivity, and
background contamination were modelled and subsequently included in the
forward model utilising ParallelProj (Schramm, 2022).
For the 3D evaluation, measurements of subject 04 were simulated with a
Siemens Biograph mMR scanner geometry (Karlberg et al., 2016). Measurements
with detector sensitivities and attenuation were simulated and included in the
forward model using SIRF and STIR (Ovtchinnikov et al., 2020; Thielemans et
al., 2012). The noise level was equivalent to 40 million counts without
background, and 5 noisy realisations were obtained. Both FDG and Amyloid
tracers were simulated with hot lesions. The projector and measurements were
split into 28 ordered subsets.
We evaluate the performance between reconstructions and ground truths using
two global metrics: Peak-Signal-to-Noise Ratio (PSNR) and Structural
Similarity Index Measure (SSIM) (Wang et al., 2004). Moreover, we compute two
local quality scores over a Region of Interest (ROI). First, to quantify the
detectability of lesions, we compute the Contrast Recovery Coefficient (CRC).
Second, the noise in reconstructions is estimated over background ROIs using
Standard Deviation (STD), which computes standard deviation across
realisations and then averages over ROI voxels (Tong et al., 2010). In 2D, we
evaluate the reconstruction consistency by computing the Kullback-Leibler
Divergence (KLDIV) between measurements ${\mathbf{y}}$ and estimated
measurements
$\mathbf{\bar{y}}={\mathbf{A}}\hat{{\mathbf{x}}}+\mathbf{\bar{b}}$, where
$\hat{{\mathbf{x}}}$ denotes the reconstruction. Furthermore, we include the
“mean KL” between the noisy and clean measurements across the 2D evaluation
dataset. More information about quality metrics can be found in the Appendix
A.4.
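For reference, a sketch of the KLDIV consistency metric, assuming the generalised Kullback-Leibler divergence commonly used for count data; the exact definition used in the paper is given in Appendix A.4.

```python
import numpy as np

def kldiv(y, ybar, eps=1e-9):
    """Generalised Kullback-Leibler divergence between measured counts y and
    estimated measurements ybar = A x_hat + b_bar (assumed form; the exact
    definition used in the paper is in Appendix A.4)."""
    y = np.asarray(y, dtype=float)
    ybar = np.asarray(ybar, dtype=float)
    return np.sum(ybar - y + y * np.log((y + eps) / (ybar + eps)))
```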
We present tables of best performing methods with optimal penalty strengths,
as well as qualitative figures of reconstructed images. Furthermore, to allow
direct comparison between methods, we give sensitivity plots of PSNR, SSIM,
KLDIV or CRC vs. STDs. Since STD gives an estimate of the noise in the image,
these plots can show the effect of varying penalty strength on reconstruction
quality or data-consistency. A lower STD typically corresponds to lower data-
fidelity (a higher prior strength), and the converse is true for higher STD.
In practice, with a generative model as a prior, higher penalty strengths do
not necessarily lead to a lower STD, as there may be multiple reconstructions
with high likelihood under the model. Variations in STD are further
exacerbated by the approximate nature and stochasticity of SGM sampling.
### 4.2 Comparison Methods
In the 2D setting, we compare against two established supervised learning
methods used in medical image reconstruction: the UNet post-processing
FBPConvNet (Jin et al., 2017) and unrolled iterative learned primal dual
(Adler and Öktem, 2018). We modify both models for PET reconstruction; the
post-processing method is referred to as PET-UNet, and the unrolled method as
PET-LPD. Additionally we compare against a state-of-the-art SGM approach for
PET image denoising (Gong et al., 2022), referred to as Naive (OSEM). This
denoising approach replaces the likelihood on the measurements with a
likelihood modelled as a Gaussian centred at the noisy reconstruction.
Therefore, Naive (OSEM) is able to use the same pre-trained score model as our
proposed PET-Naive, PET-DPS, and PET-DDS methods.
For 3D evaluation, Deep Image Prior (DIP) reconstruction was included as an
unsupervised comparison method with a 3D network architecture well-established
in the literature (Gong et al., 2019; Ote et al., 2023; Singh et al., 2023). In
addition, converged MAP solutions with an RDP regulariser were computed using
the BSREM algorithm with a range of penalty strengths, cf. the PET background
in Section 2.1.
Further details on all comparison methods can be found in Appendix A.3.
## 5 Numerical Experiments
The first set of experiments investigates the performance of the SGM methods
(Naive (OSEM), PET-Naive, PET-DPS, PET-DDS) against one-another and against
established supervised methods (PET-UNet, PET-LPD). This is done in 2D and at
two noise levels, with and without lesions. In the second set of experiments
we present results with MR image guidance. The last set of experiments
investigates the best performing SGM method (PET-DDS) on 3D reconstruction,
and provides a comparison against classical MAP and state-of-the-art DIP
reconstructions with lesions and two simulated tracers. For all SGM results we
make use of a single score model trained on the dataset of axial BrainWeb
slices as discussed in Section 4.1. The details about the training process and
network architecture can be found in Appendix A.1. Further results can be
found in the Appendix B.
### 5.1 2D Reconstruction
The aim of 2D experiments is to benchmark the SGM and supervised methods, and
analyse the stability of SGM methods with respect to the choice of different
penalty strengths $\lambda_{t}^{\text{{Naive}}},\lambda_{t}^{\text{DPS}}$ and
$\lambda^{\text{DDS}}$. The penalty strengths for PET-Naive and PET-DPS
depend on the time step $t$, and the details about their specific choice can
be found in Appendix A.2.
#### 5.1.1 Reconstruction without Lesion
The results in Fig. 1 show that the performance of the four SGM methods varies
greatly for data of noise level 2.5 with no lesions. PET-DPS is the best
performing method, consistently giving high PSNR, SSIM and low KLDIV values.
However, it is also computationally the most expensive, requiring $1000$ steps
with back-propagation through the score model. PET-DDS performs competitively
with a much lower computational overhead of $100$ steps without score model
back-propagation. Naive (OSEM) performs well with regard to PSNR, but
performs poorly in terms of data-consistency (KLDIV) and SSIM. As Naive (OSEM)
computes the likelihood on an early-stopped OSEM image, increasing data-
consistency ensures the reconstruction approaches the OSEM image. The maximum
achievable likelihood of Naive (OSEM) does not give a KLDIV lower than the
“mean KL”. Hence it is not deemed a strong surrogate for the true likelihood
computed on measurements. The PET-Naive reconstructions have substantially
higher STD values. This is attributed to instability when computing the PLL
gradient due to non-negativity projection directly applied on
${\mathbf{x}_{t}}$.
Figure 1: Results for BrainWeb without lesions with noise level 2.5 for
different penalty parameters. Standard deviation is across reconstructions
from different realisations of measurements.
In Table 1 we show quantitative results of the optimal penalty strength choice
for each metric, and comparisons against PET-UNet and PET-LPD. These
supervised methods are trained on data with noise levels of 5, 10 and 50
without lesions. Using noise levels 2.5 and 10 in evaluation allows
investigating the effect of OOD noise levels on supervised methods. PET-LPD is
the best performing method, giving the best SSIM at noise level 10, and best
PSNR at both noise levels. Between noise levels 10 and 2.5 PET-LPD observes a
drop of 6.7% and 6.6% for PSNR and SSIM, whereas PET-DPS exhibits a drop of
3.4% and 3.8%, respectively. PET-DPS performs competitively across both noise
levels and metrics, and gives the best SSIM value at noise level 2.5. Given
this competitive performance and lower reduction in drop of quality metrics
between noise level, PET-DPS is deemed more robust to different noise level.
This may be attributed to the unsupervised nature of SGM methods. Namely, as
they are not trained on data of given noise levels they are less affected by
distributional differences in noise levels at evaluation and training stages.
Table 1: Results using the best hyperparameters for each method for BrainWeb without lesions for noise levels 2.5 and 10. The best SGM is highlighted in grey, and overall best metric is underlined. Supervised methods are trained with data of noise level 10, but not 2.5, and are in-distribution when evaluated with noise level 10.

| | | Noise level 2.5 | | Noise level 10 | |
|---|---|---|---|---|---|
| | | PSNR, $\lambda$ | SSIM, $\lambda$ | PSNR, $\lambda$ | SSIM, $\lambda$ |
| Score-based Model | Naive (OSEM) | $22.38$, $0.527$ | $0.770$, $3.08$ | $23.40$, $0.2$ | $0.792$, $0.9$ |
| | PET-Naive | $21.52$, $12.0$ | $0.781$, $12.0$ | $22.81$, $10.0$ | $0.815$, $10.0$ |
| | PET-DPS | $22.80$, $650.$ | $0.818$, $750.$ | $23.70$, $400.$ | $0.850$, $400.$ |
| | PET-DDS | $22.46$, $0.25$ | $0.789$, $0.2$ | $23.55$, $0.025$ | $0.849$, $0.025$ |
| Supervised Models | PET-LPD | $23.07$, N/A | $0.813$, N/A | $24.72$, N/A | $0.87$, N/A |
| | PET-UNet | $22.80$, N/A | $0.80$, N/A | $24.52$, N/A | $0.868$, N/A |
#### 5.1.2 2D Reconstruction with Lesion
As the score model was trained on data without lesions, testing on data with
simulated hot lesions gives an insight into generalisability to OOD data. The
quantitative results in Fig. 2 and Table 2 show results that are consistent
with those for data with no lesions in Fig. 1. CRC was computed to quantify
the detectability of hot lesions. The CRC results indicate that PET-DDS is
better at resolving lesions than other SGM methods. Further, Fig. 2 shows a
clear trade-off between reconstruction quality in terms of PSNR and SSIM and
visibility of lesions. Here, a lower regularisation results in a better
performance in terms of CRC. Results for noise level $10$ are shown in
Appendix B.1.
Figure 2: Results for BrainWeb with lesions with noise level 2.5 for different
penalty parameters. Standard deviation is across reconstructions from
different realisations of measurements.
Comparing the results between noise levels $2.5$ and $10$ in Table 2, we
observe that SGMs increase CRC values as compared to supervised methods. SGMs
also compare favourably with regard to PSNR and SSIM. CRC is a local metric
that is more relevant than PSNR or SSIM in a clinical setting, as it
quantifies the detectability of lesions. Therefore, it is of greater interest
to improve this local metric rather than global metrics. With this
perspective, SGMs outperform supervised methods, and the best-performing SGM
methods are PET-DPS and PET-DDS. Due to the performance observed and
computational overhead, PET-DDS is considered the most appropriate method to
test in guided reconstruction and in the 3D setting.
Table 2: Results using the best hyperparameters for each method for BrainWeb with lesions for noise levels 2.5 and 10. The penalty strength for the SGM methods is denoted by $\lambda$. The best score-based method is highlighted in grey. The overall best score per noise level is underlined.

| Noise Level | | | PSNR, $\lambda$ | SSIM, $\lambda$ | CRC, $\lambda$ |
|---|---|---|---|---|---|
| 2.5 | Score-based Model | Naive (OSEM) | $27.60$, $0.527$ | $0.821$, $1.71$ | $0.865$, $50.$ |
| | | PET-Naive | $26.82$, $12.0$ | $0.817$, $12.0$ | $0.761$, $50.$ |
| | | PET-DPS | $27.99$, $625.$ | $0.855$, $650.$ | $0.822$, $1500.$ |
| | | PET-DDS | $27.46$, $0.15$ | $0.841$, $0.15$ | $0.910$, $0.01$ |
| | Supervised Models | PET-LPD | $28.30$, N/A | $0.853$, N/A | $0.865$, N/A |
| | | PET-UNet | $27.74$, N/A | $0.836$, N/A | $0.787$, N/A |
| 10 | Score-based Model | Naive (OSEM) | $28.87$, $0.25$ | $0.847$, $0.9$ | $0.898$, $4.$ |
| | | PET-Naive | $28.07$, $10.0$ | $0.845$, $7.5$ | $0.829$, $20.$ |
| | | PET-DPS | $29.01$, $400.$ | $0.878$, $400.$ | $0.907$, $550.$ |
| | | PET-DDS | $28.99$, $0.025$ | $0.879$, $0.025$ | $0.962$, $0.$* |
| | Supervised Models | PET-LPD | $30.07$, N/A | $0.894$, N/A | $0.904$, N/A |
| | | PET-UNet | $29.41$, N/A | $0.889$, N/A | $0.856$, N/A |

*Regularised due to denoised score estimate initialisation.
#### 5.1.3 MR Guided Reconstruction
Experiments with and without additional MR image guidance were conducted to
illustrate the flexibility of the proposed approach, and tested at three
guidance strengths $w=0.25$, $0.5$, $1.0$, where the guidance strength $w$
closer to zero constitutes more guidance. The results with best hyper-
parameters are given in Table 3. It is observed that there are significant
improvements to PSNR ($>18\%$) and SSIM ($>13\%$) with guidance. On PET data
with lesions, the lesions were only simulated for PET and not MR images.
Therefore, the data represents a worst-case scenario where clinically important
features are only present in the PET image. The results with lesions show that
increasing the amount of guidance decreased the CRC values and made the lesions
more difficult to detect - see Fig. 12. Conversely, the PSNR and SSIM values
on data with lesions increased with $w$ closer to zero (more guidance). This
highlights the potential dangers of guidance, as well as the importance of
evaluating local and global quality metrics.
Table 3: Results using the best hyperparameters for SGM methods for noise level 2.5 with MR image guidance. The best method for each setting (with/without lesions, and performance metric) is highlighted in grey, where the penalty strength is tuned for each method individually.

| | without lesions | | with lesions | | |
|---|---|---|---|---|---|
| | PSNR, $\lambda$ | SSIM, $\lambda$ | PSNR, $\lambda$ | SSIM, $\lambda$ | CRC, $\lambda$ |
| DDS (w/o MR) | $22.46$, $0.25$ | $0.789$, $0.2$ | $27.46$, $0.15$ | $0.841$, $0.15$ | $0.910$, $0.01$ |
| DDS $w=0.25$ | $30.22$, $0.35$ | $0.950$, $0.35$ | $31.21$, $0.15$ | $0.954$, $0.25$ | $0.726$, $0.0$ |
| DDS $w=0.5$ | $29.32$, $0.25$ | $0.940$, $0.25$ | $31.12$, $0.15$ | $0.946$, $0.25$ | $0.778$, $0.0$ |
| DDS $w=1.0$ | $26.66$, $0.15$ | $0.899$, $0.15$ | $29.31$, $0.1$ | $0.906$, $0.15$ | $0.939$, $0.0$ |
In Fig. 3, reconstructions without guidance and with guidance of various
strengths are presented for data without lesions at noise level 2.5. The
reconstructions indicate that MR guidance helps to reconstruct the specific
anatomical boundaries and structure, i.e. white matter tracts. In Appendix B.2
we give additional qualitative slices with and without lesions, cf. Figs. 11
and 12, and the associated sensitivity plots in Figs. 9 and 10.
Figure 3: Comparisons of single slice reconstructions with the PET-DDS MR
guided vs. unguided at noise level 2.5 without lesions.
### 5.2 3D Reconstruction
Full 3D reconstructions were analysed for two tracers with simulated lesions.
We evaluate the performance of PET-DDS with additional RDP regularisation in
the $z$-direction perpendicular to axial slices (termed RDPz), and introduce
subset-based data consistency updates as in Eq. (28). Acceleration of PET-DDS
was obtained through the use of subset-based data consistency updates, see
Table 4. For further experiments 28 subsets were used. We compare against a
BSREM computed MAP solution with RDP, and DIP with RDP, similar to Singh et
al. (2023). In Fig. 4 we show sensitivity plots for the FDG tracer and in Fig.
5 we plot the axial, coronal and sagittal slices centred on the lesion
location. Additionally, sensitivity curves for the amyloid tracer are given in
Fig. 6, and the associated reconstructions are available in Appendix B, see
Fig. 15.
Table 4: Computational time for 3D PET-DDS with different numbers of subsets.

| Number of subsets | 1 | 4 | 7 | 14 | 28 | 42 |
|---|---|---|---|---|---|---|
| Time for reconstruction (mins) | 47.8 | 13.6 | 8.6 | 5.1 | 3.4 | 2.8 |
The FDG tracer sensitivity plot in Fig. 4 shows that adding RDPz into PET-DDS
improves the SSIM and CRC metrics, while classical RDP provides the highest PSNR
values. Since PSNR is computed using a mean squared reconstruction error, the
resulting metric is biased toward blurrier reconstructions. This can be
observed in the qualitative images given in Fig. 5, where RDP gives high PSNR
values while the image insets show excessive blurring on the lesion. PET-DDS
without RDPz performs worse than with RDPz, since the score model only acts on
axial slices and, without RDPz, consistency in $z$-direction is only ensured
through data consistency. Qualitatively this can be observed in Fig. 5, where
coronal and sagittal slices display discontinuities in the $z$-direction
whereas the axial slice is smoother. DIP reconstructions give improvements in
SSIM and CRC as compared to classical RDP results, but fail to improve PSNR.
Results with OOD Amyloid tracer show milder improvements with PET-DDS, with
trends similar to those seen with the FDG tracer.
Figure 4: Results for 3D reconstruction using the FDG tracer for different penalty values. PET-DDS-RDPz $\beta=21.9$, and DIP+RDP $\beta=0.1$. Standard deviation is across reconstructions from different realisations of measurements.

Table 5: Results using the best hyperparameters for each method for 3D BrainWeb data with FDG and Amyloid tracers.

| | | PSNR, $\lambda$ | SSIM, $\lambda$ | CRC, $\lambda$ |
|---|---|---|---|---|
| FDG Tracer | RDP | $25.74$, $1.81$ | $0.911$, $2.77$ | $0.994$, $0.5$ |
| | DIP+RDP | $25.26$, $9,800$ | $0.917$, $10,800$ | $0.966$, $9,500$ |
| | PET-DDS | $24.83$, $398$ | $0.910$, $398$ | $1.01$, $158$ |
| | PET-DDS+RDPz | $25.70$, $158$ | $0.922$, $63.1$ | $0.996$, $158$ |
| Amyloid | RDP | $24.15$, $2.77$ | $0.898$, $1.81$ | $0.996$, $0.5$ |
| | DIP+RDP | $24.10$, $10,200$ | $0.894$, $10,800$ | $0.964$, $9,500$ |
| | PET-DDS | $23.08$, $1000$ | $0.890$, $398$ | $1.009$, $10$ |
| | PET-DDS+RDPz | $24.15$, $398$ | $0.906$, $158$ | $0.999$, $10$ |
Figure 5: 3D reconstruction for the different methods with the FDG tracer, and
metrics computed on the inset lesion.

Figure 6: Results for 3D reconstruction using the Amyloid tracer for different
penalty values. PET-DDS-RDPz $\beta=21.9$, and DIP+RDP $\beta=0.1$. Standard
deviation is across reconstructions from different realisations of measurements.
## 6 Conclusion
In this work we adapt SGMs for PET image reconstruction by incorporating
PET-specific constraints, e.g. Poisson noise and non-negativity, into several
popular sampling techniques. We further introduce a measurement-based
normalisation technique, to improve the generalisability to different
intensity values by stabilising the dynamic range encountered by the score
model. In future work, reflected SGMs, recently proposed by Lou and Ermon
(2023), could be leveraged to introduce non-negativity into the sampling
procedure in a more principled manner. This work provides a first
investigation of the generalisation capabilities by training the score model
on patient-realistic slices without lesions and testing on slices with
lesions. However, further work is needed to comprehensively evaluate the
generalisation performance on in-vivo data, and investigate the biases of
SGMs, which is vitally important for clinical adoption. The proposed SGM
sampling methods can produce multiple samples from the posterior
$p({\mathbf{x}}|{\mathbf{y}})$, which could be used for empirical uncertainty
estimation; this is left for future work. This work proposes guided SGM
reconstruction with an additional MR guidance image using CFG. The preliminary
results are promising and further validation is required. A clinically
pertinent direction is to investigate robustness to misregistration of the MR
image. Furthermore,
guidance could be extended to a joint PET-MRI reconstruction. Recently, Levac
et al. (2023) used similar ideas for a joint reconstruction of multi-contrast
MR images.
Acknowledgments
I.R.D. Singh and R. Barbano are supported by the EPSRC-funded UCL Centre for
Doctoral Training in Intelligent, Integrated Imaging in Healthcare (i4Health)
(EP/S021930/1) and the Department of Health’s NIHR-funded Biomedical Research
Centre at University College London Hospitals. A. Denker acknowledges the
support by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) - Project number 281474342/GRK2224/2. Ž. Kereta was supported by
the UK EPSRC grant EP/X010740/1. B. Jin and S. Arridge were supported by the
UK EPSRC EP/V026259/1. Software used in this project is partially maintained
by CCP SyneRBI (EPSRC EP/T026693/1). P. Maass acknowledges support by DFG-NSFC
project M-0187 of the Sino-German Center mobility programme. The authors thank
Georg Schramm for his help with ParallelProj.
Ethical Standards
The work follows appropriate ethical standards in conducting research and
writing the manuscript, following all applicable laws and regulations
regarding treatment of animals or human subjects.
Conflicts of Interest
We declare we don’t have conflicts of interest.
## References
* Adler and Öktem (2018) Jonas Adler and Ozan Öktem. Learned primal-dual reconstruction. _IEEE Transactions on Medical Imaging_ , 37(6):1322–1332, 2018.
* Ahn and Fessler (2003) Sangtae Ahn and Jeffrey A Fessler. A globally convergent image reconstruction for emission tomography using relaxed ordered subsets algorithms. _IEEE Transactions on Medical Imaging_ , 22(5):623–626, 2003.
* Anderson (1982) Brian D O Anderson. Reverse-time diffusion equation models. _Stochastic Processes and their Applications_ , 12(3):313–326, 1982.
* Aubert-Broche et al. (2006) Bérengère Aubert-Broche, M Griffin, G Bruce Pike, Alan C Evans, and D Louis Collins. Twenty new digital brain phantoms for creation of validation image data bases. _IEEE Transactions on Medical Imaging_ , 25(11):1410–1416, 2006.
* Bai et al. (2013) Bing Bai, Quanzheng Li, and Richard M Leahy. Magnetic resonance-guided positron emission tomography image reconstruction. In _Seminars in Nuclear Medicine_ , volume 43, pages 30–44. Elsevier, 2013.
* Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 35(8):1798–1828, 2013.
* Chung and Ye (2022) Hyungjin Chung and Jong Chul Ye. Score-based diffusion models for accelerated MRI. _Medical Image Analysis_ , 80:102479, 2022.
* Chung et al. (2023a) Hyungjin Chung, Jeongsol Kim, Michael T McCann, Marc L Klasky, and Jong Chul Ye. Diffusion posterior sampling for general noisy inverse problems. In _Proceedings of the Eleventh International Conference on Learning Representations_. ICLR, 2023a.
* Chung et al. (2023b) Hyungjin Chung, Suhyeon Lee, and Jong Chul Ye. Fast diffusion sampler for inverse problems by geometric decomposition. _CoRR_ , abs/2303.05754, 2023b.
* Chung et al. (2023c) Hyungjin Chung, Dohoon Ryu, Michael T McCann, Marc L Klasky, and Jong Chul Ye. Solving 3D inverse problems using pre-trained 2D diffusion models. In _Proceedings of the Thirty Seventh IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 22542–22551. CVPR, 2023c.
* Collins et al. (1998) D Louis Collins, Alex P Zijdenbos, Vasken Kollokian, John G Sled, Noor Jehan Kabani, Colin J Holmes, and Alan C Evans. Design and construction of a realistic digital brain phantom. _IEEE Transactions on Medical Imaging_ , 17(3):463–468, 1998.
* Dhariwal and Nichol (2021) Prafulla Dhariwal and Alexander Q Nichol. Diffusion models beat GANs on image synthesis. In _Proceedings of the Thirty Fourth Conference on Advances in Neural Information Processing Systems_ , pages 8780–8794. NeurIPS, 2021.
* Dimakis (2022) Alexandros G Dimakis. _Mathematical Aspects of Deep Learning_ , chapter Deep Generative Models and Inverse Problems, page 400–421. Cambridge University Press, Cambridge, UK, 2022.
* Efron (2011) Bradley Efron. Tweedie’s formula and selection bias. _Journal of the American Statistical Association_ , 106(496):1602–1614, 2011.
* Ehrhardt (2021) Matthias J Ehrhardt. _Handbook of Mathematical Models and Algorithms in Computer Vision and Imaging: Mathematical Imaging and Vision_ , chapter Multi-modality Imaging with Structure-Promoting Regularizers, pages 1–38. Springer International Publishing, Cham, Switzerland, 2021. ISBN 978-3-030-03009-4.
* Ehrhardt et al. (2016) Matthias J Ehrhardt, Pawel Markiewicz, Maria Liljeroth, Anna Barnes, Ville Kolehmainen, John S Duncan, Luis Pizarro, David Atkinson, Brian F Hutton, Sébastien Ourselin, Kris Thielemans, and Simon R Arridge. PET reconstruction with an anatomical MRI prior using parallel level sets. _IEEE Transactions on Medical Imaging_ , 35(9):2189–2199, 2016.
* Engl et al. (1996) Heinz W Engl, Martin Hanke, and Andreas Neubauer. _Regularization of inverse problems_. Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1996.
* Gong et al. (2019) Kuang Gong, Ciprian Catana, Jinyi Qi, and Quanzheng Li. PET image reconstruction using deep image prior. _IEEE Transactions on Medical Imaging_ , 38(7):1655–1665, 2019.
* Gong et al. (2022) Kuang Gong, Keith A. Johnson, Georges El Fakhri, Quanzheng Li, and Tinsu Pan. PET image denoising based on denoising diffusion probabilistic models. _CoRR_ , abs/2209.06167, 2022.
* Goodfellow et al. (2014) Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C Courville, and Yoshua Bengio. Generative adversarial nets. In _Proceedings of the Second Conference on Advances in Neural Information Processing Systems_ , pages 2672–2680. NeurIPS, 2014.
* Guazzo and Colarieti-Tosti (2021) Alessandro Guazzo and Massimiliano Colarieti-Tosti. Learned primal dual reconstruction for PET. _Journal of Imaging_ , 7(12), 2021.
* Herman and Meyer (1993) Gabor T Herman and Lorraine B Meyer. Algebraic reconstruction techniques can be made computationally efficient (positron emission tomography application). _IEEE Transactions on Medical Imaging_ , 12(3):600–609, 1993.
* Ho and Salimans (2022) Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. _CoRR_ , abs/2207.12598, 2022.
* Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _Proceedings of the Thirty Third Conference on Advances in Neural Information Processing Systems_. NeurIPS, 2020.
* Hudson and Larkin (1994) H Malcolm Hudson and Richard S Larkin. Accelerated image reconstruction using ordered subsets of projection data. _IEEE Transactions on Medical Imaging_ , 13(4):601–609, 1994.
* Ito and Jin (2015) Kazufumi Ito and Bangti Jin. _Inverse problems: Tikhonov theory and algorithms_. World Scientific, Hackensack, NJ, 2015. ISBN 978-981-4596-19-0.
* Jalal et al. (2021) Ajil Jalal, Marius Arvinte, Giannis Daras, Eric Price, Alexandros G. Dimakis, and Jonathan I. Tamir. Robust compressed sensing MRI with deep generative priors. In _Proceedings of the Thirty Fourth Conference on Advances in Neural Information Processing Systems_ , pages 14938–14954. NeurIPS, 2021.
* Jin et al. (2017) Kyong H Jin, Michael T McCann, Emmanuel Froustey, and Michael Unser. Deep convolutional neural network for inverse problems in imaging. _IEEE Transactions on Image Processing_ , 26(9):4509–4522, 2017.
* Kaplan and Zhu (2019) Sydney Kaplan and Yang-Ming Zhu. Full-dose PET image estimation from low-dose PET image using deep learning: a pilot study. _Journal of Digital Imaging_ , 32(5):773–778, 2019.
* Karlberg et al. (2016) Anna M Karlberg, Oddbjørn Sæther, Live Eikenes, and Pål Erik Goa. Quantitative comparison of PET performance—siemens biograph mCT and mMR. _European Journal of Nuclear Medicine and Molecular Imaging: Physics_ , 3(1), 2016.
* Kingma and Welling (2014) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In _Proceedings of the Second International Conference on Learning Representations_. ICLR, 2014.
* Kobler and Pock (2023) Erich Kobler and Thomas Pock. Learning gradually non-convex image priors using score matching. _CoRR_ , abs/2302.10502, 2023.
* Leuschner et al. (2021) Johannes Leuschner, Maximilian Schmidt, Daniel Otero Baguer, David Erzmann, and Mateus Baltazar. DIV$\alpha$l library, 2021. URL https://doi.org/10.5281/zenodo.4428220.
* Levac et al. (2023) Brett Levac, Ajil Jalal, Kannan Ramchandran, and Jonathan I Tamir. MRI reconstruction with side information using diffusion models. _arXiv_ , 2023. URL https://arxiv.org/abs/2303.14795.
* Li et al. (2022) Xiang Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. Diffusion-LM improves controllable text generation. In _Proceedings to the Thirty Fifth Conference on Advances in Neural Information Processing Systems_ , pages 4328–4343. NeurIPS, 2022.
* Lou and Ermon (2023) Aaron Lou and Stefano Ermon. Reflected diffusion models. In _Proceedings of the Fortieth International Conference on Machine Learning_ , volume 202, pages 22675–22701. PMLR, 2023.
* Mehranian and Reader (2021) Abolfazl Mehranian and Andrew J Reader. Model-based deep learning PET image reconstruction using forward–backward splitting expectation–maximization. _IEEE Transactions on Radiation and Plasma Medical Sciences_ , 5(1):54–64, 2021.
* Nuyts et al. (2002) Johan Nuyts, Dirk Bequé, Patrick Dupont, and Luc Mortelmans. A concave prior penalizing relative differences for maximum-a-posteriori reconstruction in emission tomography. _IEEE Transactions on Nuclear Science_ , 49(1):56–60, 2002.
* Ote et al. (2023) Kibo Ote, Fumio Hashimoto, Yuya Onishi, Takashi Isobe, and Yasuomi Ouchi. List-mode PET image reconstruction using deep image prior. _IEEE Transactions on Medical Imaging_ , 42(6):1822–1834, 2023.
* Ovtchinnikov et al. (2020) Evgueni Ovtchinnikov, Richard Brown, Christoph Kolbitsch, Edoardo Pasca, Casper O da Costa-Luis, Ashley Gillman, Benjamin A Thomas, Nikos Efthimiou, Johannes Mayer, Palak Wadhwa, Matthias J Ehrhardt, Sam Ellis, Jakob S Jørgensen, Julian C Matthews, Claudia Prieto, Andrew J Reader, Charalampos Tsoumpas, Martin J Turner, and Kris Thielemans. SIRF: synergistic image reconstruction framework. _Computer Physics Communications_ , 249:107087, 2020.
* Pain et al. (2022) Cameron D Pain, Gary F Egan, and Zhaolin Chen. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. _European Journal of Nuclear Medicine and Molecular Imaging_ , 49(9):3098–3118, 2022.
* Pierro and Yamagishi (2001) Alvaro R De Pierro and Michel E B Yamagishi. Fast EM-like methods for maximum ‘a posteriori’ estimates in emission tomography. _IEEE Transactions on Medical Imaging_ , 20(4):280–288, 2001.
* Pinaya et al. (2022) Walter H L Pinaya, Petru-Daniel Tudosiu, Jessica Dafflon, Pedro F Da Costa, Virginia Fernandez, Parashkev Nachev, Sébastien Ourselin, and M Jorge Cardoso. Brain imaging generation with latent diffusion models. In _Proceedings of the Twenty Fifth Conference on Medical Image Computing and Computer Assisted Intervention_ , pages 117–126. MICCAI, 2022.
* Qi and Leahy (2006) Jinyi Qi and Richard M Leahy. Iterative reconstruction techniques in emission computed tomography. _Physics in Medicine & Biology_, 51(15):R541, 2006.
* Ramzi et al. (2020) Zaccharie Ramzi, Benjamin Remy, François Lanusse, Jean-Luc Starck, and Philippe Ciuciu. Denoising score-matching for uncertainty quantification in inverse problems. _CoRR_ , abs/2011.08698, 2020.
* Ravula et al. (2023) Sriram Ravula, Brett Levac, Ajil Jalal, Jonathan I Tamir, and Alexandros G Dimakis. Optimizing sampling patterns for compressed sensing MRI with diffusion generative models. _CoRR_ , abs/2306.03284, 2023.
* Reader et al. (2021) Andrew J Reader, Guillaume Corda, Abolfazl Mehranian, Casper da Costa-Luis, Sam Ellis, and Julia A Schnabel. Deep learning for PET image reconstruction. _IEEE Transactions on Radiation and Plasma Medical Sciences_ , 5(1):1–25, 2021.
* Ronneberger et al. (2015) Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In _Proceedings of the Eighteenth Conference on Medical Image Computing and Computer-Assisted Intervention_ , pages 234–241. MICCAI, 2015.
* Rudin et al. (1992) Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. _Physica D: Nonlinear Phenomena_ , 60(1-4):259–268, 1992.
* Saharia et al. (2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. In _Proceedings of the Thirty Fifth Conference on Advances in Neural Information Processing Systems_ , pages 36479–36494. NeurIPS, 2022.
* Schramm (2021) Georg Schramm. Simulated brainweb PET/MR data sets for denoising and deblurring, 2021. URL https://zenodo.org/record/4897350.
* Schramm (2022) Georg Schramm. PARALLELPROJ – an open-source framework for fast calculation of projections in tomography, 2022. URL https://arxiv.org/abs/2212.12519.
* Shepp and Vardi (1982) Lawrence A Shepp and Yehuda Vardi. Maximum likelihood reconstruction for emission tomography. _IEEE Transactions on Medical Imaging_ , 1(2):113–122, 1982.
* Singh et al. (2023) Imraj R D Singh, Riccardo Barbano, Željko Kereta, Bangti Jin, Kris Thielemans, and Simon Arridge. 3D PET-DIP reconstruction with relative difference prior using a SIRF-based objective. In _Proceedings to the Seventeenth International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine_. Fully3D, 2023.
* Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _Proceedings of the Thirty Second International Conference on Machine Learning_ , pages 2256–2265, 2015.
* Somayajula et al. (2011) Sangeetha Somayajula, Christos Panagiotou, Anand Rangarajan, Quanzheng Li, Simon R Arridge, and Richard M Leahy. PET image reconstruction using information theoretic anatomical priors. _IEEE Transactions on Medical Imaging_ , 30(3):537–549, 2011.
* Song et al. (2021a) Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In _Proceedings of the Ninth International Conference on Learning Representations_. ICLR, 2021a.
* Song and Ermon (2019) Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In _Proceedings of the Thirty Second Conference on Advances in Neural Information Processing Systems_ , pages 11895–11907. NeurIPS, 2019.
* Song et al. (2021b) Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. In _Proceedings of the Thirty Fourth Conference on Advances in Neural Information Processing Systems_ , pages 1415–1428. NeurIPS, 2021b.
* Song et al. (2021c) Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In _Proceedings of the Ninth International Conference on Learning Representations_. ICLR, 2021c.
* Song et al. (2022) Yang Song, Liyue Shen, Lei Xing, and Stefano Ermon. Solving inverse problems in medical imaging with score-based generative models. In _Proceedings of the Tenth International Conference on Learning Representations_. ICLR, 2022.
* Särkkä and Solin (2019) Simo Särkkä and Arno Solin. _Applied Stochastic Differential Equations_. Institute of Mathematical Statistics Textbooks. Cambridge University Press, Cambridge, 2019. ISBN 978-131-6649-46-6.
* Thielemans et al. (2012) Kris Thielemans, Charalampos Tsoumpas, Sanida Mustafovic, Tobias Beisel, Pablo Aguiar, Nikolaos Dikaios, and Matthew W Jacobson. STIR: software for tomographic image reconstruction release 2. _Physics in Medicine & Biology_, 57(4):867, 2012.
* Tong et al. (2010) Shan Tong, Adam M Alessio, and Paul E Kinahan. Noise and signal properties in PSF-based fully 3D PET image reconstruction: an experimental evaluation. _Physics in Medicine & Biology_, 55(5):1453, 2010.
* Tran et al. (2020) Dustin Tran, Jasper Snoek, and Balaji Lakshminarayanan. Practical uncertainty estimation and out-of-distribution robustness in deep learning. NeurIPS Tutorial, Google Brain, 2020. URL https://neurips.cc/media/neurips-2020/Slides/16649.pdf.
* Ulyanov et al. (2018) Dmitry Ulyanov, Andrea Vedaldi, and Victor S Lempitsky. Deep image prior. In _Proceedings of the Thirty First IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 9446–9454. CVPR, 2018.
* Vincent (2011) Pascal Vincent. A connection between score matching and denoising autoencoders. _Neural Computation_ , 23(7):1661–1674, 2011.
* Wang et al. (2004) Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image quality assessment: from error visibility to structural similarity. _IEEE Transactions on Image Processing_ , 13(4):600–612, 2004.
* Xiao et al. (2022) Zhisheng Xiao, Karsten Kreis, and Arash Vahdat. Tackling the generative learning trilemma with denoising diffusion GANs. In _Proceedings of the Tenth International Conference on Learning Representations_. ICLR, 2022.
* Zhu et al. (2023) Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bihan Wen, Radu Timofte, and Luc Van Gool. Denoising diffusion models for plug-and-play image restoration. _CoRR_ , abs/2305.08995, 2023.
## A Appendix
### A.1 Training of Score-based Models
##### Forward SDE
In our experiments, we make use of the variance preserving SDE (Ho et al.,
2020)
$\displaystyle
d{\mathbf{x}_{t}}=-\frac{\beta(t)}{2}{\mathbf{x}_{t}}dt+\sqrt{\beta(t)}d\mathbf{w},$
(37)
where we employ $\beta(t)=\beta_{\text{min}}+t(\beta_{\text{max}}-\beta_{\text{min}})$ as a linear schedule with $\beta_{\text{min}}=0.1$, $\beta_{\text{max}}=10$, and a terminal time $T=1$. The coefficients were chosen such that the terminal
distribution approximates a Gaussian, i.e.
$p_{1}({\mathbf{x}})\approx\mathcal{N}(0,I)$. We also tested the Variance
Exploding (VE) SDE (Song et al., 2021c); it was found that VE-SDE was more
unstable than VP-SDE for PET image reconstruction. The transition kernel for
the variance preserving SDE is a Gaussian, i.e.
$p_{t}({\mathbf{x}_{t}}|{\mathbf{x}_{0}})=\mathcal{N}({\mathbf{x}_{t}};\gamma_{t}{\mathbf{x}_{0}},\nu_{t}^{2}I)$,
with coefficients
$\displaystyle\gamma_{t}=\exp{\left(-\frac{1}{2}\int_{0}^{t}\beta(s)ds\right)},\quad\nu_{t}^{2}=1-\exp{\left(-\int_{0}^{t}\beta(s)ds\right)}.$
Using this closed form expression for the transition kernel, the denoising
score matching loss can be rewritten as
$\displaystyle L_{\text{DSM}}(\theta)=\mathbb{E}_{t\sim
U[0,1]}\mathbb{E}_{{\mathbf{x}_{0}}\sim{\pi}}\mathbb{E}_{{\mathbf{z}}\sim\mathcal{N}(0,I)}\left[\omega_{t}\left\|s_{\theta}({\mathbf{x}_{t}},t)+\frac{{\mathbf{z}}}{\nu_{t}}\right\|_{2}^{2}\right],$
(38)
with ${\mathbf{x}}_{t}=\gamma_{t}{\mathbf{x}}_{0}+\nu_{t}{\mathbf{z}}$. The
weighting $\omega_{t}$ is chosen as $\omega_{t}=\nu_{t}^{2}$ to approximate
maximum likelihood training (Song et al., 2021b).
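For concreteness, the following is a minimal PyTorch-style sketch of this training objective; the helper names (`vp_coeffs`, `score_net`, `dsm_loss`) and the small clamp on $t$ are our own illustrative choices, not part of the released implementation.

```python
import torch

beta_min, beta_max = 0.1, 10.0  # linear schedule used above

def vp_coeffs(t):
    # closed-form integral of beta(s) on [0, t] for the linear schedule
    B = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2
    gamma = torch.exp(-0.5 * B)            # mean scaling gamma_t
    nu = torch.sqrt(1.0 - torch.exp(-B))   # standard deviation nu_t
    return gamma, nu

def dsm_loss(score_net, x0):
    """Denoising score matching loss of Eq. (38) with omega_t = nu_t^2."""
    # clamp t away from 0 to avoid dividing by nu_t = 0 (illustrative choice)
    t = torch.rand(x0.shape[0], device=x0.device).clamp(min=1e-5)
    gamma, nu = vp_coeffs(t)
    gamma, nu = gamma.view(-1, 1, 1, 1), nu.view(-1, 1, 1, 1)
    z = torch.randn_like(x0)
    xt = gamma * x0 + nu * z               # sample from the transition kernel
    residual = score_net(xt, t) + z / nu
    return (nu ** 2 * residual ** 2).flatten(1).sum(dim=1).mean()
```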
##### Model Architecture
We use the architecture proposed by Dhariwal and Nichol (2021) (available at https://github.com/openai/guided-diffusion). The architecture is based on the popular U-Net architecture (Ronneberger et al., 2015), consisting of an encoder implemented as a stack of residual blocks and downsampling operations and a decoder of residual blocks and upsampling operations. At the lowest resolution
($8\times 8$), additional global attention layers are used. To incorporate the
timestep into each residual block, the authors use adaptive group
normalisation (AdaGN) layers defined as
$\text{AdaGN}(h,e)=e_{s}\text{GroupNorm}(h)+e_{b}$, where $h$ are intermediate
features and $e=[e_{s},e_{b}]$ is the encoded time step. The specific
implementation and the choice of our hyperparameters can be found in our GitHub repository. For the MRI-guided model we apply the clean MRI image as an additional
channel to the input of the network.
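As an illustration, a minimal PyTorch sketch of such an AdaGN layer is given below; the linear projection of the embedding into the scale/shift pair and the module name are our own assumptions about how $e=[e_{s},e_{b}]$ is produced, not the exact released code.

```python
import torch
import torch.nn as nn

class AdaGN(nn.Module):
    """Adaptive group normalisation: AdaGN(h, e) = e_s * GroupNorm(h) + e_b."""
    def __init__(self, num_channels, emb_dim, num_groups=32):
        super().__init__()
        self.norm = nn.GroupNorm(num_groups, num_channels)
        # project the time embedding to a per-channel scale and shift (assumption)
        self.proj = nn.Linear(emb_dim, 2 * num_channels)

    def forward(self, h, emb):
        e_s, e_b = self.proj(emb).chunk(2, dim=1)
        e_s = e_s[:, :, None, None]
        e_b = e_b[:, :, None, None]
        return e_s * self.norm(h) + e_b
```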
### A.2 Experimental Details
The sampling methods presented in Section 3 use different penalty strengths in
order to scale the likelihood term for PET-Naive and PET-DPS or to set the
strength of the additional Tikhonov regularisation for PET-DDS. For Naive it
is recommended to choose $\lambda_{t}^{\text{naive}}$ s.t. the penalty is zero
at the start of sampling and increases as $t\to 0$ (Jalal et al., 2021). We
use $\lambda_{t}=\lambda(1-t)$ in all our experiments. The PET-DPS approach (Chung et al., 2023a) defines the sampling iteration as
$\begin{split}&\tilde{{\mathbf{x}}}_{t_{k-1}}={\mathbf{x}}_{t_{k}}+[{\mathbf{f}}({\mathbf{x}}_{t_{k}},t_{k})-g(t_{k})^{2}s_{\theta}({\mathbf{x}}_{t_{k}},t_{k})]\Delta
t+g(t_{k})\sqrt{|\Delta
t|}{\mathbf{z}}\quad{\mathbf{z}}\sim\mathcal{N}(0,I),\\\
&{\mathbf{x}}_{t_{k-1}}=\tilde{{\mathbf{x}}}_{t_{k-1}}-\lambda_{t_{k}}^{\text{DPS}}\nabla_{\mathbf{x}}L({\mathbf{y}}|{\hat{\mathbf{x}}_{0}}({\mathbf{x}}_{t_{k}})).\end{split}$
(39)
This update is equivalent to the classical Euler-Maruyama scheme when
$\lambda_{t_{k}}^{\text{DPS}}$ is chosen in such a way that it incorporates
the step size $\Delta t$ and the diffusion function $g(t_{k})^{2}$. We follow
Chung et al. (2023a) and define
$\lambda_{t}^{\text{DPS}}=\frac{\lambda}{D_{KL}(A{\hat{\mathbf{x}}_{0}}||{\mathbf{y}})}$.
For PET-DDS a constant penalty $\lambda^{\text{DDS}}$, without time
dependency, is used. Heuristically, the number of iterations used for the data-consistency projection was adjusted such that the results, with $\lambda^{\text{DDS}}=0$, overfit to noise. The penalty strength $\lambda^{\text{DDS}}$ was then increased to regularise the reconstruction more. In 2D the number of projection steps for PET-DDS was set to $4$ for noise level $2.5$ and $15$ for noise level $10$.
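For clarity, the sketch below shows one PET-DPS update in the spirit of Eq. (39); the Tweedie-type estimate used for $\hat{\mathbf{x}}_{0}$ and the helper names (`poisson_nll`, `vp_coeffs` reused from the sketch in Appendix A.1) are our assumptions rather than the exact implementation.

```python
import torch

def dps_step(x_t, t_k, dt, score_net, f, g, poisson_nll, lam):
    """One reverse step following Eq. (39); lam plays the role of lambda_t^DPS.

    x_t and t_k are tensors; dt is the (negative) reverse-time step.
    """
    x_t = x_t.detach().requires_grad_(True)
    s = score_net(x_t, t_k)
    # Euler-Maruyama step of the unconditional reverse SDE
    z = torch.randn_like(x_t)
    x_pred = x_t + (f(x_t, t_k) - g(t_k) ** 2 * s) * dt + g(t_k) * abs(dt) ** 0.5 * z
    # Tweedie-type estimate of x0 from x_t (assumed; gamma_t, nu_t as in Eq. (37))
    gamma_t, nu_t = vp_coeffs(t_k)
    x0_hat = (x_t + nu_t ** 2 * s) / gamma_t
    # likelihood gradient taken through x0_hat(x_t)
    grad = torch.autograd.grad(poisson_nll(x0_hat), x_t)[0]
    return x_pred - lam * grad
```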
### A.3 Baseline Methods
##### Classical Methods
Relative Difference Prior (RDP) is a common penalty for PET reconstruction
(Nuyts et al., 2002), defined by
$\displaystyle R({\mathbf{x}})=-\sum_{j=1}^{n}\sum_{k\in
N_{j}}\frac{(x_{j}-x_{k})^{2}}{x_{j}+x_{k}+\xi\lvert x_{j}-x_{k}\rvert},$
where $N_{j}$ is a pre-defined neighbourhood around $x_{j}$, typically
$3\times 3$ in 2D or $3\times 3\times 3$ in 3D. $R_{z}({\mathbf{x}})$ is a
variant of RDP whereby the neighbourhood is defined in the axial dimensions in
3D, i.e. $3\times 1\times 1$. A Neumann boundary condition was used for neighbourhoods that extended outside the domain. The tunable parameter $\xi>0$ controls the degree of edge-preservation ($\xi=1$, in line with clinical practice), with the gradient of the prior given by
$\displaystyle\frac{\partial R({\mathbf{x}})}{\partial x_{j}}=\sum_{k\in
N_{j}}-\frac{(r_{jk}-1)(\xi|r_{jk}-1|+r_{jk}+3)}{(r_{jk}+1+\xi|r_{jk}-1|)^{2}},\quad\text{with
}r_{jk}:=\frac{x_{j}}{x_{k}}.$ (40)
The penalisation is scale-invariant since the gradient is computed using the
ratio of voxel values $r_{jk}$. This partially overcomes the issue with the
wide dynamic range observed in emission tomography images. For the BSREM algorithm the convergence criterion was based on the change of voxel values within the reconstruction between iterates; specifically, convergence was declared when the change in mean voxel value across non-zero voxels was less than $0.01\%$. We set the relaxation coefficient to $\zeta=0.1$.
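As an illustration, a small NumPy sketch of the RDP gradient in Eq. (40) for a 2D image is given below; the periodic (np.roll) boundary handling is a simplification of the Neumann boundary described above, and the function name and small eps guard are our own additions.

```python
import numpy as np

def rdp_gradient(x, xi=1.0, eps=1e-12):
    """Gradient of the relative difference prior, Eq. (40), for a 2D image."""
    grad = np.zeros_like(x)
    shifts = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    for di, dj in shifts:
        xk = np.roll(x, shift=(di, dj), axis=(0, 1))  # neighbour values x_k
        r = x / (xk + eps)                            # ratio r_jk = x_j / x_k
        num = (r - 1.0) * (xi * np.abs(r - 1.0) + r + 3.0)
        den = (r + 1.0 + xi * np.abs(r - 1.0)) ** 2
        grad += -num / den
    return grad
```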
##### PET Image Denoising with SGM
In PET image denoising, the goal is to sample from the posterior
$p({\mathbf{x}}|\mathbf{x}_{\text{noisy}})$ of the true image ${\mathbf{x}}$
given an initial (low-count) reconstruction $\mathbf{x}_{\text{noisy}}$. This
is differs from PET reconstruction, where the goal is to sample from the
posterior ${p^{\text{post}}}({\mathbf{x}}|{\mathbf{y}})$ conditioned on the
measurements ${\mathbf{y}}$. In this framework the denoising likelihood is
given by Gaussian noise, i.e.
$\displaystyle
p(\mathbf{x}_{\text{noisy}}|{\mathbf{x}})=\mathcal{N}(\mathbf{x}_{\text{noisy}};{\mathbf{x}},\sigma_{d}^{2}I),$
(41)
with the noise level $\sigma_{d}$ to be specified. Using the Naive
approximation, we get the following reverse SDE for the PET denoising
likelihood
$\displaystyle
d{\mathbf{x}_{t}}=[{\mathbf{f}}({\mathbf{x}_{t}},t)-g(t)^{2}(s_{\theta}({\mathbf{x}_{t}},t)-1/\sigma_{d}^{2}(\mathbf{x}_{\text{noisy}}-{\mathbf{x}_{t}}))]dt+g(t)d\bar{\mathbf{w}}_{t}.$
(42)
In our implementation we estimate the initial reconstruction using OSEM with
34 subsets and iterations (i.e. 1 epoch). The same score model
$s_{\theta}({\mathbf{x}_{t}},t)$ is used for both PET denoising and
reconstruction. The noise level $\sigma_{d}$ is chosen based on a held-out
evaluation dataset.
##### Supervised Learning
We are using two popular supervised learning techniques: post-processing and
learned iterative methods. For the post-processing method we used a variant of
the FBPConvNet (Jin et al., 2017), adapted to PET reconstruction. The input to the FBPConvNet was changed to an OSEM reconstruction with 34 subsets and iterations; this variation is denoted PET-UNet. For the learned iterative method, we adopt
Learned Primal Dual (LPD) (Adler and Öktem, 2018), referred to as PET-LPD. For
PET-LPD we use the same OSEM reconstruction as initialisation for the primal
channels and include the affine forward model with sample specific attenuation
maps. Note that these sample-specific factors were not included in the previous implementation of learned iterative methods for PET image reconstruction
(Guazzo and Colarieti-Tosti, 2021). Three primal and dual unrolled iterations
were used. Both of these networks were implemented using Div$\alpha$L
(Leuschner et al., 2021) with only minimal changes to the architecture; PET-
UNet was a UNet with $1\,783\,249$ parameters, and PET-LPD used a block of
convolutional filters for each primal and dual network with a total of
$132\,300$ parameters. Both networks were trained using the dataset in Section
4.1 without lesions and noise levels of $5$, $10$, and $50$. The dataset was
split into training and evaluation, and training was terminated when over-
fitting was observed. Additionally, data-corrected mean normalisation was
included to promote generalisability between noise levels. The code for these
supervised learning models is publicly available at https://github.com/Imraj-Singh/pet_supervised_normalisation.
##### Deep Image Prior
The Deep Image Prior (DIP) (Ulyanov et al., 2018) is a popular framework for
unsupervised image reconstruction, relying only on a single measurement. A
common problem of the DIP is its tendency to overfit to noise. Therefore some
regularisation has to be used. We included RDP in the objective function to alleviate the need for early stopping and prevent over-fitting to noise. The
architecture used was a three-scale U-Net (Ronneberger et al., 2015) with
$1\,606\,899$ parameters, with a rectified linear unit on the output to ensure
non-negativity. This architecture is minimally changed from previous
applications of DIP to PET (Gong et al., 2019; Ote et al., 2023; Singh et al.,
2023). DIP results are computed on reconstructions along the optimisation
trajectory, every 100 iterations from 6,600 iterations to 11,600.
### A.4 Evaluation metrics
In addition to peak-signal-to-noise ratio (PSNR) and structural similarity
index measure (SSIM) (Wang et al., 2004), we compute two local quality scores
over a Region of Interest (ROI). For reconstructions with lesions a Contrast
Recovery Coefficient (CRC) was computed to quantify detectability of these
local features. This was computed between lesion $L$ and background $B$ ROIs, which have $N_{L}$ and $N_{B}$ elements, respectively (we use $L$ to denote the lesion ROI in this section only; in the main manuscript $L$ is the likelihood). Additionally, there are $R$ realisations of the measured data.
Given an ROI $Z$, we include subscript indices for element and realisation
$Z_{r,k}$, where $r$ is the realisation index, and $k$ is the element index.
An average over the elements of the ROI is denoted as
$\bar{Z}_{r}=\frac{1}{N_{Z}}\sum_{k=1}^{N_{Z}}Z_{r,k}$. The CRC is defined by
$\mathrm{CRC}:=\sum_{r=1}^{R}\left(\frac{\bar{L}_{r}}{\bar{B}_{r}}-1\right)/\left(\frac{L_{\mathrm{t}}}{B_{\mathrm{t}}}-1\right),$
(43)
where the subscript $\rm t$ denotes the ground truth ROIs. We study the noise over realisations of the measured data using the normalised STD (also referred to as ensemble noise, see Tong et al. (2010), which is reported to give a true estimate of the noise in the image). We define an average over realisations of the ROI as $\bar{Z}_{k}=\frac{1}{R}\sum_{r=1}^{R}Z_{r,k}$. The STD is computed on background ROIs and is given by:
$\mathrm{STD}:=\frac{1}{N_{B}}\sum_{k=1}^{N_{B}}\sqrt{\frac{1}{R-1}\sum^{R}_{r=1}\frac{(B_{r,k}-\bar{B}_{k})^{2}}{\bar{B}_{k}}}.$
(44)
For reconstructions with lesions the background ROI was used, and without
lesions a background of the whole emission volume (defined on reference
images) was used. In 2D $R=10$ noise realisations of acquisition data were
used, and $R=5$ in 3D. To evaluate the consistency of our reconstructions to
the true measurements, we compute the Kullback-Leibler divergence (KLDIV)
$\displaystyle\mathrm{KLDIV}:=\sum_{j=1}^{m}\bar{y}_{j}\log\left(\frac{\bar{y}_{j}}{y_{j}}\right)-\bar{y}_{j}+y_{j},$
(45)
between measurements ${\mathbf{y}}$ and estimated measurements
$\mathbf{\bar{y}}={\mathbf{A}}\hat{{\mathbf{x}}}+\mathbf{\bar{b}}$ where
$\hat{{\mathbf{x}}}$ denotes the reconstruction.
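For reference, a minimal NumPy sketch of these three metrics is given below; the function names, array shapes and the assumption of strictly positive entries in the KLDIV inputs are our own conventions.

```python
import numpy as np

def crc(lesion_rois, background_rois, lesion_true, background_true):
    """Contrast recovery coefficient, Eq. (43); ROIs indexed as [realisation, voxel]."""
    ratio_true = lesion_true.mean() / background_true.mean()
    ratios = lesion_rois.mean(axis=1) / background_rois.mean(axis=1)
    return np.sum((ratios - 1.0) / (ratio_true - 1.0))

def ensemble_std(background_rois):
    """Normalised STD over realisations, Eq. (44); input shape [R, N_B]."""
    mean_k = background_rois.mean(axis=0)   # average over realisations per voxel
    var_k = ((background_rois - mean_k) ** 2).sum(axis=0) / (background_rois.shape[0] - 1)
    return np.mean(np.sqrt(var_k / mean_k))

def kldiv(y, y_bar):
    """Kullback-Leibler divergence between measured and estimated data, Eq. (45)."""
    return np.sum(y_bar * np.log(y_bar / y) - y_bar + y)
```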
## B Additional Results
### B.1 2D Reconstruction
We show additional sensitivity plots for 2D reconstruction. For noise level
$10$ these results are presented in Fig. 7 and Fig. 8 without and with
lesions, respectively. The results are similar to the settings for noise level
$2.5$, as we see a clear trade-off between reconstruction quality in terms of
PSNR/SSIM and visibility of lesions in terms of CRC in Fig. 8. Here, a higher
regularisation leads to better PSNR/SSIM scores and a lower regularisation to
a better recovery of lesions. A high regularisation, i.e. a high influence of
the score model, may lead to a worse reconstruction of the lesions, as the
score model was trained on images without lesions.
Figure 7: Results for BrainWeb without lesions with noise level 10 for different penalty parameters. The standard deviation is computed over reconstructions of different noise realisations ${\mathbf{y}}$.
Figure 8: Results for BrainWeb with lesions with noise level 10 for different penalty parameters. The standard deviation is computed over reconstructions of different noise realisations ${\mathbf{y}}$.
### B.2 MR guidance
We show additional results for the MR guided model. Sensitivity plots without
and with lesions are presented in Fig. 9 and Fig. 10. These results support
the findings of the paper, as the MR guided models achieve better
reconstruction quality w.r.t. PSNR and SSIM. However, the CRC is similar to
the unguided model. As the lesions were not visible in the MR image, no additional information about the lesions is introduced through guidance. We
show two more reconstruction examples without lesions in Fig. 11 and examples
with lesions in Fig. 12.
Figure 9: Results for 2D reconstruction, guided vs. unguided, without lesions for noise level 2.5.
Figure 10: Results for 2D reconstruction, guided vs. unguided, with lesions for noise level 2.5.
Figure 11: Comparison of single central slice reconstructions with PET-DDS, MR guided vs. unguided, at noise level 2.5 without lesions.
Figure 12: Comparison of PET-DDS, MR guided vs. unguided, at noise level 2.5 with lesions.
### B.3 3D results RDPz sweeps
We show the sensitivity plots for different penalty values of the additional
RDP regulariser in the $z$-direction for PET-DDS in Fig. 13 and Fig. 14 for the two different tracers. In addition, we show axial, coronal and sagittal slices of the reconstruction with the Amyloid tracer in Fig. 15.
Figure 13: Results for 3D reconstruction using the FDG tracer for different penalty values.
Figure 14: Results for 3D reconstruction using the Amyloid tracer for different penalty values.
Figure 15: 3D reconstruction for the different methods with the Amyloid tracer, and metrics computed on the inset lesion.
# Strain effects on the electronic properties of a graphene wormhole
J. E. G. Silva, Universidade Federal do Ceará, Departamento de Física, 60455-760, Fortaleza, CE, Brazil
Ö. Yeşiltaş, Department of Physics, Faculty of Science, Gazi University, 06500 Ankara, Turkey
J. Furtado, Universidade Federal do Cariri, Centro de Ciências e Tecnologia, 63048-080, Juazeiro do Norte, CE, Brazil; Department of Physics, Faculty of Science, Gazi University, 06500 Ankara, Turkey
A. A. Araújo Filho, Departamento de Física Teórica and IFIC, Centro Mixto Universidad de Valencia–CSIC, Universidad de Valencia, Burjassot-46100, Valencia, Spain; Departamento de Física, Universidade Federal da Paraíba, Caixa Postal 5008, 58051-970, João Pessoa, Paraíba, Brazil
(August 28, 2024)
###### Abstract
In this work, we explore the strain and curvature effects on the electronic
properties of a curved graphene structure, called the graphene wormhole. The
electron dynamics is described by a massless Dirac fermion with a position–dependent Fermi velocity. In addition, the strain produces a pseudo–magnetic vector potential alongside the geometric coupling. For an isotropic strain tensor, the decoupled components of the spinor field exhibit a supersymmetric (SUSY) potential, depending only on the centrifugal term and the external magnetic field. In the absence of an external magnetic field, the strain yields an exponentially damped amplitude, whereas the curvature leads to a power–law damping of the wave function. The spin–curvature coupling breaks the chiral symmetry between the upper and the lower spinor components, which leads to an enhancement of the wave function in either the upper or the lower region of the wormhole, depending on the spin number. By adding a uniform magnetic field, the effective potential exhibits an asymptotic quadratic profile and a spin–curvature barrier near the throat. As a result, the bound states (Landau levels) are confined around the wormhole throat, showing an asymmetric and spin–dependent profile.
## I Introduction
Two dimensional materials, such as graphene geim , silicene silicene and
phosphorene phosphorene , have been the subject of intense investigations due
to their outstanding properties. Beyond the remarkable mechanical katsnelson
and electronic properties Novoselov2004 ; electronic , graphene can also be
seen as a table–top laboratory for relativistic physics. Indeed, since the
conduction electrons are effectively described as massless Dirac fermions,
relativistic effects such as zitterbewegung zitter , Klein tunneling klein
and atomic collapse collapse have been observed. Since the graphene layer can
assume a curved shape, the curvature effects might lead to new interesting
relativistic effects, such as the Hawking–Unruh effect hawking ; hawking2 .
The study of a Dirac fermion confined to a two dimensional surface was initially addressed in Ref. BJ and further developments were provided afterwards diracsurface ; diracsurface2 ; diracsphere . For a relativistic fermion intrinsically living on a curved surface, a physical realization was found for conducting electrons in two dimensional carbon–based structures, such as fullerenes diracintrinsic ; diracintrinsic1 , carbon nanotubes saito and
graphitic cones cone . In graphene, the massless Dirac equation in curved
spaces was studied in a variety of shapes, such as the localized gaussian bump
contijo , the cone furtado , a helical graphene ribbon atanasov ; watanabe , a
corrugated plane corrugated , a Möbius ring mobius1 ; mobius2 , a torus ozlem
; Yesiltas:2021crm among others. The surface curvature produces a
spin–curvature coupling which leads to a geometric Aharonov–Bohm–like effect
geometricphase , a modified spin–orbit coupling geometricsoc ; geometricsoc2
and a geometric spin–Hall effect geometricmonopole .
Besides the curvature, the deformations of the graphene layer modify the
effective Dirac fermion dynamics as well, producing the so–called
pseudo–magnetic fields ribbons . This vector potential stems from the strain tensor defined by the deformations of the graphene layer, and the
pseudo–magnetic term comes from the coupling to the Dirac fermion; it is
similar to the minimal coupling to a magnetic field gaugestrain . The strain
applied to graphene can mimic a strong magnetic field strainstrong and lead
to important applications strainappli . From the strain tensor, an effective
Hamiltonian for the Dirac fermion was derived using the tight–binding approach
vozmediano , containing an anisotropic and position–dependent Fermi velocity
and a pseudo–magnetic strain vector in the continuum limit. Since the strain
may contain both in–plane and out–of–plane components, the effective Dirac
Hamiltonian was extended in order to encompass all the stretching and bending
effects vozmediano2 . In addition, the quantum field interaction of the
effective Dirac fermion and the strain was discussed in sinner ;
gaugegrapheneqft .
An interesting curved graphene structure is the so–called graphene wormhole,
where two flat graphene layers are connected by a carbon nanotube wormhole .
Since its shape (cylinder) has a non–vanishing mean curvature, the
discontinuity of the curvature at the graphene–nanotube junction leads to
modifications of the energy spectrum and the possibility of localized states
near to it graphenejunction . Although the Dirac fermions on the upper and lower layers are free (non–normalizable) states, the curvature of the nanotube allows the existence of normalizable zero–modes confined at the radius of the wormhole picak ; wormhole3 . In order to avoid the discontinuity at the junction, a smooth graphene wormhole was proposed, considering a continuous and asymptotically flat catenoid surface dandoloff ; euclides ; deSouza:2022ioq . The
negative curvature of the catenoid leads to a repulsive spin–curvature
coupling near the wormhole throat, allowing only the zero–mode as a localized
state around the throat wormhole4 ; ozlem2 ; wormhole5 .
In this work, we consider the effects of the curvature and the strain on the
effective Dirac fermion living in a catenoid–shaped graphene wormhole. We
extend the effective Hamiltonian obtained in vozmediano to a curved surface,
introducing the usual spin–connection coupling. We explore the different
effects driven by the curvature, isotropic strain and an external magnetic
field. Since the surface is asymptotically flat, the lattice deformation which
yields the curvature and strain should be concentrated around the throat. The
strain leads to a vector potential along the surface meridian, whereas the
spin–curvature coupling points in the angular direction. Moreover, the strain
vector potential provides an exponential damping of the wave function, whereas
the curvature leads to a power–law decay. By adopting the so-called
supersymmetric quantum mechanics-like approach ozlem2 , the spinor components
exhibit a chiral symmetry breaking. Indeed, the upper component has its
probability density enhanced near the wormhole throat in the upper layer,
whereas the lower component is enhanced in the lower layer. The ground state
zero mode also exhibits this chiral behaviour, since it is exponentially
damped either in the upper or in the lower layer depending on the total
angular momentum. By applying a uniform magnetic field, the Landau levels are
also modified by the curved geometry and strain, leading to asymmetric
localized states near the throat.
This work is organized as follows. In section (II), we provide a brief review of the catenoid-shaped graphene wormhole geometry. In section (III) we present the effective Hamiltonian containing the strain, curvature and external magnetic field interactions. Section (IV) is devoted to the symmetries of the effective Hamiltonian, and in section (V) we employ the SUSY-QM approach in order to investigate the effects of each interaction. Finally, additional discussion and perspectives are outlined in section (VI).
## II Graphene wormhole geometry
In this section, we define the graphene wormhole surface and describe some of
its most important properties. We consider a smooth surface connecting an
upper to a lower layer (flat planes). For this purpose, we choose a catenoid-shaped surface. In coordinates, the catenoid surface can be described by euclides ; ozlem2
$\vec{r}(u,\phi)=\sqrt{R^{2}+u^{2}}\left(\cos\phi\hat{i}+\sin\phi\hat{j}\right)+R\sinh^{-1}\left(\frac{u}{R}\right)\hat{k},$
(1)
where $R$ is the throat radius, $-\infty<u<\infty$ describes the meridian
coordinate and $\phi\in[0,2\pi)$ is the parallel (angular) coordinate, as
shown in Fig.1.
Figure 1: Graphene wormhole geometry. The meridian coordinate $u$ connects the
lower to the upper asymptotically flat regions.
The tangent vectors are given by
$\displaystyle\vec{e}_{1}$ $\displaystyle=$
$\displaystyle\frac{\partial\vec{r}}{\partial
u}=\frac{1}{\sqrt{u^{2}+R^{2}}}(u\hat{r}+R\hat{k})$ (2)
$\displaystyle\vec{e}_{2}$ $\displaystyle=$
$\displaystyle\frac{\partial\vec{r}}{\partial\phi}=\sqrt{u^{2}+R^{2}}\hat{e}_{2},$
(3)
where $\hat{r}=\cos\phi\hat{i}+\sin\phi\hat{j}$ is the radial unit vector and $\hat{e}_{2}=-\sin\phi\hat{i}+\cos\phi\hat{j}$ is the unit vector along the $\phi$ direction. From the tangent vectors $(\vec{e}_{1},\vec{e}_{2})$, we
can define the surface induced metric $g_{ij}=\vec{e}_{i}\cdot\vec{e}_{j}$. In
$(u,\phi)$ coordinates, the surface metric takes the form
$g_{ij}=diag(1,(R^{2}+u^{2}))$. Thus, the $2+1$ spacetime interval has the
form euclides
$\mathrm{d}s^{2}=\mathrm{d}t^{2}-\mathrm{d}u^{2}-(R^{2}+u^{2})\mathrm{d}\phi^{2},$
(4)
where we adopt the $(+,-,-)$ spacetime metric signature convention. Note that
the line element in Eq.(4) is invariant under time–translations and rotations
with respect to the $\mathrm{z}$ axis (axisymmetric).
Let us now obtain the main geometric quantities for the electron dynamics, namely, the dreibeins, connections and curvature. The dreibeins are related
to the spacetime metric by the relation
$g_{\mu\nu}=e^{a}_{\mu}e^{b}_{\nu}\eta_{ab}.$ (5)
Thus, for the catenoid, the only non–vanishing components of the dreibeins
are
$\displaystyle e^{a}_{\mu}$ $\displaystyle=$ $\displaystyle
diag(1,1,\sqrt{R^{2}+u^{2}})$ (6)
Recall that they modify the Fermi velocity by turning it into a position-dependent quantity. Moreover, from the dreibeins, we can define the moving frame $\theta^{a}=e^{a}_{\mu}\mathrm{d}x^{\mu}$, which, on the catenoid, takes the form $\theta^{0}=\mathrm{d}t$, $\theta^{1}=\mathrm{d}u$,
$\theta^{2}=\sqrt{R^{2}+u^{2}}\mathrm{d}\phi$. From the torsion–free
condition, i.e.,
$T^{a}=\mathrm{d}\theta^{a}+\omega^{a}_{b}\wedge\theta^{b}=0$, the only
non–vanishing one–form connection coefficient
$\omega^{a}_{b}=\Gamma_{cb}^{a}\theta^{c}$ is given by
$\displaystyle\omega^{2}_{1}$ $\displaystyle=$
$\displaystyle\frac{u}{R^{2}+u^{2}}\theta^{2}.$ (7)
The curvature 2–form $R^{a}_{b}=d\omega^{a}_{b}+\omega^{a}_{c}\wedge\omega^{c}_{b}$ has only one non–vanishing component, namely
$R^{2}_{1}=-\frac{R^{2}}{(R^{2}+u^{2})^{2}}\theta^{2}\wedge\theta^{1}$.
Accordingly, the Gaussian curvature $K=\delta^{ab}R_{ab}$ has the form
$\displaystyle K=-\frac{R^{2}}{(R^{2}+u^{2})^{2}}.$ (8)
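As a cross–check (not spelled out in the text), for a surface metric of the form $\mathrm{d}u^{2}+f(u)^{2}\mathrm{d}\phi^{2}$ with $f(u)=\sqrt{R^{2}+u^{2}}$, the Gaussian curvature reduces to $K=-f^{\prime\prime}/f$, which reproduces Eq.(8): $\displaystyle K=-\frac{f^{\prime\prime}(u)}{f(u)}=-\frac{R^{2}/(R^{2}+u^{2})^{3/2}}{\sqrt{R^{2}+u^{2}}}=-\frac{R^{2}}{(R^{2}+u^{2})^{2}}.$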
Here, it is important to point out that the catenoid has a negative Gaussian
curvature concentrated around the throat and it vanishes in the regions far
from it. In Fig. 2, we display the behavior of the graphene wormhole curvature $K$. Note that the surface is asymptotically flat. Thus, the effects of the
curved geometry and strain on the electron should be concentrated around the
origin. Furthermore, as $R\rightarrow 0$, the curvature tends to a $\delta(r)$
function, as it is reported in the literature for a discontinuous graphene
wormhole wormhole ; picak ; wormhole3 .
Figure 2: The Gaussian curvature $K(u)$ of the graphene wormhole. The
curvature is smooth and concentrated around the throat of the wormhole. For
$R\rightarrow 0$, the curvature tends to a delta–like function.
## III Strain Hamiltonian
After a brief review of the main geometric properties of the graphene
wormhole, let us now discuss the effective Hamiltonian containing the strain
and curvature effects on the electron. We follow closely Ref. (vozmediano ), where the effective Hamiltonian for a Dirac fermion constrained to a flat surface, under the influence of strain and an external magnetic field, was derived in its most general form.
We propose a generalization of the effective Hamiltonian of Ref.(vozmediano )
in the continuum limit for curved surfaces in the form
$\mathcal{H}_{D}=-i\hbar\left(v_{i}^{j}\sigma^{i}\partial_{j}+ie\sigma^{i}A_{i}+v_{0}\sigma^{i}\Gamma_{i}+v_{0}\sigma^{i}\Omega_{i}\right),$
(9)
where $v_{i}^{j}$ is a position–dependent Fermi velocity tensor defined in
terms of the strain tensor $u_{ij}$ as vozmediano
$v_{i}^{j}=v_{0}\left[\delta_{i}^{j}-\frac{\beta}{4}(2u_{i}^{j}+\delta_{i}^{j}u_{k}^{k})\right],$
(10)
and $v_{0}=\frac{3t_{0}a}{2}$ is the undeformed Fermi velocity, $t_{0}$ is the equilibrium
hopping parameter, $a$ is the lattice constant and $\beta=|\partial\ln
t/\partial\ln a|$ gaugestrain . The definition of the strain tensor will be
given in the next subsection. Note that, when $\beta=0$, the usual constant
Fermi velocity is recovered. Besides, the tensor nature of $v_{ij}$ means that
the Fermi velocity depends on the direction on the surface.
In addition, the strain on the surface also induces a new vector field, called
the strain vector $\Gamma_{i}$, as a divergence of the velocity tensor
vozmediano . Thus, for a curved surface, it is defined as
$\Gamma_{i}=\frac{1}{2v_{0}}\nabla_{j}v^{j}_{i},$ (11)
where the definition of the strain vector in Eq.(11) is independent of the
coordinate choice.
The curved Pauli matrices are defined as watanabe ; mobius1 ; mobius2
$\sigma^{i}=e^{i}_{a}\sigma^{a},$ (12)
where $\sigma^{a}$ are the usual flat Pauli matrices, and $e^{i}_{a}$ are the
zweibein matrices which satisfy
$g_{ij}=e^{a}_{i}e^{b}_{j}\delta_{ab}.$ (13)
The definition of the curved sigma matrices employed in Eq.(12) ensures that
these matrices do not depend on the particular choice of coordinates of the
surface (surface covariance). It is worthwhile to mention that the definitions
of the velocity tensor in Eq.(10) and the curved Pauli matrices in Eq.(12) lead to a position- and direction-dependent Dirac kinetic term
$H_{1}=v_{i}^{j}\sigma^{i}\partial_{j}$.
Figure 3: Stress function $\sigma(u)$ for
Figure 4: Fermi velocity $v(u)$ for $\beta=0.1$
In the effective Hamiltonian exhibited in Eq.(9), $A_{i}$ is the external
magnetic potential and $\Omega_{i}$ is the spinor connection furtado ; mobius2
; ozlem2
$\Omega_{i}=\frac{1}{4}\omega_{i}^{ab}\gamma_{a}\gamma_{b}.$ (14)
The curved $\gamma^{\mu}$ matrices are related to the flat $\gamma^{a}$ ones
by the dreibeins $e^{a}_{\mu}$, i.e., $\gamma^{\mu}=e^{\mu}_{a}\gamma^{a}$, with $g_{\mu\nu}=e_{\mu}^{a}e_{\nu}^{b}\eta_{ab}$ as in Eq.(5). In $(2+1)$ dimensions, we can adopt the following representation for the flat Dirac
$\gamma^{a}$ matrices $\gamma_{0}=\sigma_{3}$, $\gamma_{1}=-i\sigma_{2}$ and
$\gamma_{2}=-i\sigma_{1}$ mobius2 ; ozlem ; ozlem2 . Thus, the curved Dirac
matrices on the wormhole graphene surface have the form
$\displaystyle\gamma^{t}$ $\displaystyle=$ $\displaystyle
e^{t}_{0}\gamma^{0}=\gamma_{0},$ (15) $\displaystyle\gamma^{u}$
$\displaystyle=$ $\displaystyle\gamma^{1},$ (16) $\displaystyle\gamma^{\phi}$
$\displaystyle=$ $\displaystyle
e_{2}^{\phi}\gamma^{2}=\frac{1}{\sqrt{R^{2}+u^{2}}}\gamma^{2}.$ (17)
From the connection 1–form in Eq.(7), only $\omega^{2}_{1}$ is non zero. Thus,
the only non–vanishing component of the spinor connection $\Omega_{\mu}$ is
$\Omega_{\varphi}=\frac{i}{2}\frac{u}{\sqrt{R^{2}+u^{2}}}\sigma_{3}.$ (18)
Note that, since $-\infty<u<\infty$, the geometric spinor potential in Eq.(18)
is an odd function under parity. This parity violation does not occur for the
Dirac fermion in a flat plane diracplanar or the graphitic cone furtado .
Moreover, for $R=0$ or for $R\neq 0$ and $u\rightarrow\pm\infty$, the
geometric connection is constant, as found for conic surfaces furtado . It is
worth mentioning that, due to the resemblance of the spinor and gauge field
coupling, the spinor potential is sometimes interpreted as a kind of pseudo–magnetic potential stemming from the curved geometry contijo .
Furthermore, the strain produces two different potentials on the Dirac
electron. The first potential, stemming from the strain in Eq.(11), is a
vector potential, whereas the second one in Eq.(18) is a spinorial potential
depending on the surface connection. In the next subsections, we choose a
particular configuration for the strain and the external magnetic field and
explore the differences between these three interactions.
### III.1 Strain tensor
Now, let us investigate how the strain tensor $u_{ij}$ modifies the catenoid
surface. In order to do it, we assume that the tensions over the surface are
static and isotropic. Therefore, we consider the non–uniform isotropic stress
tensor in the form
$\sigma_{j}^{i}=\sigma(u)\delta_{j}^{i}.$ (19)
Since the catenoid bridge is an asymptotically flat surface, we are interested
in stress tensor which vanishes away from the throat and it is finite at the
origin, i.e.,
$\displaystyle\lim_{u\rightarrow 0}\sigma(u)$ $\displaystyle=$
$\displaystyle\sigma_{0},$ (20)
$\displaystyle\lim_{u\rightarrow\pm\infty}\sigma(u)$ $\displaystyle=$
$\displaystyle 0,$ (21)
where $\sigma_{0}$ is a constant, which accounts for the maximum value of the
surface tension. These conditions guarantee a stress tensor concentrated
around the catenoid throat. Indeed, since the surface is asymptotically flat, the
curvature and strain effects should vanish as $u\rightarrow\infty$.
Figure 5: Geometric connection $\Omega_{\phi}$ for $R=0.1$ and $R=1$.
Figure 6: Strain vector $\Gamma_{u}$ for $R=0.1$ and $R=1$.
We assume that the mechanical properties of the surface are in the linear
elastic regime. Thereby, the stress and the strain tensors are related by
vozmediano
$\sigma_{ij}=\lambda\theta g_{ij}+2\mu u_{ij},$ (22)
where $\lambda$ and $\mu$ are the Lamé coefficients and $\theta=u^{i}_{i}$ is
the trace of the strain tensor. From the ansatz employed in Eq.(19), we obtain
the strain tensor as
$u^{i}_{j}=\frac{1}{2(\lambda+\mu)}\sigma(u)\delta^{i}_{j}.$ (23)
The form of the strain tensor in Eq.(23) shows that the deformations undergone
by the surface are isotropic and concentrated around the catenoid throat. The
position–dependent Fermi velocity tensor $v_{ij}$ can be written as
$v_{i}^{j}=v(u)\delta_{i}^{j},$ (24)
where the position–dependent Fermi velocity function $v(u)$ is given by
$v(u)=v_{0}\left[1-\frac{\beta}{2}\frac{\sigma(u)}{\lambda+\mu}\right].$ (25)
Accordingly, the components of the pseudo–vector potential $\Gamma_{i}$ are
$\displaystyle\Gamma_{u}$ $\displaystyle=$
$\displaystyle-\frac{\beta}{4}\frac{\sigma^{\prime}(u)}{\lambda+\mu},$ (26)
$\displaystyle\Gamma_{\phi}$ $\displaystyle=$ $\displaystyle 0.$ (27)
In this work, we assume the isotropic stress function $\sigma(u)$ as
$\sigma(u)=\sigma_{0}\frac{R^{2}}{R^{2}+u^{2}}.$ (28)
We see that $\sigma(u)$ becomes even more concentrated when $R\rightarrow 0$, as shown in Fig. (3). In Fig. (4), we show the plot of the Fermi velocity function $v(u)$ for different values of $\beta$. Remarkably, it decreases in the regions close to the wormhole throat (high curvature). Such a feature was already found in a smooth rippled curved
graphene layer contijo .
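For completeness, inserting the ansatz of Eq.(28) into Eq.(26) gives the explicit profile of the strain vector (a direct computation, not written out in the text): $\displaystyle\Gamma_{u}=-\frac{\beta}{4(\lambda+\mu)}\sigma^{\prime}(u)=\frac{\beta\sigma_{0}}{2(\lambda+\mu)}\frac{R^{2}u}{(R^{2}+u^{2})^{2}},$ which is odd in $u$, vanishes both at the throat and far from it, and is peaked at $|u|=R/\sqrt{3}$.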
In addition, the behavior of the geometric spinor connection
$\Omega_{\varphi}$ and the strain vector $\Gamma_{u}$ are shown in Fig.(5) and Fig.(6), respectively. Note that both terms are parity-odd functions with respect to the $u$ coordinate. In this sense, both potentials yield a barrier
between the lower $u<0$ and upper $u>0$ layers. However, despite this
similarity, the strain potential given by Eq.(11) and the spinor potential
given by Eq.(18) have different natures, forms and components. Therefore,
these potentials produce different effects on the Dirac electron, as we shall
see in the next section.
### III.2 Magnetic vector potential
Now, let us see how the external magnetic field $\vec{B}$ modifies the
Hamiltonian. For a uniform magnetic field $\vec{B}=B\hat{k}$, the vector
potential $\vec{A}$ is given by $\vec{A}=\frac{1}{2}\vec{B}\times\vec{r}$.
Using the coordinates system in Eq.(1), we obtain
$\vec{A}=\frac{B}{2}\sqrt{R^{2}+u^{2}}\hat{e}_{2}.$ (29)
For $R=0$, the expression for the vector potential in Eq.(29) reduces to
$\vec{A}=\pm\frac{B}{2}u\hat{e}_{2}$, as found in Ref.(diracmagnetic1 ). For
$u\gg R$, it behaves as $A^{2}\approx\frac{B}{2}u$, as on a conical surface
diracmagnetic1 and on the flat plane diracplanar . In addition, at $u=0$, it
has a finite value $A^{2}=\frac{BR}{2}$. Since
$\vec{e}_{\phi}=\sqrt{R^{2}+u^{2}}\hat{e}_{2}$, the component of $\vec{A}$ in
coordinates is given by $A^{\phi}=\frac{B}{2}$. Accordingly, for the covariant component $A_{\phi}=g_{\phi\phi}A^{\phi}$, we have
$A_{\phi}=\frac{B}{2}(R^{2}+u^{2}).$ (30)
A similar expression for the vector potential was found for the discontinuous
graphene wormhole wormhole3 , except for the presence of the throat radius
$R$. The electromagnetic potential displayed in Fig.(10) is parity even, in contrast with the geometric spin connection shown in Fig.(7). Also, the strain vector in Fig.(8) turns out to have a parity-odd configuration.
Figure 7: Spin connection $\Omega_{\phi}$ for $R=0.1$ and $R=1$. This
curvature potential exhibits a parity–odd angular configuration.
Figure 8: Strain vector $\Gamma_{u}$ for $R=0.1$ and $R=1$. The strain–driven
potential has a component only along the meridian.
## IV Effective Hamiltonian
Once we have discussed all those interactions acting upon the electron, i.e.,
the curved geometry, strain and external magnetic field, let us now obtain the
respective Hamiltonian. By collecting all the interactions, the effective
Hamiltonian becomes
$\displaystyle\mathcal{H}_{D}$ $\displaystyle=$ $\displaystyle-i\hbar
v_{0}\left\\{\sigma_{1}\left[v(u)\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}\right]\right.$
(31)
$\displaystyle\left.-\frac{i\sigma_{2}}{\sqrt{R^{2}+u^{2}}}[v(u)\partial_{\phi}+eA_{\phi}]\right\\},$
where $\bar{\beta}=\frac{\beta}{4(\lambda+\mu)}$. Notice that the strain vector and the spinor connection modify the Dirac equation, leading to a canonical momentum along $u$ of the form
$\hat{P}_{u}=-i\hbar
v_{0}\left[v(u)\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}\right].$
(32)
Additionally, along the angular $\phi$ direction, the canonically conjugate
momentum is modified by
$\hat{P}_{\phi}=-i\hbar
v_{0}\frac{1}{\sqrt{R^{2}+u^{2}}}[v(u)\partial_{\phi}].$ (33)
In this manner, the effective Hamiltonian can be rewritten in the familiar
form $\mathcal{H}_{D}=v_{0}\vec{\sigma}\cdot(\vec{P}-e\vec{A}),$ where
$\vec{P}=(P_{u},P_{\phi})$ are the canonically conjugate momenta and
$\vec{\sigma}=(\sigma_{1},\sigma_{2})$ are the flat Pauli matrices.
The expression in Eq.(31) depends only on the coordinate $u$. The symmetry of
the Hamiltonian with respect to the angular $\phi$ variable is the result of
the surface axial symmetry. Thus, the wave function should also inherit this
symmetry. In fact, consider the angular momentum operator with respect to the
$\mathrm{z}$ axis, $\hat{L}_{z}=-i\hbar\frac{\partial}{\partial\phi}$ such
that, $\hat{L}_{z}\psi=\hbar l\psi$, where $l$ is the orbital angular momentum
with respect to the $\mathrm{z}$ axis. For a non–relativistic and spinless
electron on the graphene wormhole, an axisymmetric wave function can be written as $\psi(u,\phi)=e^{il\phi}\psi(u)$ euclides . However, as is well known for the relativistic electron, $\hat{L}_{z}$ no longer commutes with
$\mathcal{H}_{D}$, although the total angular momentum operator along the $z$
direction $\hat{J}_{z}=\hat{L}_{z}+\hat{S}_{z}$ does diracplanar . Since
$\hat{S}_{3}=\frac{\hbar}{4}[\gamma^{1},\gamma^{2}]$, the spin operator
with respect to the $\mathrm{z}$ axis is given by
$S_{z}=-i\frac{\hbar}{2}\sigma_{3}.$ (34)
Here, the total angular momentum operator has the form
$\hat{J}_{z}=-i\hbar\left(\frac{\partial}{\partial\phi}+\frac{1}{2}\sigma_{3}\right)$,
where $\hat{J}_{z}\psi=m\hbar\psi$ and $m=l\pm 1/2$ diracplanar2 . Therefore, considering the axial symmetry of the spinorial wave function, so that
wormhole4 ; diracplanar2
$\Psi(u,\phi)=e^{im\phi}\psi(u),$ (35)
the Dirac equation $\mathcal{H}_{D}\Psi=E\Psi$ reduces to
$\tilde{\mathcal{H}}_{D}\psi=E\psi$, in which the effective Hamiltonian
simplifies to
$\displaystyle\tilde{\mathcal{H}}_{D}$ $\displaystyle=$ $\displaystyle-i\hbar
v_{0}\left(\begin{array}[]{cc}0&v(u)\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}-\frac{[v(u)m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}\\\
v(u)\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}+\frac{[v(u)m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}&0\end{array}\right).$
(38)
Since the effective Hamiltonian in Eq.(38) has parity-violating terms stemming from the geometric connection and the strain vector, we cannot assume that the spinor $\psi=\left(\begin{array}[]{c}\psi_{1}\\ \psi_{2}\end{array}\right)$ is parity invariant. This parity violation of the
Dirac spinor on the graphene wormhole is in contrast with the relativistic
electron on a flat plane diracplanar ; diracplanar2 .
In Eq.(38), the position–dependent velocity function $v(u)$ multiplies the
partial derivative $\partial_{u}$. By writing
$v\frac{d}{du}=\frac{d}{d\zeta}$, for $v(u)$ given by Eq.(25), we have
$\zeta=u-\bar{\beta}R\,\tanh^{-1}\left(\frac{\sqrt{u}u}{\sqrt{\bar{\beta}-2}R}\right)$.
Unfortunately, this relation cannot be inverted analytically so as to rewrite Eq. (38) in terms of the variable $\zeta$. Nevertheless, a graphical analysis reveals only a small difference between $\zeta$ and $u$, as shown in Fig. 11. Hence, for the sake of simplicity, we adopt $\zeta\approx u$ from now on.
Figure 9: Vector potential angular component for a uniform magnetic field.
Figure 10: Magnetic vector potential $\vec{A}$ on the surface.
As a result, the Hamiltonian simplifies to
$\displaystyle\tilde{\mathcal{H}}_{D}$ $\displaystyle=$ $\displaystyle-i\hbar
v_{0}\left(\begin{array}[]{cc}0&\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}-\frac{[m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}\\\
\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}+\frac{[m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}&0\end{array}\right).$
(41)
The effective Hamiltonian in Eq.(41) shows clearly the distinctive interaction
terms stemming from the strain $\bar{\beta}\sigma^{\prime}$, the geometric connection $\frac{u}{2(R^{2}+u^{2})}$, the centrifugal term
$\frac{m}{\sqrt{R^{2}+u^{2}}}$ and the electromagnetic coupling
$\frac{eA_{\phi}}{\sqrt{R^{2}+u^{2}}}$. In the next section, we explore the
effects of each interaction.
## V Supersymmetric analysis
In this section, we employ a supersymmetric quantum mechanical approach ozlem
; ozlem2 to explore the features of the effective Hamiltonian in Eq.(41) and
find the solutions of the Dirac equation.
The effective Hamiltonian in Eq.(41) leads to
$\tilde{\mathcal{H}}_{D}\psi=\epsilon\psi,$ (42)
where the spinor $\psi=\left(\begin{array}[]{c}\psi_{1}\\ \psi_{2}\end{array}\right)$. The effective Dirac equation (42) can be written
as
$\displaystyle\left(\begin{array}[]{cc}0&i\mathcal{O}_{2}\\\
i\mathcal{O}_{1}&0\end{array}\right)\left(\begin{array}[]{cc}\psi_{1}\\\
\psi_{2},\end{array}\right)=\epsilon\left(\begin{array}[]{cc}\psi_{1}\\\
\psi_{2},\end{array}\right),$ (49)
where the first–order operators $\mathcal{O}_{1,2}$ are defined as
$\mathcal{O}_{1,2}=\frac{\mathrm{d}}{\mathrm{d}u}+\bar{\beta}\sigma^{\prime}+\frac{u}{2(R^{2}+u^{2})}\pm\frac{(m+eA_{\phi})}{\sqrt{R^{2}+u^{2}}}.$
(50)
By performing the change on the wave function of the form
$\psi_{1,2}(u)=(R^{2}+u^{2})^{-1/4}e^{-\bar{\beta}\sigma(u)}\chi_{1,2}(u),$
(51)
the Dirac equation yields decoupled equations for $\chi_{1}$ and
$\chi_{2}$ in a Klein–Gordon like form
$\displaystyle-\chi_{1,2}^{\prime\prime}+U_{eff1,2}^{2}\chi_{1,2}=\epsilon^{2}\chi_{1,2},$
(52)
where $\epsilon=\frac{E}{\hbar v_{0}}$ is the electron momentum and the
effective squared potential is given by
$U_{eff1,2}^{2}=\left(\frac{(m+eA_{\phi})}{\sqrt{R^{2}+u^{2}}}\right)^{2}\mp\left(\frac{(m+eA_{\phi})}{\sqrt{R^{2}+u^{2}}}\right)^{\prime}.$
(53)
The Klein–Gordon–like expression present in Eq. (52) has the structure of a
so–called supersymmetric quantum mechanics, whose superpotential $W$ is given
by
$\displaystyle W=\frac{m+eA_{\phi}}{\sqrt{R^{2}+u^{2}}}.$ (54)
Note that Eq.(54) is given by the spin–curvature potential
$\frac{m}{\sqrt{R^{2}+u^{2}}}$ and the magnetic coupling term
$\frac{eA_{\phi}}{\sqrt{R^{2}+u^{2}}}$ present in the Dirac equation.
Figure 11: Change of coordinate $\zeta=\zeta(u)$ for $\beta=1$.
Figure 12: Superpotential $W(u)$ for $R=1$, $B=0$.
The SUSY–like form of the effective squared potential
$U_{eff1,2}^{2}=W^{2}\mp W^{\prime}$ (55)
enables us to rewrite the decoupled system of second–order Klein–Gordon like
Eq.(52) as
$\displaystyle a^{\dagger}a\chi_{1}=\epsilon^{2}\chi_{1}$ (56) $\displaystyle
aa^{\dagger}\chi_{2}=\epsilon^{2}\chi_{2},$ (57)
where $a=\frac{d}{du}+W$ and $a^{\dagger}=-\frac{d}{du}+W$ are first–order
differential operators ozlem2 . The so-called superpartner squared-
Hamiltonians $H_{1}^{2}=a^{\dagger}a$ and $H_{2}^{2}=aa^{\dagger}$ satisfy
$H_{2}^{2}=(H_{1}^{2})^{\dagger}$ ozlem2 .
A remarkable feature of a quantum mechanical SUSY–like eq.(56) is the
existence of a nonvanishing ground state for $\epsilon=0$, known as the zero
mode wormhole ; picak . Indeed, for $\epsilon=0$, the conditions $a\chi_{1}=0$ and $a^{\dagger}\chi_{2}=0$ yield
$\psi^{0}_{1,2}(u)=(R^{2}+u^{2})^{-1/4}e^{-\bar{\beta}\sigma(u)}e^{\mp\int{W(u^{\prime})du^{\prime}}}.$
(58)
Since the superpotential $W$ is related to the spin–curvature coupling and the
external magnetic field, the zero mode is related to the flux of curvature and
magnetic field near the throat wormhole .
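For reference, with $W$ given by Eq.(54) and $A_{\phi}$ by Eq.(30), the exponent appearing in Eq.(58) can be evaluated in closed form (a direct integration, not written out in the text): $\displaystyle\int{W(u^{\prime})du^{\prime}}=m\sinh^{-1}\left(\frac{u}{R}\right)+\frac{eB}{4}\left[u\sqrt{R^{2}+u^{2}}+R^{2}\sinh^{-1}\left(\frac{u}{R}\right)\right],$ which reduces to the purely geometric factor $m\sinh^{-1}(u/R)$ when $B=0$, as used below in Eq.(68).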
The factor $(R^{2}+u^{2})^{-1/4}$ stems from the normalization condition on the surface. Indeed, the normalization condition takes the form
$1=\int_{-b}^{b}\int_{0}^{2\pi}{||\psi||^{2}(R^{2}+u^{2})^{1/2}\mathrm{d}u\mathrm{d}\phi},$
(59)
where $-b<u<b$. Although both the curved geometry and the strain reduce the wave function amplitude for $u\rightarrow\pm\infty$, the curvature damps the amplitude by a power–law factor whereas the strain damps it by an exponential factor.
Another noteworthy feature of the Dirac equation on a curved surface is related
to the geometric phase furtado . Indeed, the holonomy operator
$U(\phi)=e^{\int_{0}^{\phi}{\Omega_{i}dx^{i}}}$, where $\Omega_{i}$ is the
spin connection in Eq.(18) leads to
$U(\phi)=e^{-\frac{i}{2}\frac{u}{\sqrt{R^{2}+u^{2}}}\sigma_{3}\phi}.$ (60)
This geometric phase reflects the change of the wave function when the fermion
performs a $2\pi$ rotation for a given $u$ mobius2 . It is a kind of geometric
Aharonov–Bohm effect driven by the curvature instead of the magnetic field
furtado . By applying the geometric phase operator $U(\phi)$, i.e.,
$\psi^{\prime}=U(\phi)\psi$, the Dirac equation
$\tilde{\mathcal{H}}_{D}\psi=E\psi$ simplifies to
$\hat{\mathcal{H}}_{D}\psi^{\prime}=E\psi^{\prime}$, where the simplified
Hamiltonian $\hat{\mathcal{H}}_{D}$ is given by
$\displaystyle\hat{\mathcal{H}}_{D}$ $\displaystyle=$ $\displaystyle-i\hbar
v_{0}\left(\begin{array}[]{cc}0&\partial_{u}+\bar{\beta}\sigma^{\prime}-\frac{[m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}\\\
\partial_{u}+\bar{\beta}\sigma^{\prime}+\frac{[m+eA_{\phi}]}{\sqrt{R^{2}+u^{2}}}&0\end{array}\right).$
(63)
Thus, the curvature effects of the curved surface can be encoded into the
geometric phase in the operator $U(\phi)$ given by Eq.(60).
It is important to highlight that the effects of the strain, curved geometry
and the external magnetic field are rather distinct. The strain and curved geometry provide the geometric phase, whereas the centrifugal and the magnetic terms yield the SUSY–like structure. In the following, we explore the effects of each term on the electronic states.
Figure 13: Effective squared potential for $m=1/2$ (on the left panel),
$m=-1/2$ (on the right panel) for $B=0$.
### V.1 No external magnetic field
In the absence of a magnetic field, i.e., for $B=0$, the superpotential has a
simple form
$W(u)=\frac{m}{\sqrt{R^{2}+u^{2}}},$ (64)
whose behavior is plotted in Fig.12. The symmetric centrifugal barrier of the
superpotential yields an asymmetric potential $U^{2}_{eff}$, as shown
in Fig. 13. Note the dependence of the squared potential on the total angular
momentum $m$. This potential is similar to the one encountered in the context of a
Dirac electron constrained to a helicoid watanabe .
Figure 14: Density of states for $\epsilon=1$ and $m=1/2$.
Figure 15: Density of states for $\epsilon=1$ and $m=-1/2$.
For $m\neq 0$, the Klein–Gordon like eq.(52) reads
$-\chi_{1,2}^{\prime\prime}+\left(\frac{m^{2}}{R^{2}+u^{2}}\pm
m\frac{u}{(R^{2}+u^{2})^{3/2}}\right)\chi_{1,2}=\epsilon^{2}\chi_{1,2}.$ (65)
It is worthwhile to mention that the effective potential
$U^{2}_{eff1,2}=\frac{m^{2}}{R^{2}+u^{2}}\pm m\frac{u}{(R^{2}+u^{2})^{3/2}},$
(66)
couples the angular momentum quantum number $m$ and the curved geometry terms.
The second term in the potential, $\pm m\frac{u}{(R^{2}+u^{2})^{3/2}}$, breaks the
symmetry $m\rightarrow-m$. Indeed, we can obtain the $\chi_{2}$ spinor
component from $\chi_{1}$ by performing the change $m\rightarrow-m$.
For $u\gg R$, the potential tends to $U^{2}_{eff1,2}\approx\frac{m(m\pm 1)}{u^{2}}$.
Accordingly, Eq.(65) has the asymptotic solution
$\chi_{1,2}\approx\sqrt{u}\left(c_{1}J_{\frac{2m\pm 1}{2}}(\epsilon
u)+c_{2}Y_{\frac{2m\pm 1}{2}}(\epsilon u)\right),$ (67)
where $J_{n}(x)$ and $Y_{n}(x)$ are the Bessel functions of the first and
second kind, respectively. In this region, the solution in Eq.(67)
resembles the one found for the Dirac equation in other graphene wormhole
geometries outside the throat wormhole4 .
Note the presence of the total angular momentum number $m=l+\frac{1}{2}$ in
the order of the Bessel functions. For $u\rightarrow\pm\infty$, the potential
vanishes and thus, the $\chi_{1}$ function tends to $A\sin(\epsilon u)$, as
for the $m=0$ states. Therefore, the interaction between angular momentum and
curvature is concentrated around the graphene wormhole throat.
For $R=1$ and $\epsilon=1$, we numerically solved Eq.(65) and plotted the
resulting squared wave function in Fig. 14 for $m=1/2$ and in Fig. 15 for
$m=-1/2$. By changing $m\rightarrow-m$, the density of states is shifted from
the upper sheet $(u>0)$ into the lower sheet $(u<0)$. Moreover, for
$\beta=1$ (thick line) the amplitude is smaller than for $\beta=0.5$ (dashed
line).
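A minimal sketch of such a computation is shown below (illustrative only, not the code used for Figs. 14–15; the choices $\hbar=v_{0}=1$, the integration window, and the plane-wave-like initial condition at $u=-L$ are assumptions, and the curves are normalised to their maximum for plotting):

```python
# Sketch only: integrate Eq. (65) for chi_1 at fixed energy epsilon and plot |chi_1|^2.
# Assumptions: R = 1, epsilon = 1, initial data chi(-L) = 0, chi'(-L) = 1 at u = -L.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

R, eps, L = 1.0, 1.0, 20.0

def U2(u, m):
    # effective squared potential of Eq. (66), upper (chi_1) branch
    return m**2 / (R**2 + u**2) + m * u / (R**2 + u**2)**1.5

def rhs(u, y, m):
    chi, dchi = y
    return [dchi, (U2(u, m) - eps**2) * chi]

u_eval = np.linspace(-L, L, 2000)
for m in (0.5, -0.5):                        # m = +/- 1/2 as in Figs. 14-15
    sol = solve_ivp(rhs, (-L, L), [0.0, 1.0], t_eval=u_eval, args=(m,),
                    rtol=1e-8, atol=1e-10)
    dens = sol.y[0]**2
    plt.plot(u_eval, dens / dens.max(), label=f"m = {m}")

plt.xlabel("u"); plt.ylabel(r"$|\chi_1|^2$ (arb. units)"); plt.legend(); plt.show()
```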
The zero mode, i.e., the state with $\epsilon=0$, is given by
$\psi^{0}_{1,2}(u)=(R^{2}+u^{2})^{-1/4}e^{-\bar{\beta}\sigma(u)}e^{\mp
m\sinh^{-1}(u/R)},$ (68)
which makes evident the breaking of the chiral symmetry $m\rightarrow-m$ and the parity-
odd behaviour of this ground state. Indeed, as shown in Figs. 16 and 17, the states
for $m>0$ are suppressed in the upper layer whereas the $m<0$ states are
suppressed in the lower layer. A similar chiral separation was also found for
the massless Dirac field in a helicoidal graphene strip atanasov .
Figure 16: Zero mode for $m=\pm 1/2$.
Figure 17: Zero mode $m=\pm 3/2$.
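For completeness, a short numerical sketch of the zero mode in Eq.(68) follows (our own illustration; the strain profile $\sigma(u)$ is not reproduced here and is set to zero, i.e. $\bar{\beta}\sigma=0$, which isolates the curvature-driven chiral separation):

```python
# Sketch only: evaluate the zero mode of Eq. (68) for the lower sign, exp(-m asinh(u/R)),
# with the strain factor switched off (assumption: bar_beta * sigma(u) = 0).
import numpy as np
import matplotlib.pyplot as plt

R = 1.0
u = np.linspace(-10, 10, 1000)

def zero_mode(m):
    return (R**2 + u**2)**(-0.25) * np.exp(-m * np.arcsinh(u / R))

for m in (0.5, 1.5, -0.5, -1.5):
    psi2 = zero_mode(m)**2
    plt.plot(u, psi2 / psi2.max(), label=f"m = {m}")

plt.xlabel("u"); plt.ylabel(r"$|\psi^0|^2$ (normalised)"); plt.legend(); plt.show()
# States with m > 0 are suppressed for u > 0 (upper layer), states with m < 0 for u < 0.
```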
### V.2 Constant magnetic field
Now let us consider the additional effects from the uniform magnetic field.
The respective Klein-Gordon like equation becomes
$-\chi_{1,2}^{\prime\prime}+\left(\frac{(m+(Bu^{2})/2)^{2}}{R^{2}+u^{2}}\mp\frac{u(-2m+B(2R^{2}+u^{2}))}{2(R^{2}+u^{2})^{3/2}}\right)\chi_{1,2}=\epsilon^{2}\chi_{1,2}.$
(69)
In the eq.(69), the effective potential
$U^{2}_{eff1,2}=\frac{(m+(Bu^{2})/2)^{2}}{R^{2}+u^{2}}\mp\frac{u(-2m+B(2R^{2}+u^{2}))}{2(R^{2}+u^{2})^{3/2}}$
(70)
includes effects from the curved geometry, total angular momentum (spin and
orbital), and the coupling to the magnetic field. Note that, for $u\gg R$
(outside the throat), the effective potential takes the form
$U^{2}_{eff}\approx\frac{m(m+1)}{u^{2}}+B\left(m-\frac{1}{2}\right)+\frac{B^{2}}{4}u^{2},$
(71)
which is the effective potential for a $(2+1)$-dimensional massless Dirac fermion under a
uniform magnetic field in a flat plane, written in cylindrical coordinates
diracplanar ; diracplanar2 . This is an expected result, since the graphene
wormhole surface is asymptotically flat. Moreover, the effective potential in
Eq.(70) also exhibits the $m\rightarrow-m$ asymmetry.
Due to the complexity of eq.(69), we employ numerical methods to obtain the
first excited states and their respective energy spectrum (Landau levels). In
Figs. 18 and 19 we plot the effective potential $U^{2}_{eff}$ for
$m=-3/2$ and $m=3/2$, respectively. Note that for $u\gg R$, the effective potential
diverges as $\frac{B^{2}}{4}u^{2}$, whereas for $u\ll R$ the potential is
dominated by the geometric and angular momentum terms (a finite barrier for
$m\neq 0$).
We plotted the first Landau levels for $eB=1$, $R=1$, $\beta=1$ and $m=0$ (s
state) in Fig. 20. Note that the first excited state (red line) is located
on the upper layer, whereas the second (blue) and the third (green) have two
asymmetric peaks around the origin. For $eB=1$, $R=1$, $\beta=1$ and $m=2$,
shown in Fig. 21, the first excited state already has two asymmetric peaks
displaced from the origin. Nonetheless, it is worthwhile to mention that the
probability density does not vanish at the origin. Note that for $u\gg R$,
the wave function exhibits the usual exponential decay due to the external
magnetic field diracplanar . Therefore, the external magnetic field allows us
to confine the electron around the wormhole throat. However, due to the curved
geometry and the strain, the electron is not symmetrically confined around the
wormhole.
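A minimal sketch of such a computation is given below (our own illustration, not the code behind the figures; the hard-wall box, the grid, and the parameter choices are assumptions, and the charge $e$ is absorbed into $B$):

```python
# Sketch only: lowest Landau levels of Eq. (69) from a finite-difference
# discretisation of -chi'' + U^2_eff chi = eps^2 chi on [-L, L] with hard walls.
# Assumptions: e absorbed into B; R = 1, B = 0.1, m = 1/2.
import numpy as np

R, B, m = 1.0, 0.1, 0.5
L, N = 30.0, 1500
u = np.linspace(-L, L, N)
h = u[1] - u[0]

def U2(u, m, sign):
    # effective squared potential of Eq. (70); sign = -1 for chi_1, +1 for chi_2
    return ((m + 0.5 * B * u**2)**2 / (R**2 + u**2)
            + sign * u * (-2*m + B*(2*R**2 + u**2)) / (2*(R**2 + u**2)**1.5))

H = (np.diag(2.0 / h**2 + U2(u, m, -1))
     + np.diag(-np.ones(N - 1) / h**2, 1)
     + np.diag(-np.ones(N - 1) / h**2, -1))

eps2, vecs = np.linalg.eigh(H)
print("lowest eps^2:", eps2[:4])        # squared Landau-level energies
densities = vecs[:, :3]**2              # |chi|^2 of the three lowest states
```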
Figure 18: Effective squared potential for $m=-3/2$ for different values of
$B$.
Figure 19: Effective squared potential for $m=3/2$. Red line(B=0.1), blue line
(B=0.15) and the green line (B=0.2).
Figure 20: Density of states for $B=0.1$ and $m=1/2$. The ground state (red
line) has a peak in the upper layer whereas the first excited state (blue
line) is more localized in the lower layer. The second excited state exhibits
three less distinct peaks.
Figure 21: Density of states for $B=0.1$ and $m=3/2$.The ground state (red
line) has a peak in the upper layer whereas the first excited state (blue
line) is more localized in the lower layer. The second excited state exhibits
three less distinct peaks.
Finally, the ground state zero mode under the action of the magnetic field is
modified by
$\displaystyle\psi^{0}_{1,2}(u)$ $\displaystyle=$
$\displaystyle(R^{2}+u^{2})^{-1/4}e^{-\bar{\beta}\sigma(u)}e^{\mp
m\sinh^{-1}(u/R)}$ (72) $\displaystyle\times
e^{\mp\frac{eB}{4}(u\sqrt{R^{2}+u^{2}}+R^{2}\tanh^{-1}(u/\sqrt{R^{2}+u^{2}}))}.$
In Eq.(72), the magnetic field introduces another exponential factor whose
argument is a parity-odd function. Accordingly, the magnetic field enhances
the chiral separation of the electronic modes. Note, however, that by shifting the
sign of the magnetic field for a given angular momentum number $m$, the
magnetic field might instead reduce the chiral separation between the upper and lower
layers.
## VI Final remarks and perspectives
We investigated the curvature, strain and magnetic field effects upon a
massless relativistic electron on a graphene wormhole surface. The graphene
wormhole was described by a catenoid surface which smoothly connects two
asymptotic flat graphene planes (layers). In this sense, the geometry proposed
was a smooth generalization of the graphene wormhole shown in the Ref.wormhole
. The effective Dirac Hamiltonian containing strain–dependent terms was
obtained in Ref.vozmediano and extended in Ref.vozmediano2 .
Due to the axial symmetry, we considered an isotropic strain tensor localized
near the wormhole throat, similar to the behavior of the Gaussian curvature.
Indeed, since the curved geometry of the throat was obtained due to
deformation of the lattice structure, it was expected that both curvature and
strain were localized around this region. It turned out that the
pseudo–magnetic potential due to the strain $\Gamma_{u}$ had only components
along the meridian coordinate $u$, whereas the spin connection $\Omega_{\phi}$
pointed along the parallel direction $\phi$. In this manner, despite having
the same origin (the lattice deformation), these two interactions had distinct
effects on the electron. Moreover, by applying the external magnetic field, a
true magnetic potential $\vec{A}$ also acted on the electron. Nevertheless,
although $\vec{A}$ only had the angular component $A_{\phi}$, the spin
connection $\Omega_{\phi}$ was parity–odd, whereas $A_{\phi}$ was parity–even
under the transformation $u\rightarrow-u$.
The strain and the spin–curvature coupling broke the parity invariance of the effective
Hamiltonian. Moreover, the spin–connection term led to a chiral asymmetry under
$m\rightarrow-m$. By employing the supersymmetric quantum mechanical (QMSUSY)
approach, we found that the strain vector yielded an exponential
suppression of the wave function, whereas the spin connection led to a
power–law decay. In the absence of a magnetic field, the superpotential was given by
the spin–curvature term, which increased the amplitude of the probability
density in the upper layer for $m>0$ and in the lower layer for $m<0$. A
similar chiral behavior was found in graphene nanoribbons in a helical shape
atanasov . Since the graphene structure is asymptotically flat, for a
vanishing strain vector the wave function behaves as a free wave in a flat
plane watanabe . The inclusion of a uniform magnetic field confined the
electronic states near the wormhole throat. These Landau levels were modified
by the spin–curvature and strain interactions, which turned the probability
distribution asymmetric.
In addition, this work revealed that, although the couplings of strain,
curvature and magnetic field in the effective Dirac Hamiltonian were similar,
they had rather different effects. As a result, we pointed out a couple
of perspectives for further investigation. A noteworthy extension of the
present work would be to consider the effects of a dynamical strain
(phonons) or of corrugations on the graphene wormhole. Furthermore, the chiral
breaking due to the spin–curvature coupling suggests an interesting spin–Hall
current to be analysed. Moreover, the analysis of the geometric Aharonov–Bohm-like
phase due to the curvature concentrated around the wormhole throat seems
a worthy perspective as well.
## Acknowledgments
J.E.G.Silva thanks the Conselho Nacional de Desenvolvimento Científico e
Tecnológico (CNPq), grants no 304120/2021-9 for financial support.
Particularly, A. A. Araújo Filho would like to thank Fundação de Apoio à
Pesquisa do Estado da Paraíba (FAPESQ) and CNPq – [200486/2022-5] and
[150891/2023-7] for the financial support. JF would like to thank the Fundação
Cearense de Apoio ao Desenvolvimento Científico e Tecnológico (FUNCAP) under
the grant PRONEM PNE0112-00085.01.00/16 for financial support, the CNPq under
the grant 304485/2023-3 and Gazi University for the kind hospitality. Most of
these calculations were performed by using the Mathematica software.
## References
* (1) A. K. Geim, Science, v. 324, (2009) 1530.
* (2) A. Kara et al, Surf. Sci. Rep. 67, 1, (2012).
* (3) A. Carvalho, M. Wang, X. Zhu, A. S. Rodin, H. Su, A. H. Castro Neto, Nat. Rev. Mat. 1, 11 (2016).
* (4) M. I. Katsnelson, Graphene: carbon in two dimensions, Cambridge University Press (2012).
* (5) K. S. Novoselov, K. Geim, V. Morozov, D. Jiangy, Y. Zhangs, S. V. Dubonos, V. Grigorieva and A. A. Firsov, Science 306, (2004) 666.
* (6) A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* (7) M. I. Katsnelson, Eur. Phys. J. B 51, 157–160 (2006).
* (8) M. I. Katsnelson, K. S. Novoselov, A. K. Geim, Nature Physics volume 2, 620 (2006).
* (9) A. V. Shytov, M. I. Katsnelson, L. S. Levitov, Phys. Rev. Let. 99, (2007).
* (10) A. Iorio, G. Lambiase, Phys. Lett. B 716, 334 (2012).
* (11) T. Morresi et al, 2D Mater. 7, 041006 (2020).
* (12) M. Burgess, B. Jensen, Phys. Rev. A 48, 3, 1861 (1993).
* (13) F.T. Brandt, J.A. Sanchez-Monroy, Phys. Let. A 380, 38, 3036 (2016).
* (14) H. Zhao, Y. L. Wang, C.Z. Ye, R. C., G. H. Liang, and H. Liu Phys. Rev. A 105, 052220 (2022).
* (15) Q. H. Liu, Z. Li, X. Y. Zhou, Z. Q. Yang, W. K. Du Eur. Phys. J. C 79, 712 (2019).
* (16) J. González, F. Guinea, M.A.H. Vozmediano, Phys. Rev. Lett. 69, 172 (1992).
* (17) J. González, F. Guinea, M.A.H. Vozmediano, Nuclear Physics B, (1993).
* (18) Saito R., Dresselhaus G. and Dresselhaus M. S., Physical Properties of Carbon Nanotubes, I. C. Press, London 1998.
* (19) P. Lammert and V. Crespi, Phys. Rev. lett. 85, 5190 (2000).
* (20) F. de Juan, A. Cortijo, M. A. H. Vozmediano, Phys. Rev. B 76, 165409, (2007).
* (21) C. Furtado , F. Moraes , A.M. de M. Carvalho, Phys. Lett. A 372, 5368, (2008).
* (22) V. Atanasov, A. Saxena, Phys. Rev. B 92, 035440 (2015).
* (23) M. Watanabe, H. Komatsu, N. Tsuji, and H. Aoki Phys. Rev. B 92, 205425 (2015).
* (24) V. Atanasov, A. Saxena, Phys. Rev. B 81, 205409 (2010).
* (25) K. Flouris, M. M. Jimenez, and H. J. Herrmann, Phys. Rev. B 105, (2022) 235122.
* (26) L. N. Monteiro, C. A. S. Almeida, and J. E. G. Silva Phys. Rev. B 108, 115436 (2023).
* (27) Ö. Yeşiltaş Advances in high energy physics, 6891402 (2018).
* (28) Ö. Yeşiltaş and J. Furtado, Int. J. Mod. Phys. A 37 (2022) no.11n12, 2250073
* (29) F. de Juan, A. Cortijo, María A. H. Vozmediano, A. Cano, Nature Physics 7, 810 (2011).
* (30) A. Shitade and E. Minamitani, New J. Phys. 22, 113023 (2020).
* (31) G. Liang, Y. Wang, M. Lai, H. Liu, H. Zong and S. Zhu Phys. Rev. A 98, 062112 (2018).
* (32) Y. Wang, H. Zhao, H. Jiang, H. Liu and Y. Chen Phys. Rev. B 106, 235403, (2022).
* (33) F. Guinea, A. K. Geim, M. I. Katsnelson, and K. S. Novoselov, Phys. Rev. B 81, 035408 (2010).
* (34) M.A.H. Vozmedianoa, M.I. Katsnelson, F. Guinea, Phys. Rep. 496, 109 (2010).
* (35) Levy, N. et al., Science 329, 544–547 (2010).
* (36) Guinea, F., Katsnelson, M. I. and Geim, A. K., Nature Phys. 6, 30–33 (2010).
* (37) F. de Juan, J. L. Mañes, and M. A. H. Vozmediano Phys. Rev. B 87, 165131 (2013).
* (38) J. L. Mañes, F. de Juan, M. Sturla, and M. A. H. Vozmediano, Phys. Rev. B 88, 155405 (2013).
* (39) A. Sinner and K. Ziegler, Ann. Phys. 400, 262-278 (2019).
* (40) E. Arias, A. R. Hernandez and C. Lewenkopf, Phys. Rev. B 92, 245110 (2015).
* (41) J. González, J. Herrero, Nucl. Phys. B 825, 426 (2010).
* (42) J. Gonzalez, F. Guinea, J. Herrero, Phys. Rev. B 79, 165434 (2009).
* (43) R. Pincak, J. Smotlacha, Quantum Matter 5, 114 (2016).
* (44) G.Q. Garcia, P.J. Porfírio, D.C. Moreira and C. Furtado, Nucl. Phys. B 950, 114853 (2020).
* (45) R. Dandoloff, A. Saxena, B. Jensen, Phys. Rev. A 81, 014102 (2010).
* (46) J.E.G. Silva et al., Phys. Lett. A 384, 126458 (2020).
* (47) T. F. de Souza, A. C. A. Ramos, R. N. Costa Filho and J. Furtado, Phys. Rev. B 106 (2022) no.16, 165426
* (48) K. Pimsamarn, P. Burikham, T. Rojjanason, Eur. Phys. J. C 80, 1111 (2020).
* (49) Ö. Yeşiltaş, J. Furtado, and J. E. G. Silva, Eur. Phys. J. Plus, 137, (2022) 416.
* (50) E. Cavalcante, Eur. Phys. J. Plus 137, 1351 (2022).
* (51) C. Ho and V. R. Khalilov, Phys. Rev. A 61, 032104 (2000).
* (52) V.M. Villalba and A. R. Maggiolo, Eur. Phys. J. B 22, 31–35 (2001).
* (53) M. J. Bueno, C. Furtado and A. M. de M. Carvalho, Eur. J. Phys. B 85, 53 (2012).
|
# On the complexity of open shop scheduling with time lags
Wiesław Kubiak
_Faculty of Business Administration_
_Memorial University_
_St. John’s, Canada_
###### Abstract
The minimization of makespan in open shop with time lags has been shown NP-
hard in the strong sense even for the case of two machines and unit-time
operations. The minimization of total completion time however has remained
open for that case though it has been shown NP-hard in the strong sense for
weighted total completion time or for jobs with release dates. This note shows
that the minimization of total completion time for two machines and unit-time
operations is NP-hard in the strong sense which answers the long standing open
question.
Keywords: Open shop, time lags, total completion time, complexity
## 1 Introduction
The open shop with job-dependent time lags has been studied for quite some time
in the literature. The time lags model delays required between a job’s
operations due to necessary transportation needed to move a job from one
machine to another for instance. Zhang [7] provides an interesting discussion
of the time lag models and their applications in scheduling. Most research on
the open shops with time lags has focused on two machine open shops where each
job $J_{i}$, $i=1,\dots,n$, consists of two operations $O_{i1}$ and $O_{i2}$
to be processed on two machines $M_{1}$ and $M_{2}$ respectively in any order.
The operations $O_{i1}$ and $O_{i2}$ processing times equal $p_{i1}>0$ and
$p_{i2}>0$ respectively, and the time lag is $l_{i}\geq 0$. In a feasible
schedule either machine can process at most one job at a time, each job can be
processed by at most one machine at a time, and the _later_ operation of job
$J_{i}$ in the schedule needs to wait at least $l_{i}$ time units to start
following the completion of the _earlier_ operation of job $J_{i}$ in the
schedule. Yu [5], and Yu et al. [6] prove a series of strong complexity
results for makespan minimization. They prove that the problem is NP-hard in
the strong sense even if all operations are unit-time operations. This problem
is denoted by $O2|p_{ij}=1,l_{i}|C_{\max}$ in the well-known notation of
Graham et al [2]. Yu [5] then goes on to prove that the problem is NP-hard in
the strong sense even if there are only two possible values $l$ and
$l^{\prime}$ of time lags in a job-proportionate open shop, i.e. the problem
$O2|p_{i1}=p_{i2},l_{i}\in\\{l,l^{\prime}\\}|C_{max}$, and it is also NP-hard
in the ordinary sense when only one value $l$ of time lag is permitted in a
job-proportionate open shop, i.e. the problem
$O2|p_{i1}=p_{i2},l_{i}=l|C_{max}$. Rebaine and Strusevich [3] give a linear
time algorithm for the instances with short time lags, i.e. time lags that meet
the following condition $\max_{i}\\{l_{i}\\}\leq\min_{ij}\\{p_{ij}\\}$. These
results determine current borderline between NP-hard and polynomially solvable
cases for the two machine open shop makespan minimization problem with job
dependent-time lags. The problem intractability caused research to focus on
approximation algorithms and on-line competitive algorithms for makespan
minimization. Strusevich [4] gave $\frac{3}{2}$\- approximation algorithm, and
Zhang and van de Velde [8] gave a 2-competitive algorithm for
$O2|l_{i}|C_{max}$. The reader is referred to Zhang [7] for a comprehensive
review of approximation and on-line algorithms for the problem.
Brucker et al. [1] switch attention to other than makespan objective
functions. In particular to the total completion time objective which is
another key scheduling objective function. They prove that _weighted_ total
completion time minimization is NP-hard in the strong sense, i.e. the problem
$O2|p_{ij}=1,l_{i}|\sum w_{i}C_{i}$. They prove that the same holds for the
total completion time with jobs being _released_ possibly at different times,
i.e. the problem $O2|p_{ij}=1,l_{i},r_{i}|\sum C_{i}$. In this paper we prove
that the problem where all jobs are released at the same time and their
weights are all equal, i.e. the problem $O2|p_{ij}=1,l_{i}|\sum C_{i}$, is NP-hard
in the strong sense. This result strengthens the earlier complexity
results for total completion time, and it answers a question that has been
open at least since the paper by Brucker et al. [1]. The proof is given in the
next section.
## 2 NP-hardness proof
Let $l_{1},\dots,l_{n},e$ be non-negative integers, and $n$ a positive _even_
integer such that $n<e<\frac{3n}{2}$. Let $A$ and $B$ be a partition of the
index set $\\{1,\dots,n\\}$ into two disjoint sets of equal size
$\frac{n}{2}$. For simplicity denote
$\\{\ell_{1},\dots,\ell_{\frac{n}{2}}\\}=\\{l_{j}:j\in A\\}$ and
$\\{\lambda_{1},\dots,\lambda_{\frac{n}{2}}\\}=\\{l_{j}:j\in B\\}$. Consider
the following question.
(Q) Is there a partition of the set $\\{1,\dots,n\\}$ into two disjoint sets
$A$ and $B$ of equal size $\frac{n}{2}$ such that there is a permutation
$\pi_{A}$ of the set $\\{1,\dots,\frac{n}{2}\\}$ and a permutation
$\sigma_{A}$ of the set $\\{\frac{n}{2}+1,\dots,n\\}$ satisfying
$\pi_{A}(i)+\ell_{i}+\sigma_{A}(i)=e$ (1)
for $i=1,\dots,\frac{n}{2}$, and there is a permutation $\pi_{B}$ of the set
$\\{\frac{n}{2}+1,\dots,n\\}$ and a permutation $\sigma_{B}$ of the set
$\\{1,\dots,\frac{n}{2}\\}$ satisfying
$\pi_{B}(i)+\lambda_{i}+\sigma_{B}(i)=e$ (2)
for $i=1,\dots,\frac{n}{2}$?
We refer to this problem as Partition Restricted Numerical 3-Dimensional
Matching (PRN3DM) problem. It is easy to verify that any instance of the
PRN3DM problem with an affirmative answer to Q must satisfy the following
condition
$\Sigma_{i=1}^{n}l_{i}=n(e-n-1),$ (3)
therefore without loss of generality we limit the PRN3DM to the instances for
which (3) holds. The problem PRN3DM is NP-hard in the strong sense which
follows from Theorem 1 on p. 30 in Yu [5].
The corresponding instance of the decision open shop problem
$O2|p_{ij}=1,l_{i}|\Sigma C_{i}\leq F$ is made up of $n$ jobs
$J_{1},\dots,J_{n}$ with time lags
$L_{1}=l_{1}+\Delta,\dots,L_{n}=l_{n}+\Delta$ respectively, where
$\Delta=\frac{3n}{2}-e$. The threshold for total completion time equals
$F=\frac{n}{2}(\frac{3n}{2}+1)$.
For a partition $A$ and $B$ and permutations $\pi_{A}$, $\sigma_{A}$,
$\pi_{B}$, and $\sigma_{B}$ that attest to an affirmative answer to Q, the
schedule $S$ in Figure 1 is a feasible schedule for the open shop problem with
total completion time equal to
$\Sigma_{i=1}^{\frac{n}{2}}(\pi_{A}(i)+\ell_{i}+\Delta+1)+\Sigma_{i=1}^{\frac{n}{2}}(\pi_{B}(i)-\frac{n}{2}+\lambda_{i}+\Delta+1),$
which by (3) and definition of $\Delta$ equals $F$. Thus $S$ gives an
affirmative answer to the problem $O2|p_{ij}=1,l_{i}|\Sigma C_{i}\leq F$
instance.
Figure 1: Schedule S for the partition $A$ and $B$ and permutations $\pi_{A}$,
$\sigma_{A}$, $\pi_{B}$, and $\sigma_{B}$.
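The construction can be checked mechanically. The sketch below (illustrative only; the toy PRN3DM instance with $n=6$, $e=8$ is assumed, not taken from the paper) builds schedule $S$ from such a certificate and verifies that its total completion time equals $F$.

```python
# Illustrative sketch only: given a partition A, B and permutations pi_A, sigma_A,
# pi_B, sigma_B certifying an affirmative answer to Q, build schedule S of Figure 1
# and check that the total completion time equals F = (n/2)(3n/2 + 1).
n, e = 6, 8
lags_A = [0, 1, 2]                        # l_j for j in A
lags_B = [0, 1, 2]                        # l_j for j in B
pi_A, sigma_A = [3, 1, 2], [5, 6, 4]      # pi_A(i) + l_i + sigma_A(i) = e
pi_B, sigma_B = [5, 6, 4], [3, 1, 2]      # pi_B(i) + l_i + sigma_B(i) = e

Delta = 3 * n // 2 - e
F = (n // 2) * (3 * n // 2 + 1)

assert all(p + l + s == e for p, l, s in zip(pi_A, lags_A, sigma_A))
assert all(p + l + s == e for p, l, s in zip(pi_B, lags_B, sigma_B))

completions, m1_slots, m2_slots = [], [], []
for p, l in zip(pi_A, lags_A):            # A-jobs: earlier op on M1 in [0, n/2]
    c = p + l + Delta + 1                 # later op on M2 completes at this time
    completions.append(c); m2_slots.append(c)
for p, l in zip(pi_B, lags_B):            # B-jobs: earlier op on M2 in [0, n/2]
    c = (p - n // 2) + l + Delta + 1      # later op on M1 completes at this time
    completions.append(c); m1_slots.append(c)

# feasibility: later operations occupy distinct unit slots in [n/2, n] on each machine
assert sorted(m1_slots) == sorted(m2_slots) == list(range(n // 2 + 1, n + 1))
print(sum(completions), "==", F)          # 30 == 30
```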
Now, let $\mathcal{S}$ be a feasible schedule for the instance of
$O2|p_{ij}=1,l_{i}|\Sigma C_{i}\leq F$ with total completion time not
exceeding $F$. We first show that the makespan $C_{\max}=n$ in $\mathcal{S}$.
To that end let $x_{\sigma(1)}\leq x_{\sigma(2)}\leq\dots\leq
x_{\sigma(n-1)}\leq x_{\sigma(n)}$ be the times when the _earlier_ operations
of the jobs $J_{1},\dots,J_{n}$ complete in $\mathcal{S}$. Because of the
delay due to the time lags the total completion time of $\mathcal{S}$ is at
least
$\sum_{i=1}^{\frac{n}{2}}(x_{\sigma(2i-1)}+x_{\sigma(2i)})+\sum_{j=1}^{n}(L_{j}+1),$
which does not exceed the threshold $F$ for $\mathcal{S}$. Hence by (3) and
definition of $\Delta$
$\sum_{i=1}^{\frac{n}{2}}(x_{\sigma(2i-1)}+x_{\sigma(2i)})\leq\frac{n}{2}(\frac{n}{2}+1).$
(4)
For two machines we have $i\leq x_{\sigma(2i-1)}\leq x_{\sigma(2i)}$,
$i=1,\dots,\frac{n}{2}$. Thus by (4) we get
$x_{\sigma(2i-1)}=x_{\sigma(2i)}=i$ for $i=1,\dots,\frac{n}{2}$. Therefore
each job $J_{1},\dots,J_{n}$ completes after time $\frac{n}{2}$ in
$\mathcal{S}$. Let $C_{\pi(1)}\leq C_{\pi(2)}\leq\dots\leq C_{\pi(n-1)}\leq
C_{\pi(n)}$ be the completion times of the jobs $J_{1},\dots,J_{n}$ in
$\mathcal{S}$. Clearly $C_{\pi(i)}=\frac{n}{2}+c_{\pi(i)}$, for some
$c_{\pi(i)}\geq 1$, thus
$\sum_{i=1}^{\frac{n}{2}}(c_{\pi(2i-1)}+c_{\pi(2i)})\leq\frac{n}{2}(\frac{n}{2}+1)$
(5)
in $\mathcal{S}$. Again, for two machines we have $i\leq c_{\pi(2i-1)}\leq
c_{\pi(2i)}$, $i=1,\dots,\frac{n}{2}$. Thus by (5) we get $c_{\pi(2i-1)}=c_{\pi(2i)}=i$
for $i=1,\dots,\frac{n}{2}$. Therefore all jobs complete by $C_{\max}=n$ in
$\mathcal{S}$ which is what we set out to show first.
Figure 2: Schedule S with total completion time not exceeding F.
Finally, let $C$ be the set of $\frac{n}{2}$ jobs with earlier operations in
the interval $[0,\frac{n}{2}]$ on $M_{1}$ and later operations in
$[\frac{n}{2},n]$ on $M_{2}$ in $\mathcal{S}$, and $D$ be the set of
$\frac{n}{2}$ jobs with earlier operations in the interval $[0,\frac{n}{2}]$
on $M_{2}$ and later operations in $[\frac{n}{2},n]$ on $M_{1}$ in
$\mathcal{S}$, see Figure 2. We have
$\alpha_{C}(i)+L^{\prime}_{i}+\beta_{C}(i)=n$
for each $i\in C$, and
$\alpha_{D}(j)+L^{\prime}_{j}+\beta_{D}(j)=n$
for each $j\in D$ for some permutations $\alpha_{C}$, $\alpha_{D}$,
$\beta_{C}$, $\beta_{D}$ of the set $\\{1,\dots,\frac{n}{2}\\}$. Thus
$\alpha_{C}(i)+L^{\prime}_{i}+\beta_{C}(i)+\frac{n}{2}=\frac{3n}{2}$
for each $i\in C$, and
$\alpha_{D}(j)+\frac{n}{2}+L^{\prime}_{j}+\beta_{D}(j)=\frac{3n}{2}$
for each $j\in D$. By taking the permutation $\pi_{C}=\alpha_{C}$ of
$\\{1,\dots,\frac{n}{2}\\}$ and $\sigma_{C}=\beta_{C}+\frac{n}{2}$ of
$\\{\frac{n}{2}+1,\dots,n\\}$, and the permutation
$\pi_{D}=\alpha_{D}+\frac{n}{2}$ of $\\{\frac{n}{2}+1,\dots,n\\}$ and
$\sigma_{D}=\beta_{D}$ of $\\{1,\dots,\frac{n}{2}\\}$, we get
$\pi_{C}(i)+l^{\prime}_{i}+\sigma_{C}(i)=\frac{3n}{2}-\Delta=e$ (6)
for each $i\in C$, and
$\pi_{D}(j)+l^{\prime}_{j}+\sigma_{D}(j)=\frac{3n}{2}-\Delta=e$ (7)
for each $j\in D$, where $L^{\prime}_{i}=l^{\prime}_{i}+\Delta$ and
$l^{\prime}_{i}\geq l_{i}$ for $i=1,\dots,n$. By (6) and (7) we have
$\Sigma_{i=1}^{n}l^{\prime}_{i}=n(e-n-1).$
Thus by (3), $l^{\prime}_{i}=l_{i}$ for $i=1,\dots,n$. Hence
$\pi_{C}(i)+l_{i}+\sigma_{C}(i)=e$ (8)
for each $i\in C$, and
$\pi_{D}(j)+l_{j}+\sigma_{D}(j)=e$ (9)
for each $j\in D$. Therefore, $C$ and $D$ make up the required partition, which
proves the following theorem.
###### Theorem 2.1.
The problem $O2|p_{ij}=1,l_{i}|\sum C_{i}$ is NP-hard in the strong sense.
## References
* [1] P. Brucker, S. Knust, T. C. E. Cheng, and N. V. Shakhlevich. Complexity results for flow-shop and open-shop scheduling problems with transportation delays. Annals of Operations Research, 129:81–106, 2004.
* [2] R. L. Graham, E. L. Lawler, J. K. Lenstra, and A. H. G. Rinnooy Kan. Optimization and approximation in deterministic sequencing and scheduling: A survey. Anns. Discr. Math., 5:287–326, 1979.
* [3] D. Rebaine and V.A. Strusevich. Two-machine open shop scheduling with special transportation times. Journal of the Operational Research Society, 50:756–764, 1999.
* [4] V.A. Strusevich. A heuristic for the two-machine open-shop scheduling problem with transportation times. Discrete Appl. Math., 93:287 – 304, 1999.
* [5] W. Yu. The Two-machine Flow Shop Problem with Delays and the One- machine Total Tardiness Problem. PhD thesis, Eindhoven University of Technology, 1996.
* [6] W. Yu, J. A. Hoogeveen, and J. K. Lenstra. Minimizing makespan in a two-machine flow shop with delays and unit-time operations is NP-hard. Journal of Scheduling, 7:333–348, 2004.
* [7] X. Zhang. Scheduling with Time Lags. PhD thesis, Erasmus Universiteit Rotterdam, 2010.
* [8] X. Zhang and S. van de Velde. On-line two-machine open shop scheduling with time lags. European Journal of Operational Research, 204:14–15, 2010.
|
Weyde
Research Centre for Machine Learning, Department of Computer Science, City, University of London, United Kingdom
Manisha Kopparti (funded by a PhD studentship from City, University of London)
Research Centre for Machine Learning, Department of Computer Science, City, University of London, United Kingdom
We would like to thank the anonymous reviewers of this article for their
valuable comments and suggestions that helped to improve it.
# Modelling Identity Rules with Neural Networks
###### Abstract
In this paper, we show that standard feed-forward and recurrent neural
networks fail to learn abstract patterns based on identity rules. We propose
Relation Based Pattern (RBP) extensions to neural network structures that
solve this problem and answer, as well as raise, questions about integrating
structures for inductive bias into neural networks.
Examples of abstract patterns are the sequence patterns ABA and ABB where A or
B can be any object. These were introduced by Marcus et al (1999) who also
found that 7-month-old infants recognise these patterns in sequences that use
an unfamiliar vocabulary while simple recurrent neural networks do not. This
result has been contested in the literature but it is confirmed by our
experiments. We also show that the inability to generalise extends to
different, previously untested, settings.
We propose a new approach to modify standard neural network architectures,
called Relation Based Patterns (RBP) with different variants for
classification and prediction. Our experiments show that neural networks with
the appropriate RBP structure achieve perfect classification and prediction
performance on synthetic data, including mixed concrete and abstract patterns.
RBP also improves neural network performance in experiments with real-world
sequence prediction tasks.
We discuss these findings in terms of challenges for neural network models and
identify consequences of this result for developing inductive biases
for neural network learning.
## 1 Introduction
Despite the impressive development of deep neural networks over recent years,
there has been an increasing awareness that there are some tasks that still
elude neural network learning or need unrealistic amounts of data. Humans, on
the other hand, are remarkably quick at learning and abstracting from very few
examples. Marcus [1] showed in an experiment that 7-month old infants already
recognise sequences by identity rules, i.e. which elements are repeated, after
just two minutes of familiarization. In that study a simple recurrent neural
network model was also tested and it failed to generalise these identity rules
to new data.
In this study, we re-visit this problem and evaluate the performance of
frequently used standard neural network models in learning identity rules.
More specifically, we find that feed-forward and recurrent neural networks
(RNN) and their gated variants (LSTM and GRU) in standard set-ups clearly fail
to learn general identity rules presented as classification and prediction
tasks.
We tackle this problem by proposing _Relation Based Patterns_ (RBP), which
model identity relationships explicitly as extensions to neural networks for
classification and prediction. We show experimentally that on synthetic data
the networks with suitable RBP structures learn the relevant rules and
generalise with perfect classification and prediction. We also show that this
perfect performance extends to mixed rule-based and concrete patterns, and
that RBP improves prediction on real-world language and music data.
Identity rules are clearly in the hypothesis space of the neural networks, but
the networks fail to learn them by gradient descent. We identify that both the
comparison of related input neurons and of input tokens needs to be predefined
in the network to learn general rules from data. The RBP structures introduce
this inductive bias in the neural networks and thus enable the learning of
identity rules by standard neural networks.
Our contributions in this paper are specifically:
* •
we evaluate several common NN architectures: feed-forward networks, RNN, GRU,
and LSTM, in novel settings, and find that they fail to learn general identity
rules;
* •
we identify reasons that prevent the learning process from being successful in
this context;
* •
we propose the Relation Based Patterns, a new method to enable the learning of
identity rules within the regular network structure;
* •
we show in experiments that identity rules can be learnt with RBP structure on
artificial data, including mixed rule-based and concrete patterns, and that
they improve performance in real-world prediction tasks;
The remainder of this paper is structured as follows. Section 2 introduces
related work on modelling identity rules. Section 3 presents results of our
experiments with standard neural network architectures. Section 4 presents our
RBP model and its different variants. Section 5 presents the results of
experiments using RBP structures. Section 6 addresses the application of RBP
to mixed patterns and real data. Section 7 discusses the implications of the
presented experimental results and Section 8 concludes this paper.
## 2 Related work
Our task is the learning of rules from sequential data. This is often seen as
grammar learning, on which there have been many studies in psychology. [2]
made an early contribution on implicit learning and generalisation.
Subsequently, [3, 4] studied specifically the knowledge acquired during
artificial grammar learning tasks.
The specific problem we are addressing in this study is the recognition of
abstract patterns that are defined by the identity relation between tokens in
a sequence. In the well-known experiments by [1], infants were exposed to
sequences of one of the forms _ABA_ or _ABB_ , e.g. ‘la di la’ or ‘la di di’,
for a few minutes in the familiarisation phase.
In the test phase the infants were exposed to sequences with a different
vocabulary (e.g. ‘ba tu ba’ and ‘ba tu tu’) and they showed significantly
different behaviour depending on whether the sequences exhibited the form they
were familiarised with or not.
This type of pattern only depends on the equality between elements of the
sequence and after successful learning it should be recognisable independently
of the vocabulary used. However, [1] also showed that simple recurrent Elman
networks were not able to perform this learning task. This finding sparked an
exchange about whether human speech acquisition is based on rules or
statistics and the proposal of several neural networks models that claimed to
match the experimental results. [5] and [6] proposed a solution based on a
distributed representation of the input and on pre-training where the network
is first trained to recognise repeated items in a sequence. The network is
subsequently trained on classifying ABA vs ABB patterns. Only [6] reports
specific results, with only 4 test data points but 100% accuracy. However,
[7] reported that they could not recreate these results.
[8] and [9] suggested solutions which are based on modified network
architectures and training methods. [10] could not replicate the results by
[9] and found that the models by [8] do not generalise. The claims by [10]
were again contested by [11]. A number of other methods were suggested that
used specifically designed network architectures, data representations, and
training methods, such as [12, 13, 14, 15]. More recent work by [16] suggests
that prior experience or pre-defined context representation (“pre-training” or
“pre-wiring”) is necessary for the network to learn general identity rules
when using echo state networks. While these works are interesting and
relevant, they do not answer our question whether more commonly used network
architectures can learn general identity rules.
The discussion of this problem is part of a wider debate on the systematicity
of language learning models, which started in the 1980s and 1990s [17, 18].
This debate, like the more specific one on identity rules, has been
characterised by claims and counter-claims [19, 20, 21, 22, 23, 24], which, as
stated by [25], often suffer from a lack of empirical grounding. Very
recently, the work in [26] has defined a test of systematicity in a framework
of translation, applied it to standard _seq2seq_ neural network models [27].
They found that generalisation occurs in this setting, but it depends largely
on the amount and type of data shown, and does not exhibit the extraction and
systematic application of rules in the way a human learner would.
In most of the studies above, the evaluation has been conducted by
testing whether the output of the network shows a statistically significant
difference between inputs that conform to a trained abstract pattern and those
that do not. From a machine learning perspective, this criterion is not
satisfactory as we, like [26], would expect that an identity rule should
always be applied correctly once if it has been learned from examples, at
least in cases of noise-free synthetic data. We are therefore interested in
the question whether and how this general rule learning can be achieved with
common neural network types for sequence classification.
This question also relates to recent discussions sparked by [28] about deep
neural networks’ need for very large amounts of training data, lack of
robustness and lack of transparency as also expressed, e.g., by [29, 30, 31].
We surmise that these issues relate to the lack of generalisation beyond the
space covered by the input data, i.e. extrapolation, which is generally seen
as requiring an inductive bias in the learning system, but there is no general
agreement about the nature or implementation of inductive biases for neural
networks, e.g. [32, 33]. In recent years, there was a trend to remove human
designed features from neural networks, and leave everything to be learned
from the data [34]. We follow here the inverse approach, to add a designed
internal representation, as we find that for the given problem standard neural
network methods consistently fail to learn any suitable internal
representation from the data.
## 3 Experiment 1: standard neural networks
We test different network architectures to evaluate if and to what extent
recurrent and feed-forward neural networks can learn and generalise abstract
patterns based on identity rules.
### 3.1 Supervised learning of identity rules
The problem in the experiment by [1] is an unsupervised learning task, as the
infants in the experiments were not given instructions or incentives. However,
most common neural network architectures are designed for supervised learning
and there are also natural formulations of abstract pattern recognition as
supervised learning task in the form of classification or prediction.
In our case, abstract patterns are defined by identity relations. Expressed in
logic, they can be described using the binary equality predicate
$eq(\cdot,\cdot)$. For a sequence of three tokens $\alpha,\beta,\gamma$ the
rule-based patterns $ABA$ and $ABB$ can be described by the following rules:
$\displaystyle ABA$ $\displaystyle:\neg eq(\alpha,\beta)\land
eq(\alpha,\gamma)$ (1) $\displaystyle ABB$ $\displaystyle:\neg
eq(\alpha,\beta)\land eq(\beta,\gamma).$ (2)
These rules are independent of the actual values of
$\alpha,\beta,\text{and}\;\gamma$ and also called abstract patterns. Concrete
patterns, on the other hand, are defined in terms of values from a
vocabulary $a,b,c,\dots$. E.g., sequences $a**$, i.e. beginning with ‘$a$’, or
$*bc$, ending with ‘$bc$’, can be formulated in logic as follows:
a** $\displaystyle:\alpha=\mlq{}a\mrq$ (3) *bc
$\displaystyle:\beta=\mlq{}b\mrq\land\gamma=\mlq{}c\mrq.$ (4)
For the remainder of this article we use the informal notations $ABA$ and
$a**$ as far as they are unambiguous in their context.
For classification, the task is to assign a sequence to a class, i.e. _ABA_ or
_ABB_ , after learning from labelled examples. For prediction, the task is to
predict the next token given a sequence of two tokens after exposure to
sequences of one of the classes (e.g. only ABA, or ABB respectively). These
tasks are suitable for the most commonly used neural network architectures.
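As a concrete illustration of these definitions (our own sketch, not part of the paper's experimental code), the abstract pattern of a triple can be computed directly from the equality relations:

```python
# Small sketch of the rule definitions in Eqs. (1)-(2): classify a triple purely
# from equality relations, independently of the vocabulary used.
def pattern(seq):
    a, b, c = seq
    if a != b and a == c:
        return "ABA"          # Eq. (1)
    if a != b and b == c:
        return "ABB"          # Eq. (2)
    if a == b == c:
        return "AAA"
    if a == b != c:
        return "AAB"
    return "ABC"

print(pattern(["la", "di", "la"]))   # ABA
print(pattern(["ba", "tu", "tu"]))   # ABB -- same rule, new vocabulary
```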
### 3.2 Experimental set-up
#### Network set-up
We use the Feed-forward Neural Network (FFNN) (also called Multi-layer
Perceptron) [35], the Simple Recurrent Neural Network (RNN, also called Elman
network [36]), the Gated Recurrent Unit (GRU) network [37], and the Long Short
Term Memory (LSTM) network [38]. For Prediction we only use the RNN and its
gated variants GRU and LSTM.
The input to the networks is a one-hot encoded vector representing each token
with $n$ neurons, where $n$ is the size of the vocabulary. In the case of the
FFNN, we encode the whole sequence of 3 tokens as a vector of size $3n$. For
the recurrent models, we present the tokens sequentially, each as a
$n$-dimensional vector. We set the number of neurons in each hidden layer to
(10, 20, 30, 40, 50), using 1 or 2 hidden layers. We use Rectified Linear
Units (ReLUs) for the hidden layers in all networks. The output layer uses the
softmax activation function. The number of output units is 2 for
classification and the size of the vocabulary for prediction. We train with
the Adam optimisation method [39], using initial learning rates of
$0.01,0.1,0.2,0.4$, and train with the synthetic datasets in one batch. We use
regularisation with Dropout rates of 0.1, 0.2, 0.4 and set the number of
epochs to 10, within which all trainings converged.
We conduct a full grid search over all hyperparameters using four-fold cross-
validation to optimise the hyperparameters and determine test results. We run
a total of 10 simulations for each evaluation and average the results. All
experiments have been programmed in PyTorch and the code is publicly
available.111https://github.com/radhamanisha1/RBP-architecture
#### Datasets
For performing the rule learning experiments, we artificially generate data in
the form of triples for each of the experiments. We consider our sample
vocabulary as $a...l$ (12 letters) for both prediction and classification
tasks. We generate triples in all five abstract patterns: AAA, AAB, ABA, ABB,
and ABC for the experiments. The sequences are then divided differently for
the different cases of classification. For all the experiments we use separate
train, validation, and test sets with 50%, 25%, and 25% of the data,
respectively. All sampling (train/test/validation split, downsampling) is done
per simulation.
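A sketch of this data generation is shown below (the published code is available at the repository linked above; the helper names and this particular implementation are our assumptions):

```python
# Minimal sketch (assumed implementation) of generating abstract-pattern triples
# with disjoint training and test vocabularies, plus one-hot encoding for the FFNN.
import itertools, random
import numpy as np

vocab = list("abcdefghijkl")                  # 12 tokens
train_vocab = set(random.sample(vocab, 6))    # half for training,
test_vocab = set(vocab) - train_vocab         # half for validation/testing

def triples(tokens):
    """All triples over `tokens`, labelled with their abstract pattern."""
    for a, b in itertools.permutations(tokens, 2):
        yield (a, b, a), "ABA"
        yield (a, b, b), "ABB"
        yield (a, a, b), "AAB"
    for a in tokens:
        yield (a, a, a), "AAA"
    for a, b, c in itertools.permutations(tokens, 3):
        yield (a, b, c), "ABC"

def one_hot(seq):
    """Concatenated one-hot encoding of a triple (input format for the FFNN)."""
    x = np.zeros(3 * len(vocab))
    for i, t in enumerate(seq):
        x[i * len(vocab) + vocab.index(t)] = 1.0
    return x

train = [(one_hot(s), lab) for s, lab in triples(train_vocab)]
test = [(one_hot(s), lab) for s, lab in triples(test_vocab)]
```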
### 3.3 Classification
First we test three different classification tasks as listed below. We use
half the vocabulary for training and the other half for testing and validation
(randomly sampled). We divide the sequences into two classes as follows,
always maintaining an equal size of both classes:
* 1)
ABA/ABB vs other: In task a) class one contains only pattern ABA while the
other contains all other possible patterns (AAA, AAB, ABB, ABC) downsampled
per pattern for class balance. The task is to detect whether
$eq(\alpha,\gamma)\land\neg eq(\alpha,\beta)$ is true or false. Analogously,
the task in b) ABB vs other is to detect $eq(\beta,\gamma)\land\neg
eq(\alpha,\beta)$. This case corresponds to the experiment in [1], where only
one rule-based pattern type is used for familiarisation.
* 2)
ABA vs ABB: This task is like task 1 above, but only pattern ABB occurs in the
second class, so that this task has less variance in the second class. We
expected this task to be easier to learn because two equality predicates
$eq(\alpha,\gamma),eq(\beta,\gamma)$ change their values between the classes
and are each sufficient to indicate the class.
* 3)
ABC vs other: In this case, class one (ABC) has no pair of equal tokens, while
the $other$ class has at least one of
$eq(\alpha,\beta),eq(\alpha,\gamma),eq(\beta,\gamma)$ as $true$, i.e.
detecting equalities without localising them is sufficient for correct
classification.
In our experiments, the training converged quickly in all cases and the
classification accuracy on the training data was 100%. The results on the test
set are shown in Table 1. In all cases the baseline, corresponding to random
guessing, is 50%. This baseline is only exceeded for task 1) by the RNNs and
their gated variants, and even then the accuracy is far from perfect at 55%.
Classification task | FFNN | RNN | GRU | LSTM
---|---|---|---|---
1a) ABA/other | 50% | 55% | 55% | 55%
1b) ABB/other | 50% | 55% | 55% | 55%
2) ABA/ABB | 50% | 50% | 50% | 50%
3) ABC/other | 50% | 50% | 50% | 50%
Table 1: Three classification tasks based on abstract patterns over 10
simulations. The numbers show test set accuracy after a grid search and cross
validation as described in section 3.2. All values are rounded to the next
percentage point.
### 3.4 Prediction
We performed prediction experiments on two tasks. In task 1) we train and test
on ABA patterns and in task 2) on ABB. Training and test/validation set use
different vocabularies. The training converged quickly in less than 10 epochs,
and after training the prediction accuracy on the training set is 100%.
The results on the test set are shown in Table 2. The baseline is
approximately $8.3\%$ ($1/12$) as we have a vocabulary size of 12. We again use half the
vocabulary (6 values) for training and half for validation/testing. The
results show that the tested networks fail completely to make correct
predictions. They perform below the baseline at 0% accuracy, mostly
because they only predict tokens from the training vocabulary, which do not occur in
the test set.
Prediction task | RNN | GRU | LSTM
---|---|---|---
1) ABA | 0% | 0% | 0%
2) ABB | 0% | 0% | 0%
Table 2: Prediction results for two different abstract patterns. The numbers
show test set prediction accuracy after a grid search and cross validation as
described in section 3.2.
### 3.5 Discussion
The results show clearly that FFNNs, RNNs, GRUs and LSTMs do not learn general
abstract patterns based on identity rules. This agrees with the previously
reported experiments by [1]. However, since there was some conflicting
evidence in the literature, the clarity of the outcome was not expected.
#### Questions raised
This result raises the question of why these neural networks do not learn to
generalise abstract patterns from data. There are two aspects worth
considering for an explanation: the capacity of the network and the necessary
information for the network to solve the problem.
Regarding the capacity: the solution to the task is in the hypothesis space of
the neural networks, since proofs exist of universal approximation properties
for feed-forward networks with unbounded activation functions [40] and of
Turing-completeness for recurrent networks [41]. We will present a
constructive solution below, putting that result into practice, with a design
of network instances that solve the problem.
The relevant question, as has been pointed out by [16], is therefore why
learning with backpropagation does not lead to effective generalisation here.
There are three different steps that are necessary to detect identity rules: a
comparison of input neurons, a comparison of tokens, represented by multiple
neurons, and a mapping of comparison results to classes or predictions.
#### Vocabulary hypothesis
A possible reason for the failure of the networks to generalise is what we call
the vocabulary hypothesis. It is based on the separation of the vocabulary in the
one-hot encoded representation. This leads to some input neurons only being activated
in the training set and some only in the validation and test sets.
In order to learn suitable weights for an input comparison, there would have
to be a suitable gradient of the weights of the outgoing connections from
these inputs. If parts of the vocabulary do not appear in the training data,
i.e. the activation of the corresponding input neurons is always zero during
training, the weights of their outgoing connections will not be adapted. We
therefore expect that the separation of the vocabulary prevents generalisation
from the training to the test set as the weights going out from neurons that
are used during testing have not been adapted by the gradient descent. Based
on this consideration we conducted another experiment with a shared
vocabulary.
This experiment is called ABA-BAB vs other. We again represent our vocabulary
as $a...l$ (12 letters) for this task with train/validation/test split as
$50\%/25\%/25\%$. Now we use the same vocabulary for training, validation, and
testing, but we separate different sequences of the form ABA that use the same
tokens between the training and validation/test sets. E.g., if $ded$ is in
the test set, then $ede$ is in the training or validation set, so that there
is no overlap in terms of actual sequences. Like in classification experiment
1), training converged quickly and resulted in perfect classification
performance on the training set.
Classification task | FFNN | RNN | GRU | LSTM
---|---|---|---|---
ABA-BAB vs other | 50% | 50% | 50% | 50%
Table 3: Classification results on test sets with the same vocabulary used in
test, validation and training set.
The results on the test set presented in Table 3 show performance at the
baseline with no evidence of generalisation. This shows that activating all
inputs by using a shared vocabulary is not sufficient to enable generalisation
in the learning process.
#### Other explanations
A second potential problem is which neurons should be compared. The FFNN has
no prior information about neurons belonging to the same or different tokens
or about which input neurons correspond to the same token values. In the RNN,
one token is presented per time step, so that a comparison between the
previous hidden state and the current input is possible as the same neurons
are activated. However, with a full set of connections between the previous
hidden layer and the current, there is no reason that relations between the
same neurons at different time steps would be processed differently from
relations between different neurons.
On the other hand, if we had a representation that includes the information of
which tokens are identical or different, then we would have all the
information we need for a mapping, as these are the relations in which our
defining rules are formulated (e.g. ABA is defined as
$eq(\alpha,\gamma)\land\neg eq(\alpha,\beta)$). This idea has led to the
Relation Based Pattern (RBP) model that we introduce in the next section and
then evaluate with respect to its effect on both abstract and concrete pattern
learning.
## 4 Design of Relation Based Pattern models
To address the inability of neural networks to generalise rules in neural
network learning, we developed the Relation Based Pattern (RBP) model as a
constructive solution, where the comparisons between input neurons and between
tokens and the mappings to outputs are added as a predefined structure to the
network. The purpose of this structure is to enable standard neural networks
to learn abstract patterns based on the identity rules over tokens while
retaining other learning abilities.
In the RBP model there are two major steps. The first step is defining
comparison units for detecting repetitions, called DR units, and the second
step is adding the DR units to the neural network.
### 4.1 Comparison units
#### Comparing neurons
We assume, as before, that input is a one-hot encoded vector of the current
token along with the $n-1$ previous vectors for a given context length n (in
this study context length 3 for classification and 2 for prediction). We use
comparison units, called DR units (differentiator-rectifier). As the name
suggests, they apply a full wave rectification to the difference between two
inputs: $f(x,y)=|x-y|$. The first level of $DR$ units are $DR_{n}$ units that
are applied to every pair of corresponding input neurons (representing the
same value) within a token representation, as shown in Figure 1.
Figure 1: $DR_{n}$ units comparing related inputs with an absolute of
difference activation function. In one-hot encoding there are $kDR_{n}$ units
for every pair of input tokens, where $k$ is the vocabulary size.
#### Comparing tokens
The next level of $DR$ units are the $DR_{p}$ units that sum the activations
of the $DR_{n}$ values that belong to one pair of tokens. Based on the
sequence length $n$ and vocabulary size $k$ we create $k\times n(n-1)/2$
$DR_{n}$ units covering all possible pairs of tokens; e.g., in our
classification example with a sequence of 3 tokens and a vocabulary size
of 12, we have $12\times 3(3-1)/2=36$ $DR_{n}$ units. All the $DR_{n}$
units for a pair of tokens are then summed in a $DR_{p}$ unit using
connections with a fixed weight of +1. E.g. we have $5\times(5-1)/2=10$
$DR_{p}$ units for a context of length 5. Figure 2a shows the network
structure with $DR_{n}$ and $DR_{p}$ units.
For the prediction case, we also use the same approach to represent the
difference between each input token and the next token (i.e., the target
network output). We create $n$ $DR_{p}out$ units that calculate the difference
between each input in the given context and the next token. There are $k\times
n$ $DR_{n}out$ units that compare the corresponding neurons for each pair of
input/output tokens, in the same way as for the pairs of input tokens. The
overall network structure is shown in Figure 2b.
(a) The $DR_{p}$ and $DR_{n}$ units that are used in the RBP1 and RBP2
structures with $3\times k$ $DR_{n}$ and 3 $DR_{p}$ units for a vocabulary
size $k$ and sequence length 3.
(b) The $DR_{out}$ structure for detecting repetitions between input and
target. The $DR_{p}out$ values are calculated at training time and a model is
trained to predict them conditional on $DR_{p}in$ (see Figure 5).
Figure 2: $DR_{n}$ and $DR_{p}$ units for inputs (all RBP) and outputs (RBP3).
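The computation performed by these units can be written compactly. The following sketch (our reading of Figures 1–2, with assumed tensor shapes and names, not the published implementation) computes the $DR_{n}$ and $DR_{p}$ activations from one-hot token vectors:

```python
# Sketch of the DR_n and DR_p units (fixed, non-trainable computations).
# DR_n = |x_i - y_i| element-wise for each pair of tokens; DR_p sums the k DR_n
# values of that pair with fixed +1 weights.
import torch

def dr_units(tokens_onehot):
    """tokens_onehot: tensor of shape (batch, n_tokens, k). Returns (DR_n, DR_p)."""
    batch, n, k = tokens_onehot.shape
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]    # n(n-1)/2 pairs
    dr_n = torch.stack([(tokens_onehot[:, i] - tokens_onehot[:, j]).abs()
                        for i, j in pairs], dim=1)                 # (batch, n_pairs, k)
    dr_p = dr_n.sum(dim=2)                                         # (batch, n_pairs)
    return dr_n, dr_p

# Example: an ABA triple over a 12-token vocabulary
x = torch.zeros(1, 3, 12)
x[0, 0, 4] = x[0, 2, 4] = 1.0   # first and third token identical
x[0, 1, 7] = 1.0
_, dr_p = dr_units(x)
print(dr_p)   # tensor([[2., 0., 2.]]): tokens 1 and 3 repeat (0), the other pairs differ (2)
```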
### 4.2 Neural network integration
Figure 3: Overview of the RBP1n/RBP1p structure.
We combine the DR units ($DR_{n}$ and $DR_{p}$) with the neural network models
in early, mid and late fusion approaches we call RBP1, RBP2 and RBP3, as
outlined below. The weights that connect $DR_{n}$ units to input and output,
and the $DR_{n}$ to $DR_{p}$ units and the offset layer are fixed, all other
weights that appear in the following models are trainable with
backpropagation.
#### Early Fusion (RBP1n/p)
In this approach, $DR_{n}$ or $DR_{p}$ units are added as additional inputs to
the network, concatenated with the normal input. In Figure 3, the RBP1n/p
structure is depicted. We use early fusion in both the prediction and
classification tasks.
#### Mid Fusion (RBP2)
The $DR_{p}$ units are added to the hidden layer. Figure 4a shows the mid
fusion structure for the feed-forward network and Figure 4b for the recurrent
network respectively. The RBP2 approach is used for classification and
prediction tasks.
(a) RBP2a
(b) RBP2b
Figure 4: Overview of RBP2 approaches, where the $DR_{p}$ units are
concatenated to the hidden layer.
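A compact sketch of the mid-fusion idea for the feed-forward classifier follows (our illustration of Figure 4a; layer sizes and identifiers are assumptions, the $DR$ computation is fixed, and all other weights remain trainable):

```python
# Sketch of RBP2(a): fixed DR_p activations concatenated with the trainable hidden layer.
import torch
import torch.nn as nn

class RBP2FFNN(nn.Module):
    def __init__(self, vocab_size=12, seq_len=3, hidden=30, n_classes=2):
        super().__init__()
        self.vocab_size, self.seq_len = vocab_size, seq_len
        n_pairs = seq_len * (seq_len - 1) // 2
        self.hidden = nn.Linear(seq_len * vocab_size, hidden)
        self.out = nn.Linear(hidden + n_pairs, n_classes)   # hidden + DR_p -> classes

    def dr_p(self, x):
        """Fixed (non-trainable) DR_p units computed from the flat one-hot input."""
        t = x.view(-1, self.seq_len, self.vocab_size)
        pairs = [(i, j) for i in range(self.seq_len) for j in range(i + 1, self.seq_len)]
        return torch.stack([(t[:, i] - t[:, j]).abs().sum(dim=1) for i, j in pairs], dim=1)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.out(torch.cat([h, self.dr_p(x)], dim=1))

model = RBP2FFNN()
x = torch.zeros(1, 36); x[0, 4] = x[0, 12 + 7] = x[0, 24 + 4] = 1.0   # an ABA triple
print(model(x).shape)   # torch.Size([1, 2])
```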
Figure 5: Overview of the RBP3 approach. The $DR_{p}in$ values are calculated
as in RBP2. From there, we use a fully connected layer to predict $DR_{p}out$
(trained with teacher-forcing). The predicted $DR_{p}{out}$ values are mapped
back to the vocabulary (based on the context tokens) and used as probability
offsets in a mixture of experts with the standard neural network in the left
part of the diagram. All connections are trainable except $Input$ to
$DR_{p}in$ and $DR_{p}out$ to Output offsets (dotted arrows).
#### Late Fusion (RBP3)
In this approach, we use the same structure as in RBP2 (we call it $DR_{n}in$
and $DR_{p}in$ in this context), and in addition we estimate the probability
of identity relations between the input and the output, i.e., that the token
in the current context is repeated as the next token. We use a structure
called $DR_{p}out$ for this, and from there we project back to the vocabulary,
to generate a probability offset for the tokens appearing in the context.
Figure 5 gives an overview of the RBP3 late fusion scheme. The $DR_{p}in$
units detect identities between the input tokens in the current context as
before. The $DR_{p}out$ units model the identities between the context and the
next token, as shown in the Figure 4b, where repetition is encoded as 1, and a
non-repeated token as a $-1$. During training we use teacher-forcing, i.e., we
set the values of the $DR_{p}out$ units to the true values. We use a feed-
forward neural network with one hidden layer to learn a mapping from the
$DR_{in}$ to the $DR_{out}$. This gives us an estimate of the $DR_{out}$ units
given the $DR_{in}$ units. The $DR_{out}$ values are then normalised
subtracting the mean, and then mapped back to the output space (the one-hot
vocabulary representation), using a zero value for the output values that
don’t appear in the input. These output offsets are then combined in a
weighted sum (mixture of experts) with the output distribution estimated by
the standard normal network (on the left side in Figure 5). The weights in the
mixture are trainable. The outputs from the combined distribution of mixture
of experts are clipped between [0,1] and renormalised. The final output
distribution is a softmax layer providing the probability distribution over
the vocabulary for the next token.
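To make the data flow concrete, the following forward-pass sketch mirrors our reading of Figure 5 (layer sizes, the context length of 2 and all identifiers are assumptions; the $DR$ computations are fixed, and in the full model the $DR_{p}out$ predictor would additionally be trained with teacher forcing as described above):

```python
# Forward-pass-only sketch of the RBP3 late-fusion scheme (assumed implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RBP3Predictor(nn.Module):
    def __init__(self, vocab_size=12, context=2, hidden=30):
        super().__init__()
        self.k, self.n = vocab_size, context
        n_pairs = context * (context - 1) // 2
        # expert 1: standard recurrent next-token model
        self.rnn = nn.RNN(vocab_size, hidden, batch_first=True)
        self.rnn_out = nn.Linear(hidden, vocab_size)
        # DR_out predictor: DR_p_in -> one repetition score per context position
        self.dr_net = nn.Sequential(nn.Linear(n_pairs, hidden), nn.ReLU(),
                                    nn.Linear(hidden, context))
        self.mix = nn.Parameter(torch.tensor(0.5))    # trainable mixture weight

    def dr_p_in(self, x):                             # x: (batch, context, k) one-hot
        pairs = [(i, j) for i in range(self.n) for j in range(i + 1, self.n)]
        return torch.stack([(x[:, i] - x[:, j]).abs().sum(dim=1) for i, j in pairs], dim=1)

    def forward(self, x):
        # expert 1: standard recurrent prediction
        h, _ = self.rnn(x)
        p_rnn = F.softmax(self.rnn_out(h[:, -1]), dim=1)
        # expert 2: predicted repetition offsets mapped back onto the vocabulary
        dr_out = self.dr_net(self.dr_p_in(x))                     # (batch, context)
        dr_out = dr_out - dr_out.mean(dim=1, keepdim=True)        # normalise
        offsets = torch.bmm(dr_out.unsqueeze(1), x).squeeze(1)    # offsets on context tokens
        # mixture of experts, clipped to [0, 1] and renormalised
        p = torch.clamp(self.mix * p_rnn + (1 - self.mix) * offsets, 0.0, 1.0)
        return p / p.sum(dim=1, keepdim=True)

model = RBP3Predictor()
x = torch.zeros(1, 2, 12); x[0, 0, 3] = x[0, 1, 9] = 1.0   # a two-token context
print(model(x).sum())                                       # probabilities sum to 1
```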
## 5 Experiment 2: neural networks with RBP structures
In the following we repeat the experiments from section 3 but also test
networks with added RBP structures. For convenience we repeat the previous
results in the tables in this section.
### 5.1 Classification experiments
This experiment is analogous to the first classification experiment. In the
case of the feed-forward network, RBP2(a) was used in the mid fusion approach
and for recurrent network, RBP2(b) was used. We trained again for 10 epochs
and all networks converged to perfect classification on the training set.
Table 4 provides the overall test accuracy for the three approaches.
Task | RBP | FFNN | RNN | GRU | LSTM
---|---|---|---|---|---
1a) ABA vs other | - | 50% | 55% | 55% | 55%
 | RBP1n | 50% | 55% | 55% | 55%
 | RBP1p | 65% | 70% | 70% | 70%
 | RBP2 | 100% | 100% | 100% | 100%
1b) ABB vs other | - | 50% | 55% | 55% | 55%
 | RBP1n | 50% | 55% | 55% | 55%
 | RBP1p | 65% | 70% | 70% | 70%
 | RBP2 | 100% | 100% | 100% | 100%
2) ABA vs ABB | - | 50% | 50% | 50% | 50%
 | RBP1n | 50% | 60% | 65% | 65%
 | RBP1p | 75% | 75% | 75% | 75%
 | RBP2 | 100% | 100% | 100% | 100%
3) ABC vs other | - | 50% | 50% | 50% | 50%
 | RBP1n | 55% | 65% | 65% | 65%
 | RBP1p | 55% | 70% | 70% | 70%
 | RBP2 | 100% | 100% | 100% | 100%
4) ABA-BAB vs other | - | 50% | 50% | 50% | 50%
 | RBP1n | 55% | 72% | 75% | 75%
 | RBP1p | 69% | 74% | 75% | 76%
 | RBP2 | 100% | 100% | 100% | 100%
Table 4: Classification experiments with RBP: test accuracy for the different
models and tasks, as explained above. Results with ‘-’ in the RBP column are
the same as in section 3 and shown here again for comparison.
The results with RBP1n models already show some improvement over the baseline
in most configurations, but the results are only slightly above those of the standard
networks, with RNNs, GRUs and LSTMs benefiting more than FFNNs. This supports
our hypothesis that learning to compare corresponding input neurons is a
challenging task for neural networks. However, the results show that providing
that comparison is not sufficient for learning identity rules.
RBP1p structures also aggregate all the $l$ $DR_{n}$ neurons that belong to a
pair of input tokens. The results show that providing that information leads
to improved accuracy and provides evidence that this aggregation is another
necessary step that the networks do not learn reliably from the data.
The RBP2 models enable the neural networks to make predictions and
classifications that generalise according to identity rules that they learn
from data. RBP2 leads to perfect classification for all network types
tested. This confirms the design consideration that comparing pairs of tokens
provides the relevant information in the form required for classification, as
the classes are defined by $equals$ relations, so that the activations of the
DRp units are directly correlated with the class labels.
A surprising result is the big difference between the generalisation using the
RBP1p and the RBP2 structures. They both provide the same information, only in
different layers of the network, but RBP1p only reaches at most 75% with a 50%
baseline. We hypothesize that the additional expressive power provided by the
non-linearities in the hidden layer enables more effective learning. This
effect deserves further investigation.
### 5.2 Prediction experiments
Here we performed two experiments separately on ABA and ABB patterns as in
experiment 1 on prediction. The tasks are the same as previously and we
trained again for 10 epochs after which all networks had converged to perfect
prediction accuracy on the training data. Table 5 summarises the accuracy for
RNN, GRU and LSTM without RBP, and with RBP1n, 1p, 2, and 3.
Pattern | RBP | RNN | GRU | LSTM
---|---|---|---|---
1) ABA | - | 0% | 0% | 0%
 | RBP1n | 0% | 0% | 16%
 | RBP1p | 0% | 0% | 18%
 | RBP2 | 0% | 0% | 20%
 | RBP3 | 100% | 100% | 100%
2) ABB | - | 0% | 0% | 0%
 | RBP1n | 0% | 0% | 17%
 | RBP1p | 0% | 0% | 20%
 | RBP2 | 0% | 0% | 22%
 | RBP3 | 100% | 100% | 100%
Table 5: Test set accuracy in prediction experiments for patterns ABA and ABB.
As before, results are averaged over 10 simulations and rounded to the nearest
percentage point. Results with ‘-’ in the RBP column are the same as in section 3
and shown here again for comparison.
Overall, we observe that only the LSTM benefits from RBP1n, RBP1p, and RBP2
structures; the other networks apparently cannot make use of the information
provided. The RBP3 model, on the other hand, leads to perfect prediction
accuracy on our synthetic dataset.
Our interpretation is that standard recurrent networks do not learn the more
complex mapping that prediction requires, as it demands not only recognising a
pattern but also selecting a prediction on the basis of that pattern. The
somewhat better results of the LSTM networks are interesting. In the RBP3
model, the mapping between the identity patterns and back to the vocabulary
adds considerable prior structure to the model and it is very effective in
achieving the generalisation of rule-based patterns.
## 6 Experiment 3: mixed tasks and real data
The results presented here were all obtained with synthetic data where
classification was exclusively on rule-based abstract patterns. This raises
the question whether the RBP will impede recognition of concrete patterns in a
mixed situation. Furthermore, we would like to know whether RBP is effective
with real data where the abstract and concrete patterns may interact.
### 6.1 Mixed abstract and concrete patterns
We conducted an experiment where the classes were defined by combinations of
abstract and concrete patterns. Specifically we defined 4 classes based on the
abstract patterns $ABA$ and $ABB$ combined with the concrete patterns $a**$
and $b**$. E.g., the class $ABA,a**$ can be expressed logically as
$eq(\alpha,\gamma)\land\neg eq(\alpha,\beta)\land\alpha=\text{`a'}.$ (5)
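For illustration, the class-membership test of Eq. (5) can be written as a one-line predicate; the function name and argument ordering below are assumptions made for this example.

```python
def in_class_ABA_a(alpha, beta, gamma):
    """Class (ABA, a**): first and third tokens equal, first and second differ,
    and the first token is the concrete symbol 'a'."""
    return alpha == gamma and alpha != beta and alpha == 'a'
```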
We use a vocabulary of 18 characters, out of which 12 are used for training
and 6 are used for validation/testing in addition to ‘a’ and ‘b’, which need
to appear in all sets because of the definition of the concrete patterns. For
class 1/3 and class 2/4, abstract patterns ABA and ABB are used respectively.
Class 1/2 and 3/4 start with tokens ‘a’ and ‘b’ respectively. The train,
validation and test split is 50%, 25%, and 25% respectively. We trained the
network for 10 epochs, leading to perfect classification on the training set.
A total of 10 simulations have been performed. We test a feed-forward and a
recurrent neural network without and with RBP1p and RBP2. The results are
shown in Table 6.
RBP | FFNN | RNN
---|---|---
- | 23% | 42%
RBP1p | 49% | 57%
RBP2 | 100% | 100%
Table 6: Test set accuracy for mixed abstract/concrete pattern classification.
As in the previous experiments, networks without RBP fail to generalise the
abstract patterns. The results for RBP1p and RBP2 show that the ability to
learn and recognise the concrete patterns is not impeded by adding the RBP
structures.
### 6.2 Language models with RBP
In order to test the capability of networks with RBP structure, we use them in
two language modelling tasks. One is to predict characters in English text,
and one is to predict the pitch of the next note in folk song melodies. We
selected both tasks because of the prevalence of repetitions in the data, as
notes in music and characters in English tend to be repeated more than words.
Our RBP structures are designed to model identity-rules and we therefore
expect them to be more effective on tasks with more repetitions.
#### Character prediction
We conducted a character prediction experiment on a subset of the Gutenberg
electronic book collection (https://www.gutenberg.org/), consisting of a text
corpus of 42,252 words. We used 2 hidden layers with 50 neurons
each. In the RBP2 model, the DRp units were concatenated with the first hidden
layer. The learning rate is set to 0.01 and the network training converged
after 30 epochs. Each character is predicted without and with the RBP variants
using a context size of 5. The prediction results are summarized in Table 7.
RBP | RNN | GRU | LSTM
---|---|---|---
- | 3.8281 | 3.8251 | 3.8211
RBP1p | 4.4383 | 4.4368 | 4.4321
RBP2 | 3.7512 | 3.7463 | 3.7448
RBP3 | 3.4076 | 3.4034 | 3.4012
Table 7: Character prediction task. The numbers show the average cross entropy
loss per character on the test set (lower is better, best values are set in
bold), without and with RBP structures using context length 5.
#### Pitch prediction
In another experiment we applied RBP to pitch prediction in melodies [42]
taken from the Essen Folk Song Collection [43]. We performed a grid search for
each context length for hyper parameter tuning, with [10,30,50,100] as the
size of the hidden layer and [30,50] epochs with learning rate set to 0.01
with one hidden layer. The results for context length 5 are summarized in
Table 8. RBP2 and RBP3 improved the network performance for RNN, GRU, and LSTM. Overall,
LSTM with late fusion produces the best result and also improves over the best
reported performance in pitch prediction with a long-term model on this
dataset with a cross-entropy of 2.547, which was achieved with a feature
discovery approach by [44].
RBP | RNN | GRU | LSTM
---|---|---|---
- | 2.6994 | 2.5702 | 2.5589
RBP1p | 2.6992 | 2.5714 | 2.5584
RBP2 | 2.6837 | 2.5623 | 2.5483
RBP3 | 2.6588 | 2.5549 | 2.5242
Table 8: Pitch prediction task on the Essen Folk Song Collection. The numbers
show the average cross entropy per note (lower is better, best values are set
in bold), without and with RBP using context length 5.
#### Results
In both character and pitch prediction, the addition of RBP3 structures
improves the overall results consistently. RBP1p leads to a deterioration in
character prediction and has an inconsistent effect on pitch prediction, while
RBP2 leads to a slight but consistent improvement in both tasks. This provides
further evidence that the RBP structure enables the learning of relevant
patterns in the data.
## 7 Discussion
### 7.1 Standard neural networks
The results of the experiments described above confirm the results of [1] and
others that standard recurrent (and feed-forward) neural networks do not learn
generalisable identity rules. From the tested models and settings of the task
we can see that the lack of activation of input neurons impedes learning, but
avoiding this lack is not sufficient. The task assumes that the identity of
input tokens is easy to recognise, classify and base predictions on, but the
models we tested do not learn to generalise in this way. These results confirm
our view that in order to generalise it is necessary to know which input
neurons are related, similarly on the next level, which comparisons of input
belong to a pair of tokens so that they can be aggregated per token. The
structure of neural networks does not provide any prior preference for inputs
that are related in this way over any other combinations of inputs. This makes
it seem plausible that the solutions by [5, 6, 9] could not be replicated by
[7, 10].
### 7.2 Constructive model with RBP
The RBP model addresses the learning of identity rules by adding neurons and
connections with fixed weights. From the input neurons we add connections to a
DR (differentiator-rectifier) unit from each pair of corresponding input
neurons within any pair of tokens (represented in one-hot encoding). These DR
units calculate the absolute of the difference of the activations of the two
input neurons. They are followed by $DRp$ units that aggregate by taking the
sum of the DR unit activations for each pair of tokens. The fact that the DRp
units relate to the difference between each pair of neurons makes the learning
task for classification much simpler, as has been confirmed by our results. An
open question in this context is why the RBP2 is so much more effective than
RBP1p for classification, although the only difference is the layer in which
the information is added into the network.
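A minimal sketch of these fixed units is given below, assuming one-hot encoded tokens; for identical tokens the $DRp$ value is 0 and for distinct tokens it is 2, which is what makes the classes easy to read off. The function name and return format are illustrative only.

```python
import numpy as np
from itertools import combinations

def rbp_units(tokens, vocab_size):
    """Fixed RBP features: DRn units are |differences| of corresponding one-hot
    neurons for every pair of tokens; DRp units sum the DRn values per pair."""
    one_hot = np.eye(vocab_size)[tokens]            # (context_len, vocab_size)
    drn, drp = [], []
    for i, j in combinations(range(len(tokens)), 2):
        diff = np.abs(one_hot[i] - one_hot[j])      # differentiator-rectifier
        drn.append(diff)
        drp.append(diff.sum())                      # 0 if tokens match, 2 otherwise
    return np.concatenate(drn), np.array(drp)

# Example: an ABA context such as tokens [3, 5, 3] yields DRp = [2., 0., 2.]
```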
For prediction, we need a more complex structure, as beyond recognition of
identity, also the selection of the token to predict is required, that depends
on the tokens in the context and their similarity relations. The constructive
RBP solution requires a transformation into a representation of identity
relations in the input that is mapped to identities between input and output
and that is mapped back to the token space by adding prediction probability to
the tokens that are predicted to be identical between input and output. This
creates a complex predefined structure, but without it even the models that
achieved perfect classification failed to make correct predictions with new
data. Only the LSTM models could use the RBP1 and RBP2 information to make
predictions above the baseline (22% vs 8.3%). We hypothesise that the gating
structure of the LSTMs enables at least some effective mapping. The 100%
correct predictions by all models using RBP3 shows the effectiveness of this
structure.
### 7.3 Applications
Adding a bias into the network with a predefined structure such as RBP raises
the question whether there is a negative effect on other learning abilities of
the network and whether interactions between the abstract and concrete tasks
can be learnt. In the mixed pattern experiment, RBP is still effective and
showed no negative effect. In experiments with real language and music data we
found that RBP3 has a positive effect on the prediction of characters in
language and pitches in folk song melodies. The small negative effect of RBP1
on character prediction seems to indicate that there may be confounding effect
where identity rules are less relevant. This effect did not appear in melody
prediction, where repetition is more important.
### 7.4 Extrapolation and inductive bias
The results in this study confirm that an inductive bias is needed for
extrapolation, in the terminology of [28], in order to generalise in some
sense outside the space covered by the training data. This general challenge
has recently attracted some attention. E.g., [45] provided several solutions
to the related problem of learning equality of numbers (in binary
representation), which does not generalise from even to odd numbers as pointed
out already by [46]. As the authors point out in [45], an essential question
is which biases are relevant to the domain and problem. The identity problem
addressed here is in itself fundamental to learning about relations [47], as
relations depend on object identity. This further raises the question what is
needed to enable more complex concepts and rules to be learnt, such as more
general logical concepts and rules.
The identity rules also point to the lower-level problem that the natural
relations of position and belonging to objects are not naturally addressed in
neural networks. Other tasks may require different structures, relating for
example to arithmetic, geometry or physics [48]. We therefore see as an
important task the definition of predefined structures in neural networks, so
that they create useful inductive bias, but do not prevent learning of
functions that do not conform to that bias.
## 8 Conclusions
Our experiments show that the observation by [1], that neural networks are
unable to learn general identity rules, holds for standard feed-forward
networks, recurrent neural networks, and networks of GRUs and LSTMs. The
solution we propose here, the Relation Based Patterns (RBP), introduces an
additional structure with fixed weights into the network. Our experiments
confirm that the RBP structures enable the learning of abstract patterns based
on identity rules in classification and prediction as well as in mixed
abstract and concrete patterns. We have further found that adding RBP
structures improves performance in language and music prediction tasks.
Overall, we find that standard neural networks do not learn identity rules and
that adding RBP structure creates an inductive bias which enables this
extrapolation beyond training data with neural networks. This outcome raises
the question on how to develop further inductive biases for neural networks to
improve generalisation of learning on other tasks and more generally.
## References
* [1] G. F. Marcus, S. Vijayan, S. Rao, P. Vishton, Rule learning by seven-month-old infants, Science 283 (5398) (1999) 77–80.
* [2] P. C. Gordon, K. J. Holyoak, Implicit learning and generalization of the mere exposure effect., Journal of Personality and Social Psychology, 45, 492–500.
* [3] B. J. Knowlton, L. R. Squire, The information acquired during artificial grammar learning., Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 79 –91.
* [4] B. J. Knowlton, L. R. Squire, Artificial grammar learning depends on implicit acquisition of both abstract and exemplar-specific information, Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 169 –181.
* [5] M. Seidenberg, J. Elman, Do infants learn grammar with algebra or statistics?, Science 284 (5413) (1999) 433–433.
* [6] J. Elman, Generalization, rules, and neural networks: A simulation of Marcus et. al, https://crl.ucsd.edu/~elman/Papers/MVRVsimulation.html.
* [7] M. Vilcu, R. F. Hadley, Generalization in simple recurrent networks, in: Proceedings of the Annual Meeting of the Cognitive Science Society, Vol. 23, 2001, pp. 1072–1077.
* [8] T. R. Shultz, A. C. Bale, Neural network simulation of infant familiarization to artificial sentences: Rule-like behavior without explicit rules and variables, Infancy, 2:4, 501-536, DOI: 10.1207/S15327078IN020407.
* [9] G. Altmann, Z. Dienes, Rule learning by seven-month-old infants and neural networks, In Science 284 (5416) (1999) 875–875.
* [10] M. Vilcu, R. F. Hadley, Two apparent ‘counterexamples’ to Marcus: A closer look, Minds and Machines 15 (3-4) (2005) 359–382.
* [11] T. R. Shultz, J.-P. Thivierge, D. Titone, Generalization in a model of infant sensitivity to syntactic variation, in: Proceedings of the Annual Meeting of the Cognitive Science Society, 2005, pp. 2009–2014.
* [12] L. Shastri, S. Chang, A spatiotemporal connectionist model of algebraic rule-learning, Tech. Rep. TR-99-011, Berkeley, California: International Computer Science Institute (1999).
* [13] M. Gasser, E. Colunga, Babies, variables, and connectionist networks, in: Proceedings of the 21st Annual Conference of the Cognitive Science Society, Lawrence Erlbaum, 1999, p. 794.
* [14] P. F. Dominey, F. Ramus, Neural network processing of natural language: I. sensitivity to serial, temporal and abstract structure of language in the infant, Language and Cognitive Processes 15 (1) (2000) 87–127.
* [15] M. H. Christiansen, C. M. Conway, S. Curtin, A connectionist single-mechanism account of rule-like behavior in infancy, in: Proceedings of the 22nd annual conference of the cognitive science society, 2000, pp. 83–88.
* [16] R. G. Alhama, W. Zuidema, Pre-wiring and pre-training: What does a neural network need to learn truly general identity rules, CoCo at NIPS.
* [17] J. A. Fodor, Z. W. Pylyshyn, Connectionism and cognitive architecture: A critical analysis, Cognition 28 (1-2) (1988) 3–71.
* [18] R. F. Hadley, Systematicity in connectionist language learning, Mind & Language 9 (3) (1994) 247–272.
* [19] P. Smolensky, The constituent structure of connectionist mental states: A reply to Fodor and Pylyshyn, Southern Journal of Philosophy 26 (Supplement) (1987) 137–161.
* [20] J. Fodor, B. P. McLaughlin, Connectionism and the problem of systematicity: Why Smolensky’s solution doesn’t work, Cognition 35 (2) (1990) 183–204.
* [21] D. Chalmers, Why fodor and pylyshyn were wrong: The simplest refutation, in: Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, Cambridge, Mass, 1990, pp. 340–347.
* [22] L. Niklasson, T. van Gelder, Systematicity and connectionist language learning, Mind and Language 9 (3) (1994) 28–302.
* [23] M. H. Christiansen, N. Chater, Generalization and connectionist language learning, Mind & Language 9 (3) (1994) 273–287.
* [24] R. F. Hadley, Systematicity revisited: reply to christiansen and chater and niklasson and van gelder, Mind & Language 9 (4) (1994) 431–444.
* [25] S. L. Frank, Getting real about systematicity, in: P. Calvo, J. Symons (Eds.), The architecture of cognition: Rethinking Fodor and Pylyshyn’s systematicity challenge, MIT Press, 2014, pp. 147–164.
* [26] B. Lake, M. Baroni, Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks, in: International Conference on Machine Learning, 2018, pp. 2879–2888.
* [27] I. Sutskever, O. Vinyals, Q. V. Le, Sequence to sequence learning with neural networks, in: Advances in neural information processing systems, 2014, pp. 3104–3112.
* [28] G. F. Marcus, Deep learning : a critical appraisal, arXiv:1801.00631.
* [29] S. Sabour, N. Frosst, G. E. Hinton, Dynamic routing between capsules, in: Advances in Neural Information Processing Systems, 2017, pp. 3856–3866.
* [30] K. Kansky, T. Silver, D. A. Mély, M. Eldawy, M. Lázaro-Gredilla, X. Lou, N. Dorfman, S. Sidor, S. Phoenix, D. George, Schema networks: Zero-shot transfer with a generative causal model of intuitive physics, arXiv preprint arXiv:1706.04317.
* [31] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. J. Goodfellow, R. Fergus, Intriguing properties of neural networks, CoRR abs/1312.6199. arXiv:1312.6199.
URL http://arxiv.org/abs/1312.6199
* [32] R. Feinman, B. M. Lake, Learning inductive biases with simple neural networks (2018). arXiv:1802.02745.
* [33] D. G. T. Barrett, F. Hill, A. Santoro, A. S. Morcos, T. Lillicrap, Measuring abstract reasoning in neural networks (2018). arXiv:1807.04225.
* [34] Y. Bengio, A. Courville, P. Vincent, Representation learning: A review and new perspectives, IEEE transactions on pattern analysis and machine intelligence 35 (8) (2013) 1798–1828.
* [35] D. E. Rumelhart, G. E. Hinton, R. J. Williams, Learning internal representations by error propagation, Tech. rep., California Univ San Diego La Jolla Inst for Cognitive Science (1985).
* [36] J. L. Elman, Finding structure in time, Cognitive science 14 (2) (1990) 179–211.
* [37] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using rnn encoder-decoder for statistical machine translation, arXiv preprint arXiv:1406.1078.
* [38] S. Hochreiter, J. Schmidhuber, Long short-term memory, Neural Computation 9 (8) (1997) 1735–1780. doi:10.1162/neco.1997.9.8.1735.
URL https://doi.org/10.1162/neco.1997.9.8.1735
* [39] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv:1412.6980.
* [40] M. Leshno, V. Y. Lin, A. Pinkus, S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural networks 6 (6) (1993) 861–867.
* [41] H. Siegelmann, E. Sontag, On the computational power of neural nets, Journal of Computer and System Sciences (1995) Volume 50, Issue 1, pp. 132–150.
* [42] R. M. Kopparti, T. Weyde, Evaluating repetition based melody prediction over different context lengths, ICML Joint Workshop on Machine Learning and Music, Stockholm, Sweden, July 14.
* [43] H. Schaffrath, The Essen folksong collection in the Humdrum Kern format. Database edited by David Huron, CCARH, Menlo Park, CA.
URL http://kern.ccarh.org/cgi-bin/ksbrow
* [44] J. Langhabel, R. Lieck, M. Rohrmeier, Feature discovery for sequential prediction of monophonic music, International Society for Music Information Retrieval Conference (2017) 649–655.
* [45] J. Mitchell, P. Minervini, P. Stenetorp, S. Riedel, Extrapolation in NLP, arXiv:1805.06648.
* [46] G. F. Marcus, The algebraic mind: Integrating connectionism and cognitive science, Cambridge MIT Press.
* [47] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, et al., Relational inductive biases, deep learning, and graph networks, arXiv preprint arXiv:1806.01261.
* [48] N. Cohen, A. Shashua, Inductive bias of deep convolutional networks through pooling geometry (2016). arXiv:1605.06743.
# Signatures of bath-induced quantum avalanches in a many-body–localized
system
Julian Léonard1,∗,†, Sooshin Kim1,†, Matthew Rispoli1, Alexander Lukin1, Robert
Schittko1, Joyce Kwan1, Eugene Demler2, Dries Sels3,4, Markus Greiner1,‡
###### Abstract
Strongly correlated systems can exhibit surprising phenomena when brought into a
state far from equilibrium. A spectacular example is quantum avalanches, which
have been predicted to run through a many-body–localized system and delocalize
it. Quantum avalanches occur when the system is locally coupled to a small
thermal inclusion that acts as a bath. Here we realize an interface between a
many-body–localized system and a thermal inclusion of variable size, and study
its dynamics. We find evidence for accelerated transport into the localized
region, a signature of a quantum avalanche. By measuring the site-resolved
entropy we monitor how the avalanche travels through the localized system and
thermalizes it site by site. Furthermore, we isolate the bath-induced dynamics
by evaluating multipoint correlations between the bath and the system. Our
results have fundamental implications for the robustness of many-body–localized
systems and their critical behavior.
One of the founding principles of statistical physics is that a generic
macroscopic system can equilibrate on its own. This means that local
fluctuations of energy, magnetization, or particle density can relax towards
thermal equilibrium because interactions allow different parts of the system
to serve as reservoirs to each other. This universal picture has been
challenged by the idea of many-body localization (MBL), which suggests that
systems with strong disorder can evade thermalization even in the presence of
interactions Alet and Laflorencie (2018); Abanin _et al._ (2019); Schreiber
_et al._ (2015); Smith _et al._ (2016); Choi _et al._ (2016); Rubio-Abadal
_et al._ (2019); Lukin _et al._ (2019); Lüschen _et al._ (2017); Rispoli
_et al._ (2019).
In one-dimensional systems, a stable MBL phase can be argued as follows:
Matrix elements of local operators decay exponentially with separation between
two points, whereas the density of states increases exponentially with the
system size. For strong disorder, matrix elements can thus be argued to decay
faster than the density of states increases, ultimately inhibiting relaxation.
However, the existence of MBL remains a subject of debate Abanin _et al._ ;
Panda _et al._ (2019); Sierant _et al._ (2020); Šuntajs _et al._ (2020a,
b); Luitz and Lev (2020); Kiefer-Emmanouilidis _et al._ (2020a, b), since it
is unclear whether those conditions can actually be fulfilled. For instance,
by introducing a small region with weak disorder, part of the system may be
delocalized and thus give rise to local operators with non-exponential decay
Agarwal _et al._ (2015); Bar Lev _et al._ (2015); Žnidarič _et al._ (2016);
Gopalakrishnan _et al._ (2016); Agarwal _et al._ (2017); Potter _et al._
(2015); Vosk _et al._ (2015); Gopalakrishnan _et al._ (2015); Weiner _et
al._ (2019); Khemani _et al._ (2017a, b); Weiner _et al._ (2019). Those
weakly disordered regions occur naturally in randomly disordered systems, when
potential offsets on consecutive lattice sites accidentally coincide Griffiths
(1969); McCoy (1969). The dynamics in MBL systems in the presence of a locally
thermalizing region have been predicted to occur in so-called quantum
avalanches, which imply those small islands grow by absorbing nearby
disordered regions Nandkishore and Gopalakrishnan (2017); De Roeck and
Huveneers (2017); Luitz _et al._ (2017); Thiery _et al._ (2018); Crowley and
Chandran (2020). Under which conditions quantum avalanches can arise, run out
of steam, or propagate without halt determines the ultimate fate of MBL at
very long times. Their understanding is thus closely connected to discerning
thermalization in interacting many-body systems.
Figure 1: Bath-induced quantum avalanches. a, Two scenarios at an interface of
a thermal bath (clean) and a localized (disordered) region: a weak bath
penetrates logarithmically slow and localization remains robust (left), or an
avalanche from a strong bath thermalizes the disordered region site by site
(right). b, Fluorescence pictures of a two-dimensional Mott insulator at unity
filling, and of the initialized one-dimensional system of $L$ sites. Projected
optical potentials isolate the system and apply site-resolved offsets onto the
disordered region (blue). c, The initial state is brought far from equilibrium
through a quantum quench by abruptly enabling tunneling along all links, then
evolved under the Hamiltonian, until we detect the site-resolved atom number
with a fluorescence picture. d, The system’s dynamics are governed by the
Bose-Hubbard model with tunneling energy $J$ and on-site interaction energy
$U$, extended by a disorder potential with amplitude $W$ in the disordered
region. Figure 2: Accelerated transport across the clean-disorder interface.
a, Density correlations for all pairs of sites in a system consisting of
$L_{\text{clean}}=L_{\text{dis}}=6$ at disorder strength $W=9.1\,J$. After a
quantum quench, an uncorrelated initial state (left) develops separate
dynamics within each subsystem (center), followed by particle transport across
the clean-disorder interface (grey dashed lines) for evolution times $\gg
L_{\text{clean}},L_{\text{dis}}$ (right). Cuts show the total density
correlations $g^{(2)}(i)$ of the clean region with site $i$ (i.e. average of
top six rows, excluding diagonal entries), featuring homogeneous coupling
among the clean sites, and exponentially decaying anti-correlations with the
distance of the disordered site from the interface. b, The decay length
$\xi_{\text{d}}$ of the total density correlations increases first
logarithmically in time and accelerates at long evolution times. c, The decay
length $\xi_{\text{d}}$ after an evolution time of $100\tau$ grows with
$L_{\text{clean}}$, indicating improved particle transport into the disordered
region. The data point at $L_{\text{clean}}=0$ and the dashed line show the
localization length of an isolated MBL system. Solid lines (bars in panel c)
show the prediction from exact numerics without free parameters. Error bars
denote the s.e.m. (below the marker size in panel a).
Bath-induced relaxation dynamics can often be captured semi-classically in the
context of Fermi’s golden rule. In an isolated MBL system particle
rearrangements are restricted to the length scales of the order of the
localization length $\xi_{\text{loc}}$. The relaxation rate $\Gamma_{i}$ of a
lattice site at distance $i$ coupled to the bath is captured by Fermi’s golden
rule $\Gamma_{i}=g_{i}^{2}\rho_{\text{bath}}$. Here, the coupling for a
relaxation process on site $i$ away from the bath leading to a transfer of
energy or particles into the bath is set by $g_{i}\propto
Je^{-i/\xi_{\text{loc}}}$. The density of states in the thermal region is
exponential in its size, i.e. $\rho_{\text{bath}}\propto J^{-1}e^{\alpha
L_{\text{bath}}}$ with a constant $\alpha$. This model implies that site $i$
shows relaxation after a time $T_{i}=1/\Gamma_{i}$, or equivalently, after an
evolution time $T$ we expect relaxation on the sites up to the distance
$d_{\text{FGR}}(T)\sim\xi_{\text{loc}}\log(J^{2}\rho_{\text{bath}}T)$. In
conclusion, within a perturbative description MBL remains robust against a
local bath, with a bath penetration into the MBL region that increases only
logarithmically in time. Quantum avalanches, however, are predicted to emerge
from dynamics beyond Fermi’s golden rule. As the bath begins to delocalize
neighboring disordered sites, the size of the thermalizing bath expands,
leading to an increase in its density of states.
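The perturbative estimate can be made explicit in a few lines; in the sketch below the proportionality constants are set to one, so the numbers only indicate the scaling, not the experimental values.

```python
import numpy as np

def d_fgr(T, xi_loc, L_bath, alpha, J=1.0):
    """Fermi-golden-rule estimate of the bath penetration depth after time T
    (T in units of hbar/J): d_FGR(T) ~ xi_loc * log(J^2 * rho_bath * T)."""
    rho_bath = np.exp(alpha * L_bath) / J        # bath density of states
    return xi_loc * np.log(J**2 * rho_bath * T)
```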
In this work we explore the dynamics of an MBL system coupled to a thermal
bath (Fig. 1). We observe phenomena that suggest the presence of non-
perturbative avalanche processes, while other features of dynamics can be
explained using the perturbative Fermi’s golden rule. Our experimental
protocol starts by preparing a Mott-insulating state with one 87Rb atom on
each site of a two-dimensional optical lattice (Fig. 1b). The system is placed
in the focus of a high-resolution imaging system through which we project
site-resolved repulsive potentials on individual lattice sites. We isolate a
one-dimensional system of $L$ lattice sites from the Mott insulator and add
potential offsets to the lattice sites. At this point, the system remains in a
product state of one atom per lattice site. We then perform a quantum quench
by abruptly reducing the lattice depth (Fig. 1c). The subsequent non-
equilibrium dynamics are described by the Bose-Hubbard Hamiltonian:
$\hat{\mathcal{H}}=-J\sum_{i}\left(\hat{a}_{i}^{\dagger}\hat{a}_{i+1}+h.c.\right)+\frac{U}{2}\sum_{i}\hat{n}_{i}\left(\hat{n}_{i}-1\right)+W\sum_{i\in L_{\text{dis}}}h_{i}\hat{n}_{i},$
where $\hat{a}^{\dagger}_{i}$ ($\hat{a}_{i}$) is the creation (annihilation)
operator for a boson on site $i$, and
$\hat{n}_{i}=\hat{a}^{\dagger}_{i}\hat{a}_{i}$ is the particle number
operator. The first term describes the tunneling between all neighboring
lattice sites, and the second term represents the on-site repulsive
interactions. The last term introduces a site-resolved energy offset. We set
$h_{i}=0$ for all lattice sites in the clean region of size
$L_{\text{clean}}$, whereas the energy offsets in the disordered region of
size $L_{\text{dis}}$ follow a quasi-periodic disorder distribution
$h_{i}=\cos(2\pi\beta i+\phi)$ with $1/\beta\approx 1.618$, phase $\phi$ and
amplitude $W$. The quasi-periodic distribution prevents nearby lattice sites from
coincidentally having similar energy offsets, which inhibits the presence of
secondary rare regions within the disordered region Setiawan _et al._ (2017).
After a variable evolution time, we read out the site-resolved atom number by
fluorescence imaging. The applied unitary evolution preserves the initial
purity of $99.1(2)\%$ per site Kaufman _et al._ (2016); Lukin _et al._
(2019). All observables are disorder-averaged by realizing potentials with
different $\phi$. The tunneling time $\tau=\hbar/J=4.3(1)\,\text{ms}$ (with
the reduced Planck constant $\hbar$), the interaction strength $U=2.87(3)\,J$,
and the number of disordered sites $L_{\text{dis}}=6$ remain constant in all
experiments.
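For reference, a small dense-matrix construction of the Hamiltonian above is sketched below; it assumes the quasi-periodic phase index runs over the disordered sites only (one possible reading of the definition) and is practical only for few-site checks, not the full experimental Hilbert space.

```python
import numpy as np

def fock_states(L, N):
    """All occupation-number configurations of N bosons on L sites."""
    if L == 1:
        return [(N,)]
    return [(n,) + rest for n in range(N + 1) for rest in fock_states(L - 1, N - n)]

def bose_hubbard(L_clean, L_dis, N, J, U, W, beta=1 / 1.618, phi=0.0):
    """Dense Bose-Hubbard Hamiltonian: tunneling J, on-site interaction U, and
    quasi-periodic offsets W*cos(2*pi*beta*i + phi) on the disordered sites."""
    L = L_clean + L_dis
    h = np.zeros(L)
    h[L_clean:] = np.cos(2 * np.pi * beta * np.arange(L_dis) + phi)
    states = fock_states(L, N)
    index = {s: k for k, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for k, s in enumerate(states):
        H[k, k] = 0.5 * U * sum(n * (n - 1) for n in s) + W * np.dot(h, s)
        for i in range(L - 1):                        # nearest-neighbour tunnelling
            if s[i] > 0:
                t = list(s)
                t[i] -= 1
                t[i + 1] += 1
                m = index[tuple(t)]
                amp = -J * np.sqrt(s[i] * (s[i + 1] + 1))
                H[m, k] += amp                        # a_{i+1}^dag a_i term
                H[k, m] += amp                        # Hermitian conjugate
    return H
```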
We first use the full site-resolved readout of our microscope to investigate
the local transport dynamics in the system. The connected density-density
correlations
$\langle\hat{n}_{i}\hat{n}_{j}\rangle_{c}=\langle\hat{n}_{i}\hat{n}_{j}\rangle-\langle\hat{n}_{i}\rangle\langle\hat{n}_{j}\rangle$
detect correlations between the particle numbers on sites $i$ and $j$ Rispoli
_et al._ (2019). Negative values of $\langle\hat{n}_{i}\hat{n}_{j}\rangle_{c}$
signal anti-correlated density fluctuations, and thus particle motion between
the involved sites (Fig. 2a). In the following, we consider a system with
$L_{\text{clean}}=6$ at disorder strength $W=9.1\,J$ after different evolution
times $T$ after the quantum quench. At the beginning of the evolution
($T=0\tau$), we do not detect any correlations, because the initial state is a
product state. After short evolution times ($T\lesssim\tau L$), we observe the
buildup of spatially dependent anti-correlations in the system. Within the
clean region all lattice sites develop mutual anti-correlations, signaling
delocalizing particles. In contrast, the anti-correlations in the disordered
region remain short-ranged, indicating localized particles. At this time, we
do not detect significant anti-correlations between the clean and the
disordered region.
The situation changes for long evolution times ($T\gg\tau L$), where the
correlations in the clean region have spread out evenly among all pairs of
lattice sites, signaling homogeneously delocalized particles. Furthermore, we
observe the buildup of anti-correlations between lattice sites in the clean
and the disordered region, evidence for transport dynamics across the
interface. Each of the disordered sites is equally anti-correlated to all
clean sites, which suggests that the clean region acts as a homogeneous bath
for the disordered region. Motivated by this picture, we extract the total
correlations of the clean region $g^{(2)}(i)=\sum_{j\in
L_{\text{clean}}}\langle\hat{n}_{i}\hat{n}_{j}\rangle_{\text{c}}$ by taking
the sum of the correlations of each site with all clean sites (Fig. 2b, cuts).
The results show a decay with distance from the clean region, in agreement
with the Fermi golden rule picture of exponentially decaying couplings between
bath and MBL.
While a static bath spectrum causes bath correlations to penetrate MBL
logarithmically in time, a signature of the quantum avalanche is an
accelerated increase, faster than logarithmically in time. In order to test
this picture, we quantify the correlation decay into the disordered region by
measuring the average distance $\xi_{\text{d}}=\sum_{i\in
L_{\text{dis}}}ig^{(2)}(i)$ from the clean region over which anti-correlations
form (Fig. 2b). At short times the decay length $\xi_{\text{d}}$ increases
logarithmically in time, but accelerates at long evolution times — signature
for the emergence of a quantum avalanche.
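A plain-Python sketch of this analysis chain is given below, assuming an array of site-resolved atom-number snapshots with the clean sites listed first; the normalisation of $\xi_{\text{d}}$ follows the definition quoted in the text and may differ from the authors' exact convention.

```python
import numpy as np

def correlation_analysis(snapshots, L_clean, L_dis):
    """Connected density-density correlations, the summed correlation g2(i) of
    each disordered site with the clean region, and the decay length xi_d."""
    n = np.asarray(snapshots, dtype=float)           # shape: (shots, L_clean + L_dis)
    mean = n.mean(axis=0)
    conn = (n[:, :, None] * n[:, None, :]).mean(axis=0) - np.outer(mean, mean)
    clean = np.arange(L_clean)
    dis = np.arange(L_clean, L_clean + L_dis)
    g2 = np.array([conn[i, clean].sum() for i in dis])   # sum over all clean sites
    distance = np.arange(1, L_dis + 1)                   # distance from the interface
    xi_d = np.sum(distance * g2)                         # first moment of g2(i)
    return conn, g2, xi_d
```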
Figure 3: Site-resolved thermalization dynamics. a, The atom number
probability distribution for the edge sites in the clean region (left) and the
disordered region (right), measured after $100\tau$ in a system consisting of
$L_{\text{clean}}=L_{\text{dis}}=6$ at disorder strength $W=9.1\,J$. b, Local
entropy per particle $s_{i}=-\sum_{n}p_{n}\log
p_{n}/\langle\hat{n}_{i}\rangle$ extracted from the atom number distribution
on site $i$. The entropy grows after a stationary evolution whose length
depends on the distance from the interface (indicated by the grey dashed
line). Traces are vertically offset for better readability. c, Local entropy
$s_{i}$ (offset by $s_{i}(T=1\tau)$) for all disordered sites. Solid lines
(bars in panel a) show the prediction from exact numerics without free
parameters. Error bars denote the s.e.m. (below the marker size in panel a).
Figure 4: Bath-induced many-body correlations. a, Three-point correlations
$\langle\hat{n}_{i}\hat{n}_{j}\hat{n}_{k}\rangle_{c}$ among pairs of clean
sites $i$, $j$ and one disordered site $k$ (summed over all disordered $k$) in
a system with $L_{\text{clean}}=L_{\text{dis}}=6$ at disorder strength
$W=9.1\,J$ and evolution time $T=100(1)\tau$. Cuts across the site $j=6$ (arrows)
show nonzero entries for all sites, evidence for multi-particle entanglement
between all sites in the clean region with the disordered sites. The flat
distribution visualizes the homogeneous coupling to the disordered region. b,
Correlations $\langle\hat{n}_{i}\hat{n}_{j}\hat{n}_{k}\rangle_{c}$ among pairs
of disordered sites $i$, $j$ and one clean site $k$ (summed over all clean
$k$) vary strongly with the chosen lattice sites, and decrease with the
distance from the clean region. The presence of multi-point correlations
demonstrates non-perturbative dynamics: delocalization is driven through many-
body processes between the disordered region and the clean region. c, We
average over all off-diagonal sites and find a maximum for intermediate
disorder for the MBL-bath entanglement. d, The total multi-point correlations
among disordered sites with the bath show a similar maximum at slightly lower
intermediate disorder. Solid lines show the prediction from exact numerics
without free parameters. Error bars denote the s.e.m.
The size $L_{\text{clean}}$ determines the number of degrees of freedom of the
initial thermal region, and thus the spectral density of the thermal bath.
While a bath with a small number of degrees of freedom can only couple to
disordered sites at distances on the order of the localization length
$\xi_{\text{loc}}$, larger baths are expected to significantly exceed this
length scale. The perturbative picture predicts that
$\xi_{\text{d}}\propto\xi_{\text{loc}}\log(J\rho_{\text{bath}})\propto\xi_{\text{loc}}\times
L_{\text{clean}}$, therefore a deviation from this proportionality can be
regarded as evidence for non-perturbative dynamics in the form of avalanches. In
order to investigate this effect, we realize systems with different
$L_{\text{clean}}$, while keeping $L_{\text{dis}}=6$ constant (Fig. 2c). For
each system size, we characterize the particle transport by measuring
$\xi_{\text{d}}$ after an evolution time of $100(1)\tau$. Our results show an
increasing value of $\xi_{\text{d}}$ for larger $L_{\text{clean}}$. The
enhanced $\xi_{\text{d}}$ for $L_{\text{clean}}=6$ suggests the presence of a
quantum avalanche in the system.
We next examine the local thermalization dynamics in a system with
$L_{\text{clean}}=L_{\text{dis}}=6$. The site-resolved full atom number
readout enables us to measure the atom number distribution on a local level
(Fig. 3a). Lattice sites in the clean region show a distribution corresponding
to a thermal ensemble, whereas lattice sites in the disordered region show a
distribution with enhanced probability for one particle, the initial state of
the system. We quantify the site-resolved thermalization dynamics with the
entropy per particle $s_{i}=-\sum_{n_{i}}{p(n_{i})}\log
p(n_{i})/\langle\hat{n}_{i}\rangle$ on site $i$ from the atom number
distributions. We observe reduced thermalization dynamics of the disordered
sites with increasing distance from the interface (Fig. 3b). Moreover, the
data suggest that the dynamics are first stationary until thermalization sets
in with a delay that is exponential in the site’s distance from the interface.
This picture is confirmed by our exact numerical calculations.
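A hedged sketch of the site-resolved entropy extraction from the same snapshots follows; it simply uses the empirical atom-number histogram, without the finite-sampling corrections a careful analysis would require.

```python
import numpy as np

def entropy_per_particle(snapshots, site):
    """s_i = -sum_n p(n) log p(n) / <n_i>, from the measured atom-number
    distribution on a single site."""
    counts = np.asarray(snapshots)[:, site]
    _, freq = np.unique(counts, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log(p)) / counts.mean()
```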
The signatures for quantum avalanches imply that many-body processes drive the
long-term dynamics of the system. We investigate this effect through
multipoint correlations Kubo (1962); Rispoli _et al._ (2019). The presence of
non-zero three-point connected correlations
$\langle\hat{n}_{i}\hat{n}_{j}\hat{n}_{k}\rangle_{c}$ signals the presence of
entanglement among all involved lattice sites, which cannot be explained by
lower order processes. We start by evaluating the connected correlations
$\langle\hat{n}_{i}\hat{n}_{j}\hat{n}_{\text{dis}}\rangle_{c}$ among two clean
lattice sites $i$, $j$ and a disordered site $k$, summed over all possible $k$
(Fig. 4a). The correlations are non-zero across the clean region, and their
homogeneous distribution indicates that all clean sites contribute equally to
the delocalization in the disordered region. In contrast, when evaluating the
connected correlations
$\langle\hat{n}_{i}\hat{n}_{j}\hat{n}_{\text{clean}}\rangle_{c}$ among two
disordered sites $i$, $j$ and a clean site $k$, averaged over all possible $k$
(Fig. 4b), the data show a strong dependence on the involved disordered sites.
Close to the interface we find strong correlations, whereas they are absent
for distant sites. We quantify the presence of many-body correlations at
different disorder strengths and find a maximum at intermediate strengths
(Fig. 4c,d), close to the estimated critical point of the system Rispoli _et
al._ (2019).
In conclusion, we experimentally studied signatures of quantum avalanches in
an MBL system, set in motion by a thermal inclusion. We observed an
accelerated intrusion of the bath in the MBL system, its evolution to thermal
equilibrium site after site, and the many-body entanglement between the two
subsystems. By varying the size $L_{\text{clean}}$, we studied the emergence
of quantum avalanches for an increased number of degrees of freedom of the bath.
In the future, our experiments can be readily extended in many ways. For example,
one could more systematically study the fate of quantum avalanches as a
function of bath size and localization length. By increasing the system size of
both the clean and the disordered region, one could explore the interplay at intermediate
disorder strengths in a quantitative way through its scaling behaviour, i.e. by
increasing the system size at constant ratio of $L_{\text{clean}}$ and
$L_{\text{dis}}$, which may provide insight into the critical behaviour of the
transition. An interesting extension would also be the influence of the
statistical distribution of the disorder on the critical behaviour of the
system.
We acknowledge fruitful discussions with K. Agarwal, V. Khemani, M. Knap, M.
Lebrat and J. Marino. We are supported by grants from the National Science
Foundation, the Gordon and Betty Moore Foundation's EPiQS Initiative, an Air
Force Office of Scientific Research MURI program, an Army Research Office MURI
program, the Swiss National Science Foundation (J. L.), and the NSF Graduate
Research Fellowship Program (S. K.).
∗ current address: Vienna Center for Quantum Science and Technology,
Atominstitut, TU Wien, Vienna, Austria; † These authors contributed equally to
this work; $\ddagger$ email<EMAIL_ADDRESS>
## References
* Alet and Laflorencie (2018) F. Alet and N. Laflorencie, Comptes Rendus Physique 19, 498 (2018), arXiv:1711.03145 .
* Abanin _et al._ (2019) D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Reviews of Modern Physics 91, 21001 (2019).
* Schreiber _et al._ (2015) M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Science 349, 842 (2015).
* Smith _et al._ (2016) J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Nat. Phys. 12, 907 (2016).
* Choi _et al._ (2016) J.-y. Choi, S. Hild, J. Zeiher, P. Schauß, A. Rubio-abadal, T. Yefsah, V. Khemani, D. A. Huse, I. Bloch, and C. Gross, Science 352, 1547 (2016).
* Rubio-Abadal _et al._ (2019) A. Rubio-Abadal, J. Y. Choi, J. Zeiher, S. Hollerith, J. Rui, I. Bloch, and C. Gross, Physical Review X 9, 41014 (2019), arXiv:1805.00056 .
* Lukin _et al._ (2019) A. Lukin, M. Rispoli, R. Schittko, M. E. Tai, A. M. Kaufman, S. Choi, V. Khemani, J. Léonard, and M. Greiner, Science 260, 256 (2019).
* Lüschen _et al._ (2017) H. P. Lüschen, P. Bordia, S. Scherg, F. Alet, E. Altman, U. Schneider, and I. Bloch, Physical Review Letters 119, 260401 (2017).
* Rispoli _et al._ (2019) M. Rispoli, A. Lukin, R. Schittko, S. Kim, M. E. Tai, J. Léonard, and M. Greiner, Nature 573, 385 (2019), arXiv:1812.06959 .
* (10) D. A. Abanin, J. H. Bardarson, G. de Tomasi, S. Gopalakrishnan, V. Khemani, S. A. Parameswaran, F. Pollmann, A. C. Potter, M. Serbyn, and R. Vasseur, arXiv:1911.04501 .
* Panda _et al._ (2019) R. K. Panda, A. Scardicchio, M. Schulz, S. R. Taylor, and M. Žnidarič, Europhys. Lett. 128 (2019), 10.1209/0295-5075/128/67003, arXiv:1911.07882 .
* Sierant _et al._ (2020) P. Sierant, D. Delande, and J. Zakrzewski, Physical Review Letters 124, 186601 (2020), arXiv:1911.06221 .
* Šuntajs _et al._ (2020a) J. Šuntajs, J. Bonča, T. Prosen, and L. Vidmar, Physical Review B 102, 1 (2020a), arXiv:2004.01719 .
* Šuntajs _et al._ (2020b) J. Šuntajs, J. Bonca, T. Prosen, and L. Vidmar, Phys. Rev. E 102, 062144 (2020b).
* Luitz and Lev (2020) D. J. Luitz and Y. B. Lev, arXiv 102, 100202 (2020), arXiv:2007.13767 .
* Kiefer-Emmanouilidis _et al._ (2020a) M. Kiefer-Emmanouilidis, R. Unanyan, M. Fleischhauer, and J. Sirker, Physical Review Letters 124, 243601 (2020a), arXiv:2003.04849 .
* Kiefer-Emmanouilidis _et al._ (2020b) M. Kiefer-Emmanouilidis, R. Unanyan, M. Fleischhauer, and J. Sirker, arXiv (2020b), arXiv:2010.00565 .
* Agarwal _et al._ (2015) K. Agarwal, S. Gopalakrishnan, M. Knap, M. Müller, and E. Demler, Physical Review Letters 114, 1 (2015), arXiv:1408.3413 .
* Bar Lev _et al._ (2015) Y. Bar Lev, G. Cohen, and D. R. Reichman, Physical Review Letters 114, 1 (2015), arXiv:1407.7535 .
* Žnidarič _et al._ (2016) M. Žnidarič, A. Scardicchio, and V. K. Varma, Physical Review Letters 117, 1 (2016), arXiv:1604.08567 .
* Gopalakrishnan _et al._ (2016) S. Gopalakrishnan, K. Agarwal, E. A. Demler, D. A. Huse, and M. Knap, 134206, 1 (2016).
* Agarwal _et al._ (2017) K. Agarwal, E. Altman, E. Demler, S. Gopalakrishnan, D. A. Huse, and M. Knap, Annalen der Physik 529, 1 (2017), arXiv:1611.00770 .
* Potter _et al._ (2015) A. C. Potter, R. Vasseur, and S. A. Parameswaran, Physical Review X 5, 031033 (2015).
* Vosk _et al._ (2015) R. Vosk, D. A. Huse, and E. Altman, Physical Review X 5, 1 (2015), arXiv:1412.3117 .
* Gopalakrishnan _et al._ (2015) S. Gopalakrishnan, M. Müller, V. Khemani, M. Knap, E. Demler, and D. A. Huse, Physical Review B - Condensed Matter and Materials Physics 92, 1 (2015), arXiv:1502.07712 .
* Weiner _et al._ (2019) F. Weiner, F. Evers, and S. Bera, Physical Review B 100, 1 (2019), arXiv:1904.06928 .
* Khemani _et al._ (2017a) V. Khemani, S. P. Lim, D. N. Sheng, and D. A. Huse, Physical Review X 7, 021013 (2017a).
* Khemani _et al._ (2017b) V. Khemani, D. N. Sheng, and D. A. Huse, 075702, 1 (2017b).
* Griffiths (1969) R. B. Griffiths, Phys. Rev. Lett. 23, 17 (1969).
* McCoy (1969) B. M. McCoy, Physical Review Letters 23, 383 (1969).
* Nandkishore and Gopalakrishnan (2017) R. Nandkishore and S. Gopalakrishnan, Ann. Phys. (Berlin) 529, 1600181 (2017).
* De Roeck and Huveneers (2017) W. De Roeck and F. Huveneers, Physical Review B 95, 1 (2017), arXiv:1608.01815 .
* Luitz _et al._ (2017) D. J. Luitz, F. Huveneers, and W. De Roeck, Physical Review Letters 119, 150602 (2017).
* Thiery _et al._ (2018) T. Thiery, F. Huveneers, M. Müller, and W. De Roeck, Physical Review Letters 121, 1 (2018), arXiv:1706.09338 .
* Crowley and Chandran (2020) P. J. Crowley and A. Chandran, Physical Review Research 2, 033262 (2020), arXiv:1910.10812 .
* Setiawan _et al._ (2017) F. Setiawan, D. L. Deng, and J. H. Pixley, Physical Review B 96, 1 (2017), arXiv:1707.02984 .
* Kaufman _et al._ (2016) A. M. Kaufman, M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, and M. Greiner, Science 353, 794 (2016).
* Kubo (1962) R. Kubo, J. Phys. Soc. Jpn. 17, 1100 (1962).
* Zupancic _et al._ (2016) P. Zupancic, P. M. Preiss, R. Ma, M. E. Tai, M. Rispoli, and R. Islam, Optics Express 24, 13881 (2016).
* Sidje (1998) R. B. Sidje, ACM Trans. Math. Softw. 24, 130 (1998).
## I Supplementary information
### I.1 Experimental sequence
Mott insulator preparation. All described experiments start with a Bose-
Einstein condensate of bosonic 87Rb atoms in the $|F=1,m_{F}=-1\rangle$
hyperfine state. This ultracold gas is loaded into a single 2D plane of a deep
lattice along the vertical direction with lattice constant $1.5\mu m$ at laser
wavelength $760\,\text{nm}$. This lattice stays on for the remainder of the
experiment. We use an attractive dimple potential to isolate a controlled
number of atoms from the 2D gas and load them into the center of a repulsive
ring-shaped potential, created from a second laser beam at wavelength
$760\,\text{nm}$. At this point the atoms form a two-dimensional superfluid
with harmonic in-plane confinement. We then ramp up further laser beams at
wavelength $760\,\text{nm}$ over $250\,\text{ms}$ to create a repulsive two-
dimensional square lattice with lattice constant $a=680\,\text{nm}$ in both
directions and lattice depth $45E_{\text{r}}$, where
$E_{\text{r}}=h^{2}/(2ma^{2})=h\times 1.1\,\text{kHz}$ is the recoil energy of
a 87Rb atom of mass $m$.
Initial state preparation. We use two digital micro-mirror devices (DMD) to
project repulsive potentials onto the Mott insulator. The DMDs are placed in
the Fourier plane with respect to the atoms, which allows us to project
diffraction limited arbitrary potentials that correct for optical wavefront
aberrations in the imaging system Zupancic _et al._ (2016). We optically
confine a single chain of $L=L_{\text{clean}}+L_{\text{dis}}$ lattice sites
within the Mott insulator’s unity-filling shell, and subsequently ramp down
the power of the optical lattice. We use a repulsive deconfining beam to eject
all atoms outside the confinement potential, while each atom within the
projected confinement potential remains pinned on its lattice site. We then
ramp the lattice back to $45E_{\text{r}}$ and remove the confining DMD
potential. After post-selecting for the atom number $N=L$, this procedure
results in an initial state of $99.1(2)\,\%$ fidelity per site.
Quantum quench and state evolution. We use the first DMD to project a “wall-
potential” on the adjacent sites around the one-dimensional system, which
provides a box-like confinement. This potential is registered to the position
of the optical lattice and defines the size of the one-dimensional system. We
simultaneously use the second DMD to project a custom, quasi-periodic disorder
potential onto the disordered region of the system. The disorder strength $W$
is tuned by the intensity of the DMD potential. The quantum quench is
initiated by lowering the lattice depth along the one-dimensional system from
$45E_{\text{r}}$ to $8E_{\text{r}}$. After a variable evolution time we freeze
the dynamics by ramping the lattice back to $45E_{\text{r}}$.
Full quantum state read out. We first let the atom populations located on
individual lattice sites expand into independent tubes and use fluorescence
imaging with an optical molasses beam to perform a site-resolved atom number
measurement. The expansion step before the imaging procedure is employed to
avoid parity projection during the imaging process. We subsequently post-
select our data by excluding any images which do not contain the correct total
number of atoms. The error in postselection, that is the fraction of falsely
post-selected snapshots due to the finite readout fidelity, is $<0.1\,\%$ for
all the experiments, small compared to the statistical error in the data.
### I.2 Calibration of Hamiltonian parameters
The calibration procedure for the Bose-Hubbard parameters is identical to the
one described in Lukin _et al._ (2019). We obtain $J=h\times
37.5(1)\,\text{Hz}$ and $U=h\times 107(1)\,\text{Hz}$.
### I.3 Multi-point correlations
Generically, an $n^{\mathrm{th}}$-order correlation function can be measured
from a set of operators $\mathcal{O}_{i}$ by their joint expectation value
$\langle\prod_{i=1}^{n}\mathcal{O}_{i}\rangle=\langle\mathcal{O}_{1}\mathcal{O}_{2}...\mathcal{O}_{n}\rangle$.
However, this joint expectation value captures two kinds of information:
“disconnected” correlations that exist at $n^{\mathrm{th}}$ order due to
existing lower order correlations, and “connected” correlations that only
exist at order $n$ and cannot be described by factorization into correlations
of lower order Kubo (1962).
In the two-point case, this would mean comparing the measured value of
$\langle\mathcal{O}_{i}\mathcal{O}_{j}\rangle$ to the product of their
individual expectation values
$\langle\mathcal{O}_{i}\rangle\langle\mathcal{O}_{j}\rangle$. The “connected”
part of the correlation between $i$ and $j$ is defined as the correlations
that remain after removing the contributions from factorization into smaller
groups. This motivates the definition of
$\langle\mathcal{O}_{i}\mathcal{O}_{j}\rangle_{\text{c}}=\langle\mathcal{O}_{i}\mathcal{O}_{j}\rangle-\langle\mathcal{O}_{i}\rangle\langle\mathcal{O}_{j}\rangle$.
For a three-point connected correlation function, we must subtract out
contributions that come from connected two-point correlations that can look
like three-point correlations when randomly combined with a residual 1-point
correlation. This is how the connected three-point correlation function is
defined in the main text for the on-site number operator $\hat{n}_{i}$.
$\langle\mathcal{O}_{i}\mathcal{O}_{j}\mathcal{O}_{k}\rangle_{\text{c}}=\langle\mathcal{O}_{i}\mathcal{O}_{j}\mathcal{O}_{k}\rangle-G_{\text{c}}^{(2)}(i,j)\langle\mathcal{O}_{k}\rangle-G_{\text{c}}^{(2)}(i,k)\langle\mathcal{O}_{j}\rangle-G_{\text{c}}^{(2)}(j,k)\langle\mathcal{O}_{i}\rangle-\langle\mathcal{O}_{i}\rangle\langle\mathcal{O}_{j}\rangle\langle\mathcal{O}_{k}\rangle$
Higher order multi-point correlations can be constructed in a similar way
Rispoli _et al._ (2019).
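For the density operators used in the main text, the subtraction can be written directly from snapshots; this is only a sketch and assumes enough statistics for the sample means to be meaningful.

```python
import numpy as np

def connected_three_point(snapshots, i, j, k):
    """<n_i n_j n_k>_c: subtract the three two-point connected parts and the
    fully factorised contribution from the raw three-point expectation value."""
    n = np.asarray(snapshots, dtype=float)
    m = n.mean(axis=0)
    def g2(a, b):                                   # connected two-point part
        return (n[:, a] * n[:, b]).mean() - m[a] * m[b]
    raw = (n[:, i] * n[:, j] * n[:, k]).mean()
    return (raw
            - g2(i, j) * m[k] - g2(i, k) * m[j] - g2(j, k) * m[i]
            - m[i] * m[j] * m[k])
```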
### I.4 Numerical calculations
The experimentally studied system sizes have Hilbert-space dimensions of up to
$1.3\times 10^{6}$ ($L=12$, $N=12$). Due to the non-equilibrium evolution and
the disorder, matrix diagonalization for such systems is computationally
challenging. Instead, we implement an exact numerical integration of
Schrödinger’s equation $\ket{\psi(t)}=e^{-i\hat{H}t/\hbar}\ket{\psi_{0}}$
based on the Krylov-subspace method Sidje (1998). This method provides a
memory- and CPU-run-time-efficient way to numerically compute the time
evolution while achieving high, controlled precision. All numerical
calculations are averaged over 200 different realizations of the quasi-
periodic potential. The computations are performed on the Harvard Odyssey
computing cluster (for specifications see:
https://www.rc.fas.harvard.edu/odyssey/).
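In Python, the same idea of applying the propagator without diagonalisation can be reproduced with SciPy's action-of-the-matrix-exponential routine (not the Expokit code cited above, but serving the same purpose); the sketch below is intended for small Hamiltonians such as the one constructed earlier.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import expm_multiply

def evolve(H, psi0, t, hbar=1.0):
    """Return exp(-i H t / hbar) |psi0> without forming the full propagator."""
    return expm_multiply(-1j * t / hbar * csr_matrix(H),
                         np.asarray(psi0, dtype=complex))
```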
### I.5 Data Analysis
For all experiments we average over 197 patterns of quasi-periodic potentials,
each with a different phase $\phi$ of the quasi-periodic potential. The data
are taken from a running average over those patterns by randomly sampling a
given number of realizations and treating them as independent measurements of
the same system.
We extract the decay length $\xi_{d}$ by computing the first moment of the
non-local density-density correlations
$\xi_{d}=\sum_{i}i\langle\hat{n}_{i}\hat{n}_{j}\rangle_{c}$.
The single-site entropy in Fig. 3a is extracted from the edge sites. The edge
sites are most insensitive to the dynamics at the clean-disorder interface and
therefore provide a fair indicator of thermalization Khemani _et al._
(2017a).
Error bars are computed by resampling the set of snapshots with replacement
(bootstrapping).
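A minimal sketch of this bootstrap procedure, assuming the observable of interest can be written as a function of the snapshot array (the observable and the placeholder data below are ours):

```python
import numpy as np

def bootstrap_error(snapshots, observable, n_resamples=1000, seed=0):
    """Bootstrap standard error of observable(snapshots) obtained by
    resampling the snapshots with replacement."""
    rng = np.random.default_rng(seed)
    n = len(snapshots)
    estimates = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        estimates.append(observable(snapshots[idx]))
    return np.std(estimates, ddof=1)

# Example: error bar on the mean occupation of site 0
rng = np.random.default_rng(1)
snaps = rng.integers(0, 2, size=(300, 12))
print(bootstrap_error(snaps, lambda s: s[:, 0].mean()))
```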
The number of samples for each experiment is summarized in the following
table:
Figure | Number of samples
---|---
2a,b | 199 ($0\tau$), 86 ($1\tau$), 242 ($3.1\tau$), 294 ($10\tau$), 315 ($31.9\tau$), 456 ($100\tau$)
2c | 456 ($L_{\text{clean}}=0$), 835 ($L_{\text{clean}}=2$), 134 ($L_{\text{clean}}=4$), 456 ($L_{\text{clean}}=6$)
3a | 835 ($100\tau$)
3b,c | same samples as for Fig. 2a,b
4a,b | 456
4c,d | 85 ($W=2.9\,J$), 71 ($W=4.4\,J$), 553 ($W=5.5\,J$), 179 ($W=6.2\,J$), 198 ($W=7.0\,J$), 191 ($W=7.7\,J$), 200 ($W=8.4\,J$), 623 ($W=9.1\,J$), 237 ($W=9.6\,J$)
# Deep imaging with Milanković telescope: Linking merger history to kinematics
of elliptical galaxies
###### Abstract
Kinematical and morphological features observed in early-type galaxies provide
valuable insights into the evolution of their hosts. We studied the origin of
prolate rotation (i.e., rotation around the long axis) in Illustris large-
scale cosmological hydrodynamical simulations. We found that basically all the
simulated massive prolate rotators were created in relatively recent major
mergers of galaxies. Such mergers are expected to produce tidal features such
as tails, shells, and asymmetric stellar halos.
We investigated deep optical images of prolate rotators, including newly
obtained Milanković data, revealing signs of galaxy interaction in all of
them. This correlation proves to be statistically very significant when
compared with a general sample of early-type galaxies from the MATLAS deep
imaging survey.
In an ongoing project, we use Milanković to assemble deep images of the
complete sample of all known nearby massive prolate rotators. Additionally, we
searched these data for asteroids, improving the accuracy of their trajectories
and even discovering one previously unknown main-belt asteroid.
The most frequent tidal features among the prolate rotators happen to be
shells. We developed methods to calculate the probable time of the merger from
optical images. This will allow us to compare the merger history of the sample
with predictions from Illustris. Our plan is to expand these methods to even
larger samples of shell galaxies supplied by upcoming large surveys like LSST
at Rubin Observatory. This will provide an unprecedented amount of
statistically significant data on the recent merger history of our Universe
and allow extensive investigation of the impact of mergers on a wide range of
other astrophysical phenomena.
IVANA EBROVÁ 1,∗, MICHAL BÍLEK 1,2,3, ANA LALOVIĆ 4, MUSTAFA K. YILDIZ 5,6,
PIERRE-ALAIN DUC 7, MARTIN MAŠEK 1, and MICHAEL PROUZA 1
1FZU -- Institute of Physics of the Czech Academy of Sciences, Na Slovance
1999/2, 182 00 Prague, Czechia
∗E-mail<EMAIL_ADDRESS>
2LERMA, Observatoire de Paris, CNRS, PSL Univ., Sorbonne Univ., 75014 Paris,
France
3Collège de France, 11 place Marcelin Berthelot, 75005 Paris, France
4Astronomical Observatory, Volgina 7, 11060 Belgrade, Serbia
5Astronomy and Space Sciences Department, Science Faculty, Erciyes University,
Kayseri, 38039 Türkiye
6Erciyes University, Astronomy and Space Sciences Observatory Applied and
Research Center (UZAYBİMER), 38039, Kayseri, Türkiye
7Université de Strasbourg, CNRS, Observatoire astronomique de Strasbourg
(ObAS), UMR 7550, 67000 Strasbourg, France
## 1 INTRODUCTION
The closer we look at a galaxy, the more remarkable characteristics we see.
Integral-field spectroscopy of inner parts reveals kinematical peculiarities,
while deep imaging can uncover various tidal features in the outskirts. Even
many elliptical galaxies that previously seemed featureless have become highly
attractive objects to study. Our work explores the connections between the
kinematical and morphological attributes in early-type galaxies (ETGs). To
unveil these connections, we combine and utilize data and findings from large-
scale cosmological simulations on the theoretical side, and integral-field
spectroscopy as well as ultra deep imaging on the observational side.
Figure 1: Surface density (top row) and kinematics (bottom row) of stellar
particles of two galaxies from the Illustris simulation. Left column: galaxy
with a normal disky rotation and with stellar shells visible in the top panel.
Right column: galaxy with prolate rotation and a kinematically distinct core.
## 2 RESULTS
### 2.1 Simulations
In Ebrová et al. (2021b) and Ebrová and Łokas (2017), we studied galaxies with
kinematical peculiarities in the Illustris project – hydrodynamic cosmological
simulations (Vogelsberger et al., 2014; Nelson et al., 2015). We examined the
formation and merger histories of selected galaxies drawn from a global sample
of 7697 Illustris galaxies with more than $10^{4}$ stellar particles (i.e.,
LMC-like and heavier) in the last snapshot of the Illustris-1 run. By visually
inspecting kinematic maps, we identified 134 galaxies with kinematically
distinct cores (KDCs) and automatically selected 59 galaxies with prolate
rotation.
Fig.1 shows two galaxies from Illustris. On the left, an ETG that maintained
normal disky rotation, even though the galaxy suffered a relatively recent
merger, as can be inferred from the presence of stellar shells in the surface
density map. On the right, an ETG with a kinematically distinct core (KDC) and
prolate rotation (also known as “minor-axis rotation”). Prolate rotators
exhibit a substantial misalignment between the photometric and kinematic axes;
in other words, the galaxy appears to be rotating predominantly around its
major morphological axis. The galaxy on the right had both kinematic features
created in a major 1:1 merger 6.1 Gyr before the end of the simulation.
Figure 2: Correlation of the birth of prolate rotation (left) and KDCs (right)
with the time of the merger experienced by the host galaxy in the Illustris
simulation. For prolate rotators, the merger time represents the last
significant merger, while for the KDC hosts, it indicates the time of the
merger closest to the KDC birth. Circle areas are proportional to the host
stellar mass at the end of the simulation.
While both prolate rotation and KDCs can emerge from mergers, the Illustris
data indicate systematic differences between the two. Prolate rotation is more
common among massive ETGs, consistent with observations. In contrast, KDCs
display no clear dependence on the host mass and environment. We specifically
examined the role of galaxy mergers in creating both kinematic features, see
Fig.2. KDCs more often have other origins, and if their origins are associated
with mergers, the mergers can be minor or ancient. Other KDCs are induced by
galaxy fly-bys or arise without an apparent cause. Moreover, KDCs can be long-
lasting features and survive subsequent significant mergers.
On the other hand, basically all massive prolate rotators were created in
major mergers (at least 1:5) during the last 6 Gyr of the Illustris
simulation. Such mergers are expected to produce tidal features that should
be, in the majority of cases, visible in current deep imaging surveys. Based
on the Illustris data, we predicted that the frequency of tidal features in
host galaxies should be higher for prolate rotation than for KDCs.
Figure 3: Images from the Milanković telescope: recent deep images of three
prolate rotators and, in the bottom right panel, the newly discovered asteroid
2022 TO6.
### 2.2 Observations
In Ebrová et al. (2021a) we examined 19 observed prolate rotators with
available deep optical images and found morphological signs of galaxy
interaction in all of them, which proves to be a statistically very
significant correlation when compared with a general sample of ETGs in MATLAS
– a deep imaging survey (Bílek et al., 2020).
In our current project, we use the Serbian 1.4m Milanković telescope at the
Astronomical Station Vidojevica to assemble deep optical images of the
complete sample of all known nearby massive prolate rotators. Between Feb 2021
and Oct 2023 we observed 5 out of 8 additional prolate rotators, each with at
least 5.5 h of integrated on-source exposure time in the $L$-band. Fig.3 shows
preliminary processed deep images of three of these prolate rotators. All show
signs of galaxy interactions.
### 2.3 Small Solar System bodies
We search and measure small Solar System bodies as a by-product of deep
imaging of galaxies with the Milanković telescope. We use Tycho-Tracker to
explore the field of view of the images of prolate rotators and other
galaxies. In some cases, we performed follow-up and dedicated observations. So
far, we have significantly improved trajectory measurements of more than 50
objects of 17 – 22 mag and even one asteroid as faint as 23.2 mag in stacked
images.
However, the most remarkable result is the discovery of a previously unknown main-
belt asteroid that received a temporary designation 2022 TO6, see the bottom
right panel of Fig.3 – the stacked image of 4$\times$300 s exposures, each
centered on the asteroid before stacking. As far as we know, this is the first
and currently the sole asteroid discovery at the Astronomical Station
Vidojevica.
### 2.4 Merger histories
The most frequent tidal features among the prolate rotators happen to be
stellar shells. Therefore, we can estimate the timing of mergers for a large
portion of the sample and compare it with the predictions of the merger
history of prolate rotators in the Illustris simulation.
We developed the ‘shell identification method’ in Bílek et al. (2013) and
Bílek et al. (2014). So far, we have applied it to several special cases of
shell galaxies to explore the host gravitational potential and derive the time
of the galaxy mergers undergone by the hosts (Bílek et al., 2014; Ebrová et
al., 2020; Bílek et al., 2022; see also Bílek et al., 2015).
There are hundreds of known shell galaxies, even more are hidden in current data,
and many more will be observed in the next few years in upcoming large deep
surveys like the Large Survey of Space and Time (LSST) at the Vera C. Rubin
Observatory. We are developing tools to extract fairly accurate estimates of
the merger times for large samples. This will transform shell galaxies from a
position of curiosity to utility, allowing statistical applications using the
merger data on thousands of shell galaxies. This will provide an unprecedented
amount of data on the recent merger history of our Universe and allow
extensive investigation of the impact of mergers on a wide range of other
astrophysical phenomena such as star formation, stellar dynamics, active
galactic nuclei, transient events, and more.
## 3 CONCLUSIONS
We investigated the link between kinematical and morphological features in
early-type galaxies through simulations as well as observations. In Illustris,
we found that while the origin of kinematically distinct cores is partially
associated with mergers of galaxies, basically all massive prolate rotators
were created in relatively recent major mergers. Such mergers are expected to
leave tidal features that should be detectable today in sufficiently deep
images.
In our analysis of available observational data, we found an overabundance of
morphological signs of galaxy interactions in prolate rotators. With the help
of the Milanković telescope, we are assembling deep optical images of the
complete sample of all known nearby massive prolate rotators. Initial data
processing confirms the high statistical significance of previous findings,
showcasing Milanković as a valuable tool for studying low-surface-brightness
tidal features. Additionally, the same data can be used to search for small
Solar System bodies. We significantly improved the accuracy of trajectories
for more than 50 such objects and even discovered a previously unknown main-
belt asteroid 2022 TO6.
The most frequent tidal features among the prolate rotators are stellar shells
that can be used to constrain the time since the merger. That will enable us
to compare merger histories of prolate rotators with the Illustris
predictions. Our plan is to extend such analyses to hundreds, potentially
thousands, of shell galaxies. This will significantly deepen our understanding
of the impact of mergers on various astrophysical phenomena.
Acknowledgements
This project has received funding from the European Union’s Horizon Europe
Research and Innovation program under the Marie Skłodowska-Curie grant
agreement No. 101067618. We acknowledge support by the Astronomical Station
Vidojevica and funding from the Ministry of science, technological development
and innovation of the Republic of Serbia, contract No.
451-03-47/2023-01/200002 and by the EC through project BELISSIMA (call
FP7-REGPOT-2010-5, No. 256772). IE acknowledges the support from the Polish
National Science Centre under the grant 2017/26/D/ST9/00449.
## References
* Bílek et al. (2013) M. Bílek, B. Jungwiert, L. Jílková, I. Ebrová, K. Bartošková, and M. Křížek. Testing MOND gravity in the shell galaxy NGC 3923. _A &A_, 559:A110, Nov. 2013. doi: 10.1051/0004-6361/201322060.
* Bílek et al. (2014) M. Bílek, K. Bartošková, I. Ebrová, and B. Jungwiert. MOND prediction of a new giant shell in the elliptical galaxy NGC 3923. _A &A_, 566:A151, June 2014. doi: 10.1051/0004-6361/201423935.
* Bílek et al. (2015) M. Bílek, I. Ebrová, B. Jungwiert, L. Jílková, and K. Bartošková. Shell galaxies as laboratories for testing MOND. _Canadian Journal of Physics_ , 93(2):203–212, Feb. 2015. doi: 10.1139/cjp-2014-0170.
* Bílek et al. (2020) M. Bílek, P.-A. Duc, J.-C. Cuillandre, S. Gwyn, M. Cappellari, D. V. Bekaert, P. Bonfini, T. Bitsakis, S. Paudel, D. Krajnović, P. R. Durrell, and F. Marleau. Census and classification of low-surface-brightness structures in nearby early-type galaxies from the MATLAS survey. _MNRAS_ , 498(2):2138–2166, Aug. 2020. doi: 10.1093/mnras/staa2248.
* Bílek et al. (2022) M. Bílek, J. Fensch, I. Ebrová, S. T. Nagesh, B. Famaey, P.-A. Duc, and P. Kroupa. Origin of the spectacular tidal shells of galaxy NGC 474. _A &A_, 660:A28, Apr. 2022. doi: 10.1051/0004-6361/202141709.
* Ebrová and Łokas (2017) I. Ebrová and E. L. Łokas. Galaxies with Prolate Rotation in Illustris. _ApJ_ , 850(2):144, Dec. 2017. doi: 10.3847/1538-4357/aa96ff.
* Ebrová et al. (2020) I. Ebrová, M. Bílek, M. K. Yıldız, and J. Eliášek. NGC 4993, the shell galaxy host of GW170817: constraints on the recent galactic merger. _A &A_, 634:A73, Feb. 2020. doi: 10.1051/0004-6361/201935219.
* Ebrová et al. (2021a) I. Ebrová, M. Bílek, A. Vudragović, M. K. Yıldız, and P.-A. Duc. Ubiquitous signs of interactions in early-type galaxies with prolate rotation. _A &A_, 650:A50, June 2021a. doi: 10.1051/0004-6361/202140588.
* Ebrová et al. (2021b) I. Ebrová, E. L. Łokas, and J. Eliášek. Galaxies with kinematically distinct cores in Illustris. _A &A_, 647:A103, Mar. 2021b. doi: 10.1051/0004-6361/202039562.
* Nelson et al. (2015) D. Nelson, A. Pillepich, S. Genel, M. Vogelsberger, V. Springel, P. Torrey, V. Rodriguez-Gomez, D. Sijacki, G. F. Snyder, B. Griffen, F. Marinacci, L. Blecha, L. Sales, D. Xu, and L. Hernquist. The illustris simulation: Public data release. _Astronomy and Computing_ , 13:12–37, Nov. 2015. doi: 10.1016/j.ascom.2015.09.003.
* Vogelsberger et al. (2014) M. Vogelsberger, S. Genel, V. Springel, P. Torrey, D. Sijacki, D. Xu, G. Snyder, S. Bird, D. Nelson, and L. Hernquist. Properties of galaxies reproduced by a hydrodynamic simulation. _Natur_ , 509:177–182, May 2014. doi: 10.1038/nature13316.
# A Fragile multi-CPR Game††thanks: Research was supported by the Hellenic
Foundation for Research and Innovation (H.F.R.I.) under the “First Call for
H.F.R.I. Research Projects to support Faculty members and Researchers and the
procurement of high-cost research equipment grant” (Project Number:
HFRI-FM17-2436).
Christos Pelekis School of Electrical and Computer Engineering, National
Technical University of Athens, Zografou, Greece, 15780, e-mail:
<EMAIL_ADDRESS>Panagiotis Promponas School of Electrical and Computer
Engineering, National Technical University of Athens, Zografou, Greece, 15780,
e-mail<EMAIL_ADDRESS>Juan Alvarado KU Leuven, Department of
Computer Sciences, Celestijnenlaan 200A, 3001, Belgium, e-mail:
<EMAIL_ADDRESS>Eirini Eleni Tsiropoulou Department of Electrical
and Computer Engineering, University of New Mexico, New Mexico, USA, 87131,
e-mail<EMAIL_ADDRESS>Symeon Papavassiliou School of Electrical and Computer
Engineering, National Technical University of Athens, Zografou, Greece, 15780,
e-mail<EMAIL_ADDRESS>
###### Abstract
A Fragile CPR Game is an instance of a resource sharing game where a common-
pool resource, which is prone to failure due to overuse, is shared among
several players. Each player has a fixed initial endowment and is faced with
the task of investing in the common-pool resource without forcing it to fail.
The return from the common-pool resource is subject to uncertainty and is
perceived by the players in a prospect-theoretic manner. It is shown in Hota
et al. [13] that, under some mild assumptions, a Fragile CPR Game admits a
unique Nash equilibrium. In this article we investigate an extended version of
a Fragile CPR Game, in which players are allowed to share multiple common-pool
resources that are also prone to failure due to overuse. We refer to this game
as a Fragile multi-CPR Game. Our main result states that, under some mild
assumptions, a Fragile multi-CPR Game admits a Generalized Nash equilibrium.
Moreover, we show that, when there are more players than common-pool
resources, the set consisting of all Generalized Nash equilibria of a Fragile
multi-CPR Game is of Lebesgue measure zero.
_Keywords and phrases_ : CPR games; prospect theory; Generalized Nash
equilibrium
_MSC(2010)_ : 91A06; 90C25
## 1 Prologue, related work and main results
In this article we shall be concerned with a _resource sharing game_. Such
games model instances in which a common-pool resource (henceforth CPR), which
is prone to failure due to overuse, is shared among several users who are
addressing the problem of choosing how much to exploit from / invest in the
CPR without forcing it to fail. Resource sharing games arise in a variety of
problems ranging from economics to computer science. Examples of CPRs include
arable lands, forests, fisheries, groundwater basins, spectrum and computing
resources, the atmosphere, among many others. Such CPRs are, on the one hand,
usually regenerative but, on the other hand, subject to failure when several
agents exploit the resource in an unsustainable manner. Each agent exploits /
invests in the CPR in order to obtain an individual benefit. However, it has
been observed that actions which are individually rational (e.g. Nash
equilibria) may result in outcomes that are collectively irrational, thus
giving rise to a particular social dilemma known as “the tragedy of the
commons” (see [12]). It is thus of interest to investigate equilibrium points
of resource sharing games, in order to better understand situations where such
a social dilemma arises. This is a topic that has drawn considerable
attention, both from a theoretical and a practical perspective. We refer the
reader to [1, 5, 13, 14, 17, 21, 22, 25, 26, 27] for applications, variations,
and for further references on resource sharing games. Let us remark that most
results in the literature appear to focus on games in which players invest in
a single CPR. In this article we investigate a resource sharing game in which
players are allowed to invest in several CPRs, whose performances are mutually
independent. To the best of our knowledge our work appears to be among the
first to consider resource sharing games on more than one CPR.
We shall be interested in a multi-version of a particular resource sharing
game, which is referred to as a _Fragile CPR Game_. It is initially introduced
in [13] and is played by several players, each of whom has a fixed initial
endowment and must decide how much to invest in the CPR without forcing it to
fail. The return from the CPR is subject to uncertainty, and is perceived by
the players in a prospect-theoretic manner. It is shown in [13] that a Fragile
CPR Game admits a unique Nash equilibrium. In this article we focus on an
extended version of a Fragile CPR Game in which players are allowed to share
multiple CPRs. We refer to the corresponding game as a _Fragile multi-CPR
Game_ and investigate its _Generalized Nash equilibria_. Our main result
states that the set consisting of all Generalized Nash equilibria of a Fragile
multi-CPR Game is non-empty and, when there are more players than CPRs,
“small” in a measure-theoretic sense. In the next subsection we introduce the
Fragile CPR Game and state the main result from [13]. We then proceed, in
Subsection 1.2, with defining the Fragile multi-CPR Game, which is the main
target of this work, and stating our main results.
### 1.1 Fragile CPR game
Throughout the text, given a positive integer $n$, we denote by $[n]$ the set
$\\{1,\ldots,n\\}$. In this article we extend a particular resource sharing
game to the case where the players are allowed to share multiple resources, by
determining how to distribute/invest their initial fixed endowment in the
available CPRs. The resource sharing game under consideration is referred to
as a _Fragile CPR Game_ , and may be seen as a prospect-theoretic version of
the _Standard CPR Game_ , introduced in [22, p. 109].
The Fragile CPR Game is introduced in [13], and is played by $n$ players, who
are assumed to be indexed by the set $[n]$. It is also assumed that there is a
single CPR, and each player has to decide how much to invest in the CPR. Each
player has an available endowment, which, without loss of generality, is
assumed to be equal to $1$. Every player, say $i\in[n]$, invests an amount
$x_{i}\in[0,1]$ in the CPR. The total investment of all players in the CPR is
denoted $\mathbf{x}_{T}=\sum_{i\in[n]}x_{i}$. The return from the CPR is
subject to uncertainty, that is there is a probability $p(\mathbf{x}_{T})$
that the CPR will fail, and this probability depends on the total investment
of the players in the CPR. In case the CPR fails, the players lose their
investment in the CPR. In case the CPR does not fail, then there is a _rate of
return_ from the CPR which depends on the total investment of all players, and
is denoted by $\mathcal{R}(\mathbf{x}_{T})$. The rate of return is assumed to
satisfy $\mathcal{R}(\mathbf{x}_{T})>1$, for all $\mathbf{x}_{T}$.
In other words, player $i\in[n]$ gains
$x_{i}\cdot\mathcal{R}(\mathbf{x}_{T})-x_{i}$ with probability
$1-p(\mathbf{x}_{T})$, and gains $-x_{i}$ with probability
$p(\mathbf{x}_{T})$. The situation is modelled through a prospect-theoretic
perspective, in the spirit of [16]. More precisely, let
$x^{(i)}=\sum_{j\in[n]\setminus\\{i\\}}x_{j}$; hence it holds
$x_{i}+x^{(i)}=\mathbf{x}_{T}$. Then the utility of player $i\in[n]$ is given
by the following utility function:
$\mathcal{V}_{i}(x_{i},x^{(i)})=\begin{cases}(x_{i}\cdot(\mathcal{R}(\mathbf{x}_{T})-1))^{a_{i}},&\text{
with probability }1-p(\mathbf{x}_{T}),\\\ -k_{i}x_{i}^{a_{i}},&\text{ with
probability }p(\mathbf{x}_{T}).\end{cases}$ (1)
The parameters $k_{i}$ and $a_{i}$ are fixed and player-specific. Let us note
that the parameter $k_{i}$ may be thought of as capturing the “behaviour” of
each player. More precisely, when $k_{i}>1$ then a player weighs losses more
than gains, a behaviour which is referred to as “loss averse”. On the other
hand, when $k_{i}\in[0,1]$ then a player weighs gains more than losses, a
behaviour which is referred to as “gain seeking”. Capturing behaviours of this
type among players constitutes a central aspect of prospect theory (see, for
example, [28]). Notice that when $k_{i}=1$ and $a_{i}=1$ then player $i\in[n]$
is _risk neutral_.
Each player of the Fragile CPR game is an expected utility maximizer, and
therefore chooses $x_{i}\in[0,1]$ that maximizes the expectation of
$\mathcal{V}_{i}(x_{i},x^{(i)})$, i.e., that maximizes the _utility_ of player
$i\in[n]$ which is given by
$\mathbb{E}\left(\mathcal{V}_{i}(x_{i},x^{(i)})\right)=x_{i}^{a_{i}}\cdot\mathcal{F}_{i}(\mathbf{x}_{T})\,,$
where
$\mathcal{F}_{i}(\mathbf{x}_{T})=(\mathcal{R}(\mathbf{x}_{T})-1)^{a_{i}}\cdot(1-p(\mathbf{x}_{T}))-k_{i}\cdot
p(\mathbf{x}_{T})$ (2)
is the _effective rate of return_ to player $i\in[n]$.
The main result in [13] establishes, among other things, the existence of a
unique Nash equilibrium for the Fragile CPR game, provided the following hold
true.
###### Assumption 1.
Consider a Fragile CPR game that satisfies the following properties.
1. 1.
It holds $p(0)=0$ and $p(\mathbf{x}_{T})=1$, whenever $\mathbf{x}_{T}\geq 1$.
2. 2.
$a_{i}\in(0,1]$ and $k_{i}>0$, for all $i\in[n]$.
3. 3.
For all $i\in[n]$ and all $\mathbf{x}_{T}\in(0,1)$ it holds
$\frac{\partial}{\partial\mathbf{x}_{T}}\mathcal{F}_{i}(\mathbf{x}_{T}),\frac{\partial^{2}}{\partial\mathbf{x}_{T}^{2}}\mathcal{F}_{i}(\mathbf{x}_{T})<0$,
where $\mathcal{F}_{i}$ is given by (2).
In other words, the third condition in Assumption 1 states that the effective
rate of return of all players is a strictly decreasing and concave function.
An example of an effective rate of return $\mathcal{F}_{i}$ satisfying the
conditions of Assumption 1 is obtained by choosing $a_{i}<1/2$,
$p(\mathbf{x}_{T})=\mathbf{x}_{T}^{2}$, and
$\mathcal{R}(\mathbf{x}_{T})=2-e^{\mathbf{x}_{T}-1}$, as can be easily
verified.
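For a quick numerical check of this example (an illustration of ours, not taken from [13]), one can evaluate $\mathcal{F}_{i}$ on a grid and verify that it is positive near $0$, strictly decreasing, and concave on $(0,1)$. A minimal Python sketch:

```python
import numpy as np

# Example from the text: a_i < 1/2, p(x) = x^2, R(x) = 2 - exp(x - 1)
a_i, k_i = 0.4, 1.2

def F(x):
    R = 2.0 - np.exp(x - 1.0)
    p = x**2
    return (R - 1.0)**a_i * (1.0 - p) - k_i * p

x = np.linspace(1e-4, 1.0 - 1e-4, 2001)
f = F(x)
df = np.gradient(f, x)        # first derivative (finite differences)
d2f = np.gradient(df, x)      # second derivative

print(f[0] > 0)               # F_i is positive near 0
print(np.all(df < 0))         # strictly decreasing on (0, 1)
print(np.all(d2f[5:-5] < 0))  # concave (away from the grid edges)
```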
Before proceeding with the main result from [13], let us recall here the
notion of Nash equilibrium, adjusted to the setting of the Fragile CPR Game.
###### Definition 1.
(Nash Equilibrium) A _Nash equilibrium_ for a Fragile CPR Game is a strategy
profile $(x_{1},\ldots,x_{n})\in[0,1]^{n}$ such that for all $i\in[n]$ it
holds:
$\mathbb{E}\left(\mathcal{V}_{i}(x_{i},x^{(i)})\right)\geq\mathbb{E}\left(\mathcal{V}_{i}(z_{i},x^{(i)})\right)\,,\text{
for all }\,z_{i}\in[0,1]\,.$
In other words, $(x_{1},\ldots,x_{n})\in[0,1]^{n}$ is a Nash equilibrium for a
Fragile CPR Game if no player can increase her utility by unilaterally
changing strategy. The main result in Hota et al. [13] reads as follows.
###### Theorem 1 ([13]).
Consider a Fragile CPR Game that satisfies Assumption 1. Then the game admits
a _unique_ Nash equilibrium.
We now proceed with defining the _Fragile multi–CPR Game_ , whose equilibria
are the main target of the present article.
### 1.2 Fragile multi-CPR game
In this article we introduce and investigate a multi-version of the Fragile
CPR game. In order to be more precise, we need some extra piece of notation.
If $m$ is a positive integer, let $C_{m}$ denote the set:
$C_{m}=\left\\{(x_{1},\ldots,x_{m})\in[0,1]^{m}:\sum_{i\in[m]}x_{i}\leq
1\right\\}\,.$ (3)
Moreover, let $\mathcal{C}_{n}$ denote the Cartesian product
$\prod_{i\in[n]}C_{m}$ and let
$\mathcal{C}_{-i}=\prod_{[n]\setminus\\{i\\}}C_{m}$ denote the Cartesian
product obtained from $\mathcal{C}_{n}$ by deleting its $i$-th component.
Elements in $\mathcal{C}_{-i}$ are denoted by $\mathbf{x}_{-i}$, as is
customary, and an element
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{n}$ is
occasionally written $\mathbf{x}=(\mathbf{x}_{i},\mathbf{x}_{-i})$, for
$i\in[n]$, $\mathbf{x}_{i}\in C_{m}$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$.
We now proceed with defining the _Fragile multi-CPR Game_.
Suppose that there are $n$ players, indexed by the set $[n]$, each having an
_initial endowment_ equal to $1$. Assume further that there are $m$ available
CPRs, where $m\geq 1$ is an integer. Every player has to decide how much to
invest in each CPR. More precisely, every player, say $i\in[n]$, chooses an
element $\mathbf{x}_{i}=(x_{i1},\ldots,x_{im})\in C_{m}$ and invests $x_{ij}$
in the $j$-th CPR. Given strategies $\mathbf{x}_{i}=(x_{i1},\ldots,x_{im})\in
C_{m},i\in[n]$, of the players and an integer $j\in[m]$, set
$\mathbf{x}_{T}^{(j)}=\sum_{i\in[n]}x_{ij}\quad\text{ and
}\quad\mathbf{x}_{T}^{j|i}=\sum_{\ell\in[n]\setminus\\{i\\}}x_{\ell j}\,.$ (4)
Hence it holds $\mathbf{x}_{T}^{(j)}=x_{ij}+\mathbf{x}_{T}^{j|i}$, for all
$i\in[n]$. In other words, $\mathbf{x}_{T}^{(j)}$ equals the total investment
of the players in the $j$-th CPR and $\mathbf{x}_{T}^{j|i}$ equals the total
investment of all players except player $i$ in the $j$-th CPR. As in the case
of the Fragile CPR Game, we assume that the performance of each CPR is subject
to uncertainty, and that each CPR has a corresponding rate of return, both
depending on the total investment of the players in each CPR. More precisely,
for $j\in[m]$, let $\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})$ denote the _return
rate_ of the $j$-th CPR and let $p_{j}(\mathbf{x}_{T}^{(j)})$ denote the
_probability that the $j$-th CPR fails_. We assume that
$\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})>1$ holds true, for all
$\mathbf{x}_{T}^{(j)}$.
The _utility_ of player $i\in[n]$ from the $j$-th CPR is given, as in the case
of the Fragile CPR game, via the following prospect-theoretic utility
function:
$\mathcal{V}_{ij}(x_{ij},\mathbf{x}_{T}^{j|i})=\begin{cases}(x_{ij}\cdot(\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})-1))^{a_{i}},&\text{
with probability }1-p_{j}(\mathbf{x}_{T}^{(j)}),\\\
-k_{i}x_{ij}^{a_{i}},&\text{ with probability
}p_{j}(\mathbf{x}_{T}^{(j)}).\end{cases}$ (5)
We assume that the performance of each CPR is _independent_ of the
performances of all remaining CPRs. Players in the Fragile multi-CPR Game are
expected utility maximizers. If player $i\in[n]$ plays the vector
$\mathbf{x}_{i}=(x_{i1},\ldots,x_{im})\in C_{m}$, and the rest of the players
play $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ then her expected utility from the
$j$-th CPR is equal to
$\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i}):=\mathbb{E}\left(\mathcal{V}_{ij}(x_{ij},\mathbf{x}_{T}^{j|i})\right)=x_{ij}^{a_{i}}\cdot\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})\,,$
(6)
where
$\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)}):=(\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})-1)^{a_{i}}(1-p_{j}(\mathbf{x}_{T}^{(j)}))-k_{i}p_{j}(\mathbf{x}_{T}^{(j)})$
(7)
is the _effective rate of return_ to the $i$-th player from the $j$-th CPR.
Notice that, since we assume that the performance of each CPR is independent
of the performances of the remaining CPRs, $\mathcal{E}_{ij}$ depends only on
the values of $x_{ij},\mathbf{x}_{T}^{j|i}$ and does not depend on the values
of $x_{ik},\mathbf{x}_{T}^{k|i}$, for $k\neq j$. In other words, the (total)
prospect-theoretic _utility_ of player $i\in[n]$ in the Fragile multi-CPR Game
is given by:
$\mathcal{V}_{i}(\mathbf{x}_{i};\mathbf{x}_{-i})=\sum_{j\in[m]}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})\,.$
(8)
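To make the notation concrete, the following minimal Python sketch (our own illustration, not from [13]) evaluates the utility (8) of one player, using the illustrative choices $\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})=c_{j}+1$ and $p_{j}(\mathbf{x}_{T}^{(j)})=\min(\mathbf{x}_{T}^{(j)},1)^{2}$ discussed around Assumptions 1 and 2; the function and variable names are ours.

```python
import numpy as np

def expected_utility(i, X, a, k, c):
    """Expected utility V_i of player i in a Fragile multi-CPR game.

    X : (n, m) array, X[i, j] = investment of player i in CPR j.
    a, k : player-specific prospect-theory parameters (length n).
    c : per-CPR constants; we take R_j(x) = c_j + 1 and p_j(x) = min(x, 1)**2
        purely as illustrative choices compatible with Assumption 2.
    """
    xT = X.sum(axis=0)                      # total investment per CPR
    p = np.minimum(xT, 1.0) ** 2            # failure probabilities p_j
    F = c ** a[i] * (1.0 - p) - k[i] * p    # effective rates of return F_ij
    return np.sum(X[i] ** a[i] * F)         # sum over CPRs of x_ij^a_i * F_ij

# Example: 3 players, 2 CPRs
X = np.array([[0.2, 0.3],
              [0.1, 0.2],
              [0.3, 0.1]])
a = np.array([0.4, 0.5, 0.3])
k = np.array([1.0, 1.5, 0.8])
c = np.array([1.0, 2.0])
print(expected_utility(0, X, a, k, c))
```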
In this article we establish the existence of a Generalized Nash equilibrium
for the Fragile multi-CPR game, provided the following holds true.
###### Assumption 2.
Consider a Fragile multi-CPR Game that satisfies the following properties:
1. 1.
For every $j\in[m]$ it holds $p_{j}(0)=0$ and $p_{j}(\mathbf{x}_{T}^{(j)})=1$,
whenever $\mathbf{x}_{T}^{(j)}\geq 1$.
2. 2.
It holds $a_{i}\in(0,1]$ and $k_{i}>0$, for all $i\in[n]$.
3. 3.
For all $i\in[n]$ and all $j\in[m]$ it holds
$\frac{\partial}{\partial\mathbf{x}_{T}^{(j)}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)}),\frac{\partial^{2}}{\partial(\mathbf{x}_{T}^{(j)})^{2}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})<0$,
where $\mathcal{F}_{ij}$ is given by (7).
Notice that, similarly to the Fragile CPR Game, the third condition in
Assumption 2 states that the effective rate of return of every player from any
CPR is a _strictly decreasing and concave_ function. An example of an
effective rate of return satisfying Assumption 2 is obtained by choosing, for
$j\in[m]$, the return rate of the $j$-th CPR to be equal to
$\mathcal{R}_{j}(\mathbf{x}_{T}^{(j)})=c_{j}+1$, where $c_{j}>0$ is a
constant, and the probability that the $j$-th CPR fails to be a strictly
increasing and convex function on the interval $[0,1]$ such that
$p_{j}(\mathbf{x}_{T}^{(j)})=1$ whenever $\mathbf{x}_{T}^{(j)}\geq 1$.
Before stating our main result, let us proceed with recalling the notion of
Generalized Nash equilibrium (see [11]).
Consider the, above-mentioned, Fragile multi-CPR Game, denoted $G$. Assume
further that, for each player $i\in[n]$, there exists a correspondence
$\vartheta_{i}:\mathcal{C}_{-i}\to 2^{C_{m}}$ mapping every element
$\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ to a set
$\vartheta_{i}(\mathbf{x}_{-i})\subset C_{m}$. The set-valued correspondence
$\vartheta_{i}$ is referred to as a _constraint policy_ and may be thought of
as determining the set of strategies that are feasible for player $i\in[n]$,
given $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$. We refer to the tuple
$(G,\\{\vartheta_{i}\\}_{i\in[n]})$ as the _Constrained Fragile multi-CPR
Game_ with constraint policies $\\{\vartheta_{i}\\}_{i\in[n]}$. Corresponding
to a constrained game is the following notion of _Constrained Nash
equilibrium_ (or _Generalized Nash equilibrium_):
###### Definition 2 (GNE).
A _Generalized Nash equilibrium_ for a Constrained Fragile multi-CPR Game
$(G,\\{\vartheta_{i}\\}_{i\in[n]})$ is a strategy profile
$\mathbf{x}^{\ast}=(\mathbf{x}_{1}^{\ast},\ldots,\mathbf{x}_{n}^{\ast})\in\mathcal{C}_{n}$
such that
1. 1.
For all $i\in[n]$, it holds
$\mathbf{x}_{i}^{\ast}\in\vartheta_{i}(\mathbf{x}_{-i}^{\ast})$, and
2. 2.
For all $i\in[n]$, it holds
$\mathcal{V}_{i}(\mathbf{x}_{i}^{\ast};\mathbf{x}_{-i}^{\ast})\geq\mathcal{V}_{i}(\mathbf{x}_{i};\mathbf{x}_{-i}^{\ast})$,
for all $\mathbf{x}_{i}\in\vartheta_{i}(\mathbf{x}_{-i}^{\ast})$, where
$\mathcal{V}_{i}(\,\cdot\,;\,\cdot\,)$ is the utility function of the $i$-th
player in a Fragile multi-CPR Game, given in (8).
In other words,
$\mathbf{x}^{\ast}=(\mathbf{x}_{1}^{\ast},\ldots,\mathbf{x}_{n}^{\ast})\in\mathcal{C}_{n}$
is a GNE if no player can increase her utility by unilaterally changing her
strategy to any other element of the set
$\vartheta_{i}(\mathbf{x}_{-i}^{\ast})$. We may now proceed with stating our
main results.
###### Theorem 2.
Consider a Fragile multi-CPR game, $G$, with $n\geq 1$ players and $m\geq 1$
CPRs, which satisfies Assumption 2. Then there exist constraint policies
$\\{\vartheta_{i}\\}_{i\in[n]}$ such that the Constrained Fragile multi-CPR
Game $(G,\\{\vartheta_{i}\\}_{i\in[n]})$ admits a Generalized Nash
equilibrium.
Given Theorem 2, it is natural to ask about the “size” of the set consisting
of all GNEs of a Fragile multi-CPR Game. Let us note that it is a well known
fact that Generalized Nash equilibrium problems tend to possess infinitely
many GNEs (see [11, p. 192]). In the case of a single CPR, i.e., when $m=1$,
the corresponding Constrained Fragile CPR Game admits a unique GNE.
###### Theorem 3.
Consider a Fragile multi-CPR Game with $n\geq 1$ players and $m=1$ CPR
satisfying Assumption 2. Then the game admits a unique GNE.
The proof of Theorem 3 is based upon a “first order condition” which is
satisfied by the best response correspondence in a Fragile multi-CPR Game. It
turns out that the aforementioned “first order condition” gives rise to _two
types_ of best responses for the players (see Theorem 8 below). In fact, we
show that Theorem 3 is a consequence of a more general statement (i.e.,
Theorem 9 below) which provides an upper bound on the numbers of GNEs in a
Fragile multi-CPR Game, subject to the assumption that the best response of
every player is of the first type.
For general $m$ we are unable to determine the exact “size” of the set of
GNEs. We conjecture its size is always finite. Our main result, which is valid
when there are more players than CPRs, states that the set of GNEs is small in
a measure-theoretic sense.
###### Theorem 4.
Consider a Fragile multi-CPR game, $G^{(2)}$, with $n\geq 1$ players and
$m\geq 1$ CPRs, which satisfies Assumption 2. Assume further that $m\leq n$,
and let $\mathcal{N}(G^{(2)})$ be the set consisting of all Generalized Nash
equilibria of $G^{(2)}$. Then the $(n\cdot m)$-dimensional Lebesgue measure of
$\mathcal{N}(G^{(2)})$ is equal to zero.
As mentioned already, and despite the fact that GNE problems tend to possess
infinitely many solutions, we speculate that the “size” of the set
$\mathcal{N}(G^{(2)})$ in Theorem 4 can be reduced significantly.
###### Conjecture 1.
The set $\mathcal{N}(G^{(2)})$ is finite.
### 1.3 Brief outline of the proofs of main results
The proofs of our main results are inspired from the proof of Theorem 1, given
in [13]. Having said that, it should also be mentioned that in a Fragile
multi-CPR Game certain additional technicalities arise that are substantially
different from those addressed in the proof of Theorem 1 in [13]. First and
foremost, in a Fragile multi-CPR Game the strategy space of each player
consists of $m$-dimensional vectors, a setting which requires concepts and
ideas from multi-variable calculus.
In [13] the existence of a Nash equilibrium in a Fragile CPR Game is
established in two ways: the first approach employs Brouwer’s fixed point
theorem, and the second approach employs ideas from a particular class of
games known as _Weak Strategic Substitute Games_ (see [7]). The first approach
requires, among other things, the best response correspondence to be single-
valued. The second approach requires the best-response correspondence to be
decreasing. Both requirements may fail to hold true in a Fragile multi-CPR
Game. Instead, we establish the existence of a Generalized Nash equilibrium
for the Fragile multi-CPR Game by showing that it belongs to a particular
class of “convex constrained games” which are known to possess Generalized
Nash equilibria.
In [13] the uniqueness of the Nash equilibrium for a Fragile CPR Game is
established by showing that a particular auxiliary function, corresponding to
the fact that the best response correspondence satisfies a particular “first
order condition” (see [13, Eq. (6), p. 142] for the precise formulation of the
condition), is decreasing. Similar auxiliary functions are employed in the
proofs of Theorems 3 and 4. However, the corresponding “first order
conditions” are more delicate to characterise, and we do so by employing the
KKT conditions to the optimization program corresponding to the best response
correspondence (i.e., Problem (17) below). This allows us to describe the best
responses via a system of equations with a unique solution, and results in
two types of “first order conditions” (see Theorem 8 below). Having
established the first order conditions in a Fragile multi-CPR Game, we
complete the proofs of our main results by employing monotonicity properties
of certain auxiliary functions, in a way which may be seen as an extension of
the approach taken in the proof of Theorem 1 in [13].
### 1.4 Organization
The remaining part of our article is organised as follows. In Section 2 we
show that the utility function of each player in a Fragile multi-CPR Game is
concave on a particular subset of the strategy space. In Section 3 we prove
Theorem 2, namely, we show that a Fragile multi-CPR Game admits a GNE. In
Section 4 we show that the best response of each player in a Fragile multi-CPR
Game satisfies certain “first order conditions”, which are then used, in
Section 5, in order to define suitable auxiliary functions whose monotonicity
properties play a key role in the proofs of Theorems 3 and 4. Theorem 3 is
proven in Section 6 and Theorem 4 is proven in Section 7. In Section 8 we show
that a “restricted” version of a Fragile multi-CPR Game admits finitely many
GNEs, a result which is then employed in order to formulate a conjecture which
is equivalent to Conjecture 1. Our paper ends with Section 9 which includes
some concluding remarks and conjectures.
## 2 Concavity of utility function
In this section we show that the utility function, given by (8), of each
player in a Fragile multi-CPR Game is concave on a particular subset of
$C_{m}$. Before proceeding with the details let us mention that this
particular subset will be used to define the constraint policies in the
corresponding Constrained Fragile multi-CPR Game.
We begin with the following result, which readily follows from [13, Lemma 1].
Recall the definition of $\mathbf{x}_{T}^{(j)}$ and $\mathbf{x}_{T}^{j|i}$,
given in (4), and the definition of the effective rate of return,
$\mathcal{F}_{ij}$, given in (7).
###### Lemma 1 (see [13], Lemma 1).
Let $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ be fixed. Then, for
every $j\in[m]$, there exists a real number $\omega_{ij}\in(0,1)$ such that
$\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})>0$, whenever
$\mathbf{x}_{T}^{(j)}\in(0,\omega_{ij})$, and
$\mathcal{F}_{ij}(\omega_{ij})=0$. Furthermore, provided that
$\mathbf{x}_{T}^{j|i}<\omega_{ij}$, the function
$\mathcal{E}_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is concave in the interval
$(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$.
###### Proof.
We repeat the proof for the sake of completeness. Notice that
$\mathcal{F}_{ij}(0)>0$. Moreover, Assumption 2 implies that
$\mathcal{F}_{ij}(1)<0$. Since $\mathcal{F}_{ij}$ is continuous, the
intermediate value theorem implies that there exists $\omega_{ij}\in(0,1)$
such that $\mathcal{F}_{ij}(\omega_{ij})=0$. Since $\mathcal{F}_{ij}$ is
assumed to be decreasing, the first statement follows, and we proceed with the
proof of the second statement. To this end, notice that (6) yields
$\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=a_{i}(a_{i}-1)x_{ij}^{a_{i}-2}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})+2a_{i}x_{ij}^{a_{i}-1}\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})+x_{ij}^{a_{i}}\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})\,.$
Notice also that $\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})=\frac{\partial}{\partial\mathbf{x}_{T}^{(j)}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})$
as well as $\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})=\frac{\partial^{2}}{\partial(\mathbf{x}_{T}^{(j)})^{2}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})$.
Moreover, Assumption 2 guarantees that $\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)}),\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})<0$ as well as that $a_{i}-1\leq
0$. Since $\mathcal{F}_{ij}(\mathbf{x}_{T}^{(j)})>0$ when
$\mathbf{x}_{T}^{(j)}\in(0,\omega_{ij})$, we conclude that
$\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})<0$ and therefore
$\mathcal{E}_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is concave in the interval
$(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$, as desired. ∎
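For the illustrative choices $\mathcal{R}_{j}=c_{j}+1$ and $p_{j}(x)=x^{2}$ mentioned above, the threshold $\omega_{ij}$ of Lemma 1 can be computed with a standard one-dimensional root finder; the sketch below (ours, not from [13]) does exactly this.

```python
import numpy as np
from scipy.optimize import brentq

def omega(a_i, k_i, c_j):
    """Root omega_ij in (0, 1) of F_ij(x) = c_j**a_i * (1 - x**2) - k_i * x**2,
    i.e. the total investment at which the effective rate of return vanishes."""
    F = lambda x: c_j**a_i * (1.0 - x**2) - k_i * x**2
    return brentq(F, 1e-12, 1.0)  # F(0) > 0 and F(1) = -k_i < 0

print(omega(0.4, 1.2, 1.5))  # approximately 0.70 for these parameters
```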
In other words, given the choices of all players except player $i$, the
utility of the $i$-th player from the $j$-th CPR is a concave function, when
restricted on a particular interval. The next result shows that an analogous
statement holds true for the total utility of each player in the Fragile
multi-CPR Game, namely, $\mathcal{V}_{i}(\mathbf{x}_{i};\mathbf{x}_{-i})$,
given by (8).
Given $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, let
$A(\mathbf{x}_{-i}):=\\{j\in[m]:\mathbf{x}_{T}^{j|i}<\omega_{ij}\\}\,,$ (9)
where $\omega_{ij},j\in[m]$, is provided by Lemma 1. We refer to
$A(\mathbf{x}_{-i})$ as the set of _active CPRs_ corresponding to $i$ and
$\mathbf{x}_{-i}$.
###### Theorem 5.
Fix $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$. Let
$A(\mathbf{x}_{-i})$ be the set of active CPRs corresponding to $i$ and
$\mathbf{x}_{-i}$, and consider the set
$\mathcal{R}_{A(\mathbf{x}_{-i})}=\prod_{j\in
A(\mathbf{x}_{-i})}(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$. Then the function
$\mathcal{V}_{A(\mathbf{x}_{-i})}:=\sum_{j\in
A(\mathbf{x}_{-i})}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ is
concave in $\mathcal{R}_{A(\mathbf{x}_{-i})}$.
###### Proof.
If $|A(\mathbf{x}_{-i})|=1$, then the result follows from Lemma 1 so we may
assume that $|A(\mathbf{x}_{-i})|\geq 2$. The set
$\mathcal{R}_{A(\mathbf{x}_{-i})}$ is clearly convex. Let $j,k\in
A(\mathbf{x}_{-i})$ be such that $j\neq k$ and notice that
$\frac{\partial^{2}\mathcal{V}_{A(\mathbf{x}_{-i})}}{\partial x_{ij}\;\partial
x_{ik}}=0\,.$ (10)
Moreover, by Lemma 1, we also have
$\frac{\partial^{2}\mathcal{V}_{A(\mathbf{x}_{-i})}}{\partial
x_{ij}^{2}}=\frac{\partial^{2}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})}{\partial
x_{ij}^{2}}<0\,,\text{ for all
}x_{ij}\in(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})\,.$ (11)
Given $\mathbf{x}\in\mathcal{R}_{A(\mathbf{x}_{-i})}$, denote by
$H(\mathbf{x})=\left(\frac{\partial^{2}\mathcal{V}_{A(\mathbf{x}_{-i})}(\mathbf{x})}{\partial
x_{ij}\;\partial x_{ik}}\right)_{j,k\in A(\mathbf{x}_{-i})}$ the Hessian
matrix of $\mathcal{V}_{A(\mathbf{x}_{-i})}$ evaluated at $\mathbf{x}$, and
let $\Delta_{k}(\mathbf{x})$, for $k\in A(\mathbf{x}_{-i})$, be the principal
minors of $H(\mathbf{x})$ (see [4, p. 111]). Notice that (10) implies that
$H(\mathbf{x})$ is a diagonal matrix. Therefore, using (11), it follows that
$(-1)^{k}\cdot\Delta_{k}(\mathbf{x})>0$, when
$\mathbf{x}\in\mathcal{R}_{A(\mathbf{x}_{-i})}$. In other words,
$H(\,\cdot\,)$ is negative definite on the convex set
$\mathcal{R}_{A(\mathbf{x}_{-i})}$ and we conclude (see [4, Theorem 3.3, p.
110]) that $\mathcal{V}_{A(\mathbf{x}_{-i})}$ is concave in
$\mathcal{R}_{A(\mathbf{x}_{-i})}$, as desired. ∎
## 3 Proof of Theorem 2: existence of GNE
In this section we show that the Fragile multi-CPR Game possesses a
Generalized Nash equilibrium. Recall that the notion of Generalized Nash
equilibrium depends upon the choice of constraint policies. Thus, before
presenting the details of the proof, we first define the constraint policies
under consideration.
Let $i\in[n]$ and
$\mathbf{x}_{-i}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{i-1},\mathbf{x}_{i+1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{-i}$,
where $\mathbf{x}_{j}=(x_{j1},\ldots,x_{jm})\in C_{m}$, for
$j\in[n]\setminus\\{i\\}$, be fixed. Recall that
$\mathbf{x}_{T}^{j|i}=\sum_{\ell\in[n]\setminus\\{i\\}}x_{\ell j}$ and
consider the set of active indices corresponding to $i$ and $\mathbf{x}_{-i}$,
i.e., consider the set $A(\mathbf{x}_{-i})$, defined in (9).
Define the constraint policy $\vartheta_{i}(\cdot)$ that maps each element
$\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ to the set
$\vartheta_{i}(\mathbf{x}_{-i})=C_{m}\bigcap\left\\{\prod_{j\in
A(\mathbf{x}_{-i})}[0,\omega_{ij}-\mathbf{x}_{T}^{j|i}]\,\,\times\prod_{j\in[m]\setminus
A(\mathbf{x}_{-i})}\\{0\\}\right\\}\,,$ (12)
where $\\{\omega_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is given by Lemma 1.
Notice that, for every $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, the set
$\vartheta_{i}(\mathbf{x}_{-i})$ is _non-empty, compact and convex_. Fig. 1
provides a visualization of the aforementioned constraint policy, in the case
of $m=2$.
Figure 1: Visualization of an instance of the constraint policy
$\vartheta_{i}(\cdot)$ (blue shaded region) of player $i$ in the case of
$m=2$, where we denote $\hat{\omega}_{1}=\omega_{i1}-\mathbf{x}_{T}^{1|i}$ and
$\hat{\omega}_{2}=\omega_{i2}-\mathbf{x}_{T}^{2|i}$.
We aim to show that the Constrained Fragile multi-CPR Game, with constraint
policies given by (12), admits a Generalized Nash equilibrium. In order to do
so, we employ the following theorem. Recall (see [15, p. 32–33]) that a set-
valued correspondence $\phi:X\to 2^{Y}$ is _upper semicontinuous_ if for every
open set $G\subset Y$, it holds that $\\{x\in X:\phi(x)\subset G\\}$ is an
open set in $X$. A set-valued correspondence $\phi:X\to 2^{Y}$ is _lower
semicontinuous_ if for every open set $G\subset Y$, it holds that $\\{x\in
X:\phi(x)\cap G\neq\emptyset\\}$ is an open set in $X$. Recall also that,
given $S\subset\mathbb{R}^{s}$, a function $f:S\to\mathbb{R}$ is quasi-concave
if
$f(\lambda\mathbf{x}+(1-\lambda)\mathbf{y})\geq\min\\{f(\mathbf{x}),f(\mathbf{y})\\}$,
for all $\mathbf{x}\neq\mathbf{y}$ in $S$ and $\lambda\in(0,1)$. Clearly, a
concave function is also quasi-concave.
###### Theorem 6.
Let $n$ players be characterized by strategy spaces $X_{i},i\in[n]$,
constraint policies $\phi_{i},i\in[n]$, and utility functions
$\mathcal{V}_{i}:\prod_{i}X_{i}\to\mathbb{R},i\in[n]$. Suppose further that
the following hold true for every $i\in[n]$:
1. 1.
$X_{i}$ is a non-empty, compact, convex subset of a Euclidean space.
2. 2.
$\phi_{i}(\cdot)$ is both upper semicontinuous and lower semicontinuous in
$X_{-i}$.
3. 3.
For all $\mathbf{x}_{-i}\in X_{-i}$, $\phi_{i}(\mathbf{x}_{-i})$ is nonempty,
closed and convex.
4. 4.
$\mathcal{V}_{i}$ is continuous in $\prod_{i}X_{i}$.
5. 5.
For every $\mathbf{x}_{-i}\in X_{-i}$, the map
$x_{i}\mapsto\mathcal{V}_{i}(x_{i},\mathbf{x}_{-i})$ is quasi-concave on
$\phi_{i}(\mathbf{x}_{-i})$.
Then there exists a Generalized Nash equilibrium.
###### Proof.
This is a folklore result that can be found in various places. See, for
example, [2], [11, Theorem 6], [15, Theorem 4.3.1], [3, Theorem 12.3], or [8,
Theorem 3.1]. ∎
We are now ready to establish the existence of a GNE in the Constrained
Fragile multi-CPR Game. In the following proof, $\|\cdot\|_{d}$ denotes
$d$-dimensional Euclidean distance, and
$B_{d}(\varepsilon):=\\{\mathbf{x}\in\mathbb{R}^{d}:\|\mathbf{x}\|_{d}\leq\varepsilon\\}$
is the closed ball of radius $\varepsilon$ centered at the origin. Moreover,
given $A\subset\mathbb{R}^{d}$ and $\varepsilon>0$, we denote by
$\\{A\\}_{\varepsilon}$ the set $A+B_{d}(\varepsilon):=\\{a+b:a\in A\text{ and
}b\in B_{d}(\varepsilon)\\}$ and by $(1-\varepsilon)\cdot A$ the set
$\\{(1-\varepsilon)\cdot a:a\in A\\}$.
###### Proof of Theorem 2.
We apply Theorem 6. The strategy space of each player is equal to $C_{m}$,
which is non-empty, compact and convex. Hence the first condition of Theorem 6
holds true. The third condition also holds true, by (12). Moreover, the fourth
condition of Theorem 6 is immediate from the definition of utility, given in
(8), while the fifth condition follows from Theorem 5.
It remains to show that the second condition of Theorem 6 holds true, i.e.,
that for each $i\in[n]$ the constrained policy $\vartheta_{i}(\cdot)$, given
by (12), is both upper and lower semicontinuous. Towards this end, fix
$i\in[n]$ and let $G\subset C_{m}$ be an open set. Consider the sets
$G^{+}:=\\{\mathbf{x}_{-i}\in\mathcal{C}_{-i}:\vartheta_{i}(\mathbf{x}_{-i})\subset
G\\}\quad\text{and}\quad
G^{-}:=\\{\mathbf{x}_{-i}\in\mathcal{C}_{-i}:\vartheta_{i}(\mathbf{x}_{-i})\cap
G\neq\emptyset\\}\,.$
We have to show that both $G^{+}$ and $G^{-}$ are open subsets of
$\mathcal{C}_{-i}$. We first show that $G^{+}$ is open.
If $G^{+}$ is empty then the result is clearly true, so we may assume that
$G^{+}\neq\emptyset$. Let
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{i-1},\mathbf{y}_{i+1},\ldots,\mathbf{y}_{n})\in
G^{+}$; hence $\vartheta_{i}(\mathbf{y})\subset G$. We have to show that there
exists $\varepsilon>0$ such that for every $\mathbf{x}\in\mathcal{C}_{-i}$
with $\|\mathbf{x}-\mathbf{y}\|_{(n-1)m}<\varepsilon$, we have
$\vartheta_{i}(\mathbf{x})\subset G$. Since $\vartheta_{i}(\mathbf{y})$ is a
compact subset of the open set $G$, it follows that there exists
$\varepsilon_{0}>0$ such that
$\\{\vartheta_{i}(\mathbf{y})\\}_{\varepsilon_{0}}\subset G$. Since summation
is continuous, there exists $\varepsilon_{1}>0$ such that for every
$\mathbf{x}\in\mathcal{C}_{-i}$ with
$\|\mathbf{x}-\mathbf{y}\|_{(n-1)m}<\varepsilon_{1}$ it holds
$\vartheta_{i}(\mathbf{x})\subset\\{\vartheta_{i}(\mathbf{y})\\}_{\varepsilon_{0}}$. The desired
$\varepsilon$ is given by $\varepsilon_{1}$. Hence $G^{+}$ is an open set, and
we proceed with showing that $G^{-}$ is open as well.
We may assume that $G^{-}$ is non-empty. For each $i\in[n]$, let
$g_{i}:\mathcal{C}_{-i}\to\mathbb{R}^{m}_{\geq 0}$ be the continuous function
whose $j$-th coordinate, for $j\in[m]$, is given by
$g_{ij}(\mathbf{x}_{-i})=\begin{cases}\omega_{ij}-\mathbf{x}_{T}^{j|i}\,,&\mbox{
if }\omega_{ij}-\mathbf{x}_{T}^{j|i}>0\\\ 0\,,&\mbox{ if
}\omega_{ij}-\mathbf{x}_{T}^{j|i}\leq 0\,\,,\end{cases}$
where $\omega_{ij}$ is given by Lemma 1. Let $h:\mathbb{R}^{m}_{\geq 0}\to
2^{C_{m}}$ be the set-valued function defined by
$h(z_{1},\ldots,z_{m})=\prod_{j\in[m]}[0,z_{j}]$, with the convention
$[0,0]:=\\{0\\}$. Clearly, it holds that $\vartheta_{i}=h\circ g_{i}$, for all
$i\in[n]$.
We claim that $h$ is lower semicontinuous. If the claim holds true then it
follows that the set $H:=\\{\mathbf{z}\in\mathbb{R}^{m}_{\geq
0}:h(\mathbf{z})\cap G\neq\emptyset\\}$ is open. Notice that
$G^{-}\neq\emptyset$ implies that $H\neq\emptyset$. Since $g_{i}$ is
continuous, it follows that the preimage of $H$ under $g_{i}$, i.e.,
$g^{-1}_{i}(H)$, is open. In other words, the set
$\\{\mathbf{x}\in\mathcal{C}_{-i}:h\circ g_{i}(\mathbf{x})\cap
G\neq\emptyset\\}=\\{\mathbf{x}\in\mathcal{C}_{-i}:\vartheta_{i}(\mathbf{x})\cap
G\neq\emptyset\\}$ is open and the proof of the theorem is complete.
It remains to prove the claim, i.e., that $h$ is lower semicontinuous. To this
end, let $G\subset C_{m}$ be an open set, and let
$G^{\ast}:=\\{\mathbf{z}\in\mathbb{R}^{m}_{\geq 0}:h(\mathbf{z})\cap
G\neq\emptyset\\}$. We have to show that $G^{\ast}$ is open; that is, we have
to show that for every $\mathbf{z}\in G^{\ast}$ there exists $\varepsilon>0$
such that $\mathbf{w}\in G^{\ast}$, for all $\mathbf{w}$ with
$\|\mathbf{z}-\mathbf{w}\|_{m}<\varepsilon$. Fix $\mathbf{z}\in G^{\ast}$.
Since $h(\mathbf{z})$ is compact and $G$ is open, it follows that there exists
$\varepsilon_{0}>0$ such that $(1-\varepsilon_{0})\cdot h(\mathbf{z})\cap
G\neq\emptyset$. Now choose $\varepsilon>0$ such that for every $\mathbf{w}\in
C_{m}$ for which $\|\mathbf{z}-\mathbf{w}\|_{m}<\varepsilon$ it holds
$(1-\varepsilon_{0})\cdot h(\mathbf{z})\subset h(\mathbf{w})$. In other words,
for this particular choice of $\varepsilon>0$ it holds $h(\mathbf{w})\cap
G\neq\emptyset$, for every $\mathbf{w}$ with
$\|\mathbf{z}-\mathbf{w}\|_{m}<\varepsilon$. The claim follows. ∎
## 4 Best response correspondence
Having established the existence of a GNE for a Fragile multi-CPR Game, we now
proceed with the proofs of Theorems 3 and 4. The proofs will be obtained in
two steps. In the first step we deduce certain “first order conditions” which
are satisfied by the best response correspondence of each player in the game.
In the second step we employ the first order conditions in order to define
certain auxiliary functions, whose monotonicity will be employed in the proofs
of the aforementioned theorems. In this section we collect some results
pertaining to the first step. We begin with recalling the notion of the best
response correspondence (see [19]).
Given $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, let
$\vartheta_{i}(\cdot)$ denote the constraint policy given by (12), and
consider the _best response_ of the $i$-th player in the Fragile multi-CPR
Game defined as follows:
$B_{i}(\mathbf{x}_{-i})=\arg\max_{\mathbf{x}_{i}\in\vartheta_{i}(\mathbf{x}_{-i})}\,\mathcal{V}_{i}(\mathbf{x}_{i};\mathbf{x}_{-i})\,,$
(13)
where $\mathcal{V}_{i}$ is the utility of the $i$-th player, given by (8).
Notice that $B_{i}(\cdot)$ is a correspondence $B_{i}:\mathcal{C}_{-i}\to
2^{C_{m}}$, where $2^{C_{m}}$ denotes the class consisting of all subsets of
$C_{m}$. For $j\in[m]$, we denote by $B_{ij}(\mathbf{x}_{-i})$ the $j$-th
component of $B_{i}(\mathbf{x}_{-i})$; hence we have
$B_{i}(\mathbf{x}_{-i})=(B_{i1}(\mathbf{x}_{-i}),\ldots,B_{im}(\mathbf{x}_{-i}))\,.$
###### Remark 1.
Notice that Definition 2 implies that if
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{n}$ is a GNE
of a Constrained Fragile multi-CPR Game, with constraint policies given by
(12), then for each $i\in[n]$ it holds $\mathbf{x}_{i}\in
B_{i}(\mathbf{x}_{-i})$.
Recall that $A(\mathbf{x}_{-i})$ denotes the set of active CPRs corresponding
to $\mathbf{x}_{-i}$, defined in (9), and notice that
$B_{ij}(\mathbf{x}_{-i})=0$, for all $j\in[m]\setminus A(\mathbf{x}_{-i})$.
For $x_{ij}\in[0,1]$, let $\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ be the
function defined via
$\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=x_{ij}\cdot\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})+a_{i}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})\,.$
(14)
###### Lemma 2.
Fix $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ and let
$\mathcal{R}_{A(\mathbf{x}_{-i})}=\prod_{j\in
A(\mathbf{x}_{-i})}(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$, where $\omega_{ij}$
is provided by Lemma 1. Then a global maximum of the function
$\mathcal{V}_{\mathbf{x}_{-i}}:=\sum_{j\in
A(\mathbf{x}_{-i})}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ defined on
the set $\mathcal{R}_{A(\mathbf{x}_{-i})}$ is given by the unique solution of
the following system of equations:
$\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=0,\,\text{ for }\,j\in
A(\mathbf{x}_{-i})\,,$ (15)
where $\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ is defined in (14).
###### Proof.
To simplify notation, we write $\psi_{ij}(\cdot)$ instead of
$\psi_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$. Using (6) and (8), it is
straightforward to verify that for every $j\in A(\mathbf{x}_{-i})$ it holds
$\frac{\partial\mathcal{V}_{\mathbf{x}_{-i}}}{\partial
x_{ij}}=\frac{\partial\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})}{\partial
x_{ij}}=x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij})\,.$ (16)
Now notice that $\psi_{ij}(0)>0$ as well as
$\psi_{ij}(\omega_{ij}-\mathbf{x}_{T}^{j|i})<0$. Moreover, Assumption 2
readily implies that $\psi_{ij}(\cdot)$ is strictly decreasing on the interval
$(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$. The intermediate value theorem implies
that there exists unique $\lambda_{ij}\in(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$
such that $\psi_{ij}(\lambda_{ij})=0$. Hence, it follows from (16) that the
points $\lambda_{ij}$, for $j\in A(\mathbf{x}_{-i})$, are critical points of
the function $\mathcal{V}_{\mathbf{x}_{-i}}$, which is concave on the open and
convex set $\mathcal{R}_{A(\mathbf{x}_{-i})}$, by Theorem 5. It follows (see
[4, Theorem 2.4, p. 132]) that $\\{\lambda_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$
is a global maximum of $\mathcal{V}_{\mathbf{x}_{-i}}$ on
$\mathcal{R}_{A(\mathbf{x}_{-i})}$. We conclude that
$\mathcal{V}_{\mathbf{x}_{-i}}$ is maximized when $x_{ij}=\lambda_{ij}$, for
$j\in A(\mathbf{x}_{-i})$, as desired. ∎
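Since $\psi_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is strictly decreasing and changes sign on $(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$, the critical point $\lambda_{ij}$ can be located by bisection. The following sketch is purely illustrative; the concave return function, its derivative, the exponent and the opponents' total used in the example are hypothetical placeholders and are not taken from the paper.

```python
def psi(x, xT, F, dF, a):
    """psi_ij(x; xT) = x * F'(x + xT) + a * F(x + xT), cf. (14)."""
    return x * dF(x + xT) + a * F(x + xT)

def bisect_root(g, lo, hi, iters=200):
    """Unique zero of a strictly decreasing g with g(lo) > 0 > g(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical concave return function F(s) = 1 - s**2, so that omega_ij = 1:
F, dF = (lambda s: 1.0 - s ** 2), (lambda s: -2.0 * s)
a_i, xT = 0.5, 0.2
lam = bisect_root(lambda x: psi(x, xT, F, dF, a_i), 1e-12, 1.0 - xT)
```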
###### Remark 2.
Let us remark that the solution of the system of equations given by (15) may
not belong to the set $C_{m}$. More precisely, it could happen that the
solution of the system of equations (15), say $\\{\lambda_{ij}\\}_{j\in
A(\mathbf{x}_{-i})}$, satisfies $\sum_{j\in
A(\mathbf{x}_{-i})}\lambda_{ij}>1$. This is a crucial difference between the
Fragile CPR Game and the Fragile multi-CPR Game.
Now notice that, given $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, the best response
of player $i$ is a local maximum of the following program:
$\displaystyle\underset{\\{x_{ij}\\}_{j\in
A(\mathbf{x}_{-i})}}{\text{maximize}}$
$\displaystyle\mathcal{V}_{\mathbf{x}_{-i}}:=\sum_{j\in
A(\mathbf{x}_{-i})}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ subject to
$\displaystyle\sum_{j\in A(\mathbf{x}_{-i})}x_{ij}\leq 1$ $\displaystyle 0\leq
x_{ij}\leq\omega_{ij}-\mathbf{x}_{T}^{j|i},\text{ for all }\,j\in
A(\mathbf{x}_{-i})\,.$
Equivalently, the best response of player $i$ is a local minimum of the
following program:
$\displaystyle\underset{\\{x_{ij}\\}_{j\in
A(\mathbf{x}_{-i})}}{\text{minimize}}$ $\displaystyle-\sum_{j\in
A(\mathbf{x}_{-i})}\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$ (17) subject
to $\displaystyle\sum_{j\in A(\mathbf{x}_{-i})}x_{ij}\leq 1$ $\displaystyle
0\leq x_{ij}\leq\omega_{ij}-\mathbf{x}_{T}^{j|i},\text{ for all }\,j\in
A(\mathbf{x}_{-i})\,.$
Notice that since $\mathcal{E}_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is
concave on $(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$, by Lemma 1, it follows that
Problem (17) is a separable convex knapsack program (see [20, 24]). We are
going to describe the optima of Problem (17) using the KKT conditions. The KKT
conditions pertain to the Lagrangian corresponding to Problem (17), which is
defined as the following quantity:
$\mathcal{L}:=-\mathcal{V}_{\mathbf{x}_{-i}}+\kappa_{0}\cdot\left(\sum_{j\in
A(\mathbf{x}_{-i})}x_{ij}-1\right)+\sum_{j\in
A(\mathbf{x}_{-i})}\mu_{j}\cdot(x_{ij}+\mathbf{x}_{T}^{j|i}-\omega_{ij})+\sum_{j\in
A(\mathbf{x}_{-i})}\nu_{j}\cdot(-x_{ij}),$
where $\kappa_{0},\\{\mu_{j}\\}_{j},\\{\nu_{j}\\}_{j}$ are real numbers. The
KKT conditions corresponding to problem (17) read as follows (see [18, Theorem
3.8]).
###### Theorem 7 (KKT conditions for Problem (17)).
If $\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is a local minimum of Problem
(17), then there exist _non-negative_ real numbers $\kappa_{0}$,
$\\{\mu_{j}\\}_{j\in A(\mathbf{x}_{-i})}$, and $\\{\nu_{j}\\}_{j\in
A(\mathbf{x}_{-i})}$ such that:
1. 1.
For all $j\in A(\mathbf{x}_{-i})$ it holds
$-x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})+\kappa_{0}+\mu_{j}-\nu_{j}=0\,$,
where $\psi_{ij}$ is given by (14).
2. 2.
$\kappa_{0}\cdot\left(\sum_{j\in A(\mathbf{x}_{-i})}x_{ij}-1\right)=0$.
3. 3.
$\mu_{j}\cdot(x_{ij}+\mathbf{x}_{T}^{j|i}-\omega_{ij})=0\,$, for all $j\in
A(\mathbf{x}_{-i})$.
4. 4.
$\nu_{j}\cdot x_{ij}=0\,$, for all $j\in A(\mathbf{x}_{-i})$.
5. 5.
$0\leq x_{ij}\leq\omega_{ij}-\mathbf{x}_{T}^{j|i}\,$, for all $j\in
A(\mathbf{x}_{-i})$.
We aim to employ Theorem 7 in order to describe a local maximum of Problem
(17) via the solution of a system of equations. This will require the
following result, which is presumably reported somewhere in the literature
but, lacking a reference, we include a proof for the sake of completeness.
###### Lemma 3.
Fix a positive integer $s$ and, for each $j\in[s]$, let
$f_{j}:\mathbb{R}\to\mathbb{R}$ be a strictly decreasing function. Then there
exists at most one vector $(c,x_{1},\ldots,x_{s})\in\mathbb{R}^{s+1}$ such
that
$f_{j}(x_{j})=c,\,\text{ for all }\,j\in[s],\,\text{ and
}\,\sum_{j\in[s]}x_{j}=1\,.$
###### Proof.
Suppose that there exist two distinct vectors, say $(c,x_{1},\ldots,x_{s})$
and $(d,y_{1},\ldots,y_{s})$. If $c=d$, then there exists $j\in[s]$ such that
$x_{j}\neq y_{j}$ and
$f_{j}(x_{j})=c=d=f_{j}(y_{j})\,,$
contradicting the assumption that the function $f_{j}(\cdot)$ is strictly
decreasing. Hence $c\neq d$.
Since $f_{j}(\cdot),j\in[s]$, is strictly decreasing, it is injective and
therefore it follows that it is invertible. Let us denote its inverse by
$f_{j}^{-1}(\cdot)$. We then have
$x_{j}=f_{j}^{-1}(c)\,\text{ and }y_{j}=f_{j}^{-1}(d),\,\text{ for all
}\,j\in[s],$
which in turn implies that $x_{j}\neq y_{j}$, for all $j\in[s]$. Assume,
without loss of generality, that $c<d$. The assumption that $f_{j}$ is
strictly decreasing then implies $x_{j}>y_{j}$, for all $j\in[s]$, and
therefore $1=\sum_{j\in[s]}x_{j}>\sum_{j\in[s]}y_{j}=1$, a contradiction. The
result follows. ∎
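Lemma 3 also indicates how such a vector can be computed when it exists: since every $f_{j}$ is strictly decreasing, $f_{j}^{-1}(c)$ is decreasing in $c$, hence so is $\sum_{j\in[s]}f_{j}^{-1}(c)$, and one may therefore bisect on the common level $c$. The sketch below is only an illustration; the functions $f_{j}$ and the bracketing intervals are hypothetical and not part of the paper.

```python
def solve_common_level(f_list, c_lo, c_hi, x_lo, x_hi, iters=200):
    """Find (c, x_1, ..., x_s) with f_j(x_j) = c for all j and sum_j x_j = 1.

    Assumes every f_j is strictly decreasing on [x_lo, x_hi] and that the
    solution level c is bracketed by [c_lo, c_hi]."""

    def inverse(f, c):
        # invert the strictly decreasing f by bisection: find x with f(x) = c
        lo, hi = x_lo, x_hi
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if f(mid) > c:
                lo = mid      # f is decreasing, so the root lies to the right
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def total(c):
        return sum(inverse(f, c) for f in f_list)

    # total(c) is strictly decreasing in c, so bisect on c for total(c) = 1.
    for _ in range(iters):
        c_mid = 0.5 * (c_lo + c_hi)
        if total(c_mid) > 1.0:
            c_lo = c_mid
        else:
            c_hi = c_mid
    c = 0.5 * (c_lo + c_hi)
    return c, [inverse(f, c) for f in f_list]

# Two hypothetical strictly decreasing functions on [0, 1]:
fs = [lambda x: 1.0 - x, lambda x: 2.0 - 3.0 * x]
c, xs = solve_common_level(fs, c_lo=-3.0, c_hi=2.0, x_lo=0.0, x_hi=1.0)
# yields c = 0.5 and x_1 = x_2 = 0.5 (up to the bisection tolerance)
```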
We may now proceed with describing the best responses of each player in the
Fragile multi-CPR Game via a system of “first order conditions”.
###### Theorem 8.
Let $i\in[n]$ and $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$ be fixed. Suppose that
$\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is a best response of player $i$ in
the Fragile multi-CPR Game. Then $\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is
either of the following two types:
* •
Type I: There exists $J_{\mathbf{x}_{-i}}\subset A(\mathbf{x}_{-i})$ such
that $x_{ij}=0$, when $j\in A(\mathbf{x}_{-i})\setminus J_{\mathbf{x}_{-i}}$,
and $\\{x_{ij}\\}_{j\in J_{\mathbf{x}_{-i}}}$ satisfy the following
inequality, and are given by the unique solution of the following system of
equations:
$\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}<1\quad\text{ and
}\quad\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=0,\,\text{ for }\,j\in
J_{\mathbf{x}_{-i}}\,,$
where $\psi_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is defined in (14).
* •
Type II: There exists $J_{\mathbf{x}_{-i}}\subset A(\mathbf{x}_{-i})$ and a
real number $\kappa_{0}\geq 0$ such that $x_{ij}=0$, when $j\in
A(\mathbf{x}_{-i})\setminus J_{\mathbf{x}_{-i}}$, and $\\{x_{ij}\\}_{j\in J_{\mathbf{x}_{-i}}}$
are given by the unique solution of the following system of equations:
$\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}=1\quad\text{and}\quad
x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=\kappa_{0},\,\text{
for }\,j\in J_{\mathbf{x}_{-i}}\,,$
where $\psi_{ij}(\,\cdot\,;\mathbf{x}_{T}^{j|i})$ is defined in (14).
###### Proof.
Let $\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ be a best response of player
$i\in[n]$. Then $\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is a local minimum of
Problem (17); hence it satisfies the KKT Conditions of Theorem 7.
If $x_{ij}=\omega_{ij}-\mathbf{x}_{T}^{j|i}$, for some $j\in
A(\mathbf{x}_{-i})$, then Lemma 1 and (6) imply that
$\mathcal{E}_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=0$. Hence player $i$ could
achieve the same utility from the $j$-th CPR by choosing $x_{ij}=0$. Thus we
may assume that $x_{ij}<\omega_{ij}-\mathbf{x}_{T}^{j|i}$, for all $j\in
A(\mathbf{x}_{-i})$ and therefore Theorem 7.(3) implies that $\mu_{j}=0$, for
all $j\in A(\mathbf{x}_{-i})$. Now let
$J_{\mathbf{x}_{-i}}=\\{j\in A(\mathbf{x}_{-i}):x_{ij}\neq 0\\}\,,$ (18)
and notice that Theorem 7.(4) implies that $\nu_{j}=0$ for $j\in
J_{\mathbf{x}_{-i}}$. We distinguish two cases.
Suppose first that $\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}<1$. Then Theorem
7.(2) yields $\kappa_{0}=0$, and therefore Theorem 7.(1) implies that
$\\{x_{ij}\\}_{j\in J_{\mathbf{x}_{-i}}}$ is given by the unique solution of
the following system of equations:
$\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=0,\,\text{ for }\,j\in
J_{\mathbf{x}_{-i}}\,.$
In other words, if $\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}<1$ then
$\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is of Type I.
Now assume that $\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}=1$. Then Theorem 7.(1)
and Theorem 7.(2) imply that there exists $\kappa_{0}\geq 0$ such that
$-x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=-\kappa_{0}$,
for all $j\in J_{\mathbf{x}_{-i}}$. In other words, $\\{x_{ij}\\}_{j\in
J_{\mathbf{x}_{-i}}}$ and $\kappa_{0}$ are given by the solution of the
following system of equations:
$\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}=1\,\text{ and
}\,x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=\kappa_{0},\,\text{
for all }\,j\in J_{\mathbf{x}_{-i}}\,.$ (19)
Since the functions
$f_{ij}(x_{ij}):=x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})$,
for $j\in J_{\mathbf{x}_{-i}}$, are strictly decreasing, Lemma 3 implies that
the system of equations in (19) has a unique solution. Hence
$\\{x_{ij}\\}_{j\in A(\mathbf{x}_{-i})}$ is of Type II and the result follows.
∎
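Theorem 8 translates into a simple two-stage recipe for computing a best response numerically, at least in the case where every active CPR is effective: first solve the decoupled Type I equations $\psi_{ij}(x_{ij};\mathbf{x}_{T}^{j|i})=0$ CPR by CPR; if the resulting investments respect the budget they constitute a Type I best response, and otherwise one searches for the common multiplier $\kappa_{0}$ of the Type II system, using that each map $x\mapsto x^{a_{i}-1}\psi_{ij}(x;\mathbf{x}_{T}^{j|i})$ is strictly decreasing. The sketch below is only an illustration under these assumptions; the callables and bounds it takes as input (standing for $\mathcal{F}_{ij}$, its derivative, $\mathbf{x}_{T}^{j|i}$ and $\omega_{ij}-\mathbf{x}_{T}^{j|i}$) are placeholders supplied by the user.

```python
def best_response(F_list, dF_list, xT_list, ub_list, a_i, eps=1e-12):
    """Sketch of Theorem 8, assuming every active CPR is effective.

    F_list[j], dF_list[j]: hypothetical return function of CPR j and its derivative;
    xT_list[j]:            the opponents' total investment x_T^{j|i};
    ub_list[j]:            the endpoint omega_ij - x_T^{j|i} provided by Lemma 1."""
    m = len(F_list)

    def phi(j, x):
        # x^{a_i - 1} * psi_ij(x; x_T^{j|i}); strictly decreasing on (0, ub_list[j])
        xT = xT_list[j]
        return x ** (a_i - 1.0) * (x * dF_list[j](x + xT) + a_i * F_list[j](x + xT))

    def invert(j, target):
        # bisection for the unique x in (0, ub_list[j]) with phi(j, x) = target
        lo, hi = eps, ub_list[j]
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if phi(j, mid) > target:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Type I candidate: solve psi_ij = 0 separately on each CPR (kappa0 = 0).
    cand = [invert(j, 0.0) for j in range(m)]
    if sum(cand) < 1.0:
        return cand, 0.0

    # Type II: bisect on the common multiplier kappa0 until the budget is exhausted.
    k_lo, k_hi = 0.0, max(phi(j, eps) for j in range(m))
    for _ in range(200):
        k_mid = 0.5 * (k_lo + k_hi)
        if sum(invert(j, k_mid) for j in range(m)) > 1.0:
            k_lo = k_mid          # a larger kappa0 shrinks every x_ij
        else:
            k_hi = k_mid
    kappa0 = 0.5 * (k_lo + k_hi)
    return [invert(j, kappa0) for j in range(m)], kappa0
```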
We refer to the set $J_{\mathbf{x}_{-i}}$ provided by Theorem 8, defined in
(18), as the set of _effective_ CPRs corresponding to $i\in[n]$ and
$\mathbf{x}_{-i}\in\mathcal{C}_{-i}$. In the next section we employ Theorem 8
in order to define auxiliary functions (i.e., (24) and (25) below) whose
monotonicity will play a key role in the proof of Theorem 4.
## 5 Auxiliary functions
In this section we define and state basic properties of certain auxiliary
functions, whose monotonicity will be used in the proofs of Theorems 3 and 4,
and whose definition depends upon the “first order conditions” provided by
Theorem 8.
Let us begin with some notation and remarks. Fix $i\in[n]$ and
$\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, and recall from (13) that
$B_{i}(\mathbf{x}_{-i})$ denotes a best response of player $i$ and that
$B_{ij}(\mathbf{x}_{-i})$ is its $j$-th component. To simplify notation, let
us denote $b_{ij}:=B_{ij}(\mathbf{x}_{-i})$. From Theorem 8 we know that there
exists $J_{\mathbf{x}_{-i}}\subset A(\mathbf{x}_{-i})$ such that $b_{ij}=0$,
for $j\in A(\mathbf{x}_{-i})\setminus J_{\mathbf{x}_{-i}}$, and either
$\sum_{j\in
J_{\mathbf{x}_{-i}}}b_{ij}<1\quad\text{and}\quad\psi_{ij}(b_{ij};\mathbf{x}_{T}^{j|i})=0,\,\text{
for all }\,j\in J_{\mathbf{x}_{-i}},$ (20)
or
$\sum_{j\in J_{\mathbf{x}_{-i}}}b_{ij}=1\quad\text{ and }\quad
b_{ij}^{a_{i}-1}\cdot\psi_{ij}(b_{ij};\mathbf{x}_{T}^{j|i})=\kappa_{0},\,\text{ for
all }\,j\in J_{\mathbf{x}_{-i}}\,\text{ and some }\,\kappa_{0}\geq 0.$ (21)
In particular, it holds $b_{ij}>0$, for all $j\in J_{\mathbf{x}_{-i}}$. Using
(14), it follows that the second statement of (20) is equivalent to
$b_{ij}\cdot\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i})+a_{i}\mathcal{F}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i})=0,\,\text{
for all }\,j\in J_{\mathbf{x}_{-i}},$ (22)
and that the second statement of (21) is equivalent to
$b_{ij}^{a_{i}-1}\cdot\left(b_{ij}\cdot\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i})+a_{i}\mathcal{F}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i})\right)=\kappa_{0},\,\text{
for all }\,j\in J_{\mathbf{x}_{-i}}\,.$ (23)
Now, given $\mathbf{x}_{-i}\in\mathcal{C}_{-i}$, $j\in J_{\mathbf{x}_{-i}}$
and $\kappa_{0}\geq 0$, define for each $i\in[n]$ the functions
$\mathcal{G}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i}):=-\frac{a_{i}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})}{\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})}\,,\text{ for
}\,x_{ij}\in(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$ (24)
and
$\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0}):=-\frac{a_{i}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})}{\frac{-\kappa_{0}}{x_{ij}^{a_{i}}}+\frac{\partial}{\partial
x_{ij}}\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})},\text{ for
}\,x_{ij}\in(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})\,.$ (25)
Notice that (22) implies that when $b_{ij}$ is of Type I it holds
$\mathcal{G}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i})=b_{ij}\,,$ (26)
while (23) implies that when $b_{ij}$ is of Type II it holds
$\mathcal{H}_{ij}(b_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})=b_{ij}\,.$ (27)
Observe also that it holds
$\mathcal{G}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})\geq\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})$,
for all $x_{ij}\in[0,\omega_{ij}-\mathbf{x}_{T}^{j|i}]$. Let us, for future
reference, collect a couple of observations about the functions
$\mathcal{G}_{ij},\mathcal{H}_{ij}$.
###### Lemma 4.
Let $i\in[n]$ and $j\in[m]$ be fixed. Then the functions
$\mathcal{G}_{ij}(\cdot)$ and $\mathcal{H}_{ij}(\,\cdot\,;\kappa_{0})$,
defined in (24) and (25) respectively, are strictly decreasing in the interval
$[0,\omega_{ij}]$.
###### Proof.
To simplify notation, let
$\mathcal{F}:=\mathcal{F}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i})$,
$\mathcal{F}^{\prime}:=\frac{\partial}{\partial x_{ij}}\mathcal{F}$ and
$\mathcal{F}^{\prime\prime}:=\frac{\partial^{2}}{\partial
x_{ij}^{2}}\mathcal{F}$. For $x_{ij}\in(0,\omega_{ij}-\mathbf{x}_{T}^{j|i})$,
we compute
$\displaystyle\frac{\partial}{\partial
x_{ij}}\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})$
$\displaystyle=$
$\displaystyle\frac{-a_{i}\mathcal{F}^{\prime}\cdot(\frac{-\kappa_{0}}{x_{ij}^{a_{i}}}+\mathcal{F}^{\prime})+a_{i}\mathcal{F}\cdot(-a_{i}\frac{-\kappa_{0}}{x_{ij}^{a_{i}+1}}+\mathcal{F}^{\prime\prime})}{(\frac{-\kappa_{0}}{x_{ij}^{a_{i}}}+\mathcal{F}^{\prime})^{2}}$
$\displaystyle=$ $\displaystyle
a_{i}\cdot\frac{\kappa_{0}x_{ij}^{a_{i}-1}\left(x_{ij}\mathcal{F}^{\prime}+a_{i}\mathcal{F}\right)-x_{ij}^{2a_{i}}(\mathcal{F}^{\prime})^{2}+x_{ij}^{2a_{i}}\mathcal{F}\cdot\mathcal{F}^{\prime\prime}}{(-\kappa_{0}+x_{ij}^{a_{i}}\mathcal{F}^{\prime})^{2}}$
$\displaystyle<$ $\displaystyle
a_{i}\cdot\frac{\kappa_{0}x_{ij}^{a_{i}-1}\left(x_{ij}\mathcal{F}^{\prime}+a_{i}\mathcal{F}\right)-x_{ij}^{2a_{i}}(\mathcal{F}^{\prime})^{2}}{(-\kappa_{0}+x_{ij}^{a_{i}}\mathcal{F}^{\prime})^{2}}\,,$
where the last estimate follows from the fact that, by Assumption 2, it holds
$\mathcal{F}^{\prime\prime}<0$. If $\kappa_{0}=0$, then it readily follows
that $\frac{\partial}{\partial
x_{ij}}\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})<0$ and
therefore $\mathcal{H}_{ij}$ is strictly decreasing; thus $\mathcal{G}_{ij}$
is strictly decreasing as well. So we may assume that $\kappa_{0}>0$. If
$x_{ij}\mathcal{F}^{\prime}+a_{i}\mathcal{F}<0$, then it also follows that
$\mathcal{H}_{ij}$ is strictly decreasing; thus we may also assume that
$A:=x_{ij}\mathcal{F}^{\prime}+a_{i}\mathcal{F}\geq 0$. Now notice that
$\frac{\partial A}{\partial
x_{ij}}=\mathcal{F}^{\prime}+x_{ij}\mathcal{F}^{\prime\prime}+a_{i}\mathcal{F}^{\prime}<0$,
and define the function
$H(x_{ij}):=\kappa_{0}x_{ij}^{a_{i}-1}\cdot
A-x_{ij}^{2a_{i}}(\mathcal{F}^{\prime})^{2}\,;$
hence it holds $\frac{\partial}{\partial
x_{ij}}\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})<a_{i}\cdot\frac{H(x_{ij})}{(-\kappa_{0}+x_{ij}^{a_{i}}\mathcal{F}^{\prime})^{2}}$.
Moreover, it holds
$\frac{\partial}{\partial
x_{ij}}H(x_{ij})=(a_{i}-1)\kappa_{0}x_{ij}^{a_{i}-2}\cdot
A+\kappa_{0}x_{ij}^{a_{i}-1}\cdot\frac{\partial A}{\partial
x_{ij}}-2a_{i}x_{ij}^{2a_{i}-1}(\mathcal{F}^{\prime})^{2}-2x_{ij}^{2a_{i}}\mathcal{F}^{\prime}\mathcal{F}^{\prime\prime}\,.$
Since $a_{i}\leq 1$, $A\geq 0$ and
$\mathcal{F}^{\prime},\mathcal{F}^{\prime\prime},\frac{\partial A}{\partial
x_{ij}}<0$, it readily follows that all addends in the previous equation are
non-positive, and at least one of them is strictly negative; therefore
$\frac{\partial}{\partial x_{ij}}H(x_{ij})<0$. In
other words, $H(\cdot)$ is strictly decreasing in
$[0,\omega_{ij}-\mathbf{x}_{T}^{j|i}]$ and, since $H(0)=0$ and
$H(\omega_{ij}-\mathbf{x}_{T}^{j|i})<0$, we conclude that $H(x_{ij})\leq 0$,
for all $x_{ij}\in[0,\omega_{ij}-\mathbf{x}_{T}^{j|i}]$. This implies that
$\frac{\partial}{\partial
x_{ij}}\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})<0$ for
$x_{ij}\in[0,\omega_{ij}-\mathbf{x}_{T}^{j|i}]$. Since
$\frac{\partial}{\partial\mathbf{x}_{T}^{(j)}}\mathcal{H}_{ij}(\mathbf{x}_{T}^{(j)};\kappa_{0})=\frac{\partial}{\partial
x_{ij}}\mathcal{H}_{ij}(x_{ij}+\mathbf{x}_{T}^{j|i};\kappa_{0})$, and
similarly for $\mathcal{G}_{ij}$, we conclude that both $\mathcal{G}_{ij}$ and
$\mathcal{H}_{ij}$ are strictly decreasing in the interval $[0,\omega_{ij}]$,
as desired. ∎
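For a concrete (hypothetical) illustration of Lemma 4, take $\mathcal{F}_{ij}(s)=1-s^{2}$ on $(0,1)$, so that $\omega_{ij}=1$, $\mathcal{F}^{\prime}=-2s<0$ and $\mathcal{F}^{\prime\prime}=-2<0$. Then (24) gives
$\mathcal{G}_{ij}(s)=\frac{a_{i}(1-s^{2})}{2s}=\frac{a_{i}}{2}\Big{(}\frac{1}{s}-s\Big{)},$
which is indeed strictly decreasing on $(0,1)$; and since the denominator in (25) is even more negative when $\kappa_{0}>0$, it also holds $\mathcal{H}_{ij}(s;\kappa_{0})\leq\mathcal{G}_{ij}(s)$, consistently with the observation preceding Lemma 4. This example is not taken from the paper and serves only to illustrate the monotonicity.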
## 6 Proof of Theorem 3
In this section we prove Theorem 3. We first introduce some notation. Consider
a GNE, say
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{n}$, where
$\mathbf{x}_{i}=(x_{i1},\ldots,x_{im})\in C_{m}$, of a Fragile multi-CPR Game
satisfying Assumption 2. Given $j\in[m]$, let
$\mathcal{S}(\mathbf{x}_{T}^{(j)})=\\{i\in[n]:\mathbf{x}_{T}^{(j)}<\omega_{ij}\text{
and }x_{ij}>0\\}$ (28)
be the _support_ of the $j$-th CPR and let
$\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})=\\{i\in\mathcal{S}(\mathbf{x}_{T}^{(j)}):\mathbf{x}_{i}\text{
is of Type I}\\}$ (29)
be the _support of Type I_ , consisting of those players in the support of the
$j$-th CPR whose best response is of Type I, and
$\mathcal{S}_{II}(\mathbf{x}_{T}^{(j)})=\\{i\in\mathcal{S}(\mathbf{x}_{T}^{(j)}):\mathbf{x}_{i}\text{
is of Type II}\\}$ (30)
be the _support of Type II_ , consisting of those players in the support of
the $j$-th CPR whose best response is of Type II. Clearly, in view of Theorem
8, it holds
$\mathcal{S}(\mathbf{x}_{T}^{(j)})=\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})\cup\mathcal{S}_{II}(\mathbf{x}_{T}^{(j)})$.
We employ the properties of the auxiliary functions in the proof of Theorem 3,
a basic ingredient of which is the fact that in the setting of Theorem 3 the
support of Type II is empty. The proof is similar to the proof of Theorem 1,
given in [13, p. 155]. In fact, we prove a bit more. We show that Theorem 3 is
a consequence of the following result.
###### Theorem 9.
Consider a Fragile multi-CPR Game with $n\geq 1$ players and $m\geq 1$ CPRs
satisfying Assumption 2. Then there exists at most one GNE
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ for which $\mathbf{x}_{i}$
is of Type I, for all $i\in[n]$.
###### Proof.
Let $\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ be a GNE such that
$\mathbf{x}_{i}$ is of Type I, for all $i\in[n]$ and note that, since
$\mathbf{x}_{i}$ is of Type I, it holds $\sum_{j}x_{ij}<1$, for all $i\in[n]$.
For each $j\in[m]$, let
$\mathcal{S}_{0}(\mathbf{x}_{T}^{(j)}):=\\{i\in[n]:\mathbf{x}_{T}^{(j)}<\omega_{ij}\\}$.
We claim that
$\mathcal{S}_{0}(\mathbf{x}_{T}^{(j)})=\mathcal{S}(\mathbf{x}_{T}^{(j)})$.
Indeed, if there exists
$i\in\mathcal{S}_{0}(\mathbf{x}_{T}^{(j)})\setminus\mathcal{S}(\mathbf{x}_{T}^{(j)})$
then $x_{ij}=0$ and since it holds $\mathbf{x}_{T}^{(j)}<\omega_{ij}$ and
$\sum_{j}x_{ij}<1$, it follows that player $i$ could increase her utility by
investing a suitably small amount, say $\varepsilon>0$, in the $j$-th CPR. But
then this implies that $\mathbf{x}$ cannot be a GNE, a contradiction. Hence
$\mathcal{S}_{0}(\mathbf{x}_{T}^{(j)})=\mathcal{S}(\mathbf{x}_{T}^{(j)})$.
We claim that for any two distinct GNEs, say
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ and
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})$, for which
$\mathbf{x}_{i},\mathbf{y}_{i}$ are of Type I for all $i\in[n]$, it holds
$\mathbf{x}_{T}^{(j)}=\mathbf{y}_{T}^{(j)}$, for all $j\in[m]$. Indeed, if the
claim is not true, then there exists $j\in[m]$ such that
$\mathbf{x}_{T}^{(j)}\neq\mathbf{y}_{T}^{(j)}$. Suppose, without loss of
generality, that $\mathbf{x}_{T}^{(j)}<\mathbf{y}_{T}^{(j)}$.
Since $\mathbf{x}_{i}$ is of Type I, for all $i\in[n]$, it follows that
$\mathcal{S}(\mathbf{x}_{T}^{(j)})=\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})$ and
$\mathcal{S}(\mathbf{y}_{T}^{(j)})=\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})$.
Moreover, since $\mathbf{x}_{T}^{(j)}<\mathbf{y}_{T}^{(j)}$ it follows that
$\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})=\mathcal{S}_{0}(\mathbf{y}_{T}^{(j)})\subset\mathcal{S}_{0}(\mathbf{x}_{T}^{(j)})=\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})$.
Now notice that (26) implies that
$\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})=x_{ij}$, for all
$i\in\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})$, and
$\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})=y_{ij}$, for all
$i\in\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})$. Since
$\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})\subset\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})$
it holds that
$\sum_{i\in\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})}\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})\leq\mathbf{x}_{T}^{(j)}<\mathbf{y}_{T}^{(j)}=\sum_{i\in\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})\,.$
(31)
However, since $\mathcal{G}_{ij}$ is strictly decreasing, it follows that
$\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})>\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})$,
for all $i\in\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})$, which contradicts (31).
We conclude that $\mathbf{x}_{T}^{(j)}=\mathbf{y}_{T}^{(j)}$ and
$\mathcal{S}_{I}(\mathbf{x}_{T}^{(j)})=\mathcal{S}_{I}(\mathbf{y}_{T}^{(j)})$.
Finally, given the total investments $\mathbf{x}_{T}^{(j)}$, $j\in[m]$, of the players at a GNE, we
claim that the optimal investment of every player in any CPR is unique.
Indeed, if a player, say $i\in[n]$, has two optimal investments, say $x<z$, in
the $j$-th CPR, then it holds
$\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})=x<z=\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})$, a
contradiction. The result follows. ∎
Theorem 3 is a direct consequence of Theorem 9, as we now show.
###### Proof of Theorem 3.
We know from Theorem 2 that the game admits a GNE, and it is therefore enough
to show that it is unique. Since $m=1$, the first condition in Assumption 2
implies that no player invests an amount of $1$ in the CPR, which in turn
implies that all coordinates of any GNE are of Type I. The result follows from
Theorem 9. ∎
Observe that a basic ingredient in the proof of Theorem 9 is the fact that
$\mathcal{S}(\mathbf{x}_{T})=\mathcal{S}_{I}(\mathbf{x}_{T}),\mathcal{S}(\mathbf{y}_{T})=\mathcal{S}_{I}(\mathbf{y}_{T})$
and $\mathcal{S}_{I}(\mathbf{y}_{T})\subset\mathcal{S}_{I}(\mathbf{x}_{T})$.
Moreover, observe that the proof of Theorem 9 proceeds in two steps: in the
first step it is shown that any two GNEs admit the same total investment in
the CPR, and in the second step it is shown that, given an optimal total
investment, every player has a unique optimal investment in the CPR. In the
following section we are going to improve upon the aforementioned
observations. A bit more concretely, we are going to prove that the set
consisting of all GNEs of a Fragile multi-CPR Game is “small” via showing that
the set consisting of all total investments at the GNEs is “small”.
## 7 Proof of Theorem 4
Throughout this section, we denote by $G^{(2)}$ a Fragile multi-CPR Game
satisfying Assumption 2. Moreover, given a finite set, $F$, we denote by $|F|$
its cardinality. Now consider the set
$\mathcal{N}(G^{(2)}):=\\{\mathbf{x}\in\mathcal{C}_{n}:\mathbf{x}\text{ is a
GNE of }G^{(2)}\\}$
and, given
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{N}(G^{(2)})$,
let
$\mathcal{T}_{I}(\mathbf{x})=\\{i\in[n]:\mathbf{x}_{i}\text{ is of
Type I}\\}$
and
$\mathcal{T}_{II}(\mathbf{x})=\\{i\in[n]:\mathbf{x}_{i}\text{ is of
Type II}\\}\,.$
Recall the definition of active CPRs corresponding to $\mathbf{x}_{-i}$, which
is denoted $A(\mathbf{x}_{-i})$ and is defined in (9), as well as the
definition of effective CPRs corresponding to $\mathbf{x}_{-i}$, which is
denoted $J_{\mathbf{x}_{-i}}$ and is defined in (18).
###### Lemma 5.
Let $\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{N}(G^{(2)})$
and suppose that $i\in\mathcal{T}_{I}(\mathbf{x})$, for some $i\in[n]$. Then
it holds $J_{\mathbf{x}_{-i}}=A(\mathbf{x}_{-i})$.
###### Proof.
Recall from Theorem 8 that $J_{\mathbf{x}_{-i}}$ is such that $x_{ij}>0$ if
and only if $j\in J_{\mathbf{x}_{-i}}$. Suppose, towards arriving at a
contradiction, that there exists $j\in A(\mathbf{x}_{-i})\setminus
J_{\mathbf{x}_{-i}}$. Since $i\in\mathcal{T}_{I}(\mathbf{x})$, it follows that
$\sum_{j\in J_{\mathbf{x}_{-i}}}x_{ij}<1$ and thus player $i$ can increase her
utility by investing a suitably small amount $\varepsilon>0$ in the $j$-th
CPR. This contradicts the fact that $\mathbf{x}$ is a GNE, and the lemma
follows. ∎
###### Lemma 6.
Let $\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ and
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})$ be two elements from
$\mathcal{N}(G^{(2)})$ such that
$\mathbf{x}_{T}^{(j)}\leq\mathbf{y}_{T}^{(j)}$, for all $j\in[m]$. Then the
following hold true:
1. 1.
If $i\in\mathcal{T}_{I}(\mathbf{x})$, then $J_{\mathbf{y}_{-i}}\subset
J_{\mathbf{x}_{-i}}$.
2. 2.
It holds $\mathcal{T}_{I}(\mathbf{x})\subset\mathcal{T}_{I}(\mathbf{y})$.
###### Proof.
Fix $i\in[n]$ such that $i\in\mathcal{T}_{I}(\mathbf{x})$ and notice that
Lemma 5 implies that $\mathbf{x}_{T}^{(j)}\geq\omega_{ij}$, for all
$j\in[m]\setminus J_{\mathbf{x}_{-i}}$. Since
$\mathbf{x}_{T}^{(j)}\leq\mathbf{y}_{T}^{(j)}$, for all $j\in[m]$, it holds
$\mathbf{y}_{T}^{(j)}\geq\omega_{ij}$, for all $j\in[m]\setminus
J_{\mathbf{x}_{-i}}$; hence no such $j$ belongs to $J_{\mathbf{y}_{-i}}$, and we conclude that $J_{\mathbf{y}_{-i}}\subset
J_{\mathbf{x}_{-i}}$. The first statement follows.
We proceed with the second statement. Let $i\in[n]$ be such that
$\mathbf{x}_{i}$ is of Type I. We have to show that $\mathbf{y}_{i}$ is also
of Type I. Suppose that this is not true; hence $\mathbf{y}_{i}$ is of Type
II, and thus it holds $\sum_{j\in J_{\mathbf{y}_{-i}}}y_{ij}=1$. Since
$\mathbf{y}_{i}$ is of Type II, it follows from (27) that
$\mathcal{H}_{ij}(\mathbf{y}_{T}^{(j)};\kappa_{0})=y_{ij}$, for all $j\in
J_{\mathbf{y}_{-i}}$ and some $\kappa_{0}\geq 0$. Since $\mathbf{x}_{i}$ is of
Type I and $\mathcal{H}_{ij}$ is decreasing, we may apply (26) and conclude
$x_{ij}=\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})\geq\mathcal{H}_{ij}(\mathbf{x}_{T}^{(j)};\kappa_{0})\geq\mathcal{H}_{ij}(\mathbf{y}_{T}^{(j)};\kappa_{0})=y_{ij},\,\text{
for all }\,j\in J_{\mathbf{y}_{-i}}\,.$
Hence $1>\sum_{j\in J_{\mathbf{y}_{-i}}}x_{ij}\geq\sum_{j\in
J_{\mathbf{y}_{-i}}}y_{ij}=1$, a contradiction. The result follows. ∎
###### Lemma 7.
Assume that $m\leq n$. Then it holds
$\mathcal{T}_{I}(\mathbf{x})\neq\emptyset$, for every
$\mathbf{x}\in\mathcal{N}(G^{(2)})$.
###### Proof.
Suppose that the conclusion is not true; hence there exists
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{N}(G^{(2)})$
such that $\mathcal{T}_{II}(\mathbf{x})=[n]$, which in turn implies that
$\sum_{j\in[m]}\mathbf{x}_{T}^{(j)}=n\geq m$. Hence there exists $k\in[m]$
such that $\mathbf{x}_{T}^{(k)}\geq 1$. We now claim that
$\mathbf{x}_{T}^{k|i}\geq\omega_{ik}$, for all
$i\in\mathcal{S}_{II}(\mathbf{x}_{T}^{(k)})$, where $\mathcal{S}_{II}(\cdot)$
is defined in (30) and $\omega_{ik}$ is given by Lemma 1. To prove the claim,
notice that if there exists $i\in\mathcal{S}_{II}(\mathbf{x}_{T}^{(k)})$ such
that $\mathbf{x}_{T}^{k|i}<\omega_{ik}$ then, since $x_{ik}$ is a best
response of player $i$ in the $k$-th CPR, by Remark 1, it would assume a value
for which $x_{ik}+\mathbf{x}_{T}^{k|i}<\omega_{ik}$, which contradicts the
fact that $\mathbf{x}_{T}^{(k)}\geq 1$. The claim follows.
However, since $x_{ik}$ is a best response and
$\mathbf{x}_{T}^{k|i}\geq\omega_{ik}$, for all
$i\in\mathcal{S}_{II}(\mathbf{x}_{T}^{(k)})$, it follows that $x_{ik}=0$, for
all $i\in\mathcal{S}_{II}(\mathbf{x}_{T}^{(k)})$. This contradicts the fact
that $\mathbf{x}_{T}^{(k)}\geq 1$, and the result follows. ∎
###### Lemma 8.
Assume that $m\leq n$. Then there do not exist distinct elements
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ and
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})$ in $\mathcal{N}(G^{(2)})$
for which it holds $\mathbf{x}_{T}^{(j)}\leq\mathbf{y}_{T}^{(j)}$, for all
$j\in[m]$, and
$\sum_{j\in[m]}\mathbf{x}_{T}^{(j)}<\sum_{j\in[m]}\mathbf{y}_{T}^{(j)}$.
###### Proof.
Suppose that such GNEs do exist. Since $m\leq n$, it follows from Lemma 7 that
$\mathcal{T}_{I}(\mathbf{x})\neq\emptyset$, for every
$\mathbf{x}\in\mathcal{N}(G^{(2)})$.
Notice that Lemma 6 implies that
$\mathcal{T}_{I}(\mathbf{x})\subset\mathcal{T}_{I}(\mathbf{y})$ and
$J_{\mathbf{y}_{-i}}\subset J_{\mathbf{x}_{-i}}$, and (26) implies that
$\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})=x_{ij}$, for all
$i\in\mathcal{T}_{I}(\mathbf{x})$ and all $j\in J_{\mathbf{x}_{-i}}$.
Similarly, it holds $\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})=y_{ij}$, for all
$i\in\mathcal{T}_{I}(\mathbf{y})$ and all $j\in J_{\mathbf{y}_{-i}}$. Hence we
may write
$\sum_{j\in[m]}\mathbf{x}_{T}^{(j)}=\sum_{i\in\mathcal{T}_{I}(\mathbf{x})}\sum_{j\in
J_{\mathbf{x}_{-i}}}\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})+|\mathcal{T}_{II}(\mathbf{x})|$
as well as
$\sum_{j\in[m]}\mathbf{y}_{T}^{(j)}=\sum_{i\in\mathcal{T}_{I}(\mathbf{y})}\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})+|\mathcal{T}_{II}(\mathbf{y})|\,.$
Since $\sum_{j}\mathbf{x}_{T}^{(j)}<\sum_{j}\mathbf{y}_{T}^{(j)}$,
$|\mathcal{T}_{I}(\mathbf{x})|\leq|\mathcal{T}_{I}(\mathbf{y})|$ and
$|\mathcal{T}_{II}(\mathbf{x})|\geq|\mathcal{T}_{II}(\mathbf{y})|$ hold true,
it follows that
$\displaystyle\sum_{i\in\mathcal{T}_{I}(\mathbf{x})}\,\sum_{j\in
J_{\mathbf{x}_{-i}}}\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})+|\mathcal{T}_{II}(\mathbf{x})\setminus\mathcal{T}_{II}(\mathbf{y})|$
$\displaystyle<$
$\displaystyle\sum_{i\in\mathcal{T}_{I}(\mathbf{x})}\,\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})$ (32)
$\displaystyle+$
$\displaystyle\sum_{i\in\mathcal{T}_{I}(\mathbf{y})\setminus\mathcal{T}_{I}(\mathbf{x})}\,\,\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})\,.$
However, the fact that $\mathcal{G}_{ij}$ is decreasing implies that
$\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})\geq\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})$,
for all $i\in\mathcal{T}_{I}(\mathbf{x})$ and all $j\in J_{\mathbf{y}_{-i}}$;
hence it holds
$\sum_{i\in\mathcal{T}_{I}(\mathbf{x})}\,\sum_{j\in
J_{\mathbf{x}_{-i}}}\mathcal{G}_{ij}(\mathbf{x}_{T}^{(j)})\geq\sum_{i\in\mathcal{T}_{I}(\mathbf{x})}\,\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})\,.$ (33)
Moreover, since $\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})<1$, for all
$i\in\mathcal{T}_{I}(\mathbf{y})\setminus\mathcal{T}_{I}(\mathbf{x})$, it
holds
$\sum_{i\in\mathcal{T}_{I}(\mathbf{y})\setminus\mathcal{T}_{I}(\mathbf{x})}\,\,\sum_{j\in
J_{\mathbf{y}_{-i}}}\mathcal{G}_{ij}(\mathbf{y}_{T}^{(j)})<|\mathcal{T}_{I}(\mathbf{y})\setminus\mathcal{T}_{I}(\mathbf{x})|=|\mathcal{T}_{II}(\mathbf{x})\setminus\mathcal{T}_{II}(\mathbf{y})|\,.$
(34)
Now notice that (33) and (34) contradict (32). The result follows. ∎
Finally, the proof of Theorem 4 requires the following measure-theoretic
results. Here and later, given a positive integer $k\geq 1$, $\mathcal{L}^{k}$
denotes $k$-dimensional Lebesgue measure. Moreover, given a function
$f:\mathbb{R}^{k}\to\mathbb{R}^{m}$ and a set $B\subset\mathbb{R}^{m}$, we
denote by $f^{-1}(B):=\\{\mathbf{x}\in\mathbb{R}^{k}:f(\mathbf{x})\in B\\}$ the
preimage of $B$ under $f$.
###### Lemma 9.
Let $f:\mathbb{R}^{d}\to\mathbb{R}^{m}$ be a continuously differentiable
function for which $\mathcal{L}^{d}(\\{\mathbf{x}\in\mathbb{R}^{d}:\nabla
f(\mathbf{x})=0\\})=0$. Then it holds $\mathcal{L}^{d}(f^{-1}(A))=0$, for
every $A\subset\mathbb{R}^{m}$ for which $\mathcal{L}^{m}(A)=0$.
###### Proof.
See [23, Theorem 1]. ∎
Let $m\geq 1$ be an integer. A set $A\subset[0,1]^{m}$ is called an
_antichain_ if it does not contain two distinct elements
$\mathbf{x}=(x_{1},\ldots,x_{m})$ and $\mathbf{y}=(y_{1},\ldots,y_{m})$ such
that $x_{j}\leq y_{j}$, for all $j\in[m]$.
###### Lemma 10.
Let $A\subset[0,1]^{m}$ be an antichain. Then $\mathcal{L}^{m}(A)=0$.
###### Proof.
The result is an immediate consequence of Lebesgue’s density theorem.
Alternatively, it follows from the main result in [9], and from [10, Theorem
1.3]. ∎
Now given
$\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{n}$, let
$\mathbf{v}_{\mathbf{x}}$ denote the vector
$\mathbf{v}_{\mathbf{x}}:=(\mathbf{x}_{T}^{(1)},\ldots,\mathbf{x}_{T}^{(m)})\in[0,1]^{m},$
(35)
where $\mathbf{x}_{T}^{(j)},j\in[m]$, is defined in (4). Finally, given
$N\subset\mathcal{C}_{n}$, define the set
$W_{N}:=\\{\mathbf{v}_{\mathbf{x}}:\mathbf{x}\in N\\}\,.$ (36)
We now have all the ingredients needed for the proof of Theorem 4.
###### Proof of Theorem 4.
To simplify notation, let us set $N:=\mathcal{N}(G^{(2)})$. We have to show
that $\mathcal{L}^{nm}(N)=0$.
Now let $f$ denote the map $f:\mathcal{C}_{n}\to[0,1]^{m}$ given by
$f(\mathbf{x})=\mathbf{v}_{\mathbf{x}}$, where $\mathbf{v}_{\mathbf{x}}$ is
defined in (35). It is straightforward to verify that
$\\{\mathbf{x}\in\mathcal{C}_{n}:\nabla f(\mathbf{x})=0\\}=\emptyset$.
Now consider the set $W_{N}$, defined in (36), and notice that Lemma 8 implies
that $W_{N}$ is an antichain; hence it follows from Lemma 10 that
$\mathcal{L}^{m}(W_{N})=0$. Therefore, Lemma 9 yields
$\mathcal{L}^{nm}(N)=\mathcal{L}^{nm}(f^{-1}(W_{N}))=0\,,$
as desired. ∎
## 8 A restricted version of the game
Let $G^{(2)}$ denote a Fragile multi-CPR Game satisfying Assumption 2. In this
section we show that $G^{(2)}$ admits finitely many GNEs, subject to the
constraint that the total investment in each CPR is fixed. We then use this
result, in the next section, in order to formulate a conjecture which is
equivalent to Conjecture 1. Before being more precise, we need some extra
piece of notation.
Given a set $F\subset[m]$ and real numbers $\\{r_{j}\\}_{j\in F}\subset[0,1]$,
indexed by $F$, we denote by $W(\\{r_{j}\\}_{j\in F})$ the set
$W(\\{r_{j}\\}_{j\in
F}):=\\{\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})\in\mathcal{C}_{n}:\mathbf{x}_{T}^{(j)}=r_{j},\text{
for }j\in F\\}\,,$
where $\mathbf{x}_{T}^{(j)}$ is defined in (4). In other words,
$W(\\{r_{j}\\}_{j\in F})$ consists of those strategy profiles for which the
total investment in the CPRs corresponding to elements in $F$ is fixed, and
equal to the given numbers $\\{r_{j}\\}_{j\in F}$.
In this section we prove the following.
###### Theorem 10.
Fix real numbers $r_{1},\ldots,r_{m}\in[0,1]$. Then the set
$W:=W(r_{1},\ldots,r_{m})$ contains at most $2^{n\cdot(m+1)}$ GNEs of
$G^{(2)}$.
The proof requires a couple of observations which we collect in the following
lemmata.
###### Lemma 11.
Suppose that $\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ and
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})$ are two GNEs of $G^{(2)}$
such that $\mathbf{x},\mathbf{y}\in W:=W(\\{r_{j}\\})$ and $0<x_{ij}<y_{ij}$, for
some $i\in[n]$, $j\in[m]$ and $r_{j}\in[0,1]$. Then either $\mathbf{x}_{i}$ is
of Type II or $\mathbf{y}_{i}$ is of Type II.
###### Proof.
Suppose, towards arriving at a contradiction, that the conclusion is not true.
Then both $\mathbf{x}_{i}$ and $\mathbf{y}_{i}$ are of Type I, and thus (26)
implies that $\mathcal{G}_{ij}(r_{j})=x_{ij}$ and $\mathcal{G}_{ij}(r_{j})=y_{ij}$.
Hence it holds
$\mathcal{G}_{ij}(r_{j})=x_{ij}<y_{ij}=\mathcal{G}_{ij}(r_{j})$, a
contradiction. The result follows. ∎
###### Lemma 12.
Suppose that $\mathbf{x}=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})$ and
$\mathbf{y}=(\mathbf{y}_{1},\ldots,\mathbf{y}_{n})$ are two GNEs of $G^{(2)}$
for which it holds $\mathbf{x},\mathbf{y}\in W:=W(\\{r_{j},r_{\ell}\\})$ and
$0<x_{ij}<y_{ij}$ and $x_{i\ell}>y_{i\ell}>0$, for some $i\in[n]$ and
$\\{j,\ell\\}\subset[m]$. Then either $\mathbf{x}_{i}$ is of Type I or
$\mathbf{y}_{i}$ is of Type I.
###### Proof.
Suppose, towards arriving at a contradiction, that both $\mathbf{x}_{i}$ and
$\mathbf{y}_{i}$ are of Type II. Recall the definition of
$\psi_{ij}(\,\cdot\,;\,\cdot\,)$, given in (14), and notice that, since both
$\mathbf{x}_{i},\mathbf{y}_{i}$ are of Type II, Theorem 8 implies the
existence of $\kappa_{x},\kappa_{y}\geq 0$ such that
$\kappa_{x}=x_{ij}^{a_{i}-1}\cdot\psi_{ij}(x_{ij};r_{j}-x_{ij})=x_{i\ell}^{a_{i}-1}\cdot\psi_{i\ell}(x_{i\ell};r_{\ell}-x_{i\ell})$
and
$\kappa_{y}=y_{ij}^{a_{i}-1}\cdot\psi_{ij}(y_{ij};r_{j}-y_{ij})=y_{i\ell}^{a_{i}-1}\cdot\psi_{i\ell}(y_{i\ell};r_{\ell}-y_{i\ell})\,.$
Now notice that, for all $k\in[m]$, the function
$\Psi_{k}(x):=x^{a_{i}-1}\cdot\psi_{ik}(x;r-x)$ is decreasing in $x$, for fixed
$r>0$ and $a_{i}\in(0,1]$. Hence, $x_{ij}<y_{ij}$ implies that
$\kappa_{x}>\kappa_{y}$, and $x_{i\ell}>y_{i\ell}$ implies that
$\kappa_{x}<\kappa_{y}$, a contradiction. The result follows. ∎
We may now proceed with the proof of the main result of this section.
###### Proof of Theorem 10.
For every $i\in[n]$, define the set
$N_{i}:=\\{\mathbf{x}_{i}\in C_{m}:(\mathbf{x}_{i},\mathbf{x}_{-i})\in
W,\text{ for some }\mathbf{x}_{-i}\in\mathcal{C}_{-i}\\}\,.$
We first show that the cardinality of $N_{i}$, denoted $|N_{i}|$, is at most
$2^{m+1}$.
Let $\mathbf{x}_{i}\in N_{i}$, and recall from Theorem 8, and (18), that there
exists $J\subset[m]$ such that $x_{ij}>0$, when $j\in J$, and $x_{ij}=0$ when
$j\in[m]\setminus J$. In other words, to every $\mathbf{x}_{i}\in N_{i}$ there
corresponds a set $J\subset[m]$ such that $x_{ij}>0$ if and only if $j\in J$.
Now, given $J\subset[m]$, let
$N_{J}:=\\{\mathbf{x}_{i}\in N_{i}:x_{ij}>0\text{ if and only if }j\in
J\\}\,.$
Assume first that $|J|\geq 2$. In this case we claim that $|N_{J}|\leq 2$.
Indeed, if $|N_{J}|\geq 3$, then there are two elements, say
$\mathbf{x}^{(1)},\mathbf{x}^{(2)}\in N_{J}$, which are either both of Type I,
or both of Type II. If both $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are of
Type I, then there exists $j\in J$ such that, without loss of generality, it
holds $x_{ij}^{(1)}<x_{ij}^{(2)}$; which contradicts Lemma 11. If both
$\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are of Type II, then there exist
$j,\ell\in J$ such that $x_{ij}^{(1)}<x_{ij}^{(2)}$ and $x_{i\ell}^{(1)}>x_{i\ell}^{(2)}$; which
contradicts Lemma 12. The claim follows.
If $|J|=1$, say $J=\\{j\\}$, we claim that $|N_{J}|\leq 1$. Indeed, suppose
that $|N_{J}|\geq 2$ holds true and notice that every element of $N_{J}$ is of
Type I. However, the assumption that $|N_{J}|\geq 2$ implies that there exist
$\mathbf{x}^{(1)},\mathbf{x}^{(2)}\in N_{J}$ such that
$0<x_{ij}^{(1)}<x_{ij}^{(2)}$; which contradicts Lemma 11. The second claim
follows.
Since there are $2^{m}$ subsets $J\subset[m]$, and for each $J$ it holds
$|N_{J}|\leq 2$, it follows that there are at most $2^{m+1}$ elements in
$N_{i}$. Since there are $n$ players in the game, the result follows. ∎
## 9 Concluding remarks and conjectures
Let $G^{(2)}$ denote a Fragile multi-CPR Game satisfying Assumption 2, and let
$\mathcal{N}(G^{(2)})$ be the set consisting of all GNEs of $G^{(2)}$. So far
we have proven that the $(n\cdot m)$-dimensional Lebesgue measure of
$\mathcal{N}(G^{(2)})$ equals zero, but there are several problems and
questions that remain open. First and foremost, we believe that the following
holds true.
###### Conjecture 2.
Let $N:=\mathcal{N}(G^{(2)})$. Then the antichain $W_{N}$, defined in (36), is
finite.
Notice that if Conjecture 2 holds true then, in view of Theorem 10, Conjecture
1 holds true as well. Since the converse is clearly true, it follows that
Conjecture 1 and Conjecture 2 are equivalent. The exact number of GNEs in a
Fragile multi-CPR Game appears to depend on the relation between the number of
players, $n$, and the number of CPRs, $m$. When $n\geq m$, we conjecture that
for every GNE the players choose best responses of Type I and therefore,
provided this is indeed the case, Theorem 9 would imply that the game admits a
unique GNE.
###### Conjecture 3.
If $n\geq m$, then $|\mathcal{N}(G^{(2)})|=1$.
Another line of research is to investigate the _best response dynamics_ of a
Fragile multi-CPR Game, which may be seen as a behavioral rule along which
players fix an initial investment in the CPRs and proceed with updating their
investment, over rounds, in such a way that in the $t$-th round player
$i\in[n]$ invests $\mathbf{x}_{i}^{(t)}:=B_{i}(\mathbf{x}_{-i}^{(t)})$, where
$B_{i}(\cdot)$ is defined in (13) and
$\mathbf{x}_{-i}^{(t)}\in\mathcal{C}_{-i}$ is the strategy profile of all
players except player $i$ in the $t$-th round. A natural question to ask is
whether the best response dynamics converge, i.e., whether there exists a
round $t_{0}$ such that $\mathbf{x}_{i}^{(t)}=\mathbf{x}_{i}^{(t_{0})}$, for all $t\geq t_{0}$
and all $i\in[n]$.
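In pseudocode, the best response dynamics amount to the following simple iteration; here the routine computing an element of $B_{i}(\mathbf{x}_{-i})$ is a placeholder (for instance a numerical version of Theorem 8) and is not an object defined in the paper.

```python
def best_response_dynamics(x0, best_response, max_rounds=1000, tol=1e-9):
    """Iterate simultaneous best responses until (and if) the profile stabilises.

    x0:            initial profile, a list of per-player investment vectors;
    best_response: placeholder callable (i, x_minus_i) -> investment vector of player i."""
    x = [list(xi) for xi in x0]
    for t in range(1, max_rounds + 1):
        new_x = []
        for i in range(len(x)):
            x_minus_i = [x[k] for k in range(len(x)) if k != i]
            new_x.append(list(best_response(i, x_minus_i)))
        change = max(abs(a - b) for xi, yi in zip(x, new_x) for a, b in zip(xi, yi))
        x = new_x
        if change < tol:
            return x, t              # the dynamics have (numerically) converged
    return x, max_rounds             # no convergence detected within max_rounds
```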
###### Conjecture 4.
The best response dynamics of $G^{(2)}$ converge.
When $m=1$, it is shown in [13] that the best response dynamics of the Fragile
CPR Game converge to its Nash Equilibrium. This is obtained as a consequence
of the fact that the best response correspondence is single-valued and
decreasing in the total investment in the CPR (see the remarks following [13,
Proposition 7]). Moreover, it is not difficult to verify that the Nash
equilibrium of the Fragile CPR Game is also the Generalized Nash equilibrium.
Hence, the best response dynamics of a Fragile CPR Game converge to the
Generalized Nash equilibrium. When $m\geq 2$, the best response correspondence
need no longer be decreasing in each CPR. It is decreasing for those players
whose best response is of Type I, as can be easily seen using the fact that
the auxiliary function $\mathcal{G}_{ij}$ is decreasing. This monotonicity may
no longer be true when a player moves from a best response of Type II to a
best response of Type I, or from a best response of Type II to a best response
of the same type. Furthermore, Theorem 8 does not guarantee that the set of
effective CPRs, defined in (18), is unique. Hence, the best response
correspondence may not be single-valued. So far our theoretical analysis does
not provide sufficient evidence for the validity of Conjecture 4 in full generality.
However, our numerical experiments suggest that Conjecture 4 holds true, and
we expect that we will be able to report on that matter in the future.
## References
* [1] S. Aflaki, The effect of environmental uncertainty on the tragedy of the commons, Games and Economic Behavior 82 (2013) 240–253.
* [2] K. J. Arrow, G. Debreu, Existence of an Equilibrium for a Competitive Economy, Econometrica 22(3) (1954) 265–290.
* [3] J.-P. Aubin, Optima and Equilibria: An Introduction to Nonlinear Analysis, Springer, 1998.
* [4] L.D. Berkovitz, Convexity and optimization in $\mathbb{R}^{n}$, Wiley, New York (2002).
* [5] D. V. Budescu, A. Rapoport, R. Suleiman, Common Pool Resource Dilemmas under Uncertainty: Qualitative Tests of Equilibrium Solutions, Games and Economic Behavior 10 (1995) 171–201.
* [6] G. Debreu, A Social Equilibrium Existence Theorem, Proceedings of the National Academy of Sciences, 38(10) (1952) 886–893.
* [7] P. Dubey, O. Haimanko, A. Zapechelnyuk, Strategic complements and substitutes, and potential games, Games and Economic Behavior 54 (1) (2006) 77–94.
* [8] C. Dutang, Existence theorems for generalized Nash equilibrium problems, Journal of Nonlinear Analysis and Optimization: Theory and Applications 4(2) 2013.
* [9] K. Engel, A continuous version of a Sperner-type theorem, Elektron. Inf. verarb. Kybern. EIK 22 (1) (1986) 45–50.
* [10] K. Engel, T. Mitsis, C. Pelekis, C. Reiher, Projection inequalities for antichains, Israel Journal of Mathematics 238 (2020) 61–90.
* [11] F. Facchinei, C. Kanzow, Generalized Nash equilibrium problems, 4OR 5 (2007) 173–210.
* [12] G. Hardin, The tragedy of the commons, Science 162(3859) (1968) 1243–1248.
* [13] A. R. Hota, S. Garg, S. Sundaram, Fragility of the commons under prospect-theoretic risk attitudes, Games and Economic Behavior 98 (2016) 135–164.
* [14] A. R. Hota, S. Sundaram, Controlling Human Utilization of Failure-Prone Systems via Taxes, IEEE Transactions on Automatic Control (2020).
* [15] T. Ichiishi, Game Theory for Economic Analysis, In Economic theory, Econometrics, and Mathematical Economics, Academic press, New York, London, Paris, 1983.
* [16] D. Kahneman, A. Tversky, Prospect theory: An analysis of decision under risk, Econometrica: Journal of the Econometric Society 47 (1979) 263–291.
* [17] C. Keser, R. Gardner, Strategic behavior of experienced subjects in a common pool resource game, International Journal of Game Theory 28 (1999) 241–252.
* [18] M. Luptacik, Mathematical optimization and economic analysis, Springer, 2010.
* [19] R. Laraki, J. Renault, S. Sorin, Mathematical foundations of game theory, Springer, 2019.
* [20] R. Levi, G. Perakis, G. Romero, A continuous knapsack problem with separable convex utilities: Approximation algorithms and applications, Operations Research Letters 42(5) (2014) 367–373.
* [21] E. Ostrom, Governing the commons: The evolution of institutions for collective action, Cambridge university press, 1990.
* [22] E. Ostrom, R. Gardner, J. Walker, Rules, Games, and Common-Pool Resources, University of Michigan Press, 1994.
* [23] S. P. Ponomarev, Submersions and preimages of sets of measure zero, Siberian Mathematical Journal 28(1) (1987) 153-163.
* [24] S. M. Stefanov, On the solution of multidimensional convex separable continuous knapsack problem with bounded variables, European Journal of Operational Research 247(2) (2015) 366–369.
* [25] P. Vamvakas, E. E. Tsiropoulou, S. Papavassiliou, Risk-Aware Resource Control with Flexible 5G Access Technology Interfaces, 2019 IEEE 20th International Symposium on “A World of Wireless, Mobile and Multimedia Networks” (WoWMoM), Washington, DC, USA, 2019, pp. 1–9.
* [26] P. Vamvakas, E. E. Tsiropoulou, S. Papavassiliou, On Controlling Spectrum Fragility via Resource Pricing in 5G Wireless Networks, IEEE Networking Letters 1(3) (2019) 111–115.
* [27] J. M. Walker, R. Gardner, Probabilistic Destruction of Common-pool Resources: Experimental Evidence, The Economic Journal 102(414) (1992) 1149–1161.
* [28] P.P. Wakker, Prospect Theory: For Risk and Ambiguity, Cambridge University Press, 2010.
# A novel control method for solving high-dimensional Hamiltonian systems
through deep neural networks
Shaolin Ji Zhongtai Securities Institute for Financial Studies, Shandong
University, 250100, China Shige Peng School of Mathematics, Shandong
University, 250100, China Ying Peng Zhongtai Securities Institute for
Financial Studies, Shandong University, 250100, China Xichuan Zhang School
of Mathematics, Shandong University, 250100, China
###### Abstract
In this paper, we mainly focus on solving high-dimensional stochastic
Hamiltonian systems with boundary condition, which is essentially a Forward
Backward Stochastic Differential Equation (FBSDE in short), and propose a
novel method from the view of the stochastic control. In order to obtain the
approximated solution of the Hamiltonian system, we first introduce a
corresponding stochastic optimal control problem such that the extended
Hamiltonian system of the control problem is exactly what we need to solve,
then we develop two different algorithms suitable for different cases of the
control problem and approximate the stochastic control via deep neural
networks. The numerical results show that, compared with the Deep FBSDE method
developed previously from the viewpoint of solving FBSDEs, the novel algorithms
converge faster, i.e., they require fewer training steps, and
exhibit more stable convergence for different Hamiltonian systems.
Keywords: stochastic Hamiltonian system, FBSDE, optimal control, PDE
## 1 Introduction
The theory of the Hamiltonian system is known as one of the dominant tools for
the description of dynamic phenomena in the fields of physics and economics
[1]. For example, in physics, the mechanical and electrical systems are
usually represented as energy functions, which are at the same time
Hamiltonian systems. Actually, the Hamiltonian system could reflect the laws
of energy conservation and dissipation [1, 2].
A determined Hamiltonian system can be given as
$\begin{cases}\mathrm{d}x_{t}=H_{y}(x_{t},y_{t})\mathrm{d}t,\vspace{1ex}\\\
-\mathrm{d}y_{t}=H_{x}(x_{t},y_{t})\mathrm{d}t,\end{cases}$ (1.1)
where $H(x,y):\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}$ is a
given real function called the Hamiltonian, $H_{x}(\cdot)$ and $H_{y}(\cdot)$
are partial derivatives of $H(\cdot)$ with respect to $x$ and $y$,
respectively. When considering a terminal condition $y_{T}=\Phi_{x}(x_{T})$
for a given function $\Phi:\mathbb{R}^{n}\rightarrow\mathbb{R}$, (1.1) becomes
a boundary problem.
For more complex environments, where the physical system cannot be represented
in a deterministic form, the Hamiltonian system is usually combined with a
stochastic process. Here we consider a boundary problem of stochastic
Hamiltonian system, as shown in the following,
$\begin{cases}\mathrm{d}x_{t}=H_{y}(t,x_{t},y_{t},z_{t})\mathrm{d}t+H_{z}(t,x_{t},y_{t},z_{t})\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}=H_{x}(t,x_{t},y_{t},z_{t})\mathrm{d}t-z_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\qquad y_{T}=-\Phi_{x}(x_{T}),\end{cases}$ (1.2)
which is essentially a fully coupled forward-backward stochastic differential
equation (FBSDE in short). Many research works have studied the solutions of
FBSDEs and the eigenvalues of Hamiltonian systems [3, 4, 5, 6, 7, 8, 9, 10,
11]. The significance of studying this kind of Hamiltonian system is that on
the one hand it can be applied in solving the stochastic optimal control
problems via the well-known stochastic maximum principle [12, 13]; on the
other hand, it helps to obtain the solutions of nonlinear partial differential
equations (PDEs in short) according to the connection between the FBSDEs and
the PDEs [11].
In most cases, it is difficult to obtain the explicit solution of the
Hamiltonian system (1.2), thus numerical methods should be studied. As (1.2)
is essentially a FBSDE, an intuitive way is to solve (1.2) from the
perspective of FBSDEs. Therefore, numerical methods for solving the FBSDEs can
be applied [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 8] , such as the PDE
methods, the probabilistic methods, etc. However, most of the traditional
numerical methods can not deal with high-dimensional problems. Moreover, it is
worth to point out that solving the fully coupled FBSDEs numerically has been
a much more challenging problem than the general FBSDEs, even for low
dimensional cases. Recently, with the application of the deep learning
technique in a wide range of areas, numerical methods based on deep neural
networks have been proposed to solve high-dimensional Backward Stochastic
Differential Equations (BSDEs in short) and FBSDEs and achieved remarkable
success. Among them, a breakthrough work was developed by [24, 25], the main
idea is to reformulate the BSDE into a stochastic optimal control problem by
rewriting the backward process into a forward form and taking the terminal
error as the cost functional, then the solution of the BSDE is approximated by
deep neural network. Other different deep learning algorithms are proposed to
solve the BSDEs and related PDEs [26, 27, 28, 29], where they also focus on
approximating the solution of the BSDE (or PDE) with the deep neural network.
For solving coupled and fully coupled FBSDEs, [30, 31] developed numerical
algorithms which are also inspired by [24, 25].
In this paper, we propose a novel method to solve the Hamiltonian system (1.2)
via deep learning. As equation (1.2) is at the same time a fully coupled
FBSDE, this method is also suitable for solving fully coupled FBSDEs. However,
different from the above mentioned deep learning methods which aim to solve
the FBSDEs directly, we first look for the corresponding stochastic optimal
control problem of the Hamiltonian system, such that the extended Hamiltonian
system of the stochastic control problem is exactly what we need to solve.
Then we approximate the optimal control with deep neural networks. In order to
solve the optimal control problem, two different cases are considered which
correspond to two different algorithms. The first algorithm (Algorithm 1)
deals with the case where the function $f(t,x,u,v)$ defined in (2.5) has an
explicit form. For the case that $f(t,x,u,v)$ cannot be expressed explicitly,
the original control problem is transformed to a double objective optimization
problem and we develop the second algorithm (Algorithm 2) to solve it.
Finally, the numerical solutions $(y_{t},z_{t})$ of (1.2) are obtained by
calculating the solution of the extended Hamiltonian system for the optimal
control according to the stochastic maximum principle.
We also compare the results of our newly proposed algorithms with those of the
algorithm developed in our previous work [31] (called the Deep FBSDE method
here), which can be used to solve the Hamiltonian system from the viewpoint of the
FBSDEs. Compared with the Deep FBSDE method, the novel algorithms have two
advantages. The first is that fewer iteration steps are
required to achieve convergent results: even when the Deep FBSDE method converges,
it needs more iterations to reach a convergent result. The second
is that our proposed algorithms exhibit more stable convergence: for some
Hamiltonian systems, the Deep FBSDE method is more prone to diverge under the same
piecewise-decay learning rate as our proposed algorithms. The details
can be found in the numerical results in section 4.
This paper is organized as follows. In section 2, we describe the Hamiltonian
system that we aim to solve, and introduce its corresponding stochastic
optimal control problem. In section 3, we introduce two schemes to solve the
stochastic control problem according to whether the function $f(t,x,u,v)$
defined in (2.5) has an explicit form, and then give the corresponding neural
network architectures. The numerical results for different examples are shown
in section 4, and a brief conclusion is made in section 5.
## 2 Problem formulation
In this section, we first introduce the stochastic Hamiltonian system that we
aim to solve, then we show that solving this kind of stochastic Hamiltonian
systems is equivalent to solving a stochastic optimal control problem.
### 2.1 The stochastic Hamiltonian system
Let $T>0$, $(\Omega,\mathcal{F},\mathbb{F},\mathbb{P})$ be a filtered
probability space, in which $B:[0,T]\times\Omega\rightarrow\mathbb{R}^{d}$ is
a $d$-dimensional standard $\mathbb{F}$-Brownian motion;
$\mathbb{F}=\\{\mathcal{F}_{t}\\}_{0\leq t\leq T}$ is the natural filtration
generated by the Brownian motion $B$ Suppose that
$(\Omega,\mathcal{F},\mathbb{P})$ is complete, $\mathcal{F}_{0}$ contains all
the $\mathbb{P}$-null sets in $\mathcal{F}$ and $\mathbb{F}$ is right
continuous.
For $z^{1},z^{2}\in\mathbb{R}^{n\times d}$, define $\left\langle
z^{1},z^{2}\right\rangle=\mbox{tr}(z^{1}(z^{2})^{\operatorname{T}}{})$ and
$|z|^{2}=\left\langle z,z\right\rangle$. The space of all mean square-
integrable $\mathcal{F}_{t}$-adapted and $\mathbb{R}^{n}$-valued processes
will be denoted by $M^{2}(0,T;\mathbb{R}^{n})$, which is a Hilbert space with
the norm
$\|v(\cdot)\|=\Big{(}\mathbb{E}\big{[}\int_{0}^{T}|v(t)|^{2}dt\big{]}\Big{)}^{1/2}$
and
$L^{2}(\Omega,\mathcal{F}_{t},\mathbb{P})\triangleq\left\\{\xi|\xi\in\mathbb{R}^{n}\mbox{
is }\mathcal{F}_{t}\mbox{-measurable and
}\mathbb{E}\left[|\xi|^{2}\right]<\infty\right\\}.$
Let
$H:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times
d}\rightarrow\mathbb{R},$ (2.1)
be a $C^{2}$ real function of $(x,y,z)$, called a Hamiltonian and let
$\Phi:\mathbb{R}^{n}\rightarrow\mathbb{R},$ (2.2)
be a $C^{1}$ real function of $x$. In our context, unless otherwise stated, we
always assume that the Hamiltonian $H$ is strictly convex with respect to $y$
and $z$.
Consider the following stochastic Hamiltonian system:
$\begin{cases}\mathrm{d}x_{t}=H_{y}(t,x_{t},y_{t},z_{t})\mathrm{d}t+H_{z}(t,x_{t},y_{t},z_{t})\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}=H_{x}(t,x_{t},y_{t},z_{t})\mathrm{d}t-z_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\qquad y_{T}=-\Phi_{x}(x_{T}),\end{cases}$ (2.3)
where $H_{x}$, $H_{y}$, $H_{z}$ are the derivatives of $H$ with respect to $x$,
$y$, $z$, respectively. The above system is essentially a special kind of
fully coupled FBSDE.
Set
$w=\begin{pmatrix}x\\\ y\\\
z\end{pmatrix}\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times
d},\qquad A(t,w)=\begin{pmatrix}-H_{x}\\\ H_{y}\\\ H_{z}\end{pmatrix}(t,w),$
and
$\left\langle w^{1},w^{2}\right\rangle=\left\langle
x^{1},x^{2}\right\rangle+\left\langle y^{1},y^{2}\right\rangle+\left\langle
z^{1},z^{2}\right\rangle.$
###### Definition 2.1.
A triple of processes
$(x(\cdot),y(\cdot),z(\cdot)):[0,T]\times\Omega\rightarrow\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times
d}$ is called an adapted solution of (2.3), if
$(x(\cdot),y(\cdot),z(\cdot))\in
M^{2}(0,T;\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d})$,
and it satisfies (2.3).
###### Assumption 1.
For any
$w,w^{\prime}\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{n\times
d}$ and $x,x^{\prime}\in\mathbb{R}^{n}$,
* (i)
there exists a constant $\beta_{1}>0$, such that
$\begin{array}[]{c}|A(t,w)-A(t,w^{\prime})|\leq\beta_{1}|w-w^{\prime}|,\vspace{1ex}\\\
\end{array}$
and
$|\Phi_{x}(x)-\Phi_{x}(x^{\prime})|\leq\beta_{1}|x-x^{\prime}|.$
* (ii)
there exists a constant $\beta_{2}>0$, such that the following monotonic
conditions hold.
$\displaystyle\langle A(t,w)-A(t,w^{\prime}),w-w^{\prime}\rangle$
$\displaystyle\leq-\beta_{2}|w-w^{\prime}|^{2}$
$\displaystyle\langle-\Phi_{x}(x)+\Phi_{x}(x^{\prime}),x-x^{\prime}\rangle$
$\displaystyle\geq\beta_{2}|x-x^{\prime}|^{2}.$
As the system (2.3) is also a fully coupled FBSDE, we recall the
following existence and uniqueness theorem from [3, 4].
###### Theorem 1 (Theorem 3.1 in [3]).
Let Assumption 1 hold. Then there exists a unique adapted solution
$(x(\cdot),y(\cdot),z(\cdot))$ for (2.3).
Recently, numerical algorithms for solving the BSDEs and FBSDEs with deep
learning method [24, 25, 30, 31] have been proposed and demonstrated
remarkable performance. The main idea is to reformulate the BSDE into a
stochastic optimal control problem, where the solution $z$ of the BSDE is
regarded as a control and approximated with deep neural network, and the
terminal error is taken as the cost functional. Other different numerical
algorithms have also been developed for solving the FBSDEs and the related
PDEs [26, 27, 28, 29], which also regard the solution of the FBSDE ($y$ or
$z$) as a control and approximate it with appropriate loss function.
### 2.2 A novel method to solve the stochastic Hamiltonian system
As noted in the previous sections, the stochastic Hamiltonian system (2.3) is
essentially a fully coupled FBSDE and can be solved through the methods for
solving the FBSDEs. In this paper, we develop a novel method for solving the
Hamiltonian system (2.3). Different from the above-mentioned algorithms for
solving the BSDEs or FBSDEs, our main idea is to find the corresponding
stochastic optimal control problem of the stochastic Hamiltonian system, and
then directly apply the deep learning method to solve the control problem.
For all $x,y,u\in\mathbb{R}^{n}$ and $z,v\in\mathbb{R}^{n\times d}$, set
$F(t,x,u,v,y,z)=\langle y,u\rangle+\langle z,v\rangle-H(t,x,y,z)$ (2.4)
and
$f(t,x,u,v)=\max_{y,z}F(t,x,u,v,y,z).$ (2.5)
Here $f(t,x,u,v)$ is the Legendre-Fenchel transform of $H(t,x,y,z)$ with
respect to $(y,z)$. Due to the differentiability and strict convexity of the
Hamiltonian $H$, $f$ is also differentiable and strictly convex with respect to
$u$ and $v$ [32].
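As a quick illustration of the transform (2.5) (this is precisely the setting of
the linear quadratic example in subsection 4.1), take
$H(t,x,y,z)=\langle x,y\rangle+\frac{1}{4}\langle y,y\rangle+\langle z,z\rangle$.
The first-order conditions of the maximization in (2.5) give $y=2(u-x)$ and
$z=\frac{1}{2}v$, and substituting back yields
$f(t,x,u,v)=|u-x|^{2}+\dfrac{1}{4}|v|^{2}.$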
Consider the following control system,
$\begin{cases}\mathrm{d}x_{t}=u_{t}\mathrm{d}t+v_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\end{cases}$ (2.6)
and the cost functional is given as
$J(u(\cdot),v(\cdot))=\mathbb{E}\left[\int_{0}^{T}f(t,x_{t},u_{t},v_{t})\mathrm{d}t+\Phi(x_{T})\right],$
(2.7)
where the controls $u(\cdot)$ and $v(\cdot)$ belong to
$M^{2}(0,T;\mathbb{R}^{n})$ and $M^{2}(0,T;\mathbb{R}^{n\times d})$,
respectively. The set of all admissible controls is denoted by
$\mathcal{U}_{ad}[0,T]$. Any
$(u^{*}(\cdot),v^{*}(\cdot))\in\mathcal{U}_{ad}[0,T]$ satisfying
$J(u^{*}(\cdot),v^{*}(\cdot))=\inf_{(u(\cdot),v(\cdot))\in\mathcal{U}_{ad}[0,T]}J(u(\cdot),v(\cdot))$
(2.8)
is called an optimal control. The corresponding state trajectory
$x^{*}(\cdot)$ is called an optimal trajectory and the corresponding triple
$(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot))$ is called an optimal triple.
In the following we prove that solving the stochastic Hamiltonian system (2.3)
is equivalent to solving the stochastic optimal control problem (2.6)-(2.7).
We need the following assumption.
###### Assumption 2.
$f(t,x,u,v)$ is continuously differentiable with respect to $x$, $u$, $v$, and
$\displaystyle|f_{x}(t,x,u,v)|$ $\displaystyle\leq
C(|x|+|u|+|v|+1),\vspace{1ex}$ $\displaystyle|f_{u}(t,x,u,v)|$
$\displaystyle\leq C(|x|+|u|+|v|+1),\vspace{1ex}$
$\displaystyle|f_{v}(t,x,u,v)|$ $\displaystyle\leq
C(|x|+|u|+|v|+1),\vspace{1ex}$ $\displaystyle|f(t,0,0,0)|$ $\displaystyle\leq
C$
for some given $C>0$.
###### Theorem 2.
Let $H$ be a given $C^{2}$ real function and strictly convex with respect to
$y$ and $z$. The derivatives of $H$ and $\Phi$ satisfy Assumption 1; $f$
satisfies Assumption 2. Suppose that
$(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot))$ is the optimal triple of the
optimal control problem (2.6)-(2.7). Then
$(x^{*}(\cdot),y^{*}(\cdot),z^{*}(\cdot))$ uniquely solves the Hamiltonian
system (2.3), where $y^{*}(\cdot)$ can be given as
$\displaystyle y_{t}^{*}$
$\displaystyle=\mathbb{E}\left[\int_{t}^{T}-f_{x}(s,x_{s}^{*},u^{*}_{s},v^{*}_{s})\mathrm{d}s-\Phi_{x}(x^{*}_{T})\Big{|}\mathcal{F}_{t}\right],$
(2.9)
and $z^{*}(\cdot)$ can be solved from the following BSDE
$\left\\{\begin{array}[]{l}-\mathrm{d}y_{t}^{*}=-f_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t})\mathrm{d}t-z_{t}^{*}\mathrm{d}B_{t},\vspace{1ex}\\\
y_{T}^{*}=-\Phi_{x}(x^{*}_{T}),\qquad t\in[0,T].\end{array}\right.$ (2.10)
###### Proof.
Set
$h(t,x,u,v,y,z)=\langle y,u\rangle+\langle z,v\rangle-f(t,x,u,v).$ (2.11)
Under our assumptions, for any optimal triple
$(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot))$ of the optimal control problem
(2.6)-(2.7), we have the following extended stochastic Hamiltonian system
through the well-known stochastic maximum principle (SMP in short) (e.g.
Theorem 4.1 in [12]),
$\begin{cases}\mathrm{d}x_{t}^{*}=u_{t}^{*}\mathrm{d}t+v_{t}^{*}\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}^{*}=h_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})\mathrm{d}t-z_{t}^{*}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}^{*}=a,\qquad y_{T}^{*}=-\Phi_{x}(x^{*}_{T}),\qquad
t\in[0,T],\end{cases}$ (2.12)
and
$h(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=\max_{u\in\mathbb{R}^{n},\\\
v\in\mathbb{R}^{n\times d}}h(t,x^{*}_{t},u,v,y^{*}_{t},z^{*}_{t}),\ a.e.\
t\in[0,T].$ (2.13)
The solution of the extended stochastic Hamiltonian system (2.12)-(2.13) is a
5-tuple $(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot),y^{*}(\cdot),z^{*}(\cdot))$.
By Theorem 12.2 in [32], we have the inverse Legendre-Fenchel transform of
(2.5):
$H(t,x,y,z)=\max_{u,v}h(t,x,u,v,y,z).$ (2.14)
Because $h(t,x,u,v,y,z)$ is strictly concave in $u,v$, it follows from the
implicit function theorem that the maximum point $(u^{*}_{t},v^{*}_{t})$ of
(2.13) is uniquely determined by $(x^{*}_{t},y^{*}_{t},z^{*}_{t})$:
$\displaystyle u^{*}_{t}$
$\displaystyle=\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),$ (2.15)
$\displaystyle v^{*}_{t}$
$\displaystyle=\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),$
and $\bar{u},\bar{v}$ are differentiable functions. By (2.14),
$\displaystyle H(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})$
$\displaystyle=h(t,x^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),y^{*}_{t},z^{*}_{t}),$
(2.16)
which leads to
$\displaystyle
f(t,x^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}))$
(2.17) $\displaystyle=$ $\displaystyle\langle
y^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})\rangle+\langle
z^{*}_{t},\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})\rangle-H(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}).$
Thus, the derivatives of $f(t,x,u,v)$ with respect to $u,v$ are
$\displaystyle
f_{u}(t,x^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}))=y^{*}_{t},$
(2.18) $\displaystyle
f_{v}(t,x^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}))=z^{*}_{t}.$
It can be verified that
$\displaystyle H_{x}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=$ $\displaystyle-
f_{x}(t,x^{*}_{t},\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}))$
(2.19) $\displaystyle=$ $\displaystyle-f_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t})$
$\displaystyle=$ $\displaystyle
h_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t}),$
which implies that $(y^{*}(\cdot),z^{*}(\cdot))$ solves the BSDE (2.10).
Taking the conditional expectation in the backward equation of (2.10), we
obtain (2.9).
Similarly, we have
$\displaystyle
H_{y}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=\bar{u}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=u^{*}_{t},$
(2.20) $\displaystyle
H_{z}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=\bar{v}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=v^{*}_{t}.$
Thus, the extended stochastic Hamiltonian system (2.12) is just the
Hamiltonian system (2.3) and $(x^{*}(\cdot),y^{*}(\cdot),z^{*}(\cdot))$ solves
(2.3). Because $H$ satisfies the monotonicity condition in Assumption 1, the
uniqueness of the solution is proved. ∎
The following proposition can help us to construct our algorithms in the next
section.
###### Proposition 3.
Under the same assumptions as in Theorem 2, we have
$\displaystyle J(u^{*}(\cdot),v^{*}(\cdot))$
$\displaystyle=\mathbb{E}\left[\int_{0}^{T}f(t,x^{*}_{t},u^{*}_{t},v^{*}_{t})\mathrm{d}t+\Phi(x_{T}^{*})\right]$
(2.21)
$\displaystyle=\mathbb{E}\left[\int_{0}^{T}F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})dt+\Phi(x_{T}^{*})\right],$
and $(y^{*}(\cdot),z^{*}(\cdot))$ of (2.3) can also be obtained by solving the
following BSDE:
$\left\\{\begin{array}[]{l}-\mathrm{d}y_{t}^{*}=-F_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})\mathrm{d}t-z_{t}^{*}\mathrm{d}B_{t},\vspace{1ex}\\\
y_{T}^{*}=-\Phi_{x}(x^{*}_{T}),\qquad t\in[0,T],\end{array}\right.$ (2.22)
where
$F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=\max_{y,z}F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y,z).$
(2.23)
Then $y^{*}(\cdot)$ can be expressed as
$\displaystyle y_{t}^{*}$
$\displaystyle=\mathbb{E}\left[\int_{t}^{T}-F_{x}(s,x_{s}^{*},u^{*}_{s},v^{*}_{s},y^{*}_{s},z^{*}_{s})\mathrm{d}s-\Phi_{x}(x^{*}_{T})\Big{|}\mathcal{F}_{t}\right].$
(2.24)
###### Proof.
By the definition of $H$ and the SMP,
$\displaystyle H(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})$
$\displaystyle=\max_{u,v}h(t,x^{*}_{t},u,v,y^{*}_{t},z^{*}_{t}),$ (2.25)
$\displaystyle=\langle y^{*}_{t},u^{*}_{t}\rangle+\langle
z^{*}_{t},v^{*}_{t}\rangle-f(t,x^{*}_{t},u^{*}_{t},v^{*}_{t}).$
Then, we have
$f(t,x^{*}_{t},u^{*}_{t},v^{*}_{t})=\langle y^{*}_{t},u^{*}_{t}\rangle+\langle
z^{*}_{t},v^{*}_{t}\rangle-H(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}).$ (2.26)
The definition of $f(t,x,u,v)$ shows that
$\displaystyle f(t,x^{*}_{t},u^{*}_{t},v^{*}_{t})$
$\displaystyle=\max_{y,z}F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y,z),$ (2.27)
$\displaystyle=\langle\bar{y},u^{*}_{t}\rangle+\langle\bar{z},v^{*}_{t}\rangle-H(t,x^{*}_{t},\bar{y},\bar{z}),$
where $(\bar{y},\bar{z})$ is the maximum point.
Notice that $F(t,x,u,v,y,z)$ is strictly concave with respect to $y,z$. If
$(y^{*}_{t},z^{*}_{t})\neq(\bar{y},\bar{z})$, then
$\displaystyle F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},\bar{y},\bar{z})$
$\displaystyle>F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})$ (2.28)
$\displaystyle=\langle y^{*}_{t},u^{*}_{t}\rangle+\langle
z^{*}_{t},v^{*}_{t}\rangle-H(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})$
$\displaystyle=f(t,x^{*}_{t},u^{*}_{t},v^{*}_{t}),$
which contradicts the formula (2.27). So we have
$(y^{*}_{t},z^{*}_{t})=(\bar{y},\bar{z})$ and (2.21) holds.
Because of the strict concavity of $F(t,x,u,v,y,z)$ with respect to $y,z$, we
have
$\displaystyle F_{y}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})$
$\displaystyle=u^{*}_{t}-H_{y}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=0$ (2.29)
$\displaystyle F_{z}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})$
$\displaystyle=v^{*}_{t}-H_{z}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t})=0.$
Similar to the proof of Theorem 2, according to the implicit function
existence theorem, we can easily check that
$F_{x}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=-H_{x}(t,x^{*}_{t},y^{*}_{t},z^{*}_{t}),$
(2.30)
holds, and then (2.22) holds. Taking the conditional expectation on (2.22), we
have (2.24). ∎
In Theorem 2 and Proposition 3, we choose the stochastic optimal control
problem (2.6)-(2.7), in which the drift term $b$ and diffusion term $\sigma$ of
the state equation are simply $u$ and $v$. In fact, to simplify the linear terms
of $y$ and $z$ in the Hamiltonian $H$, we can also choose other forms of $b$
and $\sigma$, such as $\alpha_{1}(x)+\alpha_{2}(x)u$ and
$\beta_{1}(x)+\beta_{2}(x)v$, which are linear with respect to $u$ and $v$. In
these cases, the transformations (2.5) and (2.14) still hold. We show an example
of this form in subsection 4.2 with the corresponding numerical results.
Besides, we can still solve the Hamiltonian system (2.3) even if the
coefficients $H_{x},H_{y},H_{z}$ do not satisfy the monotonicity conditions
(Assumption 1 (ii)) required in Theorem 2 and Proposition 3. For example, the
articles [8, 9] studied the solvability of FBSDEs under relatively loose
conditions. In this situation, as long as the optimal controls reach the
optimal values, the Hamiltonian system (2.3) can be solved; however, the
solution is not necessarily unique.
## 3 Numerical method for solving Hamiltonian systems
In Section 2, we presented the idea of the stochastic optimal control method to
solve the Hamiltonian system (2.3). According to Theorem 2, we only need to
find the optimal control triple $(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot))$ of
the stochastic control problem (2.6)-(2.7); the solution
$(y^{*}(\cdot),z^{*}(\cdot))$ can then be obtained by taking the conditional
expectation on the backward SDE of (2.10). Therefore, an effective approximation
method should be used to obtain the optimal triple of (2.6)-(2.7), especially
in high-dimensional cases.
Deep neural networks are usually used to approximate functions defined on
finite-dimensional spaces, and the approximation relies on the composition of
layers of simple functions. On the basis of the universal approximation
theorem [33, 34], neural networks have been shown to be an effective tool and
have achieved great success in many practical applications. In this paper,
inspired by [35], we simulate the stochastic optimal control problem
(2.6)-(2.7) in a direct way with deep neural networks and develop two different
numerical algorithms suitable for different cases.
Let $\pi$ be a partition
$0=t_{0}<t_{1}<t_{2}<\cdots<t_{N-1}<t_{N}=T$ of the time interval $[0,T]$.
Define $\Delta t_{i}=t_{i+1}-t_{i}$ and $\Delta B_{t_{i}}=B_{t_{i+1}}-B_{t_{i}}$,
so that $\Delta B_{t_{i}}\sim\mathcal{N}(0,\Delta t_{i}I_{d})$, for
$i=0,1,2,\cdots,N-1$. We also denote
$\delta=\sup\limits_{0\leq i\leq N-1}\Delta t_{i},$
which is assumed to be small enough.
which is small enough. Then the Euler-Maruyama scheme of the state equation
(2.6) can be written as
$\left\\{\begin{array}[]{l}x_{t_{i+1}}^{\pi}=x_{t_{i}}^{\pi}+u_{t_{i}}^{\pi}\Delta
t_{i}+v_{t_{i}}^{\pi}\Delta B_{t_{i}},\vspace{1ex}\\\
x_{0}=a,\end{array}\right.$ (3.1)
and the corresponding cost functional is given as
$J(u^{\pi}(\cdot),v^{\pi}(\cdot))=\dfrac{1}{M}\sum_{m=1}^{M}\Big{[}\sum_{i=0}^{N-1}f(t_{i},x_{t_{i}}^{\pi,m},u_{t_{i}}^{\pi,m},v_{t_{i}}^{\pi,m})\Delta
t_{i}+\Phi(x_{t_{N}}^{\pi,m})\Big{]},$ (3.2)
where $M$ represents the number of Monte Carlo samples.
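As a minimal illustration of (3.1)-(3.2), the following NumPy sketch (not the
authors' implementation) simulates $M$ trajectories of the discretized state
equation and returns the Monte Carlo estimate of the cost; the callables
`u_fn`, `v_fn`, `f_fn` and `Phi_fn` are hypothetical placeholders for the
control policies, the running cost $f$ and the terminal cost $\Phi$.

```python
import numpy as np

def simulate_cost(u_fn, v_fn, f_fn, Phi_fn, a, T, N, M, n, d, rng):
    """Euler-Maruyama rollout of (3.1) and Monte Carlo cost (3.2).

    A sketch under simplifying assumptions: a uniform time grid, u_fn/v_fn
    returning arrays of shape (M, n) and (M, n, d), and f_fn/Phi_fn returning
    per-sample costs of shape (M,).
    """
    dt = T / N
    x = np.tile(a, (M, 1))                   # x_0 = a for every Monte Carlo sample
    cost = np.zeros(M)
    for i in range(N):
        t = i * dt
        u = u_fn(t, x)                       # control u_{t_i}, shape (M, n)
        v = v_fn(t, x)                       # control v_{t_i}, shape (M, n, d)
        dB = rng.normal(0.0, np.sqrt(dt), size=(M, d))      # Brownian increments
        cost += f_fn(t, x, u, v) * dt        # accumulate the running cost
        x = x + u * dt + np.einsum("mnd,md->mn", v, dB)     # Euler-Maruyama step
    cost += Phi_fn(x)                        # terminal cost Phi(x_T)
    return cost.mean()                       # Monte Carlo estimate of J
```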
We introduce a feedforward neural network
$\varphi^{\theta}:[0,T]\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ of the
form
$\varphi^{\theta}=\mathcal{A}_{\ell}\circ\sigma_{\ell-1}\circ\mathcal{A}_{\ell-1}\circ\cdots\circ\sigma_{1}\circ\mathcal{A}_{1},$
(3.3)
where
* •
$\ell$ is a positive integer specifying the depth of the neural network,
* •
$\mathcal{A}_{1},\cdots,\mathcal{A}_{\ell}$ are functions of the form
$\displaystyle\mathcal{A}_{1}$
$\displaystyle=w_{1}x+b_{1}\in\mathbb{R}^{d_{1}},$
$\displaystyle\mathcal{A}_{i}$
$\displaystyle=w_{i}\mathcal{A}_{i-1}+b_{i}\in\mathbb{R}^{d_{i}},\qquad\text{for
}2\leq i\leq\ell,$
the weight matrices $w_{i}$ and bias vectors $b_{i}$ are trainable parameters,
$\theta=(w_{i},b_{i})_{1\leq i\leq\ell}$ denotes the whole set of trainable
parameters, and $d_{i}$ is the number of neurons in layer $i$,
* •
$\sigma_{\ell-1},\cdots,\sigma_{1}$ are the nonlinear activation functions,
such as the sigmoid, the rectified linear unit (ReLU), the exponential linear
unit (ELU), etc.
We approximate the controls $u,v$ with two different neural networks, which
can be represented with (3.3) and denoted as $\varphi^{\theta^{u}}_{u}$ and
$\varphi^{\theta^{v}}_{v}$, respectively:
$\begin{cases}u=\varphi^{\theta^{u}}_{u}(t,x)=\varphi_{u}(t,x;\theta^{u})=\mathcal{A}_{\ell_{u}}^{u}\circ\sigma_{\ell_{u}-1}^{u}\circ\mathcal{A}_{\ell_{u}-1}^{u}\circ\cdots\circ\sigma_{1}^{u}\circ\mathcal{A}_{1}^{u}(t,x)\vspace{1ex}\\\
v=\varphi^{\theta^{v}}_{v}(t,x)=\varphi_{v}(t,x;\theta^{v})=\mathcal{A}_{\ell_{v}}^{v}\circ\sigma_{\ell_{v}-1}^{v}\circ\mathcal{A}_{\ell_{v}-1}^{v}\circ\cdots\circ\sigma_{1}^{v}\circ\mathcal{A}_{1}^{v}(t,x).\end{cases}$
(3.4)
The two neural networks have the same input dimension but different output
dimensions. In this paper, we use common parameters of the neural networks
for all the time points, i.e., a single network is developed for simulating
each of the controls, and the time point $t$ is regarded as an input of the
neural network.
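A possible TensorFlow realization of the networks (3.3)-(3.4) is sketched
below; it follows the layer sizes described in subsection 3.1 (three hidden
layers of width $n+10$ with ELU activation) but is only an illustrative
construction, not the authors' code.

```python
import tensorflow as tf

def make_policy_network(n, out_dim, depth=3):
    """Feedforward network of the form (3.3): `depth` hidden layers of width
    n + 10 with ELU activation and a linear (affine) output layer. The input
    is the concatenated pair (t, x) of dimension n + 1."""
    net = tf.keras.Sequential()
    for _ in range(depth):
        net.add(tf.keras.layers.Dense(n + 10, activation="elu"))
    net.add(tf.keras.layers.Dense(out_dim))
    return net

# One network per control, shared over all time points; t enters as an input.
# phi_u = make_policy_network(n, n)        # outputs u in R^n
# phi_v = make_policy_network(n, n * d)    # outputs v, reshaped to R^{n x d}
```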
### 3.1 Case 1: the function $f(t,x,u,v)$ has an explicit form
When the function $f(t,x,u,v)$ defined in (2.5) has an explicit form, the
discrete cost functional (3.2) can be evaluated directly as
$J(u^{\pi}(\cdot),v^{\pi}(\cdot))=\dfrac{1}{M}\sum_{m=1}^{M}\Big{[}\sum_{i=0}^{N-1}f(t_{i},x_{t_{i}}^{\pi,m},u_{t_{i}}^{\pi,m},v_{t_{i}}^{\pi,m})\Delta
t_{i}+\Phi(x_{t_{N}}^{\pi,m})\Big{]},$ (3.5)
which is also the loss function we need to minimize in the whole neural
network, and $u_{t_{i}}^{\pi},v_{t_{i}}^{\pi}$ are the outputs of the two
neural networks at time $t_{i}$. Both neural networks approximating
$u_{t_{i}}^{\pi},v_{t_{i}}^{\pi}$ contain one $(n+1)$-dim input layer and three
$(n+10)$-dim hidden layers. The network for $u_{t_{i}}^{\pi}$ has an $n$-dim
output layer and that for $v_{t_{i}}^{\pi}$ has an $(n\times d)$-dim output layer.
In order to simplify the representation, here we use $\theta$ to represent the
training parameters $(\theta^{u},\theta^{v})$ for both of the neural networks.
To minimize the loss function (3.5) and learn the optimal parameters, some
basic optimization algorithms, such as stochastic gradient descent (SGD),
AdaGrad, RMSProp, and Adam which are already implemented in TensorFlow can be
used. In this paper, the Adam method [36] is adopted as the optimizer.
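For concreteness, the optimizer setup could look as follows in TensorFlow; the
piecewise constant decay of the learning rate from $3\times 10^{-3}$ to
$1\times 10^{-3}$ is the one used in Section 4, while the decay boundary (here
5000 steps) is an illustrative choice not specified in the text.

```python
import tensorflow as tf

# Adam with a piecewise constant learning rate (a sketch; the boundary step
# is assumed, not taken from the paper).
lr_schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[5000], values=[3e-3, 1e-3])
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
```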
Once we obtain the approximations of the optimal controls $u^{*}$ and $v^{*}$,
we can calculate the numerical solution $y^{*}_{t}$ by taking the conditional
expectation on the Backward SDE of (2.10), which can be approximated with
Monte Carlo simulation:
$y_{t_{i}}^{\pi}=\dfrac{1}{M}\sum\limits_{m=1}^{M}\Big{[}\sum\limits_{j=i}^{N-1}-f_{x}(t_{j},x_{t_{j}}^{\pi,m},u_{t_{j}}^{\pi,m},v_{t_{j}}^{\pi,m})\Delta
t_{j}-\Phi_{x}(x_{t_{N}}^{\pi,m})\Big{]}.$ (3.6)
We show the whole network architecture in Figure 1, where $h^{u}$ and $h^{v}$
represent the hidden layers of the neural networks
$\varphi^{\theta^{u}}_{u}$ and $\varphi^{\theta^{v}}_{v}$, respectively. For
each of the neural networks, common parameters are used for all the time
points, and the time $t$ is taken as one of the inputs of the neural network.
Figure 1: The whole network architecture for case 1: $f(t,x,u,v)$ has an
explicit representation. The boxes in purple, green and orange represent
respectively the input layer, the hidden layers and the output layers of the
neural networks $\varphi^{\theta^{u}}_{u}$ and $\varphi^{\theta^{v}}_{v}$. The
data flow of the neural networks is represented with black arrows.
The pseudo-code is shown in Algorithm 1.
Algorithm 1 Numerical algorithm for solving the Hamiltonian system of case 1
1:The Brownian motion $\Delta B_{t_{i}}$, initial state $a$, and time $t_{i}$;
2:The output controls $u_{t_{i}}^{\pi}$, $v_{t_{i}}^{\pi}$ and
$y_{t_{i}}^{\pi}$.
3:for $l=0$ to $maxstep$ do
4: $x_{0}^{l,\pi,m}=a$, $loss=0$;
5: for $i=0$ to $N-1$ do
6: $u^{l,\pi,m}_{t_{i}}=\varphi_{u}(t_{i},x^{l,\pi,m}_{t_{i}};\theta^{l});$
7: $v^{l,\pi,m}_{t_{i}}=\varphi_{v}(t_{i},x^{l,\pi,m}_{t_{i}};\theta^{l});$
8: $x^{l,\pi,m}_{t_{i+1}}=x^{l,\pi,m}_{t_{i}}+u^{l,\pi,m}_{t_{i}}\Delta
t_{i}+v^{l,\pi,m}_{t_{i}}\Delta B_{t_{i}};$
9:
$loss=loss+f(t_{i},x_{t_{i}}^{l,\pi,m},u_{t_{i}}^{l,\pi,m},v_{t_{i}}^{l,\pi,m})\Delta
t_{i};$
10: end for
11:
$loss=\dfrac{1}{M}\sum\limits_{m=1}^{M}\left[loss+\Phi(x_{t_{N}}^{l,\pi,m})\right];$
12: $\theta^{l+1}=Adam(\theta^{l},\nabla loss);$
13:end for
14:$y_{t_{i}}^{\pi}=\dfrac{1}{M}\sum\limits_{m=1}^{M}\Big{[}\sum\limits_{j=i}^{N-1}-f_{x}(t_{j},x_{t_{j}}^{l,\pi,m},u_{t_{j}}^{l,\pi,m},v_{t_{j}}^{l,\pi,m})\Delta
t_{j}-\Phi_{x}(x_{t_{N}}^{l,\pi,m})\Big{]}.$
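The core of Algorithm 1 can be sketched in a few lines of TensorFlow. The
function below performs one iteration of the outer loop: it rolls the state
forward with the current networks, accumulates the loss (3.5), and applies one
Adam update. It is a simplified sketch, assuming `phi_u`, `phi_v` are the
networks of (3.4), `optimizer` is an Adam optimizer as above, and `f_fn`,
`Phi_fn` are hypothetical callables returning per-sample costs of shape $(M,)$.

```python
import tensorflow as tf

def train_step_case1(phi_u, phi_v, f_fn, Phi_fn, optimizer, a, T, N, M, n, d):
    """One iteration of Algorithm 1 (sketch): forward simulation + loss (3.5)
    + one Adam step. `a` is the initial state, e.g. a float32 tensor of shape (n,)."""
    dt = T / N
    with tf.GradientTape() as tape:
        x = tf.tile(tf.reshape(a, (1, n)), (M, 1))        # x_0 = a for all samples
        loss = tf.zeros((M,))
        for i in range(N):
            tx = tf.concat([tf.fill((M, 1), i * dt), x], axis=1)   # input (t_i, x)
            u = phi_u(tx)                                  # (M, n)
            v = tf.reshape(phi_v(tx), (M, n, d))           # (M, n, d)
            dB = tf.random.normal((M, d), stddev=dt ** 0.5)
            loss = loss + f_fn(i * dt, x, u, v) * dt       # accumulate running cost
            x = x + u * dt + tf.einsum("mnd,md->mn", v, dB)   # Euler-Maruyama step
        loss = tf.reduce_mean(loss + Phi_fn(x))            # add Phi(x_T), average over m
    variables = phi_u.trainable_variables + phi_v.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

After training, $y_{t_{i}}^{\pi}$ is estimated by the Monte Carlo average (3.6)
along trajectories simulated with the learned controls.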
### 3.2 Case 2: the function $f(t,x,u,v)$ does not have an explicit form
For the cases where the function $f$ does not have an explicit form, we can
still solve the Hamiltonian system (2.3) by constructing a different neural
network architecture. For any given optimal triple
$(x^{*}(\cdot),u^{*}(\cdot),v^{*}(\cdot))$ of the optimal control problem
(2.7), we assume that the solution $(y^{*}(\cdot),z^{*}(\cdot))$ of (2.22)
satisfy
$y^{*}_{t}=Y(t,x^{*}_{t}),\qquad z^{*}_{t}=Z(t,x^{*}_{t}),\qquad\forall
t\in[0,T],$ (3.7)
for some functions $Y$ and $Z$. Then according to Proposition 3, we have
$J(u^{*}(\cdot),v^{*}(\cdot))=\mathbb{E}\displaystyle\left[\int_{0}^{T}F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})dt+\Phi(x_{T}^{*})\right],$
(3.8)
and
$\mathrm{d}x_{t}^{*}=u^{*}_{t}\mathrm{d}t+v_{t}^{*}\mathrm{d}B_{t},\quad
x_{0}^{*}=a,$ (3.9)
where $y^{*}_{t},z^{*}_{t}$ satisfy
$F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=\max\limits_{y,z}F(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y,z),$
(3.10)
and $F$ is defined by (2.4). Because of the strict concavity and
differentiability of $F$ with respect to $y,z$, the constraint
condition (3.10) can be rewritten as
$\begin{cases}F_{y}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=0,\vspace{1ex}\\\
F_{z}(t,x^{*}_{t},u^{*}_{t},v^{*}_{t},y^{*}_{t},z^{*}_{t})=0.\end{cases}$
(3.11)
In this way, the Hamiltonian system (2.3) can be solved by solving the
stochastic optimal control
$J(u(\cdot),v(\cdot))=\mathbb{E}\displaystyle\left[\int_{0}^{T}F(t,x_{t},u_{t},v_{t},y_{t},z_{t})\mathrm{d}t+\Phi(x_{T})\right],$
(3.12)
with the state constraint
$\begin{cases}\mathrm{d}x_{t}=u_{t}\mathrm{d}t+v_{t}\mathrm{d}B_{t},\quad
x_{0}=a,\vspace{1ex}\\\
F_{y}(t,x_{t},u_{t},v_{t},y_{t},z_{t})=0,\vspace{1ex}\\\
F_{z}(t,x_{t},u_{t},v_{t},y_{t},z_{t})=0.\end{cases}$ (3.13)
Now we focus on solving (3.12)-(3.13) with a new neural network architecture.
Firstly, the Euler-Maruyama scheme (3.1) is used to obtain the discrete form
of the optimal control problem. In addition to the neural networks for
simulating the controls $u$ and $v$, we need to construct two more neural
networks for simulating the functions $Y$ and $Z$,
$\begin{cases}y=\varphi^{\theta^{y}}_{y}(t,x)=\varphi_{y}(t,x;\theta^{y})=\mathcal{A}_{\ell_{y}}^{y}\circ\sigma_{\ell_{y}-1}^{y}\circ\mathcal{A}_{\ell_{y}-1}^{y}\circ\cdots\circ\sigma_{1}^{y}\circ\mathcal{A}_{1}^{y}(t,x)\vspace{1ex}\\\
z=\varphi^{\theta^{z}}_{z}(t,x)=\varphi_{z}(t,x;\theta^{z})=\mathcal{A}_{\ell_{z}}^{z}\circ\sigma_{\ell_{z}-1}^{z}\circ\mathcal{A}_{\ell_{z}-1}^{z}\circ\cdots\circ\sigma_{1}^{z}\circ\mathcal{A}_{1}^{z}(t,x).\end{cases}$
(3.14)
We also use common parameters for all the time points for each of the four
neural networks, and the inputs of each neural network are $(t,x)$. All four
neural networks contain one $(n+1)$-dim input layer and three
$(n+10)$-dim hidden layers. The dimensions of the output layers differ: those
for $y$ and $u$ are $n$-dim, while those for $z$ and $v$ are $(n\times d)$-dim.
We still adopt Adam as the optimizer.
We denote $\theta=(\theta_{uv},\theta_{yz})$ as all the parameters of the
neural networks, where $\theta_{uv}$ are the parameters of the neural networks
$\varphi_{u}$ and $\varphi_{v}$ (for simulating $u$ and $v$), and
$\theta_{yz}$ are the parameters of the neural networks $\varphi_{y}$ and
$\varphi_{z}$ (for simulating $y$ and $z$).
Then the cost functional of the control problem is approximated by
$J(u^{\pi}(\cdot),v^{\pi}(\cdot))=\dfrac{1}{M}\sum_{m=1}^{M}\Big{[}\sum_{i=0}^{N-1}F(t_{i},x_{t_{i}}^{\pi,m},u_{t_{i}}^{\pi,m},v_{t_{i}}^{\pi,m},y_{t_{i}}^{\pi,m},z_{t_{i}}^{\pi,m})\Delta
t_{i}+\Phi(x_{t_{N}}^{\pi,m})\Big{]},$ (3.15)
which is the first loss function we need to minimize, and
$u_{t_{i}}^{\pi},v_{t_{i}}^{\pi},y_{t_{i}}^{\pi},z_{t_{i}}^{\pi}$ are the
outputs of the whole neural networks at time $t_{i}$. In addition, in order to
guarantee that the conditions (3.11) hold, we introduce another cost
functional
$\displaystyle J(y^{\pi}(\cdot),z^{\pi}(\cdot))$
$\displaystyle:=\dfrac{1}{M}\sum_{m=1}^{M}\sum_{i=0}^{N-1}\Big{[}|F_{y}(t_{i},x_{t_{i}}^{\pi,m},u_{t_{i}}^{\pi,m},v_{t_{i}}^{\pi,m},y_{t_{i}}^{\pi,m},z_{t_{i}}^{\pi,m})|^{2}$
(3.16)
$\displaystyle\qquad\qquad+|F_{z}(t_{i},x_{t_{i}}^{\pi,m},u_{t_{i}}^{\pi,m},v_{t_{i}}^{\pi,m},y_{t_{i}}^{\pi,m},z_{t_{i}}^{\pi,m})|^{2}\Big{]},$
which is the second loss function we need to minimize in the neural networks.
The update of the neural network parameters is carried out as follows. Suppose
that we have finished the update at iteration step $l$ and obtained the
parameters $\theta^{l}=(\theta_{uv}^{l},\theta_{yz}^{l})$. We first calculate
the values
$(x^{\pi}_{t_{i}},u^{\pi}_{t_{i}},v^{\pi}_{t_{i}},y^{\pi}_{t_{i}},z^{\pi}_{t_{i}})$
with the parameters $\theta^{l}$. Then the parameters $\theta_{uv}^{l}$ are
updated to $\theta_{uv}^{l+1}$ by one step of Adam optimization with the
first loss function (3.15). Next, the parameters
$(\theta_{uv}^{l+1},\theta_{yz}^{l})$ are used to calculate the values
$(x^{\pi}_{t_{i}},u^{\pi}_{t_{i}},v^{\pi}_{t_{i}},y^{\pi}_{t_{i}},z^{\pi}_{t_{i}})$
in (3.16). Then the parameters $\theta_{yz}^{l}$ are updated with the second
loss function (3.16) by Adam optimization. In each iteration step, the update
of the parameters $\theta_{yz}^{l}$ can be performed multiple times, say
$\kappa$ times, to ensure that the loss function (3.16) is small enough. After
these $\kappa$ updates, the parameters of the neural networks are denoted as
$\theta_{yz}^{l+1}$. Finally, the solution $y$ is obtained by taking the
conditional expectation of the backward SDE of (2.22), which can be calculated
with Monte Carlo simulation:
$y_{t_{i}}^{\pi}=\dfrac{1}{M}\sum\limits_{m=1}^{M}\Big{[}\sum\limits_{j=i}^{N-1}-F_{x}(t_{j},x_{t_{j}}^{\pi,m},u_{t_{j}}^{\pi,m},v_{t_{j}}^{\pi,m},y_{t_{j}}^{\pi,m},z_{t_{j}}^{\pi,m})\Delta
t_{j}-\Phi_{x}(x_{t_{N}}^{\pi,m})\Big{]}.$ (3.17)
The pseudo-code is given in Algorithm 2.
In fact, we can also deal with the maximum condition (3.10) directly. In this
situation, we need to maximize, in addition to (3.15), the second objective
functional
$J(y(\cdot),z(\cdot))=\mathbb{E}\left[\int_{0}^{T}F(t,x_{t},u_{t},v_{t},y_{t},z_{t})\mathrm{d}t\right],$
(3.18)
and a similar scheme can be given. The advantage of using the conditions
(3.11) instead of (3.18) is that we can determine the influence of the
constraint conditions by the value of the cost functional (3.16), as the
optimal value of (3.16) should be 0.
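The alternating update described above can be sketched as follows; this is an
illustrative TensorFlow implementation, not the authors' code. It assumes four
networks `phi_u`, `phi_v`, `phi_y`, `phi_z` as in (3.4) and (3.14), two Adam
optimizers `opt_uv` and `opt_yz`, and hypothetical callables `F_fn`, `Fy_fn`,
`Fz_fn`, `Phi_fn` for $F$, its partial derivatives $F_{y}$, $F_{z}$ (of shapes
$(M,n)$ and $(M,n,d)$, respectively) and $\Phi$.

```python
import tensorflow as tf

def rollout(phi_u, phi_v, phi_y, phi_z, a, T, N, M, n, d):
    """Forward simulation of (3.1) with the current networks; returns the
    tuples (t_i, x, u, v, y, z) along the path and the terminal state."""
    dt = T / N
    x = tf.tile(tf.reshape(a, (1, n)), (M, 1))
    path = []
    for i in range(N):
        tx = tf.concat([tf.fill((M, 1), i * dt), x], axis=1)
        u, y = phi_u(tx), phi_y(tx)
        v = tf.reshape(phi_v(tx), (M, n, d))
        z = tf.reshape(phi_z(tx), (M, n, d))
        path.append((i * dt, x, u, v, y, z))
        dB = tf.random.normal((M, d), stddev=dt ** 0.5)
        x = x + u * dt + tf.einsum("mnd,md->mn", v, dB)
    return path, x

def train_step_case2(phi_u, phi_v, phi_y, phi_z, F_fn, Fy_fn, Fz_fn, Phi_fn,
                     opt_uv, opt_yz, a, T, N, M, n, d, kappa):
    """One outer iteration of Algorithm 2 (sketch): one Adam step on the cost
    (3.15) for the control networks, then kappa Adam steps on the constraint
    loss (3.16) for the (y, z)-networks."""
    dt = T / N
    with tf.GradientTape() as tape:
        path, xT = rollout(phi_u, phi_v, phi_y, phi_z, a, T, N, M, n, d)
        loss1 = tf.reduce_mean(
            sum(F_fn(t, x, u, v, y, z) * dt for t, x, u, v, y, z in path)
            + Phi_fn(xT))
    vars_uv = phi_u.trainable_variables + phi_v.trainable_variables
    opt_uv.apply_gradients(zip(tape.gradient(loss1, vars_uv), vars_uv))
    for _ in range(kappa):                    # drive the constraint loss (3.16) to zero
        with tf.GradientTape() as tape:
            path, _ = rollout(phi_u, phi_v, phi_y, phi_z, a, T, N, M, n, d)
            loss2 = tf.reduce_mean(
                sum(tf.reduce_sum(Fy_fn(t, x, u, v, y, z) ** 2, axis=-1)
                    + tf.reduce_sum(Fz_fn(t, x, u, v, y, z) ** 2, axis=[-2, -1])
                    for t, x, u, v, y, z in path))
        vars_yz = phi_y.trainable_variables + phi_z.trainable_variables
        opt_yz.apply_gradients(zip(tape.gradient(loss2, vars_yz), vars_yz))
    return loss1, loss2
```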
## 4 Numerical results
In this section, we show the numerical results of our proposed algorithms for
solving the Hamiltonian system. If not specifically mentioned, we use
6-layer fully connected neural networks for the approximation in these
examples, the number of time divisions is set to $N=25$, and we mainly use
a piecewise constant decay learning rate which decreases from $3\times
10^{-3}$ to $1\times 10^{-3}$ as the number of iteration steps increases. We
adopt ELU as the activation function. In order to show the performance of the
proposed algorithms, we compare the results between the two proposed algorithms
and the Deep FBSDE method (briefly denoted as DFBSDE in the figures and tables
of this section), which was developed as Algorithm 1 in our previous work [31].
Different from [31], for a better comparison with the novel proposed algorithms,
here we use the ELU activation function and remove the batch normalization layer
in the Deep FBSDE method. For each algorithm in the examples, we perform ten
independent runs to obtain more reliable results.
Algorithm 2 Numerical algorithm for solving the Hamiltonian system of case 2
1:The Brownian motion $\Delta B_{t_{i}}$, initial state $a$, and time $t_{i}$;
2:The processes
$(x^{l,\pi}_{t_{i}},u^{l,\pi}_{t_{i}},v^{l,\pi}_{t_{i}},y^{l,\pi}_{t_{i}},z^{l,\pi}_{t_{i}})$.
3:for $l=0$ to $maxstep$ do
4: for $k=0$ to $\kappa+1$ do
5: $x_{0}^{l,\pi,m}=a$, $loss_{1}=0$, $loss_{2}=0$;
6: for $i=0$ to $N-1$ do
7:
$u^{l,\pi,m}_{t_{i}}=\varphi_{u}(t_{i},x^{l,\pi,m}_{t_{i}};\theta_{uv}^{l});$
8:
$v^{l,\pi,m}_{t_{i}}=\varphi_{v}(t_{i},x^{l,\pi,m}_{t_{i}};\theta_{uv}^{l});$
9:
$y^{l,\pi,m}_{t_{i}}=\varphi_{y}(t_{i},x^{l,\pi,m}_{t_{i}};\theta_{yz}^{l});$
10:
$z^{l,\pi,m}_{t_{i}}=\varphi_{z}(t_{i},x^{l,\pi,m}_{t_{i}};\theta_{yz}^{l});$
11: $x^{l,\pi,m}_{t_{i+1}}=x^{l,\pi,m}_{t_{i}}+u^{l,\pi,m}_{t_{i}}\Delta
t_{i}+v^{l,\pi,m}_{t_{i}}\Delta B_{t_{i}};$
12:
$loss_{1}=loss_{1}+F(t_{i},x_{t_{i}}^{l,\pi,m},u_{t_{i}}^{l,\pi,m},v_{t_{i}}^{l,\pi,m},y_{t_{i}}^{l,\pi,m},z_{t_{i}}^{l,\pi,m})\Delta
t_{i};$
13:
$loss_{2}=loss_{2}+\dfrac{1}{M}\sum\limits_{m=1}^{M}\left[|F_{y}(t_{i},x_{t_{i}}^{l,\pi,m},u_{t_{i}}^{l,\pi,m},v_{t_{i}}^{l,\pi,m},y_{t_{i}}^{l,\pi,m},z_{t_{i}}^{l,\pi,m})|^{2}\right];$
14:
$loss_{2}=loss_{2}+\dfrac{1}{M}\sum\limits_{m=1}^{M}\left[|F_{z}(t_{i},x_{t_{i}}^{l,\pi,m},u_{t_{i}}^{l,\pi,m},v_{t_{i}}^{l,\pi,m},y_{t_{i}}^{l,\pi,m},z_{t_{i}}^{l,\pi,m})|^{2}\right];$
15: end for
16:
$loss_{1}=\dfrac{1}{M}\sum\limits_{m=1}^{M}\left[loss_{1}+\Phi(x_{t_{N}}^{l,\pi,m})\right];$
17: if $k=0$ then
18: $\theta_{uv}^{l}=Adam(\theta_{uv}^{l},\nabla loss_{1});$
19: $\theta^{l}=(\theta_{uv}^{l},\theta_{yz}^{l});$
20: else
21: $\theta_{yz}^{l}=Adam(\theta_{yz}^{l},\nabla loss_{2});$
22: $\theta^{l}=(\theta_{uv}^{l},\theta_{yz}^{l});$
23: end if
24: end for
25: $\theta^{l+1}=(\theta_{uv}^{l},\theta_{yz}^{l});$
26:end for
27:$y_{t_{i}}^{\pi}=\dfrac{1}{M}\sum\limits_{m=1}^{M}\Big{[}\sum\limits_{j=i}^{N-1}-F_{x}(t_{j},x_{t_{j}}^{l,\pi,m},u_{t_{j}}^{l,\pi,m},v_{t_{j}}^{l,\pi,m},y_{t_{j}}^{l,\pi,m},z_{t_{j}}^{l,\pi,m})\Delta
t_{j}-\Phi_{x}(x_{t_{N}}^{l,\pi,m})\Big{]}.$
### 4.1 Example 1: a linear Hamiltonian system
Firstly, we consider the following linear quadratic Hamiltonian
$H(t,x,y,z)=\langle x,y\rangle+\dfrac{1}{4}\langle y,y\rangle+\langle
z,z\rangle,\qquad\Phi(x)=\langle\dfrac{1}{2}Qx,x\rangle,$ (4.1)
where $(x,y,z)\in\mathbb{R}^{n+n+n}$ and $Q$ is a given matrix valued in
$\mathbb{R}^{n\times n}$, and the corresponding Hamiltonian system is given as
$\left\\{\begin{array}[]{l}\mathrm{d}x_{t}=(x_{t}+\dfrac{1}{2}y_{t})\mathrm{d}t+2z_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}=y_{t}\mathrm{d}t-z_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\qquad y_{T}=-\Phi_{x}(x_{T})=-Qx_{T},\end{array}\right.$ (4.2)
which is a linear FBSDE and $B$ is a $1$-dimensional standard Brownian motion.
It can be easily checked that this linear FBSDE has a unique solution [3, 4,
37].
As is known, the linear FBSDE is connected with a Riccati equation. Suppose the
solution of FBSDE (4.2) is of the following form:
$\displaystyle y_{t}=-K_{t}x_{t},\qquad z_{t}=-M_{t}x_{t}.$
Combining it with (4.2), we then obtain the Riccati equation
$\begin{cases}\dot{K}_{t}-\frac{1}{2}K_{t}^{2}+2K_{t}=0,\\\ M_{t}=0,\qquad
K_{T}=Q,\end{cases}$ (4.3)
where $K_{t}$ is a matrix function and $\dot{K}_{t}$ is the derivative of
$K_{t}$ with respect to $t$. The solution of (4.3) can be approximated
with the ODE45 solver in MATLAB (ODE45 in short), which solves deterministic
ordinary differential equations with the fourth-order Runge-Kutta method. In
order to show the performance of our novel proposed algorithms, we take the
numerical solution of the Riccati equation (4.3) with ODE45 as a benchmark.
Now we consider the corresponding optimal control problem,
$\left\\{\begin{array}[]{l}\mathrm{d}x_{t}=u_{t}\mathrm{d}t+v_{t}\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\end{array}\right.$
with cost functional
$J(u(\cdot),v(\cdot))=\mathbb{E}\Big{\\{}\int_{0}^{T}f(t,x_{t},u_{t},v_{t})\mathrm{d}t+\Phi(x_{T})\Big{\\}},$
where
$\displaystyle f(t,x,u,v)$
$\displaystyle=\max_{y,z}F(t,x,u,v,y,z)\vspace{1ex}$
$\displaystyle=|u-x|^{2}+\dfrac{1}{4}|v|^{2},$
and
$F(t,x,u,v,y,z)=\langle y,u\rangle+\langle z,v\rangle-\langle
x,y\rangle-\dfrac{1}{4}\langle y,y\rangle-\langle z,z\rangle.$
In this example, we set $a=(1.0,\cdots,1.0)\in\mathbb{R}^{n},T=0.1$ and give
the form of $Q$ as
$\begin{bmatrix}1&\lambda&\lambda&\cdots&\lambda\\\
\lambda&1&\lambda&\cdots&\lambda\\\ \lambda&\lambda&1&\cdots&\lambda\\\
\vdots&\vdots&\vdots&\ddots&\vdots\\\
\lambda&\lambda&\lambda&\cdots&1\end{bmatrix}$
where $\lambda$ is a given constant between 0 and 1. For example, if
$\lambda=0.0$, then $Q=E_{n}$ is the $n$-th order identity matrix. The numerical
solution of (4.3) with ODE45 gives $K_{0}=1.1573E_{n}$ for $n=100$, and the
value of $y_{0}$ is then obtained as
$y_{0}=-K_{0}x_{0}=-1.1573a,$
which is taken as the benchmark result in this example.
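As a quick sanity check of this benchmark, the scalar Riccati equation for
$\lambda=0.0$ can be integrated backward with any standard ODE solver; the
sketch below uses SciPy's `solve_ivp` (with its default Runge-Kutta scheme) as
a stand-in for MATLAB's ODE45.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar Riccati equation (4.3) for Q = E_n (lambda = 0): dK/dt = K^2/2 - 2K,
# integrated backward from the terminal condition K(0.1) = 1 to t = 0.
sol = solve_ivp(lambda t, K: 0.5 * K**2 - 2.0 * K,
                t_span=(0.1, 0.0), y0=[1.0], rtol=1e-10, atol=1e-12)
K0 = sol.y[0, -1]
print(K0)   # approximately 1.1573, hence y_0 = -K0 * a = -1.1573 a
```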
Even though the function $f$ can be computed explicitly in this example, we
calculate the numerical results with both proposed algorithms, treating $f$ as
if it did not have an explicit form in Algorithm 2. The comparison
results for the approximated solution $y_{0}$ between our proposed stochastic
control methods (Algorithms 1 and 2) and the Deep FBSDE method are shown in
Table 1, where the solution with ODE45 is regarded as the benchmark. Note that
when the initial state $x=a$, the solution $y_{0}$ is a vector whose elements
are all equal; thus we report the value of the first element of $y_{0}$ in
Table 1. Moreover, we show the relative errors between our approximated
solution of $y_{0}$ and that of ODE45, and compute the variances of the
approximated solution $y_{0}$ over ten independent runs. We also vary the
parameter $\lambda$ and study the corresponding approximation results.
Table 1: Results for different terminal matrices $Q$ with $n=100$.
| | Riccati | Deep FBSDE | | | Alg 1 | | | Alg 2 | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | Mean | Rela. Error | Var. | Mean | Rela. Error | Var. | Mean | Rela. Error | Var. |
| $\lambda=0.0$ | -1.1573 | -1.15733 | 2.907e-05 | 6.209e-08 | -1.15751 | 1.797e-04 | 3.141e-07 | -1.15651 | 6.839e-04 | 3.065e-06 |
| $\lambda=0.2$ | -11.8093 | -11.6314 | 1.506e-02 | 2.392e-01 | -11.8113 | 1.733e-04 | 2.441e-03 | -11.8222 | 1.095e-03 | 5.997e-03 |
| $\lambda=0.4$ | -15.2711 | -14.1265 | 7.495e-02 | 9.101e-01 | -15.3120 | 2.681e-03 | 1.739e-03 | -15.2264 | 2.930e-03 | 1.622e-01 |
| $\lambda=0.6$ | -16.9860 | -10.4284 | 3.861e-01 | 2.753e+01 | -17.0461 | 3.859e-03 | 3.563e-03 | -17.0516 | 3.860e-03 | 4.633e-02 |
| $\lambda=0.8$ | -18.0101 | -9.2087 | 4.887e-01 | 2.279e+01 | -17.9844 | 1.426e-03 | 2.528e-02 | -18.1494 | 7.733e-03 | 2.726e-02 |
| $\lambda=1.0$ | -18.6920 | -9.9503 | 4.677e-01 | 2.300e+01 | -18.7162 | 1.297e-03 | 2.000e-02 | -18.8837 | 1.026e-02 | 8.245e-02 |
From Table 1, we can see that the novel proposed algorithms show much more
stable performance than the Deep FBSDE method. For different terminals with
different parameters $\lambda$, the novel proposed algorithms exhibit more
stable relative errors and variances. The Deep FBSDE method performs well when
the terminal matrix is the identity ($\lambda=0.0$), but when we change the
terminal to other forms ($\lambda\not=0.0$), the results of the Deep FBSDE
method diverge. As we know, the learning rate is one of the important factors
affecting the approximation results. When we choose a smaller learning rate,
the Deep FBSDE method can converge, but it needs many more iteration steps than
our proposed algorithms. As an example of the case $\lambda\not=0.0$ with a
smaller learning rate, we show the approximation results in Figure 3 for
$\lambda=0.8$.
In Figure 2, we show the curves and the variances of the approximated results
$y_{0}$ over the iteration steps for $\lambda=0.0$ and $\lambda=0.8$,
respectively; the black lines represent the results with ODE45, which are taken
as the benchmark. The upper two figures in Figure 2 exhibit the results
for $\lambda=0.0$. From the upper left figure, we can see that when the number
of iteration steps is close to 10000, the approximated solutions $y_{0}$ of
Algorithms 1, 2 and the Deep FBSDE method are all very close to the result
with ODE45. Moreover, those of Algorithms 1 and 2 have smaller variation scopes
over ten independent runs and converge within fewer iteration steps than that
of the Deep FBSDE method. The upper right figure shows that when the number of
iteration steps approaches 10000, the variance curves of $y_{0}$ with
Algorithms 1 and 2 are also close to that of the Deep FBSDE method. The lower
two figures in Figure 2 exhibit the results for $\lambda=0.8$. We can see that
when the number of iteration steps approaches 10000, the approximation
results of Algorithms 1 and 2 are close to the benchmark. However, that of the
Deep FBSDE method is far from the benchmark, and its variation scope and
variance increase as the number of iteration steps grows.
Figure 2: Approximation results with a piecewise decay learning rate from
$3\times 10^{-3}$ to $1\times 10^{-3}$ for $\lambda=0.0$ (upper figures)
and $\lambda=0.8$ (lower figures). The left figures show the means and
variation scopes of the approximated solution $y_{0}$ among 10 independent
runs, and the right figures exhibit the variance curves of $y_{0}$ among 10
independent runs. The black lines in the left figures represent the results
with ODE45, which are taken as the benchmarks. We can see that our novel
proposed algorithms (Algorithms 1 and 2) have more stable convergence and
converge within fewer iteration steps.
Figure 3: Approximation results with a constant learning rate of $1\times
10^{-3}$. We can see from the left figure that, compared with the novel
proposed algorithms (Algorithms 1 and 2), the Deep FBSDE method needs more
iteration steps to achieve stable convergence, while the right figure shows
that it has a much smaller variance at the end of the training.
### 4.2 Example 2: a nonlinear Hamiltonian system
Given the Hamiltonian $H$ as
$H(t,x,y,z)=\dfrac{1}{2}\langle y,y\circ\cos^{2}x\rangle+\dfrac{1}{2}\langle
z,z\circ\sin^{2}x\rangle+\langle y,\cos x\rangle+\langle z,\sin
x\rangle-\dfrac{1}{2}\langle x,x\rangle$ (4.4)
and
$\Phi(x)=\dfrac{1}{2}\langle x,x\rangle,$ (4.5)
where $(x,y,z)\in\mathbb{R}^{n+n+n}$. Here $\circ$ represents the Hadamard
product,
$x\circ
y=(x_{1}y_{1},x_{2}y_{2},\cdots,x_{n}y_{n})\in\mathbb{R}^{n},\qquad\forall
x,y\in\mathbb{R}^{n}.$
The corresponding Hamiltonian system is
$\left\\{\begin{array}[]{l}\mathrm{d}x_{t}=\cos x_{t}\circ(y_{t}\circ\cos
x_{t}+1)\mathrm{d}t+\sin x_{t}\circ(z_{t}\circ\sin
x_{t}+1)\circ\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}=\Big{[}-y_{t}\circ\sin x_{t}\circ(y_{t}\circ\cos
x_{t}+1)+z_{t}\circ\cos x_{t}\circ(z_{t}\circ\sin
x_{t}+1)-x_{t}\Big{]}\mathrm{d}t\\\ \qquad\qquad-
z_{t}\circ\mathrm{d}B_{t},\vspace{1ex}\\\ x_{0}=a,\qquad
y_{T}=-\Phi_{x}(x_{T}),\end{array}\right.$ (4.6)
where $B$ is a $n$-dimensional Brownian motion.
Here we introduce a stochastic optimal control problem which is different from
(2.6):
$\left\\{\begin{array}[]{l}\mathrm{d}x_{t}=\cos
x_{t}\circ(u_{t}+1)\mathrm{d}t+\sin
x_{t}\circ(v_{t}+1)\circ\mathrm{d}B_{t}\vspace{1ex}\\\
x_{0}=a,\end{array}\right.$ (4.7)
with the cost functional
$J(u(\cdot),v(\cdot))=\mathbb{E}\left\\{\int_{0}^{T}f(t,x_{t},u_{t},v_{t})dt+\Phi(x_{T})\right\\},$
where $f(t,x,u,v)$ is given as
$\displaystyle f(t,x,u,v)$
$\displaystyle=\max_{y,z}F(t,x,u,v,y,z)\vspace{1ex}$
$\displaystyle=\max_{y,z}\left\\{\langle y,\cos x\circ(u+1)\rangle+\langle
z,\sin x\circ(v+1)\rangle-H(t,x,y,z)\right\\}.$
Then $f(t,x,u,v)$ can be solved as
$f(t,x,u,v)=\dfrac{1}{2}\big{[}\langle x,x\rangle+\langle u,u\rangle+\langle
v,v\rangle\big{]},$
and
$\displaystyle y$ $\displaystyle=u\circ\dfrac{1}{\cos x},$ $\displaystyle z$
$\displaystyle=v\circ\dfrac{1}{\sin x}.$
Define $h(t,x,u,v,y,z)$ as
$\displaystyle h(t,x,u,v,y,z)=\langle y,\cos x\circ(u+1)\rangle+\langle z,\sin
x\circ(v+1)\rangle-f(t,x,u,v),$
then we get the value of $y_{0}$ by
$\displaystyle y_{0}$
$\displaystyle=\mathbb{E}\left\\{\int_{0}^{T}h_{x}(t,x_{t},u_{t},v_{t},y_{t},z_{t})dt-\Phi_{x}(x_{T})\right\\}$
$\displaystyle=\mathbb{E}\left\\{\int_{0}^{T}(-u_{t}\circ\tan
x_{t}\circ(u_{t}+1)+v_{t}\circ\cot
x_{t}\circ(v_{t}+1)-x_{t})\mathrm{d}t-\Phi_{x}(x_{T})\right\\}.$
We set $a=(1.0,\cdots,1.0)\in\mathbb{R}^{n}$ and $T=0.1$. The comparison
results among Algorithm 1, Algorithm 2 and the Deep FBSDE method are shown in
Figure 4. We can see from the left figure that when the number of iteration
steps is 5000, the approximated values of $y_{0}$ with Algorithm 1 and the Deep
FBSDE method are very close, namely $-1.0835$ and $-1.0834$, respectively.
The approximated solution with Algorithm 2 is $-1.1208$, which is slightly
smaller than those of Algorithm 1 and the Deep FBSDE method. Similar to the
first example, the variation scopes of $y_{0}$ for Algorithms 1 and 2 are much
smaller than that of the Deep FBSDE method. Besides, at the end of the
training, the variances of $y_{0}$ for Algorithms 1, 2 and the Deep FBSDE
method are very close.
Figure 4: We can see from the figure that when the number of iteration steps
approaches 5000, the approximated values of $y_{0}$ for all three methods
are close. However, the Deep FBSDE method shows larger variation scopes than
Algorithms 1 and 2 during the training, and the variances of the three
methods are also close at the end of the training.
### 4.3 Example 3: a Hamiltonian system with exponents in the drift term
In the third example, we solve a Hamiltonian system with exponents in the
drift term. Consider the following Hamiltonian
$H(t,x,y,z)=\log\left(\sum_{i=1}^{n}\exp(y_{i})\right)+\dfrac{1}{2}\langle
y,y\rangle+\langle z,z\rangle+\langle z,x\rangle+\dfrac{1}{5}\langle
x,x\rangle,$ (4.8)
where $(x,y,z)\in\mathbb{R}^{n+n+n}$ and $y=(y_{1},\cdots,y_{n})$. The
terminal function is given as $\Phi(x)=\dfrac{1}{2}\langle x,x\rangle$. Then
the Hamiltonian system we need to solve is given as following,
$\left\\{\begin{array}[]{l}\mathrm{d}x_{t}=\left[\exp(y_{t})\left(\sum_{i=1}^{n}\exp(y_{it})\right)^{-1}+y_{t}\right]\mathrm{d}t+(x_{t}+2z_{t})\circ\mathrm{d}B_{t},\vspace{1ex}\\\
-\mathrm{d}y_{t}=(\dfrac{2}{5}x_{t}+z_{t})\mathrm{d}t-z_{t}\circ\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\qquad y_{T}=-\Phi_{x}(x_{T}),\end{array}\right.$ (4.9)
where $\exp(y_{t})=(\exp(y_{1t}),\cdots,\exp(y_{nt}))$ and $B$ is an
$n$-dimensional Brownian motion.
The corresponding stochastic optimal control problem is
$\begin{cases}\mathrm{d}x_{t}=u_{t}\mathrm{d}t+v_{t}\circ\mathrm{d}B_{t},\vspace{1ex}\\\
x_{0}=a,\end{cases}$
with the cost functional
$\displaystyle
J(u(\cdot),v(\cdot))=\mathbb{E}\left\\{\int_{0}^{T}f(t,x_{t},u_{t},v_{t})\mathrm{d}t+\Phi(x_{T})\right\\},$
where
$\displaystyle f(t,x,u,v)$ $\displaystyle=\max_{y,z}F(t,x,u,v,y,z),$ (4.10)
$\displaystyle=\max_{y,z}\left\\{\langle y,u\rangle+\langle
z,v\rangle-H(t,x,y,z)\right\\}.$
Different from the previous two examples, in this example the function $f$
defined in (4.10) does not have an explicit representation. In this situation,
Algorithm 1 is not applicable; thus we mainly compare the
results of Algorithm 2 and the Deep FBSDE method.
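For this example, the quantities entering the loss functions (3.15)-(3.16) of
Algorithm 2 are straightforward to code even though $f$ is not explicit. The
sketch below (an illustration, not the authors' implementation) gives $F$ and
its derivatives $F_{y}=u-\mathrm{softmax}(y)-y$ and $F_{z}=v-2z-x$; since the
noise in (4.9) acts coordinatewise, $z$ and $v$ are kept here as $(M,n)$
tensors rather than the generic $(M,n,d)$ shape used in the sketch of
subsection 3.2.

```python
import tensorflow as tf

def H_fn(x, y, z):
    # Hamiltonian (4.8); all inputs of shape (M, n), output of shape (M,).
    return (tf.reduce_logsumexp(y, axis=-1)
            + 0.5 * tf.reduce_sum(y * y, axis=-1)
            + tf.reduce_sum(z * z, axis=-1)
            + tf.reduce_sum(z * x, axis=-1)
            + 0.2 * tf.reduce_sum(x * x, axis=-1))

def F_fn(t, x, u, v, y, z):
    # F(t,x,u,v,y,z) = <y,u> + <z,v> - H(t,x,y,z), cf. (2.4).
    return (tf.reduce_sum(y * u, axis=-1)
            + tf.reduce_sum(z * v, axis=-1) - H_fn(x, y, z))

def Fy_fn(t, x, u, v, y, z):
    return u - tf.nn.softmax(y, axis=-1) - y      # F_y = u - H_y

def Fz_fn(t, x, u, v, y, z):
    return v - 2.0 * z - x                        # F_z = v - H_z
```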
We set $T=0.2$, $a=(0.5,\cdots,0.5)\in\mathbb{R}^{n}$ and $n=100$ in this
example. The comparison results between Algorithm 2 and the Deep FBSDE method
are shown in Figure 5. We can see that at the end of the training, the
approximated solutions $y_{0}$ of the two methods are close, and both
variances are small enough. Similar to the previous two examples, Algorithm
2 shows a smaller variation scope and converges within fewer iteration steps
than the Deep FBSDE method.
Figure 5: We can see from the above figure that at the end of the training, the
approximated solutions of $y_{0}$ with both Algorithm 2 and the Deep FBSDE
method are very close, and the mean values of $y_{0}$ among 10 independent
runs are $-0.41297$ for Algorithm 2 and $-0.41211$ for the Deep FBSDE method.
The variance of Algorithm 2 is slightly larger than that of the Deep FBSDE
method, but it converges within fewer iteration steps.
In the implementations of Algorithm 2 for all three examples, we minimize
the norms of the derivatives of the function $F$ with respect to $y$ and $z$
according to (3.16), driving them to $0$. As an alternative, we can
also maximize the cost functional defined in (3.18) in the implementations.
## 5 Conclusion
In this paper, different from the usual way of solving FBSDEs, we
propose a novel method which solves the Hamiltonian system from the viewpoint
of stochastic optimal control via deep learning. Two different algorithms
suitable for different cases are developed. The numerical results show that the
novel proposed Algorithms 1 and 2 demonstrate a faster convergence rate and
more stable performance than the Deep FBSDE method; in some cases, Algorithms 1
and 2 also show higher accuracy.
## References
* [1] R. Ortega, A. J. Van Der Schaft, I. Mareels, and B. Maschke, “Putting energy back in control,” IEEE Control Systems Magazine, vol. 21, no. 2, pp. 18–33, 2001.
* [2] R. Ortega, A. van der Schaft, B. Maschke, and G. Escobar, “Interconnection and damping assignment passivity-based control of port-controlled hamiltonian systems,” Automatica, vol. 38, no. 4, pp. 585–596, 2002.
* [3] Y. Hu and S. Peng, “Solution of forward-backward stochastic differential equations,” Probability Theory and Related Fields, vol. 103, no. 2, pp. 273–283, 1995.
* [4] S. Peng and Z. Wu, “Fully coupled forward-backward stochastic differential equations and applications to optimal control,” Siam Journal on Control and Optimization, vol. 37, no. 3, pp. 825–843, 1999.
* [5] S. Peng, “Problem of eigenvalues of stochastic hamiltonian systems with boundary conditions,” Stochastic Processes and Their Applications, vol. 88, no. 2, pp. 259–290, 2000.
* [6] Y. Hu and J. Yong, “Forward–backward stochastic differential equations with nonsmooth coefficients,” Stochastic Processes and their Applications, vol. 87, no. 1, pp. 93–106, 2000.
* [7] J. Ma, P. Protter, and J. Yong, “Solving forward-backward stochastic differential equations explicitly — a four step scheme,” Probability Theory and Related Fields, vol. 98, no. 3, pp. 339–359, 1994.
* [8] J. Ma and J. Yong, “Solvability of forward-backward SDEs and the nodal set of Hamilton-Jacobi-Bellman equations,” Chinese Annals of Mathematics, 1993.
* [9] J. Ma and J. Yong, “Approximate solvability of forward—backward stochastic differential equations,” Applied Mathematics and Optimization, vol. 45, no. 1, pp. 1–22, 2002.
* [10] N. E. Karoui, S. Peng, and M. C. Quenez, “Backward stochastic differential equations in finance,” Mathematical Finance, vol. 7, no. 1, pp. 1–71, 1997.
* [11] E. Pardoux and S. Tang, “Forward-backward stochastic differential equations and quasilinear parabolic PDEs,” Probability Theory and Related Fields, vol. 114, pp. 123–150, may 1999.
* [12] A. Bensoussan, “Lectures on stochastic control,” in Nonlinear filtering and stochastic control, pp. 1–62, Springer, 1982.
* [13] J. Yong and X. Zhou, Stochastic Controls-Hamiltonian System and HJB Equations. Springer, 1999.
* [14] E. Tadmor, “A review of numerical methods for nonlinear partial differential equations,” Bulletin of the American Mathematical Society, vol. 49, no. 4, pp. 507–554, 2012.
* [15] B. Bouchard and N. Touzi, “Discrete-time approximation and monte-carlo simulation of backward stochastic differential equations,” Stochastic Processes & Their Applications, vol. 111, no. 2, pp. 175–206, 2004.
* [16] V. Bally and G. Pages, “A quantization algorithm for solving multidimensional discrete-time optimal stopping problems,” Bernoulli, vol. 9, no. 6, pp. 1003–1049, 2003.
* [17] M. A. Jin, S. Jie, and Y. Zhao, “On numerical approximations of forward-backward stochastic differential equations,” Siam Journal on Numerical Analysis, vol. 46, no. 5, pp. 2636–2661, 2008.
* [18] C. Bender and J. Zhang, “Time discretization and markovian iteration for coupled fbsdes,” Annals of Applied Probability, vol. 18, no. 1, pp. 143–177, 2008.
* [19] F. Yu, W. Zhao, and T. Zhou, “Multistep schemes for forward backward stochastic differential equations with jumps,” Journal of Scientific Computing, vol. 69, no. 2, pp. 1–22, 2016.
* [20] E. Weinan, M. Hutzenthaler, A. Jentzen, and T. Kruse, “On multilevel picard numerical approximations for high-dimensional nonlinear parabolic partial differential equations and high-dimensional nonlinear backward stochastic differential equations,” arXiv: Numerical Analysis, 2017.
* [21] G. N. Milstein and M. V. Tretyakov, “Numerical algorithms for forward-backward stochastic differential equations connected with semilinear parabolic equations,” vol. 28, pp. 561–582, 2004.
* [22] F. Yu, W. Zhao, and Z. Tao, “Efficient spectral sparse grid approximations for solving multi-dimensional forward backward sdes,” Discrete and Continuous Dynamical Systems - Series B, vol. 22, no. 9, 2016.
* [23] T. P. Huijskens, M. Ruijter, and C. W. Oosterlee, “Efficient numerical fourier methods for coupled forward-backward sdes,” Journal of Computational and Applied Mathematics, vol. 296, pp. 593–612, 2016.
* [24] E. Weinan, J. Han, and A. Jentzen, “Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations,” Communications in Mathematics & Statistics, vol. 5, no. 4, pp. 349–380, 2017.
* [25] J. Han, A. Jentzen, and E. Weinan, “Solving high-dimensional partial differential equations using deep learning,” Proceedings of the National Academy of Sciences of the United States of America, vol. 115, no. 34, pp. 8505–8510, 2018.
* [26] C. Huré, H. Pham, and X. Warin, “Deep backward schemes for high-dimensional nonlinear PDEs,” arXiv:1902.01599 [cs, math, stat], June 2020.
* [27] H. Pham, X. Warin, and M. Germain, “Neural networks-based backward scheme for fully nonlinear PDEs,” SN Partial Differential Equations and Applications, vol. 2, p. 16, Feb. 2021.
* [28] M. Raissi, “Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations,” arXiv:1804.07010, 2018.
* [29] H. Wang, H. Chen, A. Sudjianto, R. Liu, and Q. Shen, “Deep learning-based bsde solver for libor market model with application to bermudan swaption pricing and hedging,” 2018.
* [30] J. Han and J. Long, “Convergence of the deep bsde method for coupled fbsdes,” arXiv:1811.01165, 2018.
* [31] S. Ji, S. Peng, Y. Peng, and X. Zhang, “Three algorithms for solving high-dimensional fully coupled fbsdes through deep learning,” IEEE Intelligent Systems, vol. 35, no. 3, pp. 71–84, 2020.
* [32] R. T. Rockafellar, Convex Analysis. Princeton University Press, 1970.
* [33] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control, Signals and Systems, vol. 2, no. 4, pp. 303–314, 1989.
* [34] K. Hornik, “Approximation capabilities of multilayer feedforward networks,” Neural Networks, vol. 4, no. 2, pp. 251–257, 1991.
* [35] J. Han and W. E, “Deep learning approximation for stochastic control problems,” NIPS Workshop on Deep Reinforcement Learning, 2016.
* [36] A. C. Wilson, R. Roelofs, R. Stern, N. Srebro, and B. Recht, “The marginal value of adaptive gradient methods in machine learning,” 31st Conference on Neural Information Processing Systems (NIPS 2017), 2017.
* [37] J. Ma, Z. Wu, D. Zhang, and J. Zhang, “On well-posedness of forward-backward SDEs–A unified approach,” The Annals of Applied Probability, vol. 25, no. 4, pp. 2168 – 2214, 2015.
# Stabilizers for ergodic actions and invariant random expansions of non-
archimedean Polish groups
Colin JAHEL & Matthieu JOSEPH
###### Abstract
Let $G$ be a closed permutation group on a countably infinite set $\Omega$,
which acts transitively but not highly transitively. If $G$ is oligomorphic,
has no algebraicity and weakly eliminates imaginaries, we prove that any
p.m.p. ergodic action $G\curvearrowright(X,\mu)$ is either essentially free or
essentially transitive. A key notion that we develop in our approach is that
of invariant random expansions, which are $G$-invariant probability measures
on the space of expansions of the canonical (model theoretic) structure
associated with $G$. We also initiate the study of invariant random subgroups
for Polish groups and prove that – although the result for p.m.p. ergodic
actions fails for the group $\mathrm{Sym}(\Omega)$ of all permutations of
$\Omega$ – any ergodic invariant random subgroup of $\mathrm{Sym}(\Omega)$ is
essentially transitive.
MSC: Principal: 37A15, 22F50. Secondary: 03C15, 03C98, 60G09, 03C75.
Keywords: Measure-preserving actions, Polish groups, non-archimedean groups,
invariant random subgroups, model theory, infinitary logic.
###### Contents
1. 1 Introduction
2. 2 Dynamically de Finetti groups
3. 3 Preliminaries on model theory
1. 3.1 Structures and logic actions
2. 3.2 First-order and infinitary logics
3. 3.3 Back-and-forth
4. 4 Invariant Random Expansions
1. 4.1 First properties
2. 4.2 IREs without fixed point
5. 5 Rigidity for p.m.p. actions of dynamically de Finetti groups
1. 5.1 Essentially free and essentially transitive actions
2. 5.2 The proof of the main theorem
6. 6 Invariant Random Subgroups of Polish groups
1. 6.1 Definition
2. 6.2 From IRS to IRE and vice versa
3. 6.3 Rigidity of ergodic IRSs of $S_{\infty}$
7. 7 Further discussions
## 1 Introduction
The study of automorphism groups of countable structures is a very active
theme of research that can be approached from a variety of perspectives such
as model theory, Ramsey theory, permutation groups theory, but also
topological dynamics and ergodic theory. The present work contributes to the
study of the ergodic theoretic properties of automorphism groups of countable
structures. More precisely, we will be interested in probability measure
preserving (p.m.p.) actions of automorphism groups of countable structures.
One of the behaviors that we exhibit – essential transitivity of p.m.p.
actions – can be considered as a measure theoretic equivalent to the
topological property for a minimal action of having a comeager orbit.
Topological dynamics of closed subgroups of $\mathrm{Sym}(\Omega)$ is an
extensively studied topic that was kindled by Kechris, Pestov and Todorcevic
in [28]. An especially important result in this area, due Ben-Yaacov,
Melleray, Nguyen Van Thé, Tsankov and Zucker in [14], [31] and [38] is the
classification of groups for which every minimal action on a compact Hausdorff
space admits a comeager orbit. In particular, they show a link between the
existence of those comeager orbits and the metrizability of the universal
minimal flow, both phenomena corresponding to the existence of a suitable
expansion of the canonical structure associated with the group. This mirrors
our own study, as we explore invariant random expansions of structures in
order to study p.m.p. actions.
Before diving into model theoretic considerations, let us discuss our main
result from the point of view of permutation groups theory. We study closed
permutation groups, which are closed subgroups of the symmetric group
$\mathrm{Sym}(\Omega)$ (the group of all the permutations) on a countably
infinite set $\Omega$. Here $\mathrm{Sym}(\Omega)$, and therefore any closed
permutation group, has the natural topology of pointwise convergence which
turns it into a Polish group. A closed permutation group is transitive if the
action $G\curvearrowright\Omega$ is transitive, and proper if
$G\neq\mathrm{Sym}(\Omega)$ (equivalently, $G\curvearrowright\Omega$ is not
highly transitive). Let us now turn to definitions with a model theoretic
flavor. A closed permutation group $G\leq\mathrm{Sym}(\Omega)$ is oligomorphic
if the diagonal action of $G$ on $\Omega^{n}$ has only finitely many orbits
for every $n\geq 1$. The algebraic closure $\mathrm{acl}(A)$ of a finite
subset $A\subseteq\Omega$ is the set of points in $\Omega$ which lie in a
finite orbit of the pointwise stabilizer $G_{A}\coloneqq\\{g\in G\colon\forall
a\in A,g(a)=a\\}$. We say that $G$ has no algebraicity if $\mathrm{acl}(A)=A$
for every finite subset $A\subseteq\Omega$. Finally, we say that $G$ weakly
eliminates imaginaries if every open subgroup of $G$ contains as a finite
index subgroup the pointwise stabilizer $G_{A}$ of a finite subset
$A\subseteq\Omega$. We refer to Example 2.3 for a list of groups having the
three above properties.
In this paper, a p.m.p. action $G\curvearrowright(X,\mu)$ of a closed
permutation group is a Borel action $G\curvearrowright X$ on a standard Borel
space with a $G$-invariant Borel probability measure, i.e. such that
$\mu(Y)=\mu(g\cdot Y)$ for all $Y$ measurable and $g\in G$. A p.m.p. action
$G\curvearrowright(X,\mu)$ is ergodic if any measurable $Y\subseteq X$ that
satisfies $\mu(Y\triangle g\cdot Y)=0$ for all $g\in G$ is either null or
conull. The main goal of this paper is to prove the following result.
###### Theorem 1.1 (see Theorems 2.2 and 5.7). —
Let $G\lneq\mathrm{Sym}(\Omega)$ be a transitive, proper, closed subgroup. If
$G$ is oligomorphic, has no algebraicity and admits weak elimination of
imaginaries, then any p.m.p. ergodic action $G\curvearrowright(X,\mu)$ is
either essentially free or essentially transitive.
A p.m.p. action is essentially free if the stabilizer of almost every point is
trivial, and essentially transitive if there exists a conull orbit.
Surprisingly, this result fails for $\mathrm{Sym}(\Omega)$ as it admits p.m.p.
ergodic actions that are neither essentially free nor essentially transitive, see
Remark 5.8. Such p.m.p. actions for $\mathrm{Sym}(\Omega)$ have been studied
in-depth from a model theoretic perspective in [4].
Despite this rigidity result, closed permutation groups
$G\leq\mathrm{Sym}(\Omega)$ have no shortage of p.m.p. actions. Indeed, for
any standard probability space $(A,\kappa)$, the generalized Bernoulli shift
$G\curvearrowright(A,\kappa)^{\Omega}$ is a p.m.p. action. If $G$ is proper,
has no algebraicity and weakly eliminates imaginaries, this action is
essentially transitive when $(A,\kappa)$ is purely atomic and essentially free
otherwise, see Lemma 5.5.
Theorem 1.1 applies to a large variety of groups (in fact continuum many),
such as the group $\mathrm{Aut}(\mathbb{Q},<)$ of order-preserving bijections
of $\mathbb{Q}$, the group $\mathrm{Aut}(\mathbb{Q}/\mathbb{Z},<)$ of
bijections of $\mathbb{Q}/\mathbb{Z}$ which preserve the dense cyclic order,
the automorphism group of the Rado graph, and many more. We refer to Example
2.3 for a wider variety of groups covered by our theorem.
The assumptions on the groups in Theorem 1.1 are rather standard. In fact, the
class of closed permutation groups $G\leq\mathrm{Sym}(\Omega)$, which are
oligomorphic, have no algebraicity and admit weak elimination of imaginaries
has been widely studied in various contexts. Tsankov proved that they have
property (T) [36, Thm. 6.6] as a corollary of his classification of unitary
representations for oligomorphic groups [36, Thm. 1.3]. The first author and
Tsankov have also classified ergodic invariant probability measures on product
spaces $X^{\Omega}$ and, in many cases, on the compact space
$\mathrm{LO}(\Omega)$ of linear orders on $\Omega$. They moreover obtain a
property [24, Thm. 3.4], which holds for any p.m.p. action and is strongly
reminiscent of the classical theorem of de Finetti for exchangeable
random variables. This property will be essential in our study and will lead
us to the notion of dynamically de Finetti groups (see Definition 2.1), a
class which contains oligomorphic groups that have no algebraicity and weakly
eliminate imaginaries (see Theorem 2.2). Theorem 5.7 establishes that the
p.m.p. ergodic actions of any transitive, proper, closed permutation group
which is dynamically de Finetti, are either essentially free or essentially
transitive, making it a generalization of Theorem 1.1.
The point of view that we adopt in order to prove Theorem 1.1 comes from model
theory (basic model-theoretic notions will be discussed in Section 3). This is
motivated by the fact that any closed permutation group $G$ is indeed
(isomorphic to) the automorphism group of a countable relational structure
(see [27, § 1.5]), namely the canonical structure associated with $G$. Another
characterization of these groups is that they are non-archimedean Polish groups:
they admit a countable basis of neighborhoods of the identity consisting of
open subgroups. Given any countable relational language $\mathcal{L}$ (which
contains the canonical language $\mathcal{L}_{G}$ associated with $G$), we
denote by $\mathrm{Struc}_{\mathcal{L}}^{G}$ the compact space of expansions
of $\mathbf{M}_{G}$ in the language $\mathcal{L}$. These are structures whose
reduct (the structure obtained by removing the relations in
$\mathcal{L}\setminus\mathcal{L}_{G}$) is equal to $\mathbf{M}_{G}$. The space
$\mathrm{Struc}_{\mathcal{L}}^{G}$ carries a continuous $G$-action and we will
study the $G$-invariant Borel probability measures for this action, that we
call invariant random expansions of (the canonical structure associated with)
$G$. Invariant random expansions, IREs for short, of
$\mathrm{Sym}(\mathbb{N})$ were studied from the model theoretic point of view
in a series of papers under different names such as invariant measures,
invariant structures, or ergodic structures [4], [5], [6], [7], [8], [9]. We
prefer here to coin the name IRE as it more accurately describes the objects,
the structures we look at being explicitly expansions. Furthermore, this name
is reminiscent of the acronym for invariant random subgroups (IRS), a topic
that we will discuss in the context of Polish groups below. Let us denote by
$\mathrm{IRE}_{\mathcal{L}}(G)$ the space of invariant random expansions of
$G$ in the language $\mathcal{L}$. An invariant random expansion
$\mu\in\mathrm{IRE}_{\mathcal{L}}(G)$ is concentrated on an orbit if there
exists an orbit $O$ of $G\curvearrowright\mathrm{Struc}_{\mathcal{L}}^{G}$
such that $\mu(O)=1$. Towards proving Theorem 1.1, we prove the following
result.
###### Theorem 1.2 (see Theorems 2.2 and 5.6). —
Let $G\lneq\mathrm{Sym}(\Omega)$ be a transitive, proper, closed permutation
group. Assume that $G$ is oligomorphic, has no algebraicity and admits weak
elimination of imaginaries. Then for any $\mu\in\mathrm{IRE}_{\mathcal{L}}(G)$
ergodic, either $\mathrm{Aut}(\mathbf{M})=\\{1\\}$ for $\mu$-a.e.
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, or $\mu$ is concentrated on
an orbit.
Again, this result holds more generally for dynamically de Finetti groups.
Theorem 1.1 is now obtained thanks to Theorem 1.2 by invoking a universality
theorem due to Becker and Kechris, which states that any Borel $G$-action can
be Borel-embedded into $\mathrm{Struc}_{\mathcal{L}}^{G}$, provided that the
language $\mathcal{L}$ is “rich” enough, see [11, Thm. 2.7.4] for a precise
statement.
Another part of our work concerns subgroup dynamics for Polish groups and more
precisely for closed permutation groups equipped with the pointwise
convergence topology. Subgroup dynamics is the study for a topological group
$G$ of its action by conjugation on the set $\mathrm{Sub}(G)$ of its _closed_
subgroups. It turns out that subgroup dynamics is a very active area of
research in the locally compact realm. In the presence of a Polish locally
compact group $G$, the space of closed subgroups $\mathrm{Sub}(G)$ is endowed
with a natural topology, called the Chabauty topology, which turns
$\mathrm{Sub}(G)$ into a compact Hausdorff space. In this setting, subgroup
dynamics turned out to be very fruitful in various contexts of group theory
such as the study of lattices in Lie groups [1], $\mathrm{C}^{*}$-simplicity
[30], or else permutation stability of groups [12]. Yet the picture is not
quite as rosy in the Polish realm. If $G$ is a Polish non locally compact
group, then the Chabauty topology on the space $\mathrm{Sub}(G)$ of closed
subgroups is not Hausdorff in general (indeed, it is not Hausdorff when
$G=\mathrm{Sym}(\Omega)$). However, there is a natural $\sigma$-algebra on
$\mathrm{Sub}(G)$ called the Effros $\sigma$-algebra that turns
$\mathrm{Sub}(G)$ into a standard Borel space. This allows one to define IRSs for
Polish groups.
An invariant random subgroup (IRS) of a Polish group $G$ is a probability
measure on (the Effros $\sigma$-algebra of) $\mathrm{Sub}(G)$ which is
invariant by conjugation. We denote by
$\mathrm{IRS}(G)\coloneqq\mathrm{Prob}(\mathrm{Sub}(G))^{G}$
the (standard Borel) space of invariant random subgroups of $G$. The theory of
invariant random subgroups as well as the spaces $\mathrm{IRS}(G)$ in the
setting of Polish _locally compact_ groups $G$ have been very recently
extensively studied on their own and the literature in this area is rapidly
growing, see e.g. [2], [10], [15], [16]. On the contrary, invariant random
subgroups of Polish non locally compact groups have not been studied so far to
the best of our knowledge.
One of the foundational results for IRSs of locally compact groups is that
every $\nu\in\mathrm{IRS}(G)$ is obtained as the stabilizer IRS of a p.m.p.
action $G\curvearrowright(X,\mu)$, i.e., as the distribution of the stabilizer of a $\mu$-random
point [1, Thm. 2.6] (see also [2, Prop. 12] for a proof when $G$ is
countable). We prove a similar statement for closed permutation groups.
###### Theorem 1.3 (see Theorem 6.5). —
Let $G\leq\mathrm{Sym}(\Omega)$ be a closed subgroup and let
$\nu\in\mathrm{IRS}(G)$. Then there exists a p.m.p. action
$G\curvearrowright(X,\mu)$ whose stabilizer IRS is equal to $\nu$.
We say that $\nu\in\mathrm{IRS}(G)$ is concentrated on a conjugacy class if
there exists an orbit $O$ of the $G$-action by conjugation on
$\mathrm{Sub}(G)$ such that $\nu(O)=1$. Theorem 1.1 readily implies that if
$G\lneq\mathrm{Sym}(\Omega)$ is a transitive, proper, closed subgroup, which
is oligomorphic, has no algebraicity and weakly eliminates imaginaries, then
any ergodic $\nu\in\mathrm{IRS}(G)$ is concentrated on a conjugacy class. Even
though Theorem 1.1 is false for $\mathrm{Sym}(\Omega)$, we prove that any
ergodic IRS of $\mathrm{Sym}(\Omega)$ is concentrated on a conjugacy class,
therefore obtaining the following result.
###### Theorem 1.4 (see Theorems 2.2 and 6.10). —
Let $G\leq\mathrm{Sym}(\Omega)$ be a transitive closed subgroup. If $G$ is
oligomorphic, has no algebraicity and weakly eliminates imaginaries, then any
ergodic $\nu\in\mathrm{IRS}(G)$ is concentrated on a conjugacy class.
Again, this result holds more generally for dynamically de Finetti groups.
###### Convention. —
In this paper, all countable relational structures as well as all closed
permutation groups are defined on $\Omega=\mathbb{N}$. We denote by
$S_{\infty}$ the group $\mathrm{Sym}(\mathbb{N})$ of all permutations on
$\mathbb{N}$. Tuples will be denoted by $\bar{x},\bar{y},\dots$ and
$\mathbb{N}^{<\omega}$ will denote the set of tuples on $\mathbb{N}$.
#### Acknowledgments.
We would like to thank Gianluca Basso, Ronnie Chen, Clinton Conley, David
Evans, François le Maître, Todor Tsankov and Anush Tserunyan for fruitful
discussions related to this work. We also thank Nate Ackerman, Cameron Freer
and Rehana Patel for comments on a draft of this paper. C.J. was partially
funded by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) – project number 467967530. M.J. was partially supported by a
public grant as part of the Investissement d’avenir project, reference
ANR-11-LABX-0056-LMH, LabEx LMH.
## 2 Dynamically de Finetti groups
If $\mathcal{F},\mathcal{G},\mathcal{H}$ are $\sigma$-fields in a probability
space, we say that $\mathcal{F}$ and $\mathcal{H}$ are _conditionally
independent_ over $\mathcal{G}$, and denote it by
$\mathcal{F}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}_{\mathcal{G}}\mathcal{H}$
if for all $\mathcal{F}$-measurable random variables $\xi$, we have
$\mathbb{E}[\xi\mid\mathcal{G},\mathcal{H}]=\mathbb{E}[\xi\mid\mathcal{G}]$.
For a p.m.p. action $G\curvearrowright(X,\mu)$ of a closed subgroup $G\leq
S_{\infty}$ and a finite subset $A\subseteq\mathbb{N}$, we denote by
$\mathcal{F}_{A}$ the $\sigma$-algebra of $(X,\mu)$ generated by the
$G_{A}$-invariant measurable subsets of $X$, i.e., measurable subsets
$Y\subseteq X$ such that $\mu(Y\triangle g\cdot Y)=0$ for all $g\in G_{A}$.
###### Definition 2.1. —
A closed subgroup $G\leq S_{\infty}$ is dynamically de Finetti if it has no
algebraicity, admits weak elimination of imaginaries and satisfies the
following property: for all p.m.p. actions $G\curvearrowright(X,\mu)$ and all
$A,B\subseteq\mathbb{N}$ finite subsets, we have
$\mathcal{F}_{A}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}_{\mathcal{F}_{A\cap
B}}\mathcal{F}_{B}$.
We will discuss the connection between this definition and exchangeability
theory in Section 7. Our main (and so far only) source of dynamically de
Finetti groups is the following result.
###### Theorem 2.2 ([24, Thm. 3.4]). —
Let $G\leq S_{\infty}$ be a closed subgroup. If $G$ has no algebraicity,
admits weak elimination of imaginaries and is oligomorphic, then $G$ is
dynamically de Finetti.
We have strong evidence that there exist dynamically de Finetti groups which
are not oligomorphic (one potential candidate would be the automorphism group
of the universal rational Urysohn space $\mathbb{Q}\mathbb{U}$) and this will
be the topic of a future work. Before stating a key property of dynamically de
Finetti groups, let us discuss examples of such groups.
###### Example 2.3. —
All the examples we will present are obtained as automorphism groups of
Fraïssé limits. A Fraïssé limit $\mathbb{F}$ is an ultrahomogeneous structure
uniquely defined (up to isomorphism) by a Fraïssé class of finite structures.
We refer the reader to [22, § 7.1] for more details on Fraïssé limits. Here is
a non-exhaustive list of some Fraïssé classes whose automorphism groups
satisfy the assumption of Theorem 2.2 and therefore are dynamically de
Finetti.
1. (i)
The class of finite sets. Its Fraïssé limit is $\mathbb{N}$ and its
automorphism group is $S_{\infty}$.
2. (ii)
The class of finite linear orders. Its Fraïssé limit is $(\mathbb{Q},<)$ and
its automorphism group is the group $\mathrm{Aut}(\mathbb{Q},<)$ of order-
preserving permutations of $\mathbb{Q}$.
3. (iii)
The class of finite cyclic orders. Its Fraïssé limit is
$(\mathbb{Q}/\mathbb{Z},<)$ and its automorphism group is the group
$\mathrm{Aut}(\mathbb{Q}/\mathbb{Z},<)$ of bijections of
$\mathbb{Q}/\mathbb{Z}$ which preserve the dense cyclic order.
4. (iv)
The class of partially ordered finite sets.
5. (v)
The class of finite simple graphs. Its Fraïssé limit is the Rado graph $R$.
6. (vi)
The class of $K_{n}$-free finite simple graphs for some $n\geq 3$.
7. (vii)
The class of $k$-uniform finite hypergraphs for some $k\geq 2$.
8. (viii)
The class of $k$-uniform finite hypergraphs omitting a complete $k$-uniform
hypergraph for some $k\geq 2$.
9. (ix)
The class of finite tournaments. A tournament is a directed graph where there
is an oriented edge between any two distinct vertices.
10. (x)
The class of directed finite graphs omitting a (possibly infinite) set of
finite tournaments. Henson observed [21] that there are continuum many Fraïssé
limits obtained this way.
11. (xi)
The class of finite metric spaces with distance set $\\{0,1,\dots,n\\}$ for
some fixed $n\geq 1$. For $n=1$, we recover (i) and for $n=2$, we recover (v).
We now explain briefly why the automorphism groups of these structures satisfy
the assumptions of Theorem 2.2.
* •
In all the examples (i)–(xi), the automorphism group of the Fraïssé limit
is oligomorphic because there are only finitely many isomorphism types of
finite structures generated by $n$ elements in the corresponding Fraïssé
class.
* •
In all the examples (i)–(xi), the automorphism group of the Fraïssé limit
has no algebraicity because the corresponding Fraïssé class has the strong
amalgamation property, see [17, (2.15)] for a definition.
* •
A closed subgroup $G\leq S_{\infty}$ has the _strong small index property_ if
every subgroup $H\leq G$ of index $<2^{\aleph_{0}}$ lies between the pointwise
and the setwise stabilizer of a finite set $A\subseteq\mathbb{N}$. This
property implies weak elimination of imaginaries and has been verified for the
examples (i) and (v)-(x), see [33]. The fact that the other examples weakly
eliminate imaginaries is rather standard.
Let us close this example by mentioning that the automorphism groups in
(i)-(xi) are pairwise non (abstractly) isomorphic, implying in particular that
there are continuum many dynamically de Finetti groups. We refer to [32] for
the problem of reconstructing a countable structure from its automorphism
group.
One of the main features of dynamically de Finetti groups is that they satisfy
the classical theorem of de Finetti from exchangeability theory.
###### Theorem 2.4. —
Let $G\leq S_{\infty}$ be a closed subgroup, and let $a\in\mathbb{N}$. If $G$
is dynamically de Finetti, then any $G$-invariant probability measure $\mu$ on
$[0,1]^{G\cdot a}$ is mixed i.i.d., that is, there exists a probability measure
$\lambda$ on the set $\mathrm{Prob}([0,1])$ such that
$\mu=\int_{\mathrm{Prob}([0,1])}m^{\otimes G\cdot a}d\lambda(m).$
###### Proof.
Assume first that the p.m.p. action $G\curvearrowright([0,1]^{G\cdot a},\mu)$
is ergodic. In this case, the $\sigma$-algebra $\mathcal{F}_{\emptyset}$ of
$G$-invariant measurable subsets of $[0,1]^{G\cdot a}$ is trivial. Therefore,
for any $A,B\subseteq G\cdot a$ finite and disjoint, we have
$\mathcal{F}_{A}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}\mathcal{F}_{B}$.
This exactly says that $\mu=\otimes_{b\in G\cdot a}m_{b}$ where $m_{b}$ is the
$b^{\text{th}}$-marginal of $\mu$. But $G$ acts transitively on $G\cdot a$, so
all the (one-dimensional) marginals of $\mu$ are equal. Therefore,
$\mu=m^{\otimes G\cdot a}$ for some $m\in\mathrm{Prob}([0,1])$. If $\mu$ is
not ergodic, the ergodic decomposition theorem (see for instance [34, p. 77])
allows us to conclude that $\mu$ is mixed i.i.d. ∎
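To illustrate the statement, take $G=S_{\infty}$, which satisfies the assumptions of Theorem 2.2 (see Example 2.3 (i)), and any $a\in\mathbb{N}$, so that $G\cdot a=\mathbb{N}$. An $S_{\infty}$-invariant probability measure on $[0,1]^{\mathbb{N}}$ is the same thing as the law of an exchangeable sequence, and Theorem 2.4 then recovers the classical theorem of de Finetti:
$\mu=\int_{\mathrm{Prob}([0,1])}m^{\otimes\mathbb{N}}d\lambda(m)$
for some probability measure $\lambda$ on $\mathrm{Prob}([0,1])$.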
We now state a characterization of having no algebraicity and admitting weak
elimination of imaginaries. This characterization, which will be useful later,
seems to be well known to model theorists (see for instance [35, Lem. 16.17]).
We provide here a proof in the language of permutation groups. If $G\leq
S_{\infty}$ is a closed subgroup, we denote by $\mathrm{Fix}(G)$ the set of
points in $\Omega$ which are fixed by $G$.
###### Lemma 2.5. —
Let $G\leq S_{\infty}$ be a closed subgroup. Then $G$ has no algebraicity and
admits weak elimination of imaginaries if and only if
$\mathrm{Fix}(G)=\emptyset$ and for all $A,B\subseteq\mathbb{N}$ finite, we
have $\langle G_{A},G_{B}\rangle=G_{A\cap B}$.
###### Proof.
$(\Rightarrow)$ First, we readily get $\mathrm{Fix}(G)=\emptyset$ because $G$
has no algebraicity. Fix $A,B\subseteq\mathbb{N}$ two finite subsets and let
$V=\langle G_{A},G_{B}\rangle$. By definition, $V\leq G_{A\cap B}$. Let us
show the reverse inclusion. Since $G$ admits weak elimination of imaginaries,
there exists a finite subset $C\subseteq\mathbb{N}$ such that $G_{C}\leq V$
and $[V:G_{C}]<+\infty$. We will prove that $C\subseteq A\cap B$. The fact
that $G_{C}$ has finite index in $V$ implies that the $V$-orbit of every $c\in
C$ is finite. However, the $G_{A}$-orbit of every $x\in\mathbb{N}\setminus A$
is infinite since $G$ has no algebraicity. Since $G_{A}\leq V$, the $V$-orbit
of every $x\in\mathbb{N}\setminus A$ is infinite. Therefore $C\subseteq A$.
Similarly, $C\subseteq B$. We conclude that $C\subseteq A\cap B$ and thus
$G_{A\cap B}\leq G_{C}\leq V=\langle G_{A},G_{B}\rangle\leq G_{A\cap B}$.
$(\Leftarrow)$ Let us first show that $G$ has no algebraicity. Assume that
this is not the case. Then there exists a finite subset $A\subseteq\mathbb{N}$
such that $G_{A}\curvearrowright\mathbb{N}\setminus A$ has a fixed point
$b\in\mathbb{N}\setminus A$ (indeed, we know that there exists
$A^{\prime}\subseteq\mathbb{N}$ finite such that
$G_{A^{\prime}}\curvearrowright\mathbb{N}\setminus A^{\prime}$ has a finite
orbit, say $\\{b_{1},\dots,b_{n}\\}$, then
$A=A^{\prime}\sqcup\\{b_{1},\dots,b_{n-1}\\}$ works). Set $B=\\{b\\}$. Then we
have $G_{A\sqcup B}=G_{A}$ and $G_{A\cap B}=G$. Therefore, $G=\langle
G_{A},G_{B}\rangle=\langle G_{A\sqcup B},G_{B}\rangle=G_{B}$. This shows that
$B\subseteq\mathrm{Fix}(G)$, which is a contradiction. Thus $G$ has no
algebraicity. We now prove that $G$ admits weak elimination of imaginaries.
Let $V\leq G$ be an open subgroup. The property “$\langle
G_{A},G_{B}\rangle=G_{A\cap B}$ for all $A,B\subseteq\mathbb{N}$ finite”
implies that there exists a unique finite subset $A_{0}\subseteq\mathbb{N}$
which is minimal (for inclusion) among all finite subsets
$A\subseteq\mathbb{N}$ that satisfy $G_{A}\leq V$. Let us show that
$[V:G_{A_{0}}]<+\infty$. Observe that for all $g\in G$, $G_{g(A_{0})}$ is a
subgroup of $gVg^{-1}$ and $g(A_{0})$ is minimal among finite subsets
$A\subseteq\mathbb{N}$ that satisfy $G_{A}\leq gVg^{-1}$. This implies
that for all $g\in V$, we have $g(A_{0})=A_{0}$ and thus
$[V:G_{A_{0}}]<+\infty$. ∎
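To illustrate the characterization, consider $G=\mathrm{Aut}(\mathbb{Q},<)$, which has no algebraicity and weakly eliminates imaginaries (see Example 2.3). For rationals $a<b<c$, Lemma 2.5 gives
$\langle G_{\\{a,b\\}},G_{\\{b,c\\}}\rangle=G_{\\{b\\}},$
that is, every order-preserving bijection of $\mathbb{Q}$ fixing $b$ is a finite product of order-preserving bijections fixing $\\{a,b\\}$ or $\\{b,c\\}$ pointwise.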
We close this section with a discussion on topological simplicity. We provide
here a proof, of a dynamical nature, of the fact that dynamically de Finetti
groups are topologically simple. It should however be noted that, as pointed
out to us by David Evans (personal communication), having no algebraicity and
weak elimination of imaginaries is enough to show topological simplicity.
###### Lemma 2.6. —
Let $G\leq S_{\infty}$ be a closed subgroup. Assume that any p.m.p. ergodic
action of $G$ is either essentially free or essentially transitive. Then any
closed non-trivial normal subgroup $N\trianglelefteq G$ is cocompact.
###### Proof.
Assume that $N\trianglelefteq G$ is a closed, non-trivial normal subgroup and
let $\pi:G\twoheadrightarrow G/N$ be the quotient homomorphism. The quotient
group $G/N$ is a closed permutation group by [11, Thm. 1.5.1]. Theorem 6.5
implies that the IRS $\delta_{\\{1_{G/N}\\}}$ of $G/N$ is realized, that is,
$G/N$ admits a p.m.p. essentially free action. By the ergodic decomposition
theorem [34, p. 77], it admits a p.m.p. ergodic essentially free action
$G/N\curvearrowright(X,\mu)$. The action $G\curvearrowright(X,\mu)$ defined by
$g\cdot x\coloneqq\pi(g)\cdot x$ is a p.m.p. ergodic action of $G$ whose
stabilizers are a.s. equal to $N$. Since $N$ is non-trivial, the action is
therefore not essentially free. By assumption on $G$, the action
$G\curvearrowright(X,\mu)$ is thus essentially transitive. This means that
there exists a $G$-invariant probability measure $\nu$ on $G/N$ such that
$G\curvearrowright(X,\mu)$ is measurably isomorphic to
$G\curvearrowright(G/N,\nu)$. This implies that $G/N$ is a compact group as
$\nu$ is the Haar probability measure on it. Thus, $N$ is cocompact. ∎
Assume now that $G\lneq S_{\infty}$ is a transitive, closed, proper
permutation group which is dynamically de Finetti. By Theorem 5.7 any p.m.p.
ergodic action of $G$ is either essentially free or essentially transitive. By
Lemma 2.6, any closed non-trivial normal subgroup $N\trianglelefteq G$ is
cocompact. Let us prove that $N=G$. If this is not the case, then $G/N$ is a
non-trivial compact group. But compact closed permutation groups are
profinite, so $G/N$ is a non-trivial profinite group. In particular, $G$
admits a proper finite index subgroup $H$. Since $[G:H]<+\infty$, $H$ is open
in $G$. Since $G$ weakly eliminates imaginaries, there exists
$A\subseteq\mathbb{N}$ finite non-empty, such that $G_{A}\leq H$ and
$[H:G_{A}]<+\infty$. This implies that $[G:G_{A}]<+\infty$. But $G$ has no
algebraicity, so the $G$-orbit of $A$ is infinite and therefore $[G:G_{A}]$ is
infinite, which yields a contradiction. Therefore, dynamically de Finetti
groups are topologically simple.
## 3 Preliminaries on model theory
### 3.1 Structures and logic actions
A relational language $\mathcal{L}=(R_{i})_{i\in I}$ is a countable collection
of relation symbols, each of which has a given arity $r_{i}$. A structure
$\mathbf{M}$ in the language $\mathcal{L}$ (with domain $\mathbb{N}$) is a
collection of subsets $R_{i}^{\mathbf{M}}\subseteq\mathbb{N}^{r_{i}}$ for each
$i\in I$, which are interpretations of the abstract relation symbols $R_{i}$.
Examples of structures include simple graphs, where the language is a single
binary relation, whose interpretation is the set of edges of the graph.
Similarly, $k$-hypergraphs are structures, the language being a single $k$-ary
relation. We can also see linear (as well as partial) orders as structures
with a single binary relation.
We denote by
$\mathrm{Struc}_{\mathcal{L}}\coloneqq\prod_{i\in
I}\\{0,1\\}^{\mathbb{N}^{r_{i}}}$
the space of structures in the language $\mathcal{L}$. This is a compact space
with the product topology and there is a natural continuous
$S_{\infty}$-action on it called the logic action: for $g\in S_{\infty}$ and
$\mathbf{M}$ a structure, $g\cdot\mathbf{M}$ is the structure $\mathbf{N}$
defined by
$\forall i\in I,(R_{i}^{\mathbf{N}}(x_{1},\dots,x_{r_{i}})=1\Leftrightarrow
R_{i}^{\mathbf{M}}(g^{-1}(x_{1},\dots,x_{r_{i}}))=1).$
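For instance, if $\mathcal{L}$ consists of a single binary relation $E$ and $\mathbf{M}$ is a simple graph, then $g\cdot\mathbf{M}$ is the graph in which $x$ and $y$ are adjacent if and only if $g^{-1}(x)$ and $g^{-1}(y)$ are adjacent in $\mathbf{M}$; in particular, the stabilizer of $\mathbf{M}$ defined next is the usual automorphism group of the graph.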
The automorphism group of $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ is the
stabilizer of $\mathbf{M}$ for the logic action, i.e.,
$\mathrm{Aut}(\mathbf{M})\coloneqq\\{g\in S_{\infty}\colon
g\cdot\mathbf{M}=\mathbf{M}\\}.$
Let $G$ be a closed subgroup of $S_{\infty}$. For all $n\geq 1$, let $J_{n}$
be the set of orbits of the diagonal action $G\curvearrowright\mathbb{N}^{n}$
and let $J=\bigcup_{n\geq 1}J_{n}$. We denote by
$\mathcal{L}_{G}\coloneqq(R_{j})_{j\in J}$ the canonical language associated
with $G$, where $R_{j}$ is of arity $n$ for all $j\in J_{n}$. The canonical
structure associated with $G$ is the structure
$\mathbf{M}_{G}\coloneqq(R_{j}^{G})_{j\in J}$, where
$R_{j}^{G}=j\subseteq\mathbb{N}^{n}$ for all $j\in J_{n}$. It is easy to check
that $\mathrm{Aut}(\mathbf{M}_{G})=G$. In this article we will deal with
structures which expand the canonical structure associated with a closed
subgroup $G\leq S_{\infty}$. For a language $\mathcal{L}$ which contains
$\mathcal{L}_{G}$, an element $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ is
an expansion of $\mathbf{M}_{G}$ if the structure
$\mathbf{M}_{\upharpoonright\mathcal{L}_{G}}\in\mathrm{Struc}_{\mathcal{L}_{G}}$
(called the $\mathcal{L}_{G}$-reduct of $\mathbf{M}$) obtained from
$\mathbf{M}$ by removing the relation symbols from
$\mathcal{L}\setminus\mathcal{L}_{G}$ is equal to $\mathbf{M}_{G}$.
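As a concrete example, take $G=\mathrm{Aut}(\mathbb{Q},<)$, with $\mathbb{Q}$ identified with $\mathbb{N}$ through a fixed enumeration in accordance with our convention. The orbits of the diagonal action of $G$ on pairs are easily seen to be $\\{(x,x)\colon x\in\mathbb{Q}\\}$, $\\{(x,y)\colon x<y\\}$ and $\\{(x,y)\colon x>y\\}$, so the canonical language $\mathcal{L}_{G}$ contains in particular a binary relation symbol interpreted in $\mathbf{M}_{G}$ as the order on $\mathbb{Q}$. An expansion of $\mathbf{M}_{G}$ in a language $\mathcal{L}\supseteq\mathcal{L}_{G}$ then adds new relations on top of this order.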
###### Definition 3.1. —
Let $G\leq S_{\infty}$ be a closed subgroup and let $\mathcal{L}$ be a
language which contains $\mathcal{L}_{G}$. A $G$-structure in the language
$\mathcal{L}$ is an element of $\mathrm{Struc}_{\mathcal{L}}$ which is an
expansion of the canonical structure $\mathbf{M}_{G}$.
We denote by $\mathrm{Struc}_{\mathcal{L}}^{G}$ the space of $G$-structures in
the language $\mathcal{L}$. This is a closed subset of
$\mathrm{Struc}_{\mathcal{L}}$ and there is a natural continuous $G$-action on
$\mathrm{Struc}_{\mathcal{L}}^{G}$ which is induced from the $G$-action on
$\mathrm{Struc}_{\mathcal{L}}$ and is called the relativized logic action [11,
§2.7]. With this terminology, for any language $\mathcal{L}$, there is a
canonical homeomorphism between $\mathrm{Struc}_{\mathcal{L}}$ and
$\mathrm{Struc}^{S_{\infty}}_{\mathcal{L}\sqcup\mathcal{L}_{S_{\infty}}}$ and
we will therefore consider elements of $\mathrm{Struc}_{\mathcal{L}}$ as
$S_{\infty}$-structures. We will use the relativized logic action in an
essential way in Section 5 through a universality theorem of Becker and
Kechris [11, Thm. 2.7.4], which states that whenever
$\mathcal{L}\setminus\mathcal{L}_{G}$ contains relations of arbitrarily high
arity, then for any Borel action $G\curvearrowright X$ on a standard Borel
space, there exists a Borel $G$-equivariant injective map
$X\to\mathrm{Struc}_{\mathcal{L}}^{G}$.
### 3.2 First-order and infinitary logics
In this section, we fix a countable relational language
$\mathcal{L}=(R_{i})_{i\in I}$. An atomic formula in the language
$\mathcal{L}$ is an expression of the form $R_{i}(v_{1},\dots,v_{r_{i}})$ for
some $i\in I$, where $v_{1},\dots,v_{r_{i}}$ are free variables and $r_{i}$ is
the arity of $R_{i}$. The set of quantifier-free formulas is the smallest set
that contains atomic formulas and is closed under negation, finite conjunction
and finite disjunction. We denote by $\mathcal{L}_{\omega_{1},\omega}$ the
infinitary logic in the language $\mathcal{L}$, which is the smallest set
containing atomic formulas and closed under negation, under universal and
existential quantification and under conjunction and disjunction of any
countable family of formulas with a common finite set of free variables. We
denote by $\mathcal{L}_{\omega,\omega}$ the standard finitary first-order
logic in the language $\mathcal{L}$, which consists of first-order formulas,
that is, formulas in $\mathcal{L}_{\omega_{1},\omega}$ with only finitely many
disjunctions and conjunctions. A formula in $\mathcal{L}_{\omega_{1},\omega}$
without free variables is called a sentence. For an atomic formula
$\phi(v_{1},\dots,v_{r_{i}})=R_{i}(v_{1},\ldots,v_{r_{i}})$ and
$\bar{x}=(x_{1},\ldots,x_{r_{i}})\in\mathbb{N}^{r_{i}}$, we say that
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ satisfies $\phi(\bar{x})$ if
$R_{i}^{\mathbf{M}}(x_{1},\ldots,x_{r_{i}})=1$. The interpretation of any non-
atomic formula $\phi(v_{1},\dots,v_{n})\in\mathcal{L}_{\omega_{1},\omega}$ is
defined inductively, the interpretation of each symbol corresponding to its
usual use in mathematics. For any tuple $\bar{x}\in\mathbb{N}^{n}$, we write
$\mathbf{M}\models\phi(\bar{x})$ whenever $\mathbf{M}$ satisfies
$\phi(\bar{x})$. The following lemma is straightforward and will be used many
times in the sequel.
###### Lemma 3.2. —
Let $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$,
$\phi(v_{1},\dots,v_{n})\in\mathcal{L}_{\omega_{1},\omega}$ and
$\bar{x}\in\mathbb{N}^{n}$.
1. (i)
The set
$\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}\colon\mathbf{M}\models\phi(\bar{x})\\}$
is a Borel subset of $\mathrm{Struc}_{\mathcal{L}}$.
2. (ii)
For all $g\in S_{\infty}$, we have
$\mathbf{M}\models\phi(\bar{x})\Leftrightarrow
g\cdot\mathbf{M}\models\phi(g(\bar{x}))$.
A fragment in $\mathcal{L}_{\omega_{1},\omega}$ is a set which contains
$\mathcal{L}_{\omega,\omega}$ and is closed under subformula, finite
conjunction and disjunction, negation, universal and existential
quantification, and substitution of free variables. Any subset
$\Sigma\subseteq\mathcal{L}_{\omega_{1},\omega}$ is contained in a least fragment
denoted by $\langle\Sigma\rangle$, which is countable whenever $\Sigma$ is. By
definition, $\langle\emptyset\rangle=\mathcal{L}_{\omega,\omega}$. Let $F$ be
a fragment. Given $n\geq 1$, the set of formulas $\phi(v_{1},\dots,v_{n})\in
F$ with $n$ free variables is a Boolean algebra and we denote by $S^{n}_{F}$
its Stone space, which is the space of $F$-types in $n$ variables. By Stone
duality, $S^{n}_{F}$ is a compact Hausdorff totally disconnected space, which
is moreover metrizable if and only if $F$ is a countable fragment. A basis of
clopen sets of $S^{n}_{F}$ is given by the sets $[\phi]\coloneqq\\{p\in
S^{n}_{F}\colon\phi\in p\\}$ for $\phi\in F$. Given
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ and $\bar{x}\in\mathbb{N}^{n}$ we
denote by $\mathrm{tp}^{\mathbf{M}}_{F}(\bar{x})$ the $F$-type in $S^{n}_{F}$
defined by
$\mathrm{tp}^{\mathbf{M}}_{F}(\bar{x})\coloneqq\\{\phi\in
F\colon\mathbf{M}\models\phi(\bar{x})\\}.$
The following lemma is the equivalent of Lemma 3.2 for $F$-types.
###### Lemma 3.3. —
Let $F$ be a fragment and $\bar{x}\in\mathbb{N}^{n}$. The following holds
1. (i)
The map $\mathrm{tp}_{F}(\bar{x}):\mathrm{Struc}_{\mathcal{L}}\to S^{n}_{F}$
is Borel.
2. (ii)
For all $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ and $g\in S_{\infty}$, we
have
$\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=\mathrm{tp}_{F}^{g\cdot\mathbf{M}}(g(\bar{x}))$.
3. (iii)
For all $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ and all
$\bar{y}\in\mathrm{Aut}(\mathbf{M})\cdot\bar{x}$, we have
$\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y})$.
###### Proof.
Let us prove (i). For all $\phi\in F$, observe that
$\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}\colon\mathrm{tp}^{\mathbf{M}}_{F}(\bar{x})\in[\phi]\\}=\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}\colon\mathbf{M}\models\phi(\bar{x})\\},$
which is Borel by Lemma 3.2 (i). Therefore, the map
$\mathrm{tp}_{F}(\bar{x}):\mathrm{Struc}_{\mathcal{L}}\to S^{n}_{F}$ is Borel.
The proof of (ii) is a straightforward consequence of Lemma 3.2 (ii). Finally,
(iii) is a particular case of (ii). ∎
We finally need the notion of quantifier-free types. The space of quantifier-
free types in $n$ variables is the Stone space of the Boolean algebra of
quantifier-free formulas with $n$ free variables. Given
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ and $\bar{x}\in\mathbb{N}^{n}$, we
denote by $\mathrm{qftp}^{\mathbf{M}}(\bar{x})$ the quantifier-free type
defined by
$\mathrm{qftp}^{\mathbf{M}}(\bar{x})\coloneqq\\{\phi(v_{1},\dots,v_{n})\text{
quantifier-free formula}\colon\mathbf{M}\models\phi(\bar{x})\\}.$
An important remark is that if $\mathbf{M}$ is a $G$-structure for some closed
subgroup $G\leq S_{\infty}$, then for all
$\bar{x},\bar{y}\in\mathbb{N}^{<\omega}$,
$\mathrm{qftp}^{\mathbf{M}}(\bar{x})=\mathrm{qftp}^{\mathbf{M}}(\bar{y})$
implies that $\bar{x}$ and $\bar{y}$ are in the same $G$-orbit (indeed, since the $\mathcal{L}_{G}$-reduct of $\mathbf{M}$ is $\mathbf{M}_{G}$, the quantifier-free type records which relations of $\mathcal{L}_{G}$ the tuple satisfies, and these relations are precisely the orbits of the diagonal $G$-actions). An immediate
corollary is that for any countable fragment $F$,
$\mathrm{tp}^{\mathbf{M}}_{F}(\bar{x})=\mathrm{tp}^{\mathbf{M}}_{F}(\bar{y})$
implies that $\bar{x}$ and $\bar{y}$ are in the same $G$-orbit.
### 3.3 Back-and-forth
Let us fix in this section a closed subgroup $G\leq S_{\infty}$ and a
countable relational language $\mathcal{L}$ which contains $\mathcal{L}_{G}$.
The following lemma, which is almost tautological, is at the heart of the
technique of back-and-forth that we explain next.
###### Lemma 3.4. —
Let $\mathbf{M},\mathbf{N}\in\mathrm{Struc}_{\mathcal{L}}^{G}$. Assume that
there exist two sequences $A_{0}\subseteq A_{1}\subseteq\dots$,
$B_{0}\subseteq B_{1}\subseteq\dots$ of finite subsets of $\mathbb{N}$ and a
sequence $g_{0}\subseteq g_{1}\subseteq\dots$ of bijections $g_{n}:A_{n}\to
B_{n}$, which are restrictions of elements in $G$, such that
$\bigcup_{n}A_{n}=\bigcup_{n}B_{n}=\mathbb{N}$ and for all $n\geq 0$, for all
$R\in\mathcal{L}$ of arity $r$, for all $x_{1},\dots,x_{r}\in A_{n}$, we have
$\displaystyle R^{\mathbf{M}}(x_{1},\dots,x_{r})=1\Leftrightarrow
R^{\mathbf{N}}(g_{n}(x_{1},\dots,x_{r}))=1.$ ($*$)
Then $g=\bigcup_{n}g_{n}$ belongs to $G$ and satisfies
$g\cdot\mathbf{M}=\mathbf{N}$. If $\mathbf{M}=\mathbf{N}$, then
$g\in\mathrm{Aut}(\mathbf{M})$.
The way we will use this lemma in our proofs is by inductively constructing
$A_{n},B_{n}$ and $g_{n}$. Assume for the initial step that we have
$A_{0},B_{0}$ with the same quantifier-free type and $g_{0}$ a bijection between
$A_{0}$ and $B_{0}$ preserving the relations. The strategy is to take
$(x_{i})_{i\geq 1}$ and $(y_{j})_{j\geq 1}$ enumerations of $\mathbb{N}$ and
to ensure that $A_{n}$ contains $x_{1},\ldots,x_{n}$ and $B_{n}$ contains
$y_{1},\ldots,y_{n}$. The key is to construct both sets in order to ensure
that for any $x\in\mathbb{N}$ there is $y\in\mathbb{N}$ such that
$\mathrm{qftp}^{\mathbf{M}}(A_{n},x)=\mathrm{qftp}^{\mathbf{N}}(B_{n},y)$ and
for all $y^{\prime}\in\mathbb{N}$ there is $x^{\prime}\in\mathbb{N}$ such that
$\mathrm{qftp}^{\mathbf{N}}(B_{n},y^{\prime})=\mathrm{qftp}^{\mathbf{M}}(A_{n},x^{\prime})$.
If such is the case, assume that we have constructed $A_{n},B_{n}$ as in the
former paragraph and $g_{n}:A_{n}\to B_{n}$ a bijection which is a restriction
of an element in $G$. Let $i$ be the smallest index such that $x_{i}\notin
A_{n}$ and $j$ the smallest index such that $y_{j}\notin B_{n}$ (by
construction, $i>n$ and $j>n$). By assumption, there exist $y\in\mathbb{N}$
and $x\in\mathbb{N}$ such that
$\displaystyle\mathrm{qftp}^{\mathbf{M}}(A_{n},x_{i})$
$\displaystyle=\mathrm{qftp}^{\mathbf{N}}(B_{n},y),$ (1)
$\displaystyle\mathrm{qftp}^{\mathbf{N}}(B_{n},y,y_{j})$
$\displaystyle=\mathrm{qftp}^{\mathbf{M}}(A_{n},x_{i},x).$ (2)
We then set $A_{n+1}=A_{n}\cup\\{x_{i},x\\}$,
$B_{n+1}=B_{n}\cup\\{y,y_{j}\\}$. Since $x_{i}\notin A_{n}$ and $y_{j}\notin
B_{n}$, the equalities of the quantifier-free types in (1) and (2) imply that
$x\notin A_{n}$ and $y\notin B_{n}$. This allows us to extend $g_{n}$ into a
bijection $g_{n+1}:A_{n+1}\to B_{n+1}$ by setting $g_{n+1}(x_{i})=y$ and
$g_{n+1}(x)=y_{j}$. Equations (1) and (2) and the definition of quantifier-
free type ensure that $g_{n+1}$ is indeed the restriction of an element in $G$
and that condition ($*$) in Lemma 3.4 is satisfied.
## 4 Invariant Random Expansions
In this section we fix a closed subgroup $G\leq S_{\infty}$ and a countable
relational language $\mathcal{L}$ which contains the canonical language
$\mathcal{L}_{G}$. Recall that $\mathrm{Struc}_{\mathcal{L}}^{G}$ denotes the
compact space of expansions of the canonical structure $\mathbf{M}_{G}$, which
are the structures $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ whose
$\mathcal{L}_{G}$-reduct $\mathbf{M}_{\upharpoonright\mathcal{L}_{G}}$ is
equal to $\mathbf{M}_{G}$. We now turn our attention to invariant random
expansions.
###### Definition 4.1. —
An invariant random expansion of $G$ in the language $\mathcal{L}$ is a Borel
probability measure on $\mathrm{Struc}_{\mathcal{L}}^{G}$ which is invariant
under the relativized logic action.
We denote by $\mathrm{IRE}(G)$ (or $\mathrm{IRE}_{\mathcal{L}}(G)$ if we want
to emphasize the language) the space of invariant random expansions of $G$. We
will also use $G$-IRE to refer to an element of $\mathrm{IRE}(G)$. Let us give
concrete examples of invariant random expansions.
###### Example 4.2. —
1. (i)
For all $p\in]0,1[$, the random simple graph whose vertex set is $\mathbb{N}$
and whose edges are i.i.d. with distribution $\mathrm{Ber}(p)$ is an
$S_{\infty}$-IRE.
2. (ii)
Let $\mathrm{LO}(\mathbb{N})\subseteq\\{0,1\\}^{\mathbb{N}\times\mathbb{N}}$
be the space of linear orders on $\mathbb{N}$. There is a unique
$S_{\infty}$-invariant probability measure $\mu$ on $\mathrm{LO}(\mathbb{N})$
and it is defined by its values on the cylinders $\\{x_{1}<\dots<x_{n}\\}$ by
$\mu(\\{x_{1}<\dots<x_{n}\\})=1/n!$ (see the remark after this example). This gives an $S_{\infty}$-IRE.
3. (iii)
The structure we consider in this case is a particular $3$-hypergraph. A
$2$-graph is a 3-hypergraph such that there is an even number of hyperedges
between any four vertices. A simple way to produce a $2$-graph is to take a
graph, put a hyperedge between three vertices if there is an even number of
edges between them, and then remove the edges. Any $2$-graph can be obtained
this way and we call a graph producing a given $2$-graph $H$ a graphing of
$H$. Consider the Fraïssé limit $\mathbf{N}$ of finite $2$-graphs, which can
be obtained by the above construction starting from a Rado graph. Denote by
$G$ the automorphism group of $\mathbf{N}$. There is a $G$-IRE concentrated on
the space of graphings of $\mathbf{N}$, see [23, Chap. 2]. One notable feature
is that this IRE doesn’t come from an $S_{\infty}$-IRE, see Section 7 for a
more in-depth discussion.
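Concerning Example 4.2 (ii), a quick sanity check explains both the formula and the uniqueness of $\mu$: for distinct $x_{1},\dots,x_{n}$, the $n!$ cylinders $\\{x_{\sigma(1)}<\dots<x_{\sigma(n)}\\}$, $\sigma\in S_{n}$, partition $\mathrm{LO}(\mathbb{N})$ and are permuted transitively by the $S_{\infty}$-action, so any invariant probability measure must give each of them measure $1/n!$.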
### 4.1 First properties
In this section, we state several lemmas about IREs of groups with no
algebraicity. Recall that $G$ has no algebraicity if for every finite subset
$A\subseteq\mathbb{N}$, we have $\mathrm{acl}(A)=A$. We will not prove the
measurability of sets and maps in this section, all of them being
straightforward, save for $\mathbf{M}\mapsto\mathrm{Aut}(\mathbf{M})$. The
measurability of this map is the object of Lemma 6.4 (ii).
###### Lemma 4.3. —
Let $\mu\in\mathrm{IRE}(G)$. Then for $\mu$-a.e.
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, the set
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$ is either empty or infinite.
###### Proof.
Assume that there exists a finite subset $A\subseteq\mathbb{N}$ such that
$\mu(\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}\colon\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))=A\\})>0.$
Since $G$ has no algebraicity, there exist infinitely many pairwise disjoint
sets $A_{n}$ in the $G$-orbit of $A$ by Neumann’s lemma [22, Cor. 4.2.2]. By
$G$-invariance of $\mu$, the sets
$\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}\colon\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))=A_{n}\\}$
all have the same measure, which is positive. This is absurd since they are
pairwise disjoint. ∎
The following lemma shows that if $\mu$ is an ergodic invariant random
expansion of a group $G$ with no algebraicity, then $\mu$-a.s.,
$\mathrm{Aut}(\mathbf{M})$ has no algebraicity, apart from the fixed points
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$.
###### Lemma 4.4. —
Assume that $G$ has no algebraicity and let $\mu\in\mathrm{IRE}(G)$. Then for
all tuples $\bar{x}\in\mathbb{N}^{<\omega}$, the following holds:
1. (i)
for $\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, we have
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})=\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))\cup\bar{x}$.
2. (ii)
for $\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, the
$\mathrm{Aut}(\mathbf{M})_{\bar{x}}$-orbits on $\mathbb{N}$ are either of size
$1$ or infinite.
###### Proof.
Let us prove (i). It is clear that $\mu$-a.s., we have
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))\cup\bar{x}\subseteq\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})$.
We now prove the reverse inclusion. Assume by contradiction that there exists
$a\in\mathbb{N}\setminus\bar{x}$ such that
$\mu(\\{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})\setminus\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))\text{
contains }a\\})>0.$
Therefore, there exists $\bar{y}\in\mathbb{N}^{<\omega}$ such that
$\mu(\\{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})\text{ contains
}a\text{ and }\exists g\in\mathrm{Aut}(\mathbf{M})\colon
g(\bar{x})=\bar{y},g(a)\neq a\\})>0.$
Conditionally on the set
$\\{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})\text{ contains }a\\}$, we
have that for $\mu$-a.e. $\mathbf{M}$, for all
$g,h\in\mathrm{Aut}(\mathbf{M})$, if $g(\bar{x})=h(\bar{x})=\bar{y}$ then
$g(a)=h(a)$. If we denote by $\bar{z}$ the tuple $(\bar{x},\bar{y},a)$, then
for all $b\in\mathbb{N}\setminus\bar{z}$, the $\mu$-measure of the set
$\\{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{x}})\text{ contains }a\text{
and for any }g\in\mathrm{Aut}(\mathbf{M})\text{ s.t.
}g(\bar{x})=\bar{y},g(a)=b\\}$
is strictly positive and constant along the $G_{\bar{z}}$-orbit of $b$. These
orbits are infinite since $G$ has no algebraicity, and the former sets are
pairwise disjoint, which is a contradiction.
Let us now prove (ii). Assume by contradiction that there exists a tuple
$\bar{y}=(y_{1},\dots,y_{n})$ of size $n\geq 2$ such that
$\mu(\\{\bar{y}\text{ is an orbit of
}\mathrm{Aut}(\mathbf{M})_{\bar{x}}\\})>0$. Let
$\bar{z}\coloneqq(\bar{x},y_{1},\dots,y_{n-1})$. Then
$\mu(\\{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M})_{\bar{z}})\setminus\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))\text{
contains }y_{n}\\})>0$, which contradicts (i). ∎
The following result is a version of Neumann’s lemma for IRE.
###### Lemma 4.5. —
Assume that $G$ has no algebraicity and let $\mu\in\mathrm{IRE}(G)$. Let
$\bar{x}\in\mathbb{N}^{<\omega}$. Then for $\mu$-a.e.
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, either $\bar{x}$ contains an
element of $\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$ or the
$\mathrm{Aut}(\mathbf{M})$-orbit of $\bar{x}$ contains an infinite set of
pairwise disjoint tuples.
###### Proof.
We know by Lemma 4.4 that $\mu$-a.s., the orbits of
$\mathrm{Aut}(\mathbf{M})_{\bar{x}}$ are either of size $1$ or infinite. Then
Neumann’s lemma [22, Cor. 4.2.2] gives us the desired conclusion. ∎
### 4.2 IREs without fixed point
If $\mu\in\mathrm{IRE}(G)$, we say that $\mu$ has no fixed point if for
$\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, the set
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$ is empty. We now focus our attention
on IREs without fixed points under the additional assumption that the group
$G$ is dynamically de Finetti.
###### Lemma 4.6. —
Let $G\leq S_{\infty}$ be a closed subgroup, which is dynamically de Finetti.
Let $\mu\in\mathrm{IRE}(G)$ be ergodic and let $F$ be a countable fragment. If
$\mu$ has no fixed point, then for all $\bar{x}\in\mathbb{N}^{n}$, the
probability measure $\mathrm{tp}_{F}(\bar{x})_{*}\mu$ on $S^{n}_{F}$ is purely
atomic.
###### Proof.
First of all, observe that by Lemma 3.3 (ii), for all
$\bar{x}\in\mathbb{N}^{n}$, the random variable
$\mathrm{tp}_{F}(\bar{x}):(\mathrm{Struc}_{\mathcal{L}}^{G},\mu)\to S^{n}_{F}$
is $G_{\bar{x}}$-invariant and therefore $\mathcal{F}_{\bar{x}}$-measurable,
where $\mathcal{F}_{\bar{x}}$ denotes the $\sigma$-algebra of
$G_{\bar{x}}$-invariant measurable subsets of
$\mathrm{Struc}_{\mathcal{L}}^{G}$. Since $\mu$ is ergodic and $G$ is
dynamically de Finetti, we deduce that for all
$\bar{x},\bar{y}\in\mathbb{N}^{n}$ disjoint, the random variables
$\mathrm{tp}_{F}(\bar{x})$ and $\mathrm{tp}_{F}(\bar{y})$ are independent.
If $\mu$ has no fixed point, then by Lemma 4.5, for $\mu$-a.e. $\mathbf{M}$,
there exists $\bar{y}$ disjoint from $\bar{x}$ in the
$\mathrm{Aut}(\mathbf{M})$-orbit of $\bar{x}$. This implies that $\mu$-a.s.,
there is $\bar{y}\in\mathbb{N}^{n}$ disjoint from $\bar{x}$ such that
$\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y})$.
Assume that $\mathrm{tp}_{F}(\bar{x})_{*}\mu$ is not purely atomic. This means
that there exists a measurable subset
$X\subseteq\mathrm{Struc}_{\mathcal{L}}^{G}$, $\mu(X)>0$, such that
$\mathrm{tp}_{F}(\bar{x})_{*}\mu(\cdot\mid X)$ is a diffuse measure. There
exists $\bar{y}\in\mathbb{N}^{n}$ (deterministic) disjoint from $\bar{x}$ and
a measurable subset $X^{\prime}\subseteq X$ of positive measure such that for
all $\mathbf{M}\in X^{\prime}$,
$\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y})$.
By independence of $\mathrm{tp}_{F}(\bar{x})$ and $\mathrm{tp}_{F}(\bar{y})$
and using the fact that $\mathrm{tp}_{F}(\bar{x})_{*}\mu(\cdot\mid
X^{\prime})$ is diffuse, we get that
$0=\mu(\\{\mathbf{M}\in
X^{\prime}\colon\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y})\\})=\mu(X^{\prime})>0,$
which yields a contradiction. Thus, $\mathrm{tp}_{F}(\bar{x})_{*}\mu$ is
purely atomic. ∎
###### Remark 4.7. —
If $\mu$ has fixed points, then one can prove with a similar argument that the
probability measure $\mathrm{tp}_{F}(\bar{x})_{*}\tilde{\mu}$ is purely
atomic, where $\tilde{\mu}$ is the conditional measure
$\mu(\cdot\mid\\{\bar{x}\cap\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))=\emptyset\\})$.
For all $n\geq 0$, we denote by $S^{n}_{F}(\mu)$ the countable set of $p\in
S^{n}_{F}$ for which there exists $\bar{x}\in\mathbb{N}^{n}$ such that
$\mu(\\{\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}\colon\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=p\\})>0$.
In order to analyze in detail IREs with no fixed points of dynamically de
Finetti groups, we need the following result, a version for $S_{\infty}$-IRE
of which is contained in [4, Lem. 4.6]. Moreover, the proof we present here
contains no new argument compared to the proof of the aforementioned result.
###### Theorem 4.8. —
Assume that $G$ is dynamically de Finetti. Let $\mu\in\mathrm{IRE}(G)$ be
ergodic. If $\mu$ has no fixed point, then there exists a countable fragment
$F$ such that for all $p\in S^{n}_{F}(\mu)$, $q\in S^{n+1}_{F}(\mu)$ and
$(\bar{x},z)\in\mathbb{N}^{n+1}$ satisfying
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x})=p\text{ and
}\mathrm{tp}_{F}^{\mathbf{M}}(\bar{x},z)=q\\})>0$, the following holds
$\mu$-a.s.
$\forall\bar{y}\in\mathbb{N}^{n},(\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y})=p\Rightarrow\exists
z^{\prime}\colon\mathrm{tp}_{F}^{\mathbf{M}}(\bar{y},z^{\prime})=q).$
###### Proof.
Let $\omega_{1}$ be the first uncountable ordinal. We will construct a family
of countable fragments $(F_{\alpha})_{\alpha<\omega_{1}}$ depending on $\mu$,
indexed by countable ordinals and show that for some ordinal
$\alpha<\omega_{1}$, the countable fragment $F_{\alpha}$ is as wanted. We
define $F_{\alpha}$ by transfinite induction:
$\displaystyle F_{0}$ $\displaystyle=\mathcal{L}_{\omega,\omega},$
$\displaystyle F_{\alpha+1}$
$\displaystyle=\Big{\langle}F_{\alpha},\bigwedge_{\phi\in
p}\phi(v_{1},\dots,v_{n})\text{ for all }p\in S^{n}_{F_{\alpha}}(\mu)\text{
and }n\geq 0\Big{\rangle},$ $\displaystyle F_{\beta}$
$\displaystyle=\bigcup_{\alpha<\beta}F_{\alpha}\text{ if }\beta\text{ is a
limit ordinal}.$
For every countable fragment $F$ and all $n\geq 0$, the set $S^{n}_{F}(\mu)$ is countable;
therefore $F_{\alpha}$ is a countable fragment for every ordinal $\alpha<\omega_{1}$.
###### Lemma 4.9. —
There is an ordinal $\alpha<\omega_{1}$ such that for all $\beta$ with $\alpha<\beta<\omega_{1}$, all
$\bar{x}\in\mathbb{N}^{n}$ and $r\in S^{n}_{F_{\beta}}(\mu)$, if
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\beta}}^{\mathbf{M}}(\bar{x})=r\\})>0$,
then
$\displaystyle\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\beta}}^{\mathbf{M}}(\bar{x})=r\\})=\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=s\\})$
(3)
where $s\subseteq r$ denotes the restriction of $r$ to the fragment
$F_{\alpha}$.
###### Proof.
We reproduce the argument from the proof of [4, Lem. 4.5]. Fix
$\bar{x}\in\mathbb{N}^{n}$. For $\alpha<\omega_{1}$ an ordinal, let us denote
by $\mathrm{Sp}(\alpha)(\bar{x})$ the set of $p\in S^{n}_{F_{\alpha}}(\mu)$
such that there exists $\beta>\alpha$ and $q\in S^{n}_{F_{\beta}}(\mu)$ whose
restriction to the fragment $F_{\alpha}$ is $p$, satisfying
$\displaystyle
0<\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\beta}}^{\mathbf{M}}(\bar{x})=q\\})<\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p\\}).$
Assume that for all $\alpha<\omega_{1}$, $\mathrm{Sp}(\alpha)(\bar{x})$ is
non-empty. We construct a sequence $(\alpha_{\delta})_{\delta<\omega_{1}}$ of
ordinals and we prove that
$r_{\alpha_{\delta}}(\bar{x})\coloneqq\sup_{p\in\mathrm{Sp}(\alpha_{\delta})(\bar{x})}\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha_{\delta}}}^{\mathbf{M}}(\bar{x})=p\\})$
is a strictly decreasing sequence of reals of length $\omega_{1}$, which cannot
exist.
Assume $\delta=\gamma+1$ and that we have constructed $\alpha_{\gamma}$. Then
$r_{\alpha_{\gamma}}$ is realized by a finite number of types
$p_{1},\ldots,p_{k}\in\mathrm{Sp}(\alpha_{\gamma})(\bar{x})$. Let us take
$\beta>\alpha_{\gamma}$ such that for all $i\leq k$ there is $q_{i}\in
S^{n}_{F_{\beta}}(\mu)$ whose restriction to $F_{\alpha_{\gamma}}$ is $p_{i}$
and satisfies
$\displaystyle
0<\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\beta}}^{\mathbf{M}}(\bar{x})=q_{i}\\})<\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha_{\gamma}}}^{\mathbf{M}}(\bar{x})=p_{i}\\}).$
We set $\alpha_{\delta}=\beta$. We have $r_{\beta}<r_{\alpha_{\gamma}}$:
indeed, if $q$ realizes $r_{\beta}$, let $p$ be its restriction to
$F_{\alpha_{\gamma}}$. If $p\in\\{p_{1},\ldots,p_{k}\\}$ then we are done by
definition of $\beta$ and if not, then
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\beta}}^{\mathbf{M}}(\bar{x})=q\\})\leq\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha_{\gamma}}}^{\mathbf{M}}(\bar{x})=p\\})<r_{\alpha_{\gamma}}$.
If $\delta$ is a limit ordinal, then set
$\alpha_{\delta}=\sup_{\gamma<\delta}\alpha_{\gamma}$. If $\gamma<\delta$ then
$\gamma+1<\delta$ and $r_{\alpha_{\delta}}\leq r_{\alpha_{\gamma+1}}<r_{\alpha_{\gamma}}$.
This is enough to conclude that there must be $\alpha(\bar{x})$ such that
$\mathrm{Sp}(\alpha(\bar{x}))(\bar{x})$ is empty. Take
$\alpha=\sup_{\bar{x}\in\mathbb{N}^{<\omega}}\alpha(\bar{x})$, then
$F_{\alpha}$ is as wanted. ∎
Let us show that $F\coloneqq F_{\alpha}$ is a countable fragment satisfying
the conclusion of the theorem. Fix $p\in S^{n}_{F}(\mu)$, $q\in
S^{n+1}_{F}(\mu)$ and $(\bar{x},z)\in\mathbb{N}^{n+1}$ satisfying
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p\text{
and }\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x},z)=q\\})>0$. Let us define
$\psi(v_{1},\dots,v_{n})\coloneqq\exists v,\bigwedge_{\phi\in
q}\phi(v_{1},\dots,v_{n},v).$
Since $q\in S^{n+1}_{F_{\alpha}}(\mu)$, the formula $\bigwedge_{\phi\in
q}\phi(v_{1},\dots,v_{n+1})$ belongs to $F_{\alpha+1}$. Therefore, $\psi\in
F_{\alpha+1}$. Since $\mu$ has no fixed point, we can fix, by Lemma 4.6, a type $r\in
S^{n}_{F_{\alpha+1}}$ which is an atom of the probability measure
$\mathrm{tp}_{F_{\alpha+1}}(\bar{x})_{*}\tilde{\mu}$, where $\tilde{\mu}$ is
the conditional measure
$\mu(\cdot\mid\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p)$. Then
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha+1}}^{\mathbf{M}}(\bar{x})=r\\})>0$
and the restriction of $r$ to the fragment $F_{\alpha}$ is exactly $p$. By
equation (3), we therefore get that
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p\\})=\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha+1}}^{\mathbf{M}}(\bar{x})=r\\})$.
Let us prove that $\psi\in r$. For any $\mathbf{M}$ such that
$\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p$ and
$\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x},z)=q$ (which is a set of
positive measure by assumption), we have that
$\mathbf{M}\models\psi(\bar{x})$. This shows that $\psi\in r$ as otherwise we
would have
$\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p\\})>\mu(\\{\mathbf{M}\colon\mathrm{tp}_{F_{\alpha+1}}^{\mathbf{M}}(\bar{x})=r\\})$.
To conclude, we obtain that $\mu$-a.s., if
$\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x})=p$, then there exists
$z^{\prime}\in\mathbb{N}$ such that
$\mathrm{tp}_{F_{\alpha}}^{\mathbf{M}}(\bar{x},z^{\prime})=q$. The same
conclusion holds for any $\bar{y}\in\mathbb{N}^{n}$ by $G$-invariance of the
measure $\mu$. ∎
We can now prove the main theorem of this section. An invariant random
expansion $\mu\in\mathrm{IRE}(G)$ is concentrated on an orbit if there exists
an orbit $O$ of $G\curvearrowright\mathrm{Struc}_{\mathcal{L}}^{G}$ such that
$\mu(O)=1$. This definition is legitimate as orbits of Borel actions are
indeed Borel, see [26, Thm. 15.14]. The following lemma will be useful to
prove that p.m.p. actions are essentially free.
###### Lemma 4.10. —
Let $G$ be a Polish group and $G\curvearrowright(X,\mu)$ be a p.m.p. ergodic
action. If for $\mu\otimes\mu$-a.e. $(x,y)\in X\times X$, $x$ and $y$ are in
the same orbit, then there is a conull orbit of the action.
###### Proof.
If we denote by $E$ the set of $(x,y)$ such that $x\in G\cdot y$, we have
$\displaystyle 1=\mu\otimes\mu(E)$
$\displaystyle=\int_{X}\left(\int_{X}\mathds{1}_{G\cdot
x}(y)d\mu(y)\right)d\mu(x)$ $\displaystyle=\int_{X}\mu(G\cdot x)d\mu(x).$
By ergodicity of $\mu$, for a.e. $x\in X$, the value of $\mu(G\cdot x)$ is
either $0$ or $1$. Therefore, there exists an orbit $O$ of the action
$G\curvearrowright X$ such that $\mu(O)=1$. ∎
###### Theorem 4.11. —
Let $G$ be a dynamically de Finetti group. Let $\mu\in\mathrm{IRE}(G)$ be
ergodic. If $\mu$ has no fixed point, then $\mu$ is concentrated on an orbit.
###### Proof.
For $\mu\otimes\mu$-a.e.
$\mathbf{M},\mathbf{N}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, we will construct
an element $g\in G$ such that $g\cdot\mathbf{M}=\mathbf{N}$, i.e., such that
for all $\bar{x}\in\mathbb{N}^{<\omega}$, we have
$\mathrm{qftp}^{\mathbf{M}}(\bar{x})=\mathrm{qftp}^{\mathbf{N}}(g(\bar{x})).$
This will imply that $\mu$ is concentrated on an orbit by Lemma 4.10.
Let us use Theorem 4.8 to build $g$ via a back-and-forth. We use the notation
of Lemma 3.4 and the discussion following it. Take $(x_{i})$ and $(y_{j})$ two
enumerations of $\mathbb{N}$. Let $F$ be a fragment given by Theorem 4.8.
We start our back-and-forth by setting $A_{0}=B_{0}=\emptyset$. Assume that
$A_{n}$ and $B_{n}$ have been built, each containing the first $n$ elements of
each enumeration respectively. Let us build $A_{n+1}$ and $B_{n+1}$. Let $i$
be the smallest index such that $x_{i}\notin A_{n}$ and $j$ the smallest index
such that $y_{j}\notin B_{n}$ (by construction, $i>n$ and $j>n$). By Lemma 4.6
and Theorem 4.8, $\mu\otimes\mu$-a.s. there is $y\in\mathbb{N}$ such that
$\mathrm{tp}_{F}^{\mathbf{M}}(A_{n},x_{i})=\mathrm{tp}_{F}^{\mathbf{N}}(B_{n},y),$
implying that
$\mathrm{qftp}^{\mathbf{M}}(A_{n},x_{i})=\mathrm{qftp}^{\mathbf{N}}(B_{n},y).$
Similarly, there is $\mu\otimes\mu$-a.s. $x\in\mathbb{N}$ such that
$\mathrm{tp}_{F}^{\mathbf{M}}(A_{n},x_{i},x)=\mathrm{tp}_{F}^{\mathbf{N}}(B_{n},y,y_{j}).$
The a.s. existence of $x$ is also a consequence of Theorem 4.8. We set
$A_{n+1}\coloneqq A_{n}\cup\\{x_{i},x\\}$, $B_{n+1}\coloneqq
B_{n}\cup\\{y,y_{j}\\}$ and $g_{n+1}(x_{i})\coloneqq y$, $g_{n+1}(x)\coloneqq
y_{j}$ and $g_{n+1}(a)=g_{n}(a)$ for any $a\in A_{n}$. The way we constructed
$g_{n}$ using an enumeration ensures that it converges in the topology of
pointwise convergence to an element of $G$. Moreover, its limit $g$ satisfies a.s.
$\mathrm{qftp}^{\mathbf{M}}(\bar{x})=\mathrm{qftp}^{\mathbf{N}}(g(\bar{x})).\qed$
## 5 Rigidity for p.m.p. actions of dynamically de Finetti groups
The aim of this section is to give a proof of Theorem 1.1. Let us first recall
the notions of essential freeness and essential transitivity for p.m.p.
actions of Polish groups. The free part of a p.m.p. action
$G\curvearrowright(X,\mu)$ of a Polish group $G$ is the set $\\{x\in
X\colon\forall g\in G\setminus\\{1_{G}\\},g\cdot x\neq x\\}$. This is a
$\mu$-measurable $G$-invariant set (see Lemma 6.4 (ii)). We say that
$G\curvearrowright(X,\mu)$ is
* •
essentially free if the free part is a conull set,
* •
essentially transitive if there exists a conull set on which the action is
transitive,
* •
properly ergodic if it is ergodic and every orbit has measure $0$.
### 5.1 Essentially free and essentially transitive actions
We will provide concrete examples both of essentially free and of essentially
transitive p.m.p. ergodic actions in Lemma 5.5. Before that, let us prove that
if $G$ is Polish non-compact, p.m.p. ergodic actions cannot be both.
###### Lemma 5.1. —
Let $G$ be a Polish non-compact group. If $G\curvearrowright(X,\mu)$ is a
p.m.p. essentially transitive action, then it is not essentially free.
###### Proof.
By contradiction, assume that $G\curvearrowright(X,\mu)$ is essentially free
and essentially transitive. Thus, there exists a $G$-invariant conull set
$A\subseteq X$ on which the action is free and transitive. In other words,
there exists a Borel probability measure $m$ on $G$ which is invariant by left
translation, such that $G\curvearrowright(G,m)$ is measurably isomorphic to
$G\curvearrowright(X,\mu)$. This shows that $G$ has a probability Haar
measure. Thus $G$ is compact and this finishes the proof. ∎
###### Corollary 5.2. —
Any p.m.p. ergodic essentially free action of a Polish non-compact group is
properly ergodic.
In the next section, we will prove that apart from $S_{\infty}$, the converse
of Corollary 5.2 holds for any dynamically de Finetti group. The following
lemma will be useful to characterize proper subgroups of $S_{\infty}$. A
permutation $\tau\in S_{\infty}$ is a transposition if $\tau$ is a $2$-cycle.
###### Lemma 5.3. —
Let $G\leq S_{\infty}$ be a transitive, closed subgroup, which has no
algebraicity and weakly eliminates imaginaries. If $G$ contains a
transposition, then $G=S_{\infty}$.
###### Proof.
We define a $G$-invariant equivalence relation $E$ on $\mathbb{N}$ as follows:
$xEy\Leftrightarrow x=y\text{ or the transposition which exchanges }x\text{ and
}y\text{ belongs to }G.$
Let us prove that $E=\mathbb{N}\times\mathbb{N}$. By assumption,
$E\neq\\{(x,x)\colon x\in\mathbb{N}\\}$, so take $x\in\mathbb{N}$ whose
$E$-class $[x]_{E}$ satisfies $\lvert[x]_{E}\rvert\geq 2$. Let $V$ be the
stabilizer of $[x]_{E}$, that is, $V\coloneqq\\{g\in G\colon
g\cdot[x]_{E}=[x]_{E}\\}$. Then for any $y\in[x]_{E}$, we have $G_{y}\leq V$.
By Lemma 2.5, we get that $G=V$ and therefore $[x]_{E}=G\cdot x$. Since $G$ is
transitive, this shows that $E=\mathbb{N}\times\mathbb{N}$. ∎
From the previous lemma, we get the following corollary.
###### Corollary 5.4. —
Let $G\lneq S_{\infty}$ be a transitive, proper, closed subgroup. If $G$ has
no algebraicity and weakly eliminates imaginaries, then for all
$x,y\in\mathbb{N}$ distinct, there exist infinitely many disjoint tuples
$\bar{z}$, all disjoint from $x$ and $y$, such that $(\bar{z},x)$ and
$(\bar{z},y)$ lie in different $G$-orbits.
###### Proof.
Assume that there is no such tuple $\bar{z}$. Fix an enumeration
$z_{0},z_{1},\dots$ of $\mathbb{N}\setminus\\{x,y\\}$. Then there exists a
sequence $(g_{n})_{n\geq 0}$ of elements in $G$ such that $g_{n}(z_{i})=z_{i}$
for all $i\leq n$ and $g_{n}(x)=y$. The sequence $(g_{n})_{n\geq 0}$ converges
to the transposition that exchanges $x$ and $y$. But $G$ is a proper subgroup
of $S_{\infty}$, so $G$ contains no transposition by Lemma 5.3. This yields a
contradiction. So there exists such a tuple $\bar{z}$. But $G$ has no
algebraicity, so there exist infinitely many disjoint such tuples by
Neumann’s lemma. ∎
We can now provide examples both of essentially free and of essentially
transitive p.m.p. ergodic actions.
###### Lemma 5.5. —
Let $G\lneq S_{\infty}$ be a proper, closed subgroup, which has no
algebraicity and weak elimination of imaginaries. Let $(A,\kappa)$ be a
standard probability space. Then the p.m.p. action
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is ergodic. If moreover
$(A,\kappa)$ is purely atomic, then
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially transitive.
Else, $G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially free.
###### Proof.
The ergodicity of $G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is proved
verbatim in [29, Prop. 2.1], the only assumption needed is that the orbits of
$G\curvearrowright\mathbb{N}$ are infinite, which is weaker than having no
algebraicity. Assume that $(A,\kappa)$ is purely atomic and let us prove that
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially transitive.
Let $\mu=\kappa^{\otimes\mathbb{N}}$. In order to prove that
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially transitive,
we will prove that for $\mu\otimes\mu$-a.e.
$((a_{n})_{n\in\mathbb{N}},(b_{n})_{n\in\mathbb{N}})\in A^{\mathbb{N}}\times
A^{\mathbb{N}}$, there exists a random element $g\in G$ such that
$g\cdot(a_{n})_{n\in\mathbb{N}}=(b_{n})_{n\in\mathbb{N}}$. For this we run a
back-and-forth argument.
Fix $x_{1},x_{2},\dots$ and $y_{1},y_{2},\dots$ two deterministic enumerations
of $\mathbb{N}$. We will inductively construct $g$, ensuring that, at the
$n$-th step, its domain contains $x_{1},\ldots,x_{n}$ and that its image
contains $y_{1},\ldots,y_{n}$. To initiate the back-and-forth we define $g$ on
the empty set and define its image as the empty set. Let us now assume that we
have defined $g$ as wanted with domain $A_{n}$ containing $x_{1},\ldots,x_{n}$
and image $B_{n}$ containing $y_{1},\ldots,y_{n}$. Let us define $g(x_{n+1})$
and $g^{-1}(y_{n+1})$. If $x_{n+1}\in A_{n}$, $g(x_{n+1})$ is already defined.
Otherwise, since $G$ has no algebraicity, the $G$-orbit of $x_{n+1}$ is
infinite. Therefore, for $\mu\otimes\mu$-a.e.
$((a_{n})_{n\in\mathbb{N}},(b_{n})_{n\in\mathbb{N}})\in A^{\mathbb{N}}\times
A^{\mathbb{N}}$, there exists $\tilde{x}_{n+1}$ in the $G$-orbit of $x_{n+1}$
such that $a_{x_{n+1}}=b_{\tilde{x}_{n+1}}$. Set $g(x_{n+1})=\tilde{x}_{n+1}$.
If $y_{n+1}\in B_{n}$, then $g^{-1}(y_{n+1})$ is already defined. Otherwise,
by the same argument for the existence of $\tilde{x}_{n+1}$,
$\mu\otimes\mu$-a.s. there exists $\tilde{y}_{n+1}$ such that
$a_{\tilde{y}_{n+1}}=b_{y_{n+1}}$ and we set $g(\tilde{y}_{n+1})=y_{n+1}$.
This construction provides for $\mu\otimes\mu$-a.e. couple of sequences
$(a_{n})_{n\in\mathbb{N}}$ and $(b_{n})_{n\in\mathbb{N}}$ an element $g$,
which belongs to $G$ as a limit of elements in $G$, such that
$g\cdot(a_{n})=(b_{n})$. This shows that
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially transitive by
Lemma 4.10.
We now tackle the case when $(A,\kappa)$ is not purely atomic. Let $O$ be a
measurable set such that $\kappa(O)>0$ and $\kappa(\\{x\\})=0$ for all $x\in
O$. For $\mu$-a.e. $a\in A^{\mathbb{N}}$, infinitely many coordinates of $a$ are in $O$.
Let us denote by $a_{O}$ the set of those coordinates. Since a.s. for any
$i,j\in a_{O}$, $a_{i}\neq a_{j}$, any $g\in G$ such that $g\cdot a=a$
stabilizes $a_{O}$ pointwise. However, for any $x,y\in\mathbb{N}$ distinct, by
Corollary 5.4, there must be $\bar{z}$ contained in $a_{O}$ such that
$(\bar{z},x)$ and $(\bar{z},y)$ are in different $G$-orbits. Therefore, no
$g\in G\setminus\\{1_{G}\\}$ satisfying $g\cdot a=a$ can send $x$ to $y$ for
any distinct $x,y\in\mathbb{N}$. Therefore
$G\curvearrowright(A,\kappa)^{\otimes\mathbb{N}}$ is essentially free. ∎
### 5.2 The proof of the main theorem
Towards proving Theorem 1.1, we first prove a version of it for IREs.
###### Theorem 5.6. —
Let $G\lneq S_{\infty}$ be a proper, transitive, closed subgroup. Let
$\mathcal{L}$ be a countable relational language which contains the canonical
language $\mathcal{L}_{G}$. If $G$ is dynamically de Finetti, then for any
$\mu\in\mathrm{IRE}_{\mathcal{L}}(G)$ ergodic, either
$\mathrm{Aut}(\mathbf{M})=\\{1\\}$ for $\mu$-a.e.
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, or $\mu$ is concentrated on
an orbit.
###### Proof.
Assume that $\mu$ is not concentrated on an orbit. By Theorem 4.11, for
$\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, the set
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$ is non-empty. Let us prove the
following claim.
###### Claim. —
For $\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, we have
$G_{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))}=\\{1_{G}\\}$.
###### Proof.
Fix $x,y\in\mathbb{N}$ distinct. The set
$\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$ is a $G$-invariant ergodic random
subset of $\mathbb{N}$. By Theorem 2.4, its law is therefore i.i.d. since $G$
is dynamically de Finetti. Therefore, we can use Corollary 5.4 to get for
$\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$ a tuple
$\bar{z}\in\mathbb{N}^{<\omega}$ disjoint from $x$ and $y$ and which is
contained in $\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))$, such that $(\bar{z},x)$
and $(\bar{z},y)$ lie in different $G$-orbits. This shows that $\mu$-a.s.,
$x$ and $y$ lie in different
$G_{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))}$-orbits, which proves the claim. ∎
So for $\mu$-a.e. $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$, we obtain
that $\mathrm{Aut}(\mathbf{M})$ is a subgroup of
$G_{\mathrm{Fix}(\mathrm{Aut}(\mathbf{M}))}$, which is trivial by the above
claim. Therefore, $\mathrm{Aut}(\mathbf{M})=\\{1_{G}\\}$, $\mu$-a.s., which
concludes the proof. ∎
We are now ready to prove our main result.
###### Theorem 5.7. —
Let $G\lneq S_{\infty}$ be a proper, transitive, closed subgroup. If $G$ is
dynamically de Finetti, then any p.m.p. ergodic action of $G$ is either
essentially free or essentially transitive.
###### Proof.
Let $G\curvearrowright(X,\nu)$ be a p.m.p. ergodic action which is not
essentially transitive. Fix $\mathcal{L}$ a countable relational language
which contains the canonical language $\mathcal{L}_{G}$ and such that
$\mathcal{L}\setminus\mathcal{L}_{G}$ contains relations of arbitrarily high
arity. By [11, Thm. 2.7.4], the relativized logic action
$G\curvearrowright\mathrm{Struc}_{\mathcal{L}}^{G}$ is Borel-universal. That
is, every Borel $G$-action can be Borel-embedded in
$\mathrm{Struc}_{\mathcal{L}}^{G}$. In particular, this implies that there
exists a $G$-invariant Borel probability measure $\mu$ on
$\mathrm{Struc}_{\mathcal{L}}^{G}$ such that the p.m.p. action
$G\curvearrowright(X,\nu)$ is measurably isomorphic to
$G\curvearrowright(\mathrm{Struc}_{\mathcal{L}}^{G},\mu)$. Thus, $\mu$ is an
ergodic IRE of $G$, which is not concentrated on an orbit. By Theorem 5.6, we
obtain that $\mathrm{Aut}(\mathbf{M})=\\{1\\}$ for $\mu$-a.e.
$\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$. Since the automorphism group
of $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}^{G}$ coincides with the
stabilizer of $\mathbf{M}$ for the relativized logic action, this exactly
means that $\mu$ is essentially free. Therefore, $G\curvearrowright(X,\nu)$ is
essentially free. ∎
###### Remark 5.8. —
The group $S_{\infty}$ admits p.m.p. properly ergodic actions that are not
essentially free. In particular, they are neither essentially free nor
essentially transitive. An example can be constructed as follows. Fix an
i.i.d. sequence $(X_{n})_{n\in\mathbb{N}}$ of uniform random variables on
$[0,1]$ and an i.i.d. sequence $(Y_{n})_{n\in\mathbb{N}}$ of $\mathrm{Ber}(p)$
random variables. Let $A\subseteq\mathbb{N}$ be a random
$S_{\infty}$-invariant non-empty subset, independent of
$(X_{n})_{n\in\mathbb{N}}$ and $(Y_{n})_{n\in\mathbb{N}}$. Then define the
random variables $(Z_{n})_{n\in\mathbb{N}}$ by setting $Z_{n}=X_{n}$ if $n\in
A$ and $Z_{n}=Y_{n}$ if $n\notin A$. If $\mu$ denotes the law of
$(Z_{n})_{n\in\mathbb{N}}$, then
$S_{\infty}\curvearrowright([0,1]^{\mathbb{N}},\mu)$ is properly ergodic but
not essentially free.
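To make this construction concrete, here is a minimal Python sketch (our own illustration, not part of the remark) that samples a finite prefix of $(Z_{n})$. In particular, the law chosen below for the invariant subset $A$, an i.i.d. Bernoulli subset with a random inclusion probability resampled until the visible prefix is non-empty, is only one possible exchangeable choice; the remark does not prescribe it.

```python
import random

def sample_Z_prefix(N=20, p=0.3, seed=None):
    """Sample Z_0, ..., Z_{N-1} from the construction of Remark 5.8.

    X_n ~ Uniform[0,1] i.i.d., Y_n ~ Ber(p) i.i.d., and A is an
    exchangeable (hence S_infinity-invariant in law) random subset of N,
    here taken to be i.i.d. Bernoulli(q) with q itself random -- one
    illustrative choice -- resampled until the visible prefix is non-empty.
    """
    rng = random.Random(seed)
    X = [rng.random() for _ in range(N)]                  # uniform marks
    Y = [1 if rng.random() < p else 0 for _ in range(N)]  # Bernoulli marks
    while True:
        q = rng.random()                                  # mixing variable
        A = {n for n in range(N) if rng.random() < q}     # invariant subset
        if A:
            break
    return [X[n] if n in A else Y[n] for n in range(N)]

print(sample_Z_prefix(10, seed=0))
```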
## 6 Invariant Random Subgroups of Polish groups
### 6.1 Definition
In this section we define the notion of invariant random subgroups for Polish
groups. Let $G$ be a Polish group. We denote by $\mathrm{Sub}(G)$ the space of
closed subgroups of $G$. The Effros $\sigma$-algebra is the $\sigma$-algebra
on $\mathrm{Sub}(G)$ generated by the sets
$\\{H\in\mathrm{Sub}(G)\colon H\cap U\neq\emptyset\\},$
where $U$ varies over open subsets of $G$. The following lemma is probably
well-known but we were not able to locate a proof in the literature.
###### Lemma 6.1. —
If $G$ is a Polish group, then $\mathrm{Sub}(G)$ equipped with the Effros
$\sigma$-algebra is a standard Borel space.
###### Proof.
If $X$ is a standard Borel space and $\mathcal{F}(X)$ denotes the space of
closed subsets of $X$, then $\mathcal{F}(X)$ is a standard Borel space when
equipped with the $\sigma$-algebra generated by the sets
$\\{F\in\mathcal{F}(X)\colon F\cap U\neq\emptyset\\}$ where $U$ varies over
open subsets of $X$ [18]. Therefore, in our case $\mathcal{F}(G)$ is standard
Borel. So it remains to show that $\mathrm{Sub}(G)$ is Borel in
$\mathcal{F}(G)$. By the selection theorem of Kuratowski and Ryll-Nardzewski
[26, Thm. 12.13], there exists a countable sequence of Borel maps
$d_{i}:\mathcal{F}(G)\to G$ with $i\in I$ such that for all nonempty
$F\in\mathcal{F}(G)$, the set $\\{d_{i}(F)\colon i\in I\\}$ is dense in $F$.
But a closed subset $F\in\mathcal{F}(G)$ belongs to $\mathrm{Sub}(G)$ if and
only if $1_{G}\in F$ and $d_{i}(F)d_{j}(F)^{-1}\in F$ for all $i,j\in I$. This
implies that $\mathrm{Sub}(G)$ is Borel in $\mathcal{F}(G)$ and thus
$\mathrm{Sub}(G)$ is a standard Borel space. ∎
###### Remark 6.2. —
If $G$ is a Polish locally compact group, then $\mathrm{Sub}(G)$ is usually
endowed with the Chabauty topology, which is the topology generated by the
sets
$\\{H\in\mathrm{Sub}(G)\colon H\cap U\neq\emptyset,H\cap K=\emptyset\\}$
where $U$ varies over open subsets of $G$ and $K$ over compact subsets of $G$.
In this case, $\mathrm{Sub}(G)$ is a compact Hausdorff space and its Borel
$\sigma$-algebra is the Effros $\sigma$-algebra. However, for Polish groups
that are not locally compact, the Chabauty topology is not Hausdorff in
general (one can adapt the proof of [13, Thm. 4.4.12] to show that it indeed
fails to be Hausdorff for many Polish groups, including $S_{\infty}$).
The $G$-action by conjugation on $\mathrm{Sub}(G)$ is Borel and we are
interested in the probability measures invariant under this action.
###### Definition 6.3. —
Let $G$ be a Polish group. An Invariant Random Subgroup of $G$ is a Borel
probability measure on $\mathrm{Sub}(G)$ that is invariant by conjugation.
We denote by $\mathrm{IRS}(G)=\mathrm{Prob}(\mathrm{Sub}(G))^{G}$ the space of
invariant random subgroups of $G$. This is a standard Borel space equipped
with the $\sigma$-algebra generated by the maps $\mu\mapsto\mu(A)$ with $A$
varying over Borel subsets of $\mathrm{Sub}(G)$ [26, Thm. 17.23 and 17.24]. We
say that $\nu\in\mathrm{IRS}(G)$ is concentrated on a conjugacy class if there
exists an orbit $O$ of the $G$-action by conjugation on $\mathrm{Sub}(G)$ such
that $\nu(O)=1$. Recall that orbits of Borel actions are indeed Borel [26,
Thm. 15.14] so this definition makes sense.
We now explain how to construct IRSs. Let $G$ be a Polish group and let
$G\curvearrowright(X,\mu)$ be a p.m.p. action of $G$. Recall that for us, a
p.m.p. action of a Polish group is a Borel action on some standard Borel space
with a Borel invariant probability measure. For $x\in X$, let
$\mathrm{Stab}(x)\coloneqq\\{g\in G\colon g\cdot x=x\\}$ denote the
stabilizer of $x$. We will prove that the law of the stabilizer of a
$\mu$-random point is indeed an IRS of $G$. For this, we need the following
lemma.
###### Lemma 6.4. —
Let $G$ be a Polish group and $G\curvearrowright(X,\mu)$ a p.m.p. action. Then
1. (i)
for all $x\in X$, $\mathrm{Stab}(x)$ is a closed subgroup,
2. (ii)
the map $\mathrm{Stab}:x\in X\mapsto\mathrm{Stab}(x)\in\mathrm{Sub}(G)$ is
$\mu$-measurable.
###### Proof.
By [11, Thm. 5.2.1], there exists a Polish topology on $X$ whose Borel
$\sigma$-algebra is that of $X$, such that the action $G\curvearrowright X$ is
continuous. Let us fix such a topology. The proof of (i) is obvious:
stabilizers are closed because the action is continuous. Let us prove (ii).
Let $U\subseteq G$ be open and let $B_{U}\coloneqq\\{H\in\mathrm{Sub}(G)\colon
H\cap U\neq\varnothing\\}$. Let us prove that $\mathrm{Stab}^{-1}(B_{U})$ is
analytic (the continuous image of a Borel set in a Polish space). Then [26,
Theorem 21.10] will allow us to conclude that $\mathrm{Stab}$ is
$\mu$-measurable. First, we have
$\displaystyle\mathrm{Stab}^{-1}(B_{U})=\\{x\in X\colon\exists g\in U,\ g\cdot x=x\\}=\pi(B),$
where $B=\\{(x,g)\in X\times U\colon g\cdot x=x\\}$ and $\pi:X\times U\to X$
denotes the projection onto the first coordinate. So we need to prove that $B$
is Borel. But $B$ is the preimage under the continuous map $(x,g)\in X\times
U\to(x,g\cdot x)\in X\times X$ of the diagonal set $\\{(x,x)\colon x\in X\\}$,
which is Borel (because $X$ is Polish). ∎
Therefore, p.m.p. actions of Polish groups produce IRSs: if
$G\curvearrowright(X,\mu)$ is a p.m.p. action of a Polish group, then
$\mathrm{Stab}_{*}\mu$ is an IRS of $G$. In the next theorem, we prove that
for closed permutation groups, every IRS arises this way.
###### Theorem 6.5. —
Let $G$ be a closed subgroup of $S_{\infty}$ and let $\nu\in\mathrm{IRS}(G)$.
Then there exists a p.m.p. action $G\curvearrowright(X,\mu)$ such that
$\mathrm{Stab}_{*}\mu=\nu$.
###### Proof.
Fix $\nu\in\mathrm{IRS}(G)$ and let us prove that $\nu$ is a stabilizer IRS.
Let $X$ be the standard Borel space defined by
$X\coloneqq\mathrm{Sub}(G)\times[0,1]^{\mathbb{N}^{<\omega}}.$
Recall that $\mathbb{N}^{<\omega}$ stands for the disjoint union of
$\mathbb{N}^{n}$ for $n\geq 1$. The action of $G$ on $\mathbb{N}^{<\omega}$
induces a Borel action of $G$ on $[0,1]^{\mathbb{N}^{<\omega}}$. Let us now
construct a $G$-invariant probability measure $\mu$ on $X$ such that
$\mathrm{Stab}_{*}\mu=\nu$.
Given $H\in\mathrm{Sub}(G)$, a coloring of $H$ is a map
$c:\mathbb{N}^{<\omega}\to[0,1]$ which is constant on each orbit of the action
$H\curvearrowright\mathbb{N}^{<\omega}$. Let $H\in\mathrm{Sub}(G)$. Then there
exists a unique Borel probability measure $\lambda^{H}$ on
$[0,1]^{\mathbb{N}^{<\omega}}$ concentrated on colorings of $H$ such that if
$c$ is a random variable with law $\lambda^{H}$ and
$\bar{x_{1}},\dots,\bar{x_{n}}\in\mathbb{N}^{<\omega}$ are tuples whose
$H$-orbits are pairwise disjoint, then $c(\bar{x_{1}}),\dots,c(\bar{x_{n}})$
are i.i.d. uniform random variables. By uniqueness, we have that for all $g\in
G$, $g_{*}\lambda^{H}=\lambda^{gHg^{-1}}$. Therefore the probability measure
on $X$
$\mu=\int_{X}\lambda^{H}(c)d\nu(H)$
is $G$-invariant.
We will now prove that for $\mu$-a.e. $(H,c)\in X$, we have
$\mathrm{Stab}(H,c)=H$. Remark first that if $g\in\mathrm{Stab}(H,c)$ then
$g\in N_{G}(H)$. The following claim is what we need to conclude that
$\mathrm{Stab}_{*}\mu=\nu$.
###### Claim. —
Let $H\in\mathrm{Sub}(G)$ and let $g\in N_{G}(H)$ be such that any orbit of
the action $H\curvearrowright\mathbb{N}^{<\omega}$ is invariant by $g$. Then
$g\in H$.
###### Proof of the claim.
Fix an enumeration $x_{1},x_{2},\dots$ of $\mathbb{N}$. For all $n\geq 1$, the
$H$-orbit of $(x_{1},\dots,x_{n})$ is preserved by $g$. Since $g\in N_{G}(H)$,
this implies that
$H(x_{1},\dots,x_{n})=gH(x_{1},\dots,x_{n})=Hg(x_{1},\dots,x_{n}).$
Therefore there exists $h_{n}\in H$ such that
$h_{n}(x_{1},\dots,x_{n})=g(x_{1},\dots,x_{n})$. This means that $h_{n}\to g$
as $n\to+\infty$. Since $H$ is closed, we obtain that $g\in H$. ∎
∎
Versions of these results for Polish locally compact groups already appeared
in the literature. When $G$ is Polish locally compact, Lemma 6.4 (i) was
proved by Varadarajan [37]. Theorem 6.5 was proved for discrete groups by
Abért, Glasner and Virág [2] and for Polish locally compact groups in [1].
###### Remark 6.6. —
Theorem 6.5 is false in full generality for Polish groups, as there exist
Polish groups, such as $\mathrm{Aut}(X,\mu)$, which admit no non-trivial
p.m.p. actions [19], [20]. These groups therefore have no essentially free
p.m.p. actions. For such groups $G$, the IRS $\delta_{\\{1_{G}\\}}$ is not
realized. We thank Anush Tserunyan and Ronnie Chen for pointing us toward such
examples. Nonetheless, it would be interesting to understand which Polish
groups do realize their IRSs.
### 6.2 From IRS to IRE and vice versa
We will draw a connection between invariant random subgroups and invariant
random expansions for closed permutation groups.
#### From IRS to IRE.
Fix $G\leq S_{\infty}$ a closed subgroup. Let us recall the definition of the
canonical language. For all $n\geq 1$, let $J_{n}$ be the set of orbits of the
diagonal action $G\curvearrowright\mathbb{N}^{n}$ and let $J=\bigcup_{n\geq
1}J_{n}$. The canonical language associated with $G$ is
$\mathcal{L}_{G}\coloneqq(R_{j})_{j\in J}$ where $R_{j}$ is a relation symbol
of arity $n$ for all $j\in J_{n}$. We define a new language
$\mathcal{L}_{dyn}\coloneqq(T_{n})_{n\geq 1}\sqcup\mathcal{L}_{G}$ that we
call the dynamical language of $G$, where $T_{n}$ is a relation symbol of
arity $2n$ for each integer $n\geq 1$.
To any $H\in\mathrm{Sub}(G)$ we associate an expansion
$\mathbf{M}_{G}(H)\in\mathrm{Struc}_{\mathcal{L}_{dyn}}^{G}$ of the canonical
structure $\mathbf{M}_{G}$ in the language $\mathcal{L}_{dyn}$ as follows:
$\mathbf{M}_{G}(H)\coloneqq((\mathcal{R}_{H\curvearrowright\mathbb{N}^{n}})_{n\in\mathbb{N}},(R_{j}^{G})_{j\in
J}),$
where $R_{j}^{G}=j\subseteq\mathbb{N}^{n}$ for all $j\in J_{n}$ and where
$\mathcal{R}_{H\curvearrowright\mathbb{N}^{n}}\coloneqq\\{(\bar{x},\bar{y})\in\mathbb{N}^{n}\times\mathbb{N}^{n}\colon\bar{y}\in
H\bar{x}\\}$ is the orbit equivalence relation of the $H$-action on
$n$-tuples.
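For a concrete feel for this expansion, the sketch below (our own illustration) computes the relation $\mathcal{R}_{H\curvearrowright\mathbb{N}^{n}}$ restricted to a finite ground set, for a subgroup $H$ given by finitely many permutation generators; the finite truncation, the function name, and the breadth-first orbit enumeration are purely illustrative choices.

```python
from itertools import product

def orbit_relation(generators, m, n):
    """Return the set of pairs (x, y) of n-tuples over {0,...,m-1} with y in
    the H-orbit of x, where H is the group of permutations of {0,...,m-1}
    generated by `generators` (inverses are added below).  This is the orbit
    equivalence relation of H acting diagonally on n-tuples, truncated to a
    finite box for illustration."""
    gens = [tuple(g) for g in generators]
    gens += [tuple(sorted(range(m), key=lambda i: g[i])) for g in gens]  # inverses
    relation = set()
    for x in product(range(m), repeat=n):
        orbit, frontier = {x}, [x]
        while frontier:                       # closure of {x} under the generators
            t = frontier.pop()
            for g in gens:
                img = tuple(g[i] for i in t)
                if img not in orbit:
                    orbit.add(img)
                    frontier.append(img)
        relation.update((x, y) for y in orbit)
    return relation

# H = <(0 1)> acting on {0,1,2}: the pairs (0,2) and (1,2) lie in one H-orbit.
R = orbit_relation([[1, 0, 2]], m=3, n=2)
print(((0, 2), (1, 2)) in R)   # True
```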
###### Lemma 6.7. —
Let $G\leq S_{\infty}$ be a closed subgroup. The following hold.
1. (i)
The map
$H\in\mathrm{Sub}(G)\mapsto\mathbf{M}_{G}(H)\in\mathrm{Struc}_{\mathcal{L}_{\text{dyn}}}^{G}$
is Borel, $G$-equivariant and injective.
2. (ii)
For all $H\in\mathrm{Sub}(G)$, we have
$\mathrm{Aut}(\mathbf{M}_{G}(H))=N_{G}(H)$.
###### Proof.
We only show that the map is Borel, the rest being straightforward. For this,
it suffices to show that for all $n\geq 1$ and all
$\bar{x},\bar{y}\in\mathbb{N}^{n}$, the set
$\\{H\in\mathrm{Sub}(G)\colon(\bar{x},\bar{y})\in\mathcal{R}_{H\curvearrowright\mathbb{N}^{n}}\\}$
is Borel. But this is exactly $\\{H\in\mathrm{Sub}(G)\colon H\cap
U\neq\emptyset\\}$ where $U\subseteq G$ is the open subset consisting of the
elements $g\in G$ such that $g(\bar{x})=\bar{y}$. This last set belongs to the
Effros $\sigma$-algebra, therefore $H\mapsto\mathbf{M}_{G}(H)$ is Borel. ∎
To any $\nu\in\mathrm{IRS}(G)$ we can therefore associate an IRE of $G$ as the
law of the random expansion $\mathbf{M}_{G}(H)$ where $H$ is a $\nu$-random
closed subgroup of $G$.
###### Remark 6.8. —
This construction is in fact underlying in the proof of Theorem 6.5, in which
we implicitly use this IRE that we expand using colorings of each orbit.
#### From IRE to IRS.
Let $G$ be a closed subgroup of $S_{\infty}$ and fix a countable relational
language $\mathcal{L}$ which contains $\mathcal{L}_{G}$. Let
$\mu\in\mathrm{IRE}(G)$. If $\mathbf{M}$ is a $\mu$-random expansion, then the
law of $\mathrm{Aut}(\mathbf{M})$ is an IRS of $G$. It may happen that the IRS
obtained this way is trivial (that is equal to $\delta_{\\{1_{G}\\}}$) whereas
the IRE is a nontrivial object of interest. We give details of such an IRE in
the next example.
###### Example 6.9 (The kaleidoscope random graph). —
Let $\mathcal{L}=(R_{n})_{n\in\mathbb{N}}$ be the language consisting of
countably many binary relations. Denote by $\mathcal{P}_{2}(\mathbb{N})$ the
set of subsets $A\subseteq\mathbb{N}$ with $\lvert A\rvert=2$. Consider the
random element $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ obtained by first
picking an $S_{\infty}$-invariant random non-empty subset
$A_{\\{i,j\\}}\subseteq\mathbb{N}$, independently for each
$\\{i,j\\}\in\mathcal{P}_{2}(\mathbb{N})$ and then setting
$R_{n}^{\mathbf{M}}(i,j)=1$ if and only if $n\in A_{\\{i,j\\}}$. The law of
$\mathbf{M}$ is indeed an IRE of $S_{\infty}$. This IRE can be thought of as
the union of countably many random graphs on $\mathbb{N}$, each of which has
its edges labeled with a different color. The theory of such an IRE is studied
in [4, Ex. 3.2] and [6, §5.1]. This is an example of what they call a
properly ergodic structure, which is the main object of study in [4]. If $\mu$
denotes the law of the IRE $\mathbf{M}$, then the p.m.p. action
$S_{\infty}\curvearrowright(\mathrm{Struc}_{\mathcal{L}},\mu)$ is measurably
conjugate to
$S_{\infty}\curvearrowright([0,1],\mathrm{Leb})^{\otimes\mathcal{P}_{2}(\mathbb{N})}$
which is easily seen to be a properly ergodic p.m.p. action.
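As an illustration, the following sketch (our own, restricted to finitely many vertices and colors purely for display) samples the restriction of the kaleidoscope random graph to an initial segment of $\mathbb{N}$; the Bernoulli-mixture law used below for each $A_{\\{i,j\\}}$ is again just one choice of non-empty exchangeable random subset and is not prescribed by the example.

```python
import random
from itertools import combinations

def kaleidoscope_sample(num_vertices=5, num_colors=4, seed=0):
    """Sample a finite restriction of the kaleidoscope random graph: for every
    pair {i, j} of vertices, draw an independent non-empty exchangeable subset
    A_{i,j} of the colors {0,...,num_colors-1} and declare a color-n edge
    between i and j iff n is in A_{i,j}.  (The Bernoulli-mixture law used for
    A_{i,j} is an illustrative choice.)"""
    rng = random.Random(seed)
    edges = {}
    for i, j in combinations(range(num_vertices), 2):
        while True:
            q = rng.random()
            A = frozenset(n for n in range(num_colors) if rng.random() < q)
            if A:
                break
        edges[(i, j)] = A          # R_n(i, j) holds  <=>  n in edges[(i, j)]
    return edges

print(kaleidoscope_sample())
```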
### 6.3 Rigidity of ergodic IRSs of $S_{\infty}$
We have already proved in Theorem 5.7 that the p.m.p. ergodic actions of any
proper, transitive, closed subgroup $G\lneq S_{\infty}$ which is dynamically
de Finetti, are either essentially free or essentially transitive. In
particular, this implies that for any such group $G$, any ergodic
$\nu\in\mathrm{IRS}(G)$ is concentrated on a conjugacy class. On the other
hand, we have seen in Remark 5.8 that $S_{\infty}$ admits p.m.p. ergodic
actions that are neither essentially free nor essentially transitive. However,
we prove that such behavior never appears for ergodic IRSs of $S_{\infty}$.
We therefore obtain the following result.
###### Theorem 6.10. —
Let $G\leq S_{\infty}$ be a transitive closed subgroup. If $G$ is dynamically
de Finetti, then any ergodic $\nu\in\mathrm{IRS}(G)$ is concentrated on a
conjugacy class.
###### Proof.
As mentioned in the above discussion, if $G$ is a proper subgroup of
$S_{\infty}$, this is a consequence of Theorem 5.7, therefore the proof is
dedicated to the case when $G=S_{\infty}$. Let $\mathcal{L}_{dyn}$ be the
dynamical language of $S_{\infty}$. Let us define $\mu\in\mathrm{IRE}(G)$ in
the language $\mathcal{L}_{dyn}$ as the pushforward measure of $\nu$ by the
map $H\mapsto\mathbf{M}_{G}(H)$. Let us prove that $\mu$ has no fixed point.
First, for $\nu$-a.e. $H\in\mathrm{Sub}(G)$,
$\mathrm{Aut}(\mathbf{M}_{G}(H))=N_{S_{\infty}}(H)$ by Lemma 6.7 (ii). But for
all $H\in\mathrm{Sub}(G)$, the group $N_{S_{\infty}}(H)$ acts transitively on
$\mathrm{Fix}(H)$. Indeed, for any distinct $x,y\in\mathrm{Fix}(H)$, the
transposition that exchanges $x$ and $y$ belongs to $N_{S_{\infty}}(H)$. This
shows that $\mu$ has no fixed point. By Theorem 4.11, $\mu$ is therefore
concentrated on an orbit, and since the map $H\mapsto\mathbf{M}_{G}(H)$ is
injective and $G$-equivariant by Lemma 6.7, $\nu$ is concentrated on a
conjugacy class. This concludes the proof. ∎
## 7 Further discussions
#### On the existence of IREs.
The main result in the present paper (Theorem 5.7 and its corresponding
version for IRE in Theorem 5.6) suggests examining both essentially free and
essentially transitive p.m.p. actions of dynamically de Finetti groups.
Essentially free actions of $S_{\infty}$ have been analyzed in depth from a
model-theoretic perspective in [4] through the notion of properly
ergodic structures, but such a study for other groups is lacking. On the other
hand, Ackerman, Freer and Patel have obtained a great understanding of
essentially transitive p.m.p. actions of $S_{\infty}$. In their seminal paper
[8], they proved that for any countable relational language $\mathcal{L}$ and
any $\mathbf{M}\in\mathrm{Struc}_{\mathcal{L}}$ with no algebraicity, there
exists an IRE of $S_{\infty}$ supported on the orbit of $\mathbf{M}$. In
another paper with Kwiatkowska [5], they moreover characterize the cardinality
of the set of such measures. A natural generalization of these works would
then be:
###### Question 7.1. —
Is there a natural condition on an expansion $\mathbf{N}$ of the canonical
structure $\mathbf{M}_{G}$ associated with a given closed subgroup $G\leq
S_{\infty}$, such that there exists a $G$-IRE concentrated on the $G$-orbit of
$\mathbf{N}$?
This question has been answered in some special cases in [3] and [6]. The
following observations may help to answer this question, however they also
suggest to us that this problem might be difficult.
1. 1)
For any closed $G\leq S_{\infty}$ and any $S_{\infty}$-IRE $\mu$ in a language
$\mathcal{L}$ (disjoint from $\mathcal{L}_{G}$), one readily gets a $G$-IRE in
the language $\mathcal{L}\sqcup\mathcal{L}_{G}$. Indeed, take $\mathbf{M}$ a
random structure with law $\mu$ and define a $G$-IRE $\mathbf{N}$ by
$R^{\mathbf{N}}(\bar{x})\Leftrightarrow\left\\{\begin{array}[]{c}R^{\mathbf{M}}(\bar{x})\text{
whenever }R\in\mathcal{L}.\\\ R^{\mathbf{M}_{G}}(\bar{x})\text{ whenever
}R\in\mathcal{L}_{G}.\end{array}\right.$
2. 2)
There are groups that admit IREs not produced as in 1). This is the case for
the IRE of the automorphism group of the Fraïssé limit of 2-graphs considered
in Example 4.2 (iii).
3. 3)
There are expansions whose orbits cannot be the support of an IRE. We give
two examples.
1. i)
Any expansion of the generic poset into a linear order. The existence of such
an IRE would contradict the non-amenability of the automorphism group of the
generic poset.
2. ii)
The expansion $\mathbf{N}$ of $(\mathbb{Q},<)$ given by adding a unary
relation $R$ and for a fixed irrational $\alpha$, setting for all
$q\in\mathbb{Q}$, $R^{\mathbf{N}}(q)$ if and only if $q<\alpha$. If the orbit
of this expansion was the support of an ergodic
$\mathrm{Aut}(\mathbb{Q},<)$-IRE, we would get a non product
$\mathrm{Aut}(\mathbb{Q},<)$-invariant ergodic measure on
$\\{0,1\\}^{\mathbb{Q}}$ which does not exist by [24].
#### On the definition of dynamically de Finetti groups.
Let us discuss the different hypotheses in Definition 2.1. We classify them by
their natures:
1. a)
the model-theoretic hypotheses: no algebraicity and weak elimination of
imaginaries,
2. b)
the dynamical hypothesis: for any p.m.p. action $G\curvearrowright(X,\mu)$, we
have
$\mathcal{F}_{A}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}_{\mathcal{F}_{A\cap
B}}\mathcal{F}_{B}$ for all finite $A,B\subseteq\mathbb{N}$.
Both are heavily used in our proofs, however, our result might hold under
weaker hypotheses.
###### Question 7.2. —
Let $G\lneq S_{\infty}$ be a proper, transitive, closed subgroup satisfying
b). Are all its ergodic p.m.p. actions either essentially free or essentially
transitive?
One way to answer this question positively would be to prove that
b) implies a). The converse is most likely not true however, as we have strong
evidence, in an ongoing project of the first author and Perruchaud, of the
existence of a group satisfying a) and not b).
Moreover, Item b) closely resembles the notion of dissociation, which is
relevant in exchangeability theory and is related to the Aldous-Hoover-
Kallenberg representation theorem, see [25, Lem. 7.35]. We say that a p.m.p.
action is dissociated if
$\mathcal{F}_{A}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}\mathcal{F}_{B}$
for all finite disjoint $A,B\subseteq\mathbb{N}$. This name suggests that we
call strongly dissociated a p.m.p. action such that
$\mathcal{F}_{A}\mathrel{\reflectbox{\rotatebox[origin={c}]{90.0}{$\models$}}}_{\mathcal{F}_{A\cap
B}}\mathcal{F}_{B}$ for all finite $A,B\subseteq\mathbb{N}$. One can check
carefully the proofs in this paper and notice that we never used fully strong
dissociation for dynamically de Finetti groups. Instead we only use that every
p.m.p ergodic action is dissociated. We do not know whether dissociation of
every p.m.p. ergodic action is actually more general than strong dissociation
of every p.m.p. action. We preferred to use strong dissociation in our
definition as it more accurately describes the behavior of dynamically de
Finetti oligomorphic groups, see [24, Thm. 3.4].
## References
* [1] Miklós Abért et al. “On the growth of $L^{2}$-invariants for sequences of lattices in Lie groups” In _Ann. of Math. (2)_ 185.3, 2017, pp. 711–790 DOI: 10.4007/annals.2017.185.3.1
* [2] Miklós Abért, Yair Glasner and Bálint Virág “Kesten’s theorem for invariant random subgroups.” In _Duke Math. J._ 163.3, 2014, pp. 465–488 DOI: 10.1215/00127094-2410064
* [3] Nathanael Ackerman “Representations of $\mathrm{Aut}(M)$-Invariant Measures”, 2021 arXiv:1509.06170
* [4] Nathanael Ackerman, Cameron Freer, Alex Kruckman and Rehana Patel “Properly ergodic structures”, 2017 arXiv:1710.09336
* [5] Nathanael Ackerman, Cameron Freer, Aleksandra Kwiatkowska and Rehana Patel “A classification of orbits admitting a unique invariant measure” In _Ann. Pure Appl. Logic_ 168.1, 2017, pp. 19–36 DOI: 10.1016/j.apal.2016.08.003
* [6] Nathanael Ackerman, Cameron Freer, Jaroslav Nešetřil and Rehana Patel “Invariant measures via inverse limits of finite structures” In _Eur. J. Comb._ 52, 2016, pp. 248–289 DOI: 10.1016/j.ejc.2015.07.006
* [7] Nathanael Ackerman, Cameron Freer and Rehana Patel “Countable infinitary theories admitting an invariant measure”, 2017 arXiv:1710.06128
* [8] Nathanael Ackerman, Cameron Freer and Rehana Patel “Invariant measures concentrated on countable structures” In _Forum Math. Sigma_ 4 Cambridge University Press, 2016, pp. e17 DOI: 10.1017/fms.2016.15
* [9] Nathanael Ackerman, Cameron Freer and Rehana Patel “The Entropy Function of an Invariant Measure” In _Proceedings of the $14^{\text{th}}$ and $15^{\text{th}}$ Asian Logic Conferences_, pp. 3–34 DOI: 10.1142/9789813237551_0001
* [10] Uri Bader, Bruno Duchesne, Jean Lécureux and Phillip Wesolek “Amenable invariant random subgroups” In _Isr. J. Math._ 213, 2016, pp. 399–422 DOI: 10.1007/s11856-016-1324-7
* [11] Howard Becker and Alexander S. Kechris “The descriptive set theory of Polish group actions” 232, Lond. Math. Soc. Lect. Note Ser. Cambridge: Cambridge University Press, 1996
* [12] Oren Becker, Alexander Lubotzky and Andreas Thom “Stability and invariant random subgroups” In _Duke Math. J._ 168.12, 2019, pp. 2207–2234 DOI: 10.1215/00127094-2019-0024
* [13] Gerald Beer “Topologies on closed and closed convex sets” 268, Math. Appl., Dordr. Dordrecht: Kluwer Academic Publishers, 1993
* [14] Itaï Ben Yaacov, Julien Melleray and Todor Tsankov “Metrizable universal minimal flows of Polish groups have a comeagre orbit” In _Geom. Funct. Anal._ 27.1, 2017, pp. 67–77 DOI: 10.1007/s00039-017-0398-7
* [15] Lewis Bowen “Invariant random subgroups of the free group” In _Groups Geom. Dyn._ 9.3, 2015, pp. 891–916 DOI: 10.4171/GGD/331
* [16] Lewis Bowen, Rostislav Grigorchuk and Rostyslav Kravchenko “Invariant random subgroups of lamplighter groups” In _Isr. J. Math._ 207, 2015, pp. 763–782 DOI: 10.1007/s11856-015-1160-1
* [17] Peter J. Cameron “Oligomorphic permutation groups” 152, Lond. Math. Soc. Lect. Note Ser. Cambridge: Cambridge University Press, 1990
* [18] Edward G. Effros “Convergence of Closed Subsets in a Topological Space” In _Proc. Amer. Math. Soc._ 16.5 American Mathematical Society, 1965, pp. 929–931
* [19] Eli Glasner, Boris Tsirelson and Benjamin Weiss “The automorphism group of the Gaussian measure cannot act pointwise” In _Isr. J. Math._ 148, 2005, pp. 305–329 DOI: 10.1007/BF02775441
* [20] Eli Glasner and Benjamin Weiss “Spatial and non-spatial actions of Polish groups” In _Ergod. Theory Dyn. Syst._ 25.5, 2005, pp. 1521–1538 DOI: 10.1017/S0143385705000052
* [21] C. Henson “Countable homogeneous relational structures and $\aleph_{0}$-categorical theories” In _J. Symb. Log._ 37, 1972, pp. 494–500 DOI: 10.2307/2272734
* [22] Wilfrid Hodges “Model Theory”, Encyclopedia of Mathematics and its Applications Cambridge University Press, 1993 DOI: 10.1017/CBO9780511551574
* [23] Colin Jahel “Some progress on the unique ergodicity problem”, 2021
* [24] Colin Jahel and Todor Tsankov “Invariant measures on products and on the space of linear orders” In _J. Ec. Polytech. Math._ 9, 2022, pp. 155–176
* [25] Olav Kallenberg “Probabilistic symmetries and invariance principles.”, Probab. Appl. New York, NY: Springer, 2005 DOI: 10.1007/0-387-28861-9
* [26] Alexander S. Kechris “Classical descriptive set theory” In _Grad. Texts Math._ 156 Berlin: Springer-Verlag, 1995
* [27] Alexander S. Kechris “Dynamics of non-archimedean Polish groups” In _European Congress of Mathematics. Proceedings of the 6th ECM congress, Kraków, Poland, July 2–7 July, 2012_ Zürich: European Mathematical Society (EMS), 2013, pp. 375–397 DOI: 10.4171/120-1/22
* [28] Alexander S. Kechris, Vladimir G. Pestov and Stevo Todorcevic “Fraïssé limits, Ramsey theory, and topological dynamics of automorphism groups” In _Geom. Funct. Anal._ 15.1, 2005, pp. 106–189 DOI: 10.1007/s00039-005-0503-1
* [29] Alexander S. Kechris and Todor Tsankov “Amenable actions and almost invariant sets” In _Proc. Amer. Math. Soc._ 136.2, 2008, pp. 687–697 DOI: 10.1090/S0002-9939-07-09116-2
* [30] Matthew Kennedy “An intrinsic characterization of C*-simplicity” In _Ann. Sci. Éc. Norm. Supér. (4)_ 53.5, 2020, pp. 1105–1119 DOI: 10.24033/asens.2441
* [31] Julien Melleray, Lionel Nguyen Van Thé and Todor Tsankov “Polish groups with metrizable universal minimal flows” In _Int. Math. Res. Not._ , 2016, pp. 1285–1307 DOI: 10.1093/imrn/rnv171
* [32] Gianluca Paolini and Saharon Shelah “Reconstructing structures with the strong small index property up to bi-definability” In _Fund. Math._ 247.1, 2019, pp. 25–35 DOI: 10.4064/fm640-9-2018
* [33] Gianluca Paolini and Saharon Shelah “Research Trends in Contemporary Logic” College Publications, forthcoming
* [34] Robert R. Phelps “Lectures on Choquet’s theorem” 1757, Lect. Notes Math. Berlin: Springer, 2001 DOI: 10.1007/b76887
* [35] Bruno Poizat “A course in model theory. An introduction to contemporary mathematical logic. Transl. from the French by Moses Klein”, Universitext New York, NY: Springer, 2000
* [36] Todor Tsankov “Unitary representations of oligomorphic groups” In _Geom. Funct. Anal._ 22.2, 2012, pp. 528–555 DOI: 10.1007/s00039-012-0156-9
* [37] Veeravalli S. Varadarajan “Geometry of Quantum Theory” New York, NY: Springer-Verlag, 1985 DOI: 10.1007/978-1-4612-5082-9
* [38] Andy Zucker “Topological dynamics of automorphism groups, ultrafilter combinatorics, and the generic point problem” In _Trans. Amer. Math. Soc._ 368.9, 2016, pp. 6715–6740 DOI: 10.1090/tran6685
C. Jahel, Institut fur Algebra, Technische Universität Dresden, GERMANY
E-mail address<EMAIL_ADDRESS>
M. Joseph, Université Paris-Saclay, CNRS, Laboratoire de mathématiques
d’Orsay, 91405, Orsay, France
E-mail address<EMAIL_ADDRESS>
# Trajeglish: Learning the Language of Driving Scenarios
Jonah Philion1,2,3, Xue Bin Peng1,4, Sanja Fidler1,2,3
1NVIDIA, 2University of Toronto, 3Vector Institute, 4Simon Fraser University
{jphilion, japeng<EMAIL_ADDRESS>
###### Abstract
A longstanding challenge for self-driving development is simulating dynamic
driving scenarios seeded from recorded driving logs. In pursuit of this
functionality, we apply tools from discrete sequence modeling to model how
vehicles, pedestrians and cyclists interact in driving scenarios. Using a
simple data-driven tokenization scheme, we discretize trajectories to
centimeter-level resolution using a small vocabulary. We then model the multi-
agent sequence of motion tokens with a GPT-like encoder-decoder that is
autoregressive in time and takes into account intra-timestep interaction
between agents. Scenarios sampled from our model exhibit state-of-the-art
realism; our model tops the Waymo Sim Agents Benchmark, surpassing prior work
along the realism meta metric by 3.3% and along the interaction metric by
9.9%. We ablate our modeling choices in full autonomy and partial autonomy
settings, and show that the representations learned by our model can quickly
be adapted to improve performance on nuScenes. We additionally evaluate the
scalability of our model with respect to parameter count and dataset size, and
use density estimates from our model to quantify the saliency of context
length and intra-timestep interaction for the traffic modeling task.
## 1 Introduction
In the short term, self-driving vehicles will be deployed on roadways that are
largely populated by human drivers. For these early self-driving vehicles to
share the road safely, it is imperative that they become fluent in the ways
people interpret and respond to motion. A failure on the part of a self-
driving vehicle to predict the intentions of people can lead to overconfident
or overly cautious planning. A failure on the part of a self-driving vehicle
to communicate to people its own intentions can endanger other roadusers by
surprising them with uncommon maneuvers.
In this work, we propose an autoregressive model of the motion of roadusers
that can be used to simulate how humans might react if a self-driving system
were to choose a given sequence of actions. At test time, as visualized in
Fig. 1, the model functions as a policy, outputting a categorical distribution
over the set of possible states an agent might move to at each timestep.
Iteratively sampling actions from the model results in diverse, scene-
consistent multi-agent rollouts of arbitrary length. We call our approach
“Trajeglish” due to the fact that we model multi-agent trajectories as a
sequence of discrete tokens, similar to the representation used in language
modeling, and to make an analogy between how road users use vehicle motion to
communicate and how people use verbal languages, like English, to communicate.
A selection of samples from our model is visualized in Fig. 2. When generating
these samples, the model is prompted with only the initial position and
heading of the agents, in contrast to prior work that generally requires at
least one second of historical motion to begin sampling. Our model generates
diverse outcomes for each scenario, while maintaining the scene-consistency of
the trajectories. We encourage readers to consult our project page for videos
of scenarios sampled from our model in full control and partial control
settings, as well as longer rollouts of length 20 seconds.
Our main contributions are:
* •
A simple data-driven method for tokenizing trajectory data we call “k-disks”
that enables us to tokenize the Waymo Open Motion Dataset (WOMD) (Ettinger et al.,
2021) at an expected discretization error of 1 cm using a small vocabulary
size of 384.
* •
A transformer-based architecture for modeling sequences of motion tokens that
conditions on map information and one or more initial states per agent. Our
model outputs a distribution over actions for agents one at a time, which we
show is ideal for interactive applications.
* •
State-of-the-art quantitative and qualitative results when sampling rollouts
given real-world initializations both when the traffic model controls all
agents in the scene as well as when the model must interact with agents
outside its control.
We additionally evaluate the scalability of our model with respect to
parameter count and dataset size, visualize the representations learned by our
model, and use density estimates from our model to quantify the extent to
which intra-timestep dependence exists between agents, as well as to measure
the relative importance of long context lengths for traffic modeling (see Sec.
4.3).
### 1.1 Related Work
Our work builds heavily on recent work in imitative traffic modeling. The full
family of generative models have been applied to this problem, including VAEs
(Suo et al., 2021; Rempe et al., 2021), GANs (Igl et al., 2022), and diffusion
models (Zhong et al., 2022; Jiang et al., 2023). While these approaches
primarily focus on modeling the multi-agent joint distribution over future
trajectories, our focus in this work is additionally on building reactivity
into the generative model, for which the factorization provided by
autoregression is well-suited. For the structure of our encoder-decoder, we
draw inspiration from Scene Transformer (Ngiam et al., 2021) which also uses a
global coordinate frame to encode multi-agent interaction, but does not
tokenize data and instead trains their model with a masked regression
strategy. A limitation of regression is that it’s unclear if the Gaussian or
Laplace mixture distribution is flexible enough to represent the distribution
over the next state, whereas with tokenization, we know that all scenarios in
WOMD are within the scope of our model; the only challenge is learning the
correct logits. A comparison can also be made to the behavior cloning
baselines used in Symphony (Igl et al., 2022) and “Imitation Is Not Enough”
(Lu et al., 2023) which also predict a categorical distribution over future
states, except our models are trained directly on pre-tokenized trajectories
as input, and through the use of the transformer decoder, each embedding
receives supervision for predicting the next token as well as all future
tokens for all agents in the scene. In terms of tackling the problem of
modeling complicated continuous distributions by tokenizing and applying
autoregression, our work is most similar to Trajectory Transformer (Janner et
al., 2021) which applies a fixed-grid tokenization strategy to model state-
action sequences for RL. Finally, our work parallels MotionLM (Seff et al.,
2023) which is concurrent work that also uses discrete sequence modeling for
motion prediction, but targets 1- and 2-agent online interaction prediction
instead of $N$-agent offline closed-loop simulation.
Figure 1: Inputs and outputs At a given timestep, our model predicts a
distribution over a fixed set of $V$ states defined relative to an agent’s
current location and heading, and conditions on map information, actions from
all previous timesteps (green), and any actions that have already been chosen
by other agents within the current timestep (blue). We model motion of all
agents relevant to driving scenarios, including vehicles, pedestrians, and
cyclists.
Figure 2: Trajeglish Visualizations of samples from our model. Rollouts within
each row are given the same single-timestep initialization, outlined in black.
Future trajectories become lighter for timesteps farther into the future.
While some tracks overlap in the figure, they do not overlap when time is
taken into account; there are no collisions in these rollouts. Videos are
available on our project page.
## 2 Imitative Traffic Modeling
In this section, we show that the requirement that traffic models must
interact with all agents at each timestep of simulation, independent of the
method used to control each of the agents, imposes certain structural
constraints on how the multi-agent future trajectory distribution is factored
by imitative traffic models. Similar motivation is provided to justify the
conditions for submissions to the WOMD sim agents benchmark to be considered
valid closed-loop policies (Montali et al., 2023).
We are given an initial scene with $N$ agents, where a scene consists of map
information, the dimensions and object class for each of the $N$ agents, and
the location and heading for each of the agents for some number of timesteps
in the past. For convenience, we denote information about the scene provided
at initialization by $\bm{c}$. We denote the state of a vehicle $i$ at future
timestep $t$ by ${\bm{s}}_{t}^{i}\equiv(x^{i}_{t},y^{i}_{t},h^{i}_{t})$ where
$(x,y)$ is the center of the agent’s bounding box and $h$ is the heading. For
a scenario of length $T$ timesteps, the distribution of interest for traffic
modeling is given by
$\displaystyle
p(\bm{s}_{1}^{1},...,\bm{s}_{1}^{N},\bm{s}_{2}^{1},...,\bm{s}_{2}^{N},...,\bm{s}_{T}^{1},...,\bm{s}_{T}^{N}\mid\bm{c}).$
(1)
We refer to samples from this distribution as rollouts. In traffic modeling,
our goal is to sample rollouts under the restriction that at each timestep, a
black-box autonomous vehicle (AV) system chooses a state for a subset of the
agents. We refer to the agents controlled by the traffic model as “non-player
characters” or NPCs. This interaction model imposes the following
factorization of the joint likelihood expressed in Eq. 1
$\displaystyle\begin{split}&p(\bm{s}_{1}^{1},...,\bm{s}_{1}^{N},\bm{s}_{2}^{1},...,\bm{s}_{2}^{N},...,\bm{s}_{T}^{1},...,\bm{s}_{T}^{N}\mid\bm{c})\\\
&=\prod_{1\leq t\leq
T}p(\bm{s}_{t}^{1...N_{0}}|\bm{c},\bm{s}_{1...t-1})\underbrace{p(\bm{s}^{N_{0}+1...N}_{t}\mid\bm{c},\bm{s}_{1...t-1},\bm{s}_{t}^{1...N_{0}})}_{\text{NPCs}}\end{split}$
(2)
where
$\bm{s}_{1...t-1}\equiv\\{\bm{s}_{1}^{1},\bm{s}_{1}^{2},...,\bm{s}_{t-1}^{N}\\}$
is the set of all states for all agents prior to timestep $t$,
$\bm{s}^{1...N_{0}}_{t}\equiv\\{\bm{s}_{t}^{1},...,\bm{s}_{t}^{N_{0}}\\}$ is the
set of states for agents $1$ through $N_{0}$ at time $t$, and we arbitrarily
assigned the agents out of the traffic model’s control to have indices
$1,...,N_{0}$. The factorization in Eq. 2 shows that we seek a
model from which we can sample an agent’s next state conditional on all states
sampled in previous timesteps as well as any states already sampled at the
current timestep.
We note that, although the real-world system that generated the driving data
involves independent actors, it may still be important to model the influence
of actions chosen by other agents at the same timestep, a point we expand on
in Appendix A.1. While intra-timestep interaction between agents is weak in
general, explicitly modeling this interaction provides a window into
understanding cases when it is important to consider for the purposes of
traffic modeling.
## 3 Method
In this section, we introduce Trajeglish, an autoregressive generative model
of dynamic driving scenarios. Trajeglish consists of two components. The first
component is a strategy for discretizing, or “tokenizing” driving scenarios
such that we model exactly the conditional distributions required by the
factorization of the joint likelihood in Eq. 2. The second
component is an autoregressive transformer-based architecture for modeling the
distribution of tokenized scenarios.
Important features of Trajeglish include that it preserves the dynamic
factorization of the full likelihood for dynamic test-time interaction, it
accounts for intra-timestep coupling across agents, and it enables both
efficient sampling of scenarios as well as density estimates. While sampling
is the primary objective for traffic modeling, we show in Sec. 4.3 that the
density estimates from Trajeglish are useful for understanding the importance
of longer context lengths and intra-timestep dependence. We introduce our
tokenization strategy in Sec. 3.1 and our autoregressive model in Sec. 3.2.
Figure 3: Tokenization We iteratively find the token with minimum corner
distance to the next state. An example trajectory is shown in green. The raw
representation of the tokenized trajectory is shown as boxes with blue
outlines. States that have yet to be tokenized are light green. Token
templates are optimized to minimize the error between the tokenized
trajectories and the raw trajectories. Figure 4: Raw motion token
representation We plot the raw representation of action sets extracted with
k-disks for $|V|\in\\{128,256,384,512\\}$. Agents sample one of these actions
at each timestep. Figure 5: Token frequency We plot the frequency that each
token appears in the validation and training sets. Note that we sort the
tokens by their frequency for each class individually for the ID. Increasing
the vocabulary size increases the resolution but also results in a longer
tail. The distribution of actions on the training set and validation set match
closely.
### 3.1 Tokenization
The goal of tokenization is to model the support of a continuous distribution
as a set of $|V|$ discrete options. Given ${\bm{x}}\in\mathbb{R}^{n}\sim
p({\bm{x}})$, a tokenizer is a function that maps samples from the continuous
distribution to one of the discrete options $f:\mathbb{R}^{n}\rightarrow V$. A
renderer is a function that maps the discrete options back to raw input
$r:V\rightarrow\mathbb{R}^{n}$. A high-quality tokenizer-renderer pair is one
such that $r(f(\bm{x}))\approx\bm{x}$. The continuous distributions that we
seek to tokenize for the case of traffic modeling are given by Eq. 1. We note
that these distributions are over single-agent states consisting of only a
position and heading. Given the low dimensionality of the input data, we
propose a simple approach for tokenizing trajectories based on a fixed set of
state-to-state transitions.
Figure 6: K-means vs. k-disks We plot the average discretization error for
multiple template sets sampled from k-means and k-disks with $|V|=384$. Alg. 1
consistently samples better template sets than k-means.
#### Method
Let ${\bm{s}}_{0}$ be the state of an agent with length $l$ and width $w$ at
the current timestep. Let ${\bm{s}}$ be the state at the next timestep that we
seek to tokenize. We define $V=\\{\bm{s}_{i}\\}$ to be a set of template
actions, each of which represents a change in position and heading in the
coordinate frame of the most recent state. We use the notation
$a_{i}\in\mathbb{N}$ to indicate the index representation of token template
$\bm{s}_{i}$ and $\hat{\bm{s}}$ to represent the raw representation of the
tokenized state $\bm{s}$. Our tokenizer $f$ and renderer $r$ are defined by
$\displaystyle
f({\bm{s}}_{0},{\bm{s}})=a_{i^{*}}=\operatorname*{arg\,min}_{i}d_{l,w}({\bm{s}}_{i},\mathrm{local}({\bm{s}}_{0},{\bm{s}}))$
(3) $\displaystyle
r({\bm{s}}_{0},a_{i})=\hat{{\bm{s}}}=\mathrm{global}({\bm{s}}_{0},{\bm{s}}_{i})$
(4)
where $d_{l,w}({\bm{s}}_{0},{\bm{s}}_{1})$ is the average of the L2 distances
between the ordered corners of the bounding boxes defined by ${\bm{s}}_{0}$
and ${\bm{s}}_{1}$, “local” converts ${\bm{s}}$ to the local frame of
${\bm{s}}_{0}$, and “global” converts ${\bm{s}}_{i^{*}}$ to the global frame
out of the local frame of ${\bm{s}}_{0}$. We use $d_{l,w}(\cdot,\cdot)$
throughout the rest of the paper to refer to this mean corner distance metric.
Importantly, in order to tokenize a full trajectory, this process of
converting states ${\bm{s}}$ to their tokenized counterpart $\hat{{\bm{s}}}$
is done iteratively along the trajectory, using tokenized states as the base
state ${\bm{s}}_{0}$ in the next tokenization step. We visualize the procedure
for tokenizing a trajectory in Fig. 3. Tokens generated with our approach have
three convenient properties for the purposes of traffic modeling: they are
invariant across coordinate frames, invariant under temporal shift, and they
supply efficient access to a measure of similarity between tokens, namely the
distance between the raw representations. We discuss how to exploit the third
property for data augmentation in Sec. A.2.
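A minimal NumPy sketch of this tokenizer and renderer (Eqs. 3 and 4, and the iterative procedure of Fig. 3) is given below; the function names and the representation of template actions as local-frame $(\Delta x,\Delta y,\Delta h)$ triples are our own choices, and the paper's implementation may differ in detail.

```python
import numpy as np

def corners(s, l, w):
    """Ordered corners of the box with center (s[0], s[1]) and heading s[2]."""
    x, y, h = s
    offs = np.array([[ l/2,  w/2], [ l/2, -w/2], [-l/2, -w/2], [-l/2,  w/2]])
    R = np.array([[np.cos(h), -np.sin(h)], [np.sin(h), np.cos(h)]])
    return offs @ R.T + np.array([x, y])

def corner_dist(s_a, s_b, l, w):
    """d_{l,w}: mean L2 distance between ordered corners of the two boxes."""
    return np.linalg.norm(corners(s_a, l, w) - corners(s_b, l, w), axis=1).mean()

def to_local(s0, s):
    """Express state s in the frame of s0."""
    x0, y0, h0 = s0
    dx, dy = s[0] - x0, s[1] - y0
    c, si = np.cos(-h0), np.sin(-h0)
    return np.array([c*dx - si*dy, si*dx + c*dy, s[2] - h0])

def to_global(s0, s_local):
    """Inverse of to_local."""
    x0, y0, h0 = s0
    c, si = np.cos(h0), np.sin(h0)
    return np.array([c*s_local[0] - si*s_local[1] + x0,
                     si*s_local[0] + c*s_local[1] + y0,
                     s_local[2] + h0])

def tokenize(s0, s, templates, l, w):
    """f(s0, s): index of the template action closest to s in s0's frame (Eq. 3)."""
    s_loc = to_local(s0, s)
    return int(np.argmin([corner_dist(t, s_loc, l, w) for t in templates]))

def render(s0, a, templates):
    """r(s0, a): raw state obtained by applying template a from s0 (Eq. 4)."""
    return to_global(s0, templates[a])

def tokenize_trajectory(traj, templates, l, w):
    """Tokenize iteratively, re-anchoring on the rendered state at each step."""
    base, tokens = np.asarray(traj[0], dtype=float), []
    for s in traj[1:]:
        a = tokenize(base, np.asarray(s, dtype=float), templates, l, w)
        tokens.append(a)
        base = render(base, a, templates)   # accumulate error in tokenized space
    return tokens
```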
#### Optimizing template sets
We propose an easily parallelizable approach for finding template sets with
low discretization error. We collect a large number of state transitions
observed in data, sample one of them, filter transitions that are within
$\epsilon$ meters, and repeat $|V|$ times. Pseudocode for this algorithm is
included in Alg. 1. We call this method for sampling candidate templates
“k-disks” given it’s similarity to k-means++, the standard algorithm for
seeding the anchors k-means (Arthur & Vassilvitskii, 2007), as well as the
Poisson disk sampling algorithm (Cook, 1986). We visualize the template sets
found using k-disks with minimum discretization error in Fig. 4. We verify in
Fig. 5 that the tokenized action distribution is similar on WOMD train and
validation despite the fact that the templates are optimized on the training
set. We show in Fig. 6 that the discretization error induced by templates
sampled with k-disks is in general much better than that of k-means, across
agent types. A comprehensive evaluation of k-disks in comparison to baselines
is in Sec. A.3.
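Since Alg. 1 itself appears in the appendix, the following sketch reconstructs one candidate draw of the k-disk procedure from the description above; the exact filtering metric (here, Euclidean distance on the position displacement) and the variable names are assumptions on our part, and Alg. 1 may differ in detail.

```python
import numpy as np

def sample_k_disk_templates(transitions, vocab_size, eps, seed=0):
    """One candidate template set via the k-disk procedure described above:
    repeatedly pick a random remaining state transition, keep it as a
    template, and discard every transition within eps of it.

    `transitions` is an (M, 3) array of local-frame (dx, dy, dh) deltas
    harvested from data.  The filter below uses Euclidean distance on the
    (dx, dy) displacement; the paper's Alg. 1 may use a different metric.
    In practice many candidate sets are drawn and the one with the lowest
    discretization error on data is kept."""
    rng = np.random.default_rng(seed)
    pool = np.asarray(transitions, dtype=float)
    templates = []
    for _ in range(vocab_size):
        if len(pool) == 0:
            break
        pick = pool[rng.integers(len(pool))]
        templates.append(pick)
        d = np.linalg.norm(pool[:, :2] - pick[:2], axis=1)
        pool = pool[d > eps]                 # drop transitions near the pick
    return np.array(templates)
```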
Figure 7: Trajeglish modeling We train an encoder-decoder transformer that
predicts the action token of an agent conditional on previous action tokens,
map information, and agent information available at $t=0$. The diagram
represents the forward pass of the network during training in which $t=0$
agent information, map objects, and motion tokens are passed into the network
using a causal mask and the model is trained to predict the next motion token,
shown in the top right.
### 3.2 Modeling
The second component of our method is an architecture for learning a
distribution over the sequences of tokens output by the first. Our model
follows an encoder-decoder structure very similar to those used for LLMs
(Vaswani et al., 2017; Radford et al., 2019; Raffel et al., 2019). A diagram
of the model is shown in Fig. 7. Two important properties of our encoder are
that it is not equivariant to choice of global coordinate frame and it is not
permutation equivariant to agent order. For the first property, randomizing
the choice of coordinate frame during training is straightforward, and sharing
a global coordinate frame enables shared processing and representation
learning across agents. For the second property, permutation equivariance is
not actually desirable in our case since the agent order encodes the order in
which agents select actions within a timestep; the ability of our model to
predict actions should improve when the already-chosen actions of other agents
are provided.
#### Encoder
Our model takes as input two modalities that encode the initial scene. The
first is the initial state of the agents in the scene which includes the
length, width, initial position, initial heading, and object class. We apply a
single layer MLP to encode these values per-agent to an embedding of size $C$.
We then add a positional embedding that encodes the agent’s order as well as
agent identity across the action sequence. The second modality is the map. We
use the WOMD representation of a map as a collection of “map objects”, where a
map object might be a variable-length polyline representing a lane, a
sidewalk, a crosswalk, etc. We apply a VectorNet encoder to encode the map to
a sequence of embeddings for at most $M$ map objects (Gao et al., 2020). Note
that although the model is not permutation equivariant to the agents, it is
permutation invariant to the ordering of the map objects. Similar to Wayformer
(Nayakanti et al., 2022), we then apply a layer of latent query attention that
outputs a final encoding of the scene initialization.
#### Decoder
Given the set of multi-agent future trajectories, we tokenize the trajectories
and flatten using the same order used to apply positional embeddings to the
$t=0$ agent encoder to get a sequence $a_{0}^{0}a_{1}^{0}...a_{N}^{T}$. We
then prepend a start token and pop the last token, and use an embedding table
to encode the result. For timesteps for which an agent’s state wasn’t observed
in the data, we set the embedding to zeros. We pass the full sequence through
a transformer with causal mask during training. Finally, we use a linear layer
to decode a distribution over the $|V|$ template states and train to maximize
the probability of the next token with cross-entropy loss. We tie the token
embedding matrix to the weight of the final linear layer, which we observed
results in small improvements (Press & Wolf, 2017). We leverage flash
attention (Dao et al., 2022) which we find greatly speeds up training time, as
documented in Sec. A.7.
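A minimal PyTorch sketch of this training step is shown below; the module names, sizes (e.g. a vocabulary of 384 templates plus one start token), and the shape of the scene-encoder memory are illustrative assumptions rather than the exact released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryDecoder(nn.Module):
    """Sketch of the decoder training step: shift-right with a start token,
    zero embeddings for unobserved states, causal attention, tied output head."""

    def __init__(self, n_vocab=385, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        # Index n_vocab - 1 is reserved for the start token in this sketch.
        self.embed = nn.Embedding(n_vocab, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_vocab, bias=False)
        self.head.weight = self.embed.weight  # weight tying (Press & Wolf, 2017)

    def forward(self, tokens, valid, scene_memory):
        # tokens: (B, L) flattened motion tokens a_0^0 a_1^0 ... a_N^T
        # valid:  (B, L) bool mask, False where the agent state was unobserved
        # scene_memory: (B, M, d_model) output of the t=0 agent / map encoder
        B, L = tokens.shape
        start = torch.full((B, 1), self.embed.num_embeddings - 1,
                           dtype=torch.long, device=tokens.device)
        inputs = torch.cat([start, tokens[:, :-1]], dim=1)          # shift right
        shifted_valid = torch.cat([torch.ones_like(valid[:, :1]),
                                   valid[:, :-1]], dim=1)
        x = self.embed(inputs).masked_fill(~shifted_valid.unsqueeze(-1), 0.0)
        causal = torch.triu(torch.full((L, L), float("-inf"),
                                       device=x.device), diagonal=1)
        h = self.decoder(x, scene_memory, tgt_mask=causal)
        logits = self.head(h)
        # Cross-entropy on the next-token targets, ignoring unobserved steps.
        loss = F.cross_entropy(logits.transpose(1, 2), tokens,
                               reduction="none")[valid].mean()
        return loss
```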
We highlight that although the model is trained to predict the next token, it
is incorrect to say that a given embedding for the motion token of a given
agent only receives supervision signal for the task of predicting the next
token. Since the embeddings for later tokens attend to the embeddings of
earlier tokens, the embedding at a given timestep receives signal for the task
of predicting all future tokens across all agents.
## 4 Experiments
We use the Waymo Open Motion Dataset (WOMD) to evaluate trajeglish in full and
partial control environments. We report results for rollouts produced by
Trajeglish on the official WOMD Sim Agents Benchmark in Sec. 4.1. We then
ablate our design choices in simplified full and partial control settings in
Sec. 4.2. Finally, we analyze the representations learned by our model and the
density estimates it provides in Sec. 4.3. The hyperparameters for each of the
models that we train can be found in Sec. A.4.
Table 1: WOMD Sim Agents Test
Method | Realism meta metric $\uparrow$ | Kinematic metrics $\uparrow$ | Interactive metrics $\uparrow$ | Map-based metrics $\uparrow$ | minADE (m) $\downarrow$
---|---|---|---|---|---
Constant Velocity | 0.2380 | 0.0465 | 0.3372 | 0.3680 | 7.924
Wayformer (Identical) | 0.4250 | 0.3120 | 0.4482 | 0.5620 | 2.498
MTR+++ | 0.4697 | 0.3597 | 0.4929 | 0.6028 | 1.682
Wayformer (Diverse) | 0.4720 | 0.3613 | 0.4935 | 0.6077 | 1.694
Joint-Multipath++ | 0.4888 | 0.4073 | 0.4991 | 0.6018 | 2.052
MTR_E* | 0.4911 | 0.4180 | 0.4905 | 0.6073 | 1.656
MVTA | 0.5091 | 0.4175 | 0.5186 | 0.6374 | 1.870
MVTE* | 0.5168 | 0.4202 | 0.5289 | 0.6486 | 1.677
Trajeglish | 0.5339 | 0.4019 | 0.5811 | 0.6667 | 1.872
### 4.1 WOMD Sim Agents Benchmark
We test the sampling performance of our model using the WOMD Sim Agents
Benchmark and report results in Tab. 1. Submissions to this benchmark are
required to submit 32 rollouts of length 8 seconds at 10 Hz per scenario, each
of which contains up to 128 agents. We bold multiple submissions if they are
within 1% of each other, as in Montali et al. (2023). Trajeglish is the top
submission along the leaderboard meta metric, outperforming several well-
established motion prediction models including Wayformer, MultiPath++, and MTR
(Shi et al., 2022; 2023), while being the first submission to use discrete
sequence modeling. Most of the improvement is due to the fact that Trajeglish
models interaction between agents significantly better than prior work,
increasing the state-of-the-art along interaction metrics by 9.9%. A full
description of how we sample from the model for this benchmark with
comparisons on the WOMD validation set is included in Appendix A.5.
Figure 8: Partial control ADE Left shows the ADE for the vehicles selected for
evaluation under partial control, but for rollouts where the agents are fully
autonomous. Right shows the ADE for the same vehicles but with all other
agents on replay. When agents controlled by Trajeglish go first in the
permutation order, they behave similarly to the no intra model. When they go last, they utilize the intra-timestep information to produce interactions more similar to the recorded logs, achieving a lower ADE.
### 4.2 Ablation
To simplify our ablation study, we test models in this section on the
scenarios they train on, of at most 24 agents and 6.4 seconds in length. We
compare performance across 5 variants of our model. Both “trajeglish” and
“trajeglish w/ reg.” refer to our model, the latter using the noisy
tokenization strategy discussed in Sec. A.2. The “no intra” model is an
important baseline designed to mimic the behavior of behavior cloning
baselines used in Symphony (Igl et al., 2022) and “Imitation Is Not Enough”
(Lu et al., 2023). For this baseline, we keep the same architecture but adjust
the masking strategy in the decoder to not attend to actions already chosen
for the current timestep. The “marginal” baseline is designed to mimic the
behavior of models such as Wayformer (Nayakanti et al., 2022) and MultiPath++
(Varadarajan et al., 2021) that are trained to model the distribution over
single agent trajectories instead of multi-agent scene-consistent
trajectories. For this baseline, we keep the same architecture but apply a
mask to the decoder that enforces that the model can only attend to previous
actions chosen by the current agent. Our final baseline is the same as the
marginal baseline but without a map encoder. We use this baseline to
understand the extent to which the models rely on the map for traffic
modeling.
#### Partial control
We report results in Fig. 8 in a partial controllability setting in which a
single agent in each scenario is chosen to be controlled by the traffic model
and all other agents are set to replay. The single-agent ADE (average distance
error) for the controlled-agent is similar in full autonomy rollouts for all
models other than the model that does not condition on the map, as expected.
However, in rollouts where all other agents are placed on replay, the replay
trajectories leak information about the trajectory that the controlled-agent
took in the data, and as a result, the no-intra and trajeglish rollouts have a
lower ADE. Additionally, the trajeglish rollouts in which the controlled-agent
is placed first do not condition on intra-timestep information and therefore
behave identically to the no-intra baseline, whereas rollouts where the
controlled-agent is placed last in the order provide the model with more
information about the replay trajectories and result in a decreased ADE.
#### Full control
We evaluate the collision rate of models under full control in Fig. 9 as a
function of initial context, object category, and rollout duration. The value
of modeling intra-timestep interaction is most obvious when only a single
timestep is used to seed generation, although intra-timestep modeling
significantly improves the collision rate in all cases for vehicles. For
interaction between pedestrians, Trajeglish is able to capture the grouping
behavior effectively. We observe that noising the tokens during training
improves rollout performance slightly in the full control setting. We expect
these rates to improve quickly given more training data, as suggested by Fig.
4.2.
Figure 9: Full Autonomy Collision Rate Vehicle collision rate is shown on top
and pedestrian collision rate is shown on bottom. From left to right, we seed
the scene with an increasing number of initial actions from the recorded data.
Trajeglish models the log data statistics significantly better than baselines
when seeded with only an initial timestep, as well as with longer
initialization. Figure 10: Intra-Timestep Conditioning We plot the negative
log-likelihood (NLL) when we vary how many agents choose an action before a
given agent within a given timestep. As expected, when the context length
increases, intra-timestep interaction becomes much less important to take into
account.
Figure: Scaling Behavior Our preliminary study on parameter and dataset scaling
suggests that, compared to LLMs (Kaplan et al., 2020), Trajeglish is severely
data-constrained on WOMD; models with 35M parameters just start to be
significantly better than models with 15M parameters for datasets the size of
WOMD. A more rigorous study of how all hyperparameters of the training
strategy affect sampling performance is reserved for future work.
Figure: nuScenes transfer We test the ability of our model to transfer to the
maps and scenario initializations in the nuScenes dataset. The difference
between maps and behaviors found in the nuScenes dataset are such that LoRA
does not provide enough expressiveness to fine-tune the model to peak
performance. The fine-tuned models both outperform and train faster than the
model that is trained exclusively on nuScenes.
### 4.3 Analysis
#### Intra-Timestep Dependence
To understand the extent to which our model leverages intra-timestep
dependence, in Fig. 10, we evaluate the negative log likelihood under our
model of predicting an agent’s next action depending on the agent’s order in
the selected permutation, as a function of the amount of historical context
the model is provided. In all cases, the agent gains predictive power from
conditioning on the actions selected by other agents within the same timestep,
but the log likelihood levels out as more historical context is provided.
Intra-timestep dependence is significantly less important when more than 4 timesteps of history are provided, which is the setting used for most motion prediction benchmarks.
#### Representation Transferability
We measure the generalization of our model to the nuScenes dataset (Caesar et
al., 2019). As recorded in Sec. A.7, nuScenes is 3 orders of magnitude smaller
than WOMD. Additionally, nuScenes includes scenes from Singapore where the
lane convention is opposite that of North America where WOMD is collected.
Nevertheless, we show in Fig. 4.2 that our model can be fine-tuned to a
validation NLL far lower than a model trained from scratch on only the
nuScenes dataset. At the same time, we find that LoRA (Hu et al., 2021) does
not provide enough expressiveness to achieve the same NLL as fine tuning the
full model. While bounding boxes have a fairly canonical definition, we note
that there are multiple arbitrary choices in the definition of map objects
that may inhibit transfer of traffic models to different datasets.
#### Token Embeddings
We visualize the embeddings that the model learns in Fig. 11. Through the task
of predicting the next token, the model learns a similarity matrix across
tokens that reflects the euclidean distance between the actions the tokens
represent.
#### Preliminary Scaling Law
We perform a preliminary study of how our model scales with increased
parameter count and dataset size in Fig. 4.2. We find that performance between
a model of 15.4M parameters and 35.6 parameters is equivalent up to 0.5B
tokens, suggesting that a huge amount of performance gain is expected if the
dataset size can be expanded beyond the 1B tokens in WOMD. We reserve more
extensive studies of model scaling for future work.
Figure 11: Token Embedding Visualization We run PCA on the model embeddings at
initialization and at convergence, and plot the $(x,y)$ location of each of
the token templates using the top 3 principal component values to determine
the hue, lightness, and saturation of the point. The model learns that tokens
that correspond to actions close together in euclidean space represent
semantically similar actions. Note that the heading of each action is not
visualized, which also affects action similarity. Additionally, the top 3
principal components include only 35% of the variance, explaining why some
colors repeat.
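A small sketch of this visualization is given below, assuming the learned embedding table is available as a NumPy array; the exact rescaling of the principal-component scores into hue, lightness, and saturation is our choice for illustration.

```python
import numpy as np
import colorsys

def embedding_colors(embeddings):
    """Map each token embedding to an (r, g, b) color via its top-3 PCA scores,
    used as hue, lightness, and saturation respectively."""
    X = embeddings - embeddings.mean(axis=0)
    # Top 3 principal components from the SVD of the centered embedding matrix.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ vt[:3].T                               # (n_tokens, 3)
    # Rescale each component to [0, 1] before using it as H, L, or S.
    scores = (scores - scores.min(0)) / (np.ptp(scores, axis=0) + 1e-9)
    # The affine ranges below are an assumption to keep colors visible.
    return np.array([colorsys.hls_to_rgb(h, 0.3 + 0.4 * l, 0.4 + 0.6 * s)
                     for h, l, s in scores])
```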
## 5 Conclusion
In this work, we introduce a discrete autoregressive model of the interaction
between road users. By improving the realism of self-driving simulators, we
hope to enhance the safety of self-driving systems as they are increasingly
deployed into the real world.
## References
* Arthur & Vassilvitskii (2007) David Arthur and Sergei Vassilvitskii. K-means++: The advantages of careful seeding. In _Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms_ , SODA ’07, pp. 1027–1035, USA, 2007. Society for Industrial and Applied Mathematics. ISBN 9780898716245.
* Caesar et al. (2019) Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Giancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. _CoRR_ , abs/1903.11027, 2019. URL http://arxiv.org/abs/1903.11027.
* Cook (1986) Robert L. Cook. Stochastic sampling in computer graphics. _ACM Trans. Graph._ , 5(1):51–72, jan 1986. ISSN 0730-0301. doi: 10.1145/7529.8927. URL https://doi.org/10.1145/7529.8927.
* Dao et al. (2022) Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness, 2022.
* Ettinger et al. (2021) Scott Ettinger, Shuyang Cheng, Benjamin Caine, Chenxi Liu, Hang Zhao, Sabeek Pradhan, Yuning Chai, Ben Sapp, Charles R. Qi, Yin Zhou, Zoey Yang, Aurélien Chouard, Pei Sun, Jiquan Ngiam, Vijay Vasudevan, Alexander McCauley, Jonathon Shlens, and Dragomir Anguelov. Large scale interactive motion forecasting for autonomous driving: The waymo open motion dataset. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_ , pp. 9710–9719, October 2021.
* Gao et al. (2020) Jiyang Gao, Chen Sun, Hang Zhao, Yi Shen, Dragomir Anguelov, Congcong Li, and Cordelia Schmid. Vectornet: Encoding hd maps and agent dynamics from vectorized representation, 2020.
* Holtzman et al. (2020) Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. In _International Conference on Learning Representations_ , 2020. URL https://openreview.net/forum?id=rygGQyrFvH.
* Hu et al. (2023) Anthony Hu, Lloyd Russell, Hudson Yeo, Zak Murez, George Fedoseev, Alex Kendall, Jamie Shotton, and Gianluca Corrado. Gaia-1: A generative world model for autonomous driving, 2023.
* Hu et al. (2021) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. _CoRR_ , abs/2106.09685, 2021. URL https://arxiv.org/abs/2106.09685.
* Igl et al. (2022) Maximilian Igl, Daewoo Kim, Alex Kuefler, Paul Mougin, Punit Shah, Kyriacos Shiarlis, Dragomir Anguelov, Mark Palatucci, Brandyn White, and Shimon Whiteson. Symphony: Learning realistic and diverse agents for autonomous driving simulation, 2022.
* Janner et al. (2021) Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. In _Advances in Neural Information Processing Systems_ , 2021.
* Jiang et al. (2023) Chiyu Max Jiang, Andre Cornman, Cheolho Park, Ben Sapp, Yin Zhou, and Dragomir Anguelov. Motiondiffuser: Controllable multi-agent motion prediction using diffusion, 2023.
* Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _CoRR_ , abs/2001.08361, 2020. URL https://arxiv.org/abs/2001.08361.
* Loshchilov & Hutter (2017) Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. _CoRR_ , abs/1711.05101, 2017. URL http://arxiv.org/abs/1711.05101.
* Lu et al. (2023) Yiren Lu, Justin Fu, George Tucker, Xinlei Pan, Eli Bronstein, Rebecca Roelofs, Benjamin Sapp, Brandyn White, Aleksandra Faust, Shimon Whiteson, Dragomir Anguelov, and Sergey Levine. Imitation is not enough: Robustifying imitation with reinforcement learning for challenging driving scenarios, 2023.
* Montali et al. (2023) Nico Montali, John Lambert, Paul Mougin, Alex Kuefler, Nick Rhinehart, Michelle Li, Cole Gulino, Tristan Emrich, Zoey Yang, Shimon Whiteson, Brandyn White, and Dragomir Anguelov. The waymo open sim agents challenge, 2023.
* Nayakanti et al. (2022) Nigamaa Nayakanti, Rami Al-Rfou, Aurick Zhou, Kratarth Goel, Khaled S. Refaat, and Benjamin Sapp. Wayformer: Motion forecasting via simple and efficient attention networks, 2022.
* Ngiam et al. (2021) Jiquan Ngiam, Benjamin Caine, Vijay Vasudevan, Zhengdong Zhang, Hao-Tien Lewis Chiang, Jeffrey Ling, Rebecca Roelofs, Alex Bewley, Chenxi Liu, Ashish Venugopal, David Weiss, Benjamin Sapp, Zhifeng Chen, and Jonathon Shlens. Scene transformer: A unified multi-task model for behavior prediction and planning. _CoRR_ , abs/2106.08417, 2021. URL https://arxiv.org/abs/2106.08417.
* Philion (2019) Jonah Philion. Fastdraw: Addressing the long tail of lane detection by adapting a sequential prediction network. _CoRR_ , abs/1905.04354, 2019. URL http://arxiv.org/abs/1905.04354.
* Press & Wolf (2017) Ofir Press and Lior Wolf. Using the output embedding to improve language models, 2017.
* Radford et al. (2019) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019\.
* Raffel et al. (2019) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _CoRR_ , abs/1910.10683, 2019. URL http://arxiv.org/abs/1910.10683.
* Ranzato et al. (2016) Marc’Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In Yoshua Bengio and Yann LeCun (eds.), _4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings_ , 2016. URL http://arxiv.org/abs/1511.06732.
* Rempe et al. (2021) Davis Rempe, Jonah Philion, Leonidas J. Guibas, Sanja Fidler, and Or Litany. Generating useful accident-prone driving scenarios via a learned traffic prior. _CoRR_ , abs/2112.05077, 2021. URL https://arxiv.org/abs/2112.05077.
* Ross & Bagnell (2010) Stephane Ross and Drew Bagnell. Efficient reductions for imitation learning. In Yee Whye Teh and Mike Titterington (eds.), _Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics_ , volume 9 of _Proceedings of Machine Learning Research_ , pp. 661–668, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR. URL https://proceedings.mlr.press/v9/ross10a.html.
* Seff et al. (2023) Ari Seff, Brian Cera, Dian Chen, Mason Ng, Aurick Zhou, Nigamaa Nayakanti, Khaled S. Refaat, Rami Al-Rfou, and Benjamin Sapp. Motionlm: Multi-agent motion forecasting as language modeling, 2023.
* Shi et al. (2022) Shaoshuai Shi, Li Jiang, Dengxin Dai, and Bernt Schiele. Motion transformer with global intention localization and local movement refinement. _Advances in Neural Information Processing Systems_ , 2022.
* Shi et al. (2023) Shaoshuai Shi, Li Jiang, Dengxin Dai, and Bernt Schiele. Mtr++: Multi-agent motion prediction with symmetric scene modeling and guided intention querying. _arXiv preprint arXiv:2306.17770_ , 2023.
* Suo et al. (2021) Simon Suo, Sebastian Regalado, Sergio Casas, and Raquel Urtasun. Trafficsim: Learning to simulate realistic multi-agent behaviors, 2021\.
* van den Oord et al. (2016) Aäron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew W. Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. _CoRR_ , abs/1609.03499, 2016. URL http://arxiv.org/abs/1609.03499.
* Varadarajan et al. (2021) Balakrishnan Varadarajan, Ahmed Hefny, Avikalp Srivastava, Khaled S. Refaat, Nigamaa Nayakanti, Andre Cornman, Kan Chen, Bertrand Douillard, Chi-Pang Lam, Dragomir Anguelov, and Benjamin Sapp. Multipath++: Efficient information fusion and trajectory aggregation for behavior prediction. _CoRR_ , abs/2111.14973, 2021. URL https://arxiv.org/abs/2111.14973.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _CoRR_ , abs/1706.03762, 2017. URL http://arxiv.org/abs/1706.03762.
* Zhong et al. (2022) Ziyuan Zhong, Davis Rempe, Danfei Xu, Yuxiao Chen, Sushant Veer, Tong Che, Baishakhi Ray, and Marco Pavone. Guided conditional diffusion for controllable traffic simulation, 2022\.
* Zhu et al. (2015) Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. _CoRR_ , abs/1506.06724, 2015. URL http://arxiv.org/abs/1506.06724.
## Appendix A Appendix
### A.1 Intra-Timestep Interaction
There are a variety of reasons that intra-timestep dependence may exist in
driving log data. To list a few, driving logs are recorded at discrete
timesteps and any interaction in the real world between timesteps gives the
appearance of coordinated behavior in log data. Additionally, information that
is not generally recorded in log data, such as eye contact or turn signals,
may lead to intra-timestep dependence. Finally, the fact that log data exists
in 10-20 second chunks can result in intra-timestep dependence if there were
events before the start of the log data that result in coordination during the
recorded scenario. These factors are in general weak, but may give rise to
behavior in rare cases that is not possible to model without taking into
account coordination across agents within a single timestep.
### A.2 Regularization
Trajeglish is trained with teacher forcing, meaning that it is trained on the
tokenized representation of ground-truth trajectories. However, at test time,
the model ingests its own actions. Given that the model does not model the
ground-truth distribution perfectly, there is an inevitable mismatch between
the training and test distributions that can lead to compounding errors (Ross
& Bagnell, 2010; Ranzato et al., 2016; Philion, 2019). We combat this effect
by noising the input tokens fed as input to the model. More concretely, when
tokenizing the input trajectories, instead of choosing the token with minimum
corner distance to the ground-truth state as stated in Eq. 3, we sample the
token from the distribution
$a_{i}\sim\mathrm{softmax}_{i}(\mathrm{nucleus}(-d(\bm{s}_{i},\bm{s})/\sigma,p_{\mathrm{top}}))$ (5)
meaning we treat the negated distances between the ground-truth raw state and the templates as logits of a categorical distribution with temperature $\sigma$
and apply nucleus sampling (Holtzman et al., 2020) to generate sequences of
motion tokens. When $\sigma=0$ and $p_{\mathrm{top}}=1$, the approach recovers
the tokenization strategy defined in Eq. 3. Intuitively, if two tokens are
equidistant from the ground-truth under the average corner distance metric,
this approach will sample one of the two tokens with equal probability during
training. Note that we retain the minimum-distance template index as the
ground-truth target even when noising the input sequence.
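The sketch below implements this noisy tokenization in NumPy; the nucleus filtering follows the usual top-$p$ construction, and the function names are ours rather than part of any released code.

```python
import numpy as np

def nucleus_filter(logits, p_top):
    """Set logits outside the smallest probability-mass-p_top set to -inf."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(-probs)
    cum = np.cumsum(probs[order])
    keep = (cum - probs[order]) < p_top   # keep tokens until mass p_top is reached
    filtered = np.full_like(logits, -np.inf)
    filtered[order[keep]] = logits[order[keep]]
    return filtered

def sample_noisy_token(distances, sigma, p_top, rng=None):
    """Eq. 5: sample a token index from softmax(-d / sigma) after nucleus filtering."""
    rng = rng or np.random.default_rng()
    if sigma == 0.0:
        return int(np.argmin(distances))  # recovers the argmin tokenizer of Eq. 3
    logits = nucleus_filter(-np.asarray(distances, dtype=float) / sigma, p_top)
    probs = np.exp(logits - logits[np.isfinite(logits)].max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```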
While this method of regularization does make the model more robust to errors
in its samples at test time, it also adds noise to the observation of the
states of other agents which can make the model less responsive to the motion
of other agents at test time. As a result, we find that this approach
primarily improves performance for the setting where all agents are controlled
by the traffic model.
Figure 12: K-disk expected discretization error Average corner distance for
each of the k-disk vocabularies of sizes 128, 256, 384, and 512. Figure 13:
Tokenization method comparison Average corner distance for trajectories
tokenized with a vocabulary of 384 with template sets derived using different
methods. Figure 14: Semantic Tokenization Performance We plot the probability
that the bounding box of an agent has non-zero overlap with another agent in
the scene for each timestep. The collision rate for the raw data is shown in
black.
### A.3 Tokenization Analysis
We compare our approach for tokenization against two grid-based tokenizers
(van den Oord et al., 2016; Seff et al., 2023; Janner et al., 2021), and one
sampling-based tokenizer. The details of these methods are below.
$(x,y,h)$-grid - We independently discretize the change in longitudinal position, lateral position, and heading, and treat the template set as the product of these three sets. For vocabulary sizes of 128/256/384/512 respectively, we use 6/7/8/9 values for $x$ and $y$, and 4/6/7/8 values for $h$. These values are spaced evenly between (-0.3 m, 3.5 m) for $x$, (-0.2 m, 0.2 m) for $y$, and (-0.1, 0.1) rad for $h$.
$(x,y)$-grid - We independently discretize the change in location only. We choose the heading for each template based on the heading of the state-to-state transition found in the data with a change in location closest to the template location. Compared to the $(x,y,h)$-grid baseline, this approach assumes heading is deterministic given location in order to gain resolution in location. We use 12/16/20/23 values for $x$ and $y$ with the same bounds as in the $(x,y,h)$-grid baseline.
k-means - We run k-means many times on a dataset of $(x,y,h)$ state-to-state transitions. The distance metric is the distance between the $(x,y)$ locations. We note that the main source of randomness across runs is how k-means is seeded, for which we use k-means++ (Arthur & Vassilvitskii, 2007). We ultimately select the template set with minimum expected discretization error as measured by the average corner distance.
k-disks - As shown in Alg. 1, we sample subsets of a dataset of state-to-state transitions that are at least $\epsilon$ apart from each other. For vocabulary sizes of 128/256/384/512, we use $\epsilon$ of 3.5/3.5/3.5/3.0 centimeters.
Intuitively, the issue with both grid-based methods is that they distribute
templates evenly instead of focusing them in regions of the support where the
most state transitions occur. The main issue with k-means is that the heading
is not taken into account when optimizing the cluster centers.
We offer several comparisons between these methods. In Fig. 12, we plot the
expected corner distance between trajectories and tokenized trajectories as a
function of trajectory length for the template sets found with k-disks. In
Fig. 13, we compare the tokenization error as a function of trajectory length
and find that grid-based tokenizers create large oscillations. To calibrate to
a metric more relevant to the traffic modeling task, we compare the collision
rate between raw trajectories as a function of trajectory length for the raw
scenarios and the tokenized scenarios using k-disk template sets of size 128,
256, 384, and 512 in Fig. 14. We observe that a vocabulary size of 384 is
sufficient to avoid creating extraneous collisions. Finally, Fig. 15 plots the
full distribution of discretizaion errors for each of the baselines and Tab. 2
reports the expected discretization error across vocabulary sizes for each of
the methods.
Algorithm 1 Samples a candidate vocabulary of size $N$. The distance
$d(x_{0},x)$ measures the average corner distance between a box of length 1
meter and width 1 meter with state $x_{0}$ vs. state $x$.
1:procedure SampleKDisks($X$, $N$, $\epsilon$)
2: $S\leftarrow\\{\\}$
3: while len($S$) $<$ $N$ do
4: $x_{0}\sim X$
5: $X\leftarrow\\{x\in X\mid d(x_{0},x)>\epsilon\\}$
6: $S\leftarrow S\cup\\{x_{0}\\}$
7: return $S$
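A NumPy version of Alg. 1 is sketched below, assuming state-to-state transitions are stored as $(x, y, h)$ rows and using the $1\,\mathrm{m}\times 1\,\mathrm{m}$ box from the caption for the corner distance; it is an illustration of the procedure rather than the exact implementation, and $\epsilon$ must be given in the same units as the states.

```python
import numpy as np

def avg_corner_distance(x0, X, length=1.0, width=1.0):
    """Average corner distance between state x0 and every state in X.
    States are (x, y, heading) rows; boxes are 1 m x 1 m as in the caption."""
    dx, dy = length / 2.0, width / 2.0
    template = np.array([[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]])

    def box_corners(states):
        c, s = np.cos(states[:, 2]), np.sin(states[:, 2])
        rot = np.stack([np.stack([c, -s], -1), np.stack([s, c], -1)], -2)  # (K,2,2)
        return np.einsum("kij,cj->kci", rot, template) + states[:, None, :2]

    return np.linalg.norm(box_corners(X) - box_corners(x0[None]), axis=-1).mean(-1)

def sample_k_disks(X, n_templates, eps, rng=None):
    """Alg. 1: greedily pick templates that are at least eps apart."""
    rng = rng or np.random.default_rng()
    X = np.asarray(X, dtype=float)
    S = []
    while len(S) < n_templates and len(X) > 0:
        x0 = X[rng.integers(len(X))]                    # sample a transition
        X = X[avg_corner_distance(x0, X) > eps]          # drop transitions within eps
        S.append(x0)
    return np.stack(S)
```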
Figure 15: Discretization error distribution We plot the probability that the discretized trajectory is more than $\epsilon$ away from the true trajectory, for 2 cm $\leq\epsilon\leq$ 10 cm, as a function of trajectory length. Lower is therefore better. Each row visualizes the error distribution for a different method, each with a vocabulary size of 384. We keep the y-axis the same across all plots. We note that k-means discretizes more trajectories to within 2 cm than k-disks, but does not improve past 5 cm, whereas k-disks is able to tokenize nearly all trajectories in WOMD to within 6 centimeters.
Table 2: Tokenization discretization error comparison, $\mathbb{E}[d(s,\hat{s})]$ (cm)
method | $|V|=128$ | $|V|=256$ | $|V|=384$ | $|V|=512$
---|---|---|---|---
$(x,y,h)$-grid | 20.50 | 16.84 | 14.09 | 12.59
$(x,y)$-grid | 9.35 | 8.71 | 5.93 | 4.74
k-means | 14.49 | 8.17 | 6.13 | 5.65
k-disks | 2.66 | 1.46 | 1.18 | 1.02
### A.4 Training hyperparameters
We train two variants of our model. The variant we use for the WOMD benchmark
is trained on scenarios with up to 24 agents within 60.0 meters of the origin,
up to 96 map objects with map points within 100.0 meters of the origin, 2 map
encoder layers, 2 transformer encoder layers, 6 transformer decoder layers, a
hidden dimension of 512, trained to predict 32 future timesteps for all
agents. We train with a batch size of 96, with a tokenization temperature of
0.008, a tokenization nucleus of 0.95, a top learning rate of 5e-4 with 500
step warmup and linear decay over 800k optimization steps with AdamW optimizer
(Loshchilov & Hutter, 2017). We use the k-disks tokenizer with vocabulary size
384. During training, we choose a random 4-second subsequence of a WOMD
scenario, a random agent state to define the coordinate frame, and a random
order in which the agents are fed to the model.
For the models we analyze in all other sections, we use the same setting from
above, but train to predict 64 timesteps, using only 700k optimization steps.
Training on these longer scenarios enables us to evaluate longer rollouts
without the complexity of extending rollouts in a fair way across models,
which we do only for the WOMD Sim Agents Benchmark and document in Sec. A.5.
### A.5 Extended Rollouts for WOMD Sim Agents Benchmark
In order to sample scenarios for evaluation on the WOMD sim agents benchmark,
we require the ability to sample scenarios with an arbitrary number of agents
arbitrarily far from each other for an arbitrary number of future timesteps.
While it may become possible to train a high-performing model on 128-agent
scenarios on larger datasets, we found that training our model on 24-agent
scenarios and then sampling from the model using a “sliding window” (Hu et
al., 2023) both spatially and temporally achieved top performance.
In detail, at a given timestep during sampling, we determine the 24-agent
subsets with the following approach. First, we compute the 24-agent subset
associated with picking each of the agents in the scene to be the center
agent. We choose the subset associated with the agent labeled as the self-
driving car to be the first chosen subset. Among the agents not included in a
subset yet, we find which agent has a 24-agent subset associated to it with
the maximum number of agents already included in a chosen subset, and select
that agent’s subset next. We continue until all agents are included in at
least one of the subsets.
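The following sketch illustrates this greedy subset-selection step, assuming agent positions at the current timestep are given as a NumPy array; tie-breaking and the handling of padding are simplifications of this sketch.

```python
import numpy as np

def choose_subsets(positions, sdc_index, k=24):
    """Greedily cover all agents with k-agent subsets centered on chosen agents.
    positions: (N, 2) agent locations at the current timestep."""
    n = len(positions)
    dists = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    # Subset of each candidate center: its k nearest agents (including itself).
    subset_of = [set(np.argsort(dists[i])[:k].tolist()) for i in range(n)]

    chosen = [subset_of[sdc_index]]        # the self-driving car's subset goes first
    covered = set(subset_of[sdc_index])
    while len(covered) < n:
        # Among uncovered agents, pick the one whose subset overlaps most
        # with the agents already covered by a chosen subset.
        candidates = [i for i in range(n) if i not in covered]
        best = max(candidates, key=lambda i: len(subset_of[i] & covered))
        chosen.append(subset_of[best])
        covered |= subset_of[best]
    return chosen
```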
Importantly, to define the order for agents within the subset, we place any
padding at the front, followed by all agents that will have already selected
an action at the current timestep, followed by the remaining agents sorted by
distance to the center agent. By keeping this order, we enable the agents to condition on the maximum amount of pre-generated information possible.
Additionally, this ordering guarantees that the self-driving car is always the
first to select an action at each timestep, in accordance with the guidelines
for the WOMD sim agents challenge (Montali et al., 2023).
To sample an arbitrarily long scenario, we have the option to sample up to
$t<T=32$ steps before computing new 24-agent subsets. Computing new subsets
every timestep ensures that the agents within a subset are always close to
each other, but has the computational downside of requiring the transformer
decoder key-value cache to be flushed at each timestep. For our submission, we
compute the subsets at timesteps $t\in\\{10,34,58\\}$.
While the performance of our model under the WOMD sim agents metrics was
largely unaffected by the choice of the hyperparameters above, we found that
the metrics were sensitive to the temperature and nucleus that we use when
sampling from the model. We use a temperature of 1.5 and a nucleus of 1.0 to
achieve the results in Tab. 1. Our intuition for why larger temperatures
resulted in larger values for the sim agents metric is that the log likelihood
penalizes lack of coverage much more strongly than lack of calibration, and
higher temperature greatly improves the coverage.
Finally, we observed that, although the performance of our model sampling with
temperature 1.5 was better than all prior work for interaction and map-based
metrics as reported in Tab. 3, the performance was worse than prior work along
kinematics metrics. To test if this discrepancy was a byproduct of
discretization, we trained a “heading smoother” by tokenizing trajectories,
then training a small autoregressive transformer to predict back the original
heading given the tokenized trajectory. On tokenized ground-truth
trajectories, the heading smoother improves heading error from 0.58 degrees to
0.33 degrees. Note that the autoregressive design of the smoother ensures that
it does not violate the closed-loop requirement for the Sim Agents Benchmark.
The addition of this smoother did improve along kinematics metrics slightly,
as reported in Tab. 3. We reserve a more rigorous study of how to best improve
the kinematic realism of trajectories sampled from discrete sequence models
for future work.
Table 3: WOMD sim agents validation
Method | Realism meta metric $\uparrow$ | Kinematic metrics $\uparrow$ | Interactive metrics $\uparrow$ | Map-based metrics $\uparrow$
---|---|---|---|---
$\tau=1.25$, $p_{\mathrm{top}}=0.995$ | 0.5176 | 0.3960 | 0.5520 | 0.6532
$\tau=1.5$, $p_{\mathrm{top}}=1.0$ | 0.5312 | 0.3963 | 0.5838 | 0.6607
$\tau=1.5$, $p_{\mathrm{top}}=1.0$, w/ $h$-smooth | 0.5352 | 0.4065 | 0.5841 | 0.6612
### A.6 Additional Ablation Results
#### Full Control
In Fig. 16, we find the sampled scenario with minimum corner distance to the
ground-truth scenario and plot that distance as a function of the number of
timesteps that are provided at initialization. When the initialization is a
single timestep, the minADE of both models that take into account intra-
timestep dependence improves. As more timesteps are provided, the effect
diminishes, as expected. We visualize a small number of rollouts in the full
autonomy setting in Fig. 17, and videos of other rollouts can be found on our
project page.
#### Partial Control
To quantitatively evaluate these rollouts, we measure the collision rate and
visualize the results in Fig. A.7. Of course, we expect the collision rate to
be high in these scenarios since most of the agents in the scene are on
replay. For Trajeglish models, we find that when the autonomous agent is the
first in the permutation to choose an action, they reproduce the performance
of the model with no intra-timestep dependence. When the agent goes last
however, the collision rate drops significantly. Modeling intra-timestep
interaction is a promising way to enable more realistic simulation with some
agents on replay, which may have practical benefits given that the
computational burden of simulating agents with replay is minimal. In Fig. 18,
we visualize how the trajectory for agents controlled by Trajeglish shifts
between the full autonomy setting and the partial autonomy setting. The agent
follows traffic flow and cedes the right of way when replay agents ignore the
actions of the agent controlled by the traffic model.
Figure 16: Full Autonomy minADE As we seed the scene with a longer
initialization, the no-intra model and our model converge to similar values,
and all models improve. When initialized with only a single timestep, the
performance gap between models that take into account intra-timestep
interaction and models that do not is significant.
### A.7 Additional Analysis
#### Data and Training Statistics
We report a comparison between the number of tokens in WOMD and the number of
tokens in datasets used to train GPT-1 and GPT-2 in Tab. 5. Of course, a text
token and a motion token do not have exactly the same information content, but
we still think the comparison is worth making as it suggests that WOMD
represents a dataset size similar to BookCorpus (Zhu et al., 2015), which was used to train GPT-1; the scaling curves we compute for our model, shown in Fig. 4.2, support this comparison. We also report the number of tokens
collected per hour of driving to estimate how many hours of driving would be
necessary to reach a given token count. In Tab. 5, we document the extent to
which using mixed precision and flash attention improves memory use and speed.
Using these tools, our model takes 2 days to train on 4 A100s.
#### Context Length
Context length refers to the number of tokens that the model has to condition
on when predicting the distribution over the next token. Intuitively, as the
model is given more context, the model should get strictly better at
predicting the next token. We quantify this effect in Fig. A.7. We find that
the relative decrease in cross entropy from increasing the context length
drops off steeply for our model for pedestrians and cyclists, which aligns
with the standard intuition that these kinds of agents are more markov. In
comparison, we find a significant decrease in cross entropy with up to 2
seconds of context for vehicles, which is double the standard context length
used for vehicles on motion prediction benchmarks (Ettinger et al., 2021;
Caesar et al., 2019).
Figure: Partial control collision rate We plot the collision rate as a function
of rollout time when the traffic model controls only one agent while the rest
are on replay. We expect this collision rate to be higher than the log
collision rate since the replay agents do not react to the dynamic agents. We
note that the collision rate decreases significantly just by placing the agent
last in the order, showing that the model learns to condition on the actions
of other agents within a single timestep effectively.
Figure: Context Length We plot the negative log-likelihood (NLL) when we vary
the context length at test-time relative to the NLL at full context. Matching
with intuition, while pedestrians and cyclists are more Markovian over a short horizon, interaction occurs on a longer timescale for vehicles.
Table 4: Dataset comparison by tokens
dataset | tokens | rate (tok/hour)
---|---|---
nuScenes | 3M | 0.85M
WOMD | 1.5B | 1.2M
WOMD (moving) | 1.1B | 0.88M
BookCorpus (GPT-1) | 1B | -
OpenWebText (GPT-2) | 9B | -
Table 5: Training efficiency
method | memory | speed (steps/hour)
---|---|---
no intra | 14.7 MiB | 8.9k
trajeglish (mem-efficient) | 7.2 MiB | 11.1k
trajeglish (bfloat16+flash) | 5.6 MiB | 23.0k
Figure 17: Full control rollouts Additional visualizations of full control samples from our model. The model captures the collective behavior of agents at an intersection and maneuvers such as U-turns.
Figure 18: Partial control comparison We visualize the effect of controlling
only one agent with Trajeglish and controlling the others with replay. The
left scene in each pair is a full control sample from Trajeglish. The right
scene is generated by placing all green cars on fixed replay tracks and
controlling the single blue car with Trajeglish. Our model reacts dynamically
to other agents in the scene at each timestep.
# Text Line Segmentation for Challenging Handwritten Document Images Using
Fully Convolutional Network
Berat Barakat, Ahmad Droby, Majeed Kassis and Jihad El-Sana The Department of
Computer Science
Ben-Gurion University of the Negev
Email: {berat, drobya, majeek<EMAIL_ADDRESS>
###### Abstract
This paper presents a method for text line segmentation of challenging
historical manuscript images. These manuscript images contain narrow interline
spaces with touching components, interpenetrating vowel signs and inconsistent
font types and sizes. In addition, they contain curved, multi-skewed and
multi-directed side note lines within a complex page layout. Therefore,
bounding polygon labeling would be very difficult and time consuming. Instead
we rely on line masks that connect the components on the same text line. Then
these line masks are predicted using a Fully Convolutional Network (FCN). In
the literature, FCN has been successfully used for text line segmentation of
regular handwritten document images. The present paper shows that FCN is
useful with challenging manuscript images as well. Using a new evaluation
metric that is sensitive to over segmentation as well as under segmentation,
testing results on a publicly available challenging handwritten dataset are
comparable with the results of a previous work on the same dataset.
## I Introduction
Historical handwritten documents are valuable since they connect past and
present. Commonly they are converted into digital form for being easily
available to scholars worldwide. However, digital historical documents pose
real challenges for automatic writer identification, keyword searching, and
indexing. Text line segmentation of document images is an essential pre-
processing operation for these automation problems. Text line segmentation of handwritten documents is complicated by touching, overlapping and crowded characters and vowel signs among consecutive text lines, besides narrow interline spacing (Figure 1).
In addition to the problems of handwritten documents, challenging handwritten
documents contain various writing styles with inconsistent font types and font
sizes through multi-skewed, multi-directed and curved text lines (Figure 2).
Several text line extraction methods for handwritten documents have been
proposed. Projection method was initially used for printed documents [1, 2]
then modified for skewed [3, 4] and multi-skewed documents [5]. Smearing
method [6] which fills the space between consecutive foreground pixels can be
used on skewed documents [7] as well. Grouping method aggregates pixels or
connected components in a bottom up strategy and is superior in case of skewed
and curved text lines [8, 9]. Machine learning algorithms, a type of grouping
method, handle text line segmentation as a pixel classification problem. Pixel
classification can be done in a sliding window manner [10, 11] which is not
desirable due to redundant and expensive computation of overlapping areas in
the sliding windows. On the other hand, dense prediction does not suffer from
redundant computation and has been successfully used for text line
segmentation of handwritten documents [12, 13].
Figure 1: Text line segmentation problems with regular handwritten documents
Figure 2: Additional text line segmentation problems with challenging
handwritten documents. Various writing styles are also noticeable.
However, text line extraction of challenging documents has not been
extensively studied. This paper provides a dataset (https://www.cs.bgu.ac.il/
vml/) of challenging documents with multi-skewed, multi-directed and curved
handwritten text lines. It then describes text line segmentation of this
dataset using Fully Convolutional Network (FCN). We also propose a new
evaluation metric that is sensitive to both, over and under segmentation of
lines in contrast to the available metrics. Using the new evaluation metric we
show that FCN based method is comparable to Cohen et al.’s method [9].
In the rest of the paper we describe our method and the new evaluation metric
in detail, and present the challenging dataset and report experimental
results. After comparing results we conclude and discuss the future
directions.
## II Method
Fully Convolutional Network has made great improvements in object segmentation
field [14]. It is an end to end semantic segmentation framework that extracts
the features and learns the classifier function simultaneously. FCN inputs the
original images and their pixel level annotations for learning the hypothesis
function that can predict whether a pixel belongs to a text line label or not.
So the crucial question is how to annotate the text lines. Baseline labeling
is not applicable to all the alphabets whereas bounding polygon is applicable
but very cumbersome for crowded documents [15]. Instead of baseline or
bounding polygon, we used line mask labeling that connects the characters in
the same line (Figure 4). A line mask disregards diacritics and touching
components between lines.
### II-A FCN architecture
The FCN architecture (Figure 3) we used is based on the FCN proposed for
semantic segmentation [14]. First five blocks, encoder part, follow the design
of VGG 16-layer network [16] except the discarded final layer. The encoder
consists of five convolutional blocks. Each convolutional block contains a
number of convolutional layers followed by a max pooling layer. Through the
encoder, input image is downsampled, and the filters can see coarser
information with larger receptive field. Then the decoder part of FCN
upsamples the coarse outputs to dense pixels. Upsampling is done by transposed convolution, applying a convolution filter with a fractional stride of $\frac{1}{f}$ to upsample by a factor $f$.
Two common variants, FCN8 and FCN32, differ in the upsampling factor used in the last layer. FCN32 upsamples the last convolutional layer by $f=32$ in a single step. We selected the FCN8 architecture because it has been successful in page layout analysis of a similar dataset [17]. FCN8 combines the final encoder layer with lower layers that carry finer information, then upsamples the combined layer back to the input size. The default input size of VGG is $224\times 224$, which does not cover more than 2 main text lines and 3 side text lines. To include more context we changed the input size to $320\times 320$ pixels. We also changed the number of output channels to 2, which is the number of classes: text line or background.
Figure 3: The FCN architecture. Pooling and prediction layers are shown as
grids that show relative coarseness. Convolutional layers are shown as
vertical lines. FCN8 upsamples the final layer by a factor of 4 and the pool4 layer by a factor of 2, combines them with the pool3 layer, and finally upsamples the result to the input size.
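For reference, a minimal PyTorch sketch of such an FCN8 head on top of the torchvision VGG16 features is given below; the layer indices, the bilinear upsampling, and the assumption of a 3-channel input (binarized patches replicated across channels) are our simplifications, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class FCN8(nn.Module):
    """Minimal FCN8 sketch with a pretrained VGG16 encoder and 2 output classes."""

    def __init__(self, num_classes=2):
        super().__init__()
        feats = vgg16(weights="IMAGENET1K_V1").features
        self.to_pool3 = feats[:17]    # conv blocks 1-3 (stride 8, 256 channels)
        self.to_pool4 = feats[17:24]  # conv block 4    (stride 16, 512 channels)
        self.to_pool5 = feats[24:]    # conv block 5    (stride 32, 512 channels)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score5 = nn.Conv2d(512, num_classes, 1)

    def forward(self, x):
        # VGG expects 3 channels; binarized patches may be replicated across channels.
        p3 = self.to_pool3(x)
        p4 = self.to_pool4(p3)
        p5 = self.to_pool5(p4)
        # As in the Figure 3 caption: upsample pool5 scores by 4 and pool4 scores
        # by 2, fuse with pool3 scores, then upsample to the input size.
        s5 = F.interpolate(self.score5(p5), scale_factor=4,
                           mode="bilinear", align_corners=False)
        s4 = F.interpolate(self.score4(p4), scale_factor=2,
                           mode="bilinear", align_corners=False)
        fused = self.score3(p3) + s4 + s5
        return F.interpolate(fused, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)

# Example: a 320x320 patch yields per-pixel logits for {background, text line}.
# logits = FCN8()(torch.randn(1, 3, 320, 320))  # -> (1, 2, 320, 320)
```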
### II-B Pre-processing
We binarize the 30 document images, each with an approximate size of
$3000\times 4000$, by applying an adaptive binarization method for historical
documents [18]. Binarized images were inverted before inputting them to the
FCN. Then we manually annotated the line masks on the document images. A
sequence of original, binarized and labeled document images is demonstrated in
Figure 4. Finally, a total of 50,000 and 6,000 random patches of size $320\times 320$ were generated for the training and validation sets of each fold, respectively.
Figure 4: A sequence of original, binarized and labeled document images.
Random patches for training are generated from the binarized and labeled
images.
### II-C Training and testing
We applied a 6-fold cross-validation scheme for the experiments. Each fold was
split into train, validation and test sets. In each fold, training continued
for 80 epochs and the model with the least validation loss value was saved.
The best model was then used to predict the corresponding test set. This
training procedure ensures generalizability of the proposed model. The FCN was trained with a batch size of 16, using Stochastic Gradient Descent (SGD) with momentum equal to $0.9$ and a learning rate equal to $0.001$. VGG was initialized with its publicly available pre-trained weights.
During testing, a sliding window of size $320\times 320$ was used for prediction, but only the inner window of size $100\times 100$ was kept. The page was padded with black pixels at its right and bottom sides when its size was not an integer multiple of the sliding window size, in addition to padding on all 4 sides so that only the central part of each sliding window needs to be considered.
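This sliding-window evaluation can be sketched as follows, assuming `predict_patch` wraps a forward pass of the trained FCN and returns a label map of the same size as the patch; the padding arithmetic below is one consistent reading of the description above.

```python
import numpy as np

def predict_page(page, predict_patch, win=320, core=100):
    """Sliding-window prediction keeping only the central core x core region.
    `predict_patch` maps a (win, win) patch to a (win, win) label map."""
    margin = (win - core) // 2
    h, w = page.shape
    # Pad so height/width become multiples of the core stride, plus a margin
    # on all four sides so every core region has full surrounding context.
    ph = int(np.ceil(h / core)) * core
    pw = int(np.ceil(w / core)) * core
    padded = np.zeros((ph + 2 * margin, pw + 2 * margin), dtype=page.dtype)
    padded[margin:margin + h, margin:margin + w] = page

    out = np.zeros((ph, pw), dtype=np.uint8)
    for y in range(0, ph, core):
        for x in range(0, pw, core):
            pred = predict_patch(padded[y:y + win, x:x + win])
            out[y:y + core, x:x + core] = pred[margin:margin + core,
                                               margin:margin + core]
    return out[:h, :w]
```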
### II-D Post-processing
Occasionally the predicted line masks were disconnected. Thus, we needed to post-process the FCN output. Given a predicted line mask image, first the orientation of each connected component was computed. Then the image was split into $N$ layers where each layer contains the connected components with the same orientation. Later a directional morphological operation was applied on each layer. The resulting layers were finally combined back using a pixel-wise OR operation.
Let $C=\\{c_{1},c_{2},...,c_{M}\\}$ be the set of connected components in the predicted line mask image. $C$ is further divided into $N$ intersecting subsets $B_{1},B_{2},...,B_{N}\subseteq C$ such that:
$B_{j}=\\{c_{i}:\alpha(c_{i})^{2}|v_{j}^{T}\cdot\theta(c_{i})|<\epsilon\\},\quad i=1,2,\dots,M,\ j=1,2,\dots,N$ (1)
$v_{j}=(\cos(j\frac{\pi}{N}),\sin(j\frac{\pi}{N}))$ (2)
$\alpha(c)=\frac{R_{maj}}{R_{maj}+R_{min}}$ (3)
where $v_{j}$ is the unit vector of a particular orientation with angle in $[0,\pi]$, and $\epsilon\in[0,1]$ is the threshold for selecting the connected components perpendicular to this particular orientation. $R_{maj}$ and $R_{min}$ are the major and minor axes of the ellipse fitted to the connected component $c$, respectively. $\alpha(c)\in[0.5,1]$ indicates how confident we are about the orientation of the component $c$. $\theta(c)$ is the unit vector that represents the orientation of the ellipse fitted to the connected component $c$. Ellipse fitting was done using the algorithm described in [19].
Eventually, for each subset $B_{j}$ a morphological operation with a narrow kernel in the orientation of this subset was applied. Figure 5 shows the result of post-processing on a sample predicted line mask image.
Figure 5: Post processing phases: (a) Predicted line mask may have
disconnected components. (b) For each component an ellipse (red) is fitted and
its orientation vector $\theta(c)$ (blue) is computed. (c) Morphological
dilation is applied to each component with a narrow kernel in the direction of
its fitted ellipse.
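A sketch of this post-processing with OpenCV and NumPy is given below; it estimates component orientation with a PCA of pixel coordinates rather than the ellipse fit of [19], and the kernel length and the interpretation of "kernel in the orientation of this subset" as the direction perpendicular to $v_{j}$ are illustrative choices.

```python
import cv2
import numpy as np

def postprocess_line_mask(mask, n_orient=10, eps=0.2, kernel_len=41):
    """Split components into orientation layers (Eq. 1), dilate each layer
    along its text-line direction, and OR the layers back together."""
    n_labels, labels = cv2.connectedComponents(mask.astype(np.uint8))

    # Estimate alpha(c) and orientation theta(c) per component via pixel PCA.
    comps = []
    for c in range(1, n_labels):
        ys, xs = np.nonzero(labels == c)
        if len(xs) < 2:
            continue
        cov = np.cov(np.stack([xs, ys]).astype(float))
        evals, evecs = np.linalg.eigh(cov)              # ascending eigenvalues
        r_min, r_maj = np.sqrt(np.maximum(evals, 1e-9))
        comps.append((c, r_maj / (r_maj + r_min), evecs[:, 1]))

    result = np.zeros_like(mask, dtype=np.uint8)
    for j in range(n_orient):
        angle = j * np.pi / n_orient
        v_j = np.array([np.cos(angle), np.sin(angle)])
        layer = np.zeros_like(mask, dtype=np.uint8)
        for c, alpha, theta_c in comps:
            if alpha ** 2 * abs(v_j @ theta_c) < eps:   # Eq. 1
                layer |= (labels == c).astype(np.uint8)
        # Narrow linear kernel perpendicular to v_j, i.e. along the components.
        c0 = kernel_len // 2
        dx = int(round(np.cos(angle + np.pi / 2) * c0))
        dy = int(round(np.sin(angle + np.pi / 2) * c0))
        kernel = np.zeros((kernel_len, kernel_len), np.uint8)
        cv2.line(kernel, (c0 - dx, c0 - dy), (c0 + dx, c0 + dy), 1, 1)
        result |= cv2.dilate(layer, kernel)
    return result
```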
### II-E Connectivity Component Based Line Extraction Accuracy Metric
Available evaluation methods for text line segmentation either use a pixel-
wise matching mostly normalized by line length or maximum overlap according to
a certain threshold between the extracted and annotated lines. These methods
have their short-comings. Thus, we present a different evaluation method that
provides a better picture of the results.
The theoretical basis is as follows. A line extraction algorithm succeeds in
extracting a complete text line if it has succeeded in finding all the
connected components of this line. That is, if the algorithm labels all the connected components of a line with the same label, then it has successfully extracted this line without any errors. This is in contrast to assigning multiple labels (over segmentation) or extracting only part of the connected components (under segmentation) along the same text line.
To describe the new metric, we define the term connectivity component. A connectivity component is the connection between two consecutive connected components with the same label. The number of connectivity components in a line equals the number of connections between every two consecutive connected components, plus one beginning-of-line connectivity component. The extra connectivity component handles cases where a line contains only one connected component. Complete extraction of a line with several connectivity components means extracting all its connectivity components and assigning them the same label.
To quantify the new metric we define recall and precision for calculating
F-measure. Recall is the number of connectivity components extracted by the
algorithm in a line, out of all connectivity components found in the
corresponding line in ground truth. Precision is the number of correct
connectivity components extracted by the algorithm in a line out of all
connectivity components extracted by the algorithm. Note that some
connectivity components extracted by the algorithm are not found in the ground
truth, and some connectivity components are found in the ground truth but not
extracted by the algorithm. The first type of error is quantified in the precision part of the metric, while the latter type of error is quantified in the recall part of the metric.
Let $G=\\{g_{1},g_{2},g_{3},\dots,g_{m}\\}$ be the set of connected components of a line in the ground truth and $E_{i}\in\\{E_{1},E_{2},E_{3},\dots,E_{n}\\}$ be the extracted lines such that $E_{i}\cap G\neq\emptyset$; then, for this line in the ground truth, recall ($R$) and precision ($P$) are:
$R=\sum\limits_{i}{\frac{|E_{i}\cap G|-1}{|G|-1}}$ (4)
$P=\frac{\sum\limits_{i}{|E_{i}\cap G|-1}}{\sum\limits_{i}{|E_{i}|-1}}$ (5)
The recall definition penalizes over segmentation of a line where an
extraction algorithm assigns multiple labels to the components of a single
line. In contrast, the precision definition penalizes under segmentation where
an extraction algorithm incorrectly assigns a line label to the components
that are not in the ground truth of this line (Figure 6).
Figure 6: Connectivity component based metric penalizes under segmentation by
its precision definition and over segmentation by its recall definition.
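The metric can be computed per ground-truth line as sketched below, where a line is represented by the set of its connected-component ids and the extraction output by a mapping from component id to predicted line label; the handling of single-component lines is an assumption of this sketch.

```python
from collections import defaultdict

def line_recall_precision(gt_components, predicted_labels):
    """Recall and precision (Eqs. 4-5) for one ground-truth line.
    gt_components: ids of the connected components of this line (the set G).
    predicted_labels: dict mapping component id -> label assigned by the algorithm."""
    G = set(gt_components)
    extracted = defaultdict(set)
    for comp, label in predicted_labels.items():
        extracted[label].add(comp)
    # Extracted lines E_i that intersect G.
    E_list = [E for E in extracted.values() if E & G]

    if len(G) <= 1:
        # Single-component line: only the beginning-of-line connectivity remains.
        hit = 1.0 if E_list else 0.0
        return hit, hit

    recall = sum(len(E & G) - 1 for E in E_list) / (len(G) - 1)
    num = sum(len(E & G) - 1 for E in E_list)
    den = sum(len(E) - 1 for E in E_list)
    precision = num / den if den > 0 else 0.0
    return recall, precision

def f_measure(recall, precision):
    total = recall + precision
    return 0.0 if total == 0 else 2 * recall * precision / total
```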
## III Dataset
Although several benchmark datasets [20, 21, 22] of handwritten document
images are available, a challenging document dataset is absent. We collected a
set of challenging document images from the Islamic Heritage Project (IHP),
Harvard. This dataset is publicly available (https://www.cs.bgu.ac.il/ vml/).
The challenging dataset contains 30 pages from two different manuscripts. It
is written in Arabic language and contains 2732 text lines where a
considerable amount of them are multi-directed, multi-skewed or curved. Ground
truth where text lines were labeled manually by line masks is also available
in the dataset.
## IV Results
We tested the proposed system on the new challenging handwritten document
dataset. In each fold we trained the FCN on 50,000 patches randomly cropped from 20 pages, validated on 6,000 patches randomly cropped from 5 pages, and tested on 5 whole pages using a sliding window. Predicted line mask images were then
post-processed with $N=10$ and $\epsilon=0.2$. Extracted text lines were
evaluated using the new metric to calculate the F-measure.
The entire training took around 9 days. Visualization of the first convolutional layer filters shows that the network has learned and the filters have converged (Figure 7). The model achieved $89\%$ training accuracy and $88\%$ validation accuracy on average. Two characteristics of the dataset keep the model from overfitting to the training set. First, it contains two manuscripts with 6 and 24 pages, and the manuscript with 6 pages caused most of the errors. Second, although the dataset contains a considerable amount of multi-skewed, multi-directed and curved lines, they spatially cover a smaller area due to their smaller font size. This leads to fewer random patches with skewed or curved lines relative to the number of random patches with regular lines.
Figure 7: Visualization of the filters in the first convolutional layer.
Table I shows the performance of our method compared with the method of Cohen et al. [9]. Their approach achieved outstanding results on the ICDAR2009 [20] and ICDAR2013 [21] datasets. We ran their publicly available code (http://www.cs.bgu.ac.il/ rafico/LineExtraction.zip) on the challenging handwritten dataset.
TABLE I: Comparison with the method of Cohen et al.
Method | Recall | Precision | F-measure
---|---|---|---
Proposed | 0.82 | 0.78 | 0.80
Cohen et al. [9] | 0.74 | 0.60 | 0.66
Figure 8: Sample image of ground truth and corresponding outputs of Cohen et al. [9] and FCN. Lower precision values show that both methods tend to under segment. Most errors of the FCN method occur at curved areas, whereas most errors of the method of Cohen et al. occur at the main text areas.
Our method outperforms the method of Cohen et al. in terms of both recall and precision. Both methods have lower precision values than recall values. This demonstrates that most of their errors are due to wrongly connected lines in their output; therefore both methods tend to under segment more than over segment. We have noticed that in the output of our method, wrongly connected lines mostly crop up at the curved areas, in contrast to the output of Cohen et al., where the wrongly connected lines mostly crop up at the main text areas. The former was a result of the small number of training patches with curved lines: curved lines can be long, but their curved part covers a relatively small spatial area, namely one or two corner parts of a page. The latter was a result of the small number of main text lines relative to the number of side text lines in a page, where the average height of text lines converges to the height of side text lines. Therefore the method of Cohen et al., which runs according to the average height of text lines, has most of its errors in the main text areas. Figure 8 shows some qualitative results for the latter and the former types of errors on the challenging dataset.
## V Conclusion
This paper presents a dataset of challenging handwritten documents and a method for their text line segmentation using an FCN. Line mask labeling is less cumbersome for challenging handwritten documents and is a proper way to prepare FCN training data. We have also defined a new evaluation metric based on the concept of connectivity components. This metric is sensitive to both over and under segmentation. The new metric is used to validate the proposed method on the challenging handwritten dataset. As future work, performance on curved text lines can be improved by augmenting patches with curved lines.
## Acknowledgment
The authors would like to thank the support of the Frankel Center for Computer
Science at Ben-Gurion University of the Negev.
## References
* [1] J. Ha, R. M. Haralick, and I. T. Phillips, “Document page decomposition by the bounding-box project,” in _Document Analysis and Recognition, 1995., Proceedings of the Third International Conference on_ , vol. 2. IEEE, 1995, pp. 1119–1122.
* [2] R. Manmatha and N. Srimal, “Scale space technique for word segmentation in handwritten documents,” _Lecture notes in computer science_ , pp. 22–33, 1999.
* [3] M. Arivazhagan, H. Srinivasan, and S. Srihari, “A statistical approach to handwritten line segmentation,” _Document Recognition and Retrieval XIV, Proceedings of SPIE, San Jose, CA_ , pp. 6500T–1, 2007.
* [4] I. Bar-Yosef, N. Hagbi, K. Kedem, and I. Dinstein, “Line segmentation for degraded handwritten historical documents,” in _Document Analysis and Recognition, 2009. ICDAR’09. 10th International Conference on_. IEEE, 2009, pp. 1161–1165.
* [5] N. Ouwayed and A. Belaïd, “A general approach for multi-oriented text line extraction of handwritten documents,” _International Journal on Document Analysis and Recognition_ , pp. 1–18, 2012.
* [6] K. Y. Wong, R. G. Casey, and F. M. Wahl, “Document analysis system,” _IBM journal of research and development_ , vol. 26, no. 6, pp. 647–656, 1982.
* [7] A. Alaei, U. Pal, and P. Nagabhushan, “A new scheme for unconstrained handwritten text-line segmentation,” _Pattern Recognition_ , vol. 44, no. 4, pp. 917–928, 2011.
* [8] S. S. Bukhari, F. Shafait, and T. M. Breuel, “Script-independent handwritten textlines segmentation using active contours,” in _Document Analysis and Recognition, 2009. ICDAR’09. 10th International Conference on_. IEEE, 2009, pp. 446–450.
* [9] R. Cohen, I. Dinstein, J. El-Sana, and K. Kedem, “Using scale-space anisotropic smoothing for text line extraction in historical documents,” in _International Conference Image Analysis and Recognition_. Springer, 2014, pp. 349–358.
* [10] B. Moysset, C. Kermorvant, C. Wolf, and J. Louradour, “Paragraph text segmentation into lines with recurrent neural networks,” in _Document Analysis and Recognition (ICDAR), 2015 13th International Conference on_. IEEE, 2015, pp. 456–460.
* [11] J. Pastor-Pellicer, M. Z. Afzal, M. Liwicki, and M. J. Castro-Bleda, “Complete system for text line extraction using convolutional neural networks and watershed transform,” in _Document Analysis Systems (DAS), 2016 12th IAPR Workshop on_. IEEE, 2016, pp. 30–35.
* [12] Q. N. Vo and G. Lee, “Dense prediction for text line segmentation in handwritten document images,” in _Image Processing (ICIP), 2016 IEEE International Conference on_. IEEE, 2016, pp. 3264–3268.
* [13] G. Renton, C. Chatelain, S. Adam, C. Kermorvant, and T. Paquet, “Handwritten text line segmentation using fully convolutional network,” in _Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on_ , vol. 5. IEEE, 2017, pp. 5–9.
* [14] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 3431–3440.
* [15] T. Grüning, R. Labahn, M. Diem, F. Kleber, and S. Fiel, “Read-bad: A new dataset and evaluation scheme for baseline detection in archival documents,” _arXiv preprint arXiv:1705.03311_ , 2017.
* [16] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [17] B. Barakat and J. El-Sana, “Binarization free layout analysis for arabic historical documents using fully convolutional networks,” in _Arabic Script Analysis and Recognition (ASAR), 2nd International Workshop_. IEEE, 2018, pp. 26–30.
* [18] I. Bar-Yosef, I. Beckman, K. Kedem, and I. Dinstein, “Binarization, character extraction, and writer identification of historical hebrew calligraphy documents,” _International Journal of Document Analysis and Recognition (IJDAR)_ , vol. 9, no. 2-4, pp. 89–99, 2007.
* [19] A. W. Fitzgibbon, R. B. Fisher _et al._ , “A buyer’s guide to conic fitting,” _DAI Research paper_ , 1996.
* [20] B. Gatos, N. Stamatopoulos, and G. Louloudis, “Icdar2009 handwriting segmentation contest,” _International Journal on Document Analysis and Recognition (IJDAR)_ , vol. 14, no. 1, pp. 25–33, 2011.
* [21] N. Stamatopoulos, B. Gatos, G. Louloudis, U. Pal, and A. Alaei, “Icdar 2013 handwriting segmentation contest,” in _Document Analysis and Recognition (ICDAR), 2013 12th International Conference on_. IEEE, 2013, pp. 1402–1406.
* [22] M. Diem, F. Kleber, S. Fiel, T. Grüning, and B. Gatos, “cbad: Icdar2017 competition on baseline detection,” in _Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on_ , vol. 1. IEEE, 2017, pp. 1355–1360.
Approximation of the fixed point of the product of two operators in Banach
algebras with applications to some functional equations
Khaled Ben Amara1, Maria Isabel Berenguer2,∗ Aref Jeribi3
(1,3) Department of Mathematics, Faculty of Sciences of Sfax, University of Sfax, Road Soukra Km 3.5, B.P. 1171, 3000 Sfax, Tunisia.
(2) Department of Applied Mathematics and Institute of Mathematics (IEMath-
GR), E.T.S. de Ingenieria de Edificación, University of Granada, Granada,
Spain.
(∗) Corresponding author.
e-mails: (1) <EMAIL_ADDRESS>, (2) <EMAIL_ADDRESS>, (3) <EMAIL_ADDRESS>
Abstract. In this paper, the existence and uniqueness of the fixed point of the product of two nonlinear operators in a Banach algebra are discussed. In addition, an approximation method for the fixed point of hybrid nonlinear equations in Banach algebras is established. This method is applied to two different types of functional equations. Moreover, to illustrate the applicability of our results, we give some numerical examples.
Keywords: Banach algebras, Fixed point theory, Integro-differential operators,
Schauder Bases.
AMS Classification: 65L03, 65R20, 47H10, 47G20.
## 1 Introduction
Many phenomena in physics, chemistry, mechanics, electricity, and so on, can be formulated by using the following nonlinear differential equation with a nonlocal initial condition of the form:
$\left\\{\begin{array}[]{rl}\displaystyle\frac{d}{dt}\left(\frac{x(t)}{f(t,x(t))}\right)&=g(t,x(t)),t\in
J,\\\ \\\ x(0)&=\Gamma(x),\end{array}\right.$ (P1)
where $\Gamma$ is a mapping from $C(J)$ into $\mathbb{R}$ which represents the
nonlocal initial condition of the considered problem, see [12, 19]. The
nonlocal condition $x(0)=\Gamma(x)$ can be more descriptive in physics, with better effect, than the classical initial condition $x(0)=x_{0}$ (see, e.g. [10, 11, 13, 19]). In the latter case, i.e. $x(0)=x_{0},$ the existence of solutions to problem (P1) has been studied by Dhage [17] and O’Regan [26]. It is therefore of interest to discuss and to approximate the solution of (P1) with a nonlocal initial condition under suitable assumptions.
Similarly, another class of nonlinear equations is frequently used to describe many phenomena in various fields of applied science such as physics, control theory, chemistry, biology, and so forth (see [3], [15], [23] and [24]). This class is generated by equations of the form:
$x(t)=f(t,x(\sigma(t)))\cdot\left[q(t)+\displaystyle\int_{0}^{\eta(t)}K(t,s,x(\tau(s)))ds\right],t\in
J.$ (P2)
Both (P1) and (P2) can be interpreted as fixed point problems in which the equation involved is a hybrid equation of the type
$x=Ax\cdot Bx.$ (P3)
A hybrid fixed point theorem for (P3) was proved by Dhage in [14], and since then several extensions and generalizations of this fixed point result have been obtained; see [16, 18] and the references therein. These results can be used to establish the existence and uniqueness of solutions. Although the explicit calculation of the fixed point is possible only in some simple cases, these results are among the most powerful tools for approximating the fixed point computationally and for developing numerical methods that approximate the solution of these equations.
In Banach spaces, several works have developed numerical techniques to approximate the solutions of systems of integro-differential equations using different methods, such as the Chebyshev polynomial method [1], the parameterization method [20], wavelet methods [22], secant-like methods [2], a collocation method in combination with operational matrices of Bernstein polynomials [25], the variational iteration method [27], etc. A combination of a fixed point result and a Schauder basis in a Banach space has been used in [4, 5, 6, 7] to solve numerically systems of integral and integro-differential equations. The advantages of this method over other numerical methods are its simplicity to implement on a computer and the fact that the approximating functions are sums of integrals of piecewise univariate and bivariate polynomials with coefficients that are easy to calculate.
Since Banach algebras represent a practical framework for equations such as (P1) and (P2), and in general (P3), the purpose of this paper is twofold. Firstly, to present, under suitable conditions, a method to approximate the fixed point of a hybrid equation of type (P3) by means of the product and composition of operators defined in a Banach algebra. Secondly, to develop and apply this method to obtain an approximation of the solutions of (P1) and (P2).
The structure of this paper is as follows: in section 2 we present some
definitions and auxiliary results which will be needed in the sequel; in
section 3 we derive an approximation method for the fixed point of the hybrid
equation (P3); in sections 4 and 5, we apply our results to prove the existence and uniqueness of a solution of (P1) and (P2), give an approximation method for these solutions and, moreover, present some numerical examples to illustrate the applicability of our results.
## 2 Analytical tools
In this section, we provide some concepts and results that we will need in the
following sections. The first analytical tool to be used comes from the theory
of the fixed point.
###### Definition 2.1
A mapping $A:X\longrightarrow X$ is said to be $\mathcal{D}$-Lipschitzian, if
there exists a continuous nondecreasing function
$\phi:\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}$ such that
$\|Ax-Ay\|\leq\phi(\|x-y\|)$
for all $x,y\in X$, with $\phi(0)=0$. The mapping $\phi$ is called the $\mathcal{D}$-function associated with $A$. If $\phi(r)<r$ for $r>0,$ the mapping
$A$ is called a nonlinear contraction on $X$. $\hfill\diamondsuit$
###### Remark 2.1
The class of $\mathcal{D}$-Lipschitzian mappings on $X$ contains the class of Lipschitzian mappings on $X$; indeed, if $\phi(r)=\alpha\,r$ for some $\alpha>0$, then $A$ is called a Lipschitzian mapping with Lipschitz constant $\alpha$, or an $\alpha$-Lipschitzian mapping. When $0\leq\alpha<1,$ we say that $A$ is a contraction. $\hfill\diamondsuit$
The Banach fixed point theorem ensures that every contraction operator $A$ on
a complete metric space $X$ has a unique fixed point $\tilde{x}\in X,$ and the
sequence $\\{A^{n}x,n\in\mathbb{N}\\}$ converges to $\tilde{x}.$ One of the
most useful generalizations of the Banach fixed point principle is the following result, due to Boyd and Wong [8].
###### Theorem 2.1
Let $(X,d)$ be a complete metric space, and let $A:X\to X.$ Assume that there
exists a continuous function $\varphi:[0,\infty)\to[0,\infty)$ such that
$\varphi(r)<r$ if $r>0,$ and
$d(A(x),A(y))\leq\varphi(d(x,y)),\ \ \forall x,y\in X.$
Then $A$ has a unique fixed point $\tilde{x}\in X.$ Moreover, for any $x\in
X,$ the sequence $x_{n}=A^{n}(x)$ converges to $\tilde{x}.$
###### Remark 2.2
The operator $A(x)=x-x^{2},$ which maps $X=[0,1]$ into itself and possesses the unique fixed point $x=0,$ does not satisfy the assumptions of the contraction principle (the smallest possible $k$ equals $1$), whereas it satisfies those of Theorem 2.1 with $\varphi(r)=r-r^{2}.$
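A quick numerical illustration of this remark (a sketch of ours, not taken from the original text): iterating $A(x)=x-x^{2}$ from any starting point in $(0,1)$ produces a sequence converging, although only sublinearly, to the unique fixed point $0.$

```python
# Picard iteration for A(x) = x - x^2 on [0, 1] (Remark 2.2):
# A is a nonlinear contraction with phi(r) = r - r^2, so by Theorem 2.1
# the iterates A^n(x) converge to the unique fixed point x = 0.
def A(x):
    return x - x * x

x = 0.5
for n in range(10_000):
    x = A(x)
print(x)   # about 1e-4; convergence is sublinear, roughly like 1/n
```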
On the other hand, Schauder bases will constitute the second essential tool.
A biorthogonal system in a Banach space $E$ is a system
$\\{(\tau_{n},\xi_{n}),n\geq 1\\}$ of $E\times E^{*},$ where $E^{*}$ denotes
the topological dual space of $E.$ Moreover, $\\{(\tau_{n},\xi_{n}),n\geq
1\\}$ is said to be a fundamental biorthogonal system if
$\overline{span}\\{\tau_{n}\\}=E.$ Now, a sequence
$\\{\tau_{n},n\in\mathbb{N}\\}\subset E$ defines a Schauder basis of $E$ if,
for every $x\in E,$ there is a unique sequence $(a_{n})_{n}\subset\mathbb{R}$
such that
$x=\sum_{n\geq 1}a_{n}\tau_{n}.$
This produces the concept of the canonical sequence of finite dimensional
projections $P_{n}:E\to E,$ defined by the formula
$P_{n}\left(\sum_{k\geq 1}a_{k}\tau_{k}\right)=\sum_{k=1}^{n}a_{k}\tau_{k},$
and the associated sequence of coordinate functionals $\tau_{n}^{*}\in E^{*}$
defined by the formula
$\tau^{*}_{n}\left(\sum_{k\geq 1}a_{k}\tau_{k}\right)=a_{n}.$
Note that a Schauder basis is always a fundamental biorthogonal system, under
the interpretation of the coordinate functionals as biorthogonal functionals.
Moreover, in view of the Baire category theorem [9], for all $n\geq 1,$
$\tau_{n}^{*}$ and $P_{n}$ are continuous. This yields, in particular, that
$\lim_{n\rightarrow\infty}\|P_{n}(x)-x\|=0.$
The above-mentioned notions play an important role in approximating the solution of different integral and integro-differential equations (see [4, 5, 6]).
## 3 The existence, uniqueness and approximation of a fixed point of the
product of two operators in Banach algebras.
Based on the Boyd-Wong theorem, we establish the following fixed point result
for the product of two nonlinear operators in Banach algebras.
###### Theorem 3.1
Let $X$ be a nonempty closed convex subset of a Banach algebra $E.$ Let
$A,B:X\to E$ be two operators satisfying the following conditions:
$(i)$ $A$ and $B$ are $\mathcal{D}$-lipschitzian with $\mathcal{D}$-functions
$\varphi$ and $\psi$ respectively,
$(ii)$ $A(X)$ and $B(X)$ are bounded,
$(iii)$ $Ax\cdot Bx\in X,$ for all $x\in X.$
Then, there is a unique point $\tilde{x}\in X$ such that $A\tilde{x}\cdot
B\tilde{x}=\tilde{x}$ and the sequence $\\{(A\cdot B)^{n}x\\},x\in X,$
converges to $\tilde{x}$ provided that
$\|A(X)\|\psi(r)+\|B(X)\|\varphi(r)<r,r>0.$ $\hfill\diamondsuit$
Proof. Let $x,y\in X.$ We have
$\begin{array}[]{rcl}\displaystyle\|Ax\cdot Bx-Ay\cdot By\|&\leq&\|Ax\cdot(Bx-
By)\|+\|(Ax-Ay)\cdot By\|\\\ \\\ &\leq&\|Ax\|\,\|Bx-By\|+\|By\|\,\|Ax-Ay\|\\\
\\\ &\leq&\|A(X)\|\,\psi(\|x-y\|)+\|B(X)\|\,\varphi(\|x-y\|).\end{array}$
This implies that $A\cdot B$ defines a nonlinear contraction with
$\mathcal{D}$-function
$\phi(r)=\|A(X)\|\,\psi(r)+\|B(X)\|\,\varphi(r),\ r>0.$
Applying the Boyd-Wong fixed point theorem [8], we obtain the desired result.
$\Box$
Boyd-Wong’s fixed-point theorem allows us to express the fixed point of
$A\cdot B$ as the limit of the sequence of functions $\\{(A\cdot
B)^{n}(x),n\in\mathbb{N}\\},$ with $x\in X.$ Obviously, if it were possible to
explicitly calculate, for each iteration, the expression $(A\cdot B)^{n}(x),$
then for each $n$ we would have an approximation of the fixed point. But, as a
practical matter, such an explicit calculation is only possible in very
particular cases. For this reason, we need to construct another approximation
of the fixed point which is simple to calculate in practice. Therefore, we
need the following Lemmas. The proofs of these Lemmas are similar to those of
Lemma 1 and Lemma 2 in [7].
###### Lemma 3.1
Let $X$ be a nonempty closed convex subset of a Banach algebra $E$ and let
$A,B:X\to E$ and $\phi:\mathbb{R}^{+}\to\mathbb{R}^{+}$ be a continuous
nondecreasing function such that for all $n\geq 1,$
$\left\|(A\cdot B)^{n}x-(A\cdot B)^{n}y\right\|\leq\phi^{n}(\|x-y\|).$
Let $x\in X$ and $T_{0},T_{1},\ldots,T_{m}:E\to E,$ with $T_{0}\equiv I.$ Then
$\begin{array}[]{rcl}\left\|(A\cdot B)^{m}x-T_{m}\circ\ldots\circ
T_{1}x\right\|&\leq&\displaystyle\sum_{p=1}^{m-1}\phi^{m-p}\left(\left\|A\cdot
B\circ T_{p-1}\circ\ldots\circ T_{1}x-T_{p}\circ\ldots\circ
T_{1}x\right\|\right)\\\ \\\ &&+\left\|A\cdot B\circ T_{m-1}\circ\ldots\circ
T_{1}x-T_{m}\circ\ldots\circ T_{1}x\right\|.\end{array}$
$\hfill\diamondsuit$
###### Lemma 3.2
Let $X$ be a nonempty closed convex subset of a Banach algebra $E.$ Let
$A,B:X\to E$ be two $\mathcal{D}$-Lipschitzian operators with
$\mathcal{D}$-functions $\varphi$ and $\psi,$ respectively, and $A\cdot B$
maps $X$ into $X.$ Moreover, suppose that
$\phi(r):=\|A(X)\|\psi(r)+\|B(X)\|\varphi(r)<r,r>0.$
Let $\tilde{x}$ be the unique fixed point of $A\cdot B,$ $x\in X,$
$\varepsilon>0,$ $n\in\mathbb{N}$ such that
$\left\|(A\cdot B)^{n}x-T_{n}\circ\ldots\circ
T_{1}x\right\|\leq\frac{\varepsilon}{2},$
then
$\left\|\tilde{x}-T_{n}\circ\ldots\circ T_{1}x\right\|\leq\varepsilon.$
$\hfill\diamondsuit$
Taking into account the above lemmas, in order to approximate the solutions of problems (P1) and (P2) we will begin with an initial function $x_{0}\in X$ and construct a sequence of operators $\left\\{S_{n},n\in\mathbb{N}\right\\}$ so as to obtain successive approximations $T_{n}\circ\ldots\circ T_{1}(x_{0})$ of the fixed point $\tilde{x}$ of the product $A\cdot B,$ following the scheme:
$\begin{array}[]{ccc}x_{0}&&\\\ \downarrow&&\\\ (A\cdot
B)(x_{0})&\approx&T_{1}(x_{0})=A(x_{0})\cdot S_{1}(x_{0})\\\
\downarrow&&\downarrow\\\ (A\cdot B)^{2}(x_{0})&\approx&T_{2}\circ
T_{1}x_{0}=(A\cdot S_{2})\circ T_{1}(x_{0})\\\ \vdots&\vdots&\vdots\\\ \\\
\vdots&\vdots&\vdots\\\ \downarrow&&\downarrow\\\ (A\cdot
B)^{n}(x_{0})&\approx&T_{n}\circ\ldots\circ T_{1}(x_{0})=(A\cdot S_{n})\circ
T_{n-1}\circ\ldots\circ T_{1}(x_{0})\approx\tilde{x}\end{array}$
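In code, the scheme above can be sketched as follows. This is only a structural illustration of ours: functions are represented by their values on a grid, and the names `A`, `B` and `project` are placeholders for the operators $A$, $B$ and the finite-rank (Schauder-basis) approximation that defines $S_{p}$. Note that in the concrete constructions of Sections 4 and 5 the projection is applied to the integrand inside $B$, not to $B(x)$ itself.

```python
import numpy as np

def projected_iterates(A, B, project, x0, m):
    """Compute T_m o ... o T_1(x0) with T_p(x) = A(x) * S_p(x),
    where S_p(x) = project(B(x), p) stands for a finite-rank
    (Schauder-basis) approximation of B(x)."""
    x = np.asarray(x0, dtype=float)
    for p in range(1, m + 1):
        x = A(x) * project(B(x), p)
    return x   # approximates the fixed point of A·B (Lemmas 3.1 and 3.2)
```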
## 4 Nonlinear differential problems (P1)
In this section we focus our attention in the following nonlinear differential
equation with a nonlocal initial condition:
$\left\\{\begin{array}[]{rl}\displaystyle\frac{d}{dt}\left(\frac{x(t)}{f(t,x(t))}\right)&=g(t,x(t)),t\in
J,\\\ \\\ x(0)&=\Gamma(x),\end{array}\right.$ (P1)
where $J:=[0,\rho],$ $f:J\times\mathbb{R}\to\mathbb{R}\setminus\\{0\\},$
$g:J\times\mathbb{R}\to\mathbb{R}$ and $\Gamma:C(J)\to\mathbb{R}.$ Here $C(J)$
is the space of all continuous functions from $J$ into $\mathbb{R}$ endowed
with the norm $\|\cdot\|_{\infty}=\sup_{t\in J}|x(t)|.$
This equation will be studied under the following assumptions:
$(i)$ The partial mappings $t\mapsto f(t,x),$ $t\mapsto g(t,x)$ are continuous
and the mapping $\Gamma$ is $L_{\Gamma}$-Lipschitzian.
$(ii)$ There exist $r>0$ and two nondecreasing, continuous functions
$\varphi,\psi:\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}$ such that
$\left|f(t,x)-f(t,{y})\right|\leq\alpha(t)\varphi(|x-y|),t\in J,\hbox{ and
}x,y\in\mathbb{R}\hbox{ with }|x|,|y|\leq r,$
and
$\left|g(t,x)-g(t,{y})\right|\leq\gamma(t)\psi(|x-y|),t\in J\text{ and
}x,y\in\mathbb{R}\hbox{ with }|x|,|y|\leq r.$
$(iii)$ There is a constant $\delta>0$ such that $\sup_{x\in\mathbb{R},|x|\leq
r}|f(0,x)|^{-1}\leq\delta.$
### 4.1 The existence and uniqueness of a solution to problem (P1).
In this subsection, we prove the existence and the uniqueness of a solution to
the functional differential problem (P1).
###### Theorem 4.1
Assume that the assumptions $(i)$-$(iii)$ hold. If
$\displaystyle\displaystyle\left\\{\begin{array}[]{lll}\displaystyle
M_{F}\delta
L_{\Gamma}t+\left(M_{F}\delta^{2}\alpha(0)\left(L_{\Gamma}r+\Gamma(0)\right)+M_{G}\|\alpha\|_{\infty}\right)\varphi(t)+M_{F}\|\gamma(\cdot)\|_{L^{1}}\psi(t)<t,\
t>0,\\\ \\\ \displaystyle M_{F}M_{G}\leq r,\end{array}\right.$
where $r$ is defined in the assumption $(ii),$ then the nonlinear differential
problem (P1) has a unique solution in ${B}_{r}.$
Proof. Let
$\Omega:=\\{x\in C(J);\|x\|\leq r\\}.$
Here the constant $r$ is defined in $(ii).$ Observe that $\Omega$ is a non-empty, closed, convex and bounded subset of $C(J),$ and the problem of the
existence of a solution to (P1) can be formulated in the following fixed point
problem $Fx\cdot Gx=x,$ where $F,G$ are given for $x\in C(J)$ by
$\displaystyle\left\\{\begin{array}[]{ll}(Fx)(t)&=\displaystyle f(t,x(t))\\\
\\\
(Gx)(t)&=\left[\displaystyle\frac{1}{f(0,x(0))}\Gamma(x)+\displaystyle\int_{0}^{t}g(s,x(s))ds\right],t\in
J.\end{array}\right.$ (4.5)
Let $x\in\Omega$ and $t,t^{\prime}\in J.$ Since $f$ is $\mathcal{D}$-Lipschitzian with respect to the second variable and continuous with respect to the first variable, by using the inequality
$\begin{array}[]{rcl}\displaystyle|f(t,x(t))-f(t^{\prime},x(t^{\prime}))|&\leq&\displaystyle|f(t,x(t))-f(t^{\prime},x(t))|+|f(t^{\prime},x(t))-f(t^{\prime},x(t^{\prime}))|,\end{array}$
we can show that $F$ maps $\Omega$ into $C(J).$
Now, let us claim that $G$ maps $\Omega$ into $C(J).$ In fact, let
$x\in\Omega$ and $t,t^{\prime}\in J$ be arbitrary. Taking into account that
$t\mapsto g(t,x)$ is a continuous mapping, it follows from assumption $(ii)$
that
$\begin{array}[]{rcl}\displaystyle|G(x)(t)-G(x)(t^{\prime})|&\leq&\displaystyle\int_{t^{\prime}}^{t}|g(s,x(s))-g(s,0)|ds+(t-t^{\prime})\|g(\cdot,0)\|_{\infty}\\\
\\\
&\leq&\displaystyle(t-t^{\prime})\left(\|\gamma\|_{\infty}\psi(r)+\|g(\cdot,0)\|_{\infty}\right).\end{array}$
This proves the claim. Our strategy is to apply Theorem 3.1 to show the
existence and the uniqueness of a fixed point for the product $F\cdot G$ in
$\Omega$ which in turn is a continuous solution for problem (P1).
For this purpose, we will claim, first, that $F$ and $G$ are
$\mathcal{D}$-lipschitzian mappings on $\Omega.$ The claim regarding $F$ is
clear in view of assumption $(ii),$ that is $F$ is $\mathcal{D}$-lipschitzian
with $\mathcal{D}$-function $\Phi$ such that
$\Phi(t)=\|\alpha\|_{\infty}\varphi(t),t\in J.$
We now verify the claim for $G.$ Let $x,y\in\Omega,$ and let $t\in J.$ By
using our assumptions, we obtain
$\begin{array}[]{rcl}\displaystyle\left|G(x)(t)-G(y)(t)\right|&=&\left|\displaystyle\frac{1}{f(0,x(0))}\Gamma(x)-\frac{1}{f(0,y(0))}\Gamma(y)+\int_{0}^{t}g(s,x(s))-g(s,y(s))ds\right|\\\
\\\
&\leq&\displaystyle\frac{L_{\Gamma}}{|f(0,x(0))|}\|x-y\|+\frac{\alpha(0)}{|f(0,x(0))f(0,y(0))|}\left(L_{\Gamma}r+\Gamma(0)\right)\varphi(\|x-y\|)\\\
&&+\displaystyle\int_{0}^{t}|\gamma(s)|\psi(|x(s)-y(s)|)ds\\\ \\\
&\leq&\delta
L_{\Gamma}\|x-y\|+\delta^{2}\alpha(0)\left(L_{\Gamma}r+\Gamma(0)\right)\varphi(\|x-y\|)+\|\gamma(\cdot)\|_{L^{1}}\psi(\|x-y\|).\end{array}$
Taking the supremum over $t,$ we obtain that $G$ is $\mathcal{D}$-lipschitzian
with $\mathcal{D}$-function $\Psi$ such that
$\Psi(t)=\delta
L_{\Gamma}t+\delta^{2}\alpha(0)\left(L_{\Gamma}r+\Gamma(0)\right)\varphi(t)+\|\gamma(\cdot)\|_{L^{1}}\psi(t),t\in
J.$
On the other hand, bearing in mind assumption $(i),$ by using the above
discussion we can see that $F(\Omega)$ and $G(\Omega)$ are bounded with bounds
$M_{F}:=\|\alpha\|_{\infty}\varphi(r)+\|f(\cdot,0)\|_{\infty}\hbox{ and
}M_{G}:=\delta(L_{\Gamma}r+|\Gamma(0)|)+\|\gamma\|_{\infty}\rho\psi(r)+\rho\|g(\cdot,0)\|_{\infty}.$
Taking into account the estimate $M_{F}M_{G}\leq r,$ we obtain that $F\cdot G$
maps $\Omega$ into $\Omega.$
Now, applying Theorem 3.1, we infer that (P1) has one and only one solution
$\tilde{x}$ in $\Omega,$ and for each $x\in\Omega$ we have
$\displaystyle\lim_{n\rightarrow\infty}(F\cdot G)^{n}x=\tilde{x}.$
$\Box$
Notice that by an induction argument we can show that
$\displaystyle\displaystyle\|(F\cdot G)^{n}x-(F\cdot
G)^{n}y\|\leq\Theta^{n}(\|x-y\|),$ (4.6)
where $\Theta(t):=M_{F}\Psi(t)+M_{G}\Phi(t),t\geq 0.$
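For completeness, the first step of this induction is exactly the estimate obtained in the proof of Theorem 3.1 applied to $F$ and $G$:
$\|(F\cdot G)x-(F\cdot G)y\|\leq M_{F}\Psi(\|x-y\|)+M_{G}\Phi(\|x-y\|)=\Theta(\|x-y\|),\ x,y\in\Omega;$
since $\Theta$ is nondecreasing, applying this estimate to $(F\cdot G)^{n-1}x$ and $(F\cdot G)^{n-1}y$ together with the induction hypothesis gives (4.6).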
### 4.2 Numerical method to approximate the solution of (P1).
In this subsection we find a numerical approximation of the solution to the
nonlinear equation (P1) using a Schauder basis in $C(J).$
First, let us consider a Schauder basis $\\{\tau_{n}\\}_{n\geq 1}$ in $C(J)$
and the sequence of associated projections $\\{\xi_{n}\\}_{n\geq 1}.$ Let
$\left\\{\begin{array}[]{ll}T_{p}:C(J)\longrightarrow C(J)\\\ \\\
x\mapsto\displaystyle
T_{p}(x)(t)=F(x)(t)\left(\displaystyle\frac{1}{f(0,x(0))}\Gamma(x)+\displaystyle\int_{0}^{t}\xi_{n_{p}}(U_{0}(x))(s)ds\right),\end{array}\right.$
where $F:C(J)\longrightarrow C(J)$ such that
$F(x)(t)=f(t,x(t))$
and $U_{0}:C(J)\longrightarrow C(J)$ such that
$U_{0}(x)(s)=g(s,x(s)).$
###### Remark 4.1
$(i)$ For all fixed $p\geq 1,$ the mapping $T_{p}$ maps $\Omega$ into
$\Omega.$
In fact, let $x\in\Omega,$ we have
$\begin{array}[]{rcl}\left|T_{p}(x)(t)\right|&=&\left|F(x)(t)\left(\displaystyle\frac{1}{f(0,x(0))}\Gamma(x)+\displaystyle\int_{0}^{t}\xi_{n_{p}}(U_{0}(x))(s)ds\right)\right|\\\
\\\
&\leq&\left|f(t,x(t))\right|\left(\displaystyle\delta|\Gamma(x)|+\int_{0}^{t}\left|\xi_{n_{p}}(U_{0}(x))(s)\right|ds\right).\end{array}$
Proceeding essentially as in the above subsection and using the fact that
$\xi_{n_{p}}$ is a bounded linear operator on $C(J),$ we get
$\begin{array}[]{rcl}\left|T_{p}(x)(t)\right|&\leq&M_{F}\left[\displaystyle\delta|\Gamma(x)|+\rho\left\|\xi_{n_{p}}\left(U_{0}(x)\right)\right\|\right]\\\
\\\
&\leq&M_{F}\displaystyle\left[\displaystyle\delta(L_{\Gamma}r+|\Gamma(0)|)+\rho\sup_{s\in
J}|g(s,x(s))|\right]\\\ \\\ &\leq&M_{F}M_{G}.\end{array}$
In view of the condition $M_{F}M_{G}\leq r$ of Theorem 4.1, we infer that $T_{p}$ maps $\Omega$ into $\Omega.$
$(ii)$ Item $(i)$ means, in particular, that for all fixed $p\geq 1,$ the
operator $T_{p}\circ\ldots\circ T_{1}$ maps $\Omega$ into
$\Omega.$$\hfill\diamondsuit$
Our objective is to justify that we can choose $n_{1},\ldots,n_{m},$ so that
the operators $T_{1},\ldots,T_{m}$ can be used to obtain an approximation to
the unique solution of equation (P1).
###### Theorem 4.2
Let $\tilde{x}$ be the unique solution to the nonlinear problem (P1). Let
$x\in\Omega$ and $\varepsilon>0,$ then there exists $n\in\mathbb{N}$ such that
$\left\|\tilde{x}-T_{n}\circ\ldots\circ T_{1}x\right\|\leq\varepsilon.$
$\hfill\diamondsuit$
Proof. Let $x\in\Omega$ and $\varepsilon>0.$ For $p\in\\{1,\ldots,m\\},$ we
define $U_{p}:C(J)\to C(J)$ by
$U_{p}(x)(s):=g(s,T_{p}\circ\ldots\circ T_{1}(x)(s)),\ s\in J,x\in C(J)$
and $F_{p}:C(J)\to C(J)$ by
$F_{p}(x)(s):=f\left(s,T_{p}\circ\ldots\circ T_{1}(x)(s)\right),\ s\in J,x\in C(J).$
According to inequality (4.6), in view of Lemma 3.1, we get
$\left\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ T_{1}x\right\|\leq$
$\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\left(\left\|(F\cdot G)\circ
T_{p-1}\circ\ldots\circ T_{1}x-T_{p}\circ\ldots\circ
T_{1}x\right\|\right)+\left\|(F\cdot G)\circ T_{m-1}\circ\ldots\circ
T_{1}x-T_{m}\circ\ldots\circ T_{1}x\right\|.$
Taking into account Remark 4.1-$(i)$ and using similar arguments as in the above subsection, we infer that $\left\|F_{p-1}(x)\right\|$ is bounded, and consequently we get
$\begin{array}[]{rcl}&&\displaystyle\left|(F\cdot G)\circ
T_{p-1}\circ\ldots\circ T_{1}(x)(t)-T_{p}\circ T_{p-1}\circ\ldots\circ
T_{1}(x)(t)\right|\\\ \\\
&\leq&\left|F_{p-1}(x)(t)\left(\displaystyle\int_{0}^{t}g\left(s,T_{p-1}\circ\ldots\circ
T_{1}(x)(s)\right)\,ds-\displaystyle\int_{0}^{t}\xi_{n_{p}}(U_{p-1}(x))(s)\,ds\right)\right|\\\
\\\
&\leq&\left|F_{p-1}(x)(t)\right|\,\displaystyle\int_{0}^{t}\left|\left(\xi_{n_{p}}(U_{p-1}(x))-U_{p-1}(x)\right)(s)\right|\,ds\\\
\\\
&\leq&\rho\left\|F_{p-1}(x)\right\|\,\left\|\xi_{n_{p}}(U_{p-1})(x)-U_{p-1}(x)\right\|.\end{array}$
Then, we obtain
$\left\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ
T_{1}x\right\|\leq\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\left(\rho
M_{F}\,\left\|\xi_{n_{p}}(U_{p-1}(x))-U_{p-1}(x)\right\|\right)+\rho
M_{F}\,\left\|\xi_{n_{m}}(U_{m-1}(x))-U_{m-1}(x)\right\|.$
In view of the convergence property of the projection operators associated with the Schauder basis and the continuity of $\Theta,$ we can find
$n_{1},\ldots,n_{m}\geq 1$ and therefore $T_{1},\ldots,T_{m},$ such that
$\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ T_{1}x\|\leq$
$\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\Big{(}\rho
M_{F}\left\|\xi_{n_{p}}(U_{p-1}(x))-U_{p-1}(x)\right\|\Big{)}+\rho
M_{F}\left\|\xi_{n_{m}}(U_{m-1}(x))-U_{m-1}(x)\right\|\leq\displaystyle\frac{\varepsilon}{2}.$
Now apply Lemma 3.2, in order to get
$\left\|\tilde{x}-T_{m}\circ\ldots\circ T_{1}(x)\right\|<\varepsilon.$
$\Box$
### 4.3 Numerical experiments.
This section is devoted to providing some examples and their numerical results
to illustrate the theorems of the above sections. We will consider $J=[0,1]$
and the classical Faber-Schauder system in $C(J)$ where the nodes are the
naturally ordered dyadic numbers (see [21, 28] for details).
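For readers who wish to reproduce the experiments, recall that the $n$-th partial sum of the Faber-Schauder expansion of $u\in C(J)$ is the piecewise-linear interpolant of $u$ at the first $n$ dyadic nodes. A small sketch of this projection (our own illustration; the names are placeholders) follows.

```python
import numpy as np

def dyadic_nodes(n):
    """First n nodes of the classical Faber-Schauder system on [0, 1],
    in their natural order: 0, 1, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, ..."""
    nodes = [0.0, 1.0]
    k = 1
    while len(nodes) < n:
        nodes.extend((2 * j - 1) / 2 ** k for j in range(1, 2 ** (k - 1) + 1))
        k += 1
    return np.array(nodes[:n])

def faber_schauder_projection(u, n, t):
    """P_n(u) evaluated on the grid t: piecewise-linear interpolation of the
    function u at the first n dyadic nodes (partial sum of the expansion)."""
    nodes = np.sort(dyadic_nodes(n))
    return np.interp(t, nodes, u(nodes))

# With n = 9 the nodes are {j/8 : j = 0,...,8}; with n = 33 they are {j/32},
# which are exactly the values n_1 = ... = n_m used in the tables below.
```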
###### Example 4.3.1
Consider the nonlinear differential equation with a nonlocal initial condition
$\left\\{\begin{array}[]{ll}&\displaystyle\frac{d}{dt}\left(\frac{x(t)}{f(t,x(t))}\right)=\displaystyle
ae^{-x(t)},\ \ t\in J,\\\ \\\ &\displaystyle x(0)=b(\sup_{t\in
J}|x(t)|+3/4)\end{array}\right.$ (4.7)
where $f(t,x)=\displaystyle\frac{b}{1+ae^{-b}t},$ $g(t,x)=\displaystyle
ae^{-x},$ $\Gamma(u)=b\left(\sup_{t\in J}|u(t)|+3/4\right),$ with
$a<1/\log(2).$
Let $R$ be small enough such that $a(\log(2)+R)<1.$ Let $x,y\in[-R,R],$ by an
elementary calculus we can show that
$\begin{array}[]{rcl}\left|f(t,x)-f(t,y)\right|&\leq&\displaystyle\alpha(t)\varphi(|x-y|)\end{array}$
and
$\begin{array}[]{rcl}\left|g(t,x)-g(t,y)\right|&\leq&\displaystyle\gamma(t)\psi(|x-y|)\end{array}$
where $\displaystyle\alpha(t)=\varphi(t)=0,$ $\gamma(t)=ae^{R}(1-e^{-t}),$ and
$\psi(t)=t.$
On the other hand, we have that $\Gamma$ is Lipschitzian with Lipschitz constant $L_{\Gamma}=b,$ and
$\displaystyle\sup_{x,|x|\leq R}[f(0,x)]^{-1}\leq\delta=\frac{1}{b}.$
Applying Theorem 4.1, we obtain that (4.7) has a unique solution in
$\Omega=\left\\{x\in C([0,1]);\|x\|\leq 3/4\right\\},$ when $a$ is small
enough. In fact the solution is $x(t)=b.$ We apply the numerical method for $a=0.1,$ $b=1/4,$ and the initial function $x_{0}(t)=\frac{1}{4}\left(\sqrt{bt}+1\right).$
Table 1. Numerical results for (4.7) with initial
$x_{0}(t)=\frac{1}{4}\left(\sqrt{bt}+1\right)$.
| | $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---|---
$t$ | $x^{*}(t)$ | $m=2$ | $m=4$ | $m=2$ | $m=4$
$0.1$ | $0.25$ | 0.25964068562641207 | 0.2526360625738145 | 0.25779577744548676 | 0.25062384017038686
$0.2$ | $0.25$ | 0.2581608132685836 | 0.2512245431325148 | 0.2576778369861067 | 0.2506151528771704
$0.3$ | $0.25$ | 0.25785705013803817 | 0.25102089532293176 | 0.2575623161293616 | 0.25060665510642743
$0.4$ | $0.25$ | 0.25774159101031774 | 0.25100874582984495 | 0.25744919245075903 | 0.2505983412941664
$0.5$ | $0.25$ | 0.2576285346430475 | 0.2509968386936278 | 0.25733843642963494 | 0.2505902060799007
$0.6$ | $0.25$ | 0.25751784650251447 | 0.2509851672563384 | 0.2572300145734221 | 0.2505822442972077
$0.7$ | $0.25$ | 0.2574094899054699 | 0.25097372508850474 | 0.2571238909612659 | 0.2505744509661791
$0.8$ | $0.25$ | 0.25730342712624404 | 0.2509625059364119 | 0.25702002815700664 | 0.25056682128612107
$0.9$ | $0.25$ | 0.25719961932300417 | 0.25095150376429876 | 0.2569183878122034 | 0.25055935062726176
$1$ | $0.25$ | 0.2570980270442474 | 0.2509407127451644 | 0.25681893107062653 | 0.2505520345235613
| $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---
| $m=2$ | $m=4$ | $m=2$ | $m=4$
$\|x^{*}-\tilde{x}\|_{\infty}$ | $9.90603\times 10^{-3}$ | $2.86369\times 10^{-3}$ | $8.33966\times 10^{-3}$ | $1.0862\times 10^{-3}$
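A simplified sketch of the iteration behind Table 1 is given below. It is our own illustration for problem (4.7) with $a=0.1$ and $b=1/4$: it uses a uniform grid on $J=[0,1]$, replaces the exact integral of the Schauder projection by piecewise-linear interpolation and the trapezoidal rule, and is therefore not expected to reproduce the table values exactly.

```python
import numpy as np

a, b = 0.1, 0.25
t = np.linspace(0.0, 1.0, 257)                            # grid on J = [0, 1]

f = lambda s, x: b / (1.0 + a * np.exp(-b) * s)           # f(t, x) of Example 4.3.1
g = lambda x: a * np.exp(-x)                              # g(t, x) = a e^{-x}
Gamma = lambda x: b * (np.max(np.abs(x)) + 0.75)          # nonlocal condition

def project(u, n_nodes=33):
    """Piecewise-linear interpolation of u at n_nodes dyadic nodes,
    playing the role of the Schauder projection xi_{n_p}."""
    nodes = np.linspace(0.0, 1.0, n_nodes)
    return np.interp(t, nodes, np.interp(nodes, t, u))

def cumtrapz(u):
    return np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))

def T(x):
    """T_p(x)(t) = f(t, x(t)) * ( Gamma(x)/f(0, x(0)) + int_0^t xi(g(., x(.)))(s) ds )."""
    return f(t, x) * (Gamma(x) / f(0.0, x[0]) + cumtrapz(project(g(x))))

x = 0.25 * (np.sqrt(b * t) + 1.0)                         # initial function x_0
for _ in range(4):                                        # m = 4 iterations
    x = T(x)
print(np.max(np.abs(x - 0.25)))                           # error against x*(t) = 0.25
```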
###### Example 4.3.2
Consider the nonlinear differential equation with a nonlocal initial condition
$\left\\{\begin{array}[]{ll}&\displaystyle\frac{d}{dt}\left(\frac{x(t)}{f(t,x(t))}\right)=\displaystyle
a(x(t))^{2},\ \ t\in J,\\\ \\\ &\displaystyle x(0)=1/(4b)\sup_{t\in
J}|x(t)|^{2},\end{array}\right.$ (4.8)
where $f(t,x)=\displaystyle\frac{b(t+1)}{1+\frac{ab^{2}}{3}(x^{3}/b^{3}-1)}$
and $g(t,x)=ax^{2},$ with $ab^{2}<3.$
Let $R>0$ such that $2b\leq R$ and $\frac{a}{3b}(b^{3}+R^{3})<1.$ Let
$x,y\in[-R,R].$ By an elementary calculus we can show that
$\begin{array}[]{rcl}\left|f(t,x)-f(t,y)\right|&\leq&\displaystyle\alpha(t)\varphi(|x-y|)\end{array}$
and
$\begin{array}[]{rcl}\left|g(t,x)-g(t,y)\right|&\leq&\displaystyle\gamma(t)\psi(|x-y|)\end{array}$
where
$\displaystyle\alpha(t)=\frac{a(t+1)R^{2}}{\left(1-\frac{a}{3b}(R^{3}+b^{3})\right)^{2}},$
$\gamma(t)=2aR,$ and $\varphi(t)=\psi(t)=t.$
On the other hand, we have that
$\begin{array}[]{rcl}\displaystyle|\Gamma(u)-\Gamma(v)|&\leq&\displaystyle\frac{R}{2b}\|u-v\|.\end{array}$
Consequently, $\Gamma$ is Lipschitzian with Lipschitz constant $L_{\Gamma}=\frac{R}{2b}.$ It is easy to prove that
$\sup_{x\in\mathbb{R},|x|\leq R}[f(0,x)]^{-1}\leq\delta=aR^{3}/(3b^{2})+1/b.$
Now, applying Theorem 4.1, we obtain that (4.8), with $a$ small enough, has a unique solution in $\Omega=\left\\{x\in C([0,1]);\|x\|\leq 1/2\right\\}.$ We can check that the solution is $x(t)=b(t+1).$
The following table shows the numerical results of the proposed method for
$a=0.05$, $b=1/4$ and $x_{0}(t)=\frac{1}{2}t.$
Table 2. Numerical results for (4.8) with initial $x_{0}(t)=\frac{1}{2}t$.
| | $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---|---
$t$ | $x^{*}(t)$ | $m=2$ | $m=4$ | $m=2$ | $m=4$
$0.1$ | $0.275$ | 0.27417118067351837 | 0.27151545133640886 | 0.2740819659311709 | 0.27145329704728827
$0.2$ | $0.3$ | 0.2990105059004299 | 0.29611673530305527 | 0.2989981055587191 | 0.29613324654650613
$0.3$ | $0.325$ | 0.32391591487977106 | 0.3207837845940706 | 0.3239149335622202 | 0.3208140511167786
$0.4$ | $0.35$ | 0.3488342313971621 | 0.34546352791535867 | 0.3488329357145739 | 0.34549585475483185
$0.5$ | $0.375$ | 0.3737541821287371 | 0.3701445199310059 | 0.3737524860894689 | 0.3701788114857308
$0.6$ | $0.40$ | 0.39867580493805804 | 0.3948268789541488 | 0.39867366683899425 | 0.39486308640853285
$0.7$ | $0.425$ | 0.42359867388137695 | 0.4195107187398104 | 0.4235961144223516 | 0.41954885401447617
$0.8$ | $0.45$ | 0.4485219829977168 | 0.4441962543294659 | 0.448518998226605 | 0.44423629583080837
$0.9$ | $0.475$ | 0.47344474190449987 | 0.4688837174935067 | 0.47344130014978747 | 0.468925600958778
$1$ | $0.5$ | 0.498366567564148 | 0.49357335586512446 | 0.49836264112050677 | 0.4936169655580174
| $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---
| $m=2$ | $m=4$ | $m=2$ | $m=4$
$\|x^{*}-\tilde{x}\|_{\infty}$ | $1.63343\times 10^{-3}$ | $6.42664\times 10^{-3}$ | $1.63736\times 10^{-3}$ | $6.38303\times 10^{-3}$
## 5 Nonlinear integral equations of type (P2).
This section deals with the following nonlinear integral equation:
$x(t)=f(t,x(\sigma(t)))\cdot\left[q(t)+\displaystyle\int_{0}^{\eta(t)}K(t,s,x(\tau(s)))ds\right],t\in
J.$ (P2)
where $\sigma,\tau,\eta:J\to J,$ $f:J\times\mathbb{R}\to\mathbb{R},q\in C(J)$
and $K:J\times J\times\mathbb{R}\to\mathbb{R}.$
More precisely, we prove the existence and the uniqueness of a solution to
equation (P2), and then we provide an approximation method of this solution.
In our consideration, we need the following hypotheses:
$(i)$ The partial mappings $t\mapsto f(t,x)$ and $(t,s)\mapsto K(t,s,x)$ are
continuous.
$(ii)$ There exist $r>0$ and two nondecreasing, continuous functions
$\varphi,\psi:\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}$ such that
$\left|f(t,x)-f(t,{y})\right|\leq\alpha(t)\varphi(|x-y|),t\in J,\hbox{ and
}x,y\in\mathbb{R}\hbox{ with }|x|,|y|\leq r,$
and
$\left|K(t,s,x)-K(t,s,{y})\right|\leq\gamma(t,s)\psi(|x-y|),t,s\in J\text{ and
}x,y\in\mathbb{R}\hbox{ with }|x|,|y|\leq r.$
### 5.1 The existence and uniqueness of a solution to Eq. (P2)
To allow the abstract formulation of equation (P2), we define the following
operators by
$\displaystyle\left\\{\begin{array}[]{ll}(Fx)(t)&=\displaystyle
f(t,x(\sigma(t)))\\\ \\\ (Gx)(t)&=\left[\displaystyle
q(t)+\displaystyle\int_{0}^{\eta(t)}K(t,s,x(\tau(s)))ds\right],t\in
J.\end{array}\right.$ (5.4)
First, we will establish the following result which shows the existence and
uniqueness of a solution.
###### Theorem 5.1
Assume that the assumptions $(i)$-$(ii)$ hold. If
$\displaystyle\displaystyle\left\\{\begin{array}[]{ll}\displaystyle
M_{F}\rho\|\gamma\|_{\infty}\psi(t)+M_{G}\|\alpha\|_{\infty}\varphi(t)<t,\
t>0\\\ \\\ \displaystyle M_{F}M_{G}\leq r,\end{array}\right.$ (5.8)
where
$M_{F}=\|\alpha\|_{\infty}\varphi(r)+\|f(\cdot,0)\|_{\infty}\hbox{ and }M_{G}=\displaystyle\|q(\cdot)\|_{\infty}+\rho\left(\|K(\cdot,\cdot,0)\|_{\infty}+\|\gamma\|_{\infty}\psi(r)\right),$
then the nonlinear integral equation (P2) has a unique solution in ${B}_{r}.$
Proof. Let $\Omega:=\\{x\in C(J);\|x\|_{\infty}\leq r\\}.$ By using similar
arguments to those in the above section, we can show that $F$ and $G$ define
$\mathcal{D}$-lipschitzian mappings from $\Omega$ into $C(J),$ with
$\mathcal{D}$-functions $\|\alpha\|_{\infty}\varphi$ and
$\rho\|\gamma\|_{\infty}\psi,$ respectively. Also it is easy to see that
$F(\Omega)$ and $G(\Omega)$ are bounded with bounds, respectively,
$M_{F}=\|\alpha\|_{\infty}\varphi(r)+\|f(\cdot,0)\|_{\infty}\hbox{ and
}M_{G}=\|q\|_{\infty}+\rho\left(\|\gamma\|_{\infty}\psi(r)+\|K(\cdot,\cdot,0)\|_{\infty}\right).$
Taking into account our assumptions, we deduce that $F\cdot G$ maps $\Omega$
into $\Omega.$ Now, an application of Theorem 3.1 yields that (P2) has one and
only one solution $\tilde{x}$ in $\Omega,$ and for each $x\in\Omega$ we have
$\displaystyle\lim_{n\rightarrow\infty}(F\cdot G)^{n}x=\tilde{x}.$ $\Box$
Moreover, by an induction argument we can obtain
$\displaystyle\displaystyle\|(F\cdot G)^{n}x-(F\cdot
G)^{n}y\|\leq\Theta^{n}(\|x-y\|),$ (5.9)
where
$\Theta(t):=\rho\|\gamma\|_{\infty}M_{F}\psi(t)+\|\alpha\|_{\infty}M_{G}\varphi(t),t\geq
0.$
### 5.2 A Numerical method to approximate the solution to (P2).
In this subsection we provide a numerical approximation of the solution to the
nonlinear equation (P2). Now let us consider a Schauder basis
$\\{\tau_{n}\\}_{n\geq 1}$ in $C(J\times J)$ and the sequence of associated
projections $\\{\xi_{n}\\}_{n\geq 1}.$ Let
$\left\\{\begin{array}[]{ll}T_{p}:C(J)\longrightarrow C(J)\\\ \\\
x\mapsto\displaystyle
T_{p}(x)(t)=F(x)(t)\left({q}(t)+\displaystyle\int_{0}^{\eta(t)}\xi_{n_{p}}(U_{0}(x))(t,s)ds\right),\end{array}\right.$
where $F:C(J)\longrightarrow C(J)$ such that
$F(x)(t)=f(t,x(\sigma(t)))$
and $U_{0}:C(J)\longrightarrow C(J\times J)$ such that
$U_{0}(x)(t,s)=K(t,s,x(\tau(s))).$
###### Remark 5.1
$(i)$ For all fixed $p\geq 1,$ the mapping $T_{p}$ maps $\Omega$ into
$\Omega.$
In fact, let $x\in\Omega,$ we have
$\left|T_{p}(x)(t)\right|=\left|F(x)(t)\left(\displaystyle{q}(t)+\displaystyle\int_{0}^{\eta(t)}\xi_{n_{p}}(U_{0}(x))(t,s)ds\right)\right|\\\
\leq\left|f(t,x(\sigma(t)))\right|\,\left(\displaystyle\left|{q}(t)\right|+\displaystyle\int_{0}^{\eta(t)}\left|\xi_{n_{p}}(U_{0}(x))(t,s)\right|ds\right).$
Proceeding essentially as in the above section and using the fact that
$\xi_{n_{p}}$ is a bounded linear operator on $C(J\times J),$ we get
$\left|T_{p}(x)(t)\right|\leq
M_{F}\left(\left|q(t)\right|+\displaystyle\rho\left\|\xi_{n_{p}}\left(U_{0}(x)\right)\right\|\right)\leq
M_{F}\left(\left\|q\right\|_{\infty}+\displaystyle\rho\sup_{t,s\in
J}|k(t,s,x(\tau(s)))|\right)\\\ \\\ \leq M_{F}M_{G}.$
In view of our assumptions, we infer that $T_{p}$ maps $\Omega$ into $\Omega.$
$(ii)$ Item $(i)$ means, in particular, that for all fixed $p\geq 1,$ the
operator $T_{p}\circ\ldots\circ T_{1}$ maps $\Omega$ into $\Omega.$
Again, our objective is to justify that we can choose $n_{1},\ldots,n_{m},$ so
that the operators $T_{1},\ldots,T_{m}$ can be used to obtain an approximation
of the unique solution to equation (P2).
###### Theorem 5.2
Let $\tilde{x}$ be the unique solution to the nonlinear equation (P2). Let
$x\in\Omega$ and $\varepsilon>0,$ then there exists $n\in\mathbb{N}$ such that
$\left\|\tilde{x}-T_{n}\circ\ldots\circ T_{1}x\right\|\leq\varepsilon.$
$\hfill\diamondsuit$
Proof. Let $x\in\Omega$ and $\varepsilon>0.$ For $p\in\\{1,\ldots,m\\},$ we
define $U_{p}:C(J)\to C(J\times J)$ by
$U_{p}(x)(t,s):=K(t,s,T_{p}\circ\ldots\circ T_{1}(x)(s)),\ t,s\in J,x\in C(J)$
and $F_{p}:C(J)\to C(J)$ by
$F_{p}(x)(s):=f\left(s,T_{p}\circ\ldots\circ T_{1}(x)(s)\right),\ s\in J,x\in
C(J).$
According to Lemma 3.1, we get
$\left\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ T_{1}x\right\|\leq$
$\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\left(\left\|(F\cdot G)\circ
T_{p-1}\circ\ldots\circ T_{1}x-T_{p}\circ\ldots\circ
T_{1}x\right\|\right)+\left\|(F\cdot G)\circ T_{m-1}\circ\ldots\circ
T_{1}x-T_{m}\circ\ldots\circ T_{1}x\right\|.$
Taking into account Remark 5.1, we infer that $\left\|F_{p-1}(x)\right\|$ is
bounded. Proceeding essentially, as in the above section, we get
$\begin{array}[]{rcl}\displaystyle\left|(F\cdot G)\circ
T_{p-1}\circ\ldots\circ T_{1}(x)(t)-T_{p}\circ T_{p-1}\circ\ldots\circ
T_{1}(x)(t)\right|\leq\rho\left\|F_{p-1}(x)\right\|\,\left\|\xi_{n_{p}}(U_{p-1})(x)-U_{p-1}(x)\right\|,\end{array}$
which implies that
$\left\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ T_{1}x\right\|\leq$
$\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\left(\rho
M_{F}\,\left\|\xi_{n_{p}}(U_{p-1})(x)-U_{p-1}(x)\right\|\right)+\rho
M_{F}\,\left\|\xi_{n_{m}}(U_{m-1})(x)-U_{m-1}(x)\right\|.$
In view of the convergence property of the Projection operators associated to
the Schauder basis, we can find $n_{1},\ldots,n_{m}\geq 1$ and therefore
$T_{1},\ldots,T_{m},$ such that
$\|(F\cdot G)^{m}x-T_{m}\circ\ldots\circ T_{1}x\|\leq$
$\displaystyle\sum_{p=1}^{m-1}\Theta^{m-p}\Big{(}\rho
M_{F}\left\|\xi_{n_{p}}(U_{p-1})(x)-U_{p-1}(x)\right\|\Big{)}+\rho
M_{F}\left\|\xi_{n_{m}}(U_{m-1})(x)-U_{m-1}(x)\right\|\\\ \\\
\leq\displaystyle\frac{\varepsilon}{2}.$
Now apply Lemma 3.2, in order to get
$\|\tilde{x}-T_{m}\circ\ldots\circ T_{1}(x)\|<\varepsilon.$
$\Box$
### 5.3 Numerical experiments.
This section is devoted to giving some examples and their numerical results to illustrate the previous results, using the usual Schauder basis in $C([0,1]^{2})$ with the well-known square ordering (see for example [21]).
###### Example 5.3.1
Consider the nonlinear integral equation
$\displaystyle x(t)=\displaystyle
a(t+1)\left[\frac{b}{a}-\frac{b^{2}}{3}\left((t+1)^{3}-1\right)+\int_{0}^{t}(x(s))^{2}ds\right],\
\ \ t\in J.$ (5.10)
Proceeding essentially as in subsection 5.1, equation (5.10) can be written as
a fixed point problem
$x=F(x)\cdot G(x),$
where $F$ and $G$ are defined in (5.4), with $f(t,x)=a(t+1),$
$q(t)=b/a-\frac{b^{2}}{3}\left((t+1)^{3}-1\right)$ and $k(t,s,x)=x^{2}.$
Let $x,y\in[-R,R],$ we have that
$\left|k(t,s,x)-k(t,s,y)\right|\leq\gamma(t,s)\psi(|x-y|)$
where $\displaystyle\gamma(t,s)=2R,$ and $\psi(t)=t.$
An application of Theorem 5.1 yields that (5.10) has a unique solution in $\Omega=\left\\{x\in C([0,1]);\|x\|\leq 3\right\\}$; in fact the solution is $x(t)=b(t+1).$
Using the proposed method with $a=0.1,$ $b=0.1$ and $x_{0}(t)=t^{2},$ we
obtain the following table:
Table 3. Numerical results for (5.10) with initial $x_{0}(t)=t^{2}$.
| | $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---|---
$t$ | $x^{*}(t)$ | $m=2$ | $m=4$ | $m=2$ | $m=4$
$0.1$ | $0.11$ | 0.10994463333333335 | 0.10994463333333335 | 0.10995952813954675 | 0.10995955765685321
$0.2$ | $0.12$ | 0.11981787538985864 | 0.11981791805770493 | 0.11994695097301926 | 0.11994727822516114
$0.3$ | $0.13$ | 0.12975090261315447 | 0.12975116990203317 | 0.12993149120265635 | 0.1299327014013851
$0.4$ | $0.14$ | 0.1396858063225927 | 0.13968664031615474 | 0.13991267664284926 | 0.13991561146443787
$0.5$ | $0.15$ | 0.14960946152701812 | 0.1496116012197044 | 0.14989041600508018 | 0.14989578496520412
$0.6$ | $0.16$ | 0.15952079194994145 | 0.1595251486759711 | 0.15986592032169128 | 0.15987299132148372
$0.7$ | $0.17$ | 0.16941936215524034 | 0.1694262809122463 | 0.1698435692932222 | 0.1698469898893412
$0.8$ | $0.18$ | 0.17930673541360537 | 0.1793140741901599 | 0.17983433359550066 | 0.17981752625254807
$0.9$ | $0.19$ | 0.18918899833227526 | 0.18918756887790722 | 0.18986179093331174 | 0.1897843325246908
$1$ | $0.2$ | 0.19908194969111695 | 0.1990457618518603 | 0.1999725602822185 | 0.1997471266515799
| $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---
| $m=2$ | $m=4$ | $m=2$ | $m=4$
$\|x^{*}-\tilde{x}\|_{\infty}$ | $9.1805\times 10^{-4}$ | $9.544238\times 10^{-4}$ | $1.65588\times 10^{-4}$ | $2.52873\times 10^{-4}$
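A simplified Picard-type sketch of the computation behind Table 3 is given below (our own illustration; it uses a uniform grid and the trapezoidal rule instead of the bivariate Schauder projection, so its values differ slightly from the table).

```python
import numpy as np

a = b = 0.1
t = np.linspace(0.0, 1.0, 257)
q = b / a - (b ** 2 / 3.0) * ((t + 1.0) ** 3 - 1.0)       # q(t)

def cumtrapz(u):
    return np.concatenate(([0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))))

def step(x):
    """One iteration of x -> F(x)*G(x) for (5.10):
    F(x)(t) = a(t+1),  G(x)(t) = q(t) + int_0^t x(s)^2 ds."""
    return a * (t + 1.0) * (q + cumtrapz(x ** 2))

x = t ** 2                                     # initial function x_0(t) = t^2
for _ in range(4):                             # m = 4 iterations
    x = step(x)
print(np.max(np.abs(x - b * (t + 1.0))))       # error against x*(t) = b(t+1)
```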
###### Example 5.3.2
Consider the nonlinear integral equation
$\displaystyle
x(t)=\displaystyle\left(ae^{-x(t)}+b\right)\left[\frac{t}{ae^{-t}+b}+\frac{1}{1-c}\log(\cos(1-c)t)+\int_{0}^{t}\tan(1-c)x(s)ds\right].$
(5.11)
Similarly to that above, (5.11) can be written as a fixed point problem
$x=F(x)\cdot G(x),$
with the same notations in (5.4).
Let $R>0$ and let $x,y\in[-R,R].$ By an elementary calculus we can show that
$|f(t,x)-f(t,y)|\leq\alpha(t)\varphi(|x-y|)$
and
$|k(t,s,x)-k(t,s,y)|\leq\gamma(t)\psi(|x-y|)$
where $\alpha(t)=ae^{R},$ $\displaystyle\gamma(t)=1+\tan^{2}((1-c)R),$ $\varphi(t)=1-e^{-t},$ and $\psi(t)=\tan((1-c)t).$
Applying Theorem 5.1, we obtain that (5.11), with $a$ small enough and $c=1-a,$ has a unique solution in $\Omega=\left\\{x\in C([0,1]);\|x\|\leq 3\right\\}$; in fact the solution is $x(t)=t.$
The following table shows the numerical results of the proposed method for
$a=0.01,b=1,R=3,$ and $x_{0}(t)=\sin(t).$
Table 4. Numerical results for (5.11) with initial $x_{0}(t)=\sin(t).$
| | $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---|---
$t$ | $x^{*}(t)$ | $m=2$ | $m=4$ | $m=2$ | $m=4$
$0.1$ | $0.1$ | 0.09994959265924196 | 0.09994959275258121 | 0.09997341307990056 | 0.09997341318295203
$0.2$ | $0.2$ | 0.19982697772113361 | 0.19982698062053245 | 0.19994196511596454 | 0.19994196762406424
$0.3$ | $0.3$ | 0.29970145954436234 | 0.2997014781005956 | 0.2999105557764353 | 0.29991056948622924
$0.4$ | $0.4$ | 0.39957605185527684 | 0.3995761128223367 | 0.39987915932823637 | 0.3998792008487213
$0.5$ | $0.5$ | 0.49945067663057896 | 0.49945081633085925 | 0.4998477565970089 | 0.49984784689621164
$0.6$ | $0.6$ | 0.5993252823989378 | 0.5993255387084228 | 0.5998163380147611 | 0.5998164954408373
$0.7$ | $0.7$ | 0.699199836738067 | 0.6992002390137386 | 0.699784904433895 | 0.6997851365136741
$0.8$ | $0.8$ | 0.799074326433192 | 0.7990748839377436 | 0.7997534658105931 | 0.7997537620153589
$0.9$ | $0.9$ | 0.8989487530236073 | 0.8989494465775325 | 0.8997220384049086 | 0.8997223654190059
$1$ | $1$ | 0.9988231278772514 | 0.9988239054111422 | 0.9996906411587515 | 0.9996909415162489
| $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---
| $m=2$ | $m=4$ | $m=2$ | $m=4$
$\|x^{*}-\tilde{x}\|_{\infty}$ | $1.17687\times 10^{-3}$ | $1.17609\times 10^{-3}$ | $3.09359\times 10^{-4}$ | $3.09058\times 10^{-4}$
###### Example 5.3.3
Consider the nonlinear integral equation
$\displaystyle x(t)=\displaystyle
at\left[(b+t)^{2}+\frac{t}{(t+1)}\int_{0}^{t}\left(1-e^{-(t+1)(as+1)}\right)ds\right]^{-1}\left[(b+t)^{2}+\int_{0}^{t}\int_{0}^{x(s)+1}e^{-(t+1)u}duds\right].$
(5.12)
According to the above discussion, (5.12) can be written as a fixed point
problem
$x=F(x)\cdot G(x),$
where $F$ and $G$ are defined in (5.4), with $f(t,x)=\displaystyle
at\left[(b+t)^{2}+\frac{t}{(t+1)}\int_{0}^{t}\left(1-e^{-(t+1)(as+1)}\right)ds\right]^{-1}$
and $k(t,s,x)=\int_{0}^{x+1}e^{-(t+1)u}du.$
Let $0<R<1$ and let $x,y\in[-R,R].$ By an elementary calculus, we can show
that
$|f(t,x)-f(t,y)|\leq\alpha(t)\varphi(|x-y|)$
and
$|k(t,s,x)-k(t,s,y)|\leq\gamma(t)\psi(|x-y|)$
where $\alpha(t)=\varphi(t)=0,$ $\psi(t)=\int_{0}^{2t}e^{-s}ds,$ and
$\gamma(t,s)=\frac{1}{t+1}e^{(t+1)(R-1)}.$
Taking $a=0.1,b=1,$ and applying Theorem 5.1, we obtain that (5.12) has a unique solution in $\Omega=\left\\{x\in C([0,1]);\|x\|\leq R\right\\}$. In fact the solution is $x(t)=at.$
Table 5. Numerical results for (5.12) with initial $x_{0}(t)=\frac{1}{2}\cos(10\pi t).$
| | $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---|---
$t$ | $x^{*}(t)$ | $m=2$ | $m=4$ | $m=2$ | $m=4$
$0.1$ | $0.01$ | 0.009807889768197995 | 0.009807889768197995 | 0.009850176149181912 | 0.009850173620253975
$0.2$ | $0.02$ | 0.01913347555848587 | 0.01913346934141619 | 0.019763982715543943 | 0.01976400675926519
$0.3$ | $0.03$ | 0.028858901595188682 | 0.028858870390823594 | 0.029713698642337805 | 0.029713848529122386
$0.4$ | $0.04$ | 0.038745648738524936 | 0.038745618536895766 | 0.039685125981635705 | 0.039685476825051164
$0.5$ | $0.05$ | 0.04868660113266646 | 0.0486866179763731 | 0.04967020617859288 | 0.04967087311797989
$0.6$ | $0.06$ | 0.05866572236367756 | 0.05866579674631664 | 0.05966446431697861 | 0.05966546941999516
$0.7$ | $0.07$ | 0.06866841551194598 | 0.06866853944486338 | 0.06966463823770887 | 0.06966603029961263
$0.8$ | $0.08$ | 0.07868630321311545 | 0.07868650513410157 | 0.07966879607286238 | 0.07967053753105567
$0.9$ | $0.09$ | 0.08871376330117915 | 0.08871405879246874 | 0.08967554656783944 | 0.08967762811140002
$1$ | $0.1$ | 0.0987469779423797 | 0.0987473453913395 | 0.09968401150389958 | 0.09968636366339986
| $n_{1}=\dots=n_{m}=9$ | $n_{1}=\dots=n_{m}=33$
---|---|---
| $m=2$ | $m=4$ | $m=2$ | $m=4$
$\|x^{*}-\tilde{x}\|_{\infty}$ | $1.33714\times 10^{-3}$ | $1.33705\times 10^{-3}$ | $3.35272\times 10^{-4}$ | $3.34982\times 10^{-4}$
## 6 Conclusions
In this paper we have presented a numerical method, based on the use of Schauder bases, to solve hybrid nonlinear equations in Banach algebras. To do this, we have used Boyd-Wong’s theorem to establish the existence and uniqueness of a fixed point for the product of two nonlinear operators in a Banach algebra (Theorem 3.1). The method is applied to a wide class of nonlinear integro-differential equations, such as the ones we have illustrated by means of several numerical examples.
The possibility of applying this process or a similar idea to other types of
hybrid equations or systems of such equations is open and we hope to discuss
this in the near future.
## Acknowledgements
The research of Aref Jeribi and Khaled Ben Amara has been partially supported
by the University of Sfax.
The research of Maria Isabel Berenguer has been partially supported by Junta
de Andalucia, Project FQM359 and by the “Maria de Maeztu” Excellence Unit
IMAG, reference CEX2020-001105-M, funded by MCIN/AEI/10.13039/501100011033/ .
## References
* [1] Akyüz-Daşcioǧlu, A.; Sezer, M. Chebyshev polynomial solutions of systems of higher-order linear Fredholm-Volterra integro-differential equations. J. Franklin Inst. 342 (2005), 688–701.
* [2] Argyros, I. L.; Ezquerro, J. A.; Henandez, M. A.; Hilout, S.; Domero, N.; Velasco, A. I. Expanding the applicability of secant-like methods for solving nonlinear equations. Carpathian J. Math. 31 (2015), 11–30.
* [3] Ben Amar, A.; Chouayekh, S.; Jeribi, A. Fixed point theory in a new class of Banach algebras and application. Afr. Mat. 24 (2013), 705–724.
* [4] Berenguer, M. I.; Garralda-Guillem, A. I.; Ruiz Galán, M. Biorthogonal systems approximating the solution of the nonlinear Volterra integro-differential equation. Fixed Point Theory Appl. 2010, Art. ID 470149, 9 pp.
* [5] Berenguer, M.I.; Gámez, D.; López Linares, A.J. Fixed point techniques and Schauder bases to approximate the solution of the first order nonlinear mixed Fredholm-Volterra integro-differential equation. J. Comput. Appl. Math. 252 (2013), 52–61.
* [6] Berenguer, M.I.; Gámez, D., A computational method for solving a class of two dimensional Volterra integral equations. J. Comput. Appl. Math. 318 (2017), 403–410.
* [7] Berenguer, M. I.; Gámez, D. Projected iterations of fixed-point type to solve nonlinear partial Volterra integro-differential equations. Bull. Malays. Math. Sci. Soc. 43 (2020), no. 6, 4431–4442.
* [8] Boyd, D. W.; Wong, J. S. W.On nonlinear contractions. Proc. Amer. Math. Soc. 20 (1969), 458–464.
* [9] Brezis, H. Functional analysis, Sobolev spaces and partial differential equations. Universitext. Springer, New York, 2011. xiv+599 pp. ISBN: 978-0-387-70913-0
* [10] Byszewski, L. Theorems about the existence and uniqueness of solutions of a semilinear evolution nonlocal Cauchy problem. J. Math. Anal. Appl. 162 (1991), no. 2, 494–505.
* [11] Byszewski, L. Existence and uniqueness of mild and classical solutions of semilinear functional-differential evolution nonlocal Cauchy problem. Selected problems of mathematics, 25–33, 50th Anniv. Cracow Univ. Technol. Anniv. Issue, 6, Cracow Univ. Technol., Krakow, 1995
* [12] Deimling, K. Nonlinear functional analysis. Springer-Verlag, Berlin, 1985. xiv+450 pp. ISBN: 3-540-13928-1
* [13] Deng, K.Exponential decay of solutions of semilinear parabolic equations with nonlocal initial conditions. J. Math. Anal. Appl. 179 (1993), no. 2, 630–637.
* [14] Dhage, B.C. On some variants of Schauder’s fixed point principle and applications to nonlinear integral equations. J. Math. Phy. Sci., 25 (1988), 603-611.
* [15] Dhage, B.C. On some nonlinear alternatives of Leray-Schauder type and functional integral equations. Arch. Math. (Brno), 42 (2006), 11–23.
* [16] Dhage, B.C. On a fixed point theorem in Banach algebras with aplications. Appl. Math. Lett., 18 (2005),273–280.
* [17] Dhage, B.C. Multi-valued mappings and fixed points I. Nonlinear Functional Anal. Appl., 10(3) (2005), 359-378.
* [18] Dhage, B.C. A hybrid fixed point theorem in Banach algebras with applications. Comm. Appl. Nonlinear Anal., 13 (2006), 71–84.
* [19] Djebali, S.; Sahnoun, Z. Nonlinear alternatives of Schauder and Krasnosel’skii types with applications to Hammerstein integral equations in $L^{1}$-spaces. J. Differ. Equ. 249 (2010), 2061–2075.
* [20] Dzhumabaev, D.S.On one approach to solve the linear boundary value problems for Fredholm integro differential equations. J. Comput. Appl. Math. 294 (2016), 342–357.
* [21] Gelbaum, B. R.; Gil de Lamadrid, J.Bases of tensor products of Banach spaces. Pacific J. Math. 11 (1961), 1281–1286.
* [22] Heydari, M.H.; Hooshmandasl, M.R.; Mohammadi, F.; Cattani, C.Wavelets method for solving systems of nonlinear singular fractional Volterra integro-differential equations. Commun. Nonlinear Sci. 19 (2014), 37–48.
* [23] Jeribi, A.; Kaddachi, N.; Krichen, B. Existence results for a system of nonlinear integral equations in Banach algebras under weak topology. Fixed Point Theory 18 (2017), no. 1, 247–267
* [24] Jeribi, A.; Krichen, B. Nonlinear functional analysis in Banach spaces and Banach algebras: Fixed point theory under weak topology for nonlinear operators and block operator matrices with applications. (Monographs and Research Notes in Mathematics), CRC Press/ Taylor and Francis, (2015).
* [25] Maleknejad, K.; Basirat, B.; Hashemizadeh, E. A Bernstein operational matrix approach for solving a system of high order linear Volterra-Fredholm integro-differential equations. Math. Comput. Model. 55 (2012), 1363–1372
* [26] O’Regan, D. New fixed point results for 1-set contractive set-valued maps. Computes Math. Appl., 35(4) (1998), 27-34.
* [27] Saberi-Nadjafi, J.; Tamamgar, M. The variational iteration method: A highly promising method for solving the system of integro-differential equations. Comput. Math. Appl. 56 (2008), 346-351.
* [28] Semadeni, Z. Product Schauder bases and approximationwith nodes in spaces of continuous functions. Bull. Acad. Polon. Sci., 11 (1963), 387-391.
|
# Comment on “New physics constraints from atomic parity violation in 133Cs”
B. M. Roberts <EMAIL_ADDRESS>, J. S. M. Ginges <EMAIL_ADDRESS>
School of Mathematics and Physics, The University of Queensland, Brisbane QLD 4072, Australia
###### Abstract
In a recent Letter, B. K. Sahoo, B. P. Das, and H. Spiesberger, Phys. Rev. D
103, L111303 (2021) Sahoo _et al._ (2021), a calculation of the parity
violating $6S-7S$ E1 amplitude in Cs is reported, claiming an uncertainty of
just 0.3%. In this Comment, we point out that key contributions have been
omitted, and the theoretical uncertainty has been significantly
underestimated. In particular, the contribution of missed QED radiative
corrections amounts to several times the claimed uncertainty.
The $6S$–$7S$ atomic parity violation (APV) amplitude in Cs may be expressed
as $\langle\widetilde{7S}|D_{z}|\widetilde{6S}\rangle$, where $D_{z}$ is the
$z$ component of the electric dipole (E1) operator, and
$|\widetilde{6S}\rangle$ and $|\widetilde{7S}\rangle$ are weak-interaction-
perturbed atomic states; the source of this interaction is $Z$-boson exchange
between the electrons and the nucleus. In the lowest-order single-particle
picture, it may be written
$E_{\rm PV}=\sum_{n}\left[\frac{\langle 7s|h_{w}|n\rangle\langle n|d_{z}|6s\rangle}{\varepsilon_{7s}-\varepsilon_{n}}+\frac{\langle 7s|d_{z}|n\rangle\langle n|h_{w}|6s\rangle}{\varepsilon_{6s}-\varepsilon_{n}}\right],$ (1)
where $d_{z}$ is the single-particle E1 operator,
$h_{w}=-\frac{G_{F}}{2\sqrt{2}}Q_{w}\rho(r)\gamma_{5}$ is the parity-violating
weak interaction operator, with $G_{F}$ the Fermi constant, $Q_{w}$ the
nuclear weak charge, $\rho$ the nuclear density, and $\gamma_{5}$ the Dirac
matrix, and $n$ runs over all $p_{1/2}$ states including the (occupied) core;
see Ref. Ginges and Flambaum (2004). The accuracy of the calculation is
determined by account of many-body effects and smaller corrections including
higher-order relativistic effects. Evaluation of $E_{\rm PV}$ in Cs with an
accuracy matching or exceeding that of the measurement Wood _et al._ (1997)
remains a formidable challenge. There is a rich history connected to this
spanning more than 20 years as the theoretical accuracy has reached the
fraction-of-a-percent level; see, e.g., reviews Ginges and Flambaum (2004);
Roberts _et al._ (2015); Safronova _et al._ (2018) and Ref. Toh _et al._
(2019). A major development over this time, following the realization of the
significance of the Breit contribution Derevianko (2000, 2001); Kozlov _et
al._ (2001); Dzuba _et al._ (2001a), was the recognition of the importance of
quantum electrodynamics (QED) radiative corrections and the formulation of
methods to account for them in precision calculations for heavy atoms Sushkov
(2001); Johnson _et al._ (2001); Dzuba _et al._ (2002); Milstein _et al._ (2002); Milstein and Sushkov (2002); Milstein _et al._ (2003); Kuchiev and Flambaum (2002); Kuchiev (2002); Kuchiev and Flambaum (2003); Sapirstein _et al._ (2003); Shabaev _et al._
(2005a); Flambaum and Ginges (2005) (see also Sapirstein and Cheng (2005);
Shabaev _et al._ (2013); Ginges and Berengut (2016a, b)).
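To make the structure of Eq. (1) concrete, the toy sketch below evaluates the lowest-order sum-over-states expression for a handful of intermediate $p_{1/2}$ states. Every matrix element and energy is a hypothetical placeholder chosen only to illustrate the bookkeeping; they are not the Cs values entering any of the calculations discussed here.

```python
import numpy as np

# Toy evaluation of Eq. (1); every number below is a hypothetical placeholder.
e_6s, e_7s = -0.128, -0.059                      # valence energies (arbitrary units)
e_n  = np.array([-0.50, -0.09, -0.04])           # energies of intermediate p_1/2 states n
w_7n = np.array([1.0e-11, 4.0e-12, 1.5e-12])     # <7s|h_w|n>
d_n6 = np.array([0.1, 5.0, 0.3])                 # <n|d_z|6s>
d_7n = np.array([0.2, 4.2, 9.0])                 # <7s|d_z|n>
w_n6 = np.array([2.0e-11, 6.0e-12, 2.5e-12])     # <n|h_w|6s>

E_PV = np.sum(w_7n * d_n6 / (e_7s - e_n) + d_7n * w_n6 / (e_6s - e_n))
print(E_PV)
```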
We have identified a number of shortcomings in the theoretical evaluation of
$E_{\rm PV}$ in the Letter Sahoo _et al._ (2021), some of which are detailed
below. Most notably, the treatment of QED radiative corrections omits
important contributions to $E_{\rm PV}$, which amount to several times the
theoretical uncertainty claimed in Ref. Sahoo _et al._ (2021).
## I QED correction to $E_{\rm PV}$
QED radiative corrections in the strong Coulomb field of the nucleus make a
significant contribution to $E_{\rm PV}$, $\lesssim$ 1%. These have been
calculated before Johnson _et al._ (2001); Dzuba _et al._ (2002); Milstein _et
al._ (2002); Milstein and Sushkov (2002); Milstein _et al._ (2003); Kuchiev and Flambaum (2002); Kuchiev (2002); Kuchiev and Flambaum (2003); Sapirstein _et al._ (2003); Shabaev _et al._
(2005a, b); Flambaum and Ginges (2005); Roberts _et al._ (2013a) and are well
established. It is said in the Letter Sahoo _et al._ (2021) that one of the
key improvements is the treatment of these QED corrections. However, details
of the QED calculation are not presented in the Letter, and the reader is
directed to the unpublished manuscript Sahoo and Das (2020) for explanation [see Note (1)]. There it is said that the self-energy QED correction to $E_{\rm PV}$ (and to other atomic properties) is accounted for by including the radiative potential Flambaum and Ginges (2005); Ginges and Berengut (2016a) into the Hamiltonian from the start [see Note (2)], which the authors claim to be a more
rigorous approach compared to previous calculations.
Figure 1: Feynman diagrams for self-energy corrections to matrix elements.
Dashed line with triangle represents the external field (e.g., E1, weak,
hyperfine), wavy line the photon propagator, and double line the bound
electron wavefunction and propagator. Middle diagram is vertex correction.
The radiative potential method Flambaum and Ginges (2005) enables the accurate
inclusion of self-energy corrections to the energies and wavefunctions of
many-electron atoms. It may also be used to account for QED corrections to
matrix elements of external fields whose operators act at radial distances
much larger than the electron Compton wavelength, $r\gg\hbar/(m_{e}c)$,
e.g., the E1 field. However, this is not the case for operators that act at
small distances, including the weak and hyperfine interactions. We illustrate
this in Fig. 1. For the E1 interaction, the dominant contribution is given by
the left and right diagrams, which may be accounted for by using the radiative
potential method. However, for the weak and hyperfine interactions, other
contributions are important. In particular, the middle vertex diagram – where
the external field is locked inside the photon loop – simply cannot be
accounted for using this method. We refer the reader to the original Flambaum
and Ginges (2005) and subsequent Roberts _et al._ (2013a) works for details on
the applicability of the radiative potential method.
The QED correction to the full Cs APV amplitude (involving both E1 and weak
interactions) was determined in Refs. Flambaum and Ginges (2005); Shabaev _et
al._ (2005a). In Ref. Flambaum and Ginges (2005), the radiative potential
method was used to calculate corrections to the E1 matrix elements and energy
denominators in the sum (1), with QED corrections to weak matrix elements
$\langle s|h_{w}|p_{1/2}\rangle$ taken from previous works Milstein _et al._
(2002); Milstein and Sushkov (2002); Milstein _et al._ (2003); Kuchiev and Flambaum (2002); Kuchiev (2002); Kuchiev and Flambaum (2003); Sapirstein _et al._ (2003). In Ref. Shabaev _et al._
(2005a), Shabaev et al. calculated the total correction by applying a rigorous
QED formalism. The results of Refs. Shabaev _et al._ (2005a) and Flambaum and
Ginges (2005) are in excellent agreement, $-0.27(3)$% and $-0.32(3)$%,
respectively [see Note (3)].
It is unclear how the authors of Sahoo _et al._ (2021) arrive at a QED
correction of $-0.4\%$ for the weak matrix elements and $-0.3\%$ for $E_{\rm
PV}$, in agreement with existing calculations Milstein _et al._ (2002); Milstein and Sushkov (2002); Milstein _et al._ (2003); Kuchiev and Flambaum (2002); Kuchiev (2002); Kuchiev and Flambaum (2003); Sapirstein _et al._ (2003); Flambaum and Ginges (2005); Shabaev _et al._ (2005a, b); Roberts _et al._ (2013a), given that the important short-range effects, including the vertex contribution, have been omitted. In an attempt
to reproduce the results of Ref. Sahoo _et al._ (2021), we calculate the
radiative potential value for the QED correction to weak matrix elements,
including vacuum polarization. The result is $-2.1\%$, too large by a factor
of five compared to the correct calculations, demonstrating the importance of
the missed short-range effects. This difference amounts to a change in $E_{\rm
PV}$ that is nearly six times the atomic theory uncertainty claimed in Ref.
Sahoo _et al._ (2021).
## II Hyperfine constants
In the Letter Sahoo _et al._ (2021), calculations of hyperfine constants are
performed to test the accuracy of the wavefunctions in the nuclear region,
crucial for assessing the accuracy of APV calculations (see Refs. Ginges _et
al._ (2017); Ginges and Volotka (2018); Roberts and Ginges (2020, 2021) for
recent studies of the nuclear magnetization distribution for Cs). By
demonstrating excellent agreement with experiment, the authors conclude the
accuracy of their wavefunctions is high, and so estimate a tremendously small
uncertainty for the APV calculation. However, it appears that serious
omissions have been made in the hyperfine calculations.
As for $E_{\rm PV}$, the vertex and short-range contributions to QED
corrections to hyperfine constants are important Sapirstein and Cheng (2003);
Ginges _et al._ (2017) (see also Blundell _et al._ (1997); Sunnergren _et al._
(1998); Artemyev _et al._ (2001); Volotka _et al._ (2008, 2012)). Moreover,
the magnetic loop vacuum polarization correction also gives a significant
contribution Sapirstein and Cheng (2003); Ginges _et al._ (2017). In the
Letter Sahoo _et al._ (2021), the radiative potential method is employed, with
no account for these contributions. Given this, it is unclear how the authors
of Sahoo _et al._ (2021); Sahoo and Das (2020) arrive at a correction of
$-0.3\%$ to the hyperfine constants for $s$ states of Cs, in good agreement
with existing calculations Sapirstein and Cheng (2003); Ginges _et al._
(2017). To investigate this result, we again use the radiative potential
method and find it gives a correction of $-1.2\%$, three times too large
compared to rigorous QED calculations Sapirstein and Cheng (2003); Ginges _et
al._ (2017), confirming the importance of the omitted effects. This difference
amounts to two times the uncertainty of the hyperfine calculations (0.4%)
claimed in the Letter Sahoo _et al._ (2021).
## III Core contribution
The contribution to $E_{\rm PV}$ coming from the (occupied) $n$ = 2–5 terms in
Eq. (1) is called the “core” (or autoionization) contribution. In the Letter
Sahoo _et al._ (2021), it is said that the main difference in the $E_{\rm PV}$
result compared to the previous calculation of Dzuba et al. Dzuba _et al._
(2012) stems from the opposite sign of the core contribution. The difference
in core contribution between Refs. Sahoo _et al._ (2021) and Dzuba _et al._
(2012) is larger than the theoretical uncertainty claimed in the Letter Sahoo
_et al._ (2021) and should be investigated thoroughly.
In Ref. Dzuba _et al._ (2012), Dzuba et al. showed that many-body effects
(core polarization and correlations) have a significant impact on the core
contribution, changing its sign compared to the lowest-order Hartree-Fock
value; see also Ref. Roberts _et al._ (2015). The authors of Ref. Sahoo _et
al._ (2021) claim their result confirms the core calculation of Refs. Porsev _et al._ (2009); Porsev _et al._ (2010) and agrees with the result of Refs. Blundell _et al._ (1990); Blundell _et al._ (1992). However, in both of those works, the core
contribution was evaluated in the lowest-order approximation.
Here, we re-examine the core contribution in detail in an attempt to elucidate
the source of this discrepancy. We include core polarization using the time-
dependent Hartree-Fock (TDHF) method Dzuba _et al._ (1984), in which the
single-particle operators are modified: $d_{z}\to{\tilde{d}_{z}}=d_{z}+\delta
V_{d}$, and ${h_{w}}\to{\tilde{h}_{w}}={h}_{w}+\delta V_{w}$. The $\delta V$
corrections are found by solving the set of TDHF equations for all electrons
in the core Dzuba _et al._ (1984). We obtain the corrections to lowest-order
in the Coulomb interaction by solving the set of equations once, and to all-
orders by iterating the equations until self-consistency is reached Dzuba _et
al._ (1984) (equivalent to the random-phase approximation with exchange, RPA
Johnson _et al._ (1980)). The equations for $\delta V_{d}$ are solved at the
frequency of the $6S$–$7S$ transition (see Roberts _et al._ (2013b) for a
numerical study). We account for correlation corrections using the second-
order Dzuba _et al._ (1987) and all-orders Dzuba _et al._ (1988);
Dzuba _et al._ (1989a); Dzuba _et al._ (1989b) correlation potential methods (see also
Dzuba _et al._ (2002)).
The core contribution arises as the sum of two terms, due to the weak-
perturbation of $6s$ and $7s$ states, respectively. These have similar
magnitude though opposite sign, and strongly cancel, meaning numerical error
may be significant. We test the numerical accuracy in a number of ways.
Firstly, we vary the number of radial grid points used for solving the
differential equations, and vary the number of basis states used in any
expansions. We find numerical errors stemming from grid/basis choices can
easily be made insignificant. More importantly, we have three physically
equivalent, but numerically distinct, ways to compute $E_{\rm PV}$:
$\displaystyle\sum_{n}\left[\frac{\langle 7s|\tilde{h}_{w}|n\rangle\langle n|\tilde{d}_{z}|6s\rangle}{\varepsilon_{7s}-\varepsilon_{n}}+\frac{\langle 7s|\tilde{d}_{z}|n\rangle\langle n|\tilde{h}_{w}|6s\rangle}{\varepsilon_{6s}-\varepsilon_{n}}\right]$ (2)
$\displaystyle=\langle\delta\psi_{7s}|\tilde{d}_{z}|6s\rangle+\langle 7s|\tilde{d}_{z}|\delta\psi_{6s}\rangle$ (3)
$\displaystyle=\langle 7s|\tilde{h}_{w}|\Delta\psi_{6s}\rangle+\langle\Delta\psi_{7s}|\tilde{h}_{w}|6s\rangle,$ (4)
where $\delta\psi$ and $\Delta\psi$ are corrections to the valence
wavefunctions ($\psi$) due to the time-independent weak interaction, and the
time-dependent E1 interaction, respectively. These are called the sum-over-
states (2), weak-mixed-states (3), and E1-mixed-states (4) methods [see Note (4)].
In the sum-over-states method, a B-spline basis (e.g., Johnson _et al._
(1988); Beloy and Derevianko (2008)) is used to sum over the set of
intermediate states. In contrast, the mixed-states approach does not require a
basis at all; the $\delta$ and $\Delta$ corrections are found by solving the
differential equations Dalgarno and Lewis (1955):
$\displaystyle(h-\varepsilon)\,\delta\psi=-\tilde{h}_{w}\psi,$ (5)
$\displaystyle(h-\varepsilon-\nu)\,\Delta\psi=-\tilde{d}_{z}\psi,$ (6)
where $h$ is the single-particle atomic Hamiltonian, and $\nu$ is the
$6S$–$7S$ transition frequency. In the mixed-states approach, the core
contribution is found by projecting the corrections $\delta\psi$ and
$\Delta\psi$ onto the core states, while in the sum-over-states method it is
found by restricting the sum to include only core states. Note that the
numerics involved in solving each of the above equations is significantly
different, and the coincidence of results is indicative of high numerical
accuracy. Even with a moderate choice for the radial grid, we find the results
of the two mixed-states methods agree to parts in $10^{8}$, and the mixed-
states and sum-over-states methods agree to parts in $10^{7}$, demonstrating
excellent numerical precision and completeness of the basis.
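The equivalence of the sum-over-states and mixed-states routes, and hence the kind of numerical cross-check described above, can be illustrated with a small linear-algebra toy model. The sketch below replaces the atomic Hamiltonian and the weak and E1 operators with random Hermitian matrices, so it is only a schematic analogue of the atomic calculation: it shows that solving the Dalgarno-Lewis-type equation and performing the explicit sum over eigenstates yield the same core-projected contribution.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_core = 8, 3                      # toy basis; the first n_core eigenstates play the "core"
H = rng.normal(size=(dim, dim)); H = (H + H.T) / 2       # toy Hermitian "Hamiltonian"
hw = rng.normal(size=(dim, dim)); hw = (hw + hw.T) / 2   # toy "weak" operator
dz = rng.normal(size=(dim, dim)); dz = (dz + dz.T) / 2   # toy "E1" operator
eps, states = np.linalg.eigh(H)         # eigenvalues and eigenvectors |n>

v = dim - 1                             # index of the "valence" state
psi, e_v = states[:, v], eps[v]

# sum-over-states: delta_psi = sum_{n != v} |n> <n|hw|psi> / (e_v - e_n)
amps = states.T @ hw @ psi
amps[v] = 0.0
delta_sum = states @ (amps / (e_v - eps + (eps == e_v)))

# mixed-states (Dalgarno-Lewis): solve (H - e_v) delta_psi = -hw psi orthogonally to psi
P = np.eye(dim) - np.outer(psi, psi)
delta_mix = np.linalg.lstsq(P @ (H - e_v * np.eye(dim)) @ P, -P @ hw @ psi, rcond=None)[0]

core = states[:, :n_core]               # project the correction onto the "core" before taking <psi|dz|...>
print(psi @ dz @ core @ (core.T @ delta_sum))
print(psi @ dz @ core @ (core.T @ delta_mix))   # agrees with the line above to numerical precision
```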
Our calculations of the core term are summarized in Table 1. The sign change
in the core contribution is mostly due to polarization of the core by the
external E1 field. This is sensitive to the frequency of the E1 field. While
correlations beyond core polarization are important, they affect both terms in
roughly the same manner; the core term and its sign are robust to the
treatment of correlations. We also performed calculations for the
$7S$-$6D_{3/2}$ $E_{\rm PV}$ for 223Ra+ to test against previous calculations;
at the RPA level, we find the core contribution to be 6.81 [in units
$-10^{-11}i(-Q_{w}/N)\,|e|a_{B}$], in excellent agreement with the result 6.83
of Ref. Pal _et al._ (2009) (see also Dzuba _et al._ (2001b); Wansbeek _et
al._ (2008)). It is unclear why the sign of the result of Ref. Sahoo _et al._
(2021) remains the same as the Hartree-Fock value; however, we note that it may not be straightforward to compare individual contributions across
different methods as discussed in Refs. Wieman and Derevianko (2019);
Safronova _et al._ (2018).
## IV Conclusion
For the above reasons, we are not convinced the result presented in the Letter
Sahoo _et al._ (2021) is an improved value for the Cs $E_{\rm PV}$. We
conclude that the most reliable and accurate values that have been obtained to
date are: $E_{\rm PV}=0.898(5)$ Dzuba _et al._ (2002); Flambaum and Ginges
(2005) and $E_{\rm PV}=0.8977(40)$ Porsev _et al._ (2009); Dzuba _et al._
(2012), in units $-10^{-11}i(-Q_{w}/N)\,|e|a_{B}$, which agree precisely and
were obtained using different approaches. These results are also in excellent
agreement with previous calculations Dzuba _et al._ (1989c); Blundell _et al._
(1990); Blundell _et al._ (1992); Kozlov _et al._ (2001); Shabaev _et al._ (2005b),
though in disagreement with the result of the Letter Sahoo _et al._ (2021).
Table 1: Core contribution to 133Cs 6S-7S $E_{\rm PV}$ in different
approximations, in units $-10^{-11}i(-Q_{w}/N)\,|e|a_{B}$, where $N=78$ is the
number of neutrons.${}^{a}$ Here, HF denotes relativistic Hartree-Fock, $\delta
V^{(1)}$ and $\delta V^{(\infty)}$ denote lowest-order and all-orders core-
polarization, respectively, with subscripts $w$ and $d$ indicating
polarization by the weak or E1 fields, $\Sigma^{(2)}$ and $\Sigma^{(\infty)}$
denote second- and all-orders correlations, respectively, and $\lambda$
indicates correlations have been re-scaled to reproduce the lowest
experimental binding energies.
Method | $\langle\delta\psi_{7s}|{\tilde{d}_{z}}|6s\rangle$ | $\langle 7s|{\tilde{d}_{z}}|\delta\psi_{6s}\rangle$ | Sum
---|---|---|---
HF | $-0.02645$ | $0.02472$ | $-0.00174$
HF+$\delta V_{w}^{(1)}$ | $-0.03747$ | $0.03539$ | $-0.00208$
HF+$\delta V_{w}^{(\infty)}$ | $-0.04319$ | $0.04119$ | $-0.00201$
E1 TDHF equations solved at HF frequency:
HF+$\delta V_{w}^{(\infty)}$+$\delta V_{d}^{(1)}$ | $-0.05506$ | $0.05442$ | $-0.00063$
HF+$\delta V_{w}^{(\infty)}$+$\delta V_{d}^{(\infty)}$${}^{b}$ | $-0.05822$ | $0.05992$ | $0.00170$
E1 TDHF equations solved at experimental frequency:
HF+$\delta V_{w}^{(\infty)}$+$\delta V_{d}^{(1)}$ | $-0.05468$ | $0.05466$ | $-0.00002$
HF+$\delta V_{w}^{(\infty)}$+$\delta V_{d}^{(\infty)}$${}^{b}$ | $-0.05784$ | $0.06043$ | $0.00259$
Including correlation corrections (and $\delta V_{w}^{(\infty)}+\delta
V_{d}^{(\infty)}$):
$\Sigma^{(2)}$ | $-0.06739$ | $0.06924$ | $0.00184$
$\lambda\Sigma^{(2)}$ | $-0.06547$ | $0.06732$ | $0.00184$
$\Sigma^{(\infty)}$ | $-0.06514$ | $0.06695$ | $0.00181$
$\lambda\Sigma^{(\infty)}$ | $-0.06516$ | $0.06696$ | $0.00181$
Other calculations:
HF Blundell _et al._ (1990); Blundell _et al._ (1992) | | | $-0.002$
HF Porsev _et al._ (2009); Porsev _et al._ (2010) | | | $-0.002$
$\Sigma^{(\infty)}$+RPA Dzuba _et al._ (2012) | | | $0.00182$
Values from the Letter Sahoo _et al._ (2021):
HF Sahoo _et al._ (2021) | | | $-0.0017$
RCCSD Sahoo _et al._ (2021) | | | $-0.0019$
RCCSDT Sahoo _et al._ (2021) | | | $-0.0018$
${}^{a}$To avoid possible ambiguity in the sign, we note that the total amplitude is positive in these units; at the HF level it is $0.7395$.
${}^{b}$HF+$\delta V_{w}^{(\infty)}$+$\delta V_{d}^{(\infty)}$ is commonly called the RPA level.
Acknowledgments— We thank V. A. Dzuba and V. V. Flambaum for useful
discussions. This work was supported by the Australian Government through ARC
DECRA Fellowship DE210101026 and ARC Future Fellowship FT170100452.
## References
* Sahoo _et al._ (2021) B. K. Sahoo, B. P. Das, and H. Spiesberger, Phys. Rev. D 103, L111303 (2021), arXiv:2101.10095 .
* Ginges and Flambaum (2004) J. S. M. Ginges and V. V. Flambaum, Phys. Rep. 397, 63 (2004).
* Wood _et al._ (1997) C. S. Wood, S. C. Bennett, D. Cho, B. P. Masterson, J. L. Roberts, C. E. Tanner, and C. E. Wieman, Science 275, 1759 (1997).
* Roberts _et al._ (2015) B. M. Roberts, V. A. Dzuba, and V. V. Flambaum, Annu. Rev. Nucl. Part. Sci. 65, 63 (2015), arXiv:1412.6644 .
* Safronova _et al._ (2018) M. S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko, and C. W. Clark, Rev. Mod. Phys. 90, 025008 (2018).
* Toh _et al._ (2019) G. Toh, A. Damitz, C. E. Tanner, W. R. Johnson, and D. S. Elliott, Phys. Rev. Lett. 123, 073002 (2019), arXiv:1905.02768 .
* Derevianko (2000) A. Derevianko, Phys. Rev. Lett. 85, 1618 (2000).
* Derevianko (2001) A. Derevianko, Phys. Rev. A 65, 012106 (2001).
* Kozlov _et al._ (2001) M. G. Kozlov, S. G. Porsev, and I. I. Tupitsyn, Phys. Rev. Lett. 86, 3260 (2001).
* Dzuba _et al._ (2001a) V. A. Dzuba, C. Harabati, W. R. Johnson, and M. S. Safronova, Phys. Rev. A 63, 044103 (2001a).
* Sushkov (2001) O. P. Sushkov, Phys. Rev. A 63, 042504 (2001).
* Johnson _et al._ (2001) W. R. Johnson, I. Bednyakov, and G. Soff, Phys. Rev. Lett. 87, 233001 (2001).
* Dzuba _et al._ (2002) V. A. Dzuba, V. V. Flambaum, and J. S. M. Ginges, Phys. Rev. D 66, 076013 (2002).
* Milstein _et al._ (2002) A. I. Milstein, O. P. Sushkov, and I. S. Terekhov, Phys. Rev. Lett. 89, 283003 (2002).
* Milstein and Sushkov (2002) A. I. Milstein and O. P. Sushkov, Phys. Rev. A 66, 022108 (2002).
* Milstein _et al._ (2003) A. I. Milstein, O. P. Sushkov, and I. S. Terekhov, Phys. Rev. A 67, 062103 (2003).
* Kuchiev and Flambaum (2002) M. Y. Kuchiev and V. V. Flambaum, Phys. Rev. Lett. 89, 283002 (2002).
* Kuchiev (2002) M. Y. Kuchiev, J. Phys. B 35, L503 (2002).
* Kuchiev and Flambaum (2003) M. Y. Kuchiev and V. V. Flambaum, J. Phys. B 36, R191 (2003).
* Sapirstein _et al._ (2003) J. Sapirstein, K. Pachucki, A. Veitia, and K. T. Cheng, Phys. Rev. A 67, 052110 (2003).
* Shabaev _et al._ (2005a) V. M. Shabaev, K. Pachucki, I. I. Tupitsyn, and V. A. Yerokhin, Phys. Rev. Lett. 94, 213002 (2005a).
* Flambaum and Ginges (2005) V. V. Flambaum and J. S. M. Ginges, Phys. Rev. A 72, 052115 (2005).
* Sapirstein and Cheng (2005) J. Sapirstein and K. T. Cheng, Phys. Rev. A 71, 022503 (2005).
* Shabaev _et al._ (2013) V. M. Shabaev, I. I. Tupitsyn, and V. A. Yerokhin, Phys. Rev. A 88, 012513 (2013), arXiv:1305.6333 .
* Ginges and Berengut (2016a) J. S. M. Ginges and J. C. Berengut, Phys. Rev. A 93, 052509 (2016a), arXiv:1603.09116 .
* Ginges and Berengut (2016b) J. S. M. Ginges and J. C. Berengut, J. Phys. B 49, 095001 (2016b).
* Shabaev _et al._ (2005b) V. M. Shabaev, I. I. Tupitsyn, K. Pachucki, G. Plunien, and V. A. Yerokhin, Phys. Rev. A 72, 062105 (2005b).
* Roberts _et al._ (2013a) B. M. Roberts, V. A. Dzuba, and V. V. Flambaum, Phys. Rev. A 87, 054502 (2013a), arXiv:1302.0593 .
* Sahoo and Das (2020) B. K. Sahoo and B. P. Das, (2020), arXiv:2008.08941 .
* Note (1) Note that the reference to Sahoo and Das (2020) in the Letter Sahoo _et al._ (2021) is incorrect, linking to an unrelated arXiv paper.
* Note (2) Vacuum polarization is included using the standard Uehling potential, and a simplified form of the Wichmann-Kroll potential from Ref. Dzuba _et al._ (2002); Flambaum and Ginges (2005).
* Note (3) The QED results of both Refs. Shabaev _et al._ (2005a) and Flambaum and Ginges (2005) were misattributed in Table III of the Letter Sahoo _et al._ (2021); these papers were not cited in Ref. Sahoo _et al._ (2021).
* Ginges _et al._ (2017) J. S. M. Ginges, A. V. Volotka, and S. Fritzsche, Phys. Rev. A 96, 062502 (2017), arXiv:1709.07725 .
* Ginges and Volotka (2018) J. S. M. Ginges and A. V. Volotka, Phys. Rev. A 98, 032504 (2018), arXiv:1707.00551 .
* Roberts and Ginges (2020) B. M. Roberts and J. S. M. Ginges, Phys. Rev. Lett. 125, 063002 (2020), arXiv:2001.01907 .
* Roberts and Ginges (2021) B. M. Roberts and J. S. M. Ginges, Phys. Rev. A 104, 022823 (2021), arXiv:2101.09924 .
* Sapirstein and Cheng (2003) J. Sapirstein and K. T. Cheng, Phys. Rev. A 67, 022512 (2003).
* Blundell _et al._ (1997) S. A. Blundell, K. T. Cheng, and J. Sapirstein, Phys. Rev. A 55, 1857 (1997).
* Sunnergren _et al._ (1998) P. Sunnergren, H. Persson, S. Salomonson, S. M. Schneider, I. Lindgren, and G. Soff, Phys. Rev. A 58, 1055 (1998).
* Artemyev _et al._ (2001) A. N. Artemyev, V. M. Shabaev, G. Plunien, G. Soff, and V. A. Yerokhin, Phys. Rev. A 63, 062504 (2001).
* Volotka _et al._ (2008) A. V. Volotka, D. A. Glazov, I. I. Tupitsyn, N. S. Oreshkina, G. Plunien, and V. M. Shabaev, Phys. Rev. A 78, 062507 (2008), arXiv:0806.1121 .
* Volotka _et al._ (2012) A. V. Volotka, D. A. Glazov, O. V. Andreev, V. M. Shabaev, I. I. Tupitsyn, and G. Plunien, Phys. Rev. Lett. 108, 073001 (2012).
* Dzuba _et al._ (2012) V. A. Dzuba, J. C. Berengut, V. V. Flambaum, and B. M. Roberts, Phys. Rev. Lett. 109, 203003 (2012), arXiv:1207.5864 .
* Porsev _et al._ (2009) S. G. Porsev, K. Beloy, and A. Derevianko, Phys. Rev. Lett. 102, 181601 (2009).
* Porsev _et al._ (2010) S. G. Porsev, K. Beloy, and A. Derevianko, Phys. Rev. D 82, 036008 (2010).
* Blundell _et al._ (1990) S. A. Blundell, W. R. Johnson, and J. Sapirstein, Phys. Rev. Lett. 65, 1411 (1990).
* Blundell _et al._ (1992) S. A. Blundell, J. Sapirstein, and W. R. Johnson, Phys. Rev. D 45, 1602 (1992).
* Dzuba _et al._ (1984) V. A. Dzuba, V. V. Flambaum, and O. P. Sushkov, J. Phys. B 17, 1953 (1984).
* Johnson _et al._ (1980) W. R. Johnson, C. D. Lin, K. T. Cheng, and C. M. Lee, Phys. Scr. 21, 409 (1980).
* Roberts _et al._ (2013b) B. M. Roberts, V. A. Dzuba, and V. V. Flambaum, Phys. Rev. A 88, 042507 (2013b), arXiv:1309.3371 .
* Dzuba _et al._ (1987) V. A. Dzuba, V. V. Flambaum, P. G. Silvestrov, and O. P. Sushkov, J. Phys. B 20, 1399 (1987).
* Dzuba _et al._ (1988) V. A. Dzuba, V. V. Flambaum, P. G. Silvestrov, and O. P. Sushkov, Phys. Lett. A 131, 461 (1988).
* Dzuba _et al._ (1989a) V. A. Dzuba, V. V. Flambaum, and O. P. Sushkov, Phys. Lett. A 140, 493 (1989a).
* Dzuba _et al._ (1989b) V. A. Dzuba, V. V. Flambaum, A. Y. Kraftmakher, and O. P. Sushkov, Phys. Lett. A 142, 373 (1989b).
* Note (4) These formulas exclude the double-core-polarization effect, which is very small for Cs, and has been studied in detail in Ref. Roberts _et al._ (2013b). The sign change of the core cannot be explained by the double-core-polarization correction, which even if entirely assigned to the core contribution is a factor of two too small to account for the difference Roberts _et al._ (2013b).
* Johnson _et al._ (1988) W. R. Johnson, S. A. Blundell, and J. Sapirstein, Phys. Rev. A 37, 307 (1988).
* Beloy and Derevianko (2008) K. Beloy and A. Derevianko, Comput. Phys. Commun. 179, 310 (2008), arXiv:0710.3142 .
* Dalgarno and Lewis (1955) A. Dalgarno and J. T. Lewis, Proc. R. Soc. A 233, 70 (1955).
* Pal _et al._ (2009) R. Pal, D. Jiang, M. S. Safronova, and U. I. Safronova, Phys. Rev. A 79, 062505 (2009).
* Dzuba _et al._ (2001b) V. A. Dzuba, V. V. Flambaum, and J. S. M. Ginges, Phys. Rev. A 63, 062101 (2001b).
* Wansbeek _et al._ (2008) L. W. Wansbeek, B. K. Sahoo, R. G. E. Timmermans, K. Jungmann, B. P. Das, and D. Mukherjee, Phys. Rev. A 78, 050501 (2008).
* Wieman and Derevianko (2019) C. E. Wieman and A. Derevianko, (2019), arXiv:1904.00281 .
* Dzuba _et al._ (1989c) V. A. Dzuba, V. V. Flambaum, and O. P. Sushkov, Phys. Lett. A 141, 147 (1989c).
# Improving Event Representation via Simultaneous Weakly Supervised
Contrastive Learning and Clustering
Jun Gao1 Wei Wang3 Changlong Yu4 Huan Zhao5 Wilfred Ng4 Ruifeng Xu1,2
1Harbin Institute of Technology (Shenzhen) 2Peng Cheng Laboratory 3Tsinghua
University
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
4HKUST, Hong Kong, China 54Paradigm Inc.
<EMAIL_ADDRESS><EMAIL_ADDRESS>
Corresponding author
###### Abstract
Representations of events described in text are important for various tasks.
In this work, we present SWCC: a Simultaneous Weakly supervised Contrastive
learning and Clustering framework for event representation learning. SWCC
learns event representations by making better use of co-occurrence information
of events. Specifically, we introduce a weakly supervised contrastive learning
method that allows us to consider multiple positives and multiple negatives,
and a prototype-based clustering method that avoids semantically related
events being pulled apart. For model training, SWCC learns representations by
simultaneously performing weakly supervised contrastive learning and
prototype-based clustering. Experimental results show that SWCC outperforms
other baselines on Hard Similarity and Transitive Sentence Similarity tasks.
In addition, a thorough analysis of the prototype-based clustering method
demonstrates that the learned prototype vectors are able to implicitly capture
various relations between events. Our code will be available at
https://github.com/gaojun4ever/SWCC4Event. (Jun Gao is currently a research intern at 4Paradigm.)
## 1 Introduction
Distributed representations of events are a common way to represent events in a machine-readable form and have been shown to provide meaningful features for
various tasks (Lee and Goldwasser, 2018; Rezaee and Ferraro, 2021; Deng et
al., 2021; Martin et al., 2018; Chen et al., 2021). Obtaining effective event
representations is challenging, as it requires representations to capture
various relations between events. Figure 1 presents four pairs of events with
different relations. Two events may share the same event attributes (e.g.
event types and sentiments), and there may also be a causal or temporal
relation between two events.
Figure 1: Four pairs of events with different relations. Stars represent
prototypes and circles represent events.
Early works (Weber et al., 2018) exploit easily accessible co-occurrence
relation of events to learn event representations. Although the use of co-
occurrence relation works well, it is too coarse for deep understanding of
events, which requires fine-grained knowledge (Lee and Goldwasser, 2019).
Recent works focus on fine-grained knowledge, such as discourse relations (Lee
and Goldwasser, 2019; Zheng et al., 2020) and commonsense knowledge (e.g.
sentiments and intents) (Sap et al., 2019; Ding et al., 2019). Concretely, Lee
and Goldwasser (2019) and Zheng et al. (2020) leverage 11 discourse relation
types to model event script knowledge. Ding et al. (2019) incorporate manually
labeled commonsense knowledge (intents and sentiments) into event
representation learning. However, the types of fine-grained event knowledge
are so diverse that we cannot enumerate all of them and currently adopted
fine-grained knowledge fall under a small set of event knowledge. In addition,
some manually labeled knowledge (Sap et al., 2019; Hwang et al., 2021) is
costly and difficult to apply on large datasets.
In our work, we observe that there is a rich amount of information in co-
occurring events, but previous works did not make good use of such
information. Based on existing works on event relation extraction (Xue et al.,
2016; Lee and Goldwasser, 2019; Zhang et al., 2020; Wang et al., 2020), we
find that the co-occurrence relation, which refers to two events appearing in
the same document, can be seen as a superset of currently defined explicit
discourse relations. To be specific, these relations are often indicated by
discourse markers (e.g., “because”, capturing the causal relation) (Lee and
Goldwasser, 2019). Therefore, two related events must exist in the same
sentence or document. More than that, the co-occurrence relation also includes
other implicit event knowledge. For example, events that occur in the same
document may share the same topic and event type. To learn event
representations, previous works (Granroth-Wilding and Clark, 2016; Weber et
al., 2018) based on co-occurrence information usually exploit instance-wise
contrastive learning approaches related to the margin loss, which consists of
an anchor, positive, and negative sample, where the anchor is more similar to
the positive than the negative. However, they share two common limitations:
(1) such margin-based approaches struggle to capture the essential differences
between events with different semantics, as they only consider one positive
and one negative per anchor. (2) Randomly sampled negative samples may contain
samples semantically related to the anchor, but are undesirably pushed apart
in embedding space. This problem arises because these instance-wise
contrastive learning approaches treat randomly selected events as negative
samples, regardless of their semantic relevance.
We are motivated to address the above issues with the goal of making better
use of co-occurrence information of events. To this end, we present SWCC: a
Simultaneous Weakly supervised Contrastive learning and Clustering framework
for event representation learning, where we exploit document-level co-
occurrence information of events as weak supervision and learn event
representations by simultaneously performing weakly supervised contrastive
learning and prototype-based clustering. To address the first issue, we build
our approach on the contrastive framework with the InfoNCE objective (van den
Oord et al., 2019), which is a self-supervised contrastive learning method
that uses one positive and multiple negatives. Further, we extend the InfoNCE
to a weakly supervised contrastive learning setting, allowing us to consider
multiple positives and multiple negatives per anchor (as opposed to the
previous works which use only one positive and one negative). Co-occurring
events are then incorporated as additional positives, weighted by a normalized
co-occurrence frequency. To address the second issue, we introduce a
prototype-based clustering method to avoid semantically related events being
pulled apart. Specifically, we impose a prototype for each cluster, which is a
representative embedding for a group of semantically related events. Then we
cluster the data while enforcing consistency between cluster assignments
produced for different augmented representations of an event. Unlike the
instance-wise contrastive learning, our clustering method focuses on the
cluster-level semantic concepts by contrasting between representations of
events and clusters. Overall, we make the following contributions:
* •
We propose a simple and effective framework (SWCC) that learns event
representations by making better use of co-occurrence information of events.
Experimental results show that our approach outperforms previous approaches on
several event related tasks.
* •
We introduce a weakly supervised contrastive learning method that allows us to
consider multiple positives and multiple negatives, and a prototype-based
clustering method that avoids semantically related events being pulled apart.
* •
We provide a thorough analysis of the prototype-based clustering method to
demonstrate that the learned prototype vectors are able to implicitly capture
various relations between events.
Figure 2: Architecture of the proposed framework, where the left part is the
Weakly Supervised Contrastive Learning method and the right part is the
Prototype-based Clustering method. Given an input event $\bm{x}_{i}$, we
obtain three augmented representations $\bm{z}_{i},\bm{z}_{a_{1}}$ and
$\bm{z}_{a_{2}}$ of the same event $\bm{x}_{i}$ using the BERT model with
different dropout masks. Using the same approach, we obtain the representation
set $\\{\bm{z}_{k}\\}_{k\in\mathcal{N}(i)}$ of in-batch negatives and the
representation $\bm{z}_{a_{3}}$ of its co-occurrence event.
## 2 Preliminaries
#### Event representation model.
In the early works (Weber et al., 2018; Ding et al., 2019), Neural Tensor
Networks (NTNs) (Socher et al., 2013b, a) are widely adopted to compose the
representation of event constituents, i.e., (subject, predicate, object). However, such methods introduce a strong compositional inductive bias and cannot extend to events with additional arguments, such as time, location, etc. Several recent works (Zheng et al., 2020; Vijayaraghavan and Roy, 2021)
replaced static word vector compositions with powerful pretrained language
models, such as BERT Devlin et al. (2019), for flexible event representations
and achieved better performance. Following them, we also take the BERT as the
backbone model.
The BERT encoder can take as input a free-form event text, which contains a
sequence of tokens and the input format can be represented as follows:
$[\mathrm{CLS}],pred,subj,obj,[\mathrm{SEP}].$ (1)
Define $\bm{x}=[x_{0},x_{1},\cdots,x_{L}]$ to be the input sequence of length
$L$, where $x_{0}$ and $x_{L}$ are the [CLS] token and the [SEP] token
respectively. Given $\bm{x}$, the BERT returns a sequence of contextualized
vectors:
$[\bm{v}_{[\mathrm{CLS}]},\bm{v}_{x_{1}},\cdots,\bm{v}_{x_{L}}]=\mathrm{BERT}(\bm{x}),$
(2)
where $\bm{v}_{[\mathrm{CLS}]}$ is the representation for the [CLS] token. In
the default case, the final vector representation $\bm{z}$ of the event is the
output representation of the [CLS] token: $\bm{z}=\bm{v}_{[\mathrm{CLS}]}$.
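As a concrete illustration of the encoder described above, the minimal sketch below serializes a hypothetical event triple in the format of Eq. (1) and takes the [CLS] vector of Eq. (2) as the event representation. The checkpoint name and the example event are assumptions made only for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

# Hypothetical event (pred="attend", subj="person x", obj="conference"),
# serialized as in Eq. (1); the tokenizer adds [CLS] and [SEP] itself.
inputs = tokenizer("attend person x conference", return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)
z = outputs.last_hidden_state[:, 0]    # Eq. (2): the [CLS] vector as the event representation
print(z.shape)                         # torch.Size([1, 768])
```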
#### Instance-wise contrastive learning.
Event representation models learn representations with contrastive learning,
which aims to pull related events together and push apart unrelated events.
Margin loss (Schroff et al., 2015) is a widely used contrastive loss in most
of the existing works on event representation learning (Weber et al., 2018;
Ding et al., 2019; Zheng et al., 2020). Most recently, an alternative
contrastive loss function, called InfoNCE (van den Oord et al., 2019), has
been proposed and shown effective in various contrastive learning tasks (He et
al., 2020; Hu et al., 2021; Gao et al., 2021). Chen et al. (2020a) further
demonstrate that InfoNCE works better than the Margin loss. In this work, we
explore the use of InfoNCE to train our event representation model.
Formally, given a set of $N$ paired events
$\mathcal{D}=\\{\bm{x}_{i},\bm{x}_{i}^{+}\\}_{i=1}^{N}$, where
$\bm{x}_{i}^{+}$ is a positive sample for $\bm{x}_{i}$, the InfoNCE objective
for $(\bm{x}_{i},\bm{x}_{i}^{+})$ is presented in a softmax form with in-batch
negatives (Chen et al., 2020a; Gao et al., 2021):
$\mathcal{L}=-\mathrm{log}\frac{g(\bm{z}_{i},\bm{z}_{i}^{+})}{g(\bm{z}_{i},\bm{z}_{i}^{+})+\sum_{k\in\mathcal{N}(i)}g(\bm{z}_{i},\bm{z}_{k})},$
(3)
where $\bm{z}_{i}$ and $\bm{z}_{i}^{+}$ are the augmented representations of
$\bm{x}_{i}$ and $\bm{x}_{i}^{+}$ obtained through a representation model, $k\in\mathcal{N}(i)$ is the index of in-batch negatives, and $g$ is a
function: $g(\bm{z}_{i},\bm{z}_{k})=\exp(\bm{z}_{i}^{\top}\bm{z}_{k}/\tau)$,
where $\tau\in\mathbb{R}^{+}$ is a positive value of temperature.
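For reference, a minimal PyTorch sketch of the InfoNCE objective in Eq. (3) with in-batch negatives is given below. The embeddings are random placeholders, and the L2 normalization before the dot product is a common practical choice rather than something stated in Eq. (3), which defines $g$ through the raw dot product.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z, z_pos, tau=0.05):
    # z, z_pos: (batch, dim) augmented representations of the same events.
    # Row i of `sim` mixes the true pair (diagonal) with in-batch negatives (off-diagonal),
    # so cross-entropy against the diagonal index reproduces Eq. (3).
    z, z_pos = F.normalize(z, dim=-1), F.normalize(z_pos, dim=-1)
    sim = z @ z_pos.t() / tau
    labels = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(sim, labels)

z, z_pos = torch.randn(8, 768), torch.randn(8, 768)   # placeholder embeddings
print(info_nce_loss(z, z_pos))
```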
#### Data augmentation.
One critical question in contrastive learning is how to obtain
$\bm{z}_{i}^{+}$. In language representation, $\bm{z}_{i}^{+}$ are often
obtained by first applying data augmentation in the form of word deletion,
reordering, or substitution on $\bm{x}_{i}$ and then feeding it into the event
representation model. Several recent works (Gao et al., 2021; Liang et al.,
2021) exploit dropout noise as data augmentation for NLP tasks and find that
this data augmentation technique performs much better than common data
augmentation techniques. Specifically, given an input event $\bm{x}_{i}$, we
obtain $\bm{z}_{i}$ and $\bm{z}_{i}^{+}$ by feeding the same input to the BERT
encoder with the parametric weights $\theta$ twice, and each time we apply a
different dropout mask:
$\bm{z}_{i}=f_{\theta}(\bm{x}_{i},\bm{\phi}_{1}),\bm{z}_{i}^{+}=f_{\theta}(\bm{x}_{i},\bm{\phi}_{2}),$
(4)
where $\bm{\phi}_{1}$ and $\bm{\phi}_{2}$ are two different random masks for
dropout. As described in Sec. 3.1, given an anchor event $\bm{z}_{i}$, we generate three positive samples $\bm{z}_{a_{1}}$, $\bm{z}_{a_{2}}$ and
$\bm{z}_{a_{3}}$ with different dropout masks.
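The dropout augmentation of Eq. (4) amounts to two stochastic forward passes over the same input. The minimal sketch below illustrates this, again assuming a generic BERT checkpoint and a made-up event string; keeping the encoder in training mode is what makes the two passes use different dropout masks.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
enc = AutoModel.from_pretrained("bert-base-uncased")
enc.train()                                  # keep dropout active

batch = tok(["buy person x pizza"], return_tensors="pt")   # hypothetical event
z1 = enc(**batch).last_hidden_state[:, 0]    # first dropout mask  (phi_1)
z2 = enc(**batch).last_hidden_state[:, 0]    # second dropout mask (phi_2)
print(torch.cosine_similarity(z1, z2))       # high, but not exactly 1.0
```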
## 3 The Proposed Approach
In this section, we will present technical details of our proposed approach
and our goal is to learn event representations by making better use of co-
occurrence information of events. Figure 2 presents an overview of our
proposed approach, which contains two parts: the weakly-supervised contrastive
learning method (left) and the prototype-based clustering method (right). In
the following sections, we will introduce both methods separately.
### 3.1 Weakly Supervised Contrastive Learning
We build our approach on the contrastive framework with the InfoNCE objective
(Eq. 3) instead of the margin loss. To incorporate co-occurrence information
into event representation learning, a straightforward way is to consider the
co-occurring event of each input event as an additional positive sample, that
is, the positive augmented representations of $\bm{x}_{i}$ come not only from
itself but also from its co-occurring event, denoted as $\bm{x}_{p}$. However, the original InfoNCE objective cannot handle the case where there exist multiple positive samples. Inspired by Khosla et al. (2020), we take a similar
formulation to tackle this problem. More than that, we also introduce a
weighting mechanism to consider co-occurrence frequency of two events, which
indicates the strength of the connection between two events.
#### Co-occurrence as weak supervision.
Formally, for each input pair $(\bm{x}_{i},\bm{x}_{p})$, where $\bm{x}_{i}$
and $\bm{x}_{p}$ refer to the input event and one of its co-occurring events,
we first compute an augmented representation $\bm{z}_{i}$ of $\bm{x}_{i}$ as
an anchor event, through the event representation model mentioned in § 2. The method differs from InfoNCE in the construction of the positive set $\mathcal{A}(i)$ for $\bm{x}_{i}$: in InfoNCE, $\mathcal{A}(i)$ contains only one positive, whereas our method generalizes Eq. 3 to support learning with multiple positives:
$\mathcal{L}=\sum_{a\in\mathcal{A}(i)}-\mathrm{log}\frac{g(\bm{z}_{i},\bm{z}_{a})}{g(\bm{z}_{i},\bm{z}_{a})+\sum_{k\in\mathcal{N}(i)}g(\bm{z}_{i},\bm{z}_{k})},$ (5)
where $\mathcal{A}(i)$ and $\mathcal{N}(i)$ refer to the positive set and the
negative set for the event $\bm{x}_{i}$. Note that we support an arbitrary number of positives here. In our work, considering the limited GPU memory, we use
$\mathcal{A}(i)=\\{\bm{z}_{a_{1}},\bm{z}_{a_{2}},\bm{z}_{a_{3}}\\}$, where
$\bm{z}_{a_{1}}$ and $\bm{z}_{a_{2}}$ are two augmented representations of the
same event $\bm{x}_{i}$, obtained with different dropout masks, and
$\bm{z}_{a_{3}}$ is an augmented representation of its co-occurring event.
Here $\bm{z}_{a_{1}}$ and $\bm{z}_{a_{2}}$ will then be used in the prototype-
based clustering method (See Fig. 2 for example) as detailed later (§ 3.2).
#### Incorporating co-occurrence frequency.
The co-occurrence frequency indicates the strength of the connection between
two events. To make better use of data, we introduce a weighting mechanism to
exploit the co-occurrence frequency between events as instance weights and
rewrite the Eq. 5:
$\mathcal{L}_{cl}=\sum_{a\in\mathcal{A}(i)}-\mathrm{log}\frac{\varepsilon_{a}\cdot g(\bm{z}_{i},\bm{z}_{a})}{g(\bm{z}_{i},\bm{z}_{a})+\sum_{k\in\mathcal{N}(i)}g(\bm{z}_{i},\bm{z}_{k})}.$ (6)
Here $\varepsilon_{a}$ is a weight for the positive sample $\bm{z}_{a}$. In
our work, the two weights $\varepsilon_{a_{1}}$ and $\varepsilon_{a_{2}}$ of
the positive samples ($\bm{z}_{a_{1}}$ and $\bm{z}_{a_{2}}$) obtained from the
input event, are set as
$\varepsilon_{a_{1}}=\varepsilon_{a_{2}}=\frac{1}{|\mathcal{A}(i)|-1}$, where
$|\mathcal{A}(i)|$ is its cardinality. To obtain the weight
$\varepsilon_{a_{3}}$ for the augmented representation $\bm{z}_{a_{3}}$ of the
co-occurring event, we create a co-occurrence matrix $\bm{V}$, with each entry
corresponding to the co-occurrence frequency of two distinct events. Then
$\bm{V}$ is normalized to $\hat{\bm{V}}$ with the Min-Max normalization
method, and we take the entry in $\hat{\bm{V}}$ as the weight
$\varepsilon_{a_{3}}$ for the co-occurrence event. In this way, the model
draws the input events closer to the events with higher co-occurrence
frequency, as each entry in $\hat{\bm{V}}$ indicates the strength of the
connection between two events.
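As a concrete reference point, the following minimal PyTorch sketch evaluates the objective of Eq. (6) for a single anchor, using random placeholder embeddings and made-up weights; it is not the authors' implementation. It follows Eq. (6) literally and keeps $\varepsilon_{a}$ inside the logarithm; since $\log\varepsilon_{a}$ has zero gradient with respect to the model parameters, a common alternative is to apply $\varepsilon_{a}$ as a multiplicative weight on each positive's log-ratio, and the equation alone does not distinguish the two.

```python
import torch

def weakly_supervised_nce(z_i, positives, weights, negatives, tau=0.05):
    # z_i: (dim,) anchor; positives: (P, dim); weights: (P,) epsilon_a; negatives: (K, dim)
    pos_sim = positives @ z_i / tau            # log g(z_i, z_a) for each positive a
    neg_sim = negatives @ z_i / tau            # log g(z_i, z_k) for each in-batch negative k
    # log[ g(z_i,z_a) + sum_k g(z_i,z_k) ], computed separately for every positive a
    denom = torch.logsumexp(
        torch.cat([pos_sim.unsqueeze(1),
                   neg_sim.unsqueeze(0).expand(pos_sim.size(0), -1)], dim=1), dim=1)
    # Eq. (6) taken literally: -sum_a log( eps_a * g / (g + sum_k g) )
    return -(torch.log(weights) + pos_sim - denom).sum()

z_i = torch.randn(768)
A = torch.randn(3, 768)                 # z_a1, z_a2 (dropout views of x_i) and z_a3 (co-occurring event)
eps = torch.tensor([0.5, 0.5, 0.8])     # 1/(|A(i)|-1) twice, plus a min-max normalized co-occurrence weight
N = torch.randn(16, 768)                # in-batch negatives
print(weakly_supervised_nce(z_i, A, eps, N))
```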
### 3.2 Prototype-based Clustering
To avoid semantically related events being pulled apart, we draw inspiration
from the recent approach (Caron et al., 2020) in the computer vision domain
and introduce a prototype-based clustering method, where we impose, for each
cluster, a prototype, which is a representative embedding for a group of
semantically related events. We then cluster the data while enforcing
consistency between the cluster assignments produced for different augmented
representations of an event. These prototypes essentially serve as the centers
of the representation clusters of semantically related events
(see Figure 1 for an example). Unlike instance-wise contrastive learning, our
clustering method focuses on cluster-level semantic concepts by
contrasting representations of events against clusters.
#### Cluster prediction.
This method works by comparing two different augmented representations of the
same event using their intermediate cluster assignments. The motivation is
that if these two representations capture the same information, it should be
possible to predict the cluster assignment of one augmented representation
from another augmented representation. In detail, we consider a set of $M$
prototypes, each associated with a learnable vector $\bm{c}_{i}$, where
$i\in\llbracket M\rrbracket$. Given an input event, we first transform the
event into two augmented representations with two different dropout masks.
Here we use the two augmented representations $\bm{z}_{a_{1}}$ and
$\bm{z}_{a_{2}}$ of the event $\bm{x}_{i}$. We compute their cluster
assignments $\bm{q}_{a_{1}}$ and $\bm{q}_{a_{2}}$ by matching the two
augmented representations to the set of $M$ prototypes. The cluster
assignments are then swapped between the two augmented representations: the
cluster assignment $\bm{q}_{a_{1}}$ of the augmented representation
$\bm{z}_{a_{1}}$ should be predicted from the augmented representation
$\bm{z}_{a_{2}}$, and vice-versa. Formally, the cluster prediction loss is
defined as:
$\mathcal{L}_{cp}=\ell(\bm{z}_{a_{1}},\bm{q}_{a_{2}})+\ell(\bm{z}_{a_{2}},\bm{q}_{a_{1}}),$
(7)
where function $\ell(\bm{z},\bm{q})$ measures the fit between the
representation $\bm{z}$ and the cluster assignment $\bm{q}$, as defined by:
$\ell(\bm{z},\bm{q})=-\bm{q}^{\top}\mathrm{log}\,\bm{p}$. Here $\bm{p}$ is a
probability vector over the $M$ prototypes whose components are:
$p^{(j)}=\frac{\exp(\bm{z}^{\top}\bm{c}_{j}/\tau)}{\sum_{k=1}^{M}\exp(\bm{z}^{\top}\bm{c}_{k}/\tau)},$
(8)
where $\tau$ is a temperature hyperparameter. Intuitively, this cluster
prediction method links representations $\bm{z}_{a_{1}}$ and $\bm{z}_{a_{2}}$
using the intermediate cluster assignments $\bm{q}_{a_{1}}$ and
$\bm{q}_{a_{2}}$.
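The following is a minimal sketch of Eqs. 7 and 8 in PyTorch; the variable names are ours, and we assume the representations and the prototype vectors live in the same embedding space.

```python
import torch
import torch.nn.functional as F

def cluster_prediction_loss(z1, z2, q1, q2, prototypes, tau=0.3):
    """Sketch of Eqs. 7-8: z1, z2 are (B, d) augmented representations,
    q1, q2 are (B, M) cluster assignments, prototypes is an (M, d) tensor."""
    def cross_entropy(z, q):
        logits = z @ prototypes.t() / tau          # (B, M): z^T c_j / tau
        log_p = F.log_softmax(logits, dim=1)       # Eq. 8 in log space
        return -(q * log_p).sum(dim=1).mean()      # l(z, q) = -q . log p

    # Swapped prediction (Eq. 7): predict q2 from z1 and q1 from z2.
    return cross_entropy(z1, q2) + cross_entropy(z2, q1)
```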
#### Computing cluster assignments.
We compute the cluster assignments using an Optimal Transport solver. This
solver ensures equal partitioning of the prototypes or clusters across all
augmented representations, avoiding trivial solutions where all
representations are mapped to a unique prototype. In particular, we employ the
Sinkhorn-Knopp algorithm (Cuturi, 2013). The algorithm begins with a
matrix $\bm{\Gamma}\in\mathbb{R}^{M\times N}$ with each element initialized to
$\bm{z}_{b}^{\top}\bm{c}_{m}$, where $b\in\llbracket N\rrbracket$ is the index
of each column. It then iteratively produces a doubly-normalized matrix, the
columns of which comprise $\bm{q}$ for the minibatch.
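A hedged sketch of this step is shown below; it follows a common SwAV-style implementation of the Sinkhorn-Knopp solver rather than the authors' exact code, and the smoothing parameter `eps` and the number of iterations are illustrative assumptions.

```python
import torch

@torch.no_grad()
def sinkhorn_assignments(scores, eps=0.05, n_iters=3):
    """Sketch: `scores` is an (M, N) matrix with entries z_b^T c_m for a
    minibatch of N representations and M prototypes. Returns a doubly-
    normalized (M, N) matrix whose columns are the soft assignments q."""
    Q = torch.exp(scores / eps)               # positive matrix to be balanced
    Q /= Q.sum()
    M, N = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(dim=1, keepdim=True)       # rows: equal partition over prototypes
        Q /= M
        Q /= Q.sum(dim=0, keepdim=True)       # columns: each sample is a distribution
        Q /= N
    return Q * N                              # each column of Q now sums to 1
```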
Model | Hard similarity: Original (Accuracy %) | Hard similarity: Extended (Accuracy %) | Transitive sentence similarity ($\rho$)
---|---|---|---
Event-comp (Weber et al., 2018)* | 33.9 | 18.7 | 0.57
Predicate Tensor (Weber et al., 2018)* | 41.0 | 25.6 | 0.63
Role-factor Tensor (Weber et al., 2018)* | 43.5 | 20.7 | 0.64
KGEB (Ding et al., 2016)* | 52.6 | 49.8 | 0.61
NTN-IntSent (Ding et al., 2019)* | 77.4 | 62.8 | 0.74
SAM-Net (Lv et al., 2019)* | 51.3 | 45.2 | 0.59
FEEL (Lee and Goldwasser, 2018)* | 58.7 | 50.7 | 0.67
UniFA-S (Zheng et al., 2020)* | 78.3 | 64.1 | 0.75
SWCC | 80.9 | 72.1 | 0.82
Table 1: Evaluation performance on the similarity tasks. Best results are
bold. *: results reported in the original papers.
### 3.3 Model Training
Our approach learns event representations by simultaneously performing weakly
supervised contrastive learning and prototype-based clustering. The overall
training objective has three terms:
$\mathcal{L}_{overall}=\mathcal{L}_{cl}+\beta\mathcal{L}_{cp}+\gamma\mathcal{L}_{mlm},$
(9)
where $\beta$ and $\gamma$ are hyperparameters. The first term is the weakly
supervised contrastive learning loss that allows us to effectively incorporate
co-occurrence information into event representation learning. The second term
is the prototype-based clustering loss, whose goal is to cluster the events
while enforcing consistency between cluster assignments produced for different
augmented representations of the input event. Lastly, we introduce the masked
language modeling (MLM) objective (Devlin et al., 2019) as an auxiliary loss
to avoid forgetting of token-level knowledge.
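A minimal sketch of combining the three terms is given below; the default values of $\beta$ and $\gamma$ are taken from the best-performing settings reported in the appendix (Tables 6 and 7), not from a prescribed configuration.

```python
def overall_loss(loss_cl, loss_cp, loss_mlm, beta=0.1, gamma=1.0):
    """Sketch of Eq. 9: weakly supervised contrastive loss + beta * cluster
    prediction loss + gamma * masked language modeling loss."""
    return loss_cl + beta * loss_cp + gamma * loss_mlm
```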
## 4 Experiments
Following common practice in event representation learning (Weber et al.,
2018; Ding et al., 2019; Zheng et al., 2020), we analyze the event
representations learned by our approach on two event similarity tasks (§ 4.2)
and one transfer task (§ 4.4).
### 4.1 Dataset and Implementation Details
The event triples we use for the training data are extracted from the New York
Times Gigaword Corpus using the Open Information Extraction system Ollie
(Mausam et al., 2012). We filtered the events with frequencies less than 3 and
ended up with 4,029,877 distinct events. We use the MCNC dataset adopted in
Lee and Goldwasser
(2019)111https://github.com/doug919/multi_relational_script_learning for the
transfer task.
Our event representation model is implemented using the Texar-PyTorch package
(Hu et al., 2019). The model starts from the pre-trained checkpoint of
BERT-base-uncased (Devlin et al., 2019), and we use the $[\mathrm{CLS}]$ token
representation as the event representation. We train our model with a batch
size of 256 using an Adam optimizer. The learning rate is set as 2e-7 for the
event representation model and 2e-5 for the prototype memory. We adopt a
temperature of $\tau=0.3$, and the number of prototypes used in our experiments is
10.
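For illustration, the two learning rates could be realized with parameter groups as sketched below; the module names and sizes are placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder modules standing in for the BERT-based event encoder and the
# M learnable prototype vectors; names and dimensions are illustrative only.
encoder = nn.Linear(768, 768)
prototypes = nn.Embedding(10, 768)   # M = 10 prototypes, as in our setup

optimizer = torch.optim.Adam([
    {"params": encoder.parameters(),    "lr": 2e-7},  # event representation model
    {"params": prototypes.parameters(), "lr": 2e-5},  # prototype memory
])
```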
### 4.2 Event Similarity Tasks
Similarity tasks are a common way to measure the quality of vector
representations. Weber et al. (2018) introduce two event-related similarity
tasks: (1) Hard Similarity Task and (2) Transitive Sentence Similarity.
#### Hard Similarity Task.
The hard similarity task tests whether the event representation model can push
away representations of dissimilar events while pulling together those of
similar events. Weber et al. (2018) created a dataset (denoted as “Original”),
where each sample has two types of event pairs: one with events that should be
close to each other but have very little lexical overlap, and another with
events that should be farther apart but have high overlap. This dataset
contains 230 event pairs. After that, Ding et al. (2019) extended this dataset
to 1,000 event pairs (denoted as “Extended”). For this task, we use Accuracy
as the evaluation metric, which measures the percentage of cases where the
similar pair receives a higher cosine similarity than the dissimilar pair.
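A minimal sketch of this accuracy metric, assuming precomputed cosine similarities for the paired similar and dissimilar event pairs:

```python
import numpy as np

def hard_similarity_accuracy(sim_pos, sim_neg):
    """Sketch: sim_pos[i] is the cosine similarity of the i-th 'should-be-close'
    pair, sim_neg[i] that of its paired 'should-be-far' pair; accuracy is the
    fraction of cases where the similar pair scores higher."""
    sim_pos, sim_neg = np.asarray(sim_pos), np.asarray(sim_neg)
    return float(np.mean(sim_pos > sim_neg))
```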
#### Transitive Sentence Similarity.
The transitive sentence similarity dataset (Kartsaklis and Sadrzadeh, 2014)
contains 108 pairs of transitive sentences that contain a single subject,
object, and verb (e.g., agent sell property) and each pair in this dataset is
manually annotated by a similarity score from 1 to 7. A larger score indicates
that the two events are more similar. Following previous work (Weber et al.,
2018; Ding et al., 2019; Zheng et al., 2020), we evaluate using Spearman's
correlation between the cosine similarity predicted by each method and the
annotated similarity score.
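A minimal sketch of this evaluation, assuming the predicted cosine similarities and the averaged human scores are available as arrays:

```python
import numpy as np
from scipy.stats import spearmanr

def transitive_similarity_score(pred_cosine, human_scores):
    """Sketch: Spearman's rho between predicted cosine similarities and the
    1-7 human similarity annotations of the 108 sentence pairs."""
    rho, _ = spearmanr(np.asarray(pred_cosine), np.asarray(human_scores))
    return rho
```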
### 4.3 Comparison Methods
We compare our proposed approach with a variety of baselines. These methods
can be categorized into three types:
(1) Co-occurrence: Event-comp (Weber et al., 2018), Role-factor Tensor (Weber
et al., 2018) and Predicate Tensor (Weber et al., 2018) are models that use
tensors to learn the interactions between the predicate and its arguments and
are trained using co-occurring events as supervision.
(2) Discourse Relations: This line of work exploits discourse relations. SAM-
Net (Lv et al., 2019) explores event segment relations, FEEL (Lee and
Goldwasser, 2018) and UniFA-S (Zheng et al., 2020) adopt discourse relations.
(3) Commonsense Knowledge: Several works have shown the effectiveness of using
commonsense knowledge. KGEB (Ding et al., 2016) incorporates knowledge graph
information. NTN-IntSent (Ding et al., 2019) leverages external commonsense
knowledge about the intent and sentiment of the event.
#### Results.
Table 1 reports the performance of different methods on the hard similarity
tasks and the transitive sentence similarity task. The result shows that the
proposed SWCC achieves the best performance among the compared methods. It not
only outperforms the Role-factor Tensor method, which is based on co-occurrence
information, but also performs better than methods trained with
additional annotations and commonsense knowledge, e.g., NTN-IntSent and
UniFA-S. This implies that the co-occurrence information of events is effective but
has been underutilized by previous work, and that the proposed SWCC makes better use
of it.
Model | Hard similarity: Original (Accuracy %) | Hard similarity: Extended (Accuracy %) | Transitive sentence similarity ($\rho$)
---|---|---|---
SWCC | 80.9 | 72.1 | 0.82
w/o Prototype-based Clustering | 77.4 (-3.5) | 67.4 (-4.7) | 0.77 (-0.05)
w/o Weakly Supervised CL | 75.7 (-5.2) | 65.1 (-7.0) | 0.78 (-0.04)
w/o MLM | 77.4 (-3.5) | 70.4 (-1.7) | 0.80 (-0.02)
BERT (InfoNCE) | 72.1 | 63.4 | 0.75
BERT (Margin) | 43.5 | 51.4 | 0.67
Table 2: Ablation study for several methods evaluated on the similarity tasks.
#### Ablation study.
To investigate the effect of each component in our approach, we conduct an
ablation study as reported in Table 2. We remove a certain component of SWCC
and examine the corresponding performance of the incomplete SWCC on the
similarity tasks. We first explore the impact of our prototype-based
clustering method by removing the loss term $\mathcal{L}_{cp}$ in Eq. 9. We
find that this component has a significant impact on the transitive sentence
similarity task: removing it causes the largest drop, 0.05 points, on that
task. For the weakly supervised contrastive learning method, we find that it
has a strong impact on both hard similarity tasks, especially the extended
one, where removing it causes a 7.0 point drop in performance.
We also study the impact of the MLM auxiliary objective. As shown in Table 2,
while the token-level MLM objective modestly improves performance on the
extended hard similarity task, it does not help much on the transitive sentence
similarity task.
Next, we compare the InfoNCE against the margin loss in Table 2. For a fair
comparison, the BERT (InfoNCE) is trained using the InfoNCE objective only,
with co-occurring events as positives and other samples in the minibatch as
negatives, and the BERT (Margin) is trained using the margin loss, with co-
occurring events as positives and randomly sampled events as negatives.
BERT (InfoNCE) clearly achieves more competitive results on all tasks,
suggesting that InfoNCE with an adjustable temperature works better than the
margin loss. This can be explained by the fact that InfoNCE weighs
multiple different negatives, and an appropriate temperature can help the
model learn from hard negatives, while the margin loss uses only one negative
and cannot weigh negatives by their relative hardness.
### 4.4 Transfer Task
We test the generalization of the event representations by transferring to a
downstream event-related task, the Multiple Choice Narrative Cloze (MCNC)
task (Granroth-Wilding and Clark, 2016), which was proposed to evaluate script
knowledge. In particular, given an event chain which is a series of events,
this task requires a reasoning system to distinguish the next event from a
small set of randomly drawn events. We compare our method with several
unsupervised baselines: (1) Random picks a candidate at random
uniformly; (2) PPMI (Chambers and Jurafsky, 2008) uses co-occurrence
information and calculates Positive PMI for event pairs; (3) BiGram (Jans et
al., 2012) calculates bi-gram conditional probabilities based on event term
frequencies; (4) Word2Vec (Mikolov et al., 2013) uses word embeddings
trained by the Skip-gram algorithm, and event representations are the sum of
the word embeddings of predicates and arguments. Note that we did not compare with
supervised methods Bai et al. (2021); Zhou et al. (2021); Lv et al. (2020)
since unsupervised ones are more suitable for purely evaluating event
representations.
#### Results.
Table 3 reports the performance of different methods on the MCNC task. As
shown in the table, SWCC achieves the best accuracy on the MCNC task under the
zero-shot transfer setting, suggesting that the proposed SWCC generalizes
better to downstream tasks than the other compared methods.
Model | Accuracy (%)
---|---
Random | 20.00
PPMI* | 30.52
BiGram* | 29.67
Word2Vec* | 37.39
BERT (Margin) | 36.50
BERT (InfoNCE) | 39.23
SWCC | 44.50
Table 3: Evaluation performance on the MCNC task. Best results are bold. *:
results reported in the previous work (Lee and Goldwasser, 2019).
## 5 Analysis and Visualization
In this section, we further analyze the prototype-based clustering method.
#### Number of prototypes.
Figure 3 displays the impact of the number of prototypes in training. As shown
in the figure, the performance increases with the number of prototypes $M$, but
does not improve further beyond 10. We speculate that because these evaluation
datasets are small and contain few types of relations, a larger number of
prototypes does not yield further improvement.
Figure 3: Impact of # of Prototypes
#### Visualization of learned representation.
We randomly sample 3000 events and embed the event representations learned by
BERT (InfoNCE) and SWCC in 2D using the PCA method. The cluster label of each
event is determined by matching its representation to the set of $M$
prototypes. The resulting visualizations are given in Figure 4. It shows that
the proposed SWCC yields significantly better clustering performance than the
BERT (InfoNCE), which means, to a certain extent, the prototype-based
clustering method can help the event representation model capture various
relations of events. Overall, the class separation in the visualizations
qualitatively agrees with the performance in Table 1.
Figure 4: 2D visualizations of the event representation spaces learned by BERT
(InfoNCE) (left) and SWCC (right), respectively. Each event is denoted by a
color indicating a prototype.
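A hedged sketch of how such a visualization can be produced is given below; assigning each event to its highest-scoring prototype and using scikit-learn's PCA are our assumptions about the procedure.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def visualize_events(Z, prototypes):
    """Sketch of Figure 4: Z is an (n_events, d) array of event representations,
    prototypes is (M, d). Each event is colored by its closest prototype."""
    labels = np.argmax(Z @ prototypes.T, axis=1)   # match each event to a prototype
    Z_2d = PCA(n_components=2).fit_transform(Z)    # embed representations in 2D
    plt.scatter(Z_2d[:, 0], Z_2d[:, 1], c=labels, s=5, cmap="tab10")
    plt.title("Event representations colored by prototype")
    plt.show()
```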
#### Case study.
We also present sampled events from two different prototypes in Table 4 (see
Appendix for more examples), to further demonstrate the ability of SWCC to
capture various relations of events. We can see that the events belonging to
“Prototype1” mainly describe financial matters, for example, “earnings be
reduced”, while the events belonging to “Prototype2” are mainly related to
politics. Clearly, the events in the same cluster share the same topic. We
also find causal and temporal relations between some of
these events. For example, “earnings be reduced” led to “company cut costs”.
Prototype1 | Prototype2
---|---
loans be sell in market | president asked senate
earnings be reduced | he deal with congress
company cut costs | senate reject it
earnings be flat | council gave approval
banks earn fees | council rejected bill
Table 4: Example events of two different prototypes.
## 6 Related Work
#### Event representation learning.
Effectively representing events and their relations (causal, temporal,
entailment; Ning et al. (2018); Yu et al. (2020)) is important for various
downstream tasks, such as event schema induction Li et al. (2020), event
narrative modeling Chambers and Jurafsky (2008); Li et al. (2018); Lee and
Goldwasser (2019), event knowledge graph construction Sap et al. (2019); Zhang
et al. (2020), etc. Many efforts have been devoted to learning distributed
event representations. Though driven by various motivations, the main idea of
these methods is to exploit explicit relations of events as supervision
signals and these supervision signals can be roughly categorized into three
types: (1) discourse relations (e.g., causal and temporal relations) obtained
with automatic annotation tools (Zheng et al., 2020); (2) manually annotated
external knowledge (e.g. sentiments and intents) (Lee and Goldwasser, 2018;
Ding et al., 2019) and (3) co-occurrence information (Weber et al., 2018).
Existing work has focused on the first two supervision signals, with less
research on how to better utilize co-occurrence information. Although discourse
relations and external knowledge are fine-grained and can provide
more accurate knowledge, the currently defined fine-grained relations
cover only a small set of event relations. Co-occurrence information is easily
accessible but underutilized. Our work focuses on exploiting document-level
co-occurrence information of events to learn event representations, without any
additional annotations.
#### Instance-wise contrastive learning.
Recently, a number of instance-wise contrastive learning methods have emerged
to greatly improve the performance of unsupervised visual and text
representations (He et al., 2020; Chen et al., 2020b, a; Chen and He, 2021;
Grill et al., 2020; Zbontar et al., 2021; Chen et al., 2020a; Hu et al., 2021;
Gao et al., 2021; Yang et al., 2021). This line of work aims at learning an
embedding space where samples from the same instance are pulled closer and
samples from different instances are pushed apart, and usually adopt the InfoNCE
objective (van den Oord et al., 2019) for training their models. Unlike the
margin loss, which uses one positive example and one negative example, InfoNCE
can handle the case where there exist multiple negative samples. In our work,
we extend the InfoNCE, which is a self-supervised contrastive learning
approach, to a weakly supervised contrastive learning setting, allowing us to
effectively leverage co-occurrence information.
#### Deep unsupervised clustering.
Clustering based methods have been proposed for representation learning (Caron
et al., 2018; Zhan et al., 2020; Caron et al., 2020; Li et al., 2021; Zhang et
al., 2021). Caron et al. (2018) use k-means assignments as pseudo-labels to learn
visual representations. Later, Asano et al. (2020) and Caron et al. (2020)
cast the pseudo-label assignment problem as an instance of the optimal
transport problem. Inspired by Caron et al. (2020), we leverage a similar
formulation to map event representations to prototype vectors. Different from
Caron et al. (2020), we simultaneously perform weakly supervised contrastive
learning and prototype-based clustering.
## 7 Conclusion
In this work, we propose a simple and effective framework (SWCC) that learns
event representations by making better use of co-occurrence information of
events, without any additional annotations. In particular, we introduce a weakly
supervised contrastive learning method that allows us to consider multiple
positives and multiple negatives, and a prototype-based clustering method that
avoids semantically related events being pulled apart. Our experiments
indicate that our approach not only outperforms other baselines on several
event-related tasks, but also achieves good clustering performance on events. We also
provide a thorough analysis of the prototype-based clustering method to
demonstrate that the learned prototype vectors are able to implicitly capture
various relations between events.
## Acknowledgements
This work was partially supported by the National Natural Science Foundation
of China (61876053, 62006062, 62176076), the Shenzhen Foundational Research
Funding (JCYJ20200109113441941, JCYJ20210324115614039), and the Joint Lab of
HITSZ and China Merchants Securities.
## References
* Asano et al. (2020) Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. 2020. Self-labelling via simultaneous clustering and representation learning. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net.
  * Bai et al. (2021) Long Bai, Saiping Guan, Jiafeng Guo, Zixuan Li, Xiaolong Jin, and Xueqi Cheng. 2021. Integrating deep event-level and script-level information for script event prediction. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 9869–9878, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Caron et al. (2018) Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. 2018. Deep clustering for unsupervised learning of visual features. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pages 132–149.
* Caron et al. (2020) Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. 2020. Unsupervised learning of visual features by contrasting cluster assignments. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* Chambers and Jurafsky (2008) Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In _Proceedings of ACL-08: HLT_ , pages 789–797, Columbus, Ohio. Association for Computational Linguistics.
* Chen et al. (2021) Hong Chen, Raphael Shu, Hiroya Takamura, and Hideki Nakayama. 2021. Graphplan: Story generation by planning with event graph. _ArXiv preprint_ , abs/2102.02977.
* Chen et al. (2020a) Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020a. A simple framework for contrastive learning of visual representations. In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 1597–1607. PMLR.
* Chen et al. (2020b) Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. 2020b. Improved baselines with momentum contrastive learning. _ArXiv preprint_ , abs/2003.04297.
* Chen and He (2021) Xinlei Chen and Kaiming He. 2021. Exploring simple siamese representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 15750–15758.
* Cuturi (2013) Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. In _Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States_ , pages 2292–2300.
* Deng et al. (2021) Shumin Deng, Ningyu Zhang, Luoqiu Li, Chen Hui, Tou Huaixiao, Mosha Chen, Fei Huang, and Huajun Chen. 2021. OntoED: Low-resource event detection with ontology embedding. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 2828–2839, Online. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ding et al. (2019) Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, and Junwen Duan. 2019. Event representation learning enhanced with external commonsense knowledge. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4894–4903, Hong Kong, China. Association for Computational Linguistics.
* Ding et al. (2016) Xiao Ding, Yue Zhang, Ting Liu, and Junwen Duan. 2016. Knowledge-driven event embedding for stock prediction. In _Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers_ , pages 2133–2142, Osaka, Japan. The COLING 2016 Organizing Committee.
* Gao et al. (2021) Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. Simcse: Simple contrastive learning of sentence embeddings. _ArXiv preprint_ , abs/2104.08821.
* Granroth-Wilding and Clark (2016) Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In _Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA_ , pages 2727–2733. AAAI Press.
* Grill et al. (2020) Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. 2020. Bootstrap your own latent - A new approach to self-supervised learning. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* He et al. (2020) Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020_ , pages 9726–9735. IEEE.
* Hu et al. (2021) Qianjiang Hu, Xiao Wang, Wei Hu, and Guo-Jun Qi. 2021. Adco: Adversarial contrast for efficient learning of unsupervised representations from self-trained negative adversaries. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 1074–1083.
* Hu et al. (2019) Zhiting Hu, Haoran Shi, Bowen Tan, Wentao Wang, Zichao Yang, Tiancheng Zhao, Junxian He, Lianhui Qin, Di Wang, Xuezhe Ma, Zhengzhong Liu, Xiaodan Liang, Wanrong Zhu, Devendra Sachan, and Eric Xing. 2019. Texar: A modularized, versatile, and extensible toolkit for text generation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations_ , pages 159–164, Florence, Italy. Association for Computational Linguistics.
* Hwang et al. (2021) Jena D. Hwang, Chandra Bhagavatula, Ronan Le Bras, Jeff Da, Keisuke Sakaguchi, Antoine Bosselut, and Yejin Choi. 2021. Comet-atomic 2020: On symbolic and neural commonsense knowledge graphs. In _AAAI_.
* Jans et al. (2012) Bram Jans, Steven Bethard, Ivan Vulić, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In _Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 336–344, Avignon, France. Association for Computational Linguistics.
* Kartsaklis and Sadrzadeh (2014) Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. _ArXiv preprint_ , abs/1405.2874.
* Khosla et al. (2020) Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* Lee and Goldwasser (2018) I-Ta Lee and Dan Goldwasser. 2018. FEEL: featured event embedding learning. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 4840–4847. AAAI Press.
* Lee and Goldwasser (2019) I-Ta Lee and Dan Goldwasser. 2019. Multi-relational script learning for discourse relations. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4214–4226, Florence, Italy. Association for Computational Linguistics.
* Li et al. (2021) Junnan Li, Pan Zhou, Caiming Xiong, and Steven C. H. Hoi. 2021. Prototypical contrastive learning of unsupervised representations. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net.
* Li et al. (2020) Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 684–695, Online. Association for Computational Linguistics.
* Li et al. (2018) Zhongyang Li, Xiao Ding, and Ting Liu. 2018. Constructing narrative event evolutionary graph for script event prediction. In _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden_ , pages 4201–4207. ijcai.org.
* Liang et al. (2021) Xiaobo Liang, Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, and Tie-Yan Liu. 2021. R-drop: Regularized dropout for neural networks. _ArXiv preprint_ , abs/2106.14448.
* Lv et al. (2019) Shangwen Lv, Wanhui Qian, Longtao Huang, Jizhong Han, and Songlin Hu. 2019. Sam-net: Integrating event-level and chain-level attentions to predict what happens next. In _The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019_ , pages 6802–6809. AAAI Press.
* Lv et al. (2020) Shangwen Lv, Fuqing Zhu, and Songlin Hu. 2020. Integrating external event knowledge for script learning. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 306–315, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Martin et al. (2018) Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl. 2018. Event representations for automated story generation with deep neural nets. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 868–875. AAAI Press.
  * Mausam et al. (2012) Mausam, Michael Schmitz, Stephen Soderland, Robert Bart, and Oren Etzioni. 2012. Open language learning for information extraction. In _Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning_ , pages 523–534, Jeju Island, Korea. Association for Computational Linguistics.
* Mikolov et al. (2013) Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. _ArXiv preprint_ , abs/1301.3781.
* Ning et al. (2018) Qiang Ning, Hao Wu, and Dan Roth. 2018. A multi-axis annotation scheme for event temporal relations. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1318–1328, Melbourne, Australia. Association for Computational Linguistics.
* Rezaee and Ferraro (2021) Mehdi Rezaee and Francis Ferraro. 2021. Event representation with sequential, semi-supervised discrete variables. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 4701–4716, Online. Association for Computational Linguistics.
* Sap et al. (2019) Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A. Smith, and Yejin Choi. 2019. ATOMIC: an atlas of machine commonsense for if-then reasoning. In _The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019_ , pages 3027–3035. AAAI Press.
* Schroff et al. (2015) Florian Schroff, Dmitry Kalenichenko, and James Philbin. 2015. Facenet: A unified embedding for face recognition and clustering. In _IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015_ , pages 815–823. IEEE Computer Society.
* Socher et al. (2013a) Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013a. Reasoning with neural tensor networks for knowledge base completion. In _Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States_ , pages 926–934.
* Socher et al. (2013b) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
* van den Oord et al. (2019) Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2019. Representation learning with contrastive predictive coding.
* Vijayaraghavan and Roy (2021) Prashanth Vijayaraghavan and Deb Roy. 2021. Lifelong knowledge-enriched social event representation learning. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 3624–3635, Online. Association for Computational Linguistics.
* Wang et al. (2020) Haoyu Wang, Muhao Chen, Hongming Zhang, and Dan Roth. 2020. Joint constrained learning for event-event relation extraction. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 696–706, Online. Association for Computational Linguistics.
* Weber et al. (2018) Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensor-based compositions. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 4946–4953. AAAI Press.
* Xue et al. (2016) Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Attapol Rutherford, Bonnie Webber, Chuan Wang, and Hongmin Wang. 2016. CoNLL 2016 shared task on multilingual shallow discourse parsing. In _Proceedings of the CoNLL-16 shared task_ , pages 1–19, Berlin, Germany. Association for Computational Linguistics.
* Yang et al. (2021) Haoran Yang, Wai Lam, and Piji Li. 2021. Contrastive representation learning for exemplar-guided paraphrase generation. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 4754–4761, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Yu et al. (2020) Changlong Yu, Hongming Zhang, Yangqiu Song, Wilfred Ng, and Lifeng Shang. 2020. Enriching large-scale eventuality knowledge graph with entailment relations. In _Automated Knowledge Base Construction_.
* Zbontar et al. (2021) Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. 2021. Barlow twins: Self-supervised learning via redundancy reduction. In _Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event_ , volume 139 of _Proceedings of Machine Learning Research_ , pages 12310–12320. PMLR.
  * Zhan et al. (2020) Xiaohang Zhan, Jiahao Xie, Ziwei Liu, Yew-Soon Ong, and Chen Change Loy. 2020. Online deep clustering for unsupervised representation learning. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020_ , pages 6687–6696. IEEE.
* Zhang et al. (2021) Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O. Arnold, and Bing Xiang. 2021. Supporting clustering with contrastive learning. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 5419–5430, Online. Association for Computational Linguistics.
  * Zhang et al. (2020) Hongming Zhang, Xin Liu, Haojie Pan, Yangqiu Song, and Cane Wing-Ki Leung. 2020. ASER: A large-scale eventuality knowledge graph. In _WWW ’20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020_ , pages 201–211. ACM / IW3C2.
* Zheng et al. (2020) Jianming Zheng, Fei Cai, and Honghui Chen. 2020. Incorporating scenario knowledge into A unified fine-tuning architecture for event representation. In _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020_ , pages 249–258. ACM.
  * Zhou et al. (2021) Yucheng Zhou, Xiubo Geng, Tao Shen, Jian Pei, Wenqiang Zhang, and Daxin Jiang. 2021. Modeling event-pair relations in external knowledge graphs for script reasoning. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 4586–4596, Online. Association for Computational Linguistics.
## Appendix A Appendix
### A.1 Model Analysis
#### Impact of Temperature.
We study the impact of the temperature by trying different temperature
values in Table 5 and observe that all variants underperform $\tau=0.3$.
SWCC with temperature | Hard similarity: Original (Acc. %) | Hard similarity: Extended (Acc. %) | Transitive sentence similarity ($\rho$)
---|---|---|---
$\tau=0.2$ | 80.0 | 71.0 | 0.80
$\tau=0.3$ | 80.9 | 71.3 | 0.82
$\tau=0.5$ | 77.4 | 68.7 | 0.78
$\tau=0.7$ | 72.2 | 50.5 | 0.75
$\tau=1.0$ | 48.7 | 22.9 | 0.67
Table 5: Impact of Temperature ($\tau$).
#### Impact of the MLM objective with different $\gamma$.
Table 6 presents the results obtained with different $\gamma$. As can be seen
in the table, both larger and smaller values of $\gamma$ can harm the performance of
the model; $\gamma=1.0$ gives the best overall performance.
SWCC with MLM | Hard similarity: Original (Acc. %) | Hard similarity: Extended (Acc. %) | Transitive sentence similarity ($\rho$)
---|---|---|---
$\gamma=0.1$ | 76.5 | 70.9 | 0.80
$\gamma=0.5$ | 79.1 | 71.1 | 0.81
$\gamma=1.0$ | 80.9 | 72.1 | 0.82
$\gamma=1.5$ | 80.9 | 71.9 | 0.81
$\gamma=2.0$ | 80.9 | 72.1 | 0.80
Table 6: Impact of the MLM objective with different $\gamma$.
#### Impact of the prototype-based clustering objective with different
$\beta$.
Finally, we study the impact of the prototype-based clustering objective with
different $\beta$. As can be seen in Table 7, the larger the $\beta$, the
better the performance of the model on the hard similarity tasks.
SWCC with $\mathcal{L}_{cp}$ | Hard similarity: Original (Acc. %) | Hard similarity: Extended (Acc. %) | Transitive sentence similarity ($\rho$)
---|---|---|---
$\beta=0.01$ | 78.3 | 71.6 | 0.80
$\beta=0.05$ | 76.5 | 71.6 | 0.80
$\beta=0.1$ | 80.9 | 72.1 | 0.82
$\beta=0.3$ | 80.9 | 71.3 | 0.82
$\beta=0.5$ | 80.9 | 73.1 | 0.80
$\beta=0.7$ | 80.9 | 72.8 | 0.80
$\beta=1.0$ | 80.9 | 72.1 | 0.80
Table 7: Impact of the prototype-based clustering objective with different
$\beta$.
### A.2 Case Study
#### Case study.
We present sampled events from six different prototypes in Table 8 to further
demonstrate the ability of SWCC to capture various relations of events. We can
see that the events belonging to “Prototype1” mainly describe financial matters,
for example, “earnings be reduced”, while the events belonging to “Prototype2”
are mainly related to politics. Clearly, the events in the same cluster share
the same topic. We also find causal and temporal
relations between some of these events. For example, “earnings be reduced”
leads to “company cut costs”.
Prototype1 | Prototype2 | Prototype3
---|---|---
loans be sell in market | president asked senate | he be known as director
earnings be reduced | he deal with congress | Wright be president of NBC
company cut costs | senate reject it | Cook be chairman of ARCO
earnings be flat | council gave approval | Bernardo be manager for Chamber
banks earn fees | council rejected bill | Philbin be manager of Board
Prototype4 | Prototype5 | Prototype6
---|---|---
he be encouraged by things | kind is essential | Dorsey said to James
I be content | it be approach to life | Gephardt said to Richard
they be motivated by part | we respect desire | Pherson said to Kathy
they be meaningful | thing be do for ourselves | Stone said to Professor
he be ideal | it be goal of people | Stiles said to Thomas
Table 8: Example events of different prototypes.
1 Know-Center GmbH, Graz, Austria
2 Graz University of Technology, Graz, Austria
# Popularity Bias in Collaborative Filtering-Based Multimedia Recommender
Systems
Dominik Kowald (1,2) and Emanuel Lacic (1)
###### Abstract
Multimedia recommender systems suggest media items, e.g., songs, (digital)
books and movies, to users by utilizing concepts of traditional recommender
systems such as collaborative filtering. In this paper, we investigate a
potential issue of such collaborative filtering-based multimedia recommender
systems, namely popularity bias, which leads to the underrepresentation of
unpopular items in the recommendation lists. To this end, we study four
multimedia datasets, i.e., Last.fm, MovieLens, BookCrossing and MyAnimeList,
that we each split into three user groups differing in their inclination to
popularity, i.e., LowPop, MedPop and HighPop. Using these user groups, we
evaluate four collaborative filtering-based algorithms with respect to
popularity bias on the item and the user level. Our findings are three-fold:
firstly, we show that users with little interest in popular items tend to
have large user profiles and are thus important data sources for multimedia
recommender systems. Secondly, we find that popular items are recommended more
frequently than unpopular ones. Thirdly, we find that users with little
interest in popular items receive significantly worse recommendations than
users with medium or high interest in popular items.
###### Keywords:
multimedia recommender systems; collaborative filtering; popularity bias;
algorithmic fairness
## 1 Introduction
Collaborative filtering (CF) is one of the most traditional but also most
powerful concepts for calculating personalized recommendations [22] and is
widely used in the field of multimedia recommender systems (MMRS) [11].
However, one issue of CF-based approaches is that they are prone to popularity
bias, which leads to the overrepresentation of popular items in the
recommendation lists [2, 3]. Recent research has studied popularity bias in
domains such as music [15, 16] or movies [3] by comparing the recommendation
performance for different user groups that differ in their inclination to
mainstream multimedia items. However, a comprehensive study of investigating
popularity bias on the item and user level across several multimedia domains
is still missing (see Section 2).
In the present paper, we therefore build upon these previous works and expand
the study of popularity bias to four different domains of MMRS: music
(Last.fm), movies (MovieLens), digital books (BookCrossing), and animes
(MyAnimeList). Within these domains, we show that users with little interest
in popular items tend to have large user profiles and thus are important
consumers and data sources for MMRS. Furthermore, we apply four different CF-
based recommendation algorithms (see Section 3) on our four datasets that we
each split into three user groups that differ in their inclination to
popularity (i.e., LowPop, MedPop, and HighPop). With this, we address two
research questions (RQ):
* •
RQ1: To what extent does an item’s popularity affect this item’s
recommendation frequency in MMRS?
* •
RQ2: To what extent does a user’s inclination to popular items affect the
quality of MMRS?
Regarding RQ1, we find that the probability of a multimedia item to be
recommended strongly correlates with this item’s popularity. Regarding RQ2, we
find that users with less inclination to popularity (LowPop) receive
statistically significantly worse multimedia recommendations than users with
medium (MedPop) and high (HighPop) inclination to popular items (see Section
4). Our results demonstrate that although users with little interest in
popular items tend to have the largest user profiles, they receive the lowest
recommendation accuracy. Hence, future research is needed to mitigate
popularity bias in MMRS, both on the item and the user level.
## 2 Related Work
This section presents research on popularity bias that is related to our work.
We split these research outcomes in two groups: (i) work related to
recommender systems in general, and (ii) work that focuses on popularity bias
mitigation techniques.
Popularity bias in recommender systems. Within the domain of recommender
systems, there is an increasing number of works that study the effect of
popularity bias. For example, as reported in [8], bias towards popular items
can affect the consumption of items that are not popular. This in turn
prevents them from becoming popular in the future at all. That way, a recommender
system is prone to ignoring novel items or the items liked by niche users that
are typically hidden in the “long-tail” of the available item catalog.
Tackling these long-tail items has been recognized by some earlier work, such
as [10, 20]. This issue is further investigated by [1, 2] using the popular
movie dataset MovieLens 1M. The authors show that more than 80% of all ratings
actually belong to popular items, and based on this, focus on improving the
trade-off between the ranking accuracy and coverage of long-tail items.
Research conducted in [13] illustrates a comprehensive algorithmic comparison
with respect to popularity bias. The authors analyze multimedia datasets such
as MovieLens, Netflix, Yahoo!Movies and BookCrossing, and find that
recommendation methods only consider a small fraction of the available item
spectrum. For instance, they find that KNN-based techniques focus mostly on
high-rated items and factorization models lean towards recommending popular
items. In our work, we analyze an even larger set of multimedia domains and
study popularity bias not only on the item but also on the user level.
Popularity bias mitigation techniques. Typical research on mitigating
popularity bias performs a re-ranking step on a larger set of recommended
candidate items. The goal of such post-processing approaches is to better
expose long-tail items in the recommendation list [2, 4, 6]. Here, for
example, [7] proposes to improve the total number of distinct recommended
items by defining a target distribution of item exposure and minimizing the
discrepancy between exposure and recommendation frequency of each item. In
order to find a fair ratio between popular and less popular items, [24]
proposes to create a protected group of long-tail items and to ensure that
their exposure remains statistically indistinguishable from a given minimum.
Beside focusing on post-processing, there are some in-processing attempts in
adapting existing recommendation algorithms in a way that the generated
recommendations are less biased toward popular items. For example, [5]
proposes to use a probabilistic neighborhood selection for KNN methods, or
[23] suggests a blind-spot-aware matrix factorization approach that debiases
interactions between the recommender system and the user. We believe that the
findings of our paper can inform future research on choosing the right
mitigation technique for a given setting.
## 3 Method
In this section, we describe (i) our definition of popularity, (ii) our four
multimedia datasets, and (iii) our four recommendation algorithms based on
collaborative filtering as well as our evaluation protocol.
### 3.1 Defining Popularity
Here, we describe how we define popularity (i) on the item level, and (ii) on
the user level. We use the item popularity definition of [3], where the item
popularity score $Pop_{i}$ of an item $i$ is given by the relative number of
users who have rated $i$, i.e., $Pop_{i}=\frac{|U_{i}|}{|U|}$. Based on this,
we can also define $Pop_{i,u}$ as the average item popularity in the user
profile $I_{u}$, i.e., $Pop_{i,u}=\frac{1}{|I_{u}|}\sum_{i\in
I_{u}}{Pop_{i}}$. Additionally, we can also define an item $i$ as popular if
it falls within the top-$20\%$ of item popularity scores. Thus, we define
$I_{u,Pop}$ as the set of popular items in the user profile.
On the user level, we also follow the work of [3] and define a user $u$’s
inclination to popularity $Pop_{u}$ as the ratio of popular items in the user
profile, i.e., $Pop_{u}=\frac{|I_{u,Pop}|}{|I_{u}|}$. As an example,
$Pop_{u}=0.8$ if 80% of the items in the user’s item history are popular ones.
We use this definition to create the LowPop, MedPop and HighPop user groups in
case of MovieLens, BookCrossing and MyAnimeList. In case of Last.fm, we use a
definition for $Pop_{u}$ especially proposed for the music domain, which is
termed the mainstreaminess score [9]. Here, we use the $M^{global}_{R,APC}$
definition, which is already provided in the
dataset111https://zenodo.org/record/3475975 published in our previous work
[16]. Formally, $M^{global}_{R,APC}(u)=\tau(ranks(APC),ranks(APC(u)))$, where
$APC$ and $APC(u)$ are the artist play counts averaged over all users and for
a given user $u$, respectively. $\tau$ indicates the rank-order correlation
according to Kendall’s $\tau$. Thus, $u$’s mainstreaminess score is defined as
the overlap between a user’s item history and the aggregated item history of
all Last.fm users in the dataset. Thus, the higher the mainstreaminess score,
the higher a user’s inclination to popular music. Please note that we cannot
calculate the mainstreaminess score for the other datasets, since we do not
have multiple interactions per item (i.e., play counts) in these cases (only
one rating per user-item pair).
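To make these definitions concrete, the following is a minimal pandas sketch (column names and the handling of ties at the top-20% cutoff are our own assumptions); for Last.fm, the mainstreaminess score would instead rank artist play counts and compare them with Kendall's $\tau$ (e.g., via `scipy.stats.kendalltau`).

```python
import pandas as pd

def popularity_scores(ratings: pd.DataFrame):
    """Sketch of the popularity definitions: `ratings` has columns
    ['user', 'item']. Returns per-item Pop_i and per-user Pop_u."""
    n_users = ratings["user"].nunique()
    # Pop_i: fraction of users who rated item i.
    pop_i = ratings.groupby("item")["user"].nunique() / n_users
    # Popular items: items within the top-20% of popularity scores.
    threshold = pop_i.quantile(0.8)
    popular_items = set(pop_i[pop_i >= threshold].index)
    # Pop_u: ratio of popular items in the user profile.
    pop_u = ratings.groupby("user")["item"].apply(
        lambda items: sum(i in popular_items for i in items) / len(items)
    )
    return pop_i, pop_u
```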
Table 1: Statistics of our four datasets, where $|U|$ is the number of users, $|I|$ is the number of media items, $|R|$ is the number of ratings, sparsity is defined as the ratio of observed ratings $|R|$ to possible ratings $|U|\times|I|$, and $R$-range is the rating range.

Dataset | $|U|$ | $|I|$ | $|R|$ | $|R|/|U|$ | $|R|/|I|$ | Sparsity | $R$-range
---|---|---|---|---|---|---|---
Last.fm | 3,000 | 352,805 | 1,755,361 | 585 | 5 | 0.998 | [1-1,000]
MovieLens | 3,000 | 3,667 | 675,610 | 225 | 184 | 0.938 | [1-5]
BookCrossing | 3,000 | 223,607 | 577,414 | 192 | 3 | 0.999 | [1-10]
MyAnimeList | 3,000 | 9,450 | 649,814 | 216 | 69 | 0.977 | [1-10]
To get a better sense of the relationship between average item popularity
scores in the user profiles (i.e., $Pop_{i,u}$) and the user profile size
(i.e., $|I_{u}|$), we plot these correlations for our four datasets and per
user group in Figure 1. Across all datasets, we see a negative correlation
between average item popularity and user profile size, which means that users
with little interest in popular items tend to have large user profiles. This
suggests that these users are important consumers and data sources in MMRS,
and thus, should also be treated in a fair way (i.e., should receive similar
accuracy scores as users with medium or high interest in popular items).
(a) Last.fm, (b) MovieLens, (c) BookCrossing, (d) MyAnimeList
Figure 1: Relationship between average item popularity scores in the user
profiles (i.e., $Pop_{i,u}$) and user profile size (i.e., $|I_{u}|$). We see
that users with little interest in popular items tend to have large user
profiles.
### 3.2 Multimedia Datasets
For our study, we use four datasets containing rating data of users for media
items. The statistics of our datasets can be found in Table 1, and we provide
the datasets via Zenodo222https://zenodo.org/record/6123879. The users in each
of our four datasets are split into three equally-sized user groups: (i)
LowPop, i.e., the 1,000 users with the least inclination to popular items,
(ii) MedPop, i.e., 1,000 users with medium inclination to popular media items,
and (iii) HighPop, i.e., the 1,000 users with the highest inclination to
popular media items. This sums up to $|U|=$3,000 users per dataset. Next, we
describe our four datasets and how we split the user groups based on the
popularity definitions given before:
Last.fm. For the music streaming platform Last.fm, we use the dataset
published in our previous work [16], which is based on the LFM-1b
dataset333http://www.cp.jku.at/datasets/LFM-1b/. Here, a user is assigned to
one of the three groups LowPop, MedPop and HighPop based on the user’s
mainstreaminess score [9], which we defined earlier (i.e.,
$M^{global}_{R,APC}$). Additionally, in this Last.fm dataset, the listening
counts of users for music artists are scaled to a rating range of [1-1,000].
When looking at Table 1, Last.fm has the largest number of items $|I|=$352,805
and the largest number of ratings $|R|=$1,755,361 across our four datasets.
MovieLens. In case of the movie rating portal MovieLens, we use the well-known
MovieLens-1M dataset444https://grouplens.org/datasets/movielens/1m/. We
extract all users with a minimum of 50 ratings and a maximum of 2,000 ratings.
We assign these users to one of the three user groups LowPop, MedPop and
HighPop based on the ratio of popular items in the user profiles [3] as
described earlier (i.e., $Pop_{u}$). Table 1 shows that MovieLens is the least
sparse (i.e., most dense) dataset in our study and also has the highest number
of ratings per items ($|R|/|I|$).
BookCrossing. The dataset of the (digital) book sharing platform BookCrossing
was provided by Uni Freiburg555http://www2.informatik.uni-
freiburg.de/~cziegler/BX/. We use the same popularity definitions, group
assignment method as well as rating thresholds as in case of MovieLens.
However, in contrast to MovieLens, BookCrossing contains not only explicit
feedback in the form of ratings but also implicit feedback when a user
bookmarks a book. In this case, we set the implicit feedback to a rating of 5,
which is the middle value in BookCrossing’s rating range of [1-10]. Across all
datasets, BookCrossing is the dataset with the highest sparsity.
MyAnimeList. We apply the same processing methods as used in case of
BookCrossing to the MyAnimeList dataset, which is provided via
Kaggle666https://www.kaggle.com/CooperUnion/anime-recommendations-database.
Similar to BookCrossing, MyAnimeList also contains implicit feedback when a
user bookmarks an Anime, and again we convert this feedback to an explicit
rating of 5, which is the middle value in the rating range.
### 3.3 Recommendation Algorithms and Evaluation Protocol
We use the same set of personalized recommendation algorithms as used in our
previous work [16] but since we focus on CF-based methods, we replace the
UserItemAvg algorithm with a scalable co-clustering-based approach [12]
provided by the Python-based Surprise framework777http://surpriselib.com/.
Thus, we evaluate two KNN-based algorithms without and with incorporating the
average rating of the target user and item (UserKNN and UserKNNAvg), one non-
negative matrix factorization variant [19] (NMF) as well as the aforementioned
CoClustering algorithm. In most cases, we stick to the default parameter
settings as suggested by the Surprise framework and provide the detailed
settings in our GitHub repository888https://github.com/domkowald/FairRecSys.
We also follow the same evaluation protocol as used in our previous work [16]
and formulate the recommendation task as a rating prediction problem, which we
measure using the mean absolute error (MAE). However, instead of using only
one 80/20 train/test split, we use a more sophisticated 5-fold cross-validation
evaluation protocol. To test for statistical significance, we perform pairwise
t-tests between LowPop and MedPop as well as between LowPop and HighPop since
we are interested in whether LowPop is treated in an unfair way by the MMRS. We report
statistical significance for LowPop only in cases in which there is a
significant difference between LowPop and MedPop as well as between LowPop and
HighPop for all five folds.
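The following sketch outlines this protocol with the Surprise framework, reusing the ratings dataframe and group assignment from the previous sketch; the rating scale shown (Last.fm-style), the use of per-prediction absolute errors in the t-tests, and the default hyperparameters are illustrative assumptions rather than the exact setup in our repository.

```python
# Sketch of the 5-fold evaluation protocol with per-group MAE and t-tests.
from surprise import Dataset, Reader, KNNBasic, KNNWithMeans, NMF, CoClustering
from surprise.model_selection import KFold
from scipy import stats
import numpy as np

data = Dataset.load_from_df(ratings[["user", "item", "rating"]],
                            Reader(rating_scale=(1, 1000)))
algos = {"UserKNN": KNNBasic(), "UserKNNAvg": KNNWithMeans(),
         "NMF": NMF(), "CoClustering": CoClustering()}

for trainset, testset in KFold(n_splits=5).split(data):
    for name, algo in algos.items():
        algo.fit(trainset)
        preds = algo.test(testset)
        # per-group absolute errors in this fold (group lookup from the sketch above)
        err = {g: [abs(p.r_ui - p.est) for p in preds if groups[p.uid] == g]
               for g in ("LowPop", "MedPop", "HighPop")}
        mae = {g: np.mean(e) for g, e in err.items()}
        # significance of LowPop vs. MedPop and LowPop vs. HighPop in this fold
        p_med = stats.ttest_ind(err["LowPop"], err["MedPop"]).pvalue
        p_high = stats.ttest_ind(err["LowPop"], err["HighPop"]).pvalue
```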
## 4 Results
We structure our results based on our two research questions. Thus, we first
investigate popularity bias on the item level by investigating the
relationship between item popularity and recommendation frequency (RQ1). Next,
we investigate popularity bias on the user level by comparing the
recommendation performance for our three user groups (RQ2).
Figure 2: RQ1: Relationship between item popularity and recommendation
frequency of four CF-based algorithms for Last.fm, MovieLens, BookCrossing and
MyAnimeList. In all 16 cases, we see that popular media items have a higher
probability of being recommended than unpopular ones.
### 4.1 RQ1: Relationship Between Item Popularity and Recommendation
Frequency
Figure 2 shows the relationship between item popularity and recommendation
frequency for the four CF-based algorithms UserKNN, UserKNNAvg, NMF and
CoClustering on all five folds of our four multimedia datasets Last.fm,
MovieLens, BookCrossing and MyAnimeList. The solid lines indicate the linear
regression between the two variables for the three user groups.
In all 16 plots, and all three user groups, we observe a positive relationship
between an item’s popularity and how often this item gets recommended (RQ1).
However, for NMF applied to Last.fm, the maximum recommendation frequency is
much lower than for the other algorithms. Thus, only in the case of NMF
applied to Last.fm, we see a weak relationship between popularity and
recommendation frequency, while in all other cases, we see a strong
relationship between these variables. This is in line with our previous
related work investigating popularity bias in Last.fm [16]. When comparing the
three user groups, we see the weakest relationship between the variables for
LowPop and the strongest relationship for HighPop. We will refer to this
finding when investigating RQ2.
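A minimal sketch of this analysis is given below; it reuses the predictions preds of one algorithm and fold from the protocol sketch above and assumes, for illustration only, that an item counts as recommended when it appears in a user's top-10 list by predicted rating (the exact criterion follows our previous work [16]).

```python
# Sketch of the RQ1 analysis: item popularity vs. recommendation frequency.
from collections import Counter, defaultdict
from scipy import stats

top_n = 10
rec_count = Counter()
by_user = defaultdict(list)
for p in preds:                                   # predictions of one algorithm/fold
    by_user[p.uid].append(p)
for uid, user_preds in by_user.items():
    for p in sorted(user_preds, key=lambda p: p.est, reverse=True)[:top_n]:
        rec_count[p.iid] += 1                     # item is "recommended" to this user

items = list(rec_count)
popularity = [item_counts[i] for i in items]      # number of ratings per item
frequency = [rec_count[i] for i in items]
slope, intercept, r, pval, stderr = stats.linregress(popularity, frequency)
```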
### 4.2 RQ2: Relationship Between Users’ Inclination to Popular Items and
Recommendation Accuracy
Table 2: RQ2: Mean absolute error (MAE) results (the lower, the better) of our study. The lowest accuracy is always given for the LowPop user group (statistically significant according to a t-test with $p<0.001$ as indicated by ∗∗∗ and $p<0.05$ as indicated by ∗∗). Across the algorithms, the best results are indicated by bold numbers and across the user groups, the best results are indicated by italic numbers.
Dataset | User group | UserKNN | UserKNNAvg | NMF | CoClustering
---|---|---|---|---|---
Last.fm | LowPop | 49.489∗∗∗ | 46.483∗∗∗ | 39.641∗∗ | 47.304∗∗∗
MedPop | 42.899 | 37.940 | 32.405 | 37.918
HighPop | 45.805 | 43.070 | 38.580 | 42.982
MovieLens | LowPop | 0.801∗∗∗ | 0.763∗∗∗ | 0.753∗∗∗ | 0.738∗∗∗
MedPop | 0.748 | 0.727 | 0.722 | 0.705
HighPop | 0.716 | 0.697 | 0.701 | 0.683
BookCrossing | LowPop | 1.403∗∗∗ | 1.372∗∗∗ | 1.424∗∗∗ | 1.392∗∗∗
MedPop | 1.154 | 1.122 | 1.214 | 1.134
HighPop | 1.206 | 1.155 | 1.274 | 1.162
MyAnimeList | LowPop | 1.373∗∗∗ | 1.001∗∗∗ | 1.010∗∗∗ | 1.001∗∗∗
MedPop | 1.341 | 0.952 | 0.968 | 0.956
HighPop | 1.311 | 0.948 | 0.951 | 0.975
Table 2 shows the MAE estimates for the aforementioned CF-based recommendation
algorithms (UserKNN, UserKNNAvg, NMF, and CoClustering) on the four multimedia
datasets (Last.fm, MovieLens, BookCrossing, and MyAnimeList) split in three
user groups that differ in their inclination to popularity (LowPop, MedPop,
and HighPop). Additionally, we indicate statistically significant differences
between both LowPop and MedPop, and LowPop and HighPop according to a t-test
with $p<0.001$ using ∗∗∗ and with $p<0.05$ using ∗∗ in the LowPop lines.
Across all datasets, we observe the highest MAE estimates, and thus lowest
recommendation accuracy, for the LowPop user groups. The best results,
indicated by italic numbers, are reached for the MedPop group in case of
Last.fm and BookCrossing, and for the HighPop group in case of MovieLens and
MyAnimeList. For Last.fm this is in line with our previous work [16]. Across
the algorithms, we see varying results: for Last.fm, and again in line with
our previous work [16], the best results are reached for NMF. For MovieLens,
we get the best results for the CoClustering approach, and for BookCrossing
and MyAnimeList the highest accuracy is reached for the UserKNN variant
UserKNNAvg. We plan to investigate these differences across the user groups
and the algorithms in our future research, as outlined in the next section.
Taken together, users with little inclination to popular multimedia items
receive statistically significantly worse recommendations by CF-based
algorithms than users with medium and high inclination to popularity (RQ2).
When referring back to our results of RQ1 in Figure 2, this is interesting
since LowPop is the group with the weakest relationship between item
popularity and recommendation frequency. Nevertheless, this suggests that the
recommendations are still too popular for this user group, and that an adequate
mitigation strategy is needed.
## 5 Conclusion
In this paper, we have studied popularity bias in CF-based MMRS. To this end, we
investigated four recommendation algorithms (UserKNN, UserKNNAvg, NMF, and
CoClustering) for three user groups (LowPop, MedPop, and HighPop) on four
multimedia datasets (Last.fm, MovieLens, BookCrossing, and MyAnimeList).
Specifically, we investigated popularity bias from the item (RQ1) and user
(RQ2) perspective. Additionally, we have shown that users with little interest
in popular items tend to have large profile sizes, and therefore are
important data sources for MMRS.
With respect to RQ1, we find that the popularity of a multimedia item strongly
correlates with the probability that this item is recommended by CF-based
approaches. With respect to RQ2, we find that users with little interest in
popular multimedia items (i.e., LowPop) receive significantly worse
recommendations than users with medium (i.e., MedPop) or high (i.e., HighPop)
interest in popular items. This is especially problematic since users with
little interest in popular items tend to have large profile sizes, and thus,
should be treated in a fair way by MMRS.
Future work. Our results demonstrate that future work should further focus on
studying this underserved user group in order to mitigate popularity bias in
CF-based recommendation algorithms. We believe that our findings are a first
step to inform the research on popularity bias mitigation techniques (see
Section 2) to choose the right mitigation strategy for a given setting.
Additionally, as mentioned earlier, we plan to further study the differences
we found with respect to algorithmic performance for the different user groups
and multimedia domains. Here, we also want to study popularity bias in top-$n$
settings using ranking-aware metrics such as nDCG (e.g., as used in [18]).
Finally, we plan to work on further bias mitigation strategies based on
cognitive-inspired user modeling and recommendation techniques (e.g., [21, 17,
14]).
Acknowledgements. This research was funded by the H2020 project TRUSTS (GA:
871481) and the “DDAI” COMET Module within the COMET – Competence Centers for
Excellent Technologies Programme, funded by the Austrian Federal Ministry for
Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry
for Digital and Economic Affairs (bmdw), the Austrian Research Promotion
Agency (FFG), the province of Styria (SFG) and partners from industry and
academia. The COMET Programme is managed by FFG.
## References
* [1] Abdollahpouri, H., Burke, R., Mobasher, B.: Controlling popularity bias in learning-to-rank recommendation. In: Proceedings of the eleventh ACM conference on recommender systems. pp. 42–46 (2017)
* [2] Abdollahpouri, H., Burke, R., Mobasher, B.: Managing popularity bias in recommender systems with personalized re-ranking. In: The thirty-second international flairs conference (2019)
* [3] Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B.: The unfairness of popularity bias in recommendation. In: RecSys Workshop on Recommendation in Multistakeholder Environments (RMSE) (2019)
* [4] Abdollahpouri, H., Mansoury, M., Burke, R., Mobasher, B., Malthouse, E.: User-centered evaluation of popularity bias in recommender systems. In: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization. pp. 119–129 (2021)
* [5] Adamopoulos, P., Tuzhilin, A.: On over-specialization and concentration bias of recommendations: Probabilistic neighborhood selection in collaborative filtering systems. In: Proceedings of the 8th ACM Conference on Recommender systems. pp. 153–160 (2014)
* [6] Adomavicius, G., Kwon, Y.: Improving aggregate recommendation diversity using ranking-based techniques. IEEE Transactions on Knowledge and Data Engineering 24(5), 896–911 (2011)
* [7] Antikacioglu, A., Ravi, R.: Post processing recommender systems for diversity. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 707–716 (2017)
* [8] Baeza-Yates, R.: Bias in search and recommender systems. In: Fourteenth ACM Conference on Recommender Systems. pp. 2–2 (2020)
* [9] Bauer, C., Schedl, M.: Global and country-specific mainstreaminess measures: Definitions, analysis, and usage for improving personalized music recommendation systems. PloS one 14(6), e0217389 (2019)
* [10] Brynjolfsson, E., Hu, Y.J., Smith, M.D.: From niches to riches: Anatomy of the long tail. Sloan management review 47(4), 67–71 (2006)
* [11] Deldjoo, Y., Schedl, M., Cremonesi, P., Pasi, G.: Recommender systems leveraging multimedia content. ACM Computing Surveys (CSUR) 53(5), 1–38 (2020)
* [12] George, T., Merugu, S.: A scalable collaborative filtering framework based on co-clustering. In: Fifth IEEE International Conference on Data Mining (ICDM’05). pp. 4–pp. IEEE (2005)
* [13] Jannach, D., Lerche, L., Kamehkhosh, I., Jugovac, M.: What recommenders recommend: an analysis of recommendation biases and possible countermeasures. User Modeling and User-Adapted Interaction 25(5), 427–491 (2015)
* [14] Kowald, D., Lex, E.: The influence of frequency, recency and semantic context on the reuse of tags in social tagging systems. In: Proceedings of Hypertext’2016. pp. 237–242. ACM, New York, NY, USA (2016)
* [15] Kowald, D., Muellner, P., Zangerle, E., Bauer, C., Schedl, M., Lex, E.: Support the underground: characteristics of beyond-mainstream music listeners. EPJ Data Science 10(1), 1–26 (2021)
* [16] Kowald, D., Schedl, M., Lex, E.: The unfairness of popularity bias in music recommendation: A reproducibility study. In: European Conference on Information Retrieval. pp. 35–42. Springer (2020)
* [17] Lacic, E., Kowald, D., Seitlinger, P.C., Trattner, C., Parra, D.: Recommending items in social tagging systems using tag and time information. In: In Proceedings of the 1st Social Personalization Workshop co-located with the 25th ACM Conference on Hypertext and Social Media. pp. 4–9. ACM (2014)
* [18] Lacic, E., Kowald, D., Traub, M., Luzhnica, G., Simon, J.P., Lex, E.: Tackling cold-start users in recommender systems with indoor positioning systems. In: Poster Proceedings of the 9th ACM Conference on Recommender Systems. Association for Computing Machinery (2015)
* [19] Luo, X., Zhou, M., Xia, Y., Zhu, Q.: An efficient non-negative matrix-factorization-based approach to collaborative filtering for recommender systems. IEEE Transactions on Industrial Informatics 10(2), 1273–1284 (2014)
* [20] Park, Y.J., Tuzhilin, A.: The long tail of recommender systems and how to leverage it. In: Proceedings of the 2008 ACM conference on Recommender systems. pp. 11–18 (2008)
* [21] Seitlinger, P., Kowald, D., Kopeinik, S., Hasani-Mavriqi, I., Lex, E., Ley, T.: Attention please! a hybrid resource recommender mimicking attention-interpretation dynamics. In: Proceedings of WWW’2015 companion. pp. 339–345. ACM (2015)
* [22] Shi, Y., Larson, M., Hanjalic, A.: Collaborative Filtering Beyond the User-Item Matrix: A Survey of the State of the Art and Future Challenges. ACM Computing Surveys 47(1), 3:1–3:45 (May 2014)
* [23] Sun, W., Khenissi, S., Nasraoui, O., Shafto, P.: Debiasing the human-recommender system feedback loop in collaborative filtering. In: Companion Proceedings of The 2019 World Wide Web Conference. pp. 645–651 (2019)
* [24] Zehlike, M., Bonchi, F., Castillo, C., Hajian, S., Megahed, M., Baeza-Yates, R.: Fa* ir: A fair top-k ranking algorithm. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. pp. 1569–1578 (2017)
# Trans2Unet: Neural fusion for Nuclei Semantic Segmentation
Dinh-Phu Tran Department of Automation Engineering
School of Electrical and Electronic Engineering
Hanoi University of Science and Technology
Hanoi, Vietnam
<EMAIL_ADDRESS>Quoc-Anh Nguyen Department of Automation
Engineering
School of Electrical and Electronic Engineering
Hanoi University of Science and Technology
Hanoi, Vietnam
<EMAIL_ADDRESS>Van-Truong Pham Department of Automation
Engineering
School of Electrical and Electronic Engineering
Hanoi University of Science and Technology
Hanoi, Vietnam
<EMAIL_ADDRESS>Thi-Thao Tran∗ Department of Automation
Engineering
School of Electrical and Electronic Engineering
Hanoi University of Science and Technology
Hanoi, Vietnam
<EMAIL_ADDRESS>
###### Abstract
Nuclei segmentation, despite its fundamental role in histopathological image
analysis, is still a challenging task. The main difficulty is the
existence of overlapping areas, which makes separating independent nuclei more
complicated. In this paper, we propose a new two-branch architecture by
combining the Unet and TransUnet networks for nuclei segmentation task. In the
proposed architecture, namely Trans2Unet, the input image is first sent into
the Unet branch, whose last convolution layer is removed. This branch allows
the network to combine features from different spatial regions of the input image
and to localize the regions of interest more precisely. The input image is also
fed into the second branch. In the second branch, which is called TransUnet
branch, the input image will be divided into patches of images. With Vision
transformer (ViT) in architecture, TransUnet can serve as a powerful encoder
for medical image segmentation tasks and enhance image details by recovering
localized spatial information. To boost the efficiency and performance of
Trans2Unet, we propose to augment TransUnet with a computationally efficient
variation called ”Waterfall” Atrous Spatial Pooling with Skip Connection
(WASP-KC) module, which is inspired by the ”Waterfall” Atrous Spatial Pooling
(WASP) module. Experiment results on the 2018 Data Science Bowl benchmark show
the effectiveness and performance of the proposed architecture compared
with previous segmentation models.
###### Index Terms:
Unet, TransUnet, Vision Transformer, WASP, Medical Image Segmentation, Nuclei
segmentation
## I Introduction
Cell nuclei segmentation has been a crucial problem that attracts interest
because of its practical applications in the diagnosis of cancer. In general,
this task is similar to natural image segmentation, which involves the process
of extracting desired objects from a nuclei image (2D or 3D), and can be
done manually, semi-automatically, or fully automatically [5][6][7]. Recently,
many deep learning models with high accuracy have been used for nuclei
segmentation [8]. In 2015, Unet, featuring an encoder-decoder architecture
combined with skip connections to retain important features, showed
outstanding results for segmentation tasks, especially on medical images.
Although they are powerful architectures, Unet and other CNN
networks in general still have limitations in capturing
long-range dependencies, which result from the intrinsic locality of
convolution operations. Unlike the CNN-based networks, the models based on
Transformers have global computing features. In [2], TransUnet was proposed to
solve that problem by employing a hybrid CNN-Transformer approach to enhance
both elaborate high-resolution spatial information from feature maps of CNN
and the global context, which is encoded by Transformers. Although Transformers
have gained popularity in Computer Vision due to their global features, the lack of
low-level details makes local feature extraction insufficient [10].
To take full advantage of Unet and TransUnet, in this study, we propose to
combine these two architectures to obtain a more powerful architecture. The
proposed architecture, named Trans2Unet, includes two main branches. One
branch sends the input image through the Unet network, the other branch sends
the input image through the TransUnet network. Finally, the outputs of these
branches are concatenated to recreate feature maps of the input image, thereby
improving the predictive ability of the model. Furthermore, instead of using
the original TransUnet architecture, we added the WASP-KC module to leverage
the progressive extraction of larger fields-of-view (FOV) from cascade
methods.
Our main contributions can be briefed as follows:
* •
Introduce a new, more robust, and efficient architecture using Unet and
TransUnet networks.
* •
Add a WASP-KC block for the TransUnet model after the CNN block.
Through experiments on the 2018 Data Science Bowl challenge dataset,
we show that the proposed network achieves accuracy comparable
to other SOTA architectures on the same dataset. Specifically, we
obtained DSC and IoU values of 0.9225 and 0.8613, respectively.
The following is the organization of this paper: Firstly, the related work is
described in section II. Section III introduces our proposed model.
Experimental results on the 2018 Data Science Bowl challenge dataset are
presented in Section IV. Finally, the summary, limitations, and further work
are described in Section V.
## II Related Work
### II-A Unet
Unet was first proposed in 2015, known as an effective Convolutional Network
for Biomedical Image Segmentation. Unet architecture contains two paths, an
encoder and a decoder. The encoder path is the downsampling part; each block
consists of 3x3 convolutions, each followed by a rectified linear unit (ReLU), and a 2x2 max pooling operation with stride
2 [9]. The decoder path is the upsampling part for reconstructing the high-
resolution feature map of the image. In particular, Unet uses skip connections
to preserve spatial information because during downsampling at the encoder
path, spatial information of the input image is lost, causing architecture
accuracy degradation.
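As a minimal illustration (not the original Unet code), one encoder block can be sketched in PyTorch as follows; the padded convolutions and the channel sizes are simplifying assumptions.

```python
# Minimal sketch of one Unet encoder (downsampling) block: two 3x3 convolutions,
# each followed by ReLU, then 2x2 max pooling with stride 2.
import torch.nn as nn

class UnetEncoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.double_conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        skip = self.double_conv(x)   # kept as the skip connection to the decoder
        return self.pool(skip), skip
```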
### II-B Vision Transformer (ViT)
For natural language processing (NLP) tasks, the
Transformer architecture has become a key standard. However, when it comes to
Computer Vision tasks, this model still has many limitations [3]. Vision
transformer (ViT) is a pioneering model that adapts the transformer model to
Computer Vision (CV) tasks by embedding input images into a series of visual
tokens and modeling the global dependencies among this sequence with a group
of transformer blocks. ViT simply processes the input image as a 1D sequence
which leads to a lack of inductive bias in modeling local visual structures
[11]. Recently, ViT has achieved highly competitive accuracy benchmarks in a
variety of applications: image classification, object detection, and semantic
image segmentation. Taking inspiration from processing input images as a
sequence, the Vision transformer is a combination of the Transformer
architecture part and MLP (Multilayer Perceptron) blocks [1]. The Transformer
encoder of ViT includes Multi-Head Self Attention Layer (MHSA), Multi-Layer
Perceptrons (MLP) Layer, and Layer Norm (LN) [12]. MHSA is the key component
of the Transformer block. It is obtained by repeating single-head self-
attention (SHSA) n times, where n is the number of heads. MHSA is
intended to capture long-range structural information from the images [13].
### II-C TransUnet
TransUnet can also be considered an upgraded version of Unet. TransUnet is the
first architecture to use Transformers for medical image segmentation tasks,
and it has opened up new research directions with the successful application
of Transformers to image tasks. The big difference between TransUnet and Unet
lies in the Encoder Path. There is a fairly detailed description of the
TransUnet Encoder path architecture in Fig. 2. It includes CNN Block (in the
study [2] the author used the backbone as ResNet50) and Vision TransFormer
(ViT). The encoder, which applies the Transformer in ViT, comprises successive
layers of multiheaded self-attention (MHSA) and MLP blocks. Instead of using
BatchNorm (BN), the Transformer block applies LayerNorm (LN) before each sub-layer,
and residual connections are added after each sub-layer [16][17].
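For illustration, such a pre-norm Transformer encoder block (MHSA, MLP, LayerNorm and residual connections) can be sketched in PyTorch as follows; the ViT-Base-like dimensions are assumptions made for concreteness.

```python
# Minimal sketch of one pre-norm Transformer encoder block as used in ViT:
# LayerNorm before MHSA and MLP, residual connection after each sub-layer.
import torch.nn as nn

class ViTBlock(nn.Module):
    def __init__(self, dim=768, heads=12, mlp_dim=3072):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(),
                                 nn.Linear(mlp_dim, dim))

    def forward(self, x):                      # x: (batch, tokens, dim)
        h = self.ln1(x)
        x = x + self.mhsa(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.ln2(x))
```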
### II-D DeepLabv3+
In research [14], the Atrous Spatial Pyramid Pooling module (ASPP) was
proposed to be integrated with the encoder-decoder structure and this research
showed better improvements to the boundaries of segmented objects in the input
images. The ASPP structure assembles dilated convolutions in four
parallel branches with distinct dilation rates. Ultimately, the resulting
feature maps are combined and recovered to the original resolution by fast
bilinear interpolation with an upsampling factor of eight [3]. DeepLabv3+
significantly improved over the previous version in terms of accuracy.
### II-E Waterfall Atrous Spatial Pooling (WASP)
The WASP is a highly efficient architecture for semantic segmentation. It
leverages progressive filtering in a cascading architecture while preserving
multiscale fields-of-view (FOV) in comparison with spatial pyramid
configurations. According to the study in [3], WASP, when combined with a
ResNet backbone, provides a robust architecture and promising
results for segmentation problems. Furthermore, this variation, an Atrous Spatial
Pooling (ASP) class variant of the DeepLabv3+ architecture, is computationally
efficient. [15] demonstrated the great improvement of the WASP
module in terms of training time and reduced number of
parameters compared to the original ASPP module.
## III Methodology
### III-A Waterfall Atrous Spatial Pooling with Skip Connection (WASP-KC)
Module
Figure 1: Waterfall Atrous Spatial Pooling (WASP-KC) module
The WASP-KC module, shown in Fig. 1, is inspired by the WASP module. The WASP-
KC involves four large-FOV units that merge together and create a
waterfall shape to produce the output. Besides multiscale approaches [26][23], this
module is also inspired by the cascade configuration [3][14], as well as by
the parallel structures of the ASPP [24] and Res2Net modules [25]. The WASP module
helps to reduce the number of parameters and the memory required, which leads to less
expensive computation, the main limitation of atrous convolutions [3][15]. According to
the experiments performed by the authors in [3], the WASP module successfully
reduced the number of parameters by 20.69% and also increased the model's performance by
2% (mIoU) with the WASPnet network built on this module, compared to the Res2Net-
Seg or ASPP modules. In this research, we have replaced the WASP block with
the WASP-KC block by adding dense connections, which are inspired by the DenseNet
model. In this technique, every single layer takes all previous layers' outputs
as input, and its feature map is passed on to deeper layers, which means
that each layer receives the whole information from the previous ones. This
ensures feature reusability, since feature maps of prior layers are held
and added together, which helps preserve the input image information without any
loss. This is a significant modification that enables WASP to function more
robustly. The WASP-KC block is added right after the CNN module (a ResNet-50
backbone is used) to improve the performance and efficiency of the proposed
model.
Figure 2: Architecture of TransUnet after adding WASP-KC module
Figure 3: Architecture of Trans2Unet
### III-B Model architecture
Aiming at developing a new deep learning architecture for nuclei cell image
segmentation, this study proposes Trans2Unet, which combines Unet and
TransUnet branches. First, to increase the efficiency of the TransUnet branch,
we used an additional WASP-KC block as shown in Fig. 2. The WASP-KC block
consists of four convolution units. Each unit consists of three blocks: the
first block uses a 3x3 convolution, followed by two blocks applying 1x1
convolutions. The 3x3 convolution blocks share information horizontally, so that
this information is used in all units of the module. In addition, skip
connections are used in each unit to reuse the features of the previous layers.
This adjustment has improved performance significantly compared to the WASP
module. The output of the module is the sum of the outputs of these four units and
the output of the global average pooling block, and it also serves as the input of the ViT network.
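A minimal PyTorch sketch of the WASP-KC module described above is given below; the channel sizes and the dilation rates (6, 12, 18, 24, as in the original WASP) are assumptions made for illustration and do not necessarily match our implementation.

```python
# Sketch of WASP-KC: four waterfall (cascade) units, each with one 3x3 atrous
# convolution followed by two 1x1 convolutions with skip connections; the module
# output is the sum of the four unit outputs and a global-average-pooling branch.
import torch.nn as nn

class WaspKCUnit(nn.Module):
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        self.conv3 = nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation)
        self.conv1a = nn.Conv2d(out_ch, out_ch, 1)
        self.conv1b = nn.Conv2d(out_ch, out_ch, 1)

    def forward(self, x):
        a = self.conv3(x)
        b = self.conv1a(a) + a      # skip connections reuse earlier features
        c = self.conv1b(b) + b
        return a, c                 # a cascades to the next unit, c is the unit output

class WaspKC(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18, 24)):
        super().__init__()
        self.units = nn.ModuleList(
            [WaspKCUnit(in_ch, out_ch, rates[0])] +
            [WaspKCUnit(out_ch, out_ch, r) for r in rates[1:]])
        self.gap = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1))

    def forward(self, x):
        out, h = 0, x
        for unit in self.units:
            h, unit_out = unit(h)   # waterfall: the 3x3 output feeds the next unit
            out = out + unit_out
        return out + self.gap(x)    # sum of the four units and the pooled branch
```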
Fig. 3 shows the general structure of the proposed Trans2Unet that includes
the Unet branch and the proposed TransUnet+WASP-KC branch. After the input
image has been forwarded through these two branches, the outputs of the two
branches are concatenated. Finally, after aggregating the
outputs of the two branches, the result is forwarded through a convolution
block before producing the predicted output. This is a fairly new and simple
combination, but it improves performance considerably compared to using Unet or
TransUnet alone.
### III-C Loss function
The loss function, also known as the cost function, is an equation
representing the relationship between q (which is the model’s predicted
result) and p (which is the actual value). Our task is to minimize the value
of this equation. The loss function is used to optimize models and this is
also one of the criteria used to evaluate the quality of the model. Image
segmentation tasks use many loss functions, such as Binary Cross-
Entropy (BCE) and Dice loss.
The binary cross-entropy (BCE) loss function calculates the difference between
two probability distributions, they are the actual probability distribution p
and the predicted probability distribution q. It is commonly used for object
classification tasks, and in image segmentation tasks, since segmentation is
classification at the pixel level. It is best suited for balanced datasets. BCE loss is represented
by the following equation:
$L(p,q)=-p\log(q)-(1-p)\log(1-q)$ (1)
Where p represents the ground truth label, q represents the predicted value of
the Trans2Unet model. The value of (1) reflects the difference between the
actual value and the value predicted from the model.
Dice Loss is a loss function that is popularly used in tasks relating to
image segmentation, in particular medical image segmentation. The value of this
loss function measures the difference between the ground truth and the
predicted value. Dice Loss is represented by the following equation:
$DL(p,q)=1-\frac{2pq+1}{p+q+1}$ (2)
The mathematical notations $(p,q)$ have the same meaning as in the Binary Cross-
Entropy part.
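For concreteness, the two loss terms of Eqs. (1) and (2) can be sketched in PyTorch as follows; combining them as a simple sum and the smoothing constant are illustrative choices, not necessarily the exact training objective used in our experiments.

```python
# Minimal sketch of the BCE and Dice losses of eqs. (1) and (2) for binary masks.
import torch

def bce_dice_loss(q, p, eps=1.0):
    # q: predicted probabilities in (0, 1); p: ground-truth mask in {0, 1}
    q = q.clamp(1e-7, 1 - 1e-7)                      # avoid log(0)
    bce = -(p * torch.log(q) + (1 - p) * torch.log(1 - q)).mean()
    dice = 1 - (2 * (p * q).sum() + eps) / (p.sum() + q.sum() + eps)
    return bce + dice                                # simple sum of both terms
```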
### III-D Evaluation Metrics
Currently, the Dice Similarity Score (DSC) and Jaccard Index or Intersection
over Union (IoU) are the most popular indexes for evaluating models in medical
image segmentation [18][19][20]. In this research, we also use these two
parameters to make a fair comparison with other models on the 2018 Data
Science Bowl challenge dataset.
The DSC and IoU are defined by the following mathematical expressions [21]:
$DSC=\frac{2TP}{2TP+FP+FN}$ (3)
Where TP, FP, FN, and TN are the numbers of true positive, false positive, false
negative, and true negative predictions. In addition, in the study [22], there are
other evaluation metrics for image segmentation tasks such as Precision,
Accuracy, and Volumetric Similarity.
$IoU=\frac{TP}{TP+FP+FN}$ (4)
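A minimal sketch of these two metrics for binary masks is given below; thresholding the predicted probabilities at 0.5 is an assumed convention.

```python
# Sketch of the DSC and IoU metrics of eqs. (3) and (4) for binary masks.
import torch

def dsc_iou(pred, target, thresh=0.5):
    pred = (pred > thresh).float()
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return dsc, iou
```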
## IV EXPERIMENTAL RESULTS
### IV-A Dataset
To properly assess the performance evaluation of Trans2Unet model, we used the
public biomedical image dataset - the 2018 Data Science Bowl challenge dataset
and GlaS dataset. The 2018 Data Science Bowl challenge dataset contains the
original images, along with their masks (or ground-truth). There are 670
images in total; we split this dataset into the ratio of 80% - 10% - 10%
corresponding to the training set, validation set, and test set. Some state-
of-the-art models tested on the 2018 Data Science Bowl, such as SSFormer-L,
MSRF-Net, DoubleUnet, and Unet++, have achieved remarkable results. Following this
dataset with the same split ratio, through trial and error, we are confident
that 670 images are enough for the proposed model to perform robustly. The GlaS
dataset contains 165 microscopic images and the corresponding target mask
annotations. In this work, we split the GlaS dataset into 85 training images and 80
testing images.
### IV-B Implementation detail
We have implemented this entire proposed architecture with the PyTorch
framework and conducted experiments with NVIDIA K80 GPUs. The Adam
optimizer was adopted, with the initial learning rate (LR)
set to 0.0003, and we also used dropout regularization with p = 0.2. After
three epochs with no improvement, the new learning rate is calculated by
multiplying the current learning rate by a factor smaller than one, so that the
learning rate is reduced while the global minimum can still be reached.
All images in the 2018 Data Science Bowl challenge and GlaS datasets are
resized to 256 x 256 resolution. The batch size is 10 and the number of
epochs used to train our model is 300.
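The optimizer and learning-rate schedule described above correspond to the following PyTorch sketch; the reduction factor of 0.1, the monitored quantity (validation loss), the placeholder model, and the helper function are assumptions made for illustration.

```python
# Sketch of the training configuration: Adam with LR 3e-4 and a plateau-based
# schedule that shrinks the LR after three epochs without improvement.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, 3, padding=1)        # placeholder standing in for Trans2Unet
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=3)   # factor value is an assumption

for epoch in range(300):                     # 300 epochs, batch size 10 (see text)
    val_loss = train_and_validate_one_epoch(model, optimizer)  # assumed helper
    scheduler.step(val_loss)                 # reduce LR after 3 epochs w/o improvement
```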
### IV-C Evaluation
In this research, we compare our model with some of the models that achieved
remarkable results on the 2018 Data Science Bowl challenge and GlaS datasets to
objectively review the effectiveness of this approach. In Table I, the scores
reported by previous algorithms in terms of average values of the Dice
Similarity Score (DSC) and IoU indexes are compared with those of the proposed
approach. The table shows that our new approach gives good results
on the 2018 Data Science Bowl challenge dataset, with
DSC and IoU values of 0.9225 and 0.8613, respectively (when we fuse Unet
with TransUnet).
As described in Table I, the number of Trans2Unet parameters is about 110M,
which is a disadvantage that needs improvement in upcoming research, whereas
the SSFormer-L model has 66.2M parameters. The large size of our
proposed network is due to the ViT model used in the TransUnet branch. As in [1], there
are three variants of ViT: ViT-Base (86M parameters), ViT-Large (307M
parameters), and ViT-Huge (632M parameters). Considering these sizes, we
decided to use the ViT-Base model in our network.
Method | Dice Coefficient | Mean IoU | Parameters (M)
---|---|---|---
SSFormer-L [27] | 0.9230 | 0.8614 | 66.2
TransUnet | 0.9027 | 0.8413 | 105.9
MSRF-Net [28] | 0.9224 | 0.8534 | 18.38
FANet [29] | 0.9176 | 0.8569 | 5.76
DoubleUNet [30] | 0.9133 | 0.8407 | 29.29
Trans2Unet (Ours) | 0.9225 | 0.8613 | 110
TABLE I: Performance comparison of various models on the 2018 Data Science
Bowl challenge dataset.
Although our results are still modest compared to other current SOTA
architectures on this dataset, we believe that with this approach, the
architecture will be improved in the future.
To show the improvement of the Trans2Unet model integrating with the WASP-KC
module more clearly, IoU and Dice metrics of this model were compared with
those of the original TransUnet as well as the Trans2Unet model integrating
with the original WASP, and all experiments were tested on the same device.
The results reported in Table II show that the IoU and Dice metrics of our
proposed model are the best; specifically, this model achieves IoU and
Dice values of 86.13% and 92.25%, respectively.
Method | Dice Coefficient | Mean IoU
---|---|---
TransUnet | 0.9027 | 0.8413
Trans2Unet + WASP | 0.9150 | 0.8499
Trans2Unet + WASP-KC | 0.9225 | 0.8613
TABLE II: Performance comparison of Trans2Unet with the WASP-KC module and its
baseline models.
As can be seen from Table III, the proposed Trans2Unet network
also obtains strong performance on the GlaS dataset, with a Dice coefficient of
89.84% and a mean IoU of 82.54%.
Method | Dice Coefficient | Mean IoU
---|---|---
FCN[32] | 0.6661 | 0.5058
Unet[9] | 0.7778 | 0.6534
Res-Unet[33] | 0.7883 | 0.6595
Axial Attention Unet[34] | 0.7630 | 0.6303
KiU-Net[35] | 0.8325 | 0.7278
Trans2Unet (Ours) | 0.8984 | 0.8254
TABLE III: Comparison with various methods on the GlaS dataset.
### IV-D Results
To demonstrate the performance of the new architecture on the 2018 Data
Science Bowl challenge dataset, we show the learning curves in Fig. 4. As
shown in this figure, the model loss and the scores, including the Dice (DSC) and
IoU, converge after 100 epochs and stay stable. For qualitative assessment, we
also show some representative segmentation results on the test set of this
dataset in Fig. 5. As is evident from Fig. 5, the predictions by the proposed
approach are in good agreement with the ground truths.
Figure 4: Training curves on the 2018 Data Science Bowl challenge dataset
Figure 5: Some representative segmentation results of Trans2Unet on Nuclei
images from 2018 Data Science Bowl challenge dataset
## V Conclusion
In this study, we have introduced a new architecture, which is a combination
of two other deep learning networks, Unet and TransUnet, for nuclei image
segmentation. Furthermore, to leverage the progressive extraction of
larger fields-of-view (FOV) from cascade methods, we integrated WASP-KC (WASP
module with Skip Connections) module into the TransUnet architecture. Through
experiments on the 2018 Data Science Bowl challenge dataset, we show that our
proposed model has achieved quite good results expressed through DSC or IoU
scores. By combining the Unet with the TransUnet architecture, the model can
maintain local features of CNN and take advantage of global features in
Transformers for more robust segmentation. We believe that this structure can
be a good approach to improve the efficiency of models not only for nuclei
cell segmentation but also for general image segmentation tasks.
## Acknowledgment
This research is funded by the Hanoi University of Science and Technology
(HUST) under project number T2021- PC-005.
## References
* [1] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR (2021)
* [2] Chen, Jieneng and Lu, Yongyi and Yu, Qihang and Luo, Xiangde and Adeli, Ehsan and Wang, Yan and Lu, Le and Yuille, Alan L., and Zhou, Yuyin. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv preprint arXiv:2102.04306, 2021
* [3] Bruno Artacho and Andreas E. Savakis. Waterfall atrous spatial pooling architecture for efficient semantic segmentation. CoRR, abs/1912.03183, 2019.
* [4] 2018 Data Science Bowl, https://www.kaggle.com/c/data-science-bowl-2018/overview
* [5] M. E. Celebi, N. Codella, and A. Halpern, “Dermoscopy image analysis: overview and future directions,” IEEE J. Biomed. Health Inform, vol. 23, no. 2, pp. 474–478, 2019.
* [6] J. C. Caicedo et al., “Nucleus segmentation across imaging experiments: the 2018 data science bowl,” Nat. Meth., vol. 16, no. 12, pp. 1247–1253, 2019.
* [7] S. Ali et al., “Deep learning for detection and segmentation of artefact and disease instances in gastrointestinal endoscopy,” Med. Imag. Anal., p. 102002, 2021.
* [8] Caicedo, J. C. et al. Evaluation of deep learning strategies for nucleus segmentation in fluorescence images. Cytom. A 95, 952–965 (2019).
* [9] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention. pp. 234- 241. Springer (2015)
* [10] Chen, D., Yang, W., Wang, L., Tan, S., Lin, J., Bu, W. (2022). PCATUNet: UNet-like network fused convolution and transformer for retinal vessel segmentation. PLoS ONE, 17(1), e0262689.
* [11] Zhang, Qiming and Xu, Yufei and Zhang, Jing and Tao, Dacheng. ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond. arXiv preprint arXiv:2202.10108, 2022
* [12] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
* [13] Petit, O., Thome, N., Rambour, C., Themyr, L., Collins, T., Soler, L. (2021). U-Net Transformer: Self and Cross Attention for Medical Image Segmentation. In: Lian, C., Cao, X., Rekik, I., Xu, X., Yan, P. (eds) Machine Learning in Medical Imaging. MLMI 2021. Lecture Notes in Computer Science, vol 12966. Springer, Cham. https://doi.org/10.1007/978-3-030-87589-3_28
* [14] Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. arXiv 2015, arXiv:1511.00561
* [15] Sharma, Shorya. Semantic Segmentation for Urban-Scene Images, 2021
* [16] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In ICLR, 2019.
* [17] Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. Learning deep transformer models for machine translation. In ACL, 2019.
* [18] Kamnitsas, K., Ledig, C., Newcombe, V.F., Simpson, J.P., Kane, A.D., Menon, D.K., Rueckert, D., Glocker, B.: Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. MIA 36, 61–78 (2017)
* [19] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. MICCAI pp. 234–241 (2015)
* [20] BRATS challenge (2018), https://www.med.upenn.edu/sbia/brats2018.html
* [21] Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med Imaging [Internet]. 2015 Aug 12 [cited 2021 May 14];15(1):29. Available from: http://bmcmedimaging.biomedcentral.com/articles/10.1186/s12880-015- 0068-x
* [22] Müller, Dominik and Hartmann, Dennis and Meyer, Philip and Auer, Florian and Soto-Rey, Iñaki and Kramer, Frank. MISeval: a Metric Library for Medical Image Segmentation Evaluation. arXiv, 2022.
* [23] Roy, A.; Todorovic, S. A Multi-Scale CNN for Affordance Segmentation in RGB Images. In Proceedings of the IEEE European Conference on Computer Vision (ECCV), Amsterdam, the Netherlands, 11–14 October 2016; pp. 186–201.
* [24] Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution and Fully Connected CFRs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–845.
* [25] Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2Net: A New Multi-Scale Backbone Architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019.
* [26] Eigen, D.; Fergus, R. Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture. arXiv 2014, arXiv:1411.4734.
* [27] Wang, Jinfeng and Huang, Qiming and Tang, Feilong and Meng, Jia and Su, Jionglong and Song, Sifan. Stepwise Feature Fusion: Local Guides Global, 2022.
* [28] Srivastava, A., Jha, D., Chanda, S., Pal, U., Johansen, H.D., Johansen, D., Riegler, M.A., Ali, S., Halvorsen, P.: MSRF-Net: A Multi-scale Residual Fusion Network for Biomedical Image Segmentation. IEEE Journal of biomedical imaging and health informatics (2021)
* [29] Tomar, Nikhil Kumar, et al. ”Fanet: A feedback attention network for improved biomedical image segmentation.” IEEE Transactions on Neural Networks and Learning Systems 34.11 (2022): 9375-9388.
* [30] D. Jha et al., “Doubleu-net: A deep convolutional neural network for medical image segmentation,” 07 2020, pp. 558–564.
* [31] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. Unet++: A nested u-net architecture for medical image segmentation. In: Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pages 3–11. 2018.
* [32] Long, J., Shelhamer, E. & Darrell, T. Fully convolutional networks for semantic segmentation. Proceedings Of The IEEE Conference On Computer Vision And Pattern Recognition. pp. 3431-3440 (2015)
* [33] Jha, D., Smedsrud, P., Riegler, M., Johansen, D., De Lange, T., Halvorsen, P. & Johansen, H. Resunet++: An advanced architecture for medical image segmentation. 2019 IEEE International Symposium On Multimedia (ISM). pp. 225-2255 (2019)
* [34] Valanarasu, V. (2021). Medical Transformer: Gated Axial-Attention for Medical Image Segmentation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (pp. 36–46). Springer International Publishing.
* [35] Valanarasu, J., Sindagi, V., Hacihaliloglu, I. & Patel, V. Kiu-net: Towards accurate segmentation of biomedical images using over-complete representations. International Conference On Medical Image Computing And Computer-Assisted Intervention. pp. 363-373 (2020)
††institutetext: Tata Institute of Fundamental Research, Homi Bhabha Road,
Colaba, Mumbai 400005, India
# SMEFT predictions for semileptonic processes
Siddhartha Karmakar ID , Amol Dighe ID and Rick S. Gupta ID
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The $SU(2)_{L}\times U(1)_{Y}$ invariance of the Standard Model Effective
Field Theory (SMEFT) predicts multiple restrictions in the space of Wilson
coefficients of $U(1)_{em}$ invariant effective lagrangians such as the Low-
energy Effective Field Theory (LEFT), used for low-energy flavor-physics
observables, or the Higgs Effective Field Theory (HEFT) in unitary gauge,
appropriate for weak-scale observables. In this work, we derive and list all
such predictions for semileptonic operators up to dimension 6. We find that
these predictions can be expressed as 2223 linear relations among the
HEFT/LEFT Wilson coefficients, that are completely independent of any
assumptions about the alignment of the mass and flavor bases. These relations
connect diverse experimental searches such as rare meson decays, high-$p_{T}$
dilepton searches, top decays, $Z$-pole observables, charged lepton flavor
violating observables and non-standard neutrino interaction searches. We
demonstrate how these relations can be used to derive strong indirect
constraints on multiple Wilson coefficients that are currently either weakly
constrained from direct experiments or have no direct bound at all. These
relations also imply, in general, that evidence for new physics in a
particular search channel must be accompanied by correlated anomalies in other
channels.
###### Keywords:
Flavor Physics, SMEFT, HEFT, LEFT, Semi-Leptonic Decays
††preprint: TIFR/TH/24-3
## 1 Introduction
The Standard Model Effective Field Theory (SMEFT) Buchmuller:1985jz ;
Grzadkowski:2010es ; Jenkins:2013zja ; Isidori:2023pyp is a model-independent
way to incorporate the effects of beyond Standard Model (BSM) physics at low
energies. It modifies the Standard Model (SM) lagrangian by the addition of all
possible higher dimensional operators respecting the SM symmetries:
$\displaystyle\mathcal{L}$
$\displaystyle=\mathcal{L}_{SM}+\frac{1}{\Lambda^{2}}\sum_{i}{{\mathcal{C}}}_{i}^{(6)}{\cal
O}_{i}^{(6)}+\cdots,$ (1)
where $\Lambda$ is the cut-off scale, typically of the order of TeV or higher.
Here, ${\mathcal{O}}_{i}^{(d)}$ represent the $d$-dimensional BSM operators
and ${\mathcal{C}}_{i}^{(d)}$ represent the corresponding Wilson coefficients
(WCs). We assume here that the new physics preserves baryon and lepton numbers
and therefore do not include dimension-5 operators. The ellipsis represents
higher order operators with dimension $>6$.
SMEFT is manifestly invariant under $SU(3)_{C}\times SU(2)_{L}\times
U(1)_{Y}$, the SM gauge symmetry. As a consequence, there are specific
relationships among different flavor observables. For instance, the SMEFT
requirement that the up-type and down-type left-handed fermionic fields should
arise from $SU(2)_{L}$ doublets implies relations among flavor observables
probing the up sector and those probing the down sector. In this work, we
initiate a systematic derivation of such relations, beginning with the
semileptonic processes in this article.
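As a minimal illustration of this point (the systematic derivation is given in Sec. 2, and flavor indices are suppressed here), consider the single SMEFT triplet operator $\mathcal{O}_{lq}^{(3)}=(\bar{l}_{L}\gamma^{\mu}\tau^{a}l_{L})(\bar{q}_{L}\gamma_{\mu}\tau^{a}q_{L})$ with $l_{L}=(\nu_{L},e_{L})^{T}$ and $q_{L}=(u_{L},d_{L})^{T}$. Expanding the $SU(2)_{L}$ structure gives
$\mathcal{O}_{lq}^{(3)}=(\bar{\nu}_{L}\gamma^{\mu}\nu_{L})(\bar{u}_{L}\gamma_{\mu}u_{L})-(\bar{\nu}_{L}\gamma^{\mu}\nu_{L})(\bar{d}_{L}\gamma_{\mu}d_{L})-(\bar{e}_{L}\gamma^{\mu}e_{L})(\bar{u}_{L}\gamma_{\mu}u_{L})+(\bar{e}_{L}\gamma^{\mu}e_{L})(\bar{d}_{L}\gamma_{\mu}d_{L})+2(\bar{e}_{L}\gamma^{\mu}\nu_{L})(\bar{u}_{L}\gamma_{\mu}d_{L})+2(\bar{\nu}_{L}\gamma^{\mu}e_{L})(\bar{d}_{L}\gamma_{\mu}u_{L}),$
so a single SMEFT coefficient feeds several neutral-current and charged-current operators with fixed relative weights; rotating the quark fields to the mass basis then turns these fixed weights into relations involving CKM elements.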
In flavor physics, effective field theories (EFT) have long served as a
standard framework to parameterize the effects of heavy new physics. However,
for most flavor physics processes, the experimental energy scale is at or
below the mass of the $b$ quark; this includes weak decays of mesons, neutral
meson mixing, $\tau$ decays, etc. The relevant EFT at these energies is the so
called Low-energy Effective Field Theory (LEFT)111LEFT is sometimes referred
to as weak effective field theory (WET or WEFT) in literature Jenkins:2017jig
; Aebischer:2017gaw ; Aebischer:2017ugx ; London:2021lfn . Buchalla:1995vs ,
which assumes only the $SU(3)_{C}\times U(1)_{em}$ invariance and not the full
$SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}$ invariance of SM.
The flavor structure of new physics (NP) can also be probed at higher scales,
for instance, in flavor-violating decays of the $Z,W^{\pm}$, and the Higgs
boson $h$, via flavor-violating production or decay of the top quark $t$, or
by constraining the Drell-Yan processes initiating from a flavor off-diagonal
diquark state. In order to include both high-energy and low-energy
observables, one of course needs to write all possible $SU(3)_{C}\times
U(1)_{em}$ invariant operators, as in LEFT, but terms involving the top quark,
Higgs boson and electroweak bosons also need to be included. An appropriate
framework that can encompass both, low-energy flavor observables as well as
this second class of processes involving heavier SM states, is the so-called
Higgs Effective Field Theory (HEFT) Alonso:2012px ; Buchalla:2013rka ;
Pich:2016lew . This is a more general framework than SMEFT and also includes
scenarios where the EW symmetry is realized non-linearly. In the unitary
gauge, it leads to a lagrangian involving all possible $SU(3)_{C}\times
U(1)_{em}$ invariant operators. Given the HEFT lagrangian, it is possible to
derive the corresponding LEFT lagrangian by simply integrating out the heavier
SM states $W,\,Z,\,h$ and $t$.
Figure 1: Schematic representation of EFTs above and below the electroweak
scale. UV4f represents the subset of SMEFT where the BSM physics only has
four-fermion operators.
For a given set of processes, a general parametrization of possible BSM
deviations assuming only $SU(3)_{C}\times U(1)_{em}$ invariance gives rise to
many more free parameters up to a given order than the number of SMEFT WCs to
that order. This is simply because the former does not assume the full
$SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}$ invariance of SM. This situation
has been schematically presented in Fig. 1 where SMEFT can be seen to be a
subset of the more general HEFT. Within this region satisfying SMEFT
assumptions, the smaller number of free parameters implies relationships among
the WCs of HEFT. These relationships can be thought of as predictions of SMEFT
at a certain order; these predictions can be broken only by violating the
basic underlying assumptions of SMEFT.
An apparent obstacle in deriving these relations is that, while SMEFT is
written in the flavor basis, HEFT or LEFT operators have to be written in the
mass basis if we wish to connect them to physical observables. The equations
connecting HEFT Wilson coefficients in the mass basis to SMEFT Wilson
coefficients in the flavor basis, thus, contain elements of the rotation
matrices of the left-handed and right-handed up-type and down-type fermions,
which cannot be fixed by experiments. We show, however, that only the
measurable elements of the Cabbibo-Kobayashi-Maskawa (CKM) quark-mixing matrix
and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton-mixing matrix appear in
the final relations among HEFT WCs. This allows us to derive the implications
of SMEFT on flavor physics observables in a way that is completely independent
of assumptions about the alignment of the flavor and the mass bases, often
referred to as UV flavor assumptions.
In this work, we consider the 3240 semileptonic four-fermion operators in HEFT
that get contributions from the 1053 SMEFT operators, giving rise to 2187
constraints. In addition, we consider 144 HEFT operators that can contribute
to low-energy flavor observables via the exchange of $Z,W^{\pm}$ and $h$
bosons. In SMEFT, these arise from 108 independent operators, thus implying 36
constraints in the HEFT space. We derive all these 2223 constraints and
express them as analytic relations independent of any UV flavor assumptions.
Some other recent studies have also considered the implications of the
$SU(2)_{L}\times U(1)_{Y}$ invariance of SMEFT on flavor observables
Alonso:2014csa ; Cata:2015lta ; Fuentes-Martin:2020lea ; Bause:2020auq ;
Bause:2020xzj ; Bissmann:2020mfi ; Bause:2021cna ; Bause:2021ihn ;
Bause:2022rrs ; Sun:2023cuf ; Grunwald:2023nli ; Greljo:2023bab ;
Fajfer:2012vx ; Bause:2023mfe ; Chen:2024jlj . To the best of our knowledge,
however, the present work is the first study to comprehensively derive and
list all the 2223 analytic relations relevant for semileptonic processes (see,
however, Ref. Bause:2020auq ; Bause:2021cna ; Bause:2021ihn where a subset of
the above relations has been presented). Our approach also makes it clear that
these implications can be obtained and presented in a way that is free from
all UV flavor assumptions. A similar approach has been used to derive SMEFT
predictions in Higgs physics in Ref. gupta ; LHCHiggs .
The SMEFT predictions derived in this work are expressed as linear
relationships among $SU(3)_{C}\times U(1)_{em}$ invariant BSM couplings in the
mass basis. These relationships can be directly translated to exact relations
among experimental observables. As we will see, these relations connect
diverse experimental searches: low-energy flavor observables in different
sectors such as kaon, B-meson, charm and $\tau$-decays, the Drell-Yan process
at high-$p_{T}$, top production and decay channels, $Z$-decays, and searches
for non-standard neutrino interaction. These relationships thus allow us to
utilize experimental limits on a set of well-constrained observables to put
bounds on other, otherwise poorly constrained, observables. Our work
demonstrates that indirect constraints on many WCs — such as those related to
$d\bar{d}\to\nu\bar{\nu}$, $u_{i}\to u_{j}\nu\bar{\nu}$ and top decays —
obtained in the above manner, would surpass direct bounds.
Another crucial implication of these relations among WCs is that, in general
they disallow an isolated non-vanishing WC. This is because a nonzero WC will,
via the linear relations, imply a nonzero value for multiple other WCs. This
indicates that deviations from SM would typically not appear in isolated
channels. For instance, it is known that the observed excess in $B\to K\nu\nu$
branching fraction can be explained by a nonzero WC for the operator involving
the transition $b\to s\nu\nu$. We show that this would imply non-vanishing
values for WCs involving processes such as $b\to c\ell\nu_{\ell}$, $b\to
u\ell\nu_{\ell}$ , $t\to c\mu e$, $t\to u\mu e$, etc.
While the SMEFT predictions we derive are completely independent of UV flavor
assumptions, we find that as far as phenomenological implications are
concerned, the sharpest conclusions can be drawn in an important class of
models where the dominant new physics effects come from four-fermion operators
and not from modifications of $Z,W^{\pm}$ and $h$ couplings. We call these
models ‘UV4f’ models and represent them by the dashed rectangle in Fig. 1.
This is a highly motivated class of UV completions that encompasses a majority
of the models proposed to explain the flavor anomalies. These include minimal
leptoquark models Hiller:2014yaa ; Gripaios:2014tna ;
deMedeirosVarzielas:2015yxm ; Sahoo:2015wya and many $Z^{\prime}$ models
Altmannshofer:2014cfa ; Bonilla:2017lsq ; Bian:2017rpg ; Alonso:2017uky ;
Cline:2017ihf proposed in the literature.
The plan of this paper is as follows. In Sec. 2, we present the list of
relevant operators in SMEFT, HEFT and LEFT and provide the relations among the
WCs. We discuss the phenomenological applications of these relations in Sec.
3, where we derive the indirect bounds on WCs associated with left-handed
quarks and leptons. In Sec. 4, we discuss possible directions of NP searches
suggested by the relations among the WCs, given some of the current observed
deviations from SM. We present concluding remarks in Sec. 5. In Appendix A, we
write the HEFT operators used in the text in the $SU(2)_{L}\times U(1)_{Y}$
invariant form, with the electroweak symmetry non-linearly realized, and
compare our list with the previous literature. In Appendix B, we briefly
discuss some details of the SMEFT basis used and the rationale for our choice.
In Appendix C, we present all the analytic relations in terms of semileptonic
LEFT WCs and WCs that modify the $Z$, $W^{\pm}$ and Higgs couplings to
fermions. In Appendix D, we present tables of $90\%$ C.L limits on the LEFT
WCs.
## 2 SMEFT predictions for semileptonic operators
In this section, we present all possible semileptonic operators respecting the
$U(1)_{em}$ symmetry (as the $SU(3)_{C}$ symmetry is always respected, we will
not mention it separately from here on), and derive the analytic relations
among them that are predicted by SMEFT. We consider the following lagrangian
terms at the weak scale:
$\displaystyle\mathcal{L}_{\rm HEFT}$ $\displaystyle\supset\mathcal{L}^{\rm
SM}+\sum_{f,i,j}\left[{{\mathbf{c}}}_{fZ}\right]^{ij}\,(\bar{f}_{i}\gamma^{\mu}\,f_{j})\,Z_{\mu}+\sum_{f_{u},f_{d},i,j}\left[{{\mathbf{c}}}_{f_{u}f_{d}W}\right]^{ij}\,(\bar{f}_{u_{i}}\gamma^{\mu}\,f_{d_{j}})\,W^{+}_{\mu}$
$\displaystyle~{}+\sum\left[{{\mathbf{c}}}_{fh}\right]^{ij}\,(\bar{f}_{i}\,P_{R}\,f_{j})\,h+\frac{1}{\Lambda^{2}}\,\sum_{i}{{\mathbf{c}}}_{i}\,{{\mathbf{o}}}^{4f}_{sl,i}+h.c.,$
(2)
where, in addition to the SM lagrangian $\mathcal{L}^{SM}$ and the term
containing all possible semileptonic four-fermion operators
${{\mathbf{o}}}^{4f}_{sl}$, we also include corrections to the couplings of
$Z$, $W^{\pm}$ and Higgs boson $h$ to fermions.222 We have not considered
four-quark operators and electroweak dipole operators. Although these can
contribute to semileptonic processes, they do not get matched to semileptonic
operators at the tree level. Furthermore, these operators are constrained from
processes which are not semileptonic. The four-quark operators can get
constraints from nonleptonic decays, whereas the dipole operators are bounded
by measurements such as the precise observations of dipole moments of
elementary particles, the $b\to s\gamma$ process etc. This is because the
diagrams with Higgs, $W^{\pm}$ and $Z$ exchange can generate four-fermion
effective operators at the low energies relevant to semileptonic flavor
observables. Here $f\in\\{u_{L},u_{R},d_{L},d_{R},e_{L},e_{R},\nu_{L}\\}$,
$f_{u}$ denotes neutrinos and up-type quarks (both left-handed and right-
handed) whereas $f_{d}$ denotes down-type quarks and charged leptons. A
lagrangian containing all these operators with independent coefficients is
equivalent to the HEFT lagrangian, $\mathcal{L}_{\rm HEFT}$, in the unitary
gauge. This is because, although formally $SU(2)_{L}\times U(1)_{Y}$
invariance is non-linearly realized in the HEFT lagrangian, in the unitary
gauge HEFT reduces to a lagrangian with all possible $U(1)_{em}$-invariant
terms. As we show in Appendix A, our list of operators can be rewritten in an
invariant form with non-linearly realized electroweak symmetry as in Ref.
Buchalla:2012qq . The HEFT basis of Appendix A excludes some redundant
operators that appeared in the HEFT bases presented in earlier literature
(e.g. Buchalla:2012qq ; Cata:2015lta ) and also includes some operators that
were missed in previous work. Further note that in the UV4f scenario discussed
in the Sec. 1, the coupling modifications of $Z,W^{\pm}$ and $h$ are absent,
i.e. the second, third and fourth terms on the RHS of eq. (2) vanish.
The semileptonic four-fermion operators ${{\mathbf{o}}}^{4f}_{sl}$ can be
directly probed by high-energy processes such as the Drell-Yan process
$\bar{q}_{i}q_{j}\to ll$, top production and decay processes, etc. We consider
these operators in Sec. 2.1 and list the dimension-6 (dim-6) SMEFT operators
that contribute to them. We find that the number of HEFT operators is larger
than the number of dim-6 SMEFT operators, which results in SMEFT predictions
for these HEFT WCs. These predictions are in the form of linear relations
among the HEFT WCs; we explicitly derive these relations in Sec. 2.1.
Next we consider the corrections to the SM couplings of $Z,W^{\pm}$ and $h$ to
fermions, indicated by the second, third and fourth terms in the RHS of eq.
(2). Although our reason for inclusion of these operators is that they
contribute to low-energy semileptonic processes via $Z,W^{\pm}$ and $h$
exchange, these couplings can be probed independently by studying decays of
the $Z,W^{\pm}$ and $h$. We list the SMEFT operators contributing to these in
Sec. 2.2. We find that, while the number of SMEFT operators is the same as the
number of HEFT operators for $h$ coupling corrections, the number of
contributing SMEFT operators in the case of gauge boson coupling corrections
is smaller. This results in relations among the corrections to $Z$ and
$W^{\pm}$ couplings; we derive these in Sec. 2.2.
Finally, in Sec. 2.3 we rewrite the analytic relations derived in Sec. 2.1 and
Sec. 2.2 in terms of WCs at the low scale relevant for most of the important
flavor observables, such as those connected to meson mixing, rare decays, etc.
The lagrangian relevant at these scales is the sum of the LEFT neutral-current
and charged-current lagrangians (Footnote 3: to distinguish different EFTs,
we denote the Wilson coefficients by ‘${{\mathcal{C}}}$’ for SMEFT, by
‘${{\mathbf{c}}}$’ for HEFT and by ‘$C$’ for LEFT; the corresponding operators
are denoted by ‘${\mathcal{O}}$’, ‘${\mathbf{o}}$’ and ‘$O$’, respectively):
$\displaystyle\mathcal{L}_{\textrm{LEFT}}^{\rm NC}$
$\displaystyle=\mathcal{L}^{\rm
NC}_{\textrm{SM}}+\frac{4G_{F}}{\sqrt{2}}\sum_{i}^{\textrm{NC
only}}\,{{C}}_{i}\,O_{i}^{\rm NC}~{},$ (3)
$\displaystyle\textrm{and}\quad\mathcal{L}_{\textrm{LEFT}}^{\rm CC}$
$\displaystyle=\mathcal{L}^{\rm
CC}_{\textrm{SM}}+\frac{4G_{F}}{\sqrt{2}}\sum_{i}^{\textrm{CC
only}}\,{{C}}_{i}\,O_{i}^{\rm CC}~{},$ (4)
where the first terms on the RHS arise from the first term in eq. (2) by
integrating out $Z$, $W^{\pm}$ and $h$, assuming SM couplings. (Footnote 4: a
loop factor of $e^{2}/(16\pi^{2})$ is usually included for the NC LEFT
lagrangian in the literature Aebischer:2015fzz ; Bause:2020auq ; in our
convention, we have not included this factor in order to have uniformity in
the analytic relations presented later.) Here ‘${\rm NC}$’ and ‘${\rm CC}$’
stand for neutral-
current and charged-current, respectively. In order to obtain the SMEFT
predictions for relations among the LEFT WCs, we need to match the LEFT
coefficients above to the HEFT ones including the effect of $Z$, $W^{\pm}$,
and $h$ exchange diagrams. These matching relations can then be inverted to
write the HEFT WCs and the relations among them in terms of the LEFT ones. We
carry out this procedure in Sec. 2.3.
Vector operators $LLLL$
---
| NC | Count
$[{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}e_{L}^{\beta})(\bar{d}_{L}^{i}\gamma^{\mu}d_{L}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{euLL}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}e_{L}^{\beta})(\bar{u}_{L}^{i}\gamma^{\mu}u_{L}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{\nu dLL}^{V}]^{\alpha\beta ij}$ | $(\bar{\nu}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{d}_{L}^{i}\gamma^{\mu}d_{L}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{\nu uLL}^{V}]^{\alpha\beta ij}$ | $(\bar{\nu}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{u}_{L}^{i}\gamma^{\mu}u_{L}^{j})$ | 81 (45)
| CC |
$[{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{u}_{L}^{i}\gamma^{\mu}d_{L}^{j})$ | 162 (81)
Vector operators $RRRR$
| NC | Count
$[{{\mathbf{c}}}_{edRR}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\gamma_{\mu}e_{R}^{\beta})(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{euRR}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\gamma_{\mu}e_{R}^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 81 (45)
Vector operators $LLRR$
---
| NC | Count
$[{{\mathbf{c}}}_{edLR}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}e_{L}^{\beta})(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{euLR}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}e_{L}^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{\nu dLR}^{V}]^{\alpha\beta ij}$ | $(\bar{\nu}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{\nu uLR}^{V}]^{\alpha\beta ij}$ | $(\bar{\nu}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 81 (45)
| CC |
$[{{\mathbf{c}}}_{LR}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{L}^{\alpha}\gamma_{\mu}\nu_{L}^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 162 (81)
Vector operators $RRLL$
| NC | Count
$[{{\mathbf{c}}}_{edRL}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\gamma_{\mu}e_{R}^{\beta})(\bar{d}_{L}^{i}\gamma^{\mu}d_{L}^{j})$ | 81 (45)
$[{{\mathbf{c}}}_{euRL}^{V}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\gamma_{\mu}e_{R}^{\beta})(\bar{u}_{L}^{i}\gamma^{\mu}u_{L}^{j})$ | 81 (45)
Scalar operators with $d_{R}$
---
| NC | Count
$[{{\mathbf{c}}}_{ed,RLLR}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,e_{L}^{\beta})(\bar{d}_{L}^{i}\,d_{R}^{j})$ | 162 (81)
$[{{\mathbf{c}}}_{ed,RLRL}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,e_{L}^{\beta})(\bar{d}_{R}^{i}\,d_{L}^{j})$ | 162 (81)
| CC |
$[{{\mathbf{c}}}_{RLLR}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,\nu_{L}^{\beta})(\bar{u}_{L}^{i}\,d_{R}^{j})$ | 162 (81)
Scalar operators with $u_{R}$
| NC | Count
$[{{\mathbf{c}}}_{eu,RLLR}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,e_{L}^{\beta})(\bar{u}_{L}^{i}\,u_{R}^{j})$ | 162 (81)
$[{{\mathbf{c}}}_{eu,RLRL}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,e_{L}^{\beta})(\bar{u}_{R}^{i}\,u_{L}^{j})$ | 162 (81)
| CC |
$[{{\mathbf{c}}}_{RLRL}^{S}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\,\nu_{L}^{\beta})(\bar{u}_{R}^{i}\,d_{L}^{j})$ | 162 (81)
Tensor operators with $d_{R}$
---
| NC | Count
$[{{\mathbf{c}}}_{ed,RLLR}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}e_{L}^{\beta})(\bar{d}_{L}^{i}\sigma^{\mu\nu}d_{R}^{j})$ | 162 (81)
$[{{\mathbf{c}}}_{ed,RLRL}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}e_{L}^{\beta})(\bar{d}_{R}^{i}\sigma^{\mu\nu}d_{L}^{j})$ | 162 (81)
| CC |
$[{{\mathbf{c}}}_{RLLR}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}\nu_{L}^{\beta})(\bar{u}_{L}^{i}\sigma^{\mu\nu}d_{R}^{j})$ | 162 (81)
Tensor operators with $u_{R}$
| NC | Count
$[{{\mathbf{c}}}_{eu,RLLR}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}e_{L}^{\beta})(\bar{u}_{L}^{i}\sigma^{\mu\nu}u_{R}^{j})$ | 162 (81)
$[{{\mathbf{c}}}_{eu,RLRL}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}e_{L}^{\beta})(\bar{u}_{R}^{i}\sigma^{\mu\nu}u_{L}^{j})$ | 162 (81)
| CC |
$[{{\mathbf{c}}}_{RLRL}^{T}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\sigma_{\mu\nu}\nu_{L}^{\beta})(\bar{u}_{R}^{i}\sigma^{\mu\nu}d_{L}^{j})$ | 162 (81)
Table 1: Semileptonic operators in HEFT. Here ${{\mathbf{c}}}$’s are the WCs
for the corresponding operators in the flavor basis. The indices
$\alpha,\beta$ denote lepton families and $i,j$ denote quark families. NC and
CC correspond to neutral-current and charged-current operators. Count denotes
the number of independent operators; the number inside the brackets is the
number of independent operators if all WCs were real. Note that for vector CC
operators as well as all the scalar and tensor operators we have not
explicitly listed their hermitian conjugates but included them in the count.
Vector operators $LLLL$
---
| Operator | Count
$[{{\mathcal{C}}}_{{\ell}q}^{(1)}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}\gamma_{\mu}l^{\beta})(\bar{q}^{i}\gamma^{\mu}q^{j})$ | 81 (45)
$[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}\gamma_{\mu}\tau^{I}l^{\beta})(\bar{q}^{i}\gamma^{\mu}\tau^{I}q^{j})$ | 81 (45)
Vector operators $RRRR$
| Operator | Count
$[{{\mathcal{C}}}_{ed}]^{\alpha\beta ij}$ | $(\bar{e}^{\alpha}\gamma_{\mu}e^{\beta})(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 81 (45)
$[{{\mathcal{C}}}_{eu}]^{\alpha\beta ij}$ | $(\bar{e}^{\alpha}\gamma_{\mu}e^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 81 (45)
Vector operators $LLRR$
---
| Operator | Count
$[{{\mathcal{C}}}_{{\ell}d}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}\gamma_{\mu}l^{\beta})(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 81 (45)
$[{{\mathcal{C}}}_{{\ell}u}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}\gamma_{\mu}l^{\beta})(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 81 (45)
Vector operators $RRLL$
| Operator | Count
$[{{\mathcal{C}}}_{eq}]^{\alpha\beta ij}$ | $(\bar{e}_{R}^{\alpha}\gamma_{\mu}e_{R}^{\beta})(\bar{q}^{i}\gamma^{\mu}q^{j})$ | 81 (45)
Scalar operators with $d_{R}$
---
| Operator | Count
$[{{\mathcal{C}}}_{{\ell}edq}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}_{a}\,e_{R}^{\beta})(\bar{d}_{R}^{i}\,q^{j}_{a})$ | 162 (81)
Scalar operators with $u_{R}$
---
| Operator | Count
$[{{\mathcal{C}}}_{{\ell}equ}^{(1)}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}_{a}\,e_{R}^{\beta})\epsilon_{ab}(\bar{q}^{i}_{b}\,u_{R}^{j})$ | 162 (81)
Tensor operators
---
| Operator | Count
$[{{\mathcal{C}}}_{{\ell}equ}^{(3)}]^{\alpha\beta ij}$ | $(\bar{l}^{\alpha}_{a}\sigma_{\mu\nu}e_{R}^{\beta})\epsilon_{ab}(\bar{q}^{i}_{b}\sigma^{\mu\nu}u_{R}^{j})$ | 162 (81)
Table 2: Semileptonic operators in SMEFT. Here ${{\mathcal{C}}}$’s are the
WCs for the corresponding operators in the flavor basis. The indices
$\alpha,\beta$ denote lepton families and $i,j$ denote quark families. Here
$l=(\nu_{L},e_{L})^{T}$, $q=(u_{L},d_{L})^{T}$, $\tau^{I}$ are the Pauli
matrices and $\epsilon_{ab}$ is the $(2\times 2)$ anti-symmetric matrix with
$\epsilon_{12}=1$. Count denotes the number of independent operators; the
number inside the brackets is the number of independent operators if all WCs
were real. Note that for the scalar and tensor operators we have not
explicitly listed their hermitian conjugates but included them in the count.
### 2.1 Predictions for semileptonic operators at high energies
We begin our analysis with the 3240 (1674) semileptonic four-fermion
operators present in HEFT (see Table 1), where the number within the
parentheses denotes the number of independent operators if the WCs of all
these operators were real. (Footnote 5: For non-hermitian operators, we
consider the operator and its hermitian conjugate as two distinct operators,
as one can treat $(O+O^{\dagger})$ and $i(O-O^{\dagger})$ as two separate
operators with real Wilson coefficients.) Note that each entry in Table 1
represents multiple
operators corresponding to different possible values for the family indices.
The first entry $[{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta ij}$, for instance,
represents 81 (45) operators, since the indices $\alpha$, $\beta$ denote three
lepton families and the indices $i$, $j$ denote three quark families. In Table
2, we list the 1053 (558) semileptonic four-fermion operators in SMEFT which
would give rise to the above HEFT operators.
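The totals quoted above can be reproduced by a straightforward tally of the per-category entries of Tables 1 and 2. The short sketch below is a plain Python tally added here purely for illustration (the category labels are informal shorthands, not notation used elsewhere in the paper):

```python
# Tally of the independent semileptonic operator counts quoted in Tables 1 and 2.
# Each entry is a (complex count, real count) pair per table row.
heft = {
    "vector LLLL": [(81, 45)] * 4 + [(162, 81)],   # 4 NC rows + 1 CC row
    "vector RRRR": [(81, 45)] * 2,
    "vector LLRR": [(81, 45)] * 4 + [(162, 81)],
    "vector RRLL": [(81, 45)] * 2,
    "scalar dR":   [(162, 81)] * 3,                # 2 NC rows + 1 CC row
    "scalar uR":   [(162, 81)] * 3,
    "tensor dR":   [(162, 81)] * 3,
    "tensor uR":   [(162, 81)] * 3,
}
smeft = {
    "vector LLLL": [(81, 45)] * 2,                 # C_lq^(1), C_lq^(3)
    "vector RRRR": [(81, 45)] * 2,                 # C_ed, C_eu
    "vector LLRR": [(81, 45)] * 2,                 # C_ld, C_lu
    "vector RRLL": [(81, 45)],                     # C_eq
    "scalar dR":   [(162, 81)],                    # C_ledq
    "scalar uR":   [(162, 81)],                    # C_lequ^(1)
    "tensor":      [(162, 81)],                    # C_lequ^(3)
}

def total(table):
    return tuple(sum(row[k] for rows in table.values() for row in rows) for k in (0, 1))

print("HEFT :", total(heft))    # -> (3240, 1674)
print("SMEFT:", total(smeft))   # -> (1053, 558)
```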
The operators in Table 1 and 2 are divided into categories based on their
Lorentz structure and the chiralities of the fields involved. In the
following, we discuss the mapping between SMEFT and HEFT operators and the
resulting SMEFT predictions for each of these categories.
#### LLLL vector operators:
In this category, there are 486 (261) independent operators in HEFT, as shown
in Table 1, which correspond to the 162 (90) SMEFT operators shown in Table 2.
The SMEFT operators, when expanded in the unitary gauge, give the following
contributions to the HEFT Wilson coefficients:
$\displaystyle[{{\mathbf{c}}}_{\nu uLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=([{{\mathcal{C}}}_{{\ell}q}^{(1)}]^{\alpha\beta
ij}+[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta
ij})~{},\quad[{{\mathbf{c}}}_{euLL}^{V}]^{\alpha\beta
ij}=([{{\mathcal{C}}}_{{\ell}q}^{(1)}]^{\alpha\beta
ij}-[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta ij}),$ (5)
$\displaystyle[{{\mathbf{c}}}_{\nu dLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=([{{\mathcal{C}}}_{{\ell}q}^{(1)}]^{\alpha\beta
ij}-[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta
ij})~{},\quad[{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta
ij}=([{{\mathcal{C}}}_{{\ell}q}^{(1)}]^{\alpha\beta
ij}+[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta ij})~{},$ (6)
$\displaystyle[{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta ij}$
$\displaystyle=2\,[{{\mathcal{C}}}_{{\ell}q}^{(3)}]^{\alpha\beta ij}~{},$ (7)
where we have written both the SMEFT and HEFT WCs in the flavor basis. One can
easily read off the 324 (171) SMEFT predictions implied by the above
equations:
$\displaystyle[{{\mathbf{c}}}_{euLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathbf{c}}}_{\nu dLL}^{V}]^{\alpha\beta ij}~{},$ (8)
$\displaystyle[{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathbf{c}}}_{\nu uLL}^{V}]^{\alpha\beta ij}~{},$ (9)
$\displaystyle\,[{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta
ij}-[{{\mathbf{c}}}_{\nu dLL}^{V}]^{\alpha\beta ij}~{}.$ (10)
These predictions are in the flavor basis. We would like to have the relations
in terms of HEFT operators in the mass basis for later matching with the LEFT
operators and with the observables. This can be achieved by the use of unitary
matrices $S_{L,R}$ and $K_{L,R}$ for quarks and leptons, respectively. The
fields are transformed as
$\displaystyle u_{L}^{i}$
$\displaystyle\rightarrow(S_{L}^{u})^{ij}u_{L}^{j}~{},\quad
d_{L}^{i}\rightarrow(S_{L}^{d})^{ij}d_{L}^{j}~{},\quad
u_{R}^{i}\rightarrow(S_{R}^{u})^{ij}u_{R}^{j}~{},\quad
d_{R}^{i}\rightarrow(S_{R}^{d})^{ij}d_{R}^{j}~{},$ (11) $\displaystyle
e_{L}^{\alpha}$
$\displaystyle\rightarrow(K_{L}^{e})^{\alpha\beta}e_{L}^{\beta}~{},\quad\nu_{L}^{\alpha}\to(K_{L}^{\nu})^{\alpha\beta}\nu_{L}^{\beta}~{},\quad
e_{R}^{\alpha}\to(K_{R}^{e})^{\alpha\beta}e_{R}^{\beta}~{}.$ (12)
The relation in eq. (8) gets transformed in the mass basis as (Footnote 6: a
hat on top of a HEFT WC indicates that it is in the mass basis; otherwise it
is in the flavor basis.)
$\displaystyle(K_{L}^{e})^{\alpha\rho}\,(S_{L}^{u})^{ik}\,[\hat{{\mathbf{c}}}_{euLL}^{V}]^{\rho\sigma
kl}\,(S_{L}^{u\dagger})^{{\ell}j}(K_{L}^{e\dagger})^{\sigma\beta}$
$\displaystyle=(K_{L}^{\nu})^{\alpha\rho}\,(S_{L}^{d})^{ik}\,[\hat{{\mathbf{c}}}_{\nu
dLL}^{V}]^{\rho\sigma
kl}\,(S_{L}^{d\dagger})^{{\ell}j}(K_{L}^{\nu\dagger})^{\sigma\beta}.$ (13)
Suppressing the lepton and quark family indices, the above equation can be
rewritten in a compact form as
$\displaystyle
K_{L}^{e}\,S_{L}^{u}\,\hat{{\mathbf{c}}}_{euLL}^{V}\,S_{L}^{u\dagger}\,K_{L}^{e\dagger}$
$\displaystyle=K_{L}^{\nu}\,S_{L}^{d}\,\hat{{\mathbf{c}}}_{\nu
dLL}^{V}\,S_{L}^{d\dagger}\,K_{L}^{\nu\dagger}~{},$ (14)
where the matrices $S$ and $K$ carry only quark and lepton indices,
respectively. This relation may be further expressed as
$\displaystyle V^{\dagger}\,\hat{{\mathbf{c}}}_{euLL}^{V}\,V$
$\displaystyle=U^{\dagger}\,\hat{{\mathbf{c}}}_{\nu dLL}^{V}\,U~{},$ (15)
using the CKM and PMNS matrices
$\displaystyle V$ $\displaystyle\equiv
V_{CKM}=S_{L}^{u\dagger}\,S_{L}^{d}~{}~{}\textrm{and}\quad U\equiv
U_{PMNS}=K_{L}^{\nu\dagger}\,K_{L}^{e}~{}.$ (16)
Following similar steps, we can rewrite the relations from eqs. (9) and (10)
in the mass basis as
$\displaystyle V\,\hat{{\mathbf{c}}}_{edLL}^{V}\,V^{\dagger}$
$\displaystyle=U^{\dagger}\,\hat{{\mathbf{c}}}_{\nu uLL}^{V}\,U~{},$ (17)
$\displaystyle V^{\dagger}\,\hat{{\mathbf{c}}}_{LL}^{V}\,U$
$\displaystyle=\hat{{\mathbf{c}}}_{edLL}^{V}-U^{\dagger}\,\hat{{\mathbf{c}}}_{\nu
dLL}^{V}\,U~{}.$ (18)
Note that the final SMEFT predictions, i.e. the relations among the HEFT WCs
shown in eqs. (15), (17) and (18), involve only the physically measurable CKM
and PMNS matrices. This makes the relations completely independent of any UV
flavor assumption. The relations in eqs. (15) and (17) were derived previously
in Ref. Bissmann:2020mfi .
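The passage from the flavor-basis relations (8)-(10) to the mass-basis relations (15)-(18) can be illustrated with a short numerical sketch. The snippet below is a minimal numpy check with hypothetical random SMEFT inputs and random unitary rotation matrices (the seed and matrices are assumptions of the example, not inputs of the analysis); it builds the HEFT WCs from eqs. (5)-(6), rotates them to the mass basis, and verifies eq. (15).

```python
import numpy as np

rng = np.random.default_rng(0)

def unitary(n):
    """A random n x n unitary matrix (QR of a complex Gaussian matrix)."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q

def rot_quark(c, A):
    """Apply A^dagger . c . A on the quark indices (i, j) of c[alpha, beta, i, j]."""
    return np.einsum('ki,abkl,lj->abij', A.conj(), c, A)

def rot_lepton(c, B):
    """Apply B^dagger . c . B on the lepton indices (alpha, beta) of c[alpha, beta, i, j]."""
    return np.einsum('ra,rsij,sb->abij', B.conj(), c, B)

# Hypothetical SMEFT inputs in the flavor basis; eqs. (5)-(6): c_euLL = c_nudLL = C1 - C3.
C1 = rng.normal(size=(3, 3, 3, 3)) + 1j * rng.normal(size=(3, 3, 3, 3))
C3 = rng.normal(size=(3, 3, 3, 3)) + 1j * rng.normal(size=(3, 3, 3, 3))
c_euLL_flavor = C1 - C3
c_nudLL_flavor = C1 - C3

# Random rotations to the mass basis and the resulting CKM / PMNS matrices, eq. (16).
SLu, SLd, KLe, KLnu = unitary(3), unitary(3), unitary(3), unitary(3)
V = SLu.conj().T @ SLd          # CKM
U = KLnu.conj().T @ KLe         # PMNS

# Mass-basis WCs: c_flavor = K S c_hat S^dag K^dag  =>  c_hat = S^dag K^dag c_flavor K S.
c_euLL_hat = rot_lepton(rot_quark(c_euLL_flavor, SLu), KLe)
c_nudLL_hat = rot_lepton(rot_quark(c_nudLL_flavor, SLd), KLnu)

# Check eq. (15): V^dag c_euLL_hat V (quark indices) = U^dag c_nudLL_hat U (lepton indices).
lhs = rot_quark(c_euLL_hat, V)
rhs = rot_lepton(c_nudLL_hat, U)
print(np.allclose(lhs, rhs))    # True
```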
#### LLRR vector operators:
Similar to the previous case, in this category there are 486 (261) independent
HEFT operators and 162 (90) independent SMEFT operators, as shown in Table 1
and 2, respectively. The HEFT WCs can be written in terms of the SMEFT ones as
follows:
$\displaystyle[{{\mathbf{c}}}_{\nu uLR}^{V}]^{\alpha\beta ij}$
$\displaystyle=\,[{{\mathcal{C}}}_{{\ell}u}]^{\alpha\beta
ij},\quad[{{\mathbf{c}}}_{euLR}^{V}]^{\alpha\beta
ij}=\,[{{\mathcal{C}}}_{{\ell}u}]^{\alpha\beta ij},$ (19)
$\displaystyle[{{\mathbf{c}}}_{\nu dLR}^{V}]^{\alpha\beta ij}$
$\displaystyle=\,[{{\mathcal{C}}}_{{\ell}d}]^{\alpha\beta
ij}~{},\quad[{{\mathbf{c}}}_{edLR}^{V}]^{\alpha\beta
ij}=\,[{{\mathcal{C}}}_{{\ell}d}]^{\alpha\beta
ij}~{},\quad[{{\mathbf{c}}}_{LR}^{V}]^{\alpha\beta ij}=0~{}.$ (20)
Thus, here we get 324 (171) relations among the HEFT coefficients. In the
flavor basis and the mass basis, these relations are
$\displaystyle{{\mathbf{c}}}_{euLR}^{V}={{\mathbf{c}}}_{\nu uLR}^{V}\quad$
$\displaystyle\Rightarrow\quad\hat{{\mathbf{c}}}_{euLR}^{V}=U^{\dagger}\,\hat{{\mathbf{c}}}_{\nu
uLR}^{V}\,U~{},$ (21)
$\displaystyle{{\mathbf{c}}}_{edLR}^{V}={{\mathbf{c}}}_{\nu dLR}^{V}\quad$
$\displaystyle\Rightarrow\quad\hat{{\mathbf{c}}}_{edLR}^{V}=U^{\dagger}\,\hat{{\mathbf{c}}}_{\nu
dLR}^{V}\,U~{},$ (22) $\displaystyle{{\mathbf{c}}}_{LR}^{V}=0\quad$
$\displaystyle\Rightarrow\quad\hat{{\mathbf{c}}}_{LR}^{V}=0~{}.$ (23)
Note that in this category, only right-handed quark fields appear and the
rotation matrices for the right-handed quarks cancel out in the relations when
translated to the mass basis. As a result, there is no CKM matrix in these
relations and only the PMNS matrix $U$ appears for the leptons. The relations
above show that the charged-current HEFT WCs vanish for this category of
operators. This is because in SMEFT, as noted in Burgess:2021ylu ;
Cata:2015lta , right-handed quarks cannot participate in charged-current
semileptonic processes at the dimension-6 level due to hypercharge conservation.
Category | Analytic relations | Count
---|---|---
$LLLL$ | $V^{\dagger}_{ik}\,[\hat{{\mathbf{c}}}_{euLL}^{V}]^{\alpha\beta kl}\,V_{{\ell}j}=U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu dLL}^{V}]^{\rho\sigma ij}\,U_{\sigma\beta}$ | 81 (45)
$V_{ik}\,[\hat{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\beta kl}\,V^{\dagger}_{{\ell}j}=U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu uLL}^{V}]^{\rho\sigma ij}\,U_{\sigma\beta}$ | 81 (45)
$V^{\dagger}_{ik}\,[\hat{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta kj}=[\hat{{\mathbf{c}}}_{edLL}^{V}]^{\alpha\rho ij}\,U^{\dagger}_{\rho\beta}-U^{\dagger}_{\alpha\sigma}\,[\hat{{\mathbf{c}}}_{\nu dLL}^{V}]^{\sigma\beta ij}$ | 162 (81)
$RRRR$ | No relations |
$LLRR$ | $[\hat{{\mathbf{c}}}_{edLR}^{V}]^{\alpha\beta ij}=U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu dLR}^{V}]^{\rho\sigma ij}\,U_{\sigma\beta}$ | 81 (45)
$[\hat{{\mathbf{c}}}_{euLR}^{V}]^{\alpha\beta ij}=U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu uLR}^{V}]^{\rho\sigma ij}\,U_{\sigma\beta}$ | 81 (45)
$[\hat{{\mathbf{c}}}_{LR}^{V}]^{\alpha\beta ij}=0$ | 162 (81)
$RRLL$ | $[\hat{{\mathbf{c}}}_{edRL}^{V}]^{\alpha\beta ij}=V^{\dagger}_{ik}\,[\hat{{\mathbf{c}}}_{euRL}^{V}]^{\alpha\beta kl}\,V_{lj}$ | 81 (45)
Scalar ($d_{R}$) | $V_{ik}\,[\hat{{\mathbf{c}}}_{ed,RLLR}^{S}]^{\alpha\beta kj}=[\hat{{\mathbf{c}}}_{RLLR}^{S}]^{\alpha\rho ij}\,U_{\rho\beta}$ | 162 (81)
$[\hat{{\mathbf{c}}}_{ed,RLRL}^{S}]^{\alpha\beta ij}=0$ | 162 (81)
Scalar ($u_{R}$) | $[\hat{{\mathbf{c}}}_{eu,RLRL}^{S}]^{\alpha\beta ik}\,V_{kj}=-[\hat{{\mathbf{c}}}_{RLRL}^{S}]^{\alpha\rho ij}\,U_{\rho\beta}$ | 162 (81)
$[\hat{{\mathbf{c}}}_{eu,RLLR}^{S}]^{\alpha\beta ij}=0$ | 162 (81)
Tensor ($d_{R}$) | $[\hat{{\mathbf{c}}}_{ed,\,\textrm{all}}^{T}]^{\alpha\beta ij}=0$ | 324 (162)
$[\hat{{\mathbf{c}}}_{RLLR}^{T}]^{\alpha\beta ij}=0$ | 162 (81)
Tensor ($u_{R}$) | $[\hat{{\mathbf{c}}}_{eu,RLRL}^{T}]^{\alpha\beta ik}\,V_{kj}=-[\hat{{\mathbf{c}}}_{RLRL}^{T}]^{\alpha\rho ij}\,U_{\rho\beta}$ | 162 (81)
$[\hat{{\mathbf{c}}}_{eu,RLLR}^{T}]^{\alpha\beta ij}=0$ | 162 (81)
$Z$ and $W^{\pm}$ | $[\hat{{\mathbf{c}}}_{ud_{L}W}]^{ij}=\frac{1}{\sqrt{2}}\cos\theta_{w}\,([\hat{{\mathbf{c}}}_{u_{L}Z}]^{ik}\,V_{kj}-V_{ik}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{kj})$ | 18 (9)
$[\hat{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\rho}\,U_{\rho\beta}=\frac{1}{\sqrt{2}}\cos\theta_{w}\,([\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}-U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\rho\sigma}\,U_{\sigma\beta})$ | 18 (9)
Table 3: Linear relations among the HEFT semileptonic WCs in the mass basis
predicted by the SMEFT. Summation over repeated indices is implicit. Count
denotes the number of independent operators; the number inside the brackets is
the number of independent operators if all WCs are real.
#### RRRR vector operators:
Right-handed fermions are not charged under $SU(2)_{L}$. Thus even in SMEFT,
the up-type and down-type right-handed fields can appear independently in
neutral-current semileptonic operators, as in HEFT. This makes the number of
neutral-current operators of RRRR type in HEFT and SMEFT to be the same as
shown in Tables 1 and 2, respectively. Furthermore, in the absence of right-
handed neutrinos, there are no charged-current operators either in HEFT or in
SMEFT in this category. As a result, in this category, there are no relations
among the HEFT coefficients.
#### RRLL vector operators:
In the case of vector operators involving right-handed leptons and left-handed
quarks, there are 162 (90) independent operators in HEFT and 81 (45) in SMEFT,
respectively. This results in 81 (45) relations among the HEFT WCs. The
mapping between HEFT and SMEFT WCs in the flavor basis and the resulting
relations in the mass basis for this category are
$\displaystyle{{\mathbf{c}}}_{edRL}^{V}=\,{{\mathcal{C}}}_{eq}~{},$
$\displaystyle\quad{{\mathbf{c}}}_{euRL}^{V}=\,{{\mathcal{C}}}_{eq}~{}\quad\Rightarrow\quad\hat{{\mathbf{c}}}_{edRL}^{V}=V^{\dagger}\,\hat{{\mathbf{c}}}_{euRL}^{V}\,V~{}.$
(24)
Note that the PMNS matrix does not appear in these relations as only right-
handed electrons are involved and the corresponding flavor rotations cancel
out. Furthermore, there are no charged-current operators in this category as
there is no right-handed neutrino in SM.
#### Scalar operators:
There are 486 (243) scalar semileptonic operators with right-handed down-type
quarks and 486 (243) operators with right-handed up-type quarks in HEFT. In
SMEFT, there are 162 (90) operators for each of these scenarios. We find 324
(153) relations among the HEFT coefficients for each scenario. Mapping of
these operators between HEFT and SMEFT in the flavor basis gives
$\displaystyle[{{\mathbf{c}}}_{ed,RLLR}^{S}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathcal{C}}}_{{\ell}edq}]^{\beta\alpha ji*}~{},$
$\displaystyle[{{\mathbf{c}}}_{eu,RLLR}^{S}]^{\alpha\beta ij}$
$\displaystyle=0~{},$ (25)
$\displaystyle[{{\mathbf{c}}}_{ed,RLRL}^{S}]^{\alpha\beta ij}$
$\displaystyle=0~{},$
$\displaystyle[{{\mathbf{c}}}_{eu,RLRL}^{S}]^{\alpha\beta ij}$
$\displaystyle=-[{{\mathcal{C}}}_{{\ell}equ}^{(1)}]^{\beta\alpha ji*}~{},$ (26)
$\displaystyle[{{\mathbf{c}}}_{RLLR}^{S}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathcal{C}}}_{{\ell}edq}]^{\beta\alpha ji\,*}~{},$
$\displaystyle[{{\mathbf{c}}}_{RLRL}^{S}]^{\alpha\beta ij}$
$\displaystyle=[{{\mathcal{C}}}_{{\ell}equ}^{(1)}]^{\beta\alpha ji\,*}~{}.$ (27)
From the above equations, we get the following relations among the HEFT WCs in
the mass basis:
$\displaystyle V\,\hat{{\mathbf{c}}}_{ed,RLLR}^{S}$
$\displaystyle=\hat{{\mathbf{c}}}_{RLLR}^{S}\,U~{},$
$\displaystyle\hat{{\mathbf{c}}}_{eu,RLRL}^{S}\,V$
$\displaystyle=-\hat{{\mathbf{c}}}_{RLRL}^{S}\,U~{},$ (28)
$\displaystyle\hat{{\mathbf{c}}}_{ed,RLRL}^{S}$ $\displaystyle=0~{},$
$\displaystyle\hat{{\mathbf{c}}}_{eu,RLLR}^{S}$ $\displaystyle=0~{}.$ (29)
Note that both of the relations in eq. (28) relate neutral-current scalar
operators (on the LHS) to charged-current scalar operators (on the RHS). The
WCs in eq. (29) vanish since the corresponding SMEFT operators would not
respect the $U(1)_{Y}$ hypercharge symmetry. (Footnote 7: the vanishing of
these HEFT WCs corresponds to the relations $C_{S}=-C_{P}$ and
$C_{S}^{\prime}=C_{P}^{\prime}$ in the conventional LEFT for UV4f models, as
noted in Alonso:2014csa ; Cata:2015lta .)
#### Tensor operators:
There is no tensor operator with right-handed down-type quarks in SMEFT as
these operators cannot conserve $U(1)_{Y}$ hypercharge. Thus, all the tensor
operators with right-handed down-type quarks in HEFT get zero contribution
from SMEFT. As a result, SMEFT predicts 486 (243) constraints on such HEFT
WCs:
$\displaystyle\hat{{\mathbf{c}}}_{ed,RLLR}^{T}$
$\displaystyle=0~{},\quad\hat{{\mathbf{c}}}_{ed,RLRL}^{T}=0~{},\quad\hat{{\mathbf{c}}}_{RLLR}^{T}=0~{}.$
(30)
For the case of tensor operators with right-handed up-type quarks, the mapping
and relations are exactly the same as the scalar operators:
$\displaystyle\hat{{\mathbf{c}}}_{eu,RLRL}^{T}\,V$
$\displaystyle=-\hat{{\mathbf{c}}}_{RLRL}^{T}\,U~{},\quad\hat{{\mathbf{c}}}_{eu,RLLR}^{T}=0~{}.$
(31)
The reason for the vanishing of the WCs in the last equation is again that the
corresponding SMEFT operators would not preserve the $U(1)_{Y}$ hypercharge
symmetry. See also references Alonso:2014csa ; Cata:2015lta .
In Table 3, we present all the relations among the HEFT WCs corresponding to
four-fermion semileptonic operators, which would be predicted by SMEFT. We
express these relations in the mass basis and explicitly put the indices for
the quark and the lepton families.
### 2.2 Predictions for the couplings of $Z$, $W^{\pm}$ and $h$ to fermions
HEFT
---
$LL$ quarks
| Operator | Count
$[{{\mathbf{c}}}_{u_{L}Z}]^{ij}$ | $(\bar{u}_{L}^{i}\gamma^{\mu}u_{L}^{j})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{d_{L}Z}]^{ij}$ | $(\bar{d}_{L}^{i}\gamma^{\mu}d_{L}^{j})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{ud_{L}W}]^{ij}$ | $(\bar{u}_{L}^{i}\gamma^{\mu}d_{L}^{j})\,W_{\mu}^{+}$ | 18(9)
$RR$ quarks
| Operator | Count
$[{{\mathbf{c}}}_{u_{R}Z}]^{ij}$ | $(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{d_{R}Z}]^{ij}$ | $(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{ud_{R}W}]^{ij}$ | $(\bar{u}_{R}^{i}\gamma^{\mu}d_{R}^{j})\,W_{\mu}^{+}$ | 18(9)
$LL$ leptons
| Operator | Count
$[{{\mathbf{c}}}_{\nu_{L}Z}]^{\alpha\beta}$ | $(\bar{\nu}_{L}^{\alpha}\gamma^{\mu}\nu_{L}^{\beta})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}$ | $(\bar{e}_{L}^{\alpha}\gamma^{\mu}e_{L}^{\beta})\,Z_{\mu}$ | 9(6)
$[{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\beta}$ | $(\bar{e}_{L}^{\alpha}\gamma^{\mu}\nu_{L}^{\beta})\,W_{\mu}^{+}$ | 18(9)
$RR$ leptons
| Operator | Count
$[{{\mathbf{c}}}_{e_{R}Z}]^{\alpha\beta}$ | $(\bar{e}_{R}^{\alpha}\gamma^{\mu}e_{R}^{\beta})\,Z_{\mu}$ | 9(6)
Scalar operators
| Operator | Count
$[{{\mathbf{c}}}_{eh}]^{\alpha\beta}$ | $(\bar{e}_{L}^{\alpha}\,e_{R}^{\beta})\,h$ | 9(6)
$[{{\mathbf{c}}}_{dh}]^{ij}$ | $(\bar{d}_{L}^{i}\,d_{R}^{j})\,h$ | 9(6)
$[{{\mathbf{c}}}_{uh}]^{ij}$ | $(\bar{u}_{L}^{i}\,u_{R}^{j})\,h$ | 9(6)
SMEFT
---
$LL$ quarks
| Operator | Count
$[{{\mathcal{C}}}_{Hq}^{(1)}]^{ij}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}H)(\bar{q}^{i}\gamma^{\mu}q^{j})$ | 9(6)
$[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}\,\tau^{I}H)(\bar{q}^{i}\gamma^{\mu}\,\tau^{I}q^{j})$ | 9(6)
$RR$ quarks
| Operator | Count
$[{{\mathcal{C}}}_{Hu}]^{ij}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}H)(\bar{u}_{R}^{i}\gamma^{\mu}u_{R}^{j})$ | 9(6)
$[{{\mathcal{C}}}_{Hd}]^{ij}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}H)(\bar{d}_{R}^{i}\gamma^{\mu}d_{R}^{j})$ | 9(6)
$[{{\mathcal{C}}}_{Hud}]^{ij}$ | $(\widetilde{H}^{\dagger}\overleftrightarrow{D}_{\mu}\,H)(\bar{u}_{R}^{i}\gamma^{\mu}\,d_{R}^{j})$ | 18(9)
$LL$ leptons
| Operator | Count
$[{{\mathcal{C}}}_{Hl}^{(1)}]^{\alpha\beta}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}H)(\bar{l}^{\alpha}\gamma^{\mu}l^{\beta})$ | 9(6)
$[{{\mathcal{C}}}_{Hl}^{(3)}]^{\alpha\beta}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}\,\tau^{I}H)(\bar{l}^{\alpha}\gamma^{\mu}\,\tau^{I}l^{\beta})$ | 9(6)
$RR$ leptons
| Operator | Count
$[{{\mathcal{C}}}_{He}]^{\alpha\beta}$ | $(H^{\dagger}\overleftrightarrow{D}_{\mu}H)(\bar{e}_{R}^{\alpha}\gamma^{\mu}e_{R}^{\beta})$ | 9(6)
Scalar operators
| Operator | Count
$[{{\mathcal{C}}}_{eH}]^{\alpha\beta}$ | $(H^{\dagger}\,H)\,(\bar{l}^{\alpha}\,e_{R}^{\beta}H)$ | 9(6)
$[{{\mathcal{C}}}_{dH}]^{ij}$ | $(H^{\dagger}\,H)\,(\bar{q}^{i}\,d_{R}^{j}H)$ | 9(6)
$[{{\mathcal{C}}}_{uH}]^{ij}$ | $(H^{\dagger}\,H)\,(\bar{q}^{i}\,u_{R}^{j}\widetilde{H})$ | 9(6)
Table 4: Left column: HEFT operators representing the couplings of $Z$,
$W^{\pm}$ and $h$ with fermions. Right column: SMEFT operators contributing to
the corresponding HEFT operators (following notations of Grzadkowski:2010es ).
Count denotes the number of independent operators; the number inside the
brackets is the number of independent operators if all WCs were real. The
SMEFT basis in which these operators are written is defined in Appendix B.
In addition to quarks and leptons, HEFT also involves $Z,W^{\pm}$ and $h$
bosons as degrees of freedom. The BSM couplings of these bosons to the
fermions appear as HEFT WCs, as shown in eq. (2). These WCs contribute to low-
energy semileptonic processes via $Z,W^{\pm}$ and $h$ exchange diagrams. They
can also be probed independently by studying decays of $Z,W^{\pm}$, and $h$.
However, when the BSM couplings of these bosons to fermions are parameterized
in terms of SMEFT WCs, the number of independent WCs is smaller than the total
number of relevant HEFT WCs. Thus, SMEFT predicts relations among the
corresponding HEFT WCs, as earlier. In this section, we derive these
relations.
In Table 4, we show the 144 (87) HEFT operators and 108 (69) SMEFT operators
that give rise to $Z,W^{\pm}$ and $h$ couplings to fermions. Once again, the
HEFT operators have been presented in the unitary gauge as $U(1)_{em}$
invariant terms. These operators can be rewritten in an $SU(2)_{L}\times
U(1)_{Y}$ invariant form where this symmetry is non-linearly realized, as
shown in Appendix A. Note that the SMEFT basis we have used is not the
commonly used Warsaw basis. The details of our basis and our rationale for it
have been presented in Appendix B.
While the number of dimension-6 SMEFT and HEFT operators is the same for the
couplings to $h$, the number of HEFT operators contributing to $Z$ and
$W^{\pm}$ coupling deviations of left-handed quarks or leptons is 36 (21),
which exceeds the number of contributing SMEFT operators, 18 (12). This
implies 18 (9) relations among the HEFT WCs. The expressions for the HEFT WCs for
these operators in terms of the SMEFT ones can be written in the flavor basis
as
$\displaystyle[{{\mathbf{c}}}_{u_{L}Z}]^{ij}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hq}^{(1)}]^{ij}-[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij}),$
$\displaystyle[{{\mathbf{c}}}_{\nu_{L}Z}]^{\alpha\beta}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hl}^{(1)}]^{\alpha\beta}-[{{\mathcal{C}}}_{Hl}^{(3)}]^{\alpha\beta})~{},$
(32) $\displaystyle[{{\mathbf{c}}}_{d_{L}Z}]^{ij}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hq}^{(1)}]^{ij}+[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij})~{},$
$\displaystyle[{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hl}^{(1)}]^{\alpha\beta}+[{{\mathcal{C}}}_{Hl}^{(3)}]^{\alpha\beta})~{},$
(33) $\displaystyle[{{\mathbf{c}}}_{ud_{L}W}]^{ij}$
$\displaystyle=\eta_{LW}\,[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij}~{},$
$\displaystyle[{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\beta}$
$\displaystyle=\eta_{LW}\,[{{\mathcal{C}}}_{Hl}^{(3)}]^{\alpha\beta}~{}.$ (34)
Here $\eta_{LZ}=-g/(2\,\cos\theta_{w})$, where $\theta_{w}$ is the Weinberg
angle, and $\eta_{LW}=g/\sqrt{2}$. These expressions can then be used to
derive the relations among the HEFT WCs:
$\displaystyle[\hat{{\mathbf{c}}}_{ud_{L}W}]^{ij}$
$\displaystyle=\frac{1}{\sqrt{2}}\cos\theta_{w}\,([\hat{{\mathbf{c}}}_{u_{L}Z}]^{ik}\,V_{kj}-V_{ik}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{kj})~{},$
(35)
$\displaystyle[\hat{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\rho}\,U_{\rho\beta}$
$\displaystyle=\frac{1}{\sqrt{2}}\cos\theta_{w}\,([\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}-U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\rho\sigma}\,U_{\sigma\beta})~{}.$
(36)
These relations are also shown in the last two rows of Table 3. Once again,
the relations in the mass basis contain only the physically measurable CKM and
PMNS matrices.
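As a numerical cross-check of eq. (35), the sketch below builds the mass-basis couplings from eqs. (32)-(34) for hypothetical random SMEFT inputs $\mathcal{C}_{Hq}^{(1,3)}$ and random left-handed quark rotations, and confirms that the relation holds; the electroweak values $g\simeq 0.65$ and $\sin^{2}\theta_{w}\simeq 0.231$ are illustrative assumptions (the check is independent of their precise values).

```python
import numpy as np

rng = np.random.default_rng(1)
g, sw2 = 0.65, 0.231                      # illustrative electroweak inputs
cw = np.sqrt(1 - sw2)
eta_LZ, eta_LW = -g / (2 * cw), g / np.sqrt(2)

def unitary(n):
    """A random n x n unitary matrix (QR of a complex Gaussian matrix)."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

# Hypothetical SMEFT inputs (flavor basis) and left-handed quark rotations.
CHq1 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
CHq3 = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
SLu, SLd = unitary(3), unitary(3)
V = SLu.conj().T @ SLd                    # CKM

# Eqs. (32)-(34) in the flavor basis, then rotation to the mass basis.
c_uZ_hat = SLu.conj().T @ (eta_LZ * (CHq1 - CHq3)) @ SLu
c_dZ_hat = SLd.conj().T @ (eta_LZ * (CHq1 + CHq3)) @ SLd
c_udW_hat = SLu.conj().T @ (eta_LW * CHq3) @ SLd   # mixes u_L and d_L rotations

# Check eq. (35).
rhs = (cw / np.sqrt(2)) * (c_uZ_hat @ V - V @ c_dZ_hat)
print(np.allclose(c_udW_hat, rhs))        # True
```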
The relations in eqs. (35) and (36) should be independent of the choice of the
SMEFT basis. In the Warsaw basis, the additional operators ${\cal
O}_{T}=(H^{\dagger}\,\overleftrightarrow{D}_{\mu}H)^{2}$ and ${\cal
O}_{WB}=gg^{\prime}(H^{\dagger}\tau^{I}\,H)\,W_{\mu\nu}^{I}\,B^{\mu\nu}$ would
contribute to the couplings of gauge bosons to the fermions by affecting their
mass and kinetic terms. However, their contributions in the above two
relations cancel out. This can be more transparently seen in the SMEFT basis
that we use, where these two operators are traded for two other operators
which do not affect the gauge boson couplings (see Appendix B).
### 2.3 Predictions for semileptonic operators at low energies
In the previous two subsections, we discussed the predictions of SMEFT at high
energies, i.e., relations among HEFT WCs above the EW scale. Now we consider
the SMEFT predictions for the low-energy observables where the relevant
effective field theory is LEFT. The forms of LEFT operators are the same as in
Table 1 apart from the fact that the operators involving the top quark would
not be included in the LEFT lagrangian. We will now rewrite the relations
derived in the previous two subsections in terms of the LEFT WCs. In order to
carry this out, we need the matching relations between the WCs of HEFT and
LEFT operators. For instance, for operators of the $LLLL$ category, the
matching relations in the flavor basis (Footnote 8: in our notation,
$\tilde{{C}}$ denotes a LEFT coefficient in the flavor basis and ${{C}}$ a
coefficient in the mass basis) are
$\displaystyle[\tilde{{C}}_{{\ell}qLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=\omega\,[{{\mathbf{c}}}_{{\ell}qLL}^{V}]^{\alpha\beta
ij}+k_{{\ell}_{L}}\,[{{\mathbf{c}}}_{q_{L}Z}]^{ij}\,\delta_{\alpha\beta}+k_{q_{L}}\,[{{\mathbf{c}}}_{l_{L}Z}]^{\alpha\beta}\,\delta_{ij}~{},$
(37) $\displaystyle[\tilde{{C}}_{LL}^{V}]^{\alpha\beta ij}$
$\displaystyle=\omega\,[{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta ij}+k_{e\nu
W}\,[{{\mathbf{c}}}_{ud_{L}W}]^{ij}\,\delta_{\alpha\beta}+k_{udW}\,[{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\beta}\,\delta_{ij}~{},$
(38)
where $l\in\\{\nu,e\\}$, $q\in\\{u,d\\}$, $\omega=v^{2}/(2\,\Lambda^{2})$, and
the $k$ coefficients are
$\displaystyle k_{f_{L}}$
$\displaystyle=\frac{2\cos\theta_{w}}{g}(T^{3}_{f}-Q_{f}\sin^{2}\theta_{w})~{},\quad\textrm{and}~{}~{}k_{e\nu
W}=k_{udW}=\frac{\sqrt{2}}{g}~{},$ (39)
with $f_{L}\in\\{\nu_{L},e_{L},u_{L},d_{L}\\}$.
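For concreteness, the structure of the matching relation eq. (37) can be spelled out in a few lines of code. In the sketch below the electroweak inputs ($g$, $\sin^{2}\theta_{w}$, $v$, $\Lambda$) and the HEFT WCs are placeholder values chosen only for illustration; the snippet assembles the $(\nu u)$ LEFT WC from the four-fermion piece and the two $Z$-exchange pieces.

```python
import numpy as np

g, sw2, v, Lam = 0.65, 0.231, 246.0, 2000.0   # illustrative inputs (Lam in GeV)
cw = np.sqrt(1 - sw2)
omega = v**2 / (2 * Lam**2)

# k coefficients of eq. (39), here for the (nu_L, u_L) pairing as an example.
def k(T3, Q):
    return 2 * cw / g * (T3 - Q * sw2)
k_nuL, k_uL = k(+0.5, 0.0), k(+0.5, +2.0 / 3.0)

rng = np.random.default_rng(2)
c_4f = rng.normal(size=(3, 3, 3, 3))          # hypothetical HEFT WC  c_{nu u LL}^V
c_uZ = rng.normal(size=(3, 3))                # hypothetical Z-coupling corrections
c_nuZ = rng.normal(size=(3, 3))
delta = np.eye(3)

# Eq. (37): LEFT WC = omega * four-fermion piece + two Z-exchange pieces.
C_nuuLL = (omega * c_4f
           + k_nuL * np.einsum('ij,ab->abij', c_uZ, delta)    # k_{nu_L} [c_{u_L Z}]^{ij} delta_{ab}
           + k_uL * np.einsum('ab,ij->abij', c_nuZ, delta))   # k_{u_L} [c_{nu_L Z}]^{ab} delta_{ij}
print(C_nuuLL.shape)                           # (3, 3, 3, 3), indices (alpha, beta, i, j)
```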
In this work, we have not considered effects of the renormalization group (RG)
running of the LEFT coefficients from the weak scale to the scale of the
relevant experiments. These would need to be included using the RG equations
of Ref. Jenkins:2013zja for a more precise phenomenological treatment.
Using matching relations like eq. (37) and eq. (38), we can now rewrite the
relations of Table 3, which were written in terms of the HEFT WCs, in terms of
the LEFT WCs and the BSM couplings of $Z$ and $W^{\pm}$. For the $LLLL$ operators, for
example, these relations become
$\displaystyle V_{ik}\left[[{{C}}_{edLL}^{V}]^{\alpha\beta
kl}-\left(k_{e_{L}}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{kl}\,\delta_{\alpha\beta}+k_{d_{L}}\,[\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}\,\delta_{kl}\right)\right]V^{\dagger}_{{\ell}j}$
$\displaystyle~{}~{}=U^{\dagger}_{\alpha\rho}\left[[{{C}}_{\nu
uLL}^{V}]^{\rho\sigma
ij}-\chi~{}\left(k_{\nu_{L}}\,[\hat{{\mathbf{c}}}_{u_{L}Z}]^{ij}\,\delta_{\rho\sigma}+k_{u_{L}}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\rho\sigma}\,\delta_{ij}\right)\right]U_{\sigma\beta},$
(40) $\displaystyle V^{\dagger}_{ik}\left[[{{C}}_{euLL}^{V}]^{\alpha\beta
kl}-\chi~{}\left(k_{e_{L}}\,[\hat{{\mathbf{c}}}_{u_{L}Z}]^{kl}\,\delta_{\alpha\beta}+k_{u_{L}}\,[\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}\,\delta_{kl}\right)\right]V_{{\ell}j}$
$\displaystyle~{}~{}=U^{\dagger}_{\alpha\rho}\left[[{{C}}_{\nu
dLL}^{V}]^{\rho\sigma
ij}-\left(k_{\nu_{L}}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{ij}\,\delta_{\rho\sigma}+k_{d_{L}}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\rho\sigma}\,\delta_{ij}\right)\right]U_{\sigma\beta}~{},$
(41) $\displaystyle V^{\dagger}_{ik}\,\left[[{{C}}_{LL}^{V}]^{\alpha\beta
kj}-\chi\left(k_{e\nu
W}\,[\hat{{\mathbf{c}}}_{ud_{L}W}]^{kj}\,\delta_{\alpha\beta}+k_{udW}\,[\hat{{\mathbf{c}}}_{e\nu_{L}W}]^{\alpha\beta}\,\delta_{kj}\right)\right]$
$\displaystyle~{}~{}=\left[[{{C}}_{edLL}^{V}]^{\alpha\rho
ij}-\left(k_{e_{L}}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{ij}\,\delta_{\alpha\rho}+k_{d_{L}}\,[\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\rho}\,\delta_{ij}\right)\right]\,U^{\dagger}_{\rho\beta}$
$\displaystyle~{}~{}~{}-U^{\dagger}_{\alpha\sigma}\,\left[[{{C}}_{\nu
dLL}^{V}]^{\sigma\beta
ij}-\left(k_{\nu_{L}}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{ij}\,\delta_{\sigma\beta}+k_{d_{L}}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\sigma\beta}\,\delta_{ij}\right)\right]~{},$
(42)
where, for WCs involving the top quark, we have defined (Footnote 9: this
convention is used for the purpose of giving eqs. (40) to (42) a unified form
for all quarks; it is an exception to our normal convention, where we use
‘$C$’ only for LEFT WCs) $[{{C}}_{{\ell}qLL}^{V}]^{\alpha\beta
ij}\equiv\omega[\hat{{\mathbf{c}}}_{{\ell}qLL}^{V}]^{\alpha\beta ij}$ and
$[{{C}}_{LL}^{V}]^{\alpha\beta
ij}\equiv\omega[\hat{{\mathbf{c}}}_{LL}^{V}]^{\alpha\beta ij}$. In eq. (42),
$\chi=0$ ($\chi=1$) if the respective four-fermion operator contains (does not
contain) the top quark. The introduction of $\chi$ ensures that the HEFT WCs
are replaced by LEFT ones for all the four-fermion operators not containing
the top quark. The relations for the WCs in the other categories can be
similarly derived and have been presented in Appendix C.
We now mention two important scenarios where the SMEFT predictions derived in
this section can be simplified. First, note that apart from neutrino physics
experiments, it is impossible to distinguish the different flavors of
neutrinos in observables. These observables thus depend on combinations of WCs
with neutrino flavor indices summed over and are independent of the basis used
for neutrinos. In particular, we can choose to work in a basis aligned to the
charged-lepton flavor basis. This amounts to substituting $U=1$ in all the
SMEFT predictions, whether it is for HEFT WCs in Table 3 or for LEFT WCs such
as those in eqs. (40-42) or in Appendix C. Secondly, in the UV4f scenario
where there are no modifications to $Z,W^{\pm}$ and $h$ couplings with respect
to SM, the matching equations in eq. (38) get simplified and we can obtain
SMEFT predictions involving LEFT WCs simply by substituting
$[\hat{{\mathbf{c}}}]^{\alpha\beta ij}$ by $[{{C}}]^{\alpha\beta ij}$ in Table
3. This scenario becomes more relevant in the phenomenological applications of
the SMEFT predictions that we present in the following section.
## 3 SMEFT-predicted constraints on new physics
In this section, we will show how the SMEFT predictions derived in Sec. 2 can
be used to obtain bounds on the LEFT Wilson coefficients $[{{C}}]^{\alpha\beta
ij}$. We utilize the fact that the SMEFT predictions give analytic equations
that can connect strongly constrained WCs to poorly constrained ones, thus
allowing us to extract stronger bounds on the latter. Here we restrict
ourselves to UV4f models, where the UV physics generates only four-fermion
operators in SMEFT, so that the operators discussed in Sec. 2.2 are
absent. While a more general analysis using the constraints on $Z$ and
$W^{\pm}$ couplings (see Ref. Efrati:2015eaa ) is possible, our primary aim
here is to illustrate the power of the SMEFT predictions and thus we focus on
the very well-motivated UV4f scenario. As discussed in the previous section,
in this scenario we can use the relations in Table 3 by simply replacing
$[\hat{{\mathbf{c}}}]^{\alpha\beta ij}$ by $[{{C}}]^{\alpha\beta ij}$.
Furthermore, as explained at the end of the previous section, the observables
in this section will be insensitive to the flavor of neutrinos, and hence we
can take $U\to 1$ in the SMEFT predictions.
We further restrict ourselves to the operators involving only left-handed
quarks and leptons (i.e. $LLLL$ discussed in Sec. 2.3) as these provide
leading corrections with respect to the SM. (Footnote 10: While low-energy
flavor observables get interference-level corrections from both $RRLL$ and
$LLLL$ operators, as far as high-$p_{T}$ observables are concerned, only
$LLLL$ operator contributions can interfere with the SM contribution if
fermion masses are neglected.) The relations amongst $LLLL$ operators in UV4f
models are given
by
$\displaystyle[{{C}}_{euLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=V_{ik}\,[{{C}}_{\nu dLL}^{V}]^{\alpha\beta
kl}V^{\dagger}_{{\ell}j}~{},$ (43)
$\displaystyle[{{C}}_{edLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=V^{\dagger}_{ik}\,[{{C}}_{\nu uLL}^{V}]^{\alpha\beta
kl}V_{{\ell}j}~{},$ (44) $\displaystyle[{{C}}_{LL}^{V}]^{\alpha\beta ij}$
$\displaystyle=V_{ik}\,([{{C}}_{edLL}^{V}]^{\alpha\beta kj}-[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta kj})~{},$ (45)
as can be obtained from eqs. (40)-(42). Recall that RG effects have been
ignored in deriving the above relations, and as discussed in Sec. 2.3, the WCs
in the above equations that involve the top quark have been defined as
$[{{C}}]^{\alpha\beta ij}\equiv\omega\,[\hat{{\mathbf{c}}}]^{\alpha\beta ij}$.
These WCs can be constrained using data from top production and decays. All
other WCs in the above equation are the standard LEFT WCs of eqs. (3) and (4).
Note that eqs. (43-45) involve 486 (261) WCs $[{{C}}]^{\alpha\beta ij}$, which
arise from 162 (90) SMEFT coefficients. These three equations therefore
correspond to 324 (171) relations among the WCs. Note that in several earlier
analyses (e.g. Fuentes-Martin:2020lea ; Bause:2020auq ) the WCs have been
assumed to be real. This is of course valid for the WCs of hermitian
operators, i.e. where $\alpha=\beta$ and $i=j$. However, as eqs. (43)-(45)
show, all the WCs are related linearly with complex coefficients (i.e.,
combinations of CKM matrix elements), which makes it inconsistent for all of
them to be real. Note that even if all the SMEFT WCs at the UV scale are real
in the flavor basis, phases will appear in $[{{C}}]^{\alpha\beta ij}$ through
CKM elements in the matching. In our analysis, we therefore consider complex
values for all the WCs of non-hermitian operators.
In the rest of this section, we focus on deriving bounds on the WCs from
semileptonic processes. To start with, in Sec. 3.1 to Sec. 3.3 we consider
only processes involving muons and muon neutrinos, i.e. $\alpha=\beta=2$. This
is because many of the direct bounds from the muon channel are quite stringent
compared to those from the electron or tau channel. The terms in eqs. (43) and
(44) contain only neutral current WCs. On the other hand, in eq. (45) charged
current WCs are expressed in terms of neutral current WCs. Based on these
relations, in Sec. 3.1 and 3.2 we obtain indirect bounds on neutral current
WCs appearing in eqs. (43) and (44) respectively. In Sec. 3.3 we discuss about
the indirect bounds for charged current WCs. In Sec. 3.4, we further indicate
how these relations may be used in conjunction with constraints on lepton
flavor violating decays to constrain Wilson coefficients involving other
lepton families.
### 3.1 Bounds on neutral-current WCs involving $(\nu d)$ and $(eu)$
There are 6 complex and 6 real neutral-current WCs in eq. (43) with
$\alpha=\beta=2$. These WCs correspond to operators either with neutrinos and
down-type quarks ($\nu dLL$), or with charged leptons and up-type quarks
($euLL$). We first discuss direct bounds on these WCs. We consider both low-
energy observables, such as rare decays, as well as high-energy observables,
such as the high-$p_{T}$ Drell-Yan process, top decays, etc. While the former
can directly bound the LEFT WCs $[{{C}}]^{\alpha\beta ij}$, the latter can
directly bound only the high energy HEFT WCs,
$[\hat{{\mathbf{c}}}]^{\alpha\beta ij}$. As we are considering UV4f models
here, however, the bounds on $[\hat{{\mathbf{c}}}]^{\alpha\beta ij}$ can be
converted to bounds on $[{{C}}]^{\alpha\beta ij}$ in a straightforward way by
keeping only the first term in the matching relations, eq. (37) and eq. (38).
Direct bounds on the WCs $[{{C}}_{\nu dLL}^{V}]^{2212}$, $[{{C}}_{\nu
dLL}^{V}]^{2213}$ and $[{{C}}_{\nu dLL}^{V}]^{2223}$ are obtained from rare
decays of $K$ and $B$ mesons. For $[{{C}}_{\nu dLL}^{V}]^{2212}$, we have used
the recent measurement of the branching ratio of $K^{+}\to\pi^{+}\nu\nu$ in
the NA62 experiment NA62:2022qes . For $[{{C}}_{\nu dLL}^{V}]^{2213}$ we take
the $90\%$ upper bounds on the branching ratios of the decay modes
$B^{+}\to\rho^{+}\,\nu\,\nu$ and $B^{+}\to\pi^{+}\,\nu\,\nu$
ParticleDataGroup:2022pth . For $[{{C}}_{\nu dLL}^{V}]^{2223}$, we include the
recent measurement of $B^{+}\to K^{+}\nu\,\nu$ branching ratio in Belle-
II:2023esi along with the $90\%$ upper bound on the branching ratio of
$B^{+}\to K^{*+}\,\nu\,\nu$ ParticleDataGroup:2022pth . The theoretical values
for the discussed mesonic decay modes are calculated using the package
‘flavio’ Straub:2018kue . The bound on $[{{C}}_{\nu dLL}^{V}]^{2211}$ is
obtained from constraints on non-standard interactions of neutrinos in
atmospheric and accelerator neutrino experiments Farzan:2017xzy ;
Escrihuela:2011cf (Footnote 11: the bounds presented in Farzan:2017xzy are for
the vector and axial-vector WCs; we convert these to bounds on operators in
our basis by adding the $1\sigma$ ranges in quadrature). These bounds are
shown in the top panels
of Fig. 2. For the WCs $[{{C}}_{\nu dLL}^{V}]^{2222}$ and $[{{C}}_{\nu
dLL}^{V}]^{2233}$, there are no direct bounds available.
Figure 2: Direct bounds on the complex WCs ${{C}}_{\nu dLL}^{V}$ (top panels)
and ${{C}}_{euLL}^{V}$ (middle panels). The cyan color represents bounds from
rare meson decays, orange represents bounds from high-$p_{T}$ dimuon searches
while purple represents bounds from top productions and decays. The WCs shown
in the bottom panels are real due to the hermiticity of the corresponding
operators. Note that the bottom panel uses the symmetric log scale. See
Appendix D for numerical values of the bounds.
The direct bounds for WCs containing up-type quarks and charged leptons are
obtained from rare decays, high-$p_{T}$ dilepton searches as well as top
production and decays. The WC $[{{C}}_{euLL}^{V}]^{2212}$ gets constraints
from rare decays of $D$ mesons Fuentes-Martin:2020lea . For
$[{{C}}_{euLL}^{V}]^{2211}$, $[{{C}}_{euLL}^{V}]^{2212}$ and
$[{{C}}_{euLL}^{V}]^{2222}$, strong bounds are obtained from high-$p_{T}$
dimuon searches at the LHC. In the UV4f scenario and with the approximation of
negligible RG effects, these bounds can be taken to be bounds on the LEFT WCs.
We use CMS data for the dimuon mode CMS:2021ctt and the package ‘HighPT’
Allwicher:2022mcg ; Allwicher:2022gkm which provides bounds on SMEFT WCs. In
order to convert these into bounds on isolated LEFT WCs, we turn on those
linear combinations of SMEFT WCs which make that particular LEFT WC nonzero,
and leave the other dimuon modes unaffected. Bounds on WCs involving the top
quark (e.g. $[{{C}}_{euLL}^{V}]^{2213}$, $[{{C}}_{euLL}^{V}]^{2223}$ and
$[{{C}}_{euLL}^{V}]^{2233}$ ) are obtained from data on top production and
decays Afik:2021jjh . These direct bounds are shown in Fig. 2.
Note that, in order to obtain the direct bounds in Fig. 2, we have only
bounded the individual contribution of the relevant $LLLL$ operator with
$\alpha=\beta=2$ and ignored possible contributions from other operators.
Under some very reasonable assumptions, however, including these contributions
would not significantly alter the bounds we have obtained. First of all, as
far as the dineutrino decay modes are concerned, the experiments cannot
distinguish between different neutrino flavors. To extract bounds on the
$\bar{\nu}_{\mu}\nu_{\mu}$ mode, we assume that there are no large
cancellations between the interference contributions of the different neutrino
flavor modes. Also, for low energy observables a linear combination of $LLLL$
WCs and WCs of other vector operators in Table 1 enter the interference term
in EFT corrections. In the cases where measurements are sensitive to the
interference term, there can in principle be flat directions where the bounds
obtained here get weakened, but this would again require a fine-tuned
cancellation between the interference terms of the $LLLL$ and other vector
operators; we assume such cancellations are absent. Finally, there are
operators in Table 1, such as the scalar and tensor operators, that give
contributions proportional to the square of their WCs but the inclusion of
such positive definite terms would only strengthen our bounds. Thus, under
these assumptions, the direct bounds discussed here hold also in the presence
of other operator contributions.
Figure 3: Direct bounds from low-energy (cyan) and high-$p_{T}$ (orange)
processes, along with the indirect (green) bounds on the complex WCs
$[{{C}}_{euLL}^{V}]^{2213}$ and $[{{C}}_{euLL}^{V}]^{2223}$ and on the real
WCs $[{{C}}_{\nu dLL}^{V}]^{2211}$, $[{{C}}_{\nu dLL}^{V}]^{2222}$ and
$[{{C}}_{euLL}^{V}]^{2222}$. The input parameters used are the four complex
WCs $[{{C}}_{\nu dLL}^{V}]^{2212}$, $[{{C}}_{\nu dLL}^{V}]^{2213}$,
$[{{C}}_{\nu dLL}^{V}]^{2223}$ and $[{{C}}_{euLL}^{V}]^{2212}$ and one real WC
$[{{C}}_{euLL}^{V}]^{2211}$. Note that the bottom panel uses the symmetric log
scale. See Appendix D for numerical values of the bounds.
Now we turn to the indirect bounds obtained by using the SMEFT predictions.
Counting the real and imaginary parts of the WCs separately, eq. (43) involves
a total of 18 parameters, connected by 9 linear relations. Our goal is to find
indirect bounds on WCs that are weakly bound or have no direct bound, with the
help of these relations. To this end, we first choose the 9 parameters which
have the most stringent bounds:
$\displaystyle\textrm{Re}\left([{{C}}_{\nu
dLL}^{V}]^{2212}\right),~{}\textrm{Im}\left([{{C}}_{\nu
dLL}^{V}]^{2212}\right),~{}\textrm{Re}\left([{{C}}_{\nu
dLL}^{V}]^{2213}\right),~{}\textrm{Im}\left([{{C}}_{\nu
dLL}^{V}]^{2213}\right),$ $\displaystyle\textrm{Re}\left([{{C}}_{\nu
dLL}^{V}]^{2223}\right),\textrm{Im}\left([{{C}}_{\nu
dLL}^{V}]^{2223}\right),~{}\textrm{Re}\left([{{C}}_{euLL}^{V}]^{2212}\right),~{}\textrm{Im}\left([{{C}}_{euLL}^{V}]^{2212}\right)~{},$
(46)
and the real WC $[{{C}}_{euLL}^{V}]^{2211}$. The remaining 9 parameters can
then be written in terms of these using eq. (43), and indirect bounds on them
may be obtained. In Fig. 3 we show the resultant indirect bounds on these
parameters. For the complex WCs $[{{C}}_{euLL}^{V}]^{2213}$ and
$[{{C}}_{euLL}^{V}]^{2223}$, the region of intersection between the indirect
and the direct bounds can put a tighter constraint on the preferred values.
These WCs correspond to single top production along with two leptons or top
decays via $t\to c\ell\ell$ and $t\to u\ell\ell$ channels. It may be noticed
that the constraints on the imaginary part of $[{{C}}_{euLL}^{V}]^{2223}$ are
strong, making $[{{C}}_{euLL}^{V}]^{2223}$ appear almost as a real WC. This
feature may be understood as follows. Eq (43) implies
$\displaystyle[{{C}}_{\nu
dLL}^{V}]^{2233}=|V_{tb}|^{2}[{{C}}_{euLL}^{V}]^{2233}+V_{cb}^{*}\,V_{tb}\,[{{C}}_{euLL}^{V}]^{2223}+\mathcal{O}(\lambda^{3})~{}.$
(47)
Here $\lambda=\sin(\theta_{c})$, where $\theta_{c}\sim 0.227$ is the Cabibbo
angle. Since $[{{C}}_{euLL}^{V}]^{2233}$ and $[{{C}}_{\nu dLL}^{V}]^{2233}$
are real and $V_{cb}^{*}V_{tb}$ is real up to $\mathcal{O}(\lambda^{3})$, the
only imaginary quantity appearing in this equation is
$\textrm{Im}([{{C}}_{euLL}^{V}]^{2223})$; hence it is strongly constrained.
As far as the real WCs are concerned, for $[{{C}}_{\nu dLL}^{V}]^{2211}$ we
get a better constraint than the available direct bound which may be tested in
experiments studying matter effects on neutrino oscillations. At the same
time, $[{{C}}_{\nu dLL}^{V}]^{2222}$, which has no direct bound, now gets
bounded. For $[{{C}}_{euLL}^{V}]^{2222}$, the indirect bound is slightly worse
than the direct bound. For the other two, viz. $[{{C}}_{euLL}^{V}]^{2233}$
and $[{{C}}_{\nu dLL}^{V}]^{2233}$, the indirect bounds are much worse than
the direct bounds.
Similar relations have been explored in the literature in order to put indirect
bounds on various EFT coefficients, albeit for a smaller subset of WCs, with
some UV flavor assumptions, or by neglecting CKM elements. In Bause:2020auq ;
Bause:2021cna ; Bause:2021ihn , similar bounds have been calculated assuming
the WCs to be real and neglecting terms in eqs. (43-45) having CKM elements
that are higher order in $\lambda$. The indirect bounds obtained on the real
WCs in Bause:2020auq become weaker when all the CKM matrix elements are
inserted.
Note that our choice of the 9 input parameters need not be the optimal one
for obtaining the strongest indirect bounds on a given parameter; a different
set of 9 input parameters could do better. Indeed the best bounds may be obtained by
using all the available direct bounds in a combined fit. Since the primary aim
of this paper is to illustrate the utility of the linear relations in
obtaining indirect bounds, we leave the detailed analysis for future work.
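To make the logic of the indirect bounds concrete, the following sketch propagates hypothetical direct bounds through eq. (43). It is a deliberately simplified illustration: it samples only the $(\nu d)$ entries within assumed half-widths (placeholder numbers, not the values of Appendix D, and with an approximate Wolfenstein-like CKM matrix) and reads off the ranges induced for the $(eu)$ entries, whereas the analysis in the text inverts the full set of nine relations using the inputs of eq. (46).

```python
import numpy as np

rng = np.random.default_rng(3)

# Wolfenstein-like CKM matrix (illustrative values; a real analysis should use fitted inputs).
lam, A, rho, eta = 0.225, 0.82, 0.14, 0.35
V = np.array([
    [1 - lam**2 / 2, lam,            A * lam**3 * (rho - 1j * eta)],
    [-lam,           1 - lam**2 / 2, A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2, 1.0],
])

def sample_nu_dLL(bounds):
    """Draw a hermitian 3x3 matrix [C_nudLL]^{22ij} with entries inside the given half-widths."""
    c = np.zeros((3, 3), dtype=complex)
    for (i, j), b in bounds.items():
        val = rng.uniform(-b, b) + (1j * rng.uniform(-b, b) if i != j else 0.0)
        c[i, j], c[j, i] = val, np.conj(val)
    return c

# Hypothetical direct half-widths on the (nu d) WCs (placeholders only).
bounds = {(0, 0): 0.1, (1, 1): 1.0, (2, 2): 1.0, (0, 1): 1e-3, (0, 2): 0.05, (1, 2): 0.01}

# Propagate through eq. (43): C_euLL = V C_nudLL V^dagger, and record the induced ranges.
samples = np.array([V @ sample_nu_dLL(bounds) @ V.conj().T for _ in range(20000)])
for i, j in [(0, 0), (1, 1), (2, 2), (1, 2)]:
    vals = samples[:, i, j]
    print(f"[C_euLL]^{{22{i+1}{j+1}}}: |Re| < {np.abs(vals.real).max():.3f}, "
          f"|Im| < {np.abs(vals.imag).max():.3f}")
```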
Figure 4: The top panels and the bottom left panel show direct bounds from
meson decays (cyan) for the complex WCs $[{{C}}_{edLL}^{V}]^{2212}$,
$[{{C}}_{edLL}^{V}]^{2213}$, $[{{C}}_{edLL}^{V}]^{2223}$. The orange
background in these three panels indicates that the parameter space of these
complex WCs displayed in this figure is allowed by the high-$p_{T}$ dimuon
searches, and only constrained by meson decays. The bottom right panel shows
the constraints from high-$p_{T}$ dimuon searches (orange) on the real WCs
$[{{C}}_{edLL}^{V}]^{2211}$, $[{{C}}_{edLL}^{V}]^{2222}$ and
$[{{C}}_{edLL}^{V}]^{2233}$. Note that the bottom-right panel uses the
symmetric log scale. See Appendix D for numerical values of the bounds.
### 3.2 Bounds on neutral-current WCs involving $(ed)$ and $(\nu u)$
In this section, we perform a similar analysis as in Sec. 3.1 for neutral-
current WCs involving the muon family, using the relation in eq. (44). The WCs
involved correspond to the operators containing either charged leptons and
down-type quarks $(edLL)$, or neutrinos and up-type quarks $(\nu uLL)$.
The bounds on the $(edLL)$ WCs are typically stronger since they involve
charged muons. The WCs $[{{C}}_{edLL}^{V}]^{2212}$, $[{{C}}_{edLL}^{V}]^{2213}$ and
$[{{C}}_{edLL}^{V}]^{2223}$ get direct bounds from rare decays of $K$ and $B$
mesons. A bound on the absolute value of $[{{C}}_{edLL}^{V}]^{2212}$ is provided
in Bause:2020auq ; we convert this to bounds on the real and the imaginary
parts of this WC by taking into account all possible values for its phase. For
$[{{C}}_{edLL}^{V}]^{2213}$, we obtain the bound from the branching ratio
measurement of $B^{0}\to\mu^{+}\,\mu^{-}$ ParticleDataGroup:2022pth . For the
real and the imaginary parts of $[{{C}}_{edLL}^{V}]^{2223}$, we use a combined
fit to the observables $\mathcal{B}(B^{(+,0)}\to
K^{(+,0)}\,\mu^{+}\,\mu^{-})$, $\mathcal{B}(B^{(+,0)}\to
K^{*\,(+,0)}\,\mu^{+}\,\mu^{-})$, $R_{K^{(*)}}$,
$\mathcal{B}(B_{s}\to\mu^{+}\,\mu^{-})$, as well as the angular observables
$P_{5}^{\prime}$ and $F_{L}$ in $B^{0}\to K^{*0}\,\mu^{+}\,\mu^{-}$. The
high-$p_{T}$ dimuon searches give bounds on the three real WCs
$[{{C}}_{edLL}^{V}]^{2211}$, $[{{C}}_{edLL}^{V}]^{2222}$ and
$[{{C}}_{edLL}^{V}]^{2233}$. We show these bounds in Fig. 4. Among the $(\nu
uLL)$ WCs, only a weak bound is available on $[{{C}}_{\nu uLL}^{V}]^{2211}$
from constraints on non-standard interactions of neutrinos in atmospheric
neutrino experiments Farzan:2017xzy . Once again, while these bounds are on
the individual contributions of the respective operators, inclusion of other
operators would not significantly alter them given the assumptions stated in
Sec. 3.1.
Figure 5: Indirect bounds (green) on the complex WCs $[{{C}}_{\nu
uLL}^{V}]^{2212}$, $[{{C}}_{\nu uLL}^{V}]^{2213}$, $[{{C}}_{\nu
uLL}^{V}]^{2223}$, and the real WCs $[{{C}}_{\nu uLL}^{V}]^{2211}$,
$[{{C}}_{\nu uLL}^{V}]^{2222}$, $[{{C}}_{\nu uLL}^{V}]^{2233}$. The direct
bound is available only for the real WC $[{{C}}_{\nu uLL}^{V}]^{2211}$ (shown
in cyan). Note that the bottom-right panel uses the symmetric log scale. See
Appendix D for numerical values of the bounds.
Counting the real and imaginary parts of the WCs separately, eq. (44) involves
a total of 18 parameters, connected by 9 linear relations. In order to get
stronger bounds on the $(\nu uLL)$ WCs, we take the 9 parameters corresponding
to the $(edLL)$ WCs as inputs and derive the indirect bounds using this
relation. These bounds have been shown in Fig. 5. It can be seen that the
complex WCs $[{{C}}_{\nu uLL}^{V}]^{2212}$, $[{{C}}_{\nu uLL}^{V}]^{2213}$,
$[{{C}}_{\nu uLL}^{V}]^{2223}$ and the real WCs $[{{C}}_{\nu uLL}^{V}]^{2222}$
and $[{{C}}_{\nu uLL}^{V}]^{2233}$, which do not have any direct bounds, get
indirect constraints. The first among these WCs would contribute to the
invisible decay widths of $D$ mesons while the next two would contribute to
the semileptonic top decays, $t\to u\nu\nu$ and $t\to c\nu\nu$. The indirect
bound also improves the constraints on $[{{C}}_{\nu uLL}^{V}]^{2211}$
significantly. This indirect bound would be important for constraining models
with neutrino non-standard interactions (NSI) Farzan:2017xzy and can be
tested in precision neutrino oscillation experiments.
Note that the indirect constraints suggest that $[{{C}}_{\nu uLL}^{V}]^{2212}$
is almost real. This can be understood by looking at the leading-order
contributions to $[{{C}}_{\nu uLL}^{V}]^{2212}$ in eq. (44):
$\displaystyle[{{C}}_{\nu uLL}^{V}]^{2212}$
$\displaystyle=V_{ud}\,V_{cs}^{*}[{{C}}_{edLL}^{V}]^{2212}+V_{ud}\,V_{cd}^{*}[{{C}}_{edLL}^{V}]^{2211}+V_{us}\,V_{cs}^{*}\,[{{C}}_{edLL}^{V}]^{2222}$
$\displaystyle~{}+V_{us}\,V_{cd}^{*}\,[{{C}}_{edLL}^{V\,*}]^{2212}+\mathcal{O}(\lambda^{3})~{}.$
(48)
In the above equation, all the CKM coefficients are real up to ${\cal
O}(\lambda^{3})$. The WCs $[{{C}}_{edLL}^{V}]^{2211}$ and
$[{{C}}_{edLL}^{V}]^{2222}$ are real, while $\textrm{Im}([{{C}}_{edLL}^{V}]^{2212})$
is constrained at the level of $\mathcal{O}(0.02)$. Therefore,
the imaginary part of the left-hand side, i.e. $\textrm{Im}\left([{{C}}_{\nu
uLL}^{V}]^{2212}\right)$, is strongly constrained.
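As a rough cross-check of this argument (and not the actual numerical analysis performed in this work), the leading-order terms of eq. (48) can be propagated with Wolfenstein-parametrized CKM inputs; the $0.02$ input below is simply the order of the bound quoted above, and the script is only an illustrative sketch.

```python
# Illustrative propagation of the leading-order relation in eq. (48).
# CKM entries use the Wolfenstein parametrization (real up to O(lambda^3));
# the 0.02 input is the order of the bound on Im([C_edLL]^2212) quoted above.
import numpy as np

lam = 0.225
V_ud = V_cs = 1 - lam**2 / 2
V_us, V_cd = lam, -lam

im_edLL_2212 = 0.02   # |Im([C_edLL]^2212)| (order of magnitude from the text)

# [C_edLL]^2211 and [C_edLL]^2222 are real, so only the 2212 terms feed
# the imaginary part of the left-hand side of eq. (48):
im_nuuLL_2212 = (abs(V_ud * V_cs) + abs(V_us * V_cd)) * im_edLL_2212
print(f"|Im([C_nu-uLL]^2212)| <~ {im_nuuLL_2212:.3f}")   # ~ 0.02
```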
### 3.3 Bounds on charged-current WCs
Eq. (45) allows us to express charged-current WCs as combinations of neutral-
current WCs. Restricting to the muon family of leptons, i.e. $\alpha=\beta=2$,
there are 9 charged-current WCs on the left-hand side of eq. (45); all of them
can be complex in general. All these charged-current WCs would get indirectly
constrained due to the bounds on the neutral-current WCs. In this section, we
first show the direct bounds for the 9 charged-current WCs from mesonic decays
and from high-$p_{T}$ monolepton searches. Later, we compare these bounds with
the ones derived indirectly using eq. (45).
Figure 6: Direct bounds on the charged-current WCs from meson decays (cyan)
and from high-$p_{T}$ mono-muon searches (orange). Note that there are no
direct constraints on the WCs associated with charged current decays of top
quark. See Appendix D for numerical values of the bounds.
For the WCs $[{{C}}_{LL}^{V}]^{2211}$, $[{{C}}_{LL}^{V}]^{2212}$,
$[{{C}}_{LL}^{V}]^{2213}$, $[{{C}}_{LL}^{V}]^{2221}$,
$[{{C}}_{LL}^{V}]^{2222}$ and $[{{C}}_{LL}^{V}]^{2223}$, we obtain direct
bounds using the branching ratios ParticleDataGroup:2022pth of the decay
modes $\pi^{+}\to\mu^{+}\nu$, $K^{+}\to\pi\mu^{+}\nu$, $B^{+}\to\pi^{0}l\nu$,
$D^{+}\to\mu^{+}\nu$, $D_{s}\to\mu\nu$ and $B^{+}\to Dl\nu$, respectively.
However, stronger bounds can be obtained for these WCs from high-$p_{T}$
monolepton searches. (In Allwicher:2022gkm , bounds on SMEFT coefficients are
provided using high-$p_{T}$ single-lepton and dilepton searches. However, no
combination of SMEFT coefficients maps onto a single charged-current LEFT
coefficient without generating other LEFT coefficients that contribute to the
same single charged-lepton final state; we therefore calculate these bounds
independently.) To obtain these bounds, we generate bin-wise events
in MadGraph Alwall:2014hca . Note that the charged-current NP would not change
the shape of the $q^{2}$ dependence from the SM prediction, since the relevant
charged-current operators in SM and NP are identical. We use the results from
the ATLAS analysis in Ref. ATLAS:2019lsy , and incorporate the effect of their
cuts by using a re-scaling factor on our generated events such that they
reproduce the ATLAS data for the SM. We then perform a $\chi^{2}$ fit for the
isolated charged-current WC to obtain a bound on the NP WC. These direct bounds
obtained from the meson decays (cyan) and from the high-$p_{T}$ mono-muon
searches (orange) are shown in Fig. 6.
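For readers who wish to reproduce the logic of this procedure, the following minimal sketch illustrates the binned $\chi^{2}$ construction described above. All numbers (bin yields, the "observed" counts, the confidence-level threshold) are hypothetical placeholders; the actual analysis relies on MadGraph events and the ATLAS spectrum of Ref. ATLAS:2019lsy rather than these toy inputs.

```python
# Toy version of the binned chi^2 procedure described in the text.
import numpy as np

# hypothetical bin-wise yields; the rate is taken as |SM + C * NP|^2, so it has
# a linear (interference) and a quadratic (pure NP) piece in the WC C
sm_gen   = np.array([1000., 400., 120., 30.])   # generated SM events per bin
int_gen  = np.array([  80.,  50.,  25., 10.])   # linear-in-C piece
np2_gen  = np.array([  20.,  15.,  10.,  6.])   # quadratic-in-C piece
data     = np.array([ 980., 410., 118., 32.])   # "observed" counts (placeholder)
sm_atlas = np.array([ 970., 405., 119., 31.])   # SM prediction reported by ATLAS

# re-scaling factor so that the generated SM reproduces the ATLAS SM
# prediction; this absorbs the effect of the experimental cuts
k = sm_atlas / sm_gen

def chi2(C):
    pred = k * (sm_gen + C * int_gen + C**2 * np2_gen)
    return np.sum((data - pred) ** 2 / data)   # Gaussian approximation

# scan the single charged-current WC and read off an allowed interval
grid = np.linspace(-0.5, 0.5, 2001)
vals = np.array([chi2(C) for C in grid])
allowed = grid[vals - vals.min() < 3.84]       # Delta chi^2 < 3.84 (95% CL, 1 dof)
print(f"allowed C in [{allowed.min():.3f}, {allowed.max():.3f}]")
```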
Figure 7: Direct bounds (orange) on the charged-current WCs from high-$p_{T}$
mono-muon searches along with the indirect bounds (green) obtained using eq.
(45). Note that the quantities in the bottom panels have no direct bounds. See
Appendix D for numerical values of the bounds.
In order to obtain indirect bounds, we use the best available bounds (direct
or indirect) for the neutral-current WCs appearing on the right-hand side of
eq. (45). These indirect bounds (green) along with the best available direct
bounds (orange) are shown in Fig. 7. The figure shows that this method
provides constraints on $[{{C}}_{LL}^{V}]^{2231}$, $[{{C}}_{LL}^{V}]^{2232}$
and $[{{C}}_{LL}^{V}]^{2233}$, where no direct bounds were available. For
$[{{C}}_{LL}^{V}]^{2221}$, the indirect constraints are significantly stronger
than the direct bounds. These WCs would contribute to the branching ratios of
semileptonic top-quark decays and $D$-meson decays, viz. $D\to\pi\mu\nu$, etc.
In addition, the imaginary parts of $[{{C}}_{LL}^{V}]^{2211}$,
$[{{C}}_{LL}^{V}]^{2212}$, $[{{C}}_{LL}^{V}]^{2222}$ and
$[{{C}}_{LL}^{V}]^{2223}$ are constrained more strongly. These WCs would
contribute to branching ratios of meson decays, viz. $K\to\pi\mu\nu$, $B\to
D\mu\nu$, etc. The reason for strong indirect constraints on the imaginary
parts of these four WCs may be understood using eq. (45). For example,
$[{{C}}_{LL}^{V}]^{2211}$ may be written using eq. (45) as
$\displaystyle[{{C}}_{LL}^{V}]^{2211}$
$\displaystyle=V_{ud}\,([{{C}}_{edLL}^{V}]^{2211}-[{{C}}_{\nu
dLL}^{V}]^{2211})+V_{us}\,([{{C}}_{edLL}^{V}]^{2212}-[{{C}}_{\nu
dLL}^{V}]^{2212})+\mathcal{O}(\lambda^{3})~{}.$ (49)
The CKM coefficients appearing on the right-hand side of the above equation
are real up to $\mathcal{O}(\lambda^{3})$. The WCs $[{{C}}_{edLL}^{V}]^{2211}$
and $[{{C}}_{\nu dLL}^{V}]^{2211}$ are real. Furthermore, the bounds on the
imaginary parts of $[{{C}}_{edLL}^{V}]^{2212}$ and $[{{C}}_{\nu
dLL}^{V}]^{2212}$ are of $\mathcal{O}(0.02)$ and these WCs appear with a CKM
coefficient of $\mathcal{O}(\lambda)$. Thus, the imaginary part of the WC on
the left-hand side, i.e. $[{{C}}_{LL}^{V}]^{2211}$, is strongly constrained.
Using similar arguments, we can show that the WCs $[{{C}}_{LL}^{V}]^{2212}$,
$[{{C}}_{LL}^{V}]^{2222}$ and $[{{C}}_{LL}^{V}]^{2223}$ are expected to be
dominantly real.
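As a rough numerical illustration of this argument (taking $|V_{us}|\simeq 0.22$ and treating the quoted $\mathcal{O}(0.02)$ bounds as the actual limits, purely for the purpose of the estimate), eq. (49) gives $|\textrm{Im}([{{C}}_{LL}^{V}]^{2211})|\lesssim|V_{us}|\,\big(|\textrm{Im}([{{C}}_{edLL}^{V}]^{2212})|+|\textrm{Im}([{{C}}_{\nu dLL}^{V}]^{2212})|\big)\sim 0.22\times 0.04\approx 0.01$.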
### 3.4 Predictions for lepton flavor violating observables
So far we have considered the relations only among WCs involving one lepton
family i.e. muon. In this subsection, we expand our discussion to SMEFT
predictions that include all lepton families, while remaining in the UV4f
scenario. These relations will relate diverse reaction channels like rare
decays of $B$, $D$ and $K$ mesons as well as lepton flavor violating (LFV)
processes such as $\tau\rightarrow{\ell}\,q_{i}\,q_{j}$ and
${\ell}\,N\to{\ell}^{\prime}\,N$. Focusing once again on UV4f models, we shall
illustrate the methodology with one example and present a set of relevant
processes in Table 5.
Eq. $\downarrow$ | LHS WC | RHS WCs | Transitions | Processes
---|---|---|---|---
(43) | $[{{C}}_{euLL}^{V}]^{{\ell}311}$ | $[{{C}}_{\nu dLL}^{V}]^{{\ell}311}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}312}$ | $\tau\to u\,u\,{\ell}$ $s\to d\,\nu\,\nu$ | LFV $\tau$ decay, $K\to\pi\nu\nu$
$[{{C}}_{euLL}^{V}]^{{\ell}{\ell}^{\prime}11}$ | $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}11}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}12}$ | ${\ell}\,u\to{\ell}^{\prime}\,u$ $s\to d\,\nu\,\nu$ | LFV ${\ell}\,N\to{\ell}^{\prime}\,N$ $K\to\pi\nu\nu$
$[{{C}}_{euLL}^{V}]^{{\ell}312}$ | $[{{C}}_{\nu dLL}^{V}]^{{\ell}312}$ | $c\,u\to\tau\,{\ell}$ $s\to d\,\nu\,\nu$ | LFV $D$ decays, $K\to\pi\nu\nu$ Bause:2020auq
$[{{C}}_{euLL}^{V}]^{{\ell}{\ell}^{\prime}i3}$ | $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}13}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}23}$ | $t\to u_{i}\,{\ell}\,{\ell}^{\prime}$ $b\to d\,\nu\,\nu$ $b\to s\,\nu\,\nu$ | LFV top decay, $B$ decays to dineutrinos Bause:2020auq
(44) | $[{{C}}_{edLL}^{V}]^{{\ell}3ij}$ | $[{{C}}_{\nu uLL}^{V}]^{{\ell}312}$ | $\tau\to d\,d\,{\ell}$ $c\to u\,\nu\,\nu$ | LFV $\tau$ decay, $D$ decay to dineutrinos
$[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}ij}$ | $[{{C}}_{\nu uLL}^{V}]^{{\ell}{\ell}^{\prime}12}$ | ${\ell}\,d\to{\ell}^{\prime}\,d$ $c\to u\,\nu\,\nu$ | LFV ${\ell}\,N\to{\ell}^{\prime}\,N$ $D$ decay to dineutrinos
$[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}i3}$ | $[{{C}}_{\nu uLL}^{V}]^{{\ell}{\ell}^{\prime}13}$ $[{{C}}_{\nu uLL}^{V}]^{{\ell}{\ell}^{\prime}23}$ | $b\to d_{i}\,{\ell}\,{\ell}^{\prime}$ $t\to u\,\nu\,\nu$ $t\to c\,\nu\,\nu$ | LFV B decay, top decays to dineutrinos
(45) | $[{{C}}_{LL}^{V}]^{{\ell}311}$ | $[{{C}}_{edLL}^{V}]^{{\ell}311}$ $[{{C}}_{edLL}^{V}]^{{\ell}312}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}311}$ | $\tau\to u\,d\,\nu$ $\tau\to d\,d\,{\ell}$ $\tau\to d\,s\,{\ell}$ $s\to d\nu\,\nu$ | CC decay of $\tau$ LFV $\tau$ decay, $K\to\pi\,\nu\,\nu$
$[{{C}}_{LL}^{V}]^{{\ell}{\ell}^{\prime}11}$ | $[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}11}$ $[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}12}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}11}$ | ${\ell}\to u\,d\,\nu$ ${\ell}\,d\to{\ell}^{\prime}\,d$ $s\to d\,{\ell}\,{\ell}^{\prime}$ $s\to d\nu\,\nu$ | LFV ${\ell}\,N\to{\ell}^{\prime}\,N$ $K\to\pi\,{\ell}\,{\ell}^{\prime}$ $K\to\pi\,\nu\,\nu$
$[{{C}}_{LL}^{V}]^{{\ell}{\ell}^{\prime}i3}$ | $[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}13}$ $[{{C}}_{edLL}^{V}]^{{\ell}{\ell}^{\prime}23}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}13}$ $[{{C}}_{\nu dLL}^{V}]^{{\ell}{\ell}^{\prime}23}$ | $b\to u_{i}\,{\ell}\,\nu$ $b\to d_{i}\,{\ell}\,{\ell}^{\prime}$ $b\to d_{i}\,\nu\,\nu$ | CC decay of $B$ meson, LFV $B$ decays, $B$ decays to dineutrinos Bause:2021ihn
Table 5: Correlations among different WCs involving all lepton families,
derived from eqs. (43)–(45). The second column shows the WC appearing on the
left-hand side of these equations, whereas the third column contains the WCs
appearing on the right-hand side of those equations with large CKM
coefficients, with values $\mathcal{O}(\lambda)$ or more.
From eq. (43), we get the following relation among the LEFT WCs:
$\displaystyle[{{C}}_{euLL}^{V}]^{{\ell}311}$
$\displaystyle=|V_{ud}|^{2}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}311}+(V_{ud}^{*}\,V_{cd}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}312}+\mbox{c.c.})+(V_{ud}^{*}\,V_{td}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}313}+\mbox{c.c.})$
$\displaystyle~{}+|V_{cd}|^{2}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}322}+(V_{cd}^{*}\,V_{td}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}323}+\mbox{c.c.})+|V_{td}|^{2}\,[{{C}}_{\nu
dLL}^{V}]^{{\ell}333}~{},$ (50)
where $\ell=1~{}{\rm or}~{}2$. Among the CKM coefficients in this equation,
the leading ones are $|V_{ud}|^{2}\sim\mathcal{O}(1)$ and
$|V_{ud}^{*}V_{cd}|\sim\mathcal{O}(\lambda)$. All the other coefficients are
$\mathcal{O}(\lambda^{2})$ or smaller. Therefore, at the leading order, this
equation connects the three WCs $[{{C}}_{euLL}^{V}]^{{\ell}311}$, $[{{C}}_{\nu
dLL}^{V}]^{{\ell}311}$ and $[{{C}}_{\nu dLL}^{V}]^{{\ell}312}$. Hence the new
physics WCs contributing to LFV tau decays and $K\to\pi\,\nu\,\nu$ are related
to each other.
Further relations involving other lepton and quark families are given in Table
5. Some similar relations have been presented in Fuentes-Martin:2020lea . Note
that such discussions in earlier literature often assume some flavor structure
for the quark sector. We emphasize again that the implications presented in
this section are independent of any NP flavor-structure assumption for the
quarks.
So far, we have discussed observables that are insensitive to the flavor of
neutrinos. Neutrino experiments that are sensitive to neutrino flavor can
probe the neutrino non-standard interactions (NSI) generated by the operators
in Table 1 containing neutrinos. The predictions in eqs. (43)–(45) in UV4f
models (or the more general predictions in Table 3) would then imply
constraints on NSI from charged LFV. We discuss this in more detail in Sec.
4.3.
## 4 SMEFT-predicted evidence for new physics
In this section, we discuss how to use the SMEFT predictions derived in Sec. 2
in the event that measurements provide evidence for certain new physics WCs to
be nonzero. We will show that, given the SMEFT predictions derived in this
work, it is in general not consistent to assume a single non-zero WC to
explain an excess in a certain channel. (For some operators, such as the
$RRRR$ vector operators, there are no constraints implied by SMEFT; for this
category of operators we can therefore have a single non-zero WC.) In fact,
we will show that for certain operators a non-zero WC must be accompanied by
multiple other WCs that are non-vanishing. This would imply that the observed
excess must be accompanied by correlated excesses in many other channels. This
is because the SMEFT predictions in Table 5 are linear equations involving
multiple WCs, implying that it is not possible for only one of these
coefficients to be nonzero.
For example, consider the situation where an observed deviation from SM in a
particular channel indicates that one of the $LLLL$ LEFT WCs is non-vanishing.
In SMEFT, this LEFT coefficient might arise either from a four-fermion
operator or an operator inducing an off-diagonal $W$ or $Z$ coupling to
fermions.
The former situation is realized in the UV4f models, where we can use SMEFT
predictions in eq. (43). These are 6 linear equations involving 12 (possibly)
complex WCs when $\alpha=\beta$. If one of these WCs (say $C_{1}$) is found
to be nonzero, we can write these equations in a form where $C_{7}$ to
$C_{12}$ are expressed as linear combinations of $C_{1}$ to $C_{6}$. Then, as
long as the coefficient of $C_{1}$ is nonzero in all these equations (as is
generically observed to be the case), all the 6 coefficients $C_{7}$ to
$C_{12}$ also have to be nonzero. For one of them to be vanishing, we will
need one of the other coefficients, $C_{2}$ to $C_{6}$, to be nonzero in order
to cancel the $C_{1}$ contribution. Thus, the nonvanishing nature of $C_{1}$
necessarily implies that overall at least 7 WCs are nonvanishing in principle.
Of course, depending on the CKM coefficients, the magnitudes of these
coefficients may be small or large.
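The counting argument can be illustrated with a toy numerical check, shown below. The $6\times 6$ matrix of coefficients is generated randomly (it merely stands in for the actual CKM-weighted coefficients of eq. (43), an assumption made purely for illustration); the point is only that, for generic coefficients, a single nonzero input forces all six outputs to be nonzero unless a second input is tuned to cancel it.

```python
# Toy check of the counting argument: write the 6 relations as C_out = M @ C_in
# with C_in = (C_1,...,C_6) and C_out = (C_7,...,C_12).
import numpy as np

rng = np.random.default_rng(0)
M = np.eye(6) + 0.1 * rng.normal(size=(6, 6))  # generic matrix, no exact zeros

C_in = np.zeros(6, dtype=complex)
C_in[0] = 0.05                   # only C_1 is switched on
C_out = M @ C_in
print(np.count_nonzero(C_out))   # -> 6: all of C_7..C_12 are forced to be nonzero

# To make one output vanish, a second input must be tuned against C_1:
target = 0                       # demand C_7 = 0
C_in[1] = -M[target, 0] * C_in[0] / M[target, 1]
print(np.isclose((M @ C_in)[target], 0))   # True, at the cost of a second nonzero input
```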
When $\alpha\neq\beta$, eq. (43) gives 9 linear equations; therefore one
nonzero WC among these implies that at least 10 of the WCs of the type ($\nu d$)
or ($eu$) are nonvanishing. As eq. (44) is completely decoupled from eq. (43), it
is of course still consistent for all the WCs appearing in it to vanish. The
charged current WCs in eq. (45), however, cannot all vanish and one can use
similar arguments to conclude that at least 3 of them must be nonzero whether
or not $\alpha$ equals $\beta$.
Similarly, from eq. (44), for $\alpha=\beta$ $(\alpha\neq\beta)$ we have 6 (9)
linear relations. These imply that, if one of the WCs of the kind ($ed$) or
($\nu u$) is found to be nonzero, then a total of at least 7 (10) WCs of these
kinds should be nonzero in principle. Again, by eq. (45) a non-zero neutral-
current WC will lead to at least 3 non-vanishing charged-current WCs. The CKM
coefficients will guide us regarding which of these WCs are likely to have
larger magnitudes. Thus, these relations direct us toward specific decay
channels where deviation from SM is expected to be present.
In the latter situation, i.e. when the LEFT operators arise from modifications
of $W/Z$ couplings, the low-energy pattern of deviations is very different.
For example, if one of the $Z$ couplings to down quarks gets BSM corrections,
the penultimate row of Table 3 would imply at least three $W$-coupling
modifications. Alternatively, if all the $W$ couplings are to be at their SM
value, this would imply modifications of at least 10 of the 18 $Z$ couplings
to up and down-type quarks. Once the $W$ and $Z$ are integrated out, each
$W$-coupling modification will induce 3 non-vanishing semileptonic LEFT WCs,
and each $Z$-coupling modification will induce 6 non-vanishing semileptonic
LEFT WCs. Studying the pattern of BSM deviations can, therefore, help pinpoint
the underlying UV physics. We shall not consider this scenario further in this
section.
### 4.1 Implications of the measured excess in $B\to K\nu\nu$
In the recent measurement of $B\to K\nu\nu$ at Belle II Belle-II:2023esi , the
observed branching ratio shows a $3.5\sigma$ excess over the SM value. If this
excess were to be explained in terms of the LEFT coefficients $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23}$, the required values of these WCs in various
scenarios are shown in Fig. 8. In the first scenario, we assume that new
physics turns on a lepton flavor universal (LFU) combination of WCs, whereas in
the second (i.e. LFUV) and third (i.e. LFV) scenarios, we assume that a single
WC is turned on with $\alpha=\beta$ and $\alpha\neq\beta$, respectively.
Figure 8: Preferred parameter region at $90\%$ C.L. for $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23}$ in order to explain the observed excess in $B\to
K\nu\nu$ branching ratio. The left panel shows lepton flavor universal (LFU)
scenario, where $[{{C}}_{\nu dLL}^{V}]^{\alpha\beta 23}$ is nonzero and equal
for all $\alpha=\beta\in\\{e\,,\mu\,,\tau\\}$. The middle panel shows lepton
flavor nonuniversal (LFUV) scenario where $[{{C}}_{\nu dLL}^{V}]^{\alpha\beta
23}$ is nonzero only for one value of $\alpha=\beta$. The right panel depicts
the LFV scenario with $\alpha\neq\beta$ and only one $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23}$ nonzero.
From this figure, it is clear that the coefficient $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23}$ is non-vanishing at $90\%$ C.L. for all scenarios
considered. As discussed earlier, a nonzero $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23}$ will indicate at least seven (ten) non-vanishing
WCs appearing in eq. (43) for $\alpha=\beta$ $(\alpha\neq\beta)$.
For example, in the LFUV (LFV) scenarios, eq. (43) corresponds to 27 (54)
equations of the form
$\displaystyle[{{C}}_{euLL}^{V}]^{\alpha\beta ij}$
$\displaystyle=V_{i2}\,[{{C}}_{\nu dLL}^{V}]^{\alpha\beta
23}V^{\dagger}_{3j}+...$ (51)
in UV4f models. Since the CKM coefficients $V_{cs}V_{tb}^{*}$ and
$V_{us}V_{tb}^{*}$, which are $\mathcal{O}(1)$ and $\mathcal{O}(\lambda)$
respectively, are significant, it is expected that in the absence of any
cancellation coming from other $[{{C}}_{\nu dLL}^{V}]^{\alpha\beta ij}$
elements, the WCs $[{{C}}_{euLL}^{V}]^{\alpha\beta 13}$ and
$[{{C}}_{euLL}^{V}]^{\alpha\beta 23}$ will have significant nonzero values.
Thus the modes $t\to c\,e^{\alpha}\,e^{\beta}$ and $t\to
u\,e^{\alpha}\,e^{\beta}$ are the ones where new physics can potentially
appear. Currently, the bounds on these coefficients are
$|[{{C}}_{euLL}^{V}]^{\alpha\beta 13}|<0.003$ and
$|[{{C}}_{euLL}^{V}]^{\alpha\beta 23}|<0.02$, respectively. Further exploration
of these modes may lead to the discovery of anomalies in these two
channels. These processes will also test the solution of the $B\to K\nu\nu$
anomaly in terms of $[{{C}}_{\nu dLL}^{V}]^{\alpha\beta 23}$. This
demonstrates that the semileptonic neutral-current top decays will be strong
probes of the origin of the $B\to K\nu\nu$ anomaly in the context of SMEFT.
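A rough numerical illustration of this expectation is sketched below. The CKM inputs use the Wolfenstein parametrization, and the quoted upper bounds $0.003$ and $0.02$ come from the text; the assumed size of $[{{C}}_{\nu dLL}^{V}]^{\alpha\beta 23}$ is a hypothetical placeholder and is not the value extracted from the Belle II excess.

```python
# Illustrative estimate of the top-decay WCs induced via eq. (51).
lam = 0.225
V_us, V_cs, V_tb = lam, 1 - lam**2 / 2, 1.0   # Wolfenstein, leading order

C_nud_23 = 0.01                    # hypothetical size of [C_nu-dLL]^{ab23}
C_eu_13 = V_us * C_nud_23 * V_tb   # feeds t -> u e^a e^b
C_eu_23 = V_cs * C_nud_23 * V_tb   # feeds t -> c e^a e^b
print(f"[C_euLL]^ab13 ~ {C_eu_13:.4f}  (current bound 0.003)")
print(f"[C_euLL]^ab23 ~ {C_eu_23:.4f}  (current bound 0.02)")
```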
Eq. (45), in this LFUV (LFV) scenario, gives 9 (18) equations of the form
$\displaystyle[{{C}}_{LL}^{V}]^{\alpha\beta i3}$
$\displaystyle=V_{i2}\,([{{C}}_{edLL}^{V}]^{\alpha\beta 23}-[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta 23})~{}.$ (52)
Since the CKM coefficients $V_{cs}$ and $V_{us}$, which are $\mathcal{O}(1)$
and $\mathcal{O}(\lambda)$, respectively, are significant, it is expected that
in the absence of any cancellation coming from other $[{{C}}_{\nu
dLL}^{V}]^{\alpha\beta ij}$ or $[{{C}}_{edLL}^{V}]^{\alpha\beta ij}$ elements,
the WCs $[{{C}}_{LL}^{V}]^{\alpha\beta 23}$ and $[{{C}}_{LL}^{V}]^{\alpha\beta
13}$ will have significant nonzero values. Thus, charged-current semileptonic
$B$ meson decays would also be sensitive probes of the origin of $B\to
K\nu\nu$ anomaly.
Similar discussions can be found in Bause:2023mfe ; Chen:2024jlj . In
Bause:2023mfe , relations among the WCs in the $LLRR$ category, as shown in
Table 3, have been used to relate $b_{R}\to s_{R}\tau\tau$ and $b_{R}\to
s_{R}\nu\nu$. These relations, as discussed in Bause:2023mfe , predict excess
branching fractions for the modes $B\to K^{(*)}\tau\tau$, $B_{s}\to\tau\tau$,
etc. In Chen:2024jlj , matching relations have been derived among the SMEFT
and LEFT WCs, assuming up-alignment. These have then been used to obtain the
effects of the observed excess in $B\to K\nu\nu$ on other processes, namely,
$B\to D^{(*)}\ell\nu_{\ell}$, $B\to K^{(*)}\ell^{+}\ell^{-}$,
$B_{q}\to\tau\nu_{\tau}$, $B_{s}\to\tau^{+}\tau^{-}$,
$D_{s}\to\tau^{+}\nu_{\tau}$, etc.
### 4.2 Implications of the $R(D^{(*)})$ anomalies
One possible explanation of multiple anomalies observed in the $b\to
c\tau\bar{\nu}$ channels, such as $R(D)$, $R(D^{*})$ and $R(J/\psi)$, is to
have a nonzero value for the LEFT WC $[{{C}}_{LL}^{V}]^{3323}$. We show the
preferred range of this WC at $90\%$ C.L. in Fig. 9. Note that this preferred
range does not include the point $[{{C}}_{LL}^{V}]^{3323}=0$.
Figure 9: Preferred parameter region at $90\%$ C.L. for the WC
$[{{C}}_{LL}^{V}]^{3323}$.
From eq. (45), we can write $[{{C}}_{LL}^{V}]^{3323}$ in terms of the neutral-
current WCs as
$\displaystyle[{{C}}_{LL}^{V}]^{3323}$
$\displaystyle=V_{cd}\left[[\hat{{C}}_{edLL}^{V}]^{3313}-[{{C}}_{\nu
dLL}^{V}]^{3313}\right]+V_{cs}\left[[\hat{{C}}_{edLL}^{V}]^{3323}-[{{C}}_{\nu
dLL}^{V}]^{3323}\right]$
$\displaystyle~{}+V_{cb}\left[[\hat{{C}}_{edLL}^{V}]^{3333}-[{{C}}_{\nu
dLL}^{V}]^{3333}\right]~{}.$ (53)
Since $[{{C}}_{LL}^{V}]^{3323}\neq 0$, at least one WC appearing on the
right-hand side of eq. (53) has to be nonzero. Relevant modes are of the type
$b\to d\tau\tau$, $b\to s\tau\tau$, $b\to d\nu\nu$ and $b\to s\nu\nu$, which
suggests that the NP can manifest in
observables related to processes such as $B\to\tau\tau$, $B_{s}\to\tau\tau$,
$B\to K^{(*)}\tau\tau$, $B\to K^{(*)}\nu\nu$, etc.
### 4.3 Implications of the violation of SMEFT predictions
In this subsection, we consider a scenario where many anomalies have been
observed and multiple LEFT coefficients must have non-zero values to explain
them. According to our results, these LEFT WCs must obey the SMEFT predictions
of Table 5. We now discuss what an observation of a violation of these
predictions would imply.
First of all, if low-energy measurements indicate a violation of the UV4f
predictions in eqs. (43)–(45), it may only mean that the UV model is not in the
UV4f category, but still maps to SMEFT when heavier degrees of freedom are
integrated out; that is, we are outside the UV4f region of Fig. 1, but not
necessarily outside the SMEFT region. We must then check whether or not the
more general predictions of Table 3 are obeyed. This would require looking for
deviations in $W$ and $Z$ decays and/or high-$p_{T}$ Drell-Yan data. (If we use
only the high-$p_{T}$ data to test our predictions, we can directly use Table 3
and thus test the validity of our predictions without taking into account $Z$
and $W^{\pm}$ decays.)
If the violation of the predictions persists at the level of Table 3 (or the
equations in Appendix C), it would imply that one of the assumptions used in
deriving these predictions is incorrect. Note, first of all, that we have only
included dim-6 operators in our analysis. Inclusion of dimension-8 (dim-8)
operators will result in a violation of these predictions at ${\cal
O}(v^{4}/\Lambda^{4})$. For instance, the dim-8 operator
$\left[{\cal O}_{{\ell}q3}\right]^{\alpha\beta
ij}=(\bar{l}^{\alpha}\gamma_{\mu}l^{\beta})(\bar{q}^{i}\gamma^{\mu}\tau^{I}q^{j})(H^{\dagger}\tau^{I}H)$
(54)
will break the equality in the first row of Table 3, as follows:
$V^{\dagger}_{ik}\,[\hat{{\mathbf{c}}}_{euLL}^{V}]^{\alpha\beta
kl}\,V_{{\ell}j}-U^{\dagger}_{\alpha\rho}\,[\hat{{\mathbf{c}}}_{\nu
dLL}^{V}]^{\rho\sigma ij}\,U_{\sigma\beta}\sim v^{4}/\Lambda^{4}\left[{\cal
C}_{{\ell}q3}\right]~{}.$ (55)
Similarly, other operators at dim-8 or higher order will introduce a breaking
of the other predictions in Table 3. Such effects are, however, higher order
in the SMEFT expansion parameter $v^{2}/\Lambda^{2}$, and are thus expected to
be small.
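To see why such effects are numerically small, one can evaluate the SMEFT expansion parameters for an assumed new-physics scale; the benchmark $\Lambda=2$ TeV below is an illustrative choice of ours, not a value used elsewhere in this work.

```python
# SMEFT expansion parameters for an assumed benchmark scale Lambda = 2 TeV.
v, Lam = 0.246, 2.0                         # electroweak vev and NP scale, in TeV
print(f"v^2/Lambda^2 = {(v/Lam)**2:.1e}")   # ~1.5e-2: typical size of dim-6 effects
print(f"v^4/Lambda^4 = {(v/Lam)**4:.1e}")   # ~2.3e-4: size of the dim-8 breaking
```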
If larger, ${\cal O}(1)$ violations of the predictions are observed, it would
indicate something more radical, namely, that one of the assumptions of SMEFT
itself is violated and we lie outside the SMEFT region of Fig. 1. This would
be the case if (i) the scale of new physics is below the weak scale, (ii)
there is heavy new physics that does not decouple because it gets a large
fraction of its mass from the electroweak vacuum expectation value
Banta:2021dek ; Cohen:2020xca , or (iii) the observed 125 GeV scalar, $h$, is
not a part of the SU(2) doublet that breaks the electroweak symmetry
Falkowski:2019tft ; Cohen:2020xca ; Banta:2021dek ; Cata:2015lta .
As an example, consider the case of neutrino NSI that are induced by operators
containing neutrinos in Table 1. As mentioned in Sec. 3.4, for a given choice
of the quark flavor indices, eqs. (43)–(45) (or the equations in Table 3) imply
relations between the NSI and the stringently constrained lepton flavor
violating operators Proceedings:2019qno . These predictions can, however, be
evaded in new physics scenarios where dim-8 operators become important. For
instance, if the leading contribution to the NSI is from dim-8 (and not dim-6)
operators like
$\left[{\cal O}_{l3q}\right]^{\alpha\beta
ij}=(\tilde{H}^{\dagger}\tau^{I}\tilde{H})(\bar{l}^{\alpha}\gamma_{\mu}\tau^{I}l^{\beta})(\bar{q}^{i}\gamma^{\mu}q^{j})~{},$
(56)
new physics affects only the neutrino and not the charged-lepton sector. Even
in this case, however, dim-6 charged-lepton flavor-violating effects will be
generated at loop level Ardu:2022pzk . A more natural way of decoupling these
two sectors is if the new physics scale is below the electroweak scale (see,
e.g. Ref. Farzan:2019xor ).
## 5 Concluding remarks
In this work, we have systematically derived the consequences of the
$SU(2)_{L}\times U(1)_{Y}$ invariance of SMEFT on semileptonic flavor
observables. These consequences arise from the fact that a complete
parametrization of BSM deviations in flavor physics observables can be only
achieved by writing a lagrangian that respects $U(1)_{em}$ and not the full
symmetry of SMEFT. For instance, while the left-handed up and down type
fermions form $SU(2)_{L}$ doublets and always appear together in SMEFT
operators, as far as flavor observables are concerned, searches in the up and
down sectors are completely independent. Therefore, BSM deviations in these
channels must be parameterized by independent operators.
To be more precise, while the most general $U(1)_{em}$ invariant lagrangian
has 3240 independent semileptonic four-fermion operators (see Table 1) and
another set of 144 operators that contribute to semileptonic processes via
$Z,W^{\pm}$ and $h$ exchange (see Table 4), the numbers of dim-6 SMEFT
operators in these categories are 1053 (see Table 2) and 108 (see Table 4),
respectively. This then results in 2223 constraints in the space of WCs of the
$U(1)_{em}$ invariant lagrangian that can be thought of as predictions of
SMEFT at the dim-6 level. One of the main results of this work is the
derivation of these 2223 constraints. We present these constraints as linear
relations among the WCs of the $U(1)_{em}$ invariant lagrangian, in Table 3.
These relations are a succinct expression of the consequences of the
$SU(2)_{L}\times U(1)_{Y}$ invariance of SMEFT for semileptonic operators.
They are completely independent of UV flavor assumptions as we find that the
elements of the rotation matrices of the left-handed and right-handed up-type
and down-type fermions do not individually appear in them but only in
combinations that form CKM and PMNS elements. We then show how these relations
can be written in terms of LEFT WCs by integrating out the $Z,W^{\pm}$ and $h$
bosons. We refer the reader to Fig. 1 where this scenario has been pictorially
represented.
The $U(1)_{em}$ invariant lagrangian we have considered is in fact equivalent,
in the unitary gauge, to the HEFT lagrangian which is generally written in an
$SU(2)_{L}\times U(1)_{Y}$ invariant form but with the gauge symmetry being
realized non-linearly. We show this explicitly in Appendix A where we present
a one-to-one mapping between the invariant HEFT operators and the list of
$U(1)_{em}$ invariant operators in Table 1 and Table 4. In the process, we
find some HEFT operators that were missed in earlier literature and others
that were considered but are actually redundant.
The SMEFT predictions we have derived have powerful phenomenological
consequences as they connect observables in different sectors, such as rare
decays in the kaon, $B$-meson and charm sectors; decays of the top, $Z$, $W^{\pm}$
and $\tau$; lepton flavor violating observables and even neutrino NSI. On the
one hand, they can be used to express poorly constrained WCs in terms of
strongly constrained ones, thus allowing us to put new stronger indirect
bounds on the former. On the other hand, if evidence for new physics is seen,
they in general imply that BSM effects cannot appear in a single isolated
channel because these linear relations imply that if one WC is non-zero,
multiple others also must be non-vanishing.
To illustrate the usefulness of these relations in phenomenology, we focus on
the well-motivated UV4f scenario, where the UV physics only involves four-
fermion operators, and HEFT WCs corresponding to BSM couplings of the $Z$,
$W^{\pm}$ and Higgs to fermions are absent. We further restrict ourselves to
the operators with only left-handed fermions, i.e the $LLLL$ class of
operators. In this scenario, there are three sets of relations among the LEFT
WCs. The first set relates the WCs of the neutral-current operators
$(\bar{\nu}_{L}\gamma_{\mu}\nu_{L})\,(\bar{d}_{L}\gamma^{\mu}d_{L})$ and
$(\bar{e}_{L}\gamma_{\mu}e_{L})\,(\bar{u}_{L}\gamma^{\mu}u_{L})$. The second
set consists of relations among the WCs of the neutral-current operators
$(\bar{e}_{L}\gamma_{\mu}e_{L})\,(\bar{d}_{L}\gamma^{\mu}d_{L})$ and
$(\bar{\nu}_{L}\gamma_{\mu}\nu_{L})\,(\bar{u}_{L}\gamma^{\mu}u_{L})$. In the
third set, the charged-current WCs are related to the above neutral-current
coefficients.
The main phenomenological results of this work are as follows:
1.
Indirect bounds from SMEFT predictions: In Secs. 3.1–3.3, we consider $LLLL$
operators in UV4f models. Using bounds from meson decays and high-$p_{T}$
Drell-Yan searches and applying the SMEFT predictions, we obtain novel bounds
on WCs related to $d\bar{d}\to\nu\bar{\nu}$, $u_{i}\to u_{j}\nu\bar{\nu}$ and
top decays, that are much stronger than the direct bounds. Our main results
are summarised in Figs. 3, 5 and 7.
2.
Connecting quark and lepton flavor violation: In Sec. 3.4, we show how the
SMEFT predictions derived by us connect flavor violation in the quark and
lepton sectors. In Table 5, we present a list of processes spanning diverse
observation channels (e.g. LFV tau decays, LFV ${\ell}N\to{\ell}^{\prime}N$
transitions, rare semileptonic $B$, $D$ and $K$ decays, top production and
decays, etc.) that are connected via our analytic relations among the WCs.
3.
Evidence for new physics from SMEFT predictions: In Sec. 4, we show that the
relations among the WCs of the type $LLLL$ imply that a single nonzero WC
requires that there are at least 9 other nonzero WCs. We then discuss the
specific cases of the observed excess in the $B\to K\nu\nu$ branching fraction
and $R(D^{(*)})$ anomalies, and list other search channels that should see a
correlated signal if these anomalies survive in the future.
In future studies, we aim to extend the approach developed here and apply it
to other flavor physics observables. In this work, we have considered only a
subset of operators appearing in LEFT, HEFT and SMEFT, namely the set of
semileptonic operators. In future work, we will extend our analysis by
including all operators up to dimension-6 in order to find SMEFT-predicted
relations among the corresponding LEFT and HEFT WCs. These predictions will
allow us to interconnect many other important flavor observables. For
instance, predictions can be obtained for dipole operators connected to
observables such as the $b\to s\gamma$ process, for four quark operators that
are associated to the $\Delta F=2$ meson-mixing processes and nonleptonic
meson decays, for four-lepton operators associated to LFV processes such as
$\mu\to 3e$, etc. We, thus, hope that this work will initiate a rich program
in quark and lepton flavor phenomenology that uncovers many more SMEFT-
predicted links between observables.
###### Acknowledgements.
We would like to thank Tuhin S. Roy, Ketan M. Patel, Abhishek M. Iyer, Arnab
Roy, Samadrita Mukherjee, Dibya S. Chattopadhyay and Radhika Vinze for useful
discussions. This work is supported by the Department of Atomic Energy,
Government of India, under Project Identification Number RTI 4002. We
acknowledge the use of computational facilities of the Department of
Theoretical Physics at Tata Institute of Fundamental Research, Mumbai. We
would also like to thank Ajay Salve and Kapil Ghadiali for technical
assistance.
## Appendix A Semileptonic HEFT operators in $SU(2)_{L}\times U(1)_{Y}$
invariant form
In Table 1 and Table 4, we have presented all possible $U(1)_{em}$-invariant
semileptonic operators relevant to this work. In this Appendix, we show that
these operators can be rewritten as $SU(2)_{L}\times U(1)_{Y}$ invariant
operators of HEFT with the symmetry realized non-linearly. Following the
notation and approach used in Ref. Buchalla:2012qq , we introduce the
Goldstone matrix $U=\exp(2i\varphi^{a}T^{a}/v)$, where $\varphi_{a}$ are the
Goldstones of the breaking of $SU(2)_{L}\times SU(2)_{R}\to SU(2)_{V}$. Under
$SU(2)_{L}\times SU(2)_{R}$, the matrix $U$ transforms as $U\to
g_{L}Ug_{R}^{\dagger}$, where $g_{L}$ and $g_{R}$ are the respective group
elements. We also introduce the $SU(2)_{R}$ quark and lepton doublets denoted
by $r\equiv(u_{R},d_{R})^{T}$ and $\eta\equiv(0,e_{R})^{T}$, respectively.
As the correct symmetry-breaking pattern in SM is $SU(2)_{L}\times U(1)_{Y}\to
U(1)_{em}$, and not $SU(2)_{L}\times SU(2)_{R}\to SU(2)_{V}$, one must include
explicit sources of $SU(2)_{R}$ breaking (see e.g. Ref. murayama ). For
bosonic operators, this is usually done by introducing the two spurions
$L_{\mu}=UD_{\mu}U^{\dagger}$ and $\tau_{L}=UT_{3}U^{\dagger}$. For fermionic
operators, we need other sources of $SU(2)_{R}$ breaking. As shown in Ref.
Buchalla:2012qq , this can be achieved by including factors of $UP_{i}$ in the
operators where the projection matrices $P_{i}$ are defined as
$\displaystyle P_{\pm}\equiv\frac{1}{2}\pm T_{3},\quad P_{12}\equiv
T_{1}+iT_{2},\quad P_{21}\equiv T_{1}-iT_{2}~{}.$ (57)
In the above equation, $T_{i}$ are the $SU(2)_{L}$ generators. One can keep
track of the hypercharge invariance of the operators by keeping in mind that,
while $UP_{+}$ and $UP_{12}$ extract the $Y=-1$ components of $U$, the
projections $UP_{-}$ and $UP_{21}$ extract the $Y=1$ components of $U$.
$LLLL$ | $LLRR$
---|---
${{\mathbf{o}}}_{LL3}=({\bar{l}\gamma_{\mu}l})\,({\bar{q}\gamma^{\mu}q})$ | ${{\mathbf{o}}}_{LR5}=({\bar{l}\gamma_{\mu}l})\,({\bar{u}\gamma^{\mu}u})$
${{\mathbf{o}}}_{LL4}=({\bar{l}\gamma_{\mu}\tau^{a}l})\,({\bar{q}\gamma^{\mu}\tau^{a}q})$ | ${{\mathbf{o}}}_{LR6}=({\bar{l}\gamma_{\mu}l})\,({\bar{d}\gamma^{\mu}d})$
${{\mathbf{o}}}_{LL10}=({\bar{l}\gamma_{\mu}U\tau^{3}U^{\dagger}l})\,({\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}q})$ | ${{\mathbf{o}}}_{FY11}=({\bar{\ell}UP_{-}r})\,({\bar{r}P_{+}U^{\dagger}l})$
${{\mathbf{o}}}_{LL11}=({\bar{l}\gamma_{\mu}l})\,({\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}q})$ | ${{\mathbf{o}}}_{LR14}=({\bar{l}\gamma_{\mu}U\tau^{3}U^{\dagger}l})\,({\bar{u}\gamma^{\mu}u})$
${{\mathbf{o}}}_{LL12}=({\bar{l}\gamma_{\mu}U\tau^{3}U^{\dagger}l})\,({\bar{q}\gamma^{\mu}q})$ | ${{\mathbf{o}}}_{LR15}=({\bar{l}\gamma_{\mu}U\tau^{3}U^{\dagger}l})\,({\bar{d}\gamma^{\mu}d})$
${{\mathbf{o}}}_{LL14}=({\bar{l}\gamma_{\mu}q})\,({\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}l})$ |
$RRLL$ | $RRRR$
${{\mathbf{o}}}_{LR7}=({\bar{e}\gamma_{\mu}e})\,({\bar{q}\gamma^{\mu}q})$ | ${{\mathbf{o}}}_{RR5}=({\bar{e}\gamma_{\mu}e})\,({\bar{u}\gamma^{\mu}u})$
${{\mathbf{o}}}_{LR16}=({\bar{e}\gamma_{\mu}e})\,({\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}q})$ | ${{\mathbf{o}}}_{RR6}=({\bar{e}\gamma_{\mu}e})\,({\bar{d}\gamma^{\mu}d})$
Scalar with $d_{R}$ | Tensor with $d_{R}$
${{\mathbf{o}}}_{FY7}=({\bar{q}UP_{-}r})\,({\bar{\ell}UP_{-}\eta})$ | ${{\mathbf{o}}}_{FY8}=({\bar{q}\sigma^{\mu\nu}UP_{-}r})\,({\bar{l}\sigma_{\mu\nu}UP_{-}\eta})$
${{\mathbf{o}}}_{LR9}=({\bar{q}\gamma^{\mu}l})\,({\bar{e}\gamma_{\mu}d})$ | $\bullet$ ${{\mathbf{o}}}_{ST13}=({\bar{r}P_{-}\sigma^{\mu\nu}Uq})\,({\bar{l}\sigma_{\mu\nu}UP_{-}\eta})$
${{\mathbf{o}}}_{LR18}=({\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}l})\,({\bar{e}\gamma_{\mu}d})$ | $\bullet$ ${{\mathbf{o}}}_{ST14}=({\bar{q}\sigma^{\mu\nu}UP_{12}r})\,({\bar{\eta}\sigma_{\mu\nu}UP_{21}l})$
Scalar with $u_{R}$ | Tensor with $u_{R}$
${{\mathbf{o}}}_{ST9}=({\bar{q}UP_{+}r})\,({\bar{\ell}UP_{-}\eta})$ | ${{\mathbf{o}}}_{ST11}=({\bar{q}\sigma^{\mu\nu}UP_{+}r})\,({\bar{l}\sigma_{\mu\nu}UP_{-}\eta})$
${{\mathbf{o}}}_{FY9}=({\bar{\ell}UP_{-}\eta})\,({\bar{r}P_{+}U^{\dagger}q})$ | $\bullet$ ${{\mathbf{o}}}_{ST16}=({\bar{r}P_{+}\sigma^{\mu\nu}Uq})\,({\bar{l}\sigma_{\mu\nu}UP_{-}\eta})$
${{\mathbf{o}}}_{ST10}=({\bar{q}UP_{21}r})\,({\bar{\ell}UP_{12}\eta})$ | ${{\mathbf{o}}}_{ST12}=({\bar{q}\sigma^{\mu\nu}UP_{21}r})\,({\bar{l}\sigma_{\mu\nu}UP_{12}\eta})$
Table 6: List of semileptonic $SU(2)_{L}\times U(1)_{Y}$ invariant HEFT
operators. Note that this list is somewhat different from the list presented
in Buchalla:2012qq (see the text for more details). Some redundant operators
present in Buchalla:2012qq are omitted from this list. On the other hand,
some operators (preceded by a bullet) which were absent in Buchalla:2012qq
have been added and have been named using a similar nomenclature. Note that
$\tau^{a}=2\,T^{a}$ are the Pauli matrices.
We first consider four-fermion operators. In Table 6, we present all possible
$SU(2)_{L}\times U(1)_{Y}$ invariant HEFT operators with two quarks and two
leptons, up to dimension 6. Note that this list has some differences from the
list of operators presented in Ref. Buchalla:2012qq that we will discuss in
detail in the following. Working in the unitary gauge, i.e. taking $U\to 1$,
we now write each of the operators in Table 6 in terms of the operators in
Table 1. This would confirm that there is a one-to-one mapping between these
two sets of operators in the unitary gauge.
For $LLLL$ vector operators:
$\displaystyle{{\mathbf{o}}}_{LL3}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}+{{\textbf{o}}}_{euLL}^{V}+{{\textbf{o}}}_{\nu
dLL}^{V}+{{\textbf{o}}}_{edLL}^{V}~{},$ (58)
$\displaystyle{{\mathbf{o}}}_{LL4}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}-{{\textbf{o}}}_{euLL}^{V}-{{\textbf{o}}}_{\nu
dLL}^{V}+{{\textbf{o}}}_{edLL}^{V}+2\,{{\textbf{o}}}_{LL}^{V}+2\,{{\textbf{o}}}_{LL}^{\prime
V}~{},$ (59) $\displaystyle{{\mathbf{o}}}_{LL10}$
$\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}-{{\textbf{o}}}_{euLL}^{V}-{{\textbf{o}}}_{\nu
dLL}^{V}+{{\textbf{o}}}_{edLL}^{V}~{},$ (60)
$\displaystyle{{\mathbf{o}}}_{LL11}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}+{{\textbf{o}}}_{euLL}^{V}-{{\textbf{o}}}_{\nu
dLL}^{V}-{{\textbf{o}}}_{edLL}^{V}~{},$ (61)
$\displaystyle{{\mathbf{o}}}_{LL12}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}-{{\textbf{o}}}_{euLL}^{V}+{{\textbf{o}}}_{\nu
dLL}^{V}-{{\textbf{o}}}_{edLL}^{V}~{},$ (62)
$\displaystyle{{\mathbf{o}}}_{LL14}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}+{{\textbf{o}}}_{LL}^{V}-{{\textbf{o}}}_{LL}^{\prime
V}-{{\textbf{o}}}_{edLL}^{V}~{},$ (63)
where we have suppressed the quark and lepton flavor indices. Here
$[{{\textbf{o}}}_{LL}^{\prime V}]^{\alpha\beta
ij}=([{{\textbf{o}}}_{LL}^{V}]^{\beta\alpha ji})^{\dagger}$ and
$[{{\textbf{o}}}_{LL}^{V}]^{\alpha\beta ij}$ are two independent operators.
The 6 operators listed in Table 6, therefore, receive contributions from 6
independent operators of this category in Table 1, providing a one-to-one
mapping between these two lists. In this category, there is one more operator
in Buchalla:2012qq , i.e.
${{\mathbf{o}}}_{LL13}=(\bar{q}\gamma^{\mu}U\tau^{3}U^{\dagger}l)(\bar{l}\gamma_{\mu}U\tau^{3}U^{\dagger}q)$.
However, this operator is not independent of the 6 operators appearing in the
‘$LLLL$’ block of Table 6. Indeed, it can be written as
$\displaystyle{{\mathbf{o}}}_{LL13}$ $\displaystyle={{\textbf{o}}}_{\nu
uLL}^{V}-{{\textbf{o}}}_{LL}^{V}-({{\textbf{o}}}_{LL}^{\prime
V})+{{\textbf{o}}}_{edLL}^{V}~{},$ (64)
which is equivalent to the relation,
$\displaystyle{{\mathbf{o}}}_{LL13}$
$\displaystyle=\frac{1}{2}({{\mathbf{o}}}_{LL3}+2\,{{\mathbf{o}}}_{LL10}-{{\mathbf{o}}}_{LL4})~{}.$
(65)
This operator has therefore been omitted in our list.
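The linear dependence in eqs. (64)–(65) can be checked mechanically by representing each HEFT operator as a vector of coefficients over the six independent operators appearing in eqs. (58)–(64); the short sketch below performs this check numerically (the basis ordering is our own bookkeeping choice).

```python
# Verify o_LL13 = (o_LL3 + 2 o_LL10 - o_LL4)/2, eq. (65), with each operator
# written over the basis (o_nu-u, o_eu, o_nu-d, o_ed, o_LL, o'_LL)
# as read off from eqs. (58)-(64).
import numpy as np

o_LL3  = np.array([1,  1,  1, 1, 0, 0])
o_LL4  = np.array([1, -1, -1, 1, 2, 2])
o_LL10 = np.array([1, -1, -1, 1, 0, 0])
o_LL13 = np.array([1,  0,  0, 1, -1, -1])

print(np.array_equal(o_LL13, (o_LL3 + 2 * o_LL10 - o_LL4) // 2))  # True
```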
For $LLRR$ vector operators:
$\displaystyle{{\mathbf{o}}}_{LR5}$ $\displaystyle={{\textbf{o}}}_{\nu
uLR}^{V}+{{\textbf{o}}}_{euLR}^{V}~{},$ $\displaystyle{{\mathbf{o}}}_{LR6}$
$\displaystyle={{\textbf{o}}}_{\nu dLR}^{V}+{{\textbf{o}}}_{edLR}^{V}~{},$
(66) $\displaystyle{{\mathbf{o}}}_{FY{11}}$
$\displaystyle=-\frac{1}{2}\,{{\textbf{o}}}_{LR}^{V}~{},$
$\displaystyle{{\mathbf{o}}}_{LR14}$ $\displaystyle={{\textbf{o}}}_{\nu
uLR}^{V}-{{\textbf{o}}}_{euLR}^{V}~{},$ (67)
$\displaystyle{{\mathbf{o}}}_{LR15}$ $\displaystyle={{\textbf{o}}}_{\nu
dLR}^{V}-{{\textbf{o}}}_{edLR}^{V}~{},$ (68)
Note that the operator ${{\mathbf{o}}}_{FY11}$ as defined in Table 6 consists
of two scalar currents. However, this operator maps to a vector operator after
the Fierz transformation and hence it is included in the category $LLRR$.
For $RRLL$ vector operators:
$\displaystyle{{\mathbf{o}}}_{LR7}$
$\displaystyle={{\textbf{o}}}_{euRL}^{V}+{{\textbf{o}}}_{edRL}^{V}~{},\quad{{\mathbf{o}}}_{LR16}={{\textbf{o}}}_{euRL}^{V}-{{\textbf{o}}}_{edRL}^{V}~{}.$
(69)
For $RRRR$ vector operators:
$\displaystyle{{\mathbf{o}}}_{RR5}$
$\displaystyle={{\textbf{o}}}_{euRR}^{V}~{},\quad\quad{{\mathbf{o}}}_{RR6}={{\textbf{o}}}_{edRR}^{V}~{}.$
(70)
For scalar operators:
$\displaystyle{{\mathbf{o}}}_{FY7}$
$\displaystyle={{\textbf{o}}}_{edRLRL}^{\prime S}~{},$
$\displaystyle{{\mathbf{o}}}_{ST9}$
$\displaystyle={{\textbf{o}}}_{euRLRL}^{\prime S}~{},$ (71)
$\displaystyle{{\mathbf{o}}}_{LR9}$
$\displaystyle=-2\,{{\textbf{o}}}_{RLLR}^{S}-2\,{{\textbf{o}}}_{edRLLR}^{S}~{}.$
$\displaystyle{{\mathbf{o}}}_{FY9}$
$\displaystyle={{\textbf{o}}}_{euRLLR}^{\prime S}~{},$ (72)
$\displaystyle{{\mathbf{o}}}_{LR18}$
$\displaystyle=-2\,{{\textbf{o}}}_{RLLR}^{S}+2\,{{\textbf{o}}}_{edRLLR}^{S}~{},$
$\displaystyle{{\mathbf{o}}}_{ST10}$
$\displaystyle={{\textbf{o}}}_{RLRL}^{\prime S}~{}.$ (73)
Here $[{{\textbf{o}}}_{ed(u)RLLR}^{\prime S}]^{\alpha\beta
ij}=([{{\textbf{o}}}_{ed(u)RLLR}^{S}]^{\beta\alpha ji})^{\dagger}$ and
$[{{\textbf{o}}}_{RLRL}^{\prime S}]^{\alpha\beta
ij}=([{{\textbf{o}}}_{RLRL}^{S}]^{\beta\alpha ji})^{\dagger}$. Note that the
operators ${{\mathbf{o}}}_{LR9}$ and ${{\mathbf{o}}}_{LR18}$ are defined as
products of vector currents in Table 6. However, they map to scalar operators
after Fierz transformations, as can be seen from eqs. (72–73).
In Buchalla:2012qq , there is one more scalar operator,
${{\mathbf{o}}}_{ST3}=\varepsilon_{ij}(\bar{q}^{i}u)(\bar{l}^{j}e)$ . This
operator is not independent of the scalar operators appearing in Table 6 and
can be written as
$\displaystyle{{\mathbf{o}}}_{ST3}$
$\displaystyle={{\mathbf{o}}}_{ST9}-{{\mathbf{o}}}_{ST10}~{}.$ (74)
Hence this operator has been omitted in our list.
For tensor operators:
$\displaystyle{{\mathbf{o}}}_{FY8}$
$\displaystyle={{\textbf{o}}}_{edRLRL}^{\prime T}~{},$
$\displaystyle{{\mathbf{o}}}_{ST11}$
$\displaystyle={{\textbf{o}}}_{euRLRL}^{\prime T}~{},$ (75)
$\displaystyle{{\mathbf{o}}}_{ST13}$
$\displaystyle={{\textbf{o}}}_{edRLLR}^{\prime T}~{},$
$\displaystyle{{\mathbf{o}}}_{ST16}$
$\displaystyle={{\textbf{o}}}_{euRLLR}^{\prime T}~{},$ (76)
$\displaystyle{{\mathbf{o}}}_{ST14}$ $\displaystyle={{\textbf{o}}}_{RLLR}^{T}~{},$
$\displaystyle{{\mathbf{o}}}_{ST12}$
$\displaystyle=({{\textbf{o}}}_{RLRL}^{\prime T})~{},$ (77)
where $[{{\mathbf{o}}}^{\prime}]^{\alpha\beta
ij}\equiv([{{\mathbf{o}}}]^{\beta\alpha ji})^{\dagger}$. The three tensor
operators ${{\mathbf{o}}}_{ST13}$, ${{\mathbf{o}}}_{ST14}$ and
${{\mathbf{o}}}_{ST16}$ are absent in the list of HEFT operators presented in
Buchalla:2012qq . On the other hand, the operator
${{\mathbf{o}}}_{ST4}=\varepsilon_{ij}\,(\bar{q}^{i}\sigma_{\mu\nu}\,u)\,(\bar{l}^{j}\sigma^{\mu\nu}e)$
included in Buchalla:2012qq is not an independent one. It can be written as
$\displaystyle{{\mathbf{o}}}_{ST4}$
$\displaystyle={{\mathbf{o}}}_{ST11}-{{\mathbf{o}}}_{ST12}~{},$ (78)
and has been omitted in our list.
HEFT operators with $Z$, $W^{\pm}$ and $h$ couplings
---
${{\mathbf{o}}}_{\psi V1}=\left(\bar{q}\gamma^{\mu}q\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$ | ${{\mathbf{o}}}_{\psi V2}=\left(\bar{q}\gamma^{\mu}UT_{3}U^{\dagger}q\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$
${{\mathbf{o}}}_{\psi V3}=\left(\bar{q}\gamma^{\mu}UP_{12}U^{\dagger}q\right)\langle U^{\dagger}iD_{\mu}UP_{21}\rangle~{}+{\rm h.c.}$ | ${{\mathbf{o}}}_{\psi V4}=\left(\bar{u}\gamma^{\mu}u\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$
${{\mathbf{o}}}_{\psi V5}=\left(\bar{d}\gamma^{\mu}d\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$ | ${{\mathbf{o}}}_{\psi V6}=\left(\bar{u}\gamma^{\mu}d\right)\langle U^{\dagger}iD_{\mu}UP_{21}\rangle~{}+{\rm h.c.}$
${{\mathbf{o}}}_{\psi V7}=\left(\bar{l}\gamma^{\mu}l\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$ | ${{\mathbf{o}}}_{\psi V8}=\left(\bar{l}\gamma^{\mu}UT_{3}U^{\dagger}l\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$
${{\mathbf{o}}}_{\psi V9}=\left(\bar{l}\gamma^{\mu}UP_{12}U^{\dagger}l\right)\langle U^{\dagger}iD_{\mu}UP_{21}\rangle~{}+{\rm h.c.}$ | ${{\mathbf{o}}}_{\psi V10}=\left(\bar{e}\gamma^{\mu}e\right)\langle U^{\dagger}iD_{\mu}UT_{3}\rangle$
${{\mathbf{o}}}_{\psi h1}=h\,\left(\bar{q}UP_{-}r\right)$ | ${{\mathbf{o}}}_{\psi h2}=h\,\left(\bar{q}UP_{+}r\right)$
${{\mathbf{o}}}_{\psi h3}=h\,\left(\bar{l}UP_{-}\eta\right)$ |
Table 7: HEFT operators in Buchalla:2012qq with $Z$, $W^{\pm}$ and $h$
couplings to fermions.
For HEFT operators with BSM couplings of the $Z$ and $W^{\pm}$ to fermions, we
reproduce the list provided in Buchalla:2012qq in Table 7. In addition, we
have also included the HEFT operators that modify the coupling of $h$ to
fermions. Once again there is a one-to-one mapping between the operators in
Table 7 and Table 4:
$\displaystyle{{\mathbf{o}}}_{\psi V1}$
$\displaystyle=-\frac{g}{2\cos{\theta}}\left({{\textbf{o}}}_{u_{L}Z}+{{\textbf{o}}}_{d_{L}Z}\right)~{},\quad{{\mathbf{o}}}_{\psi
V2}=-\frac{g}{2\cos{\theta}}\left({{\textbf{o}}}_{u_{L}Z}-{{\textbf{o}}}_{d_{L}Z}\right)~{},$
(79) $\displaystyle{{\mathbf{o}}}_{\psi V3}$
$\displaystyle=-\frac{g}{\sqrt{2}}{{\textbf{o}}}_{ud_{L}W}~{},\qquad\qquad\qquad~{}{{\mathbf{o}}}_{\psi
V4}=-\frac{g}{2\cos{\theta}}{{\textbf{o}}}_{u_{R}Z}~{},$ (80)
$\displaystyle{{\mathbf{o}}}_{\psi V5}$
$\displaystyle=-\frac{g}{2\cos{\theta}}{{\textbf{o}}}_{d_{R}Z}~{},\qquad\qquad\quad~{}~{}{{\mathbf{o}}}_{\psi
V6}=-\frac{g}{\sqrt{2}}{{\textbf{o}}}_{ud_{R}W}~{},$ (81)
$\displaystyle{{\mathbf{o}}}_{\psi V7}$
$\displaystyle=-\frac{g}{2\cos{\theta}}\left({{\textbf{o}}}_{\nu_{L}Z}+{{\textbf{o}}}_{e_{L}Z}\right)~{},\quad{{\mathbf{o}}}_{\psi
V8}=-\frac{g}{2\cos{\theta}}\left({{\textbf{o}}}_{\nu_{L}Z}-{{\textbf{o}}}_{e_{L}Z}\right)~{},$
(82) $\displaystyle{{\mathbf{o}}}_{\psi V9}$
$\displaystyle=-\frac{g}{\sqrt{2}}({{\textbf{o}}}_{e\nu_{L}W})^{\dagger}~{},\qquad\qquad~{}~{}\,{{\mathbf{o}}}_{\psi
V10}=-\frac{g}{2\cos{\theta}}{{\textbf{o}}}_{e_{R}Z}~{},$ (83)
$\displaystyle{{\mathbf{o}}}_{\psi h1}$
$\displaystyle={{\mathbf{o}}}_{dh}~{},\qquad{{\mathbf{o}}}_{\psi
h2}={{\mathbf{o}}}_{uh}~{},\qquad{{\mathbf{o}}}_{\psi
h3}={{\mathbf{o}}}_{eh}~{}.$ (84)
Thus, we have explicitly demonstrated the one-to-one mapping between the HEFT
operators in the $U(1)_{em}$ invariant language and the HEFT operators in
$SU(2)_{L}\times SU(2)_{R}$ language in the unitary gauge.
## Appendix B Details of the SMEFT basis used
To obtain the SMEFT predictions, we have used the
$\left(m_{W},m_{Z},\alpha_{EM}\right)$ input parameter scheme and the basis as
proposed in Ref. Masso:2014xra . Note that this basis is different from the
Warsaw basis that is conventionally used for SMEFT. In this appendix, we
discuss the difference and the rationale for the choice of this basis.
In the Warsaw basis, the two operators (note that in Ref. Grzadkowski:2010es
the operator ${\cal O}_{T}$ actually appears as a linear combination of the
operators $(H^{\dagger}\,D_{\mu}\,H)^{*}(H^{\dagger}\,D_{\mu}\,H)$ and
$(H^{\dagger}H)\partial_{\mu}\partial^{\mu}(H^{\dagger}H)$; the combination of
these operators orthogonal to ${\cal O}_{T}$, ${\cal
O}_{H}=\frac{1}{2}(\partial_{\mu}|H|^{2})^{2}$, is part of the basis of Ref.
Masso:2014xra )
$\displaystyle{\cal O}_{WB}$
$\displaystyle=gg^{\prime}H^{\dagger}\tau^{I}\,H\,W_{\mu\nu}^{I}\,B^{\mu\nu}~{},$
(85) $\displaystyle{\cal O}_{T}$
$\displaystyle=(H^{\dagger}\,\overleftrightarrow{D}H)^{2}$ (86)
would contribute to the couplings of gauge bosons to the fermions by affecting
their mass and kinetic terms. One needs to carefully normalize the kinetic
term to bring it to the canonical form and also take into account input
parameter shifts. These subtleties become relevant when we try to write the
$Z$ and $W^{\pm}$ coupling modifications in SMEFT as in eqs. (32)–(34).
Instead, in the basis of Ref. Masso:2014xra , the operators ${\cal O}_{WB}$
and ${\cal O}_{T}$ are traded for the following two operators:
$\displaystyle{\cal O}_{WB^{\prime}}$ $\displaystyle={\cal
O}_{WB}-2ig^{\prime}\left(H^{\dagger}\overleftrightarrow{D}^{\mu}H\right)\partial^{\nu}B_{\mu\nu},$
$\displaystyle{\cal O}_{W^{\prime}}$
$\displaystyle=\frac{ig}{2}\left(H^{\dagger}\tau^{a}\overleftrightarrow{D}^{\mu}H\right)D^{\nu}W_{\mu\nu}^{a}-\frac{ig^{\prime}}{2}\left(H^{\dagger}\overleftrightarrow{D}^{\mu}H\right)\partial^{\nu}B_{\mu\nu}.$
(87)
This way of writing the operators eliminates their contributions to any mass
or kinetic term Masso:2014xra , thus allowing us to obtain the SMEFT
predictions in a straightforward way.
Note that the final predictions we obtain should be independent of the basis
being used, and the Warsaw basis should also yield the same predictions, albeit
with more complicated intermediate calculations. We show in the following
that, in the Warsaw basis, the contributions of the two operators ${\cal
O}_{T}$ and ${\cal O}_{WB}$ to the HEFT WCs associated with $Z$, $W^{\pm}$
couplings to the fermions cancel out in the final relations. In the Warsaw
basis, these HEFT WCs receive the following SMEFT contributions Efrati:2015eaa
:
$\displaystyle[{{\mathbf{c}}}_{u_{L}Z}]^{ij}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hq}^{(1)}]^{ij}-[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij})+f(1/2,\,2/3)~{},$
(88) $\displaystyle[{{\mathbf{c}}}_{d_{L}Z}]^{ij}$
$\displaystyle=\eta_{LZ}\,([{{\mathcal{C}}}_{Hq}^{(1)}]^{ij}+[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij})+f(-1/2,\,-1/3)~{},$
(89) $\displaystyle[{{\mathbf{c}}}_{ud_{L}W}]^{ij}$
$\displaystyle=\eta_{LW}\,[{{\mathcal{C}}}_{Hq}^{(3)}]^{ij}+f(1/2,\,2/3)-f(-1/2,\,-1/3)~{},$
(90)
where $\eta_{LZ}\equiv-g/(2\,\cos\theta)$, $\eta_{LW}=g/(\sqrt{2})$ and the
term $f(T^{3},Q)$ is defined as Efrati:2015eaa
$\displaystyle f(T^{3},Q)$
$\displaystyle=\mathcal{I}\left[-{{\mathcal{C}}}_{WB}\,Q\,\frac{g^{2}\,g^{\prime
2}}{g^{2}-g^{\prime 2}}+({{\mathcal{C}}}_{T}-\delta
v)\left(T^{3}+Q\,\frac{g^{\prime 2}}{g^{2}-g^{\prime 2}}\right)\right]~{},$
(91)
with $[\delta
v]^{ij}=([{{\mathcal{C}}}_{Hl}^{(3)}]^{11}+[{{\mathcal{C}}}_{Hl}^{(3)}]^{22})/2+[{{\mathcal{C}}}_{{\ell}{\ell}}^{(1)}]^{1221}/4$.
From eqs. (88-90), we obtain the SMEFT prediction
$\displaystyle{{\mathbf{c}}}_{ud_{L}W}=\frac{1}{\sqrt{2}}\cos\theta_{w}\,({{\mathbf{c}}}_{u_{L}Z}-\,{{\mathbf{c}}}_{d_{L}Z})~{},$
(92)
which is the same as the one in Table 3. We see that in the prediction shown
in eq. (92), the function $f(T^{3},Q)$ and the operators within do not appear.
Similarly for $Z$, $W^{\pm}$ coupling to leptons, we recover the prediction
already presented in Table 3,
$\displaystyle{{\mathbf{c}}}_{e\nu_{L}W}=\frac{1}{\sqrt{2}}\cos\theta_{w}\,({{\mathbf{c}}}_{e_{L}Z}-\,{{\mathbf{c}}}_{\nu_{L}Z})~{}.$
(93)
Thus, even in the Warsaw basis, the contributions to our relations from ${\cal
O}_{WB}$ and ${\cal O}_{T}$ cancel out, confirming that the final relations
among the HEFT WCs are independent of the choice of the basis for SMEFT.
## Appendix C Linear relations among LEFT and HEFT operators
In Sec. 2.3, we have presented SMEFT predictions for LEFT WCs of the class
$LLLL$. In this appendix, we provide a similar analysis for the other classes
of LEFT operators. We first write the matching of four-fermion semileptonic
WCs between LEFT and HEFT. We then substitute the HEFT WCs with the LEFT WCs
for each of the analytic relations presented in Table 3. As a result, we get
relations among the LEFT WCs which also involve the BSM couplings of
$Z,W^{\pm}$ and Higgs bosons to fermions. These relations for the vector
operators are listed in Table 8. For the scalar and the tensor operators, the
relations are presented in Table 9.
Class | Analytic relations for WCs of vector operators | Count
---|---|---
$LLLL$ | $\begin{aligned} &V_{ik}\left[[{{C}}_{edLL}^{V}]^{\alpha\beta kl}-\left(k_{e_{L}}\,[\hat{{\mathbf{c}}}_{d_{L}Z}]^{kl}\,\delta_{\alpha\beta}+k_{d_{L}}\,[\hat{{\mathbf{c}}}_{e_{L}Z}]^{\alpha\beta}\,\delta_{kl}\right)\right]V^{\dagger}_{{\ell}j}\\\ &~{}~{}=U^{\dagger}_{\alpha\rho}\left[[{{C}}_{\nu uLL}^{V}]^{\rho\sigma ij}-\chi\,\left(k_{\nu_{L}}\,[\hat{{\mathbf{c}}}_{u_{L}Z}]^{ij}\,\delta_{\rho\sigma}+k_{u_{L}}\,[\hat{{\mathbf{c}}}_{\nu_{L}Z}]^{\rho\sigma}\,\delta_{ij}\right)\right]U_{\sigma\beta}\end{aligned}$ | 81 (45) |
Toffoli complexity and qubit cost when constructing the qubitization walk
operator. Unfortunately, $\lambda$ is increased in these cases. The increase
is attributed to the lower variational freedom in constructing non-orthogonal
bases when representing the two-electron integral tensor in factorized form
compared with the non-symmetry adapted setting. For the THC case, no
asymptotic speedup is formally possible. This stems from the linear cost of
unary iteration over all basis states. Nevertheless, due to competing
prefactors between unary iteration and state preparation, we do observe a
$\sqrt{N_{k}}$ scaling improvement in the Toffoli per step and logical qubit
cost for the range of systems studied. This is likely a finite size effect,
but may be a practically important when considering which algorithm to chose
in the future. Thus, improving the $\lambda$ value of THC through more
sophisticated and affordable means is worth further investigation.
Reaching the thermodynamic and complete basis set limit is very challenging,
even for classical wavefunction methods like CCSD and ph-AFQMC. Previous ph-
AFQMC results for simple insulating solids with two-atom unit cells suggest
that at least a $3\times 3\times 3$ and $4\times 4\times 4$ sampling of the
Brillouin zone is required to extrapolate correlation energies to the
thermodynamic limit [108]. Similarly, it has been found that quadruple-zeta
quality basis sets are required to converge the cohesive energy to less than
0.1 eV / atom, while a triple-zeta quality basis is likely sufficient for
quantities such as the lattice constant and bulk modulus [109]. Similar system
sizes and basis sets were found to be required for CCSD simulations of
metallic systems [18]. Although the theory of finite size corrections [110,
111, 112, 113] is still an area of active research [114, 115], the simulation
of bulk systems even with these corrections typically requires on the order of
50 atoms, which in turn corresponds to hundreds of electrons and thousands of
orbitals. For excited state properties, particularly those concerning charged
excitations, even larger system sizes may be required without the use of
sophisticated finite size correction schemes [116]. Thus, we suspect that
simulating large system sizes will continue to be necessary in order to obtain
high accuracy for condensed phase simulations. It is important to note that
high accuracy classical wavefunction methods are often considered too
expensive for practical materials simulation, and DFT is still the workhorse
of the field. Appendix F shows that simulating even simple solids with coarse
$k$-meshes can take on the order of hours, whereas a modern DFT code completes
the same calculations in seconds. From the quantum resource estimates it is clear that
several orders of magnitude of improvement are necessary before practical
materials simulation is possible. Despite this, the fairly low scaling of
phase estimation as a function of system size serves as encouragement to
pursue quantum simulation for materials further.
The aforementioned convergence difficulties are demonstrated in our classical
calculations on the LNO system when attempting to resolve the discrepancy
between band-theory predictions and experimental observations of the ground
state geometry. Furthermore, the spread in energy between CCSD, MP2, ph-AFQMC,
and DMET (and their computational costs) makes it difficult to select an efficient
method for determining Hamiltonian parameter cutoffs to use in quantum
resource estimation. If anything, this highlights the need for high accuracy
classical computation when performing quantum resource estimates and
ultimately picking an algorithm for quantum simulation. The quantum resource
estimates for LNO simulations are exorbitant even at a small $k$-mesh: the
simulations are estimated to run for $\mathcal{O}(10^{2})$-$\mathcal{O}(10^{3})$ days
using the DF LCU. Just as resource estimates for chemistry fell drastically
with algorithmic developments, further algorithmic improvements are clearly
needed to make an LNO-sized problem feasible on a quantum computer.
Qubitization is a general tool for Hamiltonian simulation and there may be
other simulation scenarios where the improved walk operators yield faster
simulations. There are also areas to further improve the quantum algorithms by
taking advantage of space group symmetry along with translational symmetry. In
classical calculations this can lead to substantial computational savings even
at the mean-field level. Just as in the case of quantum algorithms for
molecular simulations we expect the quantum resource costs to fall with
further exploration of algorithmic improvements.
## Acknowledgements
The authors thank Yuan Su for helpful conversations. FDM thanks Miguel Morales
for helpful discussions on the form of the $k$-point THC factorization. DWB
worked on this project under a sponsored research agreement with Google
Quantum AI. DWB is also supported by Australian Research Council Discovery
Projects DP190102633 and DP210101367.
## References
* Kassal _et al._ [2008] I. Kassal, S. P. Jordan, P. J. Love, M. Mohseni, and A. Aspuru-Guzik, Proceedings of the National Academy of Sciences 105, 18681 (2008).
* Babbush _et al._ [2018a] R. Babbush, C. Gidney, D. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, Physical Review X 8, 041015 (2018a).
* Su _et al._ [2021] Y. Su, D. W. Berry, N. Wiebe, N. Rubin, and R. Babbush, PRX Quantum 2, 040332 (2021).
* Babbush _et al._ [2018b] R. Babbush, N. Wiebe, J. McClean, J. McClain, H. Neven, and G. K.-L. Chan, Physical Review X 8, 011044 (2018b).
* Grüneis _et al._ [2013] A. Grüneis, J. J. Shepherd, A. Alavi, D. P. Tew, and G. H. Booth, The Journal of Chemical Physics 139, 84112 (2013).
* Kato [1957] T. Kato, Communications on Pure and Applied Mathematics 10, 151 (1957).
* Imada _et al._ [1998] M. Imada, A. Fujimori, and Y. Tokura, Reviews of Modern Physics 70, 1039 (1998).
* Dagotto [1994] E. Dagotto, Reviews of Modern Physics 66, 763 (1994).
* Sachdev [2003] S. Sachdev, Reviews of Modern Physics 75, 913 (2003).
* Pisani [2003] C. Pisani, Journal of Molecular Structure (Theochem) 621, 141 (2003).
* Pisani _et al._ [2005] C. Pisani, M. Busso, G. Capecchi, S. Casassa, R. Dovesi, L. Maschio, C. Zicovich-Wilson, and M. Schütz, Journal of Chemical Physics 122 (2005), 10.1063/1.1857479.
* Grüneis _et al._ [2011] A. Grüneis, G. H. Booth, M. Marsman, J. Spencer, A. Alavi, and G. Kresse, Journal of Chemical Theory and Computation 7, 2780 (2011).
* Ben _et al._ [2012] M. D. Ben, J. Hutter, and J. Vandevondele, Journal of Chemical Theory and Computation 8, 4177 (2012).
* Ben _et al._ [2013] M. D. Ben, J. Hutter, and J. Vandevondele, Journal of Chemical Theory and Computation 9, 2654 (2013).
* Booth _et al._ [2013] G. H. Booth, A. Grüneis, G. Kresse, and A. Alavi, Nature 493, 365 (2013).
* Booth _et al._ [2016] G. H. Booth, T. Tsatsoulis, G. K. L. Chan, and A. Grüneis, Journal of Chemical Physics 145 (2016), 10.1063/1.4961301.
* McClain _et al._ [2017] J. McClain, Q. Sun, G. K. L. Chan, and T. C. Berkelbach, Journal of Chemical Theory and Computation 13, 1209 (2017).
* Neufeld _et al._ [2022] V. A. Neufeld, H.-Z. Ye, and T. C. Berkelbach, The Journal of Physical Chemistry Letters 13, 7497 (2022).
* Cui _et al._ [2020] Z. H. Cui, T. Zhu, and G. K. L. Chan, Journal of Chemical Theory and Computation 16, 119–129 (2020).
* Zhu _et al._ [2020] T. Zhu, Z. H. Cui, and G. K. L. Chan, Journal of Chemical Theory and Computation 16, 141 (2020).
* Cui _et al._ [2022] Z.-H. Cui, H. Zhai, X. Zhang, and G. K.-L. Chan, Science 377, 1192 (2022).
* Motta _et al._ [2019] M. Motta, S. Zhang, and G. K.-L. Chan, Physical Review B 100, 045127 (2019).
* Cotton [1991] F. A. Cotton, _Chemical applications of group theory_ (John Wiley & Sons, 1991).
* Crawford and Di Remigio [2019] T. D. Crawford and R. Di Remigio, in _Annual Reports in Computational Chemistry_, Vol. 15 (Elsevier, 2019) pp. 79–101.
* Georges _et al._ [1996a] A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Rev. Mod. Phys. 68, 13 (1996a).
* Cortona [1991] P. Cortona, Phys. Rev. B 44, 8454 (1991).
* Inglesfield [1981] J. Inglesfield, Journal of Physics C: Solid State Physics 14, 3795 (1981).
* Knizia and Chan [2012a] G. Knizia and G. K.-L. Chan, Phys. Rev. Lett. 109, 186404 (2012a).
* Cui _et al._ [2019] Z.-H. Cui, T. Zhu, and G. K.-L. Chan, Journal of Chemical Theory and Computation 16, 119 (2019).
* Pham _et al._ [2019] H. Q. Pham, M. R. Hermes, and L. Gagliardi, Journal of Chemical Theory and Computation 16, 130 (2019).
* Zheng _et al._ [2018] H. Zheng, H. J. Changlani, K. T. Williams, B. Busemeyer, and L. K. Wagner, Frontiers in Physics 6, 43 (2018).
* Lee _et al._ [2021] J. Lee, D. W. Berry, C. Gidney, W. J. Huggins, J. R. McClean, N. Wiebe, and R. Babbush, PRX Quantum 2, 030305 (2021).
* Low and Chuang [2019] G. H. Low and I. L. Chuang, Quantum 3, 163 (2019).
* Ivanov _et al._ [2022] A. V. Ivanov, C. Sünderhauf, N. Holzmann, T. Ellaby, R. N. Kerber, G. Jones, and J. Camps, arXiv:2210.02403 (2022).
* Reiher _et al._ [2017] M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Proceedings of the National Academy of Sciences 114, 7555 (2017).
* von Burg _et al._ [2021] V. von Burg, G. H. Low, T. Häner, D. S. Steiger, M. Reiher, M. Roetteler, and M. Troyer, Physical Review Research 3, 033055 (2021).
* Goings _et al._ [2022] J. J. Goings, A. White, J. Lee, C. S. Tautermann, M. Degroote, C. Gidney, T. Shiozaki, R. Babbush, and N. C. Rubin, Proceedings of the National Academy of Sciences 119, e2203533119 (2022).
* Bianchini _et al._ [2019] M. Bianchini, M. Roca-Ayats, P. Hartmann, T. Brezesinski, and J. Janek, Angewandte Chemie Int. Ed. 58, 2 (2019).
* Sicolo _et al._ [2020] S. Sicolo, M. Mock, M. Bianchini, and K. Albe, Chemistry of Materials 32, 10096 (2020).
* Chen _et al._ [2011] H. Chen, C. L. Freeman, and J. H. Harding, Physical Review B 84, 085108 (2011).
* Vandevondele _et al._ [2005] J. Vandevondele, M. Krack, F. Mohamed, M. Parrinello, T. Chassaing, and J. Hutter, Computer Physics Communications 167, 103 (2005).
* Kühne _et al._ [2020] T. D. Kühne, M. Iannuzzi, M. D. Ben, V. V. Rybkin, P. Seewald, F. Stein, T. Laino, R. Z. Khaliullin, O. Schütt, F. Schiffmann, D. Golze, J. Wilhelm, S. Chulkov, M. H. Bani-Hashemian, V. Weber, U. Borštnik, M. Taillefumier, A. S. Jakobovits, A. Lazzaro, H. Pabst, T. Müller, R. Schade, M. Guidon, S. Andermatt, N. Holmberg, G. K. Schenter, A. Hehn, A. Bussy, F. Belleflamme, G. Tabacchi, A. Glöß, M. Lass, I. Bethune, C. J. Mundy, C. Plessl, M. Watkins, J. VandeVondele, M. Krack, and J. Hutter, Journal of Chemical Physics 152, 194103 (2020).
* Dovesi _et al._ [2020] R. Dovesi, F. Pascale, B. Civalleri, K. Doll, N. M. Harrison, I. Bush, P. D’Arco, Y. Noel, M. Rera, P. Carbonniere, M. Causa, S. Salustro, V. Lacivita, B. Kirtman, A. M. Ferrari, F. S. Gentile, J. Baima, M. Ferrero, R. Demichelis, and M. D. L. Pierre, Journal of Chemical Physics 152, 204111 (2020).
* Guidon _et al._ [2009] M. Guidon, J. Hutter, and J. VandeVondele, Journal of Chemical Theory and Computation 5, 3010 (2009).
* Guidon _et al._ [2010] M. Guidon, J. Hutter, and J. VandeVondele, Journal of Chemical Theory and Computation 6, 2348–2364 (2010).
* Pisani _et al._ [1988] C. Pisani, R. Dovesi, and C. Roetti, “Hartree-Fock ab initio treatment of crystalline systems,” (1988).
* Blum _et al._ [2009] V. Blum, R. Gehrke, F. Hanke, P. Havu, V. Havu, X. Ren, K. Reuter, and M. Scheffler, Computer Physics Communications 180, 2175 (2009).
* Goedecker [1999] S. Goedecker, Reviews of Modern Physics 71, 1085 (1999).
* Bowler and Miyazaki [2012] D. R. Bowler and T. Miyazaki, Reports on Progress in Physics 75, 036503 (2012).
* Wu _et al._ [2009] X. Wu, A. Selloni, and R. Car, Physical Review B - Condensed Matter and Materials Physics 79, 085102 (2009).
* Lippert _et al._ [1997] G. Lippert, J. Hutter, and M. Parinello, Molecular Physics 92, 477 (1997).
* Whitten [1973] J. L. Whitten, The Journal of Chemical Physics 58, 4496 (1973).
* Mintmire and Dunlap [1982] J. W. Mintmire and B. I. Dunlap, Physical Review A 25, 88 (1982).
* Dunlap [2000] B. I. Dunlap, Journal of Molecular Structure: THEOCHEM 529, 37 (2000).
* Weigend [2002] F. Weigend, Physical Chemistry Chemical Physics 4, 4285 (2002).
* Varga [2005] S. Varga, Physical Review B 71, 073103 (2005).
* Maschio and Usvyat [2008a] L. Maschio and D. Usvyat, Physical Review B 78, 073102 (2008a).
* Burow _et al._ [2009] A. M. Burow, M. Sierka, and F. Mohamed, Journal of Chemical Physics 131 (2009), 10.1063/1.3267858.
* Wang _et al._ [2020] X. Wang, C. A. Lewis, and E. F. Valeev, The Journal of Chemical Physics 153, 124116 (2020).
* Ye and Berkelbach [2021] H. Z. Ye and T. C. Berkelbach, Journal of Chemical Physics 154 (2021), 10.1063/5.0046617.
* Hohenstein _et al._ [2012a] E. G. Hohenstein, R. M. Parrish, and T. J. Martínez, The Journal of Chemical Physics 137, 1085 (2012a).
* Parrish _et al._ [2012] R. M. Parrish, E. G. Hohenstein, T. J. Martínez, and C. D. Sherrill, The Journal of Chemical Physics 137, 224106 (2012).
* Hohenstein _et al._ [2012b] E. G. Hohenstein, R. M. Parrish, C. D. Sherrill, and T. J. Martínez, The Journal of Chemical Physics 137, 221101 (2012b).
* Ye and Berkelbach [2022] H.-Z. Ye and T. C. Berkelbach, Journal of Chemical Theory and Computation 18, 1595 (2022).
* Hartwigsen _et al._ [1998] C. Hartwigsen, S. Goedecker, and J. Hutter, Physical Review B 58, 3641 (1998).
* Heyd _et al._ [2005] J. Heyd, J. E. Peralta, G. E. Scuseria, and R. L. Martin, The Journal of Chemical Physics 123, 174101 (2005).
* Grüneis _et al._ [2010] A. Grüneis, M. Marsman, and G. Kresse, The Journal of Chemical Physics 133, 074107 (2010).
* Nadler and Kempier [1959] M. R. Nadler and C. Kempier, Analytical Chemistry 31, 2109 (1959).
* Tang _et al._ [2009] F. L. Tang, X. X. Che, W. J. Lu, G. B. Chen, Y. Xie, and W. Y. Yu, Physica B: Condensed Matter 404, 2489 (2009).
* Babbush _et al._ [2018c] R. Babbush, C. Gidney, D. W. Berry, N. Wiebe, J. McClean, A. Paler, A. Fowler, and H. Neven, Physical Review X 8, 041015 (2018c).
* Berry _et al._ [2019] D. W. Berry, C. Gidney, M. Motta, J. McClean, and R. Babbush, Quantum 3, 208 (2019).
* Gilyén _et al._ [2019] A. Gilyén, Y. Su, G. H. Low, and N. Wiebe, in _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_ (2019) pp. 193–204.
* Szegedy [2004] M. Szegedy, in _45th Annual IEEE Symposium on Foundations of Computer Science_ (IEEE, 2004) pp. 32–41.
* Childs and Wiebe [2012] A. M. Childs and N. Wiebe, Quantum Information and Computation 12, 901–924 (2012).
* Baldereschi [1973] A. Baldereschi, Physical Review B 7, 5212 (1973).
* Low _et al._ [2018] G. H. Low, V. Kliuchnikov, and L. Schaeffer, arXiv: 1812.00954 (2018).
* Werner _et al._ [2003] H.-J. Werner, F. R. Manby, and P. J. Knowles, The Journal of chemical physics 118, 8149 (2003).
* Maschio and Usvyat [2008b] L. Maschio and D. Usvyat, Physical Review B 78, 073102 (2008b).
* Oumarou _et al._ [2022] O. Oumarou, M. Scheurer, R. M. Parrish, E. G. Hohenstein, and C. Gogolin, arXiv:2212.07957 (2022).
* Lu and Ying [2016] J. Lu and L. Ying, Annals of Mathematical Sciences and Applications 1, 321 (2016).
* Wu _et al._ [2021] K. Wu, X. Qin, W. Hu, and J. Yang, Journal of Chemical Theory and Computation 18, 206 (2021).
* Hu _et al._ [2017] W. Hu, L. Lin, and C. Yang, Journal of Chemical Theory and Computation 13, 5420 (2017).
* Dong _et al._ [2018] K. Dong, W. Hu, and L. Lin, Journal of Chemical Theory and Computation 14, 1311 (2018).
* [84] “Project by BMW, BASF, Samsung SDI and Samsung Electronics to enhance sustainable cobalt mining,” https://www.basf.com/global/en/who-we-are/sustainability/responsible-partnering/cobalt-initiative.html (accessed October 26, 2020).
* Olivetti _et al._ [2017] E. A. Olivetti, G. Ceder, G. G. Gaustad, and X. Fu, Joule 1, 229 (2017).
* Dahn _et al._ [1990] J. R. Dahn, U. Vonsacken, and C. A. Michal, Solid State Ionics 44, 87 (1990).
* Ohzuku _et al._ [1993] T. Ohzuku, A. Ueda, and M. Nagayama, Journal of The Electrochemical Society 140, 1862 (1993).
* Radin _et al._ [2020] M. D. Radin, J. C. Thomas, and A. Van der Ven, Physical Review Materials 4, 043601 (2020).
* Møller and Plesset [1934] C. Møller and M. S. Plesset, Physical Review 46, 618 (1934).
* Cremer [2011] D. Cremer, WIREs Comput Mol Sci 1, 509 (2011).
* Purvis and Bartlett [1982] G. D. Purvis and R. J. Bartlett, The Journal of Chemical Physics 76, 1910 (1982).
* Bartlett and Musiał [2007] R. J. Bartlett and M. Musiał, Reviews of Modern Physics 79, 291 (2007).
* Sun _et al._ [2018] Q. Sun, T. C. Berkelbach, N. S. Blunt, G. H. Booth, S. Guo, Z. Li, J. Liu, J. D. McClain, E. R. Sayfutyarova, S. Sharma, S. Wouters, and G. K. L. Chan, Wiley Interdisciplinary Reviews: Computational Molecular Science 8, e1340 (2018).
* Sun _et al._ [2020] Q. Sun, X. Zhang, S. Banerjee, P. Bao, M. Barbry, N. S. Blunt, N. A. Bogdanov, G. H. Booth, J. Chen, Z. H. Cui, J. J. Eriksen, Y. Gao, S. Guo, J. Hermann, M. R. Hermes, K. Koh, P. Koval, S. Lehtola, Z. Li, J. Liu, N. Mardirossian, J. D. McClain, M. Motta, B. Mussard, H. Q. Pham, A. Pulkin, W. Purwanto, P. J. Robinson, E. Ronca, E. R. Sayfutyarova, M. Scheurer, H. F. Schurkus, J. E. Smith, C. Sun, S. N. Sun, S. Upadhyay, L. K. Wagner, X. Wang, A. White, J. D. Whitfield, M. J. Williamson, S. Wouters, J. Yang, J. M. Yu, T. Zhu, T. C. Berkelbach, S. Sharma, A. Y. Sokolov, and G. K. L. Chan, The Journal of chemical physics 153, 024109 (2020).
* Kim _et al._ [2018] J. Kim, A. D. Baczewski, T. D. Beaudet, A. Benali, M. C. Bennett, M. A. Berrill, N. S. Blunt, E. J. L. Borda, M. Casula, D. M. Ceperley, _et al._ , Journal of Physics: Condensed Matter 30, 195901 (2018).
* Kent _et al._ [2020] P. R. C. Kent, A. Annaberdiyev, A. Benali, M. C. Bennett, E. J. Landinez Borda, P. Doak, H. Hao, K. D. Jordan, J. T. Krogel, I. Kylänpää, J. Lee, Y. Luo, F. D. Malone, C. A. Melton, L. Mitas, M. A. Morales, E. Neuscamman, F. A. Reboredo, B. Rubenstein, K. Saritas, S. Upadhyay, G. Wang, S. Zhang, and L. Zhao, The Journal of Chemical Physics 152, 174105 (2020).
* Goedecker _et al._ [1996] S. Goedecker, M. Teter, and J. Hutter, Physical Review B 54, 1703 (1996).
* [98] J. Hutter, “New optimization of GTH pseudopotentials for PBE, SCAN, PBE0 functionals. GTH pseudopotentials for Hartree-Fock. NLCC pseudopotentials for PBE.”
* VandeVondele and Hutter [2007] J. VandeVondele and J. Hutter, Journal of Chemical Physics 127, 114105 (2007).
* Georges _et al._ [1996b] A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Reviews of Modern Physics 68, 13 (1996b).
* Kotliar _et al._ [2006] G. Kotliar, S. Y. Savrasov, K. Haule, V. S. Oudovenko, O. Parcollet, and C. A. Marianetti, Reviews of Modern Physics 78, 865 (2006).
* Held [2007] K. Held, Advances in Physics 56, 829 (2007).
* Vollhardt [2020] D. Vollhardt (2020) p. 11001.
* Knizia and Chan [2012b] G. Knizia and G. K.-L. Chan, Physical Review Letters 109, 186404 (2012b).
* Knizia and Chan [2013] G. Knizia and G. K. L. Chan, Journal of Chemical Theory and Computation 9, 1428 (2013).
* Liu _et al._ [2023] Y. Liu, O. R. Meitei, Z. E. Chin, A. Dutt, M. Tao, T. Van Voorhis, and I. L. Chuang, arXiv:2301.01457 (2023).
* Rubin _et al._ [2022] N. C. Rubin, J. Lee, and R. Babbush, Journal of Chemical Theory and Computation 18, 1480 (2022).
* Malone _et al._ [2020] F. D. Malone, S. Zhang, and M. A. Morales, Journal of Chemical Theory and Computation 16, 4286 (2020).
* Morales and Malone [2020] M. A. Morales and F. D. Malone, The Journal of Chemical Physics 153, 194111 (2020).
* Chiesa _et al._ [2006] S. Chiesa, D. M. Ceperley, R. M. Martin, and M. Holzmann, Physical Review Letters 97, 076404 (2006).
* Drummond _et al._ [2008] N. Drummond, R. Needs, A. Sorouri, and W. Foulkes, Physical Review B 78, 125106 (2008).
* Azadi and Foulkes [2015] S. Azadi and W. Foulkes, The Journal of Chemical Physics 143, 102807 (2015).
* Holzmann _et al._ [2016] M. Holzmann, R. C. Clay III, M. A. Morales, N. M. Tubman, D. M. Ceperley, and C. Pierleoni, Physical Review B 94, 035126 (2016).
* Dagrada _et al._ [2016] M. Dagrada, S. Karakuzu, V. L. Vildosola, M. Casula, and S. Sorella, Physical Review B 94, 245108 (2016).
* Mihm _et al._ [2021] T. N. Mihm, T. Schäfer, S. K. Ramadugu, L. Weiler, A. Grüneis, and J. J. Shepherd, Nature Computational Science 1, 801 (2021).
* Yang _et al._ [2020] Y. Yang, V. Gorelov, C. Pierleoni, D. M. Ceperley, M. Holzmann, _et al._ , Physical Review B 101, 085115 (2020).
* Sanders _et al._ [2020] Y. R. Sanders, D. W. Berry, P. C. S. Costa, L. W. Tessler, N. Wiebe, C. Gidney, H. Neven, and R. Babbush, PRX Quantum 1, 020312 (2020).
* DeYonker _et al._ [2007] N. J. DeYonker, K. A. Peterson, G. Steyl, A. K. Wilson, and T. R. Cundari, Journal of Physical Chemistry A 111, 11269 (2007).
* Lee and Taylor [1989] T. J. Lee and P. R. Taylor, International Journal of Quantum Chemistry 36, 199 (1989).
* Nielsen and Janssen [1999] I. M. Nielsen and C. L. Janssen, Chemical Physics Letters 310, 568 (1999).
* Lee [2003] T. J. Lee, Chemical Physics Letters 372, 362 (2003).
* Lee _et al._ [2022] J. Lee, H. Q. Pham, and D. R. Reichman, Journal of Chemical Theory and Computation 18, 7024 (2022).
## Appendix A Sparse representation derivations
### A.1 The Pauli operator representation of the one-body term
Here we derive the Pauli operator form of the one-body operator amenable to
implementation as a Majorana select operation. The one-body operator is
rewritten as
$\displaystyle\sum_{p,q=1}^{N/2}h_{p\mathbf{k},q\mathbf{k}}a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}\mapsto\frac{1}{4}\sum_{p,q=1}^{N/2}h_{p\mathbf{k},q\mathbf{k}}[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}+iY_{q\mathbf{k}\sigma})]$
$\displaystyle=\frac{1}{8}\sum_{p,q=1}^{N/2}h_{p\mathbf{k},q\mathbf{k}}[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}+iY_{q\mathbf{k}\sigma})]+\frac{1}{8}\sum_{p,q=1}^{N/2}h_{q\mathbf{k},p\mathbf{k}}[\vec{Z}(X_{q\mathbf{k}\sigma}-iY_{q\mathbf{k}\sigma})][\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p\mathbf{k}\sigma})]$
$\displaystyle=\frac{1}{8}\sum_{p\neq
q=1}^{N/2}h_{p\mathbf{k},q\mathbf{k}}[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}+iY_{q\mathbf{k}\sigma})]-\frac{1}{8}\sum_{p\neq
q=1}^{N/2}h^{*}_{p\mathbf{k},q\mathbf{k}}[\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}-iY_{q\mathbf{k}\sigma})]$
$\displaystyle\quad+\frac{1}{4}\sum_{p=1}^{N/2}h_{p\mathbf{k},p\mathbf{k}}[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p\mathbf{k}\sigma})]$
$\displaystyle=\frac{1}{8}\sum_{p\neq q=1}^{N/2}{\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})\left\\{[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}+iY_{q\mathbf{k}\sigma})]-[\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}-iY_{q\mathbf{k}\sigma})]\right\\}$
$\displaystyle\quad+\frac{i}{8}\sum_{p,q=1}^{N/2}{\rm
Im}(h_{p\mathbf{k},q\mathbf{k}})\left\\{[\vec{Z}(X_{p\mathbf{k}\sigma}-iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}+iY_{q\mathbf{k}\sigma})]+[\vec{Z}(X_{p\mathbf{k}\sigma}+iY_{p\mathbf{k}\sigma})][\vec{Z}(X_{q\mathbf{k}\sigma}-iY_{q\mathbf{k}\sigma})]\right\\}$
$\displaystyle\quad+\frac{1}{2}\sum_{p=1}^{N/2}h_{p\mathbf{k},p\mathbf{k}}(\openone_{p\mathbf{k}\sigma}-Z_{p\mathbf{k}\sigma})$
$\displaystyle=\frac{1}{4}\sum_{p\neq q=1}^{N/2}{\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})\left\\{-i\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+i\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}$
$\displaystyle\quad+\frac{i}{4}\sum_{p,q=1}^{N/2}{\rm
Im}(h_{p\mathbf{k},q\mathbf{k}})\left\\{\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}+\frac{1}{2}\sum_{p=1}^{N/2}h_{pp}(\mathbf{k})(\openone_{p\mathbf{k}\sigma}-Z_{p\mathbf{k}\sigma})$
$\displaystyle=\frac{i}{4}\sum_{p\neq q=1}^{N/2}{\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})\left\\{\vec{Z}X_{q\mathbf{k}\sigma}\vec{Z}Y_{p\mathbf{k}\sigma}+\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}$
$\displaystyle\quad+\frac{i}{4}\sum_{p,q=1}^{N/2}{\rm
Im}(h_{p\mathbf{k},q\mathbf{k}})\left\\{\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}+\frac{1}{2}\sum_{p=1}^{N/2}h_{pp}(\mathbf{k})(\openone_{p\mathbf{k}\sigma}-Z_{p\mathbf{k}\sigma})$
$\displaystyle=\frac{i}{2}\sum_{p,q=1}^{N/2}{\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}+\frac{i}{4}\sum_{p,q=1}^{N/2}{\rm
Im}(h_{p\mathbf{k},q\mathbf{k}})\left\\{\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}+\frac{1}{2}\sum_{p=1}^{N/2}h_{p\mathbf{k},p\mathbf{k}}\openone.$
(74)
In the last line we have used the symmetry of ${\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})$ to combine
$\vec{Z}X_{q\mathbf{k}\sigma}\vec{Z}Y_{p\mathbf{k}\sigma}$ and
$\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}$, and then used the fact
that $iXY=-Z$ to absorb the diagonal ($p=q$) contribution into the sum over all
$p,q$, leaving only the term proportional to the identity separate. The
complete expression for the Hamiltonian also carries the sums over $\sigma$ and
$\mathbf{k}$, which we have omitted here for simplicity. Including those
gives the expression in Eq. (III.1).
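As a concrete cross-check of this identity, the short Python sketch below (ours, for illustration only, and not code from this work) builds the one-body operator directly from a Jordan–Wigner encoding for a small random Hermitian $h$, one spin sector and a single $\mathbf{k}$-point, and compares it against the Pauli-operator form above.

```python
import numpy as np

# Illustrative numerical check of the Pauli form above (our sketch): compare
# sum_{pq} h_pq a_p^dag a_q under Jordan-Wigner with the final line of Eq. (74).

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def zstring(n, p, sigma):
    # \vec{Z} sigma_p : Z on qubits 0..p-1, sigma on qubit p, identity elsewhere
    return kron_all([Z] * p + [sigma] + [I2] * (n - p - 1))

def annihilation(n, p):
    # a_p = Z_0 ... Z_{p-1} (X_p + i Y_p)/2
    return zstring(n, p, (X + 1j * Y) / 2)

rng = np.random.default_rng(0)
n = 3                                   # spatial orbitals in this spin sector
h = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
h = (h + h.conj().T) / 2                # Hermitian one-body block

a = [annihilation(n, p) for p in range(n)]
direct = sum(h[p, q] * a[p].conj().T @ a[q] for p in range(n) for q in range(n))

pauli_form = np.zeros((2**n, 2**n), dtype=complex)
for p in range(n):
    for q in range(n):
        pauli_form += 0.5j * h[p, q].real * zstring(n, p, X) @ zstring(n, q, Y)
        pauli_form += 0.25j * h[p, q].imag * (
            zstring(n, p, X) @ zstring(n, q, X)
            + zstring(n, p, Y) @ zstring(n, q, Y))
    pauli_form += 0.5 * h[p, p].real * np.eye(2**n)

assert np.allclose(direct, pauli_form)
```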
### A.2 One-body correction for sparse case
Next we derive the effective one-body term from the two-electron part of the
Hamiltonian. In the case $p=q$ and $\mathbf{Q}=0$, the second term in square
brackets in Eq. (20) can be written as
$\displaystyle-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}^{\dagger}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle=V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle\quad-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}(a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}+a_{p\mathbf{k}\sigma}^{\dagger}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma})a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle=V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle\quad-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}.$
(75)
In the last line we have used the fact that for $p=q$ and $\mathbf{Q}=0$,
$a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}+a_{p\mathbf{k}\sigma}^{\dagger}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}$
is just the identity, so this becomes a one-body operator.
Similarly, if $r=s$ and $\mathbf{Q}=0$ (but $p\neq q$), the second term in
square brackets in Eq. (20) can be written as
$\displaystyle-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau}$
$\displaystyle=V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle\quad-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}(a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}+a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau})$
$\displaystyle=V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle\quad-V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}.$
(76)
Thus we see that in either case ($p=q$ or $r=s$), we have the same expression
as in Eq. (21), plus a one-body operator. Moreover, because of the symmetry of
$V$ (in swapping the $pq$ pair with the $rs$ pair), these corrections are
equal. Note also that we can relabel swapping $p$ with $q$ and $r$ with $s$ to
replace
$V^{*}_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q\mathbf{k}\sigma}^{\dagger}$
with (now explicitly taking $\mathbf{Q}=0$)
$V^{*}_{q\mathbf{k},p\mathbf{k},s\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}a_{q\mathbf{k}\sigma}a_{p\mathbf{k}\sigma}^{\dagger}=-V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}.$
(77)
This means that the contribution of these corrections is
$\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p,q=1}^{N/2}\left(\sum_{r=1}^{N/2}\sum_{\mathbf{k}^{\prime}}^{N_{k}}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}.$
(78)
In this expression the constant factor is determined as follows. There is a
factor of $1/4$ in Eq. (20). Next, there is a factor of 2 because we have the
contribution from $p=q$ as well as that from $r=s$. Last, there is the factor
of 2 from the summation over the spin $\tau$. As a result, these factors
cancel to give 1 above. Therefore, for $p\neq q$, we can combine this one-body
term with $h_{pq}$ as
$h^{\prime}_{pq}=h_{pq}+\sum_{r=1}^{N/2}\sum_{\mathbf{k}^{\prime}}^{N_{k}}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}.$
(79)
Next we consider the case where $p=q$, $r=s$, and $\mathbf{Q}=0$. Then the
second term in square brackets in Eq. (20) can be written as
$\displaystyle
V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}^{\dagger}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau}$
$\displaystyle=V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle\quad+V^{*}_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}(a_{p\mathbf{k}\sigma}^{\dagger}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}^{\dagger}a_{s\mathbf{k}^{\prime}\tau}-a_{p\mathbf{k}\sigma}a_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}^{\dagger}a_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q})\tau}a_{s\mathbf{k}^{\prime}\tau}^{\dagger}).$
(80)
The operators in brackets in the final line can be written as, taking $p=q$,
$r=s$, and $\mathbf{Q}=0$,
$\displaystyle
a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}+a_{p\mathbf{k}\sigma}a_{p\mathbf{k}\sigma}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}-a_{p\mathbf{k}\sigma}a_{p\mathbf{k}\sigma}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}-a_{p\mathbf{k}\sigma}a_{p\mathbf{k}\sigma}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}a_{r\mathbf{k}^{\prime}\tau}^{\dagger}$
$\displaystyle=a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}-a_{p\mathbf{k}\sigma}a_{p\mathbf{k}\sigma}^{\dagger}$
$\displaystyle=a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}+a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}-\openone.$
(81)
By symmetry of swapping $p$ and $q$, and swapping $r$ and $s$, we must be able
to simplify the final line of (80) to
$V_{p\mathbf{k},p\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}(a_{r\mathbf{k}^{\prime}\tau}^{\dagger}a_{r\mathbf{k}^{\prime}\tau}+a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}-\openone).$
(82)
That is, this value of $V$ is real. We can also relabel $r$ and $p$ and use
symmetry to show the contribution from the first term in Eq. (82) is
equivalent to
$V_{r\mathbf{k}^{\prime},r\mathbf{k}^{\prime},p\mathbf{k},p\mathbf{k}}a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}=V_{p\mathbf{k},p\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}.$
(83)
Hence the contribution of these corrections is
$\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p=1}^{N/2}\left(\sum_{r=1}^{N/2}\sum_{\mathbf{k}^{\prime}}^{N_{k}}V_{p\mathbf{k},p\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)(a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}-\openone/2).$
(84)
In this case, the constant factor comes from $1/4$ in Eq. (20), and a factor
of 2 from the sum over $\tau$. As a result the expression in Eq. (82) is
divided by 2 here, and apart from the identity we have the same expression as
that accounting for only one of the pairs $p,q$ and $r,s$ being equal. The
operator proportional to the identity can be ignored in the implementation of
the Hamiltonian because it just gives a global shift in the eigenvalues.
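As a concrete illustration, the correction of Eq. (79) amounts to a single tensor contraction over the $\mathbf{Q}=0$ block of the two-electron integrals. The sketch below is ours (the array layout is an assumption, not a convention of this work); as noted above, the diagonal $p=q$ entries receive the same correction up to an identity shift that can be ignored, so the contraction is applied for all $p,q$.

```python
import numpy as np

# Illustrative sketch of the one-body correction of Eq. (79):
#   h'_{pk,qk} = h_{pk,qk} + sum_{r,k'} V_{pk,qk,rk',rk'}
# h_k:  shape (Nk, n, n)            -- one-body blocks h_{pk,qk}
# V_q0: shape (Nk, Nk, n, n, n, n)  -- V[k, k', p, q, r, s] at Q = 0

def one_body_correction(h_k: np.ndarray, V_q0: np.ndarray) -> np.ndarray:
    # trace over the (r, s) pair at r = s and sum over k'
    correction = np.einsum("kKpqrr->kpq", V_q0)
    return h_k + correction

# tiny shape check
Nk, n = 2, 3
rng = np.random.default_rng(1)
h_k = rng.normal(size=(Nk, n, n))
V_q0 = rng.normal(size=(Nk, Nk, n, n, n, n))
assert one_body_correction(h_k, V_q0).shape == (Nk, n, n)
```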
### A.3 Complexity for sparse implementation
The fundamental operator we are aiming to implement is in the form of Eq.
(III.1) for the one-body term and Eq. (III.1) for the two-body term. In both
we have a real part and an imaginary part; for the one-body term this is
$h_{pq}$, and for the two-body term this is
$V_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime}}$.
We need to perform a state preparation that provides _real_ amplitudes for the
real and imaginary parts of $h$ and $V$ on separate basis states (not just
real and imaginary parts of an amplitude on each basis state). This means the
number of items of data to output is doubled in order to give the real and
imaginary parts. The state preparation is otherwise essentially unchanged from
that in [71], as described in Eq. (48) of that work and the accompanying
explanation.
Recall that in the sparse state preparation procedure, we use a register
indexing the nonzero entries (see Eq. (43) of [71]). That is used to output
“ind”, “alt”, and “keep” values via QROM (see Eq. (44) of [71]). The “ind”
values are values of $p,q,r,s$, as well as the sign needed, and a qubit
distinguishing between the one- and two-body terms. The “alt” values are
alternate values of these quantities, and “keep” governs the probability of
swapping these registers for the state preparation via coherent alias
sampling. Since we need a bit to flag whether the amplitude being produced is
for the real or imaginary part, that would indicate we need two extra bits
output, one for the “ind” value and one for the “alt” value. However, we can
use one bit in the register indexing the nonzero entries to flag between real
and imaginary parts. It is most convenient to make this register the least
significant bit. Then we just need to produce “alt” values of this register,
so the output size is only increased by 1 bit instead of 2. A requirement for
this approach is that the non-zero entries of $V$ that are retained are the
same for the real and imaginary parts.
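For reference, the classical preprocessing that produces such “alt” and “keep” data follows the standard alias-method construction. The sketch below is ours and purely illustrative (it is not the specific routine used in this work or in [71], and details such as rounding may differ): given the $d$ nonzero weights and $\aleph$ keep-register bits, it returns integer keep values in $[0,2^{\aleph}]$ and alternate indices such that sampling an index uniformly, keeping it with probability keep$/2^{\aleph}$ and otherwise taking its alternate reproduces the target distribution up to the discretization error.

```python
def alias_tables(weights, aleph):
    # Standard alias-method preprocessing (our illustrative sketch).
    d = len(weights)
    cap = 2 ** aleph
    total = float(sum(weights))
    # integer budgets mu[i] ~ d * cap * w_i / total, forced to sum to d * cap
    mu = [round(d * cap * w / total) for w in weights]
    mu[-1] += d * cap - sum(mu)
    keep, alt = [cap] * d, list(range(d))
    small = [i for i in range(d) if mu[i] < cap]
    large = [i for i in range(d) if mu[i] > cap]
    while small and large:
        i, j = small.pop(), large.pop()
        keep[i], alt[i] = mu[i], j        # keep i with prob mu[i]/cap, else take j
        mu[j] -= cap - mu[i]              # j donates the remaining weight
        (small if mu[j] < cap else large).append(j)
    return keep, alt

keep, alt = alias_tables([5.0, 1.0, 2.0, 0.5], aleph=8)
```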
A further increase in the size of the output register is because we need to
output values of $\mathbf{k}$, $\mathbf{k}^{\prime}$, and $\mathbf{Q}$. The
number of bits needed to store $\mathbf{k}$ is not simply $\lceil\log
N_{k}\rceil$ because $\mathbf{k}$ is a vector. The number of bits will be
denoted $n_{k}$. If we assume that the number of values is given by the
product of numbers in the three dimensions $N_{k}=N_{x}N_{y}N_{z}$, then
$n_{k}=\lceil\log N_{x}\rceil+\lceil\log N_{y}\rceil+\lceil\log N_{z}\rceil.$
(85)
Therefore $\mathbf{k}$, $\mathbf{k}^{\prime}$, and $\mathbf{Q}$ increase the
size of both the ind and alt registers by $3n_{k}$, for a total of $6n_{k}$.
The size of the output is given in Eq. (A13) of [32] as
$m=\aleph+8\lceil\log(N/2)\rceil+4$, and would here be increased to
$\aleph+8\lceil\log(N/2)\rceil+6n_{k}+5,$ (86)
where we have also increased the size of the output by 1 to account for
selecting between real and imaginary parts, as discussed above. The quantity
$\aleph$ is the number of bits for the “keep” register.
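As a small worked example, the bit count $n_{k}$ of Eq. (85) and the output size of Eq. (86) can be evaluated directly; the helper below is ours (names and the example parameters are illustrative only).

```python
def ceil_log2(n: int) -> int:
    # exact ceiling of log2(n) for integer n >= 1
    return (n - 1).bit_length()

def nk_bits(Nx: int, Ny: int, Nz: int) -> int:
    # Eq. (85): one field per dimension of the k-mesh
    return ceil_log2(Nx) + ceil_log2(Ny) + ceil_log2(Nz)

def sparse_output_size(N: int, Nx: int, Ny: int, Nz: int, aleph: int) -> int:
    # Eq. (86): aleph + 8*ceil(log(N/2)) + 6*n_k + 5
    return aleph + 8 * ceil_log2(N // 2) + 6 * nk_bits(Nx, Ny, Nz) + 5

# e.g. N = 52 spin orbitals on a 2x2x2 k-mesh with aleph = 10 keep bits
print(nk_bits(2, 2, 2), sparse_output_size(52, 2, 2, 2, 10))   # -> 3 73
```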
The remaining consideration for the sparse state preparation is the symmetry.
In prior work there were three symmetries, with swap of $p,q$ with $r,s$ as
well as swaps within the $p,q$ and $r,s$ pairs. The method to take advantage
of this was described from about Eq. (49) on in [71]. There you only perform
the preparation for a restricted range of $p,q,r,s$, then use three qubits to
control swaps to generate the symmetries.
Here we have the symmetry with swap of $p,q$ with $r,s$, but we can only swap
the $p,q$ and $r,s$ pairs simultaneously. We also need to take the complex
conjugate when performing that swap. In order to implement the symmetries
here, we will have two control qubits. One qubit will be in a
$\mathinner{|{+}\rangle}$ state and control swap of the $p,q$ with $r,s$ as
before, except we now have the registers containing
$\mathbf{k},\mathbf{k}^{\prime},\mathbf{k}{\ominus}\mathbf{Q},\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$
to swap. That qubit is only set to $\mathinner{|{+}\rangle}$ for the two-body
term, since that symmetry does not make sense for the one-body term. The
second qubit is used to simultaneously swap the $p,q$ and $r,s$ pairs, as well
as the registers containing $\mathbf{k}$, etc. It will also be used as a
control for a $Z$ phase gate on a qubit flagging imaginary components. That is
a Clifford gate and is not included in the Toffoli count.
The net result is that the cost of the swaps to produce these symmetries is
unchanged from that in [71], except in that we are counting the qubits needed
to store $\mathbf{k}$, etc, as well as $p,q,r,s$. Since a controlled swap of
two qubits can be performed with a single Toffoli (and Clifford gates), the
Toffoli cost of the two controlled swaps of registers is the total number of
qubits used to store $p,q,r,s$ as well as
$\mathbf{k},\mathbf{k}^{\prime},\mathbf{k}{\ominus}\mathbf{Q},\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$,
which is $4\lceil\log(N/2)\rceil+4n_{k}$. Note that in the state preparation
we will be producing $\mathbf{k},\mathbf{k}^{\prime},\mathbf{Q}$, and need to
compute $\mathbf{k}{\ominus}\mathbf{Q}$ and
$\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$ before performing the swaps for these
symmetries.
Assuming for the moment that $N_{x},N_{y},N_{z}$ are all powers of two,
the number of Toffolis needed for the modular subtractions of the three
components will be $n_{k}-3$, unless one or more of $N_{x},N_{y},N_{z}$ is
equal to 1. It is simpler to give the cost as $n_{k}$ Toffolis, to avoid
needing to address special cases. A further complication is when one or more
of $N_{x},N_{y},N_{z}$ are not powers of two. In this case, the subtraction
can be performed in the usual way for two’s complement binary. Then you can
check if the result for any component is negative, and if it is then add the
appropriate $N_{x},N_{y},N_{z}$ to make it non-negative. The controlled
addition of a classically given number has complexity $n_{k}$, so this at
worst doubles the complexity to $2n_{k}$ for the modular subtraction.
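A classical reference for this componentwise modular subtraction is sketched below (ours, for illustration; the quantum circuit itself is the subtract-then-conditionally-add construction described above).

```python
def k_minus_Q(k, Q, mesh):
    # k, Q: 3-component integer k-point indices; mesh = (Nx, Ny, Nz)
    out = []
    for ki, qi, ni in zip(k, Q, mesh):
        d = ki - qi
        if d < 0:          # mirrors the controlled addition of the classical constant
            d += ni
        out.append(d)
    return tuple(out)

assert k_minus_Q((0, 1, 2), (1, 1, 3), (2, 2, 4)) == (1, 0, 3)
```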
The other major feature that we need to account for is the modified select
operation needed. The basic circuit primitive was given in Figure 13 of [32],
in order to apply $\vec{Z}Y_{p,\sigma}$ followed by $\vec{Z}X_{q,\sigma}$. A
more complicated circuit primitive was given in Figure 1 of [71], which
included a test for $p=q$ that is not needed in the approach of [32]. Here the
scheme is more complicated, because instead of having a fixed sequence where
we need to apply $Y$ followed by $X$ we have every combination. This can be
achieved by simply performing each twice; once with a controlled $\vec{Z}Y$
and once with a controlled $\vec{Z}X$, with a doubling of the Toffoli
complexity. That can be seen easily from the diagram in Figure 9 of [2]. There
a control qubit is used, so that can be used to control application of this
circuit with $Y$, then to control application of this circuit with $X$.
To understand how $X$ versus $Y$ is selected, note that there are effectively
five bits controlling here. Let us call the bit selecting between the one- and
two-body terms $b_{0}$; this is created in the sparse state preparation. Let
us call the bit selecting real versus imaginary parts $b_{1}$; this is again
created in the state preparation. There also needs to be a bit $b_{2}$ for
selecting between the two lines for real and the two lines for imaginary in
the expression in Eq. (III.1). Then we have $b_{3}$ to select between the two
terms in the first set of square brackets in each line of Eq. (III.1), and a
bit $b_{4}$ selecting between the two terms in the second set of square
brackets.
Now, considering the operators indexed by $r,s$ first, these are applied for
the two-body terms but not the one-body term. This control of the operations
adds only one Toffoli to the cost. For the first operation,
$\vec{Z}X_{s\mathbf{k}^{\prime}\tau}$ or
$\vec{Z}Y_{s\mathbf{k}^{\prime}\tau}$, we can see that the selection between
$X$ and $Y$ depends only on bit $b_{4}$. For the second operation, the
selection is independent of whether we have the real or imaginary part. We
select $X$ if we have $b_{4}=0$ (the first term) and $b_{2}=0$ (the first
line), or if we have $b_{4}=b_{2}=1$. To create a bit selecting between $X$
and $Y$ we can simply perform a CNOT between these bits, with no Toffoli cost.
Next, consider the operators indexed by $p,q$. For simplicity we will first
consider just the two-body terms. Again the first operation can select between
$X$ and $Y$ just by using the bit $b_{3}$ selecting between the terms. Then
for the second operation, we select $X$ if we have $b_{1},b_{2},b_{3}$ equal
to $0,0,0$, or $0,1,1$, or $1,0,1$, or $1,1,0$. It is easily seen that if we
apply CNOTs with $b_{1}$ then $b_{2}$ as control and $b_{3}$ as target, then
we should apply $X$ if we have $b_{3}=0$. This selection can be performed
without Toffolis again.
Now to take account of how the one-body terms are applied, it is convenient to
rewrite the first line of Eq. (III.1) as
$-\frac{i}{4}\sum_{p,q=1}^{N/2}{\rm
Re}(h_{p\mathbf{k},q\mathbf{k}})\left\\{\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}-\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right\\}.$
(87)
Then the selection between the operations is identical to that for $b_{2}=1$
(second lines) for the two-body part. Therefore, for the above analysis of the
two-body implementation, we can replace $b_{2}$ with a bit that is 1 if
$b_{2}=1$ OR $b_{0}=0$. This operation requires one more Toffoli.
Another modification we need to make is to compute
$\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$ and $\mathbf{k}{\ominus}\mathbf{Q}$
to use in the selection for the two-body operations. As explained above, these
modular subtractions have complexity at worst $2n_{k}$. The calculation
$\mathbf{k}{\ominus}\mathbf{Q}$ needs to be controlled on the bit $b_{0}$
selecting between the one- and two-body terms, which increases its complexity
by $n_{k}$. Therefore the complexity of this arithmetic is $3\lceil\log
N_{k}\rceil$ Toffolis. We can keep the working qubits in order to uncompute
this arithmetic with Clifford gates.
Finally, we should account for the phase factors needed in the implementation.
The phase factors needed are as follows.
1. 1.
We should apply an $i$ phase factor on the one-body term. That can be
implemented with an $S$ gate which is Clifford.
2. 2.
If we have the one-body term (flagged by $b_{0}=0$) we should flip the sign of
the real part (flagged by $b_{1}=0$). This can be done with a controlled
phase, which is again Clifford.
3. 3.
For the two-body term ($b_{0}=1$), real ($b_{1}=0$), and second line
($b_{2}=1$) we should flip the sign. This doubly controlled phase has a cost
of one Toffoli.
4. 4.
We should flip the sign with $b_{3}=1$ if we have the two-body term
($b_{0}=1$) and the second line for real ($b_{1}=0,b_{2}=1$) or the first line
for imaginary ($b_{1}=1,b_{2}=0$). We should also flip the sign with $b_{3}=1$
if we have the one-body term ($b_{0}=0$) and real ($b_{1}=0$). To achieve this
we can first perform a CNOT with $b_{1}$ as control and $b_{2}$ as target.
Then, if $b_{0}=0,b_{1}=0$ OR $b_{0}=1,b_{2}=1$ we should apply a $Z$ gate to
the qubit containing $b_{3}$. This can be achieved with two double controlled
phase gates, so has Toffoli cost 2.
5. 5.
We should flip the sign for $b_{4}=1$ if we have the second line $b_{2}=1$.
That is just a controlled phase with no non-Clifford cost.
As a result, the total complexity of implementing these phase factors is 3
Toffoli gates.
The total additional complexity is therefore $2$ Toffolis for the selection of
$X$ versus $Y$ when we account for needing to perform the one- or two-body
term, $3\lceil\log N_{k}\rceil$ Toffolis for subtractions, 3 Toffolis for
phase factors, and doubling the selection cost to select between $X$ and $Y$.
The two Toffolis to account for the one-body term were one for selecting
performing the operators indexed by $r,s$, and another Toffoli to perform an
OR between $b_{0}$ and $b_{2}$ for the operators indexed by $p,q$.
A further complication arises where the $h$ and $V$ are dependent on the spins
$\sigma$ and $\tau$. This is easily accounted for by outputting the values of
$\sigma,\tau$ as part of the state preparation. This means that the size of
both the “ind” and “alt” outputs are increased by 2, making the total size of
the output increase by 4 to be
$\aleph+8\lceil\log(N/2)\rceil+6n_{k}+9.$ (88)
Often there is the symmetry that for $V$ the value with
$\sigma=\uparrow,\tau=\downarrow$ are the same as for
$\sigma=\downarrow,\tau=\uparrow$. This means that we can omit the case
$\sigma=\downarrow,\tau=\uparrow$, and use a swap of these two qubits
controlled by an ancilla qubit in the usual way for obtaining symmetries. In
the detailed costing below, we give results for the case where $h$ and $V$ are
not dependent on spin for simplicity.
The QROM output size is
$m=\aleph+8n_{N}+6n_{k}+5,$ (89)
where $n_{N}=\lceil\log(N/2)\rceil$. This output size is increased above that
analysed in [32]. Then, using that output size, the formula for the cost of
the preparation with $d$ unique nonzero entries is
$\lceil d/k_{1}\rceil+m(k_{1}-1)$ (90)
and of the inverse preparation is
$\lceil d/k_{2}\rceil+k_{2}.$ (91)
Here $k_{1}$ and $k_{2}$ must be chosen as powers of 2. This formula is the
same as in [32], but with the modified value of $m$.
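Since $k_{1}$ and $k_{2}$ are free parameters, one simply minimizes Eqs. (90) and (91) over powers of two; a small helper (ours, illustrative only) doing so is sketched below.

```python
from math import ceil

def best_qrom_chunking(d: int, m: int):
    # minimize Eq. (90) and Eq. (91) over powers of two k1, k2 (our helper)
    powers = [1 << t for t in range(d.bit_length() + 1)]
    prep = min((ceil(d / k) + m * (k - 1), k) for k in powers)   # Eq. (90)
    unprep = min((ceil(d / k) + k, k) for k in powers)           # Eq. (91)
    return prep, unprep   # each is (Toffoli count, chosen k)

# e.g. d = 10000 nonzero entries, output size m = 73 bits
print(best_qrom_chunking(10_000, 73))
```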
To begin the state preparation, we need to prepare an equal superposition
state over $d$ basis states. The analysis is described in [32], which gives
the costing $3\lceil\log d\rceil-3\eta+2b_{r}-9$ Toffoli gates. Here $\eta$ is
a number such that $2^{\eta}$ is a factor of $d$, and $b_{r}$ is a number of
bits used for rotation of an ancilla qubit to improve the amplitude of
success. This is a cost needed both for the preparation and inverse
preparation.
Other minor Toffoli costs are as follows. We use extra ancillas to save cost,
because a large number of ancillas were used for the QROM, and can be reused
here without increasing the maximum number of ancillas needed. In the
following we use the notation $n_{N}=\lceil\log(N/2)\rceil$.
1. 1.
Perform select as shown in Figure 13 of [32] twice, but controlling between
$X$ and $Y$. This complexity is $4NN_{k}-6$, since we have $8$ times a
complexity of $NN_{k}/2-1$ for each of the selected operations, plus 2
Toffolis to generate the qubits we need for the control. There were two
Toffolis needed to account for selecting between one- and two- body terms, and
otherwise the selection to account for the various terms can be performed
using Clifford gates. In addition to this, we need to perform swaps controlled
by spin qubits twice for each of the two spin qubits, with a complexity
$2NN_{k}$. That then gives a total complexity of this step $6NN_{k}-6$.
2. 2.
The state preparation needs an inequality test on $\aleph$ qubits, as well as
controlled swaps. The controlled swaps are on $4n_{N}+3n_{k}+2$ qubits. Here
$4\lceil\log(N/2)\rceil$ are for the values of $p$, $q$, $r$, and $s$, the
$3n_{k}$ is for the $\mathbf{k}$, $\mathbf{k}^{\prime}$, and $\mathbf{Q}$
values, a $+1$ is for the qubit which distinguishes between the one- and two-
electron terms, and a further $+1$ comes from the qubit for selecting between
the real and imaginary parts. There are also ind and alt values of the sign,
but the correct phase can be applied with Clifford gates, so this does not add
to the Toffoli cost. The cost of the inequality test on $\aleph$ qubits is
$\aleph$. As in [32], we can eliminate the non-Clifford cost of the inverse
preparation using ancillas and measurements, so the Toffoli cost is
$\aleph+4n_{N}+3n_{k}+2$.
3. 3.
The controlled swaps used to generate the symmetries have a cost of
$4n_{N}+4n_{k}$. This is increased by $4n_{k}$ over that in [32], since we
need to swap $\mathbf{k}$ registers as well. Although there are only two
controlled swaps rather than three in [32], two of the controlled swaps in
[32] together act on as many qubits as one controlled swap here, so the factor
of 4 is the same as in [32]. A further $4n_{k}$ cost is for computing
$\mathbf{k}{\ominus}\mathbf{Q}$ and $\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$
(or $2n_{k}$ if $N_{x},N_{y},N_{z}$ are powers of 2), and an extra $n_{k}$ is
needed to make the computation of $\mathbf{k}{\ominus}\mathbf{Q}$ controlled.
Again these controlled swaps can be inverted for the inverse preparation with
measurements and Clifford gates. Thus the total Toffoli cost here is
$4n_{N}+9n_{k}$.
4. 4.
For the qubitization construction a reflection on the ancilla is needed as
well. The qubits that need to be reflected on are
1. (a)
the $\lceil\log d\rceil$ qubits for preparing the state,
2. (b)
$\aleph$ qubits for the equal superposition state in coherent alias sampling,
3. (c)
two qubits that are used for controlled swaps to generate the symmetries of
the state,
4. (d)
the two spin qubits,
5. (e)
the ancilla qubit that is rotated to produce the equal superposition state,
6. (f)
and the qubits storing $b_{2},b_{3},b_{4}$, which are also used in the linear
combination of unitaries.
There is no non-Clifford Toffoli cost for the preparation on
$b_{2},b_{3},b_{4}$, since an equal superposition may be prepared with a
Hadamard. They are control qubits that need to be reflected upon for the
qubitization, so add a cost of 3 Toffolis to the reflection giving a total
cost $\lceil\log d\rceil+\aleph+6$.
5. 5.
As before, the control for the phase estimation uses unary iteration on the
control registers, with one more Toffoli for each step. The control by these
registers is implemented simply by controlling the reflection, which needs
just one Toffoli per step.
6. 6.
An extra three Toffolis are needed for the phase factors.
Adding all these minor costs together gives, in the spin-independent case
$\displaystyle 2(3\lceil\log
d\rceil-3\eta+2b_{r}-9)+(6NN_{k}-6)+(\aleph+4n_{N}+3n_{k}+2)+4n_{N}+9n_{k}+\lceil\log
d\rceil+\aleph+6+2+3$ $\displaystyle=6NN_{k}+8n_{N}+10\lceil\log
N_{k}\rceil+2\aleph+7\lceil\log d\rceil-6\eta+4b_{r}-8.$ (92)
The total cost for a single step is then
$\left\lceil\frac{d}{k_{1}}\right\rceil+m(k_{1}-1)+\left\lceil\frac{d}{k_{2}}\right\rceil+k_{2}+6NN_{k}+8n_{N}+12n_{k}+2\aleph+7\lceil\log
d\rceil-6\eta+4b_{r}-8,$ (93)
with $m=\aleph+8n_{N}+6\lceil\log N_{k}\rceil+5$,
$n_{N}=\lceil\log(N/2)\rceil$, $\eta$ an integer such that $2^{\eta}$ is a
factor of $d$, and $b_{r}$ the number of bits used for rotation of an ancilla
qubit.
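The per-step Toffoli count of Eq. (93) is straightforward to tabulate. The sketch below is our transcription of the displayed formula (helper names and example parameters are ours), with $n_{k}$ computed from Eq. (85); for power-of-two meshes $n_{k}=\lceil\log N_{k}\rceil$.

```python
from math import ceil

def ceil_log2(n: int) -> int:
    return (n - 1).bit_length()

def eta_of(d: int) -> int:
    # largest eta such that 2**eta divides d
    return (d & -d).bit_length() - 1

def sparse_toffolis_per_step(N, Nx, Ny, Nz, d, aleph, b_r, k1, k2):
    # direct transcription of Eq. (93)
    N_k = Nx * Ny * Nz
    n_N = ceil_log2(N // 2)
    n_k = ceil_log2(Nx) + ceil_log2(Ny) + ceil_log2(Nz)      # Eq. (85)
    m = aleph + 8 * n_N + 6 * ceil_log2(N_k) + 5             # QROM output size
    return (ceil(d / k1) + m * (k1 - 1) + ceil(d / k2) + k2
            + 6 * N * N_k + 8 * n_N + 12 * n_k + 2 * aleph
            + 7 * ceil_log2(d) - 6 * eta_of(d) + 4 * b_r - 8)

# e.g. N = 52, a 2x2x2 k-mesh, d = 10000, aleph = 10, b_r = 7, k1 = 32, k2 = 128
print(sparse_toffolis_per_step(52, 2, 2, 2, 10_000, 10, 7, 32, 128))
```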
We may count the qubit costs by considering the maximum used during the QROM,
as the advanced QROM has a high qubit usage that will not be exceeded in other
parts of the algorithm. The qubit costs are therefore as follows.
1. 1.
The control register for the phase estimation uses
$\lceil\log(\mathcal{I}+1)\rceil$ qubits, and there are
$\lceil\log(\mathcal{I}+1)\rceil-1$ qubits for the unary iteration.
2. 2.
The system uses $NN_{k}$ qubits.
3. 3.
The $\lceil\log d\rceil+\aleph+8$ qubits that need to be reflected upon listed
above.
4. 4.
A qubit is needed to flag success of the equal superposition state
preparation.
5. 5.
The phase gradient state uses $b_{r}$ qubits.
6. 6.
The QROM uses qubits (including the output)
$mk_{1}+\lceil\log(d/k_{1})\rceil$.
This gives a total number of logical qubits
$2\lceil\log(\mathcal{I}+1)\rceil+NN_{k}+\lceil\log
d\rceil+b_{r}+\aleph+mk_{1}+\lceil\log(d/k_{1})\rceil+8,$ (94)
with $m=\aleph+8n_{N}+6\lceil\log N_{k}\rceil+5$.
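Similarly, Eq. (94) can be evaluated directly; in the sketch below (again ours and illustrative) the argument n_steps plays the role of $\mathcal{I}$, the number of steps indexed by the phase-estimation unary iteration.

```python
from math import ceil

def ceil_log2(n: int) -> int:   # as in the previous sketch
    return (n - 1).bit_length()

def sparse_logical_qubits(N, Nx, Ny, Nz, d, aleph, b_r, k1, n_steps):
    # direct transcription of Eq. (94)
    N_k = Nx * Ny * Nz
    n_N = ceil_log2(N // 2)
    m = aleph + 8 * n_N + 6 * ceil_log2(N_k) + 5
    return (2 * ceil_log2(n_steps + 1) + N * N_k + ceil_log2(d)
            + b_r + aleph + m * k1 + ceil_log2(ceil(d / k1)) + 8)

print(sparse_logical_qubits(52, 2, 2, 2, 10_000, 10, 7, 32, 10**6))
```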
## Appendix B Single-factorization derivations
### B.1 One-body correction for single factorization
For the single factorized form of the Hamiltonian, we may use the same
expressions for $\hat{A}$ and $\hat{B}$ for the case $\mathbf{Q}=0$ as for
$\mathbf{Q}\neq 0$, with an additional correction proportional to the
identity. This yields a one-body correction in the case of $\hat{A}$ but not
$\hat{B}$. For $\hat{A}_{n}(\mathbf{Q}=0)$ we obtain a term proportional to
the identity, as follows
$\displaystyle\hat{A}_{n}(\mathbf{Q}=0)$
$\displaystyle=\frac{1}{2}\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}\sum_{p\neq
q}\left(L_{p\mathbf{k},q\mathbf{k},n}a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}+L_{p\mathbf{k},q\mathbf{k},n}^{*}a_{q\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}\right)+\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}\sum_{p}L_{p\mathbf{k}p\mathbf{k},n}a_{p\mathbf{k}\sigma}^{\dagger}a_{p\mathbf{k}\sigma}$
(95)
$\displaystyle=\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p\neq
q}^{N/2}\left(\frac{i{\rm
Re}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}\right)\right.$
$\displaystyle\quad+\left.\frac{i{\rm
Im}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}\right)\right)+\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p}^{N/2}\frac{L_{p\mathbf{k},p\mathbf{k},n}}{2}(\openone-Z)$
$\displaystyle=\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{pq}^{N/2}\left(\frac{i{\rm
Re}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}\right)\right.$
$\displaystyle\quad+\left.\frac{i{\rm
Im}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]}{4}\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q(\mathbf{k}{\ominus}\mathbf{Q})\sigma}\right)\right)+\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p}^{N/2}\frac{L_{p\mathbf{k}p\mathbf{k},n}}{2}\openone.$
(96)
Here we have used the symmetry of $L$, so $L_{p\mathbf{k}p\mathbf{k},n}$ is
real. This derivation is similar to that for the one-body term in Appendix
A.1.
Because $\hat{A}_{n}(\mathbf{Q}=0)$ is squared, the identity term gives rise
to a one-body correction
$\displaystyle\frac{i}{4}\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{n}^{M}\sum_{\mathbf{k}}^{N_{k}}\sum_{p,q}^{N/2}\left({\rm
Re}[L_{p\mathbf{k}q\mathbf{k},n}]\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}\right)\right.$
$\displaystyle\quad+\left.{\rm
Im}[L_{p\mathbf{k}q\mathbf{k},n}]\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right)\right)\sum_{\mathbf{k}^{\prime}}^{N_{k}}\sum_{r=1}^{N/2}L_{r\mathbf{k}^{\prime}r\mathbf{k}^{\prime},n}$
$\displaystyle=\frac{i}{4}\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{\mathbf{k}}^{N_{k}}\sum_{p,q}^{N/2}\sum_{\mathbf{k}^{\prime}}^{N_{k}}\sum_{r=1}^{N/2}\left({\rm
Re}[V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}]\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}-\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}\right)\right.$
$\displaystyle\quad+\left.{\rm
Im}[V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}]\left(\vec{Z}X_{p\mathbf{k}\sigma}\vec{Z}X_{q\mathbf{k}\sigma}+\vec{Z}Y_{p\mathbf{k}\sigma}\vec{Z}Y_{q\mathbf{k}\sigma}\right)\right).$
(97)
Here there was a factor of $1/2$ on the square of $\hat{A}_{n}(\mathbf{Q}=0)$,
a factor of 2 from the cross term in the square, a factor of 2 from the sum
over the spin on the identity, and so a factor of $1/2$ has been cancelled.
The form of this correction is identical to that for the one-body term, except
$h_{p\mathbf{k},q\mathbf{k}}$ is replaced with
$\sum_{r=1}^{N/2}\sum_{\mathbf{k}^{\prime}}^{N_{k}}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}.$
(98)
For $\hat{B}_{n}(\mathbf{Q}=0)$, it is easily seen that the symmetry
$L_{p\mathbf{k}q\mathbf{k},n}=L^{*}_{q\mathbf{k}p\mathbf{k},n}$ implies that
$\hat{B}_{n}(\mathbf{Q}=0)=0$. If we use the form for
$\hat{B}_{n}(\mathbf{Q}=0)$ in terms of Pauli operators given in Eq. (III.2),
then it will be proportional to the identity due to the case $p=q$. Squaring
then just gives a correction proportional to the identity (which can be
ignored in the implementation because it is just an energy shift), and it
gives no one-body correction. As a result we add the expression in Eq. (98) to
$h_{pq}$ to obtain the complete one-body Hamiltonian given in Eq. (38).
### B.2 Complexity for single-factorized representation
To see the changes we need to make to the algorithm for the single-factorized
representation, recall that the two-body term was of the form [32]
$W^{\prime}=\frac{1}{8}\sum_{\ell=1}^{L}\left(\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{p,q=1}^{N/2}W^{(\ell)}_{pq}Q_{pq\sigma}\right)^{2},$
(99)
where $Q_{pq\sigma}$ was an individual Pauli string. So the changes in the
representation are
* •
The sum over $\ell$ up to $L$ has been replaced with a sum over $\mathbf{Q}$
and $n$, as well as a sum over the squares of $A$ and $B$.
* •
Inside the square, the sum over just $\sigma,p,q$ now also has a sum over
$\mathbf{k}$.
* •
Inside the sum, instead of just having a single Pauli string, we have a sum
over $4$, with real and imaginary parts of
$L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}$.
The amendments we will make to the original algorithm (according to the
description in [32]) to implement the block encoding are as follows.
* •
For the sum over $\mathbf{Q}$ and $n$ we can combine them into $\ell$, and use
the same state preparation method as before. The value of $\mathbf{Q}$ will
need to be used in the select operation, so needs to be output as part of that
state preparation.
* •
In the preparation for the block encoding of $A$ and $B$, the index
$\mathbf{k}$ will be needed as well as $p$ and $q$.
* •
We no longer take advantage of $p,q$ symmetry.
* •
We need to perform arithmetic to compute $\mathbf{k}{\ominus}\mathbf{Q}$ and
$\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$, with a cost of $4n_{k}$ (or $2n_{k}$
if $N_{x},N_{y},N_{z}$ are powers of 2); a classical sketch of this modular
arithmetic is given after this list.
* •
A number of qubits can be used for selecting between the parts of the linear
combination of unitaries, similar to the sparse case. We have $b_{0}$ to
select between the one- and two-body terms, $b_{1}$ for selecting between the
real and imaginary parts, and $b_{3}$ selecting between the two terms in one
application of $A$ or $B$. The qubit $b_{2}$ can be used for selecting between
$A$ and $B$, which is a change from the sparse case, where it was used for
selecting between lines. We do not need $b_{4}$ because we are implementing
$A$ or $B$ twice (and creating the bit $b_{3}$ both times).
* •
There needs to be a doubling of the selection cost to select between $X$ and
$Y$ as in the sparse case.
* •
The creation of the qubits for controlling between $X$ and $Y$ can be
performed with one additional Toffoli. Note first that the terms in $A$ are
equivalent to the one-body part, and the terms in $B$ are the same except with
the real and imaginary lines swapped around. This means that we can use
$b_{0}$ and $b_{2}$ as a control to flip $b_{1}$, which effectively swaps the
real and imaginary parts for $B$ so it can be implemented in the same way.
Now, for the first selection of $X$ versus $Y$, we can apply a CNOT with
$b_{1}$ as control and $b_{3}$ as target, and use that as the control. For the
second selection we can simply use $b_{3}$ as the control.
* •
For the phase factors, we just need a sign flip if $b_{1}=0$ and $b_{3}=1$,
which is a Clifford controlled phase.
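The $\mathbf{k}{\ominus}\mathbf{Q}$ arithmetic referred to in the list above is component-wise subtraction modulo the mesh dimensions, so that the result remains on the original $k$-point grid. A minimal classical sketch (the mesh size is an illustrative assumption) is:

```python
# Minimal sketch of the k ⊖ Q arithmetic: component-wise subtraction modulo
# the mesh dimensions, so the result stays on the original k-point grid.
# The mesh size below is an illustrative assumption.
N_mesh = (1, 4, 4)  # (N_x, N_y, N_z)

def k_minus_Q(k, Q, mesh=N_mesh):
    """Return k ⊖ Q, i.e. k - Q folded back into the Brillouin-zone grid."""
    return tuple((ki - Qi) % Ni for ki, Qi, Ni in zip(k, Q, mesh))

print(k_minus_Q((0, 2, 1), (0, 3, 0)))  # -> (0, 3, 1)
```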
To explain the modifications needed for the costings, here we give the
sequence of steps with the same numbering as in [32], explaining the
differences.
1. 1.
We first prepare a state as
$\frac{1}{\sqrt{\lambda}}\left(\mathinner{|{0,0,0,0}\rangle}\sqrt{\sum_{p,q}\left(|{\rm
Re}(h^{\prime}_{pq})|+|{\rm
Im}(h^{\prime}_{pq})|\right)}+\frac{1}{\sqrt{2}}\sum_{\mathbf{Q},n}\mathinner{|{\ell,\mathbf{Q},n,1}\rangle}\sum_{\mathbf{k},pq}(|{\rm
Re}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]|+|{\rm
Im}[L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n}]|)\right),$ (100)
where $\mathinner{|{\ell,\mathbf{Q},n}\rangle}$ indicates that $\ell$, which
starts from 1, indexes the values of $\mathbf{Q},n$, with $\mathbf{Q}$ and $n$
also output in registers. That is, we prepare $\ell$ while outputting the
corresponding values of $\mathbf{Q},n$. We are assuming the more difficult case
where the number of values of $\mathbf{Q}$ or $n$ is not a power of 2, but if
they are then further simplifications are possible. This has complexity as follows.
1. (a)
Preparing an equal superposition on $MN_{k}+1$ basis states has complexity
$3n_{MN}+2b_{r}-9$, where $b_{r}$ is the number of bits used for the rotation
on the ancilla,
$n_{MN}=\lceil\log(MN_{k}+1)\rceil.$ (101)
2. (b)
A QROM is applied with output size
$b_{MN}=\aleph_{1}+n_{MN}+2n_{k}+2,$ (102)
with $\aleph_{1}$ being the number of bits used for the keep values (which
govern the precision of the state preparation via the inequality test). Here
$n_{MN}$ and $2n_{k}$ are for $\ell$ and $\mathbf{Q}$, with the factor of 2
accounting for ind and alt values of $\mathbf{Q}$. The extra 2 qubits are for
outputting a qubit showing if $\ell=0$ (for selecting between the one- and
two-body parts). The complexity is
$\left\lceil\frac{MN_{k}+1}{k_{MN}}\right\rceil+b_{MN}(k_{MN}-1).$ (103)
3. (c)
An inequality test is performed with complexity $\aleph_{1}$.
4. (d)
A controlled swap is performed with complexity $n_{k}+\lceil\log M\rceil+1$.
2. 2.
Next, we prepare a state on the second register as
$\displaystyle\frac{1}{\sqrt{\lambda}}\left(\mathinner{|{0,0,0,0}\rangle}\sum_{p,q}\left[\sqrt{2{|{\rm
Re}(h^{\prime}_{pq})|}}\mathinner{|{\theta_{pq0}^{(0)}}\rangle}\mathinner{|{0,p,q,0}\rangle}+\sqrt{2{|{\rm
Im}(h^{\prime}_{pq})|}}\mathinner{|{\theta_{pq1}^{(0)}}\rangle}\mathinner{|{0,p,q,1}\rangle}\right]\right.$
$\displaystyle+\frac{1}{\sqrt{2}}\sum_{\mathbf{Q},n}\mathinner{|{\ell,\mathbf{Q},n,1}\rangle}\sqrt{\sum_{\mathbf{k},rs}(|{\rm
Re}[L_{r\mathbf{k}s(\mathbf{k}{\ominus}\mathbf{Q}),n}]|+|{\rm
Im}[L_{r\mathbf{k}s(\mathbf{k}{\ominus}\mathbf{Q}),n}]|)}$
$\displaystyle\times\left.\sum_{\mathbf{k},p,q}\left[\sqrt{|{\rm
Re}(L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n})|}\mathinner{|{\theta_{\mathbf{k}pq0}^{(\ell)}}\rangle}\mathinner{|{\mathbf{k},p,q,0}\rangle}+\sqrt{|{\rm
Im}(L_{p\mathbf{k}q(\mathbf{k}{\ominus}\mathbf{Q}),n})|}\mathinner{|{\theta_{\mathbf{k}pq1}^{(\ell)}}\rangle}\mathinner{|{\mathbf{k},p,q,1}\rangle}\right]\right)\mathinner{|{+}\rangle}\mathinner{|{+}\rangle},$
(104)
where $\theta_{\mathbf{k}pq0}^{(\ell)}$, $\theta_{\mathbf{k}pq1}^{(\ell)}$ are
used to obtain the correct signs on the terms, and the
$\mathinner{|{+}\rangle}$ states at the end are used to select the spin and
control the swap between the $p$ and $q$ registers.
Now we have a distinction from [32] in that we have separate real and
imaginary parts, and a separate prepared qubit to flag between the real and
imaginary parts. Because of the large number of variables, we will again use a
single variable for iteration, and use it to output $\mathbf{k},p,q$. The
complexity of this state preparation is then as follows.
1. (a)
First, prepare an equal superposition over the variable for iteration. There
are $P=N_{k}N^{2}/2$ values to take, which includes a factor of $2$ for the
real and imaginary parts, $N_{k}$ for $\mathbf{k}$, and $N^{2}/4$ for the
values of $p,q$. Then the complexity of preparing the equal superposition is
$3n_{P}-3\eta+2b_{r}-9$, where $n_{P}=\lceil\log P\rceil$, with $\eta$ being
the largest number such that $2^{\eta}$ is a factor of $P$.
2. (b)
The size of the QROM output is
$b_{p}=2n_{k}+4n_{N}+\aleph_{2}+3,$ (105)
where the first term is for the three components of $\mathbf{k}$, the second
is for $p$ and $q$. The third is for ind and alt values of the qubit to store
the correct sign, as well as an alt value of the extra qubit for selecting
between the real and imaginary parts. We do not include an ind value for that
qubit, because it is part of the register we are iterating over. The
complexity of this QROM will be
$\left\lceil\frac{MN_{k}+1}{k_{p1}}\right\rceil\left\lceil\frac{P}{k_{p2}}\right\rceil+b_{p}(k_{p1}k_{p2}-1),$
(106)
where we are accounting for the cost to select based on both the index from
the factorization and the index for $\mathbf{k},p,q$, and using the result for
the complexity of QROM on two registers from Appendix G of [32].
3. (c)
Perform the inequality test with cost $\aleph_{2}$, which is the bits of
precision for this state preparation.
4. (d)
Perform the controlled swap with the alt values with cost $n_{k}+2n_{N}+1$.
Here we are swapping the ind and alt values of $\mathbf{k},p,q$, as well as
the qubit selecting between real and imaginary parts. The sign required for
the sign qubits can be implemented with Cliffords as in [32], so does not add
to this Toffoli cost.
3. 3.
We no longer perform swaps of $p$ and $q$ for symmetry, but we do need to
perform arithmetic to compute $\mathbf{k}{\ominus}\mathbf{Q}$ and
$\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$, with a cost of $4n_{k}$.
4. 4.
Perform select by performing the sequence of four controlled
$\vec{Z}X_{p,\sigma}$ or $\vec{Z}Y_{p,\sigma}$ operations. The cost is
$4(NN_{k}/2-1)$ Toffolis since it must be controlled, and there is a cost of
one more Toffoli to create the qubits to control on. In order to select the
spin we also perform a swap controlled by the spin selection qubit before and
after, with a cost of $NN_{k}$ Toffolis.
5. 5.
Reverse steps 2 and 3, where the complexities are the same except the QROM
complexity which is changed to
$\left\lceil\frac{MN_{k}+1}{k^{\prime}_{p1}}\right\rceil\left\lceil\frac{P}{k^{\prime}_{p2}}\right\rceil+k^{\prime}_{p1}k^{\prime}_{p2}.$
(107)
6. 6.
Reflect on the qubits that were prepared in step 2. The qubits we need to
reflect on are as follows.
1. (a)
The $n_{P}$ qubits for the variable of iteration.
2. (b)
We need to reflect on the $\aleph_{2}$ registers that are used for the equal
superposition state for the state preparation.
3. (c)
One that is rotated for the preparation of the equal superposition state.
4. (d)
One for the spin.
5. (e)
One for controlling the swap between the $p$ and $q$ registers.
6. (f)
One for selecting between the real and imaginary part.
7. (g)
One for selecting between $A$ and $B$.
That gives a total of $n_{P}+\aleph_{2}+5$ qubits. The reflection needs to be
controlled on the success of the preparation on the $\ell$ register, and
$\ell\neq 0$, making the total cost $n_{P}+\aleph_{2}+5$ Toffolis.
7. 7.
Perform steps 2 to 5 again, but this time $MN_{k}+1$ is replaced with $MN_{k}$
in Eq. (106) and Eq. (107). Also, the select operation needs to be controlled
on $\ell\neq 0$, which flags the one-body term. That requires another 4
Toffolis.
8. 8.
Invert the state preparation on the $\ell$ register, where the complexity of
the QROM is reduced to
$\left\lceil\frac{MN_{k}+1}{k^{\prime}_{P}}\right\rceil+k^{\prime}_{P}.$ (108)
9. 9.
To complete the step of the quantum walk, perform a reflection on the ancillas
used for the state preparation. There are
$n_{MN}+n_{P}+\aleph_{1}+\aleph_{2}+5$, where the qubits we need to reflect on
are as follows.
1. (a)
The $n_{MN}$ qubits for the $\ell$ register.
2. (b)
The $n_{P}$ qubits for the registers in the state preparation for $A$ and $B$.
3. (c)
The $\aleph_{1}$ qubits for the equal superposition state used for preparing
the state on the $\ell$ register using the coherent alias sampling.
4. (d)
The $\aleph_{2}$ qubits for the equal superposition state for preparing the
state for $A$ and $B$.
5. (e)
Two qubits rotated for boosting the success probability of the equal
superposition states.
6. (f)
One qubit for the spin.
7. (g)
One qubit for controlling the swap of the $p$ and $q$ registers.
8. (h)
One for selecting between the real and imaginary part.
9. (i)
One for selecting between $A$ and $B$.
This reflection has cost $n_{MN}+n_{P}+\aleph_{1}+\aleph_{2}+4$.
10. 10.
The steps of the walk are made controlled by using unary iteration on an
ancilla used for the phase estimation. Each step requires another two Toffolis
for the unary iteration and making the reflection controlled.
In this list of steps we have not explicitly included the part for applying
the phase factors, but that has no non-Clifford cost.
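The QROM costs appearing in the steps above all share the form $\lceil D/k\rceil+b(k-1)$ for computation and $\lceil D/k\rceil+k$ for erasure, minimized over $k$ a power of 2 (e.g. Eqs. (103) and (108)). A minimal sketch that evaluates these expressions is given below; the numerical inputs are illustrative assumptions only.

```python
import math

# Minimal sketch of the QROM cost expressions used repeatedly above,
# e.g. Eqs. (103) and (108): ceil(D/k) + b*(k-1) Toffolis for computation and
# ceil(D/k) + k for erasure, each minimized over k a power of 2.
def qrom_cost(D, b):
    return min(math.ceil(D / k) + b * (k - 1)
               for k in (2**j for j in range(D.bit_length() + 1)))

def qrom_erase_cost(D):
    return min(math.ceil(D / k) + k
               for k in (2**j for j in range(D.bit_length() + 1)))

# Illustrative parameter values (assumptions, not values from the text).
M, Nk, b_MN = 200, 8, 40
print(qrom_cost(M * Nk + 1, b_MN), qrom_erase_cost(M * Nk + 1))
```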
Next we consider the total number of logical qubits needed for the simulation
via this method.
1. 1.
The control register for the phase estimation, and the ancillas for the unary
iteration, together need $2\lceil\log\mathcal{I}\rceil-1$ qubits.
2. 2.
There are $NN_{k}$ qubits for the target system.
3. 3.
There are $n_{MN}+2$ qubits for the $\ell$ register, the qubit rotated in
preparing the equal superposition, and the qubit flagging success of preparing
the equal superposition.
4. 4.
The state preparation on the $\ell$ register uses $b_{MN}=2n_{k}+2\lceil\log
M\rceil+2\aleph_{1}+2$ qubits. Here $2n_{k}+2\lceil\log M\rceil$ is for the
ind and alt values of $\mathbf{Q}$ and $n$, $\aleph_{1}$ are for keep values,
$\aleph_{1}$ are for the equal superposition state, 1 is for the output of the
inequality test, and 2 are for the qubit flagging $\ell\neq 0$ and its
alternate value.
5. 5.
There are $n_{P}+2$ qubits needed for the register preparing $p,q,\mathbf{k}$
values, a qubit that is rotated for the equal superposition, and a qubit
flagging success of preparing the equal superposition.
6. 6.
The equal superposition state used for the second preparation uses
$\aleph_{2}$ qubits.
7. 7.
The phase gradient register uses $b_{r}$ qubits.
8. 8.
The qubits for the spin, controlling the swap of $p$ and $q$, selection
between the real and imaginary parts, and selection between $A$ and $B$ for a
total of 4.
9. 9.
The QROM needs a number of qubits
$b_{p}k_{p1}k_{p2}+\lceil\log[(MN_{k}+1)/k_{p1}]\rceil+\lceil\log[L/k_{p2}]\rceil$.
The QROM for the state preparation on the second register uses a large number
of temporary ancillas, which can be reused by later parts of the algorithm, so
those later parts of the algorithm do not need the number of qubits counted.
The total number of qubits used is then
$2\lceil\log\mathcal{I}\rceil+NN_{k}+n_{MN}+n_{P}+2n_{k}+2\lceil\log
M\rceil+2\aleph_{1}+\aleph_{2}+b_{r}+9+b_{p}k_{p1}k_{p2}+\lceil\log[(MN_{k}+1)/k_{p1}]\rceil+\lceil\log[L/k_{p2}]\rceil$
(109)
with $b_{p}=2n_{k}+2n_{N}+\aleph_{2}+3$, $n_{N}=\lceil\log(N/2)\rceil$,
$n_{P}=\lceil\log P\rceil$, $L=N_{k}N(N+2)/4$. This completes the costing of
the low rank factorization method.
## Appendix C Double-factorization derivations
### C.1 One-body correction
Here we derive the correction for the one-body Hamiltonian as given in Eq.
(51). The lambda value for the Hamiltonian can be calculated by determining
the total L1-norm using the second factorization
$\hat{H}^{\prime}_{2}=\frac{1}{2}\sum_{\mathbf{Q}}^{N_{k}}\sum_{n}^{M}\left(\hat{A}^{2}_{n}(\mathbf{Q})+\hat{B}^{2}_{n}(\mathbf{Q})\right)$
(110)
with
$\displaystyle 2\hat{A}_{n}(\mathbf{Q})$
$\displaystyle=\sum_{\mathbf{k}}\left[U^{A}_{n}(\mathbf{Q},\mathbf{k})\left(\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},A}}f^{A}_{p}(\mathbf{Q},n,\mathbf{k})(\mathbb{1}-Z_{p\mathbf{k}\sigma})\right)U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\right]$
$\displaystyle=\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}-\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}$
(111)
where
$\hat{\mathbb{1}}^{A}_{\mathbf{k}}=\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},A}}f^{A}_{p}(\mathbf{Q},n,\mathbf{k})\mathbb{1}$
and
$\hat{Z}^{A}_{\mathbf{k}}=\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},A}}f^{A}_{p}(\mathbf{Q},n,\mathbf{k})Z_{p\mathbf{k}\sigma}$,
and
$\displaystyle 2\hat{B}_{n}(\mathbf{Q})$
$\displaystyle=\sum_{\mathbf{k}}\left[U^{B}_{n}(\mathbf{Q},\mathbf{k})\left(\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},B}}f^{B}_{p}(\mathbf{Q},n,\mathbf{k})(\mathbb{1}-Z_{p\mathbf{k}\sigma})\right)U^{B}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\right]$
$\displaystyle=\sum_{\mathbf{k}}U^{B}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{B}_{\mathbf{k}}U^{B}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}-\sum_{\mathbf{k}}U^{B}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{B}_{\mathbf{k}}U^{B}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}$
(112)
where
$\hat{\mathbb{1}}^{B}_{\mathbf{k}}=\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},B}}f^{B}_{p}(\mathbf{Q},n,\mathbf{k})\mathbb{1}$
and
$\hat{Z}^{B}_{\mathbf{k}}=\sum_{\sigma}\sum_{p}^{\Xi_{\mathbf{Q},n,\mathbf{k},B}}f^{B}_{p}(\mathbf{Q},n,\mathbf{k})Z_{p\mathbf{k}\sigma}$.
The factor of 1/2 from the Jordan-Wigner transform is squared to 1/4, which is
moved outside each term and combined with the prefactor 1/2 to produce a
prefactor of 1/8. We note that $\hat{A}_{n}(\mathbf{Q})^{2}$ can be written as
$\displaystyle 4\hat{A}_{n}(\mathbf{Q})^{2}$
$\displaystyle=\left(\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}-\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\right)$
$\displaystyle\quad\times\left(\sum_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{\mathbb{1}}^{A}_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}-\sum_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{Z}^{A}_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}\right)$
$\displaystyle=2\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\hat{A}_{n}(\mathbf{Q})+2\hat{A}_{n}(\mathbf{Q})\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}$
$\displaystyle\quad+\sum_{\mathbf{k},\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{Z}^{A}_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}$
$\displaystyle\quad-\sum_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{\mathbb{1}}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\sum_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{\mathbb{1}}^{A}_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}.$
(113)
The last term in the above equation is proportional to the identity and is
ignored. A similar expression can be derived for $\hat{B}_{n}(\mathbf{Q})^{2}$
and thus the component of the two-body term involving two Pauli $Z$ operators
is written as
$\displaystyle V$
$\displaystyle=\frac{1}{8}\sum_{\mathbf{Q},n,\mathbf{k},\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{A}_{\mathbf{k}}U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{Z}^{A}_{\mathbf{k}^{\prime}}U^{A}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}$
$\displaystyle\quad+\frac{1}{8}\sum_{\mathbf{Q},n,\mathbf{k},\mathbf{k}^{\prime}}U^{B}_{n}(\mathbf{Q},\mathbf{k})\hat{Z}^{B}_{\mathbf{k}}U^{B}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}U^{B}_{n}(\mathbf{Q},\mathbf{k}^{\prime})\hat{Z}^{B}_{\mathbf{k}^{\prime}}U^{B}_{n}(\mathbf{Q},\mathbf{k}^{\prime})^{\dagger}$
(114)
which implies the two-body L1-norm, $\lambda_{\mathrm{DF},2}$, is
$\displaystyle\lambda_{\mathrm{DF},2}=\frac{1}{4}\sum_{\mathbf{Q},n}\left[\left(\sum_{\mathbf{k},p}^{N_{k}\Xi_{\mathbf{Q},n,\mathbf{k},A}}|f^{A}_{n}(p,\mathbf{Q},\mathbf{k})|\right)^{2}+\left(\sum_{\mathbf{k},p}^{N_{k}\Xi_{\mathbf{Q},n,\mathbf{k},B}}|f^{B}_{n}(p,\mathbf{Q},\mathbf{k})|\right)^{2}\right]$
(115)
where the factor of $1/8$ becomes a factor of $1/2$ after accounting for spin. This
factor of $1/2$ is further divided by two because we perform oblivious
amplitude amplification, i.e., the inner step of qubitization evolves by
$2\hat{A}_{n}(\mathbf{Q})^{2}-\mathbb{1}$ and
$2\hat{B}_{n}(\mathbf{Q})^{2}-\mathbb{1}$.
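A minimal numerical sketch of Eq. (115), with random coefficients standing in for the eigenvalues $f^{A}$ and $f^{B}$ of the second factorization (the shapes and the cutoff are illustrative assumptions), is:

```python
import numpy as np

# Minimal sketch of the two-body lambda of Eq. (115).  Random values stand in
# for the second-factorization eigenvalues f^A, f^B; shapes are illustrative.
rng = np.random.default_rng(1)
Nk, M, Xi = 4, 6, 5                     # k-points, auxiliary index, cutoff

fA = rng.normal(size=(Nk, M, Nk, Xi))   # fA[Q, n, k, p] ~ f^A_p(Q, n, k)
fB = rng.normal(size=(Nk, M, Nk, Xi))

lam_df2 = 0.25 * np.sum(np.abs(fA).sum(axis=(2, 3))**2
                        + np.abs(fB).sum(axis=(2, 3))**2)
print(lam_df2)
```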
Next, the one-body terms in the third line of Eq. (113) can be rewritten as
$2\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}\hat{A}_{n}(\mathbf{Q})+2\hat{A}_{n}(\mathbf{Q})\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}\,.$
(116)
This expression needs to be divided by 8 to give the contribution to the
Hamiltonian, and there is a similar contribution from
$\hat{B}_{n}(\mathbf{Q})^{2}$ to give the overall contribution to the one-body
Hamiltonian
$\displaystyle\frac{1}{2}\sum_{n,\mathbf{Q}}\left(\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}\hat{A}_{n}(\mathbf{Q})+\sum_{\mathbf{k}}\hat{\mathbb{1}}^{B}_{\mathbf{k}}\hat{B}_{n}(\mathbf{Q})\right).$
(117)
Taking the trace of Eqs. (111) and (112) then implies
$\displaystyle\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}$
$\displaystyle=\mathbb{1}\operatorname{Tr}(\hat{A}_{n}(\mathbf{Q}))=\mathbb{1}\,\frac{1}{2}\left[\operatorname{Tr}(\hat{\rho}_{n}(\mathbf{Q}))+\operatorname{Tr}(\hat{\rho}^{\dagger}_{n}(\mathbf{Q}))\right],$
(118) $\displaystyle\sum_{\mathbf{k}}\hat{\mathbb{1}}^{B}_{\mathbf{k}}$
$\displaystyle=\mathbb{1}\operatorname{Tr}(\hat{B}_{n}(\mathbf{Q}))=\mathbb{1}\,\frac{i}{2}\left[\operatorname{Tr}(\hat{\rho}_{n}(\mathbf{Q}))-\operatorname{Tr}(\hat{\rho}^{\dagger}_{n}(\mathbf{Q}))\right].$
(119)
The trace of $\hat{\rho}_{n}(\mathbf{Q})$ is non-zero only for $\mathbf{Q}=0$.
In that case
$\operatorname{Tr}(\hat{\rho}_{n}(0))=2\sum_{\mathbf{k}}\left(\sum_{r}^{N/2}L_{r\mathbf{k}r\mathbf{k},n}\right)$
(120)
which is real. Moreover, it is easily seen that $\hat{\rho}_{n}(0)$ is Hermitian
using the symmetry
$L_{p\mathbf{k}q\mathbf{k},n}=L_{q\mathbf{k}p\mathbf{k},n}^{*}$, so
$\hat{A}_{n}(0)=\hat{\rho}_{n}(0)$ and $\hat{B}_{n}(0)=0$. Therefore
$\displaystyle\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}\hat{A}_{n}(0)+\sum_{\mathbf{k}}\hat{\mathbb{1}}^{B}_{\mathbf{k}}\hat{B}_{n}(0)=2\sum_{\mathbf{k},p,q,\sigma}\left(\sum_{\mathbf{k}^{\prime},r}L_{p\mathbf{k}q\mathbf{k},n}L_{r\mathbf{k}^{\prime}r\mathbf{k}^{\prime},n}\right)a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}.$
(121)
Therefore the contribution to the one-body Hamiltonian becomes
$\displaystyle\frac{1}{2}\sum_{n,\mathbf{Q}}\left(\sum_{\mathbf{k}}\hat{\mathbb{1}}^{A}_{\mathbf{k}}\hat{A}_{n}(\mathbf{Q})+\sum_{\mathbf{k}}\hat{\mathbb{1}}^{B}_{\mathbf{k}}\hat{B}_{n}(\mathbf{Q})\right)=\sum_{\mathbf{k},p,q,\sigma}\left(\sum_{\mathbf{k}^{\prime},r}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}.$
(122)
As a result, the complete one-body Hamiltonian is
$\displaystyle
H_{1}^{\prime}=\sum_{\mathbf{k},p,q,\sigma}\left(h_{p\mathbf{k},q\mathbf{k}}+\sum_{\mathbf{k}^{\prime},r}V_{p\mathbf{k},q\mathbf{k},r\mathbf{k}^{\prime},r\mathbf{k}^{\prime}}\right)a_{p\mathbf{k}\sigma}^{\dagger}a_{q\mathbf{k}\sigma}.$
(123)
This is identical to the result that was obtained in the single-factorization
case as in Eq. (98). Thus the L1-norm of $H_{1}^{\prime}$ is the sum
$\displaystyle\lambda_{\mathrm{DF},1}=\sum_{\mathbf{k}}\sum_{p}|\lambda_{\mathbf{k},p}|$
(124)
where $\lambda_{\mathbf{k},p}$ is an eigenvalue of the matrix representing
$H_{1}^{\prime}(\mathbf{k})$, whose entries are the coefficients in the
parentheses of Eq. (123).
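A minimal sketch of Eq. (124), using random Hermitian blocks as stand-ins for the corrected one-body matrices $H_{1}^{\prime}(\mathbf{k})$ of Eq. (123), is:

```python
import numpy as np

# Minimal sketch of the one-body lambda of Eq. (124): the sum of absolute
# eigenvalues of the blocks in Eq. (123).  Random Hermitian matrices stand in
# for h_{pk,qk} + sum_{k',r} V_{pk,qk,rk',rk'}.
rng = np.random.default_rng(2)
Nk, N2 = 4, 6

lam_df1 = 0.0
for _ in range(Nk):
    A = rng.normal(size=(N2, N2)) + 1j * rng.normal(size=(N2, N2))
    H1k = (A + A.conj().T) / 2          # Hermitian stand-in for H'_1(k)
    lam_df1 += np.abs(np.linalg.eigvalsh(H1k)).sum()
print(lam_df1)
```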
### C.2 Complexity of the double-factorized representation
Our form of the two-body part of the Hamiltonian is
$\hat{H}^{\prime}_{2}=\frac{1}{2}\sum_{\mathbf{Q}}^{N_{k}}\sum_{n}^{M}\left(\hat{A}^{2}_{n}(\mathbf{Q})+\hat{B}^{2}_{n}(\mathbf{Q})\right)$
(125)
with
$\displaystyle\hat{A}_{n}(\mathbf{Q})=\sum_{\mathbf{k}}\left[U^{A}_{n}(\mathbf{Q},\mathbf{k})\left(\sum_{\sigma}\sum_{r}^{\Xi_{\mathbf{Q},n,\mathbf{k},A}}f^{A}_{r}(\mathbf{Q},n,\mathbf{k})n_{r,\mathbf{k},\sigma}\right)U^{A}_{n}(\mathbf{Q},\mathbf{k})^{\dagger}\right]$
(126)
and similarly for $\hat{B}_{n}(\mathbf{Q})$. In comparison, the double-
factorized Hamiltonian from [36, 32] is
$F^{\prime}=\frac{1}{8}\sum_{\ell=1}^{L}U_{\ell}\left(\sum_{\sigma\in\\{\uparrow,\downarrow\\}}\sum_{p=1}^{\Xi^{(\ell)}}f_{p}^{(\ell)}Z_{p,\sigma}\right)^{2}U_{\ell}^{\dagger}.$
(127)
So, in contrast to the decomposition before, instead of a sum over $\ell$, we
have a sum over $\mathbf{Q},n$, and a qubit indexing over $\hat{A},\hat{B}$.
This difference can be accounted for easily in the method as presented in
[32]. That method may be summarized as follows.
1. 1.
Perform a state preparation over $\ell$ for the first factorisation.
2. 2.
Use a QROM on $\ell$ to output some parameters needed for the state
preparation for the second factorisation (the operator that is squared).
3. 3.
Perform the inner state preparation over $p$.
4. 4.
Apply a QROM to output the sequence of rotations dependent on $\ell$ and $p$.
5. 5.
Apply the Givens rotations.
6. 6.
Apply a controlled $Z$.
7. 7.
Invert the Givens rotations, QROM, and state preparation over $p$.
8. 8.
Perform a reflection on the ancilla qubits used for the state preparation over
$p$.
9. 9.
Perform steps 3 to 7 again.
10. 10.
Invert the QROM from step 2.
11. 11.
Invert the state preparation from step 1.
Note that this is distinct from the procedure in [36] which combined the
$\ell$ and $p$ preparations.
To account for the changes here, the index $\ell$ can be used to iterate
through all possible values of $\mathbf{Q},n$, and the qubit indexing over
$\hat{A},\hat{B}$. Most of the steps can be performed ignoring these values,
but we will need to know $\mathbf{Q}$ before performing the Givens rotations.
It is convenient to output this value in the QROM used in step 2, which
slightly increases the output size of this QROM. We will also need to output
$\mathbf{k}$ values, and these will be given in the second state preparation
used in step 3. But, that preparation will produce a joint index of $p$ and
$\mathbf{k}$ without giving $\mathbf{k}$ explicitly (similar to our
preparation over $\ell$ not giving $\mathbf{Q}$ explicitly). This value can be
output by the QROM in step 4.
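The combination of $\mathbf{Q}$, $n$, and the $\hat{A}/\hat{B}$ qubit into a single index $\ell$ is just an invertible packing of the three labels; a minimal sketch (the encoding order and sizes are illustrative assumptions, and only the bijection matters) is:

```python
# Minimal sketch of packing (A/B, n, Q) into the single index l used for the
# outer state preparation, and unpacking it again.
Nk, M = 8, 200          # numbers of Q values and auxiliary indices n (illustrative)

def pack(ab, n, q):
    return (ab * M + n) * Nk + q

def unpack(l):
    l, q = divmod(l, Nk)
    ab, n = divmod(l, M)
    return ab, n, q

assert all(unpack(pack(ab, n, q)) == (ab, n, q)
           for ab in range(2) for n in range(M) for q in range(Nk))
print(pack(1, 3, 5), unpack(pack(1, 3, 5)))
```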
In order to apply the Givens rotations, we will need to perform controlled
swaps of system registers $\mathbf{k},\mathbf{k}{\ominus}\mathbf{Q}$ into
working registers, then apply the Givens rotations on those working registers.
Since $\mathbf{k}{\ominus}\mathbf{Q}$ is not given directly by the state
preparation, it needs to be computed with cost $2n_{k}$ (or $n_{k}$ if
$N_{x},N_{y},N_{z}$ are powers of 2). The controlled swaps have a Toffoli cost
of $2n_{k}$ for the unary iteration, and $NN_{k}$ for the controlled swaps.
The cost of $NN_{k}$ is because we need to run through $NN_{k}/2$ system
qubits twice. These controlled swaps are performed 4 times, because they need
to be performed before and after each application of the Givens rotations.
That gives a total cost from this part
$4NN_{k}+12n_{k}.$ (128)
Then for the QROM outputting the Givens rotations, the number of items of data
can be given as
$\sum_{\mathbf{Q},n,\mathbf{k}}(\Xi_{\mathbf{Q},n,\mathbf{k},A}+\Xi_{\mathbf{Q},n,\mathbf{k},B}),$
(129)
where $\Xi_{\mathbf{Q},n,\mathbf{k},A}$ and $\Xi_{\mathbf{Q},n,\mathbf{k},B}$
are the cutoffs in the sums for $\hat{A}$ and $\hat{B}$. As per Eq. (47), we
define $\Xi$ as this quantity divided by $L=2N_{k}M$, so we can write the
number of items of data as $L\Xi$.
The Givens rotations need to be on $2N$ orbitals, so there are $2N$ Givens
rotations. For each of these rotations two angles need to be specified, in
contrast to one in [36, 32]. The size of the data output for the QROM for the
Givens rotations is increased to $2N\beth$, because there are 2 registers of
size $N/2$, and there are 2 rotations of $\beth$ of precision for each Givens
rotation. The total complexity of applying the Givens rotations is increased
to $16N(\beth-2)$. This is an increase of a factor of 4 over that in [32],
with a factor of 2 from using 2 working registers, and a factor of 2 because
there are two rotations for each Givens rotation.
The other changes in the cost are relatively trivial. There is a swap on the
system registers controlled on the spin register. Since this is now on
$NN_{k}$ qubits instead of $N$, the cost is multiplied by $N_{k}$.
So, to summarize the complexity using the same numbering of steps as in [32],
we have the following.
1. 1.
The cost of the state preparation over $\ell$ is
$(3n_{L}-3\eta+2b_{r}-9)+\left\lceil\frac{L+1}{k_{p1}}\right\rceil+b_{p1}(k_{p1}-1)+\aleph_{1}+n_{L},$
(130)
where $L$ is now $2N_{k}M$ and as before $b_{p1}=n_{L}+\aleph_{1}$,
$n_{L}=\lceil\log L\rceil$.
2. 2.
The complexity of the QROM on $\ell$ is now
$\left\lceil\frac{L+1}{k_{o}}\right\rceil+b_{o}(k_{o}-1),$ (131)
with
$b_{o}=n_{k}+n_{\Xi}+n_{L,\Xi}+b_{r}+1,$ (132)
with the extra $n_{k}$ being to output $\mathbf{Q}$. Here $n_{\Xi}$ is the
number of bits needed for $\Xi_{\mathbf{k},p}$ values of $p$, and $n_{L,\Xi}$
$n_{L,\Xi}=\lceil\log(L\Xi+N_{k}N/2)\rceil$ (133)
is the number of bits needed for the offset.
3. 3.
The cost of the second stage of state preparation is
$\displaystyle
4(7n_{\Xi}+2b_{r}-6)+4(n_{L,\Xi}-1)+\left(\left\lceil\frac{L\Xi+NN_{k}/2}{k_{p2}}\right\rceil+\left\lceil\frac{L\Xi}{k_{p2}}\right\rceil+2b_{p2}(k_{p2}-1)\right)+4(\aleph_{2}+n_{\Xi}),$
(134)
where the brackets are used to indicate the cost of parts (a) to (d) of step
3. As well as using our modified definition of $\Xi$, the only change over the
costing in [32] is replacing $N/2$ with $NN_{k}/2$ for the range of values for
the one-body term. In this cost we are including the second use of the
preparation in part 7.
4. 4.
The cost of the number operators via QROM is
$\left\lceil\frac{L\Xi+NN_{k}/2}{k_{r}}\right\rceil+\left\lceil\frac{L\Xi}{k_{r}}\right\rceil+(4N\beth+n_{k})(k_{r}-1)+\left\lceil\frac{L\Xi+NN_{k}/2}{k^{\prime}_{r}}\right\rceil+\left\lceil\frac{L\Xi}{k^{\prime}_{r}}\right\rceil+2k^{\prime}_{r}+4(n_{L,\Xi}-1)+16N(\beth-2)+2NN_{k}+2.$
(135)
Here the term $4N\beth(k_{r}-1)$ has been increased by a factor of 4 over that
in [32], because there are twice as many qubits for the Givens rotations to act
on, and twice as many rotations needed for each Givens rotation. (This term
corresponds to the output size of the QROM.) We have also added $n_{k}$ to the
output size so we can output the value of $\mathbf{k}$ needed to select the
register. Again $N/2$ is replaced with $NN_{k}/2$ for the one-body term. The
quantity $16N(\beth-2)$ is the cost of the Givens rotations, and is also
multiplied by a factor of 4 over that in [32]. The $2NN_{k}$ is for the
controlled swaps for spin, and is increased from the $2N$ in [32] because the
system registers now carry the $\mathbf{k}$ index.
5. 5.
The inversion of the state preparation has cost
$\displaystyle
2(7n_{\Xi}+2b_{r}-6)+2(n_{L,\Xi}-1)+\left(\left\lceil\frac{L\Xi+NN_{k}/2}{k^{\prime}_{p2}}\right\rceil+\left\lceil\frac{L\Xi}{k^{\prime}_{p2}}\right\rceil+2k^{\prime}_{p2}\right)+2(\aleph_{2}+n_{\Xi}).$
(136)
This cost is the same as in part 3, except the cost of erasing the QROM is
reduced. We are again including both uses (with the second described in step
7).
6. 6.
The reflection for the oblivious amplitude amplification has an unchanged cost
$n_{\Xi}+\aleph_{2}+2.$ (137)
7. 7.
The cost of the second use of the block encoding to give the square is
already accounted for above.
8. 8.
The cost of inverting step 1 is
$(3n_{L}-3\eta+2b_{r}-9)+\left\lceil\frac{L+1}{k^{\prime}_{p1}}\right\rceil+k^{\prime}_{p1}+\aleph_{1}+n_{L},$
(138)
and for inverting step 2 is
$\left\lceil\frac{L+1}{k^{\prime}_{o}}\right\rceil+k^{\prime}_{o},$ (139)
where we are using the improved cost for erasing QROM.
9. 9.
The reflection cost is unchanged at
$n_{L}+n_{\Xi}+\aleph_{1}+\aleph_{2}+1.$ (140)
10. 10.
The extra cost of unary iteration on the control register and of controlling
the reflection on that register is 2 Toffolis.
11. 11.
The new costs of performing controlled swaps into working registers and
arithmetic to compute $\mathbf{k}{\ominus}\mathbf{Q}$ and
$\mathbf{k}^{\prime}{\ominus}\mathbf{Q}$ are
$4NN_{k}+12n_{k}.$ (141)
Adding all these costs together gives the total cost for block encoding the
Hamiltonian.
The cost in terms of logical qubits is very similar to that for the original
double-factorized approach. The differences are as follows.
1. 1.
There are registers needed to store
$\mathbf{k},\mathbf{Q},\mathbf{k}{\ominus}\mathbf{Q}$. Because
$\mathbf{k}{\ominus}\mathbf{Q}$ can be computed in place in the $\mathbf{Q}$
register, we only need storage for 2. Moreover, because $\mathbf{k}$ is given
in the QROM output in part 4 above, it does not need to be added to that qubit
costing.
2. 2.
There are $N$ qubits used for the working registers (2 of size $N/2$).
3. 3.
A number of parameters are changed, in particular the number of system qubits
is now $NN_{k}$, and $L$ is computed from the number of values of $\mathbf{Q}$
and $n$.
4. 4.
The size of the output for the Givens rotations is multiplied by a factor of
4.
## Appendix D Tensor hypercontraction derivations
### D.1 THC symmetries
In this section we derive the symmetry relationships for the central tensor
based on the four-fold symmetry of the two-electron integral tensor as used in
Eq. (62). Recall an element of the two-electron integral tensor can be
represented in THC form as
$\displaystyle
V_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r\mathbf{k}^{\prime}{\ominus}\mathbf{Q},s\mathbf{k}^{\prime}}=\sum_{\mu,\nu}\chi_{p\mathbf{k},\mu}^{*}\chi_{q(\mathbf{k}{\ominus}\mathbf{Q}),\mu}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\chi_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),\nu}^{*}\chi_{s\mathbf{k}^{\prime},\nu}$
(142)
where $\mathbf{G}_{1}$ is shorthand for
$\mathbf{G}_{\mathbf{k},\mathbf{k}-\mathbf{Q}}$ and $\mathbf{G}_{2}$ is
shorthand for $\mathbf{G}_{\mathbf{k}^{\prime},\mathbf{k}^{\prime}-\mathbf{Q}}$. The
four-fold symmetry of the complex-valued two-electron integral tensor is
reflected in the central tensor $\zeta$. We recover the symmetry by first
noting the four equivalent two-electron integrals
$\displaystyle
V_{p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q}),r\mathbf{k}^{\prime}-\mathbf{Q},s\mathbf{k}^{\prime}}=$
$\displaystyle\sum_{\mu,\nu}\chi_{p\mathbf{k},\mu}^{*}\chi_{q(\mathbf{k}{\ominus}\mathbf{Q}),\mu}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}\chi_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),\nu}^{*}\chi_{s\mathbf{k}^{\prime},\nu}$
$\displaystyle
V_{q(\mathbf{k}{\ominus}\mathbf{Q}),p\mathbf{k},s\mathbf{k}^{\prime},r\mathbf{k}^{\prime}-\mathbf{Q}}^{*}=$
$\displaystyle\left(\sum_{\mu,\nu}\chi_{q(\mathbf{k}{\ominus}\mathbf{Q}),\mu}^{*}\chi_{p\mathbf{k},\mu}\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{1},!\mathbf{G}_{2}}\chi_{s\mathbf{k}^{\prime},\nu}^{*}\chi_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),\nu}\right)^{*}$
$\displaystyle
V_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),s\mathbf{k}^{\prime},p\mathbf{k},q(\mathbf{k}{\ominus}\mathbf{Q})}=$
$\displaystyle\sum_{\mu,\nu}\chi_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),\mu}^{*}\chi_{s\mathbf{k}^{\prime},\mu}\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{2},!\mathbf{G}_{1}}\chi_{p\mathbf{k},\nu}^{*}\chi_{q(\mathbf{k}{\ominus}\mathbf{Q}),\nu}$
$\displaystyle\
V_{s\mathbf{k}^{\prime},r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),q(\mathbf{k}{\ominus}\mathbf{Q}),p\mathbf{k}}=$
$\displaystyle\left(\sum_{\mu,\nu}\chi_{s\mathbf{k}^{\prime},\mu}^{*}\chi_{r(\mathbf{k}^{\prime}{\ominus}\mathbf{Q}),\mu}\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{2},\mathbf{G}_{1}}\chi_{q(\mathbf{k}{\ominus}\mathbf{Q}),\nu}^{*}\chi_{p\mathbf{k},\nu}\right)^{*}$
which, implies
$\displaystyle\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}=\left(\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{1},!\mathbf{G}_{2}}\right)^{*}=\zeta_{\nu\mu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{2},!\mathbf{G}_{1}}=\left(\zeta_{\nu\mu}^{\mathbf{Q},\mathbf{G}_{2},\mathbf{G}_{1}}\right)^{*}.$
(143)
Here $({\ominus}\mathbf{Q})$ is used to indicate a modular negative of
$\mathbf{Q}$, similar to modular subtraction. In the above expression the
complement of $\mathbf{G}_{1}$, $!\mathbf{G}_{1}$, is defined through
$\displaystyle\mathbf{k}_{p}-\mathbf{k}_{q}=\mathbf{Q}+\mathbf{G}_{1}$
$\displaystyle\mathbf{k}_{q}-\mathbf{k}_{p}=({\ominus}\mathbf{Q})+!\mathbf{G}_{1}$
$\displaystyle!\mathbf{G}_{1}=-\left(\mathbf{Q}+\mathbf{G}_{1}+({\ominus}\mathbf{Q})\right),$
(144)
and it is important to note that $({\ominus}\mathbf{Q})$ is defined to lie in
the original set of $k$-points, which is useful because we only build
$\zeta^{\mathbf{Q}}$. A similar expression can be derived for
$!\mathbf{G}_{2}$. It is helpful to consider some concrete examples, which are
given in Table 7.
$k$-mesh | $\mathbf{k}_{p}$ | $\mathbf{k}_{q}$ | $\mathbf{k}_{p}-\mathbf{k}_{q}$ | $\mathbf{Q}$ | $\mathbf{G}$ | $\mathbf{k}_{q}-\mathbf{k}_{p}$ | $({\ominus}\mathbf{Q})$ | $!\mathbf{G}$
---|---|---|---|---|---|---|---|---
$[1,1,4]$ | (0, 0, 3) | (0, 0, 1) | (0, 0, 2) | (0, 0, 2) | (0,0,0) | $(0,0,-2)$ | (0, 0, 2) | $(0,0,-4)$
$[1,4,4]$ | (0, 2, 1) | (0, 3, 1) | $(0,-1,0)$ | (0, 3, 0) | $(0,-4,0)$ | $(0,1,0)$ | (0, 1, 0) | $(0,0,0)$
$[1,4,4]$ | (0, 2, 1) | (0, 3, 3) | $(0,-1,-2)$ | (0, 3, 2) | $(0,-4,-4)$ | $(0,1,2)$ | (0, 1, 2) | $(0,0,0)$
$[1,4,4]$ | (0, 1, 2) | (0, 1, 3) | $(0,0,-1)$ | (0, 0, 3) | $(0,0,-4)$ | $(0,0,1)$ | (0, 0, 1) | $(0,0,0)$
$[1,4,4]$ | (0, 1, 3) | (0, 1, 2) | (0, 0, 1) | (0, 0, 1) | (0,0,0) | $(0,0,-1)$ | (0, 0, 3) | $(0,0,-4)$
$[4,4,4]$ | (2, 1, 3) | (3, 1, 2) | $(-1,0,1)$ | (3, 0, 1) | $(-4,0,0)$ | $(1,0,-1)$ | (1, 0, 3) | $(0,0,-4)$
$[4,4,4]$ | (2, 1, 2) | (3, 3, 3) | $(-1,-2,-1)$ | (3, 2, 3) | $(-4,-4,-4)$ | $(1,2,1)$ | (1, 2, 1) | $(0,0,0)$
Table 7: Some examples of the values that the different momentum labels can
take in Eq. (144). We restrict $\mathbf{k},\mathbf{Q},({\ominus}\mathbf{Q})$ to
be in the original $k$-point set.
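A minimal sketch of the bookkeeping in Eq. (144), reproducing the second row of Table 7, is given below; the only operation involved is splitting a raw difference of $k$-points into a momentum in the original grid plus a reciprocal-lattice shift.

```python
# Minimal sketch of the momentum bookkeeping in Eq. (144), reproducing the
# second row of Table 7.  split() returns (Q, G) with Q folded into the
# original grid and G the reciprocal-lattice shift.
mesh = (1, 4, 4)                                   # (N_x, N_y, N_z)

def split(diff, mesh):
    Q = tuple(d % n for d, n in zip(diff, mesh))
    G = tuple(d - q for d, q in zip(diff, Q))
    return Q, G

kp, kq = (0, 2, 1), (0, 3, 1)
diff = tuple(a - b for a, b in zip(kp, kq))
Q, G1 = split(diff, mesh)                          # k_p - k_q = Q + G_1
negQ, notG1 = split(tuple(-d for d in diff), mesh) # k_q - k_p = (⊖Q) + !G_1
print(Q, G1, negQ, notG1)  # (0, 3, 0) (0, -4, 0) (0, 1, 0) (0, 0, 0)
```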
### D.2 Complexity of the tensor hypercontraction representation
The following is a detailed costing for the qubitization oracles using the THC
LCU. In the initial state preparation, we need to prepare a superposition over
$\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2},\mu,\nu$ with weights
$\sqrt{|\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}|}$. The
state can be prepared via the coherent alias sampling procedure, starting with
QROM to output keep and alt values. One option here is to produce an equal
superposition over $\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2},\mu,\nu$, then
calculate a contiguous register from these values to use for the QROM. That
procedure is fairly complicated, because it requires preparing equal
superpositions over three components of $\mathbf{Q}$ as well as
$\mathbf{G}_{1},\mathbf{G}_{2},\mu$ and $\nu$, then arithmetic for the
contiguous register. To simplify the procedure we give the complexity for
giving ind values like for sparse state preparation. That is, we prepare the
contiguous register, and use the QROM to output both ind (index) and alt
values of $\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2},\mu,\nu$.
There is the symmetry
$\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}=(\zeta_{\nu,\mu}^{\mathbf{Q},\mathbf{G}_{2},\mathbf{G}_{1}})^{*}$,
which indicates only half the range of $\mu,\nu,\mathbf{G}_{1},\mathbf{G}_{2}$
values need be prepared. It is convenient to prepare the full range, but use
part of the range for real and part for imaginary components. If we only were
considering $\mu,\nu$, we could use $\mu\leq\nu$ for real components and
$\mu>\nu$ for imaginary components. To account for
$\mathbf{G}_{1},\mathbf{G}_{2}$ as well, we can combine them with $\mu,\nu$ as
least-significant bits for combined integers to use in inequality tests. This
inequality test between $\mu,\mathbf{G}_{1}$ and $\nu,\mathbf{G}_{2}$ is used
to give a qubit flagging that the component should be imaginary. A further
qubit in a $\mathinner{|{+}\rangle}$ state is used to control a swap of
$\mu,\mathbf{G}_{1}$ with $\nu,\mathbf{G}_{2}$ registers, and a controlled $Z$
gate on the qubit flagging the imaginary component gives the desired complex
conjugate. As a result the range for $\mu,\nu$ is $M^{2}$ taking account of
giving real and imaginary components.
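A minimal classical sketch of this symmetry handling is given below. The choice of which label occupies the low bits of the combined integers is an illustrative assumption; only a consistent total order on the $(\mu,\mathbf{G}_{1})$ and $(\nu,\mathbf{G}_{2})$ pairs is required.

```python
# Minimal sketch of the symmetry handling for the first THC state preparation:
# build combined integers from (G, mu) and (G, nu), flag the component as
# imaginary when the first exceeds the second, and swap the pairs.  The bit
# ordering (mu in the low bits) is an illustrative assumption.
M = 200                              # number of mu values (illustrative)
n_mu = M.bit_length()

def combined(g_index, mu):
    return (g_index << n_mu) | mu    # g_index in 0..7 encodes G in {0, -N}^3

def symmetrize(g1, mu, g2, nu):
    is_imag = combined(g1, mu) > combined(g2, nu)
    if is_imag:                      # swap (mu, G1) <-> (nu, G2); the complex
        g1, mu, g2, nu = g2, nu, g1, mu   # conjugate itself is a Clifford Z
    return g1, mu, g2, nu, is_imag

print(symmetrize(3, 17, 1, 42))      # -> (1, 42, 3, 17, True)
```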
For $\mathbf{Q}$ there is also the symmetry where
$\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}=(\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{1},!\mathbf{G}_{2}})^{*}$,
so it is only necessary to produce approximately half as many values of
$\mathbf{Q}$. This is complicated by the cases where
$\mathbf{Q}={\ominus}\mathbf{Q}$. If $N_{x},N_{y},N_{z}$ are odd, then the
only case where this can be true is that $\mathbf{Q}=0$, so the number of
values of $\mathbf{Q}$ that need be considered is $(N_{x}N_{y}N_{z}+1)/2$. If
one of $N_{x},N_{y},N_{z}$ is even and the other two are odd, then for the one
that is even there will be a second value of that component of $\mathbf{Q}$
that is equal to its negative. That means there are two values of $\mathbf{Q}$
overall satisfying $\mathbf{Q}={\ominus}\mathbf{Q}$, and the number of unique
values is $N_{x}N_{y}N_{z}/2+1$. Similarly, if there are two even values of
$N_{x},N_{y},N_{z}$, then there are four values of $\mathbf{Q}$ satisfying
$\mathbf{Q}={\ominus}\mathbf{Q}$, and the number of unique values is
$N_{x}N_{y}N_{z}/2+2$. For all three of $N_{x},N_{y},N_{z}$ even the number of
unique values is $N_{x}N_{y}N_{z}/2+4$. We also need $NN_{k}/2$ values for the
one-body term. The number of values is then
$d=32[N_{x}N_{y}N_{z}+2^{v}]M^{2}+NN_{k}/2,$ (145)
where $v$ is the number of even values of $N_{x},N_{y},N_{z}$.
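The counting of distinct $\mathbf{Q}$ values under the $\mathbf{Q}\leftrightarrow({\ominus}\mathbf{Q})$ symmetry can be checked by brute force against the closed form $(N_{x}N_{y}N_{z}+2^{v})/2$; a minimal sketch over a few illustrative meshes is:

```python
from itertools import product

# Minimal brute-force check of the number of Q values remaining after using
# the Q <-> (⊖Q) symmetry, against (N_x N_y N_z + 2^v)/2 with v the number of
# even mesh dimensions.  Meshes below are illustrative.
def unique_Q_count(mesh):
    seen, count = set(), 0
    for Q in product(*(range(n) for n in mesh)):
        negQ = tuple((-q) % n for q, n in zip(Q, mesh))
        if negQ not in seen:        # first member of the pair {Q, ⊖Q} we meet
            seen.add(Q)
            count += 1
    return count

for mesh in [(3, 3, 3), (2, 3, 3), (2, 2, 3), (4, 4, 4)]:
    v = sum(n % 2 == 0 for n in mesh)
    Nk = mesh[0] * mesh[1] * mesh[2]
    assert unique_Q_count(mesh) == (Nk + 2**v) // 2
    print(mesh, unique_Q_count(mesh))
```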
The size of the output is then
$m=2(2\lceil\log M\rceil+n_{k}+8)+\aleph,$ (146)
where $\aleph$ is the number of bits for the keep register. There is a factor
of 2 at the front to account for ind and alt values, then $\lceil\log M\rceil$
for each of $\mu$ and $\nu$, and $n_{k}$ for the components of $\mathbf{Q}$.
There is a further qubit distinguishing between the one and two-electron
terms, a qubit giving the sign of the real or imaginary component of $\zeta$,
and 6 qubits for $\mathbf{G}_{1},\mathbf{G}_{2}$, for a total $+8$.
1. 1.
There is a cost of $N_{k}N/2$ for controlled swaps for the spin. In principle
this is performed four times, because it is performed before and after the two
$c^{\dagger}c$ operators. The middle pair can be combined, with the single
controlled swap being controlled by the parity of the two spin qubits, for a
total cost of $3N_{k}N/2$.
2. 2.
Before the state preparation, we need to prepare an equal superposition over
$d$ basis states, with costing $3\lceil\log d\rceil-3\eta+2b_{r}-9$ Toffoli
gates. As before, $\eta$ is a number such that $2^{\eta}$ is a factor of $d$,
and $b_{r}$ is a number of bits used for rotation of an ancilla qubit to
improve the amplitude of success. This cost is incurred twice, once for the
preparation and once for the inverse preparation.
3. 3.
The complexity of the QROM being used for the state preparation is
$\left\lceil\frac{d}{k_{p}}\right\rceil+m(k_{p}-1),$ (147)
with $k_{p}$ being a power of 2. The inverse preparation then has a cost
$\left\lceil\frac{d}{k^{\prime}_{p}}\right\rceil+k^{\prime}_{p}.$ (148)
4. 4.
We perform an inequality test with cost $\aleph$. Accounting for the inverse
of the preparation gives a total cost $2\aleph$.
5. 5.
The controlled swap based on the result of the inequality test is on
$2\lceil\log M\rceil+n_{k}+7$ (149)
pairs of qubits, so has this Toffoli cost. Note that we have $+7$ here rather
than $+8$. This is because we do not need to swap the sign qubits; the sign
can be applied with $Z$ gates controlled on the result of the inequality test,
not adding to the Toffoli cost (as usual). This cost is incurred again in the
inverse preparation for a total of $4\lceil\log M\rceil+2n_{k}+14$.
6. 6.
As described above, we perform an inequality test between $\mu,\mathbf{G}_{1}$
and $\nu,\mathbf{G}_{2}$ to give the qubit flagging whether we have a real or
imaginary component. Then we perform a controlled swap of $\mu,\mathbf{G}_{1}$
with $\nu,\mathbf{G}_{2}$ to generate one symmetry for the state preparation,
with the complex conjugate applied using a Clifford gate. This part therefore
has Toffoli cost $2\lceil\log M\rceil+6$. This cost is incurred again in the
inverse preparation giving a total cost $4\lceil\log M\rceil+12$. In addition
to this controlled swap, we perform a swap in the middle, but it is not
controlled so it does not add to the Toffoli complexity.
7. 7.
For the symmetry where
$\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}=(\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{1},!\mathbf{G}_{2}})^{*}$,
we can use a second control qubit to flip the sign on $\mathbf{Q}$, negate
$\mathbf{G}_{1}$ and $\mathbf{G}_{2}$, and apply the complex conjugate. The
complex conjugate again can be applied with a Clifford gate, and so can the
controlled-NOT gates on $\mathbf{G}_{1},\mathbf{G}_{2}$. A controlled sign
flip of $\mathbf{Q}$ can be performed with $2n_{k}$ Toffolis, simply by
flipping the sign in the usual two’s complement binary, then controlling addition
of $N_{x},N_{y},N_{z}$ in each component.
8. 8.
Next we need to prepare a superposition over allowed values of $\mathbf{k}$,
because $\mathbf{k}-\mathbf{Q}-\mathbf{G}$ needs to be in the allowed range of
$\mathbf{k}$ values (using $\mathbf{G}$ to indicate $\mathbf{G}_{1}$ or
$\mathbf{G}_{2}$ depending on which part we are performing). In particular,
for the $x$-component we have an allowed range for $k_{x}$ from $Q_{x}$ to
$N_{x}-1$ when $G_{x}=0$, or $0$ to $Q_{x}-1$ when $G_{x}\neq 0$. It is
similar for the other two components. We can therefore prepare a superposition
over the appropriate range then add $Q_{x}$ if $G_{x}=0$.
Creating an equal superposition requires Hadamards on the appropriate subset
of qubits, as well as a $Q_{x},G_{x}$-dependent rotation to give a high
success probability for the amplitude amplification. This information can be
output with Toffoli cost $2N_{x}-2$ on the qubits representing $Q_{x},G_{x}$.
The complexity of the controlled Hadamards is then $\lceil\log N_{x}\rceil$
Toffolis, assuming we use a catalytic T state as in [32]. The complexity of
preparing the equal superposition is then $6\lceil\log N_{x}\rceil+2b_{r}-6$,
including $3\lceil\log N_{x}\rceil$ for three rounds of $\lceil\log
N_{x}\rceil$ controlled Hadamards. The reason there is $-6$ rather than $-9$ is
that the inequality test is performed against a value in a quantum register (in
each of the three tests), which requires one more Toffoli than an inequality
test against a classically given value.
The controlled addition of $Q_{x}$ has complexity $2\lceil\log N_{x}\rceil$.
The total complexity of the preparation of the superposition for the three
components of $\mathbf{k}$ is therefore
$N_{x}+N_{y}+N_{z}+8n_{k}+6b_{r}-24.$ (150)
This cost is incurred four times for the preparation and inverse preparation
of $\mathbf{k}$ and $\mathbf{k}^{\prime}$.
9. 9.
In order to account for the one-body term, we note that the one-body term has
a single $\mu$ and $\mathbf{k}$ rather than $\mathbf{Q}$. We also do not want
the operations we perform in the two-body part for the symmetry to affect the
one-body part. We can therefore output $\mu=\nu$ for the one-body part in the
QROM, so the swap of $\mu$ and $\nu$ has no effect. The value of $\mathbf{k}$
for the one-body part can be stored in the same register as used for
$\mathbf{Q}$ for the two-body part. To prevent the operations used to generate
the symmetry
$\zeta_{\mu\nu}^{\mathbf{Q},\mathbf{G}_{1},\mathbf{G}_{2}}=(\zeta_{\mu\nu}^{({\ominus}\mathbf{Q}),!\mathbf{G}_{1},!\mathbf{G}_{2}})^{*}$
from being applied to the one-body part, we can simply apply a Toffoli to produce
a new control qubit. The remaining part above is the preparation of the
superposition over the $\mathbf{k}$ values controlled on $\mathbf{Q}$; this
does not need to be amended to account for the one-body part because there we
will not be using this value in the extra register.
10. 10.
Now that we have prepared the register that is in an equal superposition over
the appropriate range of $\mathbf{k}$, we need to use that in combination with
$\mathbf{Q}$ and $\mu$ to prepare a superposition with the correct weights. To
do this, we will use coherent alias sampling in the usual way, but will need
to construct an appropriate register to iterate over from registers
$\mathbf{k},\mathbf{Q},\mathbf{G},\mu$. First we compute
$\mathbf{k}-\mathbf{Q}-\mathbf{G}$ in an ancilla register. These two
subtractions have cost $2n_{k}$. Since it needs to be computed and uncomputed
for each of the two factors in the Hamiltonian, the total cost is $8n_{k}$.
Now, because $\mathbf{k}$ and $\mathbf{k}{\ominus}\mathbf{Q}$ uniquely specify
$\mathbf{Q},\mathbf{G}$, these two registers can be used for the iteration
instead of $\mathbf{Q}$, with the additional advantage that they are both over
the full range of the Brillouin zone. Now we need to compute a contiguous
register
$(((((\mathbf{k}_{x}N_{y}+\mathbf{k}_{y})N_{z}+\mathbf{k}_{z})N_{x}+\mathbf{k}^{\prime}_{x})N_{y}+\mathbf{k}^{\prime}_{y})N_{z}+\mathbf{k}^{\prime}_{z})M+\mu,$
(151)
where we are using $\mathbf{k}^{\prime}$ for $\mathbf{k}{\ominus}\mathbf{Q}$.
This contiguous register includes many multiplications by classically chosen
constants, which has complexity depending on how many ones are in these
constants. The worst case is where these numbers are all ones, so we will give
the cost for that case even though it is rare.
As discussed in [117] the cost of multiplying two integers when one is given
classically is no more than the product of the numbers of bits. For the
additions, the cost is no more than the number of bits on the larger number.
So, we have a cost as follows.
1. (a)
For multiplying $\mathbf{k}_{x}N_{y}$ a cost of $\lceil\log
N_{x}\rceil\lceil\log N_{y}\rceil$. Here $N_{y}$ would have more bits if it
were a power of 2, but then the multiplication cost would be zero.
2. (b)
For adding $+\mathbf{k}_{y}$ a cost of $\lceil\log N_{x}N_{y}\rceil$.
3. (c)
For multiplying $\times N_{z}$ the cost is $\lceil\log
N_{x}N_{y}\rceil\lceil\log N_{z}\rceil$.
4. (d)
For adding $+\mathbf{k}_{z}$ the cost is $\lceil\log N_{k}\rceil$.
5. (e)
For multiplying $\times N_{x}$ the cost is $\lceil\log N_{k}\rceil\lceil\log
N_{x}\rceil$.
6. (f)
For adding $+\mathbf{k}^{\prime}_{x}$ the cost is $\lceil\log
N_{x}N_{k}\rceil$.
7. (g)
For multiplying $\times N_{y}$ the cost is $\lceil\log
N_{x}N_{k}\rceil\lceil\log N_{y}\rceil$.
8. (h)
For adding $+\mathbf{k}^{\prime}_{y}$ the cost is $\lceil\log
N_{x}N_{y}N_{k}\rceil$.
9. (i)
For multiplying $\times N_{z}$ the cost is $\lceil\log
N_{x}N_{y}N_{k}\rceil\lceil\log N_{z}\rceil$.
10. (j)
For adding $+\mathbf{k}^{\prime}_{z}$ the cost is $\lceil\log
N_{k}^{2}\rceil$.
11. (k)
For multiplying $\times M$ the cost is $\lceil\log N_{k}^{2}\rceil\lceil\log
M\rceil$.
12. (l)
For finally adding $+\mu$ the cost is $\lceil\log N_{k}^{2}M\rceil$.
We need to add all these items together to give the total cost, and it needs
to be multiplied by 4 because we compute and uncompute the register for each of
the two factors in the Hamiltonian; a classical sketch of this register
computation and its cost tally is given after this list.
Next we have a QROM on this contiguous register with cost
$\left\lceil\frac{N_{k}^{2}M}{k_{\rm nrm}}\right\rceil+(k_{\rm
nrm}-1)(n_{k}+\aleph).$ (152)
with $k_{\rm nrm}$ a power of 2. This is because there are $N_{k}^{2}M$ items
to iterate over, and we need to output $n_{k}$ bits for the alternate value of
$k$ and $\aleph$ for the keep value. We have twice this cost because of the
two factors in the Hamiltonian, but the erasure cost for each factor is
$\left\lceil\frac{N_{k}^{2}M}{k_{\rm era}}\right\rceil+k_{\rm era}.$ (153)
The last two steps of the coherent alias sampling are an inequality test with
cost $\aleph$ and a controlled swap with cost $n_{k}$. These costs are
incurred 4 times, once for preparation and once for inverse preparation for
each of the two factors for the Hamiltonian.
11. 11.
We will need to prepare a register that is $\mathbf{k}-\mathbf{Q}-\mathbf{G}$
again. We previously computed this, but we need to compute it again because we
have performed a state preparation on $\mathbf{k}$. This has a cost of
$2n_{k}$ again, and needs to be done 4 times for a total cost of $8n_{k}$.
A further subtlety is that we are storing the value of $\mathbf{k}$ to use in
the $\mathbf{Q}$ register in the one-body case. We can perform a controlled
swap into the working register for $\mathbf{k}$ or
$\mathbf{k}{\ominus}\mathbf{Q}$, which has a total cost of $4n_{k}$. Combined
with the arithmetic cost this is $12n_{k}$.
12. 12.
To use the register with $\mathbf{k}$ or $\mathbf{k}{\ominus}\mathbf{Q}$ to
control the swap of system registers into working registers, we can use each
qubit to control swaps of the system registers in a similar way as is used for
advanced QROM. The cost for selecting each qubit out of $N_{k}$ is $N_{k}-1$,
similar to the use in advanced QROM, despite the use of multiple components.
In particular, we can perform swaps of system registers based on the
$x$-component of $\mathbf{k}$ with cost $(N_{x}-1)N_{y}N_{z}$. Then swapping
the registers based on the $y$-component out of the subset of $N_{y}N_{z}$ has
cost $(N_{y}-1)N_{z}$. Then the cost of swapping based on the $z$-component
has cost $N_{z}-1$. Adding these three costs together gives
$N_{x}N_{y}N_{z}-1=N_{k}-1$. This is performed for each of the $N/2$ qubits we
need, for cost $N(N_{k}-1)/2$. We need to swap and inverse swap 8 times on
$NN_{k}/2$ system registers, for a total cost of $4N(N_{k}-1)$.
13. 13.
Next, we consider the output of the rotations for the $c$ modes. These will be
controlled by the registers with $\mu$ and $\mathbf{k}$ (or
$\mathbf{k}{\ominus}\mathbf{Q}$), as well as the qubit selecting between the
one- and two-body terms. A difficulty is that we would need a contiguous
register in order to be able to effectively apply the advanced QROM. A method
around this is to use a QROM on the selection qubit and $\mathbf{k}$ to output
an offset, then add $\mu$. The complexity of that QROM is $2N_{k}$, then the
complexity of the addition is $\lceil\log N_{k}(M+N/2)\rceil$.
That is in the case where we need to apply the one-body term as part of the
implementation. Recall that we only need to do that once when we are applying
$c^{\dagger}c$ twice for the two-body term. In the part where we are not
applying the one-body term we instead have complexity $N_{k}+\lceil\log
N_{k}M\rceil$.
The size of the QROM output for the rotations is then $N\beth$. That is again
because we need Givens rotations on $N/2$ qubits, and need two angles for each
Givens rotation with $\beth$ bits each. The complexity is then, in the case with
the one-body term,
$\left\lceil\frac{N_{k}(M+N/2)}{k_{r}}\right\rceil+N\beth(k_{r}-1),$ (154)
and for the case where we do not need the one-body term
$\left\lceil\frac{N_{k}M}{k_{r}}\right\rceil+N\beth(k_{r}-1),$ (155)
with $k_{r}$ being a power of 2. The cost of the Givens rotations is
$2N(\beth-2)$, because we have $N/2$ qubits and 2 angles for each Givens
rotation.
The cost of erasing the QROM for the two cases is
$\displaystyle\left\lceil\frac{N_{k}(M+N/2)}{k^{\prime}_{r}}\right\rceil+k^{\prime}_{r},$
(156)
$\displaystyle\left\lceil\frac{N_{k}M}{k^{\prime}_{r}}\right\rceil+k^{\prime}_{r}.$
(157)
Lastly we note that we need to apply the sequence of Givens rotations 8 times
to account for the four $c^{\dagger}$ and $c$ operators, and similarly we need
to apply the QROM and invert it four times. That gives a total complexity for
the QROM-based basis rotations
$\displaystyle
2\left\lceil\frac{N_{k}(M+N/2)}{k_{r}}\right\rceil+2N\beth(k_{r}-1)+2\left\lceil\frac{N_{k}M}{k_{r}}\right\rceil+2N\beth(k_{r}-1)+2\left\lceil\frac{N_{k}(M+N/2)}{k^{\prime}_{r}}\right\rceil+2k^{\prime}_{r}$
$\displaystyle+2\left\lceil\frac{N_{k}M}{k^{\prime}_{r}}\right\rceil+2k^{\prime}_{r}+16N(\beth-2)+12N_{k}+4\lceil\log
N_{k}(M+N/2)\rceil+4\lceil\log N_{k}M\rceil,$ (158)
where $16N(\beth-2)$ is for the Givens rotations themselves, and the terms
from $12N_{k}$ on are for creating and erasing contiguous registers.
14. 14.
The next part of the complexity that needs to be accounted for is the
selection of $\vec{Z}X$ and $\vec{Z}Y$ for the implementation of the
$c^{\dagger}$ and $c$ operators. We only need to select between $\vec{Z}X$ and
$\vec{Z}Y$, but not select the location the $X$ or $Y$ is performed since
these are applied in the working registers. This selection can therefore be
performed entirely with Clifford gates. However, we do need additional control
to avoid performing these operators for one of the $c^{\dagger}c$ for the one-
body component. That is a cost of just one Toffoli for each of $c^{\dagger}$
and $c$.
A complication is that we need to perform $Z$ gates on remaining system
registers (that have not been swapped into the working registers). We perform
unary iteration on the register containing $\mathbf{k}$ (or
$\mathbf{k}{\ominus}\mathbf{Q}$), and use that to apply the appropriate $Z$
operators with a Toffoli cost $N_{k}-1$. The Toffoli complexity is independent
of $N$, because we only have a Toffoli cost for the unary iteration, not for
the controlled-$Z$ gates.
15. 15.
The last part to the complexity when constructing the qubitised operator is
the reflection on the control ancillas. The qubits we need to reflect on are
as follows.
1. (a)
The $\lceil\log d\rceil$ qubits used for the equal superposition state for the
initial preparation, and another qubit rotated for this preparation.
(b)
The $\aleph$ qubits used for the equal superposition state for the inequality test in state preparation.
(c)
The two qubits for the two spins in the sum.
(d)
There are $n_{k}$ qubits that an equal superposition of $\mathbf{k}$ values is
prepared in, and this is done twice for $2n_{k}$ qubits. To save on qubit use
we can also reuse these qubits and flag that they are equal to zero in between
preparing $\mathbf{k}$ and $\mathbf{k}^{\prime}$, but that does not affect the
Toffoli count.
(e)
There are $\aleph$ qubits used for the equal superposition state in each of the two rounds of state preparation for $\mathbf{k}$, giving $2\aleph$ in total. Again these can be reused, but that does not affect the qubit count.
(f)
There are 4 qubits to select between $X$ and $Y$ for each of the
$c^{\dagger},c$.
(g)
There are two qubits used to generate the symmetries.
This is a total of
$\lceil\log d\rceil+3\aleph+2n_{k}+9$ (159)
qubits for the control, and the reflection cost is 2 less than this. We do require an additional Toffoli for the control by the phase estimation registers, and another for unary iteration on the phase estimation registers; these two extra Toffolis make up for the reflection cost being 2 less, so the expression above can be used directly for the additional Toffoli cost.
Next we account for the qubit costs.
1.
There are $NN_{k}$ system qubits.
2.
The control register for the phase estimation, and the ancillas for the unary
iteration, together need $2\lceil\log\mathcal{I}\rceil-1$ qubits.
3.
There are all the qubits needed for controls as described in the last item in
the Toffoli costing above. Taking the inversion of the equal superposition
over $\mathbf{k}$ to be flagged by a single qubit, the $2n_{k}$ can be
replaced with $n_{k}+1$. Similarly flag qubits can be used on the qubits used
for the equal superposition state for the inequality test to replace $3\aleph$
with $\aleph+2$ qubits. That gives a number of qubits
$\lceil\log d\rceil+\aleph+n_{k}+12.$ (160)
4.
A phase gradient state of size $\beth$ is used for rotations.
5.
A single T state is used for the controlled Hadamards.
6.
The QROM used for the first state preparation outputs $m$ qubits.
7.
This first state preparation also uses
$m(k_{p}-1)+\lceil\log(d/k_{p})\rceil-1$ temporary qubits.
8.
A single qubit is used for the result of the inequality test for the coherent
alias sampling.
9.
A single qubit is used for the result of the inequality test between
$\mu,\mathbf{G}_{1}$ and $\nu,\mathbf{G}_{2}$.
10.
There are $n_{k}+3b_{r}$ qubits used as the output of the QROM used to give
the information needed for the preparation of the equal superposition over
$\mathbf{k}$, as well as $b_{r}$ for $\mathbf{k}$.
11.
In the preparation of the state for $\mathbf{k}$ we compute
$\mathbf{k}{\ominus}\mathbf{Q}$ in an ancilla needing $n_{k}$ qubits, and
compute a contiguous register that needs $\lceil\log(N_{k}^{2}M)\rceil$
qubits.
12.
The state preparation for $\mathbf{k}$ uses $n_{k}+\beth$ output qubits.
13.
The state preparation uses
$(k_{\rm nrm}-1)(n_{k}+\beth)+\left\lceil\log\left(\frac{N_{k}^{2}M}{k_{\rm
nrm}}\right)\right\rceil-1$ (161)
temporary qubits.
14.
We also use $\aleph$ in this state preparation for a superposition state, and
another qubit for the result of the inequality test.
15.
There are $\lceil\log N_{k}(M+N/2)\rceil$ qubits used for the contiguous
register for the QROM for the qubit rotations.
16.
There are $N\beth k_{r}$ qubits used for the QROM for the rotations, with another
$\left\lceil\log\left(\frac{N_{k}(M+N/2)}{k_{r}}\right)\right\rceil-1$ (162)
temporary qubits.
Accounting for the maximum total involves determining the maximum number of qubits used at any one time, which depends on the use of temporary ancillas. We first need to determine whether the temporary qubits described in part 13 or the total qubits in parts 14 to 16 are larger. We take the maximum of these, and add it to the qubits used in parts 8 to 12. Then we compare that to the number of temporary qubits in part 7. The maximum of those is added to the qubits in parts 1 to 6.
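The max/sum bookkeeping described in this paragraph can be written compactly; the sketch below only encodes that logic, with the per-part qubit totals supplied by the caller rather than recomputed from parts 1 to 16.

```python
def total_logical_qubits(parts_1_to_6, part_7_temp, parts_8_to_12,
                         part_13_temp, parts_14_to_16):
    """Maximum simultaneous qubit count.

    Temporary ancillas from the two QROM-based preparations are not needed at the
    same time, so the rule from the text is
        total = (parts 1-6) + max(part 7 temporaries,
                                  (parts 8-12) + max(part 13 temporaries, parts 14-16)).
    """
    inner = parts_8_to_12 + max(part_13_temp, parts_14_to_16)
    return parts_1_to_6 + max(part_7_temp, inner)
```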
Figure 15: This shows how to construct a self-inverse procedure for block
encoding two-electron terms, as in Fig. 6 of [32]. The left side is the
manifestly self-inverse form, and the right side is a more intuitive form
where the $\mathinner{|{+}\rangle}$ state is used to generate the symmetry
between $\mu$ and $\nu$, and the two $V$ operations are controlled by $\mu$
and $\nu$ in succession.
Next we give a little more detail on how the construction is made self-
inverse. As explained in Fig. 6 of [32] (shown above as Figure 15) the THC
construction may be made self-inverse by using the qubit in the
$\mathinner{|{+}\rangle}$ state which controls the swap of the
$\mathinner{|{\mu}\rangle}$ and $\mathinner{|{\nu}\rangle}$ registers. As can
be seen in Figure 6, we are using a similar procedure, where the
$\mathinner{|{+}\rangle}$ state controls the swap of $\mu$ and $\nu$ at the
top left and at the lower right. The $X$ gate and swaps on the lower left correspond to the $X$ gate and swap in the middle of Figure 15.
We have currently just drawn the quantum circuit as having $c$ and $c^{\dagger}$, but these would be implemented using ancilla qubits to control the selection between $X$ and $Y$ in $X\pm iY$. For the implementation to be self-inverse, we would want the qubit used to control the first $c$ to be the same as that for the final $c^{\dagger}$. This can be achieved by arranging the four qubits used to control each of the $c$ and $c^{\dagger}$ so that the top one can be used as the control each time. In particular, after the first $c$, swap the first two qubits; then after the $c^{\dagger}$, swap the first pair with the second pair; then after the next $c$, swap the first two qubits again. As a result, the first qubit can be used as the control each time. Moreover, this arrangement of swaps is manifestly self-inverse.
The application of the controlled $X$ and $iY$ gates is also automatically self-inverse. The reason is that $iY$ squared is $-\openone$. In the block encoding we perform $Z$ gates on the control qubits for $c$ but not $c^{\dagger}$. Then, performing the unitaries for the block encoding twice, the final controlled $c^{\dagger}$ in the first block encoding is matched with the first $c$ of the second block encoding. The same control qubit is used, so the two controlled $X$ operations cancel, and the two controlled $iY$ operations give $-\openone$, which cancels with the $Z$ gate on that control qubit. In this way all the operations cancel: each controlled $c$ is matched with a controlled $c^{\dagger}$, and the resulting sign cancels the $Z$ gate on the control qubit. There is no additional Toffoli cost for these swaps and phase gates, because they are all Clifford gates.
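The swap pattern for the control qubits can be checked with a few lines of bookkeeping; the toy sketch below tracks which of the four designated qubits sits in the top (control) position at each $c$/$c^{\dagger}$ application over two successive block encodings, and is only an illustration of the argument above.

```python
def control_pattern(rounds=2):
    """Track which designated qubit is at the top position for each c / c^dagger."""
    order = [0, 1, 2, 3]   # position 0 is the 'top' qubit that always acts as the control
    controls = []
    for _ in range(rounds):
        controls.append(order[0])                 # first c of this block encoding
        order[0], order[1] = order[1], order[0]   # swap the first two qubits
        controls.append(order[0])                 # c^dagger
        order[:] = order[2:] + order[:2]          # swap the first pair with the second pair
        controls.append(order[0])                 # next c
        order[0], order[1] = order[1], order[0]   # swap the first two qubits again
        controls.append(order[0])                 # final c^dagger
    return controls, order

# Two block encodings give the palindromic control pattern [0, 1, 2, 3, 3, 2, 1, 0]
# and restore the original qubit order, so matched operators share a control qubit.
print(control_pattern())
```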
## Appendix E Correlation diagnostics for LNO
structure | max($|t_{1}|$) | max($|t_{2}|$) | T1 | D1 | UHF $S^{2}$
---|---|---|---|---|---
C2/m | 0.2538 | 0.0330 | 0.0482 | 0.1912 | 0.7783
P21/c | 0.2313 | 0.0322 | 0.0472 | 0.2178 | 0.8027
P2/c | 0.1688 | 0.0571 | 0.0371 | 0.2089 | 1.0447
Table 8: Some common diagnostics of strong correlation from the ROHF-CCSD calculations for each of the distorted LiNiO2 structures. The UHF $S^{2}$ values in the final column are given per formula unit and should be compared with the exact doublet value ($\langle S^{2}\rangle=0.75$). The basis set (GTH-DZVP) and other details of the calculations are described in Section V.2.
For insulators like the distorted structures of LiNiO2, there are several
common diagnostics that are used in molecular calculations to identify cases
of “strong correlation.” Here we examine the maximum elements of the CCSD T1 and T2 tensors (max($|t_{1}|$) and max($|t_{2}|$)), which are commonly used as a measure of correlation [118], as well as the T1 [119] and D1 [120, 121] diagnostics computed from the ROHF $k$-point CCSD calculations. We also show
the expectation value of the $S^{2}$ operator for the UHF solution because
spin contamination in the UHF calculation can be a signature of strong
correlation. The results are shown in Table 8. As expected, these results do
not suggest particularly strong correlation. Only the max($|t_{1}|$) values
for the C2/m and P21/c structures and the UHF $S^{2}$ for the P2/c structure
are larger than might be expected. The larger max($|t_{1}|$) is likely an
indication that the ROHF orbitals are far from optimal, and the symmetry
breaking in the UHF calculations does not mean that CCSD cannot provide
reliable energies.
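For completeness, the diagnostics listed in Table 8 can be obtained directly from converged amplitudes; the sketch below assumes the standard definitions (T1 as the Frobenius norm of the singles amplitudes divided by the square root of the number of correlated electrons, D1 as the largest singular value of the occupied-by-virtual $t_{1}$ matrix) and uses random numbers purely as a stand-in for pyscf CCSD output.

```python
import numpy as np

def t1_diagnostic(t1, n_elec):
    """T1 diagnostic: ||t1||_F / sqrt(number of correlated electrons)."""
    return np.linalg.norm(t1) / np.sqrt(n_elec)

def d1_diagnostic(t1):
    """D1 diagnostic: largest singular value of the (occupied x virtual) t1 matrix."""
    return np.linalg.svd(t1, compute_uv=False)[0]

# Stand-in singles amplitudes; in practice t1 comes from a converged CCSD calculation.
rng = np.random.default_rng(0)
t1 = 0.01 * rng.standard_normal((10, 40))   # n_occ x n_virt
print(np.abs(t1).max(), t1_diagnostic(t1, n_elec=20), d1_diagnostic(t1))
```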
## Appendix F Classical timing benchmarks
In order to compare the quantum algorithm run time to state-of-the-art classical algorithms, we measured the cost of computing the CCSD and ph-AFQMC total energies for the benchmark systems listed in Table 1 in double- and triple-zeta basis sets. The results of these timings are presented in Fig. 16. For CCSD we used pyscf [93, 94] and ran the timings on a node with 30 3.1 GHz Intel Xeon CPUs (30 OpenMP threads).
For ph-AFQMC, which is considerably more expensive than CCSD for small system sizes, we estimated the run time by performing a short ph-AFQMC calculation for each system using 8 Nvidia V100 GPUs. From this data we can then estimate the run time to achieve a statistical error bar (per atom) of $1\times 10^{-4}$ Ha through the assumption that the statistical error of ph-AFQMC decays like $N_{s}^{-1/2}$, where $N_{s}$ is the number of Monte Carlo samples. Formally, ph-AFQMC should asymptotically scale like $\mathcal{O}(N_{k}^{3})$ [22], assuming the number of samples required to reach the desired precision does not scale with the system size. Interestingly, we found that the statistical error bar per atom for a fixed number of samples actually decreased with $N_{k}$, which implies the variance of ph-AFQMC increases sub-linearly with the system size. For smaller system sizes it is important to note that in practice one can saturate the GPU with walkers with nearly no loss in speed [108], thus reducing the error bar for a fixed wall time. As a result, the small-$N_{k}$ AFQMC numbers represent a large overestimate of the runtime one would practically need to obtain the desired statistical error. Another confounding factor which may affect the scaling of ph-AFQMC is the time step error (we fixed the time step at 0.005 Ha$^{-1}$ for all systems). Recent results suggest that ph-AFQMC suffers from a size extensivity error [122], which is practically remedied through time step extrapolation.
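The extrapolation of the pilot-run timings can be made explicit; the sketch below is a minimal illustration under the stated $N_{s}^{-1/2}$ assumption, and the numbers are placeholders rather than measured values from Fig. 16.

```python
def extrapolated_walltime(pilot_walltime, pilot_error, target_error):
    """Wall time needed to reach `target_error`, assuming the statistical error
    decays as 1/sqrt(N_s), so the required number of samples (and hence the
    cost) grows with the square of the error ratio."""
    return pilot_walltime * (pilot_error / target_error) ** 2

# Placeholder: a 2-hour pilot run with an 8e-4 Ha/atom error bar, targeting 1e-4 Ha/atom.
print(f"{extrapolated_walltime(2.0, 8e-4, 1e-4):.1f} hours")
```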
Figure: (a) Initially, the thick, black bond is unscreened, but as (b) the atom enters the region of influence (light gray ellipses) the bond weakens. (c) The atom has moved into the close vicinity of the bond (dark gray region), effectively disabling it while creating two new bonds (red lines). The Baskes screening functions [376] are defined by the aspect ratios of the light gray region ($C_{\text{max}}$) and of the dark gray region ($C_{\text{min}}$), defining a measure of bond screening that is independent of the absolute lengths of the bonds in the system.
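To make this construction concrete, the sketch below implements one common (MEAM-style) realization of the screening factor between $C_{\text{min}}$ and $C_{\text{max}}$; the elliptical parameter and the cutoff polynomial follow the form we associate with Baskes' scheme and should be read as an illustrative assumption, not as the exact expressions of any particular published potential.

```python
import numpy as np

def ellipse_parameter(r_ij, r_ik, r_jk):
    """Elliptical parameter C describing how closely atom k approaches the i-j bond;
    it depends only on distance ratios, not on the absolute bond lengths."""
    x_ik = (r_ik / r_ij) ** 2
    x_jk = (r_jk / r_ij) ** 2
    return (2.0 * (x_ik + x_jk) - (x_ik - x_jk) ** 2 - 1.0) / (1.0 - (x_ik - x_jk) ** 2)

def screening_factor(C, c_min=2.0, c_max=2.8):
    """Smooth switch: 1 (unscreened) for C >= c_max, 0 (fully screened) for C <= c_min."""
    x = np.clip((C - c_min) / (c_max - c_min), 0.0, 1.0)
    return (1.0 - (1.0 - x) ** 4) ** 2

def bond_screening(r_ij, third_atoms, c_min=2.0, c_max=2.8):
    """Total screening of an i-j bond as a product over third atoms k,
    each given by its distances (r_ik, r_jk) to the two bond ends."""
    s = 1.0
    for r_ik, r_jk in third_atoms:
        s *= screening_factor(ellipse_parameter(r_ij, r_ik, r_jk), c_min, c_max)
    return s

# A third atom far from a bond of length 1.5 leaves it unscreened; moving it close disables the bond.
print(bond_screening(1.5, [(1.6, 1.6)]), bond_screening(1.5, [(0.9, 0.9)]))
```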
The Baskes screening functions [376] were applied to empirical bond-order
potentials independently by Pastewka et al. [378, 25, 401] and Kumagai et al.
[402]. Both groups emphasized that the screened potentials significantly
improved the properties of amorphous carbon modeled with REBO2 or Tersoff-type
potentials. In addition, the screening functions served to overcome the issue
with dissociation of a bond under external stress discussed in Refs. [377,
378, 25, 401]. This enabled modeling of fracture in crystalline and amorphous
carbon systems [403]. Perriot et al. [404] presented a slightly different screening concept that requires the REBO potential to be refitted.
Screening functions can be rationalized as originating from nonorthogonality
in a tight-binding framework. This nonorthogonality leads to an environment-
dependence of the bond-integrals, when the nonorthogonal tight-binding is
“coarse-grained” to an equivalent orthogonal tight-binding model. Nguyen-Manh,
Pettifor and Vitek [405, 406] showed that the theory of the bond-order expansion, briefly touched upon in Sect. 5.8, can be used to derive screening functions from a nonorthogonal tight-binding model. This first-principles construction lends additional support to Baskes’ screening functions and to other screening approaches, such as the empirical environment-dependence introduced in the context of orthogonal tight-binding shortly after Baskes’ work [407, 408, 409, 410].
## 10 Summary and perspectives
Eugene Wigner allegedly said: “ _It is nice to know that the computer understands the problem. But I would like to understand it too._ ” One main motivation for writing this review was to assist people with ambitions similar to Wigner’s. To this end, we summarized our understanding of what properties in condensed-matter systems can be induced by the functional form of the potentials used for their description. In our endeavor, we felt compelled to create much of our own data and new figures, with the purpose of creating insight and conveying trends and differences between potential classes rather than producing numbers for a specific system. When doing so, we did our best to embed anything written into a historical context, which is summed up in Fig. 19.
Figure 19: Selected highlights of the development of interaction potentials.
Yet many times, we could not find appropriate references or quotes, which we
are certain do exist. As one of numerous examples, we found many papers
computing shear and bulk moduli of metals or vacancy-defect and cohesive
energies, but always missed the argument why their respective ratios are
correlated and how they relate to the ratio of melting and boiling
temperature. We expect our discussion to have satisfied Wigner’s desire for
understanding the correlation between these ratios. Despite certainly having
missed well known studies, we did find some old works, which may have been
underappreciated, such as Slater’s paper on the interaction between helium
atoms [84]. As mentioned earlier, Slater derived the exponential repulsion
between atoms with closed valence shells, which prompted Born and Mayer [85] as
well as Buckingham [7] to use this or slightly modified forms for the
repulsion in the potentials now carrying their names. However, Slater also
derived the dispersive coefficient for helium to within 15% accuracy, two
years before London [56] generalized the results to other closed-valence
shells.
Despite the length of this article, we could only scratch the surface of the
large field of interatomic potentials. Many central aspects were not touched
upon. Most importantly, we barely discussed how to adjust parameters, in
particular the pros and cons of fitting to experimental or to in-silico data.
There are good reasons to follow the main-stream opinion that the quality of a
potential increases the less empirical the data on which the potential is
parameterized. Computer-generated reference data is much more versatile than
that provided by experiments. Forces on individual atoms can be used for
characteristic bonding situations or rare but important configurations like a
transition state occurring during a chemical reaction or a collective phase
transformation [411]. Moreover, in-silico data does not contain quantum
effects, which frequently need to be accounted for when comparing computer-
generated data to experiments. Describing how to do that properly would have
required us to outline how to approximately subtract the nuclear quantum
effects from experimental data or how to incorporate quantum fluctuations into
the simulations, e.g., through path-integral techniques, or, by encoding their
effect into effective temperature-dependent, many-body potentials, which would
have been beyond the scope of this review.
Thus, there would scarcely be any argument for gauging parameters on experiments, were it not for the small but important detail that experimental data is by and large more accurate than density-functional theory, which cannot be deemed exact as long as the exact functional has not been identified. It could be argued that we base one theory or potential with uncontrolled approximations on another one with uncontrolled approximations, which has trouble predicting that two unlike molecules or clusters separated by a large distance each acquire an integer charge [412, 413]. When dismissing empiricism as
fundamentally problematic, one may also keep in mind that one of the greatest
theoretical achievements in chemistry, arguably in all of science, was the
construction of the periodic table by Mendeleev. He even predicted the
existence of unknown elements including some of their physical and chemical
properties with an accuracy that people using potentials or even DFT might
have a hard time matching if they did not know what they had to predict, or rather, postdict. Moreover, the amount of data that Mendeleev could build on
was noticeably less than what is required in machine learning.
The potentials discussed in this review pertain mostly to situations in which
bonds can be clearly classified as dispersive, metallic, covalent, or ionic.
For situations, where this simple categorization cannot be made, different
potential classes are combined in a mix-and-match fashion into compound
potentials. Prominent examples are the adaptive intermolecular REBO (AIREBO)
[252] (combining Brenner’s potential with nonbonded Lennard-Jones
interaction), the Streitz-Mintmire potential [414] (combining EAM with charge
transfer), the charge-optimized many-body potential (COMB) [415, 416, 417]
(combining Tersoff’s potential with charge transfer), a merger of REBO with
split-charge equilibration [333, 418], as well as early combinations of
Keating-type with charge-transfer potentials [419, 420]. Of course,
the widely-used ReaxFF potential [421, 12], which merges a bond-order approach
(different in nature than the approaches discussed in this review) with non-
bonded interactions and charge transfer, must also be mentioned.
While compound approaches can be extremely powerful, many of them simply add
different energy terms. This can be problematic even for seemingly simple
alloys or intermetallics formed by elements of large electronegativity
difference. Put simply, negatively charged atoms grow in size while positively
charged atoms shrink. This symmetry breaking between negative and positive
charge is not reflected when simply adding charge equilibration to a (post)
Ducastelle potential. Yet, it is supposedly responsible for why the negatively
charged atoms in intermetallics have the tendency to close pack while positive
atoms occupy interstitial positions, as it happens, for example, for Al2Au,
also called the purple plague: Au atoms form an fcc lattice while Al atoms
assume interstitial positions. Although promising steps toward true compound
potentials have been taken [422], e.g., by augmenting or reducing the valence
density of a neutral atom with a term proportional to its partial charge,
systematically merged potentials remain a dream.
Novel paths that are taken with machine learning potentials seem extremely
promising. However, a puzzling question is why machine-learned potentials
outperform parameterized potentials. The claim that they are parameter free or
free of functional constraints is not entirely justified. Many of the local
descriptors are suspiciously close to what is used in potentials, as indeed
they are often “physics-inspired” [352]. However, the big advantage of MLPs is
that they do not make strong assumptions like pair-wise additive repulsion,
which might be one of the most important sources of error in classical
interatomic potentials.
A show-stopping problem central to all potentials is the curse of
dimensionality. Fitting a multi-species (or alloy) potential requires a number
of pair-parameters that scales asymptotically as $N_{\textrm{s}}^{2}$ with the
number of atomic species $N_{\textrm{s}}$. The scaling becomes even less
favorable if we need specific parameters for triplets ($\propto
N_{\textrm{s}}^{3}$), quadruplets ($\propto N_{\textrm{s}}^{4}$) and so on,
quickly becoming intractable for a large number of species. The compression of
chemical fingerprints has recently been proposed to circumvent the curse of
dimensionality for MLPs [423]. Using explicit functional forms, it can be
possible to circumnavigate the curse of dimensionality with combining rules.
However, they are only available for a few interaction types and may be plagued
with poor transferability.
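As a simple illustration of how combining rules tame the quadratic growth in pair parameters, the sketch below applies the classic Lorentz-Berthelot rules to Lennard-Jones parameters, so only the like-pair values need to be fitted; the numerical values are generic textbook-style numbers used purely for demonstration.

```python
import itertools

# Like-pair Lennard-Jones parameters (sigma in Angstrom, epsilon in eV); illustrative values only.
like_pairs = {
    "Ar": (3.40, 0.0104),
    "Kr": (3.65, 0.0140),
    "Xe": (3.98, 0.0190),
}

def lorentz_berthelot(a, b):
    """Lorentz rule: arithmetic mean of the sigmas; Berthelot rule: geometric mean of the epsilons."""
    sigma_a, eps_a = like_pairs[a]
    sigma_b, eps_b = like_pairs[b]
    return 0.5 * (sigma_a + sigma_b), (eps_a * eps_b) ** 0.5

# N_s like-pair entries generate all N_s(N_s + 1)/2 pair interactions.
for a, b in itertools.combinations_with_replacement(like_pairs, 2):
    sigma, eps = lorentz_berthelot(a, b)
    print(f"{a}-{b}: sigma = {sigma:.2f} A, epsilon = {eps:.4f} eV")
```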
As a final note, we would like to point out that despite the fact that (with
the exception of bare Coulomb interaction) all potentials discussed here are
local, chemistry can be quite non-local. By non-locality we do not mean the
range of the bare interaction, such as the range of the bond-integrals in a
tight-binding formulation. We mean the non-locality intrinsic in the
diagonalization of the quantum mechanical Hamiltonian. In hydrocarbon
chemistry, the non-locality manifests itself for example in bond conjugation
and in metals through an algebraic decay of the density matrix [424], while in
groups 15–17 of the periodic table, it is reflected in the Peierls deformation
causing elemental crystals to reduce from the simple cubic to less symmetric
structures [425]. As another example, carbon chains – also called carbynes –
can exist in a polyynic form of alternating single and triple bonds or a
cumulenic form of repeating double bonds [426, 427, 428]. Which form is chosen
depends on whether the chain is odd or even numbered and how it is terminated.
This crucially affects how they interact with their environment, for example
with oxygen [429, 430]. Such non-local effects even manifest in bulk
materials: Force-locality tests on amorphous carbon by Deringer and Csányi
showed that chemistry in low-density, graphite-like amorphous carbon is much
longer ranged than in denser, more diamond-like carbon [355]. Approaches for
incorporating true quantum non-locality into potentials currently do not
appear to exist. Modeling it appears to require new classes of potentials, e.g., giving an EAM or MEAM potential the ability to make atoms adjust the charge density they donate in response to the environment in a fashion that allows for multistability.
We hope that this review was successful in highlighting the incredible
achievements throughout the last century in understanding the bonding of
matter, and molding these insights into simple analytical expressions. The
wide availability of high-accuracy electronic structure calculations and
advances in statistical modeling have moved the field into exciting new
directions. We would also like to add that the wide availability of present-
day interatomic potentials in the form of open-source software, ideally
embedded in a standard database [13] or a standard code [310, 311], is
accelerating the adoption of potentials into practice — not to mention the
savings in students’ lifetimes, by not having to dissect which of the $50$
parameters just manually copied from printed publication XYZ is missing a $0$
in print. (Yes, we are thinking about our own PhD theses.) Of course,
significant challenges remain, both for traditional fixed-form as well as
machine-learned interatomic potentials, of which we believe the curse of dimensionality and the coupling of electron transfer and Coulomb interaction to the electronic bond to be the most crucial.
## Acknowledgement(s)
We thank Gábor Csányi, Volker L. Deringer, Christian Elsässer, Peter Gumbsch,
Judith Harrison, James Kermode, Pekka Koskinen, Gianpietro Moras, Michael
Moseler, Matous Mrovec, Toon Verstraelen and Michael Walter for many
enlightening discussions over the years. We are further indebted to James
Kermode for comments on the machine learning section of the manuscript as well
as Joshua Weißenfels and Jan Grießer for proofreading and commenting on the
full manuscript. We used gpaw [431] for all DFT calculations shown here that
are not obviously taken from third-party sources.
## Disclosure statement
No potential conflict of interest was reported by the author(s).
## Nomenclature/Notation
### Abbreviations
ACE | atomic cluster expansion
---|---
ATM | Axilrod-Teller-Muto interaction potential
DFT | density-functional theory
EAM | embedded-atom method
EOS | equation of state
GAP | Gaussian approximation potential
LJ | Lennard-Jones
MEAM | modified EAM
ML | machine learning
MLP | machine-learned potential
QEq | charge equilibration
REBO | reactive empirical bond-order potential
REBO2 | second generation REBO
SQE | split-charge equilibration
SW | Stillinger-Weber
TB, TB$n$M | tight-binding, TB $n$-th order moment expansion
### Symbols
$\alpha$, $\alpha^{\prime}$ | polarizability in SI and atomic units
---|---
$\alpha_{\text{M}}$ | Madelung constant
$\delta_{\alpha\beta}$ | Kronecker delta
$\epsilon$ | Lennard-Jones energy parameter
$\varepsilon_{0}$ | vacuum permittivity
$\varepsilon_{\textrm{r}}$ | dielectric constant
$\varepsilon_{\alpha\beta}$ | element of the Eulerian strain tensor
$\eta_{\alpha\beta}$ | element of the Lagrangian strain tensor
$\rho$ | charge density, number density
$\sigma$ | length scale parameter
$\sigma_{\alpha\beta}$ | element of the Cauchy stress tensor
$A$ | electron affinity
---|---
$B_{n}$ | $n$-th order virial coefficient
$C_{\alpha\beta\gamma\delta}$ | element of elastic tensor
$C_{ij}$ | element of elastic tensor in Voigt notation
$C_{n}$ | dispersion coefficient of order $n$
$\hat{H}$ | Hamilton operator
$H_{i\alpha j\beta}$ | Hamiltonian integral between orbital $\alpha$ on atom $i$ and orbital $\beta$ on atom $j$
$I$ | ionization energy
$B$ | bulk modulus
$N$ | particle number
$Q_{i}$ | charge of atom $i$
$S_{ij}$ | square of the distance between atoms $i$ and $j$
$U$ | interaction energy
$U_{0}$ | dimer/molecular binding energy
$U_{\text{pa}}$ | potential energy per atom
$U_{\text{pa}}^{\text{eq}}$, $U_{\text{coh}}$ | equilibrium potential energy per atom (cohesive energy)
$U_{\text{pb}}$ | potential energy per bond
$U_{\text{pb}}^{\text{eq}}$ | equilibrium potential energy per bond
$U_{1}$, $U_{1}^{(i)}$ | single-body interaction energy
$U_{2}$, $U_{3}$ | pair and triplet interaction energy
$V$ | volume
$Z_{0}$ | coordination number
$Z_{s}$ | number of atoms in $s$’th nearest-neighbor shell
$a_{n}$ | distance between an atom and a $(n+1)$-nearest neighbor
---|---
$a_{0}^{\textrm{eq}}$ | equilibrium bond length
$a_{\textrm{B}}$ | Bohr radius
e | elementary charge
$f_{\text{c}}$ | cutoff function
$k_{\textrm{B}}T$ | thermal energy
$m$ | mass
$n(\varepsilon)$ | density of states
$n_{i\alpha}(\varepsilon)$ | local density of states of orbital $i\alpha$
$\hat{p}_{i}$ | momentum operator
$\mathbf{p}_{i}$, $p_{i}$ | dipole moment of species $i$ and its magnitude
$p$ | pressure
$\mathbf{q}_{i}$ | descriptor of the environment of atom $i$
$q_{ij}$ | bond charge donated from atom $i$ to atom $j$
$r$, $r_{ij}$ | (pair) distance
$r_{0}$ | equilibrium distance in a diatomic molecule
$r_{\textrm{c}}$ | cutoff radius
$\nu_{\textrm{s}}^{\alpha\beta}$, $\nu_{\textrm{s}}^{\alpha\beta\gamma\delta}$ | second- and fourth-rank shell tensor for the $s$’th nearest-neighbor shell
$v_{\text{pa}}$ | volume per atom
## References
* [1] R. P. Feynman, R. B. Leighton and M. Sands “The Feynman Lectures on Physics” New York: Addison-Wesley, 1964
* [2] E. A. Guggenheim “The Principle of Corresponding States” In _J Chem Phys_ 13.7 AIP Publishing, 1945, pp. 253–261 DOI: 10.1063/1.1724033
* [3] Leo P. Kadanoff et al. “Static Phenomena Near Critical Points: Theory and Experiment” In _Rev Mod Phys_ 39.2 American Physical Society (APS), 1967, pp. 395–431 DOI: 10.1103/revmodphys.39.395
* [4] M E Fisher “The theory of equilibrium critical phenomena” In _Rep Progr Phys_ 30.2 IOP Publishing, 1967, pp. 615–730 DOI: 10.1088/0034-4885/30/2/306
* [5] Kurt Kremer and Gary S. Grest “Dynamics of entangled linear polymer melts: A molecular-dynamics simulation” In _J Chem Phys_ 92.8 AIP Publishing, 1990, pp. 5057–5086 DOI: 10.1063/1.458541
* [6] Philip M. Morse “Diatomic Molecules According to the Wave Mechanics. II. Vibrational Levels” In _Phys Rev_ 34.1 American Physical Society (APS), 1929, pp. 57–64 DOI: 10.1103/physrev.34.57
* [7] R. A. Buckingham “The classical equation of state of gaseous helium, neon and argon” In _Proc Roy Soc Lond Math Phys Sci_ 168.933 The Royal Society, 1938, pp. 264–283 DOI: 10.1098/rspa.1938.0173
* [8] J E Lennard-Jones “Cohesion” In _Proc Phys Soc_ 43.5 IOP Publishing, 1931, pp. 461–482 DOI: 10.1088/0959-5309/43/5/301
* [9] Murray S. Daw and M. I. Baskes “Semiempirical, Quantum Mechanical Calculation of Hydrogen Embrittlement in Metals” In _Phys Rev Lett_ 50.17 American Physical Society (APS), 1983, pp. 1285–1288 DOI: 10.1103/physrevlett.50.1285
* [10] Donald W. Brenner “Empirical potential for hydrocarbons for use in simulating the chemical vapor deposition of diamond films” In _Phys Rev B_ 42.15 American Physical Society (APS), 1990, pp. 9458–9471 DOI: 10.1103/physrevb.42.9458
* [11] Donald W Brenner et al. “A second-generation reactive empirical bond order (REBO) potential energy expression for hydrocarbons” In _J Phys Condens Matter_ 14.4 IOP Publishing, 2002, pp. 783–802 DOI: 10.1088/0953-8984/14/4/312
* [12] Adri C. T. van Duin, Siddharth Dasgupta, Francois Lorant and William A. Goddard “ReaxFF: A Reactive Force Field for Hydrocarbons” In _J Phys Chem A_ 105.41 American Chemical Society (ACS), 2001, pp. 9396–9409 DOI: 10.1021/jp004368u
* [13] E. B. Tadmor et al. “The potential of atomistic simulations and the knowledgebase of interatomic models” In _JOM_ 63.7 Springer ScienceBusiness Media LLC, 2011, pp. 17–17 DOI: 10.1007/s11837-011-0102-6
* [14] Chandler A. Becker, Francesca Tavazza, Zachary T. Trautt and Robert A. Buarque Macedo “Considerations for choosing and using force fields and interatomic potentials in materials science and engineering” In _Curr Opin Solid State Mater Sci_ 17.6 Elsevier BV, 2013, pp. 277–283 DOI: 10.1016/j.cossms.2013.10.001
* [15] Zachary T Trautt, Francesca Tavazza and Chandler A Becker “Facilitating the selection and creation of accurate interatomic potentials with robust tools and characterization” In _Model Simulat Mater Sci Eng_ 23.7 IOP Publishing, 2015, pp. 074009 DOI: 10.1088/0965-0393/23/7/074009
* [16] Carla Tomas, Irene Suarez-Martinez and Nigel A. Marks “Graphitization of amorphous carbons: A comparative study of interatomic potentials” In _Carbon_ 109 Elsevier BV, 2016, pp. 681–693 DOI: 10.1016/j.carbon.2016.08.024
* [17] Lucas M Hale, Zachary T Trautt and Chandler A Becker “Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants” In _Model Simulat Mater Sci Eng_ 26.5 IOP Publishing, 2018, pp. 055003 DOI: 10.1088/1361-651x/aabc05
* [18] Carla Tomas et al. “Transferability in interatomic potentials for carbon” In _Carbon_ 155 Elsevier BV, 2019, pp. 624–634 DOI: 10.1016/j.carbon.2019.07.074
* [19] A.E. Carlsson “Beyond Pair Potentials in Elemental Transition Metals and Semiconductors” In _Solid State Physics_ Cambridge: Academic Press, 1990, pp. 1–91 DOI: 10.1016/s0081-1947(08)60323-9
* [20] D.W. Brenner “The Art and Science of an Analytic Potential” In _Phys Status Solidi (b)_ 217.1 Wiley, 2000, pp. 23–40 DOI: 10.1002/(sici)1521-3951(200001)217:1<23::aid-pssb23>3.0.co;2-n
* [21] G. J. Ackland “1.10 - Interatomic Potential Development” In _Comprehensive Nuclear Materials_ Oxford: Elsevier, 2012, pp. 267 –291
* [22] Susan B. Sinnott and Donald W. Brenner “Three decades of many-body potentials in materials research” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 469–473 DOI: 10.1557/mrs.2012.88
* [23] M.W. Finnis “Concepts for simulating and understanding materials at the atomic scale” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 477–484 DOI: 10.1557/mrs.2012.92
* [24] Stephen M. Foiles and Michael I. Baskes “Contributions of the embedded-atom method to materials science and engineering” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 485–491 DOI: 10.1557/mrs.2012.93
* [25] Lars Pastewka, Matous Mrovec, Michael Moseler and Peter Gumbsch “Bond order potentials for fracture, wear, and plasticity” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 493–503 DOI: 10.1557/mrs.2012.94
* [26] Yun Kyung Shin et al. “Variable charge many-body interatomic potentials” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 504–512 DOI: 10.1557/mrs.2012.95
* [27] Steven J. Plimpton and Aidan P. Thompson “Computational aspects of many-body potentials” In _MRS Bulletin_ 37.5 Springer ScienceBusiness Media LLC, 2012, pp. 513–521 DOI: 10.1557/mrs.2012.96
* [28] Tao Liang et al. “Reactive Potentials for Advanced Atomistic Simulations” In _Annu Rev Mater Res_ 43.1 Annual Reviews, 2013, pp. 109–129 DOI: 10.1146/annurev-matsci-071312-121610
* [29] Jörg Behler “Perspective: Machine learning potentials for atomistic simulations” In _J Chem Phys_ 145.17 AIP Publishing, 2016, pp. 170901 DOI: 10.1063/1.4966192
* [30] Judith A. Harrison et al. “Review of force fields and intermolecular potentials used in atomistic computational materials research” In _Appl Phys Rev_ 5.3 AIP Publishing, 2018, pp. 031104 DOI: 10.1063/1.5020808
* [31] Volker L. Deringer, Miguel A. Caro and Gábor Csányi “Machine Learning Interatomic Potentials as Emerging Tools for Materials Science” In _Adv Mater_ 31.46 Wiley, 2019, pp. 1902765 DOI: 10.1002/adma.201902765
* [32] Volker L. Deringer et al. “Gaussian Process Regression for Materials and Molecules” In _Chem Rev_ 121.16 American Chemical Society (ACS), 2021, pp. 10073–10141 DOI: 10.1021/acs.chemrev.1c00022
* [33] Y. Mishin “Machine-learning interatomic potentials for materials science” In _Acta Mater_ 214 Elsevier BV, 2021, pp. 116980 DOI: 10.1016/j.actamat.2021.116980
* [34] M. Finnis “Interatomic forces in condensed matter” Oxford: Oxford University Press, 2005
* [35] Ellad B. Tadmor and Ronald E. Miller “Modeling Materials” Cambridge: Cambridge University Press, 2009 DOI: 10.1017/cbo9781139003582
* [36] Paul K. Weiner and Peter A. Kollman “AMBER: Assisted model building with energy refinement. A general program for modeling molecules and their interactions” In _J Comput Chem_ 2.3 Wiley, 1981, pp. 287–303 DOI: 10.1002/jcc.540020311
* [37] Alexander D. MacKerell, Joanna Wiorkiewicz-Kuczera and Martin Karplus “An all-atom empirical energy function for the simulation of nucleic acids” In _J Am Chem Soc_ 117.48 American Chemical Society (ACS), 1995, pp. 11946–11975 DOI: 10.1021/ja00153a017
* [38] David A. Pearlman et al. “AMBER, a package of computer programs for applying molecular mechanics, normal mode analysis, molecular dynamics and free energy calculations to simulate the structural and energetic properties of molecules” In _Comput Phys Comm_ 91.1-3 Elsevier BV, 1995, pp. 1–41 DOI: 10.1016/0010-4655(95)00041-d
* [39] William L. Jorgensen, David S. Maxwell and Julian Tirado-Rives “Development and Testing of the OPLS All-Atom Force Field on Conformational Energetics and Properties of Organic Liquids” In _J Am Chem Soc_ 118.45 American Chemical Society (ACS), 1996, pp. 11225–11236 DOI: 10.1021/ja9621760
* [40] A. D. MacKerell et al. “All-Atom Empirical Potential for Molecular Modeling and Dynamics Studies of Proteins” In _J Phys Chem B_ 102.18 American Chemical Society (ACS), 1998, pp. 3586–3616 DOI: 10.1021/jp973084f
* [41] Alexander D. Mackerell, Michael Feig and Charles L. Brooks “Extending the treatment of backbone energetics in protein force fields: Limitations of gas-phase quantum mechanics in reproducing protein conformational distributions in molecular dynamics simulations” In _J Comput Chem_ 25.11 Wiley, 2004, pp. 1400–1415 DOI: 10.1002/jcc.20065
* [42] David A. Case et al. “The Amber biomolecular simulation programs” In _J Comput Chem_ 26.16 Wiley, 2005, pp. 1668–1688 DOI: 10.1002/jcc.20290
* [43] Romelia Salomon-Ferrer, David A. Case and Ross C. Walker “An overview of the Amber biomolecular simulation package” In _Wiley Interdiscip Rev Comput Mol Sci_ 3.2 Wiley, 2012, pp. 198–210 DOI: 10.1002/wcms.1121
* [44] Evan T. Walters, Mohamad Mohebifar, Erin R. Johnson and Christopher N. Rowley “Evaluating the London Dispersion Coefficients of Protein Force Fields Using the Exchange-Hole Dipole Moment Model” In _J Phys Chem B_ 122.26 American Chemical Society (ACS), 2018, pp. 6690–6701 DOI: 10.1021/acs.jpcb.8b02814
* [45] Murray S. Daw and M. I. Baskes “Embedded-atom method: Derivation and application to impurities, surfaces, and other defects in metals” In _Phys Rev B_ 29.12 American Physical Society (APS), 1984, pp. 6443–6453 DOI: 10.1103/physrevb.29.6443
* [46] F. Ducastelle “Modules élastiques des métaux de transition” In _Journal de Physique_ 31.11-12 EDP Sciences, 1970, pp. 1055–1062 DOI: 10.1051/jphys:019700031011-120105500
* [47] M. W. Finnis and J. E. Sinclair “A simple empirical $N$-body potential for transition metals” In _Phil Mag A_ 50.1 Informa UK Limited, 1984, pp. 45–55 DOI: 10.1080/01418618408244210
* [48] M.J.L. Sangster and M. Dixon “Interionic potentials in alkali halides and their use in simulations of the molten salts” In _Adv Phys_ 25.3 Informa UK Limited, 1976, pp. 247–342 DOI: 10.1080/00018737600101392
* [49] JA Barker and A Pompe “Atomic interactions in argon” In _Aust J Chem_ 21.7 CSIRO Publishing, 1968, pp. 1683 DOI: 10.1071/ch9681683
* [50] M. Thiesen “Untersuchungen über die Zustandsgleichung” In _Ann Phys_ 260.3 Wiley, 1885, pp. 467–492 DOI: 10.1002/andp.18852600308
* [51] A J Masters “Virial expansions” In _J Phys Condens Matter_ 20.28 IOP Publishing, 2008, pp. 283102 DOI: 10.1088/0953-8984/20/28/283102
* [52] Joseph E. Mayer and Elliott Montroll “Molecular Distribution” In _J Chem Phys_ 9.1 AIP Publishing, 1941, pp. 2–16 DOI: 10.1063/1.1750822
* [53] David Chandler and Hans C. Andersen “Optimized Cluster Expansions for Classical Fluids. II. Theory of Molecular Liquids” In _J Chem Phys_ 57.5 AIP Publishing, 1972, pp. 1930–1937 DOI: 10.1063/1.1678513
* [54] J. L. Lebowitz and O. Penrose “Convergence of Virial Expansions” In _J Math Phys_ 5.7 AIP Publishing, 1964, pp. 841–847 DOI: 10.1063/1.1704186
* [55] J. D. van der Waals “Over de Continuiteit van den Gas- en Vloeistoftoestand”, 1873
* [56] F. London “Zur Theorie und Systematik der Molekularkräfte” In _Z Phys_ 63.3-4 Springer ScienceBusiness Media LLC, 1930, pp. 245–279 DOI: 10.1007/bf01421741
* [57] J. N. Israelachvili and D. Tabor “Measurement of van der Waals Dispersion Forces in the Range 1.4 to 130 nm” In _Nat Phys Sci_ 236.68 Springer ScienceBusiness Media LLC, 1972, pp. 106–106 DOI: 10.1038/physci236106a0
* [58] Jacob Nissim Israelachvili and David Tabor “The measurement of van der Waals dispersion forces in the range 1.5 to 130 nm” In _Proc R Soc A: Math Phys Eng Sci_ 331.1584 The Royal Society, 1972, pp. 19–38 DOI: 10.1098/rspa.1972.0162
* [59] Dirk Reith, Mathias Pütz and Florian Müller-Plathe “Deriving effective mesoscale potentials from atomistic simulations” In _J Comput Chem_ 24.13 Wiley, 2003, pp. 1624–1636 DOI: 10.1002/jcc.10307
* [60] Alexander P. Lyubartsev and Aatto Laaksonen “Calculation of effective interaction potentials from radial distribution functions: A reverse Monte Carlo approach” In _Phys Rev E_ 52.4 American Physical Society (APS), 1995, pp. 3730–3737 DOI: 10.1103/physreve.52.3730
* [61] Sergey V. Sukhomlinov and Martin H. Müser “A mixed radial, angular, three-body distribution function as a tool for local structure characterization: Application to single-component structures” In _J Chem Phys_ 152.19 AIP Publishing, 2020, pp. 194502 DOI: 10.1063/5.0007964
* [62] G. C. Abell “Empirical chemical pseudopotential theory of molecular and metallic bonding” In _Phys Rev B_ 31.10 American Physical Society (APS), 1985, pp. 6184–6196 DOI: 10.1103/physrevb.31.6184
* [63] Volker Heine et al. “Many-atom interactions in solids” In _Philos. Trans. R. Soc. A_ 334.1635 The Royal Society, 1991, pp. 393–405 DOI: 10.1098/rsta.1991.0021
* [64] Raju P. Gupta “Lattice relaxation at a metal surface” In _Phys Rev B_ 23.12 American Physical Society (APS), 1981, pp. 6265–6270 DOI: 10.1103/physrevb.23.6265
* [65] Danilo Capecchi, Giuseppe Ruta and Patrizia Trovalusci “From classical to Voigt’s molecular models in elasticity” In _Arch Hist Exact Sci_ 64.5 Springer ScienceBusiness Media LLC, 2010, pp. 525–559 DOI: 10.1007/s00407-010-0065-y
* [66] A. E. H. Love “Mathematical Theory of Elasticity” Cambridge, UK: Cambridge University Press, 1954
* [67] Woldemar Voigt “Lehrbuch der Kristallphysik” Leipzig: B.G. Teubner, 1910
* [68] Gene Simmons and Herbert Wang “Single crystal elastic constants and calculated aggregate properties: a handbook” Cambridge, Mass: M.I.T. Press, 1971
* [69] B W James and H Kheyrandish “The low-temperature variation of the elastic constants of lithium hydride and lithium deuteride” In _J Phys C Solid State Phys_ 15.31 IOP Publishing, 1982, pp. 6321–6337 DOI: 10.1088/0022-3719/15/31/009
* [70] Hongzhi Fu, WenFang Liu and Tao Gao “Atomistic simulation of MgS polymorphs” In _Phys Status Solidi (b)_ 247.1 Wiley, 2010, pp. 48–53 DOI: 10.1002/pssb.200945138
* [71] B. H. Lee “Elastic Constants of ZnTe and ZnSe between 77°–300°K” In _J Appl Phys_ 41.7 AIP Publishing, 1970, pp. 2984–2987 DOI: 10.1063/1.1659349
* [72] O. D. Slagle and H. A. McKinstry “Temperature Dependence of the Elastic Constants of the Alkali Halides. III. CsCl, CsBr, and CsI” In _J Appl Phys_ 38.2 AIP Publishing, 1967, pp. 451–458 DOI: 10.1063/1.1709358
* [73] Michael A. Carpenter et al. “Calibration of excess thermodynamic properties and elastic constant variations associated with the alpha $\leftrightarrow$ beta phase transition in quartz” In _Am Mineral_ 83.1-2 Mineralogical Society of America, 1998, pp. 2–22 DOI: 10.2138/am-1998-1-201
* [74] Jinglian Du, Bin Wen, Roderick Melnik and Yoshiyuki Kawazoe “Phase stability, elastic and electronic properties of Cu–Zr binary system intermetallic compounds: A first-principles study” In _J Alloy Comp_ 588 Elsevier BV, 2014, pp. 96–102 DOI: 10.1016/j.jallcom.2013.11.018
* [75] Max Born and Kun Huang “Dynamical theory of crystal lattices” London: Oxford Clarendon Press, 1954
* [76] Ronald A. Aziz and M.J. Slaman “The argon and krypton interatomic potentials revisited” In _Mol Phys_ 58.4 Informa UK Limited, 1986, pp. 679–697 DOI: 10.1080/00268978600101501
* [77] George H. Booth, Deidre Cleland, Alex J. W. Thom and Ali Alavi “Breaking the carbon dimer: The challenges of multiple bond dissociation with full configuration interaction quantum Monte Carlo methods” In _J Chem Phys_ 135.8 AIP Publishing, 2011, pp. 084104 DOI: 10.1063/1.3624383
* [78] Y. Mishin et al. “Structural stability and lattice defects in copper: Ab initio, tight-binding, and embedded-atom calculations” In _Phys Rev B_ 63.22 American Physical Society (APS), 2001, pp. 224106 DOI: 10.1103/physrevb.63.224106
* [79] Wolf B. Dapp and Martin H. Müser “Towards time-dependent, non-equilibrium charge-transfer force fields” In _Eur Phys J B_ 86.7 Springer ScienceBusiness Media LLC, 2013 DOI: 10.1140/epjb/e2013-40047-x
* [80] W. Heitler and F. London “Wechselwirkung neutraler Atome und homöopolare Bindung nach der Quantenmechanik” In _Z Phys_ 44.6-7 Springer ScienceBusiness Media LLC, 1927, pp. 455–472 DOI: 10.1007/bf01397394
* [81] Y. Sugiura “Über die Eigenschaften des Wasserstoffmoleküls im Grundzustande” In _Z Phys_ 45.7-8 Springer ScienceBusiness Media LLC, 1927, pp. 484–492 DOI: 10.1007/bf01329207
* [82] Gustav Mie “Zur kinetischen Theorie der einatomigen Körper” In _Ann Phys_ 316.8 Wiley, 1903, pp. 657–697 DOI: 10.1002/andp.19033160802
* [83] J. E. Jones “On the determination of molecular fields. I. From the variation of the viscosity of a gas with temperature” In _Proc R Soc Lond A_ 106.738 The Royal Society, 1924, pp. 441–462 DOI: 10.1098/rspa.1924.0081
* [84] J. C. Slater “The Normal State of Helium” In _Phys Rev_ 32.3 American Physical Society (APS), 1928, pp. 349–360 DOI: 10.1103/physrev.32.349
* [85] Max Born and Joseph E. Mayer “Zur Gittertheorie der Ionenkristalle” In _Z Phys_ 75.1-2 Springer ScienceBusiness Media LLC, 1932, pp. 1–18 DOI: 10.1007/bf01340511
* [86] R. A. Buckingham “The repulsive interaction of atoms in S states” In _Trans Faraday Soc_ 54 Royal Society of Chemistry (RSC), 1958, pp. 453 DOI: 10.1039/tf9585400453
* [87] Thomas C. O’Connor, Jan Andzelm and Mark O. Robbins “AIREBO-M: A reactive model for hydrocarbons at extreme pressures” In _J Chem Phys_ 142.2 AIP Publishing, 2015, pp. 024903 DOI: 10.1063/1.4905549
* [88] Mary J. Van Vleet, Alston J. Misquitta, Anthony J. Stone and J. R. Schmidt “Beyond Born-Mayer: Improved Models for Short-Range Repulsion in ab Initio Force Fields” In _J Chem Theor Comput_ 12.8 American Chemical Society (ACS), 2016, pp. 3851–3870 DOI: 10.1021/acs.jctc.6b00209
* [89] Flaviu S. Cipcigan, Vlad P. Sokhan, Jason Crain and Glenn J. Martyna “Electronic coarse graining enhances the predictive power of molecular simulation allowing challenges in water physics to be addressed” In _J Comput Phys_ 326 Elsevier BV, 2016, pp. 222–233 DOI: 10.1016/j.jcp.2016.08.030
* [90] Edmund S. Rittner “Binding Energy and Dipole Moment of Alkali Halide Molecules” In _J Chem Phys_ 19.8 AIP Publishing, 1951, pp. 1030–1035 DOI: 10.1063/1.1748448
* [91] K. T. Tang and J. Peter Toennies “An improved simple model for the van der Waals potential based on universal damping functions for the dispersion coefficients” In _J Chem Phys_ 80.8 AIP Publishing, 1984, pp. 3726–3741 DOI: 10.1063/1.447150
* [92] F. S. Cipcigan, J. Crain, V. P. Sokhan and G. J. Martyna “Electronic coarse graining: Predictive atomistic modeling of condensed matter” In _Rev Mod Phys_ 91.2 American Physical Society (APS), 2019, pp. 025003 DOI: 10.1103/revmodphys.91.025003
* [93] Paul A. Madden and Mark Wilson “‘Covalent’ effects in ‘ionic’ systems” In _Chem Soc Rev_ 25.5 Royal Society of Chemistry (RSC), 1996, pp. 339–350 DOI: 10.1039/cs9962500339
* [94] A. Dalgarno “Atomic polarizabilities and shielding factors” In _Adv Phys_ 11.44 Informa UK Limited, 1962, pp. 281–315 DOI: 10.1080/00018736200101302
* [95] Hideki Yukawa “On the Interaction of Elementary Particles. I” In _Proc Phys Math Soc Jpn. 3rd Series_ 17, 1935, pp. 48–57 DOI: 10.11429/ppmsj1919.17.0_48
* [96] J F Ziegler, J P Biersack and U Littmark “The Stopping and Range of Ions in Matter” New York: Pergamon, 1985
* [97] K. Nordlund, N. Runeberg and D. Sundholm “Repulsive interatomic potentials calculated using Hartree-Fock and density-functional theory methods” In _Nucl Instrum Methods Phys Res B_ 132.1 Elsevier BV, 1997, pp. 45–54 DOI: 10.1016/s0168-583x(97)00447-3
* [98] N. Juslin et al. “Analytical interatomic potential for modeling nonequilibrium processes in the W–C–H system” In _J Appl Phys_ 98.12 AIP Publishing, 2005, pp. 123520 DOI: 10.1063/1.2149492
* [99] W. Lawrence Bragg “XVIII. The arrangement of atoms in crystals” In _Lond Edinb Dublin Philos Mag J Sci_ 40.236 Informa UK Limited, 1920, pp. 169–189 DOI: 10.1080/14786440808636111
* [100] H. A. Lorentz “Ueber die Anwendung des Satzes vom Virial in der kinetischen Theorie der Gase” In _Ann Phys_ 248.1 Wiley, 1881, pp. 127–136 DOI: 10.1002/andp.18812480110
* [101] Daniel Berthelot “Sur le mélange des gaz” In _Compt Rendus_ 126, 1898, pp. 1703–1855
* [102] G.D. Zeiss and William J. Meath “Dispersion energy constants $C_{6}$(A, B), dipole oscillator strength sums and refractivities for Li, N, O, H2, N2, O2, NH3, H2O, NO and N2O” In _Mol Phys_ 33.4 Informa UK Limited, 1977, pp. 1155–1176 DOI: 10.1080/00268977700100991
* [103] K. T. Tang and J. P. Toennies “The van der Waals potentials between all the rare gas atoms from He to Rn” In _J Chem Phys_ 118.11 AIP Publishing, 2003, pp. 4976–4983 DOI: 10.1063/1.1543944
* [104] John C. Slater and John G. Kirkwood “The Van Der Waals Forces in Gases” In _Phys Rev_ 37.6 American Physical Society (APS), 1931, pp. 682–697 DOI: 10.1103/physrev.37.682
* [105] Adolf A. Abrahamson “Born-Mayer-Type Interatomic Potential for Neutral Ground-State Atoms with $Z=2$ to $Z=105$” In _Phys Rev_ 178.1 American Physical Society (APS), 1969, pp. 76–79 DOI: 10.1103/physrev.178.76
* [106] Felix T. Smith “Atomic Distortion and the Combining Rule for Repulsive Potentials” In _Phys Rev A_ 5.4 American Physical Society (APS), 1972, pp. 1708–1713 DOI: 10.1103/PhysRevA.5.1708
* [107] T. L. Gilbert “Soft-Sphere Model for Closed-Shell Atoms and Ions” In _J Chem Phys_ 49.6 AIP Publishing, 1968, pp. 2640–2642 DOI: 10.1063/1.1670463
* [108] Hans-Joachim Böhm and Reinhart Ahlrichs “A study of short-range repulsions” In _J Chem Phys_ 77.4 AIP Publishing, 1982, pp. 2028–2034 DOI: 10.1063/1.444057
* [109] B. W. H. van Beest, G. J. Kramer and R. A. van Santen “Force fields for silicas and aluminophosphates based on ab initio calculations” In _Phys Rev Lett_ 64.16 American Physical Society (APS), 1990, pp. 1955–1958 DOI: 10.1103/physrevlett.64.1955
* [110] M. I. Mendelev, D. J. Sordelet and M. J. Kramer “Using atomistic computer simulations to analyze x-ray diffraction data from metallic glasses” In _J Appl Phys_ 102.4 AIP Publishing, 2007, pp. 043501 DOI: 10.1063/1.2769157
* [111] M.I. Mendelev et al. “Development of suitable interatomic potentials for simulation of liquid and amorphous Cu–Zr alloys” In _Phil Mag_ 89.11 Informa UK Limited, 2009, pp. 967–987 DOI: 10.1080/14786430902832773
* [112] Y.Q. Cheng and E. Ma “Atomic-level structure and structure–property relationship in metallic glasses” In _Progr Mater Sci_ 56.4 Elsevier BV, 2011, pp. 379–473 DOI: 10.1016/j.pmatsci.2010.12.002
* [113] Yunfeng Shi and Michael L. Falk “Atomic-scale simulations of strain localization in three-dimensional model amorphous solids” In _Phys Rev B_ 73.21 American Physical Society (APS), 2006 DOI: 10.1103/physrevb.73.214201
* [114] Yezeng He, Peng Yi and Michael L. Falk “Critical Analysis of an FeP Empirical Potential Employed to Study the Fracture of Metallic Glasses” In _Phys Rev Lett_ 122.3 American Physical Society (APS), 2019 DOI: 10.1103/physrevlett.122.035501
* [115] Göran Wahnström “Molecular-dynamics study of a supercooled two-component Lennard-Jones system” In _Phys Rev A_ 44.6 American Physical Society (APS), 1991, pp. 3752–3764 DOI: 10.1103/physreva.44.3752
* [116] Walter Kob and Hans C. Andersen “Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture I: The van Hove correlation function” In _Phys Rev E_ 51.5 American Physical Society (APS), 1995, pp. 4626–4641 DOI: 10.1103/physreve.51.4626
* [117] Edan Lerner “Mechanical properties of simple computer glasses” In _J Non Cryst Solids_ 522 Elsevier BV, 2019, pp. 119570 DOI: 10.1016/j.jnoncrysol.2019.119570
* [118] Corrado Rainone, Eran Bouchbinder and Edan Lerner “Pinching a glass reveals key properties of its soft spots” In _Proc Natl Acad Sci Unit States Am_ 117.10 Proceedings of the National Academy of Sciences, 2020, pp. 5228–5234 DOI: 10.1073/pnas.1919958117
* [119] Sergey V. Sukhomlinov and Martin H. Müser “Anomalous system-size dependence of properties at the fragile-to-strong transition in a bulk-metallic-glass forming melt” In _Comput Mater Sci_ 156 Elsevier BV, 2019, pp. 129–134 DOI: 10.1016/j.commatsci.2018.09.047
* [120] Brian B. Laird and H. R. Schober “Localized low-frequency vibrational modes in a simple model glass” In _Phys Rev Lett_ 66.5 American Physical Society (APS), 1991, pp. 636–639 DOI: 10.1103/physrevlett.66.636
* [121] H. R. Schober and G. Ruocco “Size effects and quasilocalized vibrations” In _Phil Mag_ 84.13-16 Informa UK Limited, 2004, pp. 1361–1372 DOI: 10.1080/14786430310001644107
* [122] Edan Lerner, Gustavo Düring and Eran Bouchbinder “Statistics and Properties of Low-Frequency Vibrational Modes in Structural Glasses” In _Phys Rev Lett_ 117.3 American Physical Society (APS), 2016 DOI: 10.1103/physrevlett.117.035501
* [123] Hideyuki Mizuno, Hayato Shiba and Atsushi Ikeda “Continuum limit of the vibrational properties of amorphous solids” In _Proc Natl Acad Sci Unit States Am_ 114.46 Proceedings of the National Academy of Sciences, 2017, pp. E9767–E9774 DOI: 10.1073/pnas.1709015114
* [124] Mikhail Dzugutov “Glass formation in a simple monatomic liquid with icosahedral inherent local order” In _Phys Rev A_ 46.6 American Physical Society (APS), 1992, pp. R2984–R2987 DOI: 10.1103/physreva.46.r2984
* [125] Mikhail Dzugutov “Formation of a dodecagonal quasicrystalline phase in a simple monatomic liquid” In _Phys Rev Lett_ 70.19 American Physical Society (APS), 1993, pp. 2924–2927 DOI: 10.1103/physrevlett.70.2924
* [126] Thomas A. Weber and Frank H. Stillinger “Local order and structural transitions in amorphous metal-metalloid alloys” In _Phys Rev B_ 31.4 American Physical Society (APS), 1985, pp. 1954–1963 DOI: 10.1103/physrevb.31.1954
* [127] Andrea Ninarello, Ludovic Berthier and Daniele Coslovich “Models and Algorithms for the Next Generation of Glass Transition Studies” In _Phys Rev X_ 7.2 American Physical Society (APS), 2017, pp. 021039 DOI: 10.1103/physrevx.7.021039
# Low-Energy and CPA-Resistant Adiabatic CMOS/MTJ Logic for IoT Devices
Zachary Kahleifeh and Himanshu Thapliyal VLSI Emerging Design And Nano Things
Security Lab (VEDANTS-Lab)
Department of Electrical and Computer Engineering, University of Kentucky,
Lexington, KY, USA
Email<EMAIL_ADDRESS>
###### Abstract
The tremendous growth in the number of Internet of Things (IoT) devices has
increased focus on the energy efficiency and security of an IoT device. In
this paper, we present a design-level, non-volatile adiabatic
architecture for low-energy and Correlation Power Analysis (CPA) resistant IoT
devices. IoT devices constructed with CMOS integrated circuits suffer from
high dynamic energy and leakage power. To solve this, we look at both
adiabatic logic and STT-MTJs (Spin Transfer Torque Magnetic Tunnel Junctions)
to reduce both dynamic energy and leakage power. Furthermore, CMOS integrated
circuits suffer from side-channel leakage making them insecure against power
analysis attacks. We again look to adiabatic logic to design secure circuits
with uniform power consumption, thus, defending against power analysis
attacks. We have developed a hybrid adiabatic-MTJ architecture using two-phase
adiabatic logic. We show that hybrid adiabatic-MTJ circuits are both low
energy and secure when compared with CMOS circuits. As a case study, we have
constructed one round of PRESENT and have shown energy savings of 64.29% at a
frequency of 25 MHz. Furthermore, we have performed a correlation power
analysis attack on our proposed design and determined that the key was kept
hidden.
###### Index Terms:
Adiabatic Logic, Magnetic Tunnel Junction, Correlation Power Analysis, Side-
channel Attacks, Internet of Things (IoT).
## I Introduction
The age of portable devices has resulted in a sharp upward trend in Internet
of Things (IoT) devices. IoT devices are typically battery-operated, and thus
the need for energy-efficient processors is high. Furthermore, many IoT
devices store and transmit sensitive data and thus the security of IoT devices
should not be neglected [1]. In this paper, we look to create a secure device
against power analysis attacks without suffering from energy efficiency
degradation. To remain secure against power analysis attacks and consume lower
energy we look to both non-volatile memory in the form of Spin Transfer Torque
Magnetic Tunnel Junction (STT-MTJ) [2] and a low energy design technique known
as adiabatic logic.
STT-MTJ has numerous advantages over common memory technologies such as
extremely low standby power, non-volatility, easy compatibility with CMOS, and
high integration density [3, 4, 5]. MTJs can be combined with standard CMOS
devices to create low-energy circuits [6].
Figure 1: Hybrid adiabatic-MTJ circuits can introduce a golden age to IoT
devices.
While MTJ based circuits reduce standby power, adiabatic-based circuits can
reduce overall energy consumption. Adiabatic logic is an emerging design
technique to design low energy and secure circuits. Adiabatic logic recycles
energy from the load capacitor back into the clock generator to reduce energy
consumption [7].
When reducing the energy consumption of a device, security should not be
neglected. The threat vector of IoT devices continues to grow thus defenses
against these attacks should be developed. One such security threat that IoT
devices can experience is a class of hardware attacks known as side-channel
attacks. Side-channel attacks look to retrieve hidden information through a
device’s side-channel such as power consumption[8], circuit timing [9], etc.
Side-channel attacks are a dangerous threat to device functionality and vital
device information such as encryption keys. One particular side-channel attack
we will focus on is the Correlation Power Analysis Attack (CPA) [10]. This
attack looks to correlate power with bits to retrieve hidden information.
Figure 2: Structure of Magnetic Tunnel Junction with Spin Transfer Torque
(STT) switching.
In this paper, we look to combine the emerging technology of STT-MTJs with the
emerging design technique of adiabatic logic to design ultra-low energy
circuits while also remaining secure against power analysis attacks. To
demonstrate energy savings, we have constructed one round of the PRESENT
encryption algorithm using our proposed hybrid adiabatic-MTJ architecture
[11]. Our simulations show that when compared with CMOS our designs save
64.29% at 25 MHz. To demonstrate secure operations, we also performed a CPA
attack on the PRESENT Substitution Box (S-Box). When performing the attack on
the CMOS implementation we were able to retrieve the secret encryption key.
However, when performing the attack on our hybrid adiabatic-MTJ design we were
not able to steal the key thus demonstrating its resilience against CPA
attacks.
## II Background
### II-A Magnetic Tunnel Junction
Magnetic Tunnel Junction (MTJ) consists of two ferromagnetic (FM) layers and
an oxide layer that serves as a barrier between the two ferromagnetic layers
[12]. The magnetization of one of the FM layers is fixed in most circuit
applications of MTJs, while the other FM layer is free to take either a
parallel or antiparallel magnetization [13]. This can be seen in Figure 2 as
the bottom layer of the MTJ is fixed and the top layer is free to take a
direction. If the MTJ shows a parallel magnetization ($R_{P}$) then it will
have lower resistance than when it has an antiparallel magnetization
($R_{AP}$) [14]. The MTJ structure and its two configurations are shown in Figure
2. The difference in resistance between the two states of the MTJ devices is
given by the tunnel magnetoresistance ratio $TMR=(R_{AP}-R_{P})/R_{P}$. MTJ
devices with higher TMR ratios have been shown to have higher reliability
[15].
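To make the resistance contrast concrete, the short Python sketch below evaluates the TMR relation $TMR=(R_{AP}-R_{P})/R_{P}$ in both directions; the resistance value and TMR ratio used in it are illustrative placeholders, not the device parameters adopted later in this paper.

```python
# Minimal sketch (illustrative values only, not the device parameters used in
# this work): relating the parallel/antiparallel MTJ resistances to the tunnel
# magnetoresistance ratio TMR = (R_AP - R_P) / R_P.

def tmr_ratio(r_p: float, r_ap: float) -> float:
    """TMR ratio from the two resistance states."""
    return (r_ap - r_p) / r_p

def r_antiparallel(r_p: float, tmr: float) -> float:
    """Antiparallel resistance implied by a parallel resistance and a TMR ratio."""
    return r_p * (1.0 + tmr)

if __name__ == "__main__":
    r_p = 4.0e3                       # hypothetical parallel resistance [ohm]
    r_ap = r_antiparallel(r_p, 1.5)   # hypothetical TMR of 150%
    print(f"R_AP = {r_ap:.0f} ohm, TMR = {tmr_ratio(r_p, r_ap):.0%}")
```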
### II-B CMOS-MTJ Hybrid Circuits
Figure 3 shows the general structure of an existing version of a Logic-In-
Memory (LIM) based CMOS-MTJ circuit. The LIM architecture consists of a Pre-
Charged Sense Amplifier (PCSA) circuit consisting of MP1, MP2, MN1, and MN2. A
dual-rail NMOS-only logic tree (T1-T4) evaluates the inputs and the non-
volatile MTJs store data. The write circuit is used to switch the MTJs when
the respective input is switched.
Figure 3: General structure of Hybrid CMOS-MTJ circuits.
The operation of the PCSA can be explained through the existing PCSA-based
CMOS/MTJ XOR gate (Figure 4) [3][16]. The PCSA, which uses a CLK signal,
operates in two phases. The outputs are pre-charged to “1” when CLK is set to
“0”, and the output voltages begin to discharge to ground when CLK is set to
“1”. The discharge speed will be different for each branch due to the
difference in resistance of the different MTJ configurations (parallel and
antiparallel). For example, if MTJ1 is configured in parallel mode and MTJ2 is
configured in antiparallel mode, then $R_{MTJ2}>R_{MTJ1}$. Due to the
difference in resistances between $R_{MTJ1}$ and $R_{MTJ2}$, the discharge
current through MTJ1 will be greater than MTJ2. When XOR reaches the threshold
voltage of MP1, XNOR will be charged to “1” and XOR will be discharged to “0”.
Figure 4: Hybrid CMOS-MTJ XOR circuit [3][16].
### II-C Adiabatic Logic
Adiabatic logic is a circuit design technique for designing ultra-low-energy
circuits [17]. Adiabatic logic reduces the energy of a circuit by recovering
the energy stored in the load capacitor at the end of each clock cycle. The
recovered energy is stored either through magnetic energy in clock inductors
or through an electric charge in the clock capacitance. The recovered energy
is then reused in the next cycle to reduce energy consumption. The energy
dissipated in an adiabatic circuit is given by:
$E_{diss}=\frac{RC}{T}CV_{dd}^{2}$ (1)
where $T$ is the charging period of the capacitor, $C$ is the output load
capacitance, and $V_{dd}$ is the full swing of the power clock. If the charging
time satisfies $T>2RC$, then the energy dissipated by an adiabatic circuit is
less than that of a conventional CMOS circuit. Figure 5 illustrates the principle of energy
recovery within an adiabatic system.
Figure 5: Adiabatic charging and recovery principle.
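As a rough numerical illustration of Equation (1), the Python sketch below compares the adiabatic dissipation $(RC/T)\,CV_{dd}^{2}$ with the roughly $\frac{1}{2}CV_{dd}^{2}$ dissipated per charging event in a conventional circuit; the values of $R$, $C$, $V_{dd}$ and the ramp times are assumptions chosen only for illustration, not parameters of the proposed design.

```python
# Minimal sketch, not a simulation of the proposed circuits: compare the
# adiabatic dissipation E = (RC/T) * C * Vdd^2 from Eq. (1) with the roughly
# (1/2) * C * Vdd^2 dissipated per charging event in conventional CMOS.
# R, C, Vdd and the ramp times T below are illustrative assumptions.

def adiabatic_energy(r: float, c: float, t: float, vdd: float) -> float:
    return (r * c / t) * c * vdd ** 2

def conventional_energy(c: float, vdd: float) -> float:
    return 0.5 * c * vdd ** 2

if __name__ == "__main__":
    r, c, vdd = 1e3, 10e-15, 1.0          # 1 kOhm, 10 fF load, 1 V swing
    for t in (2e-12, 2e-11, 2e-10):       # ramps below, at, and above T = 2RC
        print(f"T = {t:.0e} s: adiabatic {adiabatic_energy(r, c, t, vdd):.2e} J, "
              f"conventional {conventional_energy(c, vdd):.2e} J")
```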
## III Proposed Secure Hybrid Adiabatic-MTJ Circuit
In this section, we will review the structure of our proposed hybrid
adiabatic-MTJ circuit. The proposed XOR/XNOR gate circuit can be seen in
Figure 6. We can see that the structure consists of a 2 PMOS and 2 NMOS (2P2N)
Pre-Charged Sense Amplifier (PCSA). There is also a dual-rail evaluation
network that consists of only NMOS transistors connected to two MTJs with
opposite configurations. Finally, two NMOS transistors are used to discharge
the outputs before the next clock cycle begins. Our proposed hybrid adiabatic-
MTJ uses a two-phase clocking scheme consisting of two sinusoidal clocks 90∘
out of phase as well as two discharge signals in phase with the respective
clocks. The clocking waveform for two-phase adiabatic logic can be seen in
Figure 7.
Figure 6: Proposed low-energy and secure adiabatic-MTJ XOR/XNOR gate.
Figure 7: CPA-resistant two-phase adiabatic logic clocking scheme.
### III-A Proposed Hybrid Adiabatic-MTJ PRESENT Implementation
PRESENT [11] is an ultra-lightweight block cipher. PRESENT has low area when
compared with other block ciphers, which makes it a strong choice for
implementation in area-constrained IoT devices that look to be resilient
against CPA attacks. In this paper, we intend to use the 80-bit version of
PRESENT.
One of the components of PRESENT is the substitution box (S-box) which
performs a non-linear substitution. When constructed with CMOS, the S-box
consumes high energy and is prone to Correlation Power Analysis Attacks (CPA)
thus we look to construct the S-box using our hybrid adiabatic-MTJ circuit.
When constructing the S-box circuit using our proposed design we intend to
limit the switching of MTJs to reduce energy consumption. To do this, we
construct the S-box using a Look-Up-Table (LUT) based method in which the MTJs
are written only once so that the output of the S-box is stored within the
MTJs. Figure 8 illustrates our proposed S-box.
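For reference, the functional behaviour that the LUT stores is the standard PRESENT 4-bit S-box from the cipher specification [11]; the short Python sketch below is only a software model of that input-to-output mapping, not a description of the proposed MTJ hardware.

```python
# Software model of the PRESENT 4-bit S-box (standard values from the cipher
# specification [11]); the MTJ-based LUT proposed here stores this mapping,
# but the hardware itself is not modelled below.

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox(nibble: int) -> int:
    """Apply the PRESENT S-box to a 4-bit value."""
    return PRESENT_SBOX[nibble & 0xF]

if __name__ == "__main__":
    print(" ".join(f"S({x:X})={sbox(x):X}" for x in range(16)))
```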
Another component of PRESENT is the XOR gate. As mentioned previously, MTJ
circuits consume high power when there is frequent switching thus the XOR
circuit cannot be designed with the proposed adiabatic-MTJ circuit. Instead,
we have designed our XOR gate using 2-EE-SPFAL [18]. 2-EE-SPFAL has been shown
to be CPA-resistant and low energy which allows the implementation of PRESENT
to also be secure and low-energy. The complete implementation of PRESENT can
be seen in Figure 10. The XOR gate can be seen in Figure 9.
Figure 8: Proposed hybrid adiabatic-MTJ S-box LUT.
Figure 9: 2-EE-SPFAL XOR gate used to implement PRESENT.
Figure 10: Complete structure of 1 round of PRESENT implemented with 2-EE-SPFAL [18] and hybrid Adiabatic-MTJ.
## IV Simulation Results
The simulation results of the proposed hybrid adiabatic-MTJ based circuits are
presented in this section. Simulations are performed using Cadence Spectre
simulator with 45nm standard CMOS technology with perpendicular anisotropy
CoFeB/MgO MTJ model [19]. Because we do not switch our MTJs, we model the
device using a basic resistor. The resistance values are calculated based on
our MTJ parameters which are listed in Table I.
TABLE I: MTJ device parameters used in the simulations. Parameter | Description | Value
---|---|---
$t_{sl}$ | Thickness of free layer | 1.3nm
a | Length of surface long axis | 40nm
b | Width of surface short axis | 40nm
$t_{ox}$ | Thickness of the Oxide barrier | 0.85nm
TMR | Tunnel Magneto Resistance ratio | 150%
RA | Resistance Area Product | $5\,\Omega\cdot\mu m^{2}$
Area | MTJ layout surface | 40nm x 40nm x $\pi$/4
$R_{p}$ | Parallel resistance | 6.21 k$\Omega$
$R_{ap}$ | Antiparallel resistance | 18.64 k$\Omega$
### IV-A Normalized Energy Deviation and Normalized Standard Deviation
The two criteria we will use to evaluate the security of our proposed design
are Normalized Energy Deviation and Normalized Standard Deviation. The
criteria Normalized Energy Deviation (NED) is defined as ($E_{max}$ \-
$E_{min}$)/$E_{max}$. NED is used to determine the percent difference between
the minimum and maximum energy consumption. A second parameter, Normalized
Standard Deviation (NSD), is defined as $\frac{\sigma_{e}}{\overline{E}}$
where $\sigma_{e}$ is the standard deviation of the energy dissipated by the
circuit per input transition and $\overline{E}$ is the average energy
dissipation. Both NED and NSD are important criteria when determining circuit
resilience to CPA attacks. The lower the NED and NSD value the more uniform
the power consumption and therefore the more secure a circuit is.
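A minimal sketch of how these two metrics can be computed from a set of per-transition energy measurements is given below; the helper functions and the sample values are assumptions for illustration, not the measurement flow used in our simulations.

```python
# Minimal sketch (not the simulation flow of this paper): computing
# NED = (E_max - E_min) / E_max and NSD = sigma_E / mean(E) from a list of
# per-input-transition energy measurements.

import statistics

def ned(energies):
    return (max(energies) - min(energies)) / max(energies)

def nsd(energies):
    return statistics.pstdev(energies) / statistics.mean(energies)

if __name__ == "__main__":
    # Hypothetical per-transition energies in femtojoules (illustrative only).
    energies_fj = [33.7, 33.8, 33.9, 34.0, 33.9, 33.8]
    print(f"NED = {ned(energies_fj):.2%}, NSD = {nsd(energies_fj):.2%}")
```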
In this paper, we have calculated the NED and NSD values for our proposed
S-box. Table II shows the NED and NSD values for our proposed design as well
as a standard CMOS-based S-box as a base value to compare. From Table II we
can see that our proposed design has lower average energy consumption than the
CMOS-based S-box. Furthermore, our proposed S-box has lower NED and NSD values
pointing towards its ability to defend against power analysis attacks.
TABLE II: NED and NSD values for hybrid adiabatic-MTJ S-box. Parameter | Proposed S-box | CMOS
---|---|---
$E_{min}(fJ)$ | 33.7 | 7.1
$E_{max}(fJ)$ | 34.0 | 102.0
$E_{avg}(fJ)$ | 33.9 | 54.8
NED(%) | 0.80 | 93.0
NSD(%) | 0.18 | 42.0
### IV-B Hybrid Adiabatic-MTJ Case Study: 1-Round of PRESENT
A PRESENT S-box implemented with a hybrid adiabatic-MTJ circuit consumes
uniform power and is therefore secure against Correlation Power Analysis (CPA)
Attacks. Uniform power consumption from 1 round of PRESENT can be seen in
Figure 11. The uniform power consumption is an indicator that the circuit is
secure against a CPA attack as we will see when one is performed. MTJs are
non-volatile memories that store data; therefore, as a fair comparison, we have
added 64 flip-flops to the CMOS implementation to synchronize the inputs and
store the output. Figure 12 and Table III show the
energy per cycle of the proposed hybrid adiabatic-MTJ circuit and CMOS
implementations of 1 round of PRESENT. From Figure 12 and Table III we can see
that our proposed design consumes 0.50 pJ/cycle at 5 MHz while the CMOS
implementation consumes 0.80 pJ/cycle, resulting in a 37.6% reduction in
energy. At 50 MHz, our proposed design consumes 0.25 pJ/cycle while the CMOS
implementation consumes 0.78 pJ/cycle resulting in a substantial energy
reduction of 67.2%.
Figure 11: Proposed hybrid adiabatic-MTJ S-box LUT uniform power consumption.
Figure 12: Energy per cycle of hybrid adiabatic-MTJ and CMOS implementations of PRESENT.
TABLE III: Energy per cycle of PRESENT-80 implemented with CMOS and hybrid Adiabatic-MTJ. Energy Per Cycle (pJ/Cycle) | 5 MHz | 10 MHz | 12.5 MHz | 25 MHz | 50 MHz
---|---|---|---|---|---
CMOS | 0.80 | 0.79 | 0.79 | 0.78 | 0.78
Adiabatic-MTJ | 0.50 | 0.37 | 0.30 | 0.28 | 0.25
Energy Reduction (%) | 37.6 | 53.5 | 61.8 | 64.2 | 67.2
### IV-C CPA Attack on PRESENT-80
Previously we have shown our proposed hybrid adiabatic-MTJ based
implementation of PRESENT to consume less energy when compared to CMOS. While
reducing the energy consumption of a circuit we must also ensure the
resilience of a circuit against side-channel attacks. The S-box of PRESENT
will be the attack point in the Correlation Power Analysis (CPA) attack. The
CPA attack is performed by following the steps described in [10]. The
simulation was performed at 12.5 MHz with a 10 fF load. Practical CPA attacks
usually require a large number of traces to steal encryption keys. However, we
are performing a simulation without electrical noise and therefore we require
far fewer traces to steal the encryption key. In our attack, we have chosen
80 samples per clock period, thus we sample every 1 ns. Using 5120 input
traces, we were able to steal the encryption key in the CMOS-based design of
PRESENT-80. Figure 13(a) shows a successful CPA attack on a CMOS
implementation of PRESENT-80.
(a) Successful CPA attack on CMOS based implementation of PRESENT S-box.
(b) Unsuccessful CPA Attack on hybrid adiabatic-MTJ based implementation of
PRESENT S-box.
Figure 13: Correlation power analysis performed on both CMOS and hybrid
adiabatic-MTJ implementation of PRESENT-80.
While the CMOS key was revealed in 5120 traces, the hybrid adiabatic-MTJ
implementation of the PRESENT S-box did not reveal the key even after more than
12,000 traces. Figure 13(b) shows an unsuccessful CPA attack against the
hybrid adiabatic-MTJ implemented PRESENT S-box. This case study demonstrates
our circuit's resistance against CPA attacks and shows it is a promising
candidate to design secure and low-energy IoT devices.
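For readers unfamiliar with the attack, the following Python sketch shows a generic CPA flow: a Hamming-weight leakage hypothesis on the S-box output, correlated sample-by-sample with the traces via Pearson correlation. It is a simplified illustration under those assumptions, not the exact procedure of [10] used in our experiments; the array names `traces` and `plaintexts` are placeholders.

```python
# Generic CPA sketch (Hamming-weight model + Pearson correlation), included
# only to illustrate the attack idea; it is not the exact procedure of [10].
# `traces` is an (N_traces x N_samples) array of power samples and
# `plaintexts` holds the corresponding 4-bit S-box inputs (both placeholders).

import numpy as np

PRESENT_SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

def pearson_per_sample(model: np.ndarray, traces: np.ndarray) -> np.ndarray:
    """Correlation of the leakage model against every time sample (column)."""
    m = model - model.mean()
    t = traces - traces.mean(axis=0)
    denom = np.sqrt((m @ m) * (t * t).sum(axis=0)) + 1e-30  # avoid divide-by-zero
    return (m @ t) / denom

def cpa_best_key(traces: np.ndarray, plaintexts: np.ndarray) -> int:
    """Return the 4-bit key guess whose leakage hypothesis correlates best."""
    best_key, best_corr = 0, -1.0
    for key in range(16):
        model = np.array([hamming_weight(PRESENT_SBOX[p ^ key]) for p in plaintexts],
                         dtype=float)
        corr = np.abs(pearson_per_sample(model, traces)).max()
        if corr > best_corr:
            best_key, best_corr = key, corr
    return best_key
```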
## V Conclusion and Future Work
A novel hybrid adiabatic-MTJ circuit was presented in this paper. The novel
circuit provides substantial energy savings and is also resistant to
Correlation Power Analysis Attacks. As a case study, we constructed one round
of PRESENT and demonstrated that it consumed lower energy when compared to its
CMOS counterpart. Furthermore, we have performed a Correlation Power Analysis
Attack on both implementations of the S-box and determined that we could
retrieve the key from the CMOS implementation but not from the hybrid
adiabatic-MTJ implementation. As future work, a reliability analysis will need
to be conducted to determine the usefulness of the circuit under a variety of
variations.
## Acknowledgment
This work is partially supported by National Science Foundation CAREER Award
No. 1845448.
## References
* [1] M. Alioto and M. Shahghasemi, “The internet of things on its edge: Trends toward its tipping point,” _IEEE Consumer Electronics Magazine_ , vol. 7, no. 1, pp. 77–87, 2017.
* [2] Y. Huai _et al._ , “Spin-transfer torque mram (stt-mram): Challenges and prospects,” _AAPPS bulletin_ , vol. 18, no. 6, pp. 33–40, 2008.
* [3] E. Deng, Y. Zhang, J.-O. Klein, D. Ravelsona, C. Chappert, and W. Zhao, “Low power magnetic full-adder based on spin transfer torque mram,” _IEEE transactions on magnetics_ , vol. 49, no. 9, pp. 4982–4987, 2013.
* [4] W. Kang, W. Lv, Y. Zhang, and W. Zhao, “Low store power high-speed high-density nonvolatile sram design with spin hall effect-driven magnetic tunnel junctions,” _IEEE Transactions on Nanotechnology_ , vol. 16, no. 1, pp. 148–154, 2016.
* [5] W. Kang, Y. Zhang, Z. Wang, J.-O. Klein, C. Chappert, D. Ravelosona, G. Wang, Y. Zhang, and W. Zhao, “Spintronics: Emerging ultra-low-power circuits and systems beyond mos technology,” _ACM Journal on Emerging Technologies in Computing Systems (JETC)_ , vol. 12, no. 2, pp. 1–42, 2015.
* [6] W. Zhao, M. Moreau, E. Deng, Y. Zhang, J.-M. Portal, J.-O. Klein, M. Bocquet, H. Aziza, D. Deleruyelle, C. Muller _et al._ , “Synchronous non-volatile logic gate design based on resistive switching memories,” _IEEE Transactions on Circuits and Systems I: Regular Papers_ , vol. 61, no. 2, pp. 443–454, 2013.
* [7] P. Teichmann, _Adiabatic logic: future trend and system level perspective_. Springer Science & Business Media, 2011, vol. 34.
* [8] P. Kocher, J. Jaffe, and B. Jun, “Differential power analysis,” in _Annual international cryptology conference_. Springer, 1999, pp. 388–397.
* [9] J.-F. Dhem, F. Koeune, P.-A. Leroux, P. Mestré, J.-J. Quisquater, and J.-L. Willems, “A practical implementation of the timing attack,” in _International Conference on Smart Card Research and Advanced Applications_. Springer, 1998, pp. 167–182.
* [10] J. Wu, Y. Shi, and M. Choi, “Measurement and evaluation of power analysis attacks on asynchronous s-box,” _IEEE Transactions on Instrumentation and Measurement_ , vol. 61, no. 10, pp. 2765–2775, 2012.
* [11] A. Bogdanov, L. R. Knudsen, G. Leander, C. Paar, A. Poschmann, M. J. Robshaw, Y. Seurin, and C. Vikkelsoe, “Present: An ultra-lightweight block cipher,” in _International workshop on cryptographic hardware and embedded systems_. Springer, 2007, pp. 450–466.
* [12] J. S. Moodera, L. R. Kinder, T. M. Wong, and R. Meservey, “Large magnetoresistance at room temperature in ferromagnetic thin film tunnel junctions,” _Physical review letters_ , vol. 74, no. 16, p. 3273, 1995.
  * [13] R. Zand, A. Roohi, S. Salehi, and R. F. DeMara, “Scalable adaptive spintronic reconfigurable logic using area-matched mtj design,” _IEEE Transactions on Circuits and Systems II: Express Briefs_ , vol. 63, no. 7, pp. 678–682, 2016.
* [14] B. Behin-Aein, J.-P. Wang, and R. Wiesendanger, “Computing with spins and magnets,” _arXiv preprint arXiv:1411.6960_ , 2014.
* [15] A. D. Kent, “Perpendicular all the way,” _Nature materials_ , vol. 9, no. 9, pp. 699–700, 2010.
* [16] Y. Gang, W. Zhao, J.-O. Klein, C. Chappert, and P. Mazoyer, “A high-reliability, low-power magnetic full adder,” _IEEE Transactions on Magnetics_ , vol. 47, no. 11, pp. 4611–4616, 2011.
* [17] W. C. Athas, L. J. Svensson, J. G. Koller, N. Tzartzanis, and E. Y.-C. Chou, “Low-power digital systems based on adiabatic-switching principles,” _IEEE Transactions on Very Large Scale Integration (VLSI) Systems_ , vol. 2, no. 4, pp. 398–407, 1994.
* [18] Z. Kahleifeh and H. Thapliyal, “2-phase energy-efficient secure positive feedback adiabatic logic for cpa-resistant iot devices,” in _2020 IEEE 6th World Forum on Internet of Things (WF-IoT)_. IEEE, 2020, pp. 1–5.
* [19] Y. Wang, H. Cai, L. A. de Barros Naviner, Y. Zhang, X. Zhao, E. Deng, J.-O. Klein, and W. Zhao, “Compact model of dielectric breakdown in spin-transfer torque magnetic tunnel junction,” _IEEE Transactions on Electron Devices_ , vol. 63, no. 4, pp. 1762–1767, 2016.
# Diagonal double Kodaira fibrations with minimal signature
Francesco Polizzi Dipartimento di Matematica e Informatica
Università della Calabria
Ponte Pietro Bucci 30B, I-87036 Arcavacata di Rende, Cosenza, Italy
<EMAIL_ADDRESS>and Pietro Sabatino Via Val Sillaro 5
00141 Roma, Italy<EMAIL_ADDRESS>
###### Abstract.
We study some special systems of generators on finite groups, introduced in
previous work by the first author and called _diagonal double Kodaira
structures_ , in order to investigate non-abelian, finite quotients of the
pure braid group on two strands $\mathsf{P}_{2}(\Sigma_{b})$, where
$\Sigma_{b}$ is a closed Riemann surface of genus $b$. In particular, we prove
that, if a finite group $G$ admits a diagonal double Kodaira structure, then
$|G|\geq 32$, and equality holds if and only if $G$ is extra-special. In the
last section, as a geometrical application of our algebraic results, we
construct two $3$-dimensional families of double Kodaira fibrations having
signature $16$.
###### Key words and phrases:
Surface braid groups, extra-special $p$-groups, Kodaira fibrations
_2010 Mathematics Subject Classification._ 14J29, 14J25, 20D15
###### Contents
1. 0 Introduction
2. 1 Group-theoretical preliminaries: CCT-groups and extra-special groups
3. 2 Diagonal double Kodaira structures
4. 3 Structures on groups of order at most $32$
1. 3.1 Prestructures
2. 3.2 The case $|G|<32$
3. 3.3 The case $|G|=32$ and $G$ non-extra-special
4. 3.4 The case $|G|=32$ and $G$ extra-special
5. 4 Geometrical application: diagonal double Kodaira fibrations
1. 4.1 The computation of $H_{1}(S,\,\mathbb{Z})$
## 0\. Introduction
A _Kodaira fibration_ is a smooth, connected holomorphic fibration
$f_{1}\colon S\longrightarrow B_{1}$, where $S$ is a compact complex surface
and $B_{1}$ is a compact complex curve, which is not isotrivial (this means
that not all fibres are biholomorphic to each other). The genus $b_{1}:=g(B_{1})$
is called the _base genus_ of the fibration, and the genus $g:=g(F)$, where
$F$ is any fibre, is called the _fibre genus_. A surface $S$ that is the total
space of a Kodaira fibration is called a _Kodaira fibred surface_. For every
Kodaira fibration, we have $b_{1}\geq 2$ and $g\geq 3$, see [Kas68, Theorem
1.1]. Since the fibration is smooth, the condition on the base genus implies
that $S$ contains no rational or elliptic curves; hence $S$ is minimal and, by
the sub-additivity of the Kodaira dimension, it is of general type, hence
algebraic.
An important topological invariant of a Kodaira fibred surface $S$ is its
_signature_ $\sigma(S)$, namely the signature of the intersection form on the
middle cohomology group $H^{2}(S,\,\mathbb{R})$. Actually, the first examples
of Kodaira fibrations (see [Kod67]) were constructed in order to show that
$\sigma$ is not multiplicative for fibre bundles. In fact, $\sigma(S)>0$ for
every Kodaira fibration (see the introduction to [LLR20]), whereas
$\sigma(B_{1})=\sigma(F)=0$, hence $\sigma(S)\neq\sigma(B_{1})\sigma(F)$; by
[CHS57], this in turn means that the monodromy action of $\pi_{1}(B_{1})$ on the
rational cohomology ring $H^{*}(S,\,\mathbb{Q})$ is non-trivial.
Every Kodaira fibred surface $S$ has the structure of a real surface bundle
over a smooth real surface, and so $\sigma(S)$ is divisible by $4$, see
[Mey73]. If, in addition, $S$ has a spin structure, i.e. its canonical class
is $2$-divisible in $\operatorname{Pic}(S)$, then $\sigma(S)$ is a positive
multiple of $16$ by Rokhlin’s theorem, and examples with $\sigma(S)=16$ are
constructed in [LLR20]. It is not known whether there exists a Kodaira fibred
surface with $\sigma(S)\leq 12$.
Kodaira fibred surfaces are a source of fascinating and deep questions at the
cross-road between the algebro-geometric properties of a compact, complex
surface and the topological properties of the underlying closed, oriented
$4$-manifold. In fact, they can be studied by using, besides the usual
algebro-geometric methods, techniques borrowed from geometric topology such as
the Meyer signature formula, the Birman-Hilden relations in the mapping class
group and the subtraction of Lefschetz fibrations, see [En98, EKKOS02, St02,
L17]. We refer the reader to the survey paper [Cat17] and the references
contained therein for further details.
The original example by Kodaira, and its variants described in [At69, Hir69],
are obtained by taking ramified covers of products of curves, so they come
with a pair of Kodaira fibrations. This leads to the definition of “double”
Kodaira fibration, see [Zaal95, LeBrun00, BDS01, BD02, CatRol09, Rol10,
LLR20]:
###### Definition 0.1.
A _double Kodaira surface_ is a compact, complex surface $S$, endowed with a
_double Kodaira fibration_ , namely a surjective, holomorphic map $f\colon
S\longrightarrow B_{1}\times B_{2}$ yielding, by composition with the natural
projections, two Kodaira fibrations $f_{i}\colon S\longrightarrow B_{i}$,
$i=1,\,2$.
In the sequel, we will describe our approach to the construction of double
Kodaira fibrations based on the techniques introduced in [CaPol19, Pol20], and
present our results. The main step is to “detopologize” the problem, by
transforming it into a purely algebraic one. This will be done in the
particular case of _diagonal_ double Kodaira fibrations, namely, Stein
factorizations of finite Galois covers
(1) $\mathbf{f}\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b},$
branched with order $n\geq 2$ over the diagonal
$\Delta\subset\Sigma_{b}\times\Sigma_{b}$, where $\Sigma_{b}$ is a closed
Riemann surface of genus $b$. By Grauert-Remmert’s extension theorem and
Serre’s GAGA, the existence of a $G$-cover $\mathbf{f}$ as in (1), up to cover
isomorphisms, is equivalent to the existence of a group epimorphism
(2) $\varphi\colon\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta)\longrightarrow
G,$
up to automorphisms of $G$. Furthermore, the condition that $\mathbf{f}$ is
branched of order $n$ over $\Delta$ is rephrased by asking that
$\varphi(\gamma_{\Delta})$ has order $n$ in $G$, where $\gamma_{\Delta}$ is
the homotopy class in $\Sigma_{b}\times\Sigma_{b}-\Delta$ of a loop in
$\Sigma_{b}\times\Sigma_{b}$ that “winds once” around $\Delta$. The
requirement $n\geq 2$ means that $\varphi$ does not factor through
$\pi_{1}(\Sigma_{b}\times\Sigma_{b})$; it also implies that $G$ is non-
abelian, because $\gamma_{\Delta}$ is a non-trivial commutator in
$\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta)$.
Recall now that the group $\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta)$ is
isomorphic to $\mathsf{P}_{2}(\Sigma_{b})$, the pure braid group of genus $b$
on two strands; such a group admits a geometric presentation with $4b+1$
generators
(3) $\rho_{11},\,\tau_{11},\ldots,\rho_{1b},\,\tau_{1b},\,\rho_{21},\,\tau_{21},\ldots,\rho_{2b},\,\tau_{2b},\,A_{12},$
where $A_{12}$ corresponds to $\gamma_{\Delta}$, subject to the set of
relations written in Section 2, see [GG04, Theorem 7]. Taking the images of
these generators via the group epimorphism (2), we get an
ordered set
(4)
$\mathfrak{S}=(\mathsf{r}_{11},\,\mathsf{t}_{11},\ldots,\mathsf{r}_{1b},\,\mathsf{t}_{1b},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\ldots,\mathsf{r}_{2b},\,\mathsf{t}_{2b},\,\mathsf{z})$
of $4b+1$ generators of $G$, such that $o(\mathsf{z})=n$. This will be called
a _diagonal double Kodaira structure_ of type $(b,\,n)$ on $G$, see Definition
2.1. In the light of the previous considerations, we see that the geometric
problem of constructing a $G$-cover $\mathbf{f}$ as in (1) is now translated
into the combinatorial-algebraic problem of finding a diagonal double Kodaira
structure of type $(b,\,n)$ in $G$.
It turns out that the $G$-cover $\mathbf{f}$ is a diagonal double Kodaira
fibration (namely, the two surjective maps $f_{i}\colon
S\longrightarrow\Sigma_{b}$, obtained as composition with the natural
projections, have connected fibres) if and only if $\mathfrak{S}$ is _strong_
, an additional condition introduced in Definition 2.8; furthermore, the
algebraic signature $\sigma(\mathfrak{S})$, see Definition 2.7, equals the
geometric signature $\sigma(S)$.
Summing up, classifying diagonal double Kodaira fibrations is equivalent to
describing finite groups which admit a diagonal double Kodaira structure. Our
first main result in this direction is the following:
###### Theorem A (see Proposition 3.9, 3.11 and Theorem 3.15).
Let $G$ be a finite group admitting a diagonal double Kodaira structure. Then
$|G|\geq 32$, with equality if and only if $G$ is extra-special $($see
Section 1 for the definition$)$.
Moreover, the following holds.
* $\boldsymbol{(1)}$
Both extra-special groups $G$ of order $32$ admit $2211840=1152\cdot 1920$
diagonal double Kodaira structures of type $(b,\,n)=(2,\,2)$. Every such
structure $\mathfrak{S}$ is strong and satisfies $\sigma(\mathfrak{S})=16$.
* $\boldsymbol{(2)}$
If $G=G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$, these structures form $1920$
orbits under the action of $\operatorname{Aut}(G)$.
* $\boldsymbol{(3)}$
If $G=G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$, these structures form $1152$
orbits under the action of $\operatorname{Aut}(G)$.
Theorem A should be compared with previous results, obtained by the first
author in collaboration with A. Causin, regarding the construction of diagonal
double Kodaira structures on some extra-special groups of order at least
$2^{7}=128$, see [CaPol19, Pol20]. It turns out that the examples presented
here are really new, in the sense that they cannot be obtained as images of
structures on extra-special groups of larger order, see Remark 3.17.
A restatement of Theorem A in terms of surface braid groups is the following,
cf. Remark 3.18. First of all, let us say that a quotient map/group
epimorphism
$\varphi\colon\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta)\longrightarrow G$ is
_admissible_ if $\varphi(A_{12})$ has order $n\geq 2$. Then:
###### Theorem A’.
Let $G$ be a finite group admitting an admissible epimorphism
$\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$. Then $|G|\geq 32$,
with equality if and only if $G$ is extra-special. Moreover, the following
holds.
* $\boldsymbol{(1)}$
For both extra-special groups $G$ of order $32$, there are $2211840=1152\cdot
1920$ admissible epimorphisms
$\varphi\colon\mathsf{P}_{2}(\Sigma_{2})\longrightarrow G$. For all of them,
$\varphi(A_{12})$ is the generator of $Z(G)$, so $n=2$.
* $\boldsymbol{(2)}$
If $G=G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$, these epimorphisms form
$1920$ orbits under the natural action of $\operatorname{Aut}(G)$.
* $\boldsymbol{(3)}$
If $G=G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$, these epimorphisms form
$1152$ orbits under the natural action of $\operatorname{Aut}(G)$.
The geometrical counterpart of Theorems A and A’ can be now expressed in terms
of diagonal double Kodaira fibrations as follows:
###### Theorem B (see Theorem 4.7.).
Let $G$ be a finite group and $\mathbf{f}\colon
S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ be a Galois cover, with Galois
group $G$, branched on the diagonal $\Delta$ with branching order $n\geq 2$.
Then $|G|\geq 32$, with equality if and only if $G$ is extra-special.
Moreover, the following holds.
* $\boldsymbol{(1)}$
For both extra-special groups of order $32$, there exist $2211840=1152\cdot
1920$ distinct $G$-covers $\mathbf{f}\colon
S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ as above. All of them are diagonal
double Kodaira fibrations with $n=2$ and
(5) $b_{1}=b_{2}=2,\quad g_{1}=g_{2}=41,\quad\sigma(S)=16.$
* $\boldsymbol{(2)}$
If $G=G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$, these $G$-covers form $1920$
equivalence classes up to cover isomorphisms.
* $\boldsymbol{(3)}$
If $G=G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$, these $G$-covers form $1152$
equivalence classes up to cover isomorphisms.
As a consequence, we obtain a sharp lower bound for the signature of a
diagonal double Kodaira fibration or, equivalently, of a diagonal double
Kodaira structure:
###### Theorem C (see Corollary 4.8).
Let $f\colon S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ be a diagonal
double Kodaira fibration, associated with a diagonal double Kodaira structure
of type $(b,\,n)$ on a finite group $G$. Then $\sigma(S)\geq 16$, and equality
holds precisely when $(b,\,n)=(2,\,2)$ and $G$ is an extra-special group of
order $32$.
These results yield, as a by-product, new “double solutions” to a problem
(stated by G. Mess) from Kirby’s problem list in low-dimensional topology
[Kir97, Problem 2.18 A], asking what is the smallest number $b$ for which
there exists a real surface bundle over a real surface with base genus $b$ and
non-zero signature. We actually have $b=2$, also for double Kodaira
fibrations, as shown in [CaPol19, Proposition 3.19] and [Pol20, Theorem 3.6]
by using double Kodaira structures of type $(2,\,3)$ on extra-special groups
of order $3^{5}$. Those fibrations had signature $144$ and fibre genera $325$;
we are now able to substantially lower both these values:
###### Theorem D (see Theorem 4.9).
Let $S$ be a diagonal double Kodaira surface, associated with a strong
diagonal double Kodaira structure of type $(b,\,n)=(2,\,2)$ on an extra-
special group $G$ of order $32$. Then the real manifold $M$ underlying $S$ is
a closed, orientable $4$-manifold of signature $16$ that can be realized as a
real surface bundle over a real surface of genus $2$, with fibre genus $41$,
in two different ways.
In fact, we may ask whether $16$ and $41$ are the minimum possible values for
the signature and the fibre genus of a (non necessarily diagonal) double
Kodaira fibration $f\colon S\longrightarrow\Sigma_{2}\times\Sigma_{2}$, cf.
Corollary 4.10.
We believe that the results described above are significant for at least two
reasons:
* $\boldsymbol{(i)}$
although we know that $\mathsf{P}_{2}(\Sigma_{b})$ is residually $p$-finite
for every prime number $p\geq 2$, see [BarBel09, pp. 1481-1490], so far there
has been no systematic work aimed at describing its admissible quotients. The
first results in this direction were those of A. Causin and the first author,
who showed that both extra-special groups of order $p^{4b+1}$ appear as
admissible quotients of $\mathsf{P}_{2}(\Sigma_{b})$ for all $b\geq 2$ and all
prime numbers $p\geq 5$; moreover, if $p$ divides $b+1$, then both extra-
special groups of order $p^{2b+1}$ appear as admissible quotients, too. As we
said before, the smallest adimissible quotients detected in [CaPol19] and
[Pol20], corresponding to the case $(b,\,p)=(3,\,2)$, have order $2^{7}=128$.
Our Theorem B sheds some new light on this problem, by providing a sharp lower
bound for the order of $G$: more precisely, if a finite group $G$ is an
admissible quotient of $\mathsf{P}_{2}(\Sigma_{b})$ for some $b$, then
$|G|\geq 32$, with equality if and only if $G$ is extra-special. Moreover, for
both extra-special groups of order $32$, Theorem B computes the number of
admissible quotient maps
$\varphi\colon\mathsf{P}_{2}(\Sigma_{2})\longrightarrow G$, and the number of
their equivalence classes up to the natural action of $\operatorname{Aut}(G)$;
* $\boldsymbol{(ii)}$
constructing (double) Kodaira fibrations with small signature is a rather
difficult problem. As far as we know, before the present work the only
examples with signature $16$ were the ones listed in [LLR20, Table 3, Cases
6.2, 6.6, 6.7 (Type 1), 6.9]. Our examples in Theorem A are new, since both
the base genera and the fibre genera are different from the ones in the
aforementioned cases. Note that our results also show that _every_ curve of
genus $2$ (and not only some special curve with extra automorphisms) is the
base of a double Kodaira fibration with signature $16$. Thus, we obtain two
families of dimension $3$ of such fibrations that, to the best of our
knowledge, provide the first examples of positive-dimensional families of
double Kodaira fibrations with small signature.
Finally, this work also contains a Computer Algebra part, concerning the
calculation of the group $H_{1}(S,\,\mathbb{Z})$, where $S$ is as in Theorem
D, by using the software `GAP4`, see [GAP4]. The result is the following:
###### Theorem E (see Proposition 4.14).
Let $f\colon S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ be the diagonal
double Kodaira fibration associated with a diagonal double Kodaira structure
of type $(b,\,n)=(2,\,2)$ on an extra-special group $G$ of order $32$. Then
(6) $H_{1}(S,\,\mathbb{Z})=\mathbb{Z}^{8}\oplus(\mathbb{Z}_{2})^{4}.$
In particular, this homology group is independent both of $G$ and of the
chosen structure on it.
Thus $\mathsf{b}_{1}(S)=8$ and, consequently, the pull-back map $f^{*}\colon
H^{1}(\Sigma_{b}\times\Sigma_{b},\,\mathbb{Q})\longrightarrow
H^{1}(S,\,\mathbb{Q})$ is an isomorphism. Following [Breg18], we will express
this fact by saying that $f$ is _maximal_ , see Proposition 4.18. For an
interpretation of maximality in terms of monodromy, see Corollary 4.17.
Let us now describe how this paper is organized. In Section 1 we introduce
some algebraic preliminaries, in particular we discuss the so-called CCT-
groups (Definition 1.1), namely, finite non-abelian groups in which
commutativity is a transitive relation on the set of non-central elements.
These groups are of historical importance in the context of classification of
finite simple groups, see Remark 1.3, and they play a fundamental role in this
paper, as we will soon explain. It turns out that there are precisely eight
groups $G$ with $|G|\leq 32$ that are not CCT-groups, namely $\mathsf{S}_{4}$
and seven groups of order $32$, see Corollary 1.6, Proposition 1.7 and
Proposition 1.14.
In Section 2 we define diagonal double Kodaira structures on finite groups and
we explain the relation with their counterpart in geometric topology, namely
admissible group epimorphisms from pure surface braid groups.
Section 3 is devoted to the study of diagonal double Kodaira structures in
groups of order at most $32$. One crucial technical result is Proposition 3.4,
stating that there are no such structures on CCT-groups. Thus, in order to
prove the first part of Theorem A, we only need to exclude the existence of
diagonal double Kodaira structures on $\mathsf{S}_{4}$ and on the five non-
abelian, non-CCT groups of order $32$; this is done in Proposition 3.9 and
Proposition 3.11, respectively. The second part of Theorem A, i.e. the
computation of number of structures in each case, is obtained by using some
techniques borrowed from [Win72]; more precisely, we exploit the fact that
$V=G/Z(G)$ is a symplectic vector space of dimension $4$ over
$\mathbb{Z}_{2}$, and that $\operatorname{Out}(G)$ embeds in
$\mathrm{Sp}(4,\,\mathbb{Z}_{2})$ as the orthogonal group associated with the
quadratic form $q\colon V\longrightarrow\mathbb{Z}_{2}$ related to the
symplectic form $(\cdot\;,\cdot)$ by
$q(\overline{\mathsf{x}}\,\overline{\mathsf{y}})=q(\overline{\mathsf{x}})+q(\overline{\mathsf{y}})+(\overline{\mathsf{x}},\,\overline{\mathsf{y}})$.
Finally, in Section 4 we establish the relation between our algebraic results
and the geometrical framework of diagonal double Kodaira fibrations, and we
provide the proofs of Theorems B, C, and D; furthermore, we state Theorem E,
discussing some of its consequences.
The paper ends with two appendices. In Appendix A we collect the presentations
for the non-abelian groups of order $24$ and $32$ that we used in our
calculations, while Appendix B contains the details about the computational
proof of Theorem E.
$\mathbf{Notation\;and\;conventions}$. If $S$ is a complex, non-singular
projective surface, then $c_{1}(S)$, $c_{2}(S)$ denote the first and second
Chern class of its tangent bundle $T_{S}$, respectively. If $X$ is a
topological space, the fundamental group of $X$ will be denoted by
$\pi_{1}(X)$ and its first Betti number by $\mathsf{b}_{1}(X)$.
Throughout the paper we use the following notation for groups:
* •
$\mathbb{Z}_{n}$: cyclic group of order $n$.
* •
$G=N\rtimes Q$: semi-direct product of $N$ and $Q$, namely, split extension of
$Q$ by $N$, where $N$ is normal in $G$.
* •
$G=N.Q$: non-split extension of $Q$ by $N$.
* •
$\operatorname{Aut}(G)$: the automorphism group of $G$.
* •
$\mathsf{D}_{p,\,q,\,r}=\mathbb{Z}_{q}\rtimes\mathbb{Z}_{p}=\langle
x,\,y\;|\;x^{p}=y^{q}=1,\;xyx^{-1}=y^{r}\rangle$: split metacyclic group of
order $pq$. The group $\mathsf{D}_{2,\,n,\,-1}$ is the dihedral group of order
$2n$ and will be denoted by $\mathsf{D}_{2n}$.
* •
If $n$ is an integer greater than or equal to $4$, we denote by
$\mathsf{QD}_{2^{n}}$ the quasi-dihedral group of order $2^{n}$, having
presentation
(7) $\mathsf{QD}_{2^{n}}:=\langle x,\,y\mid
x^{2}=y^{2^{n-1}}=1,\;xyx^{-1}=y^{2^{n-2}-1}\rangle.$
* •
The generalized quaternion group of order $4n$ is denoted by $\mathsf{Q}_{4n}$
and is presented as
(8) $\mathsf{Q}_{4n}=\langle x,\,y,\,z\mid x^{n}=y^{2}=z^{2}=xyz\rangle.$
For $n=2$ we obtain the usual quaternion group $\mathsf{Q}_{8}$, for which we
adopt the classical presentation
(9) $\mathsf{Q}_{8}=\langle i,\,j,\,k\mid i^{2}=j^{2}=k^{2}=ijk\rangle,$
denoting by $-1$ the unique element of order $2$.
* •
$\mathsf{S}_{n},\;\mathsf{A}_{n}$: symmetric, alternating group on $n$
symbols. We write the composition of permutations from the right to the left;
for instance, $(13)(12)=(123)$.
* •
$\mathsf{GL}(n,\,\mathbb{F}_{q}),\,\mathsf{SL}(n,\,\mathbb{F}_{q}),\,\mathsf{Sp}(n,\,\mathbb{F}_{q})$:
general linear group, special linear group and symplectic group of $n\times n$
matrices over a field with $q$ elements.
* •
The order of a finite group $G$ is denoted by $|G|$. If $x\in G$, the order of
$x$ is denoted by $o(x)$ and its centralizer in $G$ by $C_{G}(x)$.
* •
If $x,\,y\in G$, their commutator is defined as $[x,\,y]=xyx^{-1}y^{-1}$.
* •
The commutator subgroup of $G$ is denoted by $[G,\,G]$, the center of $G$ by
$Z(G)$.
* •
If $S=\\{s_{1},\ldots,s_{n}\\}\subset G$, the subgroup generated by $S$ is
denoted by $\langle S\rangle=\langle s_{1},\ldots,s_{n}\rangle$.
* •
$\mathrm{IdSmallGroup}(G)$ indicates the label of the group $G$ in the `GAP4`
database of small groups. For instance
$\mathrm{IdSmallGroup}(\mathsf{D}_{4})=G(8,\,3)$ means that $\mathsf{D}_{4}$
is the third in the list of groups of order $8$.
* •
If $N$ is a normal subgroup of $G$ and $g\in G$, we denote by $\bar{g}$ the
image of $g$ in the quotient group $G/N$.
## 1\. Group-theoretical preliminaries: CCT-groups and extra-special groups
###### Definition 1.1.
A non-abelian, finite group $G$ is said to be a _center commutative-transitive
group_ $($or a CCT-_group_ , for short$)$ if commutativity is a transitive
relation on the set on non-central elements of $G$. In other words, if
$x,\,y,\,z\in G-Z(G)$ and $[x,\,y]=[y,\,z]=1$, then $[x,\,z]=1$.
###### Proposition 1.2.
For a finite group $G$, the following properties are equivalent.
* $\boldsymbol{(1)}$
$G$ is a _CCT_ -group.
* $\boldsymbol{(2)}$
For every pair $x,\,y$ of non-central elements in $G$, the relation
$[x,\,y]=1$ implies $C_{G}(x)=C_{G}(y)$.
* $\boldsymbol{(3)}$
For every non-central element $x\in G$, the centralizer $C_{G}(x)$ is abelian.
###### Proof.
$\boldsymbol{(1)\Rightarrow(2)}$ Take two commuting elements $x,\,y\in G-Z(G)$
and let $z\in C_{G}(x)$. If $z$ is central then $z\in C_{G}(y)$ by definition,
otherwise $[x,\,y]=[x,\,z]=1$ implies $[y,\,z]=1$ by the assumption that $G$
is a CCT-group. Therefore we get $C_{G}(x)\subseteq C_{G}(y)$, and exchanging
the roles of $x,\,y$ we can deduce the reverse inclusion.
$\boldsymbol{(2)\Rightarrow(3)}$ Given any element $x\in G-Z(G)$, it is
sufficient to check that $[y,\,z]=1$ for every pair of non-central elements
$y,\,z\in C_{G}(x)$. By assumption, $C_{G}(y)=C_{G}(z)$, hence $y\in C_{G}(z)$
and we are done.
$\boldsymbol{(3)\Rightarrow(1)}$ Let $x,\,y,\,z\in G-Z(G)$ and suppose
$[x,\,y]=[y,\,z]=1$, namely, $x,\,z\in C_{G}(y)$. Since we are assuming that
$C_{G}(y)$ is abelian, this gives $[x,\,z]=1$, hence $G$ is a CCT-group. ∎
###### Remark 1.3.
CCT-groups are of historical importance in the context of classification of
finite simple groups, see for instance [Suz61], where they are called CA-
_groups_. Further references on the topic are [Schm70], [Reb71], [Rocke73],
[Wu98].
###### Lemma 1.4.
If $G$ is a finite group such that $G/Z(G)$ is cyclic, then $G$ is abelian.
###### Proof.
Every element of $G$ can be written as $zy^{n}$, where $y\in G$ is such that
its image generates $G/Z(G)$, $z\in Z(G)$ and $n\in\mathbb{Z}$. It follows
that any two elements of $G$ commute. ∎
###### Proposition 1.5.
Let $G$ be a non-abelian, finite group.
* $\boldsymbol{(1)}$
If $|G|$ is the product of at most three prime factors $($not necessarily
distinct$)$, then $G$ is a _CCT_ -group.
* $\boldsymbol{(2)}$
If $|G|=p^{4}$, with $p$ prime, then $G$ is a _CCT_ -group.
* $\boldsymbol{(3)}$
If $G$ contains an abelian normal subgroup of prime index, then $G$ is a _CCT_
-group.
###### Proof.
$\boldsymbol{(1)}$ Assume that $|G|$ is the product of at most three prime
factors, and take a non-central element $y$. Then the centralizer $C_{G}(y)$
has non-trivial center, because $1\neq y\in Z(C_{G}(y))$, and its order is the
product of at most two primes. Therefore the quotient of $C_{G}(y)$ by its
center is cyclic, hence $C_{G}(y)$ is abelian by Lemma 1.4.
$\boldsymbol{(2)}$ Assume $|G|=p^{4}$ and suppose by contradiction that there
exist three elements $x,\,y,\,z\in G-Z(G)$ such that $[x,\,y]=[y,\,z]=1$ but
$[x,\,z]\neq 1$. They generate a non-abelian subgroup $N=\langle
x,\,y,\,z\rangle$, which is not the whole of $G$ since $y\in Z(N)$ but
$y\notin Z(G)$. It follows that $N$ has order $p^{3}$ and so, by Lemma 1.4,
its center is cyclic of order $p$, generated by $y$. The group $G$ is a finite
$p$-group, hence a nilpotent group; being a proper subgroup of maximal order
in a nilpotent group, $N$ is normal in $G$ (see [Mac12, Corollary 5.2]), so we
have a conjugacy homomorphism $G\longrightarrow\mathrm{Aut}(N)$, that in turn
induces a conjugacy homomorphism
$G\longrightarrow\mathrm{Aut}(Z(N))\simeq\mathbb{Z}_{p-1}$. The image of such
a homomorphism must have order dividing both $p^{4}$ and $p-1$, hence it is
trivial. In other words, the conjugacy action of $G$ on $Z(N)=\langle
y\rangle$ is trivial, hence $y$ is central in $G$, contradiction.
$\boldsymbol{(3)}$ Let $N$ be an abelian normal subgroup of $G$ such that
$G/N$ has prime order $p$. As $G/N$ has no non-trivial proper subgroups, it
follows that $N$ is a maximal subgroup of $G$. Let $x$ be any non-central
element of $G$, so that $C_{G}(x)$ is a proper subgroup of $G$; then there are
two possibilities.
Case 1: $x\in N$. Then $N\subseteq C_{G}(x)$ and so, by the maximality of $N$,
we get $C_{G}(x)=N$, which is abelian.
Case 2: $x\notin N$. Then the image of $x$ generates $G/N$, and so every
element $y\in G$ can be written in the form $y=ux^{r}$, where $u\in N$ and
$0\leq r\leq p-1$. In particular, if $y\in C_{G}(x)$, the condition
$[x,\,y]=1$ yields $[x,\,u]=1$, namely $u\in N\cap C_{G}(x)$. Since $N$ is
abelian, it follows that $C_{G}(x)$ is abelian, too.
∎
We now want to classify non-abelian, non-CCT groups of order at most $32$.
First of all, as an immediate consequence of Parts $\boldsymbol{(1)}$ and
$\boldsymbol{(2)}$ of Proposition 1.5, we have the following
###### Corollary 1.6.
Let $G$ be a non-abelian, finite group such that $|G|\leq 32$. If $G$ is not a
_CCT_ -group, then either $|G|=24$ or $|G|=32$.
Let us start by disposing of the case $G=24$.
###### Proposition 1.7.
Let $G$ be a non-abelian finite group such that $|G|=24$ and $G$ is not a
_CCT_ -group. Then $G=\mathsf{S}_{4}$.
###### Proof.
We start by observing that $\mathsf{S}_{4}$ is not a CCT-group. In fact,
$(1234)$ commutes to its square $(13)(24)$, which commutes to $(12)(34)$, but
$(1234)$ and $(12)(34)$ do not commute.
What is left is to show that the remaining non-abelian groups of order $24$
are all CCT-groups; we will do a case-by-case analysis, referring the reader
to the presentations given in Table 1 of Appendix A. Apart from
$G=G(24,\,3)=\mathsf{SL}(2,\,\mathbb{F}_{3})$, for which we give an ad-hoc
proof, we will show that all these groups contain an abelian subgroup $N$ of
prime index, so that we can conclude by using Part $\boldsymbol{(3)}$ of
Proposition 1.5.
* •
$G=G(24,\,1).$ Take $N=\langle x^{2}y\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,3).$ The action of $\mathrm{Aut}(G)$ has five orbits, whose
representative elements are $\\{1,\,x,\,x^{2},\,z,\,z^{2}\\}$, see [SL(2,3)].
We have $\langle z^{2}\rangle=Z(G)$ and so, since $C_{G}(x)\subseteq
C_{G}(x^{2})$, it suffices to show that the centralizers of $x^{2}$ and $z$
are both abelian. In fact, we have
(10) $C_{G}(x^{2})=\langle x\rangle\simeq\mathbb{Z}_{6},\quad C_{G}(z)=\langle
z\rangle\simeq\mathbb{Z}_{4}.$
* •
$G=G(24,\,4).$ Take $N=\langle x\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,5).$ Take $N=\langle y\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,6).$ Take $N=\langle y\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,7).$ Take $N=\langle
z,\,x^{2}y\rangle\simeq\mathbb{Z}_{6}\times\mathbb{Z}_{2}$.
* •
$G=G(24,\,8).$ Take $N=\langle
y,\,z,\,w\rangle\simeq\mathbb{Z}_{6}\times\mathbb{Z}_{2}$.
* •
$G=G(24,\,10).$ Take $N=\langle z,\,y\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,11).$ Take $N=\langle z,\,i\rangle\simeq\mathbb{Z}_{12}$.
* •
$G=G(24,\,13).$ Take $N=\langle
z\rangle\times\mathsf{V}_{4}\simeq(\mathbb{Z}_{2})^{3}$, where
$\mathsf{V}_{4}=\langle(1\,2)(3\,4),\;(1\,3)(2\,4)\rangle$ is the Klein
subgroup.
* •
$G=G(24,\,14).$ Take $N=\langle
z,\,w\rangle\times\langle(123)\rangle\simeq\mathbb{Z}_{6}\times\mathbb{Z}_{2}$.
This completes the proof. ∎
The next step is to classify non-abelian, non-CCT groups $G$ with $|G|=32$; it
will turn out that there are precisely seven of them, see Proposition 1.14.
Before doing this, let us introduce the following classical definition, see
for instance [Gor07, p. 183] and [Is08, p. 123].
###### Definition 1.8.
Let $p$ be a prime number. A finite $p$-group $G$ is called _extra-special_ if
its center $Z(G)$ is cyclic of order $p$ and the quotient $V=G/Z(G)$ is a non-
trivial, elementary abelian $p$-group.
An elementary abelian $p$-group is a finite-dimensional vector space over the
field $\mathbb{Z}_{p}$, hence it is of the form $V=(\mathbb{Z}_{p})^{\dim V}$
and $G$ fits into a short exact sequence
(11) $1\longrightarrow\mathbb{Z}_{p}\longrightarrow G\longrightarrow
V\longrightarrow 1.$
Note that, $V$ being abelian, we must have $[G,\,G]=\mathbb{Z}_{p}$, namely
the commutator subgroup of $G$ coincides with its center. Furthermore, since
the extension (11) is central, it cannot be split, otherwise $G$ would be
isomorphic to the direct product of the two abelian groups $\mathbb{Z}_{p}$
and $V$, which is impossible because $G$ is non-abelian.
If $G$ is extra-special, then we can define a map $\omega\colon V\times
V\longrightarrow\mathbb{Z}_{p}$ as follows: for every $v_{1},\,v_{2}\in V$, we
set $\omega(v_{1},\,v_{2})=[g_{1},\,g_{2}]$, where $g_{i}$ is any lift of
$v_{i}$ in $G$. This turns out to be a symplectic form on $V$, hence $\dim V$
is even and $|G|=p^{\dim V+1}$ is an odd power of $p$.
For every prime number $p$, there are precisely two isomorphism classes
$M(p)$, $N(p)$ of non-abelian groups of order $p^{3}$, namely
$\begin{split}M(p)&=\langle\mathsf{r},\,\mathsf{t},\,\mathsf{z}\;|\;\mathsf{r}^{p}=\mathsf{t}^{p}=1,\,\mathsf{z}^{p}=1,[\mathsf{r},\,\mathsf{z}]=[\mathsf{t},\,\mathsf{z}]=1,\,[\mathsf{r},\,\mathsf{t}]=\mathsf{z}^{-1}\rangle\\\
N(p)&=\langle\mathsf{r},\,\mathsf{t},\,\mathsf{z}\;|\;\mathsf{r}^{p}=\mathsf{t}^{p}=\mathsf{z},\,\mathsf{z}^{p}=1,[\mathsf{r},\,\mathsf{z}]=[\mathsf{t},\,\mathsf{z}]=1,\,[\mathsf{r},\,\mathsf{t}]=\mathsf{z}^{-1}\rangle\\\
\end{split}$
and both of them are in fact extra-special, see [Gor07, Theorem 5.1 of Chapter
5].
If $p$ is odd, then the groups $M(p)$ and $N(p)$ are distinguished by their
exponent, which equals $p$ and $p^{2}$, respectively. If $p=2$, the group
$M(p)$ is isomorphic to the dihedral group $D_{8}$, whereas $N(p)$ is
isomorphic to the quaternion group $\mathsf{Q}_{8}$.
The classification of extra-special $p$-groups is now provided by the result
below, see [Gor07, Section 5 of Chapter 5] and [CaPol19, Section 2].
###### Proposition 1.9.
If $b\geq 2$ is a positive integer and $p$ is a prime number, there are
exactly two isomorphism classes of extra-special $p$-groups of order
$p^{2b+1}$, that can be described as follows.
* •
The central product $\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ of $b$ copies of
$M(p)$, having presentation
(12)
$\begin{split}\mathsf{H}_{2b+1}(\mathbb{Z}_{p})=\langle\,&\mathsf{r}_{1},\,\mathsf{t}_{1},\ldots,\mathsf{r}_{b},\,\mathsf{t}_{b},\,\mathsf{z}\;|\;\mathsf{r}_{j}^{p}=\mathsf{t}_{j}^{p}=\mathsf{z}^{p}=1,\\\
&[\mathsf{r}_{j},\,\mathsf{z}]=[\mathsf{t}_{j},\,\mathsf{z}]=1,\\\
&[\mathsf{r}_{j},\,\mathsf{r}_{k}]=[\mathsf{t}_{j},\,\mathsf{t}_{k}]=1,\\\
&[\mathsf{r}_{j},\,\mathsf{t}_{k}]=\mathsf{z}^{-\delta_{jk}}\,\rangle.\end{split}$
If $p$ is odd, this group has exponent $p$ and is isomorphic to the matrix
Heisenberg group
$\mathcal{H}_{2b+1}(\mathbb{Z}_{p})\subset\mathsf{GL}(b+2,\,\mathbb{Z}_{p})$
of dimension $2b+1$ over the field $\mathbb{Z}_{p}$.
* •
The central product $\mathsf{G}_{2b+1}(\mathbb{Z}_{p})$ of $b-1$ copies of
$M(p)$ and one copy of $N(p)$, having presentation
(13)
$\begin{split}\mathsf{G}_{2b+1}(\mathbb{Z}_{p})=\langle\,&\mathsf{r}_{1},\,\mathsf{t}_{1},\ldots,\mathsf{r}_{b},\,\mathsf{t}_{b},\,\mathsf{z}\;|\;\mathsf{r}_{b}^{p}=\mathsf{t}_{b}^{p}=\mathsf{z},\\\
&\mathsf{r}_{1}^{p}=\mathsf{t}_{1}^{p}=\ldots=\mathsf{r}_{b-1}^{p}=\mathsf{t}_{b-1}^{p}=\mathsf{z}^{p}=1,\\\
&[\mathsf{r}_{j},\,\mathsf{z}]=[\mathsf{t}_{j},\,\mathsf{z}]=1,\\\
&[\mathsf{r}_{j},\,\mathsf{r}_{k}]=[\mathsf{t}_{j},\,\mathsf{t}_{k}]=1,\\\
&[\mathsf{r}_{j},\,\mathsf{t}_{k}]=\mathsf{z}^{-\delta_{jk}}\,\rangle.\end{split}$
If $p$ is odd, this group has exponent $p^{2}$.
###### Remark 1.10.
In both cases, from the relations above we deduce
(14)
$[\mathsf{r}_{j}^{-1},\,\mathsf{t}_{k}]=\mathsf{z}^{\delta_{jk}},\quad[\mathsf{r}_{j}^{-1},\,\mathsf{t}_{k}^{-1}]=\mathsf{z}^{-\delta_{jk}}$
###### Remark 1.11.
For both groups $\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ and
$\mathsf{G}_{2b+1}(\mathbb{Z}_{p})$, the center coincides with the derived
subgroup and is equal to $\langle\mathsf{z}\rangle\simeq\mathbb{Z}_{p}$.
###### Remark 1.12.
If $p=2$, we can distinguish the two groups
$\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ and $\mathsf{G}_{2b+1}(\mathbb{Z}_{p})$ by
counting the number of elements of order $4$: for instance, when $b=2$, the
group $\mathsf{H}_{5}(\mathbb{Z}_{2})$ contains $12$ elements of order $4$,
whereas $\mathsf{G}_{5}(\mathbb{Z}_{2})$ contains $20$ of them.
###### Remark 1.13.
The groups $\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ and
$\mathsf{G}_{2b+1}(\mathbb{Z}_{p})$ are not CCT-groups. In fact, let us take
two distinct indices $j,\,k\in\\{1,\ldots,b\\}$ and consider the non-central
elements $\mathsf{r}_{j}$, $\mathsf{t}_{j}$, $\mathsf{t}_{k}$. Then we have
$[\mathsf{r}_{j},\,\mathsf{t}_{k}]=[\mathsf{t}_{k},\,\mathsf{t}_{j}]=1$, but
$[\mathsf{r}_{j},\,\mathsf{t}_{j}]=\mathsf{z}^{-1}$.
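The presentation (12) and the observations in Remarks 1.11 and 1.13 can be checked in a small computational model. The following Python sketch (ours, not part of the original argument) encodes an element of the matrix Heisenberg group of Proposition 1.9 as a triple $(a,\,c,\,s)$, corresponding to the upper unitriangular $(b+2)\times(b+2)$ matrix with row vector $a$, column vector $c$ and corner entry $s$; for odd $p$ this matrix group is $\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ by Proposition 1.9. The values of $p$ and $b$, and all function names, are our own choices. The script verifies that the center is cyclic of order $p$ and coincides with the derived subgroup, that the quotient by the center is elementary abelian, and that commutativity is not transitive.

from itertools import product

p, b = 3, 2   # any odd prime p and any b >= 2 will do for this check

def mul(g, h):
    a1, c1, s1 = g
    a2, c2, s2 = h
    return (tuple((x + y) % p for x, y in zip(a1, a2)),
            tuple((x + y) % p for x, y in zip(c1, c2)),
            (s1 + s2 + sum(x * y for x, y in zip(a1, c2))) % p)

def inv(g):
    a, c, s = g
    return (tuple(-x % p for x in a), tuple(-x % p for x in c),
            (-s + sum(x * y for x, y in zip(a, c))) % p)

def comm(g, h):   # the commutator [g, h] = g h g^{-1} h^{-1}
    return mul(mul(g, h), mul(inv(g), inv(h)))

e = (tuple([0] * b), tuple([0] * b), 0)
G = [(a, c, s) for a in product(range(p), repeat=b)
               for c in product(range(p), repeat=b)
               for s in range(p)]
assert len(G) == p ** (2 * b + 1)

center = [g for g in G if all(mul(g, h) == mul(h, g) for h in G)]
assert len(center) == p                                       # Z(G) is cyclic of order p
assert {comm(g, h) for g in G for h in G} == set(center)      # [G, G] = Z(G), cf. Remark 1.11

def power(g, n):
    r = e
    for _ in range(n):
        r = mul(r, g)
    return r
assert all(power(g, p) in center for g in G)                  # G/Z(G) is elementary abelian

# generators r_j, t_j of presentation (12), and the failure of CCT (Remark 1.13)
def gens(i):
    a = [0] * b
    a[i] = 1
    return ((tuple(a), tuple([0] * b), 0), (tuple([0] * b), tuple(a), 0))
r1, t1 = gens(0)
r2, t2 = gens(1)
assert comm(r1, t2) == e and comm(t2, t1) == e and comm(r1, t1) != e
print("all checks passed")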
We can now dispose of the case $|G|=32$.
###### Proposition 1.14.
Let $G$ be a non-abelian, finite group such that $|G|=32$ and $G$ is not a
_CCT_ -group. Then $G=G(32,\,t)$, where
$t\in\\{6,\,7,\,8,\,43,\,44,\,49,\,50\\}$. Here
$G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$ and
$G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$ are the two extra-special groups
of order $32$, in particular they have nilpotence class $2$, whereas the
remaining five groups have nilpotence class $3$.
###### Proof.
We first do a case-by-case analysis showing that, if
$t\notin\\{6,\,7,\,8,\,43,\,44,\,49,\,50\\}$, then $G=G(32,\,t)$ contains an
abelian subgroup $N$ of index $2$, so that $G$ is a CCT-group by Part
$\boldsymbol{(3)}$ of Proposition 1.5. In every case, we refer the reader to
the presentation given in Table 2 of Appendix A.
* •
$G=G(32,\,2).$ Take $N=\langle
x,\,y^{2},z\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,4).$ Take $N=\langle x,\,y^{2}\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,5).$ Take $N=\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,9).$ Take $N=\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,10).$ Take $N=\langle
ix,\,k\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,11).$ Take $N=\langle x,\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,12).$ Take $N=\langle x^{2},\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,13).$ Take $N=\langle
x^{2},\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,14).$ Take $N=\langle
x^{2},\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,15).$ Take $N=\langle
x^{2},\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,17).$ Take $N=\langle y\rangle\simeq\mathbb{Z}_{16}$.
* •
$G=G(32,\,18).$ Take $N=\langle y\rangle\simeq\mathbb{Z}_{16}$.
* •
$G=G(32,\,19).$ Take $N=\langle y\rangle\simeq\mathbb{Z}_{16}$.
* •
$G=G(32,\,20).$ Take $N=\langle x\rangle\simeq\mathbb{Z}_{16}$.
* •
$G=G(32,\,22).$ Take $N=\langle w\rangle\times\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,23).$ Take $N=\langle z\rangle\times\langle
x,\,y^{2}\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,24).$ Take $N=\langle x,\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,25).$ Take $N=\langle z\rangle\times\langle
y^{2}\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,26).$ Take $N=\langle z\rangle\times\langle
i\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,27).$ Take $N=\langle
x,\,y,\,a,\,b\rangle\simeq(\mathbb{Z}_{2})^{4}$.
* •
$G=G(32,\,28).$ Take $N=\langle
x,\,y,\,z\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,29).$ Take $N=\langle
x,\,i,\,z\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,30).$ Take $N=\langle
x,\,y,\,z\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,31).$ Take $N=\langle x,\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,32).$ Take $N=\langle y,\,z\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,33).$ Take $N=\langle x,\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,34).$ Take $N=\langle x,\,y\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,35).$ Take $N=\langle x,\,k\rangle\simeq(\mathbb{Z}_{4})^{2}$.
* •
$G=G(32,\,37).$ Take $N=\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,38).$ Take $N=\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,39).$ Take $N=\langle z\rangle\times\langle
y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,40).$ Take $N=\langle z\rangle\times\langle
y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,41).$ Take $N=\langle w\rangle\times\langle
x\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,42).$ Take $N=\langle
x,\,y\rangle\simeq\mathbb{Z}_{8}\times\mathbb{Z}_{2}$.
* •
$G=G(32,\,46).$ Take $N=\langle z,\,w\rangle\times\langle
y\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,47).$ Take $N=\langle z,\,w\rangle\times\langle
i\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
* •
$G=G(32,\,48).$ Take $N=\langle
x,\,y,\,z\rangle\simeq\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2}$.
It remains to show that $G=G(32,\,t)$ is not a CCT-group for
$t\in\\{6,\,7,\,8,\,43,\,44,\,49,\,50\\}$, and to compute the nilpotency class
in each case. In the sequel, we will denote by
$G=\Gamma_{1}\supseteq\Gamma_{2}\supseteq\Gamma_{3}\supseteq\ldots$
the lower central series of $G$. Recall that a group $G$ has nilpotency class
$c$ if $\Gamma_{c}\neq\\{1\\}$ and $\Gamma_{c+1}=\\{1\\}$.
For $t=49$ and $t=50$ we have the two extra-special cases, that are not CCT-
groups by Remark 1.13; their nilpotency class is $2$ by Remark 1.11. Let us
now deal with the remaining cases. For each of them, we exhibit three non-
central elements for which commutativity is not a transitive relation, and we
show that $\Gamma_{3}\neq\\{1\\}$ and $\Gamma_{4}=\\{1\\}$; note that this
means that $\Gamma_{2}=[G,\,G]$ is not contained in $Z(G)$, whereas
$\Gamma_{3}=[\Gamma_{2},\,G]$ is contained in $Z(G)$.
* •
$G=G(32,\,6).$ The center of $G$ is $Z(G)=\langle
x\rangle\simeq\mathbb{Z}_{2}$. We have $[y,\,w^{2}]=[w^{2},\,w]=1$, but
$[y,\,w]=x$. The derived subgroup of $G$ is $\Gamma_{2}=[G,\,G]=\langle
x,\,y\rangle\simeq(\mathbb{Z}_{2})^{2}$, and a short computation gives
$\Gamma_{3}=Z(G)$, so $c=3$.
* •
$G=G(32,\,7).$ The center of $G$ is $Z(G)=\langle
w\rangle\simeq\mathbb{Z}_{2}$. We have $[y,\,z]=[z,\,u]=1$, but $[y,\,u]=w$.
The derived subgroup of $G$ is $\Gamma_{2}=[G,\,G]=\langle
w,\,z\rangle\simeq(\mathbb{Z}_{2})^{2}$, and a short computation gives
$\Gamma_{3}=Z(G)$, so $c=3$.
* •
$G=G(32,\,8).$ The center of $G$ is $Z(G)=\langle
x^{4}\rangle\simeq\mathbb{Z}_{2}$. We have $[x,\,x^{2}]=[x^{2},\,y]=1$, but
$[x,\,y]=z^{2}$. The derived subgroup of $G$ is $\Gamma_{2}=[G,\,G]=\langle
x^{4},\,y\rangle\simeq(\mathbb{Z}_{2})^{2}$, and a short computation gives
$\Gamma_{3}=Z(G)$, so $c=3$.
* •
$G=G(32,\,43).$ The center of $G$ is $Z(G)=\langle
x^{4}\rangle\simeq\mathbb{Z}_{2}$. We have $[x,\,x^{2}]=[x^{2},\,z]=1$, but
$[x,\,z]=x^{4}$. The derived subgroup of $G$ is $\Gamma_{2}=[G,\,G]=\langle
x^{2}\rangle\simeq\mathbb{Z}_{4}$, and a short computation gives
$\Gamma_{3}=Z(G)$, so $c=3$.
* •
$G=G(32,\,44).$ The center of $G$ is $Z(G)=\langle
i^{2}\rangle\simeq\mathbb{Z}_{2}$. We have $[x,\,xk]=[xk,\,z]=1$, but
$[x,\,z]=i^{2}$. The derived subgroup of $G$ is $\Gamma_{2}=[G,\,G]=\langle
k\rangle\simeq\mathbb{Z}_{4}$, and a short computation gives
$\Gamma_{3}=Z(G)$, so $c=3$.
This completes the proof. ∎
## 2\. Diagonal double Kodaira structures
For more details on the material contained in this section, we refer the
reader to [CaPol19] and [Pol20]. Let $G$ be a finite group and let $b,\,n\geq
2$ be two positive integers.
###### Definition 2.1.
A _diagonal double Kodaira structure_ of type $(b,\,n)$ on $G$ is an ordered
set of $4b+1$ generators
(15)
$\mathfrak{S}=(\mathsf{r}_{11},\,\mathsf{t}_{11},\ldots,\mathsf{r}_{1b},\,\mathsf{t}_{1b},\;\mathsf{r}_{21},\,\mathsf{t}_{21},\ldots,\mathsf{r}_{2b},\,\mathsf{t}_{2b},\;\mathsf{z}),$
with $o(\mathsf{z})=n$, such that the following relations are satisfied. We
systematically use the commutator notation in order to indicate relations of
conjugacy type, writing for instance $[x,\,y]=zy^{-1}$ instead of
$xyx^{-1}=z$.
* •
Surface relations
(16)
$\displaystyle[\mathsf{r}_{1b}^{-1},\,\mathsf{t}_{1b}^{-1}]\,\mathsf{t}_{1b}^{-1}\,[\mathsf{r}_{1\,b-1}^{-1},\,\mathsf{t}_{1\,b-1}^{-1}]\,\mathsf{t}_{1\,b-1}^{-1}\cdots[\mathsf{r}_{11}^{-1},\,\mathsf{t}_{11}^{-1}]\,\mathsf{t}_{11}^{-1}\,(\mathsf{t}_{11}\,\mathsf{t}_{12}\cdots\mathsf{t}_{1b})=\mathsf{z}$
(17)
$\displaystyle[\mathsf{r}_{21}^{-1},\,\mathsf{t}_{21}]\,\mathsf{t}_{21}\,[\mathsf{r}_{22}^{-1},\,\mathsf{t}_{22}]\,\mathsf{t}_{22}\cdots[\mathsf{r}_{2b}^{-1},\,\mathsf{t}_{2b}]\,\mathsf{t}_{2b}\,(\mathsf{t}_{2b}^{-1}\,\mathsf{t}_{2\,b-1}^{-1}\cdots\mathsf{t}_{21}^{-1})=\mathsf{z}^{-1}$
* •
Conjugacy action of $\mathsf{r}_{1j}$
(18) $\displaystyle[\mathsf{r}_{1j},\,\mathsf{r}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (19)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{r}_{2j}]$ $\displaystyle=1$ (20)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{r}_{2k}]$
$\displaystyle=\mathsf{z}^{-1}\,\mathsf{r}_{2k}\,\mathsf{r}_{2j}^{-1}\,\mathsf{z}\,\mathsf{r}_{2j}\,\mathsf{r}_{2k}^{-1}\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (22)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{t}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (23)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{t}_{2j}]$
$\displaystyle=\mathsf{z}^{-1}$ (24)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{t}_{2k}]$
$\displaystyle=[\mathsf{z}^{-1},\,\mathsf{t}_{2k}]$
$\displaystyle\mathrm{if}\;\;j>k$ (26)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{z}]$
$\displaystyle=[\mathsf{r}_{2j}^{-1},\,\mathsf{z}]$
* •
Conjugacy action of $\mathsf{t}_{1j}$
(27) $\displaystyle[\mathsf{t}_{1j},\,\mathsf{r}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (28)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{r}_{2j}]$
$\displaystyle=\mathsf{t}_{2j}^{-1}\,\mathsf{z}\,\mathsf{t}_{2j}$ (29)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{r}_{2k}]$
$\displaystyle=[\mathsf{t}_{2j}^{-1},\,\mathsf{z}]\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (31)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{t}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (32)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{t}_{2j}]$
$\displaystyle=[\mathsf{t}_{2j}^{-1},\,\mathsf{z}]$ (33)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{t}_{2k}]$
$\displaystyle=\mathsf{t}_{2j}^{-1}\,\mathsf{z}\,\mathsf{t}_{2j}\,\mathsf{z}^{-1}\,\mathsf{t}_{2k}\,\mathsf{z}\,\mathsf{t}_{2j}^{-1}\,\mathsf{z}^{-1}\,\mathsf{t}_{2j}\,\mathsf{t}_{2k}^{-1}\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (35)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{z}]$
$\displaystyle=[\mathsf{t}_{2j}^{-1},\,\mathsf{z}]$
###### Remark 2.2.
From (18) and (27) we can infer the corresponding conjugacy actions of
$\mathsf{r}_{1j}^{-1}$ and $\mathsf{t}_{1j}^{-1}$. We leave the cumbersome but
standard computations to the reader.
###### Remark 2.3.
Abelian groups admit no diagonal double Kodaira structures. Indeed, the
relation $[\mathsf{r}_{1j},\,\mathsf{t}_{2j}]=\mathsf{z}^{-1}$ in (18)
provides a non-trivial commutator in $G$, because $o(\mathsf{z})=n\geq 2$.
###### Remark 2.4.
If $[G,\,G]\subseteq Z(G)$, then the relations defining a diagonal double
Kodaira structure of type $(b,\,n)$ assume the following simplified form.
* •
Relations expressing the centrality of $\mathsf{z}$
(36)
$[\mathsf{r}_{1j},\mathsf{z}]=[\mathsf{t}_{1j},\mathsf{z}]=[\mathsf{r}_{2j},\mathsf{z}]=[\mathsf{t}_{2j},\mathsf{z}]=1$
* •
Surface relations
(37)
$\displaystyle[\mathsf{r}_{1b}^{-1},\,\mathsf{t}_{1b}^{-1}]\,[\mathsf{r}_{1\,b-1}^{-1},\,\mathsf{t}_{1\,b-1}^{-1}]\,\cdots[\mathsf{r}_{11}^{-1},\,\mathsf{t}_{11}^{-1}]\,=\mathsf{z}$
(38)
$\displaystyle[\mathsf{r}_{21}^{-1},\,\mathsf{t}_{21}]\,[\mathsf{r}_{22}^{-1},\,\mathsf{t}_{22}]\cdots[\mathsf{r}_{2b}^{-1},\,\mathsf{t}_{2b}]=\mathsf{z}^{-1}$
* •
Conjugacy action of $\mathsf{r}_{1j}$
(39) $\displaystyle[\mathsf{r}_{1j},\,\mathsf{r}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{for\;all}\;\;j,\,k$ (40)
$\displaystyle[\mathsf{r}_{1j},\,\mathsf{t}_{2k}]$
$\displaystyle=\mathsf{z}^{-\delta_{jk}}$
* •
Conjugacy action of $\mathsf{t}_{1j}$
(42) $\displaystyle[\mathsf{t}_{1j},\,\mathsf{r}_{2k}]$
$\displaystyle=\mathsf{z}^{\delta_{jk}}$ (43)
$\displaystyle[\mathsf{t}_{1j},\,\mathsf{t}_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{for\;all}\;\;j,\,k$
where $\delta_{jk}$ stands for the Kronecker symbol. Note that, since $G$ is
non-abelian by Remark 2.3, the condition $[G,\,G]\subseteq Z(G)$ is equivalent
to $G$ having nilpotency class $2$, see [Is08, p. 22].
The definition of diagonal double Kodaira structure can be motivated by means
of some well-known concepts in geometric topology. Let $\Sigma_{b}$ be a
closed Riemann surface of genus $b$ and let $\mathscr{P}=(p_{1},\,p_{2})$ be
an ordered set of two distinct points on it. Let
$\Delta\subset\Sigma_{b}\times\Sigma_{b}$ be the diagonal. We denote by
$\mathsf{P}_{2}(\Sigma_{b})$ the _pure braid group_ of genus $b$ on two
strands, which is isomorphic to the fundamental group
$\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta,\,\mathscr{P})$. By Gonçalves-
Guaschi’s presentation of surface pure braid groups, see [GG04, Theorem 7],
[CaPol19, Theorem 1.7], we see that $\mathsf{P}_{2}(\Sigma_{b})$ can be
generated by $4b+1$ elements
(45) $\rho_{11},\,\tau_{11},\ldots,\rho_{1b},\,\tau_{1b},\;\rho_{21},\,\tau_{21},\ldots,\rho_{2b},\,\tau_{2b},\;A_{12}$
subject to the following set of relations.
* •
Surface relations
(46)
$\displaystyle[\rho_{1b}^{-1},\,\tau_{1b}^{-1}]\,\tau_{1b}^{-1}\,[\rho_{1\,b-1}^{-1},\,\tau_{1\,b-1}^{-1}]\,\tau_{1\,b-1}^{-1}\cdots[\rho_{11}^{-1},\,\tau_{11}^{-1}]\,\tau_{11}^{-1}\,(\tau_{11}\,\tau_{12}\cdots\tau_{1b})=A_{12}$
(47)
$\displaystyle[\rho_{21}^{-1},\,\tau_{21}]\,\tau_{21}\,[\rho_{22}^{-1},\,\tau_{22}]\,\tau_{22}\cdots[\rho_{2b}^{-1},\,\tau_{2b}]\,\tau_{2b}\,(\tau_{2b}^{-1}\,\tau_{2\,b-1}^{-1}\cdots\tau_{21}^{-1})=A_{12}^{-1}$
* •
Conjugacy action of $\rho_{1j}$
(48) $\displaystyle[\rho_{1j},\,\rho_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (49) $\displaystyle[\rho_{1j},\,\rho_{2j}]$
$\displaystyle=1$ (50) $\displaystyle[\rho_{1j},\,\rho_{2k}]$
$\displaystyle=A_{12}^{-1}\,\rho_{2k}\,\rho_{2j}^{-1}\,A_{12}\,\rho_{2j}\,\rho_{2k}^{-1}\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (52) $\displaystyle[\rho_{1j},\,\tau_{2k}]$
$\displaystyle=1$ $\displaystyle\mathrm{if}\;\;j<k$ (53)
$\displaystyle[\rho_{1j},\,\tau_{2j}]$ $\displaystyle=A_{12}^{-1}$ (54)
$\displaystyle[\rho_{1j},\,\tau_{2k}]$
$\displaystyle=[A_{12}^{-1},\,\tau_{2k}]$ $\displaystyle\mathrm{if}\;\;j>k$
(56) $\displaystyle[\rho_{1j},\,A_{12}]$
$\displaystyle=[\rho_{2j}^{-1},\,A_{12}]$
* •
Conjugacy action of $\tau_{1j}$
(57) $\displaystyle[\tau_{1j},\,\rho_{2k}]$ $\displaystyle=1$
$\displaystyle\mathrm{if}\;\;j<k$ (58) $\displaystyle[\tau_{1j},\,\rho_{2j}]$
$\displaystyle=\tau_{2j}^{-1}\,A_{12}\,\tau_{2j}$ (59)
$\displaystyle[\tau_{1j},\,\rho_{2k}]$
$\displaystyle=[\tau_{2j}^{-1},\,A_{12}]\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (61) $\displaystyle[\tau_{1j},\,\tau_{2k}]$
$\displaystyle=1$ $\displaystyle\mathrm{if}\;\;j<k$ (62)
$\displaystyle[\tau_{1j},\,\tau_{2j}]$
$\displaystyle=[\tau_{2j}^{-1},\,A_{12}]$ (63)
$\displaystyle[\tau_{1j},\,\tau_{2k}]$
$\displaystyle=\tau_{2j}^{-1}\,A_{12}\,\tau_{2j}\,A_{12}^{-1}\,\tau_{2k}\,A_{12}\,\tau_{2j}^{-1}\,A_{12}^{-1}\,\tau_{2j}\,\tau_{2k}^{-1}\;\;$
$\displaystyle\mathrm{if}\;\;j>k$ (65) $\displaystyle[\tau_{1j},\,A_{12}]$
$\displaystyle=[\tau_{2j}^{-1},\,A_{12}]$
Here the elements $\rho_{ij}$ and $\tau_{ij}$ are the braids depicted in
Figure 1, whereas $A_{12}$ is the braid depicted in Figure 2.
Figure 1. The pure braids $\rho_{1j}$ and $\rho_{2j}$ on $\Sigma_{b}$. If
$\ell\neq i$, the path corresponding to $\rho_{ij}$ and $\tau_{ij}$ based at
$p_{\ell}$ is the constant path.
Figure 2. The pure braid $A_{12}$ on $\Sigma_{b}$.
###### Remark 2.5.
Under the identification of $\mathsf{P}_{2}(\Sigma_{b})$ with
$\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta,\,\mathscr{P})$, the generator
$A_{12}\in\mathsf{P}_{2}(\Sigma_{b})$ represents the homotopy class
$\gamma_{\Delta}\in\pi_{1}(\Sigma_{b}\times\Sigma_{b}-\Delta,\,\mathscr{P})$
of a loop in $\Sigma_{b}\times\Sigma_{b}$ that “winds once” around the
diagonal $\Delta$.
We can now state the following
###### Proposition 2.6.
A finite group $G$ admits a diagonal double Kodaira structure of type
$(b,\,n)$ if and only if there is a surjective group homomorphism
(66) $\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$
such that $\varphi(A_{12})$ has order $n$.
###### Proof.
If such a $\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$ exists,
we can obtain a diagonal double Kodaira structure on $G$ by setting
(67)
$\mathsf{r}_{ij}=\varphi(\rho_{ij}),\quad\mathsf{t}_{ij}=\varphi(\tau_{ij}),\quad\mathsf{z}=\varphi({A_{12}}).$
Conversely, if $G$ admits a diagonal double Kodaira structure, then (67)
defines a group homomorphism
$\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$ with the desired
properties. ∎
The braid group $\mathsf{P}_{2}(\Sigma_{b})$ is the middle term of two split
short exact sequences
(68)
$1\longrightarrow\pi_{1}(\Sigma_{b}-\\{p_{i}\\},\,p_{j})\longrightarrow\mathsf{P}_{2}(\Sigma_{b})\longrightarrow\pi_{1}(\Sigma_{b},\,p_{i})\longrightarrow
1,$
where $\\{i,\,j\\}=\\{1,\,2\\}$, induced by the two natural projections of
pointed topological spaces
(69)
$(\Sigma_{b}\times\Sigma_{b}-\Delta,\,\mathscr{P})\longrightarrow(\Sigma_{b},\,p_{i}),$
see [GG04, Theorem 1]. Since we have
(70)
$\begin{split}\pi_{1}(\Sigma_{b}-\\{p_{2}\\},\,p_{1})&=\langle\rho_{11},\,\tau_{11},\ldots,\rho_{1b},\,\tau_{1b},\;A_{12}\rangle\\\
\pi_{1}(\Sigma_{b}-\\{p_{1}\\},\,p_{2})&=\langle\rho_{21},\,\tau_{21},\ldots,\rho_{2b},\,\tau_{2b},\;A_{12}\rangle,\end{split}$
it follows that the two subgroups
(71)
$\begin{split}K_{1}&:=\langle\mathsf{r}_{11},\,\mathsf{t}_{11},\ldots,\mathsf{r}_{1b},\,\mathsf{t}_{1b},\;\mathsf{z}\rangle\\\
K_{2}&:=\langle\mathsf{r}_{21},\,\mathsf{t}_{21},\ldots,\mathsf{r}_{2b},\,\mathsf{t}_{2b},\;\mathsf{z}\rangle\end{split}$
are both normal in $G$, and that there are two short exact sequences
(72) $\begin{split}&1\longrightarrow K_{1}\longrightarrow G\longrightarrow
Q_{2}\longrightarrow 1\\\ &1\longrightarrow K_{2}\longrightarrow
G\longrightarrow Q_{1}\longrightarrow 1,\end{split}$
such that the elements
$\mathsf{r}_{21},\,\mathsf{t}_{21},\ldots,\mathsf{r}_{2b},\,\mathsf{t}_{2b}$
yield a complete system of coset representatives for $Q_{2}$, whereas the
elements
$\mathsf{r}_{11},\,\mathsf{t}_{11},\ldots,\mathsf{r}_{1b},\,\mathsf{t}_{1b}$
yield a complete system of coset representatives for $Q_{1}$.
Let us now give a couple of definitions, whose geometrical meaning will become
clear in Section 4, see in particular Proposition 4.3 and Remark 4.4.
###### Definition 2.7.
Let $\mathfrak{S}$ be a diagonal double Kodaira structure of type $(b,\,n)$ on
a finite group $G$. Its _signature_ is defined as
(73)
$\sigma(\mathfrak{S})=\frac{1}{3}\,|G|\,(2b-2)\left(1-\frac{1}{n^{2}}\right).$
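For instance, in the case $|G|=32$, $b=2$ and $n=2$, which is the one relevant for Theorem 3.15 below, formula (73) gives
$\sigma(\mathfrak{S})=\frac{1}{3}\cdot 32\cdot(2\cdot 2-2)\cdot\left(1-\frac{1}{2^{2}}\right)=\frac{1}{3}\cdot 32\cdot 2\cdot\frac{3}{4}=16.$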
###### Definition 2.8.
A diagonal double Kodaira structure on $G$ is called _strong_ if
$K_{1}=K_{2}=G$.
For later use, let us write down the special case consisting of a diagonal
double Kodaira structure of type $(2,\,n)$. It is an ordered set of nine
generators of $G$
(74)
$(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\,\mathsf{z}),$
with $o(\mathsf{z})=n$, subject to the following relations.
(75) $\displaystyle\mathbf{(S1)}$
$\displaystyle\,\,[\mathsf{r}_{12}^{-1},\,\mathsf{t}_{12}^{-1}]\,\mathsf{t}_{12}^{-1}\,[\mathsf{r}_{11}^{-1},\,\mathsf{t}_{11}^{-1}]\,\mathsf{t}_{11}^{-1}\,(\mathsf{t}_{11}\,\mathsf{t}_{12})=\mathsf{z}$
$\displaystyle\mathbf{(S2)}$
$\displaystyle\,\,[\mathsf{r}_{21}^{-1},\,\mathsf{t}_{21}]\;\mathsf{t}_{21}\;[\mathsf{r}_{22}^{-1},\,\mathsf{t}_{22}]\,\mathsf{t}_{22}\,(\mathsf{t}_{22}^{-1}\,\mathsf{t}_{21}^{-1})=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R1)}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(R6)}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(R2)}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{r}_{21}]=1$
$\displaystyle\mathbf{(R7)}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{r}_{21}]=\mathsf{z}^{-1}\,\mathsf{r}_{21}\,\mathsf{r}_{22}^{-1}\,\mathsf{z}\,\mathsf{r}_{22}\,\mathsf{r}_{21}^{-1}$
$\displaystyle\mathbf{(R3)}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{t}_{22}]=1$
$\displaystyle\mathbf{(R8)}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{t}_{22}]=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R4)}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{t}_{21}]=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R9)}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{t}_{21}]=[\mathsf{z}^{-1},\,\mathsf{t}_{21}]$
$\displaystyle\mathbf{(R5)}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{z}]=[\mathsf{r}_{21}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(R10)}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{z}]=[\mathsf{r}_{22}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(T1)}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(T6)}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{r}_{22}]=\mathsf{t}_{22}^{-1}\,\mathsf{z}\,\mathsf{t}_{22}$
$\displaystyle\mathbf{(T2)}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{r}_{21}]=\mathsf{t}_{21}^{-1}\,\mathsf{z}\,\mathsf{t}_{21}$
$\displaystyle\mathbf{(T7)}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{r}_{21}]=[\mathsf{t}_{22}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(T3)}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{t}_{22}]=1$
$\displaystyle\mathbf{(T8)}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{t}_{22}]=[\mathsf{t}_{22}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(T4)}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{t}_{21}]=[\mathsf{t}_{21}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(T9)}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{t}_{21}]=\mathsf{t}_{22}^{-1}\,\mathsf{z}\,\mathsf{t}_{22}\,\mathsf{z}^{-1}\,\mathsf{t}_{21}\,\mathsf{z}\,\mathsf{t}_{22}^{-1}\,\mathsf{z}^{-1}\,\mathsf{t}_{22}\,\mathsf{t}_{21}^{-1}$
$\displaystyle\mathbf{(T5)}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{z}]=[\mathsf{t}_{21}^{-1},\,\mathsf{z}]$
$\displaystyle\mathbf{(T10)}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{z}]=[\mathsf{t}_{22}^{-1},\,\mathsf{z}]$
###### Remark 2.9.
When $[G,\,G]\subseteq Z(G)$, we have
(76)
$\begin{split}&[\mathsf{r}_{11},\,\mathsf{z}]=[\mathsf{t}_{11},\,\mathsf{z}]=[\mathsf{r}_{12},\,\mathsf{z}]=[\mathsf{t}_{12},\,\mathsf{z}]=1\\\
&[\mathsf{r}_{21},\,\mathsf{z}]=[\mathsf{t}_{21},\,\mathsf{z}]=[\mathsf{r}_{22},\,\mathsf{z}]=[\mathsf{t}_{22},\,\mathsf{z}]=1\end{split}$
and the previous relations become
(77) $\displaystyle\mathbf{(S1^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{12}^{-1},\,\mathsf{t}_{12}^{-1}]\,[\mathsf{r}_{11}^{-1},\,\mathsf{t}_{11}^{-1}]=\mathsf{z}$
$\displaystyle\mathbf{(S2^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{21}^{-1},\,\mathsf{t}_{21}]\;[\mathsf{r}_{22}^{-1},\,\mathsf{t}_{22}]=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R1^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(R6^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(R2^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{r}_{21}]=1$
$\displaystyle\mathbf{(R7^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{r}_{21}]=1$
$\displaystyle\mathbf{(R3^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{t}_{22}]=1$
$\displaystyle\mathbf{(R8^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{t}_{22}]=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R4^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{11},\,\mathsf{t}_{21}]=\mathsf{z}^{-1}$
$\displaystyle\mathbf{(R9^{\prime})}$
$\displaystyle\,\,[\mathsf{r}_{12},\,\mathsf{t}_{21}]=1$
$\displaystyle\mathbf{(T1^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{r}_{22}]=1$
$\displaystyle\mathbf{(T6^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{r}_{22}]=\mathsf{z}$
$\displaystyle\mathbf{(T2^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{r}_{21}]=\mathsf{z}$
$\displaystyle\mathbf{(T7^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{r}_{21}]=1$
$\displaystyle\mathbf{(T3^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{t}_{22}]=1$
$\displaystyle\mathbf{(T8^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{t}_{22}]=1$
$\displaystyle\mathbf{(T4^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{11},\,\mathsf{t}_{21}]=1$
$\displaystyle\mathbf{(T9^{\prime})}$
$\displaystyle\,\,[\mathsf{t}_{12},\,\mathsf{t}_{21}]=1$
## 3\. Structures on groups of order at most $32$
### 3.1. Prestructures
###### Definition 3.1.
Let $G$ be a finite group. A _prestructure_ on $G$ is an ordered set of nine
elements
(78)
$(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\,\mathsf{z}),$
with $o(\mathsf{z})=n\geq 2$, subject to the relations
$(\mathrm{R1}),\ldots,(\mathrm{R10})$, $(\mathrm{T1}),\ldots,(\mathrm{T10})$
in (75).
In other words, the nine elements must satisfy all the relations defining a
diagonal double Kodaira structure of type $(2,\,n)$, except the surface
relations. In particular, no abelian group admits prestructures. Note that we
are _not_ requiring that the elements of the prestructure generate $G$.
###### Proposition 3.2.
If a finite group $G$ admits a diagonal double Kodaira structure of type
$(b,\,n)$, then it admits a prestructure with $o(\mathsf{z})=n$.
###### Proof.
Consider the ordered set of nine elements
$(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\,\mathsf{z})$
in Definition 2.1 and the relations satisfied by them, with the exception of
the surface relations. ∎
###### Remark 3.3.
Let $G$ be a finite group that admits a prestructure. Then $\mathsf{z}$ and
all its conjugates are non-trivial elements of $G$ and so, from relations
$\mathrm{(R4)}$, $\mathrm{(R8)}$, $\mathrm{(T2)}$, $\mathrm{(T6)}$, it follows
that $\mathsf{r}_{11},\,\mathsf{r}_{12},\,\mathsf{r}_{21},\,\mathsf{r}_{22}$
and $\mathsf{t}_{11},\,\mathsf{t}_{12},\,\mathsf{t}_{21},\,\mathsf{t}_{22}$ are
non-central elements of $G$.
###### Proposition 3.4.
If $G$ is a _CCT_ -group, then $G$ admits no prestructures and, consequently,
no diagonal double Kodaira structures.
###### Proof.
The second statement is a direct consequence of the first one, by Proposition
3.2; hence it suffices to check that $G$ admits no prestructures.
Otherwise, keeping in mind Remark 3.3, we see that $\mathrm{(R6)}$ and
$\mathrm{(T1)}$ imply $[\mathsf{r}_{12},\,\mathsf{t}_{11}]=1$. From this and
$\mathrm{(T3)}$ we get $[\mathsf{r}_{12},\,\mathsf{t}_{22}]=1$, which
contradicts $\mathrm{(R8)}$. ∎
Given a finite group $G$, we define the _socle_ of $G$, denoted by
$\mathrm{soc}(G)$, as the intersection of all non-trivial, normal subgroups of
$G$. For instance, $G$ is simple if and only if $\mathrm{soc}(G)=G$.
###### Definition 3.5.
A finite group $G$ is called _monolithic_ if $\mathrm{soc}(G)\neq\\{1\\}$.
Equivalently, $G$ is monolithic if it contains precisely one minimal non-
trivial, normal subgroup.
###### Example 3.6.
If $G$ is an extra-special $p$-group, then $G$ is monolithic and
$\operatorname{soc}(G)=Z(G)$. Indeed, since $Z(G)\simeq\mathbb{Z}_{p}$ is
normal in $G$, by definition of socle we always have
$\operatorname{soc}(G)\subseteq Z(G)$. On the other hand, every non-trivial,
normal subgroup of an extra-special group contains the center (see [Rob96,
Exercise 9 p. 146]), hence $Z(G)\subseteq\operatorname{soc}(G)$.
###### Proposition 3.7.
The following holds.
* $\boldsymbol{(1)}$
Assume that $G$ admits a prestructure, whereas no proper quotient of $G$ does.
Then $G$ is monolithic and $\mathsf{z}\in\mathrm{soc}(G)$.
* $\boldsymbol{(2)}$
Assume that $G$ admits a prestructure, whereas no proper subgroup of $G$ does.
Then the elements of the prestructure generate $G$.
###### Proof.
$\boldsymbol{(1)}$ Let
$\mathfrak{S}=(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\,\mathsf{z})$
be a prestructure in $G$. Assume that there is a non-trivial normal subgroup
$N$ of $G$ such that $\mathsf{z}\notin N$. Then $\mathsf{\bar{z}}\in G/N$ is
non-trivial, and so
$\bar{\mathfrak{S}}=(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22},\,\mathsf{\bar{z}})$
is a prestructure in the quotient group $G/N$, contradiction. Therefore we
must have $\mathsf{z}\in\mathrm{soc}(G)$, in particular, $G$ is monolithic.
$\boldsymbol{(2)}$ Clear, because every prestructure $\mathfrak{S}$ in $G$ is
also a prestructure in the subgroup $\langle\mathfrak{S}\rangle$. ∎
###### Corollary 3.8.
Given a prestructure on an extra-special $p$-group $G$, the element
$\mathsf{z}$ is a generator of $Z(G)\simeq\mathbb{Z}_{p}$.
###### Proof.
If $G$ is extra-special, every proper quotient of $G$ is abelian, hence it
admits no prestructures. The result now follows from Example 3.6 and
Proposition 3.7 (1). ∎
Note that, by Corollary 3.8, in the case of extra-special $p$-groups the
choice of calling $\mathsf{z}$ the distinguished element of the prestructure is consistent
with presentations (12) and (13). The case of diagonal double Kodaira
structures on extra-special groups of order $32$ will be studied in Subsection
3.4.
### 3.2. The case $|G|<32$
###### Proposition 3.9.
If $|G|<32$, then $G$ admits no diagonal double Kodaira structures.
###### Proof.
By Corollary 1.6, Proposition 1.7 and Proposition 3.4, it remains only to
check that the symmetric group $\mathsf{S}_{4}$ admits no prestructures. We
start by observing that
$\mathrm{soc}(\mathsf{S}_{4})=\mathsf{V}_{4}=\langle(1\,2)(3\,4),\,(1\,3)(2\,4)\rangle$
and so, by part $\mathrm{(1)}$ of Proposition 3.7, if $\mathfrak{S}$ is a
prestructure on $\mathsf{S}_{4}$ then $\mathsf{z}\in\mathsf{V}_{4}$. Let
$\mathsf{x},\,\mathsf{y}\in\mathsf{S}_{4}$ be such that
$[\mathsf{x},\,\mathsf{y}]=\mathsf{z}$. Examining the tables of subgroups of
$\mathsf{S}_{4}$ given in [S4], by straightforward computations and keeping in
mind that the cycle type determines the conjugacy class, we deduce that either
$\mathsf{x},\,\mathsf{y}\in
C_{\mathsf{S}_{4}}(\mathsf{z})\simeq\mathsf{D}_{8}$ or
$\mathsf{x},\,\mathsf{y}\in\mathsf{A}_{4}$. Every such pair in
$\mathsf{A}_{4}$ contains at least one $3$-cycle, because the elements of even
order of $\mathsf{A}_{4}$ all lie in the abelian subgroup $\mathsf{V}_{4}$; so,
if $[\mathsf{x},\,\mathsf{y}]=\mathsf{z}$ and both $\mathsf{x}$ and
$\mathsf{y}$ have even order, then $\mathsf{x}$ and $\mathsf{y}$ centralize
$\mathsf{z}$.
If $\mathsf{x}\in\mathsf{S}_{4}$ is a $3$-cycle, then
$C_{\mathsf{S}_{4}}(\mathsf{x})=\langle\mathsf{x}\rangle\simeq\mathbb{Z}_{3}$.
So, from relations $(\mathrm{R1})$, $(\mathrm{R2})$, $(\mathrm{R3})$,
$(\mathrm{R6})$, it follows that, if one of the elements
$\mathsf{r}_{11},\,\mathsf{r}_{12},\,\mathsf{r}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}$
is a $3$-cycle, then all these elements generate the same cyclic subgroup.
This contradicts $(\mathrm{R8})$, hence
$\mathsf{r}_{11},\,\mathsf{r}_{12},\,\mathsf{r}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}$
all have even order.
Let us look now at relation $(\mathrm{R8})$. Since
$\mathsf{r}_{12},\,\mathsf{t}_{22}$ have even order, from the previous remark
we infer $\mathsf{r}_{12},\,\mathsf{t}_{22}\in
C_{\mathsf{S}_{4}}(\mathsf{z})$. Let us consider $\mathsf{r}_{11}$. If
$\mathsf{r}_{11}$ belongs to $\mathsf{A}_{4}$, being an element of even order
it must lie in $\mathsf{V}_{4}$, which is abelian, and so it commutes with $\mathsf{z}$;
otherwise, by $(\mathrm{R4})$, both $\mathsf{r}_{11}$ and $\mathsf{t}_{21}$
commute with $\mathsf{z}$. Summing up, in any case we have $\mathsf{r}_{11}\in
C_{\mathsf{S}_{4}}(\mathsf{z})$.
Relation $(\mathrm{R5})$ can be rewritten as
$\mathsf{r}_{11}\mathsf{r}_{21}\in C_{\mathsf{S}_{4}}(\mathsf{z})$, hence
$\mathsf{r}_{21}\in C_{\mathsf{S}_{4}}(\mathsf{z})$. Analogously, relation
$(\mathrm{R10})$ can be rewritten as $\mathsf{r}_{12}\mathsf{r}_{22}\in
C_{\mathsf{S}_{4}}(\mathsf{z})$, hence $\mathsf{r}_{22}\in
C_{\mathsf{S}_{4}}(\mathsf{z})$.
Using relation $\mathrm{(R9)}$, we get $\mathsf{r}_{12}\mathsf{z}\in
C_{\mathsf{S}_{4}}(\mathsf{t}_{21})$. Since $\mathsf{r}_{12}$ and $\mathsf{z}$
commute and their orders are powers of $2$, it follows that
$o(\mathsf{r}_{12}\mathsf{z})$ is also a power of $2$. Therefore
$\mathsf{t}_{21}$ cannot be a $3$-cycle, otherwise
$C_{\mathsf{S}_{4}}(\mathsf{t}_{21})\simeq\mathbb{Z}_{3}$ and so
$\mathsf{r}_{12}\mathsf{z}=1$, which, in turn, would imply
$[\mathsf{r}_{12},\,\mathsf{t}_{22}]=1$, contradicting $(\mathrm{R8})$. It
follows that $\mathsf{t}_{21}$ has even order and so, since $\mathsf{r}_{11}$
has even order as well, by $\mathrm{(R4)}$ we infer $\mathsf{t}_{21}\in
C_{\mathsf{S}_{4}}(z)$.
Now we can rewrite $(\mathrm{T2})$ as
$[\mathsf{t}_{11},\,\mathsf{r}_{21}]=\mathsf{z}$. If $\mathsf{t}_{11}$ were a
$3$-cycle, from $(\mathrm{T1})$ we would get $\mathsf{r}_{22}\in
C_{\mathsf{S}_{4}}(\mathsf{t}_{11})\simeq\mathbb{Z}_{3}$, a contradiction
since $\mathsf{r}_{22}$ has even order. Thus $\mathsf{t}_{11}$ has even order
and so it belongs to $C_{\mathsf{S}_{4}}(z)$, because $\mathsf{r}_{21}$ has
even order, too. Analogously, by using $(\mathrm{T6})$ and $(\mathrm{T7})$, we
infer $\mathsf{t}_{12}\in C_{\mathsf{S}_{4}}(z)$.
Summarizing, if $\mathfrak{S}$ were a prestructure on $\mathsf{S}_{4}$ we
should have
(79) $\langle\mathfrak{S}\rangle=C_{\mathsf{S}_{4}}(z)\simeq\mathsf{D}_{8},$
contradicting part $(\mathrm{2})$ of Proposition 3.7. ∎
### 3.3. The case $|G|=32$ and $G$ non-extra-special
We start by proving the following partial strengthening of Proposition 3.4.
###### Proposition 3.10.
Let $G$ be a non-abelian finite group, and let $H$ be the subgroup of $G$
generated by those elements whose centralizer is non-abelian. If $H$ is
abelian and $[H:Z(G)]\leq 4$, then $G$ admits no prestructures with
$\mathsf{z}\in Z(G)$.
###### Proof.
First of all, remark that $Z(G)$ is a (normal) subgroup of $H$ because $G$ is
non-abelian. Assume now, by contradiction, that the elements
$(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\,\mathsf{z})$
form a prestructure on $G$, with $\mathsf{z}\in Z(G)$. Then these elements
satisfy relations $(\mathrm{R1}^{\prime}),\ldots,(\mathrm{R9}^{\prime})$,
$(\mathrm{T1}^{\prime}),\ldots,(\mathrm{T9}^{\prime})$ in (77). As $H$ is
abelian, $(\mathrm{R4}^{\prime})$ implies that at least one between
$\mathsf{r}_{11},\,\mathsf{t}_{21}$ does not belong to $H$.
Let us assume $\mathsf{r}_{11}\notin H$. Thus $C_{G}(\mathsf{r}_{11})$ is
abelian, and so $(\mathrm{R2}^{\prime})$ and $(\mathrm{R3}^{\prime})$ yield
$[\mathsf{r}_{21},\,\mathsf{t}_{22}]=1$. From this, using
$(\mathrm{T2}^{\prime})$ and $(\mathrm{T3}^{\prime})$, we infer that
$C_{G}(\mathsf{t}_{22})$ is non-abelian. Similar considerations show that
$C_{G}(\mathsf{r}_{21})$ and $C_{G}(\mathsf{r}_{22})$ are non-abelian, and so
we have $\mathsf{r}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\in H$. Using
$(\mathrm{T2}^{\prime})$, $(\mathrm{T6}^{\prime})$, $(\mathrm{R8}^{\prime})$,
together with the fact that $H$ is abelian, we deduce
$\mathsf{t}_{11},\,\mathsf{t}_{12},\,\mathsf{r}_{12}\notin H$. In particular,
$C_{G}(\mathsf{r}_{12})$ is abelian, so $(\mathrm{R7}^{\prime})$ and
$(\mathrm{R9}^{\prime})$ yield $[\mathsf{r}_{21},\,\mathsf{t}_{21}]=1$;
therefore $(\mathrm{T2}^{\prime})$ and $(\mathrm{T4}^{\prime})$ imply that
$C_{G}(\mathsf{t}_{21})$ is non-abelian, and so $\mathsf{t}_{21}\in H$.
Summing up, we have proved that the four elements
$\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}$ belong
to $H$; since they are all non-central, we infer that they yield four non-
trivial elements in the quotient group $H/Z(G)$. On the other hand, we have
$[H:Z(G)]\leq 4$, and so $H/Z(G)$ contains at most three non-trivial elements;
it follows that (at least) two among the elements
$\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}$ have
the same image in $H/Z(G)$. This means that these two elements are of the form
$g,\,gz$, with $z\in Z(G)$, and so they have the same centralizer. But this is
impossible: in fact, relations (77) show that each element in the set
$\\{\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\\}$
fails to commute with exactly one element in the set
$\\{\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12}\\}$,
and no two elements in
$\\{\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\\}$
fail to commute with the same element in
$\\{\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12}\\}$.
The remaining case, namely $\mathsf{t}_{21}\notin H$, can be dealt with in an
analogous way. Indeed, in this situation we obtain
$\\{\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12}\\}\subseteq
H$, that leads to a contradiction as before. ∎
We can now rule out the non-extra-special groups of order $32$.
###### Proposition 3.11.
Let $G$ be a finite group of order $32$ which is not extra-special. Then $G$
admits no diagonal double Kodaira structures.
###### Proof.
If $G$ is a CCT-group, then the result follows from Proposition 3.4. Thus, by
Proposition 1.14, we must only consider the cases $G=G(32,\,t)$, where
$t\in\\{6,\,7,\,8,\,43,\,44\\}$. Standard computations using the presentations
in Table 2 of Appendix A show that all these groups are monolithic, and that
for all of them $\mathrm{soc}(G)=Z(G)\simeq\mathbb{Z}_{2}$, cf. the proof of
Proposition 1.14. Since no proper quotients of $G$ admit diagonal double
Kodaira structures (Proposition 3.9), it follows from Proposition 3.7 that
every diagonal double Kodaira structure on $G$ is such that $\mathsf{z}$ is
the generator of $Z(G)$. Let $H$ be the subgroup of $G$ generated by those
elements whose centralizer is non-abelian; then, by Proposition 3.10, we are
done, provided that in every case $H$ is abelian and $[H\,:\,Z(G)]\leq 4$. Let
us now show that this is indeed true, leaving the straightforward computations
to the reader.
* •
$G=G(32,\,6).$ In this case $\mathrm{soc}(G)=Z(G)=\langle x\rangle$ and
$H=\langle x,\,y,\,w^{2}\rangle$. Then $H\simeq(\mathbb{Z}_{2})^{3}$ and
$[H:Z(G)]=4$.
* •
$G=G(32,\,7).$ In this case $\mathrm{soc}(G)=Z(G)=\langle w\rangle$ and
$H=\langle z,\,u,\,w\rangle$. Then $H\simeq\mathbb{Z}_{4}\times\mathbb{Z}_{2}$
and $[H:Z(G)]=4$.
* •
$G=G(32,\,8).$ In this case $\mathrm{soc}(G)=Z(G)=\langle x^{4}\rangle$ and
$H=\langle x^{2},\,y,\,z^{2}\rangle$. Then
$H\simeq\mathbb{Z}_{4}\times\mathbb{Z}_{2}$ and $[H:Z(G)]=4$.
* •
$G=G(32,\,43).$ In this case $\mathrm{soc}(G)=Z(G)=\langle x^{4}\rangle$ and
$H=\langle x^{2},\,z\rangle$. Then $H\simeq\mathbb{Z}_{4}\times\mathbb{Z}_{2}$
and $[H:Z(G)]=4$.
* •
$G=G(32,\,44).$ In this case $\mathrm{soc}(G)=Z(G)=\langle i^{2}\rangle$ and
$H=\langle x,\,k\rangle$. Then $H\simeq\mathbb{Z}_{4}\times\mathbb{Z}_{2}$ and
$[H:Z(G)]=4$.
This completes the proof. ∎
### 3.4. The case $|G|=32$ and $G$ extra-special
We are now ready to address the case where $|G|=32$ and $G$ is extra-special.
Let us first recall some results on extra-special $p$-groups, referring the
reader to [Win72] for more details.
Let $G$ be an extra-special $p$-group of order $p^{2b+1}$ and
$\mathsf{x},\,\mathsf{y}\in G$. Setting
$(\bar{\mathsf{x}},\,\bar{\mathsf{y}})=\bar{a}$ where
$[\mathsf{x},\,\mathsf{y}]=\mathsf{z}^{a}$, the quotient group
$V=G/Z(G)\simeq(\mathbb{Z}_{p})^{2b}$ becomes a non-degenerate symplectic
vector space over $\mathbb{Z}_{p}$. Looking at (12) and (13), we see that in both cases
$G=\mathsf{H}_{2b+1}(\mathbb{Z}_{p})$ and
$G=\mathsf{G}_{2b+1}(\mathbb{Z}_{p})$ we have
(80)
$(\bar{\mathsf{r}}_{j},\,\bar{\mathsf{r}}_{k})=0,\quad(\bar{\mathsf{t}}_{j},\,\bar{\mathsf{t}}_{k})=0,\quad(\bar{\mathsf{r}}_{j},\,\bar{\mathsf{t}}_{k})=-\delta_{jk}$
for all $j,\,k\in\\{1,\ldots,b\\}$, so that
(81)
$\bar{\mathsf{r}}_{1},\,\bar{\mathsf{t}}_{1},\ldots,\bar{\mathsf{r}}_{b},\,\bar{\mathsf{t}}_{b}$
is an ordered symplectic basis for $V\simeq(\mathbb{Z}_{p})^{2b}$. If $p=2$,
we can also set $q(\bar{\mathsf{x}})=\bar{c}$, where
$\mathsf{x}^{2}=\mathsf{z}^{c}$ and $c\in\\{0,\,1\\}$; this is a quadratic
form on $V$. If $\bar{\mathsf{x}}\in G/Z(G)$ is expressed in coordinates, with
respect to the symplectic basis (81), by the vector
$(\xi_{1},\,\psi_{1},\ldots,\xi_{b},\,\psi_{b})\in(\mathbb{Z}_{2})^{2b}$, then
a straightforward computation yields
(82)
$q(\bar{\mathsf{x}})=\begin{cases}\xi_{1}\psi_{1}+\cdots+\xi_{b}\psi_{b},&\textrm{if
}G=\mathsf{H}_{2b+1}(\mathbb{Z}_{2})\\\
\xi_{1}\psi_{1}+\cdots+\xi_{b}\psi_{b}+\xi_{b}^{2}+\psi_{b}^{2}&\textrm{if
}G=\mathsf{G}_{2b+1}(\mathbb{Z}_{2}).\end{cases}$
These are the two possible normal forms for a non-degenerate quadratic form
over $\mathbb{Z}_{2}$. Moreover, in both cases the symplectic and the
quadratic form are related by
(83)
$q(\bar{\mathsf{x}}\bar{\mathsf{y}})=q(\bar{\mathsf{x}})+q(\bar{\mathsf{y}})+(\bar{\mathsf{x}},\,\bar{\mathsf{y}})\quad\textrm{for
all }\bar{\mathsf{x}},\,\bar{\mathsf{y}}\in V.$
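As an informal cross-check of the two normal forms in (82), and of the criterion of Remark 1.12, the following Python lines (ours; the function names q_H, q_G and symp do not appear in the text) count, for $b=2$, the classes on which each form takes the value $1$. Since an element $\mathsf{x}$ has order $4$ exactly when $q(\bar{\mathsf{x}})=1$, and every class has two lifts in $G$, this recovers $12$ elements of order $4$ in $\mathsf{H}_{5}(\mathbb{Z}_{2})$ and $20$ in $\mathsf{G}_{5}(\mathbb{Z}_{2})$; the final assertions also test relation (83) on all pairs of vectors.

from itertools import product

b = 2
V = list(product(range(2), repeat=2 * b))   # coordinates (xi_1, psi_1, ..., xi_b, psi_b) as in (81)

def q_H(v):   # hyperbolic form of (82)
    return sum(v[2 * i] * v[2 * i + 1] for i in range(b)) % 2

def q_G(v):   # elliptic form of (82)
    return (q_H(v) + v[2 * b - 2] ** 2 + v[2 * b - 1] ** 2) % 2

print(2 * sum(q_H(v) for v in V))   # 12 elements of order 4 in H_5(Z_2)
print(2 * sum(q_G(v) for v in V))   # 20 elements of order 4 in G_5(Z_2)

def symp(u, v):   # symplectic form in the basis (81)
    return sum(u[2 * i] * v[2 * i + 1] + u[2 * i + 1] * v[2 * i] for i in range(b)) % 2

def add(u, v):
    return tuple((x + y) % 2 for x, y in zip(u, v))

# relation (83): q(xy) = q(x) + q(y) + (x, y) for both quadratic forms
assert all(q_H(add(u, v)) == (q_H(u) + q_H(v) + symp(u, v)) % 2 for u in V for v in V)
assert all(q_G(add(u, v)) == (q_G(u) + q_G(v) + symp(u, v)) % 2 for u in V for v in V)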
If $\phi\in\mathrm{Aut}(G)$, then $\phi$ induces a linear map
$\bar{\phi}\in\mathrm{End}(V)$; moreover, if $p=2$, then $\phi$ acts trivially
on $Z(G)=[G,\,G]\simeq\mathbb{Z}_{2}$, and this in turn implies that $\phi$
preserves the symplectic form on $V$. In other words, if we identify $V$ with
$(\mathbb{Z}_{2})^{2b}$ via the symplectic basis (81), we have
$\bar{\phi}\in\mathsf{Sp}(2b,\,\mathbb{Z}_{2})$.
We are now in a position to describe the structure of $\mathrm{Aut}(G)$, see
[Win72, Theorem 1].
###### Proposition 3.12.
Let $G$ be an extra-special group of order $2^{2b+1}$. Then the kernel of the
group homomorphism
$\mathrm{Aut}(G)\longrightarrow\mathsf{Sp}(2b,\,\mathbb{Z}_{2})$ given by
$\phi\mapsto\bar{\phi}$ is the subgroup $\mathrm{Inn}(G)$ of inner
automorphisms of $G$. Therefore
$\mathrm{Out}(G)=\mathrm{Aut}(G)/\mathrm{Inn}(G)$ embeds in
$\mathsf{Sp}(2b,\,\mathbb{Z}_{2})$. More precisely, $\mathrm{Out}(G)$
coincides with the orthogonal group
$\mathsf{O}_{\epsilon}(2b,\,\mathbb{Z}_{2})$, of order
(84)
$|\mathsf{O}_{\epsilon}(2b,\,\mathbb{Z}_{2})|=2^{b(b-1)+1}(2^{b}-\epsilon)\prod_{i=1}^{b-1}(2^{2i}-1),$
associated with the quadratic form (82). Here $\epsilon=1$ if $G=\mathsf{H}_{2b+1}(\mathbb{Z}_{2})$ and
$\epsilon=-1$ if $G=\mathsf{G}_{2b+1}(\mathbb{Z}_{2})$.
###### Corollary 3.13.
Let $G$ be an extra-special group of order $2^{2b+1}$. We have
(85)
$|\mathrm{Aut}(G)|=2^{b(b+1)+1}(2^{b}-\epsilon)\prod_{i=1}^{b-1}(2^{2i}-1).$
###### Proof.
By Proposition 3.12 we get
$|\mathrm{Aut}(G)|=|\mathrm{Inn}(G)|\cdot|\mathsf{O}_{\epsilon}(2b,\,\mathbb{Z}_{2})|$.
Since $\mathrm{Inn}(G)\simeq G/Z(G)$ has order $2^{2b}$, the claim follows
from (84). ∎
In particular, plugging $b=2$ in (85), we can compute the orders of
automorphism groups of extra-special groups of order $32$, namely
(86)
$|\mathrm{Aut}(\mathsf{H}_{5}(\mathbb{Z}_{2}))|=1152,\quad|\mathrm{Aut}(\mathsf{G}_{5}(\mathbb{Z}_{2}))|=1920.$
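The values in (86) follow from (84) and (85) by direct substitution; the short Python check below (a throwaway verification, with function names of our own choosing) reproduces them.

from math import prod

def order_O(b, eps):    # |O_eps(2b, Z_2)|, formula (84)
    return 2 ** (b * (b - 1) + 1) * (2 ** b - eps) * prod(2 ** (2 * i) - 1 for i in range(1, b))

def order_Aut(b, eps):  # |Aut(G)| = |Inn(G)| * |O_eps(2b, Z_2)| = 2^{2b} * |O_eps|, formula (85)
    return 2 ** (2 * b) * order_O(b, eps)

print(order_Aut(2, +1))   # 1152 = |Aut(H_5(Z_2))|
print(order_Aut(2, -1))   # 1920 = |Aut(G_5(Z_2))|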
Assume now that
$\mathfrak{S}=(\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12},\,\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22},\mathsf{z})$
is a diagonal double Kodaira structure of type $(2,\,n)$ on an extra-special
group $G$ of order $32$; by Corollary 3.8, the element $\mathsf{z}$ is the
generator of $Z(G)\simeq\mathbb{Z}_{2}$, hence $n=2$. Then
(87)
$\bar{\mathfrak{S}}=(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})$
is an ordered set of generators for the symplectic $\mathbb{Z}_{2}$-vector
space $V=G/Z(G)\simeq(\mathbb{Z}_{2})^{4}$, and (77) yields the relations
(88)
$\begin{split}&(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})+(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11})=1,\\\
&(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})+(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})=1,\\\
&(\mathsf{\bar{r}}_{1j},\,\mathsf{\bar{t}}_{2k})=\delta_{jk},\quad(\mathsf{\bar{r}}_{1j},\,\mathsf{\bar{r}}_{2k})=0\\\
&(\mathsf{\bar{t}}_{1j},\,\mathsf{\bar{r}}_{2k})=\delta_{jk},\quad(\mathsf{\bar{t}}_{1j},\,\mathsf{\bar{t}}_{2k})=0.\end{split}$
Conversely, given any set of generators $\bar{\mathfrak{S}}$ of $V$ as in
(87), whose elements satisfy (88), a diagonal double Kodaira structure of type
$(b,\,n)=(2,\,2)$ on $G$ inducing $\bar{\mathfrak{S}}$ is necessarily of the
form
(89)
$\mathfrak{S}=(\mathsf{r}_{11}\mathsf{z}^{a_{11}},\,\mathsf{t}_{11}\mathsf{z}^{b_{11}},\,\mathsf{r}_{12}\mathsf{z}^{a_{12}},\,\mathsf{t}_{12}\mathsf{z}^{b_{12}},\,\mathsf{r}_{21}\mathsf{z}^{a_{21}},\,\mathsf{t}_{21}\mathsf{z}^{b_{21}},\,\mathsf{r}_{22}\mathsf{z}^{a_{22}},\,\mathsf{t}_{22}\mathsf{z}^{b_{22}},\,\mathsf{z}),$
where $a_{ij},\,b_{ij}\in\\{0,\,1\\}$. This proves the following
###### Lemma 3.14.
The total number of diagonal double Kodaira structures of type
$(b,\,n)=(2,\,2)$ on an extra-special group $G$ of order $32$ is obtained by
multiplying by $2^{8}$ the number of ordered sets of generators
$\bar{\mathfrak{S}}$ of $V$ as in (87), whose elements
satisfy (88). In particular, this number does not depend on $G$.
We are now ready to state the main result of this section.
###### Theorem 3.15.
A finite group $G$ of order $32$ admits a diagonal double Kodaira structure if
and only if $G$ is extra-special. In this case, the following holds.
* $\boldsymbol{(1)}$
Both extra-special groups of order $32$ admit $2211840=1152\cdot 1920$
distinct diagonal double Kodaira structures of type $(b,\,n)=(2,\,2)$. Every
such structure $\mathfrak{S}$ is strong and satisfies
$\sigma(\mathfrak{S})=16$.
* $\boldsymbol{(2)}$
If $G=G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$, these structures form $1920$
orbits under the action of $\mathrm{Aut}(G)$.
* $\boldsymbol{(3)}$
If $G=G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$, these structures form $1152$
orbits under the action of $\mathrm{Aut}(G)$.
###### Proof.
We already know that non-extra-special groups of order $32$ admit no diagonal
double Kodaira structures (Proposition 3.11) and so, in the sequel, we can
assume that $G$ is extra-special.
Looking at the first two relations in (88), we see that we must consider four
cases:
* $\boldsymbol{(a)}$
$\;(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})=0,\quad(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11})=1,\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})=0,\quad(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})=1,$
* $\boldsymbol{(b)}$
$\;(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})=1,\quad(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11})=0,\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})=1,\quad(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})=0,$
* $\boldsymbol{(c)}$
$\;(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})=0,\quad(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11})=1,\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})=1,\quad(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})=0,$
* $\boldsymbol{(d)}$
$\;(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})=1,\quad(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11})=0,\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})=0,\quad(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22})=1.$
Case $\boldsymbol{(a)}$. In this case the vectors
$\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22}$
are a symplectic basis of $V$, whereas the subspace
$W=\langle\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21}\rangle$
is isotropic, namely the symplectic form is identically zero on it. Since $V$
is a symplectic vector space of dimension $4$, the Witt index of $V$, i.e. the
dimension of a maximal isotropic subspace of $V$, is $\frac{1}{2}\dim(V)=2$,
see [Ar62, Théorèmes 3.10, 3.11]. On the other hand, we have
$(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{22})=1$ and
$(\mathsf{\bar{t}}_{12},\,\mathsf{\bar{t}}_{22})=0$, whereas
$(\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{22})=1$ by (88); hence
$\mathsf{\bar{r}}_{12}$ and $\mathsf{\bar{t}}_{12}$ are non-zero and distinct,
thus linearly independent, and so they must generate a maximal isotropic
subspace; it follows that
$W=\langle\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12}\rangle$. Let us set
now
(90)
$\begin{split}(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{r}}_{12})&=a,\quad(\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{12})=b,\quad(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{11})=c,\quad(\mathsf{\bar{t}}_{11},\,\mathsf{\bar{t}}_{12})=d,\\\
(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{r}}_{22})&=e,\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{22})=f,\quad(\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{21})=g,\quad(\mathsf{\bar{t}}_{21},\,\mathsf{\bar{t}}_{22})=h,\\\
\end{split}$
where $a,\,b,\,c,\,d,\,e,\,f,\,g,\,h\in\mathbb{Z}_{2}$, and let us express the
remaining vectors of $\bar{\mathfrak{S}}$ in terms of the symplectic basis.
Standard computations yield
(91) $\displaystyle\mathsf{\bar{r}}_{12}$
$\displaystyle=c\mathsf{\bar{r}}_{11}+a\mathsf{\bar{t}}_{11}+\mathsf{\bar{r}}_{22},$
$\displaystyle\mathsf{\bar{t}}_{12}=d\mathsf{\bar{r}}_{11}+b\mathsf{\bar{t}}_{11}+\mathsf{\bar{t}}_{22},$
(92) $\displaystyle\mathsf{\bar{r}}_{21}$
$\displaystyle=\mathsf{\bar{r}}_{11}+f\mathsf{\bar{r}}_{22}+e\mathsf{\bar{t}}_{22},$
$\displaystyle\mathsf{\bar{t}}_{21}=\mathsf{\bar{t}}_{11}+h\mathsf{\bar{r}}_{22}+g\mathsf{\bar{t}}_{22}.$
Now recall that $W$ is isotropic; then, using the expressions in (91) and
imposing the relations
(93)
$\begin{array}[]{lll}(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12})=0,&\quad(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{r}}_{21})=0,&\quad(\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{21})=0,\\\
(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{12})=0,&\quad(\mathsf{\bar{t}}_{12},\,\mathsf{\bar{t}}_{21})=0,&\quad(\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21})=0,\end{array}$
we get
(94) $\begin{array}[]{lll}ad+bc=1,&\quad a+e=0,&\quad\quad c+g=0,\\\ \quad
b+f=0,&\quad d+h=0,&\quad eh+fg=1.\end{array}$
Summing up, the elements $\mathsf{\bar{r}}_{12}$, $\mathsf{\bar{t}}_{12}$,
$\mathsf{\bar{r}}_{21}$, $\mathsf{\bar{t}}_{21}$ can be determined from the
symplectic basis via the relations
(95) $\displaystyle\mathsf{\bar{r}}_{12}$
$\displaystyle=c\mathsf{\bar{r}}_{11}+a\mathsf{\bar{t}}_{11}+\mathsf{\bar{r}}_{22},$
$\displaystyle\mathsf{\bar{t}}_{12}=d\mathsf{\bar{r}}_{11}+b\mathsf{\bar{t}}_{11}+\mathsf{\bar{t}}_{22},$
(96) $\displaystyle\mathsf{\bar{r}}_{21}$
$\displaystyle=\mathsf{\bar{r}}_{11}+b\mathsf{\bar{r}}_{22}+a\mathsf{\bar{t}}_{22},$
$\displaystyle\mathsf{\bar{t}}_{21}=\mathsf{\bar{t}}_{11}+d\mathsf{\bar{r}}_{22}+c\mathsf{\bar{t}}_{22},$
where $a,\,b,\,c,\,d\in\mathbb{Z}_{2}$ and $ad+bc=1$. Conversely, given any
symplectic basis
$\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22}$
of $V$ and elements $\mathsf{\bar{r}}_{12}$, $\mathsf{\bar{t}}_{12}$,
$\mathsf{\bar{r}}_{21}$, $\mathsf{\bar{t}}_{21}$ as in (95), with $ad+bc=1$,
we get a set of generators $\bar{\mathfrak{S}}$ of $V$ having the form (87),
and whose elements satisfy (88). Thus, the total number of such
$\bar{\mathfrak{S}}$ in this case is given by
(97)
$|\mathsf{Sp}(4,\,\mathbb{Z}_{2})|\cdot|\mathsf{GL}(2,\,\mathbb{Z}_{2})|=720\cdot
6=4320$
and so, by Lemma 3.14, the corresponding number of diagonal double Kodaira
structures is $2^{8}\cdot 4320=1105920$. All these structures are strong: in
fact, we have
(98)
$\begin{split}K_{1}=\langle\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{12},\,\mathsf{t}_{12}\rangle&=\langle\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{11}^{c}\mathsf{t}_{11}^{a}\mathsf{r}_{22},\,\mathsf{r}_{11}^{d}\mathsf{t}_{11}^{b}\mathsf{t}_{22}\rangle\\\
&=\langle\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\rangle=G\end{split}$
(99)
$\begin{split}K_{2}=\langle\mathsf{r}_{21},\,\mathsf{t}_{21},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\rangle&=\langle\mathsf{r}_{11}\mathsf{r}_{22}^{b}\mathsf{t}_{22}^{a},\,\mathsf{t}_{11}\mathsf{r}_{22}^{d}\mathsf{t}_{22}^{c},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\rangle\\\
&=\langle\mathsf{r}_{11},\,\mathsf{t}_{11},\,\mathsf{r}_{22},\,\mathsf{t}_{22}\rangle=G,\end{split}$
the last equality following in both cases because
$\langle\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22}\rangle=V$
and $[\mathsf{r}_{11},\,\mathsf{t}_{11}]=\mathsf{z}$.
Case $\boldsymbol{(b)}$. In this situation, the elements
$\\{\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21},\,\mathsf{\bar{t}}_{21}\\}$
form a symplectic basis for $V$, whereas
$W=\langle\mathsf{\bar{r}}_{11},\,\mathsf{\bar{t}}_{11},\,\mathsf{\bar{r}}_{22},\,\mathsf{\bar{t}}_{22}\rangle$
is an isotropic subspace. The same calculations as in case $(a)$ show that
there are again $1105920$ diagonal double Kodaira structures.
Case $\boldsymbol{(c)}$. This case does not occur. In fact, in this situation
the subspace
$W=\langle\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21}\rangle$
is isotropic. Take a linear combination of its generators giving the zero
vector, namely
(100)
$a\mathsf{\bar{r}}_{12}+b\mathsf{\bar{t}}_{12}+c\mathsf{\bar{r}}_{21}=0.$
Pairing with $\mathsf{\bar{t}}_{21}$, $\mathsf{\bar{t}}_{22}$,
$\mathsf{\bar{r}}_{22}$, we get $c=a=b=0$. Thus,
$\mathsf{\bar{r}}_{12},\,\mathsf{\bar{t}}_{12},\,\mathsf{\bar{r}}_{21}$ are
linearly independent, and $W$ is an isotropic subspace of dimension $3$ inside
the $4$-dimensional symplectic space $V$, contradiction.
Case $\boldsymbol{(d)}$. This case is obtained from $(c)$ by exchanging the
indices $1$ and $2$, so it does not occur, either.
Summarizing, we have found $1105920$ diagonal double Kodaira structures in
each of the cases $(a)$ and $(b)$, and no structure at all in cases $(c)$ and $(d)$. So
the total number of diagonal double Kodaira structures on $G$ is $2211840$,
and this concludes the proof of part $\boldsymbol{(1)}$.
Now observe that, since every diagonal double Kodaira structure $\mathfrak{S}$
generates $G$, the only automorphism $\phi$ of $G$ fixing $\mathfrak{S}$
elementwise is the identity. This means that $\mathrm{Aut}(G)$ acts freely on
the set of diagonal double Kodaira structures, hence the number of orbits is
obtained by dividing $2211840$ by $|\mathrm{Aut}(G)|$. Parts $\boldsymbol{(2)}$
and $\boldsymbol{(3)}$ now follow from (86), and we are done. ∎
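The numerical bookkeeping in the proof can be double-checked mechanically. The Python sketch below (informal; all identifiers are ours) enumerates the parameters $(a,\,b,\,c,\,d)$ of (95), confirming that there are $|\mathsf{GL}(2,\,\mathbb{Z}_{2})|=6$ admissible choices, and then reproduces the counts of structures and of orbits stated in Theorem 3.15.

from itertools import product

# parameters (a, b, c, d) in Z_2 with ad + bc = 1, as in (95)
params = [t for t in product(range(2), repeat=4) if (t[0] * t[3] + t[1] * t[2]) % 2 == 1]
assert len(params) == 6                 # = |GL(2, Z_2)|

sp4 = 720                               # |Sp(4, Z_2)|, the number of ordered symplectic bases of V
case_a = sp4 * len(params)              # generating sets arising in case (a), formula (97)
assert case_a == 4320

per_case = 2 ** 8 * case_a              # Lemma 3.14: each generating set has 2^8 lifts to a structure
total = 2 * per_case                    # cases (a) and (b) contribute equally; (c), (d) are empty
assert per_case == 1105920 and total == 2211840

# Aut(G) acts freely, so the number of orbits is total / |Aut(G)|, cf. (86)
assert total // 1152 == 1920            # G = H_5(Z_2)
assert total // 1920 == 1152            # G = G_5(Z_2)
print("all counts confirmed")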
###### Example 3.16.
Let us give an explicit example of diagonal double Kodaira structure on an
extra-special group $G$ of order $32$, by using the construction described in
the proof of part $\mathbf{(1)}$ of Theorem 3.15. Referring to the
presentations for $\mathsf{H}_{5}(\mathbb{Z}_{2})$ and
$\mathsf{G}_{5}(\mathbb{Z}_{2})$ given in Proposition 1.9, we start by
choosing in both cases the following elements, whose images give a symplectic
basis for $V$:
(101)
$\mathsf{r}_{11}=\mathsf{r}_{1},\quad\mathsf{t}_{11}=\mathsf{t}_{1},\quad\mathsf{r}_{22}=\mathsf{r}_{2},\quad\mathsf{t}_{22}=\mathsf{t}_{2}.$
Choosing $a=d=1$ and $b=c=0$ in (95), we find the remaining elements,
obtaining the diagonal double Kodaira structure
(102)
$\begin{split}\mathsf{r}_{11}&=\mathsf{r}_{1},\quad\mathsf{t}_{11}=\mathsf{t}_{1},\quad\mathsf{r}_{12}=\mathsf{r}_{2}\,\mathsf{t}_{1},\quad\mathsf{t}_{12}=\mathsf{r}_{1}\,\mathsf{t}_{2},\\\
\mathsf{r}_{21}&=\mathsf{r}_{1}\,\mathsf{t}_{2},\quad\mathsf{t}_{21}=\mathsf{r}_{2}\,\mathsf{t}_{1},\quad\mathsf{r}_{22}=\mathsf{r}_{2},\quad\mathsf{t}_{22}=\mathsf{t}_{2}.\end{split}$
###### Remark 3.17.
Theorem 3.15 should be compared with previous results of [CaPol19] and
[Pol20], regarding the construction of diagonal double Kodaira structures on
some extra-special groups of order at least $2^{7}=128$. The examples on
extra-special groups of order $32$ presented here are really new, in the sense
that they cannot be obtained by taking the image of structures on extra-
special groups of bigger order: in fact, an extra-special group admits no non-
abelian proper quotients, cf. Example 3.6.
###### Remark 3.18.
Although it is known that the pure braid group $\mathsf{P}_{2}(\Sigma_{b})$ is
residually $p$-finite for all prime number $p\geq 2$, see [BarBel09, pp.
1481-1490], it is a non-trivial problem to understand its non-abelian, finite
quotients. The extra-special examples of order at least $128$ cited in Remark
3.17 were the outcome of the first (as far as we know) systematic
investigation of this matter. Our approach in the present work sheds some new
light on this problem, providing a sharp lower bound for the order of a non-
abelian quotient $G$ of $\mathsf{P}_{2}(\Sigma_{b})$, under the assumption
that the quotient map $\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow
G$ does not factor through
$\pi_{1}(\Sigma_{b}\times\Sigma_{b},\,\mathscr{P})$; indeed, by [CaPol19,
Remark 1.7 (iv)], the factorization occurs if and only if $\varphi(A_{12})$ is
trivial. More precisely, we showed that, for every such quotient, the
inequality $|G|\geq 32$ holds, with equality if and only if $G$ is extra-
special: in fact, both extra-special groups of order $32$ do appear as
quotients of $\mathsf{P}_{2}(\Sigma_{2})$. Moreover, for these groups, Theorem
3.15 also computes the number of distinct group epimorphisms
$\varphi\colon\mathsf{P}_{2}(\Sigma_{2})\longrightarrow G$ such that
$\varphi(A_{12})=\mathsf{z}$, and the number of their equivalence classes up
to the natural action of $\operatorname{Aut}(G)$.
## 4\. Geometrical application: diagonal double Kodaira fibrations
Recall that a _Kodaira fibration_ is a smooth, connected holomorphic fibration
$f_{1}\colon S\longrightarrow B_{1}$, where $S$ is a compact complex surface
and $B_{1}$ is a compact complex curve, which is not isotrivial. The genus
$b_{1}:=g(B_{1})$ is called the _base genus_ of the fibration, whereas the
genus $g:=g(F)$, where $F$ is any fibre, is called the _fibre genus_.
###### Definition 4.1.
A _double Kodaira surface_ is a compact complex surface $S$, endowed with a
_double Kodaira fibration_ , namely a surjective, holomorphic map $f\colon
S\longrightarrow B_{1}\times B_{2}$ yielding, by composition with the natural
projections, two Kodaira fibrations $f_{i}\colon S\longrightarrow B_{i}$,
$i=1,\,2$.
The aim of this section is to show how the existence of diagonal double
Kodaira structures is equivalent to the existence of some special double
Kodaira fibrations, that we call _diagonal double Kodaira fibrations_. We
closely follow the treatment given in [Pol20, Section 3].
With a slight abuse of notation, in the sequel we will use the symbol
$\Sigma_{b}$ to indicate both a smooth complex curve of genus $b$ and its
underlying real surface. By Grauert-Remmert’s extension theorem and Serre’s
GAGA, the group epimorphism
$\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$ described in
Proposition 2.6 yields the existence of a smooth, complex, projective surface
$S$ endowed with a Galois cover
(103) $\mathbf{f}\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b},$
with Galois group $G$ and branched precisely over $\Delta$ with branching
order $n$, see [CaPol19, Proposition 3.4]. Composing the left homomorphism in
(68) with $\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$, we get
two homomorphisms
(104) $\varphi_{1}\colon\pi_{1}(\Sigma_{b}-\\{p_{2}\\},\,p_{1})\longrightarrow
G,\quad\varphi_{2}\colon\pi_{1}(\Sigma_{b}-\\{p_{1}\\},\,p_{2})\longrightarrow
G,$
whose respective images coincide with the subgroups $K_{1}$ and $K_{2}$
defined in (72). By construction, these are the homomorphisms induced by the
restrictions $\mathbf{f}_{i}\colon\Gamma_{i}\longrightarrow\Sigma_{b}$ of the
Galois cover $\mathbf{f}\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ to
the fibres of the two natural projections
$\pi_{i}\colon\Sigma_{b}\times\Sigma_{b}\longrightarrow\Sigma_{b}$. Since
$\Delta$ intersects transversally at a single point all the fibres of the
natural projections, it follows that both such restrictions are branched at
precisely one point, and the number of connected components of the smooth
curve $\Gamma_{i}\subset S$ equals the index $m_{i}:=[G:K_{i}]$ of $K_{i}$ in
$G$.
So, taking the Stein factorizations of the compositions
$\pi_{i}\circ\mathbf{f}\colon S\longrightarrow\Sigma_{b}$ as in the diagram
below
(105)
${S}$${\Sigma_{b}}$${\Sigma_{b_{i}}}$$\scriptstyle{\pi_{i}\circ\mathbf{f}}$$\scriptstyle{f_{i}}$$\scriptstyle{\theta_{i}}$
we obtain two distinct Kodaira fibrations $f_{i}\colon
S\longrightarrow\Sigma_{b_{i}}$, hence a double Kodaira fibration by
considering the product morphism
(106) $f=f_{1}\times f_{2}\colon
S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}.$
###### Definition 4.2.
We call $f\colon S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ the
_diagonal double Kodaira fibration_ associated with the diagonal double
Kodaira structure $\mathfrak{S}$ on the finite group $G$. Conversely, we will
say that a double Kodaira fibration $f\colon
S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ is _of diagonal type_
$(b,\,n)$ if there exists a finite group $G$ and a diagonal double Kodaira
structure $\mathfrak{S}$ of type $(b,\,n)$ on it such that $f$ is associated
with $\mathfrak{S}$.
Since the morphism $\theta_{i}\colon\Sigma_{b_{i}}\longrightarrow\Sigma_{b}$
is étale of degree $m_{i}$, by using the Hurwitz formula we obtain
(107) $b_{1}-1=m_{1}(b-1),\quad b_{2}-1=m_{2}(b-1).$
Moreover, the fibre genera $g_{1}$, $g_{2}$ of the Kodaira fibrations
$f_{1}\colon S\longrightarrow\Sigma_{b_{1}}$, $f_{2}\colon
S\longrightarrow\Sigma_{b_{2}}$ are computed by the formulae
(108) $2g_{1}-2=\frac{|G|}{m_{1}}(2b-2+\mathfrak{n}),\quad
2g_{2}-2=\frac{|G|}{m_{2}}\left(2b-2+\mathfrak{n}\right),$
where $\mathfrak{n}:=1-1/n$. Finally, the surface $S$ fits into a diagram
(109)
${S}$${\Sigma_{b}\times\Sigma_{b}}$${\Sigma_{b_{1}}\times\Sigma_{b_{2}}}$$\scriptstyle{\mathbf{f}}$$\scriptstyle{f}$$\scriptstyle{\theta_{1}\times\theta_{2}}$
so that the diagonal double Kodaira fibration $f\colon
S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ is a finite cover of
degree $\frac{|G|}{m_{1}m_{2}}$, branched precisely over the curve
(110)
$(\theta_{1}\times\theta_{2})^{-1}(\Delta)=\Sigma_{b_{1}}\times_{\Sigma_{b}}\Sigma_{b_{2}}.$
Such a curve is always smooth, being the preimage of a smooth divisor via an
étale morphism. However, it is reducible in general, see [CaPol19, Proposition
3.11]. The invariants of $S$ can be now computed as follows, see [CaPol19,
Proposition 3.8].
###### Proposition 4.3.
Let $f\colon S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ be a diagonal
double Kodaira fibration, associated with a diagonal double Kodaira structure
$\mathfrak{S}$ of type $(b,\,n)$ on a finite group $G$. Then we have
(111)
$\begin{split}c_{1}^{2}(S)&=|G|\,(2b-2)(4b-4+4\mathfrak{n}-\mathfrak{n}^{2})\\\
c_{2}(S)&=|G|\,(2b-2)(2b-2+\mathfrak{n})\end{split}$
where $\mathfrak{n}=1-1/n$. As a consequence, the slope and the signature of
$S$ can be expressed as
(112)
$\begin{split}\nu(S)&=\frac{c_{1}^{2}(S)}{c_{2}(S)}=2+\frac{2\mathfrak{n}-\mathfrak{n}^{2}}{2b-2+\mathfrak{n}}\\\
\sigma(S)&=\frac{1}{3}\left(c_{1}^{2}(S)-2c_{2}(S)\right)=\frac{1}{3}\,|G|\,(2b-2)\left(1-\frac{1}{n^{2}}\right)=\sigma(\mathfrak{S})\end{split}$
###### Remark 4.4.
By definition, the diagonal double Kodaira structure $\mathfrak{S}$ is strong
if and only if $m_{1}=m_{2}=1$, that in turn implies $b_{1}=b_{2}=b$, i.e.,
$f=\mathbf{f}$. In other words, $\mathfrak{S}$ is strong if and only if no
Stein factorization as in (105) is needed or, equivalently, if and only if the
Galois cover $\mathbf{f}\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$
induced by (66) is already a double Kodaira fibration, branched on the
diagonal $\Delta\subset\Sigma_{b}\times\Sigma_{b}$.
###### Remark 4.5.
Every Kodaira fibred surface $S$ satisfies $\sigma(S)>0$, see the introduction
to [LLR20]; moreover, since $S$ is a differentiable $4$-manifold that is a
real surface bundle, its signature is divisible by $4$, see [Mey73]. In
addition, if $S$ is associated with a diagonal double Kodaira structure of
type $(b,\,n)$, with $n$ odd, then $K_{S}$ is $2$-divisible in
$\textrm{Pic}(S)$ and so $\sigma(S)$ is a positive multiple of $16$ by
Rokhlin’s theorem, see [CaPol19, Remark 3.9].
###### Remark 4.6.
Not all double Kodaira fibrations are of diagonal type. In fact, if $S$ is of
diagonal type then its slope satisfies $\nu(S)=2+s$, where $s$ is rational and
$0<s<6-4\sqrt{2}$, see [Pol20, Proposition 3.12 and Remark 3.13].
We are now ready to give a geometric interpretation of Proposition 3.9,
Proposition 3.11 and Theorem 3.15 in terms of double Kodaira fibrations.
###### Theorem 4.7.
Let $G$ be a finite group and
(113) $\mathbf{f}\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$
be a Galois cover with Galois group $G$, branched over the diagonal $\Delta$
with branching order $n\geq 2$. Then the following hold.
* $\boldsymbol{(1)}$
We have $|G|\geq 32$, with equality precisely when $G$ is extra-special.
* $\boldsymbol{(2)}$
If $G=G(32,\,49)=\mathsf{H}_{5}(\mathbb{Z}_{2})$ and $b=2$, there are $1920$
$G$-covers of type (113), up to cover isomorphisms.
* $\boldsymbol{(3)}$
If $G=G(32,\,50)=\mathsf{G}_{5}(\mathbb{Z}_{2})$ and $b=2$, there are $1152$
$G$-covers of type (113), up to cover isomorphisms.
Finally, in both cases $\boldsymbol{(2)}$ and $\boldsymbol{(3)}$, we have
$n=2$ and each cover $\mathbf{f}$ is a double Kodaira fibration with
(114) $b_{1}=b_{2}=2,\quad g_{1}=g_{2}=41,\quad\sigma(S)=16.$
###### Proof.
By the result of Section 4, a cover as in (113), branched over $\Delta$ with
order $n$, exists if and only if $G$ admits a double Kodaira structure of type
$(b,\,n)$, and the number of such covers, up to cover isomorphisms, equals the
number of structures up the natural action of $\mathrm{Aut}(G)$. Then,
$\boldsymbol{(1)}$, $\boldsymbol{(2)}$ and $\boldsymbol{(3)}$ can be deduced
from the corresponding statements in Theorem 3.15. The same theorem tells us
that all double Kodaira structures on an extra-special group of order $32$ are
strong, hence the cover $\mathbf{f}$ is already a double Kodaira fibration and
no Stein factorization is needed (Remark 4.4). The fibre genera, the slope and
the signature of $S$ can be now computed by using (108) and (112). ∎
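As an illustration, the numerical values in (114) can be recovered directly from (107), (108) and (112). The short Python sketch below is only a sanity check of this arithmetic in the strong case $|G|=32$, $b=n=2$, $m_{1}=m_{2}=1$; the formulas themselves are those quoted above.

```python
# Sanity check of (114) via (107), (108), (112) for |G| = 32, b = n = 2, m_1 = m_2 = 1.
from fractions import Fraction

G_order, b, n = 32, 2, 2
m1 = m2 = 1
frak_n = 1 - Fraction(1, n)                                   # fraktur-n = 1 - 1/n

b1 = m1 * (b - 1) + 1                                         # (107): b_i - 1 = m_i (b - 1)
b2 = m2 * (b - 1) + 1
g1 = (Fraction(G_order, m1) * (2 * b - 2 + frak_n) + 2) / 2   # (108): 2g_i - 2 = (|G|/m_i)(2b - 2 + n)
g2 = (Fraction(G_order, m2) * (2 * b - 2 + frak_n) + 2) / 2
sigma = Fraction(1, 3) * G_order * (2 * b - 2) * (1 - Fraction(1, n ** 2))   # (112)

assert (b1, b2) == (2, 2)
assert g1 == g2 == 41
assert sigma == 16
```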
As a consequence, we obtain a sharp lower bound for the signature of a
diagonal double Kodaira fibration or, equivalently, of a diagonal double
Kodaira structure.
###### Corollary 4.8.
Let $f\colon S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ be a diagonal
double Kodaira fibration, associated with a diagonal double Kodaira structure
of type $(b,\,n)$ on a finite group $G$. Then $\sigma(S)\geq 16$, and equality
holds precisely when $(b,\,n)=(2,\,2)$ and $G$ is an extra-special group of
order $32$.
###### Proof.
Theorem 3.15 implies $|G|\geq 32$. Since $b\geq 2$ and $n\geq 2$, from (112)
we get
(115) $\sigma(S)=\frac{1}{3}\,|G|\,(2b-2)\left(1-\frac{1}{\;n^{2}}\right)\\\
\geq\frac{1}{3}\cdot 32\cdot(2\cdot 2-2)\left(1-\frac{1}{\;2^{2}}\right)=16,$
and equality holds if and only if we are in the situation described in Theorem
4.7, namely, $b=n=2$ and $G$ an extra-special group of order $32$. ∎
These results provide, in particular, new “double solutions” to a problem,
posed by G. Mess, from Kirby’s problem list in low-dimensional topology
[Kir97, Problem 2.18 A], asking for the smallest number $b$ for which
there exists a real surface bundle over a real surface with base genus $b$ and
non-zero signature. We actually have $b=2$, also for double Kodaira
fibrations, as shown in [CaPol19, Proposition 3.19] and [Pol20, Theorem 3.6]
by using double Kodaira structures of type $(2,\,3)$ on extra-special groups
of order $3^{5}$. Those fibrations had signature $144$ and fibre genera $325$;
by using our new examples, we are now able to substantially lower both these
values.
###### Theorem 4.9.
Let $S$ be the diagonal double Kodaira surface associated with a strong
diagonal double Kodaira structure of type $(b,\,n)=(2,\,2)$ on an extra-
special group $G$ of order $32$. Then the real manifold $M$ underlying $S$ is
a closed, orientable $4$-manifold of signature $16$ that can be realized as a
real surface bundle over a real surface of genus $2$, with fibre genus $41$,
in two different ways.
Theorem 4.7 also implies the following partial answer to [CaPol19, Question
3.20].
###### Corollary 4.10.
Let $g_{\mathrm{min}}$ and $\sigma_{\mathrm{min}}$ be the minimal possible
fibre genus and signature for a double Kodaira fibration $f\colon
S\longrightarrow\Sigma_{2}\times\Sigma_{2}$. Then we have
(116) $g_{\mathrm{min}}\leq 41,\quad\sigma_{\mathrm{min}}\leq 16.$
In fact, it is an interesting question whether $16$ and $41$ are the minimum
possible values for the signature and the fibre genus of a (not necessarily
diagonal) double Kodaira fibration $f\colon
S\longrightarrow\Sigma_{2}\times\Sigma_{2}$, but we will not address this
problem here.
###### Remark 4.11.
Constructing (double) Kodaira fibrations with small signature is a rather
difficult problem. As far as we know, before our work the only examples with
signature $16$ were the ones listed in [LLR20, Table 3, Cases 6.2, 6.6, 6.7
(Type 1), 6.9]. The examples provided by Theorem 4.7 are new, since both the
base genera and the fibre genera are different. Note that our results also
show that _every_ curve of genus $2$ (and not only some special curve with
extra automorphisms) is the base of a double Kodaira fibration with signature
$16$. Thus, we obtain two families of dimension $3$ of such fibrations that,
to the best of our knowledge, provide the first examples of positive-dimensional
families of double Kodaira fibrations with small signature.
###### Remark 4.12.
Let $f\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ be a double Kodaira
fibration, associated with a strong diagonal double Kodaira structure of type
$(b,\,n)$ on a finite group $G$. Then the branch locus of $f$ is
$\Delta\subset\Sigma_{b}\times\Sigma_{b}$, namely the graph of the identity
$\operatorname{id}\colon\Sigma_{b}\longrightarrow\Sigma_{b}$, and so, adopting
the terminology in [CatRol09], we say that $f$ is _very simple_. Let us denote
by $\mathfrak{M}_{S}$ the connected component of the Gieseker moduli space of
surfaces of general type containing the class of $S$, and by $\mathcal{M}_{b}$
the moduli space of smooth curves of genus $b$. Thus, by applying [Rol10, Thm.
1.7], we infer that every surface in $\mathfrak{M}_{S}$ is still a very simple
double Kodaira fibration and that there is a natural map of schemes
(117) $\mathcal{M}_{b}\longrightarrow\mathfrak{M}_{S},$
which is an isomorphism on geometric points. Roughly speaking, since the
branch locus $\Delta\subset\Sigma_{b}\times\Sigma_{b}$ is rigid, all the
deformations of $S$ are realized by deformations of
$\Sigma_{b}\times\Sigma_{b}$ preserving the diagonal, hence by deformations of
$\Sigma_{b}$, cf. [CaPol19, Proposition 3.22]. In particular, this shows that
$\mathfrak{M}_{S}$ is a connected and irreducible component of the Gieseker
moduli space.
### 4.1. The computation of $H_{1}(S,\,\mathbb{Z})$
We end this section by computing the first homology group
$H_{1}(S,\,\mathbb{Z})$, where $f\colon
S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ is the diagonal double Kodaira
fibration associated with a diagonal double Kodaira structure of type
$(b,\,n)=(2,\,2)$ on an extra-special group of order $32$. To this purpose, we
will make use of the following result, which is a consequence of [Fox57,
Theorem p. 254].
###### Proposition 4.13.
Let $G$ be a finite group and
$\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$ be a surjective
group homomorphism, such that $\varphi(A_{12})$ has order $n\geq 2$. If
$f\colon S\longrightarrow\Sigma_{b_{1}}\times\Sigma_{b_{2}}$ is the diagonal
double Kodaira fibration associated with $\varphi$, then $\pi_{1}(S)$ sits
into a short exact sequence
(118)
$1\longrightarrow\pi_{1}(S)\longrightarrow\mathsf{P}_{2}(\Sigma_{b})^{\operatorname{orb}}\stackrel{{\scriptstyle\,\,\,\varphi^{\operatorname{orb}}}}{{\longrightarrow}}G\longrightarrow
1,$
where $\mathsf{P}_{2}(\Sigma_{b})^{\operatorname{orb}}$ is the quotient of
$\mathsf{P}_{2}(\Sigma_{b})$ by the normal closure of the cyclic subgroup
$\langle A_{12}^{n}\rangle$, and
$\varphi^{\operatorname{orb}}\colon\mathsf{P}_{2}(\Sigma_{b})^{\operatorname{orb}}\longrightarrow
G$ is the group epimorphism naturally induced by $\varphi$.
Proposition 4.13 allows one, at least in principle, to compute $\pi_{1}(S)$,
and so its abelianization $H_{1}(S,\,\mathbb{Z})$. However, doing all the
calculations by hand seems quite difficult, so we resorted to the Computer
Algebra System `GAP4`, see [GAP4]. The reader can find the idea behind the
calculation and the corresponding script in Appendix B, while here we just
report the result.
###### Proposition 4.14.
Let $f\colon S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ be the diagonal
double Kodaira fibration associated with a diagonal double Kodaira structure
of type $(b,\,n)=(2,\,2)$ on an extra-special group $G$ of order $32$. Then
(119) $H_{1}(S,\,\mathbb{Z})=\mathbb{Z}^{8}\oplus(\mathbb{Z}_{2})^{4}.$
In particular, this homology group is independent both of $G$ and of the
chosen structure on it.
###### Corollary 4.15.
Let $f\colon S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ be the diagonal
double Kodaira fibration associated with a diagonal double Kodaira structure
of type $(b,\,n)=(2,\,2)$ on an extra-special group of order $32$. Then
(120) $c_{1}^{2}(S)=368,\quad c_{2}(S)=160,\quad p_{g}(S)=47,\quad q(S)=4.$
###### Proof.
The integers $c_{1}^{2}(S),\,c_{2}(S)$ can be computed by using (111). Since
$\mathsf{b}_{1}(S)=8$ (Proposition 4.14), it follows that $q(S)=4$. On the other
hand, the Noether formula gives
(121)
$1-q(S)+p_{g}(S)=\chi(\mathcal{O}_{S})=\frac{c_{1}^{2}(S)+c_{2}(S)}{12}=44,$
hence $p_{g}(S)=47$. ∎
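The Chern numbers and the remaining invariants in (120) can likewise be verified mechanically from (111) and the Noether formula. The following sketch is a numeric check only, under the same assumptions $|G|=32$, $(b,\,n)=(2,\,2)$ and $\mathsf{b}_{1}(S)=8$ as above.

```python
# Numeric check of (120)-(121) from (111) and the Noether formula,
# assuming |G| = 32, (b, n) = (2, 2) and b_1(S) = 8 (Proposition 4.14).
from fractions import Fraction

G_order, b, n = 32, 2, 2
frak_n = 1 - Fraction(1, n)

c1_sq = G_order * (2 * b - 2) * (4 * b - 4 + 4 * frak_n - frak_n ** 2)   # (111)
c2    = G_order * (2 * b - 2) * (2 * b - 2 + frak_n)

chi = (c1_sq + c2) / 12          # Noether: chi(O_S) = (c_1^2 + c_2) / 12
q   = 8 // 2                     # q(S) = b_1(S) / 2
p_g = chi - 1 + q                # 1 - q + p_g = chi

assert (c1_sq, c2) == (368, 160)
assert chi == 44 and q == 4 and p_g == 47
```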
Finally, we want to relate Proposition 4.14 to some facts about monodromy.
Recall that, by [CatRol09, Proposition 2.5], given a Kodaira fibration
$S\longrightarrow\Sigma_{b}$, with base genus $b$ and fibre genus $g$, there
is a short exact sequence of fundamental groups
(122)
$1\longrightarrow\pi_{1}(\Sigma_{g})\longrightarrow\pi_{1}(S)\longrightarrow\pi_{1}(\Sigma_{b})\longrightarrow
1,$
which induces, by conjugation, a monodromy representation
$\pi_{1}(\Sigma_{b})\longrightarrow\operatorname{Out}(\pi_{1}(\Sigma_{g}))$.
Taking the abelianization of the right side, and dualizing over $\mathbb{Q}$,
we obtain a monodromy representation
(123)
$\rho_{\pi_{1}(\Sigma_{b})}\colon\pi_{1}(\Sigma_{b})\longrightarrow\operatorname{Aut}(H^{1}(\Sigma_{g},\,\mathbb{Q})),$
whose invariant subspace will be denoted by
$H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}$.
Now, let $f\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ be a diagonal
double Kodaira fibration, associated with a group epimorphism
$\varphi\colon\mathsf{P}_{2}(\Sigma_{b})\longrightarrow G$ such that
$\varphi(A_{12})$ has order $n\geq 2$. Set $\\{i,\,j\\}=\\{1,\,2\\}$ and let
$f_{i}\colon S\longrightarrow\Sigma_{b}$ be the Kodaira fibration obtained
composing $f$ with the natural projection
$\pi_{i}\colon\Sigma_{b}\times\Sigma_{b}\longrightarrow\Sigma_{b}$ onto the
$i$th factor. Assume moreover that the induced group homomorphism
$\varphi_{j}\colon\pi_{1}(\Sigma_{b}-\\{p_{i}\\})\longrightarrow G$ is
surjective. Then $\pi_{1}(\Sigma_{g})$ fits into a short exact sequence
(124)
$1\longrightarrow\pi_{1}(\Sigma_{g})\longrightarrow\pi_{1}(\Sigma_{b}-\\{p_{i}\\})^{\operatorname{orb}}\stackrel{{\scriptstyle\,\,\,\varphi_{j}^{\operatorname{orb}}}}{{\longrightarrow}}G\longrightarrow
1,$
where
$\pi_{1}(\Sigma_{b}-\\{p_{i}\\})^{\operatorname{orb}}=\big{\langle}\rho_{j1},\,\tau_{j1},\ldots,\rho_{jb},\,\tau_{jb},\;A_{12}\mid
A_{12}\prod_{t=1}^{b}[\rho_{jt},\,\tau_{jt}]=1,\,A_{12}^{n}=1\big{\rangle},$
see (70), and
$\varphi_{j}^{\operatorname{orb}}\colon\pi_{1}(\Sigma_{b}-\\{p_{i}\\})^{\operatorname{orb}}\longrightarrow
G$ is the group epimorphism naturally induced by $\varphi_{j}$.
Correspondingly, we have a monodromy representation
(125) $\rho_{G}\colon
G\longrightarrow\operatorname{Aut}(H^{1}(\Sigma_{g},\,\mathbb{Q})),$
whose invariant subspace will be denoted by
$H^{1}(\Sigma_{g},\,\mathbb{Q})^{G}$.
###### Proposition 4.16.
Let $f\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ be a diagonal double
Kodaira fibration as above. Then the following holds.
* $\boldsymbol{(1)}$
$\dim H^{1}(S,\,\mathbb{Q})=\dim
H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}+2b$
* $\boldsymbol{(2)}$
$H^{1}(\Sigma_{g},\,\mathbb{Q})^{G}\subseteq
H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}$
###### Proof.
$\boldsymbol{(1)}$ If $\iota\colon\Sigma_{g}\longrightarrow S$ is the
inclusion, the Hochschild-Serre spectral sequence in cohomology associated
with (122) provides a short exact sequence of $\mathbb{Q}$-vector spaces
(126) $0\longrightarrow H^{1}(\Sigma_{b},\,\mathbb{Q})\stackrel{{\scriptstyle
f_{i}^{*}}}{{\longrightarrow}}H^{1}(S,\,\mathbb{Q})\stackrel{{\scriptstyle\iota^{*}}}{{\longrightarrow}}H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}\longrightarrow
0,$
see for instance [Breg18, p. 5] or [Vid19, p. 740], so the claim follows.
$\boldsymbol{(2)}$ The $G$-cover $h\colon\Sigma_{g}\longrightarrow\Sigma_{b}$
defines an injective pull-back map $h^{*}\colon
H^{1}(\Sigma_{b},\,\mathbb{Q})\longrightarrow H^{1}(\Sigma_{g},\,\mathbb{Q})$,
whose image is precisely $H^{1}(\Sigma_{g},\,\mathbb{Q})^{G}$. So it suffices
to check that $h^{*}$ is invariant under the monodromy map
$\rho_{\pi_{1}(\Sigma_{b})}$ and, to this purpose, we consider the
factorization of $h$ given as follows:
(127)
$\Sigma_{g}\stackrel{{\scriptstyle\iota}}{{\longrightarrow}}S\stackrel{{\scriptstyle
f}}{{\longrightarrow}}\Sigma_{b}\times\Sigma_{b}\stackrel{{\scriptstyle\pi_{i}}}{{\longrightarrow}}\Sigma_{b}.$
By (126), the image of $\iota^{*}\colon H^{1}(S,\,\mathbb{Q})\longrightarrow
H^{1}(\Sigma_{g},\,\mathbb{Q})$ coincides with the invariant subspace
$H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}$; thus, given any
automorphism $\xi\colon H^{1}(\Sigma_{g},\,\mathbb{Q})\longrightarrow
H^{1}(\Sigma_{g},\,\mathbb{Q})$ in the image of $\rho_{\pi_{1}(\Sigma_{b})}$,
we get $\xi\circ\iota^{*}=\iota^{*}$. Using (127), this in turn implies
(128) $\xi\circ h^{*}=\xi\circ(\iota^{*}\circ
f^{*}\circ\pi_{i}^{*})=(\xi\circ\iota^{*})\circ
f^{*}\circ\pi_{i}^{*}=\iota^{*}\circ f^{*}\circ\pi_{i}^{*}=h^{*},$
so $h^{*}$ is $\rho_{\pi_{1}(\Sigma_{b})}$-invariant and we are done. ∎
###### Corollary 4.17.
Let $f\colon S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ be a diagonal double
Kodaira fibration as above. Then the following are equivalent.
* $\boldsymbol{(1)}$
The pull-back map $f^{*}\colon
H^{1}(\Sigma_{b}\times\Sigma_{b},\,\mathbb{Q})\longrightarrow
H^{1}(S,\,\mathbb{Q})$ is an isomorphism.
* $\boldsymbol{(2)}$
$H^{1}(\Sigma_{g},\,\mathbb{Q})^{G}=H^{1}(\Sigma_{g},\,\mathbb{Q})^{\pi_{1}(\Sigma_{b})}$.
###### Proof.
It is sufficient to show that both conditions are equivalent to
$\mathsf{b}_{1}(S)=4b$. For $\boldsymbol{(1)}$, this follows from the
injectivity of the pull-back in cohomology associated with a finite $G$-cover.
On the other hand, by Proposition 4.16, equality $\boldsymbol{(2)}$ holds if
and only if
(129) $\dim H^{1}(\Sigma_{b},\,\mathbb{Q})=\dim H^{1}(S,\,\mathbb{Q})-2b,$
namely, if and only if $\dim H^{1}(S,\,\mathbb{Q})=4b$, as claimed. ∎
Borrowing the terminology from [Breg18, Definition 2.8], we say that a
diagonal double Kodaira fibration $f\colon
S\longrightarrow\Sigma_{b}\times\Sigma_{b}$ is _maximal_ if it satisfies one
of the equivalent conditions in Corollary 4.17. Since
$\mathsf{b}_{1}(\Sigma_{2}\times\Sigma_{2})=8$, Proposition 4.14 implies the
following
###### Proposition 4.18.
Let $f\colon S\longrightarrow\Sigma_{2}\times\Sigma_{2}$ be a diagonal double
Kodaira fibration, associated with a diagonal double Kodaira structure of type
$(b,\,n)=(2,\,2)$ on an extra-special group $G$ of order $32$. Then $f$ is
maximal.
## Acknowledgments
F. Polizzi was partially supported by GNSAGA-INdAM. He thanks Andrea Causin
for drawing the figures and Sönke Rollenske for answering some of his
questions via e-mail. Both authors are grateful to Ian Agol, Yves de
Cornulier, “Jonathan”, Derek Holt, Max Horn, Moishe Kohan, Roberto Pignatelli,
“Primoz”, Geoff Robinson, John Shareshian, Remy van Dobben de Bruyn, Will
Sawin for their precious answers and comments in the MathOverflow threads
https://mathoverflow.net/questions/357453
https://mathoverflow.net/questions/366044
https://mathoverflow.net/questions/366771
https://mathoverflow.net/questions/368628
https://mathoverflow.net/questions/371181
https://mathoverflow.net/questions/379272
https://mathoverflow.net/questions/380292
https://mathoverflow.net/questions/390447
## Appendix A. Non-abelian groups of order $24$ and $32$
$\mathrm{IdSmallGroup}(G)$ | $G$ | $\mathrm{Presentation}$
---|---|---
$G(24,\,1)$ | $\mathsf{D}_{8,\,3,\,-1}$ | $\langle x,\,y\;|\;x^{8}=y^{3}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(24,\,3)$ | $\mathsf{SL}(2,\,\mathbb{F}_{3})$ | $\langle x,\,y,\,z\;|\;\,x^{3}=y^{3}=z^{2}=xyz\rangle$
$G(24,\,4)$ | $\mathsf{Q}_{24}$ | $\langle x,\,y,\,z\;|\;\,x^{6}=y^{2}=z^{2}=xyz\rangle$
$G(24,\,5)$ | $\mathsf{D}_{2,\,12,\,5}$ | $\langle x,\,y\;|\;x^{2}=y^{12}=1,\,xyx^{-1}=y^{5}\rangle$
$G(24,\,6)$ | $\mathsf{D}_{24}$ | $\langle x,\,y\;|\;x^{2}=y^{12}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(24,\,7)$ | $\mathbb{Z}_{2}\times\mathsf{D}_{4,\,3,\,-1}$ | $\langle z\;|\;z^{2}=1\rangle\times\langle x,\,y\;|\;x^{4}=y^{3}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(24,\,8)$ | $((\mathbb{Z}_{2})^{2}\times\mathbb{Z}_{3})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,\,y,\,z,\,w\;|\;x^{2}=y^{2}=z^{2}=w^{3}=1,$
| | $[y,z]=[y,w]=[z,w]=1,$
| | $xyx^{-1}=y,\;xzx^{-1}=zy,\;xwx^{-1}=w^{-1}\rangle$
$G(24,\,10)$ | $\mathbb{Z}_{3}\times\mathsf{D}_{8}$ | $\langle z\;|\;z^{3}=1\rangle\times\langle x,\,y\;|\;x^{2}=y^{4}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(24,\,11)$ | $\mathbb{Z}_{3}\times\mathsf{Q}_{8}$ | $\langle z\;|\;z^{3}=1\rangle\times\langle i,\,j,\,k\;|\;i^{2}=j^{2}=k^{2}=ijk\rangle$
$G(24,\,12)$ | $\mathsf{S}_{4}$ | $\langle x,\,y\;|\;x=(12),\,y=(1234)\rangle$
$G(24,\,13)$ | $\mathbb{Z}_{2}\times\mathsf{A}_{4}$ | $\langle z\;|\;z^{2}=1\rangle\times\langle x,\,y\;|\;x=(12)(34),y=(123)\rangle$
$G(24,\,14)$ | $(\mathbb{Z}_{2})^{2}\times\mathsf{S}_{3}$ | $\langle z,\,w\;|\;z^{2}=w^{2}=[z,\,w]=1\rangle$
| | $\times\langle x,\,y\;|\;x=(12),\,y=(123)\rangle$
Table 1. Nonabelian groups of order $24$.
Source: groupprops.subwiki.org/wiki/Groups_of_order_24
$\mathrm{IdSmallGroup}(G)$ | $G$ | $\mathrm{Presentation}$
---|---|---
$G(32,\,2)$ | $(\mathbb{Z}_{4}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{4}$ | $\langle x,\,y,\,z\;|\;x^{4}=y^{4}=z^{2}=1,$
| | $[x,\,y]=z,\,[x,\,z]=[y,\,z]=1\rangle$
$G(32,\,4)$ | $\mathsf{D}_{4,\,8,\,5}$ | $\langle x,\,y\;|\;x^{4}=y^{8}=1,xyx^{-1}=y^{5}\rangle$
$G(32,\,5)$ | $(\mathbb{Z}_{8}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\;|\;x^{8}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=x^{5}y,\,zyz^{-1}=y\rangle$
$G(32,\,6)$ | $(\mathbb{Z}_{2})^{3}\rtimes\mathbb{Z}_{4}$ | $\langle x,\,y,\,z,\,w\mid x^{2}=y^{2}=z^{2}=w^{4}=1,$
| | $[x,\,y]=1,\,[x,\,z]=1,\,[y,\,z]=1,$
| | $wxw^{-1}=x,\,wyw^{-1}=xy,\,wzw^{-1}=yz\rangle$
$G(32,\,7)$ | $(\mathbb{Z}_{8}\rtimes\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z,\,u,\,w\mid y^{2}=z^{2}=w^{2}=1,$
| | $u^{2}=w^{-1},\,x^{2}=u,\,(yz)^{2}=1,\,(yu^{-1})^{2}=1,$
| | $uzu^{-1}=z^{-1},\,xyzx^{-1}=y^{-1}\rangle$
$G(32,\,8)$ | $(\mathbb{Z}_{2})^{2}\,.\,(\mathbb{Z}_{4}\times\mathbb{Z}_{2})$ | $\langle x,\,y,\,z\mid x^{8}=y^{2}=1,\,z^{2}=x^{4},$
| | $xy=yx^{5},\,[y,\,z]=1,\,xz=zxy^{-1}\rangle$
$G(32,\,9)$ | $(\mathbb{Z}_{8}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\;|\;x^{8}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=x^{3}y,\,zyz^{-1}=y\rangle$
$G(32,\,10)$ | $\mathsf{Q}_{8}\rtimes\mathbb{Z}_{4}$ | $\langle i,\,j,\,k,\,x\mid i^{2}=j^{2}=k^{2}=ijk,\,x^{4}=1,$
| | $xix^{-1}=j,\,xjx^{-1}=i,\,xkx^{-1}=k^{-1}\rangle$
$G(32,\,11)$ | $(\mathbb{Z}_{4})^{2}\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\mid x^{4}=y^{4}=[x,\,y]=1,\,z^{2}=1,$
| | $zxz^{-1}=y,\,zyz^{-1}=x\rangle$
$G(32,\,12)$ | $\mathsf{D}_{8,\,4,\,3}$ | $\langle x,\,y\;|\;x^{8}=y^{4}=1,xyx^{-1}=y^{3}\rangle$
$G(32,\,13)$ | $\mathsf{D}_{4,\,8,\,3}$ | $\langle x,\,y\;|\;x^{4}=y^{8}=1,xyx^{-1}=y^{3}\rangle$
$G(32,\,14)$ | $\mathsf{D}_{4,\,8,\,-1}$ | $\langle x,\,y\;|\;x^{4}=y^{8}=1,xyx^{-1}=y^{-1}\rangle$
$G(32,\,15)$ | $\mathbb{Z}_{4}\,.\,\mathsf{D}_{8}$ | $\langle x,\,y,\,z,\,u,\,w\mid w^{2}=1,\,z^{2}=u^{2}=w^{-1},$
| | $x^{2}=u,\,y^{2}=z,\,xzx^{-1}=z^{-1},$
| | $[y,\,u]=1,\,xyxu=y^{-1}\rangle$
$G(32,\,17)$ | $\mathsf{D}_{2,\,16,\,9}$ | $\langle x,\,y\mid x^{2}=y^{16}=1,\,xyx^{-1}=y^{9}\rangle$
$G(32,\,18)$ | $\mathsf{D}_{32}$ | $\langle x,\,y\mid x^{2}=y^{16}=1,xyx^{-1}=y^{-1}\rangle$
$G(32,\,19)$ | $\mathsf{QD}_{32}$ | $\langle x,\,y\mid x^{2}=y^{16}=1,xyx^{-1}=y^{7}\rangle$
$G(32,\,20)$ | $\mathsf{Q}_{32}$ | $\langle x,\,y,\,z\mid x^{8}=y^{2}=z^{2}=xyz\rangle$
$G(32,\,22)$ | $\mathbb{Z}_{2}\times((\mathbb{Z}_{4}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2})$ | $\langle w\mid w^{2}=1\rangle\times$
| | $\langle x,\,y,\,z\mid x^{4}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=xy,\,zyz^{-1}=y\rangle$
$G(32,\,23)$ | $\mathbb{Z}_{2}\times\mathsf{D}_{4,\,4,\,3}$ | $\langle z\mid z^{2}=1\rangle\times\langle x,\,y\mid x^{4}=y^{4}=1,\,xyx^{-1}=y^{3}\rangle$
$G(32,\,24)$ | $(\mathbb{Z}_{4})^{2}\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\mid x^{4}=y^{4}=z^{2}=1,$
| | $[x,\,y]=1,\,zxz^{-1}=x,\,zyz^{-1}=x^{2}y\rangle$
$G(32,\,25)$ | $\mathbb{Z}_{4}\times\mathsf{D}_{8}$ | $\langle z\mid z^{4}=1\rangle\times\langle x,\,y\mid x^{2}=y^{4}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(32,\,26)$ | $\mathbb{Z}_{4}\times\mathsf{Q}_{8}$ | $\langle z\mid z^{4}=1\rangle\times\langle i,\,j,\,k\mid i^{2}=j^{2}=k^{2}=ijk\rangle$
$G(32,\,27)$ | $(\mathbb{Z}_{2})^{3}\rtimes(\mathbb{Z}_{2})^{2}$ | $\langle x,\,y,\,z,\,a,\,b\mid$
| | $x^{2}=y^{2}=z^{2}=a^{2}=b^{2}=1,$
| | $[x,\,y]=[y,\,z]=[x,\,z]=[a,\,b]=1,$
| | $axa^{-1}=x,\,aya^{-1}=y,\,aza^{-1}=xz,$
| | $bxb^{-1}=x,\,byb^{-1}=y,\,bzb^{-1}=yz\rangle$
$G(32,\,28)$ | $(\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z,\,w\mid x^{4}=y^{2}=z^{2}=w^{2}=1,$
| | $[x,y]=[x,\,z]=[y,\,z]=1,$
| | $wxw^{-1}=x^{-1},\,wyw^{-1}=z,\,wzw^{-1}=y\rangle$
$G(32,\,29)$ | $(\mathbb{Z}_{2}\times\mathsf{Q}_{8})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,i,\,j,\,k,\,z\mid x^{2}=z^{2}=1,\,i^{2}=j^{2}=k^{2}=ijk,$
| | $[x,\,i]=[x,\,j]=[x,\,k]=1,$
| | $zxz^{-1}=x,\,ziz^{-1}=i,\,zjz^{-1}=xj^{-1}\rangle$
$G(32,\,30)$ | $(\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z,\,w\mid x^{4}=y^{2}=z^{2}=w^{2}=1,$
| | $[x,y]=[x,\,z]=[y,\,z]=1,$
| | $wxw^{-1}=xy,\,wyw^{-1}=y,\,wzw^{-1}=x^{2}z\rangle$
$\mathrm{IdSmallGroup}(G)$ | $G$ | $\mathrm{Presentation}$
---|---|---
$G(32,\,31)$ | $(\mathbb{Z}_{4})^{2}\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\mid x^{4}=y^{4}=[x,\,y]=1,\,z^{2}=1,$
| | $zxz^{-1}=xy^{2},\,zyz^{-1}=x^{2}y\rangle$
$G(32,\,32)$ | $(\mathbb{Z}_{2})^{2}\,.\,(\mathbb{Z}_{2})^{3}$ | $\langle x,\,y,\,z,\,u,\,w\mid u^{2}=w^{2}=1,$
| | $u=z^{2},\,u=x^{-2},\,w=y^{-2},$
| | $yxy^{-1}=x^{-1},\,[y,\,z]=1,\,xzxwz=1\rangle$
$G(32,\,33)$ | $(\mathbb{Z}_{4})^{2}\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\mid x^{4}=y^{4}=[x,\,y]=1,\,z^{2}=1,$
| | $zxz^{-1}=xy^{2},\,zyz^{-1}=x^{2}y^{-1}\rangle$
$G(32,\,34)$ | $(\mathbb{Z}_{4})^{2}\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\mid x^{4}=y^{4}=[x,\,y]=1,\,z^{2}=1,$
| | $zxz^{-1}=x^{-1},\,zyz^{-1}=y^{-1}\rangle$
$G(32,\,35)$ | $\mathbb{Z}_{4}\rtimes\mathsf{Q}_{8}$ | $\langle x,\,i,\,j,\,k\mid x^{4}=1,\,i^{2}=j^{2}=k^{2}=ijk,$
| | $ixi^{-1}=x^{-1},\,jxj^{-1}=x^{-1},\,kxk^{-1}=x\rangle$
$G(32,\,37)$ | $(\mathbb{Z}_{8}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\;|\;x^{8}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=x^{5},\,zyz^{-1}=y\rangle$
$G(32,\,38)$ | $(\mathbb{Z}_{8}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\;|\;x^{8}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=x,\,zyz^{-1}=x^{4}y\rangle$
$G(32,\,39)$ | $\mathbb{Z}_{2}\times\mathsf{D}_{16}$ | $\langle z\mid z^{2}=1\rangle\times\langle x,\,y\mid x^{2}=y^{8}=1,xyx^{-1}=y^{-1}\rangle$
$G(32,\,40)$ | $\mathbb{Z}_{2}\times\mathsf{QD}_{16}$ | $\langle z\mid z^{2}=1\rangle\times\langle x,\,y\mid x^{2}=y^{8}=1,xyx^{-1}=y^{3}\rangle$
$G(32,\,41)$ | $\mathbb{Z}_{2}\times\mathsf{Q}_{16}$ | $\langle w\mid w^{2}=1\rangle\times\langle x,\,y,\,z\mid x^{4}=y^{2}=z^{2}=xyz\rangle$
$G(32,\,42)$ | $(\mathbb{Z}_{8}\times\mathbb{Z}_{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z\;|\;x^{8}=y^{2}=z^{2}=1,\,[x,\,y]=1,$
| | $zxz^{-1}=x^{3},\,zyz^{-1}=x^{4}y\rangle$
$G(32,\,43)$ | $\mathbb{Z}_{8}\rtimes(\mathbb{Z}_{2})^{2}$ | $\langle x,\,y,\,z\mid x^{8}=1,\,y^{2}=z^{2}=[y,\,z]=1,$
| | $yxy^{-1}=x^{-1},\,zxz^{-1}=x^{5}\rangle$
$G(32,\,44)$ | $(\mathbb{Z}_{2}\times\mathsf{Q}_{8})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,i,\,j,\,k,\,z\mid x^{2}=z^{2}=1,\,i^{2}=j^{2}=k^{2}=ijk,$
| | $[x,\,i]=[x,\,j]=[x,\,k]=1,$
| | $zxz^{-1}=xi^{2},\,ziz^{-1}=j,\,zjz^{-1}=i\rangle$
$G(32,\,46)$ | $(\mathbb{Z}_{2})^{2}\times\mathsf{D}_{8}$ | $\langle z,\,w\mid z^{2}=w^{2}=[z,\,w]=1\rangle$
| | $\times\langle x,\,y\mid x^{2}=y^{4}=1,\,xyx^{-1}=y^{-1}\rangle$
$G(32,\,47)$ | $(\mathbb{Z}_{2})^{2}\times\mathsf{Q}_{8}$ | $\langle z,\,w\mid z^{2}=w^{2}=[z,\,w]=1\rangle$
| | $\times\langle i,\,j,\,k\mid i^{2}=j^{2}=k^{2}=ijk\rangle$
$G(32,\,48)$ | $(\mathbb{Z}_{4}\times(\mathbb{Z}_{2})^{2})\rtimes\mathbb{Z}_{2}$ | $\langle x,\,y,\,z,\,w\mid x^{4}=y^{2}=z^{2}=w^{2}=1,$
| | $[x,y]=[x,\,z]=[y,\,z]=1,$
| | $wxw^{-1}=x,\,wyw^{-1}=y,\,wzw^{-1}=x^{2}z\rangle$
$G(32,\,49)$ | $\mathsf{H}_{5}(\mathbb{Z}_{2})$ | $\langle\mathsf{r}_{1},\,\mathsf{t}_{1},\,\mathsf{r}_{2},\,\mathsf{t}_{2},\,\mathsf{z}\mid\mathsf{r}_{j}^{2}=\mathsf{t}_{j}^{2}=\mathsf{z}^{2}=1,$
| | $[\mathsf{r}_{j},\,\mathsf{z}]=[\mathsf{t}_{j},\,\mathsf{z}]=1,$
| | $[\mathsf{r}_{j},\,\mathsf{r}_{k}]=[\mathsf{t}_{j},\,\mathsf{t}_{k}]=1,$
| | $[\mathsf{r}_{j},\,\mathsf{t}_{k}]=\mathsf{z}^{-\delta_{jk}}\,\rangle,$ see (12)
$G(32,\,50)$ | $\mathsf{G}_{5}(\mathbb{Z}_{2})$ | $\langle\,\mathsf{r}_{1},\,\mathsf{t}_{1},\,\mathsf{r}_{2},\,\mathsf{t}_{2},\,\mathsf{z}\mid\mathsf{r}_{1}^{2}=\mathsf{t}_{1}^{2}=\mathsf{z}^{2}=1,$
| | $\mathsf{r}_{2}^{2}=\mathsf{t}_{2}^{2}=\mathsf{z},$
| | $[\mathsf{r}_{j},\,\mathsf{z}]=[\mathsf{t}_{j},\,\mathsf{z}]=1,$
| | $[\mathsf{r}_{j},\,\mathsf{r}_{k}]=[\mathsf{t}_{j},\,\mathsf{t}_{k}]=1,$
| | $[\mathsf{r}_{j},\,\mathsf{t}_{k}]=\mathsf{z}^{-\delta_{jk}}\,\rangle,$ see (13)
Table 2. Nonabelian groups of order $32$.
Source: groupprops.subwiki.org/wiki/Groups_of_order_32
# Continual Multi-task Gaussian Processes
Pablo Moreno-Muñoz1 Antonio Artés-Rodríguez1 Mauricio A. Álvarez2
1Dept. of Signal Theory and Communications, Universidad Carlos III de Madrid,
Spain
2Dept. of Computer Science, University of Sheffield, UK
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We address the problem of continual learning in multi-task Gaussian process
(GP) models for handling sequential input-output observations. Our approach
extends the existing prior-posterior recursion of online Bayesian inference,
i.e. past posterior discoveries become future prior beliefs, to the infinite
functional space setting of GPs. For reasons of scalability, we introduce
variational inference together with a sparse approximation based on inducing
inputs. As a consequence, we obtain tractable continual lower-bounds where two
novel Kullback-Leibler (KL) divergences intervene in a natural way. The key
technical property of our method is the recursive reconstruction of
conditional GP priors conditioned on the variational parameters learned so
far. To achieve this goal, we introduce a novel factorization of past
variational distributions, where the predictive GP equation propagates the
posterior uncertainty forward. We then demonstrate that it is possible to
derive GP models over many types of sequential observations, either discrete
or continuous and amenable to stochastic optimization. The continual inference
approach is also applicable to scenarios where potential multi-channel or
heterogeneous observations might appear. Extensive experiments demonstrate
that the method is fully scalable, shows reliable performance and is robust
to the propagation of uncertainty errors over a wide range of synthetic and
real-world datasets.
## 1 Introduction
Remarkable evidence of how necessary real-time adaptation is for machine
learning can be found in multiple medical applications, e.g. intensive
care unit (ICU) patients or electronic health records (EHR), among others. In
such cases, inference methods for probabilistic models typically focus on two
principal paradigms: i) discovering the latent structure that underlies a
sequence of observations and ii) adapting them to new incoming data. Out of
the medical framework, we often encounter situations where we want to solve
multiple tasks that evolve over time, potential examples are signal
processing, control, econometrics or even spatio-temporal demographics.
The resurgence of interest in probabilistic adaptive methods shows that the
better a model adapts to such time-evolving behavior, the easier its
applicability to real-world problems becomes. Among the adaptive approaches that we
may consider, in this paper we focus on continual ones. Particularly,
continual learning, also known as life-long learning, is a very general family
of online learning methods whose principal properties are the adaptation to
non i.i.d. data, characterization of tasks that evolve over time and capture
of new emergent tasks previously unseen by the model itself.
Gaussian process (GP) models (Rasmussen and Williams, 2006) are not excluded
from this necessity of real-time adaptation. Despite their extended use in
temporal applications, recursively updating the parameters without revisiting
training samples is not trivial. Particularly in such models, the difficulty
is twofold. First, the estimation of non-linear latent functions is constrained
by the same principles of online Bayesian learning, that is, how to re-introduce
former posterior discoveries as new prior beliefs. Secondly, since GP priors are
based on the construction of covariance matrices via kernel functions,
incrementally adapting such matrices to new incoming samples requires expensive
matrix completion or even unfeasible inversions when large-scale data are observed.
However, there has been a noticeable effort on adapting GP models for
sequential input-output observations over the past decades. As standard
Gaussian regression scenarios are usually accompanied by tractable solutions,
preliminary works focused exclusively on the iterative counterpart. In
particular, this paradigm attracted significant attention since seminal works
by Csató and Opper (2002) and Girard et al. (2003) presented two
preliminary alternatives for performing online predictions using GPs. The first one
proposed an online regression model where variational inference is used within
moment matching to fit sequential posterior distributions from one single
recent sample. In the second case, motivated by one-step ahead predictions,
they incorporate an additive input in an equivalent state-space model, which
consists of a mapping over the last few observed outputs, $L$ steps back.
Besides initial approaches to _online_ GPs, other recent works have also
addressed the continual learning problem. For example, sequential rank-one
updates of a locally trained GP were proposed in Nguyen-Tuong et al. (2008) or
even label ranking of data points for an inclusion-deletion strategy in an
active training set. The GP is learned by Expectation-Propagation (EP) as in
Henao and Winther (2010). Also for the single-output GP case, but closer to
the scalable framework presented in this paper, we find that the stochastic
gradient descent method in Hensman et al. (2013) for Gaussian regression and
Hensman et al. (2015) for classification is applicable to online settings, but
considering ever-increasing datasets, which _a priori_ may be problematic.
Another recent example is the semi-described (missing inputs) and semi-
supervised (missing outputs) GP learning model in Damianou and Lawrence
(2015), where a forecasting regression problem is seen as a semi-described
model where predictions are obtained iteratively in an auto-regressive manner.
In terms of scalability for single-output GP models, both Cheng and Boots
(2016) and Bui et al. (2017a) extended online learning methods and uncertainty
propagation to the popular variational inference setup of sparse GP
approximations. They used a novel Kullback-Leibler (KL) divergence that
constrains the new fitted distribution w.r.t. the one in the previous instant.
While the first work is only related to univariate Gaussian regression
problems, the last reference has the additional advantage of accepting limited
non-Gaussian likelihoods and is also able to include $\alpha$-divergences
for more general inference, whose theoretical bounds are analysed in Nguyen et
al. (2017).
An exception to the previous works is Solin et al. (2018), which, instead of
employing sparse methods, uses the approximate Markovian structure of
Gaussian processes to reformulate the problem as a state-space model. Within
this framework, the complexity is reduced from cubic to linear cost in the
number of observations, but still stays unfeasible w.r.t. the number of
states. Introducing a fast EP inference scheme helps to overcome this issue
and additionally, the model is able to perform online learning of kernel
hyperparameters as well as dealing with non-Gaussian likelihoods.
Moreover, if we pay attention to the treatment of non-stationary properties,
we see that most approaches assume a perpetual latent function behavior which
we aim to discover adaptively. In contrast to this assumption, Zhang et al.
(2019) recently introduced mixtures of GP experts (Rasmussen and Ghahramani,
2002) within sequential Monte Carlo (SMC) inference that addresses the
variability of such latent functions over time. It is worth mentioning that
Solin et al. (2018) is also a potential solution for non-stationary model
structure, though it uses a different approach.
In our paper, we focus on the general problem of streaming data modelling,
where samples can be observed as an irregular sequence of batches,
one-sample steps or even the case where the complete set of input-output
observations is available. Sequential data are not restricted to be i.i.d.
conditioned on the given model. Additionally, we assume that our dataset might
also be high-dimensional and that adaptation to non-Gaussian likelihoods is a
strict requirement. Similarly to Bui et al. (2017a), our model is fitted to
the aforementioned constraints, where scalability is addressed through sparse
approximations and we use variational inference (Titsias, 2009), which is the
standard practice in modern GPs.
Regarding multi-output Gaussian process (MOGP) models, we see that there have
been few attempts to extend them to the continual learning scenario. For
instance, Cheng et al. (2017) contributes to real-time monitoring of patients
via structured kernels inside a MOGP model. However, they update the
hyperparameters in real-time using momentum methods with a sliding window,
rather than discovering the posterior distribution over the latent functions
in an online manner. One exception is Yang et al. (2018), since they derive a
variational lower bound for multiple online regression. It is worth
mentioning that this is the work most closely related to our multi-output
extension, with the important difference that neither non-Gaussian likelihoods
nor a variational update of the hyperparameters are considered. In contrast to
our approach, they use particle filtering given that the model is constrained
by a fixed number of inducing-points in the sparse approximation.
Our main contribution in this paper is to provide a novel approach that
extends the existing posterior-prior recursion of online Bayesian inference,
to the infinite functional space setting of GP models. The key principle in
our model is the use of the conditional GP predictive distribution to build a
novel implicit prior expression where past posterior discoveries are
propagated forward. In addition, we introduce this solution with variational
inference for sparse approximations, which avoids any form of data revisiting.
The entire model is amenable to stochastic optimization, letting us consider
any irregular form in the sequential observation process. Another detail is
that the continual learning method is fully applicable to the multi-channel
framework, that is, to multi-output Gaussian process models.
Importantly, the ability to readapt conditional GP priors w.r.t. the
previously inferred variational distribution is feasible under non-Gaussian
likelihoods in the output observations. As non-Gaussian likelihoods are also
permitted in the multi-task setup, the continual GP model is useful for
heterogeneous problems (Moreno-Muñoz et al., 2018). This is the case of
several channels for which the outputs are a mix of continuous, categorical,
binary or discrete variables. We also consider asymmetric cases where the
observation process of data is not synchronous between channels. Finally, the
Python implementation is publicly available, with the special advantage of
being easily adapted to multi-task and heterogeneous likelihood problems.
This paper is divided into two main sections, which are organized as follows. In
Section 2, we introduce the sequential data formulation for single-output GPs,
which is valid for both univariate regression and classification problems. We
then review the deployment of continual variational inference over the sparse
GP approximation, where the past data revisiting issue is noticeable.
Moreover, we present the recurrent conditional prior reconstruction based on
online Bayesian learning that is later used in the definition of our continual
lower-bounds. In Section 3, we extend the sequential model for accepting
multiple output settings. Particularly, we derive stochastic variational
inference for sparse multi-output GPs that follows the same continual learning
mechanism but is amenable to heterogeneous likelihood models and asymmetric
channel setups. Finally, in Section 4, we study the performance of our
scalable method on several experiments with synthetic and real-world datasets
for both regression and classification tasks.
## 2 Continual Gaussian Processes
Consider supervised learning scenarios where pairs of input-output data
$\mathcal{D}=\\{\bm{x}_{n},y_{n}\\}^{N}_{n=1}$ are observed in a sequential
manner, with $\bm{x}_{n}\in\mathbb{R}^{p}$ and outputs $y_{n}$ being either
continuous or discrete. We assume the sequential observation process to be a
finite stream of smaller subsets or batches, such that
$\mathcal{D}=\\{\mathcal{D}_{1},\mathcal{D}_{2},\dots,\mathcal{D}_{T}\\}$.
Additionally, each $t$-th batch,
$\mathcal{D}_{t}=\\{\bm{x}_{n},y_{n}\\}^{N_{t}}_{n=1}$, may have an irregular
size, that is, different length per batch of data and $N_{t}<N$ in all cases.
From the GP perspective, we consider that every output sample is generated as
$y_{n}\sim p(y_{n}|f_{n})$, where $f_{n}$ is a non-linear function evaluation
$f(\bm{x}_{n})$. Here, the latent function $f$ that parameterizes the
likelihood model is drawn from a prior $f\sim\mathcal{GP}(0,k(\cdot,\cdot))$,
where $k(\cdot,\cdot)$ can be any valid covariance function or kernel, and the
zero-mean is assumed for simplicity.
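To fix ideas before discussing inference, the generative assumptions and the irregular batch stream can be written down in a few lines. The snippet below is a minimal illustrative sketch, not the released implementation; the RBF kernel, batch sizes and noise level are our own choices.

```python
# Minimal sketch of the generative model and the irregular batch stream.
# Kernel choice, batch sizes and variable names are illustrative assumptions.
import numpy as np

def rbf_kernel(X, Z, variance=1.0, lengthscale=0.3):
    """k(x, z) = variance * exp(-||x - z||^2 / (2 * lengthscale^2))."""
    d2 = np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
N, noise_var = 200, 0.05
X = rng.uniform(0, 1, size=(N, 1))                      # inputs x_n in R^p, with p = 1
K = rbf_kernel(X, X) + 1e-8 * np.eye(N)                 # GP prior covariance at the inputs
f = rng.multivariate_normal(np.zeros(N), K)             # f ~ GP(0, k), evaluated at X
y = f + np.sqrt(noise_var) * rng.standard_normal(N)     # Gaussian likelihood p(y_n | f_n)

# Irregular stream D = {D_1, ..., D_T}: the sizes N_t differ and are unknown a priori.
sizes = [30, 5, 80, 1, 84]                              # sums to N
splits = np.cumsum(sizes)[:-1]
batches = list(zip(np.split(X, splits), np.split(y, splits)))
for t, (Xt, yt) in enumerate(batches, 1):
    print(f"D_{t}: N_t = {len(yt)}")
```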
Since we do not know when the next subset $\mathcal{D}_{t}$ arrives at each
time-step, the waiting time and memory allocation resources cannot be
estimated a priori, mainly because the size of the batches is irregular
and unknown. Based on Bui et al. (2017a), we assume that receiving the entire
sequence of data and computing the posterior distribution $p(f|\mathcal{D})$
is unfeasible and extremely time-demanding. As an alternative, we consider
continual learning approaches, which refer to the ability to adapt models
in an online fashion when data samples are not i.i.d. and to update their
parameters without re-observing the entire data sequence.
In what follows, we will use the notation
$\mathcal{D}=\\{\mathcal{D}_{\text{old}},\mathcal{D}_{\text{new}}\\}$, where
$\mathcal{D}_{\text{old}}=\\{{\bm{x}_{\text{old}}},{\bm{y}_{\text{old}}}\\}$
refers to all observations seen so far and the partition
$\mathcal{D}_{\text{new}}=\\{{\bm{x}_{\text{new}}},{\bm{y}_{\text{new}}}\\}$
represents the smaller subset of new incoming samples. For this construction,
note that if $\mathcal{D}_{t}$ arrives at a given time, the old data
correspond to
$\mathcal{D}_{\text{old}}=\\{\mathcal{D}_{1},\cdots,\mathcal{D}_{t-1}\\}$
while $\mathcal{D}_{\text{new}}=\mathcal{D}_{t}$. This results in an ever-
increasing dataset $\mathcal{D}_{\text{old}}$ that is recursively evaluated.
### 2.1 Sparse approximations for sequential data
Exact inference in GP models is widely known for its $\mathcal{O}(N^{3})$
complexity for training and $\mathcal{O}(N^{2})$ per test prediction. Given
the previously described model, the computational effort for learning under
such sequential observations could be even more intensive, with a recurrent
cost
$\mathcal{O}(N_{1}^{3}),\mathcal{O}((N_{1}+N_{2})^{3}),\dots,\mathcal{O}(N^{3})$.
In order to sidestep that prohibitive complexity, we introduce auxiliary
variables also known as inducing inputs (Snelson and Ghahramani, 2006). The
auxiliary variables serve as an optimal subset of pseudo-observations that
summarize the data, reducing the cost of the learning process.
We start by defining the set of inducing inputs
$\mathcal{Z}=\\{\bm{z}_{m}\\}^{M}_{m=1}$, where $\bm{z}_{m}\in\mathbb{R}^{p}$
take values in the same space as $\bm{x}_{n}$. Moreover, we denote the
inducing variables ${\mathbf{u}}=[u_{1},\dots,u_{M}]^{\top}$ as the vector of
output function evaluations, where $u_{m}=f(\bm{z}_{m})$. Under a construction
of this form, the joint distribution $p(y_{n},f_{n},{\mathbf{u}})$,
simplified for a single output sample $y_{n}$, factorises as
$p(y_{n},f_{n},{\mathbf{u}})=p(y_{n}|f_{n})p(f_{n},{\mathbf{u}})=p(y_{n}|f_{n})p(f_{n}|{\mathbf{u}})p({\mathbf{u}}),$
(1)
where $p(y_{n}|f_{n})$ can be any valid likelihood model and
$p(f_{n}|{\mathbf{u}})$, $p({\mathbf{u}})$ are conditional and marginal GP
priors respectively. Similarly to the formulation of vectors ${\mathbf{u}}$,
we consider ${\mathbf{f}}=[f_{1},\dots,f_{N}]^{\top}$ to be the vector of
output function evaluations.
In practice, obtaining closed-form posterior distributions over both
${\mathbf{f}}$ and ${\mathbf{u}}$ is difficult and in many cases, impossible.
The problem is generally solved via variational methods, formally denoted with
approximations of the form $q({\mathbf{f}},{\mathbf{u}})\approx
p({\mathbf{f}},{\mathbf{u}}|\mathcal{D})$. Following the same derivation of
Titsias (2009), we assume that the auxiliary distribution $q$ factorises as
$q({\mathbf{f}},{\mathbf{u}})=p({\mathbf{f}}|{\mathbf{u}})q({\mathbf{u}})$,
reducing the problem to learn a single distribution $q({\mathbf{u}})$ that we
assume to be Gaussian.
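For concreteness, once $q({\mathbf{u}})=\mathcal{N}({\mathbf{m}},{\mathbf{S}})$ is fixed, the marginal $q(f_{*})=\int p(f_{*}|{\mathbf{u}})q({\mathbf{u}})d{\mathbf{u}}$ at arbitrary test inputs is Gaussian and available in closed form. The sketch below implements this standard sparse-GP marginal with NumPy; it is a generic illustration of the Titsias-style approximation under our own toy settings, not the authors' code.

```python
# Closed-form sparse variational marginal
#   q(f_*) = N( K_*u K_uu^{-1} m,  K_** - K_*u K_uu^{-1} (K_uu - S) K_uu^{-1} K_u* ).
# Generic illustrative sketch, not the paper's released implementation.
import numpy as np

def rbf_kernel(X, Z, variance=1.0, lengthscale=0.3):
    d2 = np.sum((X[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_gp_marginal(Xstar, Z, m, S, kernel):
    Kuu = kernel(Z, Z) + 1e-8 * np.eye(len(Z))
    Ksu = kernel(Xstar, Z)
    Kss = kernel(Xstar, Xstar)
    A = np.linalg.solve(Kuu, Ksu.T).T            # A = K_*u K_uu^{-1}
    mean = A @ m
    cov = Kss - A @ (Kuu - S) @ A.T
    return mean, cov

# Example with M = 10 inducing inputs and an arbitrary (toy) variational distribution q(u).
rng = np.random.default_rng(1)
Z = np.linspace(0, 1, 10)[:, None]               # inducing inputs z_m
m_u = rng.standard_normal(10)                    # variational mean of q(u)
L = 0.1 * rng.standard_normal((10, 10))
S_u = L @ L.T + 1e-3 * np.eye(10)                # variational covariance of q(u)
Xstar = np.linspace(0, 1, 50)[:, None]
mu, Sigma = sparse_gp_marginal(Xstar, Z, m_u, S_u, rbf_kernel)
print(mu.shape, Sigma.shape)                     # (50,), (50, 50)
```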
Importantly, we condition every observed output $y_{n}$ on the infinite-
dimensional function space $f$, similarly to Bui et al. (2017b), having
$p(y_{n}|f)$ instead. As a consequence, every variable $f$ will correspond to
an infinitely large number of function evaluations, i.e. the entire domain
$\mathbb{R}^{p}$, including the input values in $\mathcal{Z}$. It will play a
key role in the development of the continual inference mechanism later in this
section. (Footnote: infinite-dimensional integrals related to $f$ are reduced via
properties of Gaussian marginals; the lower bound equation is still tractable,
and the complete details are included in the Appendix.)
When using variational inference (VI) methods for sparse GP models, the common
approach is to fit some parameters $\bm{\phi}$ of the auxiliary distribution
$q({\mathbf{u}}|\bm{\phi})$ by maximizing a lower bound $\mathcal{L}$ on the
log-marginal likelihood of the dataset $\log p(\mathcal{D})$. In the GP
literature, this marginal distribution is often rewritten as $\log p(\bm{y})$
and in our case, we may express it also as $\log
p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}})$. From a VI perspective, the
log-marginal distribution of the sequential dataset can be decomposed as
$\log p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}})=\log\int
p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}}|f)p(f)df.$ (2)
Suppose now that both ${\bm{y}_{\text{old}}}$ and ${\bm{y}_{\text{new}}}$ are
not i.i.d. but become conditionally independent given the whole function space
$f$, allowing us to apply conditional independence (CI). This leads us to the
factorized likelihood
$p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}}|f)=p({\bm{y}_{\text{old}}}|f)p({\bm{y}_{\text{new}}}|f)$
as in Bui et al. (2017a), with two separate terms between the old and new
data. Then, any standard lower bound $\mathcal{L}$ that we want to build from
Eq. (2) would require to evaluate expectations of the form
$\mathbb{E}_{q(f)}[\log p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}}|f)]$,
where $q(f)=\int p(f|{\mathbf{u}})q({\mathbf{u}}|\bm{\phi})d{\mathbf{u}}$ as
in the uncollapsed version of the bound (Lázaro-Gredilla and Titsias, 2011,
Hensman et al., 2012). Notice that the evaluation of the expectations is
critical because the difference in size between ${\bm{y}_{\text{old}}}$ and
${\bm{y}_{\text{new}}}$ might be huge, i.e. millions of samples vs. hundreds,
respectively. This fact results in very long computation times for re-training
with a few more recent observations included in the model, mainly due to the
size of the likelihood term $p({\bm{y}_{\text{old}}}|f)$.
### 2.2 Recurrent prior reconstruction
A meaningful solution for avoiding the sequential evaluation of ever-increasing datasets is to approximate old likelihood terms $p({\bm{y}_{\text{old}}}|f)$ using the previously inferred (joint) variational distribution $q(f|{\bm{\phi}_{\text{old}}})$ at each time-step. This idea was first introduced in Bui et al. (2017a) by means of Bayes' rule, such that
$q(f|{\bm{\phi}_{\text{old}}})\approx
p(f|{\bm{y}_{\text{old}}},{\bm{x}_{\text{old}}})\propto
p(f)p({\bm{y}_{\text{old}}}|f),$ (3)
where the equality can be inverted to give a proportional estimate of the form
$p({\bm{y}_{\text{old}}}|f)\approx\frac{q(f|{\bm{\phi}_{\text{old}}})}{p(f)}.$
(4)
Having the recursive approximation in Eq. (4) for old likelihood terms, we can use it to build lower bounds $\mathcal{L}$ where data re-visiting is avoided. Under this strategy, the variational distribution $q(f|{\bm{\phi}_{\text{old}}})$ usually factorises according to $p(f_{\neq{\mathbf{u}}}|{\mathbf{u}},{\bm{\phi}_{\text{old}}})q({\mathbf{u}}|{\bm{\phi}_{\text{old}}})$, where $f=\\{f_{\neq{\mathbf{u}}}\cup{\mathbf{u}}\\}$. The main problem that we encounter here is in re-using distributions $q({\mathbf{u}}|{\bm{\phi}_{\text{old}}})$ estimated over a fixed number of inducing-points $\mathcal{Z}_{\text{old}}$. If, for example, the model requires a different subset of inducing inputs $\mathcal{Z}_{\text{new}}$, the previous posterior distribution cannot be introduced directly. This is what we will refer to as the explicit variational distribution issue. In particular, when we directly introduce Eq. (4) into our target lower bound $\mathcal{L}$, we recurrently introduce a summary of our data through the inducing-points ${\mathbf{u}}$ and their parameters ${\bm{\phi}_{\text{old}}}$. In terms of rigorous continual learning, this is another way of revisiting past observed data, and it forces the GP model to concatenate old and new subsets ${\mathbf{u}}$, something that can be undesirable for certain tasks, e.g. high-dimensional input problems.
#### Continual GP prior
Inspired by online Bayesian inference methods, where past posterior distributions are usually taken as future priors, our main goal is to reconstruct the GP prior conditioned on the given parameters ${\bm{\phi}_{\text{old}}}$. The particular construction is as follows. We take the posterior predictive distribution from GP models, which is usually obtained by marginalising the posterior probabilities $p(f|\mathcal{D})$ under the conditional distribution at test inputs $p(f_{*}|f)$, whose output values $y_{*}$ we aim to predict.
Typically, the predictive distribution takes the form
$p(f_{*}|\mathcal{D})=\int
p(f_{*}|{\mathbf{u}})p({\mathbf{u}}|\mathcal{D})d{\mathbf{u}}$ when it is
applied via sparse approximations. This posterior predictive formulation is
the key idea for recurrently building continual GP priors, that is, a new
implicit distribution at each time step, where all the estimated parameters
intervene. For its derivation, we take the appendix A.2 of Álvarez et al.
(2009) as our starting point. Thus, we have a conditional prior of the form
$p(u_{*}|{\mathbf{u}})=\mathcal{N}(u_{*}|k_{*{\mathbf{u}}}{\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}{\mathbf{u}},k_{**}-k_{*{\mathbf{u}}}{\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}k_{*{\mathbf{u}}}^{\top}),$
(5)
where $u_{*}$ refers to function evaluations $f(\cdot)$ on any arbitrary
input-vector $\mathcal{Z}_{*}$ that we may consider. Here, the covariance
matrix corresponds to
${\mathbf{K}}_{{\mathbf{u}}{\mathbf{u}}}\in\mathbb{R}^{M\times M}$, with
entries $k(\bm{z}_{i},\bm{z}_{j})$ as
$\bm{z}_{i},\bm{z}_{j}\in\mathcal{Z}_{\text{old}}$ and
$k_{*{\mathbf{u}}}=[k(\cdot,\bm{z}_{1}),\cdots,k(\cdot,\bm{z}_{M})]^{\top}$.
In a similar manner, $k_{**}=k(\cdot,\cdot)$ as in the kernel function of any
GP prior. Having the conditional distribution in Eq. (5), which combines both
explicit and implicit covariance function constructions, we may use the
expectations from the variational distribution
$q({\mathbf{u}}|{\bm{\phi}_{\text{old}}})$ to make the conditional GP prior
behave as the former posterior indicates. The process results in a novel
continual distribution, formally denoted
$\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})$, that we obtain as
$\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})\approx\int
p(u_{*}|{\mathbf{u}})q({\mathbf{u}}|{\bm{\phi}_{\text{old}}})d{\mathbf{u}}.$
(6)
Additionally, if we assume that
$q({\mathbf{u}}|{\bm{\phi}_{\text{old}}})=\mathcal{N}({\mathbf{u}}|\bm{\mu}_{\text{old}},{\mathbf{S}}_{\text{old}})$,
then our variational parameters become
${\bm{\phi}_{\text{old}}}=\\{\bm{\mu}_{\text{old}},{\mathbf{S}}_{\text{old}}\\}$.
The previous expression then leads us to an updated GP prior, whose form is
$u_{*}\sim\mathcal{GP}(k_{*{\mathbf{u}}}{\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}\bm{\mu}_{\text{old}},k_{**}+k_{*{\mathbf{u}}}{\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}({\mathbf{S}}_{\text{old}}-{\mathbf{K}}_{{\mathbf{u}}{\mathbf{u}}}){\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}k^{\top}_{*{\mathbf{u}}}).$
(7)
A similar expression is derived in Burt et al. (2019), where a theoretical analysis of sparse GP regression is performed outside the continual learning problem. In particular, the conditional GP prior in Eq. (7) coincides with the approximate posterior process that VI on sparse GP models targets through KL-divergence minimisation (Matthews et al., 2016). This result is of particular interest to us, since it provides a closed-form way to introduce Bayesian online learning into GP models, allowing us to naturally avoid any data revisiting by only passing past parameters forward and fixing the posterior-prior recursion.
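The reconstruction in Eq. (7) only requires standard kernel algebra over the old variational parameters. The following minimal sketch (Python/NumPy, with a hypothetical `kernel` callable and illustrative variable names; it is an illustration of Eq. (7), not the released implementation) evaluates the continual prior mean and covariance at arbitrary inputs:

```python
import numpy as np

def continual_gp_prior(Z_star, Z_old, mu_old, S_old, kernel, jitter=1e-6):
    """Evaluate the continual GP prior of Eq. (7) at inputs Z_star.

    Z_star : (P, p) inputs where the prior is reconstructed.
    Z_old  : (M, p) old inducing inputs.
    mu_old : (M, 1) old variational mean.
    S_old  : (M, M) old variational covariance.
    kernel : callable k(A, B) returning the (|A|, |B|) covariance matrix.
    """
    K_uu = kernel(Z_old, Z_old) + jitter * np.eye(Z_old.shape[0])
    K_su = kernel(Z_star, Z_old)           # k_{*u}
    K_ss = kernel(Z_star, Z_star)          # k_{**}
    A = np.linalg.solve(K_uu, K_su.T).T    # k_{*u} K_uu^{-1}
    mean = A @ mu_old
    cov = K_ss + A @ (S_old - K_uu) @ A.T
    return mean, cov
```

Evaluating such a routine at $\mathcal{Z}_{\text{new}}$ yields the distribution $\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})$ used in the continual bounds below.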
### 2.3 Continual lower-bounds
Exact posterior inference is still intractable using the previous framework
and variational methods are required. However, we are now able to sequentially
build lower bounds on the log-marginal likelihood in Eq. (2) by only updating
from a few recent observations $\mathcal{D}_{\text{new}}$. The continual
lower-bound $\mathcal{L}_{\mathcal{C}}$ is obtained as follows
$\log p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}})\geq\mathcal{L}_{\mathcal{C}}\approx\int q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)q(f|{\bm{\phi}_{\text{old}}})p(f|{\bm{\psi}_{\text{new}}})}{q(f|{\bm{\phi}_{\text{new}}})p(f|{\bm{\psi}_{\text{old}}})}df,$
(8)
where $q(f|{\bm{\phi}_{\text{new}}})$ is the new variational distribution that
we want to update, and ${\bm{\psi}_{\text{old}}}$ and
${\bm{\psi}_{\text{new}}}$ are the past and current subsets of hyperparameters
involved in the GP prior, respectively. We often use $\bm{\psi}$ to refer to both ${\bm{\psi}_{\text{old}}}$ and ${\bm{\psi}_{\text{new}}}$ simultaneously, i.e., $\bm{\psi}=\\{{\bm{\psi}_{\text{old}}},{\bm{\psi}_{\text{new}}}\\}$.
Again, to avoid data revisiting, we have substituted the past likelihood term
$p({\bm{y}_{\text{old}}}|f)$ by its unnormalised approximation, taken from the
inverted Bayes' rule in Eq. (4). A key difference with respect to Bui et al. (2017a) appears in the factorisation of our past variational distribution $q(f|{\bm{\phi}_{\text{old}}})$. Instead of conditioning on a fixed number of
inducing-points ${{\mathbf{u}}_{\text{old}}}$, we now make use of the
continual GP prior in Eq. (7), leading to
$q(f|{\bm{\phi}_{\text{old}}})=p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{old}}})\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}}),$
(9)
where we extend the factorisation of Titsias (2009) to accept the entire function space $f$. Moreover, the lower bound in Eq. (8) can be reduced by canceling all conditionals of the form $p(f_{\neq u_{*}}|u_{*})$. Notice that we use $f=\\{f_{\neq u_{*}}\cup u_{*}\\}$ to apply CI. The complete details of this derivation are provided in the Appendix.
Then, we obtain the triple-termed bound
$\mathcal{L}_{\mathcal{C}}=\int q(f|{\bm{\phi}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|f)df-\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{new}}})}df+\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})}{p(u_{*}|{\bm{\psi}_{\text{old}}})}df.$
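For reference, the cancellation behind this bound can be sketched in one step. Using the analogous factorisation $q(f|{\bm{\phi}_{\text{new}}})=p(f_{\neq u_{*}}|u_{*},{\bm{\psi}_{\text{new}}})q(u_{*}|{\bm{\phi}_{\text{new}}})$, Eq. (9) for $q(f|{\bm{\phi}_{\text{old}}})$, and the GP prior factorisations $p(f|\bm{\psi})=p(f_{\neq u_{*}}|u_{*},\bm{\psi})p(u_{*}|\bm{\psi})$, all conditionals $p(f_{\neq u_{*}}|u_{*},\cdot)$ inside the logarithm of Eq. (8) cancel, leaving
$\log\frac{p({\bm{y}_{\text{new}}}|f)q(f|{\bm{\phi}_{\text{old}}})p(f|{\bm{\psi}_{\text{new}}})}{q(f|{\bm{\phi}_{\text{new}}})p(f|{\bm{\psi}_{\text{old}}})}=\log p({\bm{y}_{\text{new}}}|f)-\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{new}}})}+\log\frac{\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})}{p(u_{*}|{\bm{\psi}_{\text{old}}})},$
whose expectation under $q(f|{\bm{\phi}_{\text{new}}})$ gives the three terms above.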
We are now interested in the derivation of a closed-form version of
$\mathcal{L}_{\mathcal{C}}$ that can be evaluated on a specific number of
inducing inputs $\mathcal{Z}$ rather than on the infinite-dimensional
integrals $f$. For that purpose, suppose that our new incoming samples
$\mathcal{D}_{\text{new}}$ contain a subset of input values
${\bm{x}_{\text{new}}}$ whose distance from all the previous ones
${\bm{x}_{\text{old}}}$ is significant. It makes sense to increase the
capacity of $\mathcal{Z}$ in order to refine the approximated posterior (Burt
et al., 2019). As a consequence, we introduce a new set of inducing variables
$\mathcal{Z}_{\text{new}}=\\{\bm{z}_{m}\\}^{M_{\text{new}}}_{m=1}$, where the
vector ${{\mathbf{u}}_{\text{new}}}$ of function evaluations corresponds to
${{\mathbf{u}}_{\text{new}}}=[u(\bm{z}_{1}),\cdots,u(\bm{z}_{M_{\text{new}}})]^{\top}$.
Notice that we aim to update the distribution
$q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})=\mathcal{N}({{\mathbf{u}}_{\text{new}}}|\bm{\mu}_{\text{new}},\bm{{\mathbf{S}}}_{\text{new}})$
where
${\bm{\phi}_{\text{new}}}=\\{\bm{\mu}_{\text{new}},\bm{{\mathbf{S}}}_{\text{new}}\\}$
in this case.
One strategy is that all the distributions that make reference to $u_{*}$ in $\mathcal{L}_{\mathcal{C}}$ can be substituted by ${{\mathbf{u}}_{\text{new}}}$. That is, the former prediction at test-points $\mathcal{Z}_{*}$ is now computed at $\mathcal{Z}_{\text{new}}$. In addition, except for the log-likelihood term in Eq. (2.3), distributions on $f$ may factorise as, for example,
$q(f|{\bm{\phi}_{\text{new}}})=p(f_{\neq{{\mathbf{u}}_{\text{new}}}}|{{\mathbf{u}}_{\text{new}}},{\bm{\psi}_{\text{new}}})q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})$,
particularly the variational ones. This convenient factorization allows us to
use properties of Gaussian marginals, integrating all function values
$u_{\neq{{\mathbf{u}}_{\text{new}}}}$ out of the $\mathcal{L}_{\mathcal{C}}$
bound. Given that, we are able to obtain a closed-form expression of the
$\mathcal{L}_{\mathcal{C}}$ bound where three prior and one posterior
distributions intervene. Respectively, these terms are: i) the new GP
$p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})$, ii) the old GP
$p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})$, iii) the continual
GP $\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})$ and
iv) the variational posterior
$q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})$. Then, using the
previous expressions we can further simplify $\mathcal{L}_{\mathcal{C}}$ to be
$\displaystyle\mathcal{L}_{\mathcal{C}}$ $\displaystyle=$
$\displaystyle\mathbb{E}_{q({{\mathbf{f}}_{\text{new}}})}[\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})]-\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})]$
(10) $\displaystyle+$
$\displaystyle\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})]-\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})],$
where $q({{\mathbf{f}}_{\text{new}}})=\int
p({{\mathbf{f}}_{\text{new}}}|{{\mathbf{u}}_{\text{new}}})q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})d{{\mathbf{u}}_{\text{new}}}$
as in Saul et al. (2016), with ${{\mathbf{f}}_{\text{new}}}$ being the vector
of output function evaluations $f(\cdot)$ over the inputs
${\bm{x}_{\text{new}}}$ (see the analytical expression of $q({{\mathbf{f}}_{\text{new}}})$ in the Appendix). This functional form of the
$\mathcal{L}_{\mathcal{C}}$ bound simplifies the continual learning process to
recurrently make the update of parameters
$\bm{\phi}^{(t+1)}_{\text{old}}~{}\leftarrow~{}\bm{\phi}^{(t)}_{\text{new}}:=\underset{{\bm{\phi}_{\text{new}}}}{\arg\max}\Big{[}\mathcal{L}_{\mathcal{C}}\Big{(}\mathcal{D}^{(t)}_{\text{new}},\bm{\phi}^{(t)}_{\text{old}}\Big{)}\Big{]}.$
From a practical point of view, when $t=0$ in the expression above, that is,
the first time step, we train the model using the bound in Hensman et al.
(2015) in order to set $\bm{\phi}^{(0)}_{\text{new}}$. The complete recursive
computation of Eq. (10) is detailed in Algorithm 1. Moreover, to learn the
variational parameters
${\bm{\phi}_{\text{new}}}=\\{\bm{\mu}_{\text{new}},{\mathbf{S}}_{\text{new}}\\}$,
we represent the covariance matrix as
${\mathbf{S}}_{\text{new}}={\mathbf{L}}_{\text{new}}{\mathbf{L}}_{\text{new}}^{\top}$.
In particular, we maximise $\mathcal{L}_{\mathcal{C}}$ w.r.t. the lower-triangular matrix ${\mathbf{L}}_{\text{new}}$ to ensure positive definiteness when using unconstrained optimization. In terms of computational effort, the three KL divergence terms in Eq. (10) are analytically tractable and of equal dimension (i.e. $M_{\text{new}}$). However, depending on the likelihood model considered for $p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})$, e.g. Gaussian, Bernoulli or Poisson distributed, the expectations could be intractable. For instance, if we observe binary samples $y_{n}\in\\{0,1\\}$, such integrals can be solved via Gauss-Hermite quadrature, similarly to Hensman et al. (2015), Saul et al. (2016).
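As a reference, each KL term in Eq. (10) is a divergence between two $M_{\text{new}}$-dimensional Gaussians and admits the standard closed form below. The sketch assumes generic Gaussian parameters (`m0`, `S0`, `m1`, `S1`) rather than any specific quantities of the model:

```python
import numpy as np

def gauss_kl(m0, S0, m1, S1):
    """KL[N(m0, S0) || N(m1, S1)] for full-covariance Gaussians."""
    d = m0.shape[0]
    diff = (m1 - m0).reshape(-1, 1)
    S1_inv_S0 = np.linalg.solve(S1, S0)                    # S1^{-1} S0
    maha = float(diff.T @ np.linalg.solve(S1, diff))       # Mahalanobis term
    logdet_S1 = 2.0 * np.sum(np.log(np.diag(np.linalg.cholesky(S1))))
    logdet_S0 = 2.0 * np.sum(np.log(np.diag(np.linalg.cholesky(S0))))
    return 0.5 * (np.trace(S1_inv_S0) + maha - d + logdet_S1 - logdet_S0)
```

With a routine of this kind, the last three terms of Eq. (10) amount to three such evaluations with the appropriate means and covariances.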
The selection of $\mathcal{Z}_{\text{new}}$ is of particular importance for the consistency of the continual learning recursion. Its size, $M_{\text{new}}$, may vary from the number $M_{\text{old}}$ of previous inducing-points $\mathcal{Z}_{\text{old}}$ without constraints. Notice that, if the incoming batch of samples $\mathcal{D}_{t}$ is determined by some inputs $\bm{x}_{\text{new}}$ that explore unseen regions of $\mathbb{R}^{p}$, then $\mathcal{Z}_{\text{new}}$ should capture this new corresponding area. However, because we marginalise the former pseudo-observations ${{\mathbf{u}}_{\text{old}}}$ in Eq. (7) for our continual prior construction, $\mathcal{Z}_{\text{old}}$ and $\mathcal{Z}_{\text{new}}$ are no longer permitted to coincide in any value. If they did, the continual bound might not hold, due to incorrect conditioning between variables. However, as we always assume that pseudo inputs $\bm{z}_{m}$ belong to the real-valued space $\mathbb{R}^{p}$, the problem is generally solved by choosing robust initializations for $\mathcal{Z}_{\text{new}}$, as sketched below. Additional constraints are not needed.
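One simple heuristic consistent with this constraint is to initialise $\mathcal{Z}_{\text{new}}$ at random over the newly observed input region and to perturb any point that exactly coincides with an old inducing input. The sketch below (a hypothetical routine, one-dimensional inputs assumed) is only one possible choice:

```python
import numpy as np

def init_Z_new(x_new, M_new, Z_old, eps=1e-3, seed=0):
    """Random initialisation of new inducing inputs, avoiding exact
    coincidence with the old ones Z_old (inputs of shape (., 1))."""
    rng = np.random.default_rng(seed)
    lo, hi = x_new.min(), x_new.max()
    Z_new = rng.uniform(lo, hi, size=(M_new, 1))
    for i, z in enumerate(Z_new):
        # jitter away any value that exactly matches an old inducing input
        while np.any(np.isclose(z, Z_old)):
            Z_new[i] = z + eps * rng.standard_normal()
            z = Z_new[i]
    return Z_new
```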
Algorithm 1 — Continual Gaussian process learning
1: Initialize $\bm{\phi}_{\text{new}}^{(0)}$ and
$\bm{\psi}_{\text{new}}^{(0)}$ randomly.
2: input: Observe $\mathcal{D}^{(0)}_{\text{new}}$
3: Maximise $\mathcal{L}\leq\log p(\mathcal{D}^{(0)}_{\text{new}})$ w.r.t.
$\\{\bm{\phi}_{\text{new}}^{(0)},\bm{\psi}_{\text{new}}^{(0)}\\}$. $//$
standard variational inference
4: for $t\in 1,\dots,T$ do
5: Update
$\\{\bm{\phi}_{\text{old}}^{(t)},\bm{\psi}_{\text{old}}^{(t)}\\}\leftarrow\\{\bm{\phi}_{\text{new}}^{(t-1)},\bm{\psi}_{\text{new}}^{(t-1)}\\}$
$//$ past learned parameters become the old ones
6: Choose initial $\mathcal{Z}_{\text{new}}$ $//$ initialization of inducing
points
7: Compute continual GP prior
$\widetilde{q}(\cdot|\bm{\phi}_{\text{old}}^{(t)})$ $//$ conditional prior
reconstruction
8: input: Observe $\mathcal{D}^{(t)}_{\text{new}}$
9: Maximise $\mathcal{L}_{\mathcal{C}}$ w.r.t.
$\\{\bm{\phi}_{\text{new}}^{(t)},\bm{\psi}_{\text{new}}^{(t)}\\}$. $//$
continual variational inference
10: end for
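For concreteness, Algorithm 1 maps onto a training loop of the following shape. The callables passed in (`fit_initial`, `reconstruct_prior`, `maximise_bound`, `choose_Z`) are hypothetical placeholders standing in for standard variational fitting, the reconstruction of Eq. (7), the maximisation of Eq. (10) and the inducing-input initialisation; they do not correspond to functions of any particular library:

```python
def continual_gp_learning(batches, fit_initial, reconstruct_prior,
                          maximise_bound, choose_Z):
    """Sketch of Algorithm 1 as a plain Python loop.

    batches           : iterable of data batches D^(0), D^(1), ...
    fit_initial       : callable(batch) -> (phi, psi), standard VI fit.
    reconstruct_prior : callable(phi_old) -> continual prior (Eq. (7)).
    maximise_bound    : callable(batch, prior, Z_new, psi_old) -> (phi, psi).
    choose_Z          : callable(batch) -> new inducing inputs Z_new.
    """
    batches = iter(batches)
    phi, psi = fit_initial(next(batches))   # t = 0, standard variational inference
    for batch in batches:                   # t = 1, ..., T
        phi_old, psi_old = phi, psi         # past learned parameters become old
        Z_new = choose_Z(batch)             # initialisation of inducing points
        prior = reconstruct_prior(phi_old)  # conditional prior reconstruction
        phi, psi = maximise_bound(batch, prior, Z_new, psi_old)
    return phi, psi
```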
### 2.4 Stochastic continual learning
Based on Hensman et al. (2013), we assume that the likelihood model is conditionally independent and fully factorisable across samples, so that $p(\bm{y}|{\mathbf{f}})=\prod_{n=1}^{N}p(y_{n}|f_{n})$ holds. The likelihood factorisation leads to expectation terms in Eq. (10) that also factorise across data observations, allowing us to introduce stochastic variational inference (SVI) methods (Hoffman et al., 2013). In our case, the particular form of the bound $\mathcal{L}_{\mathcal{C}}$ is expressed as
$\displaystyle\sum_{n=1}^{N_{\text{new}}}\mathbb{E}_{q({\mathbf{f}}_{n})}[\log
p(y_{n}|{\mathbf{f}}_{n})]$ $\displaystyle-$
$\displaystyle\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})]$
(11) $\displaystyle+$
$\displaystyle\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})]-\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})].$
Under a factorized bound of this form, we are able to combine continual learning with stochastic optimization, splitting our new incoming subset of data $\mathcal{D}_{\text{new}}$ into smaller mini-batches for faster training. Intuitively, this makes the $\mathcal{L}_{\mathcal{C}}$ bound applicable to a wide range of problems, particularly those with an extremely asymmetric sequence of observations. That is, if the size of streaming batches is still large for training, we can apply SVI until the next incoming batch is observed. The combination of SVI with continual learning leads to a best-of-both-worlds strategy, since stochastic approximations can often also be considered for streaming settings (Hensman et al., 2013). In contrast, if the number of new observations goes to the opposite limit, i.e. a reduced number of samples per time-step $t$, then the stochastic version in Eq. (11) can be avoided, leading to solutions closer to Solin et al. (2018) and Bayesian filtering.
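Under the factorised bound in Eq. (11), a mini-batch estimate simply rescales the sum of expected log-likelihood terms by $N_{\text{new}}/|\mathcal{B}|$ while keeping the KL terms unchanged. A minimal sketch of this estimator (assuming the per-sample expectations and the three KL values have already been computed) is:

```python
import numpy as np

def stochastic_bound(exp_loglik_batch, n_new, kl_new, kl_old, kl_tilde):
    """Unbiased mini-batch estimate of the continual bound in Eq. (11).

    exp_loglik_batch : (B,) expected log-likelihood terms for one mini-batch.
    n_new            : total number of new samples N_new.
    kl_new, kl_old, kl_tilde : the three KL terms of Eq. (11) (scalars).
    """
    scale = n_new / len(exp_loglik_batch)      # rescale the mini-batch sum
    data_fit = scale * np.sum(exp_loglik_batch)
    return data_fit - kl_new + kl_old - kl_tilde
```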
## 3 Generalization for Multi-task Models
Regarding the applicability of continual GP priors to high dimensional output
settings, we study how to adapt the previous results to sequences of multiple
output data. Concretely, we are interested in the generalisation of the continual GP scheme to accept extremely asymmetric cases, for instance, cases in which, in addition to an unknown stream of observations, the order of appearance of the multi-output dimensions might be unknown as well. Several
cases of both symmetric and asymmetric observation processes are depicted in
Figure 1.
We begin by considering parallel sequences of different sizes, formally denoted as channels, $\mathcal{D}_{d}$ with $d\in[1,\dots,D]$. From each $d$-th channel, we sequentially observe batches of input-output data, such that $\mathcal{D}_{d}=\\{\mathcal{Y}^{1}_{d},\mathcal{Y}^{2}_{d},\dots,\mathcal{Y}^{t}_{d}\\}$ where $\mathcal{Y}^{t}_{d}=\\{y_{d}(\bm{x}_{n})\\}^{N^{t}_{d}}_{n=1}$ and $\bm{x}_{n}\in\mathbb{R}^{p}$. Notice that here, time steps $t$ are not necessarily aligned across different channels, and their sizes $N^{t}_{d}$ may also vary. At this point, we initially consider the case in which each $y_{d}(\bm{x}_{n})$ is continuous and Gaussian distributed. This assumption will be relaxed later in this section.
Figure 1: Illustration of the scenarios that two sequences of streaming input-
output observations may belong to. Upper row. General cases for the two output
channels: symmetric (l) and asymmetric (r) sequential data. Lower row. Special
forms of the upper cases: i) one channel is longer at time $t$ (l1), ii)
channels have different frequency (l2), iii) switching missing channels (r1)
and iv) both output sequences are incomplete (r2).
$\textsc{r}=\text{right}$, $\textsc{l}=\text{left}$.
Having a multiple output problem of this type, we want to jointly model it
using multi-output Gaussian processes (MOGP). These models generalise the
flexible prediction system of GP approaches to the vector-valued random field
setup (Alvarez et al., 2012). In particular, it has been demonstrated that, by exploiting correlations among different streams of outputs, or channels, these models are able to improve the prediction for every $d$-th output. We aim to exploit the idea of correlated outputs in the multi-task sequential framework. However, little work has been done on extending MOGP models to the continual learning scenario. The most closely related works to ours are Cheng et al. (2017) and Yang et al. (2018). Importantly, our approach differs from Cheng et al. (2017) because we allow for continual updates of the MOGP model while they focus on adding structure to the kernel functions. The work by Yang et al.
(2018) also derives tractable variational lower bounds based on the sparse
approximation, but they do not handle non-Gaussian likelihoods and the
learning method uses particle filtering with a fixed number of inducing
points. In this section, we present a novel extension to perform continual
learning given any MOGP model, independently of the likelihood distributions
considered.
### 3.1 Multi-parameter GP prior
The following description of the multi-parameter GP prior is built on the
heterogeneous MOGP model (Moreno-Muñoz et al., 2018). Based on the single-
output model presented above, we begin by defining the set of Gaussian
likelihoods for each set of output vector values $\bm{y}_{d}$ given a channel
$\mathcal{D}_{d}$, such that
$\bm{y}_{d}=[y_{d}(\bm{x}_{1}),y_{d}(\bm{x}_{2}),\cdots,y_{d}(\bm{x}_{N^{t}_{d}})]^{\top}.$
We also assume for every batch that its samples are conditionally independent
(CI) given the vector of parameter functions
$\bm{\theta}_{d}(\bm{x})\in\mathcal{X}^{J_{d}}$, where $\mathcal{X}$ is the
specific domain for each parameterisation and $J_{d}$ is the number of
parameters that define the target distribution. In the particular case of
standard GP regression, the set $\bm{\theta}_{d}(\bm{x})$ corresponds to the
mean parameter $\mu_{d}(\bm{x})\in\mathbb{R}$, which is assumed to be a non-
linear function $f_{d}(\bm{x})$ drawn from a GP prior. This means that we use
$J_{d}=1$ in this first approach, with $\mu_{d}(\bm{x})=f_{d}(\bm{x})$ for all
outputs. A potential exception would be linking several functions together to
the same parameter $\theta_{d,j}(\bm{x})$ as in Saul et al. (2016), or casting
the standard deviation as positive-real valued function $\sigma_{d}(\bm{x})$,
i.e. heteroscedastic GP regression (Lázaro-Gredilla and Titsias, 2011). Both extensions are applicable to the present approach, but we avoid them for simplicity of notation. Our definition for every likelihood
distribution of $\bm{y}_{d}$ is therefore
$p(\bm{y}_{d}|\bm{\theta}_{d}(\bm{x}))=p(\bm{y}_{d}|{\mathbf{f}}_{d}(\bm{x}))=\mathcal{N}(\bm{y}_{d}|{\mathbf{f}}_{d}(\bm{x}),\sigma^{2}_{d}\mathbb{I}),$
(12)
where we specify the vector of latent output functions (LOF) as ${\mathbf{f}}_{d}(\bm{x})=[f_{d}(\bm{x}_{1}),f_{d}(\bm{x}_{2}),\cdots,f_{d}(\bm{x}_{N_{t}})]^{\top}\in\mathbb{R}^{N_{t}\times 1}$, which here acts as the mean vector of the aforementioned Gaussian distributions. Importantly, notice that the likelihood noise variances
$\sigma_{d}$ are assumed to be fixed. Hence, if we consider single-output
approaches for every channel, we would have $D$ independent priors for each
$f_{d}$ such that $f_{d}\sim\mathcal{GP}(0,k_{d}(\cdot,\cdot))$, with $k_{d}$
being different kernel functions.
Notice that, since our goal is to build a multi-parameter prior, we correlate
all output parameter functions $\mathcal{F}=\\{f_{d}(\bm{x})\\}^{D}_{d=1}$
together. That is, we jointly model the output channels through the linear
model of coregionalisation (LMC) (Journel and Huijbregts, 1978). The
construction of the multi-output prior is as follows.
Instead of using a single GP prior per $f_{d}$, we introduce an additional set
of independent latent functions (LF) denoted by
$\mathcal{U}=\\{u_{q}(\bm{x})\\}^{Q}_{q=1}$. Moreover, we assume that latent
functions $u_{q}(\bm{x})$ are linearly combined to produce $D$ LOFs, that is,
functions $\mathcal{F}$ that are conditionally independent given
$\mathcal{U}$. Then, each latent function is assumed to be drawn from an
independent GP prior, such that
$u_{q}(\cdot)\sim\mathcal{GP}(0,k_{q}(\cdot,\cdot))$, where $k_{q}$ is any
valid covariance function. Under this construction, each function
$f_{d}(\bm{x})$ is given by
$f_{d}(\bm{x})=\sum_{q=1}^{Q}\sum_{i=1}^{R_{q}}a^{i}_{q,d}u^{i}_{q}(\bm{x}),$
(13)
where coefficients $a^{i}_{q,d}\in\mathbb{R}$ and all $u^{i}_{q}$ are i.i.d.
realizations from the GP $u_{q}(\cdot)$. Given $Q$ zero-mean priors, the mean
function for $f_{d}(\bm{x})$ is set to zero as well. Any cross-covariance
matrix between output functions can be built as
$k_{f_{d}f_{d^{\prime}}}(\bm{x},\bm{x}^{\prime})=\text{cov}[f_{d}(\bm{x}),f_{d^{\prime}}(\bm{x}^{\prime})]$,
which is equal to
$\sum_{q=1}^{Q}b^{q}_{d,d^{\prime}}k_{q}(\bm{x},\bm{x}^{\prime})$, where
$b^{q}_{d,d^{\prime}}=\sum^{R_{q}}_{i=1}a^{i}_{q,d}a^{i}_{q,d^{\prime}}$.
Thus, we obtain the matrix ${\mathbf{B}}_{q}\in\mathbb{R}^{D\times D}$, whose
entries are $\\{b^{q}_{d,d^{\prime}}\\}^{D,D}_{d=1,d^{\prime}=1}$.
Alternatively, matrices ${\mathbf{B}}_{q}$ can be also formulated as
${\mathbf{A}}_{q}{\mathbf{A}}_{q}^{\top}$, where ${\mathbf{A}}_{q}$ has
entries $\\{a^{i}_{q,d}\\}^{D,R_{q}}_{d=1,i=1}$. In this work, we always
assume $R_{q}=1$, that is, we take a single sample per each independent $q$-th
GP prior, reducing coregionalisation matrices to be rank-one. This model is
also known in the literature as the semiparametric latent factor model (Teh et
al., 2005).
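As an illustration of the LMC construction with $R_{q}=1$, the cross-covariance between any two outputs reduces to a weighted sum of the $Q$ latent kernels. A small NumPy sketch (hypothetical kernel callables, illustrative only) could look as follows:

```python
import numpy as np

def lmc_cross_cov(X, X2, d, d_prime, A, kernels):
    """Cross-covariance K_{f_d f_d'}(X, X2) under the LMC with R_q = 1.

    A       : (D, Q) mixing coefficients, A[d, q] = a_{q,d}.
    kernels : list of Q callables k_q(X, X2) returning covariance matrices.
    """
    K = np.zeros((X.shape[0], X2.shape[0]))
    for q, k_q in enumerate(kernels):
        b_q = A[d, q] * A[d_prime, q]      # b^q_{d,d'} = a_{q,d} a_{q,d'}
        K += b_q * k_q(X, X2)
    return K
```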
It is important to remark that besides using LMC as the combination of LFs
$u_{q}(\cdot)$ to get $D$ potential output functions $f_{d}(\cdot)$, the
multi-output model can also accept other valid operators as, for example,
convolutional processes (Alvarez and Lawrence, 2009) or non-linear
combinations with Volterra series (Álvarez et al., 2019).
### 3.2 Sequential multi-output formulation
Having a multi-parameter GP prior with the aforementioned form, we want to
model the sequential observation process properly. Suppose that we expect to
observe a high-dimensional dataset
$\mathcal{D}=\\{\bm{x}_{n},\bm{y}_{n}\\}^{N}_{n=1}$ where we know a priori
that output vectors $\bm{y}_{n}\in\mathbb{R}^{D\times 1}$ are composed of $D$
features, such that
$\bm{y}_{n}=[y_{1}(\bm{x}_{n}),y_{2}(\bm{x}_{n}),\cdots,y_{D}(\bm{x}_{n})]^{\top}$
with $\bm{x}_{n}\in\mathbb{R}^{p}$ as in the single-output scenario. Again, we
assume that the data $\mathcal{D}$ will be observed as a flow of smaller
batches $\mathcal{D}_{1},\mathcal{D}_{2},\cdots,\mathcal{D}_{t}$ with
irregular size and unknown arrival time. We also suppose that the pairs of
output-input observations are aligned between channels, that is, the streaming
setting is equivalent to the single-output case but considering output vectors
$\bm{y}_{n}$ instead of scalars for simplicity in the derivation. Importantly,
the multi-output model presented here is also applicable to the case of
asymmetric channels (see Figure 1), as we will show later on this section.
The generative process of the multi-output samples is as follows. We assume
that there exist $Q$ latent functions $\mathcal{U}$ that are linearly combined
to produce $D$ latent output functions $\mathcal{F}$ along time, using the LMC
formulation. In our MOGP prior, each one of the $\mathcal{U}$ functions is
stationary across batches $\mathcal{D}_{t}$ and their output variables
$\bm{y}_{n}$ follow a probability distribution
$p(\bm{y}_{n}|{\mathbf{f}}_{n})=\prod_{d=1}^{D}p(y_{d}(\bm{x}_{n})|f_{d}(\bm{x}_{n}))$.
We also define the vector
${\mathbf{f}}_{n}=[{\mathbf{f}}_{1}^{\top},{\mathbf{f}}_{2}^{\top},\cdots,{\mathbf{f}}^{\top}_{D}]^{\top}\in\mathbb{R}^{DN_{t}\times
1}$. Moreover, we reuse the notation from the single-output case to indicate
that our dataset is recursively partitioned, as
$\mathcal{D}=\\{\mathcal{D}_{\text{old}},\mathcal{D}_{\text{new}}\\}$, where
$\mathcal{D}_{\text{new}}=\mathcal{D}_{t}$ at each time step $t$ and
$\mathcal{D}_{\text{old}}$ ever increases.
When training the MOGP model for exact inference, the problem is analogous to the continual GP case. That is, we encounter a recurrent computational cost that now also includes $D$, the number of outputs, such that $\mathcal{O}(D^{3}N_{1}^{3}),\mathcal{O}(D^{3}(N_{1}+N_{2})^{3}),\cdots,\mathcal{O}(D^{3}N^{3})$. Even if we avoid the use of non-Gaussian likelihoods for every output, for which exact posterior distributions are intractable, such computational cost is still infeasible. Therefore, inducing variables are introduced within variational inference for scalability. Sparse approximation methods have already been used in the context of MOGP (Alvarez and Lawrence, 2009, Álvarez et al., 2010, Moreno-Muñoz et al., 2018). The subtle difference from the single-output case lies in the fact that pseudo-observations are not taken from the output functions $\mathcal{F}$ but from the latent ones $\mathcal{U}$ instead. Consequently, the extra layer that the multi-output GP adds for correlating latent functions is also used for the sparse approximation, inducing a two-step conditioning on the model: output function values are conditioned on latent functions, and latent function vectors are conditioned on the subset of pseudo-observations. Under this setting, we define $Q$ sets of $M_{q}$ inducing variables, one per function $u_{q}(\cdot)$, such that $\bm{z}=\\{\bm{z}_{m}\\}^{M_{q}}_{m=1}\in\mathbb{R}^{M_{q}\times p}$. It is important to mention that these subsets are not restricted to take the same values of $\bm{z}_{m}$ across dimensions nor the same size $M_{q}$. However, we consider all $M_{q}$ to be identical and equal to $M$ in this work, for simplicity of notation. We also denote
${\mathbf{u}}_{q}=[u_{q}(\bm{z}_{1}),u_{q}(\bm{z}_{2}),\cdots,u_{q}(\bm{z}_{M})]^{\top}$
as the vector of LF evaluations given the $u_{q}$ process and
${\mathbf{u}}=[{\mathbf{u}}^{\top}_{1},{\mathbf{u}}^{\top}_{2},\cdots,{\mathbf{u}}^{\top}_{Q}]^{\top}\in\mathbb{R}^{QM\times
1}$ for the whole set of functions $\mathcal{U}$. Notice that here, we have
the sparse GP notation transformed for the multi-output problem.
Given $D$ output functions $\mathcal{F}$ and $Q$ latent functions
$\mathcal{U}$, we build our joint prior to be
$p(\mathcal{F},\mathcal{U})=p(\mathcal{F}|\mathcal{U})p(\mathcal{U}|\bm{\psi})$,
where again, we use $\bm{\psi}$ to refer to the subset of hyperparameters
involved in the MOGP prior. Using the infinite-dimensional approach that we
introduced in the single-output case, we can factorize our prior by
conditioning on the finite number of inducing points ${\mathbf{u}}$ as
$p(\mathcal{U}|\bm{\psi})=p(\mathcal{U}_{\neq{\mathbf{u}}}|{\mathbf{u}},\bm{\psi})p({\mathbf{u}}|\bm{\psi}),$
(14)
where $\mathcal{U}_{\neq{\mathbf{u}}}$ refers to all latent functions values
$\mathcal{U}$ not including ${\mathbf{u}}$, that is,
$\mathcal{U}=\mathcal{U}_{\neq{\mathbf{u}}}\cup{\mathbf{u}}$. The prior
distribution over ${\mathbf{u}}$ also factorises across latent functions, as
$p({\mathbf{u}}|\bm{\psi})=\prod_{q=1}^{Q}p({\mathbf{u}}_{q}|\bm{\psi})$ with
${\mathbf{u}}_{q}\sim\mathcal{N}(\bm{0},{\mathbf{K}}_{q})$ and
${\mathbf{K}}_{q}\in\mathbb{R}^{M\times M}$ corresponds to
$k_{q}(\bm{z}_{i},\bm{z}_{j})$ with entries $\bm{z}_{i}$,
$\bm{z}_{j}\in\bm{z}$. The dimension of ${\mathbf{K}}_{q}$ changes with the number of inducing-point evaluations, determining the model’s maximum complexity. This last detail plays an important role when the input domain grows with the appearance of newer observations (Burt et al., 2019).
Hence, our primary goal is to obtain the posterior distribution $p({\mathbf{f}},{\mathbf{u}}|\mathcal{D})$, which we know is analytically intractable in the presence of inducing points and potential non-Gaussian likelihoods. If we follow the variational approach of Titsias (2009), where we approximate our posterior with an auxiliary Gaussian distribution $q(\cdot,\cdot)$, we may consider the following factorisation as in Álvarez et al. (2010).
$p({\mathbf{f}},{\mathbf{u}}|\mathcal{D})\approx
q({\mathbf{f}},{\mathbf{u}})=p({\mathbf{f}}|{\mathbf{u}})q({\mathbf{u}})=\prod_{d=1}^{D}p({\mathbf{f}}_{d}|{\mathbf{u}})\prod^{Q}_{q=1}q({\mathbf{u}}_{q}),$
where we have a product of $Q$ Gaussian distributions, one per latent process,
with
$q(\mathbf{u}_{q})=\mathcal{N}({\mathbf{u}}_{q}|\bm{\mu}_{{\mathbf{u}}_{q}},{\mathbf{S}}_{{\mathbf{u}}_{q}})$
and where the conditional distribution $p({\mathbf{f}}_{d}|{\mathbf{u}})$ is
given by
$p({\mathbf{f}}_{d}|{\mathbf{u}})=\mathcal{N}\Big{(}{\mathbf{f}}_{d}|{\mathbf{K}}_{\mathbf{f}_{d}{\mathbf{u}}}\mathbf{K}^{-1}_{{\mathbf{u}}{\mathbf{u}}}{\mathbf{u}},\mathbf{K}_{{\mathbf{f}}_{d}{\mathbf{f}}_{d}}-\mathbf{K}_{{\mathbf{f}}_{d}{\mathbf{u}}}{\mathbf{K}}^{-1}_{{\mathbf{u}}{\mathbf{u}}}{\mathbf{K}}^{\top}_{{\mathbf{f}}_{d}{\mathbf{u}}}\Big{)},$
with ${\mathbf{K}}_{{\mathbf{f}}_{d}{\mathbf{u}}}\in\mathbb{R}^{N\times QM}$
being the cross-covariance matrix obtained by evaluating correlation between
$f_{d}({\mathbf{x}})$ and $u_{q}({\mathbf{z}})$. We also denote
${\mathbf{K}}_{{\mathbf{u}}{\mathbf{u}}}\in\mathbb{R}^{QM\times QM}$ as the
block-diagonal matrix formed by the ${\mathbf{K}}_{q}$ matrices.
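In this sparse parameterisation, ${\mathbf{K}}_{{\mathbf{u}}{\mathbf{u}}}$ is block-diagonal over the $Q$ latent processes and, for the LMC with $R_{q}=1$, the cross-covariance between $f_{d}(\bm{x})$ and $u_{q}(\bm{z})$ is $a_{q,d}\,k_{q}(\bm{x},\bm{z})$. A hedged sketch of how these matrices could be assembled (hypothetical kernel callables, NumPy/SciPy only) is:

```python
import numpy as np
from scipy.linalg import block_diag

def build_mogp_matrices(X, Z_list, d, A, kernels, jitter=1e-6):
    """Assemble K_uu (block-diagonal) and K_{f_d u} for a sparse LMC-MOGP.

    X       : (N, p) inputs of output d.
    Z_list  : list of Q arrays (M, p) of inducing inputs, one per latent u_q.
    A       : (D, Q) LMC coefficients with A[d, q] = a_{q,d} (R_q = 1).
    kernels : list of Q callables k_q(A, B).
    """
    K_blocks, K_fu_blocks = [], []
    for q, (Z_q, k_q) in enumerate(zip(Z_list, kernels)):
        K_q = k_q(Z_q, Z_q) + jitter * np.eye(Z_q.shape[0])
        K_blocks.append(K_q)
        K_fu_blocks.append(A[d, q] * k_q(X, Z_q))  # cov[f_d(x), u_q(z)]
    K_uu = block_diag(*K_blocks)                    # (QM, QM)
    K_fu = np.hstack(K_fu_blocks)                   # (N, QM)
    return K_uu, K_fu
```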
### 3.3 Avoiding revisiting multiple likelihoods
When using variational inference, we fit the distributions
$q({\mathbf{u}}_{q})$ by maximising a lower bound $\mathcal{L}$ of the log-
marginal likelihood $\log p(\mathcal{D})$. In the MOGP literature, this
marginal is also written as $\log p(\bm{y})$ and in our case, we express it
also as $\log p({\bm{y}_{\text{old}}},{\bm{y}_{\text{new}}})$. Given the
previously defined sparse MOGP model, this probability distribution can be
decomposed as a double integral
$\log p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}})=\log\iint
p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}}|\mathcal{F})p(\mathcal{F},\mathcal{U})d\mathcal{F}d\mathcal{U},$
(15)
where we now consider the finite set of output values ${\bm{y}_{\text{old}}}$
and ${\bm{y}_{\text{new}}}$ to be conditioned on the set of whole function
domains $\mathcal{F}$ as in Bui et al. (2017a) but for the multiple output
case. Due to this assumption, we have a double integration over both
$\mathcal{F}$ and $\mathcal{U}$ where we can apply conditional independence in
the likelihood term of (15). This leads us to obtain
$p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}}|\mathcal{F})=p({\bm{y}_{\text{new}}}|\mathcal{F})p({\bm{y}_{\text{old}}}|\mathcal{F})$.
For simplicity, we will denote both terms as the new and old likelihoods
respectively.
As previously mentioned, when dealing with variational inference, any standard lower bound $\mathcal{L}$ over (15) requires sequentially evaluating expectations of former log-likelihood terms $\log p({\bm{y}_{\text{old}}}|{\mathbf{f}})$. However, under the assumption of a multi-output GP model, the recurrent evaluation of expectations becomes even more demanding. In particular, due to the factorization of LOFs, it is necessary to compute, at least, $D$ integrals over the dimensions of the old data vectors ${\bm{y}_{\text{old}}}$. Notice that each $d$-th dimension might be characterized by a different likelihood function that we aim to estimate through expected values. Fortunately, the meaningful solution of Bui et al. (2017a) still holds in our multiple-channel setting. We can approximate all probabilities $p({\bm{y}_{\text{old}}}|{\mathbf{f}})$ by means of Bayes' rule. As long as
$q(\mathcal{F},\mathcal{U})\approx
p(\mathcal{F},\mathcal{U}|{\bm{y}_{\text{old}}},\bm{x}_{\text{old}})\propto
p(\mathcal{F},\mathcal{U})p({\bm{y}_{\text{old}}}|\mathcal{F}),$ (16)
we can invert the Bayes rule equality to obtain an unnormalized estimate of
the likelihood term $p({\bm{y}_{\text{old}}}|\mathcal{F})$ as
$p({\bm{y}_{\text{old}}}|\mathcal{F})\approx\frac{q(\mathcal{F},\mathcal{U})}{p(\mathcal{F},\mathcal{U})}.$
(17)
Importantly, the two distributions that intervene in the quotient of Eq. (17) factorize as follows
$\displaystyle
q(\mathcal{F},\mathcal{U})=p(\mathcal{F}|\mathcal{U})p(\mathcal{U}_{\neq{\mathbf{u}}}|{\mathbf{u}},{\bm{\psi}_{\text{old}}})\prod_{q=1}^{Q}q({\mathbf{u}}_{q}),$
(18) $\displaystyle
p(\mathcal{F},\mathcal{U})=p(\mathcal{F}|\mathcal{U})p(\mathcal{U}_{\neq{\mathbf{u}}}|{\mathbf{u}},{\bm{\psi}_{\text{old}}})\prod_{q=1}^{Q}p({\mathbf{u}}_{q}|{\bm{\psi}_{\text{old}}}),$
(19)
where both variational posteriors $q(\cdot)$ and priors $p(\cdot)$ are
evaluated over the inducing points given the respective $Q$ latent functions.
This fact will make it easier to obtain separated KL divergence terms in the
future continual lower bound for multi-task problems. Additionally, if we introduce the aforementioned expression in Eq. (17) as a sequential estimator of our multiple old likelihood terms given some previously inferred distribution $q(\mathcal{F},\mathcal{U}|{\bm{\phi}_{\text{old}}})$, we can reformulate Eq. (15) as
$\log
p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}})\approx\log\iint\frac{p({\bm{y}_{\text{new}}}|\mathcal{F})p(\mathcal{F},\mathcal{U})q(\mathcal{F},\mathcal{U})}{p(\mathcal{F},\mathcal{U})}d\mathcal{F}d\mathcal{U},$
(20)
where both prior distributions $p(\mathcal{F},\mathcal{U})$ in the quotient
differ given different subsets of hyperparameters, i.e. the new
${\bm{\psi}_{\text{new}}}$ and the former ones ${\bm{\psi}_{\text{old}}}$.
Having an approximated log-marginal distribution of this form, we can build our lower bound $\mathcal{L}\leq\log p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}})$ by means of Jensen’s inequality and without revisiting past samples.
$\mathcal{L}=\iint
q(\mathcal{F},\mathcal{U}|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|\mathcal{F})p(\mathcal{F},\mathcal{U})q(\mathcal{F},\mathcal{U})}{q(\mathcal{F},\mathcal{U}|{\bm{\phi}_{\text{new}}})p(\mathcal{F},\mathcal{U})}d\mathcal{F}d\mathcal{U}.$
(21)
As previously mentioned in Section 2, there is still a problem related to the
use of past _explicit_ distributions in continual lower bounds $\mathcal{L}$,
e.g. reusing distributions evaluated over past inducing points might be
problematic. This issue remains in the multi-output setup as we have to
propagate past inducing points ${{\mathbf{u}}_{\text{old}}}$ forward, for each
latent function, in order to approximate likelihood terms with the expression
in Eq. (18). To avoid it, we adapt the continual GP prior idea within the
predictive expressions to the multiple output setting.
Consider an arbitrary set of test inducing inputs $\mathcal{Z}_{*}$. Assuming
that $p({\mathbf{u}}|\mathcal{D})\approx q({\mathbf{u}})$, the predictive
distribution $p(\mathcal{U}_{*}|\mathcal{D})$ can be approximated as $\int
p(\mathcal{U}_{*}|{\mathbf{u}})q({\mathbf{u}})d{\mathbf{u}}$, where we used
$\mathcal{U}_{*}$ to denote the LF values taken on $\mathcal{Z}_{*}$. While
$q({\mathbf{u}})$ factorises across the $Q$ latent function vectors
${\mathbf{u}}_{q}$, the conditional multi-output prior
$p(\mathcal{U}_{*}|{\mathbf{u}})$ is analogous to the one that we obtained in
Eq. (5) but having block matrices ${\mathbf{K}}_{q}$ instead. This means that
we have the same mechanism used to build continual GP priors, that now works
similarly but in the latent function layer rather than in the output function
one obtained after mixing. As a consequence, for each one of the $q$-th non-
linear functions, we will set a continual GP prior of the form
$\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})\approx\int
p(u_{*}|{\mathbf{u}}_{q})q({\mathbf{u}}_{q}|{\bm{\phi}_{\text{old}}})d{\mathbf{u}}_{q}$.
Moreover, because every one of the latent functions has its own independent covariance function, the continual update process is separated as well.
particular, we assume the existence of $Q$ parallel priors of the form
$u_{q,*}\sim\mathcal{GP}(k_{*{\mathbf{u}}_{q}}{\mathbf{K}}^{-1}_{{\mathbf{u}}_{q}{\mathbf{u}}_{q}}\bm{\mu}_{q,\text{old}},k_{**}+k_{*{\mathbf{u}}_{q}}{\mathbf{K}}^{-1}_{{\mathbf{u}}_{q}{\mathbf{u}}_{q}}({\mathbf{S}}_{q,\text{old}}-{\mathbf{K}}_{{\mathbf{u}}_{q}{\mathbf{u}}_{q}}){\mathbf{K}}^{-1}_{{\mathbf{u}}_{q}{\mathbf{u}}_{q}}k^{\top}_{*{\mathbf{u}}_{q}}),$
(22)
where
$k_{*{\mathbf{u}}_{q}}=[k_{q}(\cdot,\bm{z}_{1}),\cdots,k_{q}(\cdot,\bm{z}_{M_{q}})]^{\top}$
refers to the values taken on the corresponding kernel constructor. The
development of the multi-output version of the continual lower bound is now
feasible. First, we use the predictive prior to factorize the expression in
Eq. (18) as
$q(\mathcal{F},\mathcal{U})=p(\mathcal{F}|\mathcal{U})p(\mathcal{U}_{\neq{\mathbf{u}}}|u_{q,*},{\bm{\psi}_{\text{old}}})\prod_{q=1}^{Q}q(u_{q,*})$,
where, for instance, we can set $u_{q,*}={\mathbf{u}}_{q,\text{new}}$ to make
the prior-posterior recursion available. Hence, we can further simplify
$\mathcal{L}$ by means of the continual predictive prior and Gaussian
marginals properties to be
$\displaystyle\mathcal{L}$ $\displaystyle=\iint
q(\mathcal{F},\mathcal{U}|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|\mathcal{F})p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})}{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})}d\mathcal{F}d\mathcal{U}+\iint
q(\mathcal{F},\mathcal{U})\log\frac{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})}{p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})}d\mathcal{F}d\mathcal{U}.$
(23)
This expression can also be rewritten in a more recognizable form as
$\displaystyle\mathcal{L}=$
$\displaystyle\sum_{d=1}^{D}\mathbb{E}_{q({\mathbf{f}}_{d,\text{new}})}\left[\log
p(\bm{y}_{d,\text{new}}|{\mathbf{f}}_{d,\text{new}})\right]-\sum_{q=1}^{Q}\text{KL}\left[q({\mathbf{u}}_{q,\text{new}}|{\bm{\phi}_{\text{new}}})||p({\mathbf{u}}_{q,\text{new}}|{\bm{\psi}_{\text{new}}})\right]$
$\displaystyle+$
$\displaystyle\sum_{q=1}^{Q}\text{KL}\left[{q_{\text{new}}}({\mathbf{u}}_{q,\text{new}}|{\bm{\phi}_{\text{new}}})||p({\mathbf{u}}_{q,\text{new}}|{\bm{\psi}_{\text{old}}})\right]-\sum_{q=1}^{Q}\text{KL}\left[q({\mathbf{u}}_{q,\text{new}}|{\bm{\phi}_{\text{new}}})||q({\mathbf{u}}_{q,\text{new}}|{\bm{\phi}_{\text{old}}})\right],$
(24)
where
$q({\mathbf{f}}_{d,\text{new}})=\mathbb{E}_{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})}[p({\mathbf{f}}_{d,\text{new}}|{{\mathbf{u}}_{\text{new}}})]$
is the approximate marginal posterior for every
${\mathbf{f}}_{d,\text{new}}=f_{d}({\bm{x}_{\text{new}}})$ that can be
obtained analytically via
$\displaystyle
q({\mathbf{f}}_{d,\text{new}})=\mathcal{N}({\mathbf{f}}_{d,\text{new}}$
$\displaystyle|\mathbf{K}_{{\mathbf{f}}_{d,\text{new}}{{\mathbf{u}}_{\text{new}}}}\mathbf{K}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}\bm{\mu}_{{{\mathbf{u}}_{\text{new}}}},{\mathbf{K}}_{{\mathbf{f}}_{d,\text{new}}{\mathbf{f}}_{d,\text{new}}}$
$\displaystyle+\mathbf{K}_{{\mathbf{f}}_{d,\text{new}}{{\mathbf{u}}_{\text{new}}}}\mathbf{K}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}({\mathbf{S}}_{{\mathbf{u}}_{\text{new}}}-{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}})\mathbf{K}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}{\mathbf{K}}_{{\mathbf{f}}_{d,\text{new}}{{\mathbf{u}}_{\text{new}}}}^{\top}),$
where
$\bm{\mu}_{{{\mathbf{u}}_{\text{new}}}}=[\bm{\mu}^{\top}_{{\mathbf{u}}_{1,\text{new}}},\cdots,\bm{\mu}^{\top}_{{\mathbf{u}}_{Q,\text{new}}}]$
and ${\mathbf{S}}_{{\mathbf{u}}_{\text{new}}}$ is a block matrix whose
elements are given by ${\mathbf{K}}_{{\mathbf{u}}_{q,\text{new}}}$. The
interpretability of the multi-output continual bound in Eq. (3.3) is of particular interest in our work. In the single-output case, both expectations and divergence terms refer to the same layer of computation, that is, the one where both observations and output functions $f(\cdot)$ lie and parameterise the likelihood distribution. However, in the multi-output setting, the expectation term in Eq. (3.3) focuses on the observation counterpart, while the KL regularization terms exclusively affect the layer of the latent functions $\mathcal{U}$. In particular, the three KL divergences regularise the continual variational inference process, which will be updated sequentially if, for instance, the input domain increases along time. In contrast, we have $D$ expectation terms on a different layer, which are invisible to the continual learning mechanism because they are only evaluated conditioned on the most recently learned parameters. This property makes the method applicable to asymmetric scenarios where, for instance, one of the channels might be unobserved after some time step.
### 3.4 Stochastic updating and heterogeneous likelihoods
The present approach is also valid when the continual lower bound in Eq. (3.3) factorises across data observations. The expectation term $\mathbb{E}_{q({\mathbf{f}}_{d,\text{new}})}\left[\log p(\bm{y}_{d,\text{new}}|{\mathbf{f}}_{d,\text{new}})\right]$ is then expressed as an $N$-dimensional sum, amenable to stochastic variational inference (Hoffman et al., 2013, Hensman et al., 2013, Moreno-Muñoz et al., 2018) by using small subsets of training samples. The optimization method uses noisy estimates of the global objective gradient at each time step of the sequential process. Similar stochastic updates have already been used in Hensman et al. (2013, 2015), Saul et al. (2016), Moreno-Muñoz et al. (2018).
The scalable bound makes our continual multi-output model applicable to larger
datasets, i.e. multi-channel patient monitoring signals or ICU time-series,
among others.
An important detail to consider is the hyperparameter learning, that is, the sequential update of the variables associated with the covariance functions $\\{k_{q}(\cdot,\cdot)\\}^{Q}_{q=1}$ that have been previously denoted as ${\bm{\psi}_{\text{old}}}$ and ${\bm{\psi}_{\text{new}}}$. Because abrupt changes in the hyperparameters may affect the learning process of $q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})$, which is sensitive to amplitude or smoothness, we use several steps of the variational EM algorithm (Beal, 2003) within each sequential update. This makes the
optimization process more stable through coordinate ascent. In Algorithm 2, we
present all necessary computations for continually learning the proposed MOGP
model. The key difference between Algorithms 1 and 2 is that the latter one
requires $Q$ iterations over the LFs.
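A minimal sketch of this alternation, with hypothetical `optimise_phi` and `optimise_psi` routines standing in for the variational E- and M-steps on $\mathcal{L}_{\mathcal{C}}$, is:

```python
def variational_em_update(phi, psi, optimise_phi, optimise_psi, n_steps=5):
    """Coordinate-ascent (variational EM) steps within one sequential update."""
    for _ in range(n_steps):
        phi = optimise_phi(phi, psi)   # E-step: update variational parameters
        psi = optimise_psi(phi, psi)   # M-step: update kernel hyperparameters
    return phi, psi
```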
Additionally, having presented the previous continual model for multi-output Gaussian process regression, we may effortlessly consider the appearance of other non-Gaussian output variables in the sequential dataset $\mathcal{D}$. Besides popular scalable GP methods for dealing with non-Gaussian likelihoods (Dezfouli and Bonilla, 2015, Hensman et al., 2015) in both single and multi-output scenarios, we focus on the open problem of heterogeneous likelihood models. In this case, each $d$-th output can be either a continuous, categorical, binary or discrete variable, following a different likelihood model. For this general situation, we can adapt the current construction of the continual lower-bounds to accept heterogeneous MOGP models (Moreno-Muñoz et al., 2018). In particular, we generalise the probability distributions followed by the output values $y_{d}$ in $\mathcal{Y}$ to accept any valid combination of likelihood functions. Notice that in the multi-output GP framework, the continual learning mechanism is placed exclusively on the latent function layer. Hence, the appearance of a new likelihood distribution only affects the $d$-th expectation term in the first part of Eq. (3.3).
Algorithm 2 — Multi-channel continual GP learning
1: Initialize $\bm{\phi}_{\text{new}}^{(0)}$ and
$\bm{\psi}_{\text{new}}^{(0)}$ randomly.
2: input: Observe $\mathcal{D}^{(0)}_{\text{new}}$
3: Maximise $\mathcal{L}\leq\log p(\mathcal{D}^{(0)}_{\text{new}})$ w.r.t.
$\\{\bm{\phi}_{\text{new}}^{(0)},\bm{\psi}_{\text{new}}^{(0)}\\}$. $//$
standard variational inference
4: for $t\in 1,\dots,T$ do
5: Update
$\\{\bm{\phi}_{\text{old}}^{(t)},\bm{\psi}_{\text{old}}^{(t)}\\}\leftarrow\\{\bm{\phi}_{\text{new}}^{(t-1)},\bm{\psi}_{\text{new}}^{(t-1)}\\}$
$//$ past learned parameters become the old ones
6: for $q\in 1,\dots,Q$ do
7: input: Observe $\mathcal{D}^{(t)}_{\text{new}}$
8: Choose initial $\mathcal{Z}_{\text{new}}$ $//$ initialization of inducing
points
9: Compute continual GP priors
$\widetilde{q}(\cdot|\bm{\phi}_{\text{old}}^{(t)})$ $//$ conditional prior
reconstruction
10: end for
11: Maximise $\mathcal{L}_{\mathcal{C}}$ w.r.t.
$\\{\bm{\phi}_{\text{new}}^{(t)},\bm{\psi}_{\text{new}}^{(t)}\\}$. $//$
continual variational inference
12: end for
## 4 Experiments
Our experiments in this paper are focused on three main topics that aim to demonstrate the utility and robustness of the approach over both toy and real-world datasets. The three topics are: i) performance of the continual GP model under single-output streaming observations, ii) resistance to error propagation when reusing variational approximations, including fitting to the appearance of tasks, non-Gaussian data and heterogeneous multi-output settings, iii) applicability to real-world problems with multi-dimensional online data, potentially configured as asymmetric channels. The aforementioned experiments are organized into several subsections related to single-output regression, classification, multi-channel settings and, lastly, heterogeneous likelihood models.
For all experiments, we used a modified version of the Python code released with Moreno-Muñoz et al. (2018), which presents similar features of scalability and adaptability to multi-output and non-Gaussian data. For the optimization process w.r.t. the continual lower bounds $\mathcal{L}_{\mathcal{C}}$, we make use of the L-BFGS-B algorithm and, when the stochastic counterpart is necessary, we consider ADADELTA instead, which is included in the climin library. Further details about the general setting of hyperparameters are included in the Appendix. Moreover, our code is publicly available in the repository github.com/pmorenoz/ContinualGP/, where all the experiments included in this section can be fully reproduced.
### 4.1 Continual GP regression
In our first subset of experiments, we evaluate the performance of the continual GP approach for single-output scenarios where streaming data is real-valued and assumed Gaussian distributed, and we aim to perform sequential non-linear regression. We first set up a toy problem with three different versions of how the incoming samples appear. We denote them as i) streaming, ii) overlapping and iii) incremental data. In the first case, we have a sequence of $t=10$ non-overlapping partitions that are recursively delivered to the learning system. Each partition avoids revisiting the previously explored input domain. Secondly, we relax the assumption of non-overlapping partitions of data to consider partially overlapping tasks where parts of the input domain may also be re-visited (not the observations). The last version of the experiment refers to the same dataset, which is now progressively completed as new batches emerge. Importantly, we always use a single-output latent function for modeling the likelihood parameters $\bm{\theta}$, that is, we avoid solutions similar to the chained GP (Saul et al., 2016), which could also be applied to the current experiment with continual GPs.
Streaming. The streaming data experiment consists of $t=10$ batches of data that are observed in a sequential manner. In this case, we consider that each batch has approximately a similar size, so the scenario is not irregular w.r.t. the number of samples per batch or their input domain. We set the initial number of inducing points to $M=3$, which is then increased following the rule $M(t)=3t$. The rule can be modified depending on the problem considered, as we will see later in additional experiments. We consider a synthetic dataset of $N=2000$ samples where $30\%$ of them are used for testing. The ground-truth expression of the true latent functions is included in the Appendix. All inducing points are initialized at random in positions different from the previous ones; that is, at time $t+1$ there are no values of $\mathcal{Z}_{\text{new}}$ that coincide with the previous ones in $\mathcal{Z}_{\text{old}}$ from step $t$. In Figure 2, we show three snapshots of the iterative learning process, concretely the initial step at $t=1$, the intermediate one at $t=5$ and the final step at $t=10$. It is important to mention that, at each time-step, the posterior predictive computation of the curves does not use any past parameters, only the ones learned in the most recent iteration. Notice that it is the last trained model, which avoids revisiting data, that predicts all along the input space explored so far.
Figure 2: Results from continual GP regression applied to toy streaming data.
Sequential batches correspond to non-overlapping partitions. The sequence
consists of $t=10$ consecutive subsets of observations that the model acquires
recursively. Red elements represent the GP predictive posterior over the newer
input domain while the blue ones refer to the previously visited input space.
Train and test data samples are plotted as colored crosses and dots
respectively. Black crosses indicate the position of the inducing inputs at
each time-step. The pink line corresponds to the limit between the past and
the new input domain explored by the continual GP.
Additionally, in Table 1 we include the negative log-predictive density (NLPD)
values obtained from each $t$-th subset of the test observations. All
posterior predictive densities are computed via Monte-Carlo (MC) for the given
selected likelihood distribution. The performance of the method is evaluated
in three different ways: i) test prediction at the new observed input region,
ii) decay of the predictive precision in $t$-th past seen input areas without
revisiting old data samples and iii) prediction quality of the GP model all
along the input domain.
For instance, in the case of the $t^{\prime}=1$ column, the NLPD is evaluated on the same test-samples as the GP model does at $t=1$. One can see how the red error metrics remain approximately static around an average NLPD value of $13.29\times 10^{-2}$, which is only slightly higher than the initial value obtained when data was first observed at that region. Initially, the model obtained an average of $13.13\times 10^{-2}$. This means that, although the continual variational approach suffers a small reduction in the predictive precision once past training samples are never revisited again, the accuracy still remains constant 9 steps after its maximization, that is, 9 GP prior reconstructions and 9 optimization processes where the learned uncertainty measurements are not overwritten. One last detail is that, for all metrics shown, we report means and standard deviations over 10 simulations with different initializations.
Table 1: Streaming single-output data. Test-NLPD metrics ($\times 10^{-2}$). Column new: Predictive error values obtained in the new observed input area at each time-step ($t^{\prime}=t$). Columns old: Predictive error values obtained in the past observed input areas at time-steps ($t^{\prime}=1,t^{\prime}=4$ and $t^{\prime}=8$). Colored values correspond to the GP prediction on the same test-samples at the $t$-th iteration. Column global: NLPD values over the test-samples all along the input domain at each time-step $t$.
step | new ($t^{\prime}=t$) | old ($t^{\prime}=1$) | old ($t^{\prime}=4$) | old ($t^{\prime}=8$) | global
---|---|---|---|---|---
$t=1$ | $\mathbf{13.13\pm 0.10}$ | - | - | - | $13.13\pm 0.13$
$t=2$ | $12.50\pm 0.13$ | $13.24\pm 0.10$ | - | - | $25.74\pm 0.23$
$t=3$ | $12.54\pm 0.08$ | $13.29\pm 0.13$ | - | - | $38.48\pm 0.27$
$t=4$ | $\mathbf{11.59\pm 0.04}$ | $13.33\pm 0.12$ | - | - | $52.26\pm 0.28$
$t=5$ | $11.34\pm 0.05$ | $13.28\pm 0.10$ | $11.34\pm 0.06$ | - | $63.78\pm 0.32$
$t=6$ | $11.56\pm 0.06$ | $13.29\pm 0.11$ | $11.33\pm 0.06$ | - | $75.35\pm 0.46$
$t=7$ | $12.71\pm 0.09$ | $13.29\pm 0.12$ | $11.34\pm 0.08$ | - | $88.09\pm 0.55$
$t=8$ | $\mathbf{11.92\pm 0.05}$ | $13.29\pm 0.13$ | $11.34\pm 0.06$ | - | $100.01\pm 0.62$
$t=9$ | $13.55\pm 0.08$ | $13.29\pm 0.09$ | $11.34\pm 0.08$ | $11.98\pm 0.06$ | $113.60\pm 0.58$
$t=10$ | $11.73\pm 0.06$ | $13.30\pm 0.14$ | $11.34\pm 0.07$ | $11.97\pm 0.04$ | $125.34\pm 0.68$
Overlapping. In this version of the single-output experiment, we study the potential difficulties of the GP regression model in accepting overlapping sequential batches. When we refer to overlapping partitions, we consider the case where a few samples revisit the previously observed input space. The setting can be observed in Figure 3, where we use shaded purple areas to indicate the overlapping sections of the new incoming batches. As in the previous streaming experiment, we consider a sequence of $t=10$ batches, and now the model is initialized with $M=4$ inducing points instead. The increasing rule for the sparse approximation is still linear in the time steps, as in the aforementioned example. Also, the learning system is limited to a maximum of 100 iterations per optimization run and, importantly, the initial step of the model is trained using the standard variational bound of scalable sparse GP models (Hensman et al., 2015, Saul et al., 2016, Moreno-Muñoz et al., 2018). Notice that on the first iteration, there is no past variational distribution from which to reconstruct the conditional GP.
In Table 2, we show similar NLPD results to the ones included in Table 1. The
first column corresponds to the NLPD metrics obtained over the new observed
test-samples at the $t$-th time-step. Intermediate columns show the predictive performance of the GP over the past visited data. Notice that the $t^{\prime}=1$ column values correspond to the NLPD obtained by the GP at each $t$-th time-step over the input region first visited at $t=1$.
We can observe how the performance of the continual learning approach is equivalent to the streaming case. Red, blue and purple values indicate the metrics obtained once the corresponding initial training step has passed. In all cases, the precision of the predictive quantities suffers an initial small reduction, but remains constant as the model proceeds through further iterations. The final number of inducing points is $M=22$.
Table 2: Overlapping single-output data. Test-NLPD ($\times 10^{-2}$). Column new: Predictive error values obtained in the new observed input area at each time-step ($t^{\prime}=t$). Columns old: Predictive error values obtained in the past observed input areas at time-steps ($t^{\prime}=1,t^{\prime}=4$ and $t^{\prime}=8$). Colored values correspond to the GP prediction on the same test-samples at the $t$-th iteration. Column global: NLPD values over the test-samples all along the input domain at each time-step $t$. In this experiment, input areas are overlapped with the previous one.
 | new | old | old | old |
---|---|---|---|---|---
step | $t^{\prime}=t$ | $t^{\prime}=1$ | $t^{\prime}=4$ | $t^{\prime}=8$ | global
$t=1$ | $\mathbf{13.26\pm 0.29}$ | - | - | - | $13.26\pm 0.29$
$t=2$ | $11.70\pm 0.20$ | $12.23\pm 0.10$ | - | - | $23.94\pm 0.30$
$t=3$ | $13.60\pm 0.12$ | $12.26\pm 0.11$ | - | - | $37.58\pm 0.31$
$t=4$ | $\mathbf{12.63\pm 0.13}$ | $12.08\pm 0.17$ | - | - | $50.37\pm 0.50$
$t=5$ | $14.50\pm 0.36$ | $12.07\pm 0.12$ | $12.66\pm 0.11$ | - | $64.93\pm 0.77$
$t=6$ | $13.68\pm 0.16$ | $12.04\pm 0.07$ | $12.77\pm 0.10$ | - | $79.38\pm 0.63$
$t=7$ | $13.80\pm 0.10$ | $12.24\pm 0.09$ | $12.75\pm 0.12$ | - | $92.86\pm 0.73$
$t=8$ | $\mathbf{13.45\pm 0.09}$ | $12.03\pm 0.09$ | $12.67\pm 0.11$ | - | $106.21\pm 0.93$
$t=9$ | $12.64\pm 0.09$ | $12.09\pm 0.08$ | $12.69\pm 0.06$ | $13.78\pm 0.09$ | $119.04\pm 1.01$
$t=10$ | $12.84\pm 0.15$ | $12.08\pm 0.11$ | $12.71\pm 0.08$ | $13.65\pm 0.09$ | $131.93\pm 1.01$
Figure 3: Three captures of the continual learning process of our single-output GP regressor. From top to bottom, plots correspond to steps $t=1$, $t=5$ and $t=10$. Blue and red elements correspond to past and new observed data for both training (crosses) and test (dots) data. We consider a sequence of batches that repeatedly overlap with the last observed ones. The purple area indicates the overlapping region where past and novel data are mixed.
Incremental. The last version of the toy single-output GP regression experiment shows relevant properties of the model itself. In this case, we set up an experiment where batches do not advance through the input space. Instead, we establish a pseudo-stochastic setting where batches are observed across the entire input domain (e.g., similarly to the mini-batches used in standard SVI methods). The key point here is that we can train, reconstruct and modify the complexity of our model following any consideration observed from the new incoming data. Notice that the model allows us to both increase and decrease the number of inducing points and, hence, the computational cost of the variational sparse approximation. In Figure 4 we can see how the number of inducing points is increased as new batches appear, even though they explore similar regions of the input space. At the same time, the prediction curves improve as the number of inducing points increases, while only the last observed training data is considered at each step. This is interesting because the continual mechanism is similar to SVI methods for GPs but uses analytic gradients instead (stochastic VI implies noisy gradient vectors whose quality depends on the mini-batch size and the learning-rate hyperparameter), and it is also flexible to batches of irregular size.
For future applications, our experiment provides a novel intuition about the potential utility of the continual learning approach as an improved method for stochastic approximations. Typically, when using SVI for sparse GP models, one fixes the number of inducing-inputs $M$ and applies a stochastic gradient method computed from a smaller subset of samples. However, if the sparse approximation requires a larger number of inducing-inputs at some iteration of the learning process (e.g. the input domain increases), the entire GP would have to be re-defined. When using the continual GP approach, this problem disappears, as one can augment, reduce or keep constant the number $M$ of inducing-inputs. The complexity of the sparse approximation could be chosen, for instance, using the rates in Burt et al. (2019). Our method also accepts SVI with a stochastic-gradient optimizer. In the single-output experiments, the initial number of inducing-inputs considered is $M=4$ and, for this version, we set an increasing rule of the form $M(t)=M(t-1)+2t$, as illustrated in the sketch below.
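As a minimal illustration of this flexibility, the following Python sketch is our own and is not taken from the ContinualGP repository; the input range, the uniform batch sampling and the re-spreading of inducing inputs over the visited region are assumptions made only for this example.

```python
import numpy as np

def inducing_schedule(M0, T):
    """Number of inducing inputs per step, following M(t) = M(t-1) + 2*t with M(1) = M0."""
    M = [M0]
    for t in range(2, T + 1):
        M.append(M[-1] + 2 * t)
    return M

def respread_inducing_inputs(x_seen, M_t):
    """Place M_t inducing inputs uniformly over the input region visited so far (assumed 1-D)."""
    return np.linspace(x_seen.min(), x_seen.max(), M_t)

# Toy continual loop: incremental batches repeatedly cover the same input region.
rng = np.random.default_rng(0)
x_seen = np.empty(0)
for t, M_t in enumerate(inducing_schedule(M0=4, T=10), start=1):
    x_batch = rng.uniform(0.0, 1.0, size=200)        # assumed batch over the whole domain
    x_seen = np.concatenate([x_seen, x_batch])
    Z_t = respread_inducing_inputs(x_seen, M_t)
    print(f"t={t}: M={M_t}, inducing inputs span [{Z_t.min():.2f}, {Z_t.max():.2f}]")
```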
In Table 3, we show the NLPD results from the iterative process of $t=10$
steps. In contrast to the results obtained in the previous versions of the GP
regression experiment, here the robustness against error propagation is not
that obvious. Particularly, we can see that the prediction error values still
improve after the first training iteration. This is caused by the fact that
the density of inducing points is higher and also because the continual
learning process is correctly propagating the posterior distribution forward.
Figure 4: Representation of the continual learning process of the GP at time-
steps $t=1$, $t=3$ and $t=7$. Blue and red elements correspond to past and new
observed data for both training (crosses) and test (dots) data. The dataset is
incrementally delivered to the learning system in small batches all along the
input area. The GP model increases the number of inducing-inputs (black
crosses) as long as new observations come in. Red curves indicate the
posterior predictive curves over the entire input space.
Table 3: Incremental
single-output data. Test-NLPD. Column new: Predictive error values obtained in
the new observed input area at each time-step ($t^{\prime}=t$). Columns old:
Predictive error values obtained in the past observed input areas at time-
steps ($t^{\prime}=1,t^{\prime}=4$ and $t^{\prime}=8$). Colored values
correspond to the GP prediction on the same test-samples at the $t$-th
iteration. Column global: NLPD values over the test-samples all along the
input domain at each time-step $t$. In this experiment, all batches are
overlapping.
| new | old | old | old |
---|---|---|---|---|---
step | $t^{\prime}=t$ | $t^{\prime}=1$ | $t^{\prime}=4$ | $t^{\prime}=8$ | global
$t=1$ | $\mathbf{3.17\pm 14.34}$ | - | - | - | $3.17\pm 1.43$
$t=2$ | $2.58\pm 5.69$ | $2.56\pm 4.71$ | - | - | $5.14\pm 1.04$
$t=3$ | $1.46\pm 3.70$ | $1.22\pm 3.00$ | - | - | $3.94\pm 1.07$
$t=4$ | $\mathbf{1.95\pm 6.28}$ | $2.00\pm 6.32$ | - | - | $7.90\pm 2.32$
$t=5$ | $1.50\pm 2.71$ | $1.45\pm 3.99$ | $1.38\pm 2.56$ | - | $7.16\pm 1.54$
$t=6$ | $0.80\pm 0.35$ | $0.77\pm 0.75$ | $0.79\pm 0.85$ | - | $4.93\pm 0.33$
$t=7$ | $0.69\pm 0.82$ | $0.66\pm 0.48$ | $0.68\pm 0.32$ | - | $4.88\pm 0.28$
$t=8$ | $\mathbf{0.63\pm 0.23}$ | $0.66\pm 0.16$ | $0.68\pm 0.29$ | - | $5.43\pm 0.23$
$t=9$ | $0.66\pm 0.18$ | $0.65\pm 0.17$ | $0.66\pm 0.18$ | $0.62\pm 0.14$ | $6.00\pm 0.17$
$t=10$ | $0.63\pm 0.16$ | $0.64\pm 0.13$ | $0.66\pm 0.19$ | $0.62\pm 0.11$ | $6.65\pm 0.16$
(all std. $\times 10^{-3}$)
Dollar Exchange Rate. For our first experiment with a real-world dataset, we consider the problem of sequentially predicting a foreign exchange rate w.r.t. the European currency (EUR) (currency data can be found at http://fx.sauder.ubc.ca/data.html). The setting of our experiment consists of daily ratios between the US dollar (USD) and the Euro (EUR), taken over 48 months. The total number of samples is $N=922$. In this experiment, we split the dataset into 4 subsets, each corresponding approximately to one year. Our goal is to perform GP regression once a year without forgetting the previously learned latent functions. For the regression model, we consider a Gaussian likelihood distribution with a fixed noise parameter $\sigma=10^{-2}$ and a Matérn kernel function for the GP. This also illustrates the applicability of the continual learning approach beyond vanilla GPs. Initialization values of the hyperparameters are included in the Appendix.
Similarly to Figure 2 for the toy regression experiment, in Figure 5 we show 4 iterations of the sequential training process. We use different colors to indicate old and new training samples. The GP mean predictive function (black) remains fitted all along the input domain as the model is re-trained with new data. We set the initial number of inducing-points to $M=20$, which is doubled at each time-step.
Figure 5: Evolution of the mean posterior predictive curve (black) over time under the dollar exchange data. Every 12 months, the model is re-updated without revisiting past training samples. The underlying output latent function is modelled with a GP prior with a Matérn kernel.
### 4.2 Continual GP classification
The approach presented in this paper is also valid in the presence of non-Gaussian likelihood models, which implies introducing additional approximations for the computation of expectations. Hence, the expected values of the likelihood terms can be computed via Gauss-Hermite quadrature whenever the integrals are intractable. As an example of the continual GP performance over binary data, we choose the banana dataset, used for demonstrative experiments of scalable GP classification tasks (Hensman et al., 2015, Bui et al., 2017a).
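To make the quadrature step concrete, the snippet below is a minimal sketch of our own (not taken from the ContinualGP code) of how the variational expectation $\mathbb{E}_{q(f)}[\log p(y|f)]$ can be approximated with Gauss-Hermite nodes for a Bernoulli likelihood with a probit link, given the marginal mean and variance of $q(f)$ at a single input; the specific marginal values are placeholders.

```python
import numpy as np
from scipy.stats import norm

def gauss_hermite_expectation(log_lik, mean, var, n_points=20):
    """Approximate E_{N(f|mean,var)}[log_lik(f)] with Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_points)
    f = mean + np.sqrt(2.0 * var) * nodes          # change of variables f = mean + sqrt(2 var) x
    return np.sum(weights * log_lik(f)) / np.sqrt(np.pi)

def probit_loglik(y):
    """Log-likelihood of a binary label y in {0, 1} under a probit link."""
    return lambda f: norm.logcdf((2.0 * y - 1.0) * f)

# Example: marginal q(f) = N(0.3, 0.5) and observed label y = 1.
value = gauss_hermite_expectation(probit_loglik(1), mean=0.3, var=0.5)
print(f"E_q[log p(y=1|f)] ~ {value:.4f}")
```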
Banana Dataset. In the continual GP classification experiment with real-world
data, we consider the case of a non-Gaussian likelihood model with an input
dimensionality greater than one. Particularly, the banana dataset consists of
$N=5200$ pairs of input-output observations, where we select a percentage of
$30\%$ for testing the predictive error metrics. All inputs have a dimension
$p=2$. In Figure 6, we plot the 4-step inference process, where we initially set up a grid of inducing points with $M=3$ inducing inputs per side. Grey-scaled colors correspond to non-revisited training samples.
Figure 6: Performance of the continual GP learning approach under non-Gaussian data for binary classification tasks. Past samples are plotted in grey scale. Black curves represent the frontier between positive and negative predictions w.r.t. the output values. Additionally, the last r.h.s. plot shows the final prediction of the model over the entire 2-dimensional input space, together with the last training data seen so far (sharp colors).
In Table 4, we show the NLPD results obtained in test prediction as well as
the classification error rates (ER) for each time step. If we analyze the ER
results, we can see that the performance is similar to the single-output GP
regression case, where the precision remains constant in areas of the input
space where training data is never revisited.
Table 4: Banana Dataset. Test NLPD & Classification Error Rate (ER). Column
new: Predictive and error metrics obtained in the new observed input area at
each time-step ($t^{\prime}=t$). Columns old: Predictive and error values
obtained in the past observed input areas at time-steps
($t^{\prime}=1,t^{\prime}=2$ and $t^{\prime}=3$). Colored values correspond to
the GP prediction on the same test-samples at the $t$-th iteration.
(NLPD) | new | old | old | old |
---|---|---|---|---|---
step | $t^{\prime}=t$ | $t^{\prime}=1$ | $t^{\prime}=2$ | $t^{\prime}=3$ | global
$t=1$ | $0.08\pm 0.13$ | - | - | - | $0.08\pm 0.13$
$t=2$ | $0.06\pm 0.45$ | $0.09\pm 7.70$ | - | - | $0.17\pm 7.20$
$t=3$ | $0.13\pm 1.10$ | $0.09\pm 4.90$ | $0.07\pm 0.30$ | - | $0.30\pm 3.40$
$t=4$ | $0.09\pm 1.10$ | $0.10\pm 5.00$ | $0.07\pm 1.80$ | $0.13\pm 1.20$ | $0.39\pm 4.50$
(ER) | new | old | old | old |
step | $t^{\prime}=t$ | $t^{\prime}=1$ | $t^{\prime}=2$ | $t^{\prime}=3$ |
$t=1$ | $0.09\pm 0.10$ | - | - | - |
$t=2$ | $0.08\pm 1.40$ | $0.10\pm 9.30$ | - | - |
$t=3$ | $0.16\pm 2.90$ | $0.10\pm 7.80$ | $0.08\pm 1.10$ | - |
$t=4$ | $0.09\pm 0.75$ | $0.10\pm 12.4$ | $0.08\pm 2.10$ | $0.14\pm 3.80$ |
all std. ($\times 10^{-3}$)
### 4.3 Continual multi-output GPs
As we explained in Section 3, the multi-output framework introduces two layers
in the inference mechanism. One is related to the latent functions
$\mathcal{U}$, where the sparse approximation lies, while the other comes from
the observational side, where expectations are evaluated from output functions
$\mathcal{F}$. The two layers make the continual multi-output learning process
work in a different manner w.r.t. the marginal lower bound
$\mathcal{L}_{\mathcal{C}}$. Now, the expectation terms are decoupled from the
regularization side which is only focused on the latent function priors. The
key property of the continual multi-output approach is that we can consider
extremely irregular problems where, for instance, outputs are completely
asymmetric as we will show in the following results. An illustration of the
asymmetric cases can be seen in Figure 1. In this section of experiments, we
include three cases, two of them using toy regression data and a third one
with real-world observations from human motion capture.
Synchronous Channels. In the first multi-output experiment with toy data, we are interested in jointly performing multi-task non-linear regression over two output Gaussian channels with different likelihood noise parameters. The underlying linear mixing of the latent functions is assumed to follow an LMC structure that we also aim to infer in an online manner. The number of true latent functions is $Q=2$ and we generate them using a linear combination of sinusoidal signals (see details in the Appendix). In this case, we have artificially split the dataset into five batches of non-overlapping samples that are delivered sequentially at the same time-step on both channels. In
Figure 7, we show three captures of the learning process for this experiment.
Additionally, the empirical error results for test prediction are included in
Table 5, where the predictive error metrics are equivalent to the ones
obtained in the previous single-output cases.
Figure 7: Results for temporal modeling of multi-output real-valued data. Two channels are jointly modelled using the continual learning approach described above for multi-output GP regression. The pink line indicates the limiting point between the newly observed samples and the past data that we avoid revisiting. All inducing-inputs are positioned over the $Q$ underlying latent functions that are later combined to obtain the output parameter functions. Both channels are trained together in a synchronous manner. The $Q$ subsets of inducing-inputs are not plotted for clarity.
Table 5:
Synchronous multi-channel streaming data. Test-NLPD (all std. $\times
10^{-4}$). Columns new: Predictive error values obtained in the new observed
input area at each time-step ($t^{\prime}=t$) for each channel. Columns old:
Predictive error values obtained in the past observed input areas at time-step
$t^{\prime}=1$ for both channels. Colored values correspond to the GP
prediction on the same test-samples at the $t$-th iteration. Columns global:
NLPD values over the test-samples all along the input domain at each time-step
$t$ and channel.
channel $\rightarrow$ | I | II | I | II | I | II
---|---|---|---|---|---|---
| new | new | old | old | |
step | $t^{\prime}=t$ | $t^{\prime}=t$ | $t^{\prime}=1$ | $t^{\prime}=1$ | global | global
$t=1$ | $\mathbf{0.19\pm 0.36}$ | $\mathbf{0.30\pm 2.81}$ | - | - | $0.19\pm 0.07$ | $0.30\pm 0.56$
$t=2$ | $0.18\pm 0.53$ | $0.35\pm 2.07$ | $0.19\pm 0.71$ | $0.32\pm 2.53$ | $0.38\pm 0.25$ | $0.67\pm 0.92$
$t=3$ | $0.19\pm 0.42$ | $0.40\pm 1.64$ | $0.19\pm 0.48$ | $0.31\pm 1.97$ | $0.58\pm 0.27$ | $1.07\pm 1.13$
$t=4$ | $0.17\pm 0.49$ | $0.33\pm 1.66$ | $0.19\pm 0.83$ | $0.31\pm 1.98$ | $0.75\pm 0.45$ | $1.41\pm 1.58$
$t=5$ | $0.16\pm 0.37$ | $0.35\pm 1.81$ | $0.19\pm 0.29$ | $0.31\pm 2.19$ | $0.92\pm 0.38$ | $1.76\pm 1.93$
(∗) colors correspond to output channels in Figure 7.
Asynchronous Channels. The following experiment is of particular importance for demonstrating the performance of the multi-output model under asymmetric incoming channels. In particular, we consider the same dataset as in the synchronous scenario but introduce an asymmetric observation process over the incoming channel data. That is, at each time-step, only one of the two channels delivers output-input samples. In the next step, the observation channel switches and new incoming data appears on the other one. This observation procedure is depicted in Figure 8.
The continual inference process is possible because the latent functions $\mathcal{U}$ lie in a different layer than the output observations. Hence, the inducing points can be positioned across the input domain as new samples emerge in any of the output channels. The initial number of inducing points is $M_{q}=4$ per channel, and it is doubled at each time-step iteration.
Figure 8: In contrast to Figure 7, we apply the continual GP approach to model multi-channel sequential data that is observed in an asynchronous manner, that is, samples might appear at different time-steps from different outputs in unobserved input regions. From left to right and from top to bottom, we represent the learning process at four consecutive time-steps ($t=2$, $t=3$, $t=4$ and $t=5$). Past data is plotted using grey-scaled colors.
Multi-channel sensors for Human Motion. For the last multi-output regression experiment with real-world data, we consider the MOCAP dataset (MOCAP datasets are available at http://mocap.cs.cmu.edu/). The data consists of raw multi-channel traces from sensors monitoring human motion. In particular, we select the first individual (id. number $01$) in the walking activity example. We aim to exploit the benefits of multi-task GPs rather than using a single-output GP per sensor. It has been demonstrated that, by exploiting such correlations between channels, multiple-output data are better modelled (Bonilla et al., 2008). From all available sensors on the human body, we consider three whose oscillation phases do not coincide: the left wrist, the right wrist and the right femur. Each channel provides $N=343$ samples corresponding to the vertical-axis values recorded by the sensors. For the experiment, we set up an initial number of $M=10$ inducing inputs in order to obtain reliable precision, and we double $M$ at each recursive iteration. Moreover, the number of latent functions in the multi-output GP
prior is $Q=3$. Both latent function values and the underlying linear mixing
coefficients are initialized at random at each time-step.
Figure 9: MOCAP dataset. Multi-output GP regression over three sequential
channels. Each channel corresponds to the Y axis output values of a sensor in
a walking motion capture experiment. Black curves correspond to the mean of
the posterior predictive distribution at each time-step for the whole input
space. Gray scaled colors correspond to non-revisited data samples.
The multi-output model with the LMC formulation is robust. It recovers the
previous linear combination from random initial values thanks to the triple KL
regularization within the continual MOGP prior. In Figure 9 we show the
performance of the multi-task regression model for the three regression
outputs at 3 different time-steps. Each color represents a different sensor
channel.
### 4.4 Resistance to propagation error
In this experiment, we are particularly interested in the demonstration of the
effect that the continual GP prior reconstruction has on the whole model. In
particular, how robust it can be as $t\rightarrow\infty$. Typically,
substituting variational posterior distributions $q(\cdot)$ as the novel prior
into a Bayesian online updating scheme seems the most natural manner to treat
sequential observations using approximate probabilistic inference. However, this approach is usually discarded due to the assumption that repeated approximations may accumulate errors as the number of time-steps increases (Nguyen et al., 2018), something that often happens in practice.
One of the main objectives in our work is to beat this assumption, performing
continual variational learning for signal processing applications with
thousands of updating repetitions. In the following experiment, we present
some results that aim to demonstrate this statement. We also prove that
recursively reconstructing the continual GP prior avoids propagating the error
of approximations forwards.
Solar Physics Data. Motivated by filtering experiments for signal processing applications, we use an astrophysics dataset which consists of the monthly average of sunspot counts from 1700 to 1995. In particular, we use the observations made for the analysis of sunspot cycles by the Royal Greenwich Observatory (US) (solar physics data is publicly available at https://solarscience.msfc.nasa.gov/). To avoid the use of intractable likelihood models, we transform the strictly positive samples into the real domain by means of the non-linear mapping $\log(1+\bm{x})$. Note that the original observations are the average of counts obtained from several observers.
Our primary goal is to demonstrate that the predictive mechanism of the
continual GP remains stable when $t\rightarrow\infty$, all over the input
domain, i.e. it does not forget past visited regions. In Figure 10, we show
three captures of the continual learning process until a maximum of $t=10^{3}$
iterations. It is important to mention that we used a one-sample update rule for the entire sequence, meaning $10^{3}$ consecutive optimization trials. For tractability reasons, we set up an initial number of $M=10$ inducing points for the warm-up period and an incremental update of one additional inducing point per 100 new observed samples. We also used a similar transition for the parameters and initialization points as in the previous experiments.
A demonstrative visualization of the whole continual GP learning process for
the solar sunspot signal can be found at
https://www.youtube.com/watch?v=j7kpru4YrcQ. Importantly, the predictive GP
posterior distribution remains accurate and fitted to the signal without
revisiting data during $t=10^{3}$ iterations.
Figure 10: Results for single-output regression on solar physics data with
one-sample updates of the continual sparse GP model. Pink colored signal
corresponds to the warm up observations in the batch mode. Greyed blue signals
correspond to the former visited observations while the blue cross is the new
incoming one. Black colored curves correspond to the mean function and the 95%
confidence interval of the predictive GP distribution all over the input-
space, computed at each time iteration. Black dots are the inducing variables
at each time-step.
### 4.5 Continual GP vs. Baseline methods
In our last experiment, we are interested in comparing the continual GP framework with previous baseline techniques in the literature. As we mentioned in our review of the state-of-the-art, the works that our approach is most related to are: i) the infinite-horizon Gaussian process (IHGP) in Solin et al. (2018) and ii) the streaming sparse Gaussian process (SSGP) in Bui et al. (2017a) for the single-output case.
Infinite-Horizon Gaussian Processes. We test the continual GP model under the
same toy experiment included in Solin et al. (2018) for GP classification. The
initial hyperparameters are set equal to the IHGP. An important difference
w.r.t. the aforementioned baseline model is that the IHGP focuses exclusively
on accurate online predictions forward rather than the backward memory of the
model for the already seen input-domain. For that reason, we aim to
demonstrate that the continual GP approach is able to predict in an online
classification task similarly to how the IHGP model does. In Figure 11, we show the results for $t=30$ and $t=90$ out of a total of 100 time-steps. The fitting accuracy is similar to the one shown by the IHGP model. Importantly, we recursively perform one-sample updates of the model, to adapt the continual GP to a scenario as similar as possible to the one presented in the IHGP toy experiment.
Streaming Sparse Gaussian Processes. For the second comparative experiment, we
test our continual GP on the two datasets used in Bui et al. (2017a). The
first one is the banana dataset for sparse GP classification. The results and
classification error metrics are included in the experiment of Section 4.2 and
Figure 6. In the second case, we take the toy regression data from its Github code (toy data available at https://github.com/thangbui/streaming_sparse_gp). We imitate the setup of the SSGP toy experiment, where the sequence of observations is split into three partitions, with $M=3$ inducing points per partition. In Figure 12, we show three captures of the results for the predictive curves of the GP regression model. We also plot the position of the inducing points (red bullets) as evidence that the continual GP method is analogous to SSGP when applied under the same scenario. The only difference is that our single-output model recursively builds the continual GP prior instead of concatenating old and new inducing-points ${\mathbf{u}}$, which tends to be less robust as the input domain grows.
Figure 11: Results for continual single-output GP classification over probit toy data (Solin et al., 2018).
Figure 12: Results for continual single-output GP regression over real-valued toy data (Bui et al., 2017a). Magenta and blue crosses correspond to past and new observed output samples, respectively. Red bullets are the inducing variables ${{\mathbf{u}}_{\text{new}}}$ at each time-step ($t=1$, $t=2$ and $t=3$).
## 5 Conclusion and Future Work
Conclusion. In this paper, we have presented a novel approach that extends the
existing posterior-prior recursion of online Bayesian inference to the
infinite functional framework of Gaussian process models. The key principle of
our continual learning method is that we are able to reconstruct implicit GP
priors over the space-of-functions conditioned to past posterior distributions
via the predictive GP formulation. We adapt the entire method to accept sparse approximations based on inducing-inputs for scalability. The recursive inference mechanism makes it possible to update global posterior distributions without the need for unfeasible training computations or data revisiting. Thus, we only need to propagate the previously learned parameters forward, rather than concatenating old and new data to avoid model forgetting. Moreover, our method is fully scalable and amenable for
stochastic variational inference both on regression and classification
problems with arbitrary likelihood functions. Another point of interest is its
simplicity when applied to the multi-output GP setting. In this case, we have
shown the main differences with the single-output model, and its applicability
to scenarios with asymmetric channels or even heterogeneous likelihoods, that
is, mixed classification and regression problems.
Contribution. The main novelty of our work is on the recursive construction of
the GP prior conditioned to the fitted variational posterior distribution. The
idea of building continual GP priors, instead of concatenating inducing-points in a sparse approximation context, had not been considered before. Similar uses
of the predictive formula within the posterior distribution were analyzed in
Girard et al. (2003) before the appearance of variational methods in the GP
literature. The recursive construction of GPs is equivalent to the posterior-
prior recursion of online Bayesian inference. Additionally, the possibility of handling a new continual GP prior makes the current approach feasible in multi-output scenarios where, otherwise, concatenating inducing points would not be possible.
Future work. We find that our continual learning scheme has important
connections with other recent works in variational inference methods. For
instance, with Ruiz and Titsias (2019) and their contrastive divergence (VCD)
based on three KL divergence terms. The idea of a triple regularized bound
also emerges naturally in our continual learning problem from the Bayes rule
when avoiding data revisiting. It can be easily interpreted as the difference
between two divergences that balance contributions of some variational
posterior distribution w.r.t. different objectives. However, as Ruiz and Titsias (2019) explain, the subtraction of two KL divergences might not satisfy the properties of a divergence operator (being always non-negative and equal to zero only when its arguments coincide), which breaks the consistency of the bound and is a priori problematic. Fortunately, adding an extra term to the subtraction of divergences, that is, the third KL term between both variational distributions, reduces the discrepancy and makes the operator consistent for the log-marginal lower bound, in a similar way to our solution.
Future research lines are, for instance, to employ convolutional processes
(CPs) or non-linear mappings as the mixing operator in the multi-output GP
model as an alternative to the LMC. Moreover, the continual single-output GP
model could be used as a latent baseline in the multivariate time series
imputation method of Fortuin et al. (2019), which uses a GP to capture
temporal dependencies between real-valued latent variables that are later
connected to a deep sequential variational autoencoder (VAE). Another promising line of work would be to study the need to increase the number $M$ of inducing points as the input domain grows; it could be specified via the recent bounds for sparse approximations proposed in Burt et al. (2019). Finally, we may adapt both the single- and the multi-output continual model to accept non-stationary latent functions, similarly to Zhang et al. (2019), or even an infinite number of latent GP functions via mixtures of experts (Pradier and Perez-Cruz, 2018).
### Acknowledgements
PMM acknowledges the support of his FPI grant BES-2016-077626 from the Ministerio de Economía of Spain. AAR was supported by the Ministerio de
Ciencia, Innovación y Universidades under grant TEC2017-92552-EXP (aMBITION),
by the Ministerio de Ciencia, Innovación y Universidades, jointly with the
European Commission (ERDF), under grant RTI2018-099655-B-I00 (CLARA), and by
The Comunidad de Madrid under grant Y2018/TCS-4705 (PRACTICO-CM). MAA has been
financed by the EPSRC Research Projects EP/R034303/1 and EP/T00343X/1.
## Appendix A. Complete derivation of continual lower bounds
Single-output GP. To derive the continual lower bound for each iteration of
the sequential process, we use the following expression
$\displaystyle\log p(\bm{y})$ $\displaystyle=$ $\displaystyle\log\int
p(\bm{y}|f)p(f)df=\log\int
p({\bm{y}_{\text{new}}},{\bm{y}_{\text{old}}}|f)p(f)df$ (25) $\displaystyle=$
$\displaystyle\log\int
p({\bm{y}_{\text{new}}}|f)p({\bm{y}_{\text{old}}}|f)p(f)df\geq\mathcal{L}_{\mathcal{C}}$
(26) $\displaystyle\mathcal{L}_{\mathcal{C}}$ $\displaystyle=$
$\displaystyle\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)p({\bm{y}_{\text{old}}}|f)p(f)}{q(f|{\bm{\phi}_{\text{new}}})}df$
(27) $\displaystyle=$ $\displaystyle\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)q(f|{\bm{\phi}_{\text{old}}})p(f|{\bm{\psi}_{\text{new}}})}{p(f|{\bm{\psi}_{\text{old}}})q(f|{\bm{\phi}_{\text{new}}})}df$
(28) $\displaystyle=$ $\displaystyle\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{old}}})\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})p(f|{\bm{\psi}_{\text{new}}})}{p(f|{\bm{\psi}_{\text{old}}})q(f|{\bm{\phi}_{\text{new}}})}df$
(29) $\displaystyle=$ $\displaystyle\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{old}}})\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{new}}})p(u_{*}|{\bm{\psi}_{\text{new}}})}{p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{old}}})p(u_{*}|{\bm{\psi}_{\text{old}}})p(f_{\neq
u_{*}}|u_{*},{\bm{\psi}_{\text{new}}})q(u_{*}|{\bm{\phi}_{\text{new}}})}df$
(30) $\displaystyle=$ $\displaystyle\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{p({\bm{y}_{\text{new}}}|f)\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})p(u_{*}|{\bm{\psi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{old}}})q(u_{*}|{\bm{\phi}_{\text{new}}})}df$
$\displaystyle=$ $\displaystyle\int q(f|{\bm{\phi}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|f)df-\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{new}}})}df+\int
q(f|{\bm{\phi}_{\text{new}}})\log\frac{\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})}{p(u_{*}|{\bm{\psi}_{\text{old}}})}df$
$\displaystyle=$ $\displaystyle\int
q(f_{\neq\\{{{\mathbf{f}}_{\text{new}}},u_{*}\\}},{{\mathbf{f}}_{\text{new}}},u_{*}|{\bm{\phi}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})df_{\neq\\{{{\mathbf{f}}_{\text{new}}},u_{*}\\}}d{{\mathbf{f}}_{\text{new}}}du_{*}$
$\displaystyle-$ $\displaystyle\int q(f_{\neq
u_{*}},u_{*}|{\bm{\phi}_{\text{new}}})\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{new}}})}df_{\neq
u_{*}}du_{*}+\int q(f_{\neq
u_{*}},u_{*}|{\bm{\phi}_{\text{new}}})\log\frac{\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})}{p(u_{*}|{\bm{\psi}_{\text{old}}})}df_{\neq
u_{*}}du_{*}$ $\displaystyle=$ $\displaystyle\int
q(u_{*}|{\bm{\phi}_{\text{new}}})p({{\mathbf{f}}_{\text{new}}}|u_{*})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}}du_{*}-\int
q(u_{*}|{\bm{\phi}_{\text{new}}})\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})}{p(u_{*}|{\bm{\psi}_{\text{new}}})}du_{*}$
$\displaystyle+$ $\displaystyle\int
q(u_{*}|{\bm{\phi}_{\text{new}}})\log\frac{q(u_{*}|{\bm{\phi}_{\text{new}}})\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})}{q(u_{*}|{\bm{\phi}_{\text{new}}})p(u_{*}|{\bm{\psi}_{\text{old}}})}du_{*},$
(33)
where we assume $u_{*}$ to be the new subset of inducing-points
${{\mathbf{u}}_{\text{new}}}$, then
$\displaystyle=$ $\displaystyle\int q({{\mathbf{f}}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}}-\int
q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})\log\frac{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})}{p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})}d{{\mathbf{u}}_{\text{new}}}$
$\displaystyle+$ $\displaystyle\int
q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})\log\frac{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})}{p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})}d{{\mathbf{u}}_{\text{new}}}-\int
q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})\log\frac{q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})}{\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})}d{{\mathbf{u}}_{\text{new}}}$
$\displaystyle=$
$\displaystyle\mathbb{E}_{q({{\mathbf{f}}_{\text{new}}})}[\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})]-\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{new}}})]+\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||p({{\mathbf{u}}_{\text{new}}}|{\bm{\psi}_{\text{old}}})]$
$\displaystyle-$
$\displaystyle\text{KL}[q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})||\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})].$
(35)
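Before turning to the expectation term, note that the three KL terms in the bound above are between multivariate Gaussians and can be evaluated in closed form. The following numerical sketch is our own illustration (the matrices below are placeholders, not values from any experiment) of how these terms can be combined with the sign pattern of the bound.

```python
import numpy as np

def gauss_kl(mu0, S0, mu1, S1):
    """KL[N(mu0, S0) || N(mu1, S1)] for multivariate Gaussians."""
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def continual_kl_terms(mu_q, S_q, mu_pnew, S_pnew, mu_pold, S_pold, mu_qold, S_qold):
    """Sign pattern of the bound: -KL[q||p_new] + KL[q||p_old] - KL[q||q~_old]."""
    return (-gauss_kl(mu_q, S_q, mu_pnew, S_pnew)
            + gauss_kl(mu_q, S_q, mu_pold, S_pold)
            - gauss_kl(mu_q, S_q, mu_qold, S_qold))

# Placeholder 2-dimensional example.
m, I = np.zeros(2), np.eye(2)
print(continual_kl_terms(m, I, m, 2 * I, m, 1.5 * I, m + 0.1, I))
```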
It is important to rely on the variational expectation terms for the
likelihood where $q({{\mathbf{f}}_{\text{new}}})$ intervenes. Particularly, we
can take explicit vector values ${{\mathbf{u}}_{\text{new}}}$ for the implicit
inducing points notation $u_{*}$. The general expectation integral takes the
form
$\displaystyle\int
q(u_{*}|{\bm{\phi}_{\text{new}}})p({{\mathbf{f}}_{\text{new}}}|u_{*})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}}du_{*}$
$\displaystyle=$ $\displaystyle\int
q({\mathbf{u}}|{\bm{\phi}_{\text{new}}})p({{\mathbf{f}}_{\text{new}}}|{{\mathbf{u}}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}}d{{\mathbf{u}}_{\text{new}}}$
(36) $\displaystyle=$ $\displaystyle\int
q({\mathbf{u}}|{\bm{\phi}_{\text{new}}})p({{\mathbf{f}}_{\text{new}}}|{{\mathbf{u}}_{\text{new}}})d{{\mathbf{u}}_{\text{new}}}\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}}$
$\displaystyle=$ $\displaystyle\int q({{\mathbf{f}}_{\text{new}}})\log
p({\bm{y}_{\text{new}}}|{{\mathbf{f}}_{\text{new}}})d{{\mathbf{f}}_{\text{new}}},$
where we denote by $q({{\mathbf{f}}_{\text{new}}})$ the expected variational distribution over the output vector ${{\mathbf{f}}_{\text{new}}}$, which can be analytically calculated as follows
$\displaystyle q({{\mathbf{f}}_{\text{new}}})$ $\displaystyle=$
$\displaystyle\int
q({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{new}}})p({{\mathbf{f}}_{\text{new}}}|{{\mathbf{u}}_{\text{new}}})d{{\mathbf{u}}_{\text{new}}}$
$\displaystyle=$
$\displaystyle\mathcal{N}({{\mathbf{f}}_{\text{new}}}|{\mathbf{K}}_{{{\mathbf{f}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}\bm{\mu}_{\text{new}},{\mathbf{K}}_{{{\mathbf{f}}_{\text{new}}}{{\mathbf{f}}_{\text{new}}}}+{\mathbf{K}}_{{{\mathbf{f}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}({\mathbf{S}}_{\text{new}}-{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}){\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}{\mathbf{K}}^{\top}_{{{\mathbf{f}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}).$
## Appendix B. Continual GP priors
Single-output GP. To sequentially evaluate the approximated lower bound on our
marginal likelihood distribution, we have to reconstruct the continual prior
using the conditional predictive formula of GP models. Assuming that
$q({{\mathbf{u}}_{\text{old}}}|{\bm{\phi}_{\text{old}}})$ is our past learned
variational distribution and we want to infer the probability values on an
implicit vector $u_{*}$ of inducing points; the continual GP prior follows the
expression
$\displaystyle\widetilde{q}(u_{*}|{\bm{\phi}_{\text{old}}})$
$\displaystyle\approx$ $\displaystyle\int
p(u_{*}|{{\mathbf{u}}_{\text{old}}})q({{\mathbf{u}}_{\text{old}}}|{\bm{\phi}_{\text{old}}})d{{\mathbf{u}}_{\text{old}}}$
$\displaystyle=$
$\displaystyle\mathcal{N}(u_{*}|k_{*{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}\bm{\mu}_{\text{old}},k_{**}+k_{*{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}({\mathbf{S}}_{\text{old}}-{\mathbf{K}}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}){\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}k^{\top}_{*{{\mathbf{u}}_{\text{old}}}}),$
and if we assume that $u_{*}={{\mathbf{u}}_{\text{new}}}$, that is, we evaluate the conditional predictive distribution at the future inducing points ${{\mathbf{u}}_{\text{new}}}$, the previous formula takes the form of a Gaussian distribution whose expression is
$\widetilde{q}({{\mathbf{u}}_{\text{new}}}|{\bm{\phi}_{\text{old}}})=\mathcal{N}({{\mathbf{u}}_{\text{new}}}|{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}\bm{\mu}_{\text{old}},{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}+{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}({\mathbf{S}}_{\text{old}}-{\mathbf{K}}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}){\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{\top}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}).$
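A minimal numerical sketch of this reconstruction is given below. It is our own illustration and not the implementation from the ContinualGP repository; in particular, the RBF kernel, its hyperparameters and the placeholder inducing locations are assumptions made only for this example. Given the old variational moments $(\bm{\mu}_{\text{old}},\mathbf{S}_{\text{old}})$ and the old and new inducing inputs, it returns the mean and covariance of $\widetilde{q}(\mathbf{u}_{\text{new}}|\bm{\phi}_{\text{old}})$.

```python
import numpy as np

def rbf(Xa, Xb, lengthscale=0.5, amplitude=1.0):
    """Assumed RBF kernel on 1-D inputs: amplitude^2 * exp(-0.5 (x - x')^2 / lengthscale^2)."""
    d = Xa[:, None] - Xb[None, :]
    return amplitude**2 * np.exp(-0.5 * (d / lengthscale) ** 2)

def continual_prior(Z_old, mu_old, S_old, Z_new, jitter=1e-6):
    """Moments of q~(u_new | phi_old) = N(A mu_old, K_nn + A (S_old - K_oo) A^T), A = K_no K_oo^{-1}."""
    K_oo = rbf(Z_old, Z_old) + jitter * np.eye(len(Z_old))
    K_no = rbf(Z_new, Z_old)
    K_nn = rbf(Z_new, Z_new)
    A = K_no @ np.linalg.inv(K_oo)
    return A @ mu_old, K_nn + A @ (S_old - K_oo) @ A.T

# Placeholder example: 3 old and 5 new inducing inputs on a 1-D input space.
Z_old = np.linspace(0.0, 1.0, 3)
mu_old = np.array([0.2, -0.1, 0.4])
S_old = 0.1 * np.eye(3)
Z_new = np.linspace(0.0, 2.0, 5)
mean_new, cov_new = continual_prior(Z_old, mu_old, S_old, Z_new)
print(mean_new.shape, cov_new.shape)
```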
Multi-output GP. For the multiple-output case, the derivation of the continual GP expression is analogous but considers the two-layer scheme. This means that the continual reconstruction mechanism now works directly on the $Q$ underlying latent functions $u_{q}$, which are modelled independently. Therefore, the closed-form distribution can be obtained as
$\widetilde{q}({\mathbf{u}}_{q,\text{new}}|{\bm{\phi}_{\text{old}}})=\mathcal{N}({{\mathbf{u}}_{\text{new}}}|{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}\bm{\mu}_{\text{old}},{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}+{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}({\mathbf{S}}_{\text{old}}-{\mathbf{K}}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}){\mathbf{K}}^{-1}_{{{\mathbf{u}}_{\text{old}}}{{\mathbf{u}}_{\text{old}}}}{\mathbf{K}}^{\top}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{old}}}}).$
## Appendix C. Dimensionality reduction of $p(f)$ via Gaussian marginals.
We use the properties of Gaussian marginals to reduce infinite dimensional
distributions $p(f)$. This process is applied for both GP priors $p(f)$ and
the Gaussian variational distribution $q(f)$. We assume that if the generative
process of latent functions is $f\sim p(f)$, then it also holds
$\begin{bmatrix}f_{\neq{{\mathbf{u}}_{\text{new}}}}\\\
{{\mathbf{u}}_{\text{new}}}\end{bmatrix}\sim
p(f_{\neq{{\mathbf{u}}_{\text{new}}}},{{\mathbf{u}}_{\text{new}}}),\\\ $
where the multivariate Gaussian distribution
$p(f_{\neq{{\mathbf{u}}_{\text{new}}}},{{\mathbf{u}}_{\text{new}}})$ has the
following ${\mathbf{K}}$ and $\bm{\mu}$ parameters
$p(f_{\neq{{\mathbf{u}}_{\text{new}}}},{{\mathbf{u}}_{\text{new}}})=\mathcal{N}\Big{(}\begin{bmatrix}\bm{\mu}_{f\neq{{\mathbf{u}}_{\text{new}}}}\\\
\bm{\mu}_{{\mathbf{u}}_{\text{new}}}\end{bmatrix},\begin{bmatrix}{\mathbf{K}}_{f_{\neq{{\mathbf{u}}_{\text{new}}}}f_{\neq{{\mathbf{u}}_{\text{new}}}}}&{\mathbf{K}}_{f_{\neq{{\mathbf{u}}_{\text{new}}}}{{\mathbf{u}}_{\text{new}}}}\\\
{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}f_{\neq{{\mathbf{u}}_{\text{new}}}}}&{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}\end{bmatrix}\Big{)},$
and we may therefore marginalize over $f_{\neq{{\mathbf{u}}_{\text{new}}}}$ to obtain the target Gaussian distribution $p({{\mathbf{u}}_{\text{new}}})$
$\int
p(f_{\neq{{\mathbf{u}}_{\text{new}}}},{{\mathbf{u}}_{\text{new}}})df_{\neq{{\mathbf{u}}_{\text{new}}}}=p({{\mathbf{u}}_{\text{new}}})=\mathcal{N}(\bm{\mu}_{{\mathbf{u}}_{\text{new}}},{\mathbf{K}}_{{{\mathbf{u}}_{\text{new}}}{{\mathbf{u}}_{\text{new}}}}).$
## Appendix D. Experiments and hyperparameter setup
The code for the experiments is written in Python and publicly available. It
can be found in the repository https://github.com/pmorenoz/ContinualGP, where
we extend the HetMOGP tool from Moreno-Muñoz et al. (2018) to be applied over
sequences of multiple-output observations. Importantly, all NLPD metrics in
Section 4 are computed from a total of $10^{3}$ samples in 10 different
initializations. To make our experiments fully reproducible, we provide the
details for all the experiments as well as the initializing values for all
parameters and hyperparameters.
Streaming. We use a sequence of $N=2000$ toy observations, which is split into $T=10$ batches. The train-test split reserves $33\%$ of the samples for testing. The initial number of inducing-points is $M=3$ and we use the rule $M_{t}=tM$ at each time-step. Additionally, we use an RBF kernel function $k(\cdot,\cdot)$ whose hyperparameters, i.e. length-scale and amplitude, are always initialized at $\ell=0.01$ and $\sigma_{a}=0.5$. We assume that the likelihood function is a Gaussian distribution with a fixed noise parameter $\sigma_{n}=1.5$. Additionally, the true underlying function $f$ is generated by mixing three sinusoidal signals; its expression is
$f(x)=\frac{9}{2}\cos(2\pi x+\frac{3\pi}{2})-3\sin(4.3\pi
x+\frac{3\pi}{10})+5\cos(7\pi x+2.4\pi).$
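For reference, a toy sequence of this kind can be reproduced with a sketch like the one below; the input range and the uniform sampling of inputs are our own assumptions, since only $N$, the noise level $\sigma_{n}=1.5$ and the form of $f$ are specified above.

```python
import numpy as np

def f_true(x):
    """Mixture of three sinusoidal signals used as the true underlying function."""
    return (4.5 * np.cos(2 * np.pi * x + 1.5 * np.pi)
            - 3.0 * np.sin(4.3 * np.pi * x + 0.3 * np.pi)
            + 5.0 * np.cos(7 * np.pi * x + 2.4 * np.pi))

rng = np.random.default_rng(0)
N, T, sigma_n = 2000, 10, 1.5
x = np.sort(rng.uniform(0.0, 1.0, size=N))         # assumed input range [0, 1]
y = f_true(x) + sigma_n * rng.normal(size=N)

# Split the ordered sequence into T streaming batches that advance through the input space.
batches = list(zip(np.array_split(x, T), np.array_split(y, T)))
print(len(batches), batches[0][0].shape)
```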
Overlapping. The setup of the second toy single-output experiment is analogous to the previous one, with a few exceptions. The initial number of inducing points is $M=4$, and we increase the capacity by setting $M_{t}=2tM$. The kernel function and the initialization of the parameters are the same as in the streaming experiment. The overlapping sections are generated by randomly indexing observations from the adjacent partitions.
Incremental. The setup of the incremental experiment is analogous to the
previous ones. In this case, we randomly index observations to generate the
sequence of batches. The initial number of inducing-points is $M=4$ and
increases similarly to the overlapping experiment.
Currency. For this experiment, we use an initial number of $M=20$ inducing points. We choose a Matérn kernel function with initial length-scale and amplitude values equal to $\ell=10^{-3}$ and $\sigma_{a}=0.1$, respectively. The incremental rule for the inducing-points is linear in the time-steps. The VEM algorithm makes a maximum of 4 iterations per time-step.
Banana. In the two-dimensional input experiment for GP classification, we set up an initial grid of inducing-points with $M=3$ per side. The size of the grid increases over time as $M_{t}=M_{t-1}+1$. In this case, we use an RBF kernel whose hyperparameters are initialized to $\ell=0.05$ and $\sigma_{a}=0.1$. The maximum number of VEM iterations is fixed to 4 as well. For the binary prediction plots in Figure 6, we threshold the predictive probability, predicting $\bm{y}_{n}=1$ if $p\geq 0.5$ and the other class otherwise. The test-training data split is based on a $30\%$ test proportion.
Synchronous. We generate $N=2000$ input-output samples where the output observation is multivariate with $D=2$ real-valued dimensions. As we consider a toy multi-task regression problem, we set a likelihood model defined using the syntax: likelihoods_list = [Gaussian(sigma=1.), Gaussian(sigma=2.0)], where we assume that the Gaussian noise parameters $\sigma_{n}$ are always fixed. We use $Q=2$ true latent functions $\mathcal{U}$ that are defined by the
expressions
$u_{1}(x)=\frac{9}{2}\cos(2\pi x+\frac{3\pi}{2})-3\sin(4.3\pi
x+\frac{3\pi}{10})+5\cos(7\pi x+2.4\pi),$
$u_{2}(x)=\frac{9}{2}\cos(\frac{3\pi}{2}x+\frac{\pi}{2})+5\sin(3\pi
x+\frac{3\pi}{2})-\frac{11}{2}\cos(8\pi x+\frac{\pi}{4}),$
where the vectors $\bm{w}_{q}$ of the linear mixing are
$\bm{w}_{1}=[-0.5,0.1]^{\top}$ and $\bm{w}_{2}=[-0.1,0.6]^{\top}$. Moreover,
we choose an RBF kernel for the GP prior of both latent functions and their
hyperparameters are initialized to $\ell=0.05$ and $\sigma_{a}=0.5$. The
number of inducing-points is $M_{q}=5$ for both latent functions and increases linearly over time.
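The two-channel toy data can be generated with a sketch along the following lines; the input range, the uniform input sampling and the exact LMC indexing convention $f_{d}(x)=\sum_{q}w_{q,d}\,u_{q}(x)$ are our own assumptions for this illustration.

```python
import numpy as np

def u1(x):
    return (4.5 * np.cos(2 * np.pi * x + 1.5 * np.pi)
            - 3.0 * np.sin(4.3 * np.pi * x + 0.3 * np.pi)
            + 5.0 * np.cos(7 * np.pi * x + 2.4 * np.pi))

def u2(x):
    return (4.5 * np.cos(1.5 * np.pi * x + 0.5 * np.pi)
            + 5.0 * np.sin(3 * np.pi * x + 1.5 * np.pi)
            - 5.5 * np.cos(8 * np.pi * x + 0.25 * np.pi))

# LMC mixing vectors (one weight per output channel) and fixed channel noise levels.
W = np.array([[-0.5, 0.1],      # w_1
              [-0.1, 0.6]])     # w_2
sigmas = np.array([1.0, 2.0])

rng = np.random.default_rng(0)
N = 2000
x = np.sort(rng.uniform(0.0, 1.0, size=N))          # assumed input range [0, 1]
U = np.stack([u1(x), u2(x)])                         # shape (Q, N)
F = W.T @ U                                          # shape (D, N): f_d = sum_q w_{q,d} u_q
Y = F + sigmas[:, None] * rng.normal(size=F.shape)   # noisy two-channel observations
print(Y.shape)
```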
Asynchronous. The setup of this experiment is analogous to the synchronous
case, where the slight difference is that the initial number of inducing-
points per latent function $u_{q}$ is $M_{q}=4$ instead.
MOCAP. For this experiment, we use a MOGP prior with $Q=3$ latent functions
and an initial number $M_{q}=10$ in all cases. The maximum number of VEM
iterations is 5 in order to guarantee a good fitting. The multi-task
likelihood model is defined by the syntax: likelihoods_list =
[Gaussian(sigma=0.3), Gaussian(sigma=0.3), Gaussian(sigma=0.3)].
Solar. The solar dataset consists of a sequence of $t=1000$ real-valued observations. We use an extra batch with $t=100$ samples for a warm-up period. The initial number of inducing-points is $M=15$. We allow the VEM algorithm to make one iteration per continual update. The likelihood noise parameter is set to $\sigma_{n}=1.0$. At each time-step, we initialize the RBF kernel of the GP prior to have a length-scale $\ell=0.5$ and amplitude $\sigma_{a}=2.0$. We only increase the number $M$ of inducing-points every 25 time-steps.
## References
* Alvarez and Lawrence (2009) M. Alvarez and N. D. Lawrence. Sparse convolved Gaussian processes for multi-output regression. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 57–64, 2009.
* Álvarez et al. (2009) M. Álvarez, D. Luengo, M. Titsias, and N. Lawrence. Variational inducing kernels for sparse convolved multiple output Gaussian processes. _arXiv preprint arXiv:0912.3268_ , 2009.
* Álvarez et al. (2010) M. Álvarez, D. Luengo, M. Titsias, and N. Lawrence. Efficient multioutput Gaussian processes through inducing kernels. In _Artificial Intelligence and Statistics (AISTATS)_ , pages 25–32, 2010.
* Alvarez et al. (2012) M. A. Alvarez, L. Rosasco, N. D. Lawrence, et al. Kernels for vector-valued functions: A review. _Foundations and Trends in Machine Learning_ , 4(3):195–266, 2012.
* Álvarez et al. (2019) M. A. Álvarez, W. O. Ward, and C. Guarnizo. Non-linear process convolutions for multi-output Gaussian processes. In _Artificial Intelligence and Statistics (AISTATS)_ , pages 1969–1977, 2019.
* Beal (2003) M. J. Beal. Variational algorithms for approximate Bayesian inference. _Ph. D. Thesis, University College London_ , 2003.
* Bonilla et al. (2008) E. V. Bonilla, K. M. Chai, and C. Williams. Multi-task Gaussian process prediction. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 153–160, 2008.
* Bui et al. (2017a) T. D. Bui, C. V. Nguyen, and R. E. Turner. Streaming sparse Gaussian process approximations. In _Advances in Neural Information Processing Systems (NIPS)_ , pages 3299–3307, 2017a.
* Bui et al. (2017b) T. D. Bui, J. Yan, and R. E. Turner. A unifying framework for Gaussian process pseudo-point approximations using power expectation propagation. _Journal of Machine Learning Research_ , 18(1):3649–3720, 2017b.
* Burt et al. (2019) D. R. Burt, C. E. Rasmussen, and M. Van der Wilk. Rates of convergence for sparse variational Gaussian process regression. In _International Conference on Machine Learning (ICML)_ , pages 862–871, 2019.
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,4,4}$. Then there exists a unique non-crossing pairing $\sigma\in NC_{2}(m_{1},m_{2},m_{3})$ such that $\gamma_{m_{1},m_{2},m_{3}}\sigma=\pi$.
###### Proof.
We proceed as before; it is enough to prove that there exists a unique pairing $\sigma\in\mathcal{P}_{2}(m)$ satisfying,
1. $(i)$
$\sigma\vee\gamma_{m_{1},m_{2},m_{3}}=1_{m}$
2. $(ii)$
if $\\{u,v\\}$ is a block of $\sigma$ then $e_{u}$ and $e_{v}$ connect the
same pair of vertices and have the opposite orientation in
$T_{m_{1},m_{2},m_{3}}^{\pi}$.
As pointed out in Theorem 8.10, the edges of multiplicity $2$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$ determine uniquely the blocks of any $\sigma$ satisfying $(ii)$. Let
$\overline{e}=\\{e_{i_{1}},e_{i_{2}},e_{i_{3}},e_{i_{4}}\\}$ and
$\overline{e^{\prime}}=\\{e_{j_{1}},e_{j_{2}},e_{j_{3}},e_{j_{4}}\\}$ be the
edges of multiplicity $4$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$. Without
loss of generality assume $\overline{e}$ is a $(1,2)$-edge and
$\overline{e^{\prime}}$ is a $(1,3)$-edge such that
$e_{i_{1}},e_{i_{2}},e_{j_{1}},e_{j_{2}}\in E_{1}$, $e_{i_{3}},e_{i_{4}}\in
E_{2}$, $e_{j_{3}},e_{j_{4}}\in E_{3}$ and with $e_{i_{1}}$ and $e_{i_{3}}$
having the same orientation and $e_{j_{1}}$ and $e_{j_{3}}$ having the same
orientation. Any $\sigma$ satisfying condition $(ii)$ must pair only elements in the same equivalence class and with opposite orientations, so the possible $\sigma_{i}$ restricted to $\\{i_{1},\dots,i_{4},j_{1},\dots,j_{4}\\}$ are given in the next table.
$\sigma_{i}$ | Blocks of $\sigma_{i}$ | $\gamma_{m_{1},m_{2},m_{3}}\vee\sigma_{i}$
---|---|---
$\sigma_{1}$ | $\\{i_{1},i_{2}\\},\\{i_{3},i_{4}\\},\\{j_{1},j_{2}\\},\\{j_{3},j_{4}\\}$ | $\gamma_{m_{1},m_{2},m_{3}}$
$\sigma_{2}$ | $\\{i_{1},i_{2}\\},\\{i_{3},i_{4}\\},\\{j_{1},j_{4}\\},\\{j_{2},j_{3}\\}$ | $[\\![m_{1}]\\!]\cup[\\![m_{3}]\\!],[\\![m_{2}]\\!]$
$\sigma_{3}$ | $\\{i_{1},i_{4}\\},\\{i_{2},i_{3}\\},\\{j_{1},j_{2}\\},\\{j_{3},j_{4}\\}$ | $[\\![m_{1}]\\!]\cup[\\![m_{2}]\\!],[\\![m_{3}]\\!]$
$\sigma_{4}$ | $\\{i_{1},i_{4}\\},\\{i_{2},i_{3}\\},\\{j_{1},j_{4}\\},\\{j_{2},j_{3}\\}$ | $1_{m}$
The last column shows that only $\sigma_{4}$ satisfies condition $(i)$, and so it is the only such pairing. ∎
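The joins in the last column of the table can also be checked mechanically: identifying $\gamma_{m_{1},m_{2},m_{3}}$ with the interval partition whose blocks have sizes $m_{1}$, $m_{2}$ and $m_{3}$ (as in the table), the join of two partitions of $[m]$ is computed by a union-find pass over their blocks. The sketch below is our own illustration and is not part of the original argument; the small example values are placeholders.

```python
def partition_join(blocks_a, blocks_b, m):
    """Join (finest common coarsening) of two partitions of {1, ..., m}."""
    parent = list(range(m + 1))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Merge all elements that share a block in either partition.
    for block in list(blocks_a) + list(blocks_b):
        block = sorted(block)
        for i, j in zip(block, block[1:]):
            union(i, j)

    classes = {}
    for i in range(1, m + 1):
        classes.setdefault(find(i), []).append(i)
    return sorted(classes.values())

# Toy check with m1 = m2 = m3 = 2, so gamma has blocks {1,2}, {3,4}, {5,6}.
gamma = [{1, 2}, {3, 4}, {5, 6}]
sigma = [{1, 4}, {2, 3}, {5, 6}]           # a pairing that connects only the first two intervals
print(partition_join(gamma, sigma, 6))     # -> [[1, 2, 3, 4], [5, 6]], i.e. not 1_m
```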
###### Lemma 8.12.
Let $\pi\in\mathcal{P}(m)$ and let
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UL}_{2,4}$. Then there exist exactly
two non-crossing pairings $\sigma_{1},\sigma_{2}\in NC_{2}(m_{1},m_{2},m_{3})$
such that $\gamma_{m_{1},m_{2},m_{3}}\sigma_{i}=\pi$ for $i=1,2$.
###### Proof.
It suffices to prove that there exist exactly two pairings
$\sigma_{1},\sigma_{2}\in\mathcal{P}_{2}(m)$ satisfying,
1. $(i)$
$\sigma_{i}\vee\gamma_{m_{1},m_{2},m_{3}}=1_{m}$
2. $(ii)$
if $\\{u,v\\}$ is a block of $\sigma_{i}$ then $e_{u}$ and $e_{v}$ connect the
same pair of vertices and have the opposite orientation in
$T_{m_{1},m_{2},m_{3}}^{\pi}$.
We know that the edges of multiplicity $2$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$ determine uniquely the blocks of any $\sigma$ satisfying $(ii)$. Let $\overline{e}=\\{e_{i_{1}},e_{i_{2}},e_{i_{3}},e_{i_{4}}\\}$ be the edge of multiplicity $4$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$. Without loss of generality, assume $e_{i_{1}},e_{i_{2}}\in E_{1}$, $e_{i_{3}}\in E_{2}$, $e_{i_{4}}\in E_{3}$. As this edge of multiplicity $4$ is a loop, any pairing of $\\{i_{1},i_{2},i_{3},i_{4}\\}$ satisfies $(ii)$, so the possible pairings $\sigma_{i}$ restricted to $\\{i_{1},\dots,i_{4}\\}$ are given in the next table,
$\sigma_{i}$ | Blocks of $\sigma_{i}$ | $\gamma_{m_{1},m_{2},m_{3}}\vee\sigma_{i}$
---|---|---
$\sigma_{1}$ | $\\{i_{1},i_{2}\\},\\{i_{3},i_{4}\\}$ | $[\\![m_{1}]\\!]\cup[\\![m_{2}]\\!],[\\![m_{3}]\\!]$
$\sigma_{2}$ | $\\{i_{1},i_{3}\\},\\{i_{2},i_{4}\\}$ | $1_{m}$
$\sigma_{3}$ | $\\{i_{1},i_{4}\\},\\{i_{2},i_{3}\\}$ | $1_{m}$
The last column shows that only $\sigma_{2}$ and $\sigma_{3}$ satisfy condition $(i)$, and so those are the only pairings satisfying both conditions. ∎
###### Lemma 8.13.
Let $\pi\in\mathcal{P}(m)$ and let
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}$. Then there exists a unique non-crossing pairing $\sigma\in NC_{2}(m_{1},m_{2},m_{3})$ such that $\gamma_{m_{1},m_{2},m_{3}}\sigma=\pi$.
###### Proof.
It suffices to prove that there exists a unique pairing $\sigma\in\mathcal{P}_{2}(m)$ satisfying,
1. $(i)$
$\sigma\vee\gamma_{m_{1},m_{2},m_{3}}=1_{m}$
2. $(ii)$
if $\\{u,v\\}$ is a block of $\sigma$ then $e_{u}$ and $e_{v}$ connect the
same pair of vertices and have the opposite orientation in
$T_{m_{1},m_{2},m_{3}}^{\pi}$.
The edges of multiplicity $2$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$
determines uniquely the blocks of any $\sigma$ satisfying $(ii)$. It remains
considering the edge of multiplicity $4$. In this case we have two
possibilities.
Case 1. The edge of multiplicity $4$ is not in the circuit of
$\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$. In this case any edge in the circuit
is connecting. Suppose without loss of generality any edge in the circuit is a
$(2,3)$-edge. This means the edge of multiplicity $4$ is either a $(1,2)$-edge
or a $(1,3)$-edge. Assume it is a $(1,2)$-edge. Let
$\overline{e}=\\{e_{i_{1}},e_{i_{2}},e_{i_{3}},e_{i_{4}}\\}$ be the edge of
multiplicity $4$ with $e_{i_{1}},e_{i_{2}}\in E_{1}$, $e_{i_{3}},e_{i_{4}}\in
E_{2}$ and $e_{i_{1}}$ and $e_{i_{3}}$ having the same orientation. Any
$\sigma$ satisfying $(ii)$ must pair edges in opposite orientations, so the
pairings $\sigma_{i}$ restricted to $\\{i_{1},\dots,i_{4}\\}$ are given in
the next table.
$\sigma_{i}$ | Blocks of $\sigma_{i}$ | $\gamma_{m_{1},m_{2},m_{3}}\vee\sigma_{i}$
---|---|---
$\sigma_{1}$ | $\\{i_{1},i_{2}\\},\\{i_{3},i_{4}\\}$ | $[\\![m_{1}]\\!],[\\![m_{2}]\\!]\cup[\\![m_{3}]\\!]$
$\sigma_{2}$ | $\\{i_{1},i_{4}\\},\\{i_{2},i_{3}\\}$ | $1_{m}$
The last column shows that only $\sigma_{2}$ satisfies condition $(i)$, so it
is the only pairing.
Case 2. The edge of multiplicity $4$ is in the circuit of
$\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$. In this case the edge of
multiplicity $4$ consists of two edges from one basic cycle, say $E_{1}$, and
one edge from each $E_{2}$ and $E_{3}$, let
$\overline{e}=\\{e_{i_{1}},e_{i_{2}},e_{i_{3}},e_{i_{4}}\\}$ be the edge of
multiplicity $4$ with $e_{i_{1}},e_{i_{2}}\in E_{1}$, $e_{i_{3}}\in E_{2}$,
$e_{i_{4}}\in E_{3}$ and such that $e_{i_{1}}$ and $e_{i_{3}}$ have the same
orientation. Any $\sigma$ satisfying $(ii)$ must pair edges in opposite
orientations, so the possible pairings $\sigma_{i}$ restricted to
$\\{i_{1},\dots,i_{4}\\}$ are given in the next table,
$\sigma_{i}$ | Blocks of $\sigma_{i}$ | $\gamma_{m_{1},m_{2},m_{3}}\vee\sigma_{i}$
---|---|---
$\sigma_{1}$ | $\\{i_{1},i_{2}\\},\\{i_{3},i_{4}\\}$ | $[\\![m_{1}]\\!],[\\![m_{2}]\\!]\cup[\\![m_{3}]\\!]$
$\sigma_{2}$ | $\\{i_{1},i_{4}\\},\\{i_{2},i_{3}\\}$ | $1_{m}$
The last column shows that only $\sigma_{2}$ satisfies condition $(i)$, so it
is the only pairing satisfying both conditions. In either case there exists a
unique pairing satisfying $(i)$ and $(ii)$. ∎
###### Lemma 8.14.
Let $\pi\in\mathcal{P}(m)$ and let
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{DB}$. Then there exists a unique
non-crossing pairing $\sigma\in NC_{2}(m_{1},m_{2},m_{3})$ such that
$\gamma_{m_{1},m_{2},m_{3}}\sigma=\pi$.
###### Proof.
It suffices to prove that there exists a unique pairing
$\sigma\in\mathcal{P}_{2}(m)$ satisfying,
1. $(i)$
$\sigma\vee\gamma_{m_{1},m_{2},m_{3}}=1_{m}$
2. $(ii)$
if $\\{u,v\\}$ is a block of $\sigma$ then $e_{u}$ and $e_{v}$ connect the
same pair of vertices and have the opposite orientation in
$T_{m_{1},m_{2},m_{3}}^{\pi}$.
Since $T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{DB}$, every edge has
multiplicity $2$, so we let $\sigma=\overline{\pi}$. Note that any edge of
multiplicity $2$ of $\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$ consists of $2$
edges in opposite orientation (Theorem 6.5), so $\sigma$ satisfies $(ii)$. As
we proved in Theorem 8.10, the edges of multiplicity $2$ of
$\overline{T_{m_{1},m_{2},m_{3}}^{\pi}}$ determine uniquely the blocks of any
$\sigma$ satisfying $(ii)$, so such a $\sigma$ is unique and is given by
$\sigma=\overline{\pi}$. It remains to prove that $\sigma=\overline{\pi}$
satisfies $(i)$; this is an immediate consequence of Theorem 6.5 since
$C_{\pi}\neq 0$. ∎
###### Corollary 8.15.
Let $\pi\in\mathcal{P}(m)$ be such that
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{LG}$. Then there exists $\sigma\in
NC_{2}(m_{1},m_{2},m_{3})$ such that $\pi=\gamma_{m_{1},m_{2},m_{3}}\sigma$,
consequently
$T_{m_{1},m_{2},m_{3}}^{\pi}=T_{m_{1},m_{2},m_{3}}^{\gamma_{m_{1},m_{2},m_{3}}\sigma}$.
Furthermore,
1. $(i)$
If $T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,6}\cup\mathcal{UL}_{2,4}$
there exist exactly two non-crossing pairings $\sigma_{1},\sigma_{2}\in
NC_{2}(m_{1},m_{2},m_{3})$ such that
$T_{m_{1},m_{2},m_{3}}^{\pi}=T_{m_{1},m_{2},m_{3}}^{\gamma_{m_{1},m_{2},m_{3}}\sigma_{1}}=T_{m_{1},m_{2},m_{3}}^{\gamma_{m_{1},m_{2},m_{3}}\sigma_{2}}.$
2. $(ii)$
If
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,4,4}\cup\mathcal{UC}_{2,4}\cup\mathcal{DB}$
there exists a unique non-crossing pairing $\sigma\in
NC_{2}(m_{1},m_{2},m_{3})$ such that
$T_{m_{1},m_{2},m_{3}}^{\pi}=T_{m_{1},m_{2},m_{3}}^{\gamma_{m_{1},m_{2},m_{3}}\sigma}.$
Corollary 8.15 together with Lemma 8.7 determines the relation between
non-crossing pairings and limit graphs. This is given as follows.
###### Lemma 8.16.
Let $\pi\in\mathcal{P}(m)$ then $T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{LG}$
if and only if there exist $\sigma\in NC_{2}(m_{1},m_{2},m_{3})$ such that
$\pi=\gamma_{m_{1},m_{2},m_{3}}\sigma$, furthermore,
$\displaystyle|NC_{2}(m_{1},m_{2},m_{3})|=|\mathcal{LG}|+|\mathcal{UL}_{2,4}|+|\mathcal{T}_{2,6}|$
###### Proof.
Corollary 8.15 and Lemma 8.7 prove that the mapping,
$T:NC_{2}(m_{1},m_{2},m_{3})\rightarrow\mathcal{LG},$
given by,
$T(\sigma)=T_{m_{1},m_{2},m_{3}}^{\gamma_{m_{1},m_{2},m_{3}}\sigma},$
is surjective, which proves the first part of the Lemma. For the second part,
note that Corollary 8.15 shows that only when
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UL}_{2,4}\cup\mathcal{T}_{2,6}$ are
there exactly two non-crossing pairings mapped under $T$ to the same quotient
graph $T_{m_{1},m_{2},m_{3}}^{\pi}$, which proves the second part. ∎
###### Corollary 8.17.
$|\mathcal{DB}|=|NC_{2}(m_{1},m_{2},m_{3})|-|\mathcal{UC}_{2,4}|-|\mathcal{T}_{2,4,4}|-2|\mathcal{UL}_{2,4}|-2|\mathcal{T}_{2,6}|$
Corollary 8.17 reduces the problem of counting all limit graphs to only
counting $\mathcal{T}_{2,6},\mathcal{T}_{2,4,4},\mathcal{UL}_{2,4}$ and
$\mathcal{UC}_{2,4}$; in the rest of the paper we count each of these types in
turn.
### 8.3. Counting double trees and double unicircuit graphs
Our motivation for counting double trees and double unicircuit graphs is that
most of the limit graphs can be expressed in terms of these; in fact double
trees and double unicircuit graphs appear when computing the second order
moments as seen in [8], where they provide a way of counting certain graphs
using the set of non-crossing pairings in an $(m_{1},m_{2})$-annulus. In this
subsection we provide our alternative proof of that result.
###### Lemma 8.18.
Let $m\in\mathbb{N}$, $\pi\in\mathcal{P}(m)$ and $\gamma_{m}=(1,\dots,m)\in
S_{m}$, and let $T_{m}=(V,E)$ be the graph consisting of a single basic cycle.
Then $T_{m}^{\pi}$ is a double tree if and only if $\pi=\gamma_{m}\sigma$ for
some $\sigma\in NC_{2}(m)$. Moreover, if
$\sigma_{1}\neq\sigma_{2}$ then $T_{m}^{\gamma_{m}\sigma_{1}}$ and
$T_{m}^{\gamma_{m}\sigma_{2}}$ are distinct quotient graphs, consequently,
$|\\{\pi\in\mathcal{P}(m):T_{m}^{\pi}\text{ is a double
tree}\\}|=|NC_{2}(m)|.$
###### Proof.
Suppose $T_{m}^{\pi}=(V,E)$ is a double tree, we let
$\sigma=\overline{\pi}\in\mathcal{P}_{2}(m)$. Theorem 8.6 says,
$\gamma_{m}\sigma\leq\pi$, therefore,
$m+1=|V|+|E|=\\#(\pi)+\\#(\sigma)\leq\\#(\gamma_{m}\sigma)+\\#(\sigma)\leq
m+1$
which forces $\pi=\gamma_{m}\sigma$ and $\sigma\in NC_{2}(m)$. Conversely let
$\sigma\in\mathit{NC}_{2}(m)$ and $\pi=\gamma_{m}\sigma$. Note that
$G_{\sigma}^{\gamma_{m}}$ has $\\#(\gamma_{m}\sigma)$ vertices and
$\\#(\sigma)$ edges and it is connected because $T_{m}^{\pi}$ is connected.
Thus,
$1\geq\\#(\gamma_{m}\sigma)-\\#(\sigma)=\\#(\gamma_{m}\sigma)+\\#(\sigma)-m=1,$
therefore all of the above must be equalities, which means
$G_{\sigma}^{\gamma_{m}}$ is a tree. On the other hand Theorem 8.4 says
$T(G_{\sigma}^{\gamma_{m}})=T_{m}^{\pi}$, therefore $T_{m}^{\pi}$ is a double
tree; moreover, if $\\{u,v\\}$ is a block of $\sigma$ then $e_{u}$ and $e_{v}$
join the same pair of vertices of $T_{m}^{\pi}$, i.e. $\sigma=\overline{\pi}$.
The latter observation means that for $\sigma_{1}\neq\sigma_{2}$ the
partitions $\overline{\gamma_{m}\sigma_{1}}$ and
$\overline{\gamma_{m}\sigma_{2}}$ are distinct, and therefore the quotient
graphs $T_{m}^{\gamma_{m}\sigma_{1}}$ and $T_{m}^{\gamma_{m}\sigma_{2}}$ must
be different. ∎
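The count $|NC_{2}(m)|=Cat_{m/2}$ that Lemma 8.18 identifies with the number of double trees is easy to verify numerically. The following short Python sketch is purely illustrative and not part of the argument: it brute-forces all pairings of $[m]$, keeps the non-crossing ones, and compares the count with the Catalan number.

```python
from itertools import combinations
from math import comb

def pairings(elems):
    """Yield all pairings (perfect matchings) of a list of distinct elements."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pairings(remaining):
            yield [(first, partner)] + tail

def is_noncrossing(pairing):
    """A pairing of 1..m is non-crossing if no two blocks interleave."""
    for (a, b), (c, d) in combinations(pairing, 2):
        a, b = sorted((a, b))
        c, d = sorted((c, d))
        if a < c < b < d or c < a < d < b:
            return False
    return True

def catalan(k):
    return comb(2 * k, k) // (k + 1)

for m in (2, 4, 6, 8, 10):
    nc2 = sum(is_noncrossing(p) for p in pairings(list(range(1, m + 1))))
    assert nc2 == catalan(m // 2)
    print(m, nc2)   # e.g. m = 8 gives 14 = Cat_4
```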
To count the double unicircuit graphs we will use the set of non-crossing
pairings on the $(m_{1},m_{2})$-annulus.
###### Lemma 8.19.
Let $m_{1},m_{2}\in\mathbb{N}$, $m=m_{1}+m_{2}$, $\pi\in\mathcal{P}(m)$ and
$\gamma_{m_{1},m_{2}}=(1,\dots,m_{1})(m_{1}+1,\dots,m)\in S_{m}$. Let
$T_{m_{1},m_{2}}$ be defined as in Section 5. Let $k\in\mathbb{N}$ with $k\neq
2$. $T^{\pi}_{m_{1},m_{2}}$ is a double unicircuit graph where the unique
circuit of $\overline{T_{m_{1},m_{2}}^{\pi}}$ has length $k$ if and only if
there exist $\sigma\in NC_{2}^{(k)}(m_{1},m_{2})$ such that
$\pi=\gamma_{m_{1},m_{2}}\sigma$. Moreover for $\sigma_{1}\neq\sigma_{2}$ the
quotient graphs $T_{m_{1},m_{2}}^{\gamma_{m_{1},m_{2}}\sigma_{1}}$ and
$T_{m_{1},m_{2}}^{\gamma_{m_{1},m_{2}}\sigma_{2}}$ are distinct.
###### Proof.
Suppose $T^{\pi}_{m_{1},m_{2}}$ is a double unicircuit graph where the unique
circuit of $\overline{T_{m_{1},m_{2}}^{\pi}}$ has length $k$.
$\overline{T_{m_{1},m_{2}}^{\pi}}$ is a connected graph with $\\#(\pi)$
vertices and $m/2$ edges, therefore $q(\pi)=\\#(\pi)-m/2=0$. Let
$\sigma\in\mathcal{P}(m)$ be the partition defined by
$u\overset{\sigma}{\sim}v$ if $e_{u}$ and $e_{v}$ connect the same pair of
vertices of $T_{m_{1},m_{2}}^{\pi}$ then $\sigma$ is a pairing satisfying
$\sigma\vee\gamma_{m_{1},m_{2}}=1_{m}$ and if $\\{u,v\\}$ is a block of
$\sigma$ then $e_{u}$ and $e_{v}$ have opposite orientations. Theorem 8.8 then
says that $\sigma\in NC_{2}(m_{1},m_{2})$ and
$\pi=\gamma_{m_{1},m_{2}}\sigma$. Moreover, by definition of $\sigma$ the
through strings of $\sigma$ correspond to the edges in the circuit of
$\overline{T_{m_{1},m_{2}}^{\pi}}$, so $\sigma\in NC_{2}^{(k)}(m_{1},m_{2})$.
Conversely, let $\sigma\in NC_{2}^{(k)}(m_{1},m_{2})$ and
$\pi=\gamma_{m_{1},m_{2}}\sigma$. Let $(V,E)$ be the graph
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$, Lemma 8.5 says
$\overline{T^{\gamma_{m_{1},m_{2}}\sigma}_{m_{1},m_{2}}}$ is a connected graph
and so is $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$. Note,
$\displaystyle|V|-|E|$ $\displaystyle=$
$\displaystyle\\#(\gamma_{m_{1},m_{2}}\sigma)-\\#(\sigma)=\\#(\gamma_{m_{1},m_{2}}\sigma)+\\#(\sigma)-m=0,$
thus, $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ is a unicircuit graph. Let
$B=\\{u,v\\}$ be a block of $\sigma$ not in the circuit of
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$, then $B$ is a cutting edge of
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ and so
$\overline{e_{u}}=\\{e_{u},e_{v}\\}$ is a cutting edge of
$\overline{T_{m_{1},m_{2}}^{\gamma_{m_{1},m_{2}}\sigma}}$, which forces
$e_{u}$ and $e_{v}$ to come from the same basic cycle, and so $B$ is a
non-through string of $\sigma$. This means that all through strings
$B_{1},\dots,B_{k}$ of $\sigma$ must correspond to edges in the circuit of
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$. Suppose one of the edges in the circuit
is not a through string. Then there is a vertex $V$ of
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ in its unique circuit whose two adjacent
edges within the circuit correspond to one non-through string and one through
string. When doubling the edges of $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ we
obtain that $V$ is adjacent to an odd number of edges from $E_{1}$ (the
through string produces $1$ adjacent edge from $E_{1}$ and any non-through
string produces either $0$ or $2$ adjacent edges from $E_{1}$); this is not
possible as $\deg(V)_{1}$ must be even, so
all edges in the circuit of $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ must
correspond to through strings of $\sigma$, i.e
$G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ is a graph with a single circuit of length
$k$. Remember that doubling the edges of $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$
produces the graph $T^{\gamma_{m_{1},m_{2}}\sigma}_{m_{1},m_{2}}$ and since
there are no edges of $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ connecting the
same pair of vertices (that would mean having a circuit of length $2$ and we
assumed $k\neq 2$), then $T^{\gamma_{m_{1},m_{2}}\sigma}_{m_{1},m_{2}}$ is a
graph such that $\overline{T^{\gamma_{m_{1},m_{2}}\sigma}_{m_{1},m_{2}}}$ has
a unique circuit and all its edges have multiplicity $2$ and consist of two
edges in opposite orientation. Moreover, it must be that
$\sigma=\overline{\pi}$ because $\sigma\leq\overline{\pi}$ (Theorem 8.5) and
both have only blocks of size $2$. To verify $T_{m_{1},m_{2}}^{\pi}$ is a
double unicircuit graph it remains to prove that all edges in the circuit of
$\overline{T_{m_{1},m_{2}}^{\pi}}$ correspond to non-through strings, this
follows because all edges in the circuit of $\overline{T_{m_{1},m_{2}}^{\pi}}$
correspond to edges in the circuit of $G_{\sigma}^{\gamma_{m_{1},m_{2}}}$ and
$\sigma=\overline{\pi}$. Finally the condition $\sigma=\overline{\pi}$ proves
that if $\sigma_{1}\neq\sigma_{2}$ then
$T_{m_{1},m_{2}}^{\gamma_{m_{1},m_{2}}\sigma_{1}}$ and
$T_{m_{1},m_{2}}^{\gamma_{m_{1},m_{2}}\sigma_{2}}$ are distinct. ∎
The following is a consequence of Lemma 8.19.
###### Corollary 8.20.
Let $k\in\mathbb{N}$ with $k\neq 2$. The following are satisfied.
1. $(i)$
$|NC_{2}^{(k)}(m_{1},m_{2})|=|\\{\pi\in\mathcal{P}(m):T_{m_{1},m_{2}}^{\pi}\text{ is a double unicircuit graph with }\overline{T_{m_{1},m_{2}}^{\pi}}\text{ having a circuit of length }k\\}|$
2. $(ii)$
$|NC_{2}(m_{1},m_{2})\setminus
NC_{2}^{(2)}(m_{1},m_{2})|=|NC_{2}(m_{1},m_{2})|-|NC_{2}^{(2)}(m_{1},m_{2})|\\\
=|\\{\pi\in\mathcal{P}(m):T_{m_{1},m_{2}}^{\pi}\text{ is a double unicircuit
graph}\\}|$
### 8.4. Counting $2$-$6$ and $2$-$4$-$4$-tree types
Counting $2$-$6$ and $2$-$4$-$4$ tree types is an immediate consequence of
counting double trees. For this section we set
$m_{1},m_{2},m_{3}\in\mathbb{N}$ and $m=m_{1}+m_{2}+m_{3}$. Let us introduce
the following set of partitioned permutations.
###### Notation 8.21.
For a partitioned permutation
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1,1)}(m_{1},m_{2},m_{3})\cup\mathcal{PS}_{NC}^{(2,1,1)}(m_{1},m_{2},m_{3}),$
we can write $\pi$ as $\pi=\pi_{1}\times\pi_{2}\times\pi_{3}$ with $\pi_{i}\in
NC(m_{i})$.
1. $(i)$
We denote by $\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})$ the set
of partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1,1)}(m_{1},m_{2},m_{3})$ such that
$\pi_{i}\in NC_{2}(m_{i})$ $\forall i=1,2,3$.
2. $(ii)$
We denote by $\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})$ the set
of partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(2,1,1)}(m_{1},m_{2},m_{3})$ such that
$\pi_{i}\in NC_{2}(m_{i})$ $\forall i=1,2,3$.
###### Remark 8.22.
The cardinality of $\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})$ can be
computed easily. Each element
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})$ is
such that $\pi=\pi_{1}\times\pi_{2}\times\pi_{3}$ with $\pi_{i}\in
NC_{2}(m_{i})$ and each block of $\mathcal{V}$ is a cycle of $\pi$ except for
one block, which is the union of three cycles of $\pi$. Thus the number of
permutations $\pi$ can be counted by
$|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|,$
while the number of partitions $\mathcal{V}$ can be counted by choosing a
cycle from each $\pi_{i}$, which can be done in
$\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}$ ways. Therefore,
$|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|=\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|.$
In a similar manner we can compute the cardinality of
$\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})$, namely,
$|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|\\\
=\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}(\frac{m}{2}-3)|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|.$
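As an illustration only, the two closed-form counts of Remark 8.22 can be evaluated with a few lines of Python; the helper `nc2` (a name introduced here, not part of the paper) returns $|NC_{2}(n)|=Cat_{n/2}$ for even $n$ and $0$ for odd $n$.

```python
from math import comb

def nc2(n):
    """|NC_2(n)| = Cat_{n/2} for n even, and 0 for n odd."""
    if n % 2:
        return 0
    k = n // 2
    return comb(2 * k, k) // (k + 1)

def ps_111(m1, m2, m3):
    # |PS_{NC_2}^{(1,1,1)}(m1, m2, m3)| from Remark 8.22
    return (m1 // 2) * (m2 // 2) * (m3 // 2) * nc2(m1) * nc2(m2) * nc2(m3)

def ps_211(m1, m2, m3):
    # |PS_{NC_2}^{(2,1,1)}(m1, m2, m3)| from Remark 8.22
    m = m1 + m2 + m3
    return (m1 // 2) * (m2 // 2) * (m3 // 2) * (m // 2 - 3) * nc2(m1) * nc2(m2) * nc2(m3)

print(ps_111(4, 4, 4), ps_211(4, 4, 4))   # 64 and 192
```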
###### Lemma 8.23.
$|\mathcal{T}_{2,6}|=4|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|$
###### Proof.
For $\pi\in\mathcal{P}(m)$ the quotient graph $T_{m_{1},m_{2},m_{3}}^{\pi}$ is
a $2$-$6$ tree type if all three graphs $T_{m_{1}}^{\pi}$,$T_{m_{2}}^{\pi}$
and $T_{m_{3}}^{\pi}$ are double trees, Lemma 8.18 says that each double tree
can be chosen in $|NC_{2}(m_{1})|$,$|NC_{2}(m_{2})|$ and $|NC_{2}(m_{3})|$
distinct ways respectively. For each choice of double trees we choose an edge
from each one, which can be done in
$\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}$ ways. Finally we can join the
graphs along those edges in $4$ different ways depending on the orientation;
as each orientation corresponds to a different partition $\pi$, we get
$|\mathcal{T}_{2,6}|=4\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|.$
As noted in Remark 8.22, this last expression is precisely four times the
cardinality of $\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})$. ∎
###### Lemma 8.24.
$|\mathcal{T}_{2,4,4}|=4|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|$
###### Proof.
We proceed as in Lemma 8.23. The double trees can be chosen in
$|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|$ ways. For a choice of the
double trees we choose two edges from one of them and one edge from each of
the other two. Suppose we chose two edges from $\overline{T_{m_{1}}^{\pi}}$.
Those can be chosen in $\frac{1}{2}\frac{m_{1}}{2}(\frac{m_{1}}{2}-1)$ ways.
The edges of the other two double trees can be chosen in
$\frac{m_{2}}{2}\frac{m_{3}}{2}$ ways. Once we selected the edges we have to
make the union along these edges, we can pair them in $2$ different ways and
for each pair there are two possible orientations of the edges, giving a total
of $4$ ways. So the total number of ways is given by,
$8\frac{1}{2}\frac{m_{1}}{2}(\frac{m_{1}}{2}-1)\frac{m_{2}}{2}\frac{m_{3}}{2}=4\frac{m_{1}}{2}(\frac{m_{1}}{2}-1)\frac{m_{2}}{2}\frac{m_{3}}{2}$
Similarly when choosing two edges from $\overline{T_{m_{2}}^{\pi}}$ and
$\overline{T_{m_{3}}^{\pi}}$ we get
$4\frac{m_{2}}{2}(\frac{m_{2}}{2}-1)\frac{m_{1}}{2}\frac{m_{3}}{2}$ and
$4\frac{m_{3}}{2}(\frac{m_{3}}{2}-1)\frac{m_{1}}{2}\frac{m_{2}}{2}$ ways
respectively, so the total number of $2$-$4$-$4$ tree types is given by,
$4\left[\frac{m_{1}}{2}(\frac{m_{1}}{2}-1)\frac{m_{2}}{2}\frac{m_{3}}{2}+\frac{m_{2}}{2}(\frac{m_{2}}{2}-1)\frac{m_{1}}{2}\frac{m_{3}}{2}+\frac{m_{3}}{2}(\frac{m_{3}}{2}-1)\frac{m_{1}}{2}\frac{m_{2}}{2}\right]\\\
|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|\\\
=4\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}(\frac{m}{2}-3)|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|.$
This last expression is four times the cardinality of
$\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})$. ∎
### 8.5. Counting $2$-$4$ uniloop and $2$-$4$ unicircuit types
$2$-$4$ uniloop and $2$-$4$ unicircuit types are made of double unicircuit and
double tree graphs; this allows us to count them in a simple manner. For this
section we set $m_{1},m_{2},m_{3}\in\mathbb{N}$ and $m=m_{1}+m_{2}+m_{3}$
unless otherwise specified. Let us introduce the following sets of partitioned
permutations.
###### Notation 8.25.
For a partitioned permutation
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1,1)}(m_{1},m_{2},m_{3}),$
we write $\pi=\pi_{1}\times\pi_{2}\times\pi_{3}$ with $\pi_{i}\in NC(m_{i})$
and each block of $\mathcal{V}$ is a cycle of $\pi$ except for one block,
which is the union of three cycles of $\pi$, one from each permutation
$\pi_{i}$. We denote by $\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})$
the set of partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1,1)}(m_{1},m_{2},m_{3})$
satisfying the following conditions,
1. $(i)$
$\pi_{i_{1}}$ and $\pi_{i_{2}}$ have all cycles of size $2$ except for one
cycle of size $1$, while $\pi_{i_{3}}\in NC_{2}(m_{i_{3}})$, with
$(i_{1},i_{2},i_{3})$ a permutation of $(1,2,3)$.
2. $(ii)$
The block of $\mathcal{V}$ which is the union of three cycles of $\pi$
consists of the cycles of size $1$ from $\pi_{i_{1}}$ and $\pi_{i_{2}}$ and
any cycle from $\pi_{i_{3}}$.
An example can be seen in Figure 13.
Figure 13. A partitioned permutation $(\mathcal{V},\pi)$ in the set
$\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})$ corresponding to
$\pi=(1,6)(2,3)(4,5)(7)(8,9)(10)(11,12)(13,14)$
$\mathcal{V}=\\{\\{1,6,7,10\\},\\{2,3\\},\\{4,5\\},\\{8,9\\},\\{11,12\\},\\{13,14\\}\\}.$
Each block of $\mathcal{V}$ is a cycle of $\pi$ except by one block which is
the union of the two cycles of size $1$ of $\pi$ and one cycle of size $2$.
###### Notation 8.26.
For a partitioned permutation
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1)}(m_{1},m_{2},m_{3}),$
we write $\pi=\pi_{1}\times\pi_{2}$ with $\pi_{1}\in
S_{NC}(m_{i_{1}},m_{i_{2}})$ and $\pi_{2}\in NC(m_{i_{3}})$ for some
permutation $(i_{1},i_{2},i_{3})$ of $(1,2,3)$. Each block of $\mathcal{V}$ is
a cycle of $\pi$ except for one block, which is the union of two cycles of
$\pi$, one from each permutation $\pi_{i}$.
1. $(i)$
We denote by $\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$ the set of
partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}^{(1,1)}(m_{1},m_{2},m_{3})$ such that
$\pi_{1}\in NC_{2}(m_{i_{1}},m_{i_{2}})$ and $\pi_{2}\in NC_{2}(m_{i_{3}})$.
2. $(ii)$
We denote by $\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})$ the
set of partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$ such
that $\pi_{1}$ has exactly two through strings, i.e $\pi_{1}\in
NC_{2}^{(2)}(m_{i_{1}},m_{i_{2}})$.
3. $(iii)$
We denote by $\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})$ the set
of partitioned permutations
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$ such
that $\pi_{1}$ has exactly one through string (i.e. $\pi_{1}\in
NC_{2}^{(1)}(m_{i_{1}},m_{i_{2}})$) and the block of $\mathcal{V}$ which is
the union of two cycles of $\pi$ consists of the unique through string of
$\pi_{1}$ and any block of $\pi_{2}$.
Some examples can be seen in Figures 14, 15, and 16.
Figure 14. $(\mathcal{V},\pi)$ in $\mathcal{P\\!S}_{NC}^{(1,1)}(9,5,4)$ where
$\pi=(1,8)\allowbreak(2,14)(3,4)(5,13)(6,12)(7,11)(9,10)(15,16)(17,18)$ and
$\mathcal{V}=\\{\\{1,8\\},\\{2,14,15,16\\},\\{3,4\\},\allowbreak\\{5,13\\},\\{6,12\\},\\{7,11\\},\\{9,10\\},\\{17,18\\}\\}$.
###### Definition 8.27.
Let $n\in\mathbb{N}$. We define the set of block-pairings on $[n]$, which we
denote by $NC_{2}^{block}(n)$, as the set of pairs $(B,\sigma)$ where
$\sigma\in NC_{2}(n)$ and $B$ is a block of $\sigma$.
###### Remark 8.28.
The cardinality of $NC_{2}^{block}(n)$ is given by $\frac{n}{2}|NC_{2}(n)|$ as
for each non-crossing pairing there are $\frac{n}{2}$ blocks to be chosen.
###### Definition 8.29.
Let $n\in\mathbb{N}$ and $T_{n}$ be the graph consisting of a unique basic
cycle defined as in Section 5. For a partition $\pi\in\mathcal{P}(n)$, we say
that the graph $T_{n}^{\pi}$ is a double uniloop graph if $T_{n}^{\pi}$
consists of a graph with two loops over the same vertex and such that removing
those loops results in a double tree. We denote the set of double uniloop
graphs by $\mathcal{DUL}(n)$.
For a block-pairing, $(B,\sigma)\in NC_{2}^{block}(n)$, we may think of the
block $B=\\{u,v\\}$ as a transposition of $S_{n}$ whose cycle decomposition is
$(u,v)$. Under this interpretation let us define the function
$\Psi:NC_{2}^{block}(n)\rightarrow\mathcal{Q}(T_{n})$ given by,
$\Psi(B,\sigma)=T_{n}^{\gamma_{n}\sigma B},$
where $\mathcal{Q}(T_{n})$ denotes the set of quotient graphs of $T_{n}$ and
$\gamma_{n}=(1,\dots,n)\allowbreak\in S_{n}$.
###### Lemma 8.30.
Let $n\in\mathbb{N}$ and
$\Psi:\mathit{NC}_{2}^{block}(n)\rightarrow\mathcal{Q}(T_{n})$ be defined as
above. The image, $Im(\Psi)$, equals the set of double uniloop graphs,
$\mathcal{DUL}(n)$. Moreover, the function,
$\Psi:\mathit{NC}_{2}^{block}(n)\rightarrow\mathcal{DUL}(n)$
is injective. Therefore,
$\displaystyle|\\{\pi\in\mathcal{P}(n):T_{n}^{\pi}\text{ is a double uniloop
graph}\\}|$ $\displaystyle=$ $\displaystyle|NC_{2}^{block}(n)|$
$\displaystyle=$ $\displaystyle\frac{n}{2}|\mathit{NC}_{2}(n)|.$
###### Proof.
Let $(B,\sigma)\in\mathit{NC}_{2}^{block}(n)$ and let
$\pi^{\prime}=\gamma_{n}\sigma$ with $\gamma_{n}=(1,\dots,n)\in S_{n}$. Since
$\sigma\in\mathit{NC}_{2}(n)$ Lemma 8.18 says that $T_{n}^{\pi^{\prime}}$ is a
double tree and $\overline{\pi^{\prime}}=\sigma$. In other words, any block
$\\{u,v\\}$ of $\sigma$ corresponds to the edges $e_{u}$ and $e_{v}$ of
$T_{n}^{\pi^{\prime}}$ connecting the same pair of vertices and with opposite
orientation. We let $B=\\{u^{\prime},v^{\prime}\\}$. Observe that
$\pi^{\prime}$ has two cycles, $A$ and $B$ (which we regard as blocks and
vertices of $T_{n}^{\pi^{\prime}}$), such that
$u^{\prime},\gamma_{n}(v^{\prime})\in A$ and
$v^{\prime},\gamma_{n}(u^{\prime})\in B$. Therefore, $\pi=\pi^{\prime}B$ has
exactly the same cycles as $\pi^{\prime}$ except for one, which is obtained as
the union of $A$ and $B$. This means that the quotient graph $T_{n}^{\pi}$ is
obtained by identifying the vertices $A$ and $B$ of $T_{n}^{\pi^{\prime}}$
which results in $T_{n}^{\pi}$ being a double uniloop graph. This proves
$Im(\Psi)\subset\mathcal{DUL}$.
Conversely, let $T_{n}^{\pi}$ be a graph of double uniloop type. Then all
edges of $\overline{T_{n}^{\pi}}$ have multiplicity $2$. We let $\sigma$ be
the pairing obtained by $u\overset{\sigma}{\sim}v$ if $e_{u}$ and $e_{v}$
connect the same pair of vertices. Let $\tau=\sigma(u^{\prime},v^{\prime})$
where $e_{u^{\prime}}$ and $e_{v^{\prime}}$ correspond to the loops of
$T_{n}^{\pi}$. Then $\\#(\tau)=n/2+1$. By Theorem 8.6, we have that, as
partitions, $\gamma_{n}\sigma\leq\pi$; so
$\\#(\gamma_{n}\sigma)\geq\\#(\pi)=n/2$. By [10, 2.10], we have that
$\\#(\tau)+\\#(\gamma_{n}\tau)+\\#(\gamma_{n})\leq n+2$; so
$\\#(\gamma_{n}\tau)\leq n/2$. Thus
$\\#(\gamma_{n}\tau)\leq n/2\leq\\#(\gamma_{n}\sigma).$
If $u^{\prime}\sim_{\gamma_{n}\sigma}v^{\prime}$, then
$\\#(\gamma_{n}\tau)=\\#(\gamma_{n}\sigma(u^{\prime},v^{\prime}))=\\#(\gamma_{n}\sigma)+1$,
which is impossible. Thus $u^{\prime}\not\sim_{\gamma_{n}\sigma}v^{\prime}$
and hence $\gamma_{n}\sigma<\pi$. This implies that $\\#(\gamma_{n}\sigma)\geq
n/2+1$. Since $\sigma$ is a pairing we must have
$\\#(\gamma_{n}\sigma)=n/2+1$, and thus $\sigma\in\mathit{NC}_{2}(n)$, i.e.
$(\\{u^{\prime},v^{\prime}\\},\sigma)\in\mathit{NC}_{2}^{block}(n)$. Moreover,
note that $\\#(\pi)=n/2$ and $\\#(\gamma_{n}\sigma)=n/2+1$, since
$\gamma_{n}\sigma\leq\pi$ then $\pi$ is obtained by joining two blocks of
$\gamma_{n}\sigma$, these blocks must correspond to
$[u^{\prime}]_{\gamma_{n}\sigma}$ and $[v^{\prime}]_{\gamma_{n}\sigma}$ as we
proved that $u^{\prime}\not\sim_{\gamma_{n}\sigma}v^{\prime}$. On the other
hand, the cycles of the permutation $\gamma_{n}\sigma(u^{\prime},v^{\prime})$
are the same as the cycles of $\gamma_{n}\sigma$ except by one which is the
union of two cycles of $\gamma_{n}\sigma$: $[u^{\prime}]_{\gamma_{n}\sigma}$
and $[v^{\prime}]_{\gamma_{n}\sigma}$. This proves that, as partitions,
$\gamma_{n}\sigma(u^{\prime},v^{\prime})=\pi$ which proves that
$\Psi(\\{u^{\prime},v^{\prime}\\},\sigma)=T_{n}^{\pi}$, thus
$Im(\Psi)=\mathcal{DUL}(n)$.
To verify injectivity, let us remember that if
$(B,\sigma)\in\mathit{NC}_{2}^{block}(n)$ and $\pi=\gamma_{n}\sigma B$ then
$T_{n}^{\pi}$ pairs the edges $e_{u}$ and $e_{v}$ whenever $\\{u,v\\}$ is a
block of $\sigma$, i.e. $\overline{\pi}=\sigma$. Moreover, the block
$B=\\{u^{\prime},v^{\prime}\\}$ corresponds to the loops $e_{u^{\prime}}$ and
$e_{v^{\prime}}$ of $T_{n}^{\pi}$. Let $(B_{1},\sigma_{1})$ and
$(B_{2},\sigma_{2})$ be such that, as partitions,
$\pi_{1}=\gamma_{n}\sigma_{1}B_{1}$ and $\pi_{2}=\gamma_{n}\sigma_{2}B_{2}$
are the same. By the observation above,
$\sigma_{1}=\overline{\pi_{1}}=\overline{\pi_{2}}=\sigma_{2}$. Finally,
observe that since $T_{n}^{\pi_{1}}$ and $T_{n}^{\pi_{2}}$ are the same, their
corresponding loops are the same, which means that the blocks $B_{1}$ and
$B_{2}$ must be the same. ∎
Figure 15. $(\mathcal{V},\pi)\in\mathcal{P\\!S}_{\mathit{NC}}^{(1,1)}(9,5,4)$
with $\pi=(1,8)(2,13)\allowbreak(3,6)(4,5)(7,10)(9,14)(11,12)(15,18)(16,17)$
and
$\mathcal{V}=\\{\\{1,8\\},\\{2,13\\},\\{3,6\\},\\{4,5\\},\\{7,10\\},\\{9,14\\},\\{11,12,16,17\\},\allowbreak\\{15,18\\}\\}.$
The permutation $\pi$ can be written as $\pi_{1}\times\pi_{2}$ with
$\pi_{1}=(1,8)(2,13)(3,6)(4,5)(7,10)(9,14)(11,12)$ and
$\pi_{2}=(15,18)(16,17)$, the permutation $\pi_{1}$ has only the two through
strings $(2,13)$ and $(7,10)$. Figure 16.
$(\mathcal{V},\pi)\in\mathcal{P\\!S}_{\mathit{NC}}^{(1,1)(t)}(9,5,4)$ given by
$\pi=(1,14)\allowbreak(2,3)(4,7)(5,6)(8,9)(10,11)(12,13)(15,16)(17,18)$ and
$\mathcal{V}=\\{\\{1,14,15,16\\},\\{2,3\\},\\{4,7\\},\\{5,6\\},\\{8,9\\},\\{10,11\\},\\{12,13\\},\allowbreak\\{17,18\\}\\}$.
We have $\pi=\pi_{1}\times\pi_{2}\in NC_{2}(9,5)\times NC_{2}(4)$ with
$\pi_{1}$ equal to $\\{(1,14)(2,3)(4,7)(5,6)(8,9)(10,11)(12,13)\\}$ and
$\pi_{2}=\allowbreak(15,16)(17,18)$. Note that $\pi_{1}$ has the unique
through string, $(1,14)$. The block $\\{1,14,15,16\\}$ of $\mathcal{V}$ is
obtained by joining the cycle $(1,14)$ of $\pi_{1}$ to the cycle $(15,16)$ of
$\pi_{2}$.
###### Lemma 8.31.
$|\mathcal{UL}_{2,4}|=|\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})|=|\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})|$
###### Proof.
A quotient graph $T_{m_{1},m_{2},m_{3}}^{\pi}$ can be of $2$-$4$ uniloop type
only when one of $m_{1},m_{2}$ or $m_{3}$ is even and the other two are odd,
similarly $\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})$ and
$\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})$ are non-empty only in
that case, so we may assume $m_{1}$ is even and $m_{2},m_{3}$ are both odd.
The graph $T_{m_{1}}^{\pi}$ is of double uniloop type and can be chosen in
$\frac{m_{1}}{2}|NC_{2}(m_{1})|$ ways as seen in Lemma 8.30. The loop of
$T_{m_{2}}^{\pi}$ can be chosen in $m_{2}$ ways. Then we quotient the rest to
get a double tree. This is basically doing the quotient of a basic cycle of
length $m_{2}-1$ which produces $|NC_{2}(m_{2}-1)|$ distinct double trees, so
$T_{m_{2}}^{\pi}$ is chosen in $m_{2}|NC_{2}(m_{2}-1)|$ ways. Similarly
$T_{m_{3}}^{\pi}$ is chosen in $m_{3}|NC_{2}(m_{3}-1)|$ ways. Since the union
of the graphs $T_{m_{1}}^{\pi}$, $T_{m_{2}}^{\pi}$ and $T_{m_{3}}^{\pi}$ is
already determined by the loops, the number of ways of choosing the
graph $T_{m_{1},m_{2},m_{3}}^{\pi}$ is,
$\frac{m_{1}m_{2}m_{3}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2}-1)||NC_{2}(m_{3}-1)|$
which is the cardinality of
$\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})$. The cardinality of
$\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})$ can be computed easily.
We choose a non-crossing pairing on the $(m_{2},m_{3})$-annulus with a single
through string. By [2, Lemma 13] this can be done in
$\binom{m_{2}}{(m_{2}-1)/2}\binom{m_{3}}{(m_{3}-1)/2}$ ways. The last expression
can be rewritten as,
$m_{2}m_{3}|NC_{2}(m_{2}-1)||NC_{2}(m_{3}-1)|,$
by using the well-known property
$|NC_{2}(n)|=Cat_{n/2}\vcentcolon=\frac{1}{n/2+1}\binom{n}{n/2}$. On the other
hand, the non-crossing pairing on $[m_{1}]$ points can be chosen in
$|NC_{2}(m_{1})|$ ways, and we select a block of that pairing which can be
chosen in $\frac{m_{1}}{2}$ ways. Therefore the cardinality of
$\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})$ is,
$\frac{m_{1}m_{2}m_{3}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2}-1)||NC_{2}(m_{3}-1)|,$
as required. ∎
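The rewriting of the binomial coefficients used in the proof relies on the identity $\binom{m}{(m-1)/2}=m\,|NC_{2}(m-1)|$ for odd $m$, which can be checked numerically; the short Python sketch below (illustrative only, not part of the proof) does so for small odd $m$.

```python
from math import comb

def nc2(n):
    """|NC_2(n)| = Cat_{n/2} for n even, 0 for n odd."""
    if n % 2:
        return 0
    k = n // 2
    return comb(2 * k, k) // (k + 1)

# For odd m, C(m, (m-1)/2) = m * |NC_2(m-1)|, the rewriting used in the proof.
for m in range(1, 30, 2):
    assert comb(m, (m - 1) // 2) == m * nc2(m - 1)
print("identity verified for odd m up to 29")
```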
###### Lemma 8.32.
$|\mathcal{UC}_{2,4}|=2|\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})|-2|\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})|\\\
-2|\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})|$
###### Proof.
We will count all graphs $T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}$.
If $T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}$ then one of the graphs,
$T_{m_{i_{1}}}^{\pi}$, is a double tree, $T_{m_{i_{2}},m_{i_{3}}}^{\pi}$ is a
double unicircuit graph with $(i_{1},i_{2},i_{3})$ a permutation of $(1,2,3)$
and $T_{m_{1},m_{2},m_{3}}^{\pi}$ results of joining $T_{m_{i_{1}}}^{\pi}$ and
$T_{m_{i_{2}},m_{i_{3}}}^{\pi}$ along some edge.
We count firstly the case where the unique circuit of
$\overline{T_{m_{i_{2}},m_{i_{3}}}^{\pi}}$ is not a loop. Lemma 8.18 says that
$T_{m_{i_{1}}}^{\pi}$ can be chosen in $|\mathit{NC}_{2}(m_{i_{1}})|$ ways.
Similarly, Corollary 8.20 says that $T_{m_{i_{2}},m_{i_{3}}}^{\pi}$ can be
chosen in
$|\mathit{NC}_{2}(m_{i_{2}},m_{i_{3}})|\allowbreak-|\mathit{NC}_{2}^{(1)}(m_{i_{2}},m_{i_{3}})|-|\mathit{NC}_{2}^{(2)}(m_{i_{2}},m_{i_{3}})|$
ways. Then we choose an edge from each graph $T_{m_{i_{1}}}^{\pi}$ and
$T_{m_{i_{2}},m_{i_{3}}}^{\pi}$. In the first graph there are
$\frac{m_{i_{1}}}{2}$ choices, and in the second there are
$\frac{m_{i_{2}}+m_{i_{3}}}{2}$ choices. Once the edges are selected, we make
the union of the graphs $T_{m_{i_{1}}}^{\pi}$ and
$T_{m_{i_{2}},m_{i_{3}}}^{\pi}$ along these edges, which can be done in $2$
ways depending on the orientation of the edges. Therefore, the total number of
graphs is given by,
$2\frac{m_{i_{1}}(m_{i_{2}}+m_{i_{3}})}{4}|\mathit{NC}_{2}(m_{i_{1}})|\times\\\
\left[|\mathit{NC}_{2}(m_{i_{2}},m_{i_{3}})|-|\mathit{NC}_{2}^{(1)}(m_{i_{2}},m_{i_{3}})|-|\mathit{NC}_{2}^{(2)}(m_{i_{2}},m_{i_{3}})|\right].$
The last expression is twice the number of partitioned permutations
$(\mathcal{V},\pi_{1}\times\pi_{2})\in\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$
with $\pi_{2}\in\mathit{NC}_{2}(m_{i_{1}})$ and,
$\pi_{1}\in\mathit{NC}_{2}(m_{i_{2}},m_{i_{3}})\setminus(\mathit{NC}_{2}^{(1)}(m_{i_{2}},m_{i_{3}})\cup\mathit{NC}_{2}^{(2)}(m_{i_{2}},m_{i_{3}})),$
and we choose a cycle from each $\pi_{1}$ and $\pi_{2}$ and join them together
to make a block of $\mathcal{V}$.
Similarly we count the case where the unique circuit of
$\overline{T_{m_{i_{2}},m_{i_{3}}}^{\pi}}$ is a loop. $T_{m_{i_{1}}}^{\pi}$
can be chosen in $|\mathit{NC}_{2}(m_{i_{1}})|$ ways, and
$T_{m_{i_{2}},m_{i_{3}}}^{\pi}$ can be chosen in
$|\mathit{NC}_{2}^{(1)}(m_{i_{2}},m_{i_{3}})|$ ways. Then we choose an edge
from each graph $T_{m_{i_{1}}}^{\pi}$ and $T_{m_{i_{2}},m_{i_{3}}}^{\pi}$. In
the first graph there are $\frac{m_{i_{1}}}{2}$ choices, and in the second
there are $\frac{m_{i_{2}}+m_{i_{3}}-2}{2}$ choices because the circuit cannot
be chosen as it is a loop. Once the edges are selected, we have two possible
orientations for the union along the edges. Thus the total number of graphs is
given by,
$2\frac{m_{i_{1}}(m_{i_{2}}+m_{i_{3}}-2)}{4}|\mathit{NC}_{2}(m_{i_{1}})||\mathit{NC}_{2}^{(1)}(m_{i_{2}},m_{i_{3}})|.$
The last expression is twice the number of partitioned permutations
$(\mathcal{V},\pi_{1}\times\pi_{2})\in\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$
with $\pi_{2}\in\mathit{NC}_{2}(m_{i_{1}})$,
$\pi_{1}\in\mathit{NC}_{2}^{(1)}(m_{i_{2}},\allowbreak m_{i_{3}})$, and we
choose a cycle from each $\pi_{1}$ and $\pi_{2}$ and join them together to
make a block of $\mathcal{V}$ with the restriction that the selected cycle of
$\pi_{1}$ cannot be the unique through string of $\pi_{1}$. Adding up both
cases shows that the number of $2$-$4$ unicircuit graphs,
$T_{m_{1},m_{2},m_{3}}^{\pi}$, equals twice the number of partitioned
permutations
$(\mathcal{V},\pi_{1}\times\pi_{2})\in\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})$
with $\pi_{1}\in\mathit{NC}_{2}(m_{i_{2}},m_{i_{3}})$,
$\pi_{2}\in\mathit{NC}_{2}(m_{i_{1}})$ and such that $\pi_{1}$ doesn’t have
two through strings and if $\pi_{1}$ has a single through string then this
through string is never joined to another cycle of $\pi_{2}$ to make a block
of $\mathcal{V}$, i.e. $(\mathcal{V},\pi_{1}\times\pi_{2})$ belongs to,
$\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})\setminus(\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})\cup\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})).$
Thus the number of all possible graphs
$T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}$ is twice the cardinality of
the set,
$\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})\setminus(\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})\cup\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})),$
as desired. ∎
## 9\. The third order moments and cumulants
We are now ready to prove the main theorem. Let us recall Corollary 7.12,
$\alpha_{m_{1},m_{2},m_{3}}=\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,6}\end{subarray}}(k_{6}+6k_{4}+2)+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,4,4}\end{subarray}}(k_{4}+1)^{2}\\\
+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UL}_{2,4}\end{subarray}}(\mathring{k_{4}}+2)+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}\end{subarray}}(k_{4}+1)+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{DB}\end{subarray}}1.\\\ $
On the other hand, Corollary 8.17 says,
$|\mathcal{DB}|=|NC_{2}(m_{1},m_{2},m_{3})|-|\mathcal{UC}_{2,4}|-|\mathcal{T}_{2,4,4}|-2|\mathcal{UL}_{2,4}|-2|\mathcal{T}_{2,6}|.$
Combining these two expressions we get the following simpler expression,
(9.1)
$\alpha_{m_{1},m_{2},m_{3}}=\sum_{NC_{2}(m_{1},m_{2},m_{3})}1+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,6}\end{subarray}}(k_{6}+6k_{4})\\\
+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{T}_{2,4,4}\end{subarray}}(k_{4}^{2}+2k_{4})+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UL}_{2,4}\end{subarray}}\mathring{k}_{4}+\sum_{\begin{subarray}{c}\pi\in\mathcal{P}(m)\\\
T_{m_{1},m_{2},m_{3}}^{\pi}\in\mathcal{UC}_{2,4}\end{subarray}}k_{4}$
###### Theorem 9.1.
Let $m_{1},m_{2},m_{3}\in\mathbb{N}$. Then,
$\alpha_{m_{1},m_{2},m_{3}}=|NC_{2}(m_{1},m_{2},m_{3})|+4k_{6}|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|\\\
+4k_{4}^{2}|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|+2k_{4}|\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})|\\\
+(\mathring{k}_{4}-2k_{4})|\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})|$
###### Proof.
Lemmas 8.23, 8.24, 8.32 and 8.31 let us count the sets
$\mathcal{T}_{2,6},\allowbreak\mathcal{T}_{2,4,4},\allowbreak\mathcal{UL}_{2,4}$
and $\mathcal{UC}_{2,4}$; combining this with Equation (9.1) gives,
(9.2)
$\alpha_{m_{1},m_{2},m_{3}}=|NC_{2}(m_{1},m_{2},m_{3})|+4k_{6}|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|\\\
+4k_{4}^{2}|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|+2k_{4}|\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})|\\\
+(\mathring{k}_{4}-2k_{4})|\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})|+2k_{4}R,$
where $R$ is given by,
$12|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|+4|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|\\\
\mbox{}-|\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})|.$
We will prove that $R=0$. We know that,
$12|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|\\\
=12\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|,$
and,
$4|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|=\\\
4\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{m_{3}}{2}(\frac{m}{2}-3)|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|,$
with $m=m_{1}+m_{2}+m_{3}$. Thus,
(9.3)
$12|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|+4|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|=\\\
\frac{mm_{1}m_{2}m_{3}}{4}|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|$
Now we compute $|\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})|$.
Each element
$(\mathcal{V},\pi_{1}\times\pi_{2})\in\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})$
is such that $\pi_{1}\in NC_{2}^{(2)}(m_{i_{1}},m_{i_{2}})$ and
$\pi_{2}\in NC_{2}(m_{i_{3}})$ for some permutation
$(i_{1},i_{2},i_{3})$ of $(1,2,3)$. Assume $\pi_{1}\in
NC_{2}^{(2)}(m_{1},m_{2})$ and $\pi_{2}\in NC_{2}(m_{3})$. The partition
$\mathcal{V}$ is such that each block of $\mathcal{V}$ is a cycle of $\pi$
except for one block, which is the union of two cycles of $\pi$, one from each
of $\pi_{1}$ and $\pi_{2}$. Those cycles can be chosen in
$\frac{m_{1}+m_{2}}{2}\frac{m_{3}}{2}$ ways. Thus the number of such
partitioned permutations is,
$\frac{(m_{1}+m_{2})m_{3}}{4}|NC_{2}(m_{3})||NC_{2}^{(2)}(m_{1},m_{2})|.$
The set $NC_{2}^{(2)}(m_{1},m_{2})$ can be counted easily. On each circle we
choose two points corresponding to the through strings. By [2, Lemma 13] that
can be done in
$\binom{m_{1}}{\frac{m_{1}}{2}-1}\binom{m_{2}}{\frac{m_{2}}{2}-1}$ ways. Then
we join the points to make the two through strings, which can be done in two
ways. Thus,
$\displaystyle|NC_{2}^{(2)}(m_{1},m_{2})|$ $\displaystyle=$ $\displaystyle
2\binom{m_{1}}{\frac{m_{1}}{2}-1}\binom{m_{2}}{\frac{m_{2}}{2}-1}$
$\displaystyle=$ $\displaystyle
2\frac{m_{1}}{2}\frac{m_{2}}{2}\frac{1}{m_{1}/2+1}\binom{m_{1}}{m_{1}/2}\frac{1}{m_{2}/2+1}\binom{m_{2}}{m_{2}/2}$
$\displaystyle=$
$\displaystyle\frac{m_{1}m_{2}}{2}|NC_{2}(m_{1})||NC_{2}(m_{2})|,$
so the total number of elements
$(\mathcal{V},\pi_{1}\times\pi_{2})\in\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})$
with $\pi_{1}\in NC_{2}^{(2)}(m_{1},m_{2})$ and $\pi_{2}\in NC_{2}(m_{3})$ is
given by,
$\frac{m_{1}m_{2}m_{3}(m_{1}+m_{2})}{8}|NC_{2}(m_{1})||NC_{2}(m_{2})||NC_{2}(m_{3})|.$
In the same way we count the other two cases, corresponding to $\pi_{1}\in
NC_{2}^{(2)}(m_{1},m_{3})$, $\pi_{2}\in NC_{2}(m_{2})$ and to $\pi_{1}\in
NC_{2}^{(2)}(m_{2},m_{3})$, $\pi_{2}\in NC_{2}(m_{1})$. Adding everything up
gives,
$\displaystyle|\mathcal{PS}_{NC_{2}^{(2)}}^{(1,1)}(m_{1},m_{2},m_{3})|$
$\displaystyle=$
$\displaystyle(2m_{1}+2m_{2}+2m_{3})\frac{m_{1}m_{2}m_{3}}{8}|NC_{2}(m_{1})|\,|NC_{2}(m_{2})|\,|NC_{2}(m_{3})|$
$\displaystyle=$
$\displaystyle\frac{mm_{1}m_{2}m_{3}}{4}|NC_{2}(m_{1})|\,|NC_{2}(m_{2})|\,|NC_{2}(m_{3})|.$
So Equation (9.3) implies $R=0$. This turns Equation (9.2) into,
$\alpha_{m_{1},m_{2},m_{3}}=|NC_{2}(m_{1},m_{2},m_{3})|+4k_{6}|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|\\\
+4k_{4}^{2}|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|+2k_{4}|\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})|\\\
+(\mathring{k}_{4}-2k_{4})|\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})|.$
We conclude the proof by replacing
$|\mathcal{PS}_{NC_{2}}^{(1,1)(t)}(m_{1},m_{2},m_{3})|$ by
$|\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})|$, as we proved in
Lemma 8.31 they are equal. ∎
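As an illustrative numerical cross-check (not part of the proof), the cancellation $R=0$ used above can be tested directly on small even triples $m_{1},m_{2},m_{3}$ by comparing the closed-form counts; the sketch below does this using the expressions derived in this section.

```python
from math import comb

def nc2(n):
    if n % 2:
        return 0
    k = n // 2
    return comb(2 * k, k) // (k + 1)

def nc2_two_through(a, b):
    # |NC_2^{(2)}(a,b)| = (a*b/2)|NC_2(a)||NC_2(b)|, as derived above
    return (a * b // 2) * nc2(a) * nc2(b)

def lhs(m1, m2, m3):
    # 12|PS_{NC_2}^{(1,1,1)}| + 4|PS_{NC_2}^{(2,1,1)}|
    m = m1 + m2 + m3
    base = (m1 // 2) * (m2 // 2) * (m3 // 2) * nc2(m1) * nc2(m2) * nc2(m3)
    return 12 * base + 4 * base * (m // 2 - 3)

def rhs(m1, m2, m3):
    # |PS_{NC_2^{(2)}}^{(1,1)}|: sum over the three choices of the annulus pair
    total = 0
    for (a, b, c) in ((m1, m2, m3), (m1, m3, m2), (m2, m3, m1)):
        total += ((a + b) * c // 4) * nc2(c) * nc2_two_through(a, b)
    return total

for triple in ((2, 2, 2), (4, 2, 2), (4, 4, 2), (4, 4, 6), (6, 6, 6)):
    assert lhs(*triple) == rhs(*triple)
print("R = 0 confirmed on sample even triples")
```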
Proof of the main theorem. As we have seen, the third order fluctuation moments
$\alpha_{m_{1},m_{2},m_{3}}$ are somewhat involved. However, the third order
cumulants turn out to be very simple.
###### Theorem 9.2 (Main Theorem).
The third order cumulants, $(\kappa_{p,q,r})_{p,q,r}$, of a Wigner Ensemble,
$X$, are given by,
$\kappa_{p,q,r}=\left\\{\begin{array}[]{lcc}4k_{6}&if&p=q=r=2\\\
\mathring{k}_{4}-2k_{4}&if&\\{p,q,r\\}=\\{2,1,1\\}\\\
0&otherwise\end{array}\right.$
###### Proof.
Let us recall that, up to order two, the free cumulants of $X$ are given by
$\kappa_{2}=1$, $\kappa_{2,2}=2k_{4}$, and $0$ otherwise. Let
$(\kappa^{\prime}_{n})_{n},(\kappa^{\prime}_{p,q})_{p,q}$ and
$(\kappa^{\prime}_{p,q,r})_{p,q,r}$ be the sequences defined by
$\kappa_{n}^{\prime}=\kappa_{n}$ for all $n$,
$\kappa^{\prime}_{p,q}=\kappa_{p,q}$ for all $p,q$ and
$\kappa^{\prime}_{2,2,2}=4k_{6},\kappa^{\prime}_{2,1,1}=\mathring{k}_{4}-2k_{4}$
and $0$ otherwise. By definition $\kappa^{\prime}_{n}$ and
$\kappa^{\prime}_{p,q}$ coincide with the free cumulants of first and second
order $\kappa_{n}$ and $\kappa_{p,q}$. Therefore these sequences satisfy the
moment-cumulant relations:
(9.4)
$\alpha_{m}=\sum_{(\mathcal{V},\pi)\in\mathcal{PS}_{NC}(m)}\kappa^{\prime}_{(\mathcal{V},\pi)}$
(9.5)
$\alpha_{m_{1},m_{2}}=\sum_{(\mathcal{V},\pi)\in\mathcal{PS}_{NC}(m_{1},m_{2})}\kappa^{\prime}_{(\mathcal{V},\pi)}$
for all $m,m_{1},m_{2}$. For a non-crossing partitioned permutation
$(\mathcal{V},\pi)\in\mathcal{PS}_{NC}(m_{1},m_{2},m_{3}),$
let $\kappa^{\prime}_{(\mathcal{V},\pi)}$ be the multiplicative extension of
$(\kappa^{\prime}_{n})_{n},(\kappa^{\prime}_{p,q})_{p,q}$ and
$(\kappa^{\prime}_{p,q,r})_{p,q,r}$, i.e.,
$\kappa^{\prime}_{(\mathcal{V},\pi)}=\prod_{\begin{subarray}{c}B\text{ block of }\mathcal{V}\\\ V_{1},\dots,V_{i}\text{ cycles of }\pi\text{ contained in }B\end{subarray}}\kappa^{\prime}_{|V_{1}|,\dots,|V_{i}|}.$
Observe that,
$\sum_{(\mathcal{V},\pi)\in\mathcal{PS}_{NC}(m_{1},m_{2},m_{3})}\kappa^{\prime}_{(\mathcal{V},\pi)}=|\mathit{NC}_{2}(m_{1},m_{2},m_{3})|\\\
+4k_{6}|\mathcal{PS}_{NC_{2}}^{(1,1,1)}(m_{1},m_{2},m_{3})|+4k_{4}^{2}|\mathcal{PS}_{NC_{2}}^{(2,1,1)}(m_{1},m_{2},m_{3})|\\\
+2k_{4}|\mathcal{PS}_{NC_{2}}^{(1,1)}(m_{1},m_{2},m_{3})|+(\mathring{k}_{4}-2k_{4})|\mathcal{PS}_{NC_{2,1,1}}^{(1,1,1)}(m_{1},m_{2},m_{3})|.$
According to Theorem 9.1, the last expression equals $\alpha_{m_{1},m_{2},m_{3}}$,
so the sequences $(\kappa^{\prime}_{n})_{n},(\kappa^{\prime}_{p,q})_{p,q}$ and
$(\kappa^{\prime}_{p,q,r})_{p,q,r}$ satisfy the moment-cumulant relation of
order three, namely,
(9.6)
$\alpha_{m_{1},m_{2},m_{3}}=\sum_{(\mathcal{V},\pi)\in\mathcal{PS}_{NC}(m_{1},m_{2},m_{3})}\kappa^{\prime}_{(\mathcal{V},\pi)}.$
However the free cumulants $(\kappa_{n})_{n},(\kappa_{p,q})_{p,q}$ and
$(\kappa_{p,q,r})_{p,q,r}$ are the unique sequences satisfying Equations
(9.4), (9.5) and (9.6), so it must be $\kappa^{\prime}_{p,q,r}=\kappa_{p,q,r}$
as desired. ∎
## Acknowledgements
We would like to thank Roland Speicher for his comments and fruitful
discussions while preparing this paper.
## References
* [1] G. Anderson, A. Guionnet, and O. Zeitouni, An Introduction to Random Matrices, Cambridge Studies in Advanced Mathematics. 118, Cambridge Univ. Press, 2010\.
* [2] C. Armstrong, J. A. Mingo, R. Speicher, and J. C. H. Wilson, The non-commutative cycle lemma, J. Combin. Theory Ser. A, 2009.
* [3] G. Borot, S. Charbonnier, E. Garcia-Failde, F. Leid, S. Shadrin, Analytic theory of higher order free cumulants, arXiv:2112.12184.
* [4] B. Collins, J. A. Mingo, P. Śniady, R. Speicher, Second order freeness and fluctuations of Random Matrices: III. Higher Order Freeness and Free Cumulants, Doc. Math., 12 (2007), 1-70.
* [5] A. Khorunzhy, B. Khoruzhenko, and L. Pastur, On the $\textit{1}/N$ corrections to the Green functions of random matrices with independent entries, J. Phys. A 28 (1995), L31–L35.
* [6] S. K. Lando and A. K. Zvonkin, Graphs on surfaces and their applications, Encyclopaedia of Mathematical Sciences 141, Springer, 2003.
* [7] C. Male, Traffic Distributions and Independence: Permutation Invariant Random Matrices and the Three Notions of Independence, Mem. Amer. Math. Soc. 267 (2020), no. 1300.
* [8] C. Male, J. A. Mingo, S. Péché, R. Speicher, Joint Global fluctuations of complex Wigner and Deterministic Matrices, Random Matrices Theory Appl. 11 (2022), paper 2250015, 46 pages.
* [9] W. Massey, A Basic Course in Algebraic Topology, Graduate Texts in Mathematics 127, Springer-Verlag, 1991.
* [10] J. A. Mingo, A. Nica, Annular non-crossing permutations and partitions, and second-order asymptotics for random matrices, Int. Math. Res. Not. IMRN, 2004, 1413-1460.
* [11] J. A. Mingo, P. Śniady, and R. Speicher, Second order freeness and fluctuations of Random Matrices: II. Unitary Random Matrices , Adv. in Math., 209 (2007), 212-240.
* [12] J. A. Mingo and R. Speicher, Free probability and Random Matrices, Fields Institute Monographs, no. 35, Springer-Nature, 2017.
* [13] A. Nica and R. Speicher, Lectures on the combinatorics of free probability, London Mathematical Society Lecture Note Series 335, Cambridge University Press, 2006.
* [14] T. Nishizeki and N. Chiba, Planar graphs: Theory and algorithms, Annals of Discrete Mathematics 32, North-Holland, 1998.
* [15] L. Pastur and M. Shcherbina, Eigenvalue Distributions of Large Random Matrices, Mathematical Surveys and Monographs 171, Amer. Math. Soc., 2011.
* [16] E. P. Wigner, Characteristic vectors of bordered matrices with infinite dimensions, Ann. of Math. (2) 62 (1955), 548–564, and part II, 65 (1957), 203-207.
* [17] E. P. Wigner, On the distribution of the roots of certain symmetric matrices, Ann. of Math. (2) 67 (1958), 325-327.
# Elastic properties of a Sc-Zr-Nb-Ta-Rh-Pd high-entropy alloy superconductor
Yupeng Pana, Xiaobo Hea, Binjie Zhoua, Denver Strongb, Jian Zhanga, Hai-Bin
Yua, Yunfei Tana, Robert J. Cavab∗, and Yongkang Luoa† a Wuhan National High
Magnetic Field Center and School of Physics, Huazhong University of Science
and Technology, Wuhan 430074, China b Department of Chemistry, Princeton
University, Princeton, New Jersey 08544, United States<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We report a comprehensive study on the elastic properties of a hexanary high-
entropy alloy superconductor (ScZrNbTa)0.685[RhPd]0.315 at room and cryogenic
temperatures, by Resonant Ultrasound Spectroscopy experiments. The derived
elastic constants are bulk modulus $K=132.7$ GPa, Young’s modulus $E=121.0$
GPa, shear modulus $G=44.9$ GPa, and Poisson’s ratio $\nu$=0.348 for room
temperature. The Young’s and shear moduli are $\sim 10\%$ larger than those in
NbTi superconductor with similar $T_{c}$, while the ductility is comparable.
Moreover, the mechanical performance is further enhanced at cryogenic
temperature. Our work confirms the advantageous mechanical properties of high-
entropy alloy superconductors and highlights their application prospects.
Keywords: High-entropy alloy, Superconductor, Elastic constants
###### pacs:
74.25.Ld, 62.20.DC, 43.35.Cg
## 1 Introduction
Superconductors (SCs) are a class of ever-green functional materials
discovered in 1911[1], with great application prospects in electrical current
transfer, ground transportation, nuclear magnetic resonance (NMR) and medical
resonance imaging (MRI), International Thermonuclear Experimental Reactor
(ITER) and new-generation quantum computation as well. To date, superconducting
wires based on the so-called “low-$T_{c}$” superconductors Nb-Ti and Nb3Sn
still dominate most of the commercial applications of superconducting
electromagnets. For industrial applications, the most crucial prerequisites of
a SC are high critical temperature ($T_{c}$), high upper critical field
($H_{c2}$), and high critical current density ($j_{c}$), which guarantee the
strong magnetic field output. Apart from these factors, another technical
issue one has to confront is the mechanical performance. On the one hand,
materials with good ductility are easier to draw into wires; on the other hand,
the strain effect exerted by the electromagnetic force on the superconducting
wires reduces their superconducting properties[2], and this is particularly
the case for the A15-phase Nb3Sn[3, 4]. For these reasons, SCs with both high
strength and good ductility are of considerable interest.
High-entropy alloys (HEAs) refer to systems containing more than four metallic
elements in equimolar or near-equimolar ratios, offering a rich platform for
materials design. Earlier studies on HEAs have revealed a series of intriguing
properties, such as high hardness and strength[5, 6], simultaneous strength
and ductility[7], outstanding corrosion and oxidation resistance[6, 8, 9],
elegant strength-to-weight ratio[10], improved mechanical properties at
cryogenic temperatures[11], etc. A natural question concerns whether
practical superconducting wires can be made of HEAs. Indeed, HEA SCs represent
a unique crossing point between novel superconductors and functional
high-entropy alloys, and they have attracted extensive interest in recent years. In
2014, Koželj et al reported the synthesis of the first HEA SC
Ta34Nb33Hf8Zr14Ti11 with $T_{c}\approx 7.3$ K[12]. One salient feature of this
SC is that it exhibits extraordinarily robust zero-resistance
superconductivity under pressure up to 190.6 GPa[13]. Later on,
superconductivity was also observed in other HEAs e.g. CsCl-type Sc-Zr-Nb-Rh-
Pd, Sc-Zr-Nb-Ta-Rh-Pd[14], $Tr$Zr2-type (Fe,Co,Ni,Rh,Ir)Zr2 [15, 16], and hcp-
structured (MoReRu)(1-2x)/33(PdPt)x [17]. Most interestingly, the pentanary
HEA SC (ScZrNb)0.65[RhPd]0.35 has $T_{c}\approx 9.7$ K and $H_{c2}\approx
10.7$ T, comparable to those in NbTi[18, 19], the superconducting alloy that
accounts for a majority of the global superconductivity market for prevalent
MRIs. Although HEA SCs have demonstrated excellent superconductivity, little
is known about their mechanical performance[17, 20]; in particular, a
comprehensive study of their elastic constants and moduli at cryogenic
temperature is still lacking.
Herein, by employing the Resonant Ultrasound Spectroscopy (RUS) technique, we
performed a comprehensive study of the second-rank elastic tensor of a
representative HEA SC (ScZrNbTa)0.685[RhPd]0.315, which becomes a SC below
$\sim$7 K. The derived elastic constants at room temperature are bulk modulus
$K=132.7$ GPa, Young’s modulus $E=121.0$ GPa, shear modulus $G=44.9$ GPa, and
Poisson’s ratio $\nu=0.348$. In particular, $E$ and $G$ are $\sim 10\%$ larger
than the NbTi SC, while their $\nu$s are at the same level. These parameters
suggest that this superconducting HEA possesses both good strength and
ductility. Meanwhile, unlike Nb3Sn, the excellent mechanical performance is
retained even at cryogenic temperature. Our work confirms the advantageous
mechanical properties of high-entropy alloy superconductors and highlights
their application prospects.
## 2 Experimental
The polycrystalline hexanary HEA (ScZrNbTa)0.685[RhPd]0.315 sample studied
here was grown by the arc-melting method as described elsewhere[14]. The
composition and structural characterizations were performd by X-ray
diffraction (Cu-$K_{\alpha}$) and energy-dispersive X-ray spectroscopy (EDS)
measurements. Electrical resistivity was measured as a function of temperature
by the standard four-lead method in a commercial Physical Property
Measurement System (PPMS-9, Quantum Design). For RUS measurements, the sample
was carefully polished into a parallelepiped with the dimensions $1.152\times
0.975\times 0.586$ mm3 and mass 6.33 mg. A schematic of the RUS experimental
set-up is shown in Fig. 1(a). The transducers are made of a Lead Zirconate
Titanate (PZT) plate and an Al2O3 hemisphere, and the latter is used for
electrical isolation and mechanical protection[21]. A pair of transducers was
used in the RUS measurements: the bottom one as the ultrasound driving source
and the top one as the signal pick-up. To reduce the damping of the vibration
modes, the sample is point-touch mounted between the two transducers. The
measurements were made by sweeping frequency at fixed temperatures. More
details about the RUS measurements can be found in Ref. [22]. To measure the
low-temperature elastic constants, a helium-flow cryostat (OptistatCF, Oxford)
was exploited to cool the sample down to $\sim$5.4 K.
Figure 1: (a) Schematic of RUS experimental set-up. (b) Temperature dependent
resistivity of (ScZrNbTa)0.685[RhPd]0.315 showing an onset of the
superconducting transition at $T_{c}^{on}=7$ K. The inset is the crystalline
structure.
## 3 Results and Discussion
The composition of the sample studied in this paper is
(ScZrNbTa)0.685[RhPd]0.315, which is a hexanary HEA superconductor with onset
superconducting transition $T_{c}^{on}\approx 7$ K, verified by resistivity
measurements shown in Fig. 1(b). The bulk nature of superconductivity was
confirmed by the Meissner effect in our previous work[14]. This material has
CsCl-type structure and mixed-site occupancies[14]. The advantage of RUS
experiment is that it can extract the full elastic tensor $\\{C_{ij}\\}$
($i,j$=1-6) in a single frequency sweep. Because the sample is an isotropic
polycrystal, the elastic tensor has only two independent elements, viz.
$C_{11}$ and $C_{44}$ in Voigt notation, whereas $C_{12}$ can be retrieved by
$C_{12}=C_{11}-2C_{44}$.
Figure 2: Bottom panel, vibrational spectrum of (ScZrNbTa)0.685[RhPd]0.315 at
room temperature. Top panel shows the first 59 resonances, red circles -
experimental data, and black crosses - calculated data.
For a sample with given elastic constants, density and dimensions, all the
normal resonance modes can, in principle, be computed directly by solving the
three-dimensional elastic wave equation[22, 23, 24]. In the RUS measurement we
proceed in the opposite direction. We swept the frequency from 0.7 to 3.5 MHz, and
collected the full spectrum at different temperatures. A representative
spectrum is displayed in the bottom panel of Fig. 2. About 59 resonant peaks
can be recognized in this range, and each stands for a specific vibration
mode. The elastic constants are derived by a computerized fitting algorithm
with a least-square criterion, in which $C_{11}$ and $C_{44}$ are set as free
fitting parameters. The iteration continues until
$\chi^{2}\equiv\sum_{n}[(F_{n}^{expt}-F_{n}^{cal})/F_{n}^{expt}]^{2}$
is minimized, where $F_{n}^{cal}$ and $F_{n}^{expt}$ are the $n$th calculated and
experimental frequencies, respectively. The fitting yields $C_{11}=192.6$ GPa,
$C_{12}=102.8$ GPa, and $C_{44}=44.9$ GPa for $T=300$ K. The fitting pattern
is presented in the top panel of Fig. 2. The shear modulus $G=C_{44}=44.9$ GPa
is obtained directly.
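The structure of this inversion can be sketched as follows. The forward solver that computes the resonance frequencies of the parallelepiped (the Rayleigh-Ritz calculation of Refs. [22, 23, 24]) is replaced here by a toy surrogate, so the snippet only illustrates the least-squares fitting loop, not the actual RUS analysis; the numbers produced by the surrogate are not physical.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(c11, c44, n_modes):
    # Placeholder for the Rayleigh-Ritz forward solver that would return the
    # lowest n_modes resonance frequencies (MHz) of the parallelepiped for
    # given elastic constants, density and dimensions.  This toy surrogate is
    # NOT the real calculation; it only exercises the fitting loop.
    n = np.arange(1, n_modes + 1)
    return 0.5 + 0.03 * n * np.sqrt(c44) + 0.004 * np.sqrt(n) * np.sqrt(c11)

# Mock "measured" peak frequencies generated from the surrogate itself;
# real data would come from the peaks of the measured spectrum.
f_expt = forward_model(192.6, 44.9, 59)

def residuals(params):
    c11, c44 = params
    return (f_expt - forward_model(c11, c44, len(f_expt))) / f_expt  # relative misfit

fit = least_squares(residuals, x0=[180.0, 40.0])   # minimizes chi^2 = sum(residuals^2)
print(f"C11 = {fit.x[0]:.1f} GPa, C44 = {fit.x[1]:.1f} GPa, chi^2 = {np.sum(fit.fun**2):.2e}")
```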
With these elastic parameters, we further calculate the bulk modulus [22]
$K=\frac{C_{11}+2C_{12}}{3}=132.7~{}GPa,$ (1)
Young’s modulus
$E=\frac{9KG}{3K+G}=121.0~{}GPa,$ (2)
and Poisson’s ratio
$\nu=\frac{3K-2G}{2(3K+G)}=0.348.$ (3)
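For reference, the algebra of Eqs. (1)-(3) can be reproduced directly from the two fitted constants; the short Python sketch below (illustrative only) uses the room-temperature values quoted above.

```python
# Room-temperature fitted constants (GPa) for the isotropic polycrystal.
C11, C44 = 192.6, 44.9
C12 = C11 - 2 * C44                         # 102.8 GPa

G = C44                                      # shear modulus
K = (C11 + 2 * C12) / 3                      # bulk modulus, Eq. (1)
E = 9 * K * G / (3 * K + G)                  # Young's modulus, Eq. (2)
nu = (3 * K - 2 * G) / (2 * (3 * K + G))     # Poisson's ratio, Eq. (3)

print(f"K = {K:.1f} GPa, E = {E:.1f} GPa, G = {G:.1f} GPa, nu = {nu:.3f}, K/G = {K/G:.2f}")
# roughly K = 132.7 GPa, E = 121.0 GPa, G = 44.9 GPa, nu = 0.348
```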
According to Pugh[25], materials with the ratio $K/G>1.75$ are classified as
ductile. For (ScZrNbTa)0.685[RhPd]0.315, this ratio reaches 2.95, indicative
of good ductility. This is further supported by the Poisson’s ratio: $\nu$ of
covalent systems is known to be small ($\nu\sim 0.1$), while that of ionic
crystals is $\sim 0.25$[26]. The value $\nu=0.348$ in
(ScZrNbTa)0.685[RhPd]0.315 resides in the ductile regime ($\nu>0.31$)[27].
Table 1: Elastic constants of representative conductors and superconductors
at room temperature. $K$ - Bulk modulus, $E$ - Young’s modulus, $G$ - Shear modulus, $\nu$ - Poisson’s ratio. Acoustic Debye temperature $\Theta_{D}$ is calculated according to the elastic constants.
Materials | $T_{c}$ (K) | $K$ (GPa) | $E$ (GPa) | $G$ (GPa) | $\nu$ | $\Theta_{D}$ (K)
---|---|---|---|---|---|---
(ScZrNbTa)0.685[RhPd]0.315 | 7 | 132.7 | 121.0 | 44.9 | 0.348 | 277
Cu[28] | $-$ | 140.2 | 123.5 | 45.4 | 0.350 | 332
Nb[29] | 9.2 | 174.3 | 108.9 | 39.0 | 0.396 | 274
NbTi[30] | 9.7 | 131.8 | 111.7 | 41.1 | 0.359 | 325
Nb3Sn[31] | 18 | 155.6 | 139.4 | 51.6 | 0.351 | 305
V3Ga[32] | 15 | 115.0 | | | |
YNi2B2C[33] | 15.3 | 184.7 | 218.1 | 83.7 | 0.303 | 553
MgB2[34] | 39 | 128.0 | 245.0 | 104.0 | 0.180 | 971
YBCO[35] | 90 | 115.9 | 167.9 | 66.7 | 0.259 | 462
LaFeAsO†[36] | $-$ | 47.3 | 73.9 | 29.8 | 0.240 | 300
† LaFeAsO becomes a superconductor when electron-doped by partially
substituting O with F, and the $T_{c}$ of LaFeAsO0.89F0.11 reaches 26 K[37].
The elastic moduli of LaFeAsO derived from powder polycrystal are probably
underestimated. The values from first-principles calculation are[26]: $K=97.9$
GPa, $E=141.5$ GPa, $G=56.2$ GPa, $\nu=0.259$, $\Theta_{D}=416$ K.
The optimized elastic constants from RUS measurements also enable us to
estimate some thermodynamic parameters. The Debye temperature ($\Theta_{D}$)
is related to the average sound velocity ($v_{m}$) via [38]
$\Theta_{D}=\frac{h}{k_{B}}[\frac{3n\rho N_{A}}{4\pi M}]^{1/3}v_{m},$ (4)
where $h$ is the Planck constant and $k_{B}$ is the Boltzmann constant,
$N_{A}$ is Avogadro’s number, $\rho$=9.610 g/cm3 is the density, $M$ is the
molecular weight of the solid, and $n$=2 is the number of atoms in the CsCl-
type molecule[38]. Here we adopted the average molecular weight $M$=206.376
g/mol for (ScZrNbTa)0.685[RhPd]0.315. The average sound velocity is taken as
[39]
$\frac{1}{v_{m}^{3}}=\frac{1}{3}(\frac{1}{v_{11}^{3}}+\frac{2}{v_{44}^{3}}),$
(5)
where $v_{11}=4477$ m/s and $v_{44}$=2162 m/s are the longitudinal and shear
sound velocities, respectively, and they can be retrieved from
$v_{ii}=\sqrt{C_{ii}/\rho}$ ($i$=1,4). The calculations result in $v_{m}=2430$
m/s and $\Theta_{D}=277$ K.
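As a numerical check of Eqs. (4) and (5), the sketch below evaluates the sound velocities and $\Theta_{D}$ from the fitted elastic constants and the material parameters quoted above (SI units; $M$ is the molecular weight of the two-atom CsCl-type formula unit).

```python
import numpy as np

h, k_B, N_A = 6.62607015e-34, 1.380649e-23, 6.02214076e23  # Planck, Boltzmann, Avogadro

def debye_temperature(C11, C44, rho, M, n):
    """Sound velocities (Eq. 5) and acoustic Debye temperature (Eq. 4).

    C11, C44 in Pa; rho in kg/m^3; M in kg/mol; n = atoms per formula unit.
    """
    v11 = np.sqrt(C11 / rho)                                    # longitudinal velocity
    v44 = np.sqrt(C44 / rho)                                    # shear velocity
    v_m = (3.0 / (1.0 / v11**3 + 2.0 / v44**3)) ** (1.0 / 3.0)  # Eq. (5)
    theta_D = (h / k_B) * (3.0 * n * rho * N_A / (4.0 * np.pi * M)) ** (1.0 / 3.0) * v_m
    return v11, v44, v_m, theta_D

print(debye_temperature(192.6e9, 44.9e9, 9610.0, 206.376e-3, 2))
# -> v11 ~ 4477 m/s, v44 ~ 2162 m/s, v_m ~ 2430 m/s, Theta_D ~ 277 K
```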
In addition, the lattice thermal conductivity $\kappa_{L}$ (for
$T>\Theta_{D}$) can be evaluated by[40, 41]
$\kappa_{L}=\frac{A\bar{M}\delta n^{1/3}\Theta_{D}^{3}}{\gamma^{2}T},$ (6)
where $\bar{M}=103.188$ g/mol is the average mass of the atoms in the crystal,
$\gamma\equiv\frac{3}{2}(\frac{3v_{11}^{2}-4v_{44}^{2}}{v_{11}^{2}+2v_{44}^{2}})=2.1$
is the acoustic Grüneisen parameter that characterizes the anharmonicity of a
sample [42, 43], $A=\frac{2.43\times 10^{4}}{1-0.514/\gamma+0.228/\gamma^{2}}$
W/m${}^{2}\cdot$K${}^{3}$, and $\delta^{3}$ signifies the average volume occupied by
one atom in the crystal that can be known from the structural parameters[14].
This gives rise to $\kappa_{L}=16.0$ W/m$\cdot$K at 300 K.
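Continuing the sketch above, the Grüneisen parameter and the Slack estimate of Eq. (6) can be evaluated as below. Note that here $\delta$ is estimated from the measured density rather than from the structural parameters of Ref. [14] (an assumption made only for this illustration), and $\bar{M}$ enters numerically in g/mol, so the result reproduces the quoted $\kappa_{L}$ only approximately.

```python
import numpy as np

N_A = 6.02214076e23

def slack_kappa_L(v11, v44, theta_D, rho, M_bar, n, T=300.0):
    """Acoustic Gruneisen parameter and lattice thermal conductivity of Eq. (6), for T > Theta_D.

    v11, v44 in m/s; rho in kg/m^3; M_bar is the average atomic mass in g/mol.
    """
    gamma = 1.5 * (3.0 * v11**2 - 4.0 * v44**2) / (v11**2 + 2.0 * v44**2)
    A = 2.43e4 / (1.0 - 0.514 / gamma + 0.228 / gamma**2)
    delta = (M_bar * 1e-3 / (rho * N_A)) ** (1.0 / 3.0)   # cube root of volume per atom (m), assumed from density
    kappa_L = A * M_bar * delta * n ** (1.0 / 3.0) * theta_D**3 / (gamma**2 * T)
    return gamma, kappa_L

print(slack_kappa_L(4477.0, 2162.0, 277.0, 9610.0, 103.188, 2))
# -> gamma ~ 2.1, kappa_L ~ 16 W/(m K) at 300 K
```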
The elastic constants and the estimated thermodynamic parameters for
(ScZrNbTa)0.685[RhPd]0.315 are summarized in Table 1, and for comparison, we
also list other representative conductors and superconductors. It is
interesting to note that the Young’s and shear moduli of
(ScZrNbTa)0.685[RhPd]0.315 are about 10% larger than in Nb and NbTi that have
comparable $T_{c}$. Meanwhile, the Poisson’s ratios of
(ScZrNbTa)0.685[RhPd]0.315, Cu and NbTi are essentially the same, implying
that they have similar ductility. Therefore, filamentary superconducting wires
made of (ScZrNbTa)0.685[RhPd]0.315 / Cu composite are possible.
Figure 3: Low-temperature elastic constants of (ScZrNbTa)0.685[RhPd]0.315.
(a), $C_{11}$ (left) and $C_{12}$ (right); (b), $K$ (left) and $G$ (right);
(c) $E$ (left) and $\nu$ (right). The dashed lines mark $T_{c}$.
In order to study the mechanical performance under cryogenic conditions, we
also performed the RUS measurements at low temperature, and the results are
shown in Fig. 3. Upon cooling, $C_{44}$ ($=G$) increases monotonically below
20 K, and a weak inflection is visible at $T_{c}$, manifesting the
superconducting transition. The same trend is also seen in Young’s modulus.
Other elastic moduli including $C_{11}$, $C_{12}$ and $K$ also increase below
20 K, and tend to level off, but soften slightly near $T_{c}$. The tiny
feature of elastic constants at $T_{c}$ probably implies relatively weak
electron-phonon coupling, which is conceivable here. In particular, the
absence of a step-like discontinuity in $C_{ij}$ around $T_{c}$ evidences that
the coupling between strain and superconducting order parameter is very
weak[44]. At 5.4 K, the base temperature of our measurements, $K=138.6$ GPa,
$E=126.2$ GPa, and $G=46.8$ GPa. It is important to note that for all the
temperatures tested, Poisson’s ratio remains essentially constant at about 0.348. All these suggest that under cryogenic conditions this HEA exhibits even better
mechanical performance than at room temperature. We should also point out that
in Nb3Sn, due to the formation of martensitic phase ($\sim 43$ K), Young’s and
shear moduli are softened dramatically, and the values reduce to $E=49$ GPa
and $G=16.8$ GPa at 4.2 K[45]. This makes Nb3Sn rather brittle and in turn
causes the large strain dependence in the critical current density[3, 4]. Such
a shortcoming is absent in (ScZrNbTa)0.685[RhPd]0.315.
Finally, it is important to note that among the superconductors listed in Table 1, (ScZrNbTa)0.685[RhPd]0.315 has the elastic constants closest to those of Cu; in other words, the elastic properties of (ScZrNbTa)0.685[RhPd]0.315 and Cu are highly compatible. This suggests that a filamentary superconducting wire made of a (ScZrNbTa)0.685[RhPd]0.315 / Cu composite is subject to little strain effect, and thus will maintain good superconductivity. Also, because all the elastic moduli of Cu are slightly larger than those of (ScZrNbTa)0.685[RhPd]0.315, the (ScZrNbTa)0.685[RhPd]0.315 / Cu composite wire is expected to exhibit even better mechanical performance than pure (ScZrNbTa)0.685[RhPd]0.315.
## 4 Conclusion
In conclusion, the elastic properties of the high-entropy alloy superconductor
(ScZrNbTa)0.685[RhPd]0.315 have been studied by Resonant Ultrasound
Spectroscopy measurements. The room-temperature bulk modulus is $K=132.7$ GPa,
Young’s modulus is $E=121.0$ GPa, shear modulus is $G=44.9$ GPa, and Poisson’s
ratio is $\nu=0.348$. The Young’s and shear moduli are $\sim$10% larger than
those in NbTi superconductor with similar $T_{c}$. Most crucially, the
mechanical performance is further improved at cryogenic temperature. These
results illustrate the advantageous elastic properties of high-entropy alloy
superconductors, and suggest their feasibility for industrial applications.
## 5 CRediT authorship contribution statement
Yupeng Pan: Elastic property measurements, Data analysis, Writing. Xiaobo He:
Resistivity measurements. Binjie Zhou: Elastic property measurements. Denver
Strong: Sample synthesis. Jian Zhang: Elastic property measurements. Hai-Bin
Yu: Validation, Guidance. Yunfei Tan: Validation, Guidance. Robert J. Cava:
Sample synthesis, Validation, Guidance. Yongkang Luo: Supervision,
Methodology, Writing.
## 6 Declaration of Competing Interest
The authors declare that they have no known competing financial interests or
personal relationships that could have appeared to influence the work reported
in this paper.
## 7 Acknowledgments
We thank Albert Migliori and Brad Ramshaw for technical aid. This work was
supported by National Natural Science Foundation of China (No. 52077086), and
the sample preparation at Princeton University was supported by the US
Department of Energy Division of Basic Energy Sciences grant number DE-
FG02-98ER45706.
## 8 Author contributions
The manuscript was written with contributions from all the authors. All the
authors have approved the final version of the manuscript.
## References
* [1] H. K. Onnes, The Superconductivity of Mercury, Commun. Phys. Lab. Univ. Leiden. Suppl. 122-124 (1911). https://doi.org/10.1103/physreva.86.023405
* [2] J. W. Ekin, A. F. Clark, Effect of Strain on the Critical Current of Nb3Sn and NbTi Multifilamentary Composite Wires, AIP Conference. Proceedings 34 (1976) 81. https://doi.org/10.1063/1.2946169
* [3] J. W. Ekin, Strain dependence of the critical current and critical field in multifilamentary Nb3Sn composites, IEEE Transactions on Magnetics 15 (1979) 197. https://doi.org/10.1109/TMAG.1979.1060207
* [4] B. ten Haken, A. Godeke, H. H. J. ten Kate, The strain dependence of the critical properties of Nb3Sn conductors, J. Appl. Phys. 85 (1999) 3247. https://doi.org/10.1063/1.369667
* [5] A. Amar, J. F. Li, S. Xiang, X. Liu, Y.Z. Zhou, G.M. Le, X. Y. Wang, F. S. Qu, S. Y. Ma, W. M. Dong, Q. Li, Additive manufacturing of high-strength CrMnFeCoNi-based High Entropy Alloys with TiC addition, Intermetallics 109 (2019) 162-166. https://doi.org/10.1016/j.intermet.2019.04.005
* [6] C. Chen, S. H. Yuan, J.L. Chen, W. Wang, W. W. Zhang, R. Wei, T. Wang, T. Zhang, S. K. Guan, F. S. Li, A Co-free Cr-Fe-Ni-Al-Si high entropy alloy with outstanding corrosion resistance and high hardness fabricated by laser surface melting, Mater. Lett. 314 (2022) 131882. https://doi.org/10.1016/j.matlet.2022.131882
* [7] Y. Zou, H. Ma, R. Spolenak, Ultrastrong ductile and stable high-entropy alloys at small scales, Nat. Commun. 6 (2015) 7748. https://doi.org/10.1038/ncomms8748
* [8] B. Gorr, M. Azim, H. J. Christ, T. Mueller, D. Schliephake, M. Heilmaier, Phase equilibria, microstructure, and high temperature oxidation resistance of novel refractory high-entropy alloys, J. Alloy Compd. 624 (2015) 270. https://doi.org/10.1016/j.jallcom.2014.11.012
* [9] W. R. Zhang, W. B. Liao, P. K. Liaw, J. L. Ren, J. Brechtl, Y. Zhang, Effects of transient thermal shock on the microstructures and corrosion properties of a reduced activation high-entropy alloy, J. Alloy Compd. 918 (2022) 165762. https://doi.org/10.1016/j.jallcom.2022.165762
* [10] K. M. Youssef, A. J. Zaddach, C. Niu, D. L. Irving, C. C. Koch, A Novel Low-Density, High-Hardness, High-entropy Alloy with Close-packed Single-phase Nanocrystalline Structures, Materials Research Letters 3 (2015) 95. https://doi.org/10.1080/21663831.2014.985855
* [11] B. Gludovatz, A. Hohenwarter, D. Catoor, E. H. Chang, E. P. George, R. O. Ritchie, A fracture-resistant high-entropy alloy for cryogenic applications, Science 345 (2014) 1153. https://doi.org/10.1126/science.1254581
* [12] P. Koželj, S. Vrtnik, A. Jelen, S. Jazbec, Z. Jagličić, S. Maiti, M. Feuerbacher, W. Steurer, J. Dolinšek, Discovery of a Superconducting High-Entropy Alloy, Phys. Rev. Lett. 113 (2014) 107001. https://doi.org/10.1103/PhysRevLett.113.107001
* [13] J. Guo, H. Wang, v. F. O. Rohr, Z. Wang, S. Cai, Y. Zhou, K. Yang, A. Li, S. Jiang, Q. Wu, R. J. Cava, L. Sun, Robust zero resistance in a superconducting high-entropy alloy at pressures up to 190 GPa, Proceedings of the National Academy of Sciences (USA) 114 (2017) 13144. https://doi.org/10.1073/pnas.1716981114
* [14] K. Stolze, J. Tao, v. F. O. Rohr, T. Kong, R. J. Cava, Sc-Zr-Nb-Rh-Pd and Sc-Zr-Nb-Ta-Rh-Pd High-Entropy Alloy Superconductors on a CsCl-Type Lattice, Chem. Mater. 30 (2018) 906. https://doi.org/10.1021/acs.chemmater.7b04578
* [15] Y. Mizuguchi, M. R. Kasem, T. D. Matsuda, Superconductivity in CuAl2-type Co0.2Ni0.1Cu0.1Rh0.3Ir0.3Zr2 with a high-entropy-alloy transition metal site, Materials Research Letters 9 (2021) 141. https://doi.org/10.1080/21663831.2020.1860147
* [16] M. R. Kasem, A. Yamashita, Y. Goto, T. D. Matsuda, Y. Mizuguchi, Synthesis of high-entropy-alloy-type superconductors (Fe,Co,Ni,Rh,Ir)Zr2 with tunable transition temperature, J. Mater. Sci. 56 (2021) 9499. https://doi.org/10.1007/s10853-021-05921-2
* [17] Q. Zhu, G. Xiao, Y. Cui, W. Yang, S. Song, G. Cao, Z. Ren, Structural transformation and superconductivity in carbon-added hexagonal high-entropy alloys, J. Alloy Compd. 909 (2022) 164700. https://doi.org/10.1016/j.jallcom.2022.164700
* [18] T. G. Berlincourt, Emergence of NbTi as supermagnet material, Cryogenics 27 (1987) 283. https://doi.org/10.1016/0011-2275(87)90057-9
* [19] Z. Charifoulline, Residual Resistivity Ratio (RRR) Measurements of LHC Superconducting NbTi Cable Strands, IEEE Transactions on Applied Superconductivity 16 (2006) 1188. https://doi.org/10.1109/TASC.2006.873322
* [20] C. Cheng, X. Zhang, M. J. R. Haché, Y. Zou, Magnetron co-sputtering synthesis and nanoindentation studies of nanocrystalline (TiZrHf)x(NbTa)1-x high-entropy alloy thin films, Nano Research ISSN 1998-0124. https://doi.org/10.1007/s12274-021-3805-1
* [21] Y. Luo, S. Z. Lin, M. Leroux, N. Wakeham, D. M. Fobes, E. D. Bauer, J. B. Betts, J. D. Thompson, A. Migliori, M. Janoschek, B. Maiorov, Skyrmion lattice creep at ultra-low current densities, Commun. Mater. 1 (2020) 83. https://doi.org/10.1038/s43246-020-00083-1
* [22] A. Migliori, J. L. Sarrao, Resonant Ultrasound Spectroscopy: Applications to Physics, Materials Measurements, and Nondestructive Evaluation, Wiley, New York, 1997.
* [23] A. Migliori, J. L. Sarrao, W. M. Visscher, T. M. Bell, M. Lei, Z. Fisk, R. G. Leisure, Resonant ultrasound spectroscopic techniques for measurement of the elastic moduli of solids, Physica B 183 (1993) 1. https://doi.org/10.1016/0921-4526(93)90048-B
* [24] R. G. Leisure, F. A. Willis, Resonant ultrasound spectroscopy, J. Phys.: Condens. Matter 9 (1997) 6001.
* [25] S. Pugh, Relations between the Elastic Moduli and the Plastic Properties of Polycrystalline Pure Metals, Phil. Mag. 45 (1953) 833. https://doi.org/10.4236/jamp.2017.51004
* [26] I. R. Shein, A. L. Ivanovskii, Elastic properties of single- and polycrystalline LaFeAsO, SrFe2As2, and LiFeAs basic phases for new FeAs superconductors, Technical Physics Letters 35 (2009) 961. https://doi.org/10.1134/S1063785009100253
* [27] J. J. Lewandowski, W. H. Wang, A. L. Greer, Intrinsic plasticity or brittleness of metallic glasses, Phil. Mag. Lett. 85 (2005) 77. https://doi.org/10.1080/09500830500080474
* [28] H. M. Ledbetter, E. R. Naimon, Elastic Properties of Metals and Alloys. II. Copper, J. Phys. Chem. Ref. Data 3 (1974) 897. https://doi.org/10.1063/1.3253150
* [29] K. J. Carroll, Elastic Constants of Niobium from 4.2 K to 300 K, J. Appl. Phys. 36 (1965) 3689. https://doi.org/10.1063/1.1703072
* [30] H. M. Ledbetter, D. T. Read, Orthorhombic elastic constants of an NbTi/Cu composite superconductor, J. Appl. Phys. 48 (1977) 1874. https://doi.org/10.1063/1.323941
* [31] W. Rehwald, M. Rayl, R. W. Cohen, G. D. Cody, Elastic Moduli and Magnetic Susceptibility of Monocrystalline Nb3Sn, Phys. Rev. B 6 (1972) 363. https://doi.org/10.1103/PhysRevB.6.363
* [32] J. F. Bussière, H. LeHuy, B. Faucher, Elastic Behavior of Polycrystalline Nb3Sn, V3Ga and Nb3Ge, Advances in Cryogenic Engineering Materials 30 (1984) 859. https://doi.org/10.1007/978-1-4613-9868-4-93
* [33] P. Rourke, J. Paglione, F. Ronning, L. Taillefer, K. Kadowaki, Elastic tensor of YNi2B2C, Physica C 397 (2003) 1. https://doi.org/10.1016/S0921-4534(03)01326-1
* [34] V. F. Nesterenko, Bulk Magnesium Diboride, Mechanical and Superconducting Properties, arXiv:cond-mat/0212543 (2002). https://doi.org/10.48550/arXiv.cond-mat/0212543
* [35] M. Lei, H. Ledbetter, Oxides and Oxide Superconductors: Elastic and Related Properties, NIST, United States Department of Commerce, 1991.
* [36] M. A. McGuire, A. D. Christianson, A. S. Sefat, B. C. Sales, M. D. Lumsden, R. Jin, E. A. Payzant, D. Mandrus, Y. Luan, V. Keppens, V. Varadarajan, J. W. Brill, R. P. Hermann, M. T. Sougrati, F. Grandjean, G. J. Long, Phase transitions in LaFeAsO: Structural, magnetic, elastic, and transport properties, heat capacity and Mössbauer spectra, Phys. Rev. B 78 (2008) 094517. https://doi.org/10.1103/PhysRevB.78.094517
* [37] Y. Kamihara, T. Watanabe, M. Hirano, Hideo Hosono, Iron-Based Layered Superconductor La[O1-xFx]FeAs ($x$=0.05-0.12) with $T_{c}$ = 26 K, J. Am. Chem. Soc. 130 (2008) 3296-3297. https://doi.org/10.1021/ja800073m
* [38] O. L. Anderson, A simplified method for calculating the Debye temperature from elastic constants, J. Phys. Chem. Solids 24 (1963) 909-917. https://doi.org/10.1016/0022-3697(63)90067-2
* [39] M. Jamal, S. J. Asadabadi, I. Ahmad, H. A. R. Aliabad, Elastic constants of cubic crystals, Computational Materials Science 95 (2014) 592-599. https://doi.org/10.1016/j.commatsci.2014.08.027
* [40] G. A. Slack, Nonmetallic crystals with high thermal conductivity, J. Phys. Chem. Solids 34 (1973) 321-335. https://doi.org/10.1016/0022-3697(73)90092-9
* [41] G. A. Slack, Advances in Research and Applications in Solid State Physics, New York: Academic Press, 1979.
* [42] V. N. Belomestnykh, E. P. Tesleva, Interrelation between Anharmonicity and Lateral Strain in Quasi-Isotropic Polycrystalline Solids, Tech. Phys. Lett. 49 (2004) 1098-1100. https://doi.org/10.1134/1.1787679
* [43] T. Jia, G. Chen, Y. Zhang, Lattice thermal conductivity evaluated using elastic properties, Phys. Rev. B 95 (2017) 155206. https://doi.org/10.1103/PhysRevB.95.155206
* [44] B. Lüthi, Physical Acoustics in the Solid State, Springer, Berlin, 2005.
* [45] M. Poirier, R. Plamondon, J. D. N. Cheeke, J. F. Bussière, Elastic constants of polycrystalline Nb3Sn between 4.2 and 300 K, J. Appl. Phys. 55 (1984) 3327-3332. https://doi.org/10.1063/1.333370
# Cross-Episodic Curriculum for Transformer Agents
Lucy Xiaoyang Shi1∗, Yunfan Jiang1∗, Jake Grigsby2, Linxi Fan3†, Yuke Zhu2 3†
1Stanford University 2The University of Texas at Austin 3NVIDIA Research
∗Equal contribution †Equal advising
###### Abstract
We present a new algorithm, Cross-Episodic Curriculum (CEC), to boost the
learning efficiency and generalization of Transformer agents. Central to CEC
is the placement of cross-episodic experiences into a Transformer’s context,
which forms the basis of a curriculum. By sequentially structuring online
learning trials and mixed-quality demonstrations, CEC constructs curricula
that encapsulate learning progression and proficiency increase across
episodes. Such synergy combined with the potent pattern recognition
capabilities of Transformer models delivers a powerful _cross-episodic
attention_ mechanism. The effectiveness of CEC is demonstrated under two
representative scenarios: one involving multi-task reinforcement learning with
discrete control, such as in DeepMind Lab, where the curriculum captures the
learning progression in both individual and progressively complex settings,
and the other involving imitation learning with mixed-quality data for
continuous control, as seen in RoboMimic, where the curriculum captures the
improvement in demonstrators’ expertise. In all instances, policies resulting
from CEC exhibit superior performance and strong generalization. Code is open-
sourced on the project website cec-agent.github.io to facilitate research on
Transformer agent learning.
## 1 Introduction
The paradigm shift driven by foundation models [8] is revolutionizing the
communities who study sequential decision-making problems [80], with
innovations focusing on control [2, 45, 38, 9], planning [76, 32, 33, 78, 17],
pre-trained visual representation [57, 50, 67, 51], among others. Despite the
progress, the data-hungry nature makes the application of Transformer [75]
agents extremely challenging in data-scarce domains like robotics [52, 53, 19,
38, 9]. This leads us to the question: Can we maximize the utilization of
limited data, regardless of their optimality and construction, to foster more
efficient learning?
To this end, this paper introduces a novel algorithm named Cross-Episodic
Curriculum (CEC), a method that explicitly harnesses the shifting
distributions of multiple experiences when organized into a curriculum. The
key insight is that sequential cross-episodic data manifest useful learning
signals that do not easily appear in any separated training
episodes.111Following the canonical definition in Sutton and Barto [73], we
refer to the sequences of agent-environment interaction with clearly
identified initial and terminal states as “episodes”. We interchangeably use
“episode”, “trial”, and “trajectory” in this work. As illustrated in Figure 1,
CEC realizes this through two stages: 1) formulating curricular sequences to
capture (a) the policy improvement on single environments, (b) the learning
progress on a series of progressively harder environments, or (c) the increase
of demonstrators’ proficiency; and 2) causally distilling policy improvements
into the model weights of Transformer agents through _cross-episodic
attention_. When a policy is trained to predict actions at current time steps,
it can trace back beyond ongoing trials and internalize improved behaviors
encoded in curricular data, thereby achieving efficient learning and robust
deployment when probed with visual or dynamics perturbations. Contrary to
prior works like Algorithm Distillation (AD, Laskin et al. [42]) which, at
test time, samples and retains a single task configuration across episodes for
in-context refinement, our method, CEC, prioritizes zero-shot generalization
across a distribution of test configurations. With CEC, agents are evaluated
on a new task configuration in each episode, emphasizing adaptability to
diverse tasks.
Figure 1: Cross-episodic curriculum for Transformer agents. CEC involves two
major steps: 1) Preparation of curricular data. We order multiple experiences
such that they explicitly capture curricular patterns. For instance, they can
be policy improvement in single environments, learning progress in a series of
progressively harder environments, or the increase of the demonstrator’s
expertise. 2) Model training with cross-episodic attention. When training the
model to predict actions, it can trace back beyond the current episode and
internalize the policy refinement for more efficient learning. Here each
$\tau$ represents an episode (trajectory). $\hat{a}$ refers to actions
predicted by the model. Colored triangles denote causal Transformer models.
We investigate the effectiveness of CEC in enhancing sample efficiency and
generalization with two representative case studies. They are: 1)
Reinforcement Learning (RL) on DeepMind Lab (DMLab) [5], a 3D simulation
encompassing visually diverse worlds, complicated environment dynamics, ego-
centric pixel inputs, and joystick control; and 2) Imitation Learning (IL)
from mixed-quality human demonstrations on RoboMimic [53], a framework
designed to study robotic manipulation with proprioceptive and external camera
observations and continuous control. Despite RL episodes being characterized
by state-action-reward tuples and IL trajectories by state-action pairs, our
method exclusively employs state-action pairs in its approach.
In challenging embodied navigation tasks, despite significant generalization
gaps (Table 1), our method surpasses concurrent and competitive method Agentic
Transformer (AT, Liu and Abbeel [47]). It also significantly outperforms
popular offline RL methods such as Decision Transformer (DT, Chen et al. [13])
and baselines trained on expert data, with the same amount of parameters,
architecture, and data size. It even exceeds RL oracles directly trained on
test task distributions by $50\%$ in a _zero-shot_ manner. CEC also yields
robust embodied policies that are up to $1.6\times$ better than RL oracles
when zero-shot probed with unseen environment dynamics. When learning
continuous robotic control, CEC successfully solves two simulated manipulation
tasks, matching and outperforming previous well-established baselines [53, 25,
41]. Further ablation reveals that CEC with cross-episodic attention is a
generally effective recipe for learning Transformer agents, especially in
applications where sequential data exhibit moderate and smooth progression.
## 2 Cross-Episodic Curriculum: Formalism and Implementations
In this section, we establish the foundation for our cross-episodic curriculum
method by first reviewing the preliminaries underlying our case studies, which
encompass two representative scenarios in sequential decision-making.
Subsequently, we formally introduce the assembly of curricular data and the
specifics of model optimization utilizing cross-episodic attention. Lastly, we
delve into the practical implementation of CEC in the context of these two
scenarios.
### 2.1 Preliminaries
#### Reinforcement learning.
We consider the setting where source agents learn through trial and error in
partially observable environments. Denoting states $s\in\mathcal{S}$ and
actions $a\in\mathcal{A}$, an agent interacts in a Partially Observable Markov
Decision Process (POMDP) with the transition function
$p(s_{t+1}|s_{t},a_{t}):\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}$.
It observes $o\in\mathcal{O}$ emitted from observation function
$\Omega(o_{t}|s_{t},a_{t-1}):\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{O}$
and receives scalar reward $r$ from
$R(s,a):\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$. Under the episodic
task setting, RL seeks to learn a parameterized policy $\pi_{\theta}(\cdot|s)$
that maximizes the return over a fixed length $T$ of interaction steps:
$\pi_{\theta}=\arg\max_{\theta\in\Theta}\sum_{t=0}^{T-1}\gamma^{t}r_{t}$,
where $\gamma\in[0,1)$ is a discount factor. Here we follow the canonical
definition of an episode $\tau$ as a series of environment-agent interactions
with length $T$,
$\tau\vcentcolon=(s_{0},a_{0},r_{0},\ldots,s_{T-1},a_{T-1},r_{T-1},s_{T})$,
where initial states $s_{0}$ are sampled from initial state distribution
$s_{0}\sim\rho_{0}(s)$ and terminal states $s_{T}$ are reached once the
elapsed timestep exceeds $T$. Additionally, we view all RL tasks considered in
this work as goal-reaching problems [39, 26] and constrain all episodes to
terminate upon task completion. It is worth noting that similar to previous
work [42], training data are collected by source RL agents during their online
learning. Nevertheless, once the dataset is obtained, our method is trained
_offline_ in a purely supervised manner.
#### Imitation learning.
We consider IL settings with existing trajectories composed only of state-
action pairs. Furthermore, we relax the assumption on demonstration optimality
and allow them to be crowdsourced [10, 12, 11]. Data collected by operators
with varying expertise are therefore unavoidable. Formally, we assume the
access to a dataset
$\mathcal{D}^{N}\vcentcolon=\\{\tau_{1},\ldots,\tau_{N}\\}$ consisting of $N$
demonstrations, with each demonstrated trajectory
$\tau_{i}\vcentcolon=(s_{0},a_{0},\ldots,s_{T-1},a_{T-1})$ naturally
identified as an episode. The goal of IL, specifically of behavior cloning
(BC), is to learn a policy $\pi_{\theta}$ that accurately models the
distribution of behaviors. When viewed as goal-reaching problems, BC policies
can be evaluated by measuring the success ratio in completing tasks [26].
### 2.2 Curricular Data Assembly and Model Optimization
Meaningful learning signals emerge when multiple trajectories are organized
and examined cross-episodically along a curriculum axis. This valuable
information, which is not easily discernible in individual training episodes,
may encompass aspects such as the improvement of an RL agent’s navigation
policy or the generally effective manipulation skills exhibited by operators
with diverse proficiency levels. With a powerful model architecture such as
Transformer [75, 16], such emergent and valuable learning signals can be baked
into policy weights, thereby boosting performance in embodied tasks.
For a given embodied task $\mathcal{M}$, we define its curriculum
$\mathcal{C}_{\mathcal{M}}$ as a collection of trajectories $\tau$ consisting
of state-action pairs. A series of ordered levels
$[\mathcal{L}_{1},\ldots,\mathcal{L}_{L}]$ partitions this collection such
that $\bigcup_{l\in\\{1,\ldots,L\\}}\mathcal{L}_{l}=\mathcal{C}_{\mathcal{M}}$
and $\bigcap_{\forall i,j\in\\{1,\ldots,L\\},i\neq
j}\mathcal{L}_{\\{i,j\\}}=\emptyset$. More importantly, these ordered levels
characterize a curriculum by encoding, for example, learning progress in
single environments, learning progress in a series of progressively harder
environments, or the increase of the demonstrator’s expertise.
With a curriculum
$\mathcal{C}_{\mathcal{M}}\vcentcolon=\\{\tau_{i}\\}_{i=1}^{N}$ and its
characteristics $[\mathcal{L}_{1},\ldots,\mathcal{L}_{L}]$, we construct a
curricular sequence $\mathcal{T}$ that spans multiple episodes and captures
the essence of gradual improvement in the following way:
$\mathcal{T}\vcentcolon=\bigoplus_{l\in\\{1,\ldots,L\\}}\left[\tau^{(1)},\ldots,\tau^{(C)}\right],\quad\text{where}\quad
C\sim\mathcal{U}\left(\llbracket|\mathcal{L}_{l}|\rrbracket\right)\quad\text{and}\quad\tau^{(c)}\sim\mathcal{L}_{l}.$
(1)
The symbol $\oplus$ denotes the concatenation operation.
$\mathcal{U}\left(\llbracket K\rrbracket\right)$ denotes a uniform
distribution over the discrete set $\\{k\in\mathbb{N},k\leq K\\}$. In
practice, we use values smaller than $|\mathcal{L}_{l}|$ to limit the
memory consumption.
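As a concrete illustration of Eq. (1), the assembly step can be sketched as below, assuming each level $\mathcal{L}_{l}$ is simply a list of episodes (each episode being, e.g., a list of (observation, action) steps); episodes are drawn without replacement here, which is one possible instantiation of the sampling in Eq. (1).

```python
import random

def build_curricular_sequence(levels, max_per_level=None, rng=random):
    """Assemble a curricular sequence T from ordered levels [L_1, ..., L_L] (Eq. 1).

    For each level, draw C ~ Uniform{1, ..., cap} episodes and append them in
    curriculum order; `max_per_level` caps C to limit memory, as described above.
    """
    sequence = []
    for level in levels:
        cap = len(level) if max_per_level is None else min(max_per_level, len(level))
        C = rng.randint(1, cap)              # number of episodes drawn from this level
        sequence.extend(rng.sample(level, C))
    return sequence                          # cross-episodic training sequence
```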
We subsequently learn a causal policy that only depends on cross-episodic
historical observations $\pi_{\theta}(\cdot|o_{\leq t}^{(\leq n)})$. Note that
this modeling strategy differs from previous work that views sequential
decision-making as a big sequence-modeling problem [13, 37, 42, 38]. It
instead resembles the causal policy in Baker et al. [4]. Nevertheless, we
still follow the best practice [36, 60, 22] to provide previous action as an
extra modality of observations in POMDP RL tasks.
We leverage the powerful attention mechanism of Transformer [75] to enable
cross-episodic attention. Given observation series
$O_{t}^{(n)}\vcentcolon=\\{o_{0}^{(1)},\ldots,o_{\leq t}^{(\leq n)}\\}$
(shorthanded as $O$ hereafter for brevity), Transformer projects it into query
$Q=f_{Q}(O)$, key $K=f_{K}(O)$, and value $V=f_{V}(O)$ matrices, with each row
being a $D$-dim vector. Attention operation is performed to aggregate
information:
$\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{\intercal}}{\sqrt{D}})V.$
(2)
Depending on whether the input arguments for $f_{Q}$ and $f_{\\{K,V\\}}$ are
the same, attention operation can be further divided into self-attention and
cross-attention. Since tasks considered in this work do not require additional
conditioning for task specification, we follow previous work [4, 82] to
utilize self-attention to process observation series. Nevertheless, ours can
be naturally extended to handle, for example, natural language or multi-modal
task prompts, following the cross-attention introduced in Jiang et al. [38].
Finally, this Transformer policy is trained by simply minimizing the negative
log-likelihood objective $\mathcal{J}_{\text{NLL}}$ of labeled actions,
conditioned on cross-episodic context:
$\mathcal{J}_{\text{NLL}}=-\log\pi_{\theta}(\cdot|\mathcal{T})=\frac{1}{|\mathcal{T}|\times
T}\sum_{n=1}^{|\mathcal{T}|}\sum_{t=1}^{T}-\log\pi_{\theta}\left(a_{t}^{(n)}|o_{\leq
t}^{(\leq n)}\right).$ (3)
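A minimal PyTorch sketch of this objective is given below. For brevity it uses a generic causal encoder with flat observation vectors and discrete actions (positional embeddings and the Transformer-XL memory mechanism used in our actual experiments are omitted); the point is only that the causal attention mask spans the entire curricular sequence rather than a single episode.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossEpisodicPolicy(nn.Module):
    """Causal policy over a flattened curricular sequence of episodes."""

    def __init__(self, obs_dim, n_actions, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, obs):
        # obs: (B, T_total, obs_dim), where T_total spans all concatenated episodes
        T_total = obs.shape[1]
        causal = torch.triu(torch.ones(T_total, T_total, dtype=torch.bool,
                                       device=obs.device), diagonal=1)
        h = self.encoder(self.embed(obs), mask=causal)   # cross-episodic causal attention
        return self.action_head(h)                       # per-step action logits

def cross_episodic_nll(policy, obs, actions):
    """Eq. (3): negative log-likelihood of labeled actions given cross-episodic context."""
    logits = policy(obs)                                 # (B, T_total, n_actions)
    return F.cross_entropy(logits.flatten(0, 1), actions.flatten())
```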
Regarding the specific memory architecture, we follow Baker et al. [4],
Adaptive Agent Team et al. [1] to use Transformer-XL [16] as our model
backbone. Thus, during deployment, we keep its hidden states propagating
across test episodes to mimic the training settings.
### 2.3 Practical Implementations
We now discuss concrete instantiations of CEC for 1) RL with DMLab and 2) IL
with RoboMimic. Detailed introductions to the benchmark and task selection are
deferred to Sec. 3. We investigate the following three curricula, where the
initial two pertain to RL, while the final one applies to IL:
#### Learning-progress-based curriculum.
In the first instantiation, inspired by the literature on learning progress
[54, 27, 65, 40], we view the progression of learning agents as a curriculum.
Concretely, we train multi-task PPO agents [70, 63] on tasks drawn from test
distributions. We record their online interactions during training, which
faithfully reflect the learning progress. Finally, we form the learning-
progress-based curriculum by sequentially concatenating episodes collected at
different learning stages. Note that this procedure is different from Laskin
et al. [42], where for each environment, the learning dynamics of _multiple_
single-task RL agents has to be logged. In contrast, we only track a _single_
multi-task agent per environment.
Figure 2: We evaluate our method on five tasks: (a) Goal Maze, (b) Watermaze, (c) Irreversible Path, (d) Lift, and (e) Can. They cover challenges such as exploration and planning over long horizons in RL settings, as well as object manipulation and continuous control in IL settings. Figures are from Beattie et al. [5] and Mandlekar et al. [53].
#### Task-difficulty-based curriculum.
In the second instantiation, instead of taking snapshots of RL agents directly
trained on test configurations, we collect learning progress on a series of
easier but progressively harder tasks. For instance, in an embodied navigation
task, the test configuration includes 20 rooms. Rather than logging source
agents’ learning progression in the 20-room maze, we record in a series of
mazes with 5, 10, and 15 rooms. We then structure stored episodes first
following learning progress and then the increase of layout complexity. This
practice naturally creates a task-difficulty-based curriculum, which resembles
curriculum RL that is based on task difficulty [54, 58]. We find it especially
helpful for hard-exploration problems where the source RL agent does not make
meaningful progress.
#### Expertise-based curriculum.
For the setting of IL from mixed-quality demonstrations, we instantiate a
curriculum based on demonstrators’ expertise. This design choice is motivated
by literature on learning from heterogeneous demonstrators [6, 81], with the
intuition that there is little to learn from novices but a lot from experts.
To realize this idea, we leverage the Multi-Human dataset from RoboMimic [53].
Since it contains demonstrations collected by human demonstrators with varying
proficiency, we organize offline demonstration trajectories following the
increase of expertise to construct the expertise-based curriculum.
## 3 Experimental Setup
In this section, we elaborate on the experimental setup of our case studies.
Our investigation spans two representative and distinct settings: 1) online
reinforcement learning with 3D maze environments of DMLab [5], and 2)
imitation learning from mixed-quality human demonstrations of RoboMimic [53].
For each of them, we discuss task selection, baselines, and training and
evaluation protocols. Teasers of these tasks are shown in Figure 2.
### 3.1 Task Settings and Environments
DeepMind Lab [5] is a 3D learning environment with diverse tasks. Agents spawn
in visually complex worlds, receive ego-centric (thus partially observable)
RGB pixel inputs, and execute joystick actions. We consider three levels from
this benchmark: Goal Maze, Watermaze [56], and Sky Maze with Irreversible
Path. They challenge agents to explore, memorize, and plan over a long
horizon. Their goals are similar — to navigate in complicated mazes and find a
randomly spawned goal, upon which sparse rewards will be released. Episodes
start with randomly spawned agents and goals and terminate once goals are
reached or elapsed steps have exceeded pre-defined horizons.
RoboMimic [53] is a framework designed for studying robot manipulation and
learning from demonstrations. Agents control robot arms with fixed bases,
receive proprioceptive measurements and image observations from mounted
cameras, and operate with continuous control. We evaluate two simulated tasks:
“Lift” and “Can”. In the “Lift” task, robots are tasked with picking up a
small cube. In the “Can” task, robots are required to pick up a soda can from
a large bin and place it into a smaller target bin. Episodes start with
randomly initialized object configuration and terminate upon successfully
completing the task or exceeding pre-defined horizons.
Table 1: Generalization gaps between training and testing for DMLab levels.
Note that agents resulting from task-difficulty-based curricula are not
trained on test configurations. Therefore, their performance should be
considered as _zero-shot_.
Level Name | Difficulty Parameter | Test Difficulty | Training Difficulty: Ours (Learning Progress) | Training Difficulty: Ours (Task Difficulty) | Training Difficulty: BC w/ Expert Data | Training Difficulty: RL (Oracle) | Training Difficulty: Curriculum RL (Oracle)
---|---|---|---|---|---|---|---
Goal Maze | Room Numbers | 20 | 20 | 5→10→15 | 20 | 20 | 5→10→15→20
Watermaze | Spawn Radius | 580 | 580 | 150→300→450 | 580 | 580 | 150→300→450→580
Irreversible Path | Built-In Difficulty | .9 | .9 | .1→.3→.5→.7 | .9 | .9 | .1→.3→.5→.7→.9
### 3.2 Baselines
The primary goal of these case studies is to assess the effectiveness of our
proposed cross-episodic curriculum in increasing the sample efficiency and
boosting the generalization capability of Transformer agents. Therefore, in
online RL settings, we compare against source RL agents which generate
training data for our method and refer to them as oracles. These include a)
PPO agents directly trained on test task distributions, denoted as “RL
(Oracle)” hereafter, and b) curriculum PPO agents that are gradually adapted
from easier tasks to the test difficulty, which is referred to as “Curriculum
RL (Oracle)”. Furthermore, we compare against one concurrent and competitive
method Agentic Transformer [47], denoted as “AT”. It is closely related to our
method, training Transformers on sequences of trajectory ascending sorted
according to their rewards. We also compare against popular offline RL method
Decision Transformer [13], denoted as “DT”. Additionally, we include another
behavior cloning agent that has the same model architecture as ours but is
trained on optimal data without cross-episodic attention. This baseline is
denoted as “BC w/ Expert Data”. For the case study on IL from mixed-quality
demonstrations, we adopt the most competing approach, BC-RNN, from Mandlekar
et al. [53] as the main baseline. We also include comparisons against other
offline RL methods [44] such as Batch-Constrained Q-learning (BCQ) [25] and
Conservative Q-Learning (CQL) [41].
### 3.3 Training and Evaluation
We follow the best practice to train Transformer agents, including adopting
AdamW optimizer [49], learning rate warm-up and cosine annealing [48], etc.
Training is performed on NVIDIA V100 GPUs. During evaluation, for agents
resulting from our method, each run involves several test rollouts to fill the
context. We keep hidden states of Transformer-XL [16] propagating across
episodes. We run other baselines and oracles for 100 episodes to estimate
their performances. For our methods on RL settings, we compute the maximum
success rate averaged across a sliding window over all test episodes to
account for in-context improvement. The size of the sliding window equals one-
quarter of the total test episodes. These values are averaged over 20 runs to
constitute the final reporting metric. For our methods on the IL setting,
since all training data are successful trajectories, we follow Mandlekar et
al. [53] to report the maximum success rate achieved over the course of
training, directly averaged over test episodes.
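For concreteness, the RL metric described above can be computed per evaluation run as in the short helper below; the window size of one quarter of the test episodes follows directly from the protocol, and the final reported number is the average of this quantity over the 20 runs.

```python
import numpy as np

def windowed_max_success(successes):
    """Maximum of the success rate averaged over a sliding window of size len/4."""
    successes = np.asarray(successes, dtype=float)        # 0/1 outcomes in rollout order
    window = max(1, len(successes) // 4)
    rates = [successes[i:i + window].mean()
             for i in range(len(successes) - window + 1)]
    return max(rates)
```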
## 4 Experiments
Figure 3: Evaluation results on DMLab. Our CEC agents perform comparably to RL
oracles and on average outperform other baseline methods. On the hardest task
Irreversible Path where the RL oracle and BC baseline completely fail, our
agents outperform the curriculum RL oracle by $50\%$ even in a zero-shot
manner. For our methods, DT, AT, and the BC w/ expert data baselines, we
conduct 20 independent evaluation runs, each consisting of 100 episodes for
Goal Maze and Watermaze and 50 episodes for Irreversible Path due to longer
episode length. We test RL oracles for 100 episodes. The error bars represent
the standard deviations over 20 runs.
We aim to answer the following four research questions through comprehensive
experiments.
1. To what extent can our cross-episodic curriculum increase the sample efficiency of Transformer agents and boost their generalization capability?
2. Is CEC consistently effective and generally applicable across distinct learning settings?
3. What are the major components that contribute to the effectiveness of our method?
4. In which scenarios is CEC expected to be helpful?
### 4.1 Main Evaluations
We answer the first two questions above by comparing learned agents from our
method against 1) Reinforcement Learning (RL) oracles in online RL settings
and 2) well-established baselines on learning from mixed-quality
demonstrations in the Imitation Learning (IL) setting.
We first examine agents learned from learning-progress-based and task-
difficulty-based curricula in challenging 3D maze environments. The first type
of agent is denoted as “Ours (Learning Progress)”. For the second type, to
ensure that the evaluation also contains a series of tasks with increasing
difficulty, we adopt two mechanisms that control the task sequencing [58]: 1)
fixed sequencing where agents try each level of difficulty for a fixed amount
of times regardless of their performance and 2) dynamic sequencing where
agents are automatically promoted to the next difficulty level if they
consecutively succeed in the previous level for three times. We denote these
two variants as “Ours (Task Difficulty), Fixed” and “Ours (Task Difficulty),
Auto”, respectively. Note that because the task-difficulty-based curriculum
does not contain any training data on test configurations, these two settings
are zero-shot evaluated on test task distributions. We summarize these
differences in Table 1. We denote AT and DT trained on data consisting of a
mixture of task difficulties as “AT (Mixed Difficulty)” and “DT (Mixed
Difficulty)”. Note that these data are the same used to train “Ours (Task
Difficulty)”. Similarly, we denote AT and DT directly trained on test
difficulty as “AT (Single Difficulty)” and “DT (Single Difficulty)”. These
data are the same used to train “Ours (Learning Progress)”.
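The dynamic ("Auto") sequencing rule described above can be made precise with a small sketch: the agent is advanced to the next difficulty level after three consecutive successes at its current level, the streak length of three being the value used in our evaluation.

```python
def current_level(success_history, levels, streak=3):
    """Return the difficulty level reached under dynamic sequencing.

    `success_history` is the chronological list of 0/1 episode outcomes; the agent
    moves to the next entry of `levels` after `streak` consecutive successes.
    """
    idx, run = 0, 0
    for success in success_history:
        run = run + 1 if success else 0
        if run >= streak and idx < len(levels) - 1:
            idx, run = idx + 1, 0
    return levels[idx]
```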
Figure 4: Generalization results on DMLab. _Top row_ : Evaluation results on
Goal Maze with unseen maze mechanism and Irreversible Path with out-of-
distribution difficulty levels. _Bottom row_ : Evaluation results on three
levels with environment dynamics differing from training ones. CEC agents
display robustness and generalization across various dimensions, outperforming
curriculum RL oracles by up to $1.6\times$. We follow the same evaluation
protocol as in Figure 3. The error bars represent the standard deviations over
20 runs.
#### Cross-episodic curriculum results in sample-efficient agents.
As shown in Figure 3, on two out of three examined DMLab levels, CEC agents
perform comparably to RL oracles and outperform the BC baselines trained on
expert data by at most $2.8\times$. On the hardest level Irreversible Path
where agents have to plan the route ahead and cannot backtrack, both the BC
baseline and RL oracle fail. However, our agents succeed in proposing correct
paths that lead to goals and significantly outperform the curriculum RL oracle
by $50\%$ even in a _zero-shot_ manner. Because CEC only requires environment
interactions generated during the course of training of online source agents
(the task-difficulty-based curriculum even contains fewer samples compared to
the curriculum RL, as illustrated in Table 1), the comparable and even better
performance demonstrates that our method yields highly sample-efficient
embodied policies. On average, our method with task-difficulty-based
curriculum performs the best during evaluation (Table A.5), confirming the
benefit over the concurrent AT approach that leverages chain-of-hindsight
experiences. When compared to DT, it outperforms by a significant margin,
which suggests that our cross-episodic curriculum helps to squeeze learning
signals that are useful for downstream decision-making.
#### Cross-episodic curriculum boosts the generalization capability.
To further investigate whether CEC can improve generalization at test time, we
construct settings with unseen maze mechanisms (randomly open/closed doors),
out-of-distribution difficulty, and different environment dynamics. See the
Appendix, Sec. C.2 for the exact setups. As demonstrated in Figure 4, CEC
generally improves Transformer agents in learning robust policies that can
generalize to perturbations across various axes. On three settings where the
BC w/ Expert Data baseline still manages to make progress, CEC agents are up
to $2\times$ better. Compared to oracle curriculum RL agents, our policies
significantly outperform them under three out of five examined scenarios. It
is notable that on Irreversible Path with out-of-distribution difficulty, CEC
agent is $1.6\times$ better than the curriculum RL oracle trained on the same
data. These results highlight the benefit of learning with curricular
contexts. On average, our method surpasses the concurrent AT baseline and
achieves significantly better performance than other baselines (Table A.6).
This empirically suggests that CEC helps to learn policies that are robust to
environmental perturbations and can quickly generalize to new changes.
Table 2: Evaluation results on RoboMimic. Visuomotor policies trained with our
expertise-based curriculum outperform the most competing history-dependent
behavior cloning baseline, as well as other offline RL algorithms. For our
method on the Lift task, we conduct 5 independent runs each with 10 rollout
episodes. On the Can task, we conduct 10 independent runs each with 5 rollout
episodes due to the longer horizon required to complete the task. Standard
deviations are included.
Task | Ours | BC-RNN [53] | BCQ [25] | CQL [41]
---|---|---|---|---
Lift | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$ | $93.3\pm 0.9$ | $11.3\pm 9.3$
Can | $\mathbf{100.0\pm 0.0}$ | $96.0\pm 1.6$ | $77.3\pm 6.8$ | $0.0\pm 0.0$
#### Cross-episodic curriculum is effective across a wide variety of learning
scenarios.
We now move beyond RL settings and study the effectiveness of the expertise-
based curriculum in the IL setting with mixed-quality demonstrations. This is
a common scenario, especially in robotics, where demonstrations are collected
by human operators with varying proficiency [52]. As presented in Table 2,
visuomotor policies trained with the expertise-based curriculum are able to
match and outperform the well-established baseline [53] on two simulated
robotic manipulation tasks and achieve significantly better performance than
agents learned from prevalent offline RL algorithms [25, 41]. These results
suggest that our cross-episodic curriculum is effective and broadly applicable
across various problem settings. More importantly, it provides a promising
approach to utilizing limited but sub-optimal data in data-scarce regimes such
as robot learning.
### 4.2 Ablation Studies
In this section, we seek to answer the third research question to identify the
components critical to the effectiveness of our approach. We focus on three
parts: the importance of cross-episodic attention, the influence of curriculum
granularity, and the effect of varying context length. Finally, we delve into
the fourth question, identifying scenarios where CEC is expected to be
helpful.
Figure 5: We compare the performance relative to agents trained with the fine-grained curricula. Performance monotonically degrades as task-difficulty-based curricula become coarser.
Table 3: Ablation on the importance of cross-episodic attention. Transformer agents trained with the same curricular data but without cross-episodic attention degrade significantly during evaluation, suggesting its indispensable role in learning highly performant policies.
Method | DMLab: Goal Maze | DMLab: Watermaze | DMLab: Irreversible Path | RoboMimic: Lift | RoboMimic: Can
---|---|---|---|---|---
Ours | $\mathbf{65.2\pm 6.7}$ | $\mathbf{50.9\pm 6.6}$ | $\mathbf{38.2\pm 7.0}$ | $\mathbf{100.0\pm 0.0}$ | $\mathbf{100.0\pm 0.0}$
Ours w/o Cross-Episodic Attention | $35.0\pm 7.1$ | $20.0\pm 2.5$ | $3.8\pm 4.9$ | $75.9\pm 12.3$ | $\mathbf{99.3\pm 0.9}$
#### Importance of cross-episodic attention.
The underlying hypothesis behind our method is that cross-episodic attention
enables Transformer agents to distill policy improvement when mixed-optimality
trajectories are viewed collectively. To test this, on DMLab levels and
RoboMimic tasks, we train the same Transformer agents with the same curricular
data and training epochs but without cross-episodic attention. We denote such
agents as “Ours w/o Cross-Episodic Attention” in Table 3. Results demonstrate
that the ablated variants experience dramatic performance degradation on four
out of five examined tasks, which suggests that naively behaviorally cloning
sub-optimal data can be problematic and detrimental. Cross-episodic attention
views curricular data collectively, facilitating the extraction of knowledge
and patterns crucial for refining decision-making, thereby optimizing the use
of sub-optimal data.
#### Curriculum granularity.
We perform this ablation with the task-difficulty-based curriculum on DMLab
levels, due to the ease of adjusting granularity. We treat the curricula
listed in the column “Ours (Task Difficulty)” in Table 1 as “Fine”, and
gradually make them coarser to study the impact. Note that we ensure the same
amount of training data. See the Appendix, Sec. C.4 for how we define
granularity levels “Medium” and “Coarse”. We visualize the performance
relative to the most fine-grained in Figure 5. The monotonic degradation of
policy performance with respect to curriculum coarseness suggests that fine-
grained curricula are critical for Transformer agents to mostly benefit from
cross-episodic training.
Figure 6: Both short and unnecessarily long context windows decrease the
performance. Numbers in the legend denote context lengths. Performance values
are relative to those of “Ours (Task Difficulty), Auto” reported in Figure 3.
“Irrevers. Path” stands for the task “Irreversible Path”.
#### Varying context length.
Lastly, we study the effect of varying context length on DMLab and visualize
it in Figure 6. We normalize all performance values relative to those of “Ours
(Task Difficulty), Auto” reported in Figure 3. It turns out that both too
short and unnecessarily long context windows are harmful. On two out of three
levels, using a shorter context decreases the performance even more. This
finding coincides with Laskin et al. [42] that a sufficiently long Transformer
context is necessary to retain cross-episodic information. Furthermore, we
also discover that an unnecessarily long context is also harmful. We
hypothesize that this is due to the consequent training and optimization
instability.
#### Curriculum selection based on task complexities and data sources.
For RL tasks, we recommend starting with the learning-progress-based
curriculum. However, if the task itself is too challenging, such that source
algorithms barely make progress, we recommend the task-difficulty-based
curriculum. In IL settings, we further investigate the performance of the
learning-progress-based curriculum on RoboMimic tasks considered in this work.
Detailed setup and results are included in Appendix, Sec C.5. To summarize, if
human demonstrations are available, even if they are generated to be
heterogeneous in quality, we recommend using the expertise-based curriculum.
However, in the absence of human demonstrations and only with access to
machine-generated data (e.g., generated by RL agents), our learning-progress-
based curriculum is recommended because it achieves non-trivial performance
and significantly outperforms offline RL methods such as CQL [41].
## 5 Related Work
#### Sequential decision-making with Transformer agents.
There are many ongoing efforts to replicate the strong emergent properties
demonstrated by Transformer models for sequential decision-making problems
[80]. Decision Transformer [13] and Trajectory Transformer [37] pioneered this
thread by casting offline RL [44] as sequence modeling problems. Gato [68]
learns a massively multi-task agent that can be prompted to complete embodied
tasks. MineDojo [22] and VPT [4] utilize numerous YouTube videos for large-
scale pre-training in the video game Minecraft. VIMA [38] and RT-1 [9] build
Transformer agents trained at scale for robotic manipulation tasks. BeT [71]
and C-BeT [14] design novel techniques to learn from demonstrations with
multiple modes using Transformers. Our causal policy most closely resembles that of VPT [4], but we focus on designing learning techniques that are generally effective
across a wide spectrum of learning scenarios and application domains.
#### Cross-episodic learning.
Cross-episodic learning is a less-explored terrain, despite having been discussed together with meta-RL [77] for a long time. RL2 [18] uses recurrent
neural networks for online meta-RL by optimizing multi-episodic value
functions. Meta-Q-learning [21] instead learns multi-episodic value functions
in an offline manner. Algorithm Distillation (AD) [42] and Adaptive Agent
(AdA) [1] are two recent, inspiring methods in cross-episodic learning. Though
at first glance our learning-progress-based curriculum appears similar to AD,
significant differences emerge. Unlike AD, which focuses on in-context
improvements at test time and requires numerous single-task source agents for
data generation, our approach improves data efficiency for Transformer agents
by structuring data in curricula, requiring only a single multi-task agent and
allowing for diverse task instances during evaluations. Meanwhile, AdA,
although using cross-episodic attention with a Transformer backbone, is rooted
in online RL within a proprietary environment. In contrast, we focus on
offline behavior cloning in accessible, open-source environments, also
extending to IL scenarios unexplored by other meta-learning techniques.
Complementary to this, another recent study [43] provides theoretical insight
into cross-episodic learning.
#### Curriculum learning.
Curriculum learning represents training strategies that organize learning
samples in meaningful orders to facilitate learning [7]. It has been proven
effective in numerous works that adaptively select simpler tasks [58, 74, 69, 62, 15, 55, 59, 46] or auxiliary rewards [35, 72]. Tasks are also parameterized to form curricula by manipulating goals [24, 30, 66], environment layouts [79, 3, 64], and reward functions [28, 34]. Inspired by this paradigm, our work
harnesses the improving nature of sequential experiences to boost learning
efficiency and generalization for embodied tasks.
## 6 Conclusion
In this work, we introduce a new learning algorithm named _Cross-Episodic
Curriculum_ to enhance the sample efficiency of policy learning and
generalization capability of Transformer agents. It leverages the shifting
distributions of past learning experiences or human demonstrations when they
are viewed as curricula. Combined with cross-episodic attention, CEC yields
embodied policies that attain high performance and robust generalization
across distinct and representative RL and IL settings. CEC represents a solid
step toward sample-efficient policy learning and is promising for data-scarce
problems and real-world domains.
#### Limitations and future work.
The CEC algorithm relies on the accurate formulation of curricular sequences
that capture the improving nature of multiple experiences. However, defining
these sequences accurately can be challenging, especially when dealing with
complex environments or tasks. Incorrect or suboptimal formulations of these
sequences could negatively impact the algorithm’s effectiveness and the
overall learning efficiency of the agents. A thorough exploration regarding
the attainability of curricular data is elaborated upon in Appendix, Sec D.
In subsequent research, the applicability of CEC to real-world tasks,
especially where task difficulty remains ambiguous, merits investigation. A
deeper assessment of a demonstrator’s proficiency trajectory — from initial
unfamiliarity to the establishment of muscle memory — could offer a valuable
learning signal. Moreover, integrating real-time human feedback to dynamically
adjust the curriculum poses an intriguing challenge, potentially enabling CEC
to efficiently operate in extended contexts, multi-agent environments, and
tangible real-world tasks.
## Acknowledgments and Disclosure of Funding
We thank Guanzhi Wang and Annie Xie for helpful discussions. We are grateful
to Yifeng Zhu, Zhenyu Jiang, Soroush Nasiriany, Huihan Liu, and Rutav Shah for
constructive feedback on an early draft of this paper. We also thank the
anonymous reviewers for offering us insightful suggestions and kind
encouragement during the review period. This work was partially supported by
research funds from Salesforce and JP Morgan.
## References
* Adaptive Agent Team et al. [2023] Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, and Lei Zhang. Human-timescale adaptation in an open-ended task space. _arXiv preprint arXiv: Arxiv-2301.07608_ , 2023.
* Ahn et al. [2022] Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, and Mengyuan Yan. Do as i can, not as i say: Grounding language in robotic affordances. _arXiv preprint arXiv: Arxiv-2204.01691_ , 2022.
* Baker et al. [2020] Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In _International Conference on Learning Representations_ , 2020.
* Baker et al. [2022] Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (vpt): Learning to act by watching unlabeled online videos. _arXiv preprint arXiv: Arxiv-2206.11795_ , 2022.
* Beattie et al. [2016] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, and Stig Petersen. Deepmind lab. _arXiv preprint arXiv: Arxiv-1612.03801_ , 2016.
* Beliaev et al. [2022] M. Beliaev, Andy Shih, S. Ermon, Dorsa Sadigh, and Ramtin Pedarsani. Imitation learning by estimating expertise of demonstrators. _International Conference On Machine Learning_ , 2022.
* Bengio et al. [2009] Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In _International Conference on Machine Learning_ , 2009.
* Bommasani et al. [2021] Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, Shyamal Buch, Dallas Card, Rodrigo Castellon, Niladri Chatterji, Annie Chen, Kathleen Creel, Jared Quincy Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren Gillespie, Karan Goel, Noah Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, Omar Khattab, Pang Wei Koh, Mark Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Ben Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, Julian Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Rob Reich, Hongyu Ren, Frieda Rong, Yusuf Roohani, Camilo Ruiz, Jack Ryan, Christopher Ré, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishnan Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. On the opportunities and risks of foundation models. _arXiv preprint arXiv: Arxiv-2108.07258_ , 2021.
* Brohan et al. [2022] Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. Rt-1: Robotics transformer for real-world control at scale. _arXiv preprint arXiv: Arxiv-2212.06817_ , 2022.
* Brown et al. [2019] Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. _arXiv preprint arXiv: Arxiv-1904.06387_ , 2019.
* Cao and Sadigh [2021] Zhangjie Cao and Dorsa Sadigh. Learning from imperfect demonstrations from agents with varying dynamics. _arXiv preprint arXiv: Arxiv-2103.05910_ , 2021.
* Chen et al. [2020] Letian Chen, Rohan Paleja, and Matthew Gombolay. Learning from suboptimal demonstration via self-supervised reward regression. _arXiv preprint arXiv: Arxiv-2010.11723_ , 2020.
* Chen et al. [2021] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , pages 15084–15097, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/7f489f642a0ddb10272b5c31057f0663-Abstract.html.
* Cui et al. [2022] Zichen Jeff Cui, Yibin Wang, Nur Muhammad Mahi Shafiullah, and Lerrel Pinto. From play to policy: Conditional behavior generation from uncurated robot data. _arXiv preprint arXiv: Arxiv-2210.10047_ , 2022.
* Czarnecki et al. [2018] Wojciech Czarnecki, Siddhant M. Jayakumar, Max Jaderberg, Leonard Hasenclever, Yee Whye Teh, Nicolas Manfred Otto Heess, Simon Osindero, and Razvan Pascanu. Mix&match - agent curricula for reinforcement learning. In _International Conference on Machine Learning_ , 2018.
* Dai et al. [2019] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. _arXiv preprint arXiv: Arxiv-1901.02860_ , 2019.
* Driess et al. [2023] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. _arXiv preprint arXiv: Arxiv-2303.03378_ , 2023.
* Duan et al. [2016] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. _arXiv preprint arXiv: Arxiv-1611.02779_ , 2016.
* Ebert et al. [2021] Frederik Ebert, Yanlai Yang, Karl Schmeckpeper, Bernadette Bucher, Georgios Georgakis, Kostas Daniilidis, Chelsea Finn, and Sergey Levine. Bridge data: Boosting generalization of robotic skills with cross-domain datasets. _arXiv preprint arXiv: Arxiv-2109.13396_ , 2021.
* Espeholt et al. [2018] Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. _arXiv preprint arXiv: Arxiv-1802.01561_ , 2018.
* Fakoor et al. [2019] Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, and Alexander J. Smola. Meta-q-learning. _arXiv preprint arXiv: Arxiv-1910.00125_ , 2019.
* Fan et al. [2022] Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. _arXiv preprint arXiv: Arxiv-2206.08853_ , 2022.
* Finn et al. [2015] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial autoencoders for visuomotor learning. _arXiv preprint arXiv: Arxiv-1509.06113_ , 2015.
* Forestier et al. [2017] Sébastien Forestier, Yoan Mollard, and Pierre-Yves Oudeyer. Intrinsically motivated goal exploration processes with automatic curriculum learning. _ArXiv_ , abs/1708.02190, 2017.
* Fujimoto et al. [2018] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. _arXiv preprint arXiv: Arxiv-1812.02900_ , 2018.
* Ghosh et al. [2019] Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin, Benjamin Eysenbach, and Sergey Levine. Learning to reach goals via iterated supervised learning. _arXiv preprint arXiv: Arxiv-1912.06088_ , 2019.
* Graves et al. [2017] Alex Graves, Marc G. Bellemare, Jacob Menick, Remi Munos, and Koray Kavukcuoglu. Automated curriculum learning for neural networks. _arXiv preprint arXiv: Arxiv-1704.03003_ , 2017.
* Gupta et al. [2018] Abhishek Gupta, Benjamin Eysenbach, Chelsea Finn, and Sergey Levine. Unsupervised meta-learning for reinforcement learning. _ArXiv_ , abs/1806.04640, 2018.
* He et al. [2015] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition, December 2015. URL http://arxiv.org/abs/1512.03385. arXiv:1512.03385 [cs].
* Held et al. [2018] David Held, Xinyang Geng, Carlos Florensa, and Pieter Abbeel. Automatic goal generation for reinforcement learning agents. In _International Conference on Machine Learning_ , 2018.
* Hoffman et al. [2020] Matthew W. Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Nikola Momchev, Danila Sinopalnikov, Piotr Stańczyk, Sabela Ramos, Anton Raichuk, Damien Vincent, Léonard Hussenot, Robert Dadashi, Gabriel Dulac-Arnold, Manu Orsini, Alexis Jacq, Johan Ferret, Nino Vieillard, Seyed Kamyar Seyed Ghasemipour, Sertan Girgin, Olivier Pietquin, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Abe Friesen, Ruba Haroun, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Andrew Cowie, Ziyu Wang, Bilal Piot, and Nando de Freitas. Acme: A research framework for distributed reinforcement learning. _arXiv preprint arXiv:2006.00979_ , 2020. URL https://arxiv.org/abs/2006.00979.
* Huang et al. [2022a] Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato, editors, _International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA_ , volume 162 of _Proceedings of Machine Learning Research_ , pages 9118–9147. PMLR, 2022a. URL https://proceedings.mlr.press/v162/huang22a.html.
* Huang et al. [2022b] Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. Inner monologue: Embodied reasoning through planning with language models. _arXiv preprint arXiv: Arxiv-2207.05608_ , 2022b.
* Jabri et al. [2019] Allan Jabri, Kyle Hsu, Ben Eysenbach, Abhishek Gupta, Alexei A. Efros, Sergey Levine, and Chelsea Finn. Unsupervised curricula for visual meta-reinforcement learning. In _Advances in Neural Information Processing Systems_ , 2019.
* Jaderberg et al. [2017] Max Jaderberg, Volodymyr Mnih, Wojciech Czarnecki, Tom Schaul, Joel Z. Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. In _International Conference on Learning Representations_ , 2017.
* Jaderberg et al. [2019] Max Jaderberg, Wojciech M. Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castañeda, Charles Beattie, Neil C. Rabinowitz, Ari S. Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z. Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, and Thore Graepel. Human-level performance in 3d multiplayer games with population-based reinforcement learning. _Science_ , 364(6443):859–865, 2019. doi: 10.1126/science.aau6249. URL https://www.science.org/doi/abs/10.1126/science.aau6249.
* Janner et al. [2021] Michael Janner, Qiyang Li, and Sergey Levine. Offline reinforcement learning as one big sequence modeling problem. In Marc’Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, _Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual_ , pages 1273–1286, 2021. URL https://proceedings.neurips.cc/paper/2021/hash/099fe6b0b444c23836c4a5d07346082b-Abstract.html.
* Jiang et al. [2022] Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: General robot manipulation with multimodal prompts. _arXiv preprint arXiv: Arxiv-2210.03094_ , 2022.
* Kaelbling [1993] Leslie Pack Kaelbling. Learning to achieve goals. In _International Joint Conference on Artificial Intelligence_ , 1993.
* Kanitscheider et al. [2021] Ingmar Kanitscheider, Joost Huizinga, David Farhi, William Hebgen Guss, Brandon Houghton, Raul Sampedro, Peter Zhokhov, Bowen Baker, Adrien Ecoffet, Jie Tang, Oleg Klimov, and Jeff Clune. Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft. _arXiv preprint arXiv: Arxiv-2106.14876_ , 2021.
* Kumar et al. [2020] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. _arXiv preprint arXiv: Arxiv-2006.04779_ , 2020.
* Laskin et al. [2023] Michael Laskin, Luyu Wang, Junhyuk Oh, Emilio Parisotto, Stephen Spencer, Richie Steigerwald, DJ Strouse, Steven Stenberg Hansen, Angelos Filos, Ethan Brooks, maxime gazeau, Himanshu Sahni, Satinder Singh, and Volodymyr Mnih. In-context reinforcement learning with algorithm distillation. In _The Eleventh International Conference on Learning Representations_ , 2023. URL https://openreview.net/forum?id=hy0a5MMPUv.
* Lee et al. [2023] Jonathan N. Lee, Annie Xie, Aldo Pacchiano, Yash Chandak, Chelsea Finn, Ofir Nachum, and Emma Brunskill. Supervised pretraining can learn in-context reinforcement learning. _arXiv preprint arXiv: Arxiv-2306.14892_ , 2023.
* Levine et al. [2020] Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. _arXiv preprint arXiv: Arxiv-2005.01643_ , 2020.
* Liang et al. [2022] Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. _arXiv preprint arXiv: Arxiv-2209.07753_ , 2022.
* Lin et al. [2019] Xingyu Lin, Harjatin Singh Baweja, George Kantor, and David Held. Adaptive auxiliary task weighting for reinforcement learning. In _Advances in Neural Information Processing Systems_ , 2019.
* Liu and Abbeel [2023] Hao Liu and Pieter Abbeel. Emergent agentic transformer from chain of hindsight experience. _arXiv preprint arXiv: Arxiv-2305.16554_ , 2023.
* Loshchilov and Hutter [2017] Ilya Loshchilov and Frank Hutter. SGDR: stochastic gradient descent with warm restarts. In _5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings_. OpenReview.net, 2017. URL https://openreview.net/forum?id=Skq89Scxx.
* Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
* Ma et al. [2022] Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, and Amy Zhang. Vip: Towards universal visual reward and representation via value-implicit pre-training. _arXiv preprint arXiv: Arxiv-2210.00030_ , 2022.
* Majumdar et al. [2023] Arjun Majumdar, Karmesh Yadav, Sergio Arnaud, Yecheng Jason Ma, Claire Chen, Sneha Silwal, Aryan Jain, Vincent-Pierre Berges, Pieter Abbeel, Jitendra Malik, Dhruv Batra, Yixin Lin, Oleksandr Maksymets, Aravind Rajeswaran, and Franziska Meier. Where are we in the search for an artificial visual cortex for embodied intelligence? _arXiv preprint arXiv: Arxiv-2303.18240_ , 2023.
* Mandlekar et al. [2018] Ajay Mandlekar, Yuke Zhu, Animesh Garg, Jonathan Booher, Max Spero, Albert Tung, Julian Gao, John Emmons, Anchit Gupta, Emre Orbay, Silvio Savarese, and Li Fei-Fei. Roboturk: A crowdsourcing platform for robotic skill learning through imitation. _arXiv preprint arXiv: Arxiv-1811.02790_ , 2018.
* Mandlekar et al. [2021] Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, and Roberto Martín-Martín. What matters in learning from offline human demonstrations for robot manipulation. _arXiv preprint arXiv: Arxiv-2108.03298_ , 2021.
* Matiisen et al. [2017] Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. _arXiv preprint arXiv: Arxiv-1707.00183_ , 2017.
* Matiisen et al. [2019] Tambet Matiisen, Avital Oliver, Taco Cohen, and John Schulman. Teacher-student curriculum learning. In _IEEE transactions on neural networks and learning systems_ , 2019.
* Morris [1981] Richard G.M. Morris. Spatial localization does not require the presence of local cues. _Learning and Motivation_ , 12(2):239–260, 1981. doi: 10.1016/0023-9690(81)90020-5. URL https://app.dimensions.ai/details/publication/pub.1028012961.
* Nair et al. [2022] Suraj Nair, Aravind Rajeswaran, Vikash Kumar, Chelsea Finn, and Abhinav Gupta. R3m: A universal visual representation for robot manipulation. _arXiv preprint arXiv: Arxiv-2203.12601_ , 2022.
* Narvekar et al. [2017] S. Narvekar, J. Sinapov, and P. Stone. Autonomous task sequencing for customized curriculum design in reinforcement learning. In _International Joint Conference on Artificial Intelligence_ , 2017.
* Narvekar and Stone [2019] Sanmit Narvekar and Peter Stone. Learning curriculum policies for reinforcement learning. In _Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems_. International Foundation for Autonomous Agents and Multiagent Systems, 2019.
* OpenAI et al. [2019] OpenAI, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemyslaw Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d. O. Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, and Susan Zhang. Dota 2 with large scale deep reinforcement learning. _arXiv preprint arXiv: Arxiv-1912.06680_ , 2019.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc., 2019.
* Peng et al. [2018] B. Peng, J. MacGlashan, R. Loftin, M. Littman, D. Roberts, and Matthew E. Taylor. Curriculum design for machine learners in sequential decision tasks. _IEEE Transactions on Emerging Topics in Computational Intelligence_ , 2:268–277, 2018.
* Petrenko et al. [2020] Aleksei Petrenko, Zhehui Huang, Tushar Kumar, Gaurav Sukhatme, and Vladlen Koltun. Sample factory: Egocentric 3d control from pixels at 100000 fps with asynchronous reinforcement learning. _arXiv preprint arXiv: Arxiv-2006.11751_ , 2020.
* Portelas et al. [2019a] Rémy Portelas, Cédric Colas, Katja Hofmann, and Pierre-Yves Oudeyer. Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments. In _Conference on Robot Learning_ , 2019a.
* Portelas et al. [2019b] Rémy Portelas, Cédric Colas, Katja Hofmann, and Pierre-Yves Oudeyer. Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments. _arXiv preprint arXiv: Arxiv-1910.07224_ , 2019b.
* Racanière et al. [2020] Sébastien Racanière, Andrew Kyle Lampinen, Adam Santoro, David P. Reichert, Vlad Firoiu, and Timothy P. Lillicrap. Automated curriculum generation through setter-solver interactions. In _International Conference on Learning Representations_ , 2020.
* Radosavovic et al. [2022] Ilija Radosavovic, Tete Xiao, Stephen James, Pieter Abbeel, Jitendra Malik, and Trevor Darrell. Real-world robot learning with masked visual pre-training. _arXiv preprint arXiv: Arxiv-2210.03109_ , 2022.
* Reed et al. [2022] Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, Tom Eccles, Jake Bruce, Ali Razavi, Ashley Edwards, Nicolas Heess, Yutian Chen, Raia Hadsell, Oriol Vinyals, Mahyar Bordbar, and Nando de Freitas. A generalist agent. _arXiv preprint arXiv: Arxiv-2205.06175_ , 2022.
* Riedmiller et al. [2018] Martin Riedmiller, Roland Hafner, Thomas Lampe, Michael Neunert, Jonas Degrave, Tom Van de Wiele, Volodymyr Mnih, Nicolas Heess, and Jost Tobias Springenberg. Learning by playing-solving sparse reward tasks from scratch. In _International Conference on Machine Learning_ , 2018.
* Schulman et al. [2017] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _arXiv preprint arXiv: Arxiv-1707.06347_ , 2017.
* Shafiullah et al. [2022] Nur Muhammad Mahi Shafiullah, Zichen Jeff Cui, Ariuntuya Altanzaya, and Lerrel Pinto. Behavior transformers: Cloning $k$ modes with one stone. _arXiv preprint arXiv: Arxiv-2206.11251_ , 2022.
* Shen et al. [2019] William B Shen, Danfei Xu, Yuke Zhu, Leonidas J Guibas, Li Fei-Fei, and Silvio Savarese. Situational fusion of visual representation for visual navigation. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 2881–2890, 2019.
* Sutton and Barto [2018] Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_. MIT press, 2018.
* Svetlik et al. [2017] M. Svetlik, Matteo Leonetti, J. Sinapov, Rishi Shah, Nick Walker, and P. Stone. Automatic curriculum graph generation for reinforcement learning agents. In _AAAI Conference on Artificial Intelligence_ , 2017.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. _arXiv preprint arXiv: Arxiv-1706.03762_ , 2017.
* Wang et al. [2023a] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. _arXiv preprint arXiv: Arxiv-2305.16291_ , 2023a.
* Wang et al. [2016] Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. Learning to reinforcement learn. _arXiv preprint arXiv: Arxiv-1611.05763_ , 2016.
* Wang et al. [2023b] Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. _arXiv preprint arXiv: Arxiv-2302.01560_ , 2023b.
* Wöhlke et al. [2020] Jan Wöhlke, Felix Schmitt, and Herke van Hoof. A performance-based start state curriculum framework for reinforcement learning. In _Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems_ , pages 1503–1511, 2020.
* Yang et al. [2023] Sherry Yang, Ofir Nachum, Yilun Du, Jason Wei, Pieter Abbeel, and Dale Schuurmans. Foundation models for decision making: Problems, methods, and opportunities. _arXiv preprint arXiv: Arxiv-2303.04129_ , 2023.
  * Zhang et al. [2021] Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, and Yanan Sui. Confidence-aware imitation learning from demonstrations with varying optimality. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, _Advances in Neural Information Processing Systems_ , volume 34, pages 12340–12350. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper_files/paper/2021/file/670e8a43b246801ca1eaca97b3e19189-Paper.pdf.
* Zhu et al. [2022] Yifeng Zhu, Abhishek Joshi, Peter Stone, and Yuke Zhu. Viola: Imitation learning for vision-based manipulation with object proposal priors. _arXiv preprint arXiv: Arxiv-2210.11339_ , 2022.
## Appendix A Model Architecture
In this section, we provide comprehensive details about the Transformer model
architectures considered in this work. We implement all models in PyTorch [61]
and adapt the implementation of Transformer-XL from VPT [4].
### A.1 Observation Encoding
Experiments conducted on both DMLab and RoboMimic include RGB image
observations. For models trained on DMLab, we use a ConvNet [29] similar to
the one used in Espeholt et al. [20]. For models trained on RoboMimic, we follow Mandlekar et al. [53] and use a ResNet-18 network [29] followed by a spatial-softmax layer [23]. We use separate, independently parameterized encoders for
images taken from the wrist camera and frontal camera. Detailed model
parameters are listed in Table A.1.
Table A.1: Model hyperparameters for vision encoders. Hyperparameter | Value
---|---
DMLab
Image Size | 72 $\times$ 96
Number of ConvNet Blocks | 1
Channels per Block | [16, 32, 32]
Output Size | 256
RoboMimic
Image Size | 84 $\times$ 84
Random Crop Height | 76
Random Crop Width | 76
Number of Randomly Cropped Patches | 1
ConvNet Backbone | ResNet-18 [29]
Output Size | 64
Spatial-Softmax Number of Keypoints | 32
Spatial-Softmax Temperature | 1.0
Output Size | 64
Since DMLab is highly partially observable, we follow previous work [20, 22, 4] and supply the model with the previous action as an additional input. We learn 16-dim embedding vectors for all discrete actions.
To encode proprioceptive measurements in RoboMimic, we follow Mandlekar et al. [53] and do not apply any learned encoding. Instead, these observations are concatenated with the image features and passed together to the following layers. Note that we do not provide previous action inputs in RoboMimic, since we find that doing so incurs significant overfitting.
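As a concrete illustration of the encoding pipeline described above, the following is a minimal PyTorch sketch of a RoboMimic-style image encoder: a ResNet-18 trunk followed by a spatial-softmax layer with 32 keypoints and a 64-dim output, matching the sizes in Table A.1. The 1x1 projection to keypoint channels and the final linear layer are illustrative assumptions rather than the exact released architecture; one such encoder would be instantiated per camera, and proprioception is concatenated to its output without any learned encoding.

```python
import torch
import torch.nn as nn
import torchvision

class SpatialSoftmax(nn.Module):
    """Expected (x, y) image coordinates of a per-channel spatial softmax."""
    def __init__(self, temperature=1.0):
        super().__init__()
        self.temperature = temperature

    def forward(self, feat):                      # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        pos_y, pos_x = torch.meshgrid(
            torch.linspace(-1.0, 1.0, h), torch.linspace(-1.0, 1.0, w), indexing="ij")
        attn = torch.softmax(feat.view(b, c, -1) / self.temperature, dim=-1)
        ex = (attn * pos_x.reshape(1, 1, -1).to(feat)).sum(-1)
        ey = (attn * pos_y.reshape(1, 1, -1).to(feat)).sum(-1)
        return torch.cat([ex, ey], dim=-1)        # (B, 2C) keypoint coordinates

class ImageEncoder(nn.Module):
    def __init__(self, num_keypoints=32, output_size=64):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.trunk = nn.Sequential(*list(resnet.children())[:-2])          # drop avgpool/fc
        self.to_keypoints = nn.Conv2d(512, num_keypoints, kernel_size=1)   # illustrative projection
        self.spatial_softmax = SpatialSoftmax(temperature=1.0)
        self.proj = nn.Linear(2 * num_keypoints, output_size)

    def forward(self, image):                     # image: (B, 3, 76, 76) after random cropping
        return self.proj(self.spatial_softmax(self.to_keypoints(self.trunk(image))))

if __name__ == "__main__":
    wrist_enc, frontal_enc = ImageEncoder(), ImageEncoder()   # separate encoder per camera
    img = torch.randn(4, 3, 76, 76)
    proprio = torch.randn(4, 9)                   # proprioception, concatenated without encoding
    tokens = torch.cat([wrist_enc(img), frontal_enc(img), proprio], dim=-1)
```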
### A.2 Transformer Backbone
We use Transformer-XL [16] as our model backbone, adapted from Baker et al.
[4]. Transformer-XL splits long sequences into shorter sub-sequences that
reduce the computational cost of attention while allowing the hidden states to
be carried across the entire input by attending to previous keys and values.
This feature is critical for the long sequence inputs necessary for cross-
episodic attention. Detailed model parameters are listed in Table A.2.
Table A.2: Model hyperparameters for Transformer-XL. Hyperparameter | Value (DMLab) | Value (RoboMimic)
---|---|---
Hidden Size | 256 | 400
Number of Layers | 4 | 2
Number of Heads | 8 | 8
Pointwise Ratio | 4 | 4
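To illustrate the mechanism this backbone relies on, here is a hedged, single-layer sketch of Transformer-XL-style recurrence in PyTorch: the long cross-episodic sequence is processed in sub-sequences, and cached hidden states from earlier chunks are prepended as additional keys and values so that attention effectively spans the whole input. It is a simplification of the actual backbone (no relative positional encoding, feed-forward block, or causal masking), with illustrative names and sizes.

```python
import torch
import torch.nn as nn

class ChunkedAttentionWithMemory(nn.Module):
    def __init__(self, hidden_size=256, num_heads=8, mem_len=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)
        self.mem_len = mem_len

    def forward(self, chunk, memory=None):
        # chunk: (B, L, D); memory: (B, M, D) hidden states cached from earlier chunks
        context = chunk if memory is None else torch.cat([memory, chunk], dim=1)
        out, _ = self.attn(query=chunk, key=context, value=context)
        # Cache the most recent states, gradient-detached as in Transformer-XL.
        new_memory = context[:, -self.mem_len:].detach()
        return out, new_memory

if __name__ == "__main__":
    layer = ChunkedAttentionWithMemory()
    cross_episodic_seq = torch.randn(2, 2048, 256)    # several episodes concatenated
    memory = None
    for chunk in cross_episodic_seq.split(256, dim=1):
        out, memory = layer(chunk, memory)            # attention reaches back through memory
```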
### A.3 Action Decoding
To decode joystick actions in DMLab tasks, we learn a 3-layer MLP whose output
directly parameterizes a categorical distribution. This action head has a
hidden dimension of 128 with ReLU activations. The “Goal Maze” and
“Irreversible Path” tasks have an action dimension of 7, while “Watermaze” has
15 actions. To decode continuous actions in RoboMimic, we learn a 2-layer MLP that parameterizes a Gaussian Mixture Model (GMM) with $5$ modes over 7-dimensional actions. This network has a hidden dimension of 400
with ReLU activations. During deployment, we employ the “low-noise evaluation”
trick [31].
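For concreteness, below is a hedged sketch of such a GMM action head in PyTorch: a 2-layer MLP with hidden dimension 400 parameterizing a 5-mode Gaussian mixture over 7-dimensional actions, following the sizes stated above. The input feature dimension, the parameterization of standard deviations, and the exact form of the low-noise evaluation trick are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GMMActionHead(nn.Module):
    def __init__(self, input_dim=400, hidden_dim=400, action_dim=7, num_modes=5):
        super().__init__()
        self.num_modes, self.action_dim = num_modes, action_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_modes * (1 + 2 * action_dim)),
        )

    def forward(self, features):
        out = self.net(features)
        logits, mean, log_std = out.split(
            [self.num_modes, self.num_modes * self.action_dim,
             self.num_modes * self.action_dim], dim=-1)
        mean = mean.view(*features.shape[:-1], self.num_modes, self.action_dim)
        std = log_std.view_as(mean).exp().clamp(min=1e-4)
        # During deployment, a "low-noise evaluation" trick typically replaces std
        # with a small constant before sampling.
        mix = D.Categorical(logits=logits)
        comp = D.Independent(D.Normal(mean, std), 1)
        return D.MixtureSameFamily(mix, comp)

if __name__ == "__main__":
    head = GMMActionHead()
    dist = head(torch.randn(8, 400))
    actions = dist.sample()                       # (8, 7) continuous actions
    nll = -dist.log_prob(actions).mean()          # behavior-cloning loss term
```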
## Appendix B Training Details and Hyperparameters
All experiments are conducted on cluster nodes with NVIDIA V100 GPUs. We
utilize DDP (distributed data parallel) to accelerate the training if
necessary. Training hyperparameters are listed in Table A.3.
Table A.3: Hyperparameters used during training. Hyperparameter | Value (DMLab) | Value (RoboMimic)
---|---|---
Learning Rate | 0.0005 | 0.0001
Warmup Steps | 1000 | 0
LR Cosine Annealing Steps | 100000 | N/A
Weight Decay | 0.0 | 0.0
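The warmup and cosine-annealing entries in Table A.3 can be realized, for example, with a per-step multiplier on the base learning rate. The sketch below uses the DMLab values (base LR 5e-4, 1000 warmup steps, 100000 annealing steps, zero weight decay); the optimizer choice and exact wiring are assumptions for illustration rather than the released training code.

```python
import math
import torch

def lr_lambda(step, warmup_steps=1000, anneal_steps=100000):
    if step < warmup_steps:
        return step / max(1, warmup_steps)            # linear warmup toward the base LR
    t = min(step - warmup_steps, anneal_steps) / anneal_steps
    return 0.5 * (1.0 + math.cos(math.pi * t))        # cosine annealing toward zero

model = torch.nn.Linear(256, 7)                       # stand-in for the agent
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(10):                                # scheduler stepped after each optimizer step
    optimizer.step()
    scheduler.step()
```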
## Appendix C Experiment Details
### C.1 DMLab Main Experiment
Our DMLab main experiment is conducted on three levels with task IDs
* •
explore_goal_locations_large,
* •
rooms_watermaze,
* •
and skymaze_irreversible_path_hard.
We use no action repeats during training and evaluation. For experiments with
varying task difficulty, we select difficulty parameters “room numbers”,
“spawn radius”, and “built-in difficulty” for these three levels,
respectively. We adopt environment wrappers and helper functions from Petrenko
et al. [63] to flexibly and precisely maneuver task difficulties.
Due to different task horizons, we tune the context length of Transformer-XL
models and vary curricular trajectories accordingly. These differences are
summarized in Table A.4.
Table A.4: Experiment details on DMLab tasks. Columns “Epoch” denote the exact
training epochs with best validation performance. We select these checkpoints
for evaluation. For task-difficulty-based curriculum, the column “Training
Trajectories” with $n\times m$ entries means $n$ trajectories per difficulty
level ($m$ levels in total). The column “Sampled Episodes” with $[i,j]$
entries means we first determine the number of episodes per difficulty level
by uniformly sampling an integer from $[i,j]$ (inclusively).
Level Name | Context Length | Task-Difficulty-Based Curriculum | Learning-Progress-Based Curriculum
---|---|---|---
Epoch | Training Trajectories | Sampled Episodes | Epoch | Training Trajectories | Sampled Episodes
Goal Maze | 500 | 84 | 100 x 3 | [1, 5] | 88 | 300 | 9
Watermaze | 400 | 89 | 100 x 3 | [1, 5] | 80 | 300 | 9
Irreversible Path | 1600 | 90 | 100 x 4 | [1, 3] | 97 | 400 | 8
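The sampling rule in the caption of Table A.4 can be made concrete with the short sketch below, which assembles one curricular training sequence for the task-difficulty-based setting: difficulty levels are traversed from easy to hard, the number of episodes per level is drawn uniformly from $[i,j]$, and the chosen episodes are concatenated back to back before being fed to cross-episodic attention. Data structures and names are illustrative, not the released pipeline.

```python
import random

def build_curricular_sequence(trajs_by_difficulty, episode_range=(1, 5), rng=random):
    """trajs_by_difficulty: list ordered from easiest to hardest level; each entry is a
    list of episodes, and each episode is a list of (observation, action) steps."""
    sequence = []
    for episodes in trajs_by_difficulty:
        k = rng.randint(*episode_range)               # e.g. [1, 5] for Goal Maze / Watermaze
        for episode in rng.sample(episodes, k):       # assumes k <= len(episodes)
            sequence.extend(episode)                  # concatenate steps across episodes
    return sequence
```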
RL oracles serve as source agents used to generate training data for our
methods and the “BC w/ Expert Data” baseline. They are trained with the PPO
[70] implementation from Petrenko et al. [63]. The “BC w/ Expert Data”
baselines have the same model architecture, training hyperparameters, and
amount of training data as our method, but are trained solely on trajectories
generated by the best performing RL oracles without cross-episodic attention.
Table A.5: Evaluation results on DMLab, averaged over three tasks (Figure 3).
Ours (Task Difficulty), Auto | Ours (Task Difficulty), Fixed | Ours (Learning Progress) | DT (Mixed Difficulty) | DT (Single Difficulty) | AT (Mixed Difficulty) | AT (Single Difficulty) | BC w/ Expert Data | RL (Oracle) | Curriculum RL (Oracle)
---|---|---|---|---|---|---|---|---|---
51.4 | ${\color[rgb]{0.09,0.45,0.27}\mathbf{54.4}}$ | 32.4 | 35.3 | 11.7 | 42.7 | 33.4 | 14.2 | 40.6 | 50.6
### C.2 DMLab Generalization
This series of experiments probes the zero-shot generalization capabilities of
embodied agents in unseen maze configurations, out-of-distribution difficulty
levels, and varying environment dynamics. For the task “Goal Maze w/ Unseen
Mechanism”, we use the level with task ID explore_obstructed_goals_large,
which adds randomly opened and closed doors into the maze while ensuring a
valid path to the goal always exists. An example of an agent’s ego-centric
observation is visualized in Figure A.1.
The task “Irreversible Path (OOD. Difficulty)” corresponds to configurations
with the built-in difficulty of 1 (agents are only trained on difficulty up to
0.9, as noted in Table 1). For tasks with varying environment dynamics, we
directly test agents with an action repeat of 2. This is different from the
training setting with no action repeat.
Table A.6: Generalization results on DMLab, averaged over five settings (Figure 4).
Ours (Task Difficulty) | Ours (Learning Progress) | DT (Mixed Difficulty) | DT (Single Difficulty) | AT (Mixed Difficulty) | AT (Single Difficulty) | BC w/ Expert Data | RL (Oracle) | Curriculum RL (Oracle)
---|---|---|---|---|---|---|---|---
${\color[rgb]{0.09,0.45,0.27}\mathbf{39.6}}$ | 27.8 | 31.8 | 13.6 | 39.4 | 29.2 | 18.1 | 30.0 | 37.6
Figure A.1: A visualization of the task “Goal Maze (Unseen Mechanism)”. It
includes doors that are randomly opened or closed.
### C.3 RoboMimic Main Experiment
We leverage the Multi-Human (MH) dataset from Mandlekar et al. [53]. It
consists of demonstrations collected by operators with varying proficiency. We
construct the expertise-based curriculum by following the order of “worse
operators, okay operators, then better operators”. We use a context length of
200 for both tasks. There are 90 trajectories per expertise level. To
determine the number of trajectories per expertise level when constructing
curricular data, we uniformly sample an integer from $[1,5]$ (inclusively).
The “Lift” and “Can” tasks are solved after training for 33 epochs and 179
epochs, respectively. We control for the same number of training epochs in
subsequent ablation studies.
### C.4 Ablation Study on Curriculum Granularity
We perform this ablation with the task-difficulty-based curriculum on DMLab
levels due to the ease of adjusting granularity. The definition of varying
levels of curriculum coarseness is listed in Table A.7.
Table A.7: Definitions of varying levels of curriculum coarseness. Level Name | Difficulty Parameter | Test Difficulty | Fine | Medium | Coarse
---|---|---|---|---|---
Goal Maze | Room Numbers | 20 | 5→10→15 | 5→10 | 5→15
Watermaze | Spawn Radius | 580 | 150→300→450 | 150→300 | 150→450
Irreversible Path | Built-In Difficulty | 0.9 | .1→.3→.5→.7 | .1→.5→.7 | .1→.3→.5
### C.5 Comparison of Curricula in RoboMimic
In IL settings, we further explored the efficacy of various curricula. For the
RoboMimic tasks examined, we employed a learning-progress-based curriculum,
ensuring the total training trajectories matched those of the expertise-based
curriculum (i.e., 270 trajectories per task). All other parameters remained
consistent, with the training data derived from RoboMimic’s machine-generated
dataset.
Table A.8 indicates that when heterogeneous-quality human demonstrations are
accessible, the expertise-based curriculum is preferable due to its superior
performance over the learning-progress-based approach. Conversely, without
expert demonstrations and relying solely on machine-generated data, the
learning-progress-based curriculum is still commendable. It offers noteworthy
results and surpasses offline RL methods like CQL [41], even though CQL is
trained on the full RoboMimic dataset, encompassing 1500 trajectories for the
Lift task and 3900 for the Can task.
Table A.8: Results show the performance of different curricula on two robotic
manipulation tasks: Lift and Can. Standard deviations are included.
Task | Expertise-Based Curriculum | Learning-Progress-Based Curriculum | CQL [41]
---|---|---|---
Lift | $100.0\pm 0.0$ | $32.0\pm 17.0$ | $2.7\pm 0.9$
Can | $100.0\pm 0.0$ | $30.0\pm 2.8$ | $0.0\pm 0.0$
Average | ${\color[rgb]{0.09,0.45,0.27}\mathbf{100.0}}$ | $31.0$ | $1.4$
## Appendix D Feasibility of Obtaining Curricular Data
The challenge of accurately orchestrating a curriculum is non-trivial and
hinges on various factors. In the present work, three curriculum designs are
introduced and validated, each with its practical considerations and
underlying assumptions, discussed herein.
#### Learning-Progress-Based Curriculum.
RL agents typically exhibit monotonic improvement over training epochs,
thereby naturally producing incrementally better data. The curriculum here is
devised through a series of checkpoints throughout the training duration,
necessitating no supplementary assumptions for its formulation.
#### Task-Difficulty-Based Curriculum.
In contexts where environmental difficulty is parameterizable, curricula can
be structured through a schedule, determined by the relevant difficulty
parameter, as demonstrated within this work. In scenarios lacking
parameterized difficulty, alternatives such as methods proposed by
Kanitscheider et al. [40] may be employed. The application of our method to
tasks where difficulty is not explicitly characterized presents an intriguing
avenue for future research.
#### Expertise-Based Curriculum.
A notable limitation resides in the requisite to estimate demonstrators’
proficiency. While some IL benchmarks, e.g., RoboMimic [53], come pre-equipped
with proficiency labels, a broader application of our method necessitates an
approximation of proficiency. One plausible approach entails ranking
trajectories via completion time. Furthermore, a demonstrator’s proficiency is
likely to organically improve—from initial unfamiliarity with teleoperation
systems or tasks, to a stage of executing data collection with muscle memory
[52]. This progression potentially provides a rich learning signal conducive
for CEC application.
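As a sketch of the completion-time heuristic mentioned above (and not part of the paper's pipeline), demonstrations could be ranked by episode length and binned into pseudo-expertise levels ordered from worse to better, which can then be assembled into a curricular sequence in the same way as the labeled RoboMimic data.

```python
def rank_by_completion_time(trajectories, num_levels=3):
    """trajectories: list of episodes; len(episode) serves as a proxy for completion time."""
    ordered = sorted(trajectories, key=len, reverse=True)   # slowest (least proficient) first
    size = len(ordered) // num_levels
    # Bins ordered worse -> okay -> better, mirroring the expertise-based curriculum.
    return [ordered[i * size:(i + 1) * size] for i in range(num_levels)]
```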
## Appendix E Broader Impact
Our Cross-Episodic Curriculum can significantly enhance Transformer agent
learning but carries potential societal impacts. The efficiency of our method
depends on the curriculum’s design. If the curriculum unintentionally reflects
biases, it could lead to the amplification of these biases in learned
policies, potentially perpetuating unfair or discriminatory outcomes in AI-
driven decisions. Furthermore, the computational intensity of our approach at
evaluation could contribute to increased energy usage, which has implications
for the environmental footprint of AI applications.
# Black hole ringdown from physically sensible initial value problem in
higher-order scalar-tensor theories
Keisuke Nakashi Department of Social Design Engineering, National Institute
of Technology (KOSEN), Kochi College, 200-1 Monobe Otsu, Nankoku, Kochi,
783-8508, Japan Department of Physics, Rikkyo University, Toshima, Tokyo
171-8501, Japan Masashi Kimura Department of Informatics and Electronics,
Daiichi Institute of Technology, Tokyo 110-0005, Japan Department of Physics,
Rikkyo University, Toshima, Tokyo 171-8501, Japan Hayato Motohashi Division
of Liberal Arts, Kogakuin University, 2665-1 Nakano-machi, Hachioji, Tokyo
192-0015, Japan Kazufumi Takahashi Center for Gravitational Physics and
Quantum Information, Yukawa Institute for Theoretical Physics, Kyoto
University, 606-8502, Kyoto, Japan
###### Abstract
We study odd-parity perturbations about static and spherically symmetric black
hole solutions with a linearly time-dependent scalar field in higher-order
scalar-tensor theories. In particular, we consider stealth Schwarzschild and
stealth Schwarzschild-de Sitter solutions, where the deviation from the
general relativity case is controlled by a single parameter. We find that
complex frequencies of quasinormal modes (QNMs) are given by a simple scaling
of those in general relativity. We also show that there is a degeneracy
between the parameter characterizing the modification from general relativity
and the black hole mass. We then consider a physically sensible initial value
problem by taking into account the fact that the effective metric for the odd-
parity perturbations is in general different from the background metric. We
confirm that damped oscillations appearing at late times are indeed dominated
by the QNMs. Our analysis includes the case where the perturbations are
superluminal, and we demonstrate in this case that the perturbations can
escape from the region inside the horizon for the background metric.
Preprint numbers: RUP-23-19, YITP-23-118
## I Introduction
Testing gravity has been a central issue in physics. Apart from cosmological
tests of gravity Koyama:2015vza ; Ferreira:2019xrr ; Arai:2022ilw , there have
been an increasing number of gravitational-wave events from binary black hole
mergers, which offer a possibility to test gravity at strong-field/dynamical
regimes. In general relativity (GR), the late-time gravitational wave signal
emitted from binary black hole mergers, known as the ringdown signal, can be
well described by a superposition of quasinormal modes (QNMs) Buonanno:2006ui
. Each QNM is characterized by a specific complex frequency, whose real and
imaginary parts respectively correspond to the frequency of temporal
oscillation and the exponential damping rate. The no-hair theorem of black
holes in (vacuum) GR implies that the QNM frequencies are determined solely by
the mass and angular momentum of the black hole. However, in modified gravity,
black holes can support some nontrivial hair other than the mass and angular
momentum, which would affect the QNM spectrum. In other words, the information
about the underlying gravitational theory would be encoded in the QNM
spectrum.
In contrast to GR where gravity is described solely by the spacetime metric,
modified gravity theories in general involve additional degrees of freedom.
The simplest class of modified gravity is the class of scalar-tensor theories,
where a single scalar field represents the modification of gravity. Starting
with the seminal theory of Brans-Dicke Brans:1961sx , a number of scalar-
tensor theories have been proposed so far. Horndeski theories Horndeski:1974wa
; Deffayet:2011gz ; Kobayashi:2011nu , which form the most general class of
scalar-tensor theories with second-order Euler-Lagrange equations, provide a
unified description of such traditional theories. It should be noted that the
second-order nature of the Euler-Lagrange equations guarantees the absence of
the Ostrogradsky ghost Woodard:2015zca ; Motohashi:2014opa ; Motohashi:2020psc
; Aoki:2020gfv .
Meanwhile, the Horndeski class is not the most general class of ghost-free
scalar-tensor theories. Indeed, even if the Euler-Lagrange equations contain
higher-order derivatives, the problem of Ostrogradsky ghost can be
circumvented by imposing the degeneracy condition Motohashi:2014opa ;
Langlois:2015cwa ; Motohashi:2016ftl ; Klein:2016aiq ; Motohashi:2017eya ;
Motohashi:2018pxg . Extensions of Horndeski theories in this direction are
called degenerate higher-order scalar-tensor (DHOST) theories Langlois:2015cwa
; Crisostomi:2016czh ; BenAchour:2016fzp . Another systematic way to extend
the Horndeski class is to employ the disformal transformation
Bekenstein:1992pj ; Bruneton:2007si ; Bettoni:2013diz and its generalization
involving higher derivatives of the scalar field Takahashi:2021ttd ;
Takahashi:2023vva . In fact, the disformal transformation maps the Horndeski
class to (a particular subclass of) the DHOST class, while the generalized
disformal transformation yields a larger class of ghost-free theories, which
is called the generalized disformal Horndeski (GDH) class Takahashi:2022mew . (Footnote 1: Matter coupling could introduce an Ostrogradsky mode in generalized disformal Horndeski theories in general, while there exists a nontrivial subclass where this problem can be avoided Takahashi:2022mew ; Naruko:2022vuh ; Takahashi:2022ctx ; Ikeda:2023ntu .) A yet further extension can be obtained
by relaxing the degeneracy condition in such a way that it is satisfied only
under the unitary gauge. Away from the unitary gauge, apparently there is an
Ostrogradsky mode, but it actually satisfies an elliptic differential equation
on a spacelike hypersurface and hence does not propagate. Such a mode is often
called a shadowy mode DeFelice:2018ewo ; DeFelice:2021hps , which itself is
harmless. By allowing for the existence of the shadowy mode, one obtains
U-DHOST DeFelice:2018ewo ; DeFelice:2021hps ; DeFelice:2022xvq and
generalized disformal unitary-degenerate (GDU) theories Takahashi:2023jro .
An interesting class of solutions in scalar-tensor theories is the so-called
stealth solution, where the metric is the same as in a GR solution but the
scalar field has a nontrivial profile. The stealth solutions have been found
and studied in the Brans-Dicke theory Nariai1968 ; OHanlon:1972ysn ;
BARROW1990294 ; Romero1993 ; Kolitch:1994kr ; Johri:1994rw ; Giardino:2022sdv
; Giardino:2023qlu , more general scalar-tensor theories Ayon-Beato:2004nzi ;
Ayon-Beato:2005yoq ; Mukohyama:2005rw ; Robinson:2006ib ; Ayon-Beato:2015qfa ;
Alvarez:2016qky ; Smolic:2017bic ; Franzin:2021yvf , and Horndeski and DHOST
theories Babichev:2013cya ; Kobayashi:2014eva ; Babichev:2016kdt ;
Babichev:2017lmw ; Minamitsuji:2018vuw ; BenAchour:2018dap ; Motohashi:2018wdq
; Motohashi:2019sen ; Minamitsuji:2019shy ; Bernardo:2019yxp ;
Charmousis:2019vnf ; Takahashi:2020hso ; Bernardo:2020ehy ; Gorji:2020bfl . In
particular, the general construction of stealth solutions was developed in
Motohashi:2018wdq ; Takahashi:2020hso in a covariant manner. The perturbation
theory about stealth black hole solutions has been studied extensively
Babichev:2018uiw ; Takahashi:2019oxz ; deRham:2019gha ; Motohashi:2019ymr ;
Khoury:2020aya ; Tomikawa:2021pca ; Takahashi:2021bml ; Mukohyama:2022skk ;
Khoury:2022zor . It then turned out that perturbations of stealth solutions
are strongly coupled in DHOST theories Babichev:2018uiw ; deRham:2019gha ;
Motohashi:2019ymr ; Takahashi:2021bml , and this problem is expected to
persist in GDH theories. A possible way out of this problem is to consider a
small detuning (i.e., scordatura) of the degeneracy condition Motohashi:2019ymr . (Footnote 2: The scordatura term affects the stealth black hole background, leading to a time-dependent correction. However, the time dependence is typically very weak and can be negligible at astrophysical scales Mukohyama:2005rw ; DeFelice:2022qaz .) This would introduce an Ostrogradsky mode in general, but its mass can be pushed above the cutoff of
the theory. Moreover, it is even possible to have the scordatura term in
U-DHOST theories that are intrinsically free of Ostrogradsky ghost
DeFelice:2022xvq . Therefore, DHOST (or GDH) theories supplemented with the
scordatura term would provide a consistent description of stealth solutions.
In the present paper, we perform a time-domain analysis of perturbations about
stealth black hole solutions in DHOST theories. In doing so, the main
difficulty comes from the fact that the effective metric (i.e., the one on
which the perturbations propagate) is in general different from the background
metric which determines the motion of (minimally coupled) matter fields. This
implies that a portion of a hypersurface which is spacelike with respect to
the effective metric can be timelike with respect to the background metric.
Therefore, when matter fields are taken into account, one has to carefully
choose the initial hypersurface so that it is spacelike with respect to both
the effective metric and the background metric. This issue has been addressed
in Nakashi:2022wdg for the case of monopole perturbations about stealth black
hole solutions in DHOST theories. The aim of the present paper is to extend
the analysis of Nakashi:2022wdg to odd-parity perturbations.
The rest of this paper is organized as follows. In Sec. II, we explain the
DHOST theories and their stealth black hole solutions. In addition, following
Takahashi:2019oxz , we analyze the odd-parity perturbations about the stealth
black hole solutions to see that one has to introduce a new time coordinate
(called $\tilde{t}$) to recast the master equation for the odd-parity
perturbations in the form of a wave equation. In Sec. III, we discuss the
effective metric, the character of a constant-$\tilde{t}$ hypersurface, and
characteristic curves for the odd-parity perturbations about the stealth
Schwarzschild solutions. We also discuss QNM frequencies in the DHOST theories
and obtain the time evolution of the perturbations employing the physically
sensible formulation of an initial value problem developed in Nakashi:2022wdg
. In particular, we confirm that the numerical waveform exhibits damped
oscillations at late times, which can be well fitted by a superposition of the
QNMs for the DHOST theories. In Sec. IV, we perform a similar analysis for the
stealth Schwarzschild-de Sitter solutions. Finally, we draw our conclusions in
Sec. V. In what follows, we use the geometric units in which $c=G=1$.
## II Gravity theory, background, and odd-parity perturbations
### II.1 Gravity theory
The action of the quadratic DHOST theories is given by Langlois:2015cwa
$\displaystyle S=\int{\rm
d}^{4}x\sqrt{-g}\left[F_{0}(\phi,X)+F_{1}(\phi,X)\Box\phi+F_{2}(\phi,X)R+\sum_{I=1}^{5}A_{I}(\phi,X)L_{I}^{(2)}\right],$
(1)
where the coupling functions $F_{0},F_{1},F_{2},$ and $A_{I}$ are functions of
the scalar field $\phi$ and its kinetic term $X=\phi_{\mu}\phi^{\mu}$ and
$\displaystyle\begin{split}&L_{1}^{(2)}=\phi_{\mu\nu}\phi^{\mu\nu},\qquad
L_{2}^{(2)}=(\Box\phi)^{2},\qquad
L_{3}^{(2)}=\phi^{\mu}\phi_{\mu\nu}\phi^{\nu}\Box\phi,\\\
&L_{4}^{(2)}=\phi^{\mu}\phi_{\mu\nu}\phi^{\nu\lambda}\phi_{\lambda},\qquad
L_{5}^{(2)}=(\phi^{\mu}\phi_{\mu\nu}\phi^{\nu})^{2},\end{split}$ (2)
with $\phi_{\mu}=\nabla_{\mu}\phi$ and
$\phi_{\mu\nu}=\nabla_{\mu}\nabla_{\nu}\phi$. For a generic choice of the
coupling functions, the theory described by the action (1) suffers from the
problem of the Ostrogradsky ghost associated with higher derivatives in the
equations of motion. The Ostrogradsky ghost can be removed by imposing the
following degeneracy conditions:
$\displaystyle\begin{split}A_{2}&=-A_{1}\neq-\frac{F_{2}}{X},\\\
A_{4}&=\frac{1}{8(F_{2}-XA_{1})^{2}}\left\\{4F_{2}\left[3(A_{1}-2F_{2X})^{2}-2A_{3}F_{2}\right]-A_{3}X^{2}(16A_{1}F_{2X}+A_{3}F_{2})\right.\\\
&\quad\left.+4X(3A_{1}A_{3}F_{2}+16A_{1}^{2}F_{2X}-16A_{1}F_{2X}^{2}-4A_{1}^{3}+2A_{3}F_{2}F_{2X})\right\\},\\\
A_{5}&=\frac{1}{8(F_{2}-XA_{1})^{2}}(2A_{1}-XA_{3}-4F_{2X})\left[A_{1}(2A_{1}+3XA_{3}-4F_{2X})-4A_{3}F_{2}\right],\end{split}$
(3)
where a subscript $X$ denotes the derivative with respect to $X$. The DHOST theories described by Eq. (1) with the degeneracy conditions (3) are called class Ia Langlois:2015cwa ; BenAchour:2016cay , which can be mapped to the Horndeski theory via disformal transformation. It is known that all the other classes of quadratic DHOST theories are phenomenologically disfavored in the sense that either the cosmological perturbations are unstable or the modes corresponding to gravitational waves are absent.
In the present paper, we consider a subclass of the class Ia quadratic DHOST
theories, which is described by the following action:
$\displaystyle S=\int{\rm d}^{4}x\sqrt{-g}\left[F_{0}(X)+F_{2}(X)R+\sum_{I=1}^{5}A_{I}(X)L_{I}^{(2)}\right],$
(4)
where we have set $F_{1}=0$ and assumed that the coupling functions are
functions only of $X$. In other words, we focus on the subclass of the
quadratic DHOST theories whose action is invariant under the shift
($\phi\to\phi+{\rm const}.$) and the reflection ($\phi\to-\phi$) of the scalar
field. As we will see in the next subsection, these theories admit an
interesting class of solutions known as the stealth solutions, i.e., a GR
solution with a linearly time-dependent scalar field.
### II.2 Background spacetime and scalar field
We consider a static and spherically symmetric background spacetime. The
metric of the background spacetime is given by
$\displaystyle\bar{g}_{\mu\nu}{\rm d}x^{\mu}{\rm d}x^{\nu}=-A(r){\rm
d}t^{2}+\frac{{\rm d}r^{2}}{B(r)}+r^{2}\gamma_{ab}{\rm d}x^{a}{\rm d}x^{b},$
(5)
where $\gamma_{ab}$ is the metric on a two-dimensional unit sphere,
$\gamma_{ab}{\rm d}x^{a}{\rm d}x^{b}={\rm d}\theta^{2}+\sin^{2}\theta{\rm
d}\varphi^{2}$. As for the scalar field, we impose the following ansatz:
$\displaystyle\bar{\phi}(t,r)=qt+\psi(r),$ (6)
where $q$ is a nonvanishing constant. We note that the linear time dependence
of the scalar field is compatible with the static metric because the action
(4) depends on the scalar field only through its derivatives. Having said that, the linear time dependence can still be allowed in theories without
shift symmetry Minamitsuji:2018vuw ; Motohashi:2018wdq ; Takahashi:2020hso .
In the present paper, in particular, we focus on stealth black hole solutions.
A stealth black hole solution is described by the metric which is the same as
the one in GR, while the scalar field has a nontrivial configuration. The
general construction of stealth solutions was developed in Motohashi:2018wdq ;
Takahashi:2020hso in a covariant manner. The idea is to substitute the metric
and scalar field ansatz into the equations of motion and derive the conditions
on the coupling functions of DHOST theories under which the equations are
trivially satisfied. Assuming that $X=-q^{2}$, the stealth Schwarzschild-de
Sitter (dS) metric,
$\displaystyle A(r)=B(r)=1-\frac{r_{\rm s}}{r}-\frac{\Lambda r^{2}}{3},$ (7)
with $r_{\rm s}$ and $\Lambda$ being constants, can be a solution if the
following conditions are satisfied Motohashi:2018wdq ; Takahashi:2020hso :
$\displaystyle\left.\left\\{F_{0}+2\Lambda\left(F_{2}-XA_{1}\right)\right\\}\right|_{X=-q^{2}}=0,\,\,\left.\left\\{2F_{0X}+\Lambda\left(8F_{2X}-2A_{1}+4XA_{1X}+3XA_{3}\right)\right\\}\right|_{X=-q^{2}}=0.$
(8)
Note that, among the three degeneracy conditions in (3), we have used only
$A_{2}=-A_{1}$ in deriving the above conditions. Therefore, the stealth
Schwarzschild-dS solution exists even away from the DHOST theories so long as
$A_{2}=-A_{1}$. Note also that the stealth Schwarzschild solution can be
realized by putting $\Lambda=0$. In this case, the above condition reads
$\displaystyle\left.F_{0}\right|_{X=-q^{2}}=0,\qquad\left.F_{0X}\right|_{X=-q^{2}}=0.$
(9)
For the stealth black hole solutions, the scalar field profile can be obtained
from the condition $X=-q^{2}$ as follows:
$\displaystyle\bar{\phi}=q\left(t\pm\int\frac{\sqrt{1-A(r)}}{A(r)}{\rm
d}r\right).$ (10)
Here, we choose the plus branch so that $\phi$ is regular at the future event
horizon. Indeed, for the plus branch, the behavior of the scalar field near
the future event horizon where $A(r)\simeq 0$ can be approximated as
$\displaystyle\bar{\phi}\simeq q\left(t+\int\frac{{\rm d}r}{A(r)}\right)=qv,$
(11)
where $v$ is the ingoing Eddington-Finkelstein coordinate defined by
$v\coloneqq t+\int A(r)^{-1}{\rm d}r$.
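As a purely illustrative numerical check (not part of the analysis below), the radial part of the scalar profile in Eq. (10) can be evaluated by quadrature for the stealth Schwarzschild case. The sketch uses units with $r_{\rm s}=1$ and $q=1$ and fixes the integration constant at a reference radius; it only illustrates that $\psi$ grows without bound as $r\to r_{\rm s}$, while the combination in Eq. (11) remains regular on the future horizon.

```python
import numpy as np
from scipy.integrate import quad

def A(r, rs=1.0, Lam=0.0):
    return 1.0 - rs / r - Lam * r**2 / 3.0            # metric function of Eq. (7)

def psi(r, r_ref=10.0):
    """Radial part of Eq. (10), plus branch, normalized so that psi(r_ref) = 0."""
    integrand = lambda rp: np.sqrt(1.0 - A(rp)) / A(rp)
    value, _ = quad(integrand, r_ref, r)
    return value

for r in (5.0, 2.0, 1.1, 1.01):
    print(r, psi(r))   # |psi| grows (logarithmically) as r approaches the horizon r = 1
```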
### II.3 Odd-parity perturbations: quadratic Lagrangian and equation of
motion
We study linear odd-parity perturbations around a static and spherically
symmetric spacetime in DHOST theories. Although we will focus on the stealth
black hole solutions in the subsequent sections, for the time being, we
investigate the perturbations around a general static and spherically
symmetric spacetime described by the metric (5), following the discussion in
Takahashi:2019oxz ; Takahashi:2021bml . To study the odd-parity perturbations,
we define the metric perturbation as $\epsilon h_{\mu\nu}\coloneqq
g_{\mu\nu}-\bar{g}_{\mu\nu}$, where $\epsilon$ is a small parameter. Due to
the spherical symmetry of the background spacetime, it is useful to expand the
odd-parity perturbations in terms of the spherical harmonics $Y_{\ell
m}(\theta,\varphi)$ as follows:
$\displaystyle\begin{split}h_{tt}&=h_{tr}=h_{rr}=0,\\\
h_{ta}&=\sum_{\ell,m}h_{0,\ell m}(t,r)E_{a}{}^{b}\bar{\nabla}_{b}Y_{\ell
m}(\theta,\varphi),\\\ h_{ra}&=\sum_{\ell,m}h_{1,\ell
m}(t,r)E_{a}{}^{b}\bar{\nabla}_{b}Y_{\ell m}(\theta,\varphi),\\\
h_{ab}&=\sum_{\ell,m}h_{2,\ell
m}(t,r)E_{(a}{}^{c}\bar{\nabla}_{b)}\bar{\nabla}_{c}Y_{\ell
m}(\theta,\varphi),\end{split}$ (12)
where $E_{ab}$ is the completely antisymmetric tensor defined on a two-
dimensional unit sphere, and $\bar{\nabla}_{a}$ denotes the covariant
derivative with respect to $\gamma_{ab}$. Due to the symmetry of the
background spacetime, it is sufficient to consider only $m=0$. We note that
the odd-parity perturbations do not have an $\ell=0$ mode, and $h_{2}$ vanishes
for $\ell=1$. In what follows, we focus on the modes with $\ell\geq 2$ where
the odd-parity perturbations are dynamical. Also, we do not consider the
perturbation of the scalar field, because it belongs to the even-parity
perturbations.
In order to eliminate an unphysical degree of freedom, we consider an
infinitesimal coordinate transformation: $x^{a}\to x^{a}+\epsilon\xi^{a}$. A
general infinitesimal transformation for the odd-parity modes can be written
as
$\displaystyle\xi^{a}=\sum_{\ell,m}\Xi_{\ell
m}(t,r)E^{ab}\bar{\nabla}_{b}Y_{\ell m}(\theta,\varphi).$ (13)
Then, the gauge transformation law for the perturbation variables is given by
$\displaystyle h_{0}\to h_{0}-\dot{\Xi},\qquad h_{1}\to
h_{1}-\Xi^{\prime}+\frac{2}{r}\Xi,\qquad h_{2}\to h_{2}-2\Xi,$ (14)
where a dot and a prime denote the derivatives with respect to $t$ and $r$,
respectively. For $\ell\geq 2$, we set $h_{2}=0$ to fix the gauge freedom,
which is a complete gauge fixing and hence we can legitimately impose it at
the action level Motohashi:2016prk .
The quadratic Lagrangian can be written in terms of a master variable
$\chi_{\ell}$ as follows Takahashi:2019oxz :
$\displaystyle\frac{2\ell+1}{2\pi}{\cal
L}^{(2)}=\frac{\ell(\ell+1)}{2(\ell-1)(\ell+2)}\sqrt{\frac{B}{A}}\left\\{b_{1}\dot{\chi}_{\ell}^{2}-b_{2}\chi_{\ell}^{\prime
2}+b_{3}\dot{\chi}_{\ell}\chi_{\ell}^{\prime}-\left[\ell(\ell+1)b_{4}+V_{\rm
eff}(r)\right]\chi_{\ell}^{2}\right\\},$ (15)
where
$\displaystyle b_{1}=\frac{r^{2}{\cal FH}^{2}}{A{\cal FG}+B{\cal
J}^{2}},\qquad b_{2}=\frac{r^{2}AB{\cal GH}^{2}}{A{\cal FG}+B{\cal
J}^{2}},\qquad b_{3}=\frac{2r^{2}B{\cal H}^{2}{\cal J}}{A{\cal FG}+B{\cal
J}^{2}},\qquad b_{4}={\cal H},$ (16)
and $V_{\rm eff}(r)$ is given by
$\displaystyle V_{\rm eff}(r)=r^{2}{\cal
H}\left[b_{2}\sqrt{\frac{B}{A}}\left(\frac{1}{r^{2}{\cal
H}}\sqrt{\frac{A}{B}}\right)^{\prime}\,\right]^{\prime}-2{\cal H},$ (17)
with ${\cal F}$, ${\cal G}$, ${\cal H}$, and ${\cal J}$ defined by
$\displaystyle\begin{split}&{\cal
F}=2\left(F_{2}+\frac{q^{2}}{A}A_{1}\right),\qquad{\cal
G}=2\left[F_{2}-\left(\frac{q^{2}}{A}+X\right)A_{1}\right],\\\ &{\cal
H}=2\left(F_{2}-XA_{1}\right),\qquad{\cal
J}=-2q\psi^{\prime}A_{1}.\end{split}$ (18)
The relation between the master variable and the original perturbation
variables can be found in Takahashi:2019oxz . The existence of the cross term
$b_{3}\dot{\chi}_{\ell}\chi_{\ell}^{\prime}$ is the crucial difference from
the case with $q=0$, $\psi^{\prime}=0$, and/or $A_{1}=0$. Indeed, we have
$b_{3}\propto{\cal J}\propto q\psi^{\prime}A_{1}$, and hence the cross term
vanishes if $q\psi^{\prime}A_{1}=0$. However, in the present paper, we do not
consider the case where $q\psi^{\prime}A_{1}=0$ because in this case, the
equation of motion and consequently the evolution of the odd-parity
perturbations are completely the same as those in GR.
Let us proceed with the quadratic Lagrangian (15). We can eliminate the cross
term $b_{3}\dot{\chi}_{\ell}\chi_{\ell}^{\prime}$ by introducing a new
coordinate $\tilde{t}$ as follows:
$\displaystyle\tilde{t}=t+\int\frac{b_{3}}{2b_{2}}{\rm d}r.$ (19)
With this new coordinate, the quadratic Lagrangian becomes
$\displaystyle{\cal L}^{(2)}\propto\tilde{{\cal
L}}=\frac{1}{2}\sqrt{\frac{B}{A}}\left\\{\tilde{b}_{1}(\partial_{\tilde{t}}\chi_{\ell})^{2}-b_{2}\chi_{\ell}^{\prime
2}-\left[\ell(\ell+1)b_{4}+V_{\rm eff}(r)\right]\chi_{\ell}^{2}\right\\},$
(20)
where
$\displaystyle\tilde{b}_{1}=b_{1}+\frac{b_{3}^{2}}{4b_{2}}.$ (21)
Next, we obtain the equation of motion for the odd-parity perturbations.
Varying the quadratic Lagrangian (20) with respect to the master variable
$\chi_{\ell}$, we obtain the equation of motion as
$\displaystyle-\partial_{\tilde{t}}^{2}\chi_{\ell}+\frac{b_{2}}{\tilde{b}_{1}}\chi_{\ell}^{\prime\prime}+\frac{Ab_{2}B^{\prime}+B(2Ab_{2}^{\prime}-b_{2}A^{\prime})}{2AB\tilde{b}_{1}}\chi_{\ell}^{\prime}-\frac{\ell(\ell+1)b_{4}+V_{\rm
eff}}{\tilde{b}_{1}}\chi_{\ell}=0.$ (22)
We introduce a new coordinate $\tilde{x}$ and a new variable $\Psi$ to
transform the above equation into the form of a two-dimensional wave equation:
$\displaystyle\tilde{x}$
$\displaystyle=\int\sqrt{\frac{\tilde{b}_{1}}{b_{2}}}{\rm d}r,$ (23)
$\displaystyle\Psi_{\ell}$ $\displaystyle=\frac{\chi_{\ell}}{F(\tilde{x})},$
(24)
where $F(\tilde{x})$ is given by
$\displaystyle F(\tilde{x})=\left(\frac{A}{B\tilde{b}_{1}b_{2}}\right)^{1/4}.$
(25)
Note that $\tilde{x}$ is a generalization of the tortoise coordinate.
Consequently, the equation of motion becomes
$\displaystyle\left[\frac{\partial^{2}}{\partial\tilde{x}^{2}}-\frac{\partial^{2}}{\partial\tilde{t}^{2}}-V_{\ell}(\tilde{x})\right]\Psi_{\ell}=0,$
(26)
where $V_{\ell}(\tilde{x})$ is the effective potential defined by
$\displaystyle V_{\ell}(\tilde{x})=\frac{\ell(\ell+1)b_{4}+V_{\rm
eff}}{\tilde{b}_{1}}+F\frac{{\rm d}^{2}}{{\rm
d}\tilde{x}^{2}}\left(\frac{1}{F}\right).$ (27)
When we fix the background solution, we can compute the effective potential
$V_{\ell}(\tilde{x})$ from the above formula, and hence we can investigate the
time evolution of the odd-parity perturbations based on the master equation
(26).
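To illustrate how this works in practice, a minimal Python sketch of the numerical evaluation of Eqs. (16)-(27) is given below. It is schematic only: the constant choices of $F_{2}$ and $A_{1}$, the stealth Schwarzschild background of Eqs. (7) and (10) with $\Lambda=0$, and the grid parameters are assumptions made purely for concreteness, and the derivatives are taken by simple finite differences.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Illustrative parameters (assumptions): constant F2 and A1, stealth Schwarzschild background
ell, q, F2, A1, rs = 2, 1.0, 0.5, 0.1, 1.0
rg = (1.0 + q**2 * A1 / F2) * rs                      # Killing horizon radius, Eq. (31)
r  = np.linspace(1.05 * rg, 60.0 * rs, 60000)         # radial grid outside the Killing horizon

A = 1.0 - rs / r                                      # Eq. (7) with Lambda = 0, and B = A
B = A.copy()
psi_p = q * np.sqrt(1.0 - A) / A                      # psi'(r) from Eq. (10), plus branch
X = -q**2                                             # on-shell value of X

# Eq. (18)
calF = 2.0 * (F2 + q**2 / A * A1)
calG = 2.0 * (F2 - (q**2 / A + X) * A1)
calH = 2.0 * (F2 - X * A1) * np.ones_like(r)
calJ = -2.0 * q * psi_p * A1

# Eq. (16)
den = A * calF * calG + B * calJ**2
b1 = r**2 * calF * calH**2 / den
b2 = r**2 * A * B * calG * calH**2 / den
b3 = 2.0 * r**2 * B * calH**2 * calJ / den
b4 = calH

d = lambda f: np.gradient(f, r)                       # radial derivative by finite differences

# Eq. (17)
Veff = r**2 * calH * d(b2 * np.sqrt(B / A) * d(np.sqrt(A / B) / (r**2 * calH))) - 2.0 * calH

# Eqs. (21), (23), (25), (27)
bt1 = b1 + b3**2 / (4.0 * b2)
x   = cumulative_trapezoid(np.sqrt(bt1 / b2), r, initial=0.0)   # generalized tortoise coordinate
F   = (A / (B * bt1 * b2))**0.25
V_ell = (ell * (ell + 1) * b4 + Veff) / bt1 + F * np.gradient(np.gradient(1.0 / F, x), x)

imax = np.argmax(V_ell[100:-100]) + 100               # avoid finite-difference edge artifacts
print("potential peak:", V_ell[imax], "at r =", r[imax])
```

Away from the grid edges, the resulting potential should agree, up to finite-difference error, with the closed-form expression obtained for this background in Sec. III.4.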
It should be noted that one can derive a master equation of the same form even
if we do not impose the degeneracy conditions (3), as clarified in
Tomikawa:2021pca . This is as expected because an extra scalar degree of
freedom belongs to the even-parity perturbations and hence does not affect the
odd-parity sector. As mentioned earlier in Sec. II.2, so long as
$A_{2}=-A_{1}$ is satisfied, the class of higher-order scalar-tensor theories
described by the action (4) allows for the stealth Schwarzschild-dS solution
under the condition (8). Moreover, even when $A_{2}\neq-A_{1}$ (which happens
if we take into account the scordatura term Motohashi:2019ymr ), the deviation
of the background solution from the stealth Schwarzschild-dS profile is
typically very small and negligible at astrophysical scales
Mukohyama:2005rw ; DeFelice:2022qaz . Therefore, it is not necessary to impose
the degeneracy conditions (3) for the study of perturbations about the stealth
Schwarzschild-dS profile. Having said that, for concreteness, we focus on the
stealth Schwarzschild(-dS) solution in the DHOST theories in the subsequent
analyses.
## III Stealth Schwarzschild solutions
### III.1 Effective metric
In this section, we consider the stealth Schwarzschild profile as the
background solution. From the diagonalized quadratic Lagrangian (20), we can
find the effective metric on which the odd-parity perturbations propagate. In
what follows, we are interested in the propagation of odd-parity perturbations
in the radial direction, and hence we focus on the first two terms in (20) and
define a two-dimensional effective metric $Z_{IJ}$ ($I,J=\\{\tilde{t},r\\}$)
as
$\displaystyle\tilde{{\cal L}}_{\rm
kin}=\sqrt{\frac{B}{A}}\left[\frac{\tilde{b}_{1}}{2}(\partial_{\tilde{t}}\chi_{\ell})^{2}-\frac{b_{2}}{2}\chi_{\ell}^{\prime
2}\right]\eqqcolon-\frac{1}{2}Z^{IJ}\partial_{I}\chi_{\ell}\partial_{J}\chi_{\ell},$
(28)
where $Z^{IJ}$ is the inverse of $Z_{IJ}$. The components of the effective
metric are given by
$\displaystyle Z_{IJ}{\rm d}x^{I}{\rm
d}x^{J}=\sqrt{\frac{A}{B}}\left[-\frac{1}{\tilde{b}_{1}}{\rm
d}\tilde{t}^{2}+\frac{1}{b_{2}}{\rm d}r^{2}\right].$ (29)
Note that the effective metric is in general different from the background
metric, i.e., $Z_{IJ}{\rm d}x^{I}{\rm d}x^{J}\neq\bar{g}_{IJ}{\rm d}x^{I}{\rm
d}x^{J}$. For the stealth Schwarzschild solutions, $Z_{\tilde{t}\tilde{t}}$
becomes
$\displaystyle Z_{\tilde{t}\tilde{t}}=-\frac{F_{2}(r-r_{\rm
s})-q^{2}A_{1}r_{\rm s}}{2r^{3}(F_{2}+q^{2}A_{1})^{2}}.$ (30)
For the spacetime described by the effective metric $Z_{IJ}$, the vector field
$\partial_{\tilde{t}}$ is a Killing vector field. The Killing horizon is
located at the radius where $Z_{\tilde{t}\tilde{t}}$ changes its sign. From
Eq. (30), the radius of the Killing horizon, denoted by $r_{\rm g}$, can be
read off as
$\displaystyle r_{\rm g}=\left(1+\frac{q^{2}A_{1}}{F_{2}}\right)r_{\rm
s}\eqqcolon(1+\zeta)r_{\rm s}.$ (31)
Since the conditions for no ghost/gradient instabilities are given by
Takahashi:2021bml
$\displaystyle F_{2}>0,\qquad F_{2}+q^{2}A_{1}>0,$ (32)
the Killing horizon radius $r_{\rm g}$ is positive. Note that these conditions imply
$\zeta>-1$. Note also that $r_{\rm g}>r_{\rm s}$ for $\zeta>0$, while $r_{\rm
g}<r_{\rm s}$ for $\zeta<0$. The two radii coincide with each other for
$q^{2}A_{1}=0$, or equivalently $\zeta=0$.
### III.2 Characters of a constant-$\tilde{t}$ surface
Next, we discuss characters of the new time coordinate $\tilde{t}$. For the
stealth Schwarzschild solutions, $\tilde{t}$ can be analytically obtained from
Eq. (19) as follows:
$\displaystyle\tilde{t}=t+2\sqrt{\frac{r}{r_{\rm s}}}(r_{\rm s}-r_{\rm
g})-\frac{1}{\sqrt{r_{\rm s}}}\left(r_{\rm
g}^{3/2}\log\left|\frac{\sqrt{r}-\sqrt{r_{\rm g}}}{\sqrt{r}+\sqrt{r_{\rm
g}}}\right|-r_{\rm s}^{3/2}\log\left|\frac{\sqrt{r}-\sqrt{r_{\rm
s}}}{\sqrt{r}+\sqrt{r_{\rm s}}}\right|\right)+\tilde{t}_{\rm c},$ (33)
where $\tilde{t}_{\rm c}$ is an integration constant. Let us investigate
whether a constant-$\tilde{t}$ surface is spacelike with respect to the
background metric or not. To this end, we consider a vector field
$\partial_{\mu}\tilde{t}$ which is normal to a constant-$\tilde{t}$ surface.
The norm of $\partial_{\mu}\tilde{t}$ associated with the background metric is
given by
$\displaystyle\bar{g}^{\mu\nu}\partial_{\mu}\tilde{t}\partial_{\nu}\tilde{t}=\frac{r(r_{\rm
g}^{2}-rr_{\rm s})}{r_{\rm s}(r-r_{\rm g})^{2}}.$ (34)
Therefore, the constant-$\tilde{t}$ surface is spacelike for $r>r_{\rm
g}^{2}/r_{\rm s}$, while it is timelike for $r<r_{\rm g}^{2}/r_{\rm s}$. Now,
we discuss the relation between the location of the Killing horizon for the
odd-parity perturbations $r_{\rm g}$ and the characteristic radius $r_{\rm
g}^{2}/r_{\rm s}$. The Killing horizon $r_{\rm g}$ is greater than the
characteristic radius $r_{\rm g}^{2}/r_{\rm s}$ if $r_{\rm s}>r_{\rm g}$, or
equivalently $\zeta<0$. Consequently, if we focus on the spacetime in the
range $r>r_{\rm g}$, the constant-$\tilde{t}$ surface is always spacelike. On
the other hand, the Killing horizon $r_{\rm g}$ is smaller than the
characteristic radius $r_{\rm g}^{2}/r_{\rm s}$ if $r_{\rm s}<r_{\rm g}$, or
equivalently $\zeta>0$. Therefore, the constant-$\tilde{t}$ surface becomes
spacelike in the range $r>r_{\rm g}^{2}/r_{\rm s}$, while it becomes timelike
in the range $r_{\rm g}<r<r_{\rm g}^{2}/r_{\rm s}$. Figure 1 shows the typical
behavior of the constant-$\tilde{t}$ surface embedded in the Penrose diagram
of the Schwarzschild spacetime. The black solid curves are the
constant-$\tilde{t}$ surfaces. In the yellow shaded region, the
constant-$\tilde{t}$ surfaces are spacelike.
Figure 1: Typical behavior of constant-$\tilde{t}$ surface for (A) $\zeta>0$
and (B) $\zeta<0$ embedded in the Penrose diagram of the Schwarzschild
spacetime. The black curves represent constant-$\tilde{t}$ surfaces. The
constant-$\tilde{t}$ surface is spacelike in the yellow shaded region.
### III.3 Characteristic curves
In the high-frequency regime, the odd-parity perturbations propagate along the
characteristic curves on which either $\tilde{v}=\tilde{t}+\tilde{x}={\rm
const}.$ or $\tilde{u}=\tilde{t}-\tilde{x}={\rm const}.$ is satisfied. To
understand properties of the characteristic curves, we perform an analysis similar to the one in the previous subsection. That is, we study the vector
fields $\partial_{\mu}\tilde{u}$ and $\partial_{\mu}\tilde{v}$ which are
normal to the characteristic curves. The norms of these vector fields with
respect to the background metric are given by
$\displaystyle\bar{g}^{\mu\nu}\partial_{\mu}\tilde{u}\partial_{\nu}\tilde{u}=\frac{r(r_{\rm
g}-r_{\rm s})}{r_{\rm s}(\sqrt{r}-\sqrt{r_{\rm
g}})^{2}},\qquad\bar{g}^{\mu\nu}\partial_{\mu}\tilde{v}\partial_{\nu}\tilde{v}=\frac{r(r_{\rm
g}-r_{\rm s})}{r_{\rm s}(\sqrt{r}+\sqrt{r_{\rm g}})^{2}},$ (35)
respectively. Therefore, for $r_{\rm g}>r_{\rm s}$ or equivalently $\zeta>0$,
the characteristic curves are timelike, while for $r_{\rm g}<r_{\rm s}$ or
equivalently $\zeta<0$, the characteristic curves are spacelike, i.e., the
odd-parity perturbations become superluminal. In the $\zeta<0$ case, due to the superluminal propagation, perturbations can propagate from the region $r_{\rm g}<r<r_{\rm s}$ to the region $r>r_{\rm s}$ (see Appendix A).
Figure 2: The characteristic curves for (A) $\zeta>0$ and (B) $\zeta<0$
embedded in the Penrose diagram of the Schwarzschild spacetime. The red curves
and the blue curves represent constant-$\tilde{v}$ curves and
constant-$\tilde{u}$ curves, respectively. For (A) $\zeta>0$, the
characteristic curves are always timelike, while for (B) $\zeta<0$, the
characteristic curves are spacelike, i.e., the odd-parity perturbations are
superluminal.
Figure 2 shows the characteristic curves embedded in the Penrose diagram of
the Schwarzschild spacetime.
### III.4 Equation of motion and QNM frequencies
Let us study the master equation (26) for the case of stealth Schwarzschild
solutions. The generalized tortoise coordinate $\tilde{x}$ and the new master
variable $\Psi_{\ell}$ defined in Eqs. (23) and (24) take the form of
$\displaystyle\tilde{x}$ $\displaystyle=\sqrt{1+\zeta}\left[r+r_{\rm
g}\log\left|\frac{r}{r_{\rm g}}-1\right|\right],$ (36)
$\displaystyle\Psi_{\ell}$ $\displaystyle=r\sqrt{2F_{2}}\left(\frac{r_{\rm
g}}{r_{\rm s}}\right)^{3/4}\chi_{\ell}.$ (37)
We note that $\tilde{x}\to-\infty$ as $r\to r_{\rm g}$ and
$\tilde{x}\to\infty$ as $r\to\infty$. Here, we have chosen the integration
constant for $\tilde{x}$ so that $\tilde{x}=0$ at $r=0$. The master equation
(26) is now written as
$\displaystyle\left[\frac{\partial^{2}}{\partial\tilde{x}^{2}}-\frac{\partial^{2}}{\partial\tilde{t}^{2}}-V_{\ell}(\tilde{x})\right]\Psi_{\ell}=0,$
(38)
where
$\displaystyle V_{\ell}(\tilde{x})=\frac{1}{1+\zeta}\left(1-\frac{r_{\rm
g}}{r}\right)\left[\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\rm
g}}{r^{3}}\right].$ (39)
Note that if $\zeta=0$, the above equation reduces to the standard Regge-
Wheeler equation in GR.
It should be noted that the master equation (38) for the odd-parity
perturbations about stealth solutions in the DHOST theory is the same as the
one in GR except that the effective potential is multiplied by the factor of
$(1+\zeta)^{-1}$ [see Eq. (39)]. Indeed, if we introduce rescaled coordinates
$\tilde{T}$ and $\tilde{X}$ as
$\displaystyle\tilde{T}$ $\displaystyle=\frac{\tilde{t}}{\sqrt{1+\zeta}},$
(40) $\displaystyle\tilde{X}$
$\displaystyle=\frac{\tilde{x}}{\sqrt{1+\zeta}}=r+r_{\rm
g}\log\left|\frac{r}{r_{\rm g}}-1\right|,$ (41)
then the master equation (38) can be rewritten as
$\displaystyle\left[\frac{\partial^{2}}{\partial\tilde{X}^{2}}-\frac{\partial^{2}}{\partial\tilde{T}^{2}}-\tilde{V}_{\ell}(\tilde{X})\right]\Psi_{\ell}=0,$
(42)
with
$\displaystyle\tilde{V}_{\ell}(\tilde{X})=\left(1-\frac{r_{\rm
g}}{r}\right)\left[\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\rm
g}}{r^{3}}\right].$ (43)
Equation (42) is nothing but the standard Regge-Wheeler equation in GR if we
identify $r_{\rm g}$ as the Schwarzschild radius. This implies that we can map
a solution for the wave equation in GR to a solution in the DHOST theory: The
latter is obtained by just rescaling the coordinates in the former. This fact
can be used to discuss the QNM frequencies and the power-law tail in the DHOST
theory.
Let us first discuss the QNM frequencies. Substituting the ansatz
$\Psi_{\ell}=\psi_{\ell}(\tilde{X})e^{-i\tilde{W}\tilde{T}}$ into Eq. (42), we
have
$\displaystyle\left[-\frac{{\rm d}^{2}}{{\rm
d}\tilde{X}^{2}}+\tilde{V}_{\ell}(\tilde{X})\right]\psi_{\ell}(\tilde{X})=\tilde{W}^{2}\psi_{\ell}(\tilde{X}).$
(44)
The QNMs are defined as the modes that are purely ingoing ($\psi_{\ell}\sim
e^{-i\tilde{W}\tilde{X}}$) as $r\to r_{\rm g}$, and purely outgoing
($\psi_{\ell}\sim e^{i\tilde{W}\tilde{X}}$) as $r\to\infty$. Let
$\omega^{\text{Sch}}_{\ell,n}(r_{\rm s})$ be the QNM frequencies for the
Schwarzschild spacetime in GR obtained by solving the standard Regge-Wheeler
equation, where $n$ is the overtone number. For instance,
$\omega_{2,0}^{\text{Sch}}=(0.74734-0.17792\,i)/r_{\rm s}$ for the $\ell=2$
fundamental mode. Also, let $\tilde{W}_{\ell,n}(r_{\rm g})$ be the QNM
frequencies obtained by solving Eq. (44). The relation between
$\omega^{\text{Sch}}_{\ell,n}$ and $\tilde{W}_{\ell,n}$ is given by $r_{\rm
g}\,\tilde{W}_{\ell,n}=r_{\rm s}\,\omega^{\text{Sch}}_{\ell,n}$. From Eq.
(40), we can rewrite the ansatz for $\Psi_{\ell}$ as
$\Psi_{\ell}=\psi_{\ell}(\tilde{x})e^{-i\tilde{W}\tilde{t}/\sqrt{1+\zeta}}\eqqcolon\psi_{\ell}(\tilde{x})e^{-i\omega^{\rm
DHOST}\tilde{t}}$. Then, the QNM frequencies $\omega^{\rm DHOST}_{\ell,n}$ can
be expressed in terms of $\omega^{\text{Sch}}_{\ell,n}$ as
$\displaystyle\omega^{\rm DHOST}_{\ell,n}=\frac{r_{\rm
s}\,\omega^{\text{Sch}}_{\ell,n}}{r_{\rm s}(1+\zeta)^{3/2}},$ (45)
where we have used $r_{\rm g}=(1+\zeta)r_{\rm s}$. Note that the numerator is
the QNM frequency of the Schwarzschild spacetime in GR in units of $r_{\rm
s}^{-1}$, which is already known. For instance, for the $\ell=2$ fundamental
mode, we have $r_{\rm s}\,\omega_{2,0}^{\text{Sch}}=0.74734-0.17792\,i$, and
hence
$\displaystyle\omega^{\rm DHOST}_{2,0}=\frac{0.74734-0.17792\,i}{r_{\rm
s}(1+\zeta)^{3/2}}.$ (46)
Equation (45) shows that, even if we know QNM frequencies for multiple pairs
of $(\ell,n)$ from observations, we can determine only the combination $r_{\rm
s}(1+\zeta)^{3/2}$. In this sense, we conclude that there is a degeneracy
between $r_{\rm s}$ and $\zeta$. Another important consequence is that we can
find the QNM frequencies of the stealth Schwarzschild solutions in the DHOST
theory from those in GR by applying the formula (45). This is consistent with
the result of Mukohyama:2023xyf where the QNM frequencies have been studied
based on the effective field theory with a timelike scalar profile
Mukohyama:2022enj ; Mukohyama:2022skk applied to a static and spherically
symmetric black hole background.
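As a quick numerical illustration of the scaling (45) and of this degeneracy, one may evaluate the formula directly; the short Python sketch below is illustrative only, and the values of $\zeta$ and the unit choice $r_{\rm s}=1$ are placeholder assumptions.

```python
# r_s * omega^Sch_{2,0} for the l = 2 fundamental mode in GR, as quoted above
omega_sch_20 = 0.74734 - 0.17792j
r_s = 1.0                                 # illustrative unit choice

# Eq. (45): omega^DHOST = (r_s * omega^Sch) / [r_s * (1 + zeta)^{3/2}]
for zeta in (-0.3, 0.0, 0.05, 0.6):
    w = omega_sch_20 / (r_s * (1.0 + zeta) ** 1.5)
    print(f"zeta = {zeta:+.2f}:  omega^DHOST_20 * r_s = {w.real:.5f} {w.imag:+.5f} i")

# Degeneracy: (r_s, zeta) = (1, 0.6) and (1.6^{3/2}, 0) give the same frequency
print(omega_sch_20 / (1.0 * (1.0 + 0.6) ** 1.5))
print(omega_sch_20 / ((1.0 + 0.6) ** 1.5 * (1.0 + 0.0) ** 1.5))
```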
Let us now briefly discuss the behavior of the power-law tail in the DHOST
theory, assuming that it exists. It is well known that the power-law tail
dominates the waveform of the black hole perturbations after the damped
oscillation phase in GR Price:1971fb . Now, suppose that the solution to the
wave equation (42) (i.e., the one rewritten in the form of the standard Regge-
Wheeler equation in GR) asymptotically behaves as
$\Psi_{\ell}\sim\tilde{T}^{\mathtt{k}}$ at late time, where $\mathtt{k}$ is a
negative constant. Then, by use of Eq. (40), we find that
$\Psi_{\ell}\sim(1+\zeta)^{-\mathtt{k}/2}\tilde{t}^{\mathtt{k}}\propto\tilde{t}^{\mathtt{k}}$.
Therefore, if the power-law tail exists in the DHOST theory, we expect that
its power would be the same as the one in GR.
Before concluding this subsection, we mention the need for a time-domain
analysis. In GR, when we obtain the time evolution of the black hole
perturbations as a solution of the Cauchy problem, the late-time behavior of the
black hole perturbations is dominated by a superposition of the QNMs (and the
power-law tail). On the other hand, in the DHOST (or any other modified
gravity) theories, it is nontrivial whether or not the same thing happens
because the effective metric for perturbations does not coincide with the
background metric in general. Although we neglect matter fields in the present
paper, they exist in reality and their dynamics is determined by the
background metric, provided that they are minimally coupled to gravity.
Therefore, in order to obtain the time evolution of the perturbations, we
should impose initial conditions on a hypersurface which is spacelike with
respect to both the background metric and the effective metric. In GR, for
example, the initial surface is often chosen to be a hypersurface with
constant Killing time. In the present case of DHOST theories, a portion of a
constant-$\tilde{t}$ hypersurface can be timelike with respect to the
background metric for $\zeta>0$. When we impose the initial conditions in the
region where the constant-$\tilde{t}$ hypersurface is spacelike, the late-time
behavior of the perturbations would be dominated by the QNMs with frequencies
$\omega^{\rm{DHOST}}_{\ell,n}$. However, when we impose the initial conditions
in the region where the constant-$\tilde{t}$ hypersurface is timelike, it is
not obvious whether the QNMs dominate the late-time behavior of the
perturbations because $\tilde{t}$ cannot be regarded as a physical time
coordinate in this case. In the next subsection, we show that we can prepare a
hypersurface which is spacelike with respect to both the background metric and
the effective metric in the region where the constant-$\tilde{t}$ hypersurface
is timelike by tilting the constant-$\tilde{t}$ hypersurface in an appropriate
manner.
### III.5 Initial value problem and excitations of QNMs
As we mentioned in Sec. III.2, for $\zeta>0$, a constant-$\tilde{t}$ surface
is timelike with respect to the background metric in the region $r_{\rm
g}<r<r_{\rm g}^{2}/r_{\rm s}$. Therefore, in order to discuss the time
evolution of the perturbations based on the master equation (38) in a
physically sensible manner, we need to choose another initial hypersurface
that is spacelike with respect to both the background metric and the effective
metric. Such a formulation of initial value problem has been proposed in
Nakashi:2022wdg , which we adopt in the following. In what follows, we focus
on the case with $\zeta>0$ and study the $\ell=2$ mode for concreteness (and
hence the subscript $\ell$ will be omitted). We analyze the initial value
problem for $\zeta<0$ in Appendix A.
Let us briefly review how we construct an initial surface in the physically
sensible formulation proposed in Nakashi:2022wdg . We introduce new
coordinates so that $\tilde{\mathcal{U}}=a\tilde{u}$ and
$\tilde{\mathcal{V}}=b\tilde{v}$, where $a$ and $b$ are positive constants.
Figure 3: Schematic picture of the initial surface $\Sigma$ (black dashed
curve) and the surface $\tilde{\Sigma}$ (orange dashed curve) which is
constructed by tilting the constant-$\tilde{t}$ surface. We put an initial
Gaussian wave packet (blue solid curve) on the initial surface $\Sigma$. We
require that (a) the initial surface $\Sigma$ coincides with the surface
$\tilde{\Sigma}$ in the region $S$ (red solid curve) within the numerical
domain $D$ (green shaded region). We also require that (b) the initial
conditions have a compact support in the region $S\cap D$, and hence the field
vanishes outside the numerical domain (gray shaded region). We further assume
that the derivative of the field in the direction perpendicular to $\Sigma$ is
zero on the initial surface.
Figure 3 shows a schematic picture of the initial surface and the numerical
domain. The left panel shows our numerical setup in the Penrose diagram of the
Schwarzschild spacetime, while the right panel shows it in a diagram in which
the characteristic curves of the odd-parity perturbations are depicted by 45-
and 135-degree straight lines. By adjusting the constants $a$ and $b$, we can
make a hypersurface of constant $\tilde{\mathcal{U}}+\tilde{\mathcal{V}}$ spacelike in the region $r>r_{\rm B}$ for some $r_{\rm B}<r_{\rm g}^{2}/r_{\rm
s}$. We call the constant-$(\mathcal{\tilde{U}}+\mathcal{\tilde{V}})$ surface
$\tilde{\Sigma}$. Let $S$ be the region where the hypersurface $\tilde{\Sigma}$ is spacelike, let $\Sigma$ denote a spacelike hypersurface on which we impose initial conditions, and let $D$ denote the numerical domain. We impose
the following requirements on the hypersurface $\Sigma$ and the initial
conditions:
1. (a)
The initial surface $\Sigma$ coincides with $\tilde{\Sigma}$ in the region
$S\cap D$.
2. (b)
The initial conditions have a compact support in the region $S\cap D$.
Since the numerical domain is a part of the causal future of the region $S$
determined by the characteristic curves of the odd-parity perturbations, in
the numerical domain, imposing initial conditions on $\Sigma$ corresponds to
imposing initial conditions on $\tilde{\Sigma}$ under the requirement (a).
Also, under the requirement (a), we can regard
$\tilde{\mathcal{U}}+\tilde{\mathcal{V}}$ as a physical time in the numerical
domain. The requirement (b) allows us to obtain the time evolution as follows.
First, we can obtain the time evolution in the region I in the right panel of
Fig. 3 from the initial data given in the region $S\cap D$. Then, when we
study the time evolution in the regions II and III, we can use the requirement
(b) to set $\Psi=0$ on both the right boundary of region II and the left
boundary of region III. Once we obtain the solution in the regions II and III,
it is straightforward to compute the time evolution in the region IV. Thus, we
can obtain the time evolution of the field in the whole numerical domain.
We consider a Gaussian wave packet as the initial field profile:
$\displaystyle\Psi|_{\Sigma}=\Psi(C_{0}-\tilde{\mathcal{V}},\tilde{\mathcal{V}})|_{\tilde{\Sigma}}=e^{-\frac{1}{2}\left(\frac{\tilde{\mathcal{V}}-\tilde{\mathcal{V}}_{0}}{\sigma}\right)^{2}},$
(47)
where $\sigma$ and $\tilde{\mathcal{V}}_{0}$ are the width of the Gaussian
wave packet and its peak position, respectively. It should be noted that we
truncate the Gaussian profile in a finite region in our actual computations so
that the initial data have a compact support within the region $S\cap D$. We
recall that $\tilde{\mathcal{U}}+\tilde{\mathcal{V}}$ takes a constant value
(which we denote by $C_{0}$) on the initial surface within the numerical
domain thanks to the requirement (a), and hence it makes sense to define the
initial data as in Eq. (47). We choose $\sigma$ and $\tilde{\mathcal{V}}_{0}$
so that the support of the initial field profile overlaps with the region
$r_{\rm B}<r<r_{\rm g}^{2}/r_{\rm s}$, where the surface of constant
$\tilde{\mathcal{U}}+\tilde{\mathcal{V}}$ is spacelike and the
constant-$\tilde{t}$ surface is timelike (see Fig. 4).
Figure 4: Schematic picture of the initial field profiles. The cyan and the
orange curves are the Gaussian wave packets with $\sigma=r_{\rm s}$ and
$\sigma=0.1r_{\rm s}$, respectively. The horizontal axis is the value of
$(\tilde{\mathcal{U}}-\tilde{\mathcal{V}})/r_{\rm s}$ on the initial surface
$\Sigma$. In practice, we truncate the Gaussian function to have a compact
support within the region $S\cap D$ and choose the center of the wave packet
so that its support overlaps with the region $r_{\rm B}<r<r_{\rm g}^{2}/r_{\rm
s}$. The black curve is the schematic plot of the effective potential
$V(\tilde{\mathcal{U}},\tilde{\mathcal{V}})$ on the initial surface, which is
meant to show the position of the potential peak (and hence its height does
not have any particular meaning).
Also, regarding the initial condition for the derivative, we impose
$(\partial_{\tilde{\mathcal{U}}}+\partial_{\tilde{\mathcal{V}}})\Psi|_{\Sigma}=0$.
Let us now explain how we solve the master equation (38) under the initial
conditions mentioned above. Expressing the master equation (38) in terms of
$\tilde{\mathcal{U}}$ and $\tilde{\mathcal{V}}$, we have
$\displaystyle-4\frac{\partial^{2}\Psi}{\partial\tilde{\mathcal{U}}\partial\tilde{\mathcal{V}}}=\frac{V(\tilde{\mathcal{U}},\tilde{\mathcal{V}})}{ab}\Psi.$
(48)
We discretize the coordinates $\tilde{\mathcal{U}}$ and $\tilde{\mathcal{V}}$
as $\\{\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j}\\}$ where
$i,j=0,1,2,\cdots$. Note that the grid width $h$ is assumed to be uniform:
$h=\tilde{\mathcal{U}}_{i+1}-\tilde{\mathcal{U}}_{i}=\tilde{\mathcal{V}}_{j+1}-\tilde{\mathcal{V}}_{j}$.
Also, we introduce shorthand notations
$\Psi_{i,j}=\Psi(\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j})$ and
$V_{i,j}=V(\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j})$. Then, we apply
the discretization scheme introduced by Gundlach:1993tp , and the master
equation (48) is simply discretized as
$\displaystyle\Psi_{i+1,j+1}=\Psi_{i+1,j}+\Psi_{i,j+1}-\Psi_{i,j}-\frac{h^{2}}{8}\frac{V_{i,j}}{ab}\left[\Psi_{i+1,j}+\Psi_{i,j+1}\right]+\mathcal{O}(h^{4}).$
(49)
The initial field profile (47) can be implemented as
$\displaystyle\Psi_{i,j}=e^{-\frac{1}{2}\left(\frac{\tilde{\mathcal{V}}_{j}-\tilde{\mathcal{V}}_{0}}{\sigma}\right)^{2}}\Theta(\tilde{\mathcal{V}}_{j}-\tilde{\mathcal{V}}_{j_{1}})\,\Theta(\tilde{\mathcal{V}}_{j_{2}}-\tilde{\mathcal{V}}_{j}),\qquad(\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j})\in
S\cap D.$ (50)
Here, to make the truncation explicit, we have inserted the step functions
(denoted by $\Theta$) so that $\Psi$ is nonvanishing only for
$\tilde{\mathcal{V}}_{j_{1}}\leq\tilde{\mathcal{V}}\leq\tilde{\mathcal{V}}_{j_{2}}$
on the initial surface. Also, the condition
$(\partial_{\tilde{\mathcal{U}}}+\partial_{\tilde{\mathcal{V}}})\Psi|_{\Sigma}=0$
yields
$\displaystyle\Psi_{i,j}=\Psi_{i+1,j+1}+\mathcal{O}(h^{4}),\qquad(\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j})\in
S\cap D,$ (51)
and hence we have
$\displaystyle\Psi_{i+1,j+1}=\frac{1}{2}\left[\Psi_{i+1,j}+\Psi_{i,j+1}\right]-\frac{h^{2}}{16}\frac{V_{i,j}}{ab}\left[\Psi_{i+1,j}+\Psi_{i,j+1}\right]+\mathcal{O}(h^{4}),\qquad(\tilde{\mathcal{U}}_{i},\tilde{\mathcal{V}}_{j})\in
S\cap D.$ (52)
Combining the discretized equation (49) as well as the initial conditions (50)
and (52), we can obtain the solution for $\Psi$ in the whole numerical domain.
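A schematic Python reimplementation of this procedure is shown below for the stealth Schwarzschild case. It is an illustrative sketch rather than the code used for our figures: it sets $a=b=1$, places the truncated Gaussian data on the two lowest grid diagonals as a crude stand-in for the initial surface and for the vanishing normal derivative, and simply imposes $\Psi=0$ at the outer ends of each diagonal; the potential is that of Eq. (39), with $r(\tilde{x})$ recovered from Eq. (36) by root finding, and all parameter values are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (assumptions, not the values used for the figures)
rs, zeta, ell = 1.0, 0.6, 2
rg = (1.0 + zeta) * rs
a = b = 1.0                  # no tilting of the slice in this sketch
h = 0.05                     # uniform grid width in tilde-U and tilde-V
nk, nt = 8001, 6000          # points per diagonal, number of marched diagonals

x = 0.5 * h * (np.arange(nk) - nk // 2)       # x_tilde = (V/b - U/a)/2 along a diagonal

def x_of_r(r):               # Eq. (36)
    return np.sqrt(1.0 + zeta) * (r + rg * np.log(abs(r / rg - 1.0)))

x_min = x_of_r(rg * (1.0 + 1e-12))

def V_of_x(xx):              # potential of Eq. (39) as a function of x_tilde
    if xx <= x_min:
        return 0.0           # r is exponentially close to r_g, where the potential vanishes
    r = brentq(lambda rr: x_of_r(rr) - xx, rg * (1.0 + 1e-12), rg + 1.0e4)
    return (1.0 - rg / r) * (ell * (ell + 1) / r**2 - 3.0 * rg / r**3) / (1.0 + zeta)

pot = np.array([V_of_x(xx) for xx in x])

# Truncated Gaussian data, cf. Eqs. (47) and (50), copied onto the two lowest diagonals
sigma, x0 = 0.1 * rs, 10.0 * rs
gauss = np.where(np.abs(x - x0) < 5.0 * sigma,
                 np.exp(-0.5 * ((x - x0) / sigma) ** 2), 0.0)
psi_old, psi_now = gauss.copy(), gauss.copy()

i_obs, waveform = int(np.argmin(np.abs(x - 40.0 * rs))), []

for n in range(nt):          # march diagonal by diagonal with the stencil of Eq. (49)
    psi_new = np.zeros_like(psi_now)
    s = psi_now[:-2] + psi_now[2:]
    psi_new[1:-1] = s - psi_old[1:-1] - (h**2 / 8.0) * pot[1:-1] / (a * b) * s
    psi_old, psi_now = psi_now, psi_new
    waveform.append(psi_now[i_obs])   # Psi at x_tilde = 40 r_s; tilde-t advances by h/2 per step

waveform = np.array(waveform)
print("late-time |Psi| at the observer:", np.abs(waveform[-200:]).max())
```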
A caveat should be added here. One may think that a constant-$\phi$ surface
would be a good candidate for the initial surface as it is spacelike with
respect to both the background metric and the effective metric. However, if we
choose a constant-$\phi$ surface as the initial surface, it is nontrivial how
to implement initial conditions in our numerical scheme which is based on a
double-null grid, since a constant-$\phi$ surface cannot be described by a
linear function of the null coordinates. This is the reason why we have chosen
the initial surface $\Sigma$ as above. (If one would like to choose a constant-$\phi$ surface as the initial surface, then one needs a numerical scheme that is more suitable for solving the differential equation based on the constant-$\phi$ foliation.)
Figure 5: The time evolution of the odd-parity perturbations for $\zeta=0.6$
(green curve, top), $\zeta=0.05$ (red curve, middle), and $\zeta=0$ (blue
curve, bottom). The width of the initial Gaussian wave packet is
$\sigma=0.1r_{\rm s}$.
Figure 5 shows the time evolution of the odd-parity perturbations for
$\zeta=0.6$ (green curve, top), $\zeta=0.05$ (red curve, middle), and
$\zeta=0$, i.e., GR (blue curve, bottom) for the initial Gaussian wave packet
with $\sigma=0.1r_{\rm s}$. The observer is located at $\tilde{x}=40\>r_{\rm
s}$. The initial Gaussian wave packet first reaches the observer almost
unscattered, and then the ringdown phase follows, as can be seen in Fig. 5.
In order to confirm that the frequencies in the ringdown phase are QNM
frequencies, we fit the numerical waveform with a superposition of the QNMs.
We introduce the following fitting model $\psi_{N}(\tilde{t})$:
$\displaystyle\psi_{N}(\tilde{t})=\sum_{n=0}^{N}\alpha_{n}e^{-i\left[\mu\,\omega^{\text{Sch}}_{n}(\tilde{t}-\tilde{t}_{\rm
peak})/r_{\rm
s}+\beta_{n}\right]}+c.c.,\qquad\tilde{t}\in[\tilde{t}_{0},\tilde{t}_{\rm
end}],$ (53)
where $n$ labels the overtones and $N$ is the maximum overtone number used in
the fitting. Here, $\alpha_{n}$ and $\beta_{n}$ are real parameters
corresponding to the amplitude and the phase, respectively, and $\mu$ is a
real parameter characterizing the deviation from the QNM frequencies in GR.
Also, $\tilde{t}_{\rm peak}$ denotes the time at which the numerical waveform
$\Psi(\tilde{t})$ takes the maximum value after the initial Gaussian wave
packet passes through the observer. For the fitting analysis, we use the
numerical waveform in the interval $[\tilde{t}_{0},\tilde{t}_{\rm end}]$ where
$\tilde{t}_{0}$ and $\tilde{t}_{\rm end}$ are free parameters satisfying
$\tilde{t}_{\rm peak}\leq\tilde{t}_{0}<\tilde{t}_{\rm end}$. In our fits, we
use the Mathematica function $\mathtt{NonlinearModelFit}$. The amplitude
$\alpha_{n}$ and the phase $\beta_{n}$ are fitting parameters, and we find
best-fit values of these parameters. Note that the parameter $\mu$ is fixed in
the fitting analysis for this section. Once we obtain a best-fit function
$\psi_{N}(\tilde{t})$, we evaluate the goodness of the fit by calculating the
mismatch $\mathcal{M}$ defined by
$\displaystyle\mathcal{M}=1-\frac{\langle\Psi|\psi_{N}\rangle}{\sqrt{\langle\Psi|\Psi\rangle\langle\psi_{N}|\psi_{N}\rangle}},$
(54)
where the scalar product is defined as
$\displaystyle\langle f|g\rangle=\int_{\tilde{t}_{0}}^{\tilde{t}_{\rm
end}}f(\tilde{t})g^{*}(\tilde{t})\;{\rm d}\tilde{t},$ (55)
for two arbitrary complex functions $f$ and $g$, with an asterisk denoting the
complex conjugation. Note that the fitting model $\psi_{N}(\tilde{t})$ and the
numerical waveform $\Psi(\tilde{t})$ are real because we impose real initial
conditions, and hence the complex conjugate in Eq. (55) is of no particular
significance. We have kept the complex conjugate just to follow the convention
in the literature. For the value of $\mu$ which is fixed, we consider the
following two cases:
1. (i)
$\mu=r_{\rm s}(1+\zeta)^{-3/2}$: We assume the value of $\mu$ as $\mu=r_{\rm
s}\,(1+\zeta)^{-3/2}$ with fixed $\zeta$, and find the best-fit parameters
$\alpha_{n}$ and $\beta_{n}$. This case corresponds to the situation where we
fit the numerical waveform with a superposition of QNMs with the frequencies
$\omega^{\rm DHOST}_{n}$ in the DHOST theory.
2. (ii)
$\mu=r_{\rm s}$: We assume the value of $\mu$ as $\mu=r_{\rm s}$, and find the
best-fit parameters $\alpha_{n}$ and $\beta_{n}$. This case corresponds to the
situation where we fit the numerical waveform with a superposition of QNMs
with frequencies $\omega^{\text{Sch}}_{n}$ in GR.
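For concreteness, the sketch below illustrates the $N=0$ version of the fit and the mismatch (54) in Python, with $\mathtt{scipy}$ used in place of $\mathtt{NonlinearModelFit}$; the synthetic waveform (a single damped DHOST sinusoid), the parameter values, and the restriction to the fundamental mode are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

omega_sch_20 = 0.74734 - 0.17792j        # r_s * omega^Sch_{2,0}, quoted above
r_s, zeta = 1.0, 0.6
mu = r_s * (1.0 + zeta) ** -1.5          # case (i); use mu = r_s for the GR fitting (ii)

def model(t, alpha0, beta0):
    """Fitting model (53) with N = 0: one damped sinusoid plus its complex conjugate."""
    phase = mu * omega_sch_20 * t / r_s + beta0
    return 2.0 * alpha0 * np.exp(-1j * phase).real

def mismatch(psi_num, psi_fit, t):
    """Mismatch of Eqs. (54)-(55) for real waveforms (simple quadrature)."""
    dt = t[1] - t[0]
    inner = lambda f, g: np.sum(f * g) * dt
    return 1.0 - inner(psi_num, psi_fit) / np.sqrt(inner(psi_num, psi_num) * inner(psi_fit, psi_fit))

# Synthetic "numerical waveform": a pure l = 2 fundamental DHOST ringdown, for illustration only
t = np.linspace(0.0, 60.0, 3000)         # tilde-t minus tilde-t_peak, in units of r_s
omega_dhost = omega_sch_20 / (r_s * (1.0 + zeta) ** 1.5)
psi_num = np.exp(-1j * omega_dhost * t).real

popt, _ = curve_fit(model, t, psi_num, p0=[1.0, 0.0])
print("best-fit (alpha_0, beta_0):", popt)
print("mismatch with mu = r_s (1+zeta)^{-3/2}:", mismatch(psi_num, model(t, *popt), t))
```

Repeating the fit with $\mu=r_{\rm s}$ should give a much larger mismatch, qualitatively mirroring the right panels of Fig. 6.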
Figure 6: The mismatch for $\zeta=0.6$ between the numerical waveform
$\Psi(\tilde{t})$ and the fitting model $\psi_{N}(\tilde{t})$ defined by Eq.
(53). The left panels are calculated with (i) $\mu=r_{\rm s}(1+\zeta)^{-3/2}$,
while the right panels are calculated with (ii) $\mu=r_{\rm s}$. In addition,
the upper panels are the mismatch for $\sigma=r_{\rm s}$, i.e., the wider
initial Gaussian wave packet, while the lower panels are the mismatch for
$\sigma=0.1r_{\rm s}$, i.e., the narrower initial Gaussian wave packet.
Figure 6 shows the mismatch for $\zeta=0.6$. The left panels are the mismatch
calculated with (i) $\mu=r_{\rm s}(1+\zeta)^{-3/2}$ for $\sigma=r_{\rm s}$
(upper panel) and $\sigma=0.1r_{\rm s}$ (lower panel), respectively. As the
number of $N$ increases, the minimum of mismatch decreases. When we fit the
numerical waveform with only the fundamental mode, i.e., $N=0$ (blue curve),
the mismatch gets smaller as $\tilde{t}_{0}$ increases. This implies that the
waveform near the peak time $\tilde{t}_{\rm peak}$ is dominated by the
overtones. Indeed, when we take into account the higher overtones, the value
of $\tilde{t}_{0}$ that minimizes the mismatch gets closer to $\tilde{t}_{\rm
peak}$. The right panels in Fig. 6 show the mismatch calculated with (ii)
$\mu=r_{\rm s}$. Unlike the DHOST fitting (i), for all $N$, the mismatch takes
almost constant values. In particular, we see that the mismatch for $N=0$ does
not decrease at late time in the GR fitting (ii). This reflects the
inconsistency between the numerical waveform and the fitting model: We are now
fitting the waveform for the DHOST theory by a superposition of the QNMs in
GR. Thus, the right panels in Fig. 6 explicitly show that the QNMs with the frequencies in GR do not describe the numerical waveform for the DHOST theory well.
Figure 7: The mismatch for $\zeta=0.05$ between the numerical waveform
$\Psi(\tilde{t})$ and the fitting model $\psi_{N}(\tilde{t})$ defined by Eq.
(53). The values of the parameters in each panel are the same as those in Fig.
6.
Figure 7 shows the mismatch for $\zeta=0.05$. As in the case of $\zeta=0.6$,
when we calculate the mismatch with (i) $\mu=r_{\rm s}(1+\zeta)^{-3/2}$, as
$N$ increases, the minimum of mismatch decreases and the value of
$\tilde{t}_{0}$ at the minimum gets closer to $\tilde{t}_{\rm peak}$. On the
other hand, when we calculate the mismatch with (ii) $\mu=r_{\rm s}$, it can be seen that the numerical waveform is not well described by the QNMs in GR.
From these results, we conclude that the superposition of the QNMs in the
DHOST theory (45) is consistent with the numerical waveform and the QNMs are
excited in the physically sensible initial value problem. We have also
performed the fitting analysis keeping $\mu$ unfixed and found that the best-
fit value of $\mu$ is consistent with that of the DHOST theory, i.e.,
$\mu=r_{\rm s}(1+\zeta)^{-3/2}$.
## IV Stealth Schwarzschild-de Sitter solutions
In this section, we consider the stealth Schwarzschild-dS profile as the
background solution. The background metric is given by the Schwarzschild-dS
metric:
$\displaystyle
A(r)=B(r)=-\frac{\Lambda}{3r}\left(r^{3}-\frac{3}{\Lambda}r+\frac{3r_{\rm
s}}{\Lambda}\right)\eqqcolon-\frac{\Lambda}{3r}\Delta(r),$ (56)
with $r_{\rm s}$ and $\Lambda$ being positive constants. Since $\Delta(r)$ is
a cubic polynomial in $r$, it can be factorized as
$\Delta(r)=(r-r_{-})(r-r_{\rm e})(r-r_{\rm c})$. Here, the three roots are
given by
$\displaystyle r_{-}$
$\displaystyle=\frac{2}{\sqrt{\Lambda}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm
s}\sqrt{\Lambda}}{2}\right)+\frac{2\pi}{3}\right],$ (57) $\displaystyle r_{\rm
e}$
$\displaystyle=\frac{2}{\sqrt{\Lambda}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm
s}\sqrt{\Lambda}}{2}\right)+\frac{4\pi}{3}\right],$ (58) $\displaystyle r_{\rm
c}$
$\displaystyle=\frac{2}{\sqrt{\Lambda}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm
s}\sqrt{\Lambda}}{2}\right)\right],$ (59)
respectively. We note that the three roots are real if
$\displaystyle r_{\rm s}\sqrt{\Lambda}<\frac{2}{3},$ (60)
is satisfied, and we have labeled these roots so that $r_{-}<0<r_{\rm
e}<r_{\rm c}$. Therefore, the event horizon is located at $r=r_{\rm e}$ and
the cosmological horizon at $r=r_{\rm c}$. For small $r_{\rm
s}\sqrt{\Lambda}$, the three roots above are expanded as
$\displaystyle r_{-}$
$\displaystyle=-\sqrt{\frac{3}{\Lambda}}\left[1+\frac{r_{\rm
s}}{2}\sqrt{\frac{\Lambda}{3}}-\frac{r_{\rm s}^{2}\Lambda}{8}+\frac{r_{\rm
s}^{3}}{6}\sqrt{\frac{\Lambda^{3}}{3}}+{\cal O}(r_{\rm
s}^{4}\Lambda^{2})\right],$ (61) $\displaystyle r_{\rm e}$
$\displaystyle=r_{\rm s}\left[1+\frac{1}{3}r_{\rm s}^{2}\Lambda+{\cal
O}(r_{\rm s}^{4}\Lambda^{2})\right],$ (62) $\displaystyle r_{\rm c}$
$\displaystyle=\sqrt{\frac{3}{\Lambda}}\left[1-\frac{r_{\rm
s}}{2}\sqrt{\frac{\Lambda}{3}}-\frac{r_{\rm s}^{2}\Lambda}{8}-\frac{r_{\rm
s}^{3}}{6}\sqrt{\frac{\Lambda^{3}}{3}}+{\cal O}(r_{\rm
s}^{4}\Lambda^{2})\right].$ (63)
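The trigonometric expressions above are convenient for numerical evaluation. The short Python sketch below, in which the values of $r_{\rm s}$ and $\Lambda$ are placeholders, evaluates Eqs. (57)-(59), cross-checks them against a direct polynomial root finder, and compares $r_{\rm e}$ with the leading terms of the expansion (62).

```python
import numpy as np

r_s, Lam = 1.0, 0.05                     # illustrative values with r_s * sqrt(Lambda) < 2/3
assert r_s * np.sqrt(Lam) < 2.0 / 3.0

theta = np.arccos(-1.5 * r_s * np.sqrt(Lam))
r_minus = 2.0 / np.sqrt(Lam) * np.cos(theta / 3.0 + 2.0 * np.pi / 3.0)   # Eq. (57)
r_e     = 2.0 / np.sqrt(Lam) * np.cos(theta / 3.0 + 4.0 * np.pi / 3.0)   # Eq. (58)
r_c     = 2.0 / np.sqrt(Lam) * np.cos(theta / 3.0)                        # Eq. (59)

# Cross-check against the roots of Delta(r) = r^3 - (3/Lambda) r + 3 r_s/Lambda
print(sorted([r_minus, r_e, r_c]))
print(sorted(np.roots([1.0, 0.0, -3.0 / Lam, 3.0 * r_s / Lam]).real))

# Event horizon versus the leading small-(r_s^2 Lambda) expansion, Eq. (62)
print(r_e, r_s * (1.0 + r_s**2 * Lam / 3.0))
```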
### IV.1 Effective metric
For the stealth Schwarzschild-dS solutions, we can define the effective metric
in the same manner as in the case of stealth Schwarzschild solutions. From the
general expression (29) for the effective metric $Z_{IJ}$, the
$\tilde{t}\tilde{t}$-component can be read off as
$\displaystyle
Z_{\tilde{t}\tilde{t}}=-\frac{1}{2F_{2}(1+\zeta)^{2}r^{2}}\left[1-\frac{(1+\zeta)r_{\rm
s}}{r}-\frac{(1+\zeta)\Lambda}{3}r^{2}\right].$ (64)
Introducing a parameter $\Lambda_{\rm g}=(1+\zeta)\Lambda$, we have
$\displaystyle Z_{\tilde{t}\tilde{t}}=\frac{\Lambda_{\rm
g}}{6F_{2}(1+\zeta)^{2}r^{3}}\left(r^{3}-\frac{3}{\Lambda_{\rm
g}}r+\frac{3r_{\rm g}}{\Lambda_{\rm g}}\right)\eqqcolon\frac{\Lambda_{\rm
g}}{6F_{2}(1+\zeta)^{2}r^{3}}\Delta_{\rm g}(r).$ (65)
Therefore, for the odd-parity perturbations, the locations of the Killing
horizons are determined by the roots for $\Delta_{\rm g}(r)=0$. The roots are
given by
$\displaystyle\tilde{r}_{-}$ $\displaystyle=\frac{2}{\sqrt{\Lambda_{\rm
g}}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm g}\sqrt{\Lambda_{\rm
g}}}{2}\right)+\frac{2\pi}{3}\right],$ (66) $\displaystyle\tilde{r}_{\rm e}$
$\displaystyle=\frac{2}{\sqrt{\Lambda_{\rm
g}}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm g}\sqrt{\Lambda_{\rm
g}}}{2}\right)+\frac{4\pi}{3}\right],$ (67) $\displaystyle\tilde{r}_{\rm c}$
$\displaystyle=\frac{2}{\sqrt{\Lambda_{\rm
g}}}\cos\left[\frac{1}{3}\cos^{-1}\left(-\frac{3r_{\rm g}\sqrt{\Lambda_{\rm
g}}}{2}\right)\right].$ (68)
We note that these three roots are real if $r_{\rm g}\sqrt{\Lambda_{\rm
g}}<2/3$, i.e.,
$\displaystyle r_{\rm s}\sqrt{\Lambda}<\frac{2}{3(1+\zeta)^{3/2}},$ (69)
is satisfied, and we have labeled these roots so that
$\tilde{r}_{-}<0<\tilde{r}_{\rm e}<\tilde{r}_{\rm c}$.
In the present paper, we consider the spacetime region satisfying $\Delta_{\rm g}(r)<0$, i.e., the static region for the odd-parity perturbations. The static region associated with the background metric is defined by $\Delta(r)<0$, and its relation to the static region for the effective metric depends on the values of $\zeta$ and $\Lambda$. For $\zeta>0$ and $r_{\rm s}\sqrt{\Lambda}<2(1+\zeta)^{-3/2}/3$, both the effective metric and the background metric have the static region. However, for $2(1+\zeta)^{-3/2}/3<r_{\rm s}\sqrt{\Lambda}<2/3$, only the background metric has the static region.
Figure 8: Relation between the static region associated with the effective
metric and the one associated with the background metric for $\zeta>0$. The
former (green shaded region) exists for $r_{\rm
s}\sqrt{\Lambda}<2(1+\zeta)^{-3/2}/3$, while the latter (yellow shaded region)
exists for $r_{\rm s}\sqrt{\Lambda}<2/3$.
Figure 8 shows the schematic picture of the relation between the static
regions for the effective metric and the background metric in the $\zeta>0$
case. The solid blue and orange curves respectively correspond to the event
horizon and the cosmological horizon of the background spacetime, while the
dashed blue and orange curves correspond to the Killing horizons for the odd-
parity perturbations. The green shaded region is the static region for both
the effective and background metrics, while the yellow shaded region is the
static region for the background metric only.
### IV.2 Characters of a constant-$\tilde{t}$ surface
We examine the characters of a constant-$\tilde{t}$ surface. For the stealth
Schwarzschild-dS solutions, the coordinate $\tilde{t}$ defined by (19) reads
$\displaystyle\tilde{t}=t-\int\frac{\zeta(3r)^{3/2}\sqrt{3r_{\rm s}+\Lambda
r^{3}}}{(1+\zeta)\Lambda^{2}\Delta(r)\Delta_{\rm g}(r)}\,{\rm d}r,$ (70)
though we cannot express it in a simple analytic form unlike the case of the
stealth Schwarzschild solutions. We investigate the global structure of a
constant-$\tilde{t}$ surface in the background spacetime. To this end, we
consider a vector field $\partial_{\mu}\tilde{t}$ which is normal to the
constant-$\tilde{t}$ surface. For the background Schwarzschild-dS metric, the
norm of $\partial_{\mu}\tilde{t}$ is given by
$\displaystyle\bar{g}^{\mu\nu}\partial_{\mu}\tilde{t}\,\partial_{\nu}\tilde{t}=\frac{3r}{\Lambda\Delta_{\rm
g}(r)^{2}}\left(r^{3}-\frac{3}{(1+\zeta)^{2}\Lambda}r+\frac{3r_{\rm
s}}{\Lambda}\right).$ (71)
Since $\Delta_{\rm g}(r)^{2}>0$ in the region of interest, the sign of
$\bar{g}^{\mu\nu}\partial_{\mu}\tilde{t}\,\partial_{\nu}\tilde{t}$ is
determined by the sign of the cubic function in the parentheses in Eq. (71).
Figure 9: Typical plots of constant-$\tilde{t}$ surfaces in the Penrose
diagram of the Schwarzschild-dS spacetime. The black curves represent the
constant-$\tilde{t}$ surfaces. The constant-$\tilde{t}$ surfaces are spacelike
in the yellow shaded region. For (A-1) $\zeta>0$, when the parameter $\zeta$
satisfies Eq. (72), the constant-$\tilde{t}$ surface can be spacelike in a
finite region. For (A-2) $\zeta>0$ when the parameter $\zeta$ violates Eq.
(72), the constant-$\tilde{t}$ surface is always timelike. For (B) $\zeta<0$,
the constant-$\tilde{t}$ surface is always spacelike.
In Fig. 9, we show typical plots of constant-$\tilde{t}$ surfaces in the
Penrose diagram of the Schwarzschild-dS spacetime. (In Fig. 9 and Fig. 10, in the $\zeta<0$ case, the constant-$\tilde{t}$ surfaces and the characteristic curves apparently become null near both the event horizon and the cosmological horizon, but in fact these are spacelike everywhere. This behavior is an artifact caused by the coordinate system we have used. For the stealth Schwarzschild-dS solutions, we have defined different double null coordinates in each block of the Penrose diagram and drawn the curves for each block. Then, we have glued the diagrams of each block using the method proposed in Walker1970 . On the other hand, for the stealth Schwarzschild solutions, we can define a single coordinate system which covers the whole spacetime in a simple analytic form, and hence the coordinate $\tilde{t}$ and the characteristic curves are written in an analytic form, which has led to the smooth curves in Fig. 1 and Fig. 2.) The constant-$\tilde{t}$ surfaces are
spacelike in the yellow shaded region. For $\zeta>0$, the constant-$\tilde{t}$
surface has a spacelike region within $\tilde{r}_{\rm e}<r<\tilde{r}_{\rm c}$
if
$\displaystyle r_{\rm s}\sqrt{\Lambda}<\frac{2}{3(1+\zeta)^{3}},$ (72)
is satisfied [see (A-1) in Fig. 9]. If the condition (72) is violated for
$\zeta>0$, the constant-$\tilde{t}$ surface is always timelike [see (A-2) in
Fig. 9]. On the other hand, for $\zeta<0$, the constant-$\tilde{t}$ surface is
always spacelike [see (B) in Fig. 9].
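The threshold (72) can also be checked numerically by scanning the sign of the cubic in the parentheses of Eq. (71); the minimal Python check below is illustrative, with placeholder values of $r_{\rm s}$ and $\Lambda$.

```python
import numpy as np

r_s, Lam = 1.0, 1.0e-3                   # illustrative values

def spacelike_region_exists(zeta):
    """Scan the cubic in the parentheses of Eq. (71) for a sign change and compare
    the outcome with the analytic threshold of Eq. (72)."""
    r = np.linspace(1.0e-3, 3.0 / np.sqrt(Lam), 200000)
    cubic = r**3 - 3.0 / ((1.0 + zeta)**2 * Lam) * r + 3.0 * r_s / Lam
    return bool((cubic < 0.0).any()), bool(r_s * np.sqrt(Lam) < 2.0 / (3.0 * (1.0 + zeta)**3))

for zeta in (0.5, 2.0, 5.0, 10.0):
    print(zeta, spacelike_region_exists(zeta))
```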
### IV.3 Characteristic curves
As in the case of the stealth Schwarzschild solutions, the odd-parity
perturbations propagate along a constant-$\tilde{u}$ curve or a
constant-$\tilde{v}$ curve. Here, we investigate the vector fields which are
normal to the constant-$\tilde{u}$ curve and the constant-$\tilde{v}$ curve:
$\partial_{\mu}\tilde{u}$ and $\partial_{\mu}\tilde{v}$. For the background
Schwarzschild-dS metric, the norms of these vector fields are given by
$\displaystyle\bar{g}^{\mu\nu}\partial_{\mu}\tilde{u}\,\partial_{\nu}\tilde{u}$
$\displaystyle=\frac{3\zeta r}{\Lambda_{\rm g}^{2}\Delta_{\rm
g}(r)^{2}}\left(\sqrt{3r_{\rm g}+\Lambda_{\rm g}r^{3}}+\sqrt{3r}\right)^{2},$
(73)
$\displaystyle\bar{g}^{\mu\nu}\partial_{\mu}\tilde{v}\,\partial_{\nu}\tilde{v}$
$\displaystyle=\frac{3\zeta r}{\Lambda_{\rm g}^{2}\Delta_{\rm
g}(r)^{2}}\left(\sqrt{3r_{\rm g}+\Lambda_{\rm g}r^{3}}-\sqrt{3r}\right)^{2}.$
(74)
The sign of each norm is determined by the sign of the parameter $\zeta$. The
characteristic curves are timelike for $\zeta>0$, while they are spacelike for
$\zeta<0$. That is, for $\zeta<0$, the odd-parity perturbations become
superluminal. Figure 10 shows the characteristic curves of the odd-parity
perturbations in the Penrose diagram of the Schwarzschild-dS spacetime.
Figure 10: The characteristic curves for (A) $\zeta>0$ and (B) $\zeta<0$ in
the Penrose diagram of the Schwarzschild-dS spacetime. The red curves and the
blue curves represent the constant-$\tilde{v}$ curves and the
constant-$\tilde{u}$ curves, respectively.
### IV.4 Equation of motion and QNM frequencies
In order to express the equation of motion in the form of a two-dimensional
wave equation, we introduce the generalized tortoise coordinate:
$\displaystyle\tilde{x}=\frac{-3}{\Lambda\sqrt{1+\zeta}}\left(\frac{\tilde{r}_{-}\ln\left|r-\tilde{r}_{-}\right|}{(\tilde{r}_{\rm
c}-\tilde{r}_{-})(\tilde{r}_{\rm e}-\tilde{r}_{-})}+\frac{\tilde{r}_{\rm
e}\ln\left|r-\tilde{r}_{\rm e}\right|}{(\tilde{r}_{\rm e}-\tilde{r}_{\rm
c})(\tilde{r}_{\rm e}-\tilde{r}_{-})}+\frac{\tilde{r}_{\rm
c}\ln\left|r-\tilde{r}_{\rm c}\right|}{(\tilde{r}_{\rm c}-\tilde{r}_{\rm
e})(\tilde{r}_{\rm c}-\tilde{r}_{-})}\right),$ (75)
up to an integration constant. We note that $\tilde{x}\to-\infty$ as
$r\to\tilde{r}_{\rm e}$ and $\tilde{x}\to\infty$ as $r\to\tilde{r}_{\rm c}$.
In terms of $\tilde{t}$ and $\tilde{x}$, the equation of motion is written as
follows:
$\displaystyle\left[\frac{\partial^{2}}{\partial\tilde{x}^{2}}-\frac{\partial^{2}}{\partial\tilde{t}^{2}}-V_{\ell}(\tilde{x})\right]\Psi_{\ell}=0,$
(76)
with
$\displaystyle V_{\ell}(\tilde{x})=\frac{1}{1+\zeta}\left(1-\frac{r_{\rm
g}}{r}-\frac{\Lambda_{\rm
g}}{3}r^{2}\right)\left[\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\rm
g}}{r^{3}}\right].$ (77)
As in the case of the stealth Schwarzschild solutions, we introduce the
rescaled coordinates as follows:
$\displaystyle\tilde{T}=\frac{\tilde{t}}{\sqrt{1+\zeta}},$ (78)
$\displaystyle\tilde{X}=\frac{\tilde{x}}{\sqrt{1+\zeta}}.$ (79)
With these coordinates, the equation of motion can be rewritten as
$\displaystyle\left[\frac{\partial^{2}}{\partial\tilde{X}^{2}}-\frac{\partial^{2}}{\partial\tilde{T}^{2}}-\tilde{V}_{\ell}(\tilde{X})\right]\Psi_{\ell}=0,$
(80)
with
$\displaystyle\tilde{V}_{\ell}(\tilde{X})=\left(1-\frac{r_{\rm
g}}{r}-\frac{\Lambda_{\rm
g}}{3}r^{2}\right)\left[\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\rm
g}}{r^{3}}\right].$ (81)
Therefore, even in the case of stealth Schwarzschild-dS solutions, we can
recast the equation of motion into the standard Regge-Wheeler equation
parametrized by $r_{\rm g}$ and $\Lambda_{\rm g}$. For the stealth
Schwarzschild-dS solutions, the QNMs correspond to the modes that are purely
ingoing at $r=\tilde{r}_{\rm e}$ and purely outgoing at $r=\tilde{r}_{\rm c}$.
To determine the QNM frequencies, we consider the ansatz
$\Psi_{\ell}=\psi_{\ell}(\tilde{X})e^{-i\tilde{W}\tilde{T}}$. Let
$\tilde{W}_{\ell,n}(r_{\rm g},\Lambda_{\rm g})$ be the QNM frequencies
obtained by solving Eq. (80) with the ansatz for $\Psi_{\ell}$ and let
$\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm s},\Lambda)$ be the QNM frequencies
calculated from the standard Regge-Wheeler equation parametrized by $r_{\rm
s}$ and $\Lambda$. Then, the QNM frequencies $\tilde{W}_{\ell,n}(r_{\rm
g},\Lambda_{\rm g})$ can be expressed as
$\displaystyle\tilde{W}_{\ell,n}(r_{\rm g},\Lambda_{\rm g})=\omega^{\text{Sch-
dS}}_{\ell,n}(r_{\rm g},\Lambda_{\rm g}),$ (82)
where $\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm g},\Lambda_{\rm
g})\coloneqq\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm s}\to r_{\rm
g},\Lambda\to\Lambda_{\rm g})$. Finally, from the relation between $\tilde{T}$
and $\tilde{t}$, the QNM frequencies of the odd-parity perturbations about the
stealth Schwarzschild-dS solutions in the DHOST theory are given by
$\displaystyle\omega^{\rm DHOST}_{\ell,n}=\frac{\omega^{\text{Sch-
dS}}_{\ell,n}(r_{\rm g},\Lambda_{\rm g})}{\sqrt{1+\zeta}}.$ (83)
That is, once we know the QNM frequencies in the Schwarzschild-dS spacetime
parametrized by $r_{\rm g}$ and $\Lambda_{\rm g}$ in GR, we can find the QNM
frequencies in the DHOST theory by applying the scaling law Eq. (83).
It is worth mentioning that there is a degeneracy between $r_{\rm s}$ and
$\zeta$ as in the case of the stealth Schwarzschild solutions. As we discuss
in Appendix B, the dimensionless QNM frequencies of the Schwarzschild-dS
spacetime in GR, $\Omega^{\text{Sch-dS}}_{\ell,n}\coloneqq r_{\rm
s}\,\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm s},\Lambda)$, depend only on the
combination $r_{\rm s}^{2}\Lambda$, and hence we can write $\Omega^{\text{Sch-
dS}}_{\ell,n}=\Omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm s}^{2}\Lambda)$. Thus,
the QNM frequencies of the Schwarzschild-dS spacetime can be written as
$\displaystyle\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm
s},\Lambda)=\frac{\Omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm
s}^{2}\Lambda)}{r_{\rm s}}.$ (84)
As a result, Eq. (83) can be rewritten in terms of $\Omega^{\text{Sch-
dS}}_{\ell,n}$ as follows:
$\displaystyle\omega^{\rm DHOST}_{\ell,n}=\frac{\Omega^{\text{Sch-
dS}}_{\ell,n}(r_{\rm g}^{2}\Lambda_{\rm g})}{r_{\rm
g}\sqrt{1+\zeta}}=\frac{\Omega^{\text{Sch-dS}}_{\ell,n}\left([r_{\rm
s}(1+\zeta)^{3/2}]^{2}\Lambda\right)}{r_{\rm s}(1+\zeta)^{3/2}},$ (85)
where we have used $r_{\rm g}=(1+\zeta)r_{\rm s}$ and $\Lambda_{\rm
g}=(1+\zeta)\Lambda$. This means that, if we fix the value of $\Lambda$,
$\omega^{\text{DHOST}}_{\ell,n}$ depends only on the combination $r_{\rm
s}(1+\zeta)^{3/2}$. This shows that there is a degeneracy between $r_{\rm s}$
and $\zeta$ in the QNM frequencies for the stealth Schwarzschild-dS solutions
as in the case of the stealth Schwarzschild solutions.
For $r_{\rm g}^{2}\Lambda_{\rm g}\ll 1$, there is a perturbative formula for
the QNM frequencies Hatsuda:2023geo . For the $\ell=2$ fundamental mode, the
formula is given by
$\displaystyle\Omega^{\text{Sch-dS}}_{2,0}(r_{\rm g}^{2}\Lambda_{\rm
g})=0.74734-0.17792\,i+\frac{9w_{1}}{2}r_{\rm g}^{2}\Lambda_{\rm
g}+\frac{81w_{2}}{8}r_{\rm g}^{4}\Lambda_{\rm g}^{2}+\mathcal{O}(r_{\rm
g}^{6}\Lambda_{\rm g}^{3}),$ (86)
where $w_{1}=-0.18649+0.03720\,i$, $w_{2}=-0.04819+0.01428\,i$. Note that this
formula implicitly assumes $r_{\rm s}\neq 0$ since otherwise the left-hand
side cannot be defined. Therefore, the QNM frequency $\omega^{\rm
DHOST}_{2,0}$ becomes
$\displaystyle\begin{split}\omega^{\rm DHOST}_{2,0}=\frac{1}{r_{\rm
s}(1+\zeta)^{3/2}}\bigg{(}0.74734&-0.17792\,i+\frac{9w_{1}}{2}\left[r_{\rm
s}(1+\zeta)^{3/2}\right]^{2}\Lambda\\\ &+\frac{81w_{2}}{8}\left[r_{\rm
s}(1+\zeta)^{3/2}\right]^{4}\Lambda^{2}+\mathcal{O}(r_{\rm
s}^{6}\Lambda^{3})\bigg{)}.\end{split}$ (87)
This explicitly shows that there is the degeneracy between $r_{\rm s}$ and
$\zeta$ when we fix the value of $\Lambda$.
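Combining the scaling law (85) with the perturbative expansion (86) gives a simple recipe for estimating the QNM frequencies, which the Python sketch below illustrates; the parameter values are placeholders, and the expansion is valid only for $r_{\rm g}^{2}\Lambda_{\rm g}\ll 1$.

```python
import numpy as np

# Coefficients quoted in Eq. (86)
w1, w2 = -0.18649 + 0.03720j, -0.04819 + 0.01428j
omega0 = 0.74734 - 0.17792j

def Omega_SdS_20(y):
    """Dimensionless l = 2 fundamental Schwarzschild-dS QNM frequency, Eq. (86), y = r_g^2 * Lambda_g."""
    return omega0 + 4.5 * w1 * y + (81.0 / 8.0) * w2 * y**2

def omega_dhost_20(r_s, zeta, Lam):
    """QNM frequency in the DHOST theory from the scaling law, Eqs. (83)/(85)."""
    r_g, Lam_g = (1.0 + zeta) * r_s, (1.0 + zeta) * Lam
    return Omega_SdS_20(r_g**2 * Lam_g) / (r_g * np.sqrt(1.0 + zeta))

# Degeneracy at fixed Lambda: (r_s, zeta) and (r_s (1+zeta)^{3/2}, 0) give the same frequency
Lam = 1.0e-3
print(omega_dhost_20(1.0, 0.6, Lam))
print(omega_dhost_20((1.0 + 0.6)**1.5, 0.0, Lam))
```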
## V Summary and discussions
We have investigated the odd-parity perturbations about stealth Schwarzschild
solutions and stealth Schwarzschild-de Sitter solutions with a linearly time-
dependent scalar field in a subclass of DHOST theories, for which the
deviation from general relativity is controlled by a single parameter $\zeta$.
We have derived the effective metric for the odd-parity perturbations and
analyzed the characteristic curves. We have also shown that the Killing
horizon(s) of the effective metric differ from those of the background metric.
In the $\zeta<0$ case, the odd-parity perturbations can be superluminal and hence
can escape from the region inside the Schwarzschild radius of the background
metric, as demonstrated in Appendix A. We have derived the master equation for
the odd-parity perturbations in the form of a two-dimensional wave equation,
which can be expressed in the form of the standard Regge-Wheeler equation in
GR with the rescaled black hole mass $r_{\rm g}$ (and the rescaled
cosmological constant $\Lambda_{\rm g}$ in the case of the stealth
Schwarzschild-de Sitter solutions). We have computed the QNM frequencies for
both the stealth Schwarzschild solutions and the stealth Schwarzschild-dS
solutions. In both cases, we have found that the QNM frequencies can be given
by a simple scaling of those in GR. In particular, we have shown that there is
a degeneracy between the black hole mass $r_{\rm s}$ and $\zeta$ in the QNM
frequencies.
We have also solved an initial value problem for the odd-parity perturbations
about the stealth Schwarzschild solutions employing the physically sensible
formulation of the initial value problem proposed in Nakashi:2022wdg . We have
defined a spacelike hypersurface $\Sigma$ in the following manner: We have
constructed a hypersurface $\tilde{\Sigma}$ by slightly tilting the
constant-$\tilde{t}$ surface. We have defined the region $S$ where the surface
$\tilde{\Sigma}$ is spacelike. Furthermore, we have required that (a) the
initial surface $\Sigma$ coincides with $\tilde{\Sigma}$ in the region $S$
within the numerical domain, and that (b) the initial conditions have a
compact support in the region $S$ within the numerical domain. We have
analyzed the time evolution of an initial Gaussian wave packet. We have
confirmed that the damped oscillation phase (ringdown phase) appears. We have
found that a superposition of the QNMs in the DHOST theory is consistent with
the numerical waveform through the fitting analysis. In particular, we have
calculated the mismatch between the numerical waveform and the superposition
of the QNMs in the DHOST theory and found that, when the overtones are taken into account, the minimum of the mismatch decreases and the fitting start time at the minimum gets closer to the waveform peak. On the other hand, we have also confirmed that a superposition of the QNMs in GR does not describe the numerical waveform well. From these
results, we conclude that the QNMs in the DHOST theory are excited in the
physically sensible initial value problem.
We note that the perturbations about the stealth solution in DHOST theories
would be strongly coupled Babichev:2018uiw ; deRham:2019gha ;
Motohashi:2019ymr ; Takahashi:2021bml . A possible way out of this problem is
to incorporate the scordatura term Motohashi:2019ymr . However, as discussed
in Sec. II.3, we expect that the scordatura term would not lead to a
qualitative change in our results on the odd-parity perturbations. This is
essentially because the strong coupling problem comes from the vanishing sound
speed of the mode corresponding to the scalar degree of freedom, which belongs
to the even-parity sector. Along this line of thought, it would be intriguing
to study the even-parity sector to see how the effect of the scordatura term
shows up. It should also be noted that the effect of modified gravity
completely disappears in the odd-parity sector when $\zeta=0$, and hence the
study of odd-parity perturbations alone cannot tell the difference from
general relativity. This is another motivation to study the even-parity
sector. We hope to come back to this issue in a future publication.
## Acknowledgments
This work was supported by JSPS KAKENHI Grant Nos. JP22K03626 (M.K.),
JP22K03639 (H.M.), JP22KJ1646 (K.T.), and JP23K13101 (K.T.) from the Japan
Society for the Promotion of Science.
## Appendix A Initial value problem for negative $\zeta$
In Sec. III.5, we have analyzed the initial value problem for the odd-parity
perturbations about the stealth Schwarzschild solutions in the $\zeta>0$ case and
shown that the waveform of the odd-parity perturbations is well described by a
superposition of the QNMs in the DHOST theory. In this appendix, we analyze
the case of $\zeta<0$. A remarkable property of the odd-parity perturbation
for $\zeta<0$ is that the perturbation is superluminal in the whole spacetime.
This implies that the odd-parity perturbations can escape from inside the
Schwarzschild radius $r_{\rm s}$. As we mentioned in Sec. III.2, a
constant-$\tilde{t}$ surface is always spacelike, and hence we choose it as
the initial surface, i.e., we set $a=b=1$. Furthermore, we choose $\sigma$ and
$\tilde{\mathcal{V}}_{0}$ so that the initial field profile has its support
inside $r_{\rm s}$.
Figure 11: The evolution of the odd-parity perturbation in the
$(\tilde{\mathcal{U}},\tilde{\mathcal{V}})$-space (left panel) and the
waveform of the perturbation observed by an observer at $\tilde{x}=40r_{\rm
s}$ (right panel) for $\zeta<0$. In the left panel, the cyan solid curve is
the initial field profile and the black solid line is the location of the
Schwarzschild radius $r_{\rm s}$. The left panel shows that the odd-parity
perturbations can escape from inside the Schwarzschild radius. The right panel
shows that the damped oscillation phase also shows up for $\zeta<0$.
Figure 11 shows the evolution of the odd-parity perturbation for $\zeta=-0.5$
in the $(\tilde{\mathcal{U}},\tilde{\mathcal{V}})$-space (left panel) and the
waveform observed by an observer at $\tilde{x}=40r_{\rm s}$ (right panel). In
the left panel, the cyan solid curve is the initial field profile, which has
the compact support inside $r_{\rm s}$ depicted by the black solid line. The
left panel explicitly shows that the odd-parity perturbation escapes from
inside the Schwarzschild radius $r_{\rm s}$. The right panel shows that the
damped oscillation phase also shows up in the $\zeta<0$ case. We note that the
damped oscillation phase can be well fitted by Eq. (53) with $\mu=r_{\rm
s}(1+\zeta)^{-3/2}$.
## Appendix B QNM frequencies of the Schwarzschild-dS spacetime in GR
Here, we briefly mention a property of the QNMs of the Schwarzschild-dS
spacetime in GR. The Regge-Wheeler equation for the Schwarzschild-dS solution
in GR can be written as
$\displaystyle\left[\frac{\partial^{2}}{\partial
x^{2}}-\frac{\partial^{2}}{\partial t^{2}}-V_{\ell}(x)\right]\Psi_{\ell}=0,$
(88)
with the effective potential given by
$\displaystyle V_{\ell}(x)=\left(1-\frac{r_{\rm
s}}{r}-\frac{\Lambda}{3}r^{2}\right)\left[\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\rm
s}}{r^{3}}\right].$ (89)
Substituting the ansatz $\Psi_{\ell}=\psi_{\ell}(x)e^{-i\omega t}$, we have
$\displaystyle\left[\frac{\partial^{2}}{\partial
x^{2}}+\omega^{2}-V_{\ell}(x)\right]\psi_{\ell}(x)=0.$ (90)
In terms of the dimensionless coordinates $\hat{r}\coloneqq r/r_{\rm s}$ and
$\hat{x}\coloneqq x/r_{\rm s}$, the above equation takes the form
$\displaystyle\left[\frac{\partial^{2}}{\partial\hat{x}^{2}}+(r_{\rm
s}\omega)^{2}-r_{\rm s}^{2}V_{\ell}(x)\right]\psi_{\ell}(x)=0.$ (91)
Note that the effective potential written in terms of the dimensionless
coordinates reads
$\displaystyle r_{\rm s}^{2}V_{\ell}(x)=\left(1-\frac{1}{\hat{r}}-\frac{r_{\rm
s}^{2}\Lambda}{3}\hat{r}^{2}\right)\left[\frac{\ell(\ell+1)}{\hat{r}^{2}}-\frac{3}{\hat{r}^{3}}\right],$
(92)
where $r_{\rm s}$ and $\Lambda$ show up only in the combination $r_{\rm
s}^{2}\Lambda$. As a result, the QNM frequencies normalized by $r_{\rm s}$
should depend only on $r_{\rm s}^{2}\Lambda$, which we write
$\Omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm s}^{2}\Lambda)$. Therefore, the
(dimensionful) QNM frequencies $\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm
s},\Lambda)$ can be expressed as
$\displaystyle\omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm
s},\Lambda)=\frac{\Omega^{\text{Sch-dS}}_{\ell,n}(r_{\rm
s}^{2}\Lambda)}{r_{\rm s}}.$ (93)
For $r_{\rm s}^{2}\Lambda\ll 1$, there is a perturbative formula for the QNM
frequencies Hatsuda:2023geo . According to the formula, for the $\ell=2$
fundamental mode, the dimensionless QNM frequency $\Omega^{\text{Sch-
dS}}_{2,0}(r_{\rm s}^{2}\Lambda)$ is given by
$\displaystyle\Omega^{\text{Sch-dS}}_{2,0}(r_{\rm
s}^{2}\Lambda)=0.74734-0.17792\,i+\frac{9w_{1}}{2}r_{\rm
s}^{2}\Lambda+\frac{81w_{2}}{8}r_{\rm s}^{4}\Lambda^{2}+\mathcal{O}(r_{\rm
s}^{6}\Lambda^{3}),$ (94)
where $w_{1}=-0.18649+0.03720\,i$, $w_{2}=-0.04819+0.01428\,i$. This is
consistent with the fact that the dimensionless QNM frequencies
$\Omega^{\text{Sch-dS}}_{\ell,n}$ depend only on $r_{\rm s}^{2}\Lambda$.
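The scaling in Eqs. (93) and (94) can be checked with a minimal sketch (not taken from the paper; geometric units are assumed and the expansion is truncated at $\mathcal{O}(\Lambda^{2})$): the product $r_{\rm s}\,\omega^{\text{Sch-dS}}_{2,0}$ is unchanged under $(r_{\rm s},\Lambda)\to(2r_{\rm s},\Lambda/4)$.

```python
# Sketch: r_s * omega^{Sch-dS}_{2,0} depends on (r_s, Lambda) only through
# the combination r_s^2 * Lambda, cf. Eqs. (93)-(94).
w1 = -0.18649 + 0.03720j
w2 = -0.04819 + 0.01428j

def Omega(x):
    """Dimensionless l = 2 fundamental-mode frequency, Eq. (94); x = r_s^2 Lambda."""
    return 0.74734 - 0.17792j + 4.5 * w1 * x + (81.0 / 8.0) * w2 * x**2

def omega(r_s, Lam):
    return Omega(r_s**2 * Lam) / r_s        # Eq. (93)

print(1.0 * omega(1.0, 4.0e-4))             # r_s^2 Lambda = 4e-4
print(2.0 * omega(2.0, 1.0e-4))             # same r_s^2 Lambda, same r_s * omega
```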
## References
* (1) K. Koyama, _Cosmological Tests of Modified Gravity_ , _Rept. Prog. Phys._ 79 (2016) 046902 [1504.04623].
* (2) P. G. Ferreira, _Cosmological Tests of Gravity_ , _Ann. Rev. Astron. Astrophys._ 57 (2019) 335 [1902.10503].
* (3) S. Arai et al., _Cosmological gravity probes: Connecting recent theoretical developments to forthcoming observations_ , _PTEP_ 2023 (2023) 072E01 [2212.09094].
* (4) A. Buonanno, G. B. Cook and F. Pretorius, _Inspiral, merger and ring-down of equal-mass black-hole binaries_ , _Phys. Rev. D_ 75 (2007) 124018 [gr-qc/0610122].
* (5) C. Brans and R. H. Dicke, _Mach’s principle and a relativistic theory of gravitation_ , _Phys. Rev._ 124 (1961) 925.
* (6) G. W. Horndeski, _Second-order scalar-tensor field equations in a four-dimensional space_ , _Int. J. Theor. Phys._ 10 (1974) 363.
* (7) C. Deffayet, X. Gao, D. A. Steer and G. Zahariade, _From k-essence to generalised Galileons_ , _Phys. Rev. D_ 84 (2011) 064039 [1103.3260].
* (8) T. Kobayashi, M. Yamaguchi and J. Yokoyama, _Generalized G-inflation: Inflation with the most general second-order field equations_ , _Prog. Theor. Phys._ 126 (2011) 511 [1105.5723].
* (9) R. P. Woodard, _Ostrogradsky’s theorem on Hamiltonian instability_ , _Scholarpedia_ 10 (2015) 32243 [1506.02210].
* (10) H. Motohashi and T. Suyama, _Third order equations of motion and the Ostrogradsky instability_ , _Phys. Rev. D_ 91 (2015) 085009 [1411.3721].
* (11) H. Motohashi and T. Suyama, _Quantum Ostrogradsky theorem_ , _JHEP_ 09 (2020) 032 [2001.02483].
* (12) K. Aoki and H. Motohashi, _Ghost from constraints: a generalization of Ostrogradsky theorem_ , _JCAP_ 08 (2020) 026 [2001.06756].
* (13) D. Langlois and K. Noui, _Degenerate higher derivative theories beyond Horndeski: evading the Ostrogradski instability_ , _JCAP_ 02 (2016) 034 [1510.06930].
* (14) H. Motohashi, K. Noui, T. Suyama, M. Yamaguchi and D. Langlois, _Healthy degenerate theories with higher derivatives_ , _JCAP_ 07 (2016) 033 [1603.09355].
* (15) R. Klein and D. Roest, _Exorcising the Ostrogradsky ghost in coupled systems_ , _JHEP_ 07 (2016) 130 [1604.01719].
* (16) H. Motohashi, T. Suyama and M. Yamaguchi, _Ghost-free theory with third-order time derivatives_ , _J. Phys. Soc. Jap._ 87 (2018) 063401 [1711.08125].
* (17) H. Motohashi, T. Suyama and M. Yamaguchi, _Ghost-free theories with arbitrary higher-order time derivatives_ , _JHEP_ 06 (2018) 133 [1804.07990].
* (18) M. Crisostomi, K. Koyama and G. Tasinato, _Extended Scalar-Tensor Theories of Gravity_ , _JCAP_ 04 (2016) 044 [1602.03119].
* (19) J. Ben Achour, M. Crisostomi, K. Koyama, D. Langlois, K. Noui and G. Tasinato, _Degenerate higher order scalar-tensor theories beyond Horndeski up to cubic order_ , _JHEP_ 12 (2016) 100 [1608.08135].
* (20) J. D. Bekenstein, _The Relation between physical and gravitational geometry_ , _Phys. Rev. D_ 48 (1993) 3641 [gr-qc/9211017].
* (21) J.-P. Bruneton and G. Esposito-Farèse, _Field-theoretical formulations of MOND-like gravity_ , _Phys. Rev. D_ 76 (2007) 124012 [0705.4043].
* (22) D. Bettoni and S. Liberati, _Disformal invariance of second order scalar-tensor theories: Framing the Horndeski action_ , _Phys. Rev. D_ 88 (2013) 084020 [1306.6724].
* (23) K. Takahashi, H. Motohashi and M. Minamitsuji, _Invertible disformal transformations with higher derivatives_ , _Phys. Rev. D_ 105 (2022) 024015 [2111.11634].
* (24) K. Takahashi, _Invertible disformal transformations with arbitrary higher-order derivatives_ , 2307.08814.
* (25) K. Takahashi, M. Minamitsuji and H. Motohashi, _Generalized disformal Horndeski theories: cosmological perturbations and consistent matter coupling_ , _PTEP_ 2023 (2023) 013E01 [2209.02176].
* (26) A. Naruko, R. Saito, N. Tanahashi and D. Yamauchi, _Ostrogradsky mode in scalar-tensor theories with higher-order derivative couplings to matter_ , _PTEP_ 2023 (2023) 053E02 [2209.02252].
* (27) K. Takahashi, R. Kimura and H. Motohashi, _Consistency of matter coupling in modified gravity_ , _Phys. Rev. D_ 107 (2023) 044018 [2212.13391].
* (28) T. Ikeda, K. Takahashi and T. Kobayashi, _Consistency of higher-derivative couplings to matter fields in scalar-tensor gravity_ , _Phys. Rev. D_ 108 (2023) 044006 [2302.03418].
* (29) A. De Felice, D. Langlois, S. Mukohyama, K. Noui and A. Wang, _Generalized instantaneous modes in higher-order scalar-tensor theories_ , _Phys. Rev. D_ 98 (2018) 084024 [1803.06241].
* (30) A. De Felice, S. Mukohyama and K. Takahashi, _Nonlinear definition of the shadowy mode in higher-order scalar-tensor theories_ , _JCAP_ 12 (2021) 020 [2110.03194].
* (31) A. De Felice, S. Mukohyama and K. Takahashi, _Built-in scordatura in U-DHOST_ , _Phys. Rev. Lett._ 129 (2022) 031103 [2204.02032].
* (32) K. Takahashi, M. Minamitsuji and H. Motohashi, _Effective description of generalized disformal theories_ , _JCAP_ 07 (2023) 009 [2304.08624].
* (33) H. Nariai, _On the Green’s Function in an Expanding Universe and Its Role in the Problem of Mach’s Principle_ , _Progress of Theoretical Physics_ 40 (1968) 49.
* (34) J. O’ Hanlon and B. O. J. Tupper, _Vacuum-field solutions in the Brans-Dicke theory_ , _Nuovo Cim. B_ 7 (1972) 305.
* (35) J. D. Barrow and K. Maeda, _Extended inflationary universes_ , _Nuclear Physics B_ 341 (1990) 294.
* (36) C. Romero and A. Barros, _Brans-dicke vacuum solutions and the cosmological constant: A qualitative analysis_ , _General Relativity and Gravitation_ 25 (1993) 491.
* (37) S. J. Kolitch, _Qualitative analysis of Brans-Dicke universes with a cosmological constant_ , _Annals Phys._ 246 (1996) 121 [gr-qc/9409002].
* (38) V. B. Johri and K. Desikan, _Cosmological models with constant deceleration parameter in Brans-Dicke theory_ , _Gen. Rel. Grav._ 26 (1994) 1217.
* (39) S. Giardino, V. Faraoni and A. Giusti, _First-order thermodynamics of scalar-tensor cosmology_ , _JCAP_ 04 (2022) 053 [2202.07393].
* (40) S. Giardino, A. Giusti and V. Faraoni, _Thermal stability of stealth and de Sitter spacetimes in scalar-tensor gravity_ , _Eur. Phys. J. C_ 83 (2023) 621 [2302.08550].
* (41) E. Ayón-Beato, C. Martínez and J. Zanelli, _Stealth scalar field overflying a (2+1) black hole_ , _Gen. Rel. Grav._ 38 (2006) 145 [hep-th/0403228].
* (42) E. Ayón-Beato, C. Martínez, R. Troncoso and J. Zanelli, _Gravitational Cheshire effect: Nonminimally coupled scalar fields may not curve spacetime_ , _Phys. Rev. D_ 71 (2005) 104037 [hep-th/0505086].
* (43) S. Mukohyama, _Black holes in the ghost condensate_ , _Phys. Rev. D_ 71 (2005) 104019 [hep-th/0502189].
* (44) D. C. Robinson, _Non-gravitating waves_ , _Gen. Rel. Grav._ 38 (2006) 153.
* (45) E. Ayón-Beato, M. Hassaïne and M. M. Juárez-Aubry, _Stealths on Anisotropic Holographic Backgrounds_ , 1506.03545.
* (46) A. Alvarez, C. Campuzano, M. Cruz, E. Rojas and J. Saavedra, _Stealths on $(1+1)$-dimensional dilatonic gravity_, _Gen. Rel. Grav._ 48 (2016) 165 [1611.03022].
* (47) I. Smolić, _Spacetimes dressed with stealth electromagnetic fields_ , _Phys. Rev. D_ 97 (2018) 084041 [1711.07490].
* (48) E. Franzin and I. Smolić, _Stationary spacetimes with time-dependent real scalar fields_ , _Class. Quant. Grav._ 38 (2021) 115004 [2101.05816].
* (49) E. Babichev and C. Charmousis, _Dressing a black hole with a time-dependent Galileon_ , _JHEP_ 08 (2014) 106 [1312.3204].
* (50) T. Kobayashi and N. Tanahashi, _Exact black hole solutions in shift symmetric scalar–tensor theories_ , _PTEP_ 2014 (2014) 073E02 [1403.4364].
* (51) E. Babichev and G. Esposito-Farèse, _Cosmological self-tuning and local solutions in generalized Horndeski theories_ , _Phys. Rev. D_ 95 (2017) 024020 [1609.09798].
* (52) E. Babichev, C. Charmousis, G. Esposito-Farèse and A. Lehébel, _Stability of Black Holes and the Speed of Gravitational Waves within Self-Tuning Cosmological Models_ , _Phys. Rev. Lett._ 120 (2018) 241101 [1712.04398].
* (53) M. Minamitsuji and H. Motohashi, _Stealth Schwarzschild solution in shift symmetry breaking theories_ , _Phys. Rev. D_ 98 (2018) 084027 [1809.06611].
* (54) J. Ben Achour and H. Liu, _Hairy Schwarzschild-(A)dS black hole solutions in degenerate higher order scalar-tensor theories beyond shift symmetry_ , _Phys. Rev. D_ 99 (2019) 064042 [1811.05369].
* (55) H. Motohashi and M. Minamitsuji, _General Relativity solutions in modified gravity_ , _Phys. Lett. B_ 781 (2018) 728 [1804.01731].
* (56) H. Motohashi and M. Minamitsuji, _Exact black hole solutions in shift-symmetric quadratic degenerate higher-order scalar-tensor theories_ , _Phys. Rev. D_ 99 (2019) 064040 [1901.04658].
* (57) M. Minamitsuji and J. Edholm, _Black hole solutions in shift-symmetric degenerate higher-order scalar-tensor theories_ , _Phys. Rev. D_ 100 (2019) 044053 [1907.02072].
* (58) R. C. Bernardo, J. Celestial and I. Vega, _Stealth black holes in shift symmetric kinetic gravity braiding_ , _Phys. Rev. D_ 101 (2020) 024036 [1911.01847].
* (59) C. Charmousis, M. Crisostomi, R. Gregory and N. Stergioulas, _Rotating Black Holes in Higher Order Gravity_ , _Phys. Rev. D_ 100 (2019) 084020 [1903.05519].
* (60) K. Takahashi and H. Motohashi, _General Relativity solutions with stealth scalar hair in quadratic higher-order scalar-tensor theories_ , _JCAP_ 06 (2020) 034 [2004.03883].
* (61) R. C. Bernardo and I. Vega, _Stealth black hole perturbations in kinetic gravity braiding_ , _J. Math. Phys._ 62 (2021) 072501 [2007.06006].
* (62) M. A. Gorji, H. Motohashi and S. Mukohyama, _Stealth dark energy in scordatura DHOST theory_ , _JCAP_ 03 (2021) 081 [2009.11606].
* (63) E. Babichev, C. Charmousis, G. Esposito-Farèse and A. Lehébel, _Hamiltonian unboundedness vs stability with an application to Horndeski theory_ , _Phys. Rev. D_ 98 (2018) 104050 [1803.11444].
* (64) K. Takahashi, H. Motohashi and M. Minamitsuji, _Linear stability analysis of hairy black holes in quadratic degenerate higher-order scalar-tensor theories: Odd-parity perturbations_ , _Phys. Rev. D_ 100 (2019) 024041 [1904.03554].
* (65) C. de Rham and J. Zhang, _Perturbations of stealth black holes in degenerate higher-order scalar-tensor theories_ , _Phys. Rev. D_ 100 (2019) 124023 [1907.00699].
* (66) H. Motohashi and S. Mukohyama, _Weakly-coupled stealth solution in scordatura degenerate theory_ , _JCAP_ 01 (2020) 030 [1912.00378].
* (67) J. Khoury, M. Trodden and S. S. C. Wong, _Existence and instability of hairy black holes in shift-symmetric Horndeski theories_ , _JCAP_ 11 (2020) 044 [2007.01320].
* (68) K. Tomikawa and T. Kobayashi, _Perturbations and quasinormal modes of black holes with time-dependent scalar hair in shift-symmetric scalar-tensor theories_ , _Phys. Rev. D_ 103 (2021) 084041 [2101.03790].
* (69) K. Takahashi and H. Motohashi, _Black hole perturbations in DHOST theories: master variables, gradient instability, and strong coupling_ , _JCAP_ 08 (2021) 013 [2106.07128].
* (70) S. Mukohyama, K. Takahashi and V. Yingcharoenrat, _Generalized Regge-Wheeler equation from Effective Field Theory of black hole perturbations with a timelike scalar profile_ , _JCAP_ 10 (2022) 050 [2208.02943].
* (71) J. Khoury, T. Noumi, M. Trodden and S. S. C. Wong, _Stability of hairy black holes in shift-symmetric scalar-tensor theories via the effective field theory approach_ , _JCAP_ 04 (2023) 035 [2208.02823].
* (72) A. De Felice, S. Mukohyama and K. Takahashi, _Approximately stealth black hole in higher-order scalar-tensor theories_ , _JCAP_ 03 (2023) 050 [2212.13031].
* (73) K. Nakashi, M. Kimura, H. Motohashi and K. Takahashi, _Black hole perturbations in higher-order scalar–tensor theories: initial value problem and dynamical stability_ , _Class. Quant. Grav._ 39 (2022) 175003 [2204.05054].
* (74) J. Ben Achour, D. Langlois and K. Noui, _Degenerate higher order scalar-tensor theories beyond Horndeski and disformal transformations_ , _Phys. Rev. D_ 93 (2016) 124005 [1602.08398].
* (75) H. Motohashi, T. Suyama and K. Takahashi, _Fundamental theorem on gauge fixing at the action level_ , _Phys. Rev. D_ 94 (2016) 124021 [1608.00071].
* (76) S. Mukohyama, K. Takahashi, K. Tomikawa and V. Yingcharoenrat, _Quasinormal modes from EFT of black hole perturbations with timelike scalar profile_ , _JCAP_ 07 (2023) 050 [2304.14304].
* (77) S. Mukohyama and V. Yingcharoenrat, _Effective field theory of black hole perturbations with timelike scalar profile: formulation_ , _JCAP_ 09 (2022) 010 [2204.00228].
* (78) R. H. Price, _Nonspherical perturbations of relativistic gravitational collapse. 1. Scalar and gravitational perturbations_ , _Phys. Rev. D_ 5 (1972) 2419.
* (79) C. Gundlach, R. H. Price and J. Pullin, _Late time behavior of stellar collapse and explosions: 1. Linearized perturbations_ , _Phys. Rev. D_ 49 (1994) 883 [gr-qc/9307009].
* (80) M. Walker, _Block Diagrams and the Extension of Timelike Two‐Surfaces_ , _Journal of Mathematical Physics_ 11 (2003) 2280.
* (81) Y. Hatsuda and M. Kimura, _Perturbative quasinormal mode frequencies_ , 2307.16626.
Regularized Nonlinear Regression with Dependent Errors
and its Application to a Biomechanical Model
Hojun You1, Kyubaek Yoon3, Wei-Ying Wu4, Jongeun Choi3 and Chae Young Lim2
1University of Houston
2Seoul National University
3Yonsei University
4National Dong Hwa University
> Abstract: A biomechanical model often requires parameter estimation and
> selection in a known but complicated nonlinear function. Motivated by
> observing that data from a head-neck position tracking system, one of the
> biomechanical models, show multiplicative time-dependent errors, we develop
> a modified penalized weighted least squares estimator. The proposed method
> can also be applied to a model with non-zero-mean, time-dependent additive
> errors. Asymptotic properties of the proposed estimator are investigated
> under mild conditions on a weight matrix and the error process. A simulation
> study demonstrates that the proposed estimation works well in both parameter
> estimation and selection with time-dependent errors. The analysis and
> comparison with an existing method for head-neck position tracking data show
> better performance of the proposed method in terms of the variance accounted
> for (VAF).
>
> Key words and phrases: nonlinear regression; temporal dependence;
> multiplicative error; local consistency and oracle property
## 1 Introduction
A nonlinear regression model has been widely used to describe complicated
relationships between variables (Wood, 2010; Baker and Foley, 2011; Paula et
al., 2015; Lim et al., 2014). In particular, various nonlinear problems are
considered in the field of machinery and biomechanical engineering (Moon et
al., 2012; Santos and Barreto, 2017). Among such nonlinear problems, a head-
neck position tracking model with neurophysiological parameters in
biomechanics motivated us to develop an estimation and selection method for a
nonlinear regression model in this work.
The head-neck position tracking application aims to figure out how
characteristics of the vestibulocollic and cervicocollic reflexes (VCR and
CCR) contribute to the head-neck system. The VCR activates neck muscles to
stabilize the head-in-space and the CCR acts to hold the head on the trunk. A
subject of the experiment follows a reference signal on a computer screen with
his or her head and a head rotation angle is measured during the experiment. A
reference signal is the input of the system and the measured head rotation
angle is the output. The parameters related to VCR and CCR in this nonlinear
system are of interest to understand the head-neck position tracking system.
The head-neck position problem has been widely studied in the literature of
biomechanics (Peng et al., 1996; Chen et al., 2002; Forbes et al., 2013;
Ramadan et al., 2018; Yoon et al., 2022). One of the prevalent issues in
biomechanics is that a model suffers from a relatively large number of
parameters and limited availability of data because the subjects in the
experiment cannot tolerate sufficient time without being fatigued. This leads
to overfitting as well as nonidentifiability of the parameters. To resolve
this issue, selection approaches via a penalized regression method have been
implemented to fix a subset of the parameters to the pre-specified values
while the remaining parameters are estimated (Ramadan et al., 2018; Yoon et
al., 2022).
Figure 1: The black curve represents the measured responses (the observations)
from the subject No. 8 in the head-neck position tracking experiment. The red
dashed curve represents the estimated responses (the fitted values) from the
nonlinear regression model with additive errors introduced in Yoon et al.
(2022).
Figure 2: Sample autocorrelation (left) and sample partial autocorrelation
(right) of the residuals $(\mbox{measured response}-\mbox{estimated
response})$ for the subject No. 8 from the head-neck position tracking
experiment. The estimated response is obtained from the method in Yoon et al.
(2022).
The existing approaches, however, have some limitations. The fitted values
from the penalized nonlinear regression with additive errors in Yoon et al.
(2022) show a larger discrepancy from the observed values when the head is
turning in its direction. For example, Figure 1 shows the fitted values (the
estimated responses) from the method in Yoon et al. (2022) and the
observations (the measured responses) of the subject No. 8 in a head-neck
position tracking experiment. The detailed description of the data from the
experiment is given in Section 4. We can see a tendency that the larger the
magnitude of the measured response, the bigger the gap between the measured
response and the estimated response. Note that the largest measured
responses occur when the head is turning in its direction to follow the
reference signal’s direction changes. It is more difficult for some subjects
to track the end positions correctly, which can create more errors at each
end. Hence, multiplicative errors rather than additive errors would be
more suitable. Indeed, Figure 1 shows that the model with additive errors did
not successfully accommodate this characteristic.
Previous studies also did not take into account temporal dependence, even though the
experimental data exhibit temporal correlation. For example, Figure 2 shows
sample autocorrelation function and sample partial autocorrelation function of
the residuals from the fitted model for the subject No. 8 by the method in
Yoon et al. (2022). It clearly shows temporal dependence in the residuals.
However, Yoon et al. (2022) studied a penalized nonlinear least squares
estimator and its asymptotic properties under the independent error
assumption. Lastly, Ramadan et al. (2018) and Yoon et al. (2022) restrict the
number of sensitive parameters to five, where the sensitive parameters refer
to the parameters whose estimates are not shrunk to the pre-specified values.
Not only may this restriction increase computational instability, but it may also
reduce estimation and prediction performance. Given that the head-neck position
tracking task already suffers from computational challenges, additional
computational issues should be avoided.
To resolve the above-mentioned issues, we consider a nonlinear regression
model with multiplicative errors for the head-neck position tracking system,
which can be written as
$z_{t}=g(\bm{x}_{t};\bm{\theta})\times\varsigma_{t},$ (1.1)
where $z_{t}$ is a measured response, $g(\bm{x};\bm{\theta})$ is a known
nonlinear function with an input $\bm{x}$, $\bm{\theta}$ is a set of
parameters in $g$, and $\varsigma_{t}$ is a multiplicative error. The details
for $g$ and $\bm{\theta}$ for the head-neck position tracking system are
described in Ramadan et al. (2018) and Yoon et al. (2022). A typical approach
is to take the logarithm in both sides of (1.1) so that the resulting model
becomes a nonlinear model with additive errors:
$y_{t}=f(\bm{x}_{t};\bm{\theta})+\epsilon_{t},$ (1.2)
with $y_{t}=\log(z_{t})$,
$f(\bm{x}_{t};\bm{\theta})=\log(g(\bm{x}_{t};\bm{\theta}))$ and
$\epsilon_{t}=\log(\varsigma_{t})$ by assuming all components are positive.
Note that the additive error $\epsilon_{t}$ in (1.2) may not have zero-mean
anymore due to the log transformation.
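To make the transformation concrete, the following sketch generates data from (1.1) and recovers the additive form (1.2). The logistic mean function and the lognormal AR(1) multiplicative error used here are hypothetical placeholders for illustration only, not the head-neck model of Ramadan et al. (2018) and Yoon et al. (2022).

```python
# Illustrative sketch of the multiplicative model (1.1) and its log transform (1.2).
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
theta0 = np.array([1.0, 1.2, 0.6])
X = rng.uniform(-1.0, 1.0, size=(n, p))

def g(X, theta):
    # hypothetical positive nonlinear mean function (placeholder for the head-neck model)
    return 1.0 / (1.0 + np.exp(-X @ theta))

# positive multiplicative errors: exponential of a non-zero-mean AR(1) process
e = np.zeros(n)
for t in range(1, n):
    e[t] = 0.5 * e[t - 1] + rng.normal(scale=0.3)
varsigma = np.exp(0.1 + e)

z = g(X, theta0) * varsigma        # model (1.1)
y = np.log(z)                      # model (1.2): y_t = f(x_t; theta) + eps_t
# here eps_t = 0.1 + e_t has non-zero mean and is temporally correlated
```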
Motivated by the head-neck position tracking application, we propose a
parameter estimation method for a nonlinear model with multiplicative error in
(1.1) by applying a modified weighted least squares estimation method to (1.2)
which accommodates temporal dependence as well as non-zero-mean errors. Given
the structure, the proposed estimation can also handle the nonlinear
regression model with non-zero mean additive errors in (1.2). The asymptotic
properties of the estimator obtained from the proposed method are studied
under the assumption of temporally correlated errors as we observed temporal
dependence in the head-neck position tracking system data.
A penalized estimator and its asymptotic properties are also investigated. In
the application of the head-neck position tracking system, a set of parameters
needs to be shrunk to the pre-specified values instead of the zero-values.
Thus, we use the penalized estimation approach to shrink estimates to the pre-
specified values. By doing so, we not only resolve the non-identifiability
issue but also keep all estimates meaningful in the head-neck position
tracking system. While the application of the head-neck position tracking
system motivated us to develop the proposed method, it is applicable to
general nonlinear regression models as well.
Least squares estimators and their theoretical properties for the nonlinear
regression models have been studied in the literature. Jennrich (1969) first
proved the strong consistency and asymptotic normality of the nonlinear least
squares estimator with independent errors. Wu (1981) provided a necessary
condition for the existence of any kind of weakly consistent estimator in a
nonlinear regression model with an additive error. In the same article, the
asymptotic properties of the least squares estimator were proved under weaker
conditions than those in Jennrich (1969). The results by Wu (1981) were
generalized in several other papers. Van de Geer (1990) studied three
estimation methods for a general regression model: least squares, least
absolute deviations, and penalized least squares. A stochastic nonlinear
function and martingale difference errors were considered in Lai (1994).
Pollard and Radchenko (2006) proved asymptotic properties of a least squares
estimator for a nonlinear regression model under the second moment assumptions
on the errors.
There are a few studies on the nonlinear regression with multiplicative
errors. Xu and Shimada (2000) studied least squares estimation for nonlinear
multiplicative noise models with independent errors. A weighted least squares
estimation method was proposed in Xu and Shimada (2000), but the estimator
induces a bias and needs correction. Lim et al. (2014) also investigated the
nonlinear multiplicative noise models with independent errors by the log
transformation and proposed the modified least squares estimation by including
a sample mean component in the objective function. We also take a similar
approach but the least squares are modified with weights and temporally
dependent errors are assumed in addition to the penalization. Bhattacharyya et
al. (1992) showed that an ordinary least squares estimator for a nonlinear
regression model with additive errors may not possess strong consistency when
the true underlying model is a nonlinear regression model with multiplicative
errors.
Parameter selection via a penalty function has attracted a great deal of
attention since Tibshirani (1996) introduced the least absolute shrinkage and
selection operator (LASSO) and Fan and Li (2001) developed non-concave
penalized likelihood approach with the smoothly clipped absolute deviation
(SCAD). In particular, Fan and Li (2001) established asymptotic properties,
especially the oracle property for the penalized likelihood estimator under
mild regularity conditions. For a nonlinear regression model with independent
errors, asymptotic properties of the estimator from the penalized estimation
method were investigated in Jiang et al. (2012), Wu et al. (2014), and Lv et
al. (2014). Jiang et al. (2012) proposed a penalized weighted composite
quantile regression estimator and developed its asymptotic properties. They
highlighted that the proposed method works as efficiently as the oracle maximum
likelihood estimator with various error distributions and even works for
heavy-tailed error distributions. Wu et al. (2014) studied nonlinear
independence screening and nonnegative garrote for a high-dimensional
nonlinear additive model. Lv et al. (2014) investigated a nonlinear modal
regression model with the increasing number of variables. Asymptotic
properties such as oracle property and finite sample properties were also
studied in the paper.
To deal with a temporal dependence in the errors, mixing conditions are
considered: strong mixing ($\alpha$-mixing), $\phi$-mixing, and $\rho$-mixing.
These mixing conditions have been frequently studied for handling dependence
of temporal data (Machkouri et al., 2017; Geller and Neumann, 2018). Various
regression models with strong-mixing errors have been previously investigated
(Zhang and Liang, 2012; Yang et al., 2017). Zhang and Liang (2012) studied a
semi-parametric regression model with strong mixing errors and established the
asymptotic normality of a least squares estimator and a weighted least squares
estimator. Yang et al. (2017) developed probability inequalities about a least
squares estimator of nonlinear regression with strong mixing errors.
In this work, we propose a modified weighted least squares estimator for a
nonlinear regression model with non-zero-mean errors that are temporally
correlated. As mentioned before, this approach can handle a nonlinear
regression model with multiplicative dependent errors after the log
transformation. We investigate the asymptotic properties of the proposed
estimator and its penalized version. Specifically, we establish the local
consistency and asymptotic normality for both estimators and the oracle
property of the penalized estimator.
For a penalty function, we consider LASSO and SCAD in the numerical study. The
performance in the numerical study indicates the proposed penalized estimator
works well with finite samples. At last, we exhibit the results of adopting
the proposed method to the head-neck position tracking data with comparison to
those from the existing method.
In Section 2, we demonstrate the proposed estimation method for a nonlinear
regression model and establish the asymptotic properties of the proposed
estimators. In Section 3, several simulation studies are conducted with
various settings. The analysis on head-neck position tracking data with the
proposed method is introduced in Section 4. At last, we provide a discussion
in Section 5. The technical proofs for the theorems and additional results of
the simulation studies are presented in a supplementary material.
## 2 Methods
### 2.1 Modified Weighted Least Squares
We consider the following nonlinear regression model
$y_{t}=f(\bm{x}_{t};\bm{\theta})+\epsilon_{t},$
for $t=1,\cdots,n$, where $f({\bm{x}};\bm{\theta})$ is a known nonlinear
function on ${\bm{x}}\in D\subset R^{d}$, which also depends on the parameter
vector $\bm{\theta}:=(\theta_{1},\theta_{2},...,\theta_{p})^{T}\in\Theta$.
${A}^{T}$ is the transpose of a matrix $A$. $\epsilon_{t}$ is temporally
correlated and its mean, $E(\epsilon_{t})=\mu$, may not be zero. The
assumption on a non-zero mean of the error comes from a nonlinear regression
model with multiplicative errors introduced in (1.1). We further assume that
only a few entries of the true parameter are non-zero. Without loss of
generality, we let the first $s$ entries of the true $\bm{\theta}_{0}$ be non-
zero. That is,
$\bm{\theta}_{0}=(\theta_{01},\theta_{02},...,\theta_{0s},\theta_{0s+1},...,\theta_{0p})^{T}$
and $\theta_{0j}\neq 0$ for $1\leq j\leq s$ and $\theta_{0j}=0$ for $s+1\leq
j\leq p$.
We begin with a modified least squares method using
$\displaystyle\displaystyle\bm{S}_{n}^{(ind)}(\bm{\theta})$
$\displaystyle=\sum^{n}_{t=1}\left(y_{t}-f({\bm{x}_{t}};\bm{\theta})-\frac{1}{n}\sum_{t^{\prime}=1}^{n}(y_{t^{\prime}}-f(\bm{x}_{t^{\prime}};\bm{\theta}))\right)^{2},$
$\displaystyle=\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right)^{T}\bm{\Sigma}_{n}\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right),$
where $\bm{y}=(y_{1},\ldots,y_{n})^{T},\
\bm{f}(\bm{x},\bm{\theta})=(f(\bm{x}_{1};\bm{\theta}),\ldots,f(\bm{x}_{n};\bm{\theta}))^{T}$,
and $\bm{\Sigma}_{n}=\bm{I}_{n}-n^{-1}\bm{1}\bm{1}^{T}$. $\bm{I}_{n}$ is the
identity matrix and $\bm{1}$ is the column vector of ones. This objective
function is different from a typical least squares expression in that the
sample mean of the errors is subtracted from the error at each data point.
This is motivated by taking into account possibly non-zero mean of the errors
(Lim et al., 2014).
Since we consider temporally dependent data, we introduce a temporal weight
matrix $\bm{W}$ to account for the temporal dependence so that the modified
objective function is
$\displaystyle\bm{S}_{n}(\bm{\theta})$
$\displaystyle=\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right)^{T}\bm{W}^{T}\bm{\Sigma}_{n}\bm{W}\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right),$
$\displaystyle=\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right)^{T}\bm{\Sigma}_{w}\left(\bm{y}-\bm{f}(\bm{x},\bm{\theta})\right),$
(2.3)
where $\bm{\Sigma}_{w}=\bm{W}^{T}\bm{\Sigma}_{n}\bm{W}$. If we know the true
temporal dependence model of the error process, a natural choice of the weight
matrix is from the true covariance matrix of the error process. However, we
allow the weight matrix to be more flexible since we do not want to assume a
specific temporal dependence model for the error process. Conditions for
$\bm{W}$ will be introduced in the next section.
We can add a penalty function $p_{\tau_{n}}(\cdot)$ when our interest is to
detect the relevant parameters. Then, the penalized estimator is obtained by
minimizing
$\bm{Q}_{n}(\bm{\theta})=\bm{S}_{n}(\bm{\theta})+n\sum^{p}_{i=1}p_{\tau_{n}}(|\theta_{i}|).$
(2.4)
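For concreteness, the sketch below assembles the objective (2.3) and its penalized version (2.4). It is an illustrative sketch rather than the implementation used in the paper: the mean function $f$ is left generic, the weight matrix is a row-normalized square root of an AR(1) precision matrix (one of the choices used later in Section 3), and the penalty is the standard SCAD function of Fan and Li (2001) with $a=3.7$.

```python
# Sketch of S_n(theta) in (2.3) and Q_n(theta) in (2.4).
import numpy as np

def centering(n):
    return np.eye(n) - np.ones((n, n)) / n              # Sigma_n = I - 11^T / n

def ar1_weight(n, rho):
    # W: symmetric square root of an AR(1) precision matrix, rescaled so W 1 = 1
    cov = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    vals, vecs = np.linalg.eigh(np.linalg.inv(cov))
    W = vecs @ np.diag(np.sqrt(vals)) @ vecs.T
    return W / W.sum(axis=1, keepdims=True)

def scad_penalty(theta, tau, a=3.7):
    # standard SCAD penalty of Fan and Li (2001), summed over coordinates
    t = np.abs(theta)
    p1 = tau * t
    p2 = (2 * a * tau * t - t**2 - tau**2) / (2 * (a - 1))
    p3 = tau**2 * (a + 1) / 2
    return np.sum(np.where(t <= tau, p1, np.where(t <= a * tau, p2, p3)))

def S_n(theta, y, X, f, W):
    r = y - f(X, theta)
    Sigma_w = W.T @ centering(len(y)) @ W                # Sigma_w = W^T Sigma_n W
    return r @ Sigma_w @ r                               # Eq. (2.3)

def Q_n(theta, y, X, f, W, tau):
    return S_n(theta, y, X, f, W) + len(y) * scad_penalty(theta, tau)   # Eq. (2.4)
```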
To investigate theoretical properties of the proposed estimators obtained by
minimizing $\bm{S}_{n}(\bm{\theta})$ and $\bm{Q}_{n}(\bm{\theta})$, we
introduce notations and assumptions in the next section.
### 2.2 Notations and Assumptions
We start with three mixing conditions for temporal dependence:
$\alpha$-mixing, $\phi$-mixing, and $\rho$-mixing.
###### Definition 1.
1. Consider a sequence of random variables, $\\{\xi_{i},i\geq 1\\}$ and let $\mathcal{F}_{i}^{j}$ denote the $\sigma$-field generated by $\\{\xi_{i},\ldots,\xi_{j}\\}$. Then, $\\{\xi_{i},i\geq 1\\}$ is said to be
2. (a)
strong mixing or $\alpha$-mixing if $\alpha(m)\rightarrow 0$ as
$m\rightarrow\infty$, where
$\displaystyle\alpha(m)$
$\displaystyle=\displaystyle\underset{n}{\sup}~{}\alpha(\mathcal{F}_{1}^{n},\mathcal{F}_{n+m}^{\infty})$
$\displaystyle\hbox{with}~{}~{}\alpha(\mathcal{F},\mathcal{G})$
$\displaystyle=\displaystyle\underset{A\in\mathcal{F},B\in\mathcal{G}}{\sup}~{}|P(AB)-P(A)P(B)|,$
3. (b)
$\phi$-mixing if $\phi(m)\rightarrow 0$ as $m\rightarrow\infty$, where
$\displaystyle\phi(m)$
$\displaystyle=\displaystyle\underset{n}{\sup}~{}\phi(\mathcal{F}_{1}^{n},\mathcal{F}_{n+m}^{\infty})$
$\displaystyle\hbox{with}~{}~{}\phi(\mathcal{F},\mathcal{G})$
$\displaystyle=\displaystyle\sup_{A\in\mathcal{F},B\in\mathcal{G},P(A)>0}|P(B|A)-P(B)|,~{}~{}\hbox{and}$
4. (c)
$\rho$-mixing if $\rho(m)\rightarrow 0$ as $m\rightarrow\infty$, where
$\displaystyle\rho(m)$
$\displaystyle=\displaystyle\underset{n}{\sup}~{}\rho(\mathcal{F}_{1}^{n},\mathcal{F}_{n+m}^{\infty}),$
$\displaystyle\hbox{with}~{}~{}\rho(\mathcal{F},\mathcal{G})$
$\displaystyle=\displaystyle\sup_{f\in\mathcal{L}^{2}_{real}(\mathcal{F}),\
g\in\mathcal{L}^{2}_{real}(\mathcal{G})}|corr(f,g)|.$
Here, $L^{2}_{real}(\mathcal{F})$ denotes the space of square-integrable,
$\mathcal{F}$-measurable real-valued random variables (Bradley, 2005).
These mixing conditions have been widely adopted to explain dependence of a
random sequence in the literature (Machkouri et al., 2017; Geller and Neumann,
2018). It is well-known that $\phi$-mixing implies $\rho$-mixing,
$\rho$-mixing implies $\alpha$-mixing, and the strong mixing condition is one
of the weakest conditions among many mixing conditions (Peligrad and Utev,
1997; Bradley, 2005). We assume an appropriate mixing condition for our
temporal data and derive the asymptotic properties of the proposed estimators
under such condition. The details appear in Assumption 1.
Next, we introduce notations and assumptions for theoretical results. Define
$d_{t}(\bm{\theta},\bm{\theta}^{\prime})=f({\bm{x}}_{t};\bm{\theta})-f({\bm{x}}_{t};\bm{\theta}^{\prime})$
and
$\bm{d}=(d_{1}(\bm{\theta},\bm{\theta}^{\prime}),d_{2}(\bm{\theta},\bm{\theta}^{\prime}),\ldots,d_{n}(\bm{\theta},\bm{\theta}^{\prime}))^{T}$.
When $f$ is twice differentiable with respect to $\bm{\theta}$, let
$\bm{f}_{k}=\left(\frac{\partial{f(\bm{x}_{1},\bm{\theta})}}{\partial\theta_{k}},\cdots,\frac{\partial
f(\bm{x}_{n},\bm{\theta})}{\partial\theta_{k}}\right)^{T}$ and
$\bm{f}_{kl}=\left(\frac{\partial^{2}\bm{f}(\bm{x}_{1},\bm{\theta})}{\partial\theta_{k}\partial\theta_{l}},...,\frac{\partial^{2}\bm{f}(\bm{x}_{n},\bm{\theta})}{\partial\theta_{k}\partial\theta_{l}}\right)^{T}$.
Using $\bm{f}_{k}$ and $\bm{f}_{kl}$, we define
${\bm{\dot{F}}}(\bm{\theta})=(\bm{f}_{1},...,\bm{f}_{p})$ and
$\bm{{\ddot{F}}}(\bm{\theta})$=Block($\bm{f}_{kl}$) so that
$\bm{\dot{F}}(\bm{\theta})$ is $n\times p$ matrix whose $k$th column is
$\bm{f}_{k}$ and $\bm{{\ddot{F}}}$ is a $pn\times p$ block matrix whose
$(k,l)$th block is $\bm{f}_{kl}$. $\mbox{E}(\bm{\epsilon})=\mu\bm{1}$ and
$\operatorname{var}(\bm{\epsilon})=\bm{\Sigma}_{\epsilon}$. Let $\lambda_{w}$
and $\lambda_{\epsilon}$ denote the maximum eigenvalues of $\bm{\Sigma}_{w}$
and $\bm{\Sigma}_{\epsilon}$, respectively. We consider a temporal weight
matrix satisfying $\bm{W}\bm{1}=\bm{1}$, i.e., the row sums are 1. This
condition is to handle the nonzero mean of the errors. Let $\|\cdot\|$ for a
vector denote the Euclidean norm, and let $\|\cdot\|_{1}$ and $\|\cdot\|_{\infty}$
for a matrix denote the 1-norm and the infinity norm, respectively.
The assumptions on the nonlinear function $f$, the errors $\epsilon_{i}$, the
weight matrix $\bm{W}$ and the penalty function $p_{\tau_{n}}(\cdot)$ to
investigate asymptotic properties are now introduced.
###### Assumption 1.
1. (1)
The nonlinear function $f\in C^{2}$ on the compact set
$\mathcal{D}\times\Theta$ where $C^{2}$ is the set of twice continuously
differentiable functions.
2. (2)
As $\|\bm{\theta}-\bm{\theta}_{0}\|\rightarrow 0$,
$\left({\bm{\dot{F}}}(\bm{\theta}_{0})^{T}\bm{\Sigma}_{w}{\bm{\dot{F}}}(\bm{\theta}_{0})\right)^{-1}{\bm{\dot{F}}}(\bm{\theta})^{T}\bm{\Sigma}_{w}{\bm{\dot{F}}}(\bm{\theta})\rightarrow
I_{p}$, elementwisely and uniformly in $\bm{\theta}$.
3. (3)
There exist symmetric positive definite matrices $\bm{\Gamma}$ and
$\bm{\Gamma}_{\epsilon}$ such that
$\displaystyle\frac{1}{n\lambda_{w}}\bm{\dot{F}}(\bm{\theta}_{0})^{T}\bm{\Sigma}_{w}\bm{\dot{F}}(\bm{\theta}_{0})$
$\displaystyle\rightarrow\bm{\Gamma}$
$\displaystyle\frac{1}{n\lambda_{\epsilon}\lambda_{w}^{2}}{\bm{\dot{F}}}(\bm{\theta}_{0})^{T}\bm{\Sigma}_{w}\bm{\Sigma}_{\epsilon}\bm{\Sigma}_{w}{\bm{\dot{F}}}(\bm{\theta}_{0})$
$\displaystyle\rightarrow\bm{\Gamma}_{\epsilon}.$
4. (4)
$\frac{\|\bm{W}\|_{1}\cdot\|\bm{W}\|_{\infty}}{\|\bm{W}^{T}\bm{\Sigma}_{n}\bm{W}\|_{2}}=o(n^{1/2}\lambda_{\epsilon}^{1/2})$.
5. (5)
$O(1)\leq\lambda_{\epsilon}\leq o(n)$ and $\lambda_{w}\geq O(1)$.
6. (6)
$\\{\epsilon_{i}^{2}\\}$ is uniformly integrable.
7. (7)
One of the following conditions is satisfied for $\epsilon_{i}$.
* $(a)$
$\\{\epsilon_{i}\\}$ is $\phi$-mixing.
* $(b)$
$\\{\epsilon_{i}\\}$ is $\rho$-mixing and
$\sum_{j\in\mathcal{N}}\rho(2^{j})<\infty$.
* $(c)$
For $\delta>0$, $\\{\epsilon_{i}\\}$ is $\alpha$-mixing,
$\\{|\epsilon_{i}|^{2+\delta}\\}$ is uniformly integrable, and
$\sum_{n\in\mathcal{N}}n^{2/\delta}\alpha(n)<\infty$.
###### Assumption 2.
The first derivative of a penalty function $p_{\tau_{n}}(\cdot)$ denoted by
$q_{\tau_{n}}(\cdot)$, has the following properties:
1. (1)
$c_{n}=\max_{i\in\\{1,\ldots,s\\}}\left\\{|q_{\tau_{n}}(|\theta_{0i}|)|\right\\}=O\left(\left(\lambda_{\epsilon}/n\right)^{1/2}\right)$
2. (2)
$q_{\tau_{n}}(\cdot)$ is Lipschitz continuous given $\tau_{n}$
3. (3)
$n^{1/2}\lambda_{\epsilon}^{-1/2}\lambda_{w}^{-1}\tau_{n}\rightarrow\infty$
4. (4)
For any $C>0$,
$\displaystyle\liminf_{n\rightarrow\infty}\inf_{\theta\in\left(0,C(\lambda_{\epsilon}/n)^{1/2}\right)}\tau_{n}^{-1}q_{\tau_{n}}(\theta)>0$
Assumption 1 imposes mild conditions on the nonlinear function, its domain,
the weight matrix, and the error process. Assumption 1-(1), (2), and (3)
introduce reasonably weak conditions for the nonlinear function and the domain
of data and parameters. The first condition in Assumption 1-(3) is a modified
version of Grenander condition for our objective function (Grenander, 1954;
Wang and Zhu, 2009; Lim et al., 2014). The second condition in Assumption
1-(3) is required to derive the variance of the asymptotic distribution.
Remark 1 explains the plausibility of these conditions by addressing that
slightly weaker conditions can be easily satisfied. We impose a weak condition
on the temporal weight matrix in Assumption 1-(4) so that flexible weight
matrices are allowed. Remark 2 further discusses the condition for the
temporal weight matrix. In Assumption 1-(5), a lower bound for
$\lambda_{\epsilon}$ can be attained if the error process is stationary with a
bounded spectral density. Assumption 1 contains additional conditions for the
asymptotic normality of the unpenalized estimator from
$\bm{S}_{n}(\bm{\theta})$. Assumptions 1-(6) and (7) refer to Peligrad and
Utev (1997), which studied central limit theorems for linear processes.
Assumption 1-(6) implies uniform boundedness of the second moment for the
errors. Assumption 1-(7) provides weak conditions for temporal dependence of
the errors. The detailed discussion on Assumption 1-(7) is given in Remark 3.
Assumption 2 demonstrates typical conditions for a penalty function. The first
two conditions guarantee that the penalized least squares estimator possesses
consistency with the same order as the modified weighted least squares
estimator. The other two conditions contribute to the sparsity of the
penalized estimator. The conditions in Assumption 2 are similar to those in
Fan and Li (2001) and Wang and Zhu (2009). Typically, LASSO and SCAD penalty
functions are considered. The former satisfies only the first two conditions
in Assumption 2 while the latter satisfies all conditions in Assumption 2 with
properly chosen $\tau_{n}$. This means LASSO fails to correctly identify
significant parameters while an estimator using the SCAD penalty function
possesses selection consistency as well as estimation consistency.
###### Remark 1.
We discuss the positive definiteness of $\bm{\Gamma}$ and
$\bm{\Gamma}_{\epsilon}$ and the boundedness of the sequences of the matrices.
First, $\bm{\Sigma}_{w}$ is a symmetric and positive semi-definite matrix
since $\bm{\Sigma}_{w}=\bm{W}^{T}\bm{\Sigma}_{n}\bm{W}$ and $\bm{\Sigma}_{n}$
has rank of $n-1$. Despite the rank deficiency, the sequences of the matrices
are $p\times p$ matrices with $p<n$, so we believe that the limits of the
sequences are likely to acquire positive definiteness. Next, the first
sequence of matrices in Assumption 1-(3) is clearly bounded above. Since
$\bm{\Sigma}_{w}$ is a positive semi-definite matrix,
$(n\lambda_{w})^{-1}\bm{\dot{F}}(\bm{\theta}_{0})^{T}\bm{\Sigma}_{w}\bm{\dot{F}}(\bm{\theta}_{0})\leq
n^{-1}\bm{\dot{F}}(\bm{\theta}_{0})^{T}\bm{\dot{F}}(\bm{\theta}_{0})=O(1)$ by
the compactness of the domain (Assumption 1-(1)). With
$\lambda_{max}(\bm{\Sigma}_{w}\bm{\Sigma}_{\epsilon}\bm{\Sigma}_{w})\leq\lambda_{\epsilon}\lambda_{w}^{2}$,
we obtain the same result for the second sequence.
###### Remark 2.
We give detailed justification for assumptions on the temporal weight matrix.
By H$\ddot{\mbox{o}}$lder’s inequality,
$\|\bm{W}\|_{2}^{2}\leq\|\bm{W}\|_{1}\|\bm{W}\|_{\infty}$. Hence, with
$\lambda_{w}=\|\bm{W}^{T}\bm{\Sigma}_{n}\bm{W}\|_{2}$ Assumption 1-(4) leads
to $\|\bm{W}\|_{2}\leq o(n^{1/4}\lambda_{\epsilon}^{1/4}\lambda_{w}^{1/2})$.
Recall $O(1)\leq\lambda_{\epsilon}\leq o(n)$ and $\lambda_{w}\geq O(1)$ from
Assumption 1-(5). Thus, the upper bound is sufficiently large for
$\|\bm{W}\|_{2}$ to allow flexible $\bm{W}$. In addition, since product
matrices $\bm{A}\bm{B}$ and $\bm{B}\bm{A}$ share their eigenvalues,
$\lambda_{w}=\lambda_{max}(\bm{W}^{T}\bm{\Sigma}_{n}\bm{W})=\lambda_{max}(\bm{\Sigma}_{n}\bm{W}\bm{W}^{T})\leq\|\bm{W}\|_{2}^{2}$.
In summary, we obtain $O(1)\leq\|\bm{W}\|_{2}^{2}\leq
o(n^{1/2}\lambda_{\epsilon}^{1/2}\lambda_{w})$, so Assumptions 1-(4) and (5)
together provide flexible upper and lower bounds for $\bm{W}$.
###### Remark 3.
There exist many familiar time series processes that satisfy Assumption 1-(7).
Autoregressive (AR) processes and moving average (MA) processes are strongly
mixing under mild conditions (Athreya and Pantula, 1986). Athreya and Pantula
(1986) also mention that finite-order autoregressive moving average (ARMA)
processes are $\phi$-mixing under mild conditions. Furthermore, ARMA processes
and bilinear processes are strongly mixing with $\alpha(n)=O(e^{-n\rho})$ for
some $\rho>0$ (Roussas et al., 1992).
### 2.3 Theoretical results
First, we establish the existence and the consistency of the modified weighted
least squares estimator and the penalized least squares estimator.
###### Theorem 1.
For any $\varepsilon>0$ and $a_{n}=(\lambda_{\epsilon}/n)^{1/2}$, under
Assumption 1-(1), (2), (3), and (5), there exists a positive constant $C$ such
that
$P\left(\inf_{\|\bm{v}\|=C}\bm{S}_{n}(\bm{\theta}_{0}+a_{n}\bm{v})-\bm{S}_{n}(\bm{\theta}_{0})>0\right)>1-\varepsilon$
for large enough $n$. Therefore, with probability tending to 1, there exists a
local minimizer of $\bm{S}_{n}(\bm{\theta})$, say $\hat{\bm{\theta}}^{(s)}$,
in the ball centered at $\bm{\theta}_{0}$ with radius $Ca_{n}$. Since
$a_{n}=o(1)$ by Assumption 1-(5), we have the consistency of
$\hat{\bm{\theta}}^{(s)}$.
###### Theorem 2.
For any $\varepsilon>0$ and $b_{n}=(\lambda_{\epsilon}/n)^{1/2}+c_{n}$, under
the assumptions in Theorem 1 and Assumptions 2-(1) and (2), there exists a positive constant $C$
such that
$P\left(\inf_{\|\bm{v}\|=C}\bm{Q}_{n}(\bm{\theta}_{0}+b_{n}\bm{v})-\bm{Q}_{n}(\bm{\theta}_{0})>0\right)>1-\varepsilon$
for large enough $n$. Therefore, with probability tending to 1, there exists a
local minimizer of $\bm{Q}_{n}(\bm{\theta})$, say $\hat{\bm{\theta}}$, in the
ball centered at $\bm{\theta}_{0}$ with radius $Cb_{n}$. By
Assumptions 1-(5) and 2-(1), $b_{n}=o(1)$, which leads to the consistency of
$\hat{\bm{\theta}}$.
The next two theorems establish the asymptotic normality of the modified
weighted least squares estimator from Theorem 1 and the oracle property of the
penalized least squares estimator from Theorem 2.
###### Theorem 3 (Asymptotic normality).
Under Assumption 1,
$\left(\frac{n}{\lambda_{\epsilon}}\right)^{1/2}\left(\hat{\bf{\bm{\theta}}}^{(s)}-\bm{\theta}_{0}\right)~{}\overset{d}{\longrightarrow}~{}N\left(0,\bm{\Gamma}^{-1}\bm{\Gamma}_{\epsilon}\bm{\Gamma}^{-1}\right),$
where $\hat{\bm{\theta}}^{(s)}$ is a consistent estimator introduced in
Theorem 1 using $\bm{S}_{n}(\bm{\theta})$.
###### Theorem 4 (Oracle property).
With $\hat{\bm{\theta}}$, a consistent estimator introduced in Theorem 2 using
$\bm{Q}_{n}(\bm{\theta})$, if Assumptions 1 and 2 are satisfied,
1. (i)
$P\left(\hat{\theta}_{i}=0\right)\rightarrow 1,$ for $i\in\\{s+1,\ldots,p\\}$.
2. (ii)
Also,
$\left(\frac{n}{\lambda_{\epsilon}}\right)^{1/2}\left(\hat{\bm{\theta}}_{1}-\bm{\theta}_{01}+\left((2\lambda_{w}\bm{\Gamma})^{-1}\right)_{11}\bm{\beta}_{n,s}\right)~{}\overset{d}{\longrightarrow}~{}N\left(0,\left(\bm{\Gamma}^{-1}\bm{\Gamma}_{\epsilon}\bm{\Gamma}^{-1}\right)_{11}\right),$
where $\hat{\bm{\theta}}_{1}=(\hat{\theta}_{1},\ldots,\hat{\theta}_{s})^{T},\
\bm{\theta}_{01}=(\theta_{01},\ldots,\theta_{0s})^{T},\
\bm{\beta}_{n,s}=(q_{\tau_{n}}({|\theta_{01}|)sgn(\theta}_{01}),\ldots,q_{\tau_{n}}(|\theta_{0s}|)sgn(\theta_{0s}))^{T}$
and $\bm{A}_{11}$ is the $s\times s$ upper-left matrix of $\bm{A}$.
Note that the estimators from Theorems 1 and 2 have convergence orders of
$a_{n}$ and $b_{n}$, respectively. Also, $a_{n}$ and $b_{n}$ eventually have
the same order of $(\lambda_{\epsilon}/n)^{1/2}$ by Assumption 2-(1). One may
think that $\lambda_{w}$, which carries the information of $\bm{W}$, makes no
contribution to either theorem, even though we have $\bm{W}$ in the objective
functions. Recall that
$\lambda_{w}$ contributes to $\tau_{n}$ via Assumption 2-(2) and (4), and
$c_{n}$, which appears in $b_{n}$, is related to $\tau_{n}$ by Assumption
2-(1). This is where $\lambda_{w}$ implicitly comes into the theorems. We
could impose different conditions to make $\lambda_{w}$ explicitly appear in
the theorems. However, such conditions restrict the flexibility of
$\bm{\Sigma}_{\epsilon}$ so that the applicability of the proposed methods
becomes limited. Instead, we decide to keep the current assumptions to allow
flexible $\bm{\Sigma}_{\epsilon}$ and attain the implicit involvement of
$\lambda_{w}$ in the theorems. The proofs for Theorems 1-4 are given in the
supplementary material.
## 3 Simulation Study
We investigate the performance of the proposed estimator, in particular, the
penalized version with two different penalty functions, LASSO and SCAD, using
simulated data sets. First, we consider the data generated from a nonlinear
additive error model:
$\displaystyle
y_{t}=\frac{1}{1+\exp(-\bm{x}_{t}^{T}\bm{\theta}_{0})}+\epsilon_{t},$ (3.5)
where $\bm{\theta}_{0}=(\theta_{01},\theta_{02},\ldots,\theta_{0,20})^{T}$
with $\theta_{01}=1,\theta_{02}=1.2,\theta_{03}=0.6$, and the others being
zero. The first component of the covariate $\bm{x}$ comes from $U[-1,1]$, a
uniform distribution on $[-1,1]$, and the other components of $\bm{x}$ are
simulated from a joint normal distribution with the zero mean, the variance
being 0.6 and pairwise covariance being 0.1. This is a slight modification of
the covariates setting used in Jiang et al. (2012). For $\epsilon_{t}$, the
AR(1) and ARMA(1,1) with the non-zero mean are considered since these
processes not only represent typical time series processes but also possess
the strong mixing property. The choices of the AR(1) coefficient ($\rho$) are
0.5 and 0.9. For the ARMA process, the parameters for the AR and MA parts are
fixed as 0.8 ($\rho$) and 0.4 ($\phi$), respectively. For the non-zero mean,
$\mu$, the choices are 0.1 and 0.5 and the same for the standard deviation,
$\sigma$. We only show the results when $\sigma=0.5$ to highlight findings, as
it is a more difficult setting due to the larger variance. We consider three
sample sizes, $n=50,100$, and $200$, to investigate improvements according to
the increasing sample sizes.
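A sketch of this data-generating design is given below. Where the text does not pin down the details, the sketch makes illustrative assumptions: Gaussian AR(1) innovations and $\sigma$ interpreted as the marginal standard deviation of the error process.

```python
# Sketch of the simulation design in (3.5) with an AR(1), non-zero-mean error.
import numpy as np

def simulate(n=100, p=20, rho=0.5, mu=0.1, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    theta0 = np.zeros(p)
    theta0[:3] = [1.0, 1.2, 0.6]
    # first covariate ~ U[-1, 1]; remaining covariates jointly normal with
    # variance 0.6 and pairwise covariance 0.1
    x1 = rng.uniform(-1.0, 1.0, size=(n, 1))
    cov = np.full((p - 1, p - 1), 0.1) + 0.5 * np.eye(p - 1)
    xr = rng.multivariate_normal(np.zeros(p - 1), cov, size=n)
    X = np.hstack([x1, xr])
    # stationary AR(1) error with mean mu and marginal standard deviation sigma
    eps = np.zeros(n)
    eps[0] = rng.normal(scale=sigma)
    for t in range(1, n):
        eps[t] = rho * eps[t - 1] + rng.normal(scale=sigma * np.sqrt(1.0 - rho**2))
    y = 1.0 / (1.0 + np.exp(-X @ theta0)) + mu + eps
    return y, X, theta0
```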
A coordinate descent (CD) algorithm, in particular, a cyclic CD algorithm
(Breheny and Huang, 2011), was implemented to calculate the minimizer of the
objective function given in (2.4). Although Breheny and Huang (2011)
considered the convergence of the CD algorithm in a linear model, the cyclic
CD algorithm worked well for our penalized nonlinear regression problem as
well.
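For illustration, a naive cyclic sweep is sketched below. It simply performs a one-dimensional numerical minimization of the objective in each coordinate, which is an assumption made here for brevity; it is not the closed-form coordinate update scheme of Breheny and Huang (2011).

```python
# Conceptual sketch of a cyclic coordinate-descent sweep for minimizing (2.4).
import numpy as np
from scipy.optimize import minimize_scalar

def cyclic_cd(Q, theta_init, half_width=5.0, n_sweeps=50, tol=1e-8):
    theta = np.asarray(theta_init, dtype=float).copy()
    for _ in range(n_sweeps):
        theta_old = theta.copy()
        for k in range(len(theta)):
            def q1d(t, k=k):
                trial = theta.copy()
                trial[k] = t
                return Q(trial)           # objective with only coordinate k varied
            res = minimize_scalar(q1d, bounds=(theta[k] - half_width,
                                               theta[k] + half_width),
                                  method="bounded")
            theta[k] = res.x
        if np.max(np.abs(theta - theta_old)) < tol:
            break
    return theta

# usage sketch: theta_hat = cyclic_cd(lambda th: Q_n(th, y, X, f, W, tau), np.zeros(p))
```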
To select the tuning parameter, $\tau_{n}$, of the penalty function, a BIC-
type criterion (Wang et al., 2007) was used. The tuning parameter $a$ in the
SCAD penalty was fixed at 3.7 as recommended in Fan and Li (2001). The BIC-
type criterion we consider is
$\mbox{BIC}=\log(\hat{\sigma}^{2})+\log(n)\cdot\widehat{df}/n,$ where
$\widehat{df}$ is the number of significant estimates. For $\hat{\sigma}^{2}$,
we used $\hat{\sigma}^{2}=\overline{r^{2}}-\bar{r}^{2}$ where
$\bar{r}=n^{-1}\sum_{t=1}^{n}r_{t}$ and
$\overline{r^{2}}=n^{-1}\sum_{t=1}^{n}r_{t}^{2}$ with
$r_{t}=y_{t}-f(\bm{x}_{t};\bm{\hat{\theta}})$.
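A small sketch of this criterion is given below, where, as in the simulation setting, "significant" is read as not shrunk to zero.

```python
# Sketch of the BIC-type criterion used to select tau_n.
import numpy as np

def bic(residuals, theta_hat, tol=1e-10):
    n = len(residuals)
    sigma2_hat = np.mean(residuals**2) - np.mean(residuals)**2   # \bar{r^2} - \bar{r}^2
    df_hat = int(np.count_nonzero(np.abs(theta_hat) > tol))      # significant estimates
    return np.log(sigma2_hat) + np.log(n) * df_hat / n
```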
Our proposed method is denoted as penalized modified weighted least squares
(PMWLS). For the simulation study, the square roots of the inverse of the
covariance matrices from the AR(1) process with the AR coefficient
$\rho=0.5,0.9$ and the ARMA(1,1) process with AR and MA coefficients
$(\rho,\phi)=(0.8,0.4)$ are considered for the weight matrices after scaling
to have the row-sums 1. We compare the results of the proposed PMWLS method
with the results from a penalized least squares with a weight matrix (PWLS).
For fair comparison with the proposed method, a temporal weight matrix is also
considered for the PWLS method. In the PWLS method, we introduce an intercept
term to account for a possible non-zero mean of the error in the equation
(3.5), which is a straightforward way to handle non-zero mean of the error.
That is, PWLS minimizes
$\left(\bm{y}-\beta_{0}-\bm{f}(\bm{x};\bm{\theta})\right)^{T}\widetilde{\bm{W}}(\bm{y}-\beta_{0}-\bm{f}(\bm{x};\bm{\theta}))+n\sum_{i=1}^{p}p_{\tau_{n}}(|\theta_{i}|),$
where $\beta_{0}$ is the intercept term and $\widetilde{\bm{W}}$ is a weight
matrix. For weight matrices, we used the inverse of the covariance matrices
from the same models considered for the PMWLS method such as AR(1) and
ARMA(1,1) processes.
Tables 1-3 report the values of mean squared error (MSE) with standard
deviation of squared error (SD) in parenthesis for the estimates with a SCAD
penalty from 100 repetitions of data. MSE and SD are calculated as
$\displaystyle MSE$
$\displaystyle=\frac{1}{R\,p}\sum_{j=1}^{R}\left\|\hat{\bm{\theta}}^{(j)}-\bm{\theta}_{0}\right\|^{2},$
$\displaystyle SD$
$\displaystyle=\displaystyle\sqrt{\frac{1}{R-1}\sum_{j=1}^{R}\left\|\hat{\bm{\theta}}^{(j)}-{\bm{\bar{\theta}}}\right\|^{2}},$
where $\hat{\bm{\theta}}^{(j)}$ stands for the estimate from the $j$-th
repetition and ${\bm{\bar{\theta}}}$ is the sample mean of
$\hat{\bm{\theta}}^{(j)}$ for $j=1,\ldots,100$. The results for the estimates
with a LASSO penalty are provided in the supplementary material.
The MSE and SD results in Tables 1-3 show good performance in estimating the
true parameters. Overall, the estimation results are robust over various
weight matrices. While the results with the LASSO penalty for both PMWLS and
PWLS are comparable (Tables S1-S3 in the supplementary material), the proposed
method (PMWLS) outperforms the PWLS method for most cases with the SCAD
penalty. In particular, the improvement by the PMWLS method with the SCAD
penalty is more apparent when the error process has stronger dependence
(Tables 2 and 3). This performance improvement would be from estimation
efficiency in finite sample since the PMWLS method estimates one less number
of parameters, an intercept, compared to the PWLS method. Weight matrices for
both PMWLS and PWLS methods help to reduce MSE and the improvement is more
clear when the dependence is strong. On the other hand, the choice of the
weight matrices do not make much difference except that the performance is
better when the weight matrix is introduced.
AR(1) with $\rho=0.5$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 15.34 (1.67) | 4.98 (0.97) | 2.38 (0.66)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 11.36 (1.38) | 4.17 (0.85) | 2.36 (0.64)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 10.69 (1.30) | 4.22 (0.84) | 2.22 (0.61)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 10.94 (1.29) | 4.42 (0.86) | 2.49 (0.64)
PWLS | 15.00 (1.64) | 4.89 (0.96) | 2.51 (0.67)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 11.88 (1.40) | 4.22 (0.85) | 2.29 (0.63)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 11.15 (1.34) | 4.09 (0.83) | 2.42 (0.63)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 11.07 (1.33) | 4.53 (0.86) | 2.38 (0.63)
$(0.5,0.5)$ | PMWLS | 9.44 (1.32) | 5.15 (0.98) | 2.60 (0.70)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 9.84 (1.31) | 4.19 (0.84) | 2.02 (0.59)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 9.58 (1.27) | 4.07 (0.82) | 1.97 (0.58)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 11.71 (1.39) | 4.73 (0.87) | 2.24 (0.62)
PWLS | 10.17 (1.34) | 6.78 (1.10) | 3.81 (0.83)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 9.17 (1.24) | 4.20 (0.84) | 2.17 (0.60)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 10.66 (1.34) | 4.37 (0.83) | 2.16 (0.59)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 12.17 (1.43) | 5.10 (0.90) | 2.67 (0.66)
$\ast$ The actual MSE values are $0.01\times$ the reported values.
Table 1: Estimation results with SCAD for the equation (3.5) when the error
process is AR(1) with the AR coefficient $\rho=0.5$. Mean squared error (MSE)
with standard deviation (SD) values in parentheses are presented. PMWLS
and PWLS stand for penalized modified weighted least squares (our proposed
method) and penalized least squares with a weight matrix, respectively. In the
Methods column, the value of $\rho$ indicates that the weight matrix from the
AR(1) process with $\rho$ as the AR coefficient is considered for estimation.
The value of $(\rho,\phi)$ indicates that the weight matrix from the ARMA(1,1)
process with $\rho$ and $\phi$ as AR and MA coefficients are considered for
estimation. The rows with no $\rho$ or $(\rho,\phi)$ indicate that no weight
matrix is used. Additionally, $\mu$ and $\sigma$ are the mean and standard
deviation values of the error process. Three sample sizes $(n=50,100,\mbox{
and }200)$ are considered to verify sampling properties of the estimators.
AR(1) with $\rho=0.9$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 9.42 (1.31) | 4.14 (0.85) | 2.49 (0.68)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 4.90 (0.83) | 3.65 (0.69) | 1.37 (0.47)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 4.96 (0.79) | 3.76 (0.67) | 1.49 (0.47)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 5.29 (0.82) | 3.76 (0.69) | 1.42 (0.46)
PWLS | 14.59 (1.64) | 4.58 (0.89) | 2.47 (0.68)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 5.14 (0.86) | 2.93 (0.66) | 1.41 (0.48)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 4.95 (0.77) | 3.56 (0.65) | 1.41 (0.46)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 5.08 (0.80) | 3.48 (0.65) | 1.44 (0.47)
$(0.5,0.5)$ | PMWLS | 9.38 (1.35) | 4.51 (0.91) | 2.33 (0.67)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 5.13 (0.84) | 2.95 (0.65) | 1.42 (0.47)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 5.19 (0.82) | 2.84 (0.63) | 1.41 (0.47)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 5.76 (0.84) | 3.22 (0.67) | 1.32 (0.45)
PWLS | 12.02 (1.53) | 7.19 (1.17) | 2.93 (0.72)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 5.53 (0.90) | 3.31 (0.68) | 1.65 (0.50)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 5.25 (0.82) | 3.36 (0.67) | 1.69 (0.50)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 5.75 (0.83) | 3.38 (0.67) | 1.60 (0.48)
$\ast$ The actual MSE values are $0.01\times$ the reported values.
Table 2: Estimation results with SCAD for the equation (3.5) when the error
process is AR(1) with the AR coefficient $\rho=0.9$. The other configurations
are identical to Table 1.
ARMA(1,1) with $\rho=0.8,\phi=0.4$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 5.83 (1.07) | 2.62 (0.71) | 0.83 (0.41)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 3.53 (0.78) | 1.94 (0.57) | 0.50 (0.31)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 3.38 (0.75) | 1.93 (0.55) | 0.50 (0.30)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 3.40 (0.74) | 1.78 (0.54) | 0.46 (0.29)
PWLS | 5.73 (1.05) | 2.66 (0.71) | 0.90 (0.43)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 3.52 (0.78) | 1.95 (0.57) | 0.43 (0.29)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 3.22 (0.73) | 1.93 (0.56) | 0.59 (0.33)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 3.60 (0.75) | 1.89 (0.55) | 0.51 (0.31)
$(0.5,0.5)$ | PMWLS | 4.86 (0.91) | 2.35 (0.67) | 0.85 (0.40)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 4.09 (0.76) | 1.49 (0.50) | 0.58 (0.32)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 4.16 (0.74) | 1.53 (0.50) | 0.58 (0.31)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 4.46 (0.77) | 1.58 (0.51) | 0.55 (0.30)
PWLS | 9.08 (1.30) | 3.07 (0.77) | 1.01 (0.44)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 4.68 (0.80) | 1.86 (0.55) | 0.88 (0.37)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 4.29 (0.73) | 1.81 (0.51) | 0.79 (0.35)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 4.56 (0.75) | 1.94 (0.55) | 0.79 (0.35)
$\ast$ The actual MSE values are $0.01\times$ the reported values.
Table 3: Estimation results with SCAD for the equation (3.5) when the error
process is ARMA(1,1) with AR and MA coefficients $\rho=0.8$ and $\phi=0.4$.
The other configurations are identical to Table 1.
Tables 4-6 demonstrate selection results of PMWLS and PWLS methods with the
SCAD penalty. The results for the LASSO penalty are provided in Tables S4-S6
in the supplementary material. True positive (TP) counts the number of
significant estimates among the significant true parameters and true negative
(TN) counts the number of insignificant estimates among the insignificant true
parameters. As the sample size increases, the TP values approach the true
value. The performance in terms of TN is better for SCAD than for LASSO.
These results are consistent with Theorem 4, since LASSO does not satisfy all
of the conditions in Assumption 2, as discussed. Lastly, the selection
performances of the PMWLS and PWLS methods are comparable. Unlike the
estimation performance, the selection results are comparable across the
choices of weight matrix, including no weight matrix.
AR(1) with $\rho=0.5$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 1.35 | 2.04 | 2.40 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 1.31 | 2.02 | 2.33 | | 17.00 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 1.26 | 1.97 | 2.35 | | 16.97 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.11 | 1.93 | 2.33 | | 17.00 | 17.00 | 17.00
PWLS | 1.27 | 2.07 | 2.38 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 1.28 | 1.98 | 2.35 | | 16.98 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 1.26 | 2.00 | 2.31 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.17 | 1.92 | 2.36 | | 16.98 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 1.51 | 2.03 | 2.39 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 1.38 | 1.99 | 2.39 | | 17.00 | 16.99 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 1.36 | 2.00 | 2.41 | | 16.96 | 17.00 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.19 | 1.90 | 2.39 | | 17.00 | 17.00 | 16.99
PWLS | 1.35 | 1.81 | 2.24 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 1.35 | 1.98 | 2.32 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 1.25 | 1.90 | 2.32 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.20 | 1.80 | 2.25 | | 17.00 | 17.00 | 17.00
| | | | | | | |
Table 4: Selection results with SCAD for the equation (3.5) when the error
process is AR(1) with the AR coefficient $\rho=0.5$. TP counts the number of
significant estimates among the significant true parameters and TN counts the
number of insignificant estimates among the insignificant true parameters. The
other configurations are identical to Table 1.
AR(1) with $\rho=0.9$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 1.69 | 2.04 | 2.50 | | 16.97 | 17.00 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 1.69 | 1.91 | 2.50 | | 17.00 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 1.64 | 1.88 | 2.46 | | 17.00 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.58 | 1.89 | 2.47 | | 17.00 | 17.00 | 17.00
PWLS | 1.62 | 1.99 | 2.49 | | 16.98 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 1.64 | 1.93 | 2.49 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 1.62 | 1.92 | 2.47 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.62 | 1.93 | 2.49 | | 17.00 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 1.83 | 2.05 | 2.49 | | 16.98 | 17.00 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 1.71 | 2.09 | 2.50 | | 17.00 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 1.65 | 2.09 | 2.49 | | 16.98 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.54 | 2.01 | 2.51 | | 16.98 | 17.00 | 17.00
PWLS | 1.68 | 1.98 | 2.35 | | 16.95 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 1.66 | 1.98 | 2.41 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 1.62 | 1.96 | 2.39 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.52 | 1.95 | 2.40 | | 17.00 | 17.00 | 17.00
| | | | | | | |
Table 5: Selection results with SCAD for the equation (3.5) when the error
process is AR(1) with the AR coefficient $\rho=0.9$. The other configurations
are identical to Table 4.
ARMA(1,1) with $\rho=0.8$, $\phi=0.4$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 2.15 | 2.43 | 2.85 | | 17.00 | 16.98 | 16.98
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.15 | 2.40 | 2.83 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.17 | 2.39 | 2.82 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.15 | 2.44 | 2.84 | | 17.00 | 17.00 | 17.00
PWLS | 2.14 | 2.44 | 2.85 | | 16.98 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.16 | 2.40 | 2.85 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.16 | 2.40 | 2.79 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.09 | 2.42 | 2.82 | | 17.00 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 1.93 | 2.53 | 2.79 | | 16.98 | 16.98 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 1.88 | 2.50 | 2.76 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 1.85 | 2.48 | 2.75 | | 17.00 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.82 | 2.49 | 2.77 | | 17.00 | 17.00 | 17.00
PWLS | 1.80 | 2.47 | 2.73 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 1.74 | 2.39 | 2.62 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 1.76 | 2.34 | 2.65 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 1.72 | 2.36 | 2.66 | | 17.00 | 17.00 | 17.00
| | | | | | | |
Table 6: Selection results with SCAD for the equation (3.5) when the error
process is ARMA(1,1) with AR and MA coefficients $\rho=0.8$ and $\phi=0.4$.
The other configurations are identical to Table 4.
Next, we consider the following nonlinear multiplicative model:
$\displaystyle
y_{t}=\frac{1}{1+\exp(-\bm{x}_{t}^{T}\bm{\theta}_{0})}\times\epsilon_{t}.$
(3.6)
For $\epsilon_{t}$, exponentiated AR processes or an exponentiated ARMA process
are considered, since the $\epsilon_{t}$'s in equation (3.6) must be positive.
The AR and ARMA coefficients and the parameter setting of $\bm{\theta}$ are the
same as in model (3.5). We transform the model to the log scale and apply our
approach; a sketch of the data-generating step is given below. We then compare
the results with the PWLS method and an 'additive' method, whose estimator is
computed as if the data came from a nonlinear additive model, without log
transformation. For this simulation, we report results using the SCAD penalty.
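A sketch of one way to generate data from model (3.6) and map it to the log scale is shown below. Placing the Gaussian AR(1) process with mean $\mu$ and standard deviation $\sigma$ on the log scale (so that $\epsilon_t=\exp(z_t)$) is an assumption made for illustration; it is not necessarily the exact specification used in the simulation.

```python
import numpy as np

def simulate_multiplicative(X, theta0, rho=0.5, mu=0.1, sigma=0.5, rng=None):
    """Generate y_t = logistic(x_t' theta0) * eps_t with eps_t = exp(z_t),
    where z_t is a Gaussian AR(1) process; also return the log-scale responses."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    z = np.empty(n)
    z[0] = mu + sigma * rng.standard_normal()
    for t in range(1, n):
        z[t] = mu + rho * (z[t - 1] - mu) + sigma * np.sqrt(1 - rho**2) * rng.standard_normal()
    eps = np.exp(z)                                  # strictly positive errors
    y = eps / (1.0 + np.exp(-X @ theta0))
    # the log transform turns (3.6) into an additive model:
    # log y_t = log f(x_t; theta0) + z_t
    return y, np.log(y)
```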
Tables 7-9 show estimation performances of PMWLS and PWLS methods for the
model given in (3.6) and Tables 10-12 describe selection performances. For
most data generation settings, our proposed method (PMWLS) shows better
results. In particular, when $(\mu,\sigma)=(0.5,0.5)$ and the sample size is
small, the difference in performance between PMWLS and PWLS methods becomes
more evident. Hence, we argue that PMWLS is preferred over PWLS in practice
since PMWLS shows better finite sample performance. In terms of choice of
weight matrices, the results are similar to those in the first simulation
study. That is, estimation performance is better when we use a weight matrix
for both approaches while the selection performances are comparable with and
without a weight matrix.
AR(1) with $\rho=0.5$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 0.88 (0.42) | 0.32 (0.26) | 0.15 (0.17)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.79 (0.39) | 0.20 (0.20) | 0.08 (0.12)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.85 (0.41) | 0.20 (0.20) | 0.08 (0.13)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.87 (0.41) | 0.24 (0.22) | 0.08 (0.13)
PWLS | 0.91 (0.42) | 0.32 (0.25) | 0.13 (0.15)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.73 (0.38) | 0.18 (0.19) | 0.07 (0.12)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.77 (0.39) | 0.21 (0.20) | 0.07 (0.12)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.83 (0.40) | 0.23 (0.21) | 0.08 (0.13)
$(0.5,0.5)$ | PMWLS | 0.68 (0.37) | 0.30 (0.25) | 0.14 (0.16)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.56 (0.33) | 0.19 (0.19) | 0.08 (0.12)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.64 (0.36) | 0.18 (0.19) | 0.09 (0.14)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.75 (0.38) | 0.22 (0.21) | 0.11 (0.15)
PWLS | 0.78 (0.39) | 0.30 (0.24) | 0.14 (0.17)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.71 (0.36) | 0.25 (0.21) | 0.10 (0.13)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.74 (0.37) | 0.25 (0.22) | 0.08 (0.13)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.93 (0.41) | 0.26 (0.23) | 0.12 (0.15)
| | | |
Table 7: Estimation results with SCAD for the equation (3.6) when the error
process is the exponentiated AR(1) with $\rho=0.5$. Mean squared error (MSE)
with standard deviation (SD) values in parentheses are presented. The
other configurations are identical to Table 1.
AR(1) with $\rho=0.9$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 0.61 (0.35) | 0.24 (0.22) | 0.13 (0.16)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.25 (0.22) | 0.03 (0.08) | 0.02 (0.06)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.32 (0.25) | 0.09 (0.14) | 0.01 (0.05)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.40 (0.28) | 0.03 (0.08) | 0.01 (0.05)
PWLS | 0.76 (0.39) | 0.22 (0.21) | 0.12 (0.16)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.27 (0.23) | 0.11 (0.15) | 0.02 (0.06)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.39 (0.27) | 0.09 (0.14) | 0.01 (0.05)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.40 (0.28) | 0.10 (0.14) | 0.01 (0.05)
$(0.5,0.5)$ | PMWLS | 0.55 (0.33) | 0.28 (0.24) | 0.13 (0.16)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.16 (0.18) | 0.08 (0.13) | 0.02 (0.06)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.17 (0.18) | 0.07 (0.12) | 0.01 (0.05)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.19 (0.19) | 0.05 (0.10) | 0.02 (0.06)
PWLS | 0.47 (0.31) | 0.27 (0.23) | 0.14 (0.17)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.22 (0.20) | 0.10 (0.13) | 0.03 (0.06)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.23 (0.21) | 0.11 (0.14) | 0.01 (0.05)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.22 (0.21) | 0.07 (0.12) | 0.02 (0.06)
| | | |
Table 8: Estimation results with SCAD for the equation (3.6) when the error
process is the exponentiated AR(1) with $\rho=0.9$. The other configurations
are identical to Table 7.
ARMA(1,1) with $\rho=0.8,\phi=0.4$
---
$(\mu,\sigma)$ | Methods | $n=50$ | $n=100$ | $n=200$
$(0.1,0.5)$ | PMWLS | 0.37 (0.27) | 0.14 (0.17) | 0.09 (0.14)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.18 (0.19) | 0.04 (0.08) | 0.02 (0.06)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.17 (0.19) | 0.03 (0.08) | 0.01 (0.05)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.12 (0.15) | 0.04 (0.09) | 0.01 (0.05)
PWLS | 0.29 (0.24) | 0.16 (0.17) | 0.09 (0.13)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.18 (0.19) | 0.04 (0.09) | 0.02 (0.06)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.17 (0.19) | 0.03 (0.08) | 0.01 (0.05)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.12 (0.16) | 0.04 (0.09) | 0.01 (0.05)
$(0.5,0.5)$ | PMWLS | 0.44 (0.30) | 0.15 (0.18) | 0.07 (0.12)
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 0.19 (0.19) | 0.04 (0.09) | 0.02 (0.06)
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 0.18 (0.19) | 0.03 (0.08) | 0.01 (0.05)
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.21 (0.20) | 0.04 (0.08) | 0.02 (0.06)
PWLS | 0.42 (0.29) | 0.15 (0.17) | 0.06 (0.11)
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 0.25 (0.21) | 0.06 (0.10) | 0.04 (0.08)
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 0.23 (0.21) | 0.05 (0.10) | 0.02 (0.06)
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 0.32 (0.25) | 0.03 (0.08) | 0.02 (0.06)
| | | |
Table 9: Estimation results with SCAD for the equation (3.6) when the error
process is the exponentiated ARMA(1,1) with AR and MA coefficients $\rho=0.8$
and $\phi=0.4$. The other configurations are identical to Table 7.
AR(1) with $\rho=0.5$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 2.88 | 2.99 | 3.00 | | 16.83 | 16.92 | 16.94
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.85 | 2.99 | 3.00 | | 16.90 | 16.93 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.83 | 2.98 | 3.00 | | 16.89 | 16.99 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.84 | 2.98 | 3.00 | | 16.95 | 16.98 | 16.99
PWLS | 2.87 | 2.99 | 3.00 | | 16.84 | 16.94 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.87 | 2.99 | 3.00 | | 16.89 | 16.98 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.84 | 2.98 | 3.00 | | 16.98 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.84 | 2.98 | 3.00 | | 17.00 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 2.94 | 2.99 | 3.00 | | 16.84 | 16.94 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.92 | 2.99 | 3.00 | | 16.85 | 16.97 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.91 | 2.99 | 2.99 | | 16.93 | 16.97 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.89 | 2.99 | 2.99 | | 16.85 | 16.97 | 16.99
PWLS | 2.92 | 2.99 | 3.00 | | 16.82 | 16.94 | 16.96
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.82 | 2.96 | 2.99 | | 16.91 | 16.98 | 16.99
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.79 | 2.95 | 3.00 | | 16.95 | 17.00 | 16.99
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.76 | 2.96 | 2.99 | | 17.00 | 16.99 | 16.99
| | | | | | | |
Table 10: Selection results with SCAD for the equation (3.6) when the error
process is the exponentiated AR(1) with $\rho=0.5$. TP counts the number of
significant estimates among the significant true parameters and TN counts the
number of insignificant estimates among the insignificant true parameters. The
other configurations are identical to Table 4.
AR(1) with $\rho=0.9$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 2.94 | 3.00 | 3.00 | | 16.85 | 16.92 | 16.96
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.93 | 3.00 | 3.00 | | 16.94 | 16.99 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.90 | 2.98 | 3.00 | | 16.99 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.88 | 3.00 | 3.00 | | 17.00 | 17.00 | 17.00
PWLS | 2.91 | 3.00 | 3.00 | | 16.75 | 16.98 | 16.97
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.93 | 2.98 | 3.00 | | 16.98 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.88 | 2.98 | 3.00 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.88 | 2.98 | 3.00 | | 17.00 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 2.97 | 2.99 | 3.00 | | 16.74 | 16.90 | 16.97
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.96 | 2.98 | 3.00 | | 16.99 | 16.98 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.95 | 2.98 | 3.00 | | 16.97 | 17.00 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.95 | 2.99 | 3.00 | | 16.96 | 17.00 | 17.00
PWLS | 2.97 | 2.99 | 3.00 | | 16.88 | 16.97 | 16.96
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.94 | 2.98 | 3.00 | | 16.97 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.92 | 2.96 | 3.00 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.93 | 2.98 | 3.00 | | 17.00 | 17.00 | 17.00
| | | | | | | |
Table 11: Selection results with SCAD for the equation (3.6) when the error
process is the exponentiated AR(1) with $\rho=0.9$. The other configurations
are identical to Table 10.
ARMA(1,1) with $\rho=0.8,\phi=0.4$
---
$(\mu,\sigma)$ | Methods | TP | | TN
50 | 100 | 200 | | 50 | 100 | 200
$(0.1,0.5)$ | PMWLS | 2.99 | 3.00 | 3.00 | | 16.81 | 16.96 | 16.92
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.96 | 3.00 | 3.00 | | 16.94 | 16.99 | 16.98
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.96 | 3.00 | 3.00 | | 17.00 | 16.99 | 16.99
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.98 | 3.00 | 3.00 | | 17.00 | 17.00 | 16.99
PWLS | 2.99 | 3.00 | 3.00 | | 16.93 | 16.98 | 16.92
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.96 | 3.00 | 3.00 | | 16.98 | 16.99 | 16.99
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.96 | 3.00 | 3.00 | | 17.00 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.98 | 3.00 | 3.00 | | 17.00 | 17.00 | 17.00
$(0.5,0.5)$ | PMWLS | 2.98 | 3.00 | 3.00 | | 16.70 | 16.94 | 16.95
$\mbox{PMWLS }{\tiny(\rho=0.5)}$ | 2.95 | 3.00 | 3.00 | | 16.98 | 16.98 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.9)}$ | 2.95 | 3.00 | 3.00 | | 16.99 | 16.99 | 17.00
$\mbox{PMWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.94 | 3.00 | 3.00 | | 17.00 | 16.99 | 16.99
PWLS | 2.97 | 3.00 | 3.00 | | 16.76 | 16.95 | 16.99
$\mbox{PWLS }{\tiny(\rho=0.5)}$ | 2.93 | 3.00 | 3.00 | | 16.91 | 16.98 | 16.99
$\mbox{PWLS }{\tiny(\rho=0.9)}$ | 2.92 | 2.99 | 3.00 | | 16.97 | 17.00 | 17.00
$\mbox{PWLS }{\tiny(\rho=0.8,\phi=0.4)}$ | 2.90 | 3.00 | 3.00 | | 16.97 | 17.00 | 17.00
| | | | | | | |
Table 12: Selection results with SCAD for the equation (3.6) when the error
process is the exponentiated ARMA(1,1) with AR and MA coefficients $\rho=0.8$
and $\phi=0.4$. The other configurations are identical to Table 10.
In the comparison between PMWLS and the additive method, where the additive
estimator is computed as if the data came from a nonlinear additive model
without log transformation, the MSE values for the additive method are large.
Detailed results are provided in Tables S7 and S8 in the supplementary
material. As pointed out by Bhattacharyya et al. (1992), fitting a
mis-specified additive nonlinear model when the true model is a multiplicative
nonlinear model may lead to an inconsistent estimator. On the other hand, both
PMWLS and the additive method successfully discriminate significant from
insignificant parameters, but PMWLS outperformed the additive method in terms
of TP.
## 4 Application to a head-neck position tracking system
In this section, we apply our PMWLS method to the parametric nonlinear model
of the head-neck position tracking task, which is introduced in detail in
Ramadan et al. (2018). The penalization uses the SCAD penalty. The weight
matrix is chosen by fitting the residuals from the PMWLS method without a
weight matrix to an ARMA(1,1) process; the weight matrix for the PWLS method
is constructed in a similar way (a sketch of this construction follows). The
choice of the weight matrix in real data analysis can be flexible, but fitting
residuals from the unweighted method to an ARMA(1,1) process worked well even
for data with slowly decreasing autocorrelation.
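A sketch of this residual-based construction is given below; it assumes statsmodels is available and that the residuals from the unweighted fit are already computed. Using the fitted ARMA(1,1) autocorrelations to form a Toeplitz correlation matrix is an illustrative choice rather than the exact implementation.

```python
import numpy as np
from scipy.linalg import toeplitz, sqrtm
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import arma_acf

def weight_matrix_from_residuals(resid):
    """Fit an ARMA(1,1) model to the residuals of the unweighted fit and
    return the row-normalized square root of the inverse correlation matrix."""
    n = resid.shape[0]
    fit = ARIMA(resid, order=(1, 0, 1)).fit()
    ar = np.r_[1.0, -fit.arparams]      # statsmodels lag-polynomial convention
    ma = np.r_[1.0, fit.maparams]
    acf = arma_acf(ar, ma, lags=n)      # autocorrelations at lags 0, ..., n-1
    corr = toeplitz(acf)
    w = np.real(sqrtm(np.linalg.inv(corr)))
    return w / w.sum(axis=1, keepdims=True)
```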
There are ten subjects in total, and each subject participated in three trials
of the experiment. In each trial, the subject's head-neck movement was recorded
as an angle (in radians) for 30 seconds at a measurement frequency of 60 Hz,
i.e., 1800 observations per trial. Reference signals served as inputs that the
subjects were asked to follow with their eyes. Since the neurophysiological
parameters differ across subjects, the model is fitted separately to each
subject.
Parameters | Max | Min | Description
---|---|---|---
$K_{vis}\left[\frac{Nm}{rad}\right]$ | $10^{3}$ | 50 | Visual feedback gain
$K_{vcr}\left[\frac{Nms^{2}}{rad}\right]$ | $10^{4}$ | 500 | Vestibular feedback gain
$K_{ccr}\left[\frac{Nm}{rad}\right]$ | 300 | 1 | Proprioceptive feedback gain
$\tau[s]$ | 0.4 | 0.1 | Visual feedback delay
$\tau_{1A}[s]$ | 0.2 | 0.01 | Lead time constant of the irregular vestibular afferent neurons
$\tau_{CNS1}[s]$ | 1 | 0.05 | Lead time constant of the central nervous system
$\tau_{C}[s]$ | 5 | 0.1 | Lag time constant of the irregular vestibular afferent neurons
$\tau_{CNS2}[s]$ | 60 | 5 | Lag time constant of the central nervous system
$\tau_{MS1}[s]$ | 1 | 0.01 | First lead time constant of the neck muscle spindle
$\tau_{MS2}[s]$ | 1 | 0.01 | Second lead time constant of the neck muscle spindle
$B\left[\frac{Nms}{rad}\right]$ | 5 | 0.1 | Intrinsic damping
$K\left[\frac{Nm}{rad}\right]$ | 5 | 0.1 | Intrinsic stiffness
Table 13: The neurophysiological parameters of the head-neck position tracking
model. The notation and description are adopted from Ramadan et al. (2018).
Max and Min are the range of the parameter values. The values in brackets are
units of the parameters.
The parametric model of the head-neck position tracking system involves a
highly nonlinear structure with 12 parameters to be estimated. Table 13 shows
a list of the parameters for the head-neck position tracking system. Each
parameter measures a neurophysiological function such as visual feedback gain,
vestibular feedback gain, and so on. The parametric model of the head-neck
position tracking task suffers from its complicated structure and limited data
availability, which could bring overfitting and non-identifiability. Thus, a
penalized method has been considered to stabilize parameter estimation and
build a sparse model for identifiability (Ramadan et al., 2018; Yoon et al.,
2022). The penalization is applied so that the parameter values are shrunk
toward pre-specified values, called the typical values, rather than toward
zero, since the parameters in the model have physical meanings. To determine
the typical values, we pre-estimated the parameters 10 times and set the
average as the typical values, $\bm{\tilde{\theta}}$. When overfitting is a
serious concern, the obtained estimates tend to have non-negligible gaps from
the true optimum; that is, small changes in the initial points can produce
relatively large deviations among the fitted estimates. Hence, averaging over
the overfitted estimates may yield a new estimate much closer to the true
optimum, which is why we chose the average of the pre-estimated values as our
typical values. We illustrate this in Figure 3. The overfitted estimates
($\bm{\tilde{\theta}}_{1},\bm{\tilde{\theta}}_{2},\bm{\tilde{\theta}}_{3},\
\mbox{and}\ \bm{\tilde{\theta}}_{4}$) in Figure 3 are spread out widely around
the true optimum $\bm{\theta}_{0}$, but the average, $\bm{\tilde{\theta}}$, is
closer to $\bm{\theta}_{0}$.
Figure 3: Illustration of the overfitted estimates. The background pattern
describes an exemplifying objective function to maximize. The lighter colors
mean higher values of the objective function. $\bm{\theta}_{0}$ indicates the
true maximum and
$\bm{\tilde{\theta}}_{1},\bm{\tilde{\theta}}_{2},\bm{\tilde{\theta}}_{3},\
\mbox{and}\ \bm{\tilde{\theta}}_{4}$ stand for the obtained estimates from 4
different pre-estimations and $\bm{\tilde{\theta}}$ stands for the average of
$\bm{\tilde{\theta}}_{1},\bm{\tilde{\theta}}_{2},\bm{\tilde{\theta}}_{3},\
\mbox{and}\ \bm{\tilde{\theta}}_{4}$.
As described in the introduction, the fitted values from the additive error
model with the penalized ordinary least squares method used in Yoon et al.
(2022) indicate the possibility of multiplicative errors, and their residuals
still exhibit temporal dependence. Thus, we apply our approach to estimating
the neurophysiological parameters, $\bm{\theta}$, of the head-neck position
tracking model. We then compare our PMWLS method to an additive approach
without log transformation (Yoon et al., 2022) and to the PWLS method. Note
that both the PMWLS and PWLS methods are applied after log transformation,
assuming multiplicative errors. For the comparison, we consider the variance
accounted for (VAF), which is defined as
${\text{VAF}}(\hat{\bm{\theta}})(\%)=\left[1-\frac{\sum^{n}_{t=1}(y_{t}-\hat{y}_{t})^{2}}{\sum^{n}_{t=1}y_{t}^{2}}\right]\times
100,$ where $\hat{y}_{t}$ is the $t$-th component of
$\bm{f}(\bm{x},\hat{\bm{\theta}})$. VAF has been frequently used to assess the
fit of the obtained estimates in biomechanics (Van Drunen et al., 2013;
Ramadan et al., 2018; Yoon et al., 2022). As is evident from this expression,
estimates with higher VAF values correspond to estimates with lower MSE
values.
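For reference, VAF as defined above could be computed as follows.

```python
import numpy as np

def vaf(y, y_hat):
    """Variance accounted for, in percent."""
    return (1.0 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2)) * 100.0
```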
Recall that there are three sets of measurements (three trials) per subject.
We used the measurements from one trial (train set) to fit the model and the
measurements from two other trials (test set) were used to test the fitted
model. Thus, we have one VAF value using the train set and two VAF values
using the test set. We repeat this for each trial as a train set and report
the average VAF values.
Train
---
No. | Additive | PWLS | PMWLS
1 | 8453 | 8472 | 8471
2 | 6896 | 6927 | 7139
3 | 8229 | 8229 | 8229
4 | 8476 | 8464 | 8459
5 | 8424 | 8516 | 8647
6 | 8824 | 8795 | 8846
7 | 9308 | 9308 | 9308
8 | 7900 | 8624 | 8771
9 | 8857 | 8858 | 8842
10 | 7672 | 7672 | 7668
Average | 8304 | 8386 | 8438
Test
---
No. | Additive | PWLS | PMWLS
1 | 8815 | 8810 | 8823
2 | 5785 | 5827 | 6079
3 | 7680 | 7681 | 7680
4 | 8064 | 8058 | 8066
5 | 8350 | 8417 | 8502
6 | 8821 | 8816 | 8819
7 | 9114 | 9114 | 9114
8 | 8187 | 8975 | 9089
9 | 8276 | 8248 | 8327
10 | 7842 | 7842 | 7837
Average | 8093 | 8179 | 8234
$\ast$ The actual VAF values are $10^{-4}\times$ the reported values.
Table 14: VAFs for 10 subjects. ‘No.’ refers to the subject number. ‘Additive’
refers to the additive method studied in Yoon et al. (2022), ‘PWLS’ refers to
the method that includes an intercept term, and ‘PMWLS’ refers to our proposed
method. Both the PWLS and PMWLS methods use weight matrices in the objective
function. The Train panel is for the train set and the Test panel is for the
test set.
Figure 4: Estimation and prediction results for the subject No.2 with measured
responses (black line) and fitted values from the additive approach studied in
Yoon et al. (2022) (blue dashed line, $--$), PWLS (yellow dot-dashed line,
$\cdot-\cdot$), and PMWLS (red dotted line, $\cdots$). The upper plot shows
the estimation result using the measurements from one exemplary trial and the
lower plot shows the prediction result for another trial using the fitted
model.
Figure 5: Estimation and prediction results for the subject No. 8 with
measured responses (black line) and fitted values from the additive approach
studied in Yoon et al. (2022) (blue dashed line, $--$), PWLS (yellow dot-
dashed line, $\cdot-\cdot$), and PMWLS (red dotted line, $\cdots$). The upper
plot shows the estimation result using the measurements from one exemplary
trial and the lower plot shows the prediction result for another trial using
the fitted model.
Table 14 shows VAF values of all ten subjects for the additive approach, PWLS
method and PMWLS method. The VAF values among different methods for some
subjects (No. 1, 3, 4, 7, 9 and 10) are similar and the differences are small.
On the other hand, the VAF values from our PMWLS method are higher than the
VAF values from the other methods for the subjects No. 2, 5, 6 and 8. The
difference among the three approaches is clearer when the VAF values are
averaged over all subjects. When all the parameter values are set to the
typical values, i.e., no penalized estimation is applied, the averages of the
VAF values over all subjects are 0.8149 for the train set and 0.7928 for the
test set. Relative to this baseline, the improvement achieved by our approach
is larger than the improvements achieved by the other methods. We believe
these results originate from the fact that the multiplicative error assumption
is valid and that our PMWLS method successfully captures the error structure
underlying the data.
The estimation and prediction results for the subjects No.2 and No.8 are
provided in Figures 4 and 5, respectively. The upper plots in both Figures 4
and 5 exhibit estimation results from one trial out of three trials per
subject as an example. The lower plots in both Figures 4 and 5 show prediction
results for another trial using the fitted model. In both plots, our PMWLS
method (the red dotted line) is better at capturing peaks of the measured
response than the additive method (the blue dashed line), the observation that
motivated this study in the first place. We believe our approach outperforms
the additive approach because the data implicitly have a multiplicative
structure. In addition, the PMWLS method also slightly outperforms the PWLS
method in both estimation and prediction. Therefore, one can benefit from
adopting our proposed method for data with multiplicative structure. The
average number of selected parameters, i.e., sensitive parameters, was 2.33
(additive), 3.23 (PWLS), and 3.63 (PMWLS) per subject. This might imply that
the smaller number of selected parameters in the additive approach contributes
to its poorer estimation and prediction performance.
## 5 Conclusion
We proposed an estimation and selection method for the parameters in a
nonlinear model when the errors have non-zero mean and temporal dependence.
Our approach can also handle the multiplicative error model as shown in the
simulation study and the real data example. A simple alternative is to add an
intercept term to the nonlinear model and estimate the parameters by least
squares to handle a possible non-zero mean. Simulation results show that the
two approaches are overall comparable, but our approach performs better than
the intercept-based approach when the non-zero mean is large and the sample
size is rather small. We provided asymptotic properties of the proposed
estimator, and numerical studies supported our theoretical results.
Introducing a weight matrix to reflect the temporal dependence improves
estimation. One could assume a parametric model for temporal dependence of the
error process and construct a weight matrix from the covariance matrix of the
error process. However, the class of error processes covered by our
theoretical results would then be confined to those parametric models.
Moreover, the simulation results show little difference across choices of
weight matrix. Hence, we keep the freedom to choose a weight matrix without
assuming an exact model for the temporal dependence.
The proposed method was successfully applied to the head-neck position
tracking model and produced state-of-the-art performance in both estimation
and prediction. Since multiplicative structure with temporally correlated
errors is frequently observed in other fields such as finance, signal
processing, and image processing, the use of the proposed method is not
limited to the application considered in this study.
The proposed method assumes a fixed number of parameters. Hence, it is natural
to consider an extension in which the number of parameters grows with the
sample size, which we leave for future work. Since we shrink the estimates
toward the typical values, obtaining good typical values can play a critical
role in the final results. Therefore, a better way to locate the typical
values, although the task is very challenging due to the complicated structure
of the nonlinear function, would contribute greatly to solving the head-neck
position tracking problem. In our analysis, we used the same typical values as
Yoon et al. (2022) and Ramadan et al. (2018) so that the comparison was fair.
## Supplementary Materials
The supplementary materials contain proofs of theoretical results discussed in
the text and additional simulation studies.
## Acknowledgements
Wu was supported by the Ministry of Science and Technology of Taiwan under
grant MOST 111-2118-M-259-002-. Choi and Lim were supported by the National
Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT)
(NRF-2019R1A2C1002213, 2020R1A4A1018207, 2021R1A2B5B01002620).
## References
* Athreya and Pantula (1986) Athreya, K. B. and S. G. Pantula (1986). A note on strong mixing of arma processes. Statistics $\&$ Probability Letters 4, 187–190.
* Baker and Foley (2011) Baker, K. R. and K. M. Foley (2011). A nonlinear regression model estimating single source concentrations of primary and secondarily formed pm2.5. Atmospheric Environment 45, 3758–3767.
* Bhattacharyya et al. (1992) Bhattacharyya, B., T. Khoshgoftaar, and G. Richardson (1992). Inconsistent m-estimators: Nonlinear regression with multiplicative error. Statistics $\&$ Probability Letters 14, 407–411.
* Bradley (2005) Bradley, R. C. (2005). Basic properties of strong mixing conditions. a survey and some open questions. Probability Surveys 2, 107–144.
* Breheny and Huang (2011) Breheny, P. and J. Huang (2011). Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics 5(1), 232–253.
* Chen et al. (2002) Chen, K. J., E. Keshner, B. Peterson, and T. Hain (2002). Modeling head tracking of visual targets. Journal of Vestibular Research 12(1), 25–33.
* Fan and Li (2001) Fan, J. and R. Li (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association 96(456), 1348–1360.
* Forbes et al. (2013) Forbes, P. A., E. de Bruijn, A. C. Schouten, F. C. van der Helm, and R. Happee (2013). Dependency of human neck reflex responses on the bandwidth of pseudorandom anterior-posterior torso perturbations. Experimental Brain Research 226(1), 1–14.
* Geller and Neumann (2018) Geller, J. and M. H. Neumann (2018). Improved local polynomial estimation in time series regression. Journal of Nonparametric Statistics 30(1), 1–27.
* Grenander (1954) Grenander, U. (1954). On the estimation of regression coefficients in the case of an autocorrelated disturbance. The Annals of Mathematical Statistics 25(2), 252–272.
* Jennrich (1969) Jennrich, R. I. (1969). Asymptotic properties of nonlinear least squares estimators. The Annals of Mathematical Statistics 40(2), 633–643.
* Jiang et al. (2012) Jiang, X., J. Jiang, and X. Song (2012). Oracle model selection for nonlinear models based on weighted composite quantile regression. Statistica Sinica 22, 1479–1506.
* Lai (1994) Lai, T. L. (1994). Asymptotic properties of nonlinear least squares estimates in stochastic regression models. The Annals of Statistics 22(4), 1917–1930.
* Lim et al. (2014) Lim, C., M. Meerschaert, and H.-P. Scheffler (2014). Parameter estimation for operator scaling random fields. Journal of Multivariate Analysis 123, 172–183.
* Lv et al. (2014) Lv, Z., H. Zhu, and K. Yu (2014). Robust variable selection for nonlinear models with diverging number of parameters. Statistics and Probability Letters 91, 90–97.
* Machkouri et al. (2017) Machkouri, M. E., K. Es-Sebaiy, and I. Ouassou (2017). On local linear regression for strongly mixing random fields. Journal of Multivariate Analysis 156, 103–115.
* Moon et al. (2012) Moon, K.-H., S. W. Han, T. S. Lee, and S. W. Seok (2012). Approximate mpa-based method for performing incremental dynamic analysis. Nonlinear Dynamics 67, 2865–2888.
* Paula et al. (2015) Paula, D., B. Linard, D. A. Andow, E. R. Sujii, C. S. S. Pires, and A. P. Vogler (2015). Detection and decay rates of prey and prey symbionts in the gut of a predator through metagenomics. Molecular Ecology Resources 15, 880–892.
* Peligrad and Utev (1997) Peligrad, M. and S. Utev (1997). Central limit theorem for linear processes. The Annals of Probability 25(1), 443–456.
* Peng et al. (1996) Peng, G., T. Hain, and B. Peterson (1996). A dynamical model for reflex activated head movements in the horizontal plane. Biological Cybernetics 75(4), 309–319.
* Pollard and Radchenko (2006) Pollard, D. and P. Radchenko (2006). Nonlinear least-squares estimation. Journal of Multivariate Analysis 97, 548–562.
* Ramadan et al. (2018) Ramadan, A., C. Boss, J. Choi, N. P. Reeves, J. Cholewicki, J. M. Popovich, and C. J. Radcliffe (2018). Selecting sensitive parameter subsets in dynamical models with application to biomechanical system identification. Journal of Biomechanical Engineering 140(7), 074503.
* Roussas et al. (1992) Roussas, G. G., L. T. Tran, and D. Ioannides (1992). Fixed design regression for time series: Asymptotic normality. Journal of Multivariate Analysis 40, 262–291.
* Santos and Barreto (2017) Santos, J. D. A. and G. A. Barreto (2017). An outlier-robust kernel rls algorithm for nonlinear system identification. Nonlinear Dynamics 90, 1707–1726.
* Tibshirani (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological) 58(1), 267–288.
* Van de Geer (1990) Van de Geer, S. (1990). Estimating a regression function. The Annals of Statistics 18(2), 907–924.
* Van Drunen et al. (2013) Van Drunen, P., E. Maaswinkel, F. Van der Helm, J. Van Dieën, and R. Happee (2013). Identifying intrinsic and reflexive contributions to low-back stabilization. Journal of Biomechanics 46(8), 1440–1446.
* Wang et al. (2007) Wang, H., R. Li, and C.-L. Tsai (2007). Tuning parameter selectors for the smoothly clipped absolute deviation method. Biometrika 94(3), 553–568.
* Wang and Zhu (2009) Wang, H. and J. Zhu (2009). Variable selection in spatial regression via penalized least squares. The Canadian Journal of Statistics 37(4), 607–624.
* Wood (2010) Wood, S. N. (2010). Statistical inference for noisy nonlinear ecological dynamic systems. Nature 466, 1102–1107.
* Wu (1981) Wu, C.-F. (1981). Asymptotic theory of nonlinear least squares estimation. The Annals of Statistics, 501–513.
* Wu et al. (2014) Wu, S., H. Xue, Y. Wu, and H. Wu (2014). Variable selection for sparse high-dimensional nonlinear regression models by combining nonnegative garrote and sure independence screening. Statistica Sinica 24, 1365–1387.
* Xu and Shimada (2000) Xu, P. and S. Shimada (2000). Least squares parameter estimation in multiplicative noise models. Communications in Statistics - Simulation and Computation 29, 83–96.
* Yang et al. (2017) Yang, W., Y. Wang, and S. Hu (2017). Some probability inequalities of least-squares estimator in non linear regression model with strong mixing errors. Communications in Statistics – Theory and Methods 46, 165–175.
* Yoon et al. (2022) Yoon, K., H. You, W.-Y. Wu, C. Y. Lim, J. Choi, C. Boss, A. Ramadan, J. M. Popovich Jr, J. Cholewicki, N. P. Reeves, et al. (2022). Regularized nonlinear regression for simultaneously selecting and estimating key model parameters: Application to head-neck position tracking. Engineering Applications of Artificial Intelligence 113, 104974.
* Zhang and Liang (2012) Zhang, J.-J. and H.-Y. Liang (2012). Asymptotic normality of estimators in heteroscedastic semi-parametric model with strong mixing errors. Communications in Statistics – Theory and Methods 41, 2172–2201.
Hojun You, Department of Mathematics, University of Houston, TX, 77056, United
States of America E-mail<EMAIL_ADDRESS>
Kyubaek Yoon, School of Mechanical Engineering, Yonsei University E-mail:
<EMAIL_ADDRESS>Wei-Ying Wu, Department of Applied Mathematics,
National Dong Hwa University E-mail<EMAIL_ADDRESS>Jongeun
Choi, School of Mechanical Engineering, Yonsei University E-mail:
<EMAIL_ADDRESS>Chae Young Lim , Department of Statistics, Seoul
National University E-mail<EMAIL_ADDRESS>
# Mixed Dimension Embeddings with Application to Memory-Efficient
Recommendation Systems
A.A. Ginart1, Maxim Naumov2, Dheevatsa Mudigere2, Jiyan Yang2, James Zou1
1Stanford University, Palo Alto, California, {tginart<EMAIL_ADDRESS>2Facebook, Inc. Menlo Park, California, {mnaumov, dheevatsa,
<EMAIL_ADDRESS>
###### Abstract
Embedding representations power machine intelligence in many applications,
including recommendation systems, but they are space intensive — potentially
occupying hundreds of gigabytes in large-scale settings. To help manage this
outsized memory consumption, we explore _mixed dimension embeddings_ , an
embedding layer architecture in which a particular embedding vector’s
dimension scales with its query frequency. Through theoretical analysis and
systematic experiments, we demonstrate that using mixed dimensions can
drastically reduce the memory usage, while maintaining and even improving the
ML performance. Empirically, we show that the proposed mixed dimension layers
improve accuracy by 0.1% using half as many parameters or maintain it using
16$\times$ fewer parameters for click-through rate prediction on the Criteo
Kaggle dataset. They also train over 2$\times$ faster on a GPU.
## I Introduction
Embedding representations power state-of-the-art applications in diverse
domains, including computer vision [1, 2], natural language processing [3, 4,
5], and recommendation systems [6, 7, 8]. It is standard practice to embed
objects into $\mathbb{R}^{d}$ at a fixed uniform dimension (UD) $d$. When the
embedding dimension $d$ is too low, the downstream statistical performance
suffers [9]. When $d$ is high and the number of objects to represent is large,
memory consumption becomes an issue. For example, in recommendation models,
the embedding layer can make up more than $99.9\%$ of the memory it takes to
store the model, and in large-scale settings, it could consume hundreds of
gigabytes or even terabytes [7, 10]. Therefore, finding innovative embedding
representations that use fewer parameters while preserving statistical
performance of the downstream model is an important challenge.
Object frequencies are often heavily skewed in real-world applications. For
instance, for the full MovieLens dataset, the top 10% of users receive as many
queries as the remaining 90% and the top 1% of items receive as many queries
as the remaining 99%. To an even greater extent, on the Criteo Kaggle dataset
the top $0.0003\%$ of indices receive as many queries as the remaining
$\sim$32 million. To leverage the heterogeneous object popularity in
recommendation, we propose mixed dimension (MD) embedding layers, in which the
dimension of a particular object’s embedding scales with that object’s
popularity rather than remaining uniform. Our case studies and theoretical
analysis demonstrate that MD embeddings work well because they neither
underfit popular embeddings nor waste parameters on rare embeddings. Additionally,
MD embeddings minimize popularity-weighted loss at test time by efficiently
allocating parameters.
In Section 3, we introduce the proposed architecture for the embedding layer.
In Section 4, we theoretically investigate MD embeddings. Our theoretical
framework splits embedding-based recommendation systems into either the _data-
limited regime_ or the _memory-limited regime_ , depending on the parameter
budget and sample size. We prove mathematical guarantees, which demonstrate
that when the frequency of categorical values is sufficiently skewed, MD
embeddings are both better at matrix recovery and incur lower reconstruction
distortion than UD embeddings. Our method is faster to train while requiring
far less tuning than other non-uniform embedding layers. In Section 5, we
demonstrate that MD embeddings improve both parameter-efficiency and training
time in click through rate (CTR) prediction tasks.
Summary of Contributions:
(1) We propose an MD embeddings layer for recommendation systems and provide a
novel, mathematical method for sizing the dimension of features with variable
popularity that is fast to train, easy to tune, and performs well empirically.
(2) With matrix completion and factorization models, we prove that with
sufficient popularity skew, mixed dimension embeddings incur lower distortion
when memory-limited and generalize better when data-limited.
(3) For the memory-limited regime we derive the _optimal_ feature dimension.
This dimension only depends on the feature’s _popularity_ , the parameter
budget, and the _singular-value spectrum_ of the pairwise interactions.
## II Background & Problem Formulation
We review the CTR prediction task here (more details in Appendix). Compared to
canonical collaborative filtering (CF), CTR prediction tasks include
additional context that can be incorporated to predict user-item interactions.
These contextual features are expressed through sets of indices (categorical)
and floating point values (continuous). These features can represent arbitrary
details about the context of an on-click or personalization event. The $i$-th
categorical feature can be represented by an index $x_{i}\in\\{1,...,n_{i}\\}$
for $i=1,...,\kappa$. In addition to $\kappa$ categorical features, we also
have $s$ scalar features, together producing a dense feature vector
$\textbf{x}^{\prime}\in\mathbb{R}^{s}$. Thus, given some
$(x_{1},...,x_{\kappa},\textbf{x}^{\prime})\in([n_{1}]\times...\times[n_{\kappa}])\times\mathbb{R}^{s}$,
we would like to predict $y\in\\{0,1\\}$, which denotes a click event in
response to a particular personalized context.
We use the state-of-the-art deep learning recommendation model (DLRM) [11] as
an off-the-shelf deep model. Various deep CTR prediction models, including [6,
12, 13, 11, 14, 15], are powered by memory-intensive embedding layers that
utterly dwarf the rest of the model. The trade-off between the size of the
embedding layer and statistical performance seems unavoidable.
minimization (ERM) and back-propagation. For a given model $f_{\theta}$
(parameterized by $\theta$) the standard practice is to represent categorical
features with some indexed embedding layer $E$. The ERM objective is then:
$\min_{\theta,E}\sum_{i\in\mathcal{D}}\ell\left(f_{\theta}(\mathbf{x^{\prime}}_{i},E[(x_{1},...,x_{\kappa})_{i}]),y_{i}\right)$
where the sum is over all data points
$\\{(x_{1},...,x_{\kappa},\mathbf{x^{\prime}})_{i},y_{i}\\}$ in the dataset
and the loss function $\ell$ is taken to be cross entropy for our purposes.
Usually, each categorical feature has its own independent embedding matrix:
$E[(x_{0},...,x_{\kappa})_{i}]=(E^{(1)}[x_{1}],...,E^{(\kappa)}[x_{\kappa}])$.
Related Works. Recent works have proposed similar but substantially different
techniques for non-uniform embedding architectures, particularly for the
natural language processing (NLP) domain [16, 17]. Neither of those methods
would work out-of-the-box for CTR prediction because they ignore the inherent
feature-level structure in CTR that is absent in NLP. We discuss key
distinctions in more detail in the Appendix.
Another approach proposes neural architecture search (NAS) for RecSys embedding
layers [18], where generic reinforcement learning algorithms are used to
architect the embedding layer. In contrast to computationally
expensive NAS, we show that the architecture search over non-uniform embedding
layers can be distilled into tuning a _single_ hyper-parameter and does not
require the heavy-machinery of NAS. This simplification in model search is
only possible due to our theoretical framework. Furthermore, in contrast to
all previous works with non-uniform embeddings, we theoretically analyze our
method. Moreover, past works do not empirically validate the speculated
mechanisms by which their methods work.
## III Mixed Dimension Embedding Layer
The MD embedding layer architecture, $\mathbf{\bar{E}}$, consists of $k$
blocks and is defined by $2k$ matrices:
$\mathbf{\bar{E}}=(\bar{E}^{(1)},...,\bar{E}^{(k)},P^{(1)},...,P^{(k)})$ with
$\bar{E}^{(i)}\in\mathbb{R}^{n_{i}\times d_{i}}$ and
$P^{(i)}\in\mathbb{R}^{d_{i}\times\bar{d}}$ for $i=1,...,k$. Together,
$\bar{E}^{(i)}$ and $P^{(i)}$ form the $i$-th block. In total, there are
$n=\sum_{i=1}^{k}n_{i}$ embedding vectors in the layer. We always treat
embedding vectors as row vectors. The $\bar{E}^{(i)}$ can be interpreted as
the MD embeddings, and the $P^{(i)}$ are projections that lift the embeddings
into a _base dimension_ $\bar{d}$ such that $\bar{d}\geq d_{i}$. The entire
layer can be thought of as a single $n\times\bar{d}$ block matrix for which
the $i$-th block is factored at dimension $d_{i}$.
Figure 1: Matrix Architecture for UD and MD Embedding Layers.
Forward propagation for a MD embedding layer is performed by indexing an
embedding vector and then projecting it. For example, compute
$P^{(1)}\bar{E}^{(1)}_{\ell}$ for the $\ell$-th vector in the first block.
Downstream models based on a MD embedding layer should be sized with respect
to $\bar{d}$. If $d_{i}=\bar{d}$ for any block, the projection $P^{(i)}$ is
not needed and may be replaced with an identity mapping. We illustrate this
along with the general matrix architecture of a two block MD embedding layer
in Fig. 1. The parameter budget (total area) consumed by UD and MD embedding
layers is the same, but the parameters are allocated unevenly to different
indices in the MD embeddings.
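The following is a minimal PyTorch-style sketch of such a layer, consistent with the description above (per-block embedding tables $\bar{E}^{(i)}$ and projections $P^{(i)}$ into a common base dimension $\bar{d}$). It is an illustration under these assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class MixedDimEmbedding(nn.Module):
    """k blocks: block i stores n_i embeddings of dimension d_i and projects
    them into a common base dimension d_bar."""
    def __init__(self, num_rows, dims, d_bar):
        super().__init__()
        self.embs = nn.ModuleList(
            [nn.Embedding(n_i, d_i) for n_i, d_i in zip(num_rows, dims)]
        )
        # identity when d_i == d_bar, otherwise a learned projection P^(i)
        self.projs = nn.ModuleList(
            [nn.Identity() if d_i == d_bar else nn.Linear(d_i, d_bar, bias=False)
             for d_i in dims]
        )

    def forward(self, indices):
        # indices: list of k index tensors, one per block / categorical feature
        return [proj(emb(idx))
                for emb, proj, idx in zip(self.embs, self.projs, indices)]

# usage sketch: three categorical features with very different cardinalities
layer = MixedDimEmbedding(num_rows=[10_000_000, 10_000, 100],
                          dims=[4, 16, 32], d_bar=32)
vecs = layer([torch.tensor([3, 7]), torch.tensor([1, 2]), torch.tensor([0, 5])])
```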
For MD embedding layers, there are two primary architectural decisions to
make: (i) _blocking: how to block $n$ total embedding indices into $k$
blocks?_ and (ii) _sizing: how to size the embedding dimensions
$\mathbf{d}=(d_{1},...,d_{k})$?_ For large-scale CTR, $\kappa$ is generally on
the order of 10 to 100. Standard embedding layers allocate $\kappa$ UD
embedding matrices to these $\kappa$ features. For MD layers, it is both
simple and natural to inherit the block structure as delineated by the task
itself. We let $k=\kappa$ and use the same number of MD embedding blocks as
categorical features in the CTR prediction task. The MD layer satisfies
$\bar{E}^{(i)}\in\mathbb{R}^{n_{i}\times d_{i}}$ for $i\in\\{1,...,\kappa\\}$.
The value range for each categorical feature defines the row counts $n_{i}$ in
the corresponding block of the MD layer. Any re-indexing can trivially be
stored in a low-cost length $k$ offset vector. For the Criteo dataset, there
are $\kappa=26$ distinct categorical features, so we produce a MD embedding
layer with $k=26$ blocks. To get meaningful and useful blocks, this blocking
scheme depends on the fact that our task has a large number of contextual
features $k$, with value ranges varying from order 10 to order 10 million.
Thus, even if feature values are roughly uniformly popular within each
feature, the large variation in value ranges leads to a significantly skewed
popularity. In contrast to CTR prediction tasks, when using word embeddings in
NLP, one cannot block the mixed layer by feature because this inherent
structure is absent. Thus, one needs to resort to complex blocking and sizing
schemes, such as those proposed in [16, 17]. Furthermore, we found that
accuracy significantly drops if the layer is _not_ blocked by feature. We
hypothesize that embedding projections encode feature-level semantic
information when blocking by feature. As for the question of _sizing_ the
embedding blocks, we defer discussion until after our theoretical analysis.
## IV Theoretical framework
As is standard, our theoretical analysis models CF and RecSys tasks with
matrix completion and factorization (additional references and all proofs are
in Appendix).
$M=\begin{bmatrix}M^{(11)}&\dots&M^{(1k_{W})}\\\ \vdots&\ddots&\vdots\\\
M^{(k_{V}1)}&\dots&M^{(k_{V}k_{W})}\end{bmatrix}$
Let $M\in\mathbb{R}^{n\times m}$, for $n\geq m$, be an unknown target matrix.
Without loss of generality, we also assume $M$ has a _block structure_ such
that $M$ is comprised of blocks $M^{(i,j)}\in\mathbb{R}^{n_{i}\times m_{j}}$
for $1\leq i\leq k_{V}$ and $1\leq j\leq k_{W}$. When indexing $M$, we use
subscripts, as in $M_{kl}$, to denote the $kl$-th scalar entry in $M$, and
superscripts in parenthesis, such as $M^{(i,j)}$, to denote the $ij$-th block
of $M$ (the comma is often omitted in the superscript). Let
$\texttt{rank}(M)=r$ and $\texttt{rank}(M^{(ij)})=r_{ij}$. Let
$\Omega\subset[n]\times[m]$ denote a sample of indices. Our observations,
denoted $\mathbf{\Omega}$, act as a training set:
$\mathbf{\Omega}=\\{(k,l,M_{kl}):(k,l)\in\Omega\\}$. We say the target matrix
$M$ is _completed_ or _recovered_ if recovery algorithm $\mathcal{A}$ returns
$\mathcal{A}(\mathbf{\Omega})=M$. We are interested in the probability of
recovery event: $\mathbf{Pr}[M=\mathcal{A}(\mathbf{\Omega})]$ for an algorithm
$\mathcal{A}$ under a sampling model for $\Omega$. Given both the block-wise
structure of $M$ and the MD embeddings, it is straightforward to apply MD. The
goal is to train the layer $\mathbf{\bar{E}}$ to represent $M$ with the block
structure in $\mathbf{\bar{E}}$ inherited from $M$. We can train
$\mathbf{\bar{E}}$ using stochastic gradient descent (SGD).
Data-Limited & Memory-Limited Regimes. In contextual recommendation engines,
there are two primary bottlenecks. In the data-limited regime (when the
number of samples is $o(nr\log n)$), the model does not have enough samples to
accurately recover the preference matrix unless a popularity-based approach
like MD embeddings is used. In the memory-limited regime (when the space
constraint is $o(nr)$), the model has sufficient data to recover the
preference matrix but not enough space for the parameters that comprise the
embedding layer, which requires us to use fewer parameters than are naively
required. We leave analysis of both regimes simultaneously for future work.
Because large-scale CTR prediction systems can use up to order $10^{9}$
contextual features and constantly generate data, they are usually in the
memory-limited regime.
### IV-A Generalization in the Data-Limited Regime
It is common practice to study a Bernoulli sampling model for $\Omega$ [19,
20, 21, 22, 23], where each entry is revealed independently with some small
probability. Below, we extend Bernoulli sampling for the proposed block-wise
structure such that sampling probabilities are constant within a block.
Definition: Block-wise Bernoulli sampling. _Let $\Pi\in\mathbb{R}^{k_{W}\times k_{V}}$ be a probability matrix. Let $N$ denote the expected number of observed indices in a training sample. Let $\mathbf{i}(\cdot)$ and $\mathbf{j}(\cdot)$ map each row and each column index of $M$ to the index of the block to which it belongs. Each index $(k,l)$ is independently sampled as follows:
$\mathbf{Pr}[(k,l)\in\Omega]=N\Pi_{\mathbf{i}(k),\mathbf{j}(l)}/(n_{\mathbf{i}(k)}m_{\mathbf{j}(l)})$._
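A minimal NumPy sketch of this sampling model, for illustration only (the helper name and the two-block example values are assumptions; Pi is indexed by (row block, column block), following the definition above):

```python
import numpy as np

rng = np.random.default_rng(0)

def blockwise_bernoulli_mask(row_blocks, col_blocks, Pi, N):
    """Sample an observation mask under block-wise Bernoulli sampling.

    row_blocks / col_blocks list the block sizes (n_i, m_j); Pi[i, j] is the
    block-level probability mass and N the expected number of samples.
    """
    n, m = sum(row_blocks), sum(col_blocks)
    # Map each row/col index to its block index i(.) / j(.).
    i_of = np.repeat(np.arange(len(row_blocks)), row_blocks)
    j_of = np.repeat(np.arange(len(col_blocks)), col_blocks)
    n_of = np.repeat(np.array(row_blocks), row_blocks)
    m_of = np.repeat(np.array(col_blocks), col_blocks)
    # Entry-wise probability N * Pi_{i(k), j(l)} / (n_{i(k)} m_{j(l)}).
    p = N * Pi[np.ix_(i_of, j_of)] / np.outer(n_of, m_of)
    return rng.random((n, m)) < np.clip(p, 0.0, 1.0)

# Two vertically stacked row blocks: a popular block and a rare block.
eps = 0.05
mask = blockwise_bernoulli_mask([50, 50], [40], np.array([[1 - eps], [eps]]), N=800)
print(mask.sum(), "observed entries (expected ~800)")
```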
We use standard matrix completion assumptions. We show that when the training
samples are sufficiently skewed, MD embeddings can recover many more matrices
than UD embeddings. We use recovery of the matrix as a proxy for good
generalization. For brevity, we focus on exact completion for matrices, but it
is well-understood how to extend these results to approximate completion and
for low-rank tensors (refer to Appendix).
We refer to any algorithm that ignores popularity as _popularity-agnostic_
(formalized in Appendix). Under uniform popularity, popularity-agnostic
algorithms need $\Theta(rn\log n)$ samples to recover the matrix [20]. We
provide an asymptotic lower bound on the sample complexity for matrix recovery
under popularity skew by _any_ popularity-agnostic algorithm.
###### Theorem IV.1.
Let $M$ be a target matrix following the block-wise Bernoulli sampling model
under probability matrix $\Pi$. Let
$\varepsilon=\min\\{\min_{i}\frac{1}{n_{i}}\sum_{j}\Pi_{ij},\min_{j}\frac{1}{m_{j}}\sum_{i}\Pi_{ij}\\}$.
(1) Suppose $N=o(\frac{r}{\varepsilon}n\log n)$. Then any algorithm that does
not take popularity into account will have asymptotically zero probability of
recovering $M$.
(2) Let $C$ be some non-asymptotic constant independent of $n$. If $N\geq
C(\max_{ij}\frac{r_{ij}}{\Pi_{ij}})n\log n$, then mixed dimension
factorization with SGD recovers $M$ with high probability.
Thm IV.1 states that with popularity-agnostic methods, completion of matrix
$M$ is bottlenecked by the row or column with lowest popularity. It is
impossible to complete rare rows at the same rank as popular rows. When
popularity skew is large, the $\frac{1}{\varepsilon}$ factor greatly increases
the sample size necessary to complete $M$. In contrast, MD factorization gets
a significant reprieve if low-popularity blocks are also treated as low-rank,
implying $\max_{ij}\frac{r_{ij}}{\Pi_{ij}}\ll\frac{r}{\varepsilon}$.
Two block example. The theorem above is more easily interpreted for the special case of a block matrix consisting of two vertically stacked matrices
$M=[M^{(1)},M^{(2)}]^{T}$. Let us assume block-wise popularity sampling with
$\Pi_{1}=1-\epsilon$ and $\Pi_{2}=\epsilon$ for small $0<\epsilon<1/2$, so
that we can interpret $M^{(1)}$ as the popular and $M^{(2)}$ as the rare
block. For illustrative purposes, assume that $r_{2}$ is a small constant and
$r_{1}\approx\frac{1}{\epsilon}$ is significantly larger. Then popularity-
agnostic algorithms suffer a $\frac{1}{\epsilon^{2}}$ quadratic penalty in
sample complexity due to popularity skew, whereas MD factorization only pays a
$\frac{1}{\epsilon}$ factor because the rare block is completed at much lower
rank.
### IV-B Space Efficiency in the Memory-Limited Regime
To study memory-constrained deployment, we assume that we have sufficient data
to complete the target matrix. We are instead constrained by a small parameter
budget $B$. The goal is to optimally allocate parameters to embedding vectors
such that we minimize the expected reconstruction error over a non-uniformly sampled
test set. Under mild assumptions this dimension allocation problem is a convex
program and is amenable to closed-form solutions. We prove that the optimal
dimension for a given embedding scales with that embedding’s popularity in a
manner that depends deeply on the spectral properties of the target matrix.
For most applications, popularity skew is present at test time as well as
training time. In this setting, the natural loss metric to study is the
_popularity-weighted mean square error (MSE)_.
Definition: Popularity-Weighted MSE. _Let $\Pi$ be a probability matrix. Let
$(k,l)$ be a test coordinate sampled according to $\Pi$. The popularity-
weighted MSE is given by
$L_{\Pi}(M,\hat{M})=\mathbb{E}_{k,l}|M_{kl}-\hat{M}_{kl}|^{2}=\sum_{i,j}\frac{1}{n_{i}m_{j}}\Pi_{ij}||M^{(ij)}-\hat{M}^{(ij)}||_{F}^{2}$._
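As a direct transcription of this loss for a block-partitioned matrix, here is an illustrative NumPy sketch (the function name and block-size arguments are assumptions):

```python
import numpy as np

def popularity_weighted_mse(M, M_hat, row_blocks, col_blocks, Pi):
    """Popularity-weighted MSE over a block partition (sketch of the definition above)."""
    loss, r0 = 0.0, 0
    for i, n_i in enumerate(row_blocks):
        c0 = 0
        for j, m_j in enumerate(col_blocks):
            diff = M[r0:r0 + n_i, c0:c0 + m_j] - M_hat[r0:r0 + n_i, c0:c0 + m_j]
            loss += Pi[i, j] / (n_i * m_j) * np.linalg.norm(diff, "fro") ** 2
            c0 += m_j
        r0 += n_i
    return loss
```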
Let us now assume that the target matrix is given and that we are trying to
optimally size MD embedding layers, $W$ and $V$, with respect to popularity-weighted MSE reconstruction. We assume a small parameter budget, so that we cannot factor the target matrix exactly. We formulate this optimization as a convex program with linear constraints (we treat the dimensions as continuous; this is a convex relaxation of a hard problem, see Appendix).
Convex program for optimizing mixed dimensions:
$\min_{d_{w},d_{v}}\left(\min_{W,V}L_{\Pi}(M,WV^{T})\right)\text{ s.t.
}\sum_{i}n_{i}(d_{w})_{i}+\sum_{j}m_{j}(d_{v})_{j}\leq B$
_where $d_{w}\in\mathbb{R}^{k_{W}}$ and $d_{v}\in\mathbb{R}^{k_{V}}$ denote
the dimensions of the embedding blocks of $W$ and $V$, respectively._
We can obtain a solution using first-order conditions and Lagrange multipliers
[24]. Our analysis reveals that, under mild assumptions, the optimal scaling of embedding dimension with popularity is the functional inverse of the spectral decay of the target matrix (see Thm. IV.2).
###### Theorem IV.2.
Let $M$ be a block matrix with block-wise spectral (singular value) decay given
by $\sigma_{ij}(\cdot)$. Then, the optimal embedding dimensions for MD layers
W and V are given by: $(d^{*}_{w})_{i}=\sum_{j}d^{*}_{ij}$,
$(d_{v}^{*})_{j}=\sum_{i}d^{*}_{ij}$, where
$d_{ij}^{*}=\sigma_{ij}^{-1}\left(\sqrt{\lambda(n_{i}+m_{j})(n_{i}m_{j})\Pi_{ij}^{-1}}\right)$
and $\sum_{ij}(n_{i}+m_{j})d_{ij}^{*}=B$.
When we have a closed-form expression for the spectral decay of $M$, we can
give a closed-form solution in terms of that expression. For illustrative
purposes, we give the optimal dimensions for the simple two-block example
under _power spectral decay_. A matrix with spectral norm $\rho$ exhibits a
singular value spectrum with power spectral decay $\beta>0$ if the $k$-th
singular value is given by $\sigma(k)=\rho k^{-\beta}$. Based on the
corollary below, the optimal dimension for an embedding vector scales with its
popularity based on a fractional power law.
###### Corollary IV.2.1.
For a vertically stacked two-block matrix with each block exhibiting power spectral decay $\beta$, the optimal dimensions satisfy $d_{1}^{*}\propto(1-\epsilon)^{\frac{1}{2\beta}}$ and $d_{2}^{*}\propto\epsilon^{\frac{1}{2\beta}}$.
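Under the power-law assumption, Thm. IV.2 can be evaluated numerically by inverting $\sigma_{ij}$ and bisecting on the Lagrange multiplier $\lambda$ until the budget constraint is met. The NumPy sketch below is an illustrative helper under these assumptions, not the solver used in our experiments:

```python
import numpy as np

def optimal_dims_power_law(n, m, Pi, rho, beta, budget):
    """Sketch of Thm. IV.2 under power spectral decay sigma_ij(k) = rho_ij * k^(-beta).

    Inverts sigma and bisects on the Lagrange multiplier lambda until the
    parameter budget sum_ij (n_i + m_j) d_ij is met.
    """
    n, m = np.asarray(n, float), np.asarray(m, float)
    size = np.add.outer(n, m)                 # (n_i + m_j)
    area = np.outer(n, m)                     # n_i * m_j

    def dims(lam):
        s = np.sqrt(lam * size * area / Pi)   # argument of sigma^{-1}
        return (rho / s) ** (1.0 / beta)      # sigma^{-1}(s) for a power law

    lo, hi = 1e-12, 1e12
    for _ in range(200):                      # geometric bisection on lambda
        mid = np.sqrt(lo * hi)
        if (size * dims(mid)).sum() > budget:
            lo = mid                          # dims too large -> raise lambda
        else:
            hi = mid
    return dims(hi)

# Two-block example: popular block (1 - eps) vs rare block (eps).
eps = 0.1
d = optimal_dims_power_law([1000, 1000], [500], np.array([[1 - eps], [eps]]),
                           rho=np.array([[10.0], [10.0]]), beta=2.0, budget=30000)
print(d)  # the popular block receives a larger dimension
```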
Figure 2: CTR prediction results for MD embeddings on the Criteo dataset using DLRM. Implementation is available as part of an open-source project on GitHub: facebookresearch/dlrm. Fig. 2a (left): learning curves for selected embedding architectures. Fig. 2b (center): loss vs. # params for varying $\alpha$. Fig. 2c (right): training time vs. loss for varying $\alpha$.

Figure 3: Matrix factorization (top row; a-c) and NCF (bottom row; d-f) for collaborative filtering on the MovieLens dataset. Fig. 3a & 3d (right): MSE vs. # params for varying $\alpha$. Fig. 3b & 3e (center): MSE vs. # params for varying $\alpha$; dashed lines correspond to test samples that contain the one-third least popular items, and solid lines to test samples that contain the one-third most popular items. Fig. 3c & 3f (left): generalization for popular (red) and rare (blue) items; dashed lines correspond to training loss and solid lines to test loss.
### IV-C Large-scale CTR Prediction is Memory-Limited
Labeled training data is easy to acquire in most large-scale CTR prediction
systems because one can directly observe user engagement (or lack thereof)
with personalized content. The embedding layer’s memory footprint ends up
being the primary bottleneck. In this situation, the results of Thm IV.2 yield
appropriate guidelines for the optimal dimension for each embedding vector.
The unavoidable difficulty is that one needs to know the spectrum of the
target matrix to know the optimal dimension. One solution is to train an
enormous embedding table with an enormous data set, thereby obtaining the
spectrum, and then factoring (or re-training from scratch) at the optimal
size. However, this solution still requires enormous resource usage during
training. Alternatively, we can still leverage the insight of our theoretical
framework to efficiently find good mixed embedding dimensions. Most spectral
decays, whether they are flat or steep, can be decently well fit by a power
law (so much so that there is a large literature dedicated to _falsifying_
power-laws even when the numerical fit appears reasonable [25]). Varying the
temperature adequately captures the trend of the decay. By _a priori_ assuming
that the spectral decay is a power law, we only have to tune a single hyper-parameter over a narrow range, exploring a small number of values.
Power-law Sizing Scheme. Here we define _block-level probability_
$\mathbf{p}$. Let $\mathbf{p}$ be a $k$-dimensional probability vector (recall
$k$ is the # of blocks in the embedding layer) such that $\mathbf{p}_{i}$ is
the average query probability over rows in the $i$-th block. When blocks
exactly correspond to features, as in our CTR experiments, then
$\mathbf{p}_{i}=\frac{1}{n_{i}}$ because exactly one row per block is queried per inference, based on the value of the corresponding feature. More generally, under a block
sampling structure $\Pi$, $\mathbf{p}_{i}=\sum_{j}\Pi_{ij}$.
Algorithm 1 Popularity-Based Dimension Sizing
Input: Baseline dimension $\bar{d}$ and fixed temperature $0\leq\alpha\leq 1$
Input: Probability vector p
Output: Dimension assignment vector d
$\lambda\leftarrow\bar{d}||\mathbf{p}||_{\infty}^{-\alpha}$ $\triangleright$
Compute scalar scaling factor
$\textbf{d}\leftarrow\lambda\mathbf{p}^{\alpha}$ $\triangleright$ Component-
wise exponent
Alg. 1 shows code for the scheme. We use a temperature parameter $\alpha>0$ to
control the degree to which popularity influences the embedding dimension (as
a simplification over using decay parameter $\beta$). As $\alpha$ increases,
so does the popularity-based skew in the embedding dimension. At $\alpha=0$,
the embedding dimensions are all uniform. At $\alpha=1$, the embedding
dimensions are proportional to their popularity.
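A NumPy sketch of Alg. 1 follows; the optional rounding to powers of two mirrors the dimensions reported in Fig. 6, and the function name is illustrative:

```python
import numpy as np

def popularity_based_dims(p, d_base, alpha, round_pow2=True):
    """Alg. 1: d_i = lambda * p_i^alpha with lambda = d_base * max(p)^(-alpha).

    p is the block-level probability vector; rounding to powers of two
    (mentioned for Fig. 6) is optional here.
    """
    p = np.asarray(p, dtype=float)
    lam = d_base * np.max(p) ** (-alpha)      # scalar scaling factor
    d = lam * p ** alpha                      # component-wise power
    if round_pow2:
        d = 2.0 ** np.round(np.log2(np.clip(d, 1, None)))
    return d.astype(int)

# Example: three features with cardinalities 10, 1e3, 1e6 -> p_i = 1/n_i.
p = 1.0 / np.array([10.0, 1e3, 1e6])
print(popularity_based_dims(p, d_base=32, alpha=0.3))   # [32, 8, 1]
```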
## V Experiments
We measure memory in terms of the number of $32$-bit floating point parameters
in the model. Since different embedding base dimensions imply different widths
in the first hidden layer of the downstream deep model, for fairness, our
parameter counts include both the embedding layer and all model parameters
(recall that the parameter count is overwhelmingly dominated by embeddings).
We report statistical performance in terms of cross entropy loss. Accuracy is
reported in Appendix (along with other experimental details). DLRM with
uniform $d=32$ embeddings obtains an accuracy of $\sim 79$%, close to state-
of-the-art for this dataset [11]. We sweep parameter budgets from a lower
bound given by 1 parameter per embedding vector ($d=1$) to an upper bound
given by the memory limit of a $16$GB GPU. The parameters are allocated to
embeddings according to the $\alpha$-parameterized rule proposed in Alg. 1.
MD embeddings with $\alpha=0.3$ produce a learning curve on par with that of
$d=32$ UD embeddings using a total parameter count equivalent to $d=2$ UD
(Fig. 2a), yielding a $16\times$ reduction in parameters. We can see that
using MD embedding layers improves the memory-performance frontier at each
parameter budget (Fig. 2b). The optimal temperature ($\alpha$) is dependent on
the parameter budget, with higher temperatures leading to lower loss for
smaller budgets. Embeddings with $\alpha=0.4$ obtain performance on par with
uniform embeddings using $16\times$ fewer parameters. Embeddings with
$\alpha=0.2$ modestly outperform UD embeddings by $0.1$% accuracy using
half as many parameters. For RecSys, small improvements in accuracy are still
important when volume is sufficiently large. The std. dev. estimates indicate
that this gain is significant. MD embeddings not only improve prediction
quality for a given parameter budget, but also for a given training time. MD
embeddings can train $>2\times$ faster than UD embeddings at a given test loss
(Fig. 2c). This is possible because for a given loss, the MD layer uses far
fewer parameters than a UD layer. The faster training we observe is likely due
to superior bandwidth efficiency and caching on the GPU, enabled by the
reduced memory footprint. We run all of our models on a single GPU with
identical hyperparameters (including batch size) across all $\alpha$ (more
details in Appendix).
Case Study: Collaborative Filtering. Because context-free CF only includes two
features, users and items, it is easier to gain insight into the effects of MD
embeddings. Matrix factorization (MF) is a typical algorithm for CF where the
matrix factors are embeddings. We also include Neural Collaborative Filtering
(NCF) models [26] to ensure that trends still hold when non-linear layers are
introduced. Because we only have two features for this case study (users and
items) we slightly modify our blocking approach. We sort users and items by
empirical popularity, and uniformly partition them into block matrices. Then
we can apply mixed dimensions within the users and items based on partitions
(more details in Appendix).
MD embeddings substantially outperform UD embeddings, especially at low
parameter budgets (Figs. 3a & 3d). For Figs. 3b & 3e, we partition _item_
embeddings (since they are often the focal point of recommendation) into the one-
third most and least popular items (by empirical training frequency). These
results show that non-zero temperature ($\alpha$) improves test loss on
popular items and performs on par with or slightly better than UD embeddings
on rare items. We report generalization curves for the popular and rare items
in Figs 3c and 3f. We fix a parameter budget ($2^{21}$), vary the temperature
($\alpha$), and plot training and test loss for both popular and rare item
partitions. Allocating more parameters to popular items by increasing
temperature ($\alpha$) decreases both training and test loss. On the other
hand, allocating more parameters to rare items by decreasing temperature
($\alpha$) only decreases training loss but not test loss, indicating that the
additional parameters on the rare embeddings are wasteful. Uniform parameter
allocation, agnostic to popularity, is inefficient. A majority of parameters
are severely underutilized on rare embeddings, whereas popular embeddings
could still benefit from increased representational capacity.
## VI Conclusion
We show that MD embeddings greatly improve both parameter efficiency and
training time. Through our case study, we demonstrate systematic and
compelling empirical evidence that MD embeddings work by improving capacity
for learning popular embeddings without compromising rare embeddings. Our
theoretical framework is the first to mathematically explain how this occurs,
and our experiments are the first to validate the phenomena.
## Acknowledgments
This work was initiated while A.A.G. was an intern at Facebook. The authors
would like to thank Shubho Sengupta, Michael Shi, Jongsoo Park, Jonathan
Frankle and Misha Smelyanskiy for helpful comments and suggestions about this
work, and Abigail Grosskopf for editorial help with the manuscript.
## References
* [1] Björn Barz and Joachim Denzler. Hierarchy-based image embeddings for semantic image retrieval. In Proc. IEEE Winter Conf. Applications of Computer Vision, pages 638–647. IEEE, 2019.
* [2] Mariya I. Vasileva, Bryan A. Plummer, Krishna Dusad, Shreya Rajpal, Ranjitha Kumar, and David Forsyth. Learning type-aware embeddings for fashion compatibility. In Proc. European Conf. Computer Vision, 2018.
* [3] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053, 2019.
* [4] Alan Akbik, Duncan Blythe, and Roland Vollgraf. Contextual string embeddings for sequence labeling. In Proc. 27th Int. Conf. Computational Linguistics, pages 1638–1649, 2018.
* [5] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
* [6] Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et al. Wide & deep learning for recommender systems. In Proc. 1st Workshop Deep Learning Recommender Systems, pages 7–10. ACM, 2016.
* [7] Jongsoo Park et al. Deep learning inference in Facebook data centers: Characterization, performance optimizations and hardware implications. CoRR, abs/1811.09886, 2018.
* [8] Xiaorui Wu, Hong Xu, Honglin Zhang, Huaming Chen, and Jian Wang. Saec: Similarity-aware embedding compression in recommendation systems. CoRR, abs/1903.00103, 2019.
* [9] Zi Yin and Yuanyuan Shen. On the dimensionality of word embedding. In Proc. Advances in Neural Information Processing Systems, 2018.
* [10] Qi Pi, Weijie Bian, Guorui Zhou, Xiaoqiang Zhu, and Kun Gai. Practice on long sequential user behavior modeling for click-through rate prediction. CoRR, abs/1905.09248, 2019.
* [11] Maxim Naumov, Dheevatsa Mudigere, Hao-Jun Michael Shi, Jianyu Huang, Narayanan Sundaraman, Jongsoo Park, Xiaodong Wang, Udit Gupta, Carole-Jean Wu, Alisson G Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. CoRR, abs/1906.00091, 2019.
* [12] Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. DeepFM: a factorization-machine based neural network for ctr prediction. CoRR, abs/1703.04247, 2017.
* [13] Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. xDeepFM: Combining explicit and implicit feature interactions for recommender systems. In Proc. 24th Int. Conf. Knowledge Discovery & Data Mining, pages 1754–1763. ACM, 2018.
* [14] Guorui Zhou, Na Mou, Ying Fan, Qi Pi, Weijie Bian, Chang Zhou, Xiaoqiang Zhu, and Kun Gai. Deep interest evolution network for click-through rate prediction. CoRR, abs/1809.03672, 2018.
* [15] Guorui Zhou, Xiaoqiang Zhu, Chenru Song, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, and Kun Gai. Deep interest network for click-through rate prediction. In Proc. 24th Int. Conf. Knowledge Discovery & Data Mining, pages 1059–1068. ACM, 2018.
* [16] Patrick Chen, Si Si, Yang Li, Ciprian Chelba, and Cho-Jui Hsieh. Groupreduce: Block-wise low-rank approximation for neural language model shrinking. In Advances in Neural Information Processing Systems, 2018.
* [17] Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
* [18] Manas R Joglekar, Cong Li, Mei Chen, Taibai Xu, Xiaoming Wang, Jay K Adams, Pranav Khaitan, Jiahui Liu, and Quoc V Le. Neural input search for large scale recommendation models. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2387–2397, 2020.
* [19] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
* [20] Emmanuel J Candes and Yaniv Plan. Matrix completion with noise. Proc. IEEE, 98(6):925–936, 2010.
* [21] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational mathematics, 2009.
* [22] Emmanuel J Candes and Yaniv Plan. Tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. IEEE Transactions on Information Theory, 57(4):2342–2359, 2011.
* [23] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. IEEE Transactions on Information Theory, 62(11):6535–6579, 2016.
* [24] David G Luenberger, Yinyu Ye, et al. Linear and nonlinear programming, volume 2. Springer.
* [25] Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. Power-law distributions in empirical data. SIAM review, 51(4):661–703, 2009.
* [26] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural collaborative filtering. In Proc. 26th Int. Conf. World Wide Web, pages 173–182, 2017.
* [27] Trevor Hastie, Rahul Mazumder, Jason D Lee, and Reza Zadeh. Matrix completion and low-rank svd via fast alternating least squares. The Journal of Machine Learning Research, 16:3367–3402, 2015.
* [28] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 2009.
* [29] Steffen Rendle, Li Zhang, and Yehuda Koren. On the difficulty of evaluating baselines: A study on recommender systems. CoRR, abs/1905.01395, 2019.
* [30] Maurizio Ferrari Dacrema, Paolo Cremonesi, and Dietmar Jannach. Are we really making much progress? A worrying analysis of recent neural recommendation approaches. CoRR, abs/1907.06902, 2019.
* [31] Hong-Jian Xue, Xinyu Dai, Jianbing Zhang, Shujian Huang, and Jiajun Chen. Deep matrix factorization models for recommender systems. In IJCAI, pages 3203–3209, 2017.
* [32] Jicong Fan and Jieyu Cheng. Matrix completion by deep matrix factorization. Neural Networks, 98:34–41, 2018.
* [33] Moritz Hardt. Understanding alternating minimization for matrix completion. In 2014 IEEE 55th Annual Symposium on Foundations of Computer Science, pages 651–660. IEEE, 2014.
* [34] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proc. 45th annual ACM symposium on Theory of computing. ACM, 2013\.
* [35] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factorization. In Advances in Neural Information Processing Systems, pages 7411–7422, 2019.
* [36] Sahand Negahban and Martin J Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. Journal of Machine Learning Research, 13:1665–1697, 2012.
* [37] Raghu Meka, Prateek Jain, and Inderjit S Dhillon. Matrix completion from power-law distributed samples. In Advances in neural information processing systems, 2009.
* [38] Guangcan Liu, Qingshan Liu, and Xiaotong Yuan. A new theory for matrix completion. In Advances in Neural Information Processing Systems, 2017.
* [39] Gediminas Adomavicius and Alexander Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE transactions on knowledge and data engineering, 17(6):734–749, 2005.
* [40] Fabián P Lousame and Eduardo Sánchez. A taxonomy of collaborative-based recommender systems. In Web Personalization in Intelligent Environments, pages 81–117. Springer, 2009.
* [41] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE transactions on information theory, 56(6):2980–2998, 2010.
* [42] Sriram Vishwanath. Information theoretic bounds for low-rank matrix completion. In 2010 IEEE Int. Symposium on Information Theory, pages 1508–1512. IEEE, 2010.
* [43] Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12(Dec):3413–3430, 2011.
* [44] Martin Andrews. Compressing word embeddings. CoRR, abs/1511.06397, 2015.
* [45] Prasad Bhavana, Vikas Kumar, and Vineet Padmanabhan. Block based singular value decomposition approach to matrix factorization for recommender systems. Pattern Recognition Letters, abs/1907.07410, 2019.
* [46] Prasanna Sattigeri and Jayaraman J. Thiagarajan. Sparsifying word representations for deep unordered sentence modeling. In Proc. 1st Workshop on Representation Learning for NLP, pages 206–214.
* [47] Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, and Xueqi Cheng. Sparse word embeddings using l1 regularized online learning. In Proc. 25th Int. Joint Conf. Artificial Intelligence, 2016.
* [48] Jose M Alvarez and Mathieu Salzmann. Compression-aware training of deep networks. In Proc. Advances in Neural Information Processing Systems, pages 856–867, 2017.
* [49] Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. CoRR, abs/1803.03635, 2018.
* [50] Maxim Naumov, Utku Diril, Jongsoo Park, Benjamin Ray, Jedrzej Jablonski, and Andrew Tulloch. On periodic functions as regularizers for quantization of neural networks. CoRR, abs/1811.09862, 2018.
* [51] Eunhyeok Park, Sungjoo Yoo, and Peter Vajda. Value-aware quantization for training and inference of neural networks. In Proc. European Conf. Computer Vision, pages 580–595, 2018.
* [52] Wang-Cheng Kang, Derek Zhiyuan Cheng, Ting Chen, Xinyang Yi, Dong Lin, Lichan Hong, and Ed H Chi. Learning multi-granular quantized embeddings for large-vocab categorical features in recommender systems. In Companion Proceedings of the Web Conference 2020, 2020.
* [53] Raphael Shu and Hideki Nakayama. Compressing word embeddings via deep compositional code learning. CoRR, abs/1711.01068, 2017.
* [54] Julien Tissier, Amaury Habrard, and Christophe Gravier. Near-lossless binarization of word embeddings. CoRR, abs/1803.09065, 2018.
* [55] Joan Serrà and Alexandros Karatzoglou. Getting deep recommenders fit: Bloom embeddings for sparse binary input/output networks. In Proceedings of the Eleventh ACM Conference on Recommender Systems, pages 279–287, 2017.
* [56] Jiaxi Tang and Ke Wang. Ranking distillation: Learning compact ranking models with high performance for recommender system. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2289–2298, 2018.
* [57] Josh Attenberg, Kilian Weinberger, Anirban Dasgupta, Alex Smola, and Martin Zinkevich. Collaborative email-spam filtering with the hashing trick. In Proc. 6th Conf. Email and Anti-Spam, 2009.
* [58] Jinyang Gao, Beng Chin Ooi, Yanyan Shen, and Wang-Chien Lee. Cuckoo feature hashing: Dynamic weight sharing for sparse analytics. In IJCAI, pages 2135–2141, 2018.
* [59] Alexandros Karatzoglou, Alex Smola, and Markus Weimer. Collaborative filtering on a budget. In Proc. 13th Int. Conf. Artificial Intelligence and Statistics, pages 389–396, 2010.
* [60] Valentin Khrulkov, Oleksii Hrinchuk, Leyla Mirvakhabova, and Ivan V. Oseledets. Tensorized embedding layers for efficient model compression. CoRR, abs/1901.10787, 2019.
* [61] Kaiyu Shi and Kai Yu. Structured word embedding for low memory neural network language model. In Proc. Interspeech, 2018.
* [62] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019.
* [63] Nvidia. Tesla V100 GPU architecture white paper. 2017.
* [64] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems, 5, 2015.
* [65] Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of Adam and beyond. In Proc. Int. Conf. Learning Representations, 2018.
* [66] Florian Strub, Romaric Gaudel, and Jérémie Mary. Hybrid recommender system based on autoencoders. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, pages 11–16, 2016.
* [67] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proc. 13th Int. Conf. Artificial Intelligence and Statistics, pages 249–256, 2010.
* [68] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015.
* [69] Jerzy K Baksalary, Peter Semrl, and George PH Styan. A note on rank additivity and range additivity. Linear algebra and its applications, 237:489–498, 1996.
* [70] Yongge Tian. Rank equalities for block matrices and their moore-penrose inverses. Houston journal of mathematics, 30(2):483–510, 2004.
* [71] George Matsaglia and George PH Styan. Equalities and inequalities for ranks of matrices. Linear and multilinear Algebra, 2(3):269–292, 1974.
* [72] Shouyuan Chen, Michael R Lyu, Irwin King, and Zenglin Xu. Exact and stable recovery of pairwise interaction tensors. In Advances in neural information processing systems, pages 1691–1699, 2013.
* [73] Hongyang Zhang, Vatsal Sharan, Moses Charikar, and Yingyu Liang. Recovery guarantees for quadratic tensors with sparse observations. In The 22nd International Conference on Artificial Intelligence and Statistics, 2019.
* [74] Michael Mitzenmacher and Eli Upfal. Probability and computing: Randomization and probabilistic techniques in algorithms and data analysis. Cambridge university press, 2017.
* [75] Brian Dawkins. Siobhan’s problem: the coupon collector revisited. The American Statistician, 45(1):76–82, 1991.
* [76] Richard M Karp. Reducibility among combinatorial problems. In Complexity of computer computations, pages 85–103. Springer, 1972.
* [77] Ivan Markovsky. Structured low-rank approximation and its applications. Automatica, 44(4):891–909, 2008.
* [78] Richard Courant and Fritz John. Introduction to calculus and analysis I. Springer Science & Business Media, 2012.
The appendix is organized as follows. We begin with an extended discussion,
including additional related works and background references in A. We include
supplementary experiments and reproducibility details concerning our empirical
work in B. Finally, we include mathematical details including assumptions,
theorems, and proofs in C.
## Supplementary & Extended Discussion
### -A Representation Learning
Embedding-based approaches, such as matrix factorization (MF) [27, 28, 29] or
neural collaborative filtering (NCF) [30, 26, 31, 32], are among the most
computationally efficient solutions to CF and matrix completion. The simplest
model for canonical CF is MF, which treats embeddings as factorizing the
target matrix directly and comes with nice theoretical guarantees. Direct
optimization over embedding layers is a non-convex problem. However, due to
specific problem structure, many simple algorithms, including first-order
methods, have been proven to find global optima (under mild assumptions) [33,
23, 34]. Neural collaborative filtering (NCF) models, which make use of non-
linear neural processing, are more flexible options. While NCF models often
lack the theoretical guarantees offered by MF, they usually perform mildly
better on real-world datasets. In CTR prediction, it is common to use
embedding layers to represent categorical features and have various neural
layers to process the various embeddings.
### -B Regularization
We note that in many instances, embeddings for collaborative filtering tasks
are usually trained with some type of norm-based regularization for the
embedding vectors. While this particular form of regularization works well for
small-scale embeddings, it is non-trivial to scale it for large-scale
embeddings. For many standard loss functions, only the embeddings queried on
the forward pass have non-zero gradients on the backward pass. Using sparse
updates for the embedding tables is essentially mandatory for efficient
training at large scale. Thus, contrary to popular belief in academic circles,
large-scale embeddings are often trained without norm-based regularization, which is incompatible with sparse updates. This is because the gradient update for a regularization term would have to back-propagate to every embedding vector in the layer, rather than just those queried in the forward pass. Because this technique is not feasible in large-scale CTR prediction, we
explicitly do not use embedding regularization in our collaborative filtering
task. Furthermore, we note that from the perspective of popularity skew, that
embedding norm regularization actually implicitly penalizes rare embeddings
more than popular ones, since a larger fraction of training updates only
contain norm penalty signal for rare embeddings than popular ones. This is an
interesting connection that could be explored in future work, but it does not
achieve the stated goal of parameter reduction.
### -C Non-uniform Embeddings
Recent works have proposed similar but distinct non-uniform embedding
architectures, particularly for the natural language processing (NLP) domain
[16, 17]. As mentioned in the main text, there are substantial differences
between those methods and ours. We emphasize that neither of those methods
would work out-of-the-box for CTR prediction. Critically, in contrast to NLP,
CTR embeddings encode categorical values for individual features, and thus
come with feature-level structure that should not be ignored when architecting
the embedding layer. In NLP, embeddings represent words and can be thought of as a single large bag of values, in contrast to representing the various categorical values that a particular feature can take on. We discover that ignoring
this feature-level structure in the embedding layer adversely affects
performance, dropping accuracy by $>1\%$. For this reason, the sorted blocking
technique introduced in [17] is not effective in CTR prediction. Additionally,
embedding layers in NLP are pre-trained on unsupervised language corpora,
unlike in RecSys, which means that clustering and factoring the embeddings as
in [16] prior to training is not feasible in CTR prediction.
Furthermore, in contrast to previous works, we theoretically analyze our
method. Moreover, past methods do not even empirically validate the speculated
mechanisms by which their methods work. For example, in [17], authors claim
their proposed architecture, "reduces the capacity for less frequent words
with the benefit of reducing overfitting to rare word." While the proposed
method works well on benchmarks, the claim that the method reduces overfitting
is not supported. As shown in [9, 35], embedding overfitting depends
critically on the training algorithm and even the model atop the embeddings.
In fact, when training is properly tuned, embeddings are quite resilient to
overfitting, which means the claim made in [17] is far from self-evident. It
is more accurate to view rare embeddings as wasting parameters rather than
overfitting them.
Non-uniform and deterministic sampling have been discussed in the matrix
completion literature [36, 37, 38], but only insofar as how to correct for popularity to improve statistical recovery performance, or to build
theoretical guarantees for completion under deterministic or non-uniform
sampling.
### -D Collaborative Filtering & Matrix Completion
CF tasks come in many variations and have a large body of scientific literature,
with good reviews of classical algorithms and approaches provided in [39, 40].
Related to CF is the simplified matrix completion problem [20, 21, 19, 27, 41,
42, 23, 43].
### -E Memory-Efficient Embeddings
The techniques to decrease the memory consumed by embedding tables can be
roughly split into two high-level classes: (i) compression algorithms and (ii)
compressed architectures. Simple _offline_ compression algorithms include
post-training quantization, pruning or low-rank SVD [44, 45, 46, 47]. Online
compression algorithms, including quantization-aware training, gradual
pruning, and periodic regularization, are somewhat less popular in practice
because of the additional complications they add to already intricate deep
model training methods [48, 49, 50, 51, 52]. Model distillation techniques
[53, 54, 55, 56] are another form of compression and can have online and
offline variants. On the other hand, compressed architectures have the
advantage of not only reducing memory requirements for inference time, but
also at training time. This is the approach followed by hashing-based and
tensor factorization methods [57, 58, 59, 60, 61].
### -F Forward Propagation with Mixed Dimension Embeddings
Forward propagation for an MD embedding layer can be summarized in Alg. 2. The
steps involved in this algorithm are differentiable, therefore we can perform
backward propagation through this layer and update matrices $\bar{E}^{(i)}$
and $P^{(i)}$ accordingly. We note that Alg. 2 may be generalized to support
multi-hot lookups, where embedding vectors corresponding to $z$ query indices
are fetched and reduced by a differentiable operator, such as add, multiply or
concatenation.
Algorithm 2 Forward Propagation
Input: Index $x\in[n]$
Output: Embedding vector $\textbf{e}_{x}$
$i\leftarrow 0$ and $t\leftarrow 0$
while $t+n_{i}<x$ do $\triangleright$ Find offset $t$ and sub-block $i$
$t\leftarrow t+n_{i}$
$i\leftarrow i+1$
end while
$\textbf{e}_{x}\leftarrow\bar{E}^{(i)}[x-t]P^{(i)}$ $\triangleright$ Construct
an embedding vector
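For illustration, here is a 0-indexed PyTorch sketch of the single-index lookup in Alg. 2 (which uses 1-indexed $x$), replacing the while loop with a binary search over the offset vector mentioned in the main text; variable names and the example shapes are assumptions:

```python
import torch

def md_forward(x, E_blocks, P_blocks, offsets):
    """Sketch of Alg. 2: fetch and project the embedding for a global index x.

    E_blocks[i] is the (n_i x d_i) block, P_blocks[i] the (d_i x d)
    projection, and offsets[i] the starting global index of block i.
    """
    # Find the sub-block i and offset t for index x.
    i = int(torch.searchsorted(offsets, torch.tensor(x), right=True)) - 1
    t = int(offsets[i])
    return E_blocks[i][x - t] @ P_blocks[i]   # e_x = E^(i)[x - t] P^(i)

# Example with two blocks of 5 and 100 rows, dims 8 and 2, output dim 8.
E_blocks = [torch.randn(5, 8), torch.randn(100, 2)]
P_blocks = [torch.randn(8, 8), torch.randn(2, 8)]
offsets = torch.tensor([0, 5])                # block starts (0-indexed here)
print(md_forward(7, E_blocks, P_blocks, offsets).shape)  # torch.Size([8])
```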
### -G Popularity Histograms
The key idea of the MD embeddings is to address the skew in the popularity of
objects appearing in a dataset. To illustrate it we present popularity
histograms of accesses across all features for the Criteo Kaggle dataset in
Fig. 4(a) and individually for users and items features for the MovieLens
dataset in Fig. 4(c) and 4(b), respectively.
(a) Histogram of accesses for Criteo
(b) Histogram of accesses for items
(c) Histogram of accesses for users
Figure 4: Popularity skew in real-world datasets.
(a) Effect of num. of equipartitions on CF with MD
(b) Effect of num. of equipartitions on CF with UD
Figure 5: Effect of num. of equipartitions on CF
## Experimental Details & Supplementary Experiments
In this section we provide detailed protocols for the experiments in Section 5
of the main text. All code is implemented using the Pytorch [62] library. All
algorithms are run as single GPU jobs on Nvidia V100s [63]. All confidence
intervals reported are standard deviations obtained from 5 replicates per
experiment with different random seeds. In Table I we summarize the datasets
and models used.
Dataset | Tasks | # Samples | # Categories | Models Used
---|---|---|---|---
MovieLens | CF | 27M | 330K | MF, NCF
Criteo Kaggle | CTR | 40M | 32M | DLRM
Table I: Datasets, tasks and models
### -H Collaborative Filtering Case Study
Recall that CF tasks require a different blocking scheme than the one
presented in the main text because we only have 2 categorical features. These
features have corresponding embedding matrices $W\in\mathbb{R}^{n\times d}$
and $V\in\mathbb{R}^{m\times d}$, for users and items, respectively. To size
the MD embedding layer we apply MDs within individual embedding matrices by
partitioning them. We block $W$ and $V$ separately. First, we sort and re-
index the rows based on empirical row-wise frequency: $i<i^{\prime}\implies
f_{i}\geq f_{i^{\prime}}$, where $f_{i}$ is the frequency that a user or item
appears in the training data (sorting and indexing can be done quickly on a single node as well as in distributed settings). Then, we partition each
embedding matrix into $k$ blocks such that the sum of frequencies in each
block is equal. That is, the total popularity of each of the $k$ blocks (i.e., the probability that a random query lands in that block) is constant (see Alg. 3). Then, for a given frequency vector $\mathbf{f}$ the
$k$-equipartition is unique and is simple to compute. In our experiments, we
saw that setting $k$ anywhere in the $[8,16]$ range is sufficient to observe
the effects induced by MDs, with diminishing effect beyond these thresholds
(Fig. 5).
Algorithm 3 Partitioning Scheme for CF
Input: Desired number of blocks $k$
Input: Row-wise frequencies vector $\mathbf{\bar{f}}$
Output: Offsets vector $\mathbf{t}$
$\mathbf{f}\leftarrow\texttt{sort}(\mathbf{\bar{f}})$ $\triangleright$ Sort by
row-wise frequency
Find offsets $t_{i}$ such that
$\sum_{l=t_{i}}^{t_{i+1}}f_{l}=\sum_{l=t_{j}}^{t_{j+1}}f_{l}$ for all $i,j$
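A NumPy sketch of this partitioning scheme follows; since exact equality of block masses is generally unattainable with discrete counts, the cut points below are taken at cumulative-mass quantiles (an approximation we assume for illustration):

```python
import numpy as np

def equipartition_offsets(freqs, k):
    """Sketch of Alg. 3: partition rows (sorted by frequency) into k blocks
    with approximately equal total popularity mass.

    Returns block start offsets t_0 = 0 < t_1 < ... < t_{k-1}.
    """
    f = np.sort(np.asarray(freqs, dtype=float))[::-1]    # sort descending
    cum = np.cumsum(f) / f.sum()                          # cumulative mass
    targets = np.arange(1, k) / k                         # 1/k, 2/k, ...
    cuts = np.searchsorted(cum, targets) + 1              # rows per prefix
    return np.concatenate(([0], cuts))

# Example: a power-law-like popularity profile over 1000 rows, k = 4 blocks.
freqs = 1.0 / np.arange(1, 1001)
print(equipartition_offsets(freqs, k=4))   # few popular rows per early block
```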
In our experiments we use the full $27$M MovieLens dataset [64]. We train at
learning rate $10^{-2}$, found in the same manner as for CTR prediction. For
consistency with CTR prediction, we also used the Amsgrad optimizer [65]. We
train for 100 epochs, taking the model with the lowest validation loss. We
early terminate if the model does not improve after 5 epochs in a row. We use
a batch size of $2^{15}$ in order to speed-up training. We did not observe
significant differences between this and smaller batch sizes. Our reported
performance, in terms of MSE, is comparable to that reported elsewhere in the literature [66].
For initialization we use the uniform Xavier initialization (for consistency
with CTR prediction). Also, for the NCF model, we use a 3-layer MLP with
hidden layer dimensions $128-128-32$. We used default LeakyReLU activations.
For the item embedding partitioning (in Fig. 3), we partitioned the item
embeddings by empirical popularity mass. This means that the top third item
embeddings represent on the order of $10^{3}$ items, whereas the bottom third item embeddings represent on the order of $10^{5}$ items. Thus, the top third and bottom
third partitions have the same empirical popularity mass, not the same number
of items.
### -I CTR Prediction
From the perspective of this work, using MD embedding layers on real CTR
prediction data with modern deep recommendation models is an important
experiment that shows how MD embedding layers might scale to real-world
recommendation engines.
Figure 6: Embedding parameters for 26 categorical features allocated at
different temperatures ($\alpha$) for the same parameter budget on the Criteo
dataset. Higher temperatures result in higher dimensions for popular embeddings; $\alpha=0$ gives uniform dimensions. See Section 4 and Alg. 1 for
more details concerning the assignment scheme. The dimensions are rounded to
powers of 2.
In our experiments we use state-of-the-art DLRM [11] and the Criteo Kaggle
dataset. We determined the learning rates, optimizer, batch size, and
initialization scheme by doing a simple grid search sweep on _uniform_
dimension embeddings of dimension $32$. _Ultimately, we used Amsgrad with a
learning rate of $10^{-3}$, a batch size of $2^{12}$, and a uniform Xavier
initialization for all CTR experiments reported in the main text_. As is
customary in CTR prediction tasks, we train for only one epoch [11]. Examples
of parameter allocation based on Alg. 1 can be found in Fig. 6.
Figure 7: Xavier and He (fan-out) initializations for DLRM with UD embeddings at various parameter budgets.
Figure 8: Accuracy on CTR prediction for varying temperature ($\alpha$).
Figure 9: Training time vs. parameter counts for varying temperature (legend same as Fig. 8).
For learning rates, we tried
$\lambda\in\\{10^{-2},10^{-3},10^{-4},10^{-5}\\}$. $10^{-3}$ was best. We
tried Amsgrad, Adam, Adagrad, and SGD optimizers. Amsgrad was best.
For batch size, we tried powers of 2 from $2^{5}$ to $2^{12}$. The batch size
hardly affected the memory footprint on the GPU, since the majority of memory
is spent on the embedding layer, which is only sparsely queried on each
forward pass in the batch. We also found that performance was largely
invariant in batch size, so we selected batch size of $2^{12}$.
For initialization schemes, we tried uniform Xavier [67], uniform He (fan-in)
and uniform He (fan-out) [68]. We initialized all neural network parameters,
including embedding matrices, according to the same scheme. We found that He
fan-in resulted in severe training instability. Xavier outperformed He fan-out
by a considerable margin (Fig. 7).
We also report the accuracy for our CTR prediction experiments (Fig. 8). The
curves mirror the cross entropy loss curves reported in the main text.
Finally, we report training time vs. parameter counts for varying temperatures
(Fig. 9).
### -J On the Range of Temperatures ($\alpha$)
Recall that we use a power-law inspired embedding dimension sizing scheme. In
the main text, we emphasize that $\alpha$ should be in the range $(0,1)$. In
principle, one could pick an $\alpha>1$, but since it is natural to assume that there are diminishing returns to the embedding dimension of a feature, it follows that such a choice would be poor. An $\alpha>1/2$ would imply a sub-linear spectral decay, which is rarely the case in embeddings learned from real-world data. This coincides with our experiments, where the best $\alpha$ values were below $1/2$.
## Theoretical Details
We proceed to present proofs of theorems as well as additional results that
did not fit into the main text of the paper. First, we describe the details of
the mathematical optimization procedure used in the proofs. Then, we discuss
the extension of our method from the matrix case to the tensor case. Finally,
we enumerate the details/assumptions and give the proofs for our theorems.
### -K Block-wise MD Factorization
The first point to address is the specifics of the SGD-variant used to solve
the MD factorization. We adopt the particular variation of the SGD algorithm
for matrix factorization proposed in [23] and assume that block structure is
known or has been estimated. We show that, under the rank additivity assumption, it can be applied to factor the blocks of the matrix $M$ independently and yet
construct a rank-$r$ approximation for it. Note that in this scenario the
projections do not need to be free parameters (see Alg. 4).
Algorithm 4 Block-wise Mixed Dim. Factorization
Input: Partially masked target block matrix $\mathcal{P}_{\Omega}(M)$.
Input: The blocks $M^{(ij)}$ with block-wise rank $r_{ij}$.
Output: MD embeddings $\bar{W},\bar{V}$.
$W^{(i,j)},V^{(i,j)}\leftarrow\mathbf{SGD}(\mathcal{P}_{\Omega}(M^{(ij)}))$ $\triangleright$
Factor each block
for $1\leq i\leq k_{W}$ do
$\bar{W}^{(i)}\leftarrow[W^{(i,1)},...,W^{(i,k_{V})}]\in\mathbb{R}^{n_{i}\times
d_{w}^{i}}$ $\triangleright$ Assemble $\bar{W}$
$d_{w}^{i}\leftarrow\sum_{j=1}^{k_{V}}r_{ij}$ $\triangleright$ Construct
projection $P$
$s_{w}^{i}\leftarrow\sum_{l=1}^{i-1}\sum_{j=1}^{k_{V}}r_{lj}$
$t_{w}^{i}\leftarrow\sum_{l=i+1}^{k_{W}}\sum_{j=1}^{k_{V}}r_{lj}$
$P_{W}^{(i)}\leftarrow\begin{bmatrix}0_{d_{w}^{i}\times
s_{w}^{i}},I_{d_{w}^{i}\times d_{w}^{i}},0_{d_{w}^{i}\times
t_{w}^{i}}\end{bmatrix}\in\mathbb{R}^{d_{w}^{i}\times r}$
end for
for $1\leq j\leq k_{V}$ do
$\bar{V}^{(j)}\leftarrow[V^{(1,j)},...,V^{(k_{W},j)}]\in\mathbb{R}^{m_{j}\times
d_{v}^{j}}$ $\triangleright$ Assemble $\bar{V}$
$d_{v}^{j}\leftarrow\sum_{i=1}^{k_{W}}r_{ij}$ $\triangleright$ Construct
projection $P$
$s_{ij}\leftarrow\sum_{l=1}^{i-1}r_{li}+\sum_{l=1}^{j-1}r_{il}$
$t_{ij}\leftarrow\sum_{l=j+1}^{k_{V}}r_{il}+\sum_{l=i+1}^{k_{W}}r_{lj}$
$P_{V}^{(j)}\leftarrow\begin{bmatrix}0_{r_{1j}\times s_{1j}},I_{r_{1j}\times
r_{1j}},0_{r_{1j}\times t_{1j}}\\\ \vdots\\\ 0_{r_{ij}\times
s_{ij}},I_{r_{ij}\times r_{ij}},0_{r_{ij}\times t_{ij}}\\\ \vdots\\\
0_{r_{k_{w}j}\times s_{k_{w}j}},I_{r_{k_{w}j}\times
r_{k_{w}j}},0_{r_{k_{w}j}\times
t_{k_{w}j}}\end{bmatrix}\in\mathbb{R}^{d_{v}^{j}\times r}$
end for
$\bar{W}\leftarrow(\bar{W}^{(1)},...,\bar{W}^{(k_{W})},P_{W}^{(1)},...,P_{W}^{(k_{W})})$
$\bar{V}\leftarrow(\bar{V}^{(1)},...,\bar{V}^{(k_{V})},P_{V}^{(1)},...,P_{V}^{(k_{V})})$
##### Two block example
We illustrate block-wise MD factorization on a $2n\times m$ two-block rank-$r$ matrix
$M{=}\begin{bmatrix}M^{(1)}\\\
M^{(2)}\end{bmatrix}{=}\begin{bmatrix}W^{(1)}V^{(1)^{T}}\\\
W^{(2)}V^{(2)^{T}}\end{bmatrix}=\begin{bmatrix}W^{(1)}P_{W}^{(1)}\\\
W^{(2)}P_{W}^{(2)}\end{bmatrix}\begin{bmatrix}V^{(1)^{T}}\\\
V^{(2)^{T}}\end{bmatrix}$ with $W^{(1)}\in\mathbb{R}^{n\times
r_{1}},V^{(1)}\in\mathbb{R}^{m\times r_{1}},W^{(2)}\in\mathbb{R}^{n\times
r_{2}},V^{(2)}\in\mathbb{R}^{m\times r_{2}}$ obtained by Alg. 4, projections
$P_{W}^{(1)}=[I,0]\in\mathbb{R}^{r_{1}\times r}$,
$P_{W}^{(2)}=[0,I]\in\mathbb{R}^{r_{2}\times r}$ defined by construction and
$I$ being an identity matrix.
##### Four block example
We extend the example to a $2n\times 2m$ four-block rank-$r$ matrix below
$M=\begin{bmatrix}M^{(11)}M^{(12)}\\\ M^{(21)}M^{(22)}\\\
\end{bmatrix}=\begin{bmatrix}\bar{W}^{(1)}P_{W}^{(1)}\\\
\bar{W}^{(2)}P_{W}^{(2)}\end{bmatrix}\begin{bmatrix}(\bar{V}^{(1)}P_{V}^{(1)})^{T}\\\
(\bar{V}^{(2)}P_{V}^{(2)})^{T}\end{bmatrix}$
where each rank-$r_{ij}$ block factors as $M^{(ij)}=W^{(ij)}V^{(ij)^{T}}$, and
$\bar{W}^{(1)}=[W^{(11)},W^{(12)}]\in\mathbb{R}^{n\times r_{11}+r_{12}}$
$\bar{W}^{(2)}=[W^{(21)},W^{(22)}]\in\mathbb{R}^{n\times r_{21}+r_{22}}$
$\bar{V}^{(1)}=[V^{(11)},V^{(21)}]\in\mathbb{R}^{m\times r_{11}+r_{21}}$
$\bar{V}^{(2)}=[V^{(12)},V^{(22)}]\in\mathbb{R}^{m\times r_{12}+r_{22}}$
were obtained by Alg. 4, while projections
$P_{W}^{(1)}=\begin{bmatrix}I,0,0,0\\\
0,I,0,0\end{bmatrix}\in\mathbb{R}^{r_{11}+r_{12}\times r}$
$P_{W}^{(2)}=\begin{bmatrix}0,0,I,0\\\
0,0,0,I\end{bmatrix}\in\mathbb{R}^{r_{21}+r_{22}\times r}$
$P_{V}^{(1)}=\begin{bmatrix}I,0,0,0\\\
0,0,I,0\end{bmatrix}\in\mathbb{R}^{r_{11}+r_{21}\times r}$
$P_{V}^{(2)}=\begin{bmatrix}0,I,0,0\\\
0,0,0,I\end{bmatrix}\in\mathbb{R}^{r_{12}+r_{22}\times r}$
are defined by construction. Note that the expanded terms are
$\bar{W}^{(1)}P_{W}^{(1)}=[W^{(11)},W^{(12)},0,0]$
$\bar{W}^{(2)}P_{W}^{(2)}=[0,0,W^{(21)},W^{(22)}]$
$\bar{V}^{(1)}P_{V}^{(1)}=[V^{(11)},0,V^{(21)},0]$
$\bar{V}^{(2)}P_{V}^{(2)}=[0,V^{(12)},0,V^{(22)}]$
Intuitively, in a case with more blocks, the projections generalize this
pattern of “sliding" the block elements of $W$ and “interleaving" the block
elements of $V$ as defined in Alg. 4.
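As a numerical sanity check of the two-block construction (not of the SGD solver in Alg. 4), the following NumPy sketch verifies that stacking the projected block factors reproduces $M$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-block example: M = [M1; M2] with block-wise ranks r1 and r2.
n, m, r1, r2 = 6, 5, 2, 1
W1, V1 = rng.standard_normal((n, r1)), rng.standard_normal((m, r1))
W2, V2 = rng.standard_normal((n, r2)), rng.standard_normal((m, r2))
M = np.vstack([W1 @ V1.T, W2 @ V2.T])

P1 = np.hstack([np.eye(r1), np.zeros((r1, r2))])   # P_W^(1) = [I, 0]
P2 = np.hstack([np.zeros((r2, r1)), np.eye(r2)])   # P_W^(2) = [0, I]
W_bar = np.vstack([W1 @ P1, W2 @ P2])              # mixed-dimension W
V_bar = np.hstack([V1, V2])                        # shared V = [V1, V2]

print(np.allclose(M, W_bar @ V_bar.T))             # True
```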
All of the proofs in this work rely on this particular variant of SGD (a
slight departure from the practical solver used in the experiments).
### -L Block-wise Rank Additivity
Implicitly, we have assumed a notion of _rank additivity_ over the block
structure of $M$. Our notion of rank additivity used above is slightly less
general than the one in [69] but is sufficient for our purposes.
###### Definition .1.
Rank Additive _Block matrix $M$ is rank additive if
$r=\sum_{i=1}^{k_{V}}\sum_{j=1}^{k_{W}}r_{ij}$, where $r=\texttt{rank}(M)$ and
$r_{ij}=\texttt{rank}(M^{(ij)})$._
Of course, rank additivity is a mild assumption when the ranks $r_{ij}\ll m$
and the number of blocks is asymptotically constant. In fact, it holds with
high probability for standard random matrix models.
Let us show when the assumption of block-wise rank additivity holds for target
matrix $M$. We begin by restating a relevant lemma [70, 71].
###### Lemma .1.
Let $A\in\mathbb{R}^{m\times n},B\in\mathbb{R}^{m\times
k},C\in\mathbb{R}^{l\times n}$ and $D\in\mathbb{R}^{l\times k}$, while
$\mathcal{R_{M}}=\texttt{range}(M)$ and $r_{M}=\texttt{rank}(M)$. Then,
$\texttt{rank}(\begin{bmatrix}A&B\\\
C&D\end{bmatrix})=r_{A}+r_{B}+r_{C}+r_{D}$ iff
$\mathcal{R}_{A}\cap\mathcal{R}_{B}=\mathcal{R}_{C}\cap\mathcal{R}_{D}=\mathcal{R}_{A^{T}}\cap\mathcal{R}_{C^{T}}=\mathcal{R}_{B^{T}}\cap\mathcal{R}_{D^{T}}=\\{0\\}$.
We generalize the above lemma in the proposition below.
###### Prop. .2.
A block matrix $M$ is rank additive if $\mathcal{R}_{M^{(ij)}}\cap\mathcal{R}_{M^{(ij^{\prime})}}=\\{0\\}$ for every row block $i$ and any $j\neq j^{\prime}$, and $\mathcal{R}_{M^{(ij)T}}\cap\mathcal{R}_{M^{(i^{\prime}j)T}}=\\{0\\}$ for every column block $j$ and any $i\neq i^{\prime}$.
###### Proof.
Let $M^{(i)}$ be the $i$-th block-row of $M$. If for all $j\neq j^{\prime}$,
the $\texttt{range}(M^{(ij)})\cap\texttt{range}(M^{(ij^{\prime})})=\\{0\\}$,
then we directly obtain Eqn. 1:
$\mathbf{dim}(\texttt{range}(M^{(i)}))=\sum_{j}\mathbf{dim}(\texttt{range}(M^{(ij)}))$
Thus, each block-row is rank additive under the assumptions. With some minor
additional effort, we can re-apply the above reasoning on the transpose:
$M^{T}=[M^{(1)T},...,M^{(k_{V})T}]$, treating the whole matrix as a block-row
with $M^{(i)T}$ as the constituent blocks.
Note that we have that
$\texttt{range}(M^{(i)T})=\bigoplus_{j=1}^{k_{V}}\texttt{range}(M^{(ij)T})$
since $M^{(i)T}$ is a concatenation: $M^{(i)T}=[M^{(i1)T},...,M^{(ik_{V})T}]$.
Thus we have for $i\neq i^{\prime}$:
$\texttt{range}(M^{(i)T})\cap\texttt{range}(M^{(i^{\prime})T})=$
$\left(\bigoplus_{j=1}^{k_{V}}\texttt{range}(M^{(ij)T})\right)\bigcap\left(\bigoplus_{j=1}^{k_{V}}\texttt{range}(M^{(i^{\prime}j)T})\right)\subset$
$\bigoplus_{j=1}^{k_{V}}\left(\texttt{range}(M^{(ij)T})\cap\texttt{range}(M^{(i^{\prime}j)T})\right)=\bigoplus_{j}(\\{0\\})=\\{0\\}$
Since trivially
$\\{0\\}\subset\texttt{range}(M^{(i)T})\cap\texttt{range}(M^{(i^{\prime})T})$,
it follows that
$\texttt{range}(M^{(i)T})\cap\texttt{range}(M^{(i^{\prime})T})=\\{0\\}$
from which we can conclude Eqn. 2:
$\mathbf{dim}(\texttt{range}(M^{T}))=\sum_{i}\mathbf{dim}(\texttt{range}(M^{(i)T}))$
To conclude the proof, we have
$\texttt{rank}(M)=\texttt{rank}(M^{T})$
$=\mathbf{dim}(\texttt{range}(M^{T}))$
$=\sum_{i}\mathbf{dim}(\texttt{range}(M^{(i)T}))$ by Eqn. 2
$=\sum_{i}\texttt{rank}(M^{(i)T})$
$=\sum_{i}\texttt{rank}(M^{(i)})$
$=\sum_{i}\mathbf{dim}(\texttt{range}(M^{(i)}))$
$=\sum_{i}\sum_{j}\mathbf{dim}(\texttt{range}(M^{(ij)}))$ by Eqn. 1
$=\sum_{i}\sum_{j}\texttt{rank}(M^{(ij)})$
∎
In other words, as long as the column and row spaces of these block matrices
only intersect at the origin, rank additivity is attained. Of course, in a
high-dimensional ambient space, randomly selected low-dimensional subspaces
will not intersect beyond the origin, from which it follows that rank additivity is a mild assumption that generally holds in practice.
### -M Data-Limited Regime
We cover additional details about the data-limited regime, as well as proofs for the associated theorems, in this section.
#### -M1 Extension to Tensor Completion
As mentioned before, matrix completion is a common model for CF tasks. We
assume the reader is familiar with this literature. We introduce the more
general tensor completion problem as well. Tensor completion generalizes to
contextual CF tasks and subsumes matrix completion as a special case. We
review this here, following a setting similar to [72, 73]. For tensor
completion, the goal is recovering $T\in\mathbb{R}^{n_{1}\times...\times
n_{\kappa}}$ where $T_{x_{1},...,x_{\kappa}}\in\\{0,1\\}$ denotes whether, given
context $(x_{3},...,x_{\kappa})$, user $x_{1}$ engages with item $x_{2}$. We
assume $T$ has a low _pairwise interaction rank_ $r$, meaning $T$ can be
factored into $\kappa$ matrices, $\mathcal{M}^{(i)}\in\mathbb{R}^{n_{i}\times
r}$ for $i\in[\kappa]$ as follows:
$T_{x_{1},...,x_{\kappa}}=\sum_{(i,j)\in[\kappa]\times[\kappa]}\langle\mathcal{M}^{(i)}_{x_{i}},\mathcal{M}^{(j)}_{x_{j}}\rangle$
.
Under the assumed pairwise interaction rank $r$ for $T$, we can factor $T$
into $\kappa$ matrices, $\mathcal{M}^{(1)},...,\mathcal{M}^{(\kappa)}$. We can
adapt our model to the tensor case by exploiting the block structure and
treating $M$ as a block pairwise interaction matrix rather than a preference
matrix. Let $k_{V}=k_{W}=\kappa$ and let each block represent an interaction
matrix: $M^{(ij)}=\mathcal{M}^{(i)}(\mathcal{M}^{(j)})^{T}$. Hence, $M$ is
symmetric and with this simple construction, the factors of $T$ are
represented as the blocks of $M$. The remaining distinction is that in the
tensor case, the algorithm only observes sums of elements selected from the
blocks of $M$ instead of observing the entries of $M$ directly. This minor
distinction is addressed in both [72] and [73] and with appropriate care to
details, the two observation structures are largely equivalent. For brevity,
we discuss only the matrix case here, while keeping in mind this natural
extension to the tensor case. When thinking about the categorical features in
CTR prediction, this construction is precisely the one we use to block by
features, as described in Section 3.
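The following sketch illustrates this construction on a small synthetic example (the number of modes, the sizes, and the rank are placeholder values): the blocks of $M$ are the pairwise products $\mathcal{M}^{(i)}(\mathcal{M}^{(j)})^{T}$, and a tensor entry is recovered as a sum of pairwise inner products.

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, r = 3, 4                       # number of modes and pairwise interaction rank (illustrative)
dims = [20, 15, 5]                    # n_1, ..., n_kappa, e.g. users, items, contexts

# Factor matrices M^(i) in R^{n_i x r}.
factors = [rng.standard_normal((n, r)) for n in dims]

# Block pairwise interaction matrix: block (i, j) is M^(i) (M^(j))^T, so M is symmetric.
M = np.block([[factors[i] @ factors[j].T for j in range(kappa)] for i in range(kappa)])

# A tensor entry is the sum of pairwise inner products over all mode pairs.
def tensor_entry(x):
    return sum(factors[i][x[i]] @ factors[j][x[j]]
               for i in range(kappa) for j in range(kappa))

print(M.shape, round(tensor_entry((3, 7, 2)), 4))
```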
#### -M2 Assumptions
Beyond the rank additivity assumption, we also implicitly assume a classical
assumption on _incoherence_.
The notion of _incoherence_ is central to matrix completion [20, 19, 21, 22,
23, 41]. Throughout this work, we implicitly assume that $M$ is
$\mu$-incoherent. Note that asymptotic notation obscures this. For many
standard random models for $M$, $\mu$ scales like $\sqrt{r\log n}$ [41], but
here, we simply take $\mu$-incoherence as an assumption on $M$. Note that all
matrices are incoherent for some $\mu\in[1,\frac{\max\\{n,m\\}}{r}]$ [23].
###### Definition .2.
Incoherence. _Let $M=USV^{T}$ be the singular-value decomposition of a rank-$r$ matrix $M\in\mathbb{R}^{n\times m}$. Matrix $M$ is $\mu$-incoherent if for all $1\leq i\leq n,||U_{i}||_{2}^{2}\leq\frac{\mu r}{n}$ and for all $1\leq j\leq m,||V_{j}||_{2}^{2}\leq\frac{\mu r}{m}$._
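For concreteness, a small sketch of how the incoherence parameter of a given matrix could be computed from its SVD is shown below; the numerical-rank tolerance is an arbitrary choice.

```python
import numpy as np

def incoherence(M, tol=1e-10):
    """Smallest mu for which M is mu-incoherent in the sense of Definition .2 (a sketch)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))                  # numerical rank
    U, V = U[:, :r], Vt[:r, :].T
    n, m = M.shape
    mu_U = n / r * np.max(np.sum(U ** 2, axis=1))    # largest scaled row norm of U
    mu_V = m / r * np.max(np.sum(V ** 2, axis=1))    # largest scaled row norm of V
    return max(mu_U, mu_V)

rng = np.random.default_rng(2)
M = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))   # a random rank-5 matrix
print(incoherence(M))   # random factors typically give a small incoherence parameter
```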
##### Low-sample Sub-matrix
We denote $M_{\varepsilon}$ as the _low sample sub-matrix_ of blocks
corresponding to minimum marginal sampling rate $\varepsilon$. Concretely,
$M_{\varepsilon}=[M^{(i_{\varepsilon}1)},...,M^{(i_{\varepsilon}k_{V})}]\in\mathbb{R}^{n_{i_{\varepsilon}}\times
m}$ if $\varepsilon_{W}\leq\varepsilon_{V}$ and
$M_{\varepsilon}=[M^{(1j_{\varepsilon})},...,M^{(k_{W}j_{\varepsilon})}]\in\mathbb{R}^{n\times
m_{j_{\varepsilon}}}$ otherwise. For convenience, we also define
$\tilde{n}_{\varepsilon}=n_{i_{\varepsilon}}$ if
$\varepsilon_{W}\leq\varepsilon_{V}$ and
$\tilde{n}_{\varepsilon}=m_{j_{\varepsilon}}$ otherwise. We refer to
$\tilde{n}_{\varepsilon}$ as the size of the low-sample sub-matrix (since the
other dimension is inherited from the size of $M$ itself).
#### -M3 Popularity-agnostic algorithms
Popularity-agnostic algorithms (including UD matrix factorization) are those that can be seen as empirically matching at the observed indices under a given rank constraint, or any relaxation thereof, without taking advantage of popularity. MD factorization imposes
additional popularity-based constraints. These additional constraints become
essential to completion when the popularity is significantly skewed.
###### Definition .3.
Popularity-Agnostic Algorithm. _Let $f(\theta)$ be some arbitrary but fixed
model with parameters $\theta$ that outputs an attempted reconstruction
$\hat{M}$ of matrix $M$. For a given rank $r^{*}$ let $\mathcal{S}$ be any set
of matrices such that for $\hat{S}=\\{\hat{M}|rank(\hat{M})=r^{*}\\}$ we have
$\hat{S}\subseteq\mathcal{S}$. An algorithm $\mathcal{A}$ is popularity-
agnostic if it outputs the solution to an optimization characterized by a
Lagrangian of the form
$\mathcal{L}(\mathbf{\theta,\lambda})=||M-\hat{M}||_{\Omega}^{2}+\lambda\mathbf{1}[\hat{M}\not\in\mathcal{S}]$
where the indicator function $\mathbf{1}[x]=1$ if $x$ is true and 0 otherwise._
#### -M4 Popularity-Agnostic Completion Bounds
It is standard to impose a low-rank structure in the context of matrix
completion. We are interested in understanding how and when popularity-based
structure can improve recovery. While UD embeddings impose a low-rank structure, at a given rank we can interpret our MD embeddings as imposing additional popularity-based constraints on the matrix reconstruction. While our MD embeddings maintain a particular rank, they do so with fewer parameters, thereby imposing an additional popularity-based restriction on the space of admissible solution matrices.
##### Non-asymptotic Upper Bound
We first give a simple lower bound on the sample complexity for popularity-agnostic algorithms. The bound is straightforward, based on the fact that
without additional problem structure, in order to complete a matrix at rank
$r$, you need at least $r$ observations, even on the least popular row or
column. The global reconstruction efforts will always be thwarted by the least
popular user or item. The bound below is non-asymptotic, holding for any
problem size. The theorem below implies that popularity-agnostic algorithms
pay steep recovery penalties depending on the least likely row/column to
sample. If you want to exactly recover a matrix, you can only do as well as
your most difficult row/column.
###### Theorem .3.
Fix some $0<\delta<\frac{1}{2}$. Let $\varepsilon$ be the minimum marginal
sampling rate and let $\tilde{n}_{\varepsilon}$ be the size of the low-sample
sub-matrix. Suppose the number of samples $N\leq\frac{r}{\varepsilon}(1-\delta)$.
Then, no popularity-agnostic algorithm can recover $M$ with probability
greater than $\exp(-\frac{r\tilde{n}_{\varepsilon}\delta^{2}}{3(1-\delta)})$.
###### Proof.
Let $\psi$ be any one of the $\tilde{n}_{\varepsilon}$ vectors in the low-
probability sub-matrix $M_{\varepsilon}$ ($\psi$ is of length $n$ or $m$,
depending on if $M_{\varepsilon}$ is a block-wise row or column). Let
$X_{\psi}$ be a random variable denoting the number of observations corresponding to $\psi$ under block-wise Bernoulli sampling. Since $X_{\psi}$
is the sum of independent Bernoulli variables, we have
$\mathbb{E}[X_{\psi}]=N\varepsilon$ by linearity of expectation. Furthermore,
we require at least $r$ observations for each row and column in $M$ in order
to achieve exact recovery at rank $r$. In order to see this directly, assume
that an oracle completes all the embeddings except the row or column in
question. Then, each observation immediately removes only one degree
of freedom, since it defines the inner product with a known vector. It will be
impossible to complete the final row or column with fewer than $r$ observations
because popularity-agnostic algorithms provide no further constraints beyond a
low-rank structure. Given that we need $r$ observations per vector, we can see
that $\mathbb{E}[X_{\psi}]=N\varepsilon<r$. This implies if $N<r/\varepsilon$,
we can use the Chernoff tail bound [74] to bound $\mathbf{Pr}[X_{\psi}\geq r]$
from above. We take $1/2<\delta<1$ such that $N\leq\delta r/\varepsilon$. By
application of the Chernoff bound, we have $\mathbf{Pr}[X_{\psi}\geq
r]\leq\exp(-\frac{r}{3}\frac{\delta^{2}}{1-\delta})$. To complete the proof,
notice that our argument extends to each of the $\tilde{n}_{\varepsilon}$
vectors in $M_{\varepsilon}$ independently. Since all of these vectors require
$r$ observations in order to complete $M_{\varepsilon}$, we obtain the
final probability by computing a product over the probability that each of
$\tilde{n}_{\varepsilon}$ vectors obtains at least $r$ observations.∎
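A quick Monte Carlo sanity check of the per-vector argument is sketched below; the rank, sampling rate, and slack $\delta$ are illustrative values, and the empirical frequency of collecting at least $r$ observations should fall below the stated Chernoff bound.

```python
import numpy as np

rng = np.random.default_rng(3)

r, eps, delta = 20, 1e-3, 0.4          # rank, minimum marginal sampling rate, slack (illustrative values)
N = int((1 - delta) * r / eps)         # sample budget in the regime of Theorem .3
trials = 10000

# Observations landing on one low-probability vector are Binomial(N, eps) under Bernoulli sampling.
X = rng.binomial(N, eps, size=trials)
empirical = np.mean(X >= r)                              # frequency of collecting at least r observations
chernoff = np.exp(-r * delta ** 2 / (3 * (1 - delta)))   # per-vector Chernoff bound from the proof
print(empirical, chernoff)                               # the empirical frequency stays below the bound
```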
##### Asymptotic Upper Bound
We also provide a stronger asymptotic lower bound for exact completion, based
on the results of [19, 41]. This lower bound assumes that the matrix size $n$
increases while keeping the sampling rate constant. It includes an additional
$O(\log n)$ factor, due to the well-known _coupon collector_ effect [75].
Since $M$ is still a block matrix, we assume that asymptotically, each
individual block becomes large, while $\Pi$ is held constant. More concretely,
we assume each block scales at the same rate as the entire matrix:
$n_{ij}=\Theta(n)$ for all $i,j$ and $m_{ij}=\Theta(n)$ for all $i,j$. In
principle, we could also support an asymptotic number of blocks as well, as
long as the number of blocks grows slowly enough compared to size of each
block. Other numerical constants, such as the condition number and
incoherence, are taken to be non-asymptotic. Note that we do not require the
block additivity assumption for this to hold.
###### Theorem .4.
Let $M$ be a target matrix following the block-wise Bernoulli sampling model.
Let $\varepsilon$ be the minimum marginal sampling rate. Suppose
$N=o(\frac{r}{\varepsilon}n\log n)$. Then any popularity-agnostic algorithm
will have arbitrarily small probability of recovery, asymptotically.
###### Proof.
Order $\Theta(nr\log n)$ observations are necessary for exact completion at a
given probability in the asymptotic setting [19, 41]. This is because
$\Theta(nr\log n)$ observations are necessary to obtain $r$ observations per
row and column. Each vector in the low-sample sub-matrix also requires $r$
observations. Since the number of samples in the low-sample sub-matrix
concentrates around $N\varepsilon$, this number must be order $\Theta(rn\log
n)$ in order to have a chance of reconstruction.∎
It is instructive to understand why UD embeddings fail to recover rank-$r$
matrix $M$ under popularity skew. For argument’s sake, let $d$ denote a
potential uniform embedding dimension. Suppose we have $\Theta(rn\log n)$
samples and we set UD to $r$: $d\leftarrow r$. When sampling is skewed,
$M^{(2)}$ will be too sparsely covered to reveal $r$ degrees of freedom, since
it only generates an $\epsilon$ fraction of the observations. Thus, the
$r$-dimensional embeddings would over-fit the under-constrained $M^{(2)}$
block as a result. Alternatively, if we set $d\leftarrow\epsilon r$, so as to match the sample size over sub-matrix $M^{(2)}$, then our $\epsilon
r$-dimensional embeddings would be unable to fit the larger training sample
over the $M^{(1)}$ block. Namely, we would now have too many samples coming
from a rank-$r$ matrix, resulting in an over-constrained problem that is
infeasible with $\epsilon r$-dimensional embeddings. By using MD embeddings,
we can avoid this problem by simultaneously fitting popular and rare blocks.
#### -M5 Completion Guarantees for Mixed Dimension Embeddings
In [23] it was shown that various non-convex optimization algorithms,
including SGD, could exactly complete the unknown matrix, under the Bernoulli
sampling model. For convenience, the theorem is reproduced below. The details
of the SGD implementation, such as step sizes and other parameters, can be
found in [23].
###### Theorem .5.
(Sun and Luo, 2016) Let $\alpha=\frac{n}{m}\geq 1$, $\kappa$ be the condition
number of $M$ and $\mu$ be the incoherence of $M$. If the (expected) number of
samples $N\geq C_{0}nr\kappa^{2}\alpha(\max\\{\mu\log
n,\sqrt{\alpha}\mu^{2}r^{6}\kappa^{4}\\})$ then SGD completes $M$ with
probability greater than $1-\frac{2}{n^{4}}$.
We can use Thm .5 and Alg. 4 to construct a guarantee for mixed-dimension
block-wise factorization, as follows.
###### Corollary .5.1.
Let $M$ be a target matrix following the block-wise Bernoulli sampling model.
Let $C_{0}$ be a universal constant, $\hat{n}_{ij}=\min\\{n_{i},m_{j}\\}$ and
$N^{*}_{ij}=C_{0}\Pi_{ij}^{-1}\hat{n}_{ij}r_{ij}\kappa_{ij}^{2}\alpha_{ij}(\max\\{\mu_{ij}\log\hat{n}_{ij},\sqrt{\alpha_{ij}}\mu_{ij}^{2}r_{ij}^{6}\kappa_{ij}^{4}\\})$
where $\kappa_{ij}$ and $\mu_{ij}$ are the condition number and the incoherence
of the $ij$-th block of $M$ and
$\alpha_{ij}=\frac{\max\\{n_{ij},m_{ij}\\}}{\min\\{n_{ij},m_{ij}\\}}\geq 1$.
If $N\geq\max_{ij}N_{ij}^{*}$, then block-wise MD factorization completes rank
additive block matrix $M$ with probability greater than
$1-\sum_{ij}\frac{2}{\hat{n}_{ij}^{4}}$.
###### Proof.
Recall the construction used in Alg. 4. First, we complete each block
individually. We apply Thm .5 to each block independently to guarantee its
completion at rank $r_{ij}$ with probability at least
$1-\frac{2}{\hat{n}_{ij}^{4}}$. We then use the block-wise embeddings to construct MD embeddings $\bar{W},\bar{V}$ as described in Alg. 4. If
$W^{(ij)}V^{(ij)T}=M^{(ij)}$, for all $i,j$, then $\bar{W}\bar{V}^{T}=M$.
Thus, we need only a union bound on the failure probabilities from Thm .5 to
complete the proof. ∎
Note that Corollary .5.1 implies Thm IV.1 as it is a non-asymptotic version.
Namely, recall that $n_{ij}=\Theta(n)$ for all $i,j$. Furthermore, letting
$C=C_{0}\max_{ij}{(\Pi_{ij}^{-1}r_{ij}\kappa_{ij}^{2}\alpha_{ij}\mu_{ij})}$
recovers Corollary IV.1.
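Since Alg. 4 itself is not reproduced here, the sketch below shows one natural assembly step consistent with the proof above (an assumption about the construction): given block-wise embeddings that factor each block exactly, stacking them into per-block column groups yields MD embeddings $\bar{W},\bar{V}$ with $\bar{W}\bar{V}^{T}=M$. The block sizes and ranks are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sizes, m_sizes = [30, 20], [25, 15]
ranks = [[3, 2], [1, 4]]                 # block-wise ranks r_ij (illustrative)

# Assume each block has been completed, i.e. block-wise embeddings W^(ij), V^(ij) are known.
W_blk = [[rng.standard_normal((n_sizes[i], ranks[i][j])) for j in range(2)] for i in range(2)]
V_blk = [[rng.standard_normal((m_sizes[j], ranks[i][j])) for j in range(2)] for i in range(2)]
M = np.block([[W_blk[i][j] @ V_blk[i][j].T for j in range(2)] for i in range(2)])

# One natural assembly: one column group per block (i, j); row embedding block i (column
# embedding block j) holds W^(ij) (V^(ij)) in that group and zeros elsewhere.
groups = [(i, j) for i in range(2) for j in range(2)]
W_bar = np.block([[W_blk[i][gj] if gi == i else np.zeros((n_sizes[i], ranks[gi][gj]))
                   for (gi, gj) in groups] for i in range(2)])
V_bar = np.block([[V_blk[gi][j] if gj == j else np.zeros((m_sizes[j], ranks[gi][gj]))
                   for (gi, gj) in groups] for j in range(2)])

print(np.allclose(W_bar @ V_bar.T, M))   # True: the assembled MD embeddings reproduce M exactly
```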
### -N Memory-Limited Regime
Now, we turn our attention to the allocation implications of non-uniformly
sampled test sets. To abstract away training, we assume an oracle reveals the
target matrix. Recall that our challenge is a small parameter budget — our
embeddings can only use $B$ parameters. The question is what dimensions
$d_{w}$ and $d_{v}$ should each embedding block get to minimize our
popularity-weighted reconstruction loss (under MSE)? Before proceeding, we
pause to define some useful matrices from the block-wise MD factorization
(Alg. 4).
###### Definition .4.
_In the context of block-wise MD factorization (Alg. 4) we refer to the
matrices $(W^{(ij)},V^{(ij)})$ as the $ij$-th block-wise embeddings. We refer
to matrix $\bar{W}^{(i)}$ as the $i$-th row embedding block and matrix
$\bar{V}^{(j)}$ as the $j$-th column embedding block._
Note that, generally speaking, embedding tables $(W,V)$, naturally inherit an
_embedding block_ structure based on the block structure of $M$. For example,
standard UD embeddings partition such that the top block-wise row
$M^{(1)}=[M^{(11)},...,M^{(1k_{W})}]$ only depends on $W^{(1)}$. Thus,
embedding blocks exist independent of the usage of block-wise MD
factorization. On the other hand, _block-wise embeddings_
$(W^{(ij)},V^{(ij)})$, are a distinct byproduct of block-wise MD
factorization.
#### -N1 Optimization over Embedding Dimensions
We assume the block structure and block-wise probability matrix are given — the
variables over which the optimization takes place are 1) the dimensions of the
embedding blocks, $(d_{w},d_{v})$ such that
$W^{(l)}\in\mathbb{R}^{n_{l}\times(d_{w})_{l}}$ and
$V^{(l)}\in\mathbb{R}^{m_{l}\times(d_{v})_{l}}$ and 2) the embedding blocks
themselves $W^{(i)}$ for $1\leq i\leq k_{W}$ and $V^{(j)}$ for $1\leq j\leq
k_{V}$. Note that when the embedding block dimensions are uniform, such that
$(d_{w})_{i}=(d_{v})_{j}$ for all $i$ and $j$, this is equivalent to direct
optimization over embedding matrices $W,V$ (i.e. matrix factorization). Recall
that $L_{\Pi}$ is the popularity-weighted MSE. When $d_{w}$ and $d_{v}$ are
treated as integers, this optimization is NP-Hard in general, since even
integral feasibility under linear constraints is known to be NP-Hard [76].
Instead, we study a continuous relaxation that results in a convex program.
The resultant convex program is far simpler, and yields a closed-form solution
given the spectrum of the target matrix. We proceed to define another quantity
of interest for our discussion, a _spectral (singular value) decay_. In order
to save space in the main text, we do not introduce the spectral decay rule
$g$ but we imply it when referring to the spectrum directly. After the
upcoming definition, we restate and prove Thm. IV.2 from the main text.
###### Definition .5.
_A spectral decay is a mapping from $[0,r]$ to $\mathbb{R}^{+}$ that describes
the singular value scree plot for a matrix. Let $\sigma_{k}$ be the $k$-th
singular value of a matrix and $\sigma_{k}\geq\sigma_{k+1}$ for $k=1,...,r$.
For any singular value spectrum we associate a spectral decay rule, a piece-
wise step-function and its functional inverse, as $g(x)=\sigma_{k}$ for
$k-1\leq x<k$ and $g^{-1}(x)=k$ for $\sigma_{k+1}<x\leq\sigma_{k}$,
respectively._
###### Theorem .6.
The optimal block-wise embedding dimensions for the convex relaxation of the
variable dimension embedding optimization under a parameter budget are given
by
$d_{ij}^{*}=g_{ij}^{-1}\left(\sqrt{\lambda(n_{i}+m_{j})(n_{i}m_{j})\Pi_{ij}^{-1}}\right)$
where $g_{ij}^{-1}$ is the functional inverse of the spectral decay of block
$M^{(ij)}$.
###### Proof.
The optimization is formulated as
$\min_{d_{w},d_{v}}\min_{W,V}L_{\Pi}(M,WV^{T})$ $\text{s.t.
}\sum_{i}n_{i}(d_{w})_{i}+\sum_{j}m_{j}(d_{v})_{j}\leq B$
Under relaxation, we treat this as a continuous optimization. Let $(k,l)$ be a
test coordinate sampled according to $\Pi$. If rank additivity holds, we can
equivalently write
$=\min_{d}\min_{W,V}\mathbb{E}_{(k,l)\in[n]\times[m]}|M_{kl}-W_{k}V_{l}^{T}|^{2}$
$\text{ st }\sum_{ij}(n_{i}+m_{j})d_{ij}\leq B$
where $M_{kl}$ is the $kl$-th element of $M$, $(d_{w})_{i}=\sum_{j}d_{ij}$ and
$(d_{v})_{j}=\sum_{i}d_{ij}$. The $d_{w},d_{v}$ refer to the embedding block
dimensions, whereas the $d_{ij}$ refer to the block-wise embedding dimensions
(Definition C.11). We may ignore the parameters in the projections since they
are not free parameters (and also contribute a negligible amount of parameters
to the total count). Under the block-wise Bernoulli sampling model, the popularity distribution yields the following weighted objective. As shorthand notation, let
$\mathfrak{B}:=\sum_{ij}(n_{i}+m_{j})d_{ij}$.
$=\min_{d}\min_{W,V}\sum_{ij}\frac{\Pi_{ij}}{n_{i}m_{j}}||M^{(ij)}-W^{(ij)}V^{(ij)^{T}}||_{F}^{2}\text{
st }\mathfrak{B}\leq B$
Since the constraints remain the same, we omit them for the time being.
Letting $\sigma_{k}^{(ij)}$ be the singular values of block $M^{(ij)}$ and
using the low-rank approximation theorem [77] we obtain
$=\min_{d}\sum_{ij}\frac{\Pi_{ij}}{n_{i}m_{j}}\sum_{k=d_{ij}+1}^{r_{ij}}(\sigma_{k}^{(ij)})^{2}\text{
st }\mathfrak{B}\leq B$
Letting $g_{ij}$ be the spectral decay rule for each block and noticing that
by construction $\sum_{k=1}^{r}\sigma_{k}=\int_{0}^{r}g(k)dk$ we obtain
$=\min_{d}\sum_{ij}\frac{\Pi_{ij}}{n_{i}m_{j}}\left(\int_{0}^{r_{ij}}g_{ij}^{2}(k)dk-\int_{0}^{d_{ij}}g_{ij}^{2}(k)dk\right)\text{
st }\mathfrak{B}\leq B$
$=\min_{d}\sum_{ij}\frac{\Pi_{ij}}{n_{i}m_{j}}\left(||M^{(ij)}||_{F}^{2}-\int_{0}^{d_{ij}}g_{ij}^{2}(k)dk\right)\text{
st }\mathfrak{B}\leq B$
Observe that the objective is convex. To see this, note that each $g$ is
decreasing since the spectral decay is decreasing. Thus, $g^{2}$ is also
decreasing. The negative integral of a decreasing function is convex. Finally,
since the objective is a sum of functions that are convex along one variable
and constant along the rest, the entire optimization is convex (and well-posed
under the linear constraint, which is guaranteed to be active). Thus we can
solve the optimization with first-order conditions [24]. The
corresponding Lagrangian can be written as
$\mathcal{L}=\sum_{ij}\frac{\Pi_{ij}}{n_{i}m_{j}}\left(||M^{(ij)}||_{F}^{2}-\int_{0}^{d_{ij}}g_{ij}^{2}(k)dk\right)$
$+\lambda\left(-B+\sum_{ij}(n_{i}+m_{j})d_{ij}\right)$
Note that $M^{(ij)}$ does not depend on $d_{ij}$. Also, note that we can use
the fundamental theorem of calculus $\frac{\partial}{\partial
x}\int_{0}^{x}g(t)dt=g(x)$ [78]. Then, using Lagrange multipliers [24] we can
write
$\frac{\partial}{\partial
d_{ij}}L_{\Pi}(M,WV^{T})=-\frac{\Pi_{ij}}{n_{i}m_{j}}g_{ij}^{2}(d_{ij})+\lambda(n_{i}+m_{j})$
Finally, using first order conditions
$\nabla_{d_{ij}}=[\frac{\partial}{\partial d_{ij}}]=0$ we obtain:
$g_{ij}^{2}(d_{ij})=\lambda(n_{i}+m_{j})(n_{i}m_{j})\Pi_{ij}^{-1}$. Solving
for $d_{ij}$ by taking the functional inverse of $g_{ij}$ completes the proof.
We conclude:
$d^{*}_{ij}=g_{ij}^{-1}\left(\sqrt{\lambda(n_{i}+m_{j})(n_{i}m_{j})\Pi_{ij}^{-1}}\right)$
∎
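A small sketch of how this allocation could be computed in practice is given below: it evaluates $g_{ij}^{-1}$ as the step-function inverse of Definition .5 and finds the multiplier $\lambda$ by bisection so that the budget constraint is (approximately) active. The block sizes, popularity matrix, spectra, and budget are illustrative placeholders.

```python
import numpy as np

def optimal_dims(spectra, n_sizes, m_sizes, Pi, B):
    """Allocation of Theorem .6 (a sketch): d_ij = g_ij^{-1}(sqrt(lambda (n_i+m_j) n_i m_j / Pi_ij)),
    with g_ij^{-1} the step-function inverse of Definition .5 and lambda found by bisection
    so that the parameter budget is (approximately) active."""
    def dims(lam):
        d = np.zeros((len(n_sizes), len(m_sizes)))
        for i, n in enumerate(n_sizes):
            for j, m in enumerate(m_sizes):
                thr = np.sqrt(lam * (n + m) * n * m / Pi[i][j])
                d[i, j] = np.sum(np.asarray(spectra[i][j]) > thr)  # singular values above the threshold
        return d

    def used(lam):
        d = dims(lam)
        return sum((n + m) * d[i, j] for i, n in enumerate(n_sizes) for j, m in enumerate(m_sizes))

    lo, hi = 0.0, 1e9
    for _ in range(100):                      # bisection on the Lagrange multiplier
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if used(mid) > B else (lo, mid)
    return dims(hi)

# Illustrative example: power-law block spectra and a skewed popularity matrix.
n_sizes, m_sizes = [1000, 200], [800, 300]
Pi = np.array([[0.7, 0.1], [0.15, 0.05]])
spectra = [[np.arange(1, 51, dtype=float) ** -1.0 for _ in m_sizes] for _ in n_sizes]
print(optimal_dims(spectra, n_sizes, m_sizes, Pi, B=100000))
```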
For specific spectral decay rules, we may give closed-form solutions, as done
in the main text for power law decay. We can also analyze the performance gap
between uniform and MD embeddings with respect to the optimization objective.
###### Corollary .6.1.
The performance gap compared to UD embeddings is
$\sum_{ij}\Pi_{ij}(\mathbf{1}\\{d_{ij}^{*}>\frac{B}{n+m}\\}(\sum_{k=\frac{B}{n+m}}^{d^{*}_{ij}}(\sigma_{k}^{(ij)})^{2})$
$-\mathbf{1}\\{d_{ij}^{*}<\frac{B}{n+m}\\}(\sum_{k=d^{*}_{ij}}^{\frac{B}{n+m}}(\sigma_{k}^{(ij)})^{2}))$
###### Proof.
Follows directly from plugging optimal $d^{*}$ into the objective. ∎
We can explain the intuition for Corollary .6.1 as follows. The first term
counts the spectral mass gained back by allocating more parameters to frequent
embeddings. $B/(n+m)$ is the embedding dimension under a uniform parameter
allotment. When $d_{ij}^{*}$ is greater than this, we are increasing the
dimension which enables that embedding to recover more of the spectrum. This
occurs when $\Pi_{ij}$ is large. On the other hand, the trade-off is that
lower-dimension embeddings recover less of the spectrum when $\Pi_{ij}$ is
small, which is the penalty incurred by the second term.
###### Corollary .6.2.
When $M$ exhibits a block-wise power spectral decay, this becomes:
$d_{ij}^{*}=\lambda\zeta_{ij}\Pi_{ij}^{\frac{1}{2\beta}}$
where
$\zeta_{ij}=\left(\frac{(n_{i}+m_{j})(n_{i}m_{j})}{\mu}\right)^{\frac{-1}{2\beta}}$
and
$\lambda=\left(\frac{B}{\sum_{ij}(n_{i}+m_{j})\zeta_{ij}}\right)^{-2\beta}$
###### Proof.
Follows directly by substituting power spectral decay rule for
$\sigma(\cdot)$. ∎
#### -N2 Approximation Gap for Convex Relaxation
Note that we can bound the approximation gap of the proposed relaxation by
simply rounding down each $d_{ij}$ to the nearest integer, which ensures the
feasibility of the assignment. The absolute approximation error is then less
than $\sum_{ij}\Pi_{ij}\cdot g^{2}(d_{ij})$. For most applications, this
quantity is small for a good MD assignment, since either the probability term,
$\Pi_{ij}$ is small, or the spectrum at $d_{ij}$ is small. For example, in
typical use cases, the embedding dimensions may be on the order of $10$–$100$; rounding down to the nearest integer would thus represent a loss of only $1$–$10\%$ of the spectral mass.
# Ambiguity in mana and magic definition and knot states
S. Mironov$^{a,b,c,d}$, An. Morozov$^{e,c,d}$, e-mail:<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We study the mana and magic of quantum states. They have a standard definition through the Clifford group, which is finite and thus classically computable. We introduce a modified mana and magic, which keep their main property of classical computability, while making a different set of states classically computable. We also apply these new definitions to the study of knot states of 2-strand knots.
a INR RAS, Moscow, 117312, Russia
b ITMP, MSU, Moscow, 119991, Russia
c MIPT, Dolgoprudny, 141701, Russia
d Kurchatov Institute, Moscow, 123182, Russia
e IITP RAS, Moscow 127994, Russia
## 1 Introduction
In recent years papers have appeared which discuss the magic and mana properties of different models, such as CFT, knot theory and others [1, 2, 3]. These quantities characterize how far a certain quantum mechanical state defined in such a model is from an element of the Clifford group [4]. According to the Gottesman-Knill theorem [5], Clifford group elements can be effectively modeled on a classical computer. Thus it is claimed that “magic” is in effect the non-classicality of a certain state, and mana measures this non-classicality. These properties can be important in relation to quantum computations.
The Gottesman-Knill theorem is based on the fact that the Clifford group is a finite subgroup of the studied group $G$, which is a tensor product of several $SU(N)$’s. However, it is not the only finite subgroup. One can define infinitely many such subgroups for the same group $G$. Among these, the defining property of the Clifford group is its connection to the sigma matrices. From the point of view of quantum computing there is no need to demand this. Thus, depending on the set of problems one wants to present to a quantum computer, mana can be defined differently. Our claim is that mana is in fact a relative rather than absolute property.
In the present paper we will present how the Clifford group is usually defined
and how it can be modified to get other finite subgroups. We will apply this
new mana definition to studying knot states.
Knot theory is a widely studied subject with lots of relations to other
theories. Among others, there are connections between knot theory and quantum computations, which provide both approaches to calculating knot polynomials using quantum algorithms and descriptions of quantum algorithms as knot configurations in effective topological field theory [14]-[19]. This involves
calculating knots using unitary matrices through the Reshetikhin-Turaev
algorithms [6]-[13]. Specifically for some particular series of knots any
quantum algorithm can be described as consecutive approximations by a series
of knots [18, 19].
However, in the present paper we discuss a different approach to knot
theory. Mana and magic are properties of the quantum states (density matrices)
rather than unitary operations. There is a way to define quantum states,
corresponding to a knot [2], using ideas of topological field theories [20,
21]. Matrix elements of this density matrix are made from the knot polynomials at special points. Thus the classicality of such states provides us with some information on whether these knot invariants can be calculated on a classical computer.
The paper is organised as follows. In Chapter 2 we define the Clifford group, which is a finite subgroup of the $SU(N)$ group. In Chapter 3 we provide a definition of mana as it is given in other papers on the subject, such as [1, 2, 3]. In Chapter 4 we discuss the ambiguity in the mana definition and show how the definition can be modified to give a mana connected to a different finite subgroup of $SU(N)$. In Chapter 5 we define quantum mechanical states describing different knots, according to [20, 21, 2]. In Chapter 6 we study how the mana looks for the knot states and how it can be changed by defining mana differently.
## 2 Clifford group
The Clifford group was first defined by D. Gottesman [4]. Let us take a system of $d$ orthonormal states $|k>$, $k=0\ldots d-1$. We take a pair of operators $Z$ and $X$:
$\begin{array}[]{l}Z=\sum\limits_{k=0}^{d-1}\omega^{k}|k><k|,\ \ \ \
\omega=e^{\frac{2\pi i}{d}},\\\ \\\ X=\sum\limits_{k=0}^{d-1}|(k+1)\text{mod}\
d><k|.\end{array}$ (1)
Using these operators one could define generalized Pauli operators
$T_{aa^{\prime}}=\left\\{\begin{array}[]{l}i^{aa^{\prime}}Z^{a}X^{a^{\prime}},\
\ d=2,\\\ \\\ \omega^{-\bar{2}aa^{\prime}}Z^{a}X^{a^{\prime}},\ \
d>2,\end{array}\right.$ (2)
where $\bar{2}$ is a multiplicative inverse of $2$: $2\times\bar{2}\equiv
1\text{mod}\ d$. From the generalized Pauli operators one can define strings
of such operators – Pauli strings:
$T_{\mathbf{a}}=T_{a_{1}a^{\prime}_{1}}T_{a_{2}a^{\prime}_{2}}\ldots
T_{a_{n}a^{\prime}_{n}}.$ (3)
The Clifford group is defined as a set of unitary operators $U$ which
transform Pauli string into another Pauli string up to a phase:
$\mathcal{C}=\left\\{U\ \ :\ \
UT_{\vec{a}}U^{\dagger}=e^{i\phi}T_{\vec{b}}\right\\}.$ (4)
Clifford gates – elements of the Clifford group – act in the space of the size
$d^{n}$. There are three Clifford gates which generate the whole group. Two of
them act in one $d$-dimensional space, namely phase gate and Hadamard gate:
$\begin{array}[]{l}K=\left(\begin{array}[]{llllll}1\\\ &\omega\\\
&&\omega^{2}\\\ &&\ldots\\\ &&&\omega^{\frac{(d-1)(d-3)}{2}}\\\
&&&&\omega^{\frac{(d-1)(d-2)}{2}}\end{array}\right),\\\ \\\
H=\cfrac{1}{\sqrt{d}}\left(\begin{array}[]{llll}1&1&\ldots&1\\\
1&\omega&\ldots&\omega^{d-1}\\\ \ldots&\ldots&\ldots&\ldots\\\
1&\omega^{d-1}&\ldots&\omega^{(d-1)^{2}}\end{array}\right).\end{array}$ (5)
Also there is one operator which acts on a pair of $d$-dimensional spaces:
$S=\sum\limits_{ij}|i;i\oplus j><i;j|.$ (6)
In the $SU(2)$ case this operator is the CNOT-gate.
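As a simple numerical illustration of these definitions (a sketch for a single qutrit, $d=3$), the operators $Z$, $X$, $T_{aa^{\prime}}$ and the Hadamard gate $H$ can be constructed directly, and one can check the commutation relation $ZX=\omega XZ$ as well as the Clifford property (4) in the form $HXH^{\dagger}=Z$:

```python
import numpy as np

d = 3                                          # qutrit example; d > 2 branch of Eq. (2)
omega = np.exp(2j * np.pi / d)

Z = np.diag(omega ** np.arange(d))             # Eq. (1)
X = np.roll(np.eye(d), 1, axis=0)              # |k> -> |(k+1) mod d>
H = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)  # Eq. (5)

inv2 = pow(2, -1, d)                           # multiplicative inverse of 2 mod d

def T(a, ap):
    """Generalized Pauli operator of Eq. (2) for d > 2."""
    return omega ** (-inv2 * a * ap) * np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, ap)

print(np.allclose(Z @ X, omega * X @ Z))                    # Weyl commutation relation
print(np.allclose(H @ X @ H.conj().T, Z))                   # H maps X to Z: a Clifford action, cf. Eq. (4)
print(np.allclose(T(1, 2) @ T(1, 2).conj().T, np.eye(d)))   # generalized Pauli operators are unitary
```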
An interesting fact is that there is a finite (although quite large) number of elements in the Clifford group. Due to this fact, anything constructed from the Clifford gates can be effectively simulated on a classical computer in polynomial time. Another interesting fact is that the Clifford group includes the entangling S-gate (CNOT). Thus non-classicality (magic) is in fact a different parameter from entanglement.
## 3 Mana
To measure the degree of magic of a certain state, a quantity called mana was introduced. It is defined through a phase space point operator $A_{\vec{a}}$:
$A_{\vec{a}}=d^{-n}T_{\vec{a}}\sum\limits_{\vec{b}}T_{\vec{b}}T^{\dagger}_{\vec{a}},$
(7)
where $T_{\vec{a}}$ are Pauli strings (3). Using these operators one can
define discrete Wigner function $W_{\rho}(\vec{a})$ of a state with density
matrix $\rho$:
$W_{\rho}(\vec{a})=\frac{1}{d^{n}}\text{Tr}\ \rho A_{\vec{a}}.$ (8)
Phase space point operators form a complete orthonormal basis in the space of
$d^{n}\times d^{n}$ matrices, thus density matrix can be constructed from the
Wigner functions:
$\rho=\sum\limits_{\vec{a}}W_{\rho}(\vec{a})A_{\vec{a}}.$ (9)
This means that for any physical state with $\text{Tr}\ \rho=1$, the set of Wigner functions satisfies $\sum W_{\rho}(\vec{a})=1$. Mana is defined as the logarithm of the negativity of the set of Wigner functions:
$M(\rho)=\log\sum\limits_{\vec{a}}|W_{\rho}(\vec{a})|.$ (10)
Mana possesses several interesting properties which explain why this definition was chosen. First, it can be shown that mana is equal to zero if
and only if the density matrix $\rho$ is made from Clifford gates. Second,
mana is additive:
$M(\rho_{a}\otimes\rho_{b})=M(\rho_{a})+M(\rho_{b}).$ (11)
Third, mana is related to the second Renyi entropy $S_{2}$:
$M(\rho)\leq\frac{1}{2}(L\log d-S_{2}).$ (12)
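For a single qutrit ($n=1$), the definitions (7)-(10) can be evaluated directly; the sketch below checks that the Wigner functions of a physical state sum to one and that a computational-basis (stabilizer) state has zero mana, while a known non-stabilizer (“magic”) state does not:

```python
import numpy as np

d = 3                                          # single qutrit, n = 1
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)
inv2 = pow(2, -1, d)                           # multiplicative inverse of 2 mod d

def T(a, ap):
    return omega ** (-inv2 * a * ap) * np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, ap)

# Phase-space point operators of Eq. (7) for n = 1.
T_sum = sum(T(a, ap) for a in range(d) for ap in range(d))
A_ops = [T(a, ap) @ T_sum @ T(a, ap).conj().T / d for a in range(d) for ap in range(d)]

def wigner(rho):
    return np.array([np.trace(rho @ A).real / d for A in A_ops])   # Eq. (8)

def mana(rho):
    return np.log(np.abs(wigner(rho)).sum())                        # Eq. (10)

rho_stab = np.diag([1.0, 0.0, 0.0])            # a computational-basis (stabilizer) state
v = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)    # a known non-stabilizer ("magic") qutrit state
rho_magic = np.outer(v, v)

print(round(wigner(rho_stab).sum(), 6))                       # Wigner functions sum to Tr(rho) = 1
print(round(mana(rho_stab), 6), round(mana(rho_magic), 4))    # zero for the stabilizer state, positive otherwise
```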
## 4 Ambiguity
The main property of the Clifford group related to the ease of classical computation is its finiteness. However, the Clifford group is not the only finite subgroup of the $U(d)^{\otimes n}$ group. From the Clifford group one can easily construct other finite subgroups using just rotation matrices.
Let us take as an example the case of one unitary group. Then instead of $X$
and $Z$ operators one can take
$\tilde{X}=WXW^{\dagger},\ \ \ \tilde{Z}=WZW^{\dagger},$ (13)
where $W$ is a matrix from the $SU(d)$ group. These provide another set of
matrices instead of the generalized Pauli matrices:
$\tilde{T}=WTW^{\dagger}.$ (14)
Instead of Pauli strings one will also get generalized strings which can be
rotated independently for each of the generalized Pauli matrices in the
string. These generalized Pauli strings will also be related by some finite
group:
$\tilde{\mathcal{C}}=\left\\{\tilde{U}\ \ :\ \
\tilde{U}\tilde{T}_{\vec{a}}\tilde{U}^{\dagger}=e^{i\phi}\tilde{T}_{\vec{b}}\right\\}.$
(15)
When there is only one $U(d)$ group, these matrices are related to the Clifford group matrices by rotation with $W$. Thus instead of the Hadamard and phase gates this group will be generated by
$\tilde{K}=WKW^{\dagger},\ \ \ \tilde{H}=WHW^{\dagger}.$ (16)
When tensor product of several $U(d)$ groups is considered, each group can be
rotated independently of others. The $S$ operator from (6) should be modified
accordingly:
$\tilde{S}=\mathcal{W}_{2}S\mathcal{W}_{2}^{\dagger},$ (17)
where $\mathcal{W}_{n}=W_{1}\otimes W_{2}\otimes\ldots\otimes W_{n}$, with
matrices $W_{1}$ and $W_{2}$ acting on the corresponding pair of $d$-dimensional spaces.
When the Clifford group is modified, the definition of mana should also be modified. Namely, the phase space point operator, instead of (7), will be equal to
$\tilde{A}_{\vec{a}}=\mathcal{W}A_{\vec{a}}\mathcal{W}^{\dagger},$ (18)
which also modifies the definition of Wigner function:
$\begin{array}[]{r}\tilde{W}_{\rho}(\vec{a})=\frac{1}{d^{n}}\text{Tr}\
\rho\tilde{A}_{\vec{a}}=\frac{1}{d^{n}}\text{Tr}\
\rho\mathcal{W}A_{\vec{a}}\mathcal{W}^{\dagger}=\\\ \\\
=\frac{1}{d^{n}}\text{Tr}\
\mathcal{W}^{\dagger}\rho\mathcal{W}A_{\vec{a}}=W_{\mathcal{W}^{\dagger}\rho\mathcal{W}}(\vec{a}).\end{array}$
(19)
## 5 Knot states
Let us discuss applications of the generalized mana definition to the knot
theory. To define mana for knots, one should first define some quantum states,
related to knots. This can be done as follows, based on the papers by Atiyah
[20] and Witten [21]. From the physics perspective knot theory is a three-
dimensional Chern-Simons theory with action
$S=\frac{k}{4\pi}\int d^{3}x\left(\mathcal{A}\wedge
d\mathcal{A}+\frac{2}{3}\mathcal{A}\wedge\mathcal{A}\wedge\mathcal{A}\right),$
(20)
where $k$ is an integer, called level of the Chern-Simons theory. Knot
polynomials are equal to the Wilson-loop averages of the Chern-Simons theory:
$J^{\mathcal{K}}_{r}(q)=\left<\text{Tr}_{r}\oint\limits_{\mathcal{K}}d\vec{x}\vec{\mathcal{A}}\right>.$
(21)
The polynomial depends on the gauge group of the Chern-Simons theory, its level $k$, the representation $r$ of the gauge group and the contour of integration $\mathcal{K}$. In what follows, for the sake of simplicity, we will speak only of the $SU(2)$ group, when the knot polynomials are Jones polynomials. The contour of integration is a closed curve in three-dimensional space, i.e. a knot. Finally, the answer for the Wilson-loop average happens to be a Laurent polynomial in the variable $q$, related to the level of the Chern-Simons theory:
$q=e^{\cfrac{\pi i}{k+2}}.$ (22)
Jones polynomials are in fact related to the representations of the quantum group $U_{q}(sl(2))$, rather than the $SU(2)$ group. When $k$ is an integer, $q$ is a root of unity and there is a finite number of highest-weight representations of the $U_{q}(sl(2))$ quantum group. Namely, for even $k$ there are only representations corresponding to spin $m/2$ for $m\leq k$ [22].
This allows us to define a basis of representations for the knot states.
Namely, the state function can be defined as follows:
$|\mathcal{K}>=\sum\limits_{j=0}^{k}J_{j}|j>.$ (23)
Using such a wave function one can also define a density matrix:
$\rho_{\mathcal{K}}=|\mathcal{K}><\mathcal{K}|,$ (24)
and study mana for such states. In the present paper we will discuss only two-strand knots, whose polynomials can be calculated using the following formula [11]:
$J^{T[2,n]}_{j}=\sum\limits_{i=0}^{j}\frac{q^{2j-2i+1}-q^{-2j-2i-1}}{q^{j+1}-q^{-j-1}}\left((-1)^{i}q^{i^{2}-2ji-i+j}\right)^{n},$
(25)
where $n$ is the number of twists in the two-strand braid. For odd $n$ the closure of the braid produces a knot, while for even $n$ the result is a link.
## 6 Mana for the Knot states
The mana for two-strand knots is periodic in the parameter $n$ with period equal to $k+2$, but besides that the mana for many knots coincides. In Fig. 1 the mana for $k=2$ is displayed. Knots appear only when $n$ is an odd integer in (25). However, using formula (25), one can extend this definition to other values of $n$, which gives the graph in Fig. 2. This is purely an analytical continuation of the results for knots to arbitrary $n$ by using (25).
Figure 1: Mana for two-strand knots for $k=2$ in the Clifford group basis.
Figure 2: Mana for two-strand knots for $k=2$ with continuation to arbitrary real $n$ in the Clifford group basis.
Similarly, the mana for $k=3$ (see Fig. 3) and $k=4$ (see Fig. 4) can be produced.
Figure 3: Mana for two-strand knots for $k=3$ in the Clifford group basis.
Figure 4: Mana for two-strand knots for $k=4$ in the Clifford group basis.
For $k=2$, as can be seen from Fig. 1, there are two distinct values of mana (and in fact only two different density matrices). By changing the basis in which we calculate mana, one can make the mana for one half of the 2-strand knots equal to zero, while making the mana for the other set nonzero. This in fact can be done not only for $k=2$, but also for other cases. This means that by changing the basis one can make different sets of knot states effectively calculable on a classical computer.
A specific property of knot states is that they are pure, i.e. $\rho$ is a product of ket and bra vectors (24). The state for the unknot ($n=1$ in (25)) is defined by the vector $v=\frac{1}{\sqrt{d}}[1,1,..,1]$ and $\rho=v^{\dagger}\times v$ for any $k$, and it always has mana equal to zero. By transitivity of the unitary group it is always possible to rotate any other knot state to the same vector, $v=Uv_{1}$. This immediately gives a rotated density matrix or, in other words, a new basis for the Clifford group, $\rho=Uv_{1}^{\dagger}\times v_{1}U^{\dagger}=v^{\dagger}\times v$, with zero mana.
In the $k=2$ case the only distinguished state besides the unknot is the trefoil knot ($n=3$ in (25)) with state vector $v_{1}=\frac{1}{\sqrt{3}}[1,-1,1]$. The easiest way to rotate it to the unknot state vector $v$ is by using the unitary matrix
$S=\left(\begin{array}[]{ccc}1&0&0\\\ 0&0&1\\\ 0&-1&0\end{array}\right)$ (26)
Of course this matrix is not unique, due to the stability subgroup SU(2) and the U(1) ambiguity in the definition of $v$ or $v_{1}$. But this freedom is not enough to
rotate both $v$ and $v_{1}$ to zero-mana states.
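A small self-contained sketch of this observation for $k=2$ is given below: it evaluates the mana of the unknot and trefoil states directly from (7)-(10), and then the mana of the density matrices conjugated by $S$ from (26), which by (19) corresponds to the mana in the rotated basis; the roles of the two states swap, but they are never both zero.

```python
import numpy as np

d = 3                                           # k = 2 gives three representations, j = 0, 1, 2
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)

def T(a, ap):
    # The multiplicative inverse of 2 mod 3 is 2, hence the exponent -2*a*ap.
    return omega ** (-2 * a * ap) * np.linalg.matrix_power(Z, a) @ np.linalg.matrix_power(X, ap)

T_sum = sum(T(a, ap) for a in range(d) for ap in range(d))
A_ops = [T(a, ap) @ T_sum @ T(a, ap).conj().T / d for a in range(d) for ap in range(d)]

def mana(rho):
    W = np.array([np.trace(rho @ A).real / d for A in A_ops])
    return np.log(np.abs(W).sum())

v_unknot = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)      # unknot state for k = 2
v_trefoil = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)    # trefoil state for k = 2
S = np.array([[1.0, 0, 0], [0, 0, 1.0], [0, -1.0, 0]]) # rotation (26); S v_trefoil = v_unknot

rho_u, rho_t = np.outer(v_unknot, v_unknot), np.outer(v_trefoil, v_trefoil)

print(round(mana(rho_u), 6), round(mana(rho_t), 4))                       # zero for the unknot, positive for the trefoil
print(round(mana(S @ rho_t @ S.T), 6), round(mana(S @ rho_u @ S.T), 4))   # the roles swap in the rotated basis
```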
Figure 5: Mana for two-strand knots for $k=2$ in the basis rotated by matrix $S$ from (26). Figure 6: Mana for two-strand knots for $k=3$ in the basis rotated by matrix $S$ from (27).
The $k=3$ case can be analytically solved in a similar way; the rotation matrix between $v$ and $v_{1}$ can be found, for instance, in the following way (rounded answer)
$S=\left(\begin{array}[]{cccc}0.07+0.5i\phantom{0}&-0.7-0.5i&0&0\\\
0.7-0.5i\phantom{0}&0.07-0.5i\phantom{0}&0&0\\\
0&0&0.07-0.5i\phantom{0}&0.7-0.5i\\\
0&0&-0.7-0.5i\phantom{0}&0.07+0.5i\end{array}\right)$ (27)
Thus we illustrated for $k=2$ and $k=3$ that in different bases different knot states can have zero mana, but only separately. There is no basis in which two or more different knot states have zero mana simultaneously. That in fact shows that even if the absolute value of mana for a state is meaningless (it can always be set to zero), the set of values of mana for different states has an invariant property: they cannot all be set to zero together, and thus are not simultaneously calculable on a classical computer.
## 7 Conclusion
In this paper we described how the definition of magic and mana can be changed, while keeping their main property: the finiteness of the underlying group, which ensures the classical computability of the corresponding calculations. This can be done by rotating either the basis of the Clifford group or the density matrix, which is in principle the same.
We applied these principles to the study of the mana of knot states. Knots have deep connections with quantum computations through the Reshetikhin-Turaev formalism, which relies on unitary matrices and thus becomes a natural task for a quantum computer. Another important connection is the topological quantum computer, which in turn relies on knots for its basic algorithms.
We studied the mana of 2-strand knots, and we also managed to find rotation matrices which allow one to change which knots have zero mana and thus are classically calculable.
The goal of this paper was to make it clear that, while mana is an interesting
property of the quantum state, which should measure its “classicality”, its
definition is in fact very subjective. It relies on the specific finite
subgroup of the $SU(N)$ group. However, one can define a different finite subgroup, which also changes the mana definition. This new mana also in fact
measures classicality, but in a different basis.
The Clifford group is distinguished only by its relation to the $X$ and $Z$ operators (1), which has historical significance but is not always connected to the real quantum systems used to build quantum computers. In fact, for many quantum computers there is a certain freedom to choose different basic operations, i.e. quantum gates. For example, for a topological quantum computer the natural choice of operations is the $\mathcal{R}$-matrices, which have no relation whatsoever to the $X$ and $Z$ operators. Thus it is important to understand which states are “classical” in relation to the exact choice of basic operations.
We showed, using the example of knot states, that different knot states can have zero mana in different bases. This in fact means that mana is a relative rather than absolute property. Calculations on a quantum computer (or on a classical reversible computer, which is its classical counterpart) are made by using unitary matrices (or permutation matrices in the classical case), which transform one state of the quantum system into another. If these two states can have zero mana simultaneously in some basis, then the corresponding calculations can be effectively made on a classical computer. Thus the mana definition should be chosen specifically for the problem we are trying to study.
While in this paper we used knot states purely as an example to show that mana
can be defined differently, it is very interesting to study the dependence of
mana in different bases on the knot, as well as the mana of states related to other knot polynomials. This, however, still remains to be studied.
## Acknowledgements
We are grateful for very useful discussions to N. Kolganov.
This work was supported by Russian Science Foundation grant No 18-71-10073.
## References
* [1] C.D. White, C. Cao, B. Swingle, _“Conformal field theories are magical”_ , Phys. Rev. B 103, 075145 (2021), arXiv:2007.01303
* [2] J.R. Fliss, _“Knots, links, and long-range magic”_ , JHEP 04 (2021) 090, arXiv:2011.01962
* [3] K. Goto, T. Nasaka, M. Nozaki, _“Chaos by Magic”_ , arXiv:2112.14593
* [4] D. Gottesman, _“Theory of fault-tolerant quantum computation”_ , Physical Review A. 57 (1): 127–137, arXiv:quant-ph/9702029
* [5] D. Gottesman, _“The Heisenberg Representation of Quantum Computers”_ , Group22: Proceedings of the XXII International Colloquium on Group Theoretical Methods in Physics, eds. S. P. Corney, R. Delbourgo, and P. D. Jarvis, pp. 32-43 (Cambridge, MA, International Press, 1999), arXiv:quant-ph/9807006v1
* [6] V.G. Turaev, _“The Yang-Baxter equation and invariants of links”_ , Invent.Math. 92 (1988) 527-553
* [7] N.Yu. Reshetikhin, and V.G. Turaev, _“Chern-Simons Theory in the Temporal Gauge and Knot Invariants through the Universal Quantum R-Matrix”_ , Commun.Math.Phys. 127 (1990) 1-26
* [8] N. Reshetikhin, V.G. Turaev, _“Invariants of three manifolds via link polynomials and quantum groups”_ , Invent.Math. 103 (1991) 547-597
* [9] A. Morozov, and A. Smirnov, _“Chern-Simons Theory in the Temporal Gauge and Knot Invariants through the Universal Quantum R-Matrix”_ , Nucl.Phys. B835 (2010) 284-313, arXiv:1001.2003
* [10] A. Smirnov, _“Notes on Chern-Simons Theory in the Temporal Gauge”_ , The Subnuclear Series: Volume 47, The Most Unexpected at LHC and the Status of High Energy Frontier, pp. 489-498 (2011), arXiv:0910.5011
* [11] A. Mironov, A. Morozov and An. Morozov, _“Character expansion for HOMFLY polynomials. II. Fundamental representation. Up to five strands in braid”_ , JHEP 03 (2012) 034, arXiv:1112.2654
* [12] A. Mironov, A. Morozov and An. Morozov, _“On Hopf-induced Deformation of Topological Locus”_ , Pis’ma v ZhETF, vol. 107, iss. 11, pp. 759 – 760, arXiv:1804.10231
* [13] An. Morozov and A. Sleptsov, _“New symmetries for the $U_{q}(sl_{N})$ 6-j symbols from the Eigenvalue conjecture ”_, Pis’ma v ZhETF, vol. 108, iss. 10, pp. 721 – 722, arXiv:1905.01876
* [14] C. Nayak, S. H. Simon, A. Stern, M. Freedman, S. Das Sarma, _“Non-Abelian Anyons and Topological Quantum Computation”_ , Rev. Mod. Phys. 80, 1083 (2008), arXiv:0707.1889
* [15] Yu. Makhlin, S. Backens, A. Shnirman, _“Two-qubit operation on Majorana qubits in ordinary-qubit chains”_ , Pis’ma v ZhETF, vol. 108, iss. 11, pp. 779 – 780,
* [16] L. H. Kauffman, S. J. Lomonaco Jr., _“Braiding Operators are Universal Quantum Gates”_ , New Journal of Physics,4(2002) 73.1-18;6(2004) 134.1-40, quant-ph/0401090
* [17] D. Melnikov, A. Mironov, S. Mironov, A. Morozov, An. Morozov, _“Towards topological quantum computer”_ , Nucl.Phys. B926 (2018) 491-508, arXiv:1703.00431
* [18] N.Kolganov and An. Morozov, _“On Hopf-induced Deformation of Topological Locus”_ , Pis’ma v ZhETF, vol. 111, iss. 9-10, pp. 623 – 624, arXiv:2004.07764
* [19] N. Kolganov, S. Mironov and An. Morozov, _“Large k topological quantum computer”_ , arXiv:2105.03980
* [20] M. Atiyah, _“Topological quantum field theories”_ , Publications Mathematiques de l’Institut des Hautes Etudes Scientifiques 68 (1988) no. 1, 175–186
* [21] E. Witten, _“Quantum field theory and the Jones polynomial”_ , Commun. Math. Phys.121(1989) 351-399,
* [22] B. Abdesselam, D. Arnaudon, A. Chakrabarti, _“Representations of $U_{q}(sl(N))$ at Roots of Unity”_, J. Phys. A: Math. Gen. 28 5495 (1995), arXiv:q-alg/9504006
The authors are with the Department of Electrical Communication Engineering, Indian Institute of Science, Bangalore, India. <EMAIL_ADDRESS>
# Dr-COVID: Graph Neural Networks for SARS-CoV-2 Drug Repurposing
Siddhant Doshi and Sundeep Prabhakar Chepuri
###### Abstract
The _2019 novel coronavirus (SARS-CoV-2)_ pandemic has resulted in more than a
million deaths, high morbidities, and economic distress worldwide. There is an
urgent need to identify medications that would treat and prevent novel
diseases like the 2019 coronavirus disease (COVID-19). Drug repurposing is a
promising strategy to discover new medical indications of the existing
approved drugs due to several advantages in terms of the costs, safety
factors, and quick results compared to new drug design and discovery. In this
work, we explore computational data-driven methods for drug repurposing and
propose a dedicated graph neural network (GNN) based drug repurposing model,
called Dr-COVID. Although we analyze the predicted drugs in detail for
COVID-19, the model is generic and can be used for any novel diseases. We
construct a four-layered heterogeneous graph to model the complex interactions
between drugs, diseases, genes, and anatomies. We pose drug repurposing as a
link prediction problem. Specifically, we design an encoder based on the
scalable inceptive graph neural network (SIGN) to generate embeddings for all
the nodes in the four-layered graph and propose a quadratic norm scorer as a
decoder to predict treatment for a disease. We provide a detailed analysis of
the 150 potential drugs (such as _Dexamethasone_ , _Ivermectin_) predicted by
Dr-COVID for COVID-19 from different pharmacological classes (e.g.,
corticosteroids, antivirals, antiparasitic). Out of these 150 drugs, 46 drugs
are currently in clinical trials. Dr-COVID is evaluated in terms of its
prediction performance and its ability to rank the known treatment drugs for
diseases as high as possible. For a majority of the diseases, Dr-COVID ranks
the actual treatment drug in the top 15.
COVID-19; computational pharmacology; drug repurposing; graph neural network;
machine learning; SARS-CoV-2
## I Introduction
The dreadful pandemic outbreak of the coronavirus disease 2019 (COVID-19) has
affected about 56 million people with more than a million deaths worldwide as
of November 2020. The June 2020 Global Economic Prospects [1] estimated a
$5.2$% decline in the global gross domestic product (GDP) in 2020 that would lead to the worst economic slowdown since the Second World War. The
disease affects mammals’ respiratory tract and shows symptoms similar to
pneumonia, causing mild to severe respiratory tract infections [2]. The
pathogen that causes COVID-19 belongs to the _Coronaviridae_ family, which is
a family of enveloped positive-strand RNA viruses that affect mammals, birds,
and amphibians. The name coronavirus (CoV) derives from the crown-shaped spikes that project from the viral surface. Coronaviruses are majorly
grouped into four genera: _alphacoronavirus_ , _betacoronavirus_ ,
_deltacoronavirus_ , and _gammacoronavirus_. While _deltacoronaviruses_ and
_gammacoronaviruses_ infect birds, _alphacoronaviruses_ and
_betacoronaviruses_ infect mammals [3]. Out of the seven known strains of
human CoVs (HCoVs), the three _betacoronaviruses_ , namely, _middle east
respiratory syndrome coronavirus (MERS-CoV)_ , _severe acute respiratory
syndrome coronavirus (SARS-CoV)_ , and the _novel severe acute respiratory
syndrome coronavirus (SARS-CoV-2)_ produce severe symptoms. In the past two
decades, the world witnessed highly fatal _MERS-CoV_ and _SARS-CoV_ that led
to global epidemics with high mortality. Although the 2003 _SARS-CoV_ outbreak
was controlled, it infected 8098 individuals and resulted in 774 deaths. As of
November 2019, 2494 cases and 855 deaths were reported due to _MERS-CoV_ ,
with the majority in Saudi Arabia [3]. In December 2019, similar cases were
again reported in Wuhan City, China [4], wherein investigations confirmed it
to be the third novel CoV, i.e., _SARS-CoV-2_ , which is also referred to as
_HCoV-2019_ , _2019-nCoV_ , or colloquially simply as coronavirus [5]. With _SARS-CoV-2_ being highly contagious, on 30 January 2020 the World Health Organization (WHO) declared it a public health emergency of international concern, warning all countries with vulnerable health care systems [6].
The current treatment for COVID-19 is completely supportive and symptomatic as
there are no specific known medicines. Several research groups around the
world are trying to develop a vaccine that would prevent and treat _SARS-
CoV-2_. Looking at the current unpredictable trajectory of how the disease
spreads and the life cycle of the virus, there is an urgent need to develop
preventive strategies against it. Given this strict timeline, a more realistic
solution lies in drug repurposing or drug repositioning, which aims to
identify new medical indications of approved drugs. Drug repurposing offers
several advantages. It has a low risk of failure as the drug has already been
approved with less unknown harmful adverse effects. It reduces the time frame
for drug development as the drugs have passed all the pre-clinical trials and
safety norms. Finally, compared to the discovery of a new drug, drug
repurposing requires less economic investment and puts fewer lives of
volunteers (particularly kids) involved in clinical trials at risk [7]. Some
of the examples of repurposed drugs are Sildenafil, which was initially
developed as an antihypertensive drug was proved effective in treating
erectile dysfunction by Pfizer [7], and Rituximab that was originally used
against cancer was proved to be effective against rheumatoid arthritis [7], to
name a few. Even for COVID-19, drugs like Remdesivir (a drug for treating
Ebola virus disease), Chloroquine/Hydroxychloroquine (antimalarial drugs),
Dexamethasone (anti-inflammatory drugs) are being repurposed and are under
clinical trials as per the International Clinical Trials Registry Platform
(ICTRP), which is a common platform maintained by WHO to track the clinical
trial studies across the world.
Drug repurposing involves identifying potential drugs and monitoring their _in
vivo_ efficacy and potency against the disease. The most critical step in this
pipeline is identifying the right candidate drugs, for which experimental and
computational approaches are usually considered. To identify potential drugs
experimentally, a variety of chromatographic and spectroscopic techniques are
available for target-based drug discovery. Phenotype screening is used as an
alternative to target-based drug discovery when the identity of the specific
drug target and its role in the disease are not known [7]. Recently,
computational approaches are receiving attention due to the availability of
large biological data. Efficient ways to handle big data have opened up many opportunities in the field of pharmacology. Zitnik et al. [8] elaborate on several data-driven computational tools to integrate large volumes of
heterogeneous data and solve problems in pharmacology such as drug-target
interaction prediction (identify interactions between a drug and its target
genes), drug repurposing, and drug-drug interaction or side effect prediction,
to list a few. Hence this field is known as computational pharmacology. Many
standard machine learning (ML) and deep learning (DL) techniques have been
applied in computational pharmacology. Drug-drug interaction was formulated as
a binary classification problem and solved using ML techniques like random
forest, support vector machines (SVM), and naive Bayes [9], and using DL
models like deep multi-layer perceptrons and recurrent neural networks, to
name a few. DL techniques often outperform standard ML techniques [10, 11].
However, these methods lack the ability to capture the structural information
in the data, specifically the connections between different biological
entities (e.g., interactions between drugs and genes or between drugs and
diseases). A natural and efficient way to represent such structural
information is to construct a graph with nodes representing entities like
drugs, genes, diseases, etc., and edges representing the complex interactions
between these entities. Graph neural networks (GNNs) capture the structural
information by accounting for the underlying graph structure while processing
the data. Decagon, a GNN-based model designed for predicting the side effects of a pair of drugs, has proved its capability by outperforming non-graph-based machine learning models in terms of its prediction performance [12].
Similarly, drug repurposing has been studied using computational methods such
as signature matching methods, molecular docking, and network-based
approaches. Recently, network-based and machine learning approaches [13, 14,
15, 16, 17], and GNN based approaches [18] and [19] have been proposed for
drug repurposing.
In this work, we propose a GNN architecture for COVID-19 drug repurposing
called Dr-COVID, which is a dedicated model for drug repurposing. We formulate
our problem by constructing a four-layered heterogeneous graph comprising
drugs, genes, diseases, and anatomies. We then build a deep learning model to
predict the links between the drug and disease entities, where a link between
a drug-disease entity suggests that the drug treats the disease. Specifically,
Dr-COVID is based on the scalable inceptive graph neural network (SIGN)
architecture [20] for generating the node embeddings of the entities. We
propose a quadratic norm scoring function that rank orders the predicted
drugs. All the network information and node features are derived from the drug
repurposing knowledge graph (DRKG) [21]. DRKG is a biological knowledge graph
compiled using several databases, and comprises entities like drugs, diseases,
anatomies, etc., and their connections. We leverage their generic set of low-
dimensional embeddings that represent the graph nodes and edges in the
Euclidean space for training. We validate Dr-COVID’s performance on the known
drug-disease pairs. Although we present the results and analysis for COVID-19,
Dr-COVID is generic and is useful for any novel human disease. From a list of
150 drugs predicted by Dr-COVID for _SARS-CoV-2_ , 46 drugs are currently in
clinical trials. For a majority of diseases with known treatment, the proposed
Dr-COVID model ranks the approved treatment drugs in the top 15, which
suggests the efficacy of the proposed drug repurposing model. As we use the
SIGN architecture, which performs many of the graph computations beforehand, Dr-COVID is computationally efficient compared to other GNN-based methods [18, 19].
Specifically, in contrast to [18] we include additional entities such as
anatomies as the side information in our graph. This additional information
provides indirect interactions between the disease and gene entities. The norm
scorer we design captures correlations between the drug and disease pairs, and
as a consequence, the model predicts many more drugs (e.g., _Brexanolone_)
that are in clinical trials as compared to the existing GNN-based and network-
based drug repurposing models.
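To make the two ingredients named above concrete, the following sketch outlines a SIGN-style encoder (precomputing diffused features $S^{r}X$ once with a normalized adjacency) followed by a simple norm-based pair scorer on a toy homogeneous graph. The graph sizes, feature dimensions, number of hops, and the exact form of the scorer are illustrative assumptions and not the precise choices made in Dr-COVID:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy homogeneous stand-in for the four-layered heterogeneous graph: adjacency A, node features X.
num_nodes, feat_dim, emb_dim, P = 50, 16, 8, 3
A = (rng.random((num_nodes, num_nodes)) < 0.1).astype(float)
A = np.maximum(A, A.T)                                   # make the toy graph undirected
np.fill_diagonal(A, 0.0)
X = rng.standard_normal((num_nodes, feat_dim))

# SIGN-style precomputation: diffused features S^r X for r = 0..P, with S the
# symmetrically normalized adjacency; these products are computed once, before training.
deg = A.sum(1) + 1e-9
S = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]
diffused = [X]
for _ in range(P):
    diffused.append(S @ diffused[-1])

# Encoder: one linear map per diffusion scale, concatenated and passed through a nonlinearity.
thetas = [rng.standard_normal((feat_dim, emb_dim)) / np.sqrt(feat_dim) for _ in range(P + 1)]
H = np.tanh(np.concatenate([Xr @ th for Xr, th in zip(diffused, thetas)], axis=1))

# Decoder: an illustrative norm-based scorer for a (drug, disease) node pair; under this
# stand-in, a higher score suggests a "treats" link.  The actual Dr-COVID scorer may differ.
W_score = rng.standard_normal((emb_dim, 2 * emb_dim * (P + 1)))

def score(drug_node, disease_node):
    z = np.concatenate([H[drug_node], H[disease_node]])
    return -np.linalg.norm(W_score @ z) ** 2

print(H.shape, round(score(0, 1), 3))
```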
## II Results and discussion
In this section, we present the drugs predicted by Dr-COVID for COVID-19
according to their pharmacological classifications, and elaborate on their
roles in treating the disease. We individually predict drugs for the 27
entities that specify the _SARS-CoV-2_ genome structure as identified by
Gordon et al. [22]. This genome structure includes structural proteins,
namely, envelope (_SARS-CoV2-E_), membrane (_SARS-CoV2-M_), nucleocapsid
(_SARS-CoV2-N_), surface (_SARS-CoV2-spike_) proteins, 15 non-structural
proteins (nsp), and open reading frames (orf) that encode the accessory
proteins. We also predict drugs for 6 diseases related to CoV, namely, _SARS-
CoV_ , _Avian infectious bronchitis virus (IBV)_ , _MERS-CoV_ , _CoV-229E_ ,
_CoV-NL63_ , and _Murine coronavirus (MHV)_. We choose the top 10 ranked
predicted drugs for all these disease targets, combine them, and present them
as a single list of 150 drugs (after removing the duplicate entries). We refer
to these 33 (i.e., 27 entities related to the _SARS-CoV-2_ genome structure
and 6 CoV diseases) as COVID-19 nodes. Out of these 150 drugs, 46 drugs are in
clinical trials in different phases. We provide the predicted scores of all
the drugs for all the COVID-19 nodes using Dr-COVID in our GitHub repository. The software to reproduce the results is available at:
https://github.com/siddhant-doshi/Dr-COVID
Fig 1 gives a heatmap indicating the ranks of these 150 drugs. It is a matrix
representation in which the drugs are listed on the vertical axis and COVID-19
nodes on the horizontal axis. All the 150 drugs are grouped based on their
first-level anatomical therapeutic chemical (ATC) codes as indicated on the
left side. A colored patch in the heatmap indicates the rank of a drug for a
disease. The darker the patch, the better is the rank, as indicated by the
rank bar on the right side. As can be seen, a major portion of the heatmap is
covered with dark patches as we only consider the top 10 ranked drugs. We can
infer from the heatmap that cardiovascular drugs (e.g., _Captopril_ ,
_Atenolol_) and anti-inflammatory drugs (e.g., _Celecoxib_ , _Prednisone_) are
ranked high for the _alphacoronaviruses_ , and a combination of antiparasitic
(e.g., _Ivermectin_), corticosteroids (e.g., _Prednisolone_ ,
_Dexamethasone_), antivirals (e.g., _Cidofovir_), and antineoplastic drugs
(e.g., _Methotrexate_ , _Sirolimus_) in the case of _betacoronaviruses_.
Figure 1: Rank heatmap of the predicted drugs with their corresponding ATC
labels.
Fig 2 lists the drugs predicted by Dr-COVID, grouped based on their first
level ATC codes such as antiparasitic (P), respiratory system (R), and so on,
whereas Fig 1 emphasizes the ranks of these predicted drugs for the COVID-19
nodes. The majority of the corticosteroids we predict belong to the
respiratory system (R) class, which has been the primary target for the
coronaviruses, as reflected by the symptoms. However, COVID-19 has a multi-
organ impact on the human body and is not limited to the respiratory system
[23]. Complications due to the cytokine storm with the effects of angiotensin
converting enzyme (ACE) have led to cardiac arrest, kidney failure, and liver
damage, resulting in many deaths. For these reasons, we see that drugs from various ATC classes are being considered for clinical trials. Next, we discuss in
detail these pharmacological classifications of some of the predicted drugs.
Figure 2: Predicted drugs for COVID, categorized based on their ATC labels.
Anti-inflammatory (AI) agents: Inflammatory cytokine storms are prominently
evident in COVID-19 positive patients and timely anti-inflammation treatment
is required [24]. Pneumonia caused by the coronavirus results in a huge amount
of inflammatory cell infiltration leading to acute respiratory distress
syndrome (ARDS), causing many deaths [25, 26]. A wide range of anti-
inflammatory treatments including glucocorticoids, non-steroidal anti-
inflammatory drugs (NSAIDs), immunosuppressants, inflammatory cytokines
antagonists like tumor necrosis factor (TNF) inhibitors, Janus kinase (JAK)
inhibitors, and interleukin-1-receptor antagonist (IL-1RA) are being
considered for COVID-19. Our model predicts, in the top 10, steroids like
_Dexamethasone_ , _Hydrocortisone_ , _Methylprednisolone_ ; NSAIDs like
_Ibuprofen_ , _Aspirin (Acetylsalicylic acid)_ ; immunosuppressants like
_Sirolimus_ and _Methotrexate_ ; IL-1RA _Anakinra_ ; CoX2 inhibitor
_Celecoxib_. These drugs are currently undergoing clinical trials. Some of the
other corticosteroids like _Betamethasone_ , _Prednisone_ , and TNF inhibitor
_Certolizumab pegol_ were also ranked high.
Antiviral and anti-parasitic agents: Dr-COVID predicts nucleotide analogue
antivirals like _Acyclovir_ , _Valaciclovir_ , _Cidofovir_ , and _Entecavir_
that have shown positive results in terminating the RNA synthesis catalyzed by
polymerases of coronaviruses [27]. _Ivermectin_ and _Nitazoxanide_ are used
against many parasite infestations and are also known to have antiviral
properties. _Mebendazole_ is another similar anti-parasitic drug that Dr-COVID
ranked high. One of the recent reports shows that _Ivermectin_ is an effective
inhibitor of the _SARS-CoV-2_ and many other positive single-stranded RNA
viruses. A 5000-fold reduction in the virus titer within 48 hours in cell
culture was obtained with a single treatment (5$\mu M$) of _Ivermectin_ [28].
Statins and ACE inhibitors/ beta-blockers/ calcium channel blockers: Statins
are lipid-lowering drugs that inhibit the cholesterol synthesis enzyme (also
known as HMG-CoA reductase), which also has anti-inflammatory properties.
There have been implications of lipid metabolism in the _SARS-CoV-2_
pathogenesis [29], due to which there are reports on including statins in the
line of treatment for COVID-19. Dr-COVID predicts _Atorvastatin_ ,
_Simvastatin_ , and _Rosuvastatin_ , where all the three drugs are currently
in clinical trials. On the contrary, some studies show that statins tend to increase the cellular expression of ACE2 [30], to which the _SARS-CoV2-spike_ protein binds at the entry level in humans [31]. Analyzing this
issue, an observational study by Zhang et al. [32] reported a reduced mortality
rate in the patients treated with statins and no adverse effect was observed
by adding an ACE inhibitor drug also to the line of treatment. These ACE
inhibitors are cardiovascular drugs causing relaxation of blood vessels that
are primarily used to treat high blood pressure and heart failure. Beta-
adrenergic and calcium channel blockers are other similar functioning drugs
that lower blood pressure, are also currently considered to treat COVID-19.
Dr-COVID predicts _Captopril_ (ACE inhibitor), _Atenolol_ (beta-blocker) and
_Nifedipine_ (calcium channel blocker), which are currently in clinical
trials. Additionally, the list of predicted drugs includes _Spironolactone_
and _Hydrochlorothiazide_ , which help prevent the body from absorbing too much salt, thereby lowering blood pressure and helping to avoid cardiac failure.
Miscellaneous: Dr-COVID also predicts some of the pre-discovered vaccines such
as the _Rubella virus vaccine_ , which is mainly considered for all the
healthcare workers, the _Yellow fever vaccine_ [33], and the _Ebola zaire
virus vaccine (rVSV-ZEBOV)_. Further, we also have _Mercaptopurine_ , an
antineoplastic agent that has been considered as a selective inhibitor of
_SARS-CoV_ [34] in the list of predicted drugs. The antidepressant _Brexanolone_ , which is currently considered for patients on ventilator support due to ARDS,
vasodilators _Nitroglycerine_ and _Alprostadil_ , nutritional supplements like
_Riboflavin_ (Vitamin B2) [35], _Niacin_ , _Cholecalciferol_ (Vitamin D3), and
_Iron_ are some more top-ranked drugs. Interestingly, _Ephedra sinica root_ , a herb generally used to treat asthma and lung congestion and an ingredient of lung cleansing and detoxifying decoction (LCDD), a widely used traditional Chinese medicine [36], is also one of the drugs predicted in our list.
In essence, Dr-COVID predicts drugs for COVID-19 from different
pharmacological classes like the corticosteroids, antivirals, antiparasitic,
NSAIDs, and cardiovascular drugs, as the disease does not target a particular anatomy and impacts multiple organs in the human body.
## III Dataset
In this section, we describe the dataset that we use to train and test Dr-
COVID for COVID-19 drug repurposing. We also describe how we model the data as
a multilayer graph to capture the underlying complex interactions between
different biological entities. We derive the required information from DRKG,
which is a comprehensive biological knowledge graph relating genes, drugs,
diseases, biological processes, side effects, and eight other entities
useful for computational pharmacological tasks like drug repurposing, drug
discovery, and drug adverse effect prediction, to list a few. DRKG gathers all
this information from six databases, namely, Drugbank [37], Hetionet [38],
GNBR [39], STRING [40], IntAct [41], and DGIdb [42]. From DRKG, we consider
four entities that are relevant to the drug repurposing task. The four
entities are drugs (e.g., _Dexamethasone_ , _Sirolimus_), diseases (e.g.,
_Scabies_ , _Asthma_), anatomies (e.g., _Bronchus_ , _Trachea_), and genes
(e.g., _Gene ID: 8446_ , _Gene ID: 5529_). All the genes are referred to by
their respective Entrez IDs throughout the paper. We extract the details about
these entities specifically from the Drugbank, Hetionet, and GNBR databases.
We form a four-layered heterogeneous graph with these four entities in each
layer as illustrated in Fig 3a. The four-layered graph is composed of 8070
drugs, 4166 diseases, 29848 genes, 400 anatomies, and a total of 1,417,624
links, which include all the inter-layer and intra-layer connections. Next, we
discuss the interactome that we consider for drug repurposing.
Figure 3: (a) Four-layered heterogeneous graph illustrating the inter-layer
and intra-layer connections. (b), (c) and (d) Subgraph centered around the
drugs _Dexamethasone_ , _Ivermectin_ and _Simvastatin_ , respectively.
Interactome: There are inter-layered connections between the four layers, and some layers also have intra-layered connections. The inter-layered connections are of
different types. The drug-disease links indicate treatment or palliation,
i.e., a drug treats or has a relieving effect on a disease. For example,
interaction between _Ivermectin-Scabies_ (as seen in Fig 3b) and _Simvastatin-
Hyperlipidemia_ (as seen in Fig 3d) are of type treatment, whereas _Atropine-
Parkinson’s disease_ and _Diclofenac-Osteoarthritis_ are of type palliation.
The drug-gene and disease-gene links are the direct gene targets of the
compound and the disease, respectively. _Gene ID: 4306_ , _Gene ID: 387_ ,
_Gene ID: 1786_ are some of the targets of the drug _Dexamethasone_ (see Fig
3b) and _Gene ID: 5509_ , _Gene ID: 859_ are target genes of the disease
_Malaria_. Some of the genes targeted by the drug (e.g., _Dexamethasone_ ,
_Ivermectin_ , _Simvastatin_) as well as by the _SARS-CoV-2_ virus (referred
to as shared genes) are shown in Fig 3 (b,c and d). These common gene targets
between a drug and a disease are one of the reasons for the drug to be a
potential repurposing candidate against the disease. The disease-anatomy and
gene-anatomy connections indicate how the diseases affect the anatomies and
interactions between the genes and anatomies. For example, _Gene ID: 2771_ and
_Gene ID: 3156_ belong to the _cardiac ventricle_ anatomy (see Fig 3d);
disease _Schizophrenia_ affects multiple anatomies like the _central nervous
system (CNS)_ and _optic tract_.
There are also intra-layered connections. The drug-drug and disease-disease
connections show the similarity between a pair of drugs and diseases,
respectively. The gene-gene links describe the interaction between genes
(e.g., epistasis, complementation) and form the whole gene interactome
network. This comprehensive gene network serves as a backbone for our model,
wherein we predict the unknown links between drugs and new diseases like
COVID-19 as they are connected through genes and anatomies. The anatomy
information helps in drug predictions by focusing on the local interactions of
genes related to the same anatomy as the genes targeted by the disease. Some
examples of the intra-layered connections are _Simvastatin_ -_Lovastatin_ and
_Gene ID: 23649_ -_Gene ID: 8480_ as seen in Fig 3d. While all these
interactions reveal the true relations between the entities, we also randomly
sample the no-drug-disease links, which give us negative control in the
learning process. For example, there is no link between _Simvastatin-Scabies_
, i.e., _Simvastatin_ is not known to treat or suppress the effects of
_Scabies_. Including such negative control in the training process makes our
model accurate and reliable.
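To make the graph construction and the negative control concrete, the snippet below sketches a toy version of the four-layered graph with layer-typed nodes, relation-typed edges, and randomly sampled no-drug-disease pairs. The `networkx` representation, the node names, and the relation labels are illustrative assumptions, not the actual DRKG preprocessing.

```python
import random
import networkx as nx

# Toy four-layered graph: the node attribute "layer" marks drug/disease/gene/anatomy.
G = nx.Graph()
G.add_node("Simvastatin", layer="drug")
G.add_node("Lovastatin", layer="drug")
G.add_node("Hyperlipidemia", layer="disease")
G.add_node("Scabies", layer="disease")
G.add_node("GENE:23649", layer="gene")
G.add_node("Cardiac ventricle", layer="anatomy")

# Inter-layer links (treatment, drug-gene target, gene-anatomy) and an
# intra-layer link (drug-drug similarity).
G.add_edge("Simvastatin", "Hyperlipidemia", relation="treats")
G.add_edge("Simvastatin", "GENE:23649", relation="targets")
G.add_edge("GENE:23649", "Cardiac ventricle", relation="expressed_in")
G.add_edge("Simvastatin", "Lovastatin", relation="similar_to")

# Negative control: sample drug-disease pairs with no known link.
drugs = [n for n, d in G.nodes(data=True) if d["layer"] == "drug"]
diseases = [n for n, d in G.nodes(data=True) if d["layer"] == "disease"]
negatives = [(c, d) for c in drugs for d in diseases if not G.has_edge(c, d)]
negative_sample = random.sample(negatives, k=min(2, len(negatives)))
print(negative_sample)  # e.g. [("Simvastatin", "Scabies"), ...]
```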
HCoV interactome network: To specialize the drug repurposing model Dr-COVID
for COVID-19, as discussed before, we consider the four known HCoVs, namely,
_SARS-CoV_ , _MERS-CoV_ , _CoV-229E_ and _CoV-NL63_ , and two non-human CoVs
namely _MHV_ , and _IBV_. We consider interactions of these disease nodes with
human genes. There are 129 links between these six disease nodes and the gene
nodes [21]. In addition, we consider all the 27 _SARS-CoV-2_ proteins
(including the structural proteins, nsp, and orf) and their 332 links
connecting the target human genes as given by Gordon et al. [22]. In other
words, there are only disease-gene interactions available for these COVID-19
nodes. With this available information, we train Dr-COVID to predict possible
drug connections for these COVID-19 nodes.
## IV Methods and Models
In the last few years, deep learning has gained significant attention from a
variety of scientific disciplines due to its extraordinary successes in
solving many challenging tasks like data cleansing, mining, and
classification, mainly for images, speech, or text datasets. However, in many
applications, the structure underlying data is not always Euclidean. Some
examples include social networks, transportation networks, brain networks,
sensor networks, chemical molecules, protein-protein interactions, meshed
surfaces in computer graphics, and the drug repurposing network, as discussed
above, to list a few. For these applications, more recently, deep learning for
graph-structured data, also known as geometric deep learning (GDL) [43], is
receiving steady research attention. GDL aims at building neural network
architectures known as graph neural network (GNNs) to learn from graph-
structured data. GDL models are used to learn low-dimensional graph
representations or node embeddings by taking into account the nodal
connectivity information. These embeddings are then used to solve many graph
analysis tasks like node classification, graph classification, and link
prediction, to list a few. GNN architectures are developed using concepts from
spectral graph theory and generalize the traditional convolution operation in
the convolutional neural network (CNN) to the graph setting. In this section,
we describe the proposed Dr-COVID architecture for COVID-19 drug repurposing
and describe numerical experiments performed to evaluate our model.
Consider an undirected graph $\cal{G}=(\cal{V},\cal{E})$ with a set of
vertices $\cal{V}=$ {$v_{1},v_{2},\cdots,v_{N}$} and edges $e_{ij}\in\cal{E}$
denoting a connection between nodes $v_{i}$ and $v_{j}$. We represent a graph
$\cal{G}$ using the adjacency matrix $\mathbf{A}\in\mathbb{R}^{N\times N}$,
where the $(i,j)$th entry of $\mathbf{A}$ denoted by $a_{ij}$ is $1$ if there
exists an edge between nodes $v_{i}$ and $v_{j}$, and $0$ otherwise. To
account for the non-uniformity in the degrees of the nodes, we use the
normalized adjacency matrix denoted by
$\tilde{\mathbf{A}}=\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$,
where $\mathbf{D}\in\mathbb{R}^{N\times N}$ is the diagonal degree matrix.
Each node in the graph is associated with its own feature vector (referred to
as input feature). Let us denote the input feature of node $i$ by
$\mathbf{x}_{i}^{(0)}\in\mathbb{R}^{d}$, which contains key information or
attributes of that node (e.g., individual drug side effects). Let
$\mathbf{X}^{(0)}\in\mathbb{R}^{N\times d}$ be the input feature matrix
associated with the $N$ nodes in the graph $\cal{G}$ obtained by stacking the
input features of all the nodes in $\cal{G}$. The new embedding for a node is
generated by combining information from its neighboring nodes (e.g., diseases
or genes) to account for the local interactions. This process of combining
information and generating new representations for a node is done by a single
GNN block. If we stack $K$ such blocks, we can incorporate information for a
node from its $K$-hop neighbors (e.g., in Fig 3c, the drug _Ivermectin_ is a
$2$-hop neighbor of the anatomy _Lung_ and is connected via _Gene ID: 8614_).
Mathematically, this operation can be represented as
$\displaystyle\mathbf{X}^{(k+1)}=g_{k}(\bar{\mathbf{A}}\mathbf{X}^{(k)}\mathbf{W}_{k}),$
(1)
where $\mathbf{X}^{(k)}\in\mathbb{R}^{N\times d_{k}}$ represents the $k$th
layer embedding matrix and $d_{k}$ is the embedding dimension in the $k$th
layer. Here, $\bar{\mathbf{A}}=\mathbf{I}+\tilde{\mathbf{A}}$, where the
identity matrix $\mathbf{I}\in\mathbb{R}^{N\times N}$ is added to account for
the self-node embeddings, $\textbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k+1}}$
is the learnable transformation matrix, and $g_{k}(\cdot)$ is the activation
function in the $k$th layer. There exist several GNN variants such as graph
convolutional networks (GCN) [44], GraphSAGE [45], graph attention networks
(GAT) [46] and scalable inception graph neural network (SIGN) [20], to name a
few. GCN is a vanilla flavored GNN based on Eq (1). GAT gives individual
attention to the neighboring nodes instead of treating every node equally. To
address the issue of scalability, GraphSAGE uses a neighbor sampling method,
wherein instead of taking the entire neighborhood, we randomly sample a subset
of neighbor nodes. SIGN takes a different approach to solve the scalability
issue and introduces a parallel architecture. The proposed Dr-COVID
architecture is based on the SIGN approach due to its computational
advantages. The predicted lists of drugs from the other GNNs are available in our
repository. Next, we describe the proposed Dr-COVID architecture.
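As a minimal illustration of Eq (1), the NumPy sketch below performs a single propagation step on a random toy graph; the tiny dimensions, the tanh activation, and the random data are assumptions made only for the example, not the settings used in Dr-COVID.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d_in, d_out = 5, 4, 3

A = rng.integers(0, 2, size=(N, N))
A = np.triu(A, 1); A = A + A.T                      # symmetric adjacency, no self-loops
deg = np.maximum(A.sum(axis=1), 1).astype(float)    # avoid division by zero
D_inv_sqrt = np.diag(deg ** -0.5)
A_tilde = D_inv_sqrt @ A @ D_inv_sqrt               # normalized adjacency
A_bar = np.eye(N) + A_tilde                         # add self-node embeddings

X = rng.standard_normal((N, d_in))                  # input node features X^(0)
W = rng.standard_normal((d_in, d_out))              # learnable weights W_0
X_next = np.tanh(A_bar @ X @ W)                     # X^(1) = g(Ā X^(0) W_0), Eq (1)
print(X_next.shape)                                 # (5, 3)
```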
### IV.1 Dr-COVID architecture
The proposed GNN architecture for _SARS-CoV-2_ drug repurposing has two main
components, namely, the encoder and decoder. The encoder based on the SIGN
architecture generates the node embeddings of all the nodes in the four-layer
graph. The decoder scores a drug-disease pair based on the embeddings. The
encoder and decoder networks are trained in an end-to-end manner. Next, we
describe these two components of the Dr-COVID architecture, which is
illustrated in Fig 4.
Figure 4: Dr-COVID architecture.
Encoder: The Dr-COVID encoder is based on the SIGN architecture [20], which
provides low-dimensional node embeddings based on the input features and nodal
connectivity information. Recall that the matrix $\mathbf{A}$ is the adjacency
matrix of the four-layered graph $\cal{G}$ and $\tilde{\mathbf{A}}$ is the
normalized adjacency. SIGN uses linear diffusion operators represented using
matrices $\mathbf{F}_{r}$, $r=1,2,\cdots$, to perform message passing and
aggregate local information in the graph. By choosing
$\mathbf{F}_{r}=\tilde{\mathbf{A}}^{r}$ we can incorporate information for
node $v$ from its $r$-hop neighbors. Here, $\tilde{\mathbf{A}}^{r}$ denotes
the $r$th matrix power. To start the information exchange between the nodes,
we assume that each node has its own $d$ dimensional feature, which we collect
in the matrix $\mathbf{X}\in\mathbb{R}^{N\times d}$ to obtain the complete
input feature matrix associated with the nodes of $\cal{G}$. We can then
represent the encoder as
$\displaystyle\mathbf{Z}=\sigma_{1}\left\\{\left[\mathbf{X}\boldsymbol{\Theta}_{0}\,\|\,\mathbf{F}_{1}\mathbf{X}\boldsymbol{\Theta}_{1}\|\,\cdots\|\,\mathbf{F}_{r}\mathbf{X}\boldsymbol{\Theta}_{r}\right]\right\\}\quad\text{and}\quad\mathbf{Y}=\sigma_{2}\left\\{\mathbf{ZW}\right\\},$
(2)
where Y is the final node embedding matrix for the nodes in the graph
$\cal{G}$, and
{$\boldsymbol{\Theta}_{0},\cdots,\boldsymbol{\Theta}_{r},\mathbf{W}$} are the
learnable parameters. Here, $\|$ represents concatenation and
$\sigma_{1}\\{\cdot\\}$ and $\sigma_{2}\\{\cdot\\}$ are the nonlinear tanh and
leaky rectified linear unit (leaky ReLU) activation functions, respectively.
The matrix $\mathbf{F}_{r}\mathbf{X}=\tilde{\mathbf{A}}^{r}\mathbf{X}=\big(\mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}\big)^{r}\mathbf{X}$ captures information about the local interactions over $r$-hop neighbors. Fig
4 shows the encoder architecture. The main benefit of using SIGN over other
sequential models (e.g., GCN, GAT, GraphSAGE) is that the matrix product
$\textbf{F}_{r}\textbf{X}$ is independent of the learnable parameters
$\boldsymbol{\Theta}_{r}$. Thus, this matrix product can be pre-computed
before training the neural network model. Doing so reduces the computational
complexity without compromising the performance.
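The following NumPy sketch illustrates this precomputation idea for $r=2$, mirroring Eq (2): the diffused features $\tilde{\mathbf{A}}\mathbf{X}$ and $\tilde{\mathbf{A}}^{2}\mathbf{X}$ are computed once, and only the small learnable transformations act on them afterwards. The toy dimensions and random placeholders for the learnable parameters are assumptions, not the actual Dr-COVID implementation.

```python
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x > 0, x, slope * x)

rng = np.random.default_rng(1)
N, d, dp, l = 6, 4, 3, 2                            # nodes, input dim, per-branch dim, embedding dim

A = rng.integers(0, 2, size=(N, N)); A = np.triu(A, 1); A = A + A.T
deg = np.maximum(A.sum(axis=1), 1).astype(float)
A_tilde = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

X = rng.standard_normal((N, d))

# Precompute the diffused features F_r X = A_tilde^r X once, before training.
F1_X = A_tilde @ X                                  # 1-hop information
F2_X = A_tilde @ F1_X                               # A_tilde^2 X, i.e. 2-hop information

Theta = [rng.standard_normal((d, dp)) for _ in range(3)]   # Theta_0, Theta_1, Theta_2
W = rng.standard_normal((3 * dp, l))

Z = np.tanh(np.concatenate([X @ Theta[0], F1_X @ Theta[1], F2_X @ Theta[2]], axis=1))
Y = leaky_relu(Z @ W)                               # final node embeddings, Eq (2)
print(Y.shape)                                      # (6, 2)
```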
In our setting, we choose $r=2$, i.e., the low-dimensional node embeddings
have information from 2-hop neighbors. Choosing $r\geq 3$ is not useful for
drug repurposing, as we aim to capture the local information of the drug
targets such that a drug node embedding should retain information about its
target genes and the shared genes in its vicinity. For example, the $1$-hop
neighbors of _Dexamethasone_ as shown in Fig 3b, are the diseases it treats
(e.g., _Asthma_), and the drugs similar to _Dexamethasone_ (e.g.,
_Methylprednisolone_) and its target genes (e.g., _Gene ID: 8446_ , _Gene ID:
387_). The $2$-hop neighbors are the anatomies of the target genes (e.g.,
_Bronchus_) of _Dexamethasone_ , and the drugs that have similar effects on
the diseases (e.g., _Hydrocortisone_ and _Dexamethasone_ have similar effects
on _Asthma_). It is essential for the embedding related to _Dexamethasone_ to
retain this local information for the drug repurposing task, and not much
benefit is obtained by propagating deeper in the network.
Decoder: For drug repurposing, we propose a score function that takes as input
the embeddings of the drugs and diseases and outputs a score based on which we
decide if a certain drug treats the disease. Fig 4 illustrates the proposed
decoder. The embedding matrix Y contains the embeddings of all the nodes in the four-layer graph, including those of the disease and
drug nodes. Let us denote the embeddings of the $i$th drug as ${\bf
y}_{c_{i}}\in\mathbb{R}^{l}$ and the embeddings of the $j$th disease as ${\bf
y}_{d_{j}}\in\mathbb{R}^{l}$. The proposed scoring function $f(\cdot)$ to
infer whether drug $c_{i}$ is a promising treatment for disease $d_{j}$ is
defined as
$\displaystyle s_{ij}=f({\bf y}_{c_{i}},{\bf y}_{d_{j}})=\sigma\left\\{{\bf
y}^{T}_{c_{i}}{\boldsymbol{\Phi}}{\bf y}_{d_{j}}\right\\},$ (3)
where $\sigma\\{\cdot\\}$ is the nonlinear sigmoid activation function and
${\boldsymbol{\Phi}}\in\mathbb{R}^{l\times l}$ is a learnable co-efficient
matrix. We interpret $s_{ij}$ as the probability that a link exists between
drug $c_{i}$ and disease $d_{j}$. The term ${\bf
y}^{T}_{c_{i}}{\boldsymbol{\Phi}}{\bf y}_{d_{j}}$ can be interpreted as a
measure of correlation (induced by ${\boldsymbol{\Phi}}$) between the disease
and drug node embeddings. We use $d=400$ and $l=250$ in our implementation.
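A minimal sketch of this bilinear scorer (Eq (3)) is shown below; the random placeholder embeddings and coefficient matrix are assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
l = 250
y_drug = rng.standard_normal(l)                     # embedding of drug c_i
y_disease = rng.standard_normal(l)                  # embedding of disease d_j
Phi = rng.standard_normal((l, l))                   # learnable coefficient matrix

s_ij = sigmoid(y_drug @ Phi @ y_disease)            # Eq (3): probability of a "treats" link
print(float(s_ij))
```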
The model is trained in a mini-batch setting in an end-to-end fashion using
stochastic gradient descent to minimize the weighted cross entropy loss, where
the loss function for the sample corresponding to the drug-disease pair
$(i,j)$ is given by
$\displaystyle\ell(s_{ij},z_{ij})=wz_{ij}\left(\log\left(\frac{1}{\sigma(s_{ij})}\right)\right)+\left(1-z_{ij}\right)\log\left(\frac{1}{1-\sigma(s_{ij})}\right),$
(4)
where $z_{ij}$ is the known training label associated with score $s_{ij}$ for
the drug-disease pair; $z_{ij}=1$ indicates that drug $i$ treats disease $j$, and $z_{ij}=0$ otherwise. Here, $w$ is the weight on the positive samples
that we choose to account for the class imbalance. As discussed in the Dataset
Section, we include the no-drug-disease links as negative control while
training our model. The number of no-drug-disease links is almost thirty times
the number of positive samples. To handle this class disparity, we explicitly
use a weight $w>0$ on the positive samples.
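The sketch below evaluates the weighted cross-entropy of Eq (4) on a toy batch; it treats the score as a predicted probability and uses $w=1.5$, the value chosen in the following subsection. The numerical values are illustrative assumptions.

```python
import numpy as np

def weighted_bce(p, z, w):
    """Weighted cross-entropy of Eq (4): w up-weights the rare positive (treat) links.
    p : predicted link probability in (0, 1); z : label (1 = drug treats disease)."""
    eps = 1e-12
    return w * z * np.log(1.0 / (p + eps)) + (1.0 - z) * np.log(1.0 / (1.0 - p + eps))

# Toy batch: two positives, four negatives, class weight w = 1.5 as in the paper.
p = np.array([0.9, 0.6, 0.2, 0.1, 0.4, 0.05])
z = np.array([1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
print(weighted_bce(p, z, w=1.5).mean())
```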
### IV.2 Model evaluation
In this subsection, we evaluate Dr-COVID and discuss the choice of various
hyper-parameters. The drug repurposing via link prediction can be viewed as a
binary classification problem, wherein a positive class represents the
existence of a link between the input drug and disease, and otherwise for a
negative class. We have 6113 positive samples (drug-disease links) in our
dataset. To account for the negative class samples, we randomly choose 200,000
no-drug-disease links (i.e., there is no link between these drugs and
diseases). These links are then divided into the training and testing set with
a $90\%-10\%$ split. To use mini-batch stochastic gradient descent, we group
the training set in batches of size 512 and train them for 20 epochs. Due to
the significant class imbalance, we oversample the drug-disease links while
creating batches, thus maintaining the class ratio (ratio of the number of
negative samples to the number of positive samples) of $1.5$ in each batch.
The weight $w$ on the positive samples (mentioned in Eq (4)) is also chosen
to be the class imbalance ratio of each batch, i.e., we fix $w$ to be $1.5$.
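The following sketch illustrates the sampling strategy described above: a $90\%-10\%$ split, and mini-batches of size 512 in which positives are oversampled (with replacement) so that the negative-to-positive ratio stays near 1.5. The placeholder pair lists and helper names are assumptions for illustration only.

```python
import random

random.seed(0)

# Placeholder ID pools; the real data has 6113 positive links and 200,000 sampled negatives.
positives = [("drug_%d" % i, "disease_%d" % (i % 7)) for i in range(60)]
negatives = [("drug_%d" % i, "disease_%d" % ((i + 3) % 7)) for i in range(1800)]

def split(pairs, test_frac=0.1):
    pairs = pairs[:]; random.shuffle(pairs)
    k = int(len(pairs) * (1 - test_frac))
    return pairs[:k], pairs[k:]                     # 90%-10% train/test split

pos_train, pos_test = split(positives)
neg_train, neg_test = split(negatives)

def make_batches(pos, neg, batch_size=512, ratio=1.5):
    """Oversample positives so each batch keeps roughly `ratio` negatives per positive."""
    n_pos = int(round(batch_size / (1 + ratio)))
    n_neg = batch_size - n_pos
    batches = []
    for start in range(0, len(neg), n_neg):
        neg_chunk = neg[start:start + n_neg]
        pos_chunk = random.choices(pos, k=n_pos)    # sampling with replacement = oversampling
        batch = pos_chunk + neg_chunk
        random.shuffle(batch)
        batches.append(batch)
    return batches

print(len(make_batches(pos_train, neg_train)))
```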
We perform experiments on three sequential GNN encoder architectures, namely,
GCN [44], GraphSAGE [45], and GAT [46] for the drug repurposing task, which we
treat as a link prediction problem, and compare with the proposed Dr-COVID
architecture. Specifically, the SIGN encoder in Dr-COVID is replaced with GCN,
GraphSAGE, and GAT to evaluate the model performance. Two blocks of these
sequential models are stacked to maintain the consistency with $r=2$ of the
Dr-COVID architecture. We evaluate these models on the test set, which consists of known treatments for diseases that were not shown to the model during training.
The model is evaluated based on two performance measures. Firstly, we report
the ability to classify the links correctly, i.e., to predict the known
treatments correctly for diseases in the test set. This is measured through
the receiver operating characteristic (ROC) curve of the true positive rate
(TPR) versus the false positive rate (FPR). Next, using the list of predicted drugs for the diseases in the test set, we report the model’s ability to rank
the actual treatment drug as high as possible (the ranking is obtained by
ordering the scores in Eq (3)).
ROC curves show the performance of a binary classification model by varying
the threshold values used to classify the positive samples, which eventually
change the TPR and FPR. Fig 5 shows the ROC curves of different GNN models.
The area under the ROC curve (AUROC), which lies in the interval [0,1], indicates the
separation ability of a binary classifier, where 1 indicates the best
performance, 0.5 means that the model is unable to discriminate between the
classes and 0 indicates a completely opposite behavior. We can see from Fig 5
that all the models have very similar AUROC values.
Figure 5: ROC Curves.
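As a brief illustration, an ROC curve and its AUROC can be computed from link scores and labels as sketched below; the toy arrays and the use of scikit-learn are assumptions for the example, not necessarily the tooling used in the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Toy labels and link scores; in practice these come from Eq (3) on the test set.
y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_score = np.array([0.92, 0.74, 0.35, 0.10, 0.61, 0.48, 0.22, 0.05])

fpr, tpr, thresholds = roc_curve(y_true, y_score)   # sweep the classification threshold
print("AUROC:", auc(fpr, tpr))
```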
We also evaluate Dr-COVID in terms of ranks of the actual treatment drug in
the predicted list for a disease from the testing set, where the rank is
computed by rank ordering the scores as before. In addition, we compute the
network proximity scores [13] and rank order the drugs based on these scores
to compare with other GNN encoder models. These network proximity scores are a
measure of the shortest distance between drugs and diseases. They are computed
as
$\displaystyle
P_{ij}=\frac{1}{|\cal{C}|+|\cal{T}|}\left(\sum_{p\in\cal{C}}\min_{q\in\cal{T}}d(p,q)+\sum_{q\in\cal{T}}\min_{p\in\cal{C}}d(p,q)\right),$
(5)
where $P_{ij}$ is a proximity score of drug $c_{i}$ and disease $d_{j}$. Here,
$\cal{C}$ is the set of target genes of $c_{i}$, $\cal{T}$ is the set of
target genes of $d_{j}$, and $d(p,q)$ is the shortest distance between a gene
$p\in\cal{C}$ and a gene $q\in\cal{T}$ in the gene interactome. We convert
these into Z-scores using the permutation test as
$\displaystyle Z_{ij}=\frac{P_{ij}-\mu}{\omega},$ (6)
where $\mu$ is the mean proximity score of $c_{i}$ and $d_{j}$, which we
compute by randomly selecting subsets of genes with the same degree
distribution as that of $\cal{C}$ and $\cal{T}$ from the gene interactome, and
$\omega$ is the standard deviation of the scores generated in the permutation
test. Table 1 gives the rankings, which clearly show that Dr-COVID results
in better ranks on the unseen diseases than the other GNN variants. Also,
compared to the network proximity measure, which is solely based on the gene
interactome, Dr-COVID performs better. We choose these drug-disease pairs for
evaluation as these links are not shown during the training. It is evident
that the diseases on which we evaluate are not confined to a single anatomy (e.g., _rectal neoplasms_ are associated with the _rectum_ anatomy, whereas _pulmonary fibrosis_ is a _lung_ disease), nor do they require a similar family of drugs for their treatment (e.g., _Fluorouracil_ is an antineoplastic drug, and _Prednisone_ is an anti-inflammatory corticosteroid), thus showcasing our model’s unbiased nature. For a majority of the diseases in the
test set, Dr-COVID ranks the treatment drug in the top 10 (as seen in Table 1). In the case of _Leukemia_ (blood cancer), other antineoplastic drugs like _Hydroxyurea_ and _Methotrexate_ are ranked high (in the top $10$), and its known
treatment drug _Azacitidine_ is ranked $17$. We give more importance to the
ranking parameter as any drug predictor requires classifying and ranking the
correct drugs as high as possible. Considering this AUROC-ranking trade-off, we can see that Dr-COVID with the SIGN encoder performs the best.
Disease | Treatment drug | Dr-COVID (SIGN) | SAGE | GCN | GAT | Network proximity
---|---|---|---|---|---|---
_Encephalitis_ | _Acyclovir_ | 10 | 35 | 35 | 295 | 5462
_Rectal neoplasms_ | _Fluorouracil_ | 9 | 421 | 16 | 231 | 2831
_Pulmonary fibrosis_ | _Prednisone_ | 5 | 3 | 10 | 9 | 2072
_Atrioventricular block_ | _Atropine_ | 6 | 79 | 8 | 14 | 4453
_Pellagra_ | _Niacin_ | 2 | 56 | 497 | 484 | Not computable
_Colic_ | _Hyoscyamine_ | 1 | 1 | 501 | 205 | Not computable
_Leukemia_ | _Azacitidine_ | 17 | 120 | 31 | 332 | 377
Table 1: Ranking Table. The Table gives the ranking performance of Dr-COVID
compared with other GNN variants and the network proximity measures. There are
no associated genes for some of the diseases in our database, which makes it impossible to compute the Z-scores. These are indicated as “Not computable”.
The best results are highlighted in bold font.
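For reference, the sketch below computes the network proximity of Eq (5) and a Z-score in the spirit of Eq (6) on a toy gene interactome. For simplicity, the permutation test draws uniformly random gene sets instead of the degree-matched sets used in the paper, and the graph and target sets are illustrative assumptions.

```python
import random
import numpy as np
import networkx as nx

def proximity(G, C, T):
    """Network proximity of Eq (5): average of closest distances between the
    drug targets C and the disease targets T in the gene interactome G."""
    d_ct = [min(nx.shortest_path_length(G, p, q) for q in T) for p in C]
    d_tc = [min(nx.shortest_path_length(G, p, q) for p in C) for q in T]
    return (sum(d_ct) + sum(d_tc)) / (len(C) + len(T))

random.seed(0)
G = nx.connected_watts_strogatz_graph(60, 4, 0.3, seed=0)   # toy gene interactome
C, T = [1, 5, 9], [12, 30]                                   # toy drug / disease gene targets

P = proximity(G, C, T)
# Permutation test (Eq (6)): compare against random gene sets of the same sizes
# (uniform sampling here; the paper matches the degree distribution as well).
null = [proximity(G, random.sample(list(G.nodes), len(C)),
                  random.sample(list(G.nodes), len(T))) for _ in range(200)]
Z = (P - np.mean(null)) / np.std(null)
print(P, Z)
```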
COVID-19 analysis: We perform a similar analysis and identify potential
candidate drugs for _SARS-CoV-2_. For all the COVID-19 nodes in our dataset
comprising 27 proteins (structural, nsp and orf), _SARS-CoV_ , _IBV_ , _MERS-
CoV_ , _CoV-229E_ , _CoV-NL63_ , and _MHV_ , we individually predict the drugs
for all these 33 entities. Each protein in _SARS-CoV-2_ targets a different
set of genes in humans, so we give individual predictions. We then pick the
top 10 drugs from all the predicted drugs and list 150 candidate repurposed
drugs for COVID-19. Out of these 150 drugs, 46 are currently in clinical
trials. Our predictions have a mixture of antivirals, antineoplastic,
corticosteroids, monoclonal antibodies (mAb), non-steroidal anti-inflammatory
drugs (NSAIDs), ACE inhibitors, and statin family of drugs, and some of the
vaccines discovered previously for other diseases. Refer to the Results
Section for a detailed discussion on the analysis of the predicted drugs for
COVID-19.
## V Conclusions
In this work, we presented a generalized drug repurposing model, called Dr-
COVID for novel human diseases. We constructed a biological network of drugs,
diseases, genes, and anatomies and formulated the drug repurposing task as a
link prediction problem. We proposed a graph neural network model, which was
then trained to predict drugs for new diseases. Dr-COVID predicted 150
potential drugs for COVID-19, of which 46 drugs are currently in clinical
trials. The considered GNN model is computationally efficient and ranks known treatment drugs for diseases better than the other GNN variants and non-deep methods like the network proximity approach. This work can be extended along
several directions. Considering the availability of substantial biological
data, the inclusion of information like individual side effects of drugs, the
molecular structure of the drugs, etc., may further improve the predictions.
Considering the comorbidities of a patient would help us analyze the
biological processes and gene interactions in the body specific to an individual
and accordingly prescribe the line of treatment. Predicting a synergistic
combination of drugs for a disease would be another area of interest where
graph neural networks can be beneficial.
## Acknowledgements
S.P. Chepuri is supported in part by the Pratiskha Trust Young Investigator
Award, Indian Institute of Science, Bangalore, and the SERB grant
SRG/2019/000619, and S. Doshi is supported by the Robert Bosch Center for
Cyber Physical Systems, Indian Institute of Science, Bangalore, Student
Research Grant 2020-M-11.
The authors thank the Deep Graph Learning team for compiling DRKG and making
the data public at https://github.com/gnn4dr/DRKG.
## Data and Code availability
All the implementation and the data required to reproduce the results in the
paper are available at https://github.com/siddhant-doshi/Dr-COVID.
## References
* [1] World Bank. Global Economic Prospects, June 2020; 2020b. http://hdl.handle.net/10986/33748.
* [2] Zumla A, Chan JF, Azhar EI, Hui DS, Yuen KY. Coronaviruses—drug discovery and therapeutic options. Nat Rev Drug Discov. 2016;15(5):327–347.
* [3] Paules CI, Marston HD, Fauci AS. Coronavirus infections—more than just the common cold. JAMA. 2020;323(8):707–708.
* [4] Lu H, Stratton CW, Tang YW. Outbreak of pneumonia of unknown etiology in Wuhan, China: The mystery and the miracle. J Med Virol. 2020;92(4):401–402.
* [5] Chen N, Zhou M, Dong X, Qu J, Gong F, Han Y, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507–513.
* [6] Sohrabi C, Alsafi Z, O’Neill N, Khan M, Kerwan A, Al-Jabir A, et al. World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). Int J Surg. 2020;76:71–76.
* [7] Pushpakom S, Iorio F, Eyers PA, Escott KJ, Hopper S, Wells A, et al. Drug repurposing: progress, challenges and recommendations. Nat Rev Drug Discov. 2018;18(1):41–58.
* [8] Zitnik M, Nguyen F, Wang B, Leskovec J, Goldenberg A, Hoffman M. Machine learning for integrating data in biology and medicine: Principles, practice, and opportunities. Inf Fusion. 2019;50:71–91.
* [9] Cheng F, Zhao Z. Machine learning-based prediction of drug–drug interactions by integrating drug phenotypic, therapeutic, chemical, and genomic properties. J Am Med Inform Assoc. 2014;21(e2):e278–e286.
* [10] Yi Z, Li S, Yu J, Tan Y, Wu Q, Yuan H, et al. Drug-drug interaction extraction via recurrent neural network with multiple attention layers. International Conference on Advanced Data Mining and Applications. 2017;10604:554–566.
* [11] Deng Y, Xu X, Qiu Y, Xia J, Zhang W, Liu S. A multimodal deep learning framework for predicting drug-drug interaction events. Bioinformatics. 2020;36(15):4316–4322.
* [12] Zitnik M, Agrawal M, Leskovec J. Modeling polypharmacy side effects with graph convolutional networks. Bioinformatics. 2018;34(13):i457–i466.
* [13] Cheng F, Desai RJ, Handy DE, Wang R, Schneeweiss S, Barábasi AL, et al. Network-based approach to prediction and population-based validation of in silico drug repurposing. Nat Commun. 2018;9(1):1–12.
* [14] Guney E, Menche J, Vidal M, Barábasi AL. Network-based in silico drug efficacy screening. Nat Commun. 2016;7(1):1–13.
* [15] Zhou Y, Hou Y, Shen J, Huang Y, Martin W, Cheng F. Network-based drug repurposing for novel coronavirus 2019-nCoV/SARS-CoV-2. Cell Discov. 2020;6(1):1–18.
* [16] Cheng F, Lu W, Liu C, Fang J, Hou Y, Handy DE, et al. A genome-wide positioning systems network algorithm for in silico drug repurposing. Nat Commun. 2019;10(1):1–14.
* [17] Gramatica R, Di Matteo T, Giorgetti S, Barbiani M, Bevec D, Aste T. Graph theory enables drug repurposing–how a mathematical model can drive the discovery of hidden mechanisms of action. PLOS One. 2014;9(1):e84912.
* [18] Gysi DM, Valle ÍD, Zitnik M, Ameli A, Gan X, Varol O, et al. Network medicine framework for identifying drug repurposing opportunities for COVID-19 arXiv:1403.3301 [Preprint]. 2020 [cited 2020 Nov 20]. Available from: https://arxiv.org/abs/2004.07229.
* [19] Ioannidis VN, Zheng D, Karypis G. Few-shot link prediction via graph neural networks for Covid-19 drug-repurposing arXiv:2007.10261 [Preprint]. 2020 [cited 2020 Nov 20]. Available from: https://arxiv.org/abs/2007.10261.
* [20] Rossi E, Frasca F, Chamberlain B, Eynard D, Bronstein M, Monti F. SIGN: Scalable Inception Graph Neural Networks arXiv:2004.11198 [Preprint]. 2020 [cited 2020 Nov 20]. Available from: https://arxiv.org/abs/2004.11198.
* [21] Ioannidis VN, Song X, Manchanda S, Li M, Pan X, Zheng D, et al. DRKG - Drug Repurposing Knowledge Graph for Covid-19 2020 [cited 2020 Nov 20]. Database: Github [Internet]. Available from: https://github.com/gnn4dr/DRKG/.
* [22] Gordon DE, Jang GM, Bouhaddou M, Xu J, Obernier K, White KM, et al. A SARS-CoV-2 protein interaction map reveals targets for drug repurposing. Nature. 2020;583:459–468.
* [23] Zaim S, Chong JH, Sankaranarayanan V, Harky A. COVID-19 and multi-organ response. Curr Prob Cardiology. 2020;45(8):100618.
* [24] Zhang W, Zhao Y, Zhang F, Wang Q, Li T, Liu Z, et al. The use of anti-inflammatory drugs in the treatment of people with severe coronavirus disease 2019 (COVID-19): The experience of clinical immunologists from China. Clin Immunol. 2020;214:108393.
* [25] Channappanavar R, Perlman S. Pathogenic human coronavirus infections: causes and consequences of cytokine storm and immunopathology. Semin Immunopathol. 2017;39(5):529–539.
* [26] Chousterman BG, Swirski FK, Weber GF. Cytokine storm and sepsis disease pathogenesis. Semin Immunopathol. 2017;39(5):517–528.
* [27] Jockusch S, Tao C, Li X, Anderson TK, Chien M, Kumar S, et al. Library of Nucleotide Analogues Terminate RNA Synthesis Catalyzed by Polymerases of Coronaviruses Causing SARS and COVID-19. Antiviral Res. 2020;180:104857.
* [28] Caly L, Druce JD, Catton MG, Jans DA, Wagstaff KM. The FDA-approved drug ivermectin inhibits the replication of SARS-CoV-2 in vitro. Antiviral Res. 2020;178:104787.
* [29] Fajgenbaum DC, Rader DJ. Teaching old drugs new tricks: statins for COVID-19? Cell Metab. 2020;32(2):145–147.
* [30] Shin YH, Min JJ, Lee JH, Kim EH, Kim GE, Kim MH, et al. The effect of fluvastatin on cardiac fibrosis and angiotensin-converting enzyme-2 expression in glucose-controlled diabetic rat hearts. Heart Vessels. 2016;32(5):618–627.
* [31] Zhang H, Penninger JM, Li Y, Zhong N, Slutsky AS. Angiotensin-converting enzyme 2 (ACE2) as a SARS-CoV-2 receptor: molecular mechanisms and potential therapeutic target. Intensive Care Med. 2020;46(4):586–590.
* [32] Zhang XJ, Qin JJ, Cheng X, Shen L, Zhao YC, Yuan Y, et al. In-hospital use of statins is associated with a reduced risk of mortality among individuals with COVID-19. Cell Metab. 2020;32(2):176–187.
* [33] Rega Institute, Belgian University KU Leuven. KU Leuven Breakthrough as Modified Yellow Fever Virus Destroys COVID-19 in Preclinical Animal Research: Clinical Trials Next. News article. 2020 [cited 2020 Nov 19]. Available from: https://www.trialsitenews.com/ku-leuven-breakthrough-as-modified-yellow-fever-virus-destroys-covid-19-in-preclinical-animal-research-clinical-trials-next/.
* [34] Chen X, Chou CY, Chang GG. Thiopurine analogue inhibitors of severe acute respiratory syndrome-coronavirus papain-like protease, a deubiquitinating and deISGylating enzyme. Antivir Chem Chemother. 2009;19(4):151–156.
* [35] Ragan I, Hartson L, Pidcoke H, Bowen R, Goodrich R. Pathogen reduction of SARS-CoV-2 virus in plasma and whole blood using riboflavin and UV light. PLOS One. 2020;15(5):e0233947.
* [36] Weng JK. Plant Solutions for the COVID-19 Pandemic and Beyond: Historical Reflections and Future Perspectives. Mol Plant. 2020;13:803–807.
* [37] Wishart DS, Feunang YD, Guo AC, Lo EJ, Marcu A, Grant JR, et al. Drugbank 5.0: a major update to the Drugbank database for 2018. Nucleic Acids Res. 2017;46(D1):D1074–D1082.
* [38] Himmelstein DS, Lizee A, Hessler C, Brueggeman L, Chen SL, Hadley D, et al. Systematic integration of biomedical knowledge prioritizes drugs for repurposing. eLife. 2017;6:e26726.
* [39] Percha B, Altman RB. A global network of biomedical relationships derived from text. Bioinformatics. 2018;34(15):2614–2624.
* [40] Szklarczyk D, Gable AL, Lyon D, Junge A, Wyder S, Huerta-Cepas J, et al. STRING v11: protein–protein association networks with increased coverage, supporting functional discovery in genome-wide experimental datasets. Nucleic Acids Res. 2019;47(D1):D607–D613.
* [41] Orchard S, Ammari M, Aranda B, Breuza L, Briganti L, Broackes-Carter F, et al. The MIntAct project—IntAct as a common curation platform for 11 molecular interaction databases. Nucleic Acids Res. 2014;42(D1):D358–D363.
* [42] Cotto KC, Wagner AH, Feng YY, Kiwala S, Coffman AC, Spies G, et al. DGIdb 3.0: a redesign and expansion of the drug–gene interaction database. Nucleic Acids Res. 2018;46(D1):D1068–D1073.
* [43] Bronstein MM, Bruna J, LeCun Y, Szlam A, Vandergheynst P. Geometric deep learning: going beyond euclidean data. IEEE Signal Process Mag. 2017;34(4):18–42.
* [44] Kipf TN, Welling M. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907 [Preprint]. 2016 [cited 2020 Nov 20]. Available from: https://arxiv.org/abs/1609.02907.
* [45] Hamilton W, Ying Z, Leskovec J. Inductive representation learning on large graphs. Advances in NeurIPS. 2017 (pp. 1024-1034).
* [46] Veličković P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y. Graph attention networks. arXiv:1710.10903 [Preprint]. 2017 [cited 2020 Nov 20]. Available from: https://arxiv.org/abs/1710.10903.
Bootstrapping 2d $\phi^{4}$ Theory
with Hamiltonian Truncation Data
Hongbin Chen1, A. Liam Fitzpatrick1, Denis Karateev2
1Department of Physics, Boston University, Boston, MA 02215, USA
2Philippe Meyer Institute, Physics Department, École Normale Supérieure (ENS), Université PSL, 24 rue Lhomond, F-75231 Paris, France
We combine the methods of Hamiltonian Truncation and the recently proposed
generalisation of the S-matrix bootstrap that includes local operators to
determine the two-particle scattering amplitude and the two-particle form
factor of the stress tensor at $s>0$ in the 2d $\phi^{4}$ theory. We use the
form factor of the stress tensor at $s\leq 0$ and its spectral density
computed using Lightcone Conformal Truncation (LCT), and inject them into the
generalized S-matrix bootstrap set-up. The obtained results for the scattering
amplitude and the form factor are fully reliable only in the elastic regime.
We independently construct the “pure” S-matrix bootstrap bounds (bootstrap
without including matrix elements of local operators), and find that the sinh-
Gordon model and its analytic continuation the “staircase model” saturate
these bounds. Surprisingly, the $\phi^{4}$ two-particle scattering amplitude
also very nearly saturates these bounds, and moreover is extremely close to
that of the sinh-Gordon/staircase model.
###### Contents
1. 1 Introduction
1. 1.1 Models in 2d
2. 1.2 Summary of Main Results
2. 2 Basic Definitions and Notation
3. 3 Analytic Results
1. 3.1 Sinh-Gordon Model
2. 3.2 Staircase Model
3. 3.3 $\phi^{4}$ model
4. 3.4 2d $O(N)$ model in the large $N$ limit
5. 3.5 $T\overline{T}$ deformation of the 2d Ising
4. 4 Pure S-matrix bootstrap
1. 4.1 Set-up
2. 4.2 Numerical Results
5. 5 S-matrix and Form Factor Bootstrap
1. 5.1 Set-up
2. 5.2 Numerical Results
1. 5.2.1 Infinite Precision Example
2. 5.2.2 $\phi^{4}$ model
3. 5.3 Comparison of the sinh-Gordon model and $\phi^{4}$ model
6. 6 Discussion and Future Directions
7. A Kinematics of 2d Scattering
8. B $O(N)$ model
9. C Perturbative Computations
1. C.1 Feynman Diagrams
1. C.1.1 $\phi^{4}$ theory
2. C.1.2 2d $O(N)$ model in the large $N$ limit
2. C.2 Dispersion Relations
3. C.3 Nonrelativistic Limit
10. D Sinh-Gordon Form Factors and $C$-function
## 1 Introduction
There is a set of powerful non-perturbative techniques to study quantum field
theories (QFTs) commonly referred to as “bootstrap” methods. Such methods
attempt to bound the space of QFTs using only basic principles such as
symmetries, unitarity, crossing, etc. The most famous bootstrap technique is
the numerical conformal bootstrap pioneered in Rattazzi:2008pe . It allows one
to derive precise bounds on the space of conformal field theories (CFTs), see
Poland:2018epd for a review. Another bootstrap technique which allows one to
study QFTs with a mass gap was pioneered in Paulos:2016but ; Paulos:2017fhb .
In this paper we will refer to it as the numerical $S$-matrix bootstrap. The
$S$-matrix bootstrap gained further attention in recent years, see
Paulos:2016fap ; Doroud:2018szp ; He:2018uxa ; Cordova:2018uop ;
Guerrieri:2018uew ; Homrich:2019cbt ; EliasMiro:2019kyf ; Cordova:2019lot ;
Bercini:2019vme ; Correia:2020xtr ; Bose:2020shm ; Guerrieri:2020bto ;
Hebbar:2020ukp ; He:2021eqn ; Guerrieri:2021ivu ; Miro:2021rof ;
Guerrieri:2020kcs ; Guerrieri:2021tak . The recent work Karateev:2019ymz ;
Karateev:2020axc made a concrete proposal for how to extend the $S$-matrix
bootstrap to accommodate form factors and spectral densities. We will refer to
this approach as the numerical $S$-matrix/form factor bootstrap.
A simultaneous advantage and disadvantage of bootstrap methods is their model-
independent nature. If one wants to study some particular model one
generically has to inject additional model specific information. The amount of
this additional information highly depends on the situation. For example one
can solve numerically the 3d Ising model using the conformal bootstrap method
by simply specifying that there are only two relevant operators in the
spectrum, one is $Z_{2}$ even and one is $Z_{2}$ odd, see ElShowk:2012ht .
Another example is the work Guerrieri:2018uew , where the authors attempted to
study 4d QCD by using the $S$-matrix bootstrap, injecting some known
information from chiral perturbation theory. Another notable example in this
spirit is the study of the 2d Ising Field Theory in Gabai:2019ryw , where the
authors injected the S-matrix of the theory in one kinematic regime to learn
about its behavior more generally.
A great class of tools for obtaining non-perturbative results in a particular
model are Hamiltonian Truncation methods, which involve numerically
diagonalizing the Hamiltonian in a finite dimensional subspace of the full
Hilbert space. This approach is a special case of more general variational
methods, so all else being equal the larger the truncation subspace, the more
accurate the approximation to the eigenstates of the Hamiltonian. There are
various different ways one can try to implement Hamiltonian truncation for
continuum QFT, the most well-known probably being the Truncated Conformal
Space Approximation (TCSA) of Zamolodchikov and Yurov yurov1990truncated ;
yurov1991correlation , see james2018non for a recent review and guide to the
literature. One immediate output of such methods is the mass spectrum, which
is just the set of eigenvalues of the Hamiltonian. Because one also obtains
the eigenvectors of the Hamiltonian, one can compute spectral densities of
local operators quite straightforwardly. By contrast, constructing multi-
particle asymptotic states with Hamiltonian methods is much more subtle, since
these are not just eigenstates of the Hamiltonian. Thus the computation of
observables like the scattering amplitudes requires a more involved approach.
The main goal of this paper is to study non-perturbatively the 2d $\phi^{4}$
model (in the unbroken phase) and to compute as many observables as we can. In
a companion paper truncffsd , we used the Lightcone Conformal Truncation (LCT)
method to compute the two-particle form factor of the stress tensor in the
unphysical regime ($s\leq 0$) and its spectral density. In this paper, we will
inject this data into the S-matrix/form factor bootstrap program, and obtain
the form factor at $s>0$ and also the elastic 2-to-2 scattering amplitude in
the $\phi^{4}$ model.
The rest of the introduction is organized as follows. In section 1.1, we
discuss a wide class of scalar field theories in 2d, where we precisely define
the $\phi^{4}$ model and discuss its relation with other models. In section
1.2, we provide an extended summary of our main results.
### 1.1 Models in 2d
Let us consider the class of quantum field theories in 2d which consists of a
single real scalar field $\phi(x)$ and is defined as the deformation of the
free scalar field theory in the UV by a potential $V(\phi)$. The corresponding
action reads
$S_{UV}=\int d^{2}x\left(-{1\over 2}(\partial\phi)^{2}-V(\phi)\right).$ (1.1)
Notice that the field $\phi(x)$ has the mass dimension zero, $[\phi]=0$. This
situation is special to 2d and allows for complicated potentials $V(\phi)$ not
present in higher dimensions. In this paper we will further restrict our
attention to potentials which are invariant under the following $Z_{2}$
transformation $\phi(x)\rightarrow-\phi(x)$. The most generic potential then
has the following form
$V(\phi)={1\over
2}m_{0}^{2}\phi^{2}+\sum_{n=2}^{\infty}g_{2n}\phi^{2n}+\text{counterterms},$
(1.2)
where $m_{0}$ is the mass-like parameter and $g_{2n}$ is an infinite set of
coupling constants. We focus on the case when $m_{0}^{2}>0$ (unbroken phase).
Below we will define and discuss several potentials $V(\phi)$. We will take
the operators to be normal-ordered in order to remove divergences in the
theory; this choice is equivalent to a hard cutoff with a particular choice
for the counterterms above.
We start with two integrable models called the sine-Gordon and the sinh-Gordon
models. They are given by the following potentials respectively
$\displaystyle V_{\text{sine-Gordon}}(\phi)$ $\displaystyle\equiv-
m_{0}^{2}\beta^{-2}\left(\cos(\beta\phi)-1\right)+\text{counterterms},$ (1.3)
$\displaystyle V_{\text{sinh-Gordon}}(\phi)$
$\displaystyle\equiv+m_{0}^{2}\beta^{-2}\left(\cosh(\beta\phi)-1\right)+\text{counterterms}.$
(1.4)
Here $\beta$ is the single dimensionless parameter which specifies the models.
Expanding these potentials around $\phi=0$ one can bring them to the form
given by (1.2), and thus express all the $g_{2n}$ coefficients in terms of
$\beta$. The two models are formally related by the replacement
$\beta\leftrightarrow i\beta$. These two models have been extensively studied
in the literature. For a summary of the sine-Gordon results see, for example,
section 4.1 in Karateev:2019ymz and references therein. We will summarize the
results for the sinh-Gordon model in section 3.1.
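As a quick symbolic check (an illustration, not taken from the references), one can expand the sinh-Gordon potential around $\phi=0$ with SymPy and read off the couplings $g_{2n}$ of (1.2) in terms of $\beta$:

```python
import sympy as sp

phi, beta, m0 = sp.symbols("phi beta m_0", positive=True)

V_shg = m0**2 / beta**2 * (sp.cosh(beta * phi) - 1)

# Taylor-expand around phi = 0 and read off the couplings of Eq (1.2).
series = sp.series(V_shg, phi, 0, 9).removeO().expand()
for n in (1, 2, 3, 4):
    g_2n = series.coeff(phi, 2 * n)
    print(f"g_{2*n} =", sp.simplify(g_2n))
# g_2 = m0**2/2 (mass term), g_4 = m0**2*beta**2/24, g_6 = m0**2*beta**4/720, ...
```

In particular, $g_{4}=m_{0}^{2}\beta^{2}/24$, which matches the $\phi^{4}$ normalization $g_{4}=\lambda/4!$ precisely when $\beta^{2}=\overline{\lambda}$, consistent with the weak-coupling identification made below.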
Another interesting model is the $\phi^{4}$ model. It will play the central
role in this paper. It is defined by the potential (1.2) with
$g_{4}=\lambda/4!$ and $g_{2n}=0$ for all $n\geq 3$. Here $\lambda\geq 0$ is
the quartic coupling constant. This is possibly the simplest quantum field
theory model one can think of. Let us write out its potential explicitly, it
reads
$V_{\phi^{4}}\equiv{1\over 2}m_{0}^{2}\,\phi^{2}+{\lambda\over
4!}\phi^{4}+{1\over 2}\delta_{m}\phi^{2}.$ (1.5)
No counterterms are required for the coupling constant $\lambda$ in $d=2$.
Normal-ordering the interaction and setting $\delta_{m}=0$ is equivalent to
choosing a hard cutoff $\Lambda_{\rm cutoff}$ and setting
$\delta_{m}=-{\lambda\over 8\pi}\log\Lambda_{\rm cutoff}^{2}/m_{0}^{2}$. The
quartic coupling $\lambda$ has mass dimension $[\lambda]=2$; we define the
dimensionless quartic coupling $\overline{\lambda}$ as111We caution the reader
that this convention for $\overline{\lambda}$ differs from that in
Anand:2020gnn ;
$\overline{\lambda}_{\text{here}}=4\pi\overline{\lambda}_{\text{there}}$.
$\overline{\lambda}\equiv m_{0}^{-2}\lambda.$ (1.6)
The $\phi^{4}$ model is non-integrable, and one needs numerical non-
perturbative techniques in order to compute observables in this theory.222See
e.g. Chabysheva:2015ynr ; Burkardt:2016ffk ; Schaich:2009jk ; Milsted:2013rxa
; Bosetti:2015lsa ; Rychkov:2014eea ; Rychkov:2015vap ; Bajnok:2015bgw ;
Elliott:2014fsa ; Chabysheva:2016ehd ; Serone:2018gjo ; Tilloy:2021hhb for
various recent nonperturbative works on this model. It was shown in
Anand:2017yij that the $\phi^{4}$ model in lightcone quantization333It is
important to note that, due to the contribution from zero modes, the critical
value of the coupling differs in equal-time and lightcone quantization. See
Burkardt ; Burkardt2 ; Fitzpatrick:2018xlz for details. in the unbroken phase
is in the following range
$\overline{\lambda}\in[0,\,23.1].$ (1.7)
The critical value $\overline{\lambda}\approx 23.1$ leads to the conformal IR
fixed point given by the free massless Majorana fermion (which is the 2d Ising
model).
Finally, we consider the 2d $O(N)$ model, which is the case when instead of a
single field $\phi$, we have $N$ fields $\phi_{1}$, $\phi_{2}$, $\ldots$,
$\phi_{N}$ with the same mass. Requiring the $O(N)$ symmetry we can write the
following analogue of the pure $\phi^{4}$ theory
$V_{\phi^{4}}^{O(N)}(\phi)\equiv{1\over
2}m_{0}^{2}(\phi_{i}\phi_{i})+{\lambda\over
8N}(\phi_{i}\phi_{i})(\phi_{j}\phi_{j})+{1\over
2}\delta_{m}(\phi_{i}\phi_{i}),$ (1.8)
where there is an implicit summation over the repeated indices. In the large
$N$ limit when $N\rightarrow\infty$ the model becomes integrable. We will
mostly use it in this paper to check our numerical procedures.
To conclude our brief discussion of 2d models let us clarify an important
point. One can consider an infinite class of potentials (1.2) with
$g_{4}=\lambda/4!$ and
$m_{0}^{-2}g_{2n}\ll 1,\qquad n\geq 3.$ (1.9)
All such models will lead to observables very similar to the pure $\phi^{4}$
model (1.5). Using non-perturbative techniques we can compute in practice
observables only at finite precision and thus we will never be able to
distinguish the pure $\phi^{4}$ model from this infinite class of models. In
order to be pedantic we say that we compute observables for $\phi^{4}$-like
theories. In this sense, the sinh-Gordon model belongs to the class of
$\phi^{4}$-like models if $\beta^{2}=\overline{\lambda}$ at very small values
of the coupling, $\overline{\lambda}\ll 4\pi$.
### 1.2 Summary of Main Results
Given a model there are various observables one would like to compute. In this
paper we will focus on three different observables: the two-particle form
factor of the trace of the stress tensor $\mathcal{F}_{2,0}^{\Theta}(s)$, the
spectral density of the trace of the stress tensor $\rho_{\Theta}(s)$ and the
two-to-two scattering amplitude $\mathcal{S}(s)$. For their precise
definitions see section 2. Given the definition of the $\phi^{4}$ model in
(1.5), one would like to obtain the above observables in terms of the bare
coupling $\overline{\lambda}$ and the mass-like parameter $m_{0}$ which simply
sets the scale.444In practice we provide all our final expressions in terms of
the physical mass $m$ which can also be computed in terms of $m_{0}$ for a
given value of $\overline{\lambda}$. In the $\overline{\lambda}\ll 4\pi$
regime, one can use perturbation theory to do that. For
$\overline{\lambda}\gtrsim 4\pi$, one can use Hamiltonian Truncation methods
instead. In the companion paper truncffsd , we have computed the spectral
density of the trace of the stress tensor $\rho_{\Theta}(s)$ and two-particle
form factor $\mathcal{F}_{2,0}^{\Theta}(s)$ at $s\leq 0$ for various values of
$\overline{\lambda}$, see figures 4 and 12 therein. The main goal of this paper
is to compute $\mathcal{F}_{2,0}^{\Theta}(s)$ for $s>0$ and the scattering
amplitude $\mathcal{S}(s)$ given the input of truncffsd . Below we outline the
main results.
We start by employing the pure $S$-matrix bootstrap to study the space of
scattering amplitudes of a single $Z_{2}$ odd particle with the physical mass
$m$. One can characterize such amplitudes for example by their value (and the
value of their derivatives) at the crossing symmetric point $s=2m^{2}$. See
(2.26) and (2.27) for details. Using crossing, analyticity and unitarity we
construct a non-perturbative bound on a two-dimensional subspace of these
parameters. The bound is presented in figure 1. We discover that the left tip
describes the scattering of free bosons and the right tip describes the
scattering of free Majorana fermions (i.e. the 2d Ising model). The two tips
are connected by the lower and upper edges. The lower edge is saturated by the
sinh-Gordon model and its analytic continuation the “staircase model”.
We then propose a strategy which allows one to inject the Hamiltonian
Truncation data of truncffsd into the $S$-matrix/form factor bootstrap. This
allows one to isolate a specific theory, instead of constructing generic
bounds on the space of allowed theories. Using this strategy we numerically
obtain the form factor of the trace of the stress tensor
$\mathcal{F}_{2,0}^{\Theta}(s)$ with $s>0$ and the scattering amplitude
$\mathcal{S}(s)$. The results are presented in figures 9–11 for various
values of $\overline{\lambda}$. In the regime $\overline{\lambda}\ll 4\pi$
they agree with perturbative expressions. Due to the limitations of the
$S$-matrix/form factor bootstrap restricted to two-particle scattering states,
we expect that our results are fully accurate only in the “elastic” regime for
$s\leq 16m^{2}$. Given the numerical scattering amplitudes in the $\phi^{4}$
model, we can determine the position of the $\phi^{4}$ model with respect to
the generic bound given in figure 1. It turns out that the $\phi^{4}$ model
lies very close to the lower edge of this plot, see figure 7. This means that
the $\phi^{4}$ model is very similar to the sinh-Gordon/staircase model (which
is exactly on the lower edge) if one only looks at the two-dimensional subspace
shown in this plot. This fact calls for further investigation which we
summarize in the next paragraph. Leaving this issue aside for a moment we
observe that the $\phi^{4}$ model starts close to the free boson theory for
small values of $\overline{\lambda}$ and monotonically moves towards the free
fermion theory (2d Ising) along the lower edge when we increase
$\overline{\lambda}$. This behaviour is in agreement with the fact that there
is a critical value of $\overline{\lambda}$ when the $\phi^{4}$ theory flows
to the 2d Ising fixed point.
The $\phi^{4}$ and the sinh-Gordon models are inherently different. The former
has particle production and the latter does not. In practice this difference
will become evident for example if we look at the scattering amplitude
$\mathcal{S}(s)$ in the “non-elastic” regime $s\geq 16m^{2}$. For small values
of $\overline{\lambda}\ll 4\pi$, the $\phi^{4}$ model is expected to be very
similar to the sinh-Gordon model with $\beta^{2}=\overline{\lambda}$. To our
surprise we have discovered that at strong coupling, the $\phi^{4}$ model
still gives, in the “elastic” regime, observables very similar to those of the
sinh-Gordon model with some value $\beta_{*}^{2}$. Notice however that
$\beta_{*}^{2}\neq\overline{\lambda}$. For large values of
$\overline{\lambda}$, the value of $\beta_{*}^{2}$ is allowed to become
complex (describing the “staircase model”). The comparison of the observables
in the $\phi^{4}$ model and in the sinh-Gordon model is given in figures 16, 16
and 17. There, one sees a striking similarity of the two models in the
“elastic” regime and their small deviation in the non-elastic regime.
##### Outline of the paper
We summarize basic definitions and set up the notation in section 2. We
summarize various analytic results in section 3. For instance we discuss the
sinh-Gordon model and its analytic continuation (the “staircase model”), the
$\phi^{4}$ model in perturbation theory, and the 2d $O(N)$ model in the large $N$
limit. In section 4, we construct a generic bound on the space of scattering
amplitudes of $Z_{2}$ odd particles. In section 5, we show how one can inject
the LCT data into the $S$-matrix/form factor bootstrap and apply this strategy
to the 2d $\phi^{4}$ model. We discuss open questions and further directions
in section 6.
Some supplementary material is provided in appendices. We review the details
of the 2d kinematics in appendix A. We discuss the 2d $O(N)$ models and their
large $N$ limit in appendix B. We provide details of perturbative and large
$N$ computations in the $\phi^{4}$ model and the $O(N)$ model in appendix C.
Finally, in appendix D, we discuss two- and four-particle form factors in the
sinh-Gordon model.
## 2 Basic Definitions and Notation
Let us start by carefully defining the most important objects for our work. We
will first work with scalars in a general number of dimensions $d$, and focus
on $d=2$ in the second half of this section. We also define the scattering
amplitudes for 2d Majorana fermions (which is used in later sections) at the
end of this section. We will use the “mostly plus” Lorentzian metric
$\eta^{\mu\nu}=\\{-1,+1,+1,\ldots\\}.$ (2.1)
We require that our quantum field theory contains the local stress tensor
$T^{\mu\nu}(x)$ which obeys the following conditions
$T^{\mu\nu}(x)=T^{\nu\mu}(x),\qquad\partial_{\mu}T^{\mu\nu}(x)=0.$ (2.2)
We denote its trace by
$\Theta(x)\equiv\eta_{\mu\nu}T^{\mu\nu}(x).$ (2.3)
One of the simplest observables of any theory is the two-point function of the
trace of the stress tensor. One can distinguish the Wightman two-point
function
$\langle{\rm vac}|\Theta(x_{1})\Theta(x_{2})|{\rm
vac}\rangle_{W}\equiv\lim_{\epsilon\rightarrow 0^{+}}\langle{\rm
vac}|\Theta(x_{1}^{0}-i\epsilon,\vec{x}_{1})\Theta(x_{2})|{\rm vac}\rangle$
(2.4)
and the time-ordered two-point function
$\langle{\rm vac}|\Theta(x_{1})\Theta(x_{2})|{\rm
vac}\rangle_{T}\equiv\theta(x_{1}^{0}-x_{2}^{0})\langle{\rm
vac}|\Theta(x_{1})\Theta(x_{2})|{\rm
vac}\rangle_{W}+\theta(x_{2}^{0}-x_{1}^{0})\langle{\rm
vac}|\Theta(x_{2})\Theta(x_{1})|{\rm vac}\rangle_{W}.$ (2.5)
Let us introduce the spectral density of the trace of the stress tensor
$\rho_{\Theta}$ as the Fourier transformed Wightman two-point function. In the
notation of Weinberg:1995mt we have
$2\pi\theta(p^{0})\rho_{\Theta}(-p^{2})\equiv\int d^{d}xe^{-ip\cdot
x}\langle{\rm vac}|\Theta(x)\Theta(0)|{\rm vac}\rangle_{W}.$ (2.6)
One can express $\rho_{\Theta}$ also in terms of the time-ordered two-point
function as follows.[5] For a simple derivation see for example the beginning of
section 3.2 in Karateev:2020axc .
$2\pi\theta(p^{0})\rho_{\Theta}(-p^{2})=2\,\text{Re}\int d^{d}xe^{-ip\cdot
x}\langle{\rm vac}|\Theta(x)\Theta(0)|{\rm vac}\rangle_{T}.$ (2.7)
In quantum field theories with a “mass gap” there exist one-particle
asymptotic in and out states denoted by
$|m,\vec{p}\,\rangle_{\text{in}}\qquad|m,\vec{p}\,\rangle_{\text{out}},$ (2.8)
where $m$ and $\vec{p}$ stand for the physical mass and the $(d-1)$-momentum
of the asymptotic particles. The standard normalization choice for them reads
as
${}_{\text{in}}\langle
m,\vec{p}_{1}\,|m,\vec{p}_{2}\,\rangle_{\text{in}}={}_{\text{out}}\langle
m,\vec{p}_{1}\,|m,\vec{p}_{2}\,\rangle_{\text{out}}=2\sqrt{m^{2}+\vec{p}_{1}^{\,2}}\times(2\pi)^{d-1}\delta^{d-1}(\vec{p}_{1}-\vec{p}_{2}).$
(2.9)
Using the one-particle asymptotic states one can construct general
$n$-particle asymptotic states with $n\geq 2$, for details see for example
section 2.1.2 in Karateev:2019ymz . Let us denote the two-particle in and out
asymptotic states by
$|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}\quad\text{and}\quad|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{out}}.$
(2.10)
They are constructed in such a way that they obey the following normalization
${}_{\text{in}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}={}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{out}}=\\\
4\sqrt{m^{2}+\vec{p}_{1}^{\,2}}\sqrt{m^{2}+\vec{p}_{2}^{\,2}}\,(2\pi)^{2(d-1)}\delta^{(d-1)}(\vec{p}_{1}-\vec{k}_{1})\delta^{(d-1)}(\vec{p}_{2}-\vec{k}_{2})+(\vec{p}_{1}\leftrightarrow\vec{p}_{2}).$
(2.11)
Using the asymptotic states one can define another set of observables called
the form factors. In this work we will use the following form factors of the
trace of the stress tensor
$\displaystyle\mathcal{F}^{\Theta}_{1,1}(t)$
$\displaystyle\equiv{}_{\text{out}}\langle
m,\vec{p}_{1}\,|\Theta(0)|m,\vec{p}_{2}\rangle_{\text{in}},$ (2.12)
$\displaystyle\mathcal{F}^{\Theta}_{2,0}(s)$
$\displaystyle\equiv{}_{\text{out}}\langle
m,\vec{p}_{1}\,;m,\vec{p}_{2}\,|\Theta(0)|{\rm vac}\rangle.$
Here we have introduced the analogues of the Mandelstam variables for the form
factors which are defined as
$s\equiv-(p_{1}+p_{2})^{2},\qquad t\equiv-(p_{1}-p_{2})^{2},\qquad
s+t=4m^{2}.$ (2.13)
The latter relation simply follows from the definitions of $s$ and $t$. The
two form factors in (2.12) are related by the crossing symmetry as
follows.[6] The crossing symmetry is the condition that the matrix elements in
(2.12) remain invariant under the following change of the $d$-momenta
$p^{\mu}_{2,\text{out}}\rightarrow-p^{\mu}_{2,\text{in}}$.
$\mathcal{F}^{\Theta}_{2,0}(s)=\mathcal{F}^{\Theta}_{1,1}(s).$ (2.14)
The Ward identity imposes the following normalization condition
$\lim_{s\rightarrow 0}\mathcal{F}^{\Theta}_{2,0}(s)=-2m^{2}.$ (2.15)
See for example appendix G in Karateev:2020axc for its derivation. The
condition (2.15) can be seen as the definition of the physical mass.
The last observable in which we are interested is the scattering amplitude
$\mathcal{S}(s)$. We define it via the following matrix element
$\mathcal{S}(s,t)\times(2\pi)^{(d-1)}\delta^{(d)}(p_{1}+p_{2}-k_{1}-k_{2})\equiv{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}.$
(2.16)
In this case, the usual Mandelstam variables are defined as
$s\equiv-(p_{1}+p_{2})^{2},\quad t\equiv-(p_{1}-k_{1})^{2},\quad
u\equiv-(p_{1}-k_{2})^{2},\quad s+t+u=4m^{2}.$ (2.17)
The difference between (2.13) and (2.17) should be understood from the
context. Instead of using the full scattering amplitude $\mathcal{S}(s)$, it
is often very convenient to define the interacting part of the scattering
amplitude $\mathcal{T}(s,t)$ as follows
$i\mathcal{T}(s,t)\times(2\pi)^{(d-1)}\delta^{(d)}(p_{1}+p_{2}-k_{1}-k_{2})\equiv\\\
{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}-{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{out}}.$
(2.18)
We have defined the observables in general dimensions up to now. In the rest
of this section, we focus on $d=2$. In the special case of $d=2$, the
scattering amplitude takes a particularly simple form since it depends only on
the single Mandelstam variable $s$. In our convention, $u=0$. This restriction
is imposed via the Heaviside step function $\theta$ (not to be confused with the
rapidity $\theta$ that is used in later sections) in the equations below. We
provide details of 2d kinematics in appendix A.[7] See also the end of section
2 in Zamolodchikov:1978xm (in particular equations (2.9) - (2.11) and the
surrounding discussion). In $d=2$ we define the scattering amplitude of
identical particles as[8,9] Note that although we use the vector notation
$\vec{p}$ for the momentum, it is really just a single number, since we are
in 2d, and the step functions make sense. In the left-hand side of this equation
and all similar equations below it is understood that the scattering amplitude
implicitly contains the appropriate step functions. This is because all our
amplitudes are required to have $u=0$.
$\mathcal{S}(s)\times(2\pi)\delta^{(2)}(p_{1}+p_{2}-k_{1}-k_{2})\equiv\\\
{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}\times\theta(\vec{p}_{1}-\vec{p}_{2})\theta(\vec{k}_{2}-\vec{k}_{1}).$
(2.19)
The interacting part of the scattering amplitude in $d=2$ is then defined as
$\mathcal{T}(s)\times(2\pi)\delta^{(2)}(p_{1}+p_{2}-k_{1}-k_{2})\equiv\Big{(}{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in}}\\\
-{}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{out}}\Big{)}\times\theta(\vec{p}_{1}-\vec{p}_{2})\theta(\vec{k}_{2}-\vec{k}_{1}).$
(2.20)
In $d=2$, it is straightforward to rewrite (2.11) in the following way
${}_{\text{out}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{\text{out}}=\mathcal{N}_{2}\times(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-k_{1}-k_{2}),$
(2.21)
where we have defined
$\mathcal{N}_{2}\equiv 2\sqrt{s}\sqrt{s-4m^{2}}.$ (2.22)
Combining (2.16), (2.18) and (2.21), we obtain the following simple relation
between the full amplitude and its interacting part
$\mathcal{S}(s)=\mathcal{N}_{2}+i\mathcal{T}(s).$ (2.23)
It is also convenient to introduce the following amplitude (which can be seen
as the analogue of the partial amplitudes in higher dimensions)
$\widehat{\mathcal{S}}(s)\equiv\mathcal{N}_{2}^{-1}\mathcal{S}(s)=1+i\,\mathcal{N}_{2}^{-1}\mathcal{T}(s).$
(2.24)
Unitarity imposes the following constraint:
$\left|\widehat{\mathcal{S}}(s)\right|^{2}\leq 1,\quad\text{for }s\geq
4m^{2}.$ (2.25)
One can define the non-perturbative quartic coupling $\Lambda$ via the
interacting part of the physical amplitude as
$\Lambda\equiv-\lim_{s\rightarrow 2m^{2}}\mathcal{T}(s).$ (2.26)
We can also define the following set of non-perturbative parameters
$\Lambda^{(n)}\equiv\lim_{s\rightarrow 2m^{2}}\partial_{s}^{n}\mathcal{T}(s).$
(2.27)
Note that there is no minus sign in the definition of $\Lambda^{(n)}$, which
we find to be convenient. Due to the crossing symmetry $s\leftrightarrow
4m^{2}-s$, all the odd derivatives in $s$ at the crossing symmetric point
vanish. The infinite set of physical non-perturbative parameters $\Lambda$,
$\Lambda^{(2)}$, $\Lambda^{(4)}$, $\ldots$ can be chosen to fully describe any
scattering process.
According to Zamolodchikov:1986gt ; Cardy:1988tj , in $d=2$, one can define
the $C$-function as
$C(s_{0})\equiv 12\pi\int_{0}^{s_{0}}ds\,{\rho_{\Theta}(s)\over s^{2}}.$
(2.28)
The UV central charge $c_{UV}$ is related to the $C$-function in the following
simple way.[10] Here we use the standard conventions for the central charge
$c_{UV}$ in which the theory of a free scalar boson has $c_{UV}=1$. For a
summary of the standard conventions see for example the end of appendix A in
Karateev:2020axc.[11] For the derivation and further discussion see also
Cappelli:1990yc and section 5 of Karateev:2020axc .
$c_{UV}=C(\infty).$ (2.29)
The full spectral density can be written as a sum as follows
$\rho_{\Theta}(s)=\rho^{(2)}_{\Theta}(s)\theta(s-4m^{2})+\rho^{(4)}_{\Theta}(s)\theta(s-16m^{2})+\rho^{(6)}_{\Theta}(s)\theta(s-36m^{2})+\ldots$
(2.30)
Here the superscripts $(2)$, $(4)$, $(6)$ stand for the 2-, 4- and 6-particle parts
of the spectral density and $\ldots$ represents higher-particle contributions.
Note that $\rho^{(n)}_{\Theta}(s)\geq 0$ for all $n$. In writing (2.30) we
assumed the absence of states with an odd number of particles due to the $Z_{2}$ symmetry.
The two-particle part of the spectral density is related to the two-particle
form factor as
$\rho^{(2)}_{\Theta}(s)=(2\pi\mathcal{N}_{2})^{-1}|\mathcal{F}^{\Theta}_{2,0}(s)|^{2}.$
(2.31)
In the “elastic” regime $s\in[4m^{2},16m^{2}]$ we also have Watson’s equation
which reads
$\widehat{\mathcal{S}}(s)={\mathcal{F}^{\Theta}_{2,0}(s)\over\mathcal{F}^{*}{}^{\Theta}_{2,0}(s)},\qquad\text{for
}s\in[4m^{2},16m^{2}].$ (2.32)
For the derivation of these relations and their analogues in higher dimensions
see Karateev:2019ymz ; Karateev:2020axc .
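As a simple illustration of (2.28)-(2.31) (ours, not part of the main analysis): for a free massive boson the two-particle form factor is the constant $-2m^{2}$ (cf. (2.15), and (3.14) below at $\overline{\lambda}=0$), the spectral density is purely two-particle, and a direct quadrature of (2.28) reproduces the free-boson central charge $c_{UV}=1$:

```python
import numpy as np
from scipy.integrate import quad

# Sketch (ours): the C-function (2.28) for a free massive boson, whose
# spectral density is rho(s) = 4 m^4 / (2 pi N_2) by (2.31), units m = 1.
m = 1.0

def integrand(theta):
    # integrate in the rapidity s = 4 m^2 cosh^2(theta/2), cf. (3.1),
    # which removes the square-root singularity at threshold
    s = 4 * m**2 * np.cosh(theta / 2) ** 2
    ds = 2 * m**2 * np.sinh(theta)
    N2 = 2 * np.sqrt(s) * np.sqrt(s - 4 * m**2)
    rho = 4 * m**4 / (2 * np.pi * N2)
    return 12 * np.pi * rho / s**2 * ds

c_uv, _ = quad(integrand, 1e-8, 50.0)   # the tail beyond theta ~ 50 is negligible
print(c_uv)                             # ~ 1.0, cf. (2.29)
```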
##### Majorana fermions
Consider the case of a single Majorana fermion in 2d with mass $m$.
Analogously to the bosonic case we can construct the two-particle fermion
states (2.10). However, now the two particle state must be anti-symmetric
under the exchange of the two fermions, thus instead of the normalization
condition (2.11) we get
$\displaystyle{}_{\text{free fermions}}$ $\displaystyle\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{free
fermions}}$
$\displaystyle=4\sqrt{m^{2}+\vec{p}_{1}^{\,2}}\sqrt{m^{2}+\vec{p}_{2}^{\,2}}\,(2\pi)^{2}\delta^{(1)}(\vec{p}_{1}-\vec{k}_{1})\delta^{(1)}(\vec{p}_{2}-\vec{k}_{2})-(\vec{p}_{1}\leftrightarrow\vec{p}_{2}).$
(2.33)
The scattering amplitude for the Majorana fermion reads as
$\mathcal{S}_{\text{fermions}}(s)\times(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4})\equiv\\\
{}_{\text{out fermions}}\langle
m,\vec{k}_{1};m,\vec{k}_{2}|m,\vec{p}_{1};m,\vec{p}_{2}\,\rangle_{\text{in
fermions}}\times\theta(\vec{p}_{1}-\vec{p}_{2})\theta(\vec{k}_{2}-\vec{k}_{1}).$
(2.34)
In the case of free fermions the scattering amplitude is simply given by the
normalization condition (2.33). Due to the presence of theta functions only
the second term in (2.33) contributes and using the change of variables, one
gets
$\mathcal{S}_{\text{free
fermions}}(s)=-\mathcal{N}_{2},\qquad\widehat{\mathcal{S}}_{\text{free
fermions}}(s)=-1,$ (2.35)
where the hatted amplitude for fermions is defined as in (2.24).
Analogously to (2.24) we can extract the interacting part of the fermion
scattering amplitude
$\widehat{\mathcal{S}}_{\text{fermions}}(s)=-1+i\mathcal{N}_{2}^{-1}\mathcal{T}_{\text{fermions}}(s).$
(2.36)
We finally notice that the scattering amplitude for free Majorana fermions is
equivalent to the scattering of bosons with the following interacting part
$\mathcal{T}_{\text{bosons}}(s)=2i\mathcal{N}_{2}.$ (2.37)
This can be seen by simply plugging (2.37) into (2.24). One trivially recovers
(2.35).
## 3 Analytic Results
In this section we provide analytic results for the sinh-Gordon, $\phi^{4}$
and 2d $O(N)$ models defined in section 1.1. The main objects we would like to
compute are the $2\rightarrow 2$ scattering amplitudes, the two-particle form
factor of the trace of the stress-tensor and its spectral density defined in
section 2. In $d=2$ all these observables are functions of a single variable
$s$. It will be often more convenient to use the rapidity variable $\theta$
related to the $s$ variable by
$\theta\equiv 2\text{ArcCosh}\left({\sqrt{s}\over
2m}\right)\quad\Leftrightarrow\quad s=4m^{2}\cosh^{2}\left({\theta\over
2}\right).$ (3.1)
Under crossing, we have $s\rightarrow 4m^{2}-s$, which corresponds to
$\theta\rightarrow i\pi-\theta$.
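For convenience, a minimal numerical sketch (ours, in units $m=1$; the function names are illustrative) of the map (3.1) and of the crossing transformation reads:

```python
import numpy as np

# Minimal sketch (ours): the map between the Mandelstam variable s and the
# rapidity theta of eq. (3.1), in units where m = 1.
m = 1.0

def s_of_theta(theta):
    # s = 4 m^2 cosh^2(theta/2); also works for complex theta
    return 4 * m**2 * np.cosh(theta / 2) ** 2

def theta_of_s(s):
    # theta = 2 arccosh(sqrt(s)/(2m)), principal branch
    return 2 * np.arccosh(np.sqrt(s + 0j) / (2 * m))

theta = 0.7
s = s_of_theta(theta)
print(np.isclose(theta_of_s(s), theta))                          # round trip
# crossing s -> 4 m^2 - s corresponds to theta -> i*pi - theta
print(np.isclose(s_of_theta(1j * np.pi - theta), 4 * m**2 - s))  # True
```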
### 3.1 Sinh-Gordon Model
We have defined the sinh-Gordon model in (1.4). In what follows we will review
its scattering amplitude and the stress-tensor form factor.[12] For a recent
extensive study of the sinh-Gordon model see Konik:2020gdi . For aesthetic
purposes let us define the following parameter
$b\equiv{\beta\over\sqrt{8\pi}}.$ (3.2)
The spectrum of the sinh-Gordon model consists of a single $Z_{2}$ odd
particle with mass $m$. The $2\rightarrow 2$ scattering amplitude was found in
Arinshtein:1979pb . It reads.[13] Under analytic continuation this amplitude
maps to the scattering of the lightest breathers in the sine-Gordon model. See
for example equation (4.18) in Karateev:2019ymz . Notice however the slight
clash of notation, namely $\gamma_{\text{here}}$ is equivalent to
$\gamma_{\text{there}}/8$.
$\widehat{\mathcal{S}}(\theta)={\sinh\theta-i\sin\gamma\over\sinh\theta+i\sin\gamma},\qquad\gamma\equiv{\pi
b^{2}\over 1+b^{2}},$ (3.3)
which is crossing symmetric, since $\sinh\theta$ is invariant when
$\theta\rightarrow i\pi-\theta$. It also possesses the following non-trivial
symmetry $b\leftrightarrow b^{-1}$. We can thus restrict our attention to the
following parameter range
$b\in[0,1]\quad\Leftrightarrow\quad\gamma\in[0,\pi/2].$ (3.4)
Using the definition of the non-perturbative quartic coupling (2.26), we
conclude that
$\Lambda=8m^{2}\left(1-{1\over 1+\sin\gamma}\right).$ (3.5)
Due to (3.4), the non-perturbative quartic coupling $\Lambda$ in the sinh-
Gordon model has the following range
$\Lambda\in[0,4m^{2}].$ (3.6)
One can use the relation (3.5) to eliminate the parameter $\gamma$ and rewrite
the scattering amplitude (3.3) as
$\widehat{\mathcal{S}}(\theta)=1+{2i\Lambda\over(\Lambda-8m^{2})\sinh\theta-i\Lambda}.$
(3.7)
The form factor of a scalar local operator in the sinh-Gordon model was
computed in Fring:1992pt . Adjusting the normalization of their result
according to (2.15), we can write the following expression for 2-particle form
factor for the trace of the stress-tensor
$\mathcal{F}^{\Theta}_{2,0}(\theta)=-2m^{2}\times\\\
\exp\left(8\,\int_{0}^{\infty}{dx\over x}{\sinh\left({x\gamma\over
2\pi}\right)\sinh\left({x(\pi-\gamma)\over 2\pi}\right)\sinh\left({x\over
2}\right)\over\sinh^{2}(x)}\sin^{2}\left({x(i\pi-\theta)\over
2\pi}\right)\right).$ (3.8)
In section 5, in order to compare the spectral densities of the sinh-Gordon
model and the $\phi^{4}$ model above $s=16m^{2}$, we will also need the
4-particle form factor for $\Theta$, which we review in appendix D.
Let us now notice that the actual expressions for the scattering amplitude
(3.3) and for the form factor (3.8) are analytic functions of the parameter
$\gamma$. They can be thus analytically continued away from the original range
of $\gamma$ given by (3.4). The resulting amplitude and the form factor are
the ones of the so-called staircase model, which we review next.
### 3.2 Staircase Model
Because of the strong-weak duality $b\leftrightarrow b^{-1}$ in the sinh-
Gordon model, it is effectively impossible to increase the coupling beyond
$b=1$ and as a result $\Lambda$ is restricted to be $\leq 4m^{2}$. However, by
analytically continuing the coupling to complex values, it is formally
possible to obtain larger values of $\Lambda$. The Staircase Model
zamolodchikov2006resonance is the analytic continuation
$\gamma={\pi\over 2}+i\theta_{0}$ (3.9)
with $\theta_{0}$ real. Although the Lagrangian is no longer real and it is
not clear why such a deformation should correspond to an underlying unitary
theory,[14] It would be interesting to interpret the theory as a “Complex
CFT” along the lines of Gorbenko:2018ncu ; Gorbenko:2018dtm . In particular,
the large amount of RG time that the Staircase Model spends near each minimal
model suggests a form of “walking” near the minimal model fixed points. in
zamolodchikov2006resonance Zamolodchikov showed that the $c$-function of the
theory, defined using the thermodynamic Bethe Ansatz, flows from a free scalar
in the UV to the Ising model in the IR and moreover approaches very close to
each of the $c<1$ minimal models along this RG flow. The amount of RG time
spent near each minimal model is proportional to $\theta_{0}$, so that at
large $\theta_{0}$ the $c$-function resembles a staircase.
Substituting equation (3.9) into (3.5), the $\sin\gamma$ in the denominator
becomes $\cosh\theta_{0}$, and now the maximum value that $\Lambda$ can reach
is $8m^{2}$. Solving for $\theta_{0}$ in terms of $\Lambda/m^{2}$, we get
$\theta_{0}=\log\left({2+\sqrt{{\Lambda\over m^{2}}-4}\over
2-\sqrt{{\Lambda\over m^{2}}-4}}\right),$ (3.10)
which grows logarithmically like $\theta_{0}\approx\log{16m^{2}\over
8m^{2}-\Lambda}$ as $\Lambda$ approaches the upper limit $8m^{2}$.
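A quick numerical check (ours, in units $m=1$) that (3.10) indeed inverts the analytically continued relation (3.5):

```python
import numpy as np

# Check (ours) that eq. (3.10) inverts eq. (3.5) under gamma = pi/2 + i*theta0.
Lam = 6.5                                              # illustrative value in [4, 8]
theta0 = np.log((2 + np.sqrt(Lam - 4)) / (2 - np.sqrt(Lam - 4)))   # eq. (3.10)
gamma = np.pi / 2 + 1j * theta0                        # eq. (3.9)
Lam_back = 8 * (1 - 1 / (1 + np.sin(gamma)))           # eq. (3.5); sin(gamma) = cosh(theta0)
print(np.isclose(Lam_back, Lam))                       # True
```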
Parametrized in terms of $\Lambda$, the $S$ matrix of the staircase model is
given precisely by (3.7) with $\Lambda\in[4m^{2},8m^{2}]$. Similarly one
obtains the form factor of the trace of the stress tensor in the staircase
model by using the analytic continuation (3.9) and (3.10) in the expression
(3.8). It is interesting to notice that using (3.9), we can read off the value
of $\beta^{2}$ from (3.2) and (3.3). Since
$\beta=\sqrt{8\pi{\gamma\over\pi-\gamma}}$, one thus has
$\left|\beta^{2}\right|=8\pi$ for $m^{-2}\Lambda\geq 4$.
Expanding the analytically continued amplitude (3.7) around $\Lambda=8m^{2}$,
one finds
$\mathcal{T}_{\text{bosons}}=2i\mathcal{N}_{2}+{st\over
M^{2}}+{\mathcal{O}}\left({1\over M^{4}}\right),$ (3.11)
where ${1\over M^{2}}\equiv{8m^{2}-\Lambda\over 4m^{4}}$. According to the
discussion around (2.37), the interacting part of the boson scattering
amplitude (3.11) is equivalent to the scattering of Majorana fermions with the
following interacting part
$\mathcal{T}_{\text{fermions}}={st\over M^{2}}+{\mathcal{O}}\left({1\over
M^{4}}\right).$ (3.12)
In other words at $\Lambda=8m^{2}$,
$\mathcal{T}_{\text{bosons}}=2i\mathcal{N}_{2}$, which is equivalent to
$\mathcal{T}_{\text{fermions}}=0$, so the S-matrix approaches that of a free
massive fermion (the Ising model). Moreover, at $\Lambda$ slightly below
$8m^{2}$, the S-matrix additionally has contributions from irrelevant
deformations that should capture the approach to Ising from the UV (in this
case, from the next minimal model up, i.e. the tricritical Ising model). In
section 3.5 we will explicitly check this leading correction.
### 3.3 $\phi^{4}$ model
The $\phi^{4}$ model defined by (1.5) allows for the presence of one-particle
asymptotic states (2.8) which are $Z_{2}$ odd. Due to the presence of the
$Z_{2}$ symmetry the “elastic” regime in the $\phi^{4}$ model is extended to
$s\in[4m^{2},16m^{2}]$. The relation between the lightcone quantization bare
mass $m_{0}$ and the physical mass $m$ is given by
$m=m_{0}\left(1-{\overline{\lambda}^{2}\over 768}+{\overline{\lambda}^{3}\over
3072\pi}+O(\overline{\lambda}^{4})\right).$ (3.13)
For higher order corrections see equation (2.14) in Fitzpatrick:2018xlz .
Using perturbation theory we compute the two-particle form-factor and the
spectral density of the trace of the stress-tensor. The form factor reads
$m^{-2}\mathcal{F}^{\Theta}_{2,0}(s)=-2+\left({\overline{\lambda}\over
4\pi}\right)\,\Delta(s)+\\\ {1\over 2}\left({\overline{\lambda}\over
4\pi}\right)^{2}\left({\pi^{2}s\over
8(s-4m^{2})}-\Delta(s)\left(\Delta(s)/2+1\right)\right)+\mathcal{O}(\overline{\lambda}^{3}),$
(3.14)
where the function $\Delta(s)$ is defined as
$\Delta(s)\equiv-1+\lim_{\epsilon\rightarrow
0^{+}}{4m^{2}\text{ArcTan}\left({\sqrt{s}\over\sqrt{4m^{2}-s-i\epsilon}}\right)\over\sqrt{s}\sqrt{4m^{2}-s-i\epsilon}}.$
(3.15)
The expression (3.14) is valid for any complex value of $s$. The function
$\Delta(s)$ has a single branch cut along the horizontal axis in the $s$
complex plane for $s\in[4m^{2},\infty)$. The infinitesimally small $\epsilon$
is present in order to specify the correct side of the branch cut. At times,
it is more convenient to use the rapidity variable $\theta$ defined in (3.1),
which opens up this branch cut. In this variable, the $\epsilon$ prescription
translates to taking the $\theta>0$ branch for $s>4m^{2}$, and the function
$\Delta(s)$ is simply
$\Delta(s(\theta))={i\pi-\theta\over\sinh\theta}-1.$ (3.16)
The following limits hold true
$\Delta(0)=0,\qquad\Delta(2m^{2})={\pi\over 2}-1,\qquad\Delta(4m^{2})=\infty.$
(3.17)
The first entry in (3.17) implies (3.14) satisfies the normalization condition
(2.15).
The spectral density of the trace of the stress tensor reads as
${2\pi\mathcal{N}_{2}\over
4m^{4}}\times\rho_{\Theta}(s)=1+{\overline{\lambda}\over
4\pi}\times\left(1+4m^{2}\mathcal{N}_{2}^{-1}\log\left({\sqrt{s}+\sqrt{s-4m^{2}}\over\sqrt{s}-\sqrt{s-4m^{2}}}\right)\right)+\mathcal{O}(\overline{\lambda}^{2}).$
(3.18)
It is defined in the region $s\in[4m^{2},\infty)$. Fully computing the next
correction to the spectral density is quite difficult. We notice however that
in the “elastic” regime $s\in[4m^{2},16m^{2}]$ with no particle production the
next correction to the spectral density simply follows from (2.30) and (3.14).
We derive (3.14) and (3.18) in appendix C. We are not aware of any literature
where these results were previously presented, though in principle it should
be possible to obtain them from small $\beta$ expansions of results for the
corresponding observables in the sinh-Gordon model.
For completeness, let us also provide the textbook result for the interacting
part of the scattering amplitude. It reads
$m^{-2}\mathcal{T}(s)=-\overline{\lambda}\times\left(1-{1\over
2}\left({\overline{\lambda}\over
4\pi}\right)\times\big{(}1+\Delta(s)+\Delta(4m^{2}-s)\big{)}+O(\overline{\lambda}^{3})\right).$
(3.19)
It is straightforward to check that (3.14) and (3.19) obey Watson’s equation
(2.32) in the “elastic” regime. Using (3.19) and the second entry in (3.17) we
can relate the quartic coupling $\overline{\lambda}$ and the non-perturbative quartic
coupling $\Lambda$ defined in (2.26) as follows
$m^{-2}\Lambda=\overline{\lambda}-{\overline{\lambda}^{2}(\pi-1)\over
8\pi}+\mathcal{O}(\overline{\lambda}^{3}).$ (3.20)
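As a small consistency check (ours, $m=1$), (3.20) follows from evaluating (3.19) at the crossing symmetric point $s=2m^{2}$ together with the second entry of (3.17):

```python
import numpy as np

# Check (ours): eq. (3.19) at s = 2m^2 reproduces eq. (3.20), units m = 1.
lam = 0.3                                              # \bar{\lambda}, illustrative
Delta2 = np.pi / 2 - 1                                 # Delta(2m^2), eq. (3.17)
T2 = -lam * (1 - 0.5 * (lam / (4 * np.pi)) * (1 + 2 * Delta2))   # eq. (3.19)
Lam_pert = lam - lam**2 * (np.pi - 1) / (8 * np.pi)              # eq. (3.20)
print(np.isclose(-T2, Lam_pert))                       # True (identical at this order)
```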
Another thing that is important to emphasize is that the perturbative results
diverge at the two-particle threshold $s=4m^{2}$. This divergence is an
artifact of perturbation theory and does not appear in the non-perturbative
amplitude.
By inspecting the perturbative results (3.14), (3.18) and (3.19), we notice
that the perturbative expansion parameter is more accurately
${\overline{\lambda}\over 4\pi}$ rather than $\overline{\lambda}$. Thus, we
expect that the strong coupling regime where perturbation theory breaks down
is
$\overline{\lambda}\sim 4\pi\sim 12.$ (3.21)
### 3.4 2d $O(N)$ model in the large $N$ limit
Let us now consider the generalization of the $\phi^{4}$ model given by (1.5)
where the field $\phi(x)$ has $N$ components and the action is invariant under
$O(N)$ symmetry. In such a theory there are three different two-particle
states transforming in the trivial, symmetric and antisymmetric
representations of the $O(N)$ group. For details see appendix B.
Let us consider here the large $N$ limit $N\rightarrow\infty$. In this limit
it is enough to only consider the two-particle states in the trivial
representation. In what follows we compute the spectral density and the form
factor of the trace of the stress tensor together with the scattering
amplitude for the two-particle states in the trivial representation. Our
results are valid to all orders of perturbation theory. The details of all the
computations are given in appendix C.
In the large $N$ limit the relation between the physical mass $m$ and
lightcone quantization bare mass $m_{0}$ is extremely simple, namely
$m=m_{0}.$ (3.22)
The two-particle form factor of the stress-tensor in the large $N$ limit reads
as
$m^{-2}\mathcal{F}^{\Theta}_{2,0}(s)=-2+{2\overline{\lambda}\Delta(s)\over
8\pi+\overline{\lambda}\,(1+\Delta(s))}.$ (3.23)
It is important to notice that this form factor does not have a singularity at
$s=4m^{2}$. Using the third entry in (3.17) we conclude that
$\mathcal{F}^{\Theta}_{2,0}(4m^{2})=0$. In the large $N$ limit there is no
particle production. As a result the full spectral density is simply given by
the two-particle form factor (3.23) via (2.30). Concretely speaking
$2\pi\mathcal{N}_{2}\rho_{\Theta}(s)=|\mathcal{F}^{\Theta}_{2,0}(s)|^{2}.$
(3.24)
The full scattering amplitude in the large $N$ limit reads
$\widehat{\mathcal{S}}(s)=-{\overline{\lambda}\,(\pi-i\theta)+8\pi
i\sinh\theta\over\overline{\lambda}\,(\pi+i\theta)-8\pi
i\sinh\theta}.$ (3.25)
Note that this S-matrix is not crossing symmetric, contrary to the other
models that we consider in this paper. It is straightforward to check that
(3.23) and (3.25) satisfy Watson’s equation (2.32) for the whole range of
energies $s\in[4m^{2},\infty)$. Moreover, there is no divergence at the two-
particle threshold $s=4m^{2}$, where $\widehat{\mathcal{S}}(4m^{2})=-1$.
Using the definition of the non-perturbative quartic coupling $\Lambda$, the
relation between the scattering amplitude and the interacting part of the
scattering amplitude and the explicit solution (3.25), we can evaluate
precisely the non-perturbative coupling $\Lambda$ in terms of $\lambda$. It
takes the following simple form
$m^{-2}\Lambda={16\overline{\lambda}\over 16+\overline{\lambda}}.$ (3.26)
One can see that for real positive $\overline{\lambda}$, we have
$\Lambda\in[0,16m^{2})$.
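A short numerical check (ours, $m=1$) of these statements, using (3.16) for $\Delta(s)$ above threshold:

```python
import numpy as np

# Check (ours) that the large-N results (3.23) and (3.25) obey Watson's
# equation (2.32) and elastic unitarity for s > 4m^2, units m = 1.
lam = 10.0
theta = np.linspace(0.1, 6.0, 60)                      # s = 4 cosh^2(theta/2) > 4
Delta = (1j * np.pi - theta) / np.sinh(theta) - 1                      # eq. (3.16)
F = -2 + 2 * lam * Delta / (8 * np.pi + lam * (1 + Delta))             # eq. (3.23)
S_hat = -(lam * (np.pi - 1j * theta) + 8j * np.pi * np.sinh(theta)) / (
         lam * (np.pi + 1j * theta) - 8j * np.pi * np.sinh(theta))     # eq. (3.25)
print(np.allclose(S_hat, F / np.conj(F)))              # Watson's equation (2.32)
print(np.allclose(np.abs(S_hat), 1.0))                 # no particle production at large N
```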
### 3.5 $T\overline{T}$ deformation of the 2d Ising
In the vicinity of the critical point, both $\phi^{4}$ theory and the
Staircase Model flow to the Ising model with a $\mathbb{Z}_{2}$ symmetry that
forbids the magnetic deformation $\sigma$. In that case, the lowest-dimension
deformation around Ising is the thermal operator $\epsilon$, which is just the
fermion mass term of the free Majorana fermion description of the Ising model.
The next-lowest-dimension scalar operator is $T\overline{T}$, which in terms
of the left- and right-moving components $\psi$ and $\widetilde{\psi}$ of the
fermion is
$\delta\mathcal{L}={1\over
M^{2}}\psi\partial_{-}\psi\widetilde{\psi}\partial_{+}\widetilde{\psi}.$
(3.27)
Here, $M$ is the scale of the UV cut-off of the low-energy expansion. In the
limit that $M$ is much larger than the mass gap $m$, the contributions to the
S-matrix from all other higher dimension operators are suppressed by higher
powers of $m/M$, so near $\Lambda/m^{2}=8$ the S-matrix is well-approximated
by the tree-level contribution from (3.27). The leading contribution to the
scattering amplitude is most easily computed in lightcone coordinates, where
each $\psi$ contraction with an external fermion produces a factor of
$\sqrt{p_{-}}$ for that fermion, and each $\widetilde{\psi}$ contraction
produces a factor of $\sqrt{p_{-}}{\sqrt{2}p_{+}\over m}$ (the extra factor
follows from the fermion equation of motion
$\sqrt{2}i\partial_{+}\psi=m\widetilde{\psi}$). So, the full tree-level
contribution is simply ${2\over
m^{2}}\sqrt{p_{1-}p_{2-}p_{3-}p_{4-}}p_{2-}p_{3+}p_{4+}^{2}$, anti-symmetrized
on all permutations of $p_{1},p_{2},-p_{3},$ and $-p_{4}$. Finally, there are
only two solutions to the kinematic constraint $p_{1}+p_{2}=p_{3}+p_{4}$;
either $p_{1}=p_{3},p_{2}=p_{4}$ or $p_{1}=p_{4},p_{2}=p_{3}$. Taking the
former, and using $p_{+}={m^{2}\over 2p_{-}}$, we obtain
$\mathcal{T}_{\text{fermions}}={m^{4}(p_{1-}-p_{2-})^{2}(p_{1-}+p_{2-})^{2}\over
M^{2}p_{1-}^{2}p_{2-}^{2}}={st\over M^{2}},$ (3.28)
in agreement with (3.12). The Ising model S-matrix has
$(\Lambda/m^{2},m^{2}\Lambda^{(2)})=(8,2)$, and from the above expression
we can read off the leading correction, which gives
$(\Lambda/m^{2},m^{2}\Lambda^{(2)})=\left(8-{4m^{2}\over M^{2}},2-{2m^{2}\over
M^{2}}\right).$ (3.29)
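A quick cross-check (ours, $m=1$, with an arbitrary illustrative value of $M^{2}$) of (3.29), obtained by expanding $\mathcal{T}_{\text{bosons}}(s)=2i\mathcal{N}_{2}+st/M^{2}$ of (3.11) around $s=2m^{2}$:

```python
import numpy as np

# Check (ours) of eq. (3.29): Lambda = -T(2m^2), Lambda^(2) = T''(2m^2), m = 1.
m, M2 = 1.0, 25.0                                      # M^2 = 25 m^2 is illustrative

def T(s):
    N2 = 2 * np.sqrt(s + 0j) * np.sqrt(s - 4 * m**2 + 0j)   # continued above the cut
    return 2j * N2 + s * (4 * m**2 - s) / M2                 # eq. (3.11), truncated

h = 1e-4
print(-T(2 * m**2).real, 8 - 4 * m**2 / M2)            # both ~ 7.84
Lam2 = ((T(2 + h) - 2 * T(2.0) + T(2 - h)) / h**2).real
print(Lam2, 2 - 2 * m**2 / M2)                         # both ~ 1.92
```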
## 4 Pure S-matrix bootstrap
In this section we will construct general non-perturbative bounds on the space
of 2d scattering amplitudes of $Z_{2}$ odd particles (assuming there is no
bound state pole). We will define the exact optimization problem in section
4.1 and present our numerical results in section 4.2. The main result of this
section is presented in figure 1. The amplitudes in the sinh-Gordon and its
analytic continuation (the staircase model) saturate the lower edge of this
bound.
### 4.1 Set-up
Let us start by discussing the unitarity constraint. In 2d the scattering
amplitude $\widehat{\mathcal{S}}(s)$ must obey the following positive
semidefinite condition
$\begin{pmatrix}1&\widehat{\mathcal{S}}^{*}(s)\\\
\widehat{\mathcal{S}}(s)&1\end{pmatrix}\succeq 0,\qquad\text{for
}s\in[4m^{2},\infty).$ (4.1)
Due to Sylvester’s criterion, this condition is equivalent to the more
familiar one (2.25). To see that, one can simply evaluate the determinant of
(4.1).
It was proposed in Paulos:2016but ; Paulos:2017fhb how to use the constraint
(4.1) in practice. One can write the following ansatz for the scattering
amplitude which automatically obeys maximal analyticity and crossing
$\widehat{\mathcal{S}}(s)-1=-{\Lambda\over
4m^{2}}+\sum_{n=1}^{N_{\text{max}}}a_{n}\times\left({\mathfrak{r}}(s;2m^{2})^{n}+{\mathfrak{r}}(4m^{2}-s;2m^{2})^{n}\right),$
(4.2)
where $\Lambda$ is the non-perturbative quartic coupling defined in (2.26),
$a_{n}$ are some real coefficients and the ${\mathfrak{r}}$ variable is
defined as
${\mathfrak{r}}(s;s_{0})\equiv\lim_{\epsilon\rightarrow
0^{+}}{\sqrt{4m^{2}-s_{0}}-\sqrt{4m^{2}-s-i\epsilon}\over\sqrt{4m^{2}-s_{0}}+\sqrt{4m^{2}-s-i\epsilon}}.$
(4.3)
Here $s_{0}$ is a free parameter which can be chosen at will. For scattering
amplitudes it is convenient to choose $s_{0}=2m^{2}$; this guarantees that at
the crossing symmetric point $s=2m^{2}$, the ${\mathfrak{r}}(s;2m^{2})$
variable vanishes. In theory, one should take $N_{\text{max}}=\infty$. This is
impossible in practice however, and one thus has to choose a large enough
but finite value of $N_{\text{max}}$ which leads to stable numerical results
(stable under the change of $N_{\text{max}}$). Alternatively to (4.2), one
could also parametrize only the interacting part of the scattering amplitude,
namely
$\mathcal{T}(s)=-\Lambda+\sum_{n=1}^{N_{\text{max}}}\widetilde{a}_{n}\times\left({\mathfrak{r}}(s;2m^{2})^{n}+{\mathfrak{r}}(4m^{2}-s;2m^{2})^{n}\right).$
(4.4)
In this ansatz we denote the unknown parameters by $\widetilde{a}$ in order to
distinguish them from the parameters $a$ entering in (4.2). Depending on the
situation, this choice is sometimes more convenient than (4.2).
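For orientation, a minimal numerical sketch (ours, $m=1$) of the basic properties of the ${\mathfrak{r}}$ variable (4.3) with $s_{0}=2m^{2}$ reads:

```python
import numpy as np

# Basic properties (ours) of the r-variable of eq. (4.3), units m = 1.
m, s0, eps = 1.0, 2.0, 1e-12

def r(s):
    a = np.sqrt(4 * m**2 - s0)
    b = np.sqrt(4 * m**2 - s - 1j * eps)
    return (a - b) / (a + b)

print(np.isclose(r(2.0), 0))                           # vanishes at s = s0 = 2m^2
print(abs(r(-5.0)) < 1, abs(r(0.0)) < 1)               # inside the unit disk off the cut
s_cut = np.linspace(4.5, 50.0, 20)
print(np.allclose(np.abs(r(s_cut)), 1.0))              # unit modulus on the cut s > 4m^2
```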
Using SDPB Simmons-Duffin:2015qma ; Landry:2019qug we can scan the parameter
space $(\Lambda,a_{0},a_{1},a_{2},\ldots)$ of the ansatz (4.2) (or
alternatively
$(\Lambda,\widetilde{a}_{0},\widetilde{a}_{1},\widetilde{a}_{2},\ldots)$ of
the ansatz (4.4)) by looking for amplitudes with the largest or smallest value
of $\Lambda$ which obey (4.1). Once the allowed range of $\Lambda$ is
determined, we can look for example for amplitudes for each allowed value of
$\Lambda$ with the largest or smallest value of the parameter $\Lambda^{(2)}$
defined in (2.27). Using this definition we can express $\Lambda^{(2)}$ in
terms of the parameters of the ansatz as
$\Lambda^{(2)}={\Lambda+m^{2}\,(2a_{1}+a_{2})\over
4m^{4}}\qquad\text{or}\qquad\Lambda^{(2)}={2\widetilde{a}_{1}+\widetilde{a}_{2}\over
16m^{4}}.$ (4.5)
### 4.2 Numerical Results
Solving the optimization problem for $\Lambda$ defined in section 4.1 we
obtain the following bound
$\Lambda\in[0,\,8m^{2}].$ (4.6)
For each $\Lambda$ in this range, we can now minimize and maximize the
parameter $\Lambda^{(2)}$. As a result we obtain a 2d plot of allowed values
which is given in figure 1. On the boundary of the allowed region in figure 1,
we can extract the numerical expressions of the scattering amplitudes. For
instance the scattering amplitudes extracted from the lower edge are presented
in figures 2 and 3. Remarkably they coincide with the analytic expression
(3.7) which describes the sinh-Gordon model and its analytic continuation (the
staircase model). In particular, notice that the amplitudes extracted in the
vicinity of the tips of the allowed region in figure 1 approach the following
expressions
$\widehat{\mathcal{S}}_{\text{left
tip}}(s)=+1\qquad\text{and}\qquad\widehat{\mathcal{S}}_{\text{right
tip}}(s)=-1.$ (4.7)
These are the amplitudes of the free boson (lower left corner with
$\Lambda=0$) and of the free Majorana fermion (upper right corner with
$\Lambda=8m^{2}$). Let us also make a fun observation that the amplitudes
extracted from the upper edge of the bound in figure 1 are related to the ones
extracted from the lower edge by
$\widehat{\mathcal{S}}_{\text{upper
edge}}(s;\,\Lambda)=-\widehat{\mathcal{S}}_{\text{lower
edge}}(s;\,8m^{2}-\Lambda).$ (4.8)
Using (2.36), we can interpret $\widehat{\mathcal{S}}_{\text{upper
edge}}(s;\,\Lambda)$ as complex conjugated amplitudes of Majorana fermions
with the interacting part exactly as in the sinh-Gordon expression with
$\Lambda\rightarrow 8m^{2}-\Lambda$. We do not know any UV complete model from
where such amplitudes could originate.
Figure 1: Bound on the parameters $\Lambda$ and $\Lambda^{(2)}$. The allowed
region is depicted in blue.
Figure 2: Real part of the scattering amplitude obtained by minimizing
$\Lambda^{(2)}$ for various values of $\Lambda$.
Figure 3: Imaginary part of the scattering amplitude obtained by minimizing
$\Lambda^{(2)}$ for various values of $\Lambda$.
Finally, let us relax the requirement that the amplitude $\mathcal{S}(s)$ is
crossing invariant. This means that when writing down the ansatz for the
S-matrix, we do not have the second term in the parentheses in equation (4.2).
In this case, we obtain the following bound
$\Lambda\in[0,16m^{2}].$ (4.9)
We notice that the large $N$ limit of the 2d $O(N)$ model with $\phi^{4}$
potential populates this interval, see (3.26). We can then minimize
$\Lambda^{(2)}$ by fixing the value of $\Lambda$ in the interval (4.9). The
resulting numerical amplitudes correspond precisely to the large $N$ analytic
solution (3.25). In practice, for this optimization problem it was important
to parametrize the interacting part of the scattering amplitude $\mathcal{T}$
instead of $\widehat{\mathcal{S}}$.
## 5 S-matrix and Form Factor Bootstrap
In this section we will define the numerical optimisation which allows one to
compute the two-to-two scattering amplitude and the two-particle form factor
of the trace of the stress tensor at $s>0$ in the 2d $\phi^{4}$ model given
the LCT input obtained in a companion paper truncffsd . We begin in section
5.1 by quickly reviewing the generalization of the S-matrix bootstrap proposed
in Karateev:2019ymz which allows one to include local operators. We then
explain how one can define an optimization problem which allows one to compute
the scattering amplitude and the form factor at $s>0$. In section 5.2, we
present our numerical findings. The main results are given in figures 9 - 11.
Also in section 5.2, we will observe that the $\phi^{4}$ model is very similar
to the sinh-Gordon model in the elastic regime. We will investigate this
similarity further in section 5.3.
### 5.1 Set-up
In Karateev:2019ymz , it was shown that unitarity allows one to write a more
complicated constraint than (4.1) which entangles the scattering amplitude
with the two-particle form factor and the spectral density of a scalar local
operator $\mathcal{O}$. In this paper we consider the case where the local
operator is the trace of the stress tensor $\Theta$. The following positive
semidefinite condition can be written.[15] Various entries in this matrix have
different mass dimensions. This positivity condition is equivalent however to
the one which is obtained from (5.1) by the following rescaling
$\mathcal{F}_{\Theta}\rightarrow
m^{-1}\mathcal{F}_{\Theta},\qquad\rho_{\Theta}\rightarrow
m^{-2}\rho_{\Theta}.$ The unitarity condition in the latter form was
originally presented in Karateev:2019ymz . It contains only dimensionless
quantities.
$\begin{pmatrix}1&\widehat{\mathcal{S}}^{*}(s)&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{*}{}^{\Theta}_{2,0}(s)\\\
\widehat{\mathcal{S}}(s)&1&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{\Theta}_{2,0}(s)\\\
\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{\Theta}_{2,0}(s)&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{*}{}^{\Theta}_{2,0}(s)&2\pi\rho_{\Theta}(s)\end{pmatrix}\succeq
0,\qquad\text{for }s\in[4m^{2},\infty).$ (5.1)
Analogously to section 4.1, one can define various numerical optimization
problems which utilize (5.1) instead of (4.1). For that we should write an
ansatz for all the ingredients entering in (5.1). For the scattering amplitude
we use the ansatz (4.2) or (4.4). In practice, we will use (4.4) in this
section. For the form factor we can write instead
$\mathcal{F}^{\Theta}_{2,0}(s)=-2m^{2}+\sum_{n=1}^{N_{\text{max}}}b_{n}\times{\mathfrak{r}}(s;0)^{n},$
(5.2)
where $b_{n}$ are some real parameters. By construction it is an analytic
function in $s$ with a single branch cut on the real axis between $4m^{2}$ and
$+\infty$. The ${\mathfrak{r}}$ variable was defined in (4.3). Here, we have
chosen the parameter $s_{0}$ to be 0, such that ${\mathfrak{r}}(s;0)$ vanishes
at $s=0$. This is convenient since this ansatz automatically satisfies the
normalization condition (2.14) at $s=0$. If we also write an ansatz for the
spectral density (which is simply a real function), one can then bound for
example the UV central charge (2.29) for various values of $\Lambda$. Although
this may be an interesting problem, we do not pursue it in this paper.
Instead of parametrizing the spectral density in this section, we will use its
explicit form in the 2d $\phi^{4}$ model found in the companion paper
truncffsd , see figure 4 there. We use the superscript LCT in order to denote
these spectral densities, namely
$s\in[4m^{2},s_{\text{max}}]:\qquad\rho_{\Theta}^{\text{LCT}}(s).$ (5.3)
Here $s_{\text{max}}$ is the maximal value of $s$ for which we trust the
results of truncffsd . In truncffsd , see figure 12, we have also computed the
two-particle form factor of the trace of the stress tensor at $s\leq 0$. We
also use the LCT superscript to denote these form factors, namely
$s\in[s_{\text{min}},0]:\qquad\mathcal{F}^{\Theta\text{LCT}}_{2,0}(s).$ (5.4)
Here $s_{\text{min}}<0$ is the minimal value of $s$ for which we trust the
results of truncffsd . We refer to (5.3) and (5.4) as the input data.
Let us now precisely define our optimization problem. Given the value of the
physical mass $m$ (which is obtained by the LCT method), determine the unknown
coefficients $\Lambda$, $a_{n}$ and $b_{n}$ in the ansatze (4.4) and (5.2),
such that $\Lambda$ has the maximal/minimal value and the following
constraints are satisfied
$\displaystyle\begin{pmatrix}1&\widehat{\mathcal{S}}^{*}(s)&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{*}{}^{\Theta}_{2,0}(s)\\\
\widehat{\mathcal{S}}(s)&1&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{\Theta}_{2,0}(s)\\\
\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{\Theta}_{2,0}(s)&\mathcal{N}_{2}^{-1/2}\,\mathcal{F}^{*}{}^{\Theta}_{2,0}(s)&2\pi\rho_{\Theta}^{\text{LCT}}(s)\end{pmatrix}\succeq
0,\quad s\in[(4+\sigma)m^{2},s_{\text{max}}],$ (5.5)
$\displaystyle\begin{pmatrix}1&\widehat{\mathcal{S}}^{*}(s)\\\
\widehat{\mathcal{S}}(s)&1\end{pmatrix}\succeq 0,\quad
s\in[4m^{2},(4+\sigma)m^{2})\cup(s_{\text{max}},\infty)$ (5.6)
The first constraint (5.5) allows one to inject information about the LCT
spectral density (5.3) in the set-up. The second constraint (5.6) can be seen
as the reduced version of the first one in the region where no information
about the spectral density is available. In the above equations we have
introduced an additional small parameter $\sigma\ll 1$. The numerical bootstrap
set-up is sensitive to numerical errors in the LCT data, and the presence of
$\sigma$ mitigates the effect of these errors in the spectral density near
threshold $s=4m^{2}$ and the uncertainty in the value of the physical mass
itself. In addition to equations (5.5) and (5.6), we require that the ansatz
for the form factor match the one obtained by the LCT method. We can impose
this by demanding
$|\mathcal{F}^{\Theta}_{2,0}(s)-\mathcal{F}^{\Theta\text{LCT}}_{2,0}(s)|\leq\epsilon,\quad\text{for
}s\in[s_{\text{min}},\,0],$ (5.7)
where $\epsilon\geq 0$ is a small positive parameter. We have introduced the
$\epsilon$ parameter in the set-up in order to accommodate the numerical
errors in the LCT input data. The constraint (5.7) can be equivalently
rewritten in the semi-positive form as
$\begin{pmatrix}\epsilon&\mathcal{F}^{*\Theta}_{2,0}(s)-\mathcal{F}^{*\Theta\text{LCT}}_{2,0}(s)\\\
\mathcal{F}^{\Theta}_{2,0}(s)-\mathcal{F}^{\Theta\text{LCT}}_{2,0}(s)&\epsilon\end{pmatrix}\succeq
0,\quad\text{for }s\in[s_{\text{min}},\,0].$ (5.8)
In practice, we parameterize $\epsilon$ in terms of the exponent $\delta$
defined in the following way
$m^{-2}\epsilon=10^{-\delta}.$ (5.9)
The larger the value of $\delta$, the stronger the constraint (5.8) becomes.
When we present our numerical results in section 5.2, we will see that given a
large enough value of $\delta$ in (5.9), we find a unique solution to the
optimisation problem described in this section, namely the upper and lower
bounds lead to almost the same result. Moreover, we will see that the
unitarity conditions (5.5) and (5.6) tend to get saturated in the “elastic”
regime $s\in[4m^{2},16m^{2}]$. As a result, the obtained form factors obey
equation (2.31) and the scattering amplitudes obey equation (2.32) as they
should. In the non-elastic regime $s\geq 16m^{2}$, the LCT spectral density
contains four- and higher- particle contributions, however we do not include
four- and higher-particle form factors in the set-up. Therefore,
conservatively speaking, this means that for $s\geq 16m^{2}$ the behaviour of
the obtained scattering amplitude and the form factor has nothing to do with
the $\phi^{4}$ model.
Formulating the above paragraph in different words, one can roughly say that
the above optimization procedure determines the coefficients of the form
factor ansatz in equation (5.2) given two constraints: that the ansatz matches
the LCT form factor result (5.4) for $s\leq 0$, and the square of its norm
saturates the LCT spectral density (5.3) for $4m^{2}\leq s\leq 16m^{2}$ via
(2.31). The scattering amplitude is then obtained by solving Watson’s equation
(2.32).
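To make this rough description concrete, the following sketch (ours, and deliberately much simpler than the SDPB set-up above) fits the form factor ansatz (5.2) by a least-squares fit to mock “LCT-like” input generated from the large-$N$ solution (3.23), in the spirit of the infinite precision example of section 5.2.1, and then reads off the amplitude from Watson’s equation (2.32). All numerical choices (grids, $N_{\text{max}}$, the fitting routine) are illustrative only.

```python
import numpy as np
from scipy.optimize import least_squares

# Rough illustration (ours, not the paper's SDPB optimization): determine the
# coefficients b_n of the ansatz (5.2) by (i) matching a given form factor at
# s <= 0 and (ii) saturating a given spectral density via (2.31) in the
# elastic region.  Mock input: the large-N solution (3.23), units m = 1.
m, lam, Nmax, eps = 1.0, 10.0, 20, 1e-12

def r(s, s0):                                              # eq. (4.3)
    a, b = np.sqrt(4 * m**2 - s0), np.sqrt(4 * m**2 - s - 1j * eps)
    return (a - b) / (a + b)

def F_ansatz(s, bn):                                       # eq. (5.2)
    rs = r(s, 0.0)
    return -2 * m**2 + sum(bn[n] * rs ** (n + 1) for n in range(Nmax))

def F_largeN(s):                                           # eq. (3.23), mock input
    w = np.sqrt(4 * m**2 - s - 1j * eps)
    Delta = -1 + 4 * m**2 * np.arctan(np.sqrt(s + 0j) / w) / (np.sqrt(s + 0j) * w)
    return -2 + 2 * lam * Delta / (8 * np.pi + lam * (1 + Delta))

s_neg = np.linspace(-80.0, -0.5, 80)                       # input region, cf. (5.4)
s_el = np.linspace(4.2, 16.0, 80)                          # "elastic" region
N2 = 2 * np.sqrt(s_el) * np.sqrt(s_el - 4 * m**2)
rho_in = np.abs(F_largeN(s_el)) ** 2 / (2 * np.pi * N2)    # mock input, cf. (5.3)

def residuals(bn):
    res1 = F_ansatz(s_neg, bn).real - F_largeN(s_neg).real               # cf. (5.7)
    res2 = np.abs(F_ansatz(s_el, bn)) ** 2 / (2 * np.pi * N2) - rho_in   # cf. (2.31)
    return np.concatenate([res1, res2])

fit = least_squares(residuals, np.zeros(Nmax))
F_fit = F_ansatz(s_el, fit.x)
S_fit = F_fit / np.conj(F_fit)                             # Watson's equation (2.32)
theta = 2 * np.arccosh(np.sqrt(s_el) / (2 * m))
S_exact = -(lam * (np.pi - 1j * theta) + 8j * np.pi * np.sinh(theta)) / (
           lam * (np.pi + 1j * theta) - 8j * np.pi * np.sinh(theta))     # eq. (3.25)
print("max |S_fit - S_exact| =", np.max(np.abs(S_fit - S_exact)))
```

How accurately the fitted amplitude reproduces (3.25) depends on the size of the ansatz and on the grids; the actual analysis instead imposes (5.5)-(5.8) as semidefinite constraints and extremizes $\Lambda$ with SDPB.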
### 5.2 Numerical Results
We present now the solutions of the optimization problem defined in section
5.1. As a demonstration of our approach, in section 5.2.1, instead of using
the LCT input data (which obviously contains numerical errors), we use the
input data obtained from the analytic solution for the 2d $O(N)$ model in the
large $N$ limit given in section 3.4. We stress however that we use only the
part of the analytic data which is computable with the LCT methods. The reason
for this exercise is to show how the optimization problem works in the
presence of high accuracy data. We present our optimization for the $\phi^{4}$
model using the LCT data in section 5.2.2. For small values of the quartic
coupling constant $\overline{\lambda}$, our results are in agreement with
perturbation theory. For large values of $\overline{\lambda}$ our results are
novel.
In order to proceed, let us provide some details on the choice of the
optimization parameters used in SDPB. We use the following range for the input
data
$s_{\text{min}}=-80m^{2},\qquad s_{\text{max}}=100m^{2}.$ (5.10)
We use the following size of the ansatzes in equation (4.4) and equation (5.2)
$N_{\text{max}}=50,$ (5.11)
which is large enough in practice. We impose the conditions (5.5), (5.6) and
(5.8) at a finite number of points $s$. Let us denote by $N_{\mathcal{F}}$ the
number of $s$ values picked in the interval $[s_{\text{min}},0]$ where the
condition (5.8) is imposed, and by $N_{\rho}$ the number of $s$ values picked
in the interval $[(4+\sigma)m^{2},s_{\text{max}}]$ where the condition (5.5)
is imposed. For the choice of $N_{\text{max}}$ in (5.11), we chose the
following values
$N_{\mathcal{F}}=1000,\qquad N_{\rho}=2500.$ (5.12)
We impose the condition (5.6) at about 100 points in the range
$s>s_{\text{max}}$. We use the Chebyshev grid to distribute the above points.
In practice for the LCT data with $\overline{\lambda}\leq 13$ we use
$\sigma=0.001$ and for $\overline{\lambda}>13$ we use $\sigma=0.01$. This
indicates that the LCT data for higher values of $\overline{\lambda}$ contains
larger errors. For smaller values of $\sigma$ the optimization problem often
simply does not converge.
Our strategy is then as follows. We run the optimization routine for different
values of $\epsilon$ or equivalently $\delta$, see (5.9). For small values of
$\delta$, the problem is not constraining enough. For too large values of
$\delta$, the problem becomes unfeasible. In order to find the optimal value
for $\delta$ we perform a binary scan in the range
$\delta\in[0,10].$ (5.13)
We then pick the largest value of $\delta$ where the optimization problem is
still feasible. The binary scan is performed until the difference between the
feasible and unfeasible values of $\delta$ drops below some threshold value.
We pick this threshold value to be 0.1.
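A minimal sketch (ours) of this binary scan, with a stand-in feasibility oracle in place of an actual SDPB run:

```python
# Binary scan over delta (ours); `feasible` stands for one SDPB run at fixed delta.
def binary_scan(feasible, lo=0.0, hi=10.0, tol=0.1):
    # assumes feasible(lo) is True and feasible(hi) is False
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo                                  # largest delta found to be feasible

# stand-in oracle with an (unknown to the scan) threshold at delta ~ 6.28
print(binary_scan(lambda d: d < 6.2831))       # prints 6.25
```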
#### 5.2.1 Infinite Precision Example
We study here the 2d $O(N)$ model in the large $N$ limit where the exact
analytic solution exists, see section 3.4. We pick the following value of the
quartic coupling
$\overline{\lambda}=10$ (5.14)
as an example. At large $N$ we keep only the singlet component of the
scattering amplitude, which therefore loses its crossing symmetry
$s\leftrightarrow 4m^{2}-s$, see appendix B. As a result, in the ansatz (4.4),
we relax crossing symmetry by dropping the second ${\mathfrak{r}}$ term in the parentheses.
Figure 4: Lower and upper bounds on the non-perturbative quartic coupling
constant $\Lambda$. Blue dots are the numerical data. The blue vertical lines
indicate the allowed region for $\Lambda$ for each value of $\delta$. The
horizontal dashed line indicates the analytic value of $\Lambda\approx
6.1538$, which the numerical lower and upper bounds are expected to converge
to. Here we use the analytic large $N$ data of the 2d $O(N)$ model to mimic
the LCT input data.
Figure 5: The real and imaginary part of the interacting part of the
scattering amplitude $\mathcal{T}(s)$. The solid red line is the numerical
result. The dashed black line is the expected result from perturbation theory
(3.25). Here we use the analytic large $N$ data of the 2d $O(N)$ model to
mimic the LCT input data.
Figure 6: The real and imaginary part of the form factor of the trace of the
stress tensor $\mathcal{F}^{\Theta}_{2,0}(s)$. The solid red line is the
numerical result. The dashed black line is the expected result from
perturbation theory (3.23). Here we use the analytic large $N$ data of the 2d
$O(N)$ model to mimic the LCT input data.
The bound on the non-perturbative quartic coupling $\Lambda$ is given in
figure 4. We see that the upper and lower bounds quickly converge to the
analytic value of $\Lambda$ and starting from $\delta\gtrsim 9$ basically
coincide. We pick the “lower bound” solution with the largest value of
$\delta$ and extract the interacting part of the scattering amplitude and the
form factor. The results are given in figures 5 and 6 respectively. The
optimization problem result is given by the red solid line and the analytic
results are given by the black dashed line. Both are in perfect agreement.
#### 5.2.2 $\phi^{4}$ model
Let us now address the optimization problem with the $\phi^{4}$ LCT data as an
input. In what follows we will denote the data obtained by maximization of
$\Lambda$ by the subscript “upper” and the data obtained by minimization of
$\Lambda$ by the subscript “lower”. The obtained numerical values of $\Lambda$
and $\Lambda^{(2)}$ are presented in table 1. Looking at this table one can
see that both optimization problems lead to almost the same numerical values.
This indicates that our procedure converges to the unique solution. The
relative difference between $\Lambda_{\text{upper}}$ and
$\Lambda_{\text{lower}}$ can be taken as a rough error estimate. It is
illuminating to place the data of table 1 on figure 1. We display the result
in figure 7. Remarkably the $\phi^{4}$ model lies very close to the boundary
of the allowed region and almost coincides with the sinh-Gordon/staircase
model. We address the similarity between the two models in detail in the next
section.
$\overline{\lambda}$ | 1 | 3 | 6 | 7 | 8 | 9
---|---|---|---|---|---|---
$m^{-2}\Lambda_{\text{upper}}$ | 0.903 | 2.102 | 3.292 | 3.608 | 3.909 | 4.196
$m^{-2}\Lambda_{\text{lower}}$ | 0.878 | 2.093 | 3.290 | 3.604 | 3.904 | 4.189
$m^{+2}\Lambda^{(2)}_{\text{upper}}$ | 0.029 | 0.140 | 0.340 | 0.409 | 0.479 | 0.551
$m^{+2}\Lambda^{(2)}_{\text{lower}}$ | 0.027 | 0.140 | 0.340 | 0.408 | 0.478 | 0.550
$1-\Lambda_{\text{lower}}/\Lambda_{\text{upper}}$ | 0.028 | 0.004 | 0.0004 | 0.001 | 0.001 | 0.002
$\overline{\lambda}$ | 10 | 11 | 12 | 13 | 16 | 18
---|---|---|---|---|---|---
$m^{-2}\Lambda_{\text{upper}}$ | 4.465 | 4.773 | 5.062 | 5.347 | 5.974 | 6.681
$m^{-2}\Lambda_{\text{lower}}$ | 4.462 | 4.753 | 5.018 | 5.310 | 5.941 | 6.635
$m^{+2}\Lambda^{(2)}_{\text{upper}}$ | 0.624 | 0.713 | 0.804 | 0.897 | 1.149 | 1.479
$m^{+2}\Lambda^{(2)}_{\text{lower}}$ | 0.623 | 0.708 | 0.792 | 0.887 | 1.146 | 1.472
$1-\Lambda_{\text{lower}}/\Lambda_{\text{upper}}$ | 0.001 | 0.004 | 0.009 | 0.007 | 0.006 | 0.007
Table 1: The numerical values of the non-perturbative couplings $\Lambda$ and
$\Lambda^{(2)}$ describing the $\phi^{4}$ model computed for various values of
$\overline{\lambda}$. We present the numerical values for both the upper and
the lower bound. We also indicate a relative difference between the upper and
the lower values of $\Lambda$. Analogous values are shown for the sinh-Gordon
model and its analytic continuation (staircase model) in table 2.
Figure 7:
Bound on the parameters $\Lambda$ and $\Lambda^{(2)}$. The allowed region is
depicted in blue. The obtained numerical values for the $\phi^{4}$ model using
the LCT data from table 1 are indicated by red and purple crosses. These
crosses correspond to lower and upper bounds respectively.
As a solution of our optimization problem we obtain not only the data of table
1 but also all the coefficients in the ansatz (4.4) and (5.2). Taking these
coefficients as averages between the upper and lower bound results we obtain
numerical expressions for the interacting part of the scattering amplitude and
the form factor of the trace of the stress tensor. The results are presented
in figures 9 - 11 for different values of $\overline{\lambda}$. For
$\overline{\lambda}=1$ we can compare our result with the perturbative
amplitude (3.19). It is depicted by the red dotted lines in figures 8 and 9. We
find an excellent agreement. For completeness we provide here the perturbative
value of $m^{-2}\Lambda$ for $\overline{\lambda}=1$ using equation (3.20). It
reads
$m^{-2}\Lambda=0.914\pm 0.006.$ (5.15)
This value is rather close to the one of table 1 for the upper bound which is
$0.903$. For $\overline{\lambda}=18$ we could try to compare our result with
the scattering amplitude of the deformed 2d Ising model. It is given by
equation (2.37) and (3.28) and reads
$\mathcal{T}(s)=4i\sqrt{s}\sqrt{s-4m^{2}}+{s(4m^{2}-s)\over M^{2}}.$ (5.16)
The value of $M^{2}$ can be estimated from equation (3.29) by plugging there
the value $m^{-2}\Lambda=6.681$ found in table 1. The amplitude (5.16) is
depicted in figures 8 and 9 by the black dashed line. We see that the
$\overline{\lambda}=18$ result has a similar shape to the amplitude (5.16).
Notice however that this comparison is rather crude since the
$\overline{\lambda}=18$ amplitude is still far away from the critical point
(its mass gap in units of $m_{0}$ is $m/m_{0}\simeq 0.6186$).
In the “elastic” regime $s\in[4m^{2},16m^{2}]$, one can reconstruct the
spectral density from the obtained two particle form factor using equation
(2.31). For $\overline{\lambda}=10$ we explicitly compare the reconstructed
two-particle part of the spectral density with the LCT result (which was used
as part of the input data to the optimization problem) in figure 12. We see
that in the elastic regime they basically coincide. We present relative error
between the reconstructed two-particle part of the spectral density and the
LCT result for different values of $\overline{\lambda}$ in figure 14. The
relative errors become large at the threshold $s=4m^{2}$ (since
$\rho_{\Theta}^{\text{LCT}}$ approaches 0 as $s$ goes to $4m^{2}$, and a
small uncertainty in the form factor can cause a somewhat large relative
error), but stay relatively low in the “elastic” region. This provides a solid
check for our bootstrap results for the form factor. The obtained scattering
amplitude must obey Watson’s equation (2.32) in the “elastic” regime. As
presented in figure 14, our bootstrap results indeed satisfy it well. This validates our results for the obtained scattering
amplitudes. Outside of the “elastic” regime the presence of four- and higher-
particle form factors becomes necessary for the bounds from unitarity to be
tight. Since we do not have them in our bootstrap set-up, a potential concern
is that the bootstrap algorithm tries to saturate unitarity in this regime by
letting the two-particle form factor grow larger than it should be. So it is
not clear if our results for the form factor and the scattering amplitude in
the $s\geq 16m^{2}$ regime are relevant to the $\phi^{4}$ model. From figures
13 and 14, one can also see that generally for larger $\overline{\lambda}$,
the relative errors are larger. Therefore, we expect the uncertainties in the
results from the S-matrix/form factor bootstrap in figures 9 - 11 to be
relatively larger for larger $\overline{\lambda}$.
Figure 8: Real part of the interacting scattering amplitude in the $\phi^{4}$
theory computed using the LCT data as an input to the S-matrix/form factor
bootstrap problem for various values of $\overline{\lambda}$. As a consistency
check, we also plotted the real part of the perturbative two-loop scattering
amplitude (equation (3.19)) with $\overline{\lambda}=1$ (red dotted line).
Figure 9: Imaginary part of the interacting scattering amplitude in the
$\phi^{4}$ theory computed using the LCT data as an input to the S-matrix/form
factor bootstrap problem for various values of $\overline{\lambda}$. As a
consistency check, we also plotted the imaginary part of the perturbative two-
loop scattering amplitude (equation (3.19)) with $\overline{\lambda}=1$ (red
dotted line).
Figure 10: Real part of the form factor of the trace of the stress tensor in
the $\phi^{4}$ theory computed using the LCT data as an input to the
S-matrix/form factor bootstrap problem for various values of
$\overline{\lambda}$. As a consistency check, we also plotted the real part of
the perturbative two-loop form factor (equation (3.14)) with
$\overline{\lambda}=1$ (red dotted line).
Figure 11: Imaginary part of the form factor of the trace of the stress tensor
in the $\phi^{4}$ theory computed using the LCT data as an input to the
S-matrix/form factor bootstrap problem for various values of
$\overline{\lambda}$. As a consistency check, we also plotted the imaginary
part of the perturbative two-loop form factor (equation (3.14)) with
$\overline{\lambda}=1$ (red dotted line).
Figure 12: Comparison between the LCT spectral density and the spectral density reconstructed from the obtained form factor for $\overline{\lambda}=10$.
Figure 13: Relative error between the LCT spectral density and the spectral
density reconstructed from the obtained two-particle form factor for various values
of $\overline{\lambda}$.
Figure 14: Check of Watson’s equation (2.32) using the obtained expressions of
the form factor and the spectral density for various values of
$\overline{\lambda}$. The vertical axis is given in the log scale.
### 5.3 Comparison of the sinh-Gordon model and $\phi^{4}$ model
Figure 7 nicely summarises the results of sections 4.2 and 5.2. It provides
the allowed region in the space of consistent quantum field theories (blue
region) and indicates the position of the $\phi^{4}$ model in this region (red
and purple crosses). Remarkably, the $\phi^{4}$ model lies very close to the
lower boundary of the allowed region where the sinh-Gordon model and its
analytic continuation (the staircase model) lie. In this section we discuss
the plausibility of this result.
To begin with, let us write explicitly the parameters $\Lambda$ and
$\Lambda^{(2)}$ in the sinh-Gordon model (and its analytic continuation) for
various values of $\beta^{2}$. The results are summarised in table 2. We chose
the values of $\beta^{2}$ in these tables in such a way that the sinh-Gordon
(and its analytic continuation) has the same values of $\Lambda$ as in table
1. As already expected from figure 7, the values of $\Lambda^{(2)}$ of the
$\phi^{4}$ model and the (analytically continued) sinh-Gordon model are almost
the same. Tables 1 and 2 quantify this similarity.
The comparison of tables 1 and 2 can be summarized as follows: given some
value of $\overline{\lambda}$ in the $\phi^{4}$ model, there is always some
value $\beta^{2}$ in the (analytically continued) sinh-Gordon model which
results in the $\Lambda$ and $\Lambda^{(2)}$ values similar to the ones in the
$\phi^{4}$ model. Only in the perturbative regime, when $\overline{\lambda}\ll
4\pi$, do we have $\beta^{2}\approx\overline{\lambda}$. (For example, the
$\phi^{4}$ model with $\overline{\lambda}=1$ is similar to the sinh-Gordon
model with $\beta^{2}=1.043$.)
Using the values of $\Lambda$ in table 1, one can compute the scattering
amplitude, the form factor of the trace of the stress tensor and spectral
density in the sinh-Gordon model (and its analytic continuation) using the
results of section 3.1. We compare them with our LCT expressions for the form
factor at $s\leq 0$ and the spectral density in figure 15. We observe that the
two models have a very similar behaviour in a wide range of values of $s$, even at
strong coupling. Since the $\phi^{4}$ LCT data is so close to the sinh-Gordon
model, it is not surprising that the form factor at $s>0$ and the scattering
amplitudes we obtain from the numerical optimization will also be similar to
those of the sinh-Gordon model. To be concrete, we compare the form factors at
$s>0$ in the two models in figure 16 and the scattering amplitudes in figure
17. Notice especially that the form factors and scattering amplitudes for
these two theories are almost the same at $0<s<4m^{2}$, even for large
$\overline{\lambda}$. This also explains what we saw in figure 7.
It is important to stress that the amplitude and the form factor in the
$\phi^{4}$ and sinh-Gordon models must differ in the non-elastic regime
$s>16m^{2}$, however our bootstrap method does not allow us to compute the
$\phi^{4}$ observables in this regime reliably to see the difference. It is
interesting that there exist amplitudes belonging to different models which
are very similar in the elastic regime and differ significantly in the non-
elastic regime. See Tourkine:2021fqh for a related discussion, where the
authors studied the question of how sensitive the elastic part of the
amplitude is to the inelastic regime, if one regards the latter as an input to
the S-matrix bootstrap and the former as an output. In particular, studying how
much our S-matrix bounds vary under changes of the inelastic amplitudes, using
the framework of Tourkine:2021fqh , would likely shed light on the similarity of
the $\phi^{4}$ and sinh-Gordon elastic amplitudes.
$\beta^{2}$ | 1.043 | 3.298 | 8.219 | 11.1466 | 17.183
---|---|---|---|---|---
$m^{-2}\Lambda$ | 0.908 | 2.102 | 3.292 | 3.610 | 3.912
$m^{2}\Lambda^{(2)}$ | 0.026 | 0.138 | 0.339 | 0.407 | 0.478
$\beta^{2}$ | $8\pi\,e^{-0.560i}$ | $8\pi\,e^{-0.850i}$ | $8\pi\,e^{-1.081i}$ | $8\pi\,e^{-1.255i}$ | $8\pi\,e^{-1.402i}$ | $-8\pi\,e^{+1.466i}$ | $-8\pi\,e^{+1.196i}$
---|---|---|---|---|---|---|---
$m^{-2}\Lambda$ | 4.197 | 4.466 | 4.772 | 5.062 | 5.346 | 5.974 | 6.681
$m^{2}\Lambda^{(2)}$ | 0.551 | 0.623 | 0.712 | 0.801 | 0.893 | 1.115 | 1.395
Table 2: The numerical values of non-perturbative couplings $\Lambda$ and
$\Lambda^{(2)}$ describing the sinh-Gordon model and its analytic continuation
(staircase model) computed for various values of $\beta^{2}$. Analogous values
are shown for the $\phi^{4}$ model in table 1.
Figure 15: Comparison of the form factor of the stress tensor at $s\leq 0$
(left plot) and its spectral density (right plot) in the $\phi^{4}$ model
(solid lines) and the (analytically continued) sinh-Gordon model (dotted
lines) for $\Lambda=\\{0.903,2.102,3.292,3.909,4.465,5.347,5.974,6.681\\}$
which correspond to $\overline{\lambda}=\\{1,3,6,8,10,13,16,18\\}$ in the
$\phi^{4}$ model according to table 1. The solid lines for the $\phi^{4}$
model are from light-cone truncation computation, while the dotted lines for
the sinh-Gordon model are from the analytic form factor formulas (3.8) and
(D.4). Note that for the sinh-Gordon spectral densities, we included the four-
particle form factor contribution, so that the result shown above is exact up
to $s=36m^{2}$.
Figure 16: Comparison of the real part (left plots) and imaginary part (right
plot) of the form factor of the stress tensor at $s>0$ in the $\phi^{4}$ model
(solid lines) and the (analytically continued) sinh-Gordon model (dotted
lines) for $\Lambda=\\{0.903,2.102,3.292,3.909,4.465,5.347,5.974,6.681\\}$
which correspond to $\overline{\lambda}=\\{1,3,6,8,10,13,16,18\\}$ in the
$\phi^{4}$ model according to table 1. The solid lines for the $\phi^{4}$
model are from the S-matrix/form factor bootstrap, while the dotted lines for
sinh-Gordon are from the analytic two-particle form factor formula (3.8).
Figure 17: Comparison of the two-to-two scattering amplitudes in the
$\phi^{4}$ model (solid lines) and the (analytically continued) sinh-Gordon
model (dotted lines) for
$\Lambda=\\{0.903,2.102,3.292,3.909,4.465,5.347,5.974,6.681\\}$, which
correspond to $\overline{\lambda}=\\{1,3,6,8,10,13,16,18\\}$ in the $\phi^{4}$
model according to table 1. The solid lines for the $\phi^{4}$ model are from
the S-matrix/form factor bootstrap, while the dotted lines for the sinh-Gordon
model are from the analytic S-matrix formula (3.3).
## 6 Discussion and Future Directions
The main purpose of this paper was to start with some nonperturbative data for
a specific model, in this case for $\phi^{4}$ theory in 2d, and to inject that
data into the S-matrix/form factor bootstrap in order to compute additional
observable quantities. Ideally, one might hope that with a finite amount of
such data, the constraints of crossing, analyticity, and unitarity completely
determine the rest of the theory. Less ambitiously, the S-matrix/form factor
bootstrap might simply provide a robust method to extract
additional results from some initial data. In our specific application, our
‘initial data’ was the spectral density of the stress tensor, and its form
factor with two-particle states in a certain kinematic regime $s\leq 0$,
computed using lightcone Hamiltonian truncation methods from our companion
paper truncffsd . Roughly, the bootstrap can take this data and obtain the
form factor in a different kinematic regime, at $s>0$, after which the form of
the elastic scattering amplitude follows from Watson’s theorem. An important
part of the challenge was that the input data itself is determined
numerically, so that simply analytically continuing between different
kinematic regimes is not straightforward. [16] In a system at finite volume,
Luscher’s method Luscher:1985dn ; Luscher:1986pf provides another handle on
the elastic scattering amplitudes, which could be used to verify or improve
the S-matrix bootstrap results. The work bajnok2016truncated applied
Luscher’s method to equal-time Hamiltonian truncation in finite volume in the
broken phase of $\phi^{4}$ theory, and it should be possible to repeat their
analysis in the unbroken phase that we have studied in this work. One could
instead try to obtain the finite volume spectrum from lattice Monte Carlo
rather than from truncation methods.
One of the surprises of our analysis is that the elastic 2-to-2 S-matrix in
$\phi^{4}$ theory is extremely close to that of the sinh-Gordon model and its
analytic continuation (the staircase model), after the couplings of both
models are adjusted to have the same value of $\Lambda$ (the interacting part
of the scattering amplitude value at the crossing-symmetric point $s=2m^{2}$).
The fact that the scattering amplitudes in both models are somewhat close
is perhaps not very surprising. As we have emphasized, the 2-to-2 S-matrices
for the two theories are identical in perturbation theory around $\Lambda=0$
until $\mathcal{O}(\Lambda^{4})$, which is the first order in perturbation
theory where $\phi^{4}$ has particle production. Moreover, both theories reach
a critical point at the upper limit $\Lambda=8m^{2}$ where they describe the
Ising model S-matrix, and perturbation theory around this upper limit is
described at $\mathcal{O}(8m^{2}-\Lambda)$ by the leading irrelevant
deformation $T\overline{T}$, so the first difference between the theories
arises at $\mathcal{O}((8m^{2}-\Lambda)^{2})$. So one could reasonably expect
the S-matrices to be quite similar in between these two limits. Nevertheless,
the degree to which they agree even at intermediate strongly coupled values is
still remarkable. One might worry that this agreement is an artifact of the
S-matrix bootstrap itself, which tends to push theories to saturate unitarity
conditions and therefore tends to find integrable models. In fact, we have
shown that a pure S-matrix bootstrap analysis, without any injection of
dynamical data from LCT, exactly finds the sinh-Gordon/staircase model
S-matrix. However, we emphasize that our $\phi^{4}$ S-matrix bootstrap
analysis used a different optimization condition from our pure S-matrix
bootstrap analysis. In the former, we fixed the data from LCT and maximized
$\Lambda$, whereas in the latter we fixed $\Lambda$ and maximized the second
derivative $\Lambda^{(2)}$ of the S-matrix at the crossing-symmetric point.
Moreover, $\phi^{4}$ theory really should saturate unitarity in the elastic
regime $4m^{2}<s<16m^{2}$ due to kinematics, so one cannot think of this
saturation as an artifact of the S-matrix bootstrap. Rather, in practical
terms it appears that the origin of this close agreement is that even at
strong coupling, the stress tensor form factor at $s\leq 0$ and the spectral
density at $4m^{2}<s<16m^{2}$, which we compute in LCT, is very similar to
that of the sinh-Gordon/staircase model. [17] We also compute the stress tensor
spectral density at $s>16m^{2}$, and here we do see a significant deviation
between $\phi^{4}$ and sinh-Gordon. However, the S-matrix bootstrap result for
the elastic scattering amplitude does not seem to be very sensitive to the
detailed behavior of the spectral density in this regime.
Although 2-to-2 elastic scattering appears to be very similar in $\phi^{4}$
theory and the sinh-Gordon model, we do not expect it to be similar at large
$s$ and it certainly cannot be similar for 2-to-$2+n$ since particle
production exactly vanishes in sinh-Gordon. The S-matrix bootstrap with both
two- and four-particle external states would therefore be particularly
illuminating in this case since it would uncover more of the qualitative
difference between the two models. In $d>2$, including higher multiplicities
in the S-matrix bootstrap is likely quite challenging due to the large
kinematic parameter space, but in $d=2$ we are optimistic that it would be
practical. If one wanted to use the S-matrix bootstrap in combination with UV
CFT operators, as we have done in this work, then the inclusion of four-
particle external states would necessitate the appearance of four-particle
form factors $\mathcal{F}_{4,0}^{\Theta}$ in the unitarity condition, which are
very hard to compute in the LCT framework. Perhaps one could simply
parameterize it and try to obtain it as one of the outputs of the S-matrix
bootstrap.
Finally, we end by mentioning possible generalizations of the method. There
are many other models in 2d that would be interesting to analyze using this
approach. LCT can be applied to theories with more general field content in
2d, including gauge fields and fermions, and 2d QCD at finite $N_{c}$ would be
a particularly interesting application. [18] See e.g. Dempsey:2021xpf ;
Katz:2014uoa ; Katz:2013qua for recent LCT and DLCQ applications to 2d QCD.
Our approach here is similar in spirit to that of Gabai:2019ryw , which
studied Ising Field theory with both a $\sigma$ and $\epsilon$ deformation
using TFFSA and Luscher’s method Luscher:1985dn ; Luscher:1986pf , but it
would be interesting to see if any more mileage could be gained by also
including form factors and spectral densities in a generalized unitarity
condition as we did in this paper. More ambitiously, our method in principle
can be applied to higher dimensions, the main challenge being that it is
difficult to obtain the input data. LCT has been applied to the $\phi^{4}$
model in 3d, and the stress tensor spectral density was obtained in
Anand:2020qnp . [19] Both lightcone and equal-time Hamiltonian truncation have
seen important recent progress for $\phi^{4}$ theory in $d>2$
Hogervorst:2014rta ; Katz:2016hxp ; Elias-Miro:2020qwz ; Anand:2020qnp . One
of the main challenges has been dealing with state-dependent counterterms for
divergences. The recent works Elias-Miro:2020qwz ; Anand:2020qnp developed
systematic methods to handle this issue and specifically applied their work in
the context of 3d $\phi^{4}$ theory. One would have to generalize our
treatment of form factors to 3d, but the basic idea would be the same. In
$d>2$ there are two stress-tensor two-particle form factors
$\mathcal{F}_{2,0}^{\Theta}(s)$ and $\mathcal{F}_{2,0}^{(2)}(s)$, as well as
two spectral densities $\rho_{\Theta}(s)$ and $\rho_{2}(s)$, and the
scattering amplitude $\mathcal{S}(s,t)$ can be decomposed into partial
amplitudes $\mathcal{S}_{j}(s)$ with $j=0,2,4,\ldots$. The generalization of
the unitarity constraint (5.1) was worked out in Karateev:2020axc . [20] See
equations (6.36) and (6.41) there. So although generalizing our work to 3d
would involve significant effort, at least all the pieces have already been
assembled and are waiting to be used.
### Acknowledgments
We thank Ami Katz, Alexander Monin, Giuseppe Mussardo, João Penedones, Balt
van Rees, and Matthew Walters for helpful conversations, and in particular Ami
Katz and Matthew Walters for comments on a draft. ALF and HC were supported in
part by the US Department of Energy Office of Science under Award Number DE-
SC0015845 and the Simons Collaboration Grant on the Non-Perturbative
Bootstrap, and ALF in part by a Sloan Foundation fellowship.
## Appendix A Kinematics of 2d Scattering
Consider the scattering of two identical scalar particles in two space-time
dimensions. We denote the initial two-momenta of two particles (before the
scattering) by $p_{1}^{\mu}$ and $p_{2}^{\mu}$ and the final two-momenta of
two particles (after the scattering) by $k_{1}^{\mu}$ and $k_{2}^{\mu}$. The
two particles obey the mass-shell condition
$p_{i}^{2}=k_{i}^{2}=-m^{2},$ (A.1)
where $i=1,2$. The conservation of two-momenta leads to the requirement
$p_{1}^{\mu}+p_{2}^{\mu}-k_{1}^{\mu}-k_{2}^{\mu}=0.$ (A.2)
Due to the mass-shell condition (A.1), there are only two different solutions
for the two momenta after the scattering, namely
$\vec{k}_{1}=\vec{p}_{1},\quad\vec{k}_{2}=\vec{p}_{2}\qquad\text{or}\qquad\vec{k}_{1}=\vec{p}_{2},\quad\vec{k}_{2}=\vec{p}_{1}.$
(A.3)
Let us recall that the Mandelstam variables are defined as
$\displaystyle s\equiv-(p_{1}+p_{2})^{2},\qquad
t\equiv-(p_{1}-k_{1})^{2},\qquad u\equiv-(p_{1}-k_{2})^{2}.$ (A.4)
Plugging the two solutions (A.3) into the definition of the Mandelstam
variables we see that they correspond to two different situations
$t=0\qquad\text{or}\qquad u=0.$ (A.5)
The two solutions (A.3) are related by the discrete $Z_{2}$ symmetry
$\vec{k}_{1}\leftrightarrow\vec{k}_{2}$. The scattering in $d=2$ happens on
the line. It is standard to work with the convention in which two-particle states
are defined in such a way that particle 1 (with momentum $\vec{p}_{1}$) is to
the left of particle 2 (with momentum $\vec{p}_{2}$) on the line. The in
two-particle states are then required to obey $\vec{p}_{1}>\vec{p}_{2}$. This
condition forces the trajectories of the two particles to cross as time goes by.
Instead, the out two-particle states are required to obey the condition
$\vec{p}_{1}<\vec{p}_{2}$, which ensures that the particles will
never meet in the future. When considering the scattering process
$p_{1}p_{2}\rightarrow k_{1}k_{2}$, the above convention is imposed by adding
the following product of step-functions
$\theta(\vec{p}_{1}-\vec{p}_{2})\theta(\vec{k}_{2}-\vec{k}_{1})$ (A.6)
into the definition of 2d scattering amplitudes. Plugging here the solution
(A.3) we see that in this convention the $t=0$ solution vanishes and we are
left only with the $u=0$ solution.
Let us now derive a very useful relation. Consider the Dirac $\delta$-function
which encodes the conservation condition (A.2), namely
$\delta^{2}(p_{1}+p_{2}-k_{1}-k_{2})=\delta(p_{1}^{0}+p_{2}^{0}-k_{1}^{0}-k_{2}^{0})\delta(\vec{p}_{1}+\vec{p}_{2}-\vec{k}_{1}-\vec{k}_{2}).$
(A.7)
Here the energies $p_{i}^{0}$ and $k_{i}^{0}$ are fixed in terms of the momenta
$\vec{p}_{i}$ and $\vec{k}_{i}$ due to the mass-shell condition (A.1). Given
the initial values of $\vec{p}_{i}$, this Dirac $\delta$-function restricts
the values of $\vec{k}_{i}$ to their allowed range; in 2d this restriction is
severe and leads to only two possibilities (A.3). Let us now imagine that we
would like to integrate (A.7) with some kernel over all possible values of
$\vec{k}_{i}$, namely
$\int_{-\infty}^{+\infty}d\vec{k}_{1}\int_{-\infty}^{+\infty}d\vec{k}_{2}\,f(\vec{k}_{1},\vec{k}_{2})\delta^{2}(p_{1}+p_{2}-k_{1}-k_{2}).$
(A.8)
In order to perform this integration we need to perform several steps which we
explain below.
Due to the second Dirac $\delta$-function in the right-hand side of (A.7), we
have $\vec{k}_{2}=\vec{p}_{1}+\vec{p}_{2}-\vec{k}_{1}$. Thus, we can fully
eliminate the integral over $\vec{k}_{2}$. Plugging this restriction back into
(A.7) and using the mass-shell condition (A.1), we get
$\delta(p_{1}^{0}+p_{2}^{0}-k_{1}^{0}-k_{2}^{0})=\delta\Big{(}g(\vec{k}_{1})\Big{)},$
(A.9)
where we have defined
$g(\vec{k}_{1})\equiv
p_{1}^{0}+p_{2}^{0}-\sqrt{m^{2}+\vec{k}_{1}^{\,2}}-\sqrt{m^{2}+(\vec{p}_{1}+\vec{p}_{2}-\vec{k}_{1})^{\,2}}.$
(A.10)
Let us now use the standard property of the Dirac $\delta$-functions and the
fact that $g(\vec{k}_{1})=0$ has only two solutions given by (A.3). We have
then
$\delta(p_{1}^{0}+p_{2}^{0}-k_{1}^{0}-k_{2}^{0})={\delta(\vec{k}_{1}-\vec{p}_{1})\over\Big{|}g^{\prime}(\vec{p}_{1})\Big{|}}+{\delta(\vec{k}_{1}-\vec{p}_{2})\over\Big{|}g^{\prime}(\vec{p}_{2})\Big{|}}.$
(A.11)
Evaluating the derivatives we finally obtain
$\delta(p_{1}^{0}+p_{2}^{0}-k_{1}^{0}-k_{2}^{0})={p_{1}^{0}p_{2}^{0}\over\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|}\times\left(\delta(\vec{k}_{1}-\vec{p}_{1})+\delta(\vec{k}_{1}-\vec{p}_{2})\right).$
(A.12)
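For completeness, the derivatives entering (A.11) are elementary. Differentiating (A.10) gives
$g^{\prime}(\vec{k}_{1})=-{\vec{k}_{1}\over\sqrt{m^{2}+\vec{k}_{1}^{\,2}}}+{\vec{p}_{1}+\vec{p}_{2}-\vec{k}_{1}\over\sqrt{m^{2}+(\vec{p}_{1}+\vec{p}_{2}-\vec{k}_{1})^{\,2}}},$
and evaluating this at $\vec{k}_{1}=\vec{p}_{1}$ (or $\vec{k}_{1}=\vec{p}_{2}$) using the mass-shell condition (A.1) yields
$\Big{|}g^{\prime}(\vec{p}_{1})\Big{|}=\Big{|}g^{\prime}(\vec{p}_{2})\Big{|}={\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|\over p_{1}^{0}p_{2}^{0}},$
which is precisely the prefactor appearing in (A.12).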
Plugging (A.12) into (A.7), we get the final relation
$\delta^{2}(p_{1}+p_{2}-k_{1}-k_{2})=\\\
{p_{1}^{0}p_{2}^{0}\over\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|}\times\left(\delta(\vec{k}_{1}-\vec{p}_{1})\delta(\vec{k}_{2}-\vec{p}_{2})+\delta(\vec{k}_{1}-\vec{p}_{2})\delta(\vec{k}_{2}-\vec{p}_{1})\right).$
(A.13)
Equivalently we could write it as
$4\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|\times\delta^{2}(p_{1}+p_{2}-k_{1}-k_{2})=\\\
4p_{1}^{0}p_{2}^{0}\times\left(\delta(\vec{k}_{1}-\vec{p}_{1})\delta(\vec{k}_{2}-\vec{p}_{2})+\delta(\vec{k}_{1}-\vec{p}_{2})\delta(\vec{k}_{2}-\vec{p}_{1})\right).$
(A.14)
Let us now evaluate the expression (A.14) in the center of mass frame defined
as $\vec{p}_{2}=-\vec{p}_{1}$. Plugging this condition into (A.4) we conclude
that in the center of mass frame
$|\vec{p}_{1}|={1\over 2}\,\sqrt{s-4m^{2}},\qquad p_{1}^{0}={1\over
2}\,\sqrt{s}.$ (A.15)
Plugging these into the left-hand side of (A.14) we get
$4\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|=2\sqrt{s}\sqrt{s-4m^{2}}.$
(A.16)
We then notice that the quantity
$4\left|\vec{p}_{1}p_{2}^{0}-\vec{p}_{2}p_{1}^{0}\right|$ is Lorentz
invariant, thus (A.16) holds in a generic frame! The result (A.14) together
with (A.16) gives precisely (2.21).
## Appendix B $O(N)$ model
Let us consider the case when the system has a global $O(N)$ symmetry. We will
require our asymptotic states to transform in the vector representation of
$O(N)$. They will thus carry an extra label $a=1\ldots N$. The one particle
states are normalized as before with an addition of the Kronecker delta due to
the presence of the $O(N)$ vector indices
$\displaystyle{}_{b}\langle
m,\vec{p}_{2}|m,\vec{p}_{1}\rangle_{a}=2p^{0}\delta_{ab}\times
2\pi\delta(\vec{p}_{2}-\vec{p}_{1}).$ (B.1)
The full scattering amplitude can be decomposed into three independent
scattering amplitudes $\sigma_{i}(s)$, $i=1,2,3$. In the notation of
Zamolodchikov:1978xm we have
$\displaystyle{}_{cd}\langle
m,\vec{p}_{3};m,\vec{p}_{4}|S|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ab}=$
$\displaystyle(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4})\times$
$\displaystyle\big{(}\sigma_{1}(s)\delta_{ab}\delta_{cd}+\sigma_{2}(s)\delta_{ac}\delta_{bd}+\sigma_{3}(s)\delta_{ad}\delta_{bc}\big{)}.$
(B.2)
Crossing $1\leftrightarrow 3$ implies the following relations
$\sigma_{1}(s)=\sigma_{3}(4m^{2}-s),\quad\sigma_{2}(s)=\sigma_{2}(4m^{2}-s).$
(B.3)
Let us discuss unitarity now. The two-particle states transform in the
reducible $O(N)$ representation and can be further decomposed into three
irreducible representations as
$|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ab}={\delta_{ab}\over\sqrt{N}}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\bullet}+|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{S}}_{(ab)}+|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{A}}_{[ab]},$
(B.4)
where we have defined
$\displaystyle|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\bullet}$
$\displaystyle\equiv{1\over\sqrt{N}}\,\sum_{a=1}^{N}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{aa},$
(B.5) $\displaystyle|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{S}}_{(ab)}$
$\displaystyle\equiv{1\over
2}\,\Big{(}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ab}+|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ba}\Big{)}-{\delta_{ab}\over\sqrt{N}}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\bullet},$
(B.6) $\displaystyle|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{A}}_{[ab]}$
$\displaystyle\equiv{1\over
2}\,\Big{(}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ab}-|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ba}\Big{)}.$
(B.7)
The labels $\bullet$, S and A stand for trivial, symmetric traceless and
antisymmetric representations. Using the normalization condition (B.1) we find
that
$\displaystyle{}^{\bullet}\langle
m,\vec{p}_{3};m,\vec{p}_{4}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\bullet}$
$\displaystyle=\mathcal{N}_{2}\times(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4}),$
(B.8) $\displaystyle{}_{(cd)}^{\textbf{S}}\langle
m,\vec{p}_{3};m,\vec{p}_{4}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{S}}_{(ab)}$
$\displaystyle=\mathcal{N}_{2}\,T^{ab,cd}_{\textbf{S}}\times(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4}),$
(B.9) $\displaystyle{}_{[cd]}^{\textbf{A}}\langle
m,\vec{p}_{3};m,\vec{p}_{4}|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\textbf{A}}_{[ab]}$
$\displaystyle=\mathcal{N}_{2}\,T^{ab,cd}_{\textbf{A}}\times(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4}).$
(B.10)
Notice that the normalization condition for the trivial representation is
exactly the one used in the main text, see (2.21).
Taking into account (B.4), alternatively to (B.2), we can rewrite the full
scattering amplitude in terms of independent scattering amplitudes
$S_{\bullet}(s)$, $S_{\textbf{S}}(s)$ and $S_{\textbf{A}}(s)$, as
$\displaystyle{}_{cd}\langle
m,\vec{p}_{3};m,\vec{p}_{4}|S|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{ab}=$
$\displaystyle(2\pi)^{2}\delta^{(2)}(p_{1}+p_{2}-p_{3}-p_{4})\times$
$\displaystyle\big{(}S_{\bullet}(s)T^{ab,cd}_{\bullet}+S_{\textbf{S}}(s)T^{ab,cd}_{\textbf{S}}+S_{\textbf{A}}(s)T^{ab,cd}_{\textbf{A}}\big{)},$
(B.11)
where the tensor structures associated to the three irreducible
representations are defined as
$T^{ab,cd}_{\bullet}\equiv{1\over N}\delta_{ab}\delta_{cd},\quad
T^{ab,cd}_{\textbf{S}}\equiv{\delta_{ac}\delta_{bd}+\delta_{ad}\delta_{bc}\over
2}-{1\over N}\delta_{ab}\delta_{cd},\quad
T^{ab,cd}_{\textbf{A}}\equiv{\delta_{ac}\delta_{bd}-\delta_{ad}\delta_{bc}\over
2}.$ (B.12)
The relation between two sets of amplitudes $\sigma_{1}$, $\sigma_{2}$,
$\sigma_{3}$ and $S_{\bullet}$, $S_{\textbf{S}}$, $S_{\textbf{A}}$ simply
reads as
$\displaystyle S_{\bullet}(s)$
$\displaystyle=\sigma_{2}(s)+\sigma_{3}(s)+N\sigma_{1}(s),$ $\displaystyle
S_{\textbf{S}}(s)$ $\displaystyle=\sigma_{2}(s)+\sigma_{3}(s),$ (B.13)
$\displaystyle S_{\textbf{A}}(s)$ $\displaystyle=\sigma_{2}(s)-\sigma_{3}(s).$
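These relations follow directly by expressing the Kronecker-delta structures of (B.2) in terms of the projectors (B.12),
$\delta_{ab}\delta_{cd}=N\,T^{ab,cd}_{\bullet},\qquad\delta_{ac}\delta_{bd}=T^{ab,cd}_{\bullet}+T^{ab,cd}_{\textbf{S}}+T^{ab,cd}_{\textbf{A}},\qquad\delta_{ad}\delta_{bc}=T^{ab,cd}_{\bullet}+T^{ab,cd}_{\textbf{S}}-T^{ab,cd}_{\textbf{A}},$
and comparing (B.2) with (B.11) term by term.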
In section 3.2 of Karateev:2019ymz it was shown that, using the states in the
irreducible representations of $O(N)$, one can formulate the unitarity
constraints in a simple form. For the trivial representation we have
$\begin{pmatrix}1&\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\bullet}^{*}(s)&\mathcal{N}_{2}^{\,-1/2}\,\mathcal{F}_{2,0}^{*\Theta}(s)\\\
\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\bullet}(s)&1&\mathcal{N}_{2}^{\,-1/2}\,\mathcal{F}_{2,0}^{\Theta}(s)\\\
\mathcal{N}_{2}^{\,-1/2}\,\mathcal{F}_{2,0}^{\Theta}(s)&\mathcal{N}_{2}^{\,-1/2}\,\mathcal{F}_{2,0}^{*\Theta}(s)&2\pi\rho_{\Theta}(s)\end{pmatrix}\succeq
0,$ (B.14)
where the form factor is defined as
$\displaystyle\mathcal{F}_{2,0}^{\Theta}(s)$ $\displaystyle\equiv\langle
0|\Theta(0)|m,\vec{p}_{1};m,\vec{p}_{2}\rangle^{\bullet}$ (B.15)
$\displaystyle=\sqrt{N}\;\langle
0|\Theta(0)|m,\vec{p}_{1};m,\vec{p}_{2}\rangle_{11}.$
For the symmetric and antisymmetric representations we have instead
$\begin{pmatrix}1&\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\textbf{S}}^{*}(s)\\\
\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\textbf{S}}(s)&1\end{pmatrix}\succeq
0,\qquad\begin{pmatrix}1&\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\textbf{A}}^{*}(s)\\\
\mathcal{N}_{2}^{\,-1}{\mathcal{S}}_{\textbf{A}}(s)&1\end{pmatrix}\succeq 0.$
(B.16)
The crossing equations (B.3) in the new basis read as
$\begin{pmatrix}{\mathcal{S}}_{\bullet}(s)\\\ {\mathcal{S}}_{\textbf{S}}(s)\\\
{\mathcal{S}}_{\textbf{A}}(s)\end{pmatrix}=\begin{pmatrix}{1\over N}&&{1\over
2}-{1\over N}+{N\over 2}&&{1\over 2}-{N\over 2}\\\ {1\over N}&&{1\over
2}-{1\over N}&&{1\over 2}\\\ -{1\over N}&&{1\over 2}+{1\over N}&&{1\over
2}\end{pmatrix}\begin{pmatrix}{\mathcal{S}}_{\bullet}(4m^{2}-s)\\\
{\mathcal{S}}_{\textbf{S}}(4m^{2}-s)\\\
{\mathcal{S}}_{\textbf{A}}(4m^{2}-s)\end{pmatrix}.$ (B.17)
Let us now consider the 2d $O(N)$ model with a $\phi^{4}$ potential. In the
large $N$ limit, $N\rightarrow\infty$, it is straightforward to show using
perturbation theory that
$\sigma_{i}(s)={\overline{\sigma}_{i}(s)\over N}+O(N^{-2}),\quad i=1,2,3,$
(B.18)
where $\overline{\sigma}_{i}(s)$ is the finite part in the large $N$ limit.
Using (B.13) we conclude that
$S_{\bullet}(s)=\overline{\sigma}_{1}(s),\qquad
NS_{\textbf{S}}(s)=\overline{\sigma}_{2}(s)+\overline{\sigma}_{3}(s),\qquad
NS_{\textbf{A}}(s)=\overline{\sigma}_{2}(s)-\overline{\sigma}_{3}(s).$ (B.19)
Using these we can read off from (B.17) the crossing equation for the trivial
scattering amplitude. It reads
$S_{\bullet}(s)=\overline{\sigma}_{1}(s)=\overline{\sigma}_{3}(4m^{2}-s).$
(B.20)
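Explicitly, keeping the leading terms at large $N$ in the first row of (B.17) and using (B.19), we get
$S_{\bullet}(s)={N\over 2}\Big{(}S_{\textbf{S}}(4m^{2}-s)-S_{\textbf{A}}(4m^{2}-s)\Big{)}+O(N^{-1})=\overline{\sigma}_{3}(4m^{2}-s)+O(N^{-1}),$
which is (B.20).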
Clearly this crossing equation does not close if we consider only the trivial
scattering amplitude.
## Appendix C Perturbative Computations
In this appendix, we will detail various analytic computations of the form
factors and scattering amplitudes in solvable limits (large $N$, non-
relativistic, and perturbative $\lambda$) that we use throughout the paper.
### C.1 Feynman Diagrams
Figure 18: Feynman diagrams for the 2-to-2 $S$-matrix up to one loop (plus
crossed diagrams).
#### C.1.1 $\phi^{4}$ theory
Figure 19: Feynman diagrams for the two-particle form factor of $\Theta$ up to
two loops.
Figure 20: Feynman diagrams for the $T_{--}$ two-point function up to two
loops.
We begin with the form factors and amplitudes in a loop expansion, in powers
of the coupling $\lambda$. The leading order $\mathcal{O}(\lambda^{0})$ free
theory expressions are
$\widehat{{\mathcal{S}}}=1,\quad\mathcal{F}_{2,0}^{\Theta}=-2m^{2},\quad\pi\rho_{\Theta}=2m^{4}\omega^{2}\theta(s-4m^{2}),$
(C.1)
where $\omega^{2}\equiv{1\over 2\sqrt{s(s-4m^{2})}}={\mathcal{N}}_{2}^{-1}$.
To compute the form factors and spectral densities of $\Theta$, it is in
general easier to compute those of $T_{--}$ first and then use the Ward
identity than it is to compute those of $\Theta$ directly. The reason is that
$T_{--}$ is simply $(\partial_{-}\phi)^{2}$, independent of the interaction
and mass terms, and so involves fewer Feynman diagrams. At tree-level,
$\langle m^{2},p_{1};m^{2},p_{2}|T_{--}(0)\rangle=2p_{1-}p_{2-},$ (C.2)
The Ward identity implies
$\langle m^{2},p_{1};m^{2},p_{2}|\Theta(0)\rangle=-{s\over p_{-}^{2}}\langle
m^{2},p_{1};m^{2},p_{2}|T_{--}(0)\rangle$ (C.3)
where
$s=(p_{1}+p_{2})^{2}=m^{2}{p_{-}^{2}\over p_{1-}p_{2-}},\qquad p_{-}\equiv
p_{1-}+p_{2-},$ (C.4)
so at $\mathcal{O}(\lambda^{0})$ we obtain
$\langle m^{2},p_{1};m^{2},p_{2}|\Theta(0)\rangle=-2m^{2},$ (C.5)
as claimed, and as can easily be verified by a direct computation with
$\Theta$.
At the next order, $\mathcal{O}(\lambda)$, the S-matrix is given by a tree
diagram, the form factor involves a one-loop computation, and the spectral
density involves a two-loop diagram, as shown in the corresponding diagrams in
figure 18, 19, and 20. The S-matrix is simply
$\widehat{{\mathcal{S}}}=1-i\lambda\omega^{2}.$ (C.6)
The form factor one-loop diagram (top right diagram in figure 19) can be
computed by standard methods,
$\langle m^{2},p_{1};m^{2},p_{2}|T_{--}(0)\rangle=-{\lambda\over
4\pi}\int_{0}^{1}dx{x(1-x)p_{-}^{2}\over m_{0}^{2}-x(1-x)s}.$ (C.7)
The integral over the Feynman parameter $x$ can be done in closed form to
obtain the expression given in equation (3.14), which for reference we write
here as
$\langle m^{2},p_{1};m^{2},p_{2}|\Theta(0)\rangle=-2m^{2}+{\lambda\over
4\pi}\Delta(s)+\mathcal{O}(\lambda^{2}),\quad\Delta(m^{2}x)\equiv-1+\lim_{\epsilon\rightarrow
0^{+}}{4\text{ArcTan}\left({\sqrt{x}\over\sqrt{4-x-i\epsilon}}\right)\over\sqrt{x(4-x-i\epsilon)}}.$
(C.8)
We have used $m_{0}=m+\mathcal{O}(\lambda^{2})$, so $m$ and $m_{0}$ are
interchangeable at this order.
The $\mathcal{O}(\lambda)$ (i.e. two-loop) diagram (second diagram in figure
20) for the $T_{--}$ time-ordered two-point function factors into a product of
two one-loop diagrams. The Ward identity can again be used to obtain the
correlator with $T_{--}$s replaced by $\Theta$s, so by evaluating a couple of
one-loop diagrams we obtain
$\pi\rho_{\Theta}(s)=\textrm{Re}\int d^{2}xe^{-ip\cdot x}\langle{\rm
vac}|\Theta(x)\Theta(0)|{\rm
vac}\rangle_{T}=\theta(s-4m^{2})\left[2m^{4}\omega^{2}+{\lambda\over(4\pi)^{2}}\textrm{Im}(\Delta^{2}(s))+\mathcal{O}(\lambda^{2})\right].$
(C.9)
At $s>4m^{2}$, the result for $\rho_{\Theta}$ can be written a bit more
explicitly with the following expressions for the real and imaginary parts of
$\Delta(s)$:
$\Delta(s)\stackrel{{\scriptstyle s>4m^{2}}}{{=}}-\left[4m^{2}{{\rm
ArcCosh}\left(\sqrt{{s\over
4m^{2}}}\right)\over\sqrt{s(s-4m^{2})}}\right]-4i\pi m^{2}\omega^{2}.$ (C.10)
One can also perform these computations directly with $\Theta$; in that case,
it is crucial to include a subtle contribution $\propto\lambda\phi^{2}$ in the
definition of $\Theta$ itself:
$\Theta=m^{2}\phi^{2}+{\lambda\over 12}\phi^{4}+{\lambda\over 8\pi}\phi^{2},$
(C.11)
see e.g. Anand:2017yij for details. [21] One way to “discover” the
contribution ${\lambda\over 8\pi}\phi^{2}$ to $\Theta$ is that the relation
$\mathcal{F}_{2,0}^{\Theta}(0)=-2m^{2}$ is not satisfied at one-loop if it is
not included.
At the next order, $\mathcal{O}(\lambda^{2})$, the perturbative diagrams for
$\mathcal{F}_{2,0}^{\Theta}$ and $\rho_{\Theta}$ become more difficult to
evaluate, involving a two-loop and three-loop computation, respectively. Here
we will only derive the $\mathcal{O}(\lambda^{2})$ contribution to
$\mathcal{F}_{2,0}^{\Theta}$. As a check, in the next subsection we will
rederive the $\mathcal{O}(\lambda^{2})$ contribution to
$\mathcal{F}^{\Theta}_{2,0}$ using dispersion relations. Since particle
production is kinematically forbidden for $s<16m^{2}$, we can actually obtain
the three-loop spectral density in this regime from the two-loop form factor.
First, the $\mathcal{O}(\lambda^{2})$ contribution to the S-matrix involves
only a one-loop diagram (second diagram in figure 18) that can be easily
evaluated:
${\mathcal{T}}=-\lambda+{\lambda^{2}\over 8\pi m^{2}}\left(1+4\pi
im^{2}\omega^{2}\right)+\mathcal{O}(\lambda^{3}).$ (C.12)
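Note that the imaginary part of (C.12) is ${\lambda^{2}\over 2}\omega^{2}$, in agreement with the elastic unitarity relation quoted in (C.23) below; this provides a quick cross-check of the one-loop diagram.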
To compute the $\mathcal{O}(\lambda^{2})$ correction to
$\mathcal{F}_{2,0}^{\Theta}$, we again compute $\langle
m^{2},p_{1};m^{2},p_{2}|T_{--}(0)\rangle$ and use the Ward identity. There are
two two-loop diagrams that must be evaluated, as shown in Fig. 19. The first
is a simple product of two one-loop diagrams, and is easily evaluated to be
$F_{2,0}^{\Theta}\supset-{\lambda^{2}\over
2(4\pi)^{2}m^{2}}\Delta(s)(\Delta(s)+1).$ (C.13)
The second two-loop diagram in Fig. 19 involves the integral [22] (see e.g.
(10.57) in Peskin:1995ev , which is easily generalized to the diagram we are
considering)
${\mathcal{I}}\equiv\int_{0}^{1}dx\int_{0}^{1}dy\int_{0}^{1}dw\int{d^{2}k\over(2\pi)^{2}}{(1-w)k_{-}(k+p)_{-}\over(w[x(1-x)(k+p_{1})^{2}]+(1-w)[k^{2}+2yk\cdot
p+yp^{2}]+m^{2})^{3}},$ (C.14)
(where $p=p_{1}+p_{2}$) plus a symmetric contribution with
$p_{1}\leftrightarrow p_{2}$. With some effort, these integrals can be
evaluated and massaged into the closed form result in equation (3.14). In
(3.14), we have also had to adjust for an $\mathcal{O}(\lambda^{2})$
wavefunction renormalization Serone:2018gjo [23] (the wavefunction
renormalization factor is given by $b_{2}^{(1)}$ from Table 8 of
Serone:2018gjo ; we have used the fact that their numeric value for
$b_{2}^{(1)}$ is equal to ${9\over 2\pi^{2}}-{3\over 8}$),
$Z^{-1}=1-\left({\lambda\over 4!m_{0}^{2}}\right)^{2}\left({9\over
2\pi^{2}}-{3\over 8}\right)+\mathcal{O}(\lambda^{3}),$ (C.15)
after which the tree-level contribution to $\langle
m^{2},p_{1};m^{2},p_{2}|\Theta(0)\rangle$ becomes
$\langle
m^{2},p_{1};m^{2},p_{2}|\Theta(0)\rangle\supset-2m^{2}Z^{-1}=-2m^{2}\left(1-\left({\lambda\over
4!m^{2}}\right)^{2}\left({9\over 2\pi^{2}}-{3\over
8}\right)+\mathcal{O}(\lambda^{3})\right).$ (C.16)
This wavefunction renormalization contribution has the effect of canceling out
the $s=0$ contribution from the other two-loop diagrams, so that the Ward
identity $\mathcal{F}_{2,0}^{\Theta}(0)=-2m^{2}$ is preserved.
#### C.1.2 2d $O(N)$ model in the large $N$ limit
The S-matrix, form factor $\mathcal{F}_{2,0}^{\Theta}$, and spectral density
$\rho_{\Theta}$ in the $O(N)$ theory at large $N$ simply involve diagrams we
have just computed, together with a standard resummation of higher loop
diagrams that factorize and form a geometric series.
The $\Theta$ form factor is, in units with $m=1$,
$\mathcal{F}_{2,0}^{\Theta}(s)=-2\left(1-{\lambda\Delta(s)\over
8\pi+\lambda(1+\Delta(s))}\right),$ (C.17)
where $\Delta(s)$ is the function given in (3.15). The S-matrix is simplest in
the rapidity variable $\theta$:
$S=-{(-i\theta+\pi)\lambda\operatorname{csch}(\theta)+8i\pi\over(i\theta+\pi)\lambda\operatorname{csch}(\theta)-8i\pi}.$
(C.18)
Finally, the time-ordered two-point function $\mathbf{\Delta}_{\Theta}(s)$ is
$\mathbf{\Delta}_{\Theta}(s)=4i{{s\over 6}-\Delta(s)+{\lambda\over
8\pi}\left({s\over 6}+\Delta(s)\left({s\over 6}-1\right)\right)\over
8\pi+\lambda(\Delta(s)+1)},$ (C.19)
and the spectral density $\rho_{\Theta}(s)$ can be obtained either by taking
the real part of $\mathbf{\Delta}_{\Theta}(s)$ or by using the form factor
together with the fact that the large $N$ limit theory saturates the
inequality (2.30).
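As a simple check, expanding (C.17) to first order in $\lambda$ gives $\mathcal{F}_{2,0}^{\Theta}(s)=-2+{\lambda\over 4\pi}\Delta(s)+\mathcal{O}(\lambda^{2})$, which reproduces the one-loop result (C.8) at $m=1$ (with the coupling normalization used here).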
### C.2 Dispersion Relations
As a check of our previous two-loop formulas for the $\phi^{4}$ model, we will
see how to rederive the one- and two-loop contributions using unitarity,
Watson’s equation, and dispersion relations.
We begin with the S-matrix,
$\widehat{S}=1+i\omega^{2}\mathcal{T}$ (C.20)
By definition, up to $\mathcal{O}(\lambda)$, it is
$\mathcal{T}=-\lambda+\mathcal{O}(\lambda^{2})\qquad\omega^{2}={1\over
2\sqrt{s(s-4m^{2})}}$ (C.21)
Let us also divide $\mathcal{T}$ into real and imaginary parts as follows
$\mathcal{T}=\mathcal{T}_{R}+i\mathcal{T}_{I}$ (C.22)
From on-shell unitarity $SS^{*}=1$, we infer that at $s>4$,
$\mathcal{T}_{I}=\lambda^{2}{\omega^{2}\over 2}+\mathcal{O}(\lambda^{3})$
(C.23)
Then, we can reconstruct $\mathcal{T}$ at $\mathcal{O}(\lambda^{2})$ from its
imaginary part using dispersion relations:
$\mathcal{T}(s)=\mathcal{T}_{\infty}-{1\over\pi}\int_{4m^{2}}^{\infty}d\mu^{2}\mathcal{T}_{I}(\mu^{2})\left({1\over
s-\mu^{2}}-{1\over
s-(4m^{2}-\mu^{2})}\right)=\mathcal{T}_{\infty}+{\lambda^{2}\over
4\sqrt{(4m^{2}-s)s}}.$ (C.24)
It is easy to see that the imaginary part of $\mathcal{T}(s)$ is indeed
$\mathcal{T}_{I}(s)$. The constant “subtraction” piece $\mathcal{T}_{\infty}$
depends on the definition of the theory and cannot be determined by dispersion
relations. If we define the theory to have a bare quartic coupling
${\mathcal{L}}\supset-{\lambda\over 4!}\phi^{4}$ without additional
counterterms, then a one-loop computation shows
$\mathcal{T}_{\infty}=-\lambda+{\lambda^{2}\over 8\pi
m^{2}}+\mathcal{O}(\lambda^{3})$.
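The dispersion integral in (C.24) is also easy to verify numerically. The following is a minimal Python sketch (the function names, the change of variables, and the sample values are our own choices, not part of the original computation); it evaluates the integral at a point below threshold and compares it with the closed-form expression on the right-hand side of (C.24):

```python
import numpy as np
from scipy.integrate import quad

m, lam = 1.0, 0.3              # mass and quartic coupling (sample values)

def T_disp(s):
    # Dispersion integral of (C.24) with T_I taken from (C.23); the substitution
    # mu^2 = 4 m^2 + u^2 removes the integrable square-root singularity at threshold.
    def integrand(u):
        mu2 = 4*m**2 + u**2
        TI_times_jacobian = lam**2 / (2.0*np.sqrt(mu2))   # T_I(mu2) * d(mu2)/du
        bracket = 1.0/(s - mu2) - 1.0/(s - (4*m**2 - mu2))
        return TI_times_jacobian * bracket
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return -val/np.pi

s = 1.7                        # any point with 0 < s < 4 m^2
print(T_disp(s), lam**2/(4*np.sqrt((4*m**2 - s)*s)))      # both are ~0.011
```

Up to the subtraction constant $\mathcal{T}_{\infty}$, the two numbers agree, confirming that the imaginary part (C.23) indeed reconstructs the $\mathcal{O}(\lambda^{2})$ amplitude.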
Next, we apply Watson’s equation to obtain the form factor for $\Theta$. In
the rest of this appendix, for notational convenience, we will denote
$\mathcal{F}^{\Theta}_{2,0}$ simply as $\mathcal{F}$. At
$\mathcal{O}(\lambda^{0})$, we have $\mathcal{F}=-2m^{2}$. Expanding
$\mathcal{F}$ in powers of $\lambda$ [24] (note that the subscripts in
$\mathcal{F}$ in equation (C.25) have different meanings from those in other
parts of this paper),
$\mathcal{F}=-2m^{2}\left(1+{\lambda\over
m^{2}}(\mathcal{F}_{1,R}+i\mathcal{F}_{1,I})+{\lambda^{2}\over
m^{4}}(\mathcal{F}_{2,R}+i\mathcal{F}_{2,I})+\dots\right)$ (C.25)
and imposing
${\mathcal{F}(s)\over\mathcal{F}^{*}(s)}=\widehat{S}(s),$ (C.26)
we immediately find
$\mathcal{F}_{1,I}=-{1\over 2}m^{2}\omega^{2}=-{m^{2}\over
4\sqrt{s(s-4m^{2})}}$ (C.27)
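Indeed, expanding both sides of (C.26) to first order in $\lambda$ gives
$1+{2i\lambda\over m^{2}}\mathcal{F}_{1,I}+\mathcal{O}(\lambda^{2})=1-i\lambda\omega^{2}+\mathcal{O}(\lambda^{2}),$
from which (C.27) follows immediately.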
Applying dispersion relations, we have
$\mathcal{F}_{1}(s)=\mathcal{F}_{1,\infty}-{1\over\pi}\int_{4}^{\infty}d\mu^{2}{\mathcal{F}_{1,I}(\mu^{2})\over
s-\mu^{2}}=\mathcal{F}_{1,\infty}-{m^{2}\sec^{-1}\left({2m\over\sqrt{4m^{2}-s}}\right)\over
2\pi\sqrt{(4m^{2}-s)s}}$ (C.28)
where $\mathcal{F}_{1,\infty}$ is another constant subtraction. We can fix its
value by demanding that $\mathcal{F}(0)=-2m^{2}$, which implies
$\mathcal{F}_{1,\infty}={1\over 8\pi}$ (C.29)
Putting this together, we obtain
$\mathcal{F}_{1}(s)=-{1\over 8\pi}\Delta(s).$ (C.30)
At the next order, using our expression for $\mathcal{T}$ up to
$\mathcal{O}(\lambda^{2})$, we find from Watson’s equation that
$\mathcal{F}_{2,I}(s)=-{m^{2}\text{csch}^{-1}\left({2m\over\sqrt{s-4m^{2}}}\right)\over
8\pi(s-4m^{2})s}-{\pi\over 32}m^{2}\delta(s-4m^{2}).$ (C.31)
There is a subtle $\delta(s-4m^{2})$ contribution here that arises from taking
the difference between ${1\over s-4m^{2}}$ and $\left({1\over
s-4m^{2}}\right)^{*}$, which differ by a $\delta$ function at $s=4m^{2}$
due to the change in $i\epsilon$ prescription under complex conjugation. The
clearest way to see this subtle term is by studying the non-relativistic
limit $s\sim 4m^{2}$ directly, as we will do in the next subsection.
Finally, we can reconstruct the full form factor at this order:
$\mathcal{F}_{2}(s)=\mathcal{F}_{2,\infty}-{1\over\pi}\int_{4m^{2}}^{\infty}d\mu^{2}{\mathcal{F}_{2,I}(\mu^{2})\over
s-\mu^{2}}=-{1\over(8\pi)^{2}}\left({\pi^{2}s\over
8(s-4m^{2})}-\Delta(s)(\Delta(s)/2+1)\right),$ (C.32)
which agrees with the result (3.14). We again fixed the subtraction term
$\mathcal{F}_{2,\infty}$ by demanding that $\mathcal{F}(0)=-2m^{2}$.
### C.3 Nonrelativistic Limit
Scattering of two particles near the threshold $s=4m^{2}$ is simply a one-
dimensional non-relativistic quantum mechanics problem that can be solved, as
we review briefly. The interaction $\lambda\phi^{4}$ is a $\delta$ function
potential in position space. We take the scattering wavefunction to be
$\psi(x)=\left\\{\begin{array}[]{cc}e^{ikx}+Se^{-ikx},&x<0\\\
Se^{ikx}+e^{-ikx},&x>0\end{array}\right.,$ (C.33)
which is even as a function of $x$ since the two particles are identical. The
Schrodinger equation is
$-{\psi^{\prime\prime}(x)\over 2(m/2)}+{\lambda\over
8}\delta(x)\psi(x)=E\psi(x)$ (C.34)
where $E={s-4m^{2}\over 4m}={k^{2}\over m}$.
Integrating the Schrodinger equation around $x=0$, we find
$S={16k-im\lambda\over
16k+im\lambda}\approx-{\lambda+8i\theta\over\lambda-8i\theta}$ (C.35)
where we have written the result in terms of rapidity $\theta$ and taken the
limit $\theta\rightarrow 0$ with $\lambda/\theta$ fixed. This is regular at
small $\theta$, but if we first take a small $\lambda$ limit then each power
in $\lambda$ is individually singular:
$S(\theta)\stackrel{{\scriptstyle\theta\rightarrow
0}}{{\approx}}1-{i\lambda\over 4\theta}-{\lambda^{2}\over
32\theta^{2}}+{i\lambda^{3}\over 256\theta^{3}}+\dots$ (C.36)
where we have kept only the most singular terms at each order. So the small
$\lambda$ limit and the small $\theta$ limit do not commute, and in fact
perturbative loop computations, which are an expansion in powers of $\lambda$,
should not be trusted below about $\theta\lesssim{\lambda\over 4}$.
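For completeness, the matching condition behind (C.35) is the following. Integrating (C.34) over an infinitesimal interval around $x=0$ gives the jump condition
$\psi^{\prime}(0^{+})-\psi^{\prime}(0^{-})={m\lambda\over 8}\,\psi(0),$
and inserting the ansatz (C.33), for which $\psi^{\prime}(0^{+})-\psi^{\prime}(0^{-})=2ik(S-1)$ and $\psi(0)=1+S$, yields $16ik(S-1)=m\lambda(1+S)$, whose solution is the first equality in (C.35).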
From the scattering wavefunction, we can also extract the form factor in this
limit. Each individual particle has energy $m+E/2$, so the time-dependent
wavefunction is
$\psi_{2}(x_{1},x_{2},t_{1},t_{2})=e^{i(m+{E\over
2})(t_{1}+t_{2})}\psi({x_{1}-x_{2}\over 2})$ (C.37)
where $\psi(x)$ is from eq. (C.33). We will take $m=1$. In second
quantization, $\psi_{2}$ is
$\psi_{2}(x_{1},x_{2},t_{1},t_{2})=\langle\phi(x_{1},t_{1})\phi(x_{2},t_{2})|p_{1},p_{2}\rangle$
(C.38)
where $|p_{1},p_{2}\rangle$ is the two-particle state, with momentum
$p_{1}=-p_{2}=k/2$ since we are in the rest frame. Then, we can easily
calculate the overlap with $T_{--}$ by taking
$\langle T_{--}(0)|p_{1},p_{2}\rangle=\lim_{x_{i}\rightarrow
0,t_{i}\rightarrow
0}(\partial_{t_{1}}-\partial_{x_{1}})(\partial_{t_{2}}-\partial_{x_{2}})\psi_{2}(x_{1},x_{2},t_{1},t_{2}).$
(C.39)
In the nonrelativistic limit, both $E=k^{2}$ and $k$ go to zero, and we obtain
$\langle T_{--}(0)|p_{1},p_{2}\rangle=-(1+S).$ (C.40)
with $S$ given in equation (C.35). This result has the correct phase according
to Watson’s equation, since one can easily check that ${1+S\over 1+S^{*}}=S$.
Expanding in small $\lambda$ we have
$-{1\over 2}\langle T_{--}(0)|p_{1},p_{2}\rangle=1-{i\lambda\over
8\theta}-{\lambda^{2}\over 64\theta^{2}}+\dots,$ (C.41)
which agrees with the perturbative result (3.14) if we expand (3.14) in small
$\theta$. From this expression, we see that $\mathcal{F}_{2,0}^{\Theta}$ at
$\mathcal{O}(\lambda^{2})$ has a pole $\sim{\lambda^{2}\over 32(s-4)}$, which
implies that the imaginary part of $\mathcal{F}_{2,0}^{\Theta}$ at
$\mathcal{O}(\lambda^{2})$ contains a $\delta$ function of the form
$\textrm{Im}(\mathcal{F}_{2,0}^{\Theta})\supset-\lambda^{2}{\pi\over 32}\delta(s-4)$
(C.42)
as claimed in eq (C.31).
## Appendix D Sinh-Gordon Form Factors and $C$-function
In this appendix, we provide some details about the four-particle form factor
of the trace of the stress tensor $\Theta$ and the computation of the spectral
density of section 5 in the sinh-Gordon/staircase model. The result for the
$2n$-particle form factor of $\Theta$ is given in Fring:1992pt (the form
factors with an odd number of particles vanish for $\Theta$). In terms of the
minimal form factor
$\mathcal{F}_{\text{min}}\left(\theta\right)=\mathcal{N}\exp\left(8\int_{0}^{\infty}{dx\over
x}{\sinh\left({x\gamma\over 2\pi}\right)\sinh\left({x(\pi-\gamma)\over
2\pi}\right)\sinh\left({x\over
2}\right)\over\sinh^{2}(x)}\sin^{2}\left({x(i\pi-\theta)\over
2\pi}\right)\right),$ (D.1)
with the normalization constant
$\mathcal{N}=\exp\left[-4\int_{0}^{\infty}{dx\over x}{\sinh\left({x\gamma\over
2\pi}\right)\sinh\left({x\over
2}\left(1-{\gamma\over\pi}\right)\right)\sinh{x\over
2}\over\sinh^{2}x}\right],$ (D.2)
the 2-particle form factor in equation (3.8) in our convention is given by
$\mathcal{F}_{2,0}^{\Theta}\left(\theta\right)=-2m^{2}{\mathcal{F}_{\text{min}}\left(\theta\right)\over\mathcal{N}}.$
(D.3)
And the expression for the four-particle form factor is
$\mathcal{F}_{4,0}^{\Theta}(\theta_{1},\theta_{2},\theta_{3},\theta_{4})={8\pi
m^{2}\sin\gamma\over\mathcal{N}^{2}}\sigma_{1}^{(4)}\sigma_{2}^{\left(4\right)}\sigma_{3}^{(4)}\prod_{0<i<j\leq
4}{\mathcal{F}_{\text{min}}\left(\theta_{ij}\right)\over x_{i}+x_{j}},$ (D.4)
where $\theta_{ij}=\theta_{i}-\theta_{j}$, $x_{i}=e^{\theta_{i}}$, and
$\sigma_{k}^{\left(4\right)}$s are degree $k$ symmetric polynomials of
$x_{i}$. Specifically, we have
$\displaystyle\sigma_{1}^{\left(4\right)}$
$\displaystyle=x_{1}+x_{2}+x_{3}+x_{4},$
$\displaystyle\sigma_{2}^{\left(4\right)}$
$\displaystyle=x_{1}x_{2}+x_{1}x_{3}+x_{1}x_{4}+x_{2}x_{3}+x_{2}x_{4}+x_{3}x_{4},$
(D.5) $\displaystyle\sigma_{3}^{\left(4\right)}$
$\displaystyle=x_{1}x_{2}x_{3}+x_{1}x_{2}x_{4}+x_{1}x_{3}x_{4}+x_{2}x_{3}x_{4}.$
As mentioned in Fring:1992pt , to numerically evaluate the form factors, it is
easier to use the following expression for $\mathcal{F}_{\text{min}}$:
$\displaystyle\mathcal{F}_{\text{min}}\left(\theta\right)$
$\displaystyle=\mathcal{N}I_{N}(\theta)$
$\displaystyle\times\prod_{k=0}^{N-1}\left[{\left(1+\left({\widehat{\theta}/2\pi\over
k+{1\over 2}}\right)^{2}\right)\left(1+\left({\widehat{\theta}/2\pi\over
k+{3\over 2}-{\gamma\over
2\pi}}\right)^{2}\right)\left(1+\left({\widehat{\theta}/2\pi\over
k+1+{\gamma\over
2\pi}}\right)^{2}\right)\over\left(1+\left({\widehat{\theta}/2\pi\over
k+{3\over 2}}\right)^{2}\right)\left(1+\left({\widehat{\theta}/2\pi\over
k+{1\over 2}+{\gamma\over
2\pi}}\right)^{2}\right)\left(1+\left({\widehat{\theta}/2\pi\over
k+1-{\gamma\over 2\pi}}\right)^{2}\right)}\right]^{k+1}$ (D.6)
where the integral $I_{N}(\theta)$ is defined as
$I_{N}(\theta)\equiv\exp\Bigg{[}8\int_{0}^{\infty}{dx\over
x}{\sinh\left({x\gamma\over 2\pi}\right)\sinh\left({x\over
2}\left(1-{\gamma\over\pi}\right)\right)\sinh{x\over
2}\over\sinh^{2}x}\times\\\
\left(N+1-Ne^{-2x}\right)e^{-2Nx}\sin^{2}\left({x\widehat{\theta}\over
2\pi}\right)\Bigg{]}.$ (D.7)
Here $\widehat{\theta}\equiv i\pi-\theta$. The integral in the exponent of
$I_{N}(\theta)$ approaches 0 as one increases $N$, and for large $N$ the
contribution from the integral is actually negligible. For example, for
$N=1000$, in the cases we considered in this paper, the integral is of order
$\mathcal{O}\left(10^{-8}\right)$. In the actual computation of the
spectral density, we simply take $N$ large enough and discard the integral
part in (D.6). We then fit the result for $\mathcal{F}_{\text{min}}$ with a
rational function for each value of $\gamma$ (which only introduces an
uncertainty of order $\mathcal{O}(10^{-8})$), so that Mathematica can later
evaluate the four-particle form factor contribution to the spectral density
quickly.
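For reference, here is a minimal Python transcription of the truncated product in (D.6) (the function and variable names are ours; the prefactor $\mathcal{N}\,I_{N}(\theta)$ is dropped, as discussed above):

```python
import numpy as np

def fmin_product(theta, gamma, N=1000):
    # Truncated product of eq. (D.6), i.e. F_min(theta) with the prefactor
    # N * I_N(theta) dropped. `theta` may be complex; `gamma` is the
    # sinh-Gordon parameter entering (D.1).
    t = (1j*np.pi - theta) / (2*np.pi)          # \hat{theta}/(2*pi)
    g = gamma / (2*np.pi)
    result = 1.0 + 0.0j
    for k in range(N):
        num = ((1 + (t/(k + 0.5))**2)
               * (1 + (t/(k + 1.5 - g))**2)
               * (1 + (t/(k + 1.0 + g))**2))
        den = ((1 + (t/(k + 1.5))**2)
               * (1 + (t/(k + 0.5 + g))**2)
               * (1 + (t/(k + 1.0 - g))**2))
        result *= (num/den)**(k + 1)
    return result
```

In practice one would then fit the output with a rational function of $\theta$, as described above, before assembling the four-particle form factor (D.4).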
The spectral density is given exactly by
$\rho_{\Theta}(s)=\rho_{\Theta,2}(s)\theta(s-4m^{2})+\rho_{\Theta,4}(s)\theta(s-16m^{2})+\rho_{\Theta,6}(s)\theta(s-36m^{2})+\ldots,$
(D.8)
where
$\rho_{\Theta,2n}\left(s\right)={1\over
2\pi}{1\over(2n)!}\int{d\theta_{1}\ldots
d\theta_{2n}\over(2\pi)^{2n}}\left|F_{2n,0}^{\Theta}\left(\theta_{1},\ldots,\theta_{2n}\right)\right|^{2}\\\
\delta\left(\sum_{i=1}^{2n}m\sinh\theta_{i}\right)\delta\left(\sum_{i=1}^{2n}m\cosh\theta_{i}-\sqrt{s}\right),$
(D.9)
In what follows we will consider only the two- and four-particle contributions to
the spectral density. This means that $\rho_{\Theta}(s)$ remains exact up
to $s=36m^{2}$ and then starts deviating from the exact answer due to six-
and higher-particle states.
The comparison of the sinh-Gordon spectral density (with two- and four-
particle states) and the $\phi^{4}$ spectral density is given in figure 15.
They happen to be very similar in a wide range of values of $s$. Using (D.8)
one can also compute the $C$-function. We show the contributions to the change
of the central charge $\Delta C=12\pi\int_{0}^{\infty}ds{\rho_{\Theta}(s)\over
s^{2}}$ from the two-particle and four-particle form factors in figure 21. As
expected we reproduce the value of the free boson $\Delta C=1$ very well (at
least for $\Lambda\leq 4$). For small values of $\Lambda$, the free boson
central charge is mostly given by the two-particle part of the spectral
density. When $\Lambda$ increases, four- and then higher-particle
contributions become important.
Figure 21: Contributions to $\Delta C$ from the two-particle and four-particle
form factors in the sinh-Gordon/staircase model for various values of the non-
perturbative quartic coupling $\Lambda$. The red dashed line is $\Delta C=1$
for comparison.
# Graph Theoretic Analysis of Three-Terminal Quantum Dot Thermocouples:
Onsager Relations and Spin-Thermoelectric Effects
Nikhil Gupt Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh
208016, India Shuvadip Ghosh Indian Institute of Technology Kanpur, Kanpur,
Uttar Pradesh 208016, India Arnab Ghosh Indian Institute of Technology
Kanpur, Kanpur, Uttar Pradesh 208016, India
###### Abstract
We introduce a simplified model for a three-terminal quantum thermocouple
consisting of two strongly-coupled quantum dots. To elucidate spin-dependent
Seebeck and Peltier effects, we employ a microscopic Hamiltonian and map the
Lindblad master equation onto a quantum transition network, capturing the key
working principles for both reciprocal effects. Our analysis reveals that quantum
thermodynamic networks encompassing both Coulomb interaction and spin-flipping
processes lead to the emergence of spin-thermoelectric effects. Using
algebraic graph theory, we recover the phenomenological law of irreversible
thermodynamics from the stochastic version of the entropy production rate
expressed in terms of cycle flux and cycle forces. Remarkably, Onsager
reciprocity and the Kelvin relation for the transport coefficients find their origin
in the properties of cycle flux trajectories within the quantum transition
network. This underscores the universality of thermodynamic principles across
the classical and quantum realms, even though our results rest on a fundamentally
different basis from the classical laws of irreversible thermodynamics, which
rely on local equilibrium assumptions.
## I Introduction
Thermoelectric devices have garnered significant attention owing to the
continual demand for innovative and effective approaches to temperature
sensors, heat pumps, and energy conversion Rowe (1995); DiSalvo (1999);
Goldsmid (2009); Shakouri and Zebarjadi (2009); Dubi and Di Ventra (2011);
Mazza _et al._ (2015); Benenti _et al._ (2017). This interest is rooted in
the phenomenon of thermoelectricity, where a temperature gradient induces an
electric current (Seebeck effect), and a potential gradient induces a heat
current (Peltier effect). From a thermodynamic point of view, a non-
equilibrium system experiences a distinct set of generalized thermodynamic
forces, arising from its simultaneous couplings with different reservoirs
Callen (1985); Landi and Paternostro (2021). The system’s response to these
external thermodynamic forces is reflected in a corresponding set of
generalized thermodynamic fluxes. The concept has been well investigated in
classical irreversible thermodynamics, with Onsager’s groundbreaking work on
the reciprocity principle of thermoelectric phenomena Onsager (1931a, b);
Callen (1948). Traditionally, thermocouples consisting of two different metal
wires are used to observe such reciprocal effects. Only in recent times has
experimental research on magnetic metals and insulators revealed the
spin Seebeck effect (SSE), wherein a spin current is
generated in response to a thermal gradient Uchida _et al._ (2008); Wu _et
al._ (2015); Zhou _et al._ (2021), and conversely, the spin Peltier effect
(SPE) involves a spin voltage producing a thermal current Flipse _et al._
(2014); Daimon _et al._ (2016); Ohnuma _et al._ (2017). The above findings
have ignited renewed enthusiasm among researchers to grasp the fundamental
aspects of spin caloritronics Bauer _et al._ (2012); Boona _et al._ (2014);
Ronetti _et al._ (2016); ichi UCHIDA (2021) and explore practical
applications such as waste heat recovery and on-chip refrigeration for future
nanoelectronics. As a result, there is considerable interest in understanding
the quantum thermodynamics of nanoscale thermoelectrics through theoretical
modelings Di Ventra (2008); Nazarov and Blanter (2009); Ihn (2010); Heikkilä
(2013); Ren (2013); Whitney _et al._ (2016, 2018); Wang _et al._ (2022) and
experimental setups involving quantum dot (QD) nanostructures, nanowires, and
two-dimensional materials van Houten _et al._ (1992); Lee _et al._ (2016);
Svilans _et al._ (2016); Erlingsson _et al._ (2017); Patel _et al._ (2020);
Han _et al._ (2020); Yang _et al._ (2023).
The quantized energy levels and strong on-site Coulomb interactions among QDs
make them excellent candidates for thermoelectric applications Esposito _et
al._ (2009); Nakpathomkun _et al._ (2010); Donsa _et al._ (2014); Sothmann
_et al._ (2014); Whitney _et al._ (2016); Erdman _et al._ (2017); Whitney
_et al._ (2018); Wang _et al._ (2022) and various other nanoscale thermal
devices Esposito _et al._ (2012); Thierschmann _et al._ (2015); Jiang _et
al._ (2015); Zhang _et al._ (2017); Ghosh _et al._ (2022). While the
discrete QD spectrum can be fine-tuned via external gate voltages and offers
energy-selective transport, the strong Coulombic interaction between electrons
on capacitively coupled QDs can facilitate the transfer of precise amounts of
energy from the heat reservoirs. However, the use of QDs as working substances
for quantum thermodynamic devices, characterized by a limited number of
quantum states, necessitates a completely new understanding of these devices
Whitney _et al._ (2018). The typical thermalization length being larger than
the nanoscale dimension forces these systems to behave in a highly non-trivial
manner, and their transport properties cannot be adequately described by the
usual Boltzmann transport equation Datta (2005), which primarily relies on the
local equilibrium assumptions.
On the contrary, the Lindblad master equation, formulated in terms of the
density matrix, is used as the preferred tool for examining the thermodynamic
properties of the open quantum systems Breuer and Petruccione (2007);
Gelbwaser-Klimovsky _et al._ (2015); Joulain _et al._ (2016); Ghosh _et
al._ (2017); Potts (2019); Gupt _et al._ (2021, 2022); Ghosh _et al._
(2022). Though it is quite effective in accurately calculating the steady-
state currents amid non-equilibrium conditions, it does not reveal any
information about the operational principles and the nature of the transport
coefficients involved in complex quantum systems. In contrast, network theory
in recent years has emerged as a powerful instrument for comprehending non-
equilibrium quantum systems Wang _et al._ (2022); Gupt _et al._ (2023). In
this framework, dissipative quantum dynamics can be represented as a weighted
network featuring nodes and edges Schnakenberg (1976). Here, vertices (nodes)
signify quantum states, and edges denote non-equilibrium transitions from one
quantum state to another, with positive flux rates. Network theory has been
applied for many years to explore complex biological phenomena and chemical
reactions Hill and Chen (1975); Kohler and Vollmerhaus (1980); Ren (2017);
Dutta _et al._ (2020). However, recent work by Wang et al. Wang _et al._
(2022) has drawn considerable attention by utilizing network theory to understand the
principal working mechanism of quantum thermal devices. The present authors
have extended the technique further to molecular systems to unravel hidden
electron transfer pathways in solar cells under strong non-equilibrium
conditions Gupt _et al._ (2023).
In this paper, we leverage the advantages of network theory to elucidate the
operational principles of spin-thermoelectric effects within a three-terminal
quantum setup, closely resembling classical thermocouples. We demonstrate how
spin and energy currents, obtained from the quantum master equation, are
linked to the thermodynamic forces, manifesting spin-Seebeck and spin-Peltier,
as thermodynamic cross-effects. Close parallels between the microscopic and
the macroscopic descriptions of the non-equilibrium system are established by
relating the cycle forces and cycle fluxes within a basic graph to the
thermodynamic forces and fluxes of the phenomenological laws. The central
concept used here is an expression of the entropy production rate within the
framework of algebraic graph theory.
The present work is organized as follows: In Sec. II, we introduce the basic
model of the quantum thermocouple and present the microscopic description
using the Lindblad master equation and quantum kinetic Pauli master equation.
We elaborate the basic framework of network theory in the context of spin-
thermoelectric effects in Sec. III and recover the phenomenological law of
irreversible thermodynamics and Onsager’s reciprocity in terms of network
cycle flux and forces. Operational principles of both spin-Seebeck and spin-
Peltier effects are presented in Sec. IV and finally, we conclude in Sec. V.
## II Microscopic Model and Quantum Master equation
The basic model of a quantum thermocouple consists of two strongly coupled
quantum dots (QDs) via Coulomb interaction. The lower quantum dot, denoted as
${\rm QD}_{l}$, is simultaneously coupled with a spinful free-electron
reservoir (on the left) and a magnon bath (on the right), both maintained at
an equal temperature ($T_{0}$), as depicted in Fig. 1. The upper quantum dot,
${\rm QD}_{u}$, is only coupled with a spinless free-electron reservoir,
acting as a junction like in a classical thermocouple. The spinful free-
electron reservoir comprises spin-polarized electrons with both spin-up
($\uparrow$) and spin-down ($\downarrow$) orientations Vandaele _et al._
(2017); Wang _et al._ (2022). In contrast, the spinless free-electron
reservoir in the middle consists of electrons without any distinct spin
orientation, and the magnon bath at the right is responsible for inducing
spin-flipping of the ${\rm QD}_{l}$ electrons Ren (2013); Wang _et al._
(2022); Vandaele _et al._ (2017); Sothmann and Büttiker (2012). The three-
terminal QD model presented here bears a striking similarity to a classical
thermocouple, particularly in the manifestation of both the spin-Seebeck and
spin-Peltier effects (SSE and SPE). In SSE, a spin current emerges under the
influence of a temperature gradient ($\delta T$), while SPE occurs with the
application of a spin bias voltage at the lower terminals, resembling the open
ends of a conventional thermocouple. Notably, the difference in the
statistical properties of the magnon and electron reservoirs plays a crucial
role in generating spin-thermoelectric effects. This distinction can be
likened to the role of dissimilar metal wires in a classical thermocouple,
highlighting its significance within this quantum framework.
Figure 1: Inset: Schematic diagram of the classical thermocouple. Main:
Schematic diagram of a three-terminal Coulomb-coupled QD thermocouple. The
lower quantum dot (${\rm QD}_{l}$) is coupled to the left reservoir i.e. a
spinful reservoir (in green) and the right reservoir (in blue) i.e. a magnon
bath. Both terminals are kept at equal temperatures and serve as cold ends.
The upper quantum dot (${\rm QD}_{u}$) is coupled to the middle reservoir (in
red) i.e. a spinless electron reservoir. Here, heat is transferred from the
middle reservoir, which acts as a junction (hot end), and the spin current flows
across the lower two terminals, analogous to the open ends of a thermocouple.
The total Hamiltonian of the entire three-terminal setup is given below
$\displaystyle H=H_{\rm S}+H_{\rm B}+H_{\rm I},$

$\displaystyle H_{\rm S}=\sum_{\sigma=\\{\uparrow,\downarrow\\}}\varepsilon_{l\sigma}{n}_{l\sigma}+\varepsilon_{u}{n}_{u}+\sum_{\sigma=\\{\uparrow,\downarrow\\}}{\rm U}\,{n}_{u}{n}_{l\sigma},$ (1)

$\displaystyle H_{\rm B}=H_{\rm L}+H_{\rm M}+H_{\rm R}=\sum_{\sigma,k}(\epsilon_{{\rm L}\sigma k}-\mu_{{\rm L}\sigma})b^{\dagger}_{{\rm L}\sigma k}b_{{\rm L}\sigma k}+\sum_{k}(\epsilon_{{\rm M}k}-\mu_{\rm M})b^{\dagger}_{{\rm M}k}b_{{\rm M}k}+\sum_{q}\epsilon_{{\rm R}q}a^{\dagger}_{{\rm R}q}a_{{\rm R}q},$ (2)

$\displaystyle H_{\rm I}=H_{\rm IL}+H_{\rm IM}+H_{\rm IR}=\hbar\sum_{\sigma,k}(t_{{\rm L}k}b^{\dagger}_{{\rm L}\sigma k}d_{l\sigma}+t^{*}_{{\rm L}k}d^{\dagger}_{l\sigma}b_{{\rm L}\sigma k})+\hbar\sum_{k}(t_{{\rm M}k}b^{\dagger}_{{\rm M}k}d_{u}+t^{*}_{{\rm M}k}d^{\dagger}_{u}b_{{\rm M}k})+\hbar\sum_{q}(g_{{\rm R}q}a^{\dagger}_{{\rm R}q}d^{\dagger}_{l\uparrow}d_{l\downarrow}+g^{*}_{{\rm R}q}d^{\dagger}_{l\downarrow}d_{l\uparrow}a_{{\rm R}q}).$ (3)
Equation (1) represents the total system Hamiltonian of the two Coulomb-
coupled QDs, where ${\rm U}$ describes the long-range positive Coulomb
repulsion energy that permits energy exchange but forbids any particle
exchange between the QDs. The operator
${n}_{l\sigma}=d^{\dagger}_{l\sigma}d_{l\sigma}$ is the number operator for
${\rm QD}_{l}$, with eigenstates
$|\phi_{l}\rangle=\\{|0\rangle,\ket{\uparrow},\ket{\downarrow}\\}$ and
corresponding eigenenergies $0$, $\varepsilon_{l\uparrow}$ and
$\varepsilon_{l\downarrow}$, respectively, where $d^{\dagger}_{l\sigma}$
($d_{l\sigma}$) denotes the electron creation (annihilation) operator with a
single particle energy level $\varepsilon_{l\sigma}$, obeying anti-commutation
relation
$\\{d_{l\sigma},d^{\dagger}_{l\sigma^{\prime}}\\}=\delta_{\sigma\sigma^{\prime}}$;
$\sigma$ being the spin orientation of the electrons. Similarly,
$n_{u}=d^{\dagger}_{u}d_{u}$ is the number operator for ${\rm QD}_{u}$, with
eigenstates $|\phi_{u}\rangle={\\{|0\rangle,|1\rangle}\\}$ and corresponding
eigenenergies $0$ and $\varepsilon_{u}$, respectively, where,
$d^{\dagger}_{u}$ ($d_{u}$) represents the electron creation (annihilation)
operator for ${\rm QD}_{u}$, with a single particle energy level of
$\varepsilon_{u}$, satisfying the anti-commutation relation
$\\{d_{u},d^{\dagger}_{u}\\}=1$.
Equation (2) describes the total bath Hamiltonian $H_{\rm B}$, wherein $H_{\rm
L}$, $H_{\rm M}$ and $H_{\rm R}$ are the respective Hamiltonians for the left
(L), middle (M) and right (R) reservoirs. The operators $b^{\dagger}_{{\rm
L}\sigma k}$ ($b^{\dagger}_{{\rm M}k}$) and $b_{{\rm L}\sigma k}$ ($b_{{\rm
M}k}$) represent the creation and annihilation operators of electrons for the
L and M baths, where, $\epsilon_{{\rm L}\sigma k}$ and $\mu_{{\rm L}\sigma}$
stand for the energy and chemical potential of electrons corresponding to the
spinful fermionic reservoir (L), with $k$ being the continuous wave number
(momentum) and $\sigma$ denotes the electron spin. The difference between the
chemical potentials $\mu_{\rm L\downarrow}$ and $\mu_{\rm L\uparrow}$ is given
by the spin bias voltage i.e., $\Delta\mu_{\rm S}=\mu_{\rm
L\downarrow}-\mu_{\rm L\uparrow}$. On the other hand, $\epsilon_{{\rm M}k}$
and $\mu_{\rm M}$ refer to the energy and chemical potential of electrons
without any specific spin orientation for the spinless fermionic reservoir
(M). For the magnon bath (R), $a^{\dagger}_{{\rm R}q}$ and $a_{{\rm R}q}$ are
the bosonic creation and annihilation operators with the energy
$\epsilon_{{\rm R}q}$ and momentum $q$ respectively.
Equation (3) provides the total system-reservoir interaction Hamiltonian
$H_{\rm I}$, where $H_{\rm I\alpha}$ ($\alpha={\rm L,M,R}$) represents the
interaction between the system and the $\alpha$-th reservoir. Here the ${\rm
QD}_{l}$ (${\rm QD}_{u}$) is tunnel-coupled to the L (M) reservoir with the
tunneling amplitudes $t_{\rm L(M)}$, allowing both particle and energy
exchange with the QDs, while ${\rm QD}_{l}$ is simultaneously coupled to a
magnon bath which flips only one spin at a time. Under strong coupling, the
eigenstates of $H_{\rm S}$ are determined by the tensor product of the number
operator’s eigenbasis $|\phi_{u}\phi_{l}\rangle$ of the coupled QD system. For
convenience, the six microstates of the coupled system
${\\{|0\rangle,|1\rangle}\\}\otimes\\{|0\rangle,\ket{\uparrow},\ket{\downarrow}\\}$,
are labeled by $|\mathbb{1}\rangle=|00\rangle$,
$|\mathbb{2}\rangle=|10\rangle$, $|\mathbb{3}\rangle=\ket{0\uparrow}$,
$|\mathbb{4}\rangle=\ket{0\downarrow}$, $|\mathbb{5}\rangle=\ket{1\uparrow}$,
$|\mathbb{6}\rangle=\ket{1\downarrow}$ and their corresponding eigenenergies
($\varepsilon_{\mathbb{i}}$, $\mathbb{i}=1,2,\ldots,6$) are given by
$\varepsilon_{\mathbb{1}}=0$, $\varepsilon_{\mathbb{2}}=\varepsilon_{u}$,
$\varepsilon_{\mathbb{3}}=\varepsilon_{l\uparrow}$,
$\varepsilon_{\mathbb{4}}=\varepsilon_{l\downarrow}$,
$\varepsilon_{\mathbb{5}}=\varepsilon_{u}+\varepsilon_{l\uparrow}+{\rm U}$ and
$\varepsilon_{\mathbb{6}}=\varepsilon_{u}+\varepsilon_{l\downarrow}+{\rm U}$
respectively. There are in total nine allowed transitions: The transitions
$|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle$,
$|\mathbb{1}\rangle\leftrightarrow|\mathbb{4}\rangle$,
$|\mathbb{2}\rangle\leftrightarrow|\mathbb{5}\rangle$ and
$|\mathbb{2}\rangle\leftrightarrow|\mathbb{6}\rangle$ are driven by the
reservoir L, while the transitions
$|\mathbb{1}\rangle\leftrightarrow|\mathbb{2}\rangle$,
$|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle$ and
$|\mathbb{4}\rangle\leftrightarrow|\mathbb{6}\rangle$ are induced by the
reservoir M, and the transitions
$|\mathbb{3}\rangle\leftrightarrow|\mathbb{4}\rangle$ and
$|\mathbb{5}\rangle\leftrightarrow|\mathbb{6}\rangle$ are triggered by the
bath R.
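For concreteness, the microstate labels, their eigenenergies, and the nine allowed transitions listed above can be collected in a small data structure. The following minimal Python sketch (the parameter names eps_u, eps_lu, eps_ld and the dictionary layout are our own illustrative choices, not taken from the paper) is reused in the later sketches.

```python
# Six microstates |i> of the strongly coupled QDs and their eigenenergies;
# eps_u, eps_lu (= eps_{l,up}), eps_ld (= eps_{l,down}) and U are model parameters.
def eigenenergies(eps_u, eps_lu, eps_ld, U):
    return {
        1: 0.0,                 # |1> = |00>
        2: eps_u,               # |2> = |10>
        3: eps_lu,              # |3> = |0, up>
        4: eps_ld,               # |4> = |0, down>
        5: eps_u + eps_lu + U,  # |5> = |1, up>
        6: eps_u + eps_ld + U,  # |6> = |1, down>
    }

# Nine allowed transitions (edges of the basic graph), grouped by the
# reservoir that drives them.
EDGES = {
    "L": [(1, 3), (1, 4), (2, 5), (2, 6)],  # spinful electron reservoir
    "M": [(1, 2), (3, 5), (4, 6)],          # spinless electron reservoir
    "R": [(3, 4), (5, 6)],                  # magnon bath (spin flips)
}
```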
To calculate the thermal spin ($J_{\rm S}$) and energy current ($J_{\rm E}$)
under the SSE and SPE, we first derive the Lindblad quantum master equation of
the reduced density matrix $\rho$ for the coupled QDs system under the Born-
Markov and Secular (BMS) approximation Breuer and Petruccione (2007);
Strasberg (2022); Gupt _et al._ (2022, 2023) (see Appendix A)
$\frac{d\rho}{dt}=\mathcal{L}_{\rm L}[\rho]+\mathcal{L}_{\rm
R}[\rho]+\mathcal{L}_{\rm M}[\rho].$ (4)
Here $\mathcal{L}_{\alpha}$ ($\alpha={\rm L,R,M}$) is the Lindbladian due to
the interaction of the quantum system with its $\alpha$-th reservoir. The
explicit form of the superoperator $\mathcal{L}$ is given in terms of
dissipater
$\mathcal{D}(C)[\rho]=C\rho
C^{\dagger}-\frac{1}{2}\\{\rho,C^{\dagger}C\\},\;C\in\\{d_{l\sigma},d_{u},d^{\dagger}_{l\uparrow}d_{l\downarrow}\\},$
(5)
as follows:
$\displaystyle\mathcal{L}_{\rm L}[\rho]$ $\displaystyle=$
$\displaystyle\sum_{\sigma=\\{\uparrow,\downarrow\\}}\mathcal{L}_{{\rm
L}\sigma}[\rho],$ $\displaystyle\mathcal{L}_{{\rm L}\sigma}[\rho]$
$\displaystyle=$ $\displaystyle\sum_{\\{\varepsilon_{{\rm
L}\sigma}\\}}\gamma_{\rm L}\Big{[}f(\varepsilon_{{\rm L}\sigma},\mu_{{\rm
L}\sigma},T_{\rm L})\mathcal{D}(d^{\dagger}_{l\sigma})[\rho]$ (6)
$\displaystyle+$ $\displaystyle(1-f(\varepsilon_{{\rm L}\sigma},\mu_{{\rm
L}\sigma},T_{\rm L}))\mathcal{D}(d_{l\sigma})[\rho]\Big{]},$
$\displaystyle\mathcal{L}_{\rm M}[\rho]$ $\displaystyle=$
$\displaystyle\sum_{\\{\varepsilon_{\rm M}\\}}\gamma_{\rm
M}\Big{[}f(\varepsilon_{\rm M},\mu_{\rm M},T_{\rm
M})\mathcal{D}(d^{\dagger}_{u})[\rho]$ (7) $\displaystyle+$
$\displaystyle(1-f(\varepsilon_{\rm M},\mu_{\rm M},T_{\rm
M}))\mathcal{D}(d_{u})[\rho]\Big{]},$ $\displaystyle\mathcal{L}_{\rm R}[\rho]$
$\displaystyle=$ $\displaystyle\sum_{\\{\varepsilon_{\rm R}\\}}\gamma_{\rm
R}\Big{[}n(\varepsilon_{\rm R},T_{\rm
R})\mathcal{D}(d^{\dagger}_{l\downarrow}d_{l\uparrow})[\rho]$ (8)
$\displaystyle+$ $\displaystyle(1+n(\varepsilon_{\rm R},T_{\rm
R}))\mathcal{D}(d^{\dagger}_{l\uparrow}d_{l\downarrow})[\rho]\Big{]}.$
Note that we have implemented the strong coupling formalism to derive the
interaction picture master equation presented above Werlang _et al._ (2014);
Ghosh _et al._ (2022). Here, the strong coupling refers to the interaction
between the two QDs, while the system-reservoir coupling is assumed to be
weak, allowing for the safe implementation of the BMS approximation. In Eqs.
(6)-(8), all $\gamma$ values stand for the bare tunneling rates associated
with individual processes and depend on the system-reservoir coupling strength
through the respective bath spectral function. Lastly,
$f(\varepsilon,\mu,T)=[e^{(\varepsilon-\mu)/k_{B}T}+1]^{-1}$ and
$n(\varepsilon,T)=[e^{\varepsilon/k_{B}T}-1]^{-1}$ are respectively the Fermi-
Dirac (FD) and Bose-Einstein (BE) distribution functions with the positive
transition energy $\varepsilon$, chemical potential $\mu$ and temperature $T$
associated with the thermal reservoir, where $k_{B}$ is the Boltzmann
constant. Since the Hamiltonian $H_{\rm S}$ in Eq. (1) is diagonal in the
number state eigenbasis of the coupled QDs system, the reduced density matrix
$\rho$ of the above Lindblad master equation effectively decouples the
diagonal and off-diagonal matrix elements in the eigenbasis of $H_{\rm S}$
Ghosh _et al._ (2022). The diagonal elements of the density matrix $\rho$
signify the occupation probabilities of each microstate and the time evolution
is given by
$\displaystyle\frac{dP_{\mathbb{1}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{12}}+J_{\mathbb{13}}+J_{\mathbb{14}},$ (9)
$\displaystyle\frac{dP_{\mathbb{2}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{21}}+J_{\mathbb{25}}+J_{\mathbb{26}},$ (10)
$\displaystyle\frac{dP_{\mathbb{3}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{31}}+J_{\mathbb{34}}+J_{\mathbb{35}},$ (11)
$\displaystyle\frac{dP_{\mathbb{4}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{41}}+J_{\mathbb{43}}+J_{\mathbb{46}},$ (12)
$\displaystyle\frac{dP_{\mathbb{5}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{52}}+J_{\mathbb{53}}+J_{\mathbb{56}},$ (13)
$\displaystyle\frac{dP_{\mathbb{6}}}{dt}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{62}}+J_{\mathbb{64}}+J_{\mathbb{65}}.$ (14)
Here $J_{\mathbb{ij}}$ stands for the net transition rate from state
$|\mathbb{j}\rangle$ to $|\mathbb{i}\rangle$ which is given by
$\displaystyle J_{\mathbb{ij}}$ $\displaystyle=$ $\displaystyle
k_{\mathbb{ij}}P_{\mathbb{j}}-k_{\mathbb{ji}}P_{\mathbb{i}},$ (15)
$\displaystyle J_{\mathbb{ij}}$ $\displaystyle=$ $\displaystyle-
J_{\mathbb{ji}},\;\quad\mathbb{i,j}=1,2,\ldots,6$ (16)
where, $P_{\mathbb{i}}=\langle i|\rho|i\rangle$ is the population of the
$\mathbb{i}$-th eigenstate and $k_{\mathbb{ji}}$
($k_{\mathbb{\ket{j}\leftarrow\ket{i}}}$) gives the transition probability
from microstate $|\mathbb{i}\rangle$ to microstate $|\mathbb{j}\rangle$. The
rate expressions $k_{\mathbb{ji}}$ for all transitions in terms of $\gamma$
and the distribution functions can be summarized as follows:
$\displaystyle k_{\mathbb{31}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}f(\varepsilon_{l\uparrow},\mu_{{\rm L}\uparrow},T_{\rm L}),$ $\displaystyle
k_{\mathbb{13}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}[1-f(\varepsilon_{l\uparrow},\mu_{{\rm L}\uparrow},T_{\rm L})],$
$\displaystyle k_{\mathbb{41}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}f(\varepsilon_{l\downarrow},\mu_{{\rm L}\downarrow},T_{\rm L}),$
$\displaystyle k_{\mathbb{14}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}[1-f(\varepsilon_{l\downarrow},\mu_{{\rm L}\downarrow},T_{\rm L})],$
$\displaystyle k_{\mathbb{52}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}f(\varepsilon_{l\uparrow}+{\rm U},\mu_{{\rm L}\uparrow},T_{\rm L}),$
$\displaystyle k_{\mathbb{25}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}[1-f(\varepsilon_{l\uparrow}+{\rm U},\mu_{{\rm L}\uparrow},T_{\rm L})],$
$\displaystyle k_{\mathbb{62}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}f(\varepsilon_{l\downarrow}+{\rm U},\mu_{{\rm L}\downarrow},T_{\rm L}),$
$\displaystyle k_{\mathbb{26}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
L}[1-f(\varepsilon_{l\downarrow}+{\rm U},\mu_{{\rm L}\downarrow},T_{\rm L})],$
$\displaystyle k_{\mathbb{21}}$ $\displaystyle=$ $\displaystyle\gamma_{\rm
M}f(\varepsilon_{u},\mu_{\rm M},T_{\rm M}),$ $\displaystyle k_{\mathbb{12}}$
$\displaystyle=$ $\displaystyle\gamma_{\rm M}[1-f(\varepsilon_{u},\mu_{\rm
M},T_{\rm M})],$ $\displaystyle k_{\mathbb{53}}$ $\displaystyle=$
$\displaystyle k_{\mathbb{64}}=\gamma_{\rm M}f(\varepsilon_{u}+{\rm
U},\mu_{\rm M},T_{\rm M}),$ $\displaystyle k_{\mathbb{35}}$ $\displaystyle=$
$\displaystyle k_{\mathbb{46}}=\gamma_{\rm M}[1-f(\varepsilon_{u}+{\rm
U},\mu_{\rm M},T_{\rm M})],$ $\displaystyle k_{\mathbb{43}}$ $\displaystyle=$
$\displaystyle k_{\mathbb{65}}=\gamma_{\rm
R}n(\varepsilon_{l\downarrow}-\varepsilon_{l\uparrow},T_{\rm R}),$
$\displaystyle k_{\mathbb{34}}$ $\displaystyle=$ $\displaystyle
k_{\mathbb{56}}=\gamma_{\rm
R}[1+n(\varepsilon_{l\downarrow}-\varepsilon_{l\uparrow},T_{\rm R})].$ (17)
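The rate expressions of Eq. (17) are straightforward to transcribe numerically; a minimal sketch in Python/NumPy is given below (the parameter-dictionary keys are an illustrative convention of ours, with $k_{B}=1$, and are kept consistent in the later sketches).

```python
import numpy as np

def fermi(eps, mu, T, kB=1.0):
    """Fermi-Dirac occupation f(eps, mu, T)."""
    return 1.0 / (np.exp((eps - mu) / (kB * T)) + 1.0)

def bose(eps, T, kB=1.0):
    """Bose-Einstein occupation n(eps, T) for eps > 0."""
    return 1.0 / (np.exp(eps / (kB * T)) - 1.0)

def rates(p):
    """Transition rates k[(i, j)] = k_{i<-j} of Eq. (17).
    p holds eps_u, eps_lu, eps_ld, U, mu_Lu, mu_Ld, mu_M,
    T_L, T_M, T_R, gamma_L, gamma_M, gamma_R."""
    fLu  = fermi(p["eps_lu"],            p["mu_Lu"], p["T_L"])
    fLd  = fermi(p["eps_ld"],            p["mu_Ld"], p["T_L"])
    fLuU = fermi(p["eps_lu"] + p["U"],   p["mu_Lu"], p["T_L"])
    fLdU = fermi(p["eps_ld"] + p["U"],   p["mu_Ld"], p["T_L"])
    fM   = fermi(p["eps_u"],             p["mu_M"],  p["T_M"])
    fMU  = fermi(p["eps_u"] + p["U"],    p["mu_M"],  p["T_M"])
    nR   = bose(p["eps_ld"] - p["eps_lu"], p["T_R"])
    gL, gM, gR = p["gamma_L"], p["gamma_M"], p["gamma_R"]
    return {
        (3, 1): gL * fLu,        (1, 3): gL * (1 - fLu),
        (4, 1): gL * fLd,        (1, 4): gL * (1 - fLd),
        (5, 2): gL * fLuU,       (2, 5): gL * (1 - fLuU),
        (6, 2): gL * fLdU,       (2, 6): gL * (1 - fLdU),
        (2, 1): gM * fM,         (1, 2): gM * (1 - fM),
        (5, 3): gM * fMU,        (3, 5): gM * (1 - fMU),
        (6, 4): gM * fMU,        (4, 6): gM * (1 - fMU),
        (4, 3): gR * nR,         (6, 5): gR * nR,
        (3, 4): gR * (1 + nR),   (5, 6): gR * (1 + nR),
    }
```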
Combining Eqs. (9)-(14) with Eq. (15), it is evident that the evolution
equations for the microscopic probabilities exhibit linearity with respect to
the populations $\\{P_{\mathbb{i}}\\}$. As a result, we can cast these
equations in the following compact form
$\frac{dP_{\mathbb{i}}}{dt}=\sum^{\mathbb{6}}_{\mathbb{j=1}}J_{\mathbb{ij}}=\sum^{\mathbb{6}}_{\mathbb{j=1}}\big(k_{\mathbb{ij}}P_{\mathbb{j}}-k_{\mathbb{ji}}P_{\mathbb{i}}\big);\quad\mathbb{i}\neq\mathbb{j},$ (18)
where $\sum^{\mathbb{6}}_{\mathbb{i=1}}P_{\mathbb{i}}=1$. Equation (18) is
known as the quantum kinetic Pauli master equation which is “classical” in
appearance but quantum mechanical in content through the transition probabilities
$\\{k_{\mathbb{ij}}\\}$, determined by Fermi’s golden rule within the BMS
approximation and the statistical properties of the respective quantum baths
Sinha _et al._ (2011a, b). To obtain the steady-state solution
$\bar{P}_{\mathbb{i}}$ of Eq.(18), one has to solve the system of linear
equations, satisfying the conditions $0\leq\bar{P}_{\mathbb{i}}\leq 1$ and
$\sum_{\mathbb{i}}\bar{P}_{\mathbb{i}}=1$. With the help of Eq. (18) and the
rate coefficients calculated from the above microscopic picture [cf. Eq. (17)], it
is possible to evaluate the steady-state spin and energy currents in terms of
the net transition rates (Cf. Eq. (15)) between the system microstates, where
$\\{P_{\mathbb{i}}\\}$ get replaced by the steady-state populations
$\\{\bar{P}_{\mathbb{i}}\\}$. Following the definition of the spin and energy
currents mentioned in Appendix-A, we obtain the mathematical expression of the
steady-state spin current $J_{\rm S}$ which flows from left to right, as
$\displaystyle J_{\rm S}=\frac{1}{2}\Big(\Tr\\{d^{\dagger}_{l\downarrow}d_{l\downarrow}\mathcal{L}_{\rm L\downarrow}[\rho]\\}-\Tr\\{d^{\dagger}_{l\uparrow}d_{l\uparrow}\mathcal{L}_{\rm L\uparrow}[\rho]\\}\Big)=J_{\mathbb{34}}+J_{\mathbb{56}}=(k_{\mathbb{34}}\bar{P}_{\mathbb{4}}-k_{\mathbb{43}}\bar{P}_{\mathbb{3}})+(k_{\mathbb{56}}\bar{P}_{\mathbb{6}}-k_{\mathbb{65}}\bar{P}_{\mathbb{5}}),$ (19)
and the steady-state energy (heat) current $J_{\rm E}$, through the middle
reservoir is given by
$\displaystyle J_{\rm E}$ $\displaystyle=$
$\displaystyle\Tr\\{\mathcal{L}_{\rm M}[\rho]H_{\rm S}\\}={\rm
U}(J_{\mathbb{53}}+J_{\mathbb{64}})$ (20) $\displaystyle=$ $\displaystyle{\rm
U}[(k_{\mathbb{53}}\bar{P}_{\mathbb{3}}-k_{\mathbb{35}}\bar{P}_{\mathbb{5}})+(k_{\mathbb{64}}\bar{P}_{\mathbb{4}}-k_{\mathbb{46}}\bar{P}_{\mathbb{6}})].$
Eq. (20) immediately implies that a finite energy current always requires a
finite Coulomb interaction energy. However, obtaining the exact analytical
solutions for $J_{\rm S}$, $J_{\rm E}$ in terms of steady-state populations
[Eqs. (II) and (20)] by solving the linear master equation [Eq. (18)] is by no
means a trivial task. Secondly, while, one may in principle use exact Eqs.
(LABEL:spin-current-Js) and (20) to numerically compute the steady-state spin
and energy currents, it does not provide any physical insight into the
underlying transport mechanisms leading to SSE and SPE. Nor does it explain
how the macroscopic spin and energy currents are related to the thermodynamic
forces that give rise to spin-thermoelectric effects as a manifestation of
thermodynamic cross-effects.
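As a numerical sanity check (not the authors' implementation), one can solve the Pauli master equation (18) at steady state and evaluate the spin and energy currents given above directly; a minimal sketch reusing the rates() helper from the previous snippet is:

```python
import numpy as np

def steady_state(k):
    """Stationary populations P_i of the Pauli master equation (18)."""
    W = np.zeros((6, 6))
    for (i, j), kij in k.items():      # gain term k_{i<-j} P_j ...
        W[i - 1, j - 1] += kij
        W[j - 1, j - 1] -= kij          # ... and matching loss for state j
    # Replace one balance equation by the normalization sum_i P_i = 1.
    A = np.vstack([W[:-1, :], np.ones(6)])
    b = np.zeros(6); b[-1] = 1.0
    return np.linalg.solve(A, b)

def currents(k, P, U):
    """Steady-state spin current J_S (Eq. 19) and energy current J_E (Eq. 20)."""
    J = lambda i, j: k[(i, j)] * P[j - 1] - k[(j, i)] * P[i - 1]
    J_S = J(3, 4) + J(5, 6)
    J_E = U * (J(5, 3) + J(6, 4))
    return J_S, J_E
```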
An alternative yet effective method is to calculate algebraic expressions for
steady-state currents through a network or mathematical graph theory Tutte
(2001); Balakrishnan and Ranganathan (2012). This also allows us to understand
the operational principles of QD-based spin-thermoelectric effects quite
easily. In this method, one first constructs a basic graph $\mathbb{G}$ as a
diagrammatic representation of the right-hand side of Eq. (18). To extract the
principal mechanism from complex transport behaviors, one then decomposes the
quantum transition network into cycle trajectories, collects the cycle fluxes
using algebraic graph theory, and selects the top-ranked cycle fluxes—i.e.,
the cycle trajectories with the highest probabilities Wang _et al._ (2022).
In the following section, we illustrate this method in the context of the
present problem and establish the connection between the microscopic
descriptions of the non-equilibrium system via the basic graph and the
macroscopic description of thermoelectric phenomena in terms of thermodynamic
forces and fluxes, including the celebrated Onsager and Kelvin relations
Callen (1985). The key concept throughout the entire formalism is the
expression of entropy production rate in the framework of graph theory Landi
and Paternostro (2021); Schnakenberg (1976).
## III Network theory and Reciprocity relation
Network or graph theory found its first application in electricity, with
Kirchhoff making a pioneering contribution to the understanding of electrical
circuits as non-equilibrium systems involving electric current and potential.
Since then, graph theory has expanded its horizons and produced a flurry of
inspiring early works by Hill, Kohler, Vollmerhaus, King, and Altman Hill and
Chen (1975); Kohler and Vollmerhaus (1980); King and Altman (1956),
particularly on biophysical and biochemical systems. A vast body of literature
is available on this subject Wu _et al._ (2012); Einax _et al._ (2011);
Einax and Nitzan (2014); Ren (2017); Dutta _et al._ (2020); still,
Schnakenberg’s 1976 review is considered a seminal contribution to this field
Schnakenberg (1976).
### III.1 Quantum Transition network and Cycle flux analysis
As an extension of network theory to quantum systems, the notable work of Wang
et al. Wang _et al._ (2022) is worth mentioning. They have recently
demonstrated that the dissipative quantum dynamics of non-equilibrium
transport can be mapped onto networks of quantum state transitions, where
nodes or vertices correspond to quantum states, and the connecting lines or
edges between two quantum states represent their allowed transitions.
Figure 2: Schematic diagram of the basic graph ($\mathbb{G}$). Subcycles
$\\{\mathcal{C}_{1},\mathcal{C}_{5},\mathcal{C}_{8},\mathcal{C}_{9},\mathcal{C}_{10}\\}$,
sharing the common edge ($\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{4}}$)
are used to calculate edge flux $J_{\mathbb{43}}$ [Cf. Eq.(24)].
In the present case, the diagrammatic representation of the quantum transport
processes under the non-equilibrium condition is shown in Fig. 2 in the form
of a basic graph ($\mathbb{G}$), where each node or vertex represents a
quantum state $\\{|\mathbb{i}\rangle\\}$ along with its associated
(microscopic) occupation probability $\\{P_{\mathbb{i}}\\}$. The transition
between adjacent quantum states $|\mathbb{i}\rangle$ and $|\mathbb{j}\rangle$
are depicted by edges. The steady-state population $\bar{P}_{\mathbb{i}}$ can
then be calculated as
$\bar{P}_{\mathbb{i}}=\frac{\Lambda_{\mathbb{i}}}{\Lambda},\quad\text{with}\quad
0\leq\bar{P}_{\mathbb{i}}\leq
1\quad\text{and}\quad\sum_{\mathbb{i}}\bar{P}_{\mathbb{i}}=1;$ (21)
where $\Lambda_{\mathbb{i}}$ represents the sum of the weights of the spanning
trees rooted on the $\mathbb{\ket{i}}$-th state and $\Lambda$ is defined as
the sum of the weights of the spanning trees rooted on every individual state
$\\{\mathbb{\ket{i}}\\}$, i.e. $\sum_{\mathbb{i}}\Lambda_{\mathbb{i}}$. In the
literature, the above method is known as Kirchhoff’s theorem Schnakenberg
(1976); Kirchhoff (1847). According to this theorem, a spanning tree is a
subgraph of $\mathbb{G}$ which includes all the vertices with the minimum
number of edges that are always connected but have no circuits (cyclic
sequence of edges or cycle trajectory). To construct a spanning tree, one
should remove $\nu=e-v+1$ number of edges of the basic graph $\mathbb{G}$,
where $e$ and $v$ are the numbers of edges and vertices in $\mathbb{G}$
Schnakenberg (1976). As a result, all possible spanning trees contain the same
number of vertices ($v$) and the same number of edges ($v-1$).
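These counting statements are easy to verify numerically: for the basic graph of Fig. 2 one has $v=6$ and $e=9$, so $\nu=e-v+1=4$, and Kirchhoff's matrix-tree theorem gives the total number of spanning trees as any cofactor of the unweighted graph Laplacian. A small illustrative sketch:

```python
import numpy as np

# Undirected edges of the basic graph G of Fig. 2 (quantum states 1..6).
edges = [(1, 2), (1, 3), (1, 4), (2, 5), (2, 6),
         (3, 4), (3, 5), (4, 6), (5, 6)]
v, e = 6, len(edges)
nu = e - v + 1                      # edges removed to obtain a spanning tree
print("nu = e - v + 1 =", nu)       # -> 4 for this graph

# Unweighted graph Laplacian L = D - A.
L = np.zeros((v, v))
for i, j in edges:
    L[i - 1, i - 1] += 1
    L[j - 1, j - 1] += 1
    L[i - 1, j - 1] -= 1
    L[j - 1, i - 1] -= 1

# Kirchhoff's matrix-tree theorem: number of spanning trees = any cofactor of L.
print("spanning trees of G:", round(np.linalg.det(L[1:, 1:])))
```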
Under the non-equilibrium condition, each edge represents a transport process
and the rate of these transport processes is determined by the net transition
rate or edge flux. The steady-state edge flux from a state $\mathbb{\ket{j}}$
to $\mathbb{\ket{i}}$ is defined as
$J_{\mathbb{ij}}=k_{\mathbb{ij}}\bar{P}_{\mathbb{j}}-k_{\mathbb{ji}}\bar{P}_{\mathbb{i}},$
(22)
where, each edge denotes a pair of transitions with the transition
probabilities $k_{\mathbb{ij}}$ (from $|\mathbb{j}\rangle$ to
$|\mathbb{i}\rangle$) and $k_{\mathbb{ji}}$ (from $|\mathbb{i}\rangle$ to
$|\mathbb{j}\rangle$) Schnakenberg (1976); Wu _et al._ (2012). Typically, a
basic graph $\mathbb{G}$ is comprised of numerous undirected subcycles
($\mathcal{C}$), and each of these subcycles represents a pair of two one-
directional circuits [Fig. 2], namely $\mathcal{C}^{+}$ (counterclockwise) and
$\mathcal{C}^{-}$ (clockwise) Kohler and Vollmerhaus (1980). Since the
circuits are formed by the cyclic sequence of edges within $\mathbb{G}$, the
edge flux can be defined in terms of the circuit fluxes Schnakenberg (1976),
as
$J_{\mathbb{ij}}=\sum_{\mathcal{C}}\mathcal{S}_{ij}(\mathcal{C})(J^{+}_{\mathcal{C}}-J^{-}_{\mathcal{C}})=\sum_{\mathcal{C}}\mathcal{S}_{ij}(\mathcal{C})J_{\mathcal{C}}.$
(23)
Here $J_{\mathcal{C}}=J^{+}_{\mathcal{C}}-J^{-}_{\mathcal{C}}$ denotes the net
cycle flux wherein $J^{+}_{\mathcal{C}}$ and $J^{-}_{\mathcal{C}}$ are the
circuit fluxes correspond to circuits $\mathcal{C}^{+}$ and $\mathcal{C}^{-}$
respectively, with the prefactor $\mathcal{S}_{\mathbb{ij}}(\mathcal{C})=0,\pm
1$. $\mathcal{S}_{\mathbb{ij}}(\mathcal{C})=0$ if $\mathcal{C}^{+}$ and
$\mathcal{C}^{-}$ do not contain the edge
$\mathbb{\ket{j}}\rightarrow\mathbb{\ket{i}}$;
$\mathcal{S}_{\mathbb{ij}}(\mathcal{C})=+1$ if the orientation of
$\mathcal{C}^{+}$ ($\mathcal{C}^{-}$) is along (opposite) to edge
$\mathbb{\ket{j}}\rightarrow\mathbb{\ket{i}}$ and
$\mathcal{S}_{\mathbb{ij}}(\mathcal{C})=-1$ if the orientation of
$\mathcal{C}^{+}$ ($\mathcal{C}^{-}$) is opposite (along) to edge
$\mathbb{\ket{j}}\rightarrow\mathbb{\ket{i}}$. For example, the edge flux
$J_{\mathbb{43}}$ ($J_{\mathbb{\ket{4}\leftarrow\ket{3}}}$) in the basic graph
$\mathbb{G}$, can be expressed in terms of the circuit fluxes [Fig. 2] as
$J_{\mathbb{43}}=J^{+}_{\mathcal{C}_{1}}-J^{-}_{\mathcal{C}_{1}}-J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{5}}+J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}}-J^{+}_{\mathcal{C}_{9}}+J^{-}_{\mathcal{C}_{9}}-J^{+}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{10}}.$
(24)
The name “circuit” was initially introduced by Kohler and Vollmerhaus Kohler
and Vollmerhaus (1980) and also termed a “one-way cycle” by Hill Hill and
Kedem (1966). However, we prefer to use the term “circuit” or “cycle
trajectory” to avoid confusion with the usual “cycle”. Hill and Chen provided
the physical interpretation for circuit fluxes Hill and Chen (1975), revealing
that these fluxes signify the ‘frequency’ (or rate) of circuit completions
along a particular cycle trajectory. To be specific, the circuit flux
associated with a one-directional cycle trajectory $\mathcal{C^{\pm}}$ is
given by
$J^{\pm}_{\mathcal{C}}=\Pi^{\pm}_{\mathcal{C}}\frac{\Lambda_{\mathcal{C}}}{\Lambda}.$
(25)
Here, $\Pi^{\pm}_{\mathcal{C}}$ denotes the weight factor which is determined
by the product of the transition rates along the circuit $\mathcal{C^{\pm}}$.
For example, the clockwise cycle trajectory
$\mathcal{C}^{-}_{1}(\mathbb{\ket{1}}\rightarrow\mathbb{\ket{4}}\rightarrow\mathbb{\ket{3}}\rightarrow\mathbb{\ket{1}})$
[Fig. 2] has the weight factor
$\Pi^{-}_{\mathcal{C}_{1}}=k_{\mathbb{13}}k_{\mathbb{34}}k_{\mathbb{41}}$,
where $\Lambda_{\mathcal{C}}$ represents the sum of the weights of the
spanning trees rooted on cycle $\mathcal{C}$ and
$\Lambda=\sum_{\mathbb{i}}\Lambda_{\mathbb{i}}$.
Figure 3: Spanning trees rooted on cycle $\mathcal{C}_{3}$ (shaded region) of
the basic graph.
Now, there are a total of 22 paired cycle trajectories, or 11 subcycles, for
our basic graph $\mathbb{G}$, which are as follows:
$\displaystyle\mathcal{C}_{1}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{2}$ $\displaystyle:$
$\displaystyle|\mathbb{2}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{2}\rangle$
$\displaystyle\mathcal{C}_{3}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{4}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{5}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{6}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{7}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{8}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{2}\rangle\leftrightarrow|\mathbb{1}\rangle$
$\displaystyle\mathcal{C}_{9}$ $\displaystyle:$
$\displaystyle|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{3}\rangle$
$\displaystyle\mathcal{C}_{10}$ $\displaystyle:$
$\displaystyle|\mathbb{2}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{2}\rangle$
$\displaystyle\mathcal{C}_{11}$ $\displaystyle:$
$\displaystyle|\mathbb{1}\rangle\leftrightarrow|\mathbb{3}\rangle\leftrightarrow|\mathbb{5}\rangle\leftrightarrow|\mathbb{6}\rangle\leftrightarrow|\mathbb{4}\rangle\leftrightarrow|\mathbb{1}\rangle.$
(26)
Hence, enumerating a large number of spanning trees rooted at each individual
state, as well as for cycles, poses a formidable challenge. This difficulty
becomes more pronounced with the increasing size of the basic graph.
To bypass this problem, we utilize the generalized matrix-tree theorem from
algebraic graph theory Wang _et al._ (2022); Gupt _et al._ (2023) by rewriting the
master equation in the following form Keizer (1972):
$\dot{\rm\textbf{P}}=-{\rm\textbf{MP}}$, where
${\rm\textbf{P}}=\\{P_{\mathbb{1}},P_{\mathbb{2}},P_{\mathbb{3}},P_{\mathbb{4}},P_{\mathbb{5}},P_{\mathbb{6}}\\}$
is a column matrix and M is a square matrix, given by
$\displaystyle{\rm\textbf{M}}=\left[{\begin{array}[]{cccccc}k_{\mathbb{21}}+k_{\mathbb{31}}+k_{\mathbb{41}}&-k_{\mathbb{12}}&-k_{\mathbb{13}}&-k_{\mathbb{14}}&0&0\\
-k_{\mathbb{21}}&k_{\mathbb{12}}+k_{\mathbb{52}}+k_{\mathbb{62}}&0&0&-k_{\mathbb{25}}&-k_{\mathbb{26}}\\
-k_{\mathbb{31}}&0&k_{\mathbb{13}}+k_{\mathbb{43}}+k_{\mathbb{53}}&-k_{\mathbb{34}}&-k_{\mathbb{35}}&0\\
-k_{\mathbb{41}}&0&-k_{\mathbb{43}}&k_{\mathbb{14}}+k_{\mathbb{34}}+k_{\mathbb{64}}&0&-k_{\mathbb{46}}\\
0&-k_{\mathbb{52}}&-k_{\mathbb{53}}&0&k_{\mathbb{25}}+k_{\mathbb{35}}+k_{\mathbb{65}}&-k_{\mathbb{56}}\\
0&-k_{\mathbb{62}}&0&-k_{\mathbb{64}}&-k_{\mathbb{65}}&k_{\mathbb{26}}+k_{\mathbb{46}}+k_{\mathbb{56}}\end{array}}\right].$ (33)
Equation (33) is known as the Laplacian or transition matrix of the weighted
graph $\mathbb{G}$. Furthermore, in accordance with the matrix tree theorem,
it is possible to compute both the numerator and denominator of Eqs. (21) and
(25) as the determinants of the reduced transition matrix. For instance,
$\Lambda_{\mathbb{i}}$ is related to ${\rm\textbf{M}}[\mathbb{i},\mathbb{i}]$
which can be obtained by removing the $\mathbb{i}$-th row and column of the
Laplacian matrix M. Similarly, $\Lambda_{\mathcal{C}}$ is identical to the
$\det({\rm\textbf{M}}[\mathcal{C},\mathcal{C}])$, obtained by deleting rows
and columns belonging to cycle $\mathcal{C}$ of the transition matrix M. This
directly leads to a simple algebraic expression of the steady-state population
$\bar{P}_{\mathbb{i}}=\frac{\det({\rm\textbf{M}}[\mathbb{i};\mathbb{i}])}{\sum_{\mathbb{i}}\det({\rm\textbf{M}}[\mathbb{i};\mathbb{i}])},$
(34)
and the one-directional circuit flux associated with circuits
$\mathcal{C}^{\pm}$ in the following form
$J^{\pm}_{\mathcal{C}}=\Pi^{\pm}_{\mathcal{C}}\frac{\det({\rm\textbf{M}}[\mathcal{C};\mathcal{C}])}{\sum_{\mathbb{i}}\det({\rm\textbf{M}}[\mathbb{i};\mathbb{i}])},$
(35)
where
${\sum_{\mathbb{i}}\det({\rm\textbf{M}}[\mathbb{i};\mathbb{i}])}=\Lambda$. As
an example, the sum of the weights of spanning trees rooted on cycle
$\mathcal{C}_{3}$ in terms of the reduced determinant of the original
Laplacian matrix M is obtained by deleting rows and columns
$\mathbb{i}\,(=1,2,3,5)\in\mathcal{C}_{3}$ [Fig. 3]. So, the principal minor
${\rm\textbf{M}}[\mathcal{C}_{3},\mathcal{C}_{3}]$ or
${\rm\textbf{M}}[1,3,5,2;1,3,5,2]$ and its determinant take the form
$\displaystyle{\rm\textbf{M}}[\mathcal{C}_{3},\mathcal{C}_{3}]=\left[{\begin{array}[]{cc}k_{\mathbb{14}}+k_{\mathbb{34}}+k_{\mathbb{64}}&-k_{\mathbb{46}}\\
-k_{\mathbb{64}}&k_{\mathbb{26}}+k_{\mathbb{46}}+k_{\mathbb{56}}\end{array}}\right],$ (38)
and
$\displaystyle\det({\rm\textbf{M}}[\mathcal{C}_{3},\mathcal{C}_{3}])=k_{\mathbb{14}}k_{\mathbb{26}}+k_{\mathbb{14}}k_{\mathbb{46}}+k_{\mathbb{14}}k_{\mathbb{56}}+k_{\mathbb{34}}k_{\mathbb{26}}$
$\displaystyle+k_{\mathbb{34}}k_{\mathbb{46}}+k_{\mathbb{34}}k_{\mathbb{56}}+k_{\mathbb{26}}k_{\mathbb{64}}+k_{\mathbb{56}}k_{\mathbb{64}}.$
(39)
respectively. Equation (39) contains a sum of eight terms, indicating that
there are eight possible spanning trees rooted on cycle $\mathcal{C}_{3}$.
Each term represents the weight of a spanning tree in which all the weighted
edges are directed towards the cycle $\mathcal{C}_{3}$ [Fig. 3]. Finally, from
Eq. (35), it is clear that we can calculate the circuit fluxes
$J^{\pm}_{\mathcal{C}}$ for each stochastic cycle trajectory with the
orientation either clockwise or counterclockwise and efficiently identify the
top-ranked circuit fluxes. Next, we will show how the microscopic details of
cycle and circuit fluxes help us understand the SSE and SPE, connecting spin
and energy currents to macroscopic thermodynamic forces in the
phenomenological laws of irreversible thermodynamics.
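A compact numerical transcription of Eqs. (33)-(35) is sketched below (Python/NumPy, reusing the rate dictionary defined earlier; the helper names and the cycle-orientation convention are our own choices, so the $\pm$ labels are fixed only up to an overall orientation).

```python
import numpy as np

def laplacian(k):
    """Weighted transition (Laplacian) matrix M of Eq. (33): off-diagonal
    entries -k_{i<-j}, diagonal entries equal to the total out-rate of state j,
    so that every column sums to zero."""
    M = np.zeros((6, 6))
    for (i, j), kij in k.items():
        M[i - 1, j - 1] -= kij
        M[j - 1, j - 1] += kij
    return M

def minor_det(M, remove):
    """det of M with the listed (1-based) rows/columns deleted."""
    keep = [i for i in range(6) if i + 1 not in remove]
    return np.linalg.det(M[np.ix_(keep, keep)])

def populations(k):
    """Steady-state populations via the matrix-tree formula, Eq. (34)."""
    M = laplacian(k)
    Lam = np.array([minor_det(M, [i]) for i in range(1, 7)])
    return Lam / Lam.sum()

def circuit_fluxes(k, cycle):
    """One-directional circuit fluxes J^+_C, J^-_C of Eq. (35), with the cycle
    given as an ordered vertex list, e.g. [1, 3, 5, 2] for C_3."""
    M = laplacian(k)
    Lam = sum(minor_det(M, [i]) for i in range(1, 7))
    LamC = minor_det(M, cycle) if len(cycle) < 6 else 1.0
    pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
    Pi_plus  = np.prod([k[(j, i)] for i, j in pairs])   # traversal i -> j
    Pi_minus = np.prod([k[(i, j)] for i, j in pairs])   # reverse traversal
    return Pi_plus * LamC / Lam, Pi_minus * LamC / Lam
```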
### III.2 Onsager Relation
The first step on our way from a microscopic to a macroscopic description is
to establish an expression for entropy production rate, the key quantity in
understanding any irreversible processes Schnakenberg (1976); Landi and
Paternostro (2021). To start with, we consider the von Neumann entropy
$\mathcal{S}=-k_{B}\sum_{\mathbb{i}}P_{\mathbb{i}}\ln P_{\mathbb{i}},$ (40)
in the framework of our discrete-state quantum transition network,
characterized by its microscopic probability $\\{P_{\mathbb{i}}\\}$. Thus, the
time evolution of $\mathcal{S}$ is given by
$\frac{d\mathcal{S}}{dt}=-k_{B}\sum_{\mathbb{i}}\frac{dP_{\mathbb{i}}}{dt}\ln
P_{\mathbb{i}}.$ (41)
With the help of the quantum kinetic Pauli master equation (18), one can
rewrite Eq. (41) as
$\displaystyle\frac{d\mathcal{S}}{dt}=\frac{1}{2}k_{B}\sum_{\mathbb{i,j}}J_{\mathbb{ij}}\ln\Big(\frac{P_{\mathbb{j}}}{P_{\mathbb{i}}}\Big).$ (42)
Now, following Schnakenberg’s suggestion Schnakenberg (1976), we split Eq.
(42) into two parts,
$\frac{d\mathcal{S}}{dt}=\dot{\Phi}(t)+\dot{\sigma}(t),$ (43)
where we identify the first term $\dot{\Phi}(t)$ as the entropy flux rate
$\dot{\Phi}(t)=-\frac{1}{2}k_{B}\sum_{\mathbb{i,j}}J_{\mathbb{ij}}\ln\Big(\frac{k_{\mathbb{ij}}}{k_{\mathbb{ji}}}\Big),$ (44)
which arises from the interaction between the system and its surroundings. The
second term $\dot{\sigma}(t)$ is the total entropy production rate
$\displaystyle\dot{\sigma}(t)=\frac{1}{2}k_{B}\sum_{\mathbb{i,j}}(k_{\mathbb{ij}}P_{\mathbb{j}}-k_{\mathbb{ji}}P_{\mathbb{i}})\ln\Big(\frac{k_{\mathbb{ij}}P_{\mathbb{j}}}{k_{\mathbb{ji}}P_{\mathbb{i}}}\Big).$ (45)
Equation (45) may appear a little artificial at first glance, and a natural
question to be raised at this point is whether Eq. (45) has anything to do
with the entropy production of the phenomenological irreversible
thermodynamics, which needs to be expressed as a bilinear form of the macroscopic
thermodynamic forces and fluxes. It must be emphasized that neither
$\dot{\mathcal{S}}$ in Eq. (43) nor $\dot{\Phi}$ in Eq. (44) are necessarily
positive, but only $\dot{\sigma}(t)\geq 0$, since it takes a form
$(a-b)\ln(a/b)\geq 0$ [Cf. Eq. (45)]. This is indeed true since the total
entropy production rate ($\dot{\sigma}$) of any system must always be non-negative
Schnakenberg (1976); Landi and Paternostro (2021). Thus, it turns out that Eq.
(45) satisfies the basic criteria for the entropy production rate. Under the
steady-state condition, there is no change in the entropy of the system which
implies Schnakenberg (1976); Landi and Paternostro (2021)
$\dot{\sigma}=-\dot{\Phi}(t)=\frac{1}{2}k_{B}\sum_{\mathbb{i,j}}(k_{\mathbb{ij}}\bar{P}_{\mathbb{j}}-k_{\mathbb{ji}}\bar{P}_{\mathbb{i}})\ln\Big(\frac{k_{\mathbb{ij}}}{k_{\mathbb{ji}}}\Big).$ (46)
Using Eqs. (22) and (23), one may rewrite the above equation in terms of the
circuit and cycle fluxes as Schnakenberg (1976),
$\displaystyle\dot{\sigma}$ $\displaystyle=$ $\displaystyle
k_{B}\sum_{\mathcal{C}}(J^{+}_{\mathcal{C}}\mathcal{A}^{+}_{\mathcal{C}}+J^{-}_{\mathcal{C}}\mathcal{A}^{-}_{\mathcal{C}})$
(47) $\displaystyle=$ $\displaystyle
k_{B}\sum_{\mathcal{C}}(J^{+}_{\mathcal{C}}-J^{-}_{\mathcal{C}})\mathcal{A}^{+}_{\mathcal{C}}=k_{B}\sum_{\mathcal{C}}J_{\mathcal{C}}\mathcal{X}_{\mathcal{C}},$
where
$\mathcal{X}_{\mathcal{C}}=\mathcal{A}^{+}_{\mathcal{C}}=\ln(\Pi^{+}_{\mathcal{C}}/\Pi^{-}_{\mathcal{C}})=-\mathcal{A}^{-}_{\mathcal{C}}$
is called the cycle affinity. For a given cycle $\mathcal{C}$, it measures
the imbalance or asymmetry between the transition rates along two opposite
cycle trajectories $\mathcal{C}^{\pm}$ and hence qualifies as a thermodynamic
force Ohga _et al._ (2023). This is because, when
$\mathcal{X}_{\mathcal{C}}=0$, it implies $J_{\mathcal{C}}=0$, resulting in
equal circuit fluxes in both directions, i.e.,
$J^{+}_{\mathcal{C}}=J^{-}_{\mathcal{C}}$. Equation (47), expressed in terms
of cycle fluxes and cycle forces, can thus be regarded as a microscopic or
stochastic version of the phenomenological Onsager relation Landi and
Paternostro (2021).
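Numerically, the cycle affinities and the stochastic entropy production rate of Eq. (47) can be assembled from the circuit fluxes sketched earlier; the snippet below is an illustrative check (e.g., summing over the subcycles listed in Eq. (26)) rather than part of the formalism itself.

```python
import numpy as np

def cycle_affinity(k, cycle):
    """Cycle affinity X_C = ln(Pi^+_C / Pi^-_C) for an ordered vertex list."""
    pairs = list(zip(cycle, cycle[1:] + cycle[:1]))
    Pi_plus  = np.prod([k[(j, i)] for i, j in pairs])   # traversal i -> j
    Pi_minus = np.prod([k[(i, j)] for i, j in pairs])   # reverse traversal
    return np.log(Pi_plus / Pi_minus)

def entropy_production(k, cycles, kB=1.0):
    """Stochastic entropy production rate of Eq. (47):
    sigma_dot = kB * sum_C (J^+_C - J^-_C) X_C, using circuit_fluxes() above."""
    sigma_dot = 0.0
    for c in cycles:
        Jp, Jm = circuit_fluxes(k, c)
        sigma_dot += (Jp - Jm) * cycle_affinity(k, c)
    return kB * sigma_dot
```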
Moreover, we find from Eq. (35) that the ratio of $J^{\pm}_{\mathcal{C}}$ is
equal to the ratio of weight factors $\Pi^{\pm}_{\mathcal{C}}$ for each cycle,
which, in turn, is determined by the ratio of the product of the transitions
rates along circuits $\mathcal{C}^{\pm}$ and can be computed in terms of
externally controllable, macroscopic physical quantities $T_{0}$, $\delta T$
and $\Delta\mu_{\rm S}$, as follows (see Appendix B):
$\displaystyle\frac{J^{+}_{\mathcal{C}_{1}}}{J^{-}_{\mathcal{C}_{1}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (48)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{2}}}{J^{-}_{\mathcal{C}_{2}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{2}}}{\Pi^{-}_{\mathcal{C}_{2}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (49)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{3}}}{J^{-}_{\mathcal{C}_{3}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{3}}}{\Pi^{-}_{\mathcal{C}_{3}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}},$ (50)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{4}}}{J^{-}_{\mathcal{C}_{4}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{4}}}{\Pi^{-}_{\mathcal{C}_{4}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (51)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{5}}}{J^{-}_{\mathcal{C}_{5}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{5}}}{\Pi^{-}_{\mathcal{C}_{5}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (52)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{6}}}{J^{-}_{\mathcal{C}_{6}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{6}}}{\Pi^{-}_{\mathcal{C}_{6}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}},$ (53)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{7}}}{J^{-}_{\mathcal{C}_{7}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{7}}}{\Pi^{-}_{\mathcal{C}_{7}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (54)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{8}}}{J^{-}_{\mathcal{C}_{8}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{8}}}{\Pi^{-}_{\mathcal{C}_{8}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (55)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{9}}}{J^{-}_{\mathcal{C}_{9}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{9}}}{\Pi^{-}_{\mathcal{C}_{9}}}=1,$
(56) $\displaystyle\frac{J^{+}_{\mathcal{C}_{10}}}{J^{-}_{\mathcal{C}_{10}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{10}}}{\Pi^{-}_{\mathcal{C}_{10}}}=e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (57)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{11}}}{J^{-}_{\mathcal{C}_{11}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{11}}}{\Pi^{-}_{\mathcal{C}_{11}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}},$ (58)
In order to derive Eqs. (48)-(58), one makes use of Eq. (17), where the
explicit forms of the distribution functions are governed by the quantum
statistical properties of the respective thermal reservoirs (see Appendix B
for details). The advantage of writing the above set of equations as ratios
of the circuit fluxes lies in its ability to indicate that if the external
biases $\Delta\mu_{\rm S}$ and $\delta T$ are zero on the r.h.s of Eqs.
(48)-(58), then regardless of the magnitudes of the circuit fluxes, the
corresponding cycle cannot contribute to the spin and energy currents.
Therefore, one can infer that the cycle fluxes associated with the subcycles
$\\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{10},\mathcal{C}_{11}\\}$ are
controlled by the spin bias voltage $\Delta\mu_{\rm S}$ and hence can only
contribute to the spin current $J_{\rm S}$, whereas the net cycle fluxes
associated with the subcycles $\mathcal{C}_{3}$ and $\mathcal{C}_{6}$ depend
on the Coulomb interaction ${\rm U}$ and the temperature gradient $\delta T$
and thereby contribute to the energy current $J_{\rm E}$. However, there are
a few subcycles, $\\{\mathcal{C}_{4},\mathcal{C}_{5},\mathcal{C}_{7},\mathcal{C}_{8}\\}$,
whose cycle fluxes are governed by both $\delta T$ and $\Delta\mu_{\rm S}$
and therefore can contribute to both $J_{\rm S}$ and $J_{\rm E}$.
spin-thermoelectric cross-effects of SSE and SPE as we will demonstrate in
Sec. IV. Note that the net cycle flux associated with $\mathcal{C}_{9}$ is
identically zero because the circuit fluxes in both directions (clockwise and
counterclockwise) are equal as evident from Eq.(56). As a result, we can use
Eq. (23) to rewrite macroscopic spin and energy current expressions [Eqs. (19)
and (20)] in terms of the microscopic circuit and cycle fluxes as follows:
$\displaystyle J_{\rm S}$ $\displaystyle=$
$\displaystyle-(J^{+}_{\mathcal{C}_{1}}-J^{-}_{\mathcal{C}_{1}})-(J^{+}_{\mathcal{C}_{2}}-J^{-}_{\mathcal{C}_{2}})-(J^{+}_{\mathcal{C}_{4}}-J^{-}_{\mathcal{C}_{4}})$
$\displaystyle+$
$\displaystyle(J^{+}_{\mathcal{C}_{5}}-J^{-}_{\mathcal{C}_{5}})+(J^{+}_{\mathcal{C}_{7}}-J^{-}_{\mathcal{C}_{7}})-(J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}})$
$\displaystyle+$
$\displaystyle(J^{+}_{\mathcal{C}_{10}}-J^{-}_{\mathcal{C}_{10}})-(J^{+}_{\mathcal{C}_{11}}-J^{-}_{\mathcal{C}_{11}})$
$\displaystyle=$ $\displaystyle-
J_{\mathcal{C}_{1}}-J_{\mathcal{C}_{2}}-J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{7}}-J_{\mathcal{C}_{8}}$
$\displaystyle+$ $\displaystyle J_{\mathcal{C}_{10}}-J_{\mathcal{C}_{11}},$
(60) $\displaystyle J_{\rm E}$ $\displaystyle=$ $\displaystyle{\rm
U}[(J^{+}_{\mathcal{C}_{3}}-J^{-}_{\mathcal{C}_{3}})+(J^{+}_{\mathcal{C}_{4}}-J^{-}_{\mathcal{C}_{4}})+(J^{+}_{\mathcal{C}_{5}}-J^{-}_{\mathcal{C}_{5}})$
(61) $\displaystyle+$
$\displaystyle(J^{+}_{\mathcal{C}_{6}}-J^{-}_{\mathcal{C}_{6}})+(J^{+}_{\mathcal{C}_{7}}-J^{-}_{\mathcal{C}_{7}})+(J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}})]$
$\displaystyle=$ $\displaystyle{\rm
U}[J_{\mathcal{C}_{3}}+J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{6}}+J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}].$
(62)
Similar to Eqs. (19) and (20), the above set of equations is completely general;
however, the latter has an advantage over the previous set:
Eqs. (60)-(62) can be expressed in terms of macroscopic forces,
facilitating the connection between SSE and SPE as a manifestation of
thermodynamic cross-effects. In order to identify the phenomenological forces,
we substitute Eqs. (48)-(58) into Eq. (47), to write the entropy production
rate as a sum over the associated thermodynamic forces and fluxes as (Appendix
B)
$\displaystyle\dot{\sigma}$ $\displaystyle=$ $\displaystyle{\rm
U}[J_{\mathcal{C}_{3}}+J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{6}}+J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}]\frac{\delta
T}{T_{0}(T_{0}+\delta T)}$ (63) $\displaystyle-$
$\displaystyle[J_{\mathcal{C}_{1}}+J_{\mathcal{C}_{2}}+J_{\mathcal{C}_{4}}-J_{\mathcal{C}_{5}}-J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}-J_{\mathcal{C}_{10}}+J_{\mathcal{C}_{11}}]\frac{\Delta\mu_{\rm
S}}{T_{0}}$ $\displaystyle=$ $\displaystyle J_{\rm
E}\Big{[}\frac{1}{T_{0}}-\frac{1}{(T_{0}+\delta T)}\Big{]}+J_{\rm
S}\Big{[}\frac{\mu_{\rm L\downarrow}}{T_{0}}-\frac{\mu_{\rm
L\uparrow}}{T_{0}}\Big{]}$ $\displaystyle=$ $\displaystyle J_{\rm
E}\mathcal{X}_{\rm E}+J_{\rm S}\mathcal{X}_{\rm S},$
where $\mathcal{X}_{\rm E}$ and $\mathcal{X}_{\rm S}$ are identified as
conjugate forces corresponding to the energy current $J_{\rm E}$, and spin
current $J_{\rm S}$, respectively. If we compare Eqs. (47) and (63), we observe
that in both cases, the entropy production rate $\dot{\sigma}$ is the product
of the fluxes and forces: In Eq.(47), $\dot{\sigma}$ is in terms of the
microscopic fluxes ($J_{\mathcal{C}}$) and its conjugate forces
($\mathcal{X}_{\mathcal{C}}$), i.e., cycle affinities whereas in Eq.(63),
$\dot{\sigma}$ is in terms of the macroscopic fluxes (like the flow of spin
and energy current) and the associated phenomenological forces
($\mathcal{X}_{\rm E}$ and $\mathcal{X}_{\rm S}$). This is one of our central
results, showcasing the recovery of the phenomenological thermodynamic law of
entropy production in terms of generalized thermodynamic forces and fluxes
derived from the microscopic dynamical framework of the master equation
employing network cycle flux and forces. For small external bias $\delta T$
and $\Delta\mu_{\rm S}$, we can simplify Eq. (63) in the following form
$\displaystyle\dot{\sigma}\approx J_{\rm E}\left(\frac{\delta
T}{{T_{0}}^{2}}\right)+J_{\rm S}\left(\frac{\Delta\mu_{\rm S}}{T_{0}}\right),$
(64)
which is in accordance with the linear dependence of the entropy production
rate on the relevant thermodynamic forces. Similarly, the spin and the energy
currents within the linear response regime can be approximated as follows:
$\displaystyle J_{\rm S}$ $\displaystyle\approx$ $\displaystyle{\rm
U}(-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}})\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)}+(J^{-}_{\mathcal{C}_{1}}+J^{-}_{\mathcal{C}_{2}}+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}+J^{+}_{\mathcal{C}_{8}}+J^{-}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{11}})\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)},$ (65) $\displaystyle J_{\rm E}$
$\displaystyle\approx$ $\displaystyle{\rm
U^{2}}(J^{-}_{\mathcal{C}_{3}}+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{6}}+J^{-}_{\mathcal{C}_{7}}+J^{+}_{\mathcal{C}_{8}})\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)}+{\rm
U}(-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}})\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)}.$ (66)
Comparing Eqs. (65) and (66) with the phenomenological linear law of
irreversible thermodynamics
$\displaystyle J^{\rm ph}_{\rm S}=L_{\rm SE}\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)}+L_{\rm SS}\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)},$ (67) $\displaystyle J^{\rm ph}_{\rm E}=L_{\rm
EE}\Bigg{(}\frac{\delta T}{k_{B}{T_{0}}^{2}}\Bigg{)}+L_{\rm
ES}\Bigg{(}\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Bigg{)},$ (68)
we identify the Onsager transport coefficients ($L$’s) in terms of the
microscopic circuit fluxes obtained from the network theory
$\displaystyle L_{\rm SE}$ $\displaystyle=$ $\displaystyle{\rm
U}(-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}})\equiv
L_{\rm ES},$ (69) $\displaystyle L_{\rm SS}$ $\displaystyle=$
$\displaystyle(J^{-}_{\mathcal{C}_{1}}+J^{-}_{\mathcal{C}_{2}}+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}$
(70) $\displaystyle+$ $\displaystyle
J^{+}_{\mathcal{C}_{8}}+J^{-}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{11}}),$
$\displaystyle L_{\rm EE}$ $\displaystyle=$ $\displaystyle{\rm
U^{2}}(J^{-}_{\mathcal{C}_{3}}+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{6}}+J^{-}_{\mathcal{C}_{7}}+J^{+}_{\mathcal{C}_{8}}).$
(71)
Equation (69) encapsulates the essence of the Onsager reciprocity relation.
Here we derive this relation by applying the quantum kinetic Pauli master
equation within the framework of network theory. It reveals that the BMS
quantum master equation is not a mere description of the dissipative dynamics
of the open quantum system; rather, it reproduces the reciprocity relation of
the linear law of irreversible thermodynamics, which obeys due to the time-
reversal symmetry of the stationary fluctuations. Here instead, it follows
from the properties of the network circuit fluxes between the forward
(counterclockwise) and reverse (clockwise) cycle trajectories.
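The reciprocity $L_{\rm SE}=L_{\rm ES}$ can also be checked numerically by estimating the Onsager coefficients from finite-difference linear response of the steady-state currents around the equilibrium point $\delta T=\Delta\mu_{\rm S}=0$; the sketch below builds on the rates(), steady_state() and currents() helpers introduced earlier and is purely illustrative (the parameter names are our own assumptions).

```python
import numpy as np

def onsager_matrix(params, dT=1e-4, dmu=1e-4, kB=1.0):
    """Finite-difference estimate of the Onsager coefficients of Eqs. (67)-(68).
    params holds the equilibrium reference point (T_L = T_M = T_R = T0,
    mu_Ld = mu_Lu, dot energies, U, tunneling rates)."""
    T0 = params["T_L"]

    def JS_JE(dT_, dmu_):
        p = dict(params)
        p["T_M"] = T0 + dT_                 # middle (hot) reservoir at T0 + delta T
        p["mu_Ld"] = p["mu_Lu"] + dmu_      # spin bias Delta mu_S = mu_Ld - mu_Lu
        k = rates(p)
        P = steady_state(k)
        return currents(k, P, p["U"])

    # central differences around delta T = Delta mu_S = 0
    JS_Tp, JE_Tp = JS_JE(+dT, 0.0)
    JS_Tm, JE_Tm = JS_JE(-dT, 0.0)
    JS_mp, JE_mp = JS_JE(0.0, +dmu)
    JS_mm, JE_mm = JS_JE(0.0, -dmu)

    xE = dT / (kB * T0**2)                  # force conjugate to J_E
    xS = dmu / (kB * T0)                    # force conjugate to J_S
    L_SE = (JS_Tp - JS_Tm) / (2 * xE)
    L_EE = (JE_Tp - JE_Tm) / (2 * xE)
    L_SS = (JS_mp - JS_mm) / (2 * xS)
    L_ES = (JE_mp - JE_mm) / (2 * xS)
    return L_SE, L_SS, L_EE, L_ES           # expect L_SE ~ L_ES, cf. Eq. (69)
```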
Now, our aim is to establish the relationship between the coefficients of the
spin-Seebeck and the spin-Peltier effects. Under the zero spin current
condition, i.e., $J_{\rm S}=0$, we obtain from Eq. (67)
$\displaystyle L_{\rm SE}\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)}+L_{\rm SS}\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)}=0,$ $\displaystyle{\rm or},\quad\kappa$
$\displaystyle\equiv\Bigg{(}\frac{\Delta\mu_{\rm S}}{\delta T}\Bigg{)}_{J_{\rm
S}=0}=-\frac{1}{T_{0}}\Bigg{(}\frac{L_{\rm SE}}{L_{\rm SS}}\Bigg{)}$ (72)
$\displaystyle\kappa$ $\displaystyle=-\frac{\rm
U}{T_{0}}\Big{(}-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}}\Big{)}/\Big{(}J^{-}_{\mathcal{C}_{1}}+J^{-}_{\mathcal{C}_{2}}$
(73)
$\displaystyle+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}+J^{+}_{\mathcal{C}_{8}}+J^{-}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{11}}\Big{)}.$
Here $\kappa=({\Delta\mu_{\rm S}}/{\delta T})_{J_{\rm S}=0}$ is the spin-
Seebeck coefficient or spin-thermoelectric power, defined as the change in the
spin bias voltage per unit change of temperature. Similarly, we may define the
spin-Peltier coefficient as
$\displaystyle\vartheta$ $\displaystyle=-\Bigg{(}\frac{J_{\rm E}}{J_{\rm
S}}\Bigg{)}_{{\delta T}=0}=-\frac{L_{\rm ES}}{L_{\rm SS}},$ (75)
$\displaystyle=-{\rm
U}\Big{(}-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}}\Big{)}/\Big{(}J^{-}_{\mathcal{C}_{1}}+J^{-}_{\mathcal{C}_{2}}$
$\displaystyle+J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{7}}+J^{+}_{\mathcal{C}_{8}}+J^{-}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{11}}\Big{)}.$
On the face of it, both Eqs. (72) and (75) appear exactly the same as their
classical counterparts, although their underlying basis is completely different
in the classical and quantum cases. Using Eqs. (73) and (75), we immediately conclude
that the classic Kelvin relation
$\displaystyle T_{0}\Bigg{(}\frac{\Delta\mu_{\rm S}}{\delta T}\Bigg{)}_{J_{\rm
S}=0}$ $\displaystyle=-\Bigg{(}\frac{J_{\rm E}}{J_{\rm S}}\Bigg{)}_{{\delta
T}=0}=-\frac{L_{\rm ES}}{L_{\rm SS}}=-\frac{L_{\rm SE}}{L_{\rm SS}},$ ${\rm
or},\quad T_{0}\kappa=\vartheta.$ (76)
equally holds for quantum thermocouples, connecting the two thermoelectric
effects, namely SSE and SPE. This is the hallmark of thermodynamics with its
universal generality. The generality prevails in the sense that all the
thermodynamic relations retain their forms in both classical and quantum
settings, with the only variation being in specific expressions that are used
to articulate them.
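For concreteness, the chain from the Onsager coefficients to the Kelvin relation can be traced numerically. The following minimal Python sketch (not part of the original analysis) evaluates Eqs. (69)-(71) for an arbitrary, hypothetical set of circuit fluxes $J^{\pm}_{\mathcal{C}_{j}}$ and checks that the resulting spin-Seebeck and spin-Peltier coefficients of Eqs. (72) and (75) satisfy $T_{0}\kappa=\vartheta$; all numerical values below are placeholders.

```python
import numpy as np

# A minimal numerical sketch of Eqs. (69)-(76): the circuit fluxes J^{+/-}_{C_j}
# below are arbitrary placeholders, not values computed from the model.
rng = np.random.default_rng(1)
Jp = {j: rng.uniform(0.1, 1.0) for j in range(1, 12)}   # J^+_{C_j}
Jm = {j: rng.uniform(0.1, 1.0) for j in range(1, 12)}   # J^-_{C_j}
U, T0 = 0.8, 4.0                                        # in units of hbar*gamma

L_SE = U*(-Jm[4] + Jp[5] + Jm[7] - Jp[8])               # Eq. (69); equals L_ES
L_SS = (Jm[1] + Jm[2] + Jm[4] + Jp[5]
        + Jm[7] + Jp[8] + Jm[10] + Jm[11])              # Eq. (70)
L_EE = U**2*(Jm[3] + Jm[4] + Jp[5] + Jm[6] + Jm[7] + Jp[8])  # Eq. (71)

kappa = -(L_SE/L_SS)/T0          # spin-Seebeck coefficient, Eq. (72)
theta = -(L_SE/L_SS)             # spin-Peltier coefficient, Eq. (75) with L_ES = L_SE
assert np.isclose(T0*kappa, theta)   # Kelvin relation, Eq. (76)
print(L_SE, L_SS, L_EE, kappa, theta)
```

Since $L_{\rm SE}=L_{\rm ES}$ is built into Eq. (69), the check illustrates, rather than proves, the reciprocity relation.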
## IV Operational Principles
In this section, we delve into the operational principles underlying the spin-
Seebeck and spin-Peltier effects.
### IV.1 Spin-Seebeck effect
We observe the SSE when there is no spin bias voltage $\Delta\mu_{\rm S}=0$,
and a spin current is generated due to a temperature difference $\delta T$
between the upper and the lower terminals of the device. For $\Delta\mu_{\rm
S}=0$, Eq.(III.2) reduces to
$\displaystyle J_{\rm S}$ $\displaystyle=$
$\displaystyle-(J^{+}_{\mathcal{C}_{4}}-J^{-}_{\mathcal{C}_{4}})+(J^{+}_{\mathcal{C}_{5}}-J^{-}_{\mathcal{C}_{5}})+(J^{+}_{\mathcal{C}_{7}}-J^{-}_{\mathcal{C}_{7}})$
(77) $\displaystyle-$
$\displaystyle(J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}})$
$\displaystyle=$ $\displaystyle-
J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{7}}-J_{\mathcal{C}_{8}},$
while $J_{\rm E}$ is still governed by Eq. (62). As a result, we identify
that cycle fluxes corresponding to subcycles $\mathcal{C}_{3}$,
$\mathcal{C}_{4}$, $\mathcal{C}_{5}$, $\mathcal{C}_{6}$, $\mathcal{C}_{7}$,
and $\mathcal{C}_{8}$ contribute to $J_{\rm E}$, while $\mathcal{C}_{4}$,
$\mathcal{C}_{5}$, $\mathcal{C}_{7}$, and $\mathcal{C}_{8}$ facilitate $J_{\rm
S}$. All the contributing cycle fluxes, energy, and spin currents, in
dimensionless units, along with all six microstate populations, are plotted in
Fig. 4 with respect to the dimensionless temperature gradient $\delta T$. Although all
six cycles assist $J_{\rm E}$, the primary contribution comes from
$\mathcal{C}_{3}$, classified as the highest-rank cycle with a nonzero
contribution. In contrast, all the cycles appearing in $J_{\rm S}$ have equal
magnitudes but are lower in rank compared to $\mathcal{C}_{3}$ [Fig. 4a].
Consequently, the dimensionless spin current $J_{\rm S}$ observed in the SSE,
which is a linear combination of the four contributing cycles, is two orders
of magnitude less than the dimensionless energy current $J_{\rm E}$ [Figs. 4b
and 4c]. Upon setting $\Delta\mu_{\rm S}=0$ in Eqs. (65) and (66), approximate
expressions for $J_{\rm S}$ and $J_{\rm E}$ in the linear response regime are
reduced to
$J_{\rm S}\approx L_{\rm SE}\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)},$ (78)
and
$J_{\rm E}\approx L_{\rm EE}\Bigg{(}\frac{\delta
T}{k_{B}{T_{0}}^{2}}\Bigg{)},$ (79)
respectively, where $L_{\rm SE}$ and $L_{\rm EE}$ are given by Eqs. (69) and
(71) respectively. We observe that Eqs. (78) and (79) closely follow the
general Eqs. (77) and (62) [solid lines in Figs. 4b and 4c], but they start to
deviate [dash-dot lines] for large values of the temperature gradient.
Figure 4: Spin-Seebeck effect: All contributing (a) cycle fluxes (b)
populations (c) spin and (d) energy currents are plotted against dimensionless
thermal energy $k_{B}\delta T/\hbar\gamma$. The parameters used are as
follows: $\gamma_{\rm L}=\gamma_{\rm M}=\gamma_{\rm R}=\gamma$, ${\rm
U}=0.8\hbar\gamma$, $k_{B}T_{0}=4\hbar\gamma$,
$\varepsilon_{l\downarrow}=4.5\hbar\gamma$,
$\varepsilon_{l\uparrow}=0.5\hbar\gamma$, $\varepsilon_{u}=3\hbar\gamma$, and
$\mu_{\rm L\downarrow}=\mu_{\rm L\uparrow}=0$, $\mu_{\rm M}=0$.
Finally, we note that cycles ($\mathcal{C}_{4}$, $\mathcal{C}_{5}$,
$\mathcal{C}_{7}$, and $\mathcal{C}_{8}$) involving spin-flip processes
contribute to $J_{\rm S}$. For example, consider the dynamic steps of
$\mathcal{C}^{+}_{4}$: starting from the most populated state $\ket{00}$ as
shown in Fig.4(d), the system sequentially transitions to $\ket{0\uparrow}$
(where one spin-up electron tunnels from the left reservoir into the lower
QD), then to $\ket{1\uparrow}$ (where one electron tunnels from the middle
reservoir into the upper QD). The third step involves a spin-flip process
$\ket{1\uparrow}\rightarrow\ket{1\downarrow}$ by absorbing one magnon supplied
by the right reservoir. Subsequently, the spin-down electron tunnels back into the
left reservoir ($\ket{1\downarrow}\rightarrow\ket{10}$), and finally, the system
returns to its initial state $\ket{00}$ by releasing one electron to the
middle reservoir. Thus, at the end of the full cycle, an integer spin-1 is
transferred from the left spinful electron reservoir to the right magnon bath.
Similarly, the clockwise circuit $\mathcal{C}^{-}_{4}$ represents the reverse
process, and both cycle trajectories additively contribute to the spin current
expression [Eq.(77)] in the SSE. The same holds true for other contributing
cycle trajectories mentioned in Eq. (77).
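The bookkeeping of this trajectory can be laid out explicitly; the following minimal sketch (purely illustrative, with state labels and reservoir assignments copied from the description above) traces $\mathcal{C}^{+}_{4}$ step by step:

```python
# A purely illustrative trace of the counterclockwise trajectory C_4^+ described
# above; the state labels and reservoir assignments are copied from the text.
steps = [("|00>",  "|0up>", "left reservoir: spin-up electron enters QD_l"),
         ("|0up>", "|1up>", "middle reservoir: electron enters QD_u"),
         ("|1up>", "|1dn>", "right reservoir: magnon absorbed, spin flip up -> dn"),
         ("|1dn>", "|10>",  "left reservoir: spin-down electron tunnels back"),
         ("|10>",  "|00>",  "middle reservoir: electron tunnels back")]
assert steps[0][0] == steps[-1][1]          # the cycle closes on |00>
for start, end, process in steps:
    print(f"{start} -> {end}  ({process})")
```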
### IV.2 Spin-Peltier effect
We observe the SPE in a scenario where $\delta T=0$ and an energy current is
generated due to a non-zero spin bias voltage. Setting $\delta T=0$ in Eq.
(62), we obtain
$\displaystyle J_{\rm E}$ $\displaystyle=$ $\displaystyle{\rm
U}[(J^{+}_{\mathcal{C}_{4}}-J^{-}_{\mathcal{C}_{4}})+(J^{+}_{\mathcal{C}_{5}}-J^{-}_{\mathcal{C}_{5}})+(J^{+}_{\mathcal{C}_{7}}-J^{-}_{\mathcal{C}_{7}})$
(80) $\displaystyle+$
$\displaystyle(J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}})]$
$\displaystyle=$ $\displaystyle{\rm
U}[J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}],$
(81)
and the same Eq.(III.2) can be used to calculate $J_{\rm S}$.
Figure 5: Spin-Peltier effect: All contributing (a) cycle fluxes (b)
populations (c) spin and (d) energy currents are plotted against dimensionless
spin bias voltage $\Delta\mu_{\rm S}/\hbar\gamma$. The parameters used are as
follows: $\gamma_{\rm L}=\gamma_{\rm M}=\gamma_{\rm R}=\gamma$, ${\rm
U}=0.8\hbar\gamma$, $k_{B}T_{0}=4\hbar\gamma$,
$\varepsilon_{l\downarrow}=4.5\hbar\gamma$,
$\varepsilon_{l\uparrow}=0.5\hbar\gamma$, $\varepsilon_{u}=3\hbar\gamma$, and
$\mu_{\rm L\uparrow}=2.5\hbar\gamma$, $\mu_{\rm L\downarrow}=\mu_{\rm
L\uparrow}+\Delta\mu_{\rm S}$, $\delta T=0$, $\mu_{\rm M}=0$.
As a result, cycle fluxes corresponding to cycles $\mathcal{C}_{1}$,
$\mathcal{C}_{2}$, $\mathcal{C}_{4}$, $\mathcal{C}_{5}$, $\mathcal{C}_{7}$,
$\mathcal{C}_{8}$, $\mathcal{C}_{10}$, and $\mathcal{C}_{11}$ contribute to the
spin current, while only four cycles $\mathcal{C}_{4}$, $\mathcal{C}_{5}$,
$\mathcal{C}_{7}$, and $\mathcal{C}_{8}$ contribute to the energy current. All
supporting cycle fluxes, spin and energy currents in dimensionless units, and
the population of each eigenstate are plotted in Fig.5 against the
dimensionless spin bias voltage $\Delta\mu_{\rm S}$. In this case, the major
contribution to $J_{\rm S}$ comes from cycle $\mathcal{C}_{1}$, which is
classified as the top-ranked cycle with a nonzero contribution. On the other
hand, all the cycles contributing to $J_{\rm E}$ are lower ranked cycles
relative to $\mathcal{C}_{1}$ with comparable magnitudes [Fig.5a].
Consequently, the dimensionless energy current $J_{\rm E}$ in the SPE is two
orders of magnitude less than the dimensionless spin current $J_{\rm S}$
[Fig.5b and 5c]. Upon substituting $\delta T=0$ in Eqs.(65) and (66),
approximate expressions for $J_{\rm E}$ and $J_{\rm S}$ in the linear response
regime take the form of
$J_{\rm E}\approx L_{\rm ES}\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)},$ (82)
and
$J_{\rm S}\approx L_{\rm SS}\Bigg{(}\frac{\Delta\mu_{\rm
S}}{k_{B}T_{0}}\Bigg{)},$ (83)
where $L_{\rm ES}$ and $L_{\rm SS}$ are given by Eqs. (69) and (70),
respectively. Similar to SSE, we note that for small bias, Eqs. (82) and (83)
closely follow the general Eqs. (81) and (61) [solid lines in Figs. 5b and
5c]. As mentioned earlier, each cycle $\mathcal{C}$ represents two paired
circuits, i.e., $\mathcal{C}^{+}$ and $\mathcal{C}^{-}$. In Fig.5a,
$J_{\mathcal{C}}<0$ implies that the flux corresponding to the
counterclockwise circuit $\mathcal{C}^{+}$ ($J^{+}_{\mathcal{C}}$) is less
than the flux corresponding to clockwise circuit $\mathcal{C}^{-}$
($J^{-}_{\mathcal{C}}$). Finally, we emphasize that it is the same set of four
cycles ($\mathcal{C}_{4}$, $\mathcal{C}_{5}$, $\mathcal{C}_{7}$, and
$\mathcal{C}_{8}$) that not only produces a weak spin current in SSE but also
accounts for generating a weak energy current in SPE.
### IV.3 SSE and SPE: As thermodynamic cross-effect
In Section III.2, we have identified cycles
$\\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{10},\mathcal{C}_{11}\\}$, which
contribute solely to the spin current and not to the energy current.
Conversely, cycles $\\{\mathcal{C}_{3},\mathcal{C}_{6}\\}$ are found to contribute
exclusively to the energy current and not to the spin current. Meanwhile,
cycles $\\{\mathcal{C}_{4},\mathcal{C}_{5},\mathcal{C}_{7},\mathcal{C}_{8}\\}$
can contribute to both the energy and the spin currents. To
gain a deeper understanding, it is essential to analyze the complete topology
of the network. Notably, we observe that all cycles contributing to $J_{\rm
S}$ must involve a spin-flip process, either
$\ket{0\uparrow}\leftrightarrow\ket{0\downarrow}$ or
$\ket{1\uparrow}\leftrightarrow\ket{1\downarrow}$, corresponding to the edges
$\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{4}}$ or
$\ket{\mathbb{5}}\leftrightarrow\ket{\mathbb{6}}$, respectively. Similarly,
cycles contributing to $J_{\rm E}$ must include the edges
$\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{5}}$
($\ket{0\uparrow}\leftrightarrow\ket{1\uparrow}$) or
$\ket{\mathbb{4}}\leftrightarrow\ket{\mathbb{6}}$
($\ket{0\downarrow}\leftrightarrow\ket{1\downarrow}$), enabling Coulomb
interaction between the upper and lower dots. This insight sheds light on why
the expressions for spin and energy currents, derived from the master
equation, take their particular forms [Cf. Eqs. (II) and (20)].
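This edge-counting argument can be made concrete with a small sketch. Assuming the counterclockwise node sequences of the eleven cycles read off from the rate-constant products in Eqs. (121)-(131) of Appendix B, the signed number of traversals of the spin-flip edges ($\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{4}}$, $\ket{\mathbb{5}}\leftrightarrow\ket{\mathbb{6}}$) and of the Coulomb edges ($\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{5}}$, $\ket{\mathbb{4}}\leftrightarrow\ket{\mathbb{6}}$) reproduces exactly which cycles enter $J_{\rm S}$ and $J_{\rm E}$, cf. Eqs. (133) and (134):

```python
# Sketch: signed traversal of the special edges for each cycle.  The node
# sequences (counterclockwise, C^+) are read off from the rate products in
# Eqs. (121)-(131) and are assumptions in that sense.
cycles = {1: [1, 3, 4, 1], 2: [2, 5, 6, 2], 3: [1, 3, 5, 2, 1],
          4: [1, 3, 5, 6, 2, 1], 5: [1, 4, 3, 5, 2, 1], 6: [1, 4, 6, 2, 1],
          7: [1, 4, 6, 5, 2, 1], 8: [1, 3, 4, 6, 2, 1], 9: [3, 5, 6, 4, 3],
          10: [2, 6, 4, 3, 5, 2], 11: [1, 3, 5, 6, 4, 1]}

def signed(seq, a, b):
    """Net number of a->b minus b->a hops along the closed sequence."""
    hops = list(zip(seq[:-1], seq[1:]))
    return hops.count((a, b)) - hops.count((b, a))

for j, seq in cycles.items():
    w_spin = signed(seq, 4, 3) + signed(seq, 6, 5)    # cf. J_S = J_34 + J_56, Eq. (117)
    w_energy = signed(seq, 3, 5) + signed(seq, 4, 6)  # cf. J_E = U(J_53 + J_64), Eq. (112)
    print(f"C_{j}: spin weight {w_spin:+d}, energy weight {w_energy:+d}")
# The printed weights reproduce Eqs. (133)-(134): C_1,2,10,11 carry spin only,
# C_3,6 energy only, C_4,5,7,8 both, and C_9 neither.
```

In particular, the sketch shows why $\mathcal{C}_{9}$, $\mathcal{C}_{10}$, and $\mathcal{C}_{11}$ drop out of the energy current despite containing Coulomb-interaction edges: their signed traversals of those edges cancel.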
Figure 6: Schematic diagram of four subcycles
$\\{\mathcal{C}_{4},\mathcal{C}_{5},\mathcal{C}_{7},\mathcal{C}_{8}\\}$ which
are truly responsible for the spin-thermoelectric cross-effect of SSE and SPE.
Special edges are marked in orange.
At this juncture, it is imperative to underscore that cycles
$\\{\mathcal{C}_{1},\mathcal{C}_{2},\mathcal{C}_{9},\mathcal{C}_{10},\text{ and
}\mathcal{C}_{11}\\}$ share the edge
$\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{4}}$ or
$\ket{\mathbb{5}}\leftrightarrow\ket{\mathbb{6}}$, yet they do not contribute
to the spin current in the SSE due to zero cycle affinity. This results from
the fact that the circuit fluxes associated with the cycle trajectories
($\mathcal{C}^{+}$ and $\mathcal{C}^{-}$) are identical in both directions,
yielding a zero cycle flux. The same holds true for the SPE with cycles
$\mathcal{C}_{3}$, $\mathcal{C}_{6}$, and $\mathcal{C}_{9}$ in the absence of
a spin bias voltage, despite having the required edges. Intriguingly, cycle
$\mathcal{C}_{9}$ possesses both spin-flip and Coulomb-interaction edges [Fig.
2], yet it yields a vanishing net cycle flux due to its zero cycle affinity. Therefore,
the asymmetry in cycle affinity emerges as the primary thermodynamic driving
force and the foremost criterion for obtaining a nonzero cycle flux.
Conversely, cycles
${\mathcal{C}_{4},\mathcal{C}_{5},\mathcal{C}_{7},\mathcal{C}_{8}}$ exhibit
nonzero cycle affinity either in the absence of a temperature bias (SPE) or in
the absence of a spin bias voltage (SSE). Consequently, these four cycles
stand as the sole contributors to both SSE and SPE, featuring finite spin and
energy currents. This is attributed to their possession of both the spin-flip
edge ($\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{4}}$ or
$\ket{\mathbb{5}}\leftrightarrow\ket{\mathbb{6}}$) and the Coulomb-interaction
edge ($\ket{\mathbb{3}}\leftrightarrow\ket{\mathbb{5}}$ or
$\ket{\mathbb{4}}\leftrightarrow\ket{\mathbb{6}}$). To summarize, we affirm
that ${\mathcal{C}_{4},\mathcal{C}_{5},\mathcal{C}_{7},\mathcal{C}_{8}}$ stand
as the four pivotal cycles solely responsible for materializing the
thermodynamic cross-effect in the form of the spin-thermoelectric effect
within our simple minimal model of the quantum thermocouple. The consequences
of the interference effect between these major contributing cycles open
intriguing avenues, and future research directions could delve into the impact
of quantum coherence and entanglement effect Rao _et al._ (2020); Whitney
(2016) on the device performance from the perspective of the network theory.
## V Conclusions
The key findings of our present analysis are outlined as follows: (i) We
present a simple model of a quantum thermocouple that exhibits spin-
caloritronic effects based on three-terminal ultra-strong Coulomb-coupled
quantum dots. In contrast to four-terminal models, this minimal model mimics
both spin-dependent Seebeck and Peltier effects in complete analogy to
classical thermocouples, used to describe thermoelectric effects. In the
quantum case, distinct statistical properties of the thermal reservoirs play a
role akin to that of dissimilar metals in traditional thermocouples. (ii) We find
that the expressions for spin and energy currents, derived from the Lindblad
master equation, completely agree with network theoretical results. However,
the quantum kinetic Pauli master equation serves as the basis for constructing
the thermodynamic network, encompassing joint system microstates and
associated transition rates. This is in stark contrast to classical network
theory, where microstates often result from coarse-graining procedures. Here
instead, they naturally emerge as eigenstates of the coupled quantum systems,
derived from the microscopic Hamiltonian description of the composite quantum
system. (iii) Benefiting from the generalized matrix-tree theorem of
algebraic graph theory, we not only unveil the fundamental operational principles
behind spin-Seebeck and spin-Peltier effects but also confirm the
applicability of well-known thermodynamic relations in nano-thermoelectric
devices. The validity of Onsager reciprocity and Kelvin relations for
thermoelectric coefficients underscores the universal generality of
thermodynamic principles in both classical and quantum realms. In the present
case, the above relations stem from the characteristic properties of forward
and backward cycle flux trajectories of the quantum thermodynamic network.
This is fundamentally different from the phenomenological classical laws of
irreversible thermodynamics that hinge on local equilibrium assumptions. (iv)
In this context, we stress the importance of network cycle flux, and cycle
affinity in establishing the macroscopic spin and energy currents in terms of
stochastic cycle currents. Cycle affinity, expressed as a ratio of transition
rates between forward and backward cycle trajectories, emerges as a
fundamental driving force behind nonzero cycle fluxes. Then, the cycle flux
ranking scheme powered by the microscopic or stochastic version of the entropy
production rate, sheds light on the origin of weak spin and energy currents in
spin-Seebeck and Peltier effects, respectively. (v) Finally, we identify four
non-intersecting cycles that are responsible for manifesting both reciprocal
effects of spin-thermoelectricity within our simple minimal model.
Characterized by special edges involving the spin-flip process and Coulomb
interaction between interacting quantum dots, these cycles pave the way for
underpinning the fundamental working principles of quantum thermocouples.
## Acknowledgements
We thank Sujan Kundu for the useful discussions. AG acknowledges financial
support from the Initiation grant of IITK (Grant No. IITK/CHM/2018513). N.G.
is thankful to CSIR for the fellowship. S.G. acknowledges the Ministry of
Education, Government of India, for the Prime Minister Research Fellowship
(PMRF).
## References
* Rowe (1995) D. Rowe, _CRC Handbook of Thermoelectrics_ (CRC-Press, 1995).
* DiSalvo (1999) F. J. DiSalvo, Science 285, 703 (1999).
* Goldsmid (2009) H. Goldsmid, _Introduction to Thermoelectricity_, Springer Series in Materials Science (Springer Berlin Heidelberg, 2009).
* Shakouri and Zebarjadi (2009) A. Shakouri and M. Zebarjadi, “Nanoengineered materials for thermoelectric energy conversion,” in _Thermal Nanosystems and Nanomaterials_, edited by S. Volz (Springer Berlin Heidelberg, Berlin, Heidelberg, 2009) pp. 225–299.
* Dubi and Di Ventra (2011) Y. Dubi and M. Di Ventra, Rev. Mod. Phys. 83, 131 (2011).
* Mazza _et al._ (2015) F. Mazza, S. Valentini, R. Bosisio, G. Benenti, V. Giovannetti, R. Fazio, and F. Taddei, Phys. Rev. B 91, 245435 (2015).
* Benenti _et al._ (2017) G. Benenti, G. Casati, K. Saito, and R. Whitney, Physics Reports 694, 1 (2017), fundamental aspects of steady-state conversion of heat to work at the nanoscale.
* Callen (1985) H. B. Callen, _Thermodynamics and an Introduction to Thermostatistics_ , 2nd ed. (John Wiley & Sons, Inc., New York, 1985).
* Landi and Paternostro (2021) G. T. Landi and M. Paternostro, Rev. Mod. Phys. 93, 035008 (2021).
* Onsager (1931a) L. Onsager, Phys. Rev. 37, 405 (1931a).
* Onsager (1931b) L. Onsager, Phys. Rev. 38, 2265 (1931b).
* Callen (1948) H. B. Callen, Phys. Rev. 73, 1349 (1948).
* Uchida _et al._ (2008) K. Uchida, S. Takahashi, K. Harii, J. Ieda, W. Koshibae, K. Ando, S. Maekawa, and E. Saitoh, Nature 455, 778 (2008).
* Wu _et al._ (2015) S. M. Wu, J. E. Pearson, and A. Bhattacharya, Phys. Rev. Lett. 114, 186602 (2015).
* Zhou _et al._ (2021) W. Zhou, K. Yamamoto, A. Miura, R. Iguchi, Y. Miura, K.-i. Uchida, and Y. Sakuraba, Nature Materials 20, 463 (2021).
* Flipse _et al._ (2014) J. Flipse, F. K. Dejene, D. Wagenaar, G. E. W. Bauer, J. B. Youssef, and B. J. van Wees, Phys. Rev. Lett. 113, 027601 (2014).
* Daimon _et al._ (2016) S. Daimon, R. Iguchi, T. Hioki, E. Saitoh, and K.-i. Uchida, Nature Communications 7, 13754 (2016).
* Ohnuma _et al._ (2017) Y. Ohnuma, M. Matsuo, and S. Maekawa, Phys. Rev. B 96, 134412 (2017).
* Bauer _et al._ (2012) G. E. W. Bauer, E. Saitoh, and B. J. van Wees, Nature Materials 11, 391 (2012).
* Boona _et al._ (2014) S. R. Boona, R. C. Myers, and J. P. Heremans, Energy Environ. Sci. 7, 885 (2014).
* Ronetti _et al._ (2016) F. Ronetti, L. Vannucci, G. Dolcetto, M. Carrega, and M. Sassetti, Phys. Rev. B 93, 165414 (2016).
* Uchida (2021) K.-i. Uchida, Proceedings of the Japan Academy, Series B 97, 69 (2021).
* Di Ventra (2008) M. Di Ventra, _Electrical Transport in Nanoscale Systems_ (Cambridge University Press, 2008).
* Nazarov and Blanter (2009) Y. Nazarov and Y. Blanter, _Quantum Transport: Introduction to Nanoscience_ (Cambridge University Press, 2009).
* Ihn (2010) T. Ihn, _Semiconductor Nanostructures: Quantum States and Electronic Transport_ (OUP Oxford, 2010).
* Heikkilä (2013) T. Heikkilä, _The Physics of Nanoelectronics: Transport and Fluctuation Phenomena at Low Temperatures_, Oxford Master Series in Physics (OUP Oxford, 2013).
* Ren (2013) J. Ren, Phys. Rev. B 88, 220406 (2013).
* Whitney _et al._ (2016) R. S. Whitney, R. Sánchez, F. Haupt, and J. Splettstoesser, Physica E: Low-dimensional Systems and Nanostructures 75, 257 (2016).
* Whitney _et al._ (2018) R. S. Whitney, R. Sánchez, and J. Splettstoesser, “Quantum thermodynamics of nanoscale thermoelectrics and electronic devices,” in _Thermodynamics in the Quantum Regime: Fundamental Aspects and New Directions_, edited by F. Binder, L. A. Correa, C. Gogolin, J. Anders, and G. Adesso (Springer International Publishing, Cham, 2018) pp. 175–206.
* Wang _et al._ (2022) L. Wang, Z. Wang, C. Wang, and J. Ren, Phys. Rev. Lett. 128, 067701 (2022).
* van Houten _et al._ (1992) H. van Houten, L. W. Molenkamp, C. W. J. Beenakker, and C. T. Foxon, Semiconductor Science and Technology 7, B215 (1992).
* Lee _et al._ (2016) M.-J. Lee, J.-H. Ahn, J. H. Sung, H. Heo, S. G. Jeon, W. Lee, J. Y. Song, K.-H. Hong, B. Choi, S.-H. Lee, and M.-H. Jo, Nature Communications 7, 12011 (2016).
* Svilans _et al._ (2016) A. Svilans, M. Leijnse, and H. Linke, Comptes Rendus Physique 17, 1096 (2016), mesoscopic thermoelectric phenomena / Phénomènes thermoélectriques mésoscopiques.
* Erlingsson _et al._ (2017) S. I. Erlingsson, A. Manolescu, G. A. Nemnes, J. H. Bardarson, and D. Sanchez, Phys. Rev. Lett. 119, 036804 (2017).
* Patel _et al._ (2020) A. Patel, D. Singh, Y. Sonvane, P. B. Thakor, and R. Ahuja, ACS Applied Materials & Interfaces 12, 46212 (2020).
* Han _et al._ (2020) W. Han, S. Maekawa, and X.-C. Xie, Nature Materials 19, 139 (2020).
* Yang _et al._ (2023) G. Yang, L. Sang, C. Zhang, N. Ye, A. Hamilton, M. S. Fuhrer, and X. Wang, Nature Reviews Physics 5, 466 (2023).
* Esposito _et al._ (2009) M. Esposito, K. Lindenberg, and C. V. den Broeck, Europhysics Letters 85, 60010 (2009).
* Nakpathomkun _et al._ (2010) N. Nakpathomkun, H. Q. Xu, and H. Linke, Phys. Rev. B 82, 235428 (2010).
* Donsa _et al._ (2014) S. Donsa, S. Andergassen, and K. Held, Phys. Rev. B 89, 125103 (2014).
* Sothmann _et al._ (2014) B. Sothmann, R. Sánchez, and A. N. Jordan, Nanotechnology 26, 032001 (2014).
* Erdman _et al._ (2017) P. A. Erdman, F. Mazza, R. Bosisio, G. Benenti, R. Fazio, and F. Taddei, Phys. Rev. B 95, 245432 (2017).
* Esposito _et al._ (2012) M. Esposito, N. Kumar, K. Lindenberg, and C. Van den Broeck, Phys. Rev. E 85, 031117 (2012).
* Thierschmann _et al._ (2015) H. Thierschmann, R. Sánchez, B. Sothmann, F. Arnold, C. Heyn, W. Hansen, H. Buhmann, and L. W. Molenkamp, Nature Nanotechnology 10, 854 (2015).
* Jiang _et al._ (2015) J.-H. Jiang, M. Kulkarni, D. Segal, and Y. Imry, Phys. Rev. B 92, 045309 (2015).
* Zhang _et al._ (2017) Y. Zhang, X. Zhang, Z. Ye, G. Lin, and J. Chen, Applied Physics Letters 110, 153501 (2017).
* Ghosh _et al._ (2022) S. Ghosh, N. Gupt, and A. Ghosh, Entropy 24 (2022).
* Datta (2005) S. Datta, _Quantum Transport: Atom to Transistor_, Ke xue qian yan cong shu (Cambridge University Press, 2005).
* Breuer and Petruccione (2007) H.-P. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, 2007).
* Gelbwaser-Klimovsky _et al._ (2015) D. Gelbwaser-Klimovsky, W. Niedenzu, and G. Kurizki, Adv. At. Mol. Opt. Phys. 64, 329 (2015).
* Joulain _et al._ (2016) K. Joulain, J. Drevillon, Y. Ezzahri, and J. Ordonez-Miranda, Phys. Rev. Lett. 116, 200601 (2016).
* Ghosh _et al._ (2017) A. Ghosh, C. L. Latune, L. Davidovich, and G. Kurizki, Proc. Natl. Acad. Sci. U.S.A. 114, 12156 (2017).
* Potts (2019) P. P. Potts, “Introduction to quantum thermodynamics (lecture notes),” (2019), arXiv:1906.07439 [quant-ph] .
* Gupt _et al._ (2021) N. Gupt, S. Bhattacharyya, and A. Ghosh, Phys. Rev. E 104, 054130 (2021).
* Gupt _et al._ (2022) N. Gupt, S. Bhattacharyya, B. Das, S. Datta, V. Mukherjee, and A. Ghosh, Phys. Rev. E 106, 024110 (2022).
* Gupt _et al._ (2023) N. Gupt, S. Ghosh, and A. Ghosh, Phys. Rev. E 108, 034305 (2023).
* Schnakenberg (1976) J. Schnakenberg, Rev. Mod. Phys. 48, 571 (1976).
* Hill and Chen (1975) T. L. Hill and Y. D. Chen, Proceedings of the National Academy of Sciences 72, 1291 (1975).
* Kohler and Vollmerhaus (1980) H.-H. Kohler and E. Vollmerhaus, Journal of Mathematical Biology 9, 275 (1980).
* Ren (2017) J. Ren, Frontiers of Physics 12, 120505 (2017).
* Dutta _et al._ (2020) A. Dutta, G. M. Schütz, and D. Chowdhury, Phys. Rev. E 101, 032402 (2020).
* Vandaele _et al._ (2017) K. Vandaele, S. J. Watzman, B. Flebus, A. Prakash, Y. Zheng, S. R. Boona, and J. P. Heremans, Materials Today Physics 1, 39 (2017).
* Sothmann and Büttiker (2012) B. Sothmann and M. Büttiker, Europhysics Letters 99, 27001 (2012).
* Strasberg (2022) P. Strasberg, _Quantum Stochastic Thermodynamics: Foundations and Selected Applications_ (Oxford University Press, 2022).
* Werlang _et al._ (2014) T. Werlang, M. Marchiori, M. Cornelio, and D. Valente, Physical Review E 89, 062109 (2014).
* Sinha _et al._ (2011a) S. S. Sinha, A. Ghosh, and D. S. Ray, Phys. Rev. E 84, 041113 (2011a).
* Sinha _et al._ (2011b) S. S. Sinha, A. Ghosh, and D. S. Ray, Phys. Rev. E 84, 031118 (2011b).
* Tutte (2001) W. T. Tutte, _Graph Theory_ , Vol. 21 (Cambridge University Press, 2001).
* Balakrishnan and Ranganathan (2012) R. Balakrishnan and K. Ranganathan, _A Textbook of Graph Theory_ (Springer Science & Business Media, 2012).
* King and Altman (1956) E. L. King and C. Altman, The Journal of Physical Chemistry 60, 1375 (1956).
* Wu _et al._ (2012) J. Wu, F. Liu, J. Ma, R. J. Silbey, and J. Cao, J. Chem. Phys. 137, 174111 (2012).
* Einax _et al._ (2011) M. Einax, M. Dierl, and A. Nitzan, The Journal of Physical Chemistry C 115, 21396 (2011).
* Einax and Nitzan (2014) M. Einax and A. Nitzan, The Journal of Physical Chemistry C 118, 27226 (2014).
* Kirchhoff (1847) G. Kirchhoff, Poggendorffs Ann. Phys. Chem. 148, 497 (1847).
* Hill and Kedem (1966) T. L. Hill and O. Kedem, Journal of Theoretical Biology 10, 399 (1966).
* Keizer (1972) J. Keizer, Journal of Statistical Physics 6, 67 (1972).
* Ohga _et al._ (2023) N. Ohga, S. Ito, and A. Kolchinsky, Phys. Rev. Lett. 131, 077101 (2023).
* Rao _et al._ (2020) D. D. B. Rao, A. Ghosh, D. Gelbwaser-Klimovsky, N. Bar-Gill, and G. Kurizki, New Journal of Physics 22, 083035 (2020).
* Whitney (2016) R. S. Whitney, Entropy 18 (2016).
## Appendix A Derivation of the Lindblad Quantum Master Equation
The total Hamiltonian of the overall three-terminal setup is given by
$H=H_{\rm S}+H_{\rm B}+H_{\rm I},$ (84)
where $H_{\rm S}$, $H_{\rm B}$, and $H_{\rm I}$ are the total Hamiltonian of
the system, bath, and system-bath interaction, respectively. The interaction
Hamiltonian $H_{\rm I}$ is defined as $H_{\rm I}=H_{\rm IL}+H_{\rm IM}+H_{\rm
IR}$, wherein $H_{\rm IL(R)}$ denotes the interaction of the lower quantum dot
(${\rm QD}_{l}$) with the left (right) bath, and $H_{\rm IM}$ represents the
interaction of the upper quantum dot (${\rm QD}_{u}$) with the middle bath.
The interaction Hamiltonian $H_{\rm I\alpha}$ ($\alpha={\rm L,M,R}$) for each
$\alpha$-th bath is given by Wang _et al._ (2022); Gupt _et al._ (2023)
$\displaystyle H_{\rm IL}$ $\displaystyle=$ $\displaystyle H_{\rm
IL\uparrow}+H_{\rm IL\downarrow},\;H_{\rm IL\uparrow}=\hbar\sum_{k}(t_{{\rm
L}k}b^{\dagger}_{{\rm L}\uparrow k}d_{l\uparrow}+t^{*}_{{\rm
L}k}d^{\dagger}_{l\uparrow}b_{{\rm L}\uparrow k}),\;H_{\rm
IL\downarrow}=\hbar\sum_{k}(t_{{\rm L}k}b^{\dagger}_{{\rm L}\downarrow
k}d_{l\downarrow}+t^{*}_{{\rm L}k}d^{\dagger}_{l\downarrow}b_{{\rm
L}\downarrow k}),$ $\displaystyle H_{\rm IM}$ $\displaystyle=$
$\displaystyle\hbar\sum_{k}(t_{{\rm M}k}b^{\dagger}_{{\rm
M}k}d_{u}+t^{*}_{{\rm M}k}d^{\dagger}_{u}b_{{\rm M}k}),\;H_{\rm
IR}=\hbar\sum_{q}(g_{{\rm R}q}a^{\dagger}_{{\rm
R}q}d^{\dagger}_{l\uparrow}d_{l\downarrow}+g^{*}_{{\rm
R}q}d^{\dagger}_{l\downarrow}d_{l\uparrow}a_{{\rm R}q}).$ (85)
To formulate the master equation, we begin by considering
the von Neumann equation for the total density matrix $\rho_{\rm T}$ of
the combined system and reservoirs in the interaction picture, as given in
Breuer and Petruccione (2007)
$\frac{d\rho_{\rm T}}{dt}=-\frac{i}{\hbar}[H_{\rm I}(t),\rho_{\rm T}(t)].$
(86)
Integrating Eq.(86) and tracing out the bath degrees of freedom, the master
equation in terms of the reduced density matrix $\rho$ of the coupled quantum
dot system under the Born-Markov approximation can be written as Breuer and
Petruccione (2007)
$\frac{d\rho(t)}{dt}=-\frac{1}{\hbar^{2}}\Tr_{B}\int^{\infty}_{0}ds[H_{\rm
I}(t),[H_{\rm I}(t-s),\rho_{\rm T}(t)]].$ (87)
In Eq.(87), $\rho(t)=\Tr_{\rm B}\\{\rho_{\rm T}\\}\equiv\Tr_{\rm
B}\\{\rho(t)\otimes\rho_{\rm B}\\}$ where $\rho_{\rm B}=\rho_{\rm
L\uparrow}\otimes\rho_{\rm L\downarrow}\otimes\rho_{\rm M}\otimes\rho_{\rm
R}$, and $\Tr_{\rm B}\equiv\Tr_{{\rm L\uparrow},{\rm L\downarrow},{\rm M},{\rm
R}}$ stands for the trace over the degrees of freedom of each bath. As a result, we
can rewrite Eq. (87) as Gupt _et al._ (2022); Ghosh _et al._ (2022); Gupt
_et al._ (2023)
$\frac{d\rho(t)}{dt}=-\frac{1}{\hbar^{2}}\Tr_{{\rm L\uparrow},{\rm
L\downarrow},{\rm M},{\rm R}}\int^{\infty}_{0}ds[H_{\rm I}(t),[H_{\rm
I}(t-s),\rho(t)\otimes\rho_{\rm L\uparrow}\otimes\rho_{\rm
L\downarrow}\otimes\rho_{\rm M}\otimes\rho_{\rm R}]].$ (88)
Using the following relations Ghosh _et al._ (2022); Gupt _et al._ (2023)
$\displaystyle\Tr_{\rm L\uparrow}(b_{{\rm L}\uparrow k}(t)\rho_{\rm
L\uparrow})$ $\displaystyle=$ $\displaystyle 0=\Tr_{\rm
L\uparrow}(b^{\dagger}_{{\rm L}\uparrow k}(t)\rho_{\rm
L\uparrow}),\quad\quad\Tr_{\rm L\downarrow}(b_{{\rm L}\downarrow k}(t)\rho_{\rm
L\downarrow})=0=\Tr_{\rm L\downarrow}(b^{\dagger}_{{\rm L}\downarrow k}(t)\rho_{\rm
L\downarrow})$ (89) $\displaystyle\Tr_{\rm M}(b_{{\rm M}k}(t)\rho_{\rm M})$
$\displaystyle=$ $\displaystyle 0=\Tr_{\rm M}(b^{\dagger}_{{\rm
M}k}(t)\rho_{\rm M}),\;\quad\quad\Tr_{\rm R}(a_{{\rm R}q}(t)\rho_{\rm
R})=0=\Tr_{\rm R}(a^{\dagger}_{{\rm R}q}(t)\rho_{\rm R}),$ (90)
one can simplify Eq.(88) as Gupt _et al._ (2022); Ghosh _et al._ (2022);
Gupt _et al._ (2023)
$\frac{d\rho(t)}{dt}=-\frac{1}{\hbar^{2}}\sum_{\beta}\Tr_{{\rm L\uparrow},{\rm
L\downarrow},{\rm M},{\rm R}}\int^{\infty}_{0}ds[H_{\rm I\beta}(t),[H_{\rm
I\beta}(t-s),\rho(t)\otimes\rho_{\rm L\uparrow}\otimes\rho_{\rm
L\downarrow}\otimes\rho_{\rm M}\otimes\rho_{\rm R}]],\quad\beta={\rm
L\uparrow,L\downarrow,M,R}.$ (91)
Now, we use system operators in the interaction picture as
$\displaystyle d_{\rm i}(t)$ $\displaystyle=$ $\displaystyle e^{{iH_{\rm
S}t}/{\hbar}}d_{\rm i}e^{{-iH_{\rm
S}t}/{\hbar}}=\sum_{\\{\varepsilon_{\mathbb{ji}}\\}}e^{{-i{\varepsilon_{\mathbb{ji}}}t}/{\hbar}}d_{\rm
i},$ $\displaystyle d^{\dagger}_{\rm i}(t)$ $\displaystyle=$ $\displaystyle
e^{{iH_{\rm S}t}/{\hbar}}d^{\dagger}_{\rm i}e^{{-iH_{\rm
S}t}/{\hbar}}=\sum_{\\{\varepsilon_{\mathbb{ji}}\\}}e^{{i{\varepsilon_{\mathbb{ji}}}t}/{\hbar}}d^{\dagger}_{\rm
i},\quad{\rm i}=l\uparrow,l\downarrow,u$ (92)
where
$\varepsilon_{\mathbb{ji}}=\varepsilon_{\mathbb{j}}-\varepsilon_{\mathbb{i}}>0$
is the energy required for the transition between state $|\mathbb{i}\rangle$
and $|\mathbb{j}\rangle$ driven by their respective bath. Similarly, one can
write the expressions for the bath operators in the interaction picture. With
these prescriptions, Eq. (91) simplifies to the Lindblad form of the quantum
master equation:
$\frac{d\rho}{dt}=\mathcal{L}_{\rm L\uparrow}[\rho]+\mathcal{L}_{\rm
L\downarrow}[\rho]+\mathcal{L}_{\rm M}[\rho]+\mathcal{L}_{\rm R}[\rho],$ (93)
The explicit forms of the Lindblad superoperators $\mathcal{L}$ in the above
equation are given by
$\displaystyle\mathcal{L}_{\rm L\uparrow}[\rho]=\sum_{\\{\varepsilon_{\rm
L\uparrow}\\}}\gamma_{\rm L}\Big{[}f(\varepsilon_{\rm L\uparrow},\mu_{\rm
L\uparrow},T_{\rm L})\Big{(}d^{\dagger}_{l\uparrow}(\varepsilon_{\rm
L\uparrow})\rho d_{l\uparrow}(\varepsilon_{\rm
L\uparrow})-\frac{1}{2}\\{d_{l\uparrow}(\varepsilon_{\rm
L\uparrow})d^{\dagger}_{l\uparrow}(\varepsilon_{\rm
L\uparrow}),\rho\\}\Big{)}$ $\displaystyle+(1-f(\varepsilon_{\rm
L\uparrow},\mu_{\rm L\uparrow},T_{\rm
L}))\Big{(}d_{l\uparrow}(\varepsilon_{\rm L\uparrow})\rho
d^{\dagger}_{l\uparrow}(\varepsilon_{\rm
L\uparrow})-\frac{1}{2}\\{d^{\dagger}_{l\uparrow}(\varepsilon_{\rm
L\uparrow})d_{l\uparrow}(\varepsilon_{\rm L\uparrow}),\rho\\}\Big{)}\Big{]},$
(94) $\displaystyle\mathcal{L}_{\rm
L\downarrow}[\rho]=\sum_{\\{\varepsilon_{\rm L\downarrow}\\}}\gamma_{\rm
L}\Big{[}f(\varepsilon_{\rm L\downarrow},\mu_{\rm L\downarrow},T_{\rm
L})\Big{(}d^{\dagger}_{l\downarrow}(\varepsilon_{\rm L\downarrow})\rho
d_{l\downarrow}(\varepsilon_{\rm
L\downarrow})-\frac{1}{2}\\{d_{l\downarrow}(\varepsilon_{\rm
L\downarrow})d^{\dagger}_{l\downarrow}(\varepsilon_{\rm
L\downarrow}),\rho\\}\Big{)}$ $\displaystyle+(1-f(\varepsilon_{\rm
L\downarrow},\mu_{\rm L\downarrow},T_{\rm
L}))\Big{(}d_{l\downarrow}(\varepsilon_{\rm L\downarrow})\rho
d^{\dagger}_{l\downarrow}(\varepsilon_{\rm
L\downarrow})-\frac{1}{2}\\{d^{\dagger}_{l\downarrow}(\varepsilon_{\rm
L\downarrow})d_{l\downarrow}(\varepsilon_{\rm
L\downarrow}),\rho\\}\Big{)}\Big{]},$ (95) $\displaystyle\mathcal{L}_{\rm
M}[\rho]=\sum_{\\{\varepsilon_{\rm M}\\}}\gamma_{\rm
M}\Big{[}f(\varepsilon_{\rm M},\mu_{\rm M},T_{\rm
M})\Big{(}d^{\dagger}_{u}(\varepsilon_{\rm M})\rho d_{u}(\varepsilon_{\rm
M})-\frac{1}{2}\\{d_{u}(\varepsilon_{\rm M})d^{\dagger}_{u}(\varepsilon_{\rm
M}),\rho\\}\Big{)}$ $\displaystyle+(1-f(\varepsilon_{\rm M},\mu_{\rm M},T_{\rm
M}))\Big{(}d_{u}(\varepsilon_{\rm M})\rho d^{\dagger}_{u}(\varepsilon_{\rm
M})-\frac{1}{2}\\{d^{\dagger}_{u}(\varepsilon_{\rm M})d_{u}(\varepsilon_{\rm
M}),\rho\\}\Big{)}\Big{]},$ (96) $\displaystyle\mathcal{L}_{\rm
R}[\rho]=\sum_{\\{\varepsilon_{\rm R}\\}}\gamma_{\rm
R}\Big{[}n(\varepsilon_{\rm R},T_{\rm
R})\Big{(}V^{\dagger}_{l}(\varepsilon_{\rm R})\rho V_{l}(\varepsilon_{\rm
R})-\frac{1}{2}\\{V_{l}(\varepsilon_{\rm R})V^{\dagger}_{l}(\varepsilon_{\rm
R}),\rho\\}\Big{)}$ $\displaystyle+(n(\varepsilon_{\rm R},T_{\rm
R})+1)\Big{(}V_{l}(\varepsilon_{\rm R})\rho V^{\dagger}_{l}(\varepsilon_{\rm
R})-\frac{1}{2}\\{V^{\dagger}_{l}(\varepsilon_{\rm R})V_{l}(\varepsilon_{\rm
R}),\rho\\}\Big{)}\Big{]},$ (97)
where the operators $V_{l}=d^{\dagger}_{l\uparrow}d_{l\downarrow}$ and
$V^{\dagger}_{l}=d^{\dagger}_{l\downarrow}d_{l\uparrow}$ are responsible for
the transition between spin-up ($\uparrow$) and spin-down ($\downarrow$)
states. The transition rates corresponding to their respective bath are
characterized by the various $\gamma$’s. The explicit form of all $\gamma$’s
in terms of system-bath coupling constants can be calculated by Fermi’s golden
rule, as $\gamma_{\rm L}=2\pi\hbar\sum_{k}|t_{{\rm
L}k}|^{2}\delta\big{(}\varepsilon-\epsilon_{{\rm L\sigma}k}\big{)}$, where
$\sigma=\\{\uparrow,\downarrow\\}$, $\gamma_{\rm M}=2\pi\hbar\sum_{k}|t_{{\rm
M}k}|^{2}\delta\big{(}\varepsilon-\epsilon_{{\rm M}k}\big{)}$, and
$\gamma_{\rm R}=2\pi\hbar\sum_{q}|g_{{\rm
R}q}|^{2}\delta\big{(}\varepsilon-\epsilon_{{\rm R}q}\big{)}$ Gupt _et al._
(2023). The function
$f(\varepsilon,\mu,T)=[e^{(\varepsilon-\mu)/k_{B}T}+1]^{-1}$ is the Fermi-
Dirac distribution function corresponding to the left (L) and middle (M) baths
with the transition energy $\varepsilon$, chemical potential $\mu$, and
equilibrium bath temperature $T$. Similarly, the function
$n(\varepsilon,T)=[e^{\varepsilon/k_{B}T}-1]^{-1}$ is the Bose-Einstein
distribution function corresponding to the right (R) bath with the transition
energy $\varepsilon$ and reservoir temperature $T$. The distribution functions
are defined as the bath correlation functions and can be calculated as
$\langle{b^{\dagger}b}\rangle=\Tr_{\rm L\sigma(M)}({b^{\dagger}b\rho_{\rm
L\sigma(M)}})=f_{\rm L\sigma(M)}$ and $\langle{bb^{\dagger}}\rangle=\Tr_{\rm
L\sigma(M)}({bb^{\dagger}\rho_{\rm L\sigma(M)}})=1-f_{\rm L\sigma(M)}$ for the
left (middle) bath and $\langle{a^{\dagger}a}\rangle=\Tr_{\rm
R}({a^{\dagger}a\rho_{\rm R}})=n_{\rm R}$,
$\langle{aa^{\dagger}}\rangle=\Tr_{\rm R}({aa^{\dagger}\rho_{\rm R}})=1+n_{\rm
R}$ for the right bath Gupt _et al._ (2023). The operators $b$ and
$b^{\dagger}$ follow anti-commutation relation whereas the operators $a$ and
$a^{\dagger}$ follow commutation relation, and $k_{B}$ is the Boltzmann
constant. The energies needed for the transitions which are driven by the left
and middle baths are $\varepsilon_{\rm
L\uparrow}=\\{\varepsilon_{\mathbb{31}},\varepsilon_{\mathbb{52}}\\}$,
$\varepsilon_{\rm
L\downarrow}=\\{\varepsilon_{\mathbb{41}},\varepsilon_{\mathbb{62}}\\}$, and
$\varepsilon_{\rm
M}=\\{\varepsilon_{\mathbb{21}},\varepsilon_{\mathbb{53}},\varepsilon_{\mathbb{64}}\\}$
respectively, while the energies required for the transitions triggered by the
right bath are $\varepsilon_{\rm
R}=\\{\varepsilon_{\mathbb{43}},\varepsilon_{\mathbb{65}}\\}$. Note that one
can express the various system creation and annihilation operators and their
combinations in the following forms $|\mathbb{i}\rangle\langle\mathbb{j}|$
($\mathbb{i}\neq\mathbb{j}$, $\mathbb{i},\;\mathbb{j}=\mathbb{1,2,3,4,5,6}$),
which are given by:
$\displaystyle d^{\dagger}_{l\uparrow}$ $\displaystyle=$
$\displaystyle|\mathbb{3}\rangle\langle\mathbb{1}|+|\mathbb{5}\rangle\langle\mathbb{2}|,\quad
d_{l\uparrow}=|\mathbb{1}\rangle\langle\mathbb{3}|+|\mathbb{2}\rangle\langle\mathbb{5}|,$
$\displaystyle d^{\dagger}_{l\downarrow}$ $\displaystyle=$
$\displaystyle|\mathbb{4}\rangle\langle\mathbb{1}|+|\mathbb{6}\rangle\langle\mathbb{2}|,\quad
d_{l\downarrow}=|\mathbb{1}\rangle\langle\mathbb{4}|+|\mathbb{2}\rangle\langle\mathbb{6}|,$
$\displaystyle d^{\dagger}_{u}$ $\displaystyle=$
$\displaystyle|\mathbb{2}\rangle\langle\mathbb{1}|+|\mathbb{5}\rangle\langle\mathbb{3}|+|\mathbb{6}\rangle\langle\mathbb{4}|,\quad
d_{u}=|\mathbb{1}\rangle\langle\mathbb{2}|+|\mathbb{3}\rangle\langle\mathbb{5}|+|\mathbb{4}\rangle\langle\mathbb{6}|,$
$\displaystyle V^{\dagger}_{l}$ $\displaystyle=$ $\displaystyle
d^{\dagger}_{l\downarrow}d_{l\uparrow}=|\mathbb{4}\rangle\langle\mathbb{3}|+|\mathbb{6}\rangle\langle\mathbb{5}|,\quad
V_{l}=d^{\dagger}_{l\uparrow}d_{l\downarrow}=|\mathbb{3}\rangle\langle\mathbb{4}|+|\mathbb{5}\rangle\langle\mathbb{6}|.$
(98)
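As a quick consistency check, the operator identities in Eq. (98) can be verified with elementary matrix algebra. The sketch below uses the basis ordering $|\mathbb{1}\rangle=\ket{00}$, $|\mathbb{2}\rangle=\ket{10}$, $|\mathbb{3}\rangle=\ket{0\uparrow}$, $|\mathbb{4}\rangle=\ket{0\downarrow}$, $|\mathbb{5}\rangle=\ket{1\uparrow}$, $|\mathbb{6}\rangle=\ket{1\downarrow}$ implied by Eq. (98) and confirms $V_{l}=d^{\dagger}_{l\uparrow}d_{l\downarrow}$:

```python
import numpy as np

# Sketch verifying the operator identity V_l = d^dagger_{l up} d_{l dn} of
# Eq. (98), in the six-state basis ordering implied by that equation.
def ket_bra(i, j, dim=6):
    m = np.zeros((dim, dim))
    m[i - 1, j - 1] = 1.0          # |i><j| with 1-based state labels
    return m

d_lup_dag = ket_bra(3, 1) + ket_bra(5, 2)     # |3><1| + |5><2|
d_ldn     = ket_bra(1, 4) + ket_bra(2, 6)     # |1><4| + |2><6|
V_l       = ket_bra(3, 4) + ket_bra(5, 6)     # |3><4| + |5><6|

assert np.allclose(d_lup_dag @ d_ldn, V_l)    # V_l = d^dag_{l up} d_{l dn}
```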
Finally, Eqs. (9) to (14) in the main text can be derived with the help of
Eqs. (93)-(98) in the following manner. For example
$\frac{dP_{\mathbb{1}}}{dt}=\langle\mathbb{1}|\frac{d\rho}{dt}|\mathbb{1}\rangle=\langle\mathbb{1}|\mathcal{L}_{\rm
L\uparrow}[\rho]|\mathbb{1}\rangle+\langle\mathbb{1}|\mathcal{L}_{\rm
L\downarrow}[\rho]|\mathbb{1}\rangle+\langle\mathbb{1}|\mathcal{L}_{\rm
M}[\rho]|\mathbb{1}\rangle+\langle\mathbb{1}|\mathcal{L}_{\rm
R}[\rho]|\mathbb{1}\rangle,$ (99)
where the terms
$\displaystyle\langle\mathbb{1}|\mathcal{L}_{\rm
L\uparrow}[\rho]|\mathbb{1}\rangle$ $\displaystyle=$
$\displaystyle{\gamma_{\rm L}f(\varepsilon_{\mathbb{31}},\mu_{\rm
L\uparrow},T_{\rm
L})}\Big{(}-\frac{1}{2}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{1}|\rho|\mathbb{1}\rangle-\frac{1}{2}\langle\mathbb{1}|\rho|\mathbb{1}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}+\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{31}},\mu_{\rm L\uparrow},T_{\rm
L}))\Big{(}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{3}|\rho|\mathbb{3}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}$
(100) $\displaystyle=$ $\displaystyle\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{31}},\mu_{\rm L\uparrow},T_{\rm
L}))P_{\mathbb{3}}-\gamma_{L}f(\varepsilon_{\mathbb{31}},\mu_{\rm
L\uparrow},T_{\rm
L})P_{\mathbb{1}}=k_{\mathbb{13}}P_{\mathbb{3}}-k_{\mathbb{31}}P_{\mathbb{1}}\equiv
J_{\mathbb{13}},$ $\displaystyle\langle\mathbb{1}|\mathcal{L}_{\rm
L\downarrow}[\rho]|\mathbb{1}\rangle$ $\displaystyle=$
$\displaystyle{\gamma_{\rm L}f(\varepsilon_{\mathbb{41}},\mu_{\rm
L\downarrow},T_{\rm
L})}\Big{(}-\frac{1}{2}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{1}|\rho|\mathbb{1}\rangle-\frac{1}{2}\langle\mathbb{1}|\rho|\mathbb{1}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}+\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{41}},\mu_{\rm L\downarrow},T_{\rm
L}))\Big{(}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{4}|\rho|\mathbb{4}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}$
(101) $\displaystyle=$ $\displaystyle\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{41}},\mu_{\rm L\downarrow},T_{\rm
L}))P_{\mathbb{4}}-\gamma_{L}f(\varepsilon_{\mathbb{41}},\mu_{\rm
L\downarrow},T_{\rm
L})P_{\mathbb{1}}=k_{\mathbb{14}}P_{\mathbb{4}}-k_{\mathbb{41}}P_{\mathbb{1}}\equiv
J_{\mathbb{14}},$ $\displaystyle\langle\mathbb{1}|\mathcal{L}_{\rm
M}[\rho]|\mathbb{1}\rangle$ $\displaystyle=$ $\displaystyle{\gamma_{\rm
L}f(\varepsilon_{\mathbb{21}},\mu_{\rm L\downarrow},T_{\rm
L})}\Big{(}-\frac{1}{2}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{1}|\rho|\mathbb{1}\rangle-\frac{1}{2}\langle\mathbb{1}|\rho|\mathbb{1}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}+\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{21}},\mu_{\rm L\downarrow},T_{\rm
L}))\Big{(}\langle\mathbb{1}|\mathbb{1}\rangle\langle\mathbb{2}|\rho|\mathbb{2}\rangle\langle\mathbb{1}|\mathbb{1}\rangle\Big{)}$
(102) $\displaystyle=$ $\displaystyle\gamma_{\rm
L}(1-f(\varepsilon_{\mathbb{21}},\mu_{\rm L\downarrow},T_{\rm
L}))P_{\mathbb{2}}-\gamma_{L}f(\varepsilon_{\mathbb{21}},\mu_{\rm
L\downarrow},T_{\rm
L})P_{\mathbb{1}}=k_{\mathbb{12}}P_{\mathbb{2}}-k_{\mathbb{21}}P_{\mathbb{1}}\equiv
J_{\mathbb{12}},$ $\displaystyle\langle\mathbb{1}|\mathcal{L}_{\rm
R}[\rho]|\mathbb{1}\rangle$ $\displaystyle=$ $\displaystyle 0.$ (103)
Similarly, one can derive time evolution equations for the population of the
other $\mathbb{i}$-th states. Under the steady state, $dP_{\mathbb{i}}/dt=0$
($\mathbb{i=1,2,..,6}$), and we have
$\displaystyle\frac{dP_{\mathbb{1}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{13}}P_{\mathbb{3}}-k_{\mathbb{31}}P_{\mathbb{1}})+(k_{\mathbb{14}}P_{\mathbb{4}}-k_{\mathbb{41}}P_{\mathbb{1}})+(k_{\mathbb{12}}P_{\mathbb{2}}-k_{\mathbb{21}}P_{\mathbb{1}})=J_{\mathbb{13}}+J_{\mathbb{14}}+J_{\mathbb{12}}=0,$
(104) $\displaystyle\frac{dP_{\mathbb{2}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{25}}P_{\mathbb{5}}-k_{\mathbb{52}}P_{\mathbb{2}})+(k_{\mathbb{26}}P_{\mathbb{6}}-k_{\mathbb{62}}P_{\mathbb{2}})+(k_{\mathbb{21}}P_{\mathbb{1}}-k_{\mathbb{12}}P_{\mathbb{2}})=J_{\mathbb{25}}+J_{\mathbb{26}}+J_{\mathbb{21}}=0,$
(105) $\displaystyle\frac{dP_{\mathbb{3}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{31}}P_{\mathbb{1}}-k_{\mathbb{13}}P_{\mathbb{3}})+(k_{\mathbb{35}}P_{\mathbb{5}}-k_{\mathbb{53}}P_{\mathbb{3}})+(k_{\mathbb{34}}P_{\mathbb{4}}-k_{\mathbb{43}}P_{\mathbb{3}})=J_{\mathbb{31}}+J_{\mathbb{35}}+J_{\mathbb{34}}=0,$
(106) $\displaystyle\frac{dP_{\mathbb{4}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{41}}P_{\mathbb{1}}-k_{\mathbb{14}}P_{\mathbb{4}})+(k_{\mathbb{46}}P_{\mathbb{6}}-k_{\mathbb{64}}P_{\mathbb{4}})+(k_{\mathbb{43}}P_{\mathbb{3}}-k_{\mathbb{34}}P_{\mathbb{4}})=J_{\mathbb{41}}+J_{\mathbb{46}}+J_{\mathbb{43}}=0,$
(107) $\displaystyle\frac{dP_{\mathbb{5}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{52}}P_{\mathbb{2}}-k_{\mathbb{25}}P_{\mathbb{5}})+(k_{\mathbb{53}}P_{\mathbb{3}}-k_{\mathbb{35}}P_{\mathbb{5}})+(k_{\mathbb{56}}P_{\mathbb{6}}-k_{\mathbb{65}}P_{\mathbb{5}})=J_{\mathbb{52}}+J_{\mathbb{53}}+J_{\mathbb{56}}=0,$
(108) $\displaystyle\frac{dP_{\mathbb{6}}}{dt}$ $\displaystyle=$
$\displaystyle(k_{\mathbb{62}}P_{\mathbb{2}}-k_{\mathbb{26}}P_{\mathbb{6}})+(k_{\mathbb{64}}P_{\mathbb{4}}-k_{\mathbb{46}}P_{\mathbb{6}})+(k_{\mathbb{65}}P_{\mathbb{5}}-k_{\mathbb{56}}P_{\mathbb{6}})=J_{\mathbb{62}}+J_{\mathbb{64}}+J_{\mathbb{65}}=0.$
(109)
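In practice, Eqs. (104)-(109) together with the normalisation $\sum_{\mathbb{i}}P_{\mathbb{i}}=1$ form a linear system for the steady-state populations. A minimal numerical sketch is given below; the rate constants are random placeholders and would, in an actual calculation, be built from the Fermi-Dirac and Bose-Einstein factors defined above:

```python
import numpy as np

# Sketch: steady state of the six-state Pauli master equation, Eqs. (104)-(109).
# The rates k[(i, j)] (transition j -> i) are random placeholders; in an actual
# calculation they follow from the Fermi-Dirac and Bose-Einstein factors above.
edges = [(1, 2), (1, 3), (1, 4), (2, 5), (2, 6), (3, 4), (3, 5), (4, 6), (5, 6)]
rng = np.random.default_rng(0)
k = {}
for (i, j) in edges:
    k[(i, j)] = rng.uniform(0.1, 1.0)    # j -> i
    k[(j, i)] = rng.uniform(0.1, 1.0)    # i -> j

W = np.zeros((6, 6))
for (i, j), rate in k.items():
    W[i - 1, j - 1] += rate              # gain of state i from state j
    W[j - 1, j - 1] -= rate              # loss of state j
A = np.vstack([W, np.ones((1, 6))])      # impose sum_i P_i = 1
b = np.zeros(7); b[-1] = 1.0
P, *_ = np.linalg.lstsq(A, b, rcond=None)
print("populations:", P, "residual:", W @ P)   # residual ~ 0 at steady state
```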
In the present case, there is no particle exchange between the quantum dots,
implying that the net particle current from the middle reservoir vanishes at the steady state. So, the steady-state
energy (heat) current through the middle reservoir within the Born-Markov-
Secular (BMS) master equation can be defined as Ghosh _et al._ (2022); Wang
_et al._ (2022)
$J_{\rm E}=J^{\rm M}_{\rm E}=\Tr\\{\mathcal{L}_{\rm M}[\rho]H_{\rm S}\\},$
(110)
where the system Hamiltonian $H_{\rm S}$ has the following form, $H_{\rm
S}=\sum_{\mathbb{i}}\varepsilon_{\mathbb{i}}|\mathbb{i}\rangle\langle\mathbb{i}|$,
with $\varepsilon_{\mathbb{i}}$ being the energy of the $\mathbb{i}$-th state.
Using Eq.(110), the expression for t$J_{\rm E}$ can be calculated as
$\displaystyle J_{\rm
E}=\varepsilon_{\mathbb{21}}J_{\mathbb{21}}+\varepsilon_{\mathbb{53}}J_{\mathbb{53}}+\varepsilon_{\mathbb{64}}J_{\mathbb{64}}.$
(111)
At the steady state, one may verify
$J_{\mathbb{12}}=-J_{\mathbb{21}}=J_{\mathbb{53}}+J_{\mathbb{64}}$. As a
result, Eq. (111) reduces to Eq. (20) of the main text:
$\displaystyle J_{\rm E}$ $\displaystyle=$
$\displaystyle\varepsilon_{u}(-J_{\mathbb{12}})+(\varepsilon_{u}+{\rm
U})J_{\mathbb{53}}+(\varepsilon_{u}+{\rm U})J_{\mathbb{64}},$ (112)
$\displaystyle=$
$\displaystyle-\varepsilon_{u}J_{\mathbb{12}}+(\varepsilon_{u}+{\rm
U})(J_{\mathbb{53}}+J_{\mathbb{64}}),$ $\displaystyle=$ $\displaystyle{\rm
U}(J_{\mathbb{53}}+J_{\mathbb{64}}).$
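The reduction from Eq. (111) to Eq. (112) is a one-line algebraic step; a symbolic spot-check, assuming $\varepsilon_{\mathbb{21}}=\varepsilon_{u}$, $\varepsilon_{\mathbb{53}}=\varepsilon_{\mathbb{64}}=\varepsilon_{u}+{\rm U}$, and the steady-state relation $J_{\mathbb{21}}=-(J_{\mathbb{53}}+J_{\mathbb{64}})$, reads:

```python
import sympy as sp

# Symbolic spot-check of the reduction from Eq. (111) to Eq. (112), assuming
# eps_21 = eps_u, eps_53 = eps_64 = eps_u + U and J_21 = -(J_53 + J_64).
eps_u, U, J53, J64 = sp.symbols('varepsilon_u U J_53 J_64')
J21 = -(J53 + J64)
J_E = eps_u*J21 + (eps_u + U)*J53 + (eps_u + U)*J64
print(sp.simplify(J_E))          # -> U*(J_53 + J_64), i.e. Eq. (112)
```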
Similarly, the steady-state spin current due to the left and right reservoirs
can be defined as Wang _et al._ (2022):
$\displaystyle J^{\rm L}_{\rm S}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\Big{(}\Tr\\{d^{\dagger}_{l\downarrow}d_{l\downarrow}\mathcal{L}_{\rm
L\downarrow}[\rho]\\}-\Tr\\{d^{\dagger}_{l\uparrow}d_{l\uparrow}\mathcal{L}_{\rm
L\uparrow}[\rho]\\}\Big{)},$ (113) $\displaystyle J^{\rm R}_{\rm S}$
$\displaystyle=$ $\displaystyle\Tr\\{V^{\dagger}_{l}V_{l}\mathcal{L}_{\rm
R}[\rho]\\}.$ (114)
Now, using Eq. (98), one can derive the expressions for $J^{\rm L}_{\rm S}$ and
$J^{\rm R}_{\rm S}$ from Eqs. (113) and (114) in the following forms:
$\displaystyle J^{\rm L}_{\rm S}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\Big{[}(J_{\mathbb{41}}+J_{\mathbb{62}})-(J_{\mathbb{31}}+J_{\mathbb{52}})\Big{]},$
(115) $\displaystyle J^{\rm R}_{\rm S}$ $\displaystyle=$ $\displaystyle
J_{\mathbb{43}}+J_{\mathbb{65}}.$ (116)
In the steady-state,
$J_{\mathbb{41}}+J_{\mathbb{62}}=-(J_{\mathbb{31}}+J_{\mathbb{52}})=J_{\mathbb{34}}+J_{\mathbb{56}}$.
As a result, we get Eq.(II) of the main text as the steady-state spin current
$\displaystyle J_{\rm S}$ $\displaystyle=$ $\displaystyle J^{\rm L}_{\rm
S}=\frac{1}{2}\Big{[}(J_{\mathbb{41}}+J_{\mathbb{62}})+(J_{\mathbb{41}}+J_{\mathbb{62}})\Big{]}$
(117) $\displaystyle=$ $\displaystyle
J_{\mathbb{41}}+J_{\mathbb{62}}=J_{\mathbb{34}}+J_{\mathbb{56}}$
$\displaystyle=$ $\displaystyle-(J_{\mathbb{43}}+J_{\mathbb{65}})=-J^{\rm
R}_{\rm S}.$ (118)
## Appendix B Derivation of the entropy production rate
From Eq.(46), the steady-state entropy production rate can be written as
follows Schnakenberg (1976); Landi and Paternostro (2021):
$\displaystyle\dot{\sigma}$ $\displaystyle=$
$\displaystyle\frac{1}{2}k_{B}\sum_{\mathbb{i,j}}J_{\mathbb{ij}}\ln\Big(\frac{k_{\mathbb{ij}}}{k_{\mathbb{ji}}}\Big{)}$
(119) $\displaystyle=$
$\displaystyle\frac{1}{2}k_{B}\Bigg{[}J_{\mathbb{31}}\ln\Big(\frac{k_{\mathbb{31}}}{k_{\mathbb{13}}}\Big{)}+J_{\mathbb{13}}\ln\Big(\frac{k_{\mathbb{13}}}{k_{\mathbb{31}}}\Big{)}+J_{\mathbb{41}}\ln\Big(\frac{k_{\mathbb{41}}}{k_{\mathbb{14}}}\Big{)}+J_{\mathbb{14}}\ln\Big(\frac{k_{\mathbb{14}}}{k_{\mathbb{41}}}\Big{)}+J_{\mathbb{43}}\ln\Big(\frac{k_{\mathbb{43}}}{k_{\mathbb{34}}}\Big{)}+J_{\mathbb{34}}\ln\Big(\frac{k_{\mathbb{34}}}{k_{\mathbb{43}}}\Big{)}+.....\Bigg{]}$
As mentioned in the main text, $J_{\mathbb{ij}}=-J_{\mathbb{ji}}$ for
all $\mathbb{i}$ and $\mathbb{j}$ ($\mathbb{i\neq j}$), so $\dot{\sigma}$ reduces
to
$\displaystyle\dot{\sigma}$ $\displaystyle=$ $\displaystyle
k_{B}\Bigg{[}J_{\mathbb{31}}\ln\Big(\frac{k_{\mathbb{31}}}{k_{\mathbb{13}}}\Big{)}+J_{\mathbb{41}}\ln\Big(\frac{k_{\mathbb{41}}}{k_{\mathbb{14}}}\Big{)}+J_{\mathbb{43}}\ln\Big(\frac{k_{\mathbb{43}}}{k_{\mathbb{34}}}\Big{)}+J_{\mathbb{21}}\ln\Big(\frac{k_{\mathbb{21}}}{k_{\mathbb{12}}}\Big{)}+J_{\mathbb{52}}\ln\Big(\frac{k_{\mathbb{52}}}{k_{\mathbb{25}}}\Big{)}+J_{\mathbb{64}}\ln\Big(\frac{k_{\mathbb{64}}}{k_{\mathbb{46}}}\Big{)}+.....\Bigg{]}$
(120) $\displaystyle=$ $\displaystyle
k_{B}\Bigg{[}(J^{+}_{\mathcal{C}_{1}}-J^{-}_{\mathcal{C}_{1}}+J^{+}_{\mathcal{C}_{3}}-J^{-}_{\mathcal{C}_{3}}+J^{+}_{\mathcal{C}_{4}}-J^{-}_{\mathcal{C}_{4}}+J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}}+J^{+}_{\mathcal{C}_{11}}-J^{-}_{\mathcal{C}_{11}})\ln\Big(\frac{k_{\mathbb{31}}}{k_{\mathbb{13}}}\Big{)}+(-J^{+}_{\mathcal{C}_{1}}+J^{-}_{\mathcal{C}_{1}}+J^{+}_{\mathcal{C}_{5}}-J^{-}_{\mathcal{C}_{5}}+J^{+}_{\mathcal{C}_{6}}-J^{-}_{\mathcal{C}_{6}}$
$\displaystyle+$ $\displaystyle
J^{+}_{\mathcal{C}_{7}}-J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{11}}+J^{-}_{\mathcal{C}_{11}})\ln\Big(\frac{k_{\mathbb{41}}}{k_{\mathbb{14}}}\Big{)}+(J^{+}_{\mathcal{C}_{1}}-J^{-}_{\mathcal{C}_{1}}-J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{5}}+J^{+}_{\mathcal{C}_{8}}-J^{-}_{\mathcal{C}_{8}}-J^{+}_{\mathcal{C}_{9}}+J^{-}_{\mathcal{C}_{9}}-J^{+}_{\mathcal{C}_{10}}+J^{-}_{\mathcal{C}_{10}})\ln\Big(\frac{k_{\mathbb{43}}}{k_{\mathbb{34}}}\Big{)}$
$\displaystyle+$
$\displaystyle(-J^{+}_{\mathcal{C}_{3}}+J^{-}_{\mathcal{C}_{3}}-J^{+}_{\mathcal{C}_{4}}+J^{-}_{\mathcal{C}_{4}}-J^{+}_{\mathcal{C}_{5}}+J^{-}_{\mathcal{C}_{5}}-J^{+}_{\mathcal{C}_{6}}+J^{-}_{\mathcal{C}_{6}}-J^{+}_{\mathcal{C}_{7}}+J^{-}_{\mathcal{C}_{7}}-J^{+}_{\mathcal{C}_{8}}+J^{-}_{\mathcal{C}_{8}})\ln\Big(\frac{k_{\mathbb{21}}}{k_{\mathbb{12}}}\Big{)}+.....\Bigg{]}$
$\displaystyle=$ $\displaystyle
k_{B}\Bigg{[}J^{+}_{\mathcal{C}_{1}}\ln\Bigg(\frac{k_{\mathbb{14}}k_{\mathbb{43}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{34}}k_{\mathbb{41}}}\Bigg{)}-J^{-}_{\mathcal{C}_{1}}\ln\Bigg(\frac{k_{\mathbb{14}}k_{\mathbb{43}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{34}}k_{\mathbb{41}}}\Bigg{)}+.....\Bigg{]}=k_{B}\Bigg{[}J^{+}_{\mathcal{C}_{1}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}\Bigg{)}-J^{-}_{\mathcal{C}_{1}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}\Bigg{)}+.....\Bigg{]}$
$\displaystyle=$ $\displaystyle
k_{B}(J^{+}_{\mathcal{C}_{1}}-J^{-}_{\mathcal{C}_{1}})\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}\Bigg{)}+.....$
$\displaystyle=$ $\displaystyle
k_{B}\sum_{\mathcal{C}}J_{\mathcal{C}}\mathcal{X}_{\mathcal{C}},\quad\text{where}\quad
J_{\mathcal{C}}=(J^{+}_{\mathcal{C}}-J^{-}_{\mathcal{C}})\quad\text{and}\quad\mathcal{X}_{\mathcal{C}}=\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}}}{\Pi^{-}_{\mathcal{C}}}\Bigg{)}.$
The ratio of $\Pi^{\pm}_{\mathcal{C}}$ is equal to the ratio of
$J^{\pm}_{\mathcal{C}}$ for each cycle trajectory. These ratios are
$\displaystyle\frac{J^{+}_{\mathcal{C}_{1}}}{J^{-}_{\mathcal{C}_{1}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}=\frac{k_{\mathbb{43}}k_{\mathbb{31}}k_{\mathbb{14}}}{k_{\mathbb{13}}k_{\mathbb{34}}k_{\mathbb{41}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$
(121) $\displaystyle\frac{J^{+}_{\mathcal{C}_{2}}}{J^{-}_{\mathcal{C}_{2}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{2}}}{\Pi^{-}_{\mathcal{C}_{2}}}=\frac{k_{\mathbb{26}}k_{\mathbb{65}}k_{\mathbb{52}}}{k_{\mathbb{25}}k_{\mathbb{56}}k_{\mathbb{62}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$
(122) $\displaystyle\frac{J^{+}_{\mathcal{C}_{3}}}{J^{-}_{\mathcal{C}_{3}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{3}}}{\Pi^{-}_{\mathcal{C}_{3}}}=\frac{k_{\mathbb{12}}k_{\mathbb{25}}k_{\mathbb{53}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{35}}k_{\mathbb{52}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}\Big{)},$ (123)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{4}}}{J^{-}_{\mathcal{C}_{4}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{4}}}{\Pi^{-}_{\mathcal{C}_{4}}}=\frac{k_{\mathbb{12}}k_{\mathbb{26}}k_{\mathbb{65}}k_{\mathbb{53}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{35}}k_{\mathbb{56}}k_{\mathbb{62}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$ (124)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{5}}}{J^{-}_{\mathcal{C}_{5}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{5}}}{\Pi^{-}_{\mathcal{C}_{5}}}=\frac{k_{\mathbb{12}}k_{\mathbb{25}}k_{\mathbb{53}}k_{\mathbb{34}}k_{\mathbb{41}}}{k_{\mathbb{14}}k_{\mathbb{43}}k_{\mathbb{35}}k_{\mathbb{52}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}+\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$ (125)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{6}}}{J^{-}_{\mathcal{C}_{6}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{6}}}{\Pi^{-}_{\mathcal{C}_{6}}}=\frac{k_{\mathbb{12}}k_{\mathbb{26}}k_{\mathbb{64}}k_{\mathbb{41}}}{k_{\mathbb{14}}k_{\mathbb{46}}k_{\mathbb{62}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}\Big{)},$ (126)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{7}}}{J^{-}_{\mathcal{C}_{7}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{7}}}{\Pi^{-}_{\mathcal{C}_{7}}}=\frac{k_{\mathbb{12}}k_{\mathbb{25}}k_{\mathbb{56}}k_{\mathbb{64}}k_{\mathbb{41}}}{k_{\mathbb{14}}k_{\mathbb{46}}k_{\mathbb{65}}k_{\mathbb{52}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}+\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$ (127)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{8}}}{J^{-}_{\mathcal{C}_{8}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{8}}}{\Pi^{-}_{\mathcal{C}_{8}}}=\frac{k_{\mathbb{12}}k_{\mathbb{26}}k_{\mathbb{64}}k_{\mathbb{43}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{34}}k_{\mathbb{46}}k_{\mathbb{62}}k_{\mathbb{21}}}=e^{{{\rm
U}\delta T}/{k_{B}T_{0}(T_{0}+\delta T)}}e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1+\frac{{\rm U}\delta
T}{k_{B}{T_{0}}^{2}}-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$ (128)
$\displaystyle\frac{J^{+}_{\mathcal{C}_{9}}}{J^{-}_{\mathcal{C}_{9}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{9}}}{\Pi^{-}_{\mathcal{C}_{9}}}=\frac{k_{\mathbb{34}}k_{\mathbb{46}}k_{\mathbb{65}}k_{\mathbb{53}}}{k_{\mathbb{35}}k_{\mathbb{56}}k_{\mathbb{64}}k_{\mathbb{43}}}=1,$
(129) $\displaystyle\frac{J^{+}_{\mathcal{C}_{10}}}{J^{-}_{\mathcal{C}_{10}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{10}}}{\Pi^{-}_{\mathcal{C}_{10}}}=\frac{k_{\mathbb{25}}k_{\mathbb{53}}k_{\mathbb{34}}k_{\mathbb{46}}k_{\mathbb{62}}}{k_{\mathbb{26}}k_{\mathbb{64}}k_{\mathbb{43}}k_{\mathbb{35}}k_{\mathbb{52}}}=e^{{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1+\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)},$
(130) $\displaystyle\frac{J^{+}_{\mathcal{C}_{11}}}{J^{-}_{\mathcal{C}_{11}}}$
$\displaystyle=$
$\displaystyle\frac{\Pi^{+}_{\mathcal{C}_{11}}}{\Pi^{-}_{\mathcal{C}_{11}}}=\frac{k_{\mathbb{14}}k_{\mathbb{46}}k_{\mathbb{65}}k_{\mathbb{53}}k_{\mathbb{31}}}{k_{\mathbb{13}}k_{\mathbb{35}}k_{\mathbb{56}}k_{\mathbb{64}}k_{\mathbb{41}}}=e^{-{\Delta\mu_{\rm
S}}/{k_{B}T_{0}}}\approx\Big{(}1-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Big{)}.$
(131)
Substituting Eqs. (121)-(131) into Eq. (120) yields
$\displaystyle\dot{\sigma}$ $\displaystyle=$ $\displaystyle
k_{B}\Bigg{[}J_{\mathcal{C}_{1}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{1}}}{\Pi^{-}_{\mathcal{C}_{1}}}\Bigg{)}+J_{\mathcal{C}_{2}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{2}}}{\Pi^{-}_{\mathcal{C}_{2}}}\Bigg{)}+J_{\mathcal{C}_{3}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{3}}}{\Pi^{-}_{\mathcal{C}_{3}}}\Bigg{)}+J_{\mathcal{C}_{4}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{4}}}{\Pi^{-}_{\mathcal{C}_{4}}}\Bigg{)}+J_{\mathcal{C}_{5}}\ln\Bigg(\frac{\Pi^{+}_{\mathcal{C}_{5}}}{\Pi^{-}_{\mathcal{C}_{5}}}\Bigg{)}+.....\Bigg{]}$
(132) $\displaystyle=$ $\displaystyle
k_{B}\Bigg{[}J_{\mathcal{C}_{1}}\Bigg{(}-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Bigg{)}+J_{\mathcal{C}_{2}}\Bigg{(}-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Bigg{)}+J_{\mathcal{C}_{3}}\Bigg{(}\frac{{\rm U}\delta T}{k_{B}{T_{0}}(T_{0}+\delta T)}\Bigg{)}+J_{\mathcal{C}_{4}}\Bigg{(}\frac{{\rm U}\delta T}{k_{B}{T_{0}}(T_{0}+\delta T)}-\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Bigg{)}$ $\displaystyle+$ $\displaystyle
J_{\mathcal{C}_{5}}\Bigg{(}\frac{{\rm U}\delta T}{k_{B}{T_{0}}(T_{0}+\delta T)}+\frac{\Delta\mu_{\rm S}}{k_{B}T_{0}}\Bigg{)}+J_{\mathcal{C}_{6}}\Bigg{(}\frac{{\rm U}\delta T}{k_{B}{T_{0}}(T_{0}+\delta T)}\Bigg{)}+.....\Bigg{]}$ $\displaystyle=$
$\displaystyle{\rm U}[J_{\mathcal{C}_{3}}+J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{6}}+J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}]\frac{\delta T}{T_{0}(T_{0}+\delta T)}+[-J_{\mathcal{C}_{1}}-J_{\mathcal{C}_{2}}-J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{7}}-J_{\mathcal{C}_{8}}+J_{\mathcal{C}_{10}}-J_{\mathcal{C}_{11}}]\frac{\Delta\mu_{\rm S}}{T_{0}}$
$\displaystyle=$ $\displaystyle J_{\rm E}\Big{[}\frac{1}{T_{0}}-\frac{1}{(T_{0}+\delta T)}\Big{]}+J_{\rm S}\Big{[}\frac{\mu_{\rm L\downarrow}}{T_{0}}-\frac{\mu_{\rm L\uparrow}}{T_{0}}\Big{]},$
where we identify the expressions of the macroscopic energy and spin currents
in terms of microscopic cycle fluxes
$\displaystyle J_{\rm E}$ $\displaystyle=$ $\displaystyle{\rm
U}[J_{\mathcal{C}_{3}}+J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{6}}+J_{\mathcal{C}_{7}}+J_{\mathcal{C}_{8}}]$
(133) $\displaystyle J_{\rm S}$ $\displaystyle=$
$\displaystyle[-J_{\mathcal{C}_{1}}-J_{\mathcal{C}_{2}}-J_{\mathcal{C}_{4}}+J_{\mathcal{C}_{5}}+J_{\mathcal{C}_{7}}-J_{\mathcal{C}_{8}}+J_{\mathcal{C}_{10}}-J_{\mathcal{C}_{11}}],$
(134)
which are equivalent to Eqs. (62) and (60) of the main text.
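As a consistency check on Eqs. (132)-(134), we note that the result can be put in the standard bilinear flux-force form of irreversible thermodynamics. Writing the thermodynamic forces as
$X_{\rm E}=\frac{1}{T_{0}}-\frac{1}{T_{0}+\delta T}=\frac{\delta T}{T_{0}(T_{0}+\delta T)},\qquad X_{\rm S}=\frac{\mu_{\rm L\downarrow}-\mu_{\rm L\uparrow}}{T_{0}},$
the last line of Eq. (132) reads $\dot{\sigma}=J_{\rm E}X_{\rm E}+J_{\rm S}X_{\rm S}$, with $J_{\rm E}$ and $J_{\rm S}$ given by Eqs. (133) and (134). Here we are assuming the sign convention $\Delta\mu_{\rm S}=\mu_{\rm L\downarrow}-\mu_{\rm L\uparrow}$, which is the convention consistent with the final line of Eq. (132); with the opposite convention the sign of $J_{\rm S}$ is simply reversed.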
|
with frequencies going to zero or infinity, into an error term which is small
in $Y$ (which suffices for our purposes as we have now developed a robust
small data theory in $Y$!).
A precise formulation of the profile decomposition we require is as follows.
###### Theorem 6.1 (Profile decomposition).
Let $(f_{n},g_{n})\in H^{1}\times L^{2}$ be a bounded sequence. Then after
passing to a subsequence if necessary, there exists
$J^{*}\in\mathbb{N}\cup\\{\infty\\}$, non-zero profiles $(f^{(j)},g^{(j)})\in
H^{1}\times L^{2}$, and group elements
$\mathfrak{g}_{n}^{(j)}=\mathfrak{g}[t_{n}^{(j)},x_{n}^{(j)}]$ such that if we
define $(w_{n}^{(J)},e_{n}^{(J)})\in H^{1}\times L^{2}$ as
$(f_{n},g_{n})=\sum_{j=1}^{J}\mathfrak{g}_{n}^{(j)}(f^{(j)},g^{(j)})+(w_{n}^{(J)},e_{n}^{(J)})$
then we have the properties:
1. (i)
The energy of the profiles decouples, thus for any $J\leqslant J^{*}$
$\lim_{n\to\infty}\Big{(}\|f_{n}\|_{H^{1}}^{2}-\sum_{j=1}^{J}\|f^{(j)}\|_{H^{1}}^{2}-\|w^{(J)}_{n}\|_{H^{1}}^{2}\Big{)}=0=\lim_{n\to\infty}\Big{(}\|g_{n}\|_{L^{2}}^{2}-\sum_{j=1}^{J}\|g^{(j)}\|_{L^{2}}^{2}-\|e^{(J)}_{n}\|_{L^{2}}^{2}\Big{)}$
and
$\lim_{n\to\infty}\Big{(}\mathcal{E}_{Z}(f_{n},g_{n})-\sum_{j=1}^{J}\mathcal{E}_{Z}\big{(}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\big{)}-\mathcal{E}_{Z}(w^{(J)}_{n},e^{(J)}_{n})\Big{)}=0.$
2. (ii)
The free evolution of the error $(w^{(J)}_{n},e^{(J)}_{n})$ goes to zero in
$Y\times Y$, thus
$\lim_{J\to
J^{*}}\limsup_{n\to\infty}\big{(}\|e^{it\Delta}w^{(J)}_{n}\|_{Y}+\|e^{it|\nabla|}e^{(J)}_{n}\|_{Y}\big{)}=0.$
3. (iii)
For any $j\not=k$, the group elements
$\mathfrak{g}_{n}^{(j)}=\mathfrak{g}[t^{(j)}_{n},x^{(j)}_{n}]$ satisfy the
asymptotic orthogonality property
$\lim_{n\to\infty}\big{(}|t_{n}^{(j)}-t_{n}^{(k)}|+|x^{(j)}_{n}-x^{(k)}_{n}|\big{)}=\infty.$
Moreover, we have the normalisation condition that either $t_{n}^{(j)}=0$ for
all $n\in\mathbb{N}$, or $|t_{n}^{(j)}|\to\infty$ as $n\to\infty$.
###### Proof.
The proof is a minor modification of the profile decomposition of Bahouri-
Gérard [2], and thus we only give a sketch of the proof. A standard argument,
see for instance [23], shows that it suffices to prove that if
$\lim_{n\to\infty}\|(f_{n},g_{n})\|_{H^{1}\times
L^{2}}=A,\qquad\lim_{n\to\infty}\|(e^{it\Delta}f_{n},e^{it|\nabla|}g_{n})\|_{Y\times
Y}=\epsilon>0,$ (6.1)
then there exists a profile $(f^{(1)},g^{(1)})\in H^{1}\times L^{2}$ and a
sequence of group elements $\mathfrak{g}_{n}=\mathfrak{g}[t_{n},x_{n}]$ such
that, after potentially taking a subsequence, the sequence
$(\mathfrak{g}_{n})^{-1}(f_{n},g_{n})$ converges weakly to $(f^{(1)},g^{(1)})$
in $H^{1}\times L^{2}$, we have the lower bound
$\|(f^{(1)},g^{(1)})\|_{H^{1}\times L^{2}}\gtrsim\epsilon,$ (6.2)
either $t_{n}=0$ for all $n$ or $|t_{n}|\to\infty$, and the energy (and
$L^{2}$ and $H^{1}$ norms) decouples
$\begin{split}\lim_{n\to\infty}\Big{(}\mathcal{E}_{Z}[(f_{n},g_{n})]-\mathcal{E}_{Z}\big{[}\mathfrak{g}_{n}(f^{(1)},g^{(1)})\big{]}-\mathcal{E}_{Z}\big{[}(f_{n},g_{n})-\mathfrak{g}_{n}(f^{(1)},g^{(1)})\big{]}\Big{)}&=0,\\\
\lim_{n\to\infty}\Big{(}\|f_{n}\|_{H^{1}}^{2}-\|f^{(1)}\|_{H^{1}}^{2}-\|f_{n}-e^{-it_{n}\Delta}\tau_{x_{n}}f^{(1)}\|_{H^{1}}^{2}\Big{)}&=0,\\\
\lim_{n\to\infty}\Big{(}\|g_{n}\|_{L^{2}}^{2}-\|g^{(1)}\|_{L^{2}}^{2}-\|g_{n}-e^{-it_{n}|\nabla|}\tau_{x_{n}}g^{(1)}\|_{L^{2}}^{2}\Big{)}&=0.\end{split}$
(6.3)
To construct the profile $(f^{(1)},g^{(1)})$ we observe that by definition
there exists $(s_{n},x_{n})\in\mathbb{R}\times\mathbb{R}^{4}$ and
$\lambda_{n}\in 2^{\mathbb{N}}$ such that
$\lambda_{n}^{-4}\big{|}\big{(}e^{is_{n}\Delta}P_{\lambda_{n}}f_{n},e^{is_{n}|\nabla|}P_{\lambda_{n}}g_{n}\big{)}(x_{n})\big{|}\geqslant\frac{1}{2}\epsilon.$
An application of Bernstein’s inequality gives
$\lambda_{n}^{-2}\|(f_{n},g_{n})\|_{H^{1}\times L^{2}}\gtrsim\epsilon$, and
hence $\limsup_{n\to\infty}\lambda_{n}^{2}\lesssim\epsilon^{-1}A$. In
particular, as $\lambda_{n}\in 2^{\mathbb{N}}$, there exists $\lambda_{*}\in
2^{\mathbb{N}}$ and a subsequence such that $\lambda_{n}=\lambda_{*}$ for all
$n\in\mathbb{N}$. Define the group element
$\mathfrak{g}_{n}=\mathfrak{g}[t_{n},x_{n}]$, where $t_{n}=s_{n}$ if
$|s_{n}|\to\infty$, and otherwise $t_{n}=0$ for all $n\in\mathbb{N}$. As
$(\mathfrak{g}_{n})^{-1}(f_{n},g_{n})$ is a bounded sequence in $H^{1}\times
L^{2}$, after potentially taking a further subsequence, there exists
$(f^{(1)},g^{(1)})\in H^{1}\times L^{2}$ such that
$(\mathfrak{g}_{n})^{-1}(f_{n},g_{n})$ converges weakly to $(f^{(1)},g^{(1)})$
in $H^{1}\times L^{2}$. Letting $K_{\lambda_{*}}$ denote the kernel of the
Fourier multiplier $P_{\lambda_{*}}$, we have by the continuity of the flow
$t\mapsto(e^{it\Delta},e^{it|\nabla|})$ on $H^{1}\times L^{2}$,
$\displaystyle|(P_{\lambda_{*}}f^{(1)},P_{\lambda_{*}}g^{(1)})(0)|$
$\displaystyle=\limsup_{n\to\infty}\Big{|}\int_{\mathbb{R}^{4}}K_{\lambda_{*}}(-y)(\mathfrak{g}_{n})^{-1}(f_{n},g_{n})(y)dy\Big{|}$
$\displaystyle=\lambda_{*}^{4}\limsup_{n\to\infty}\lambda_{n}^{-4}\big{|}\big{(}e^{is_{n}\Delta}P_{\lambda_{n}}f_{n},e^{is_{n}|\nabla|}P_{\lambda_{n}}g_{n}\big{)}(x_{n})\big{|}\geqslant\frac{1}{2}\lambda_{*}^{4}\epsilon$
and hence Bernstein’s inequality and the fact that $\lambda_{*}\in 2^{\mathbb{N}}$
gives the lower bound (6.2). The $L^{2}$ and $H^{1}$ decoupling in (6.3)
follows immediately from the fact that $e^{it_{n}\Delta}\tau_{-x_{n}}f_{n}$
converges weakly in $H^{1}$ to $f^{(1)}$, while
$e^{it_{n}|\nabla|}\tau_{-x_{n}}g_{n}$ converges weakly to $g^{(1)}$, and
noting that both the $H^{1}$ and $L^{2}$ norms are invariant under the action
of $\mathfrak{g}_{n}$. To verify the energy decoupling in (6.3) we have to
work slightly harder. In view of the $H^{1}$ and $L^{2}$ decoupling, it
suffices to prove that
$\lim_{n\to\infty}\Big{|}\int_{\mathbb{R}^{4}}g_{n}|f_{n}|^{2}-e^{-it_{n}|\nabla|}\tau_{x_{n}}g^{(1)}\big{|}e^{-it_{n}\Delta}\tau_{x_{n}}f^{(1)}\big{|}^{2}-\big{(}g_{n}-e^{-it_{n}|\nabla|}\tau_{x_{n}}g^{(1)}\big{)}\big{|}f_{n}-e^{-it_{n}\Delta}\tau_{x_{n}}f^{(1)}\big{|}^{2}dx\Big{|}=0.$
If $|t_{n}|\to\infty$, then after approximating by smooth functions the limit
follows from the dispersive decay of the wave and Schrödinger propagators.
Thus we may assume $t_{n}=0$ and after a short computation via translation
invariance and weak convergence, our goal is to now prove that
$\lim_{n\to\infty}\int_{\mathbb{R}^{4}}2|\tau_{-x_{n}}g_{n}||f^{(1)}||\tau_{-x_{n}}f_{n}-f^{(1)}|+|g^{(1)}||\tau_{-x_{n}}f_{n}+f^{(1)}||\tau_{-x_{n}}f_{n}-f^{(1)}|dx=0.$
(6.4)
But this follows by noting that since $\tau_{-x_{n}}f_{n}$ is bounded in $H^{1}$,
by the Rellich-Kondrachov Theorem, we have
$\|\tau_{-x_{n}}f_{n}-f^{(1)}\|_{L^{2}(\Omega)}\to 0$ for any compact
$\Omega\subset\mathbb{R}^{4}$. Hence the limit follows by localising in space. ∎
Later we exploit the fact that asymptotically orthogonal nonlinear profiles
only interact weakly.
###### Lemma 6.2 (Orthogonal profiles interact weakly).
Let $(t_{n},x_{n}),(t_{n}^{\prime},x_{n}^{\prime})\in\mathbb{R}^{1+4}$ be
sequences such that
$\lim_{n\to\infty}\big{(}|t_{n}-t_{n}^{\prime}|+|x_{n}-x_{n}^{\prime}|\big{)}=\infty.$
Then for any $V\in\underline{W}^{0}$ and $u,w\in S^{\frac{1}{2}}$ we have
$\lim_{n\to\infty}\big{\|}\mathcal{I}_{0}\big{[}\Re(V_{n})u_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}=\lim_{n\to\infty}\big{\|}\mathcal{J}_{0}\big{[}|\nabla|(\overline{w}_{n}u_{n})\big{]}\big{\|}_{W^{0}}=\lim_{n\to\infty}\|\overline{w}_{n}u_{n}\|_{L^{1}_{t}L^{2}_{x}}=0$
(6.5)
where we take
$u_{n}(t,x)=u(t-t_{n},x-x_{n}),\qquad
V_{n}(t,x)=V(t-t_{n}^{\prime},x-x_{n}^{\prime}),\qquad
w_{n}(t,x)=w(t-t_{n}^{\prime},x-x_{n}^{\prime}).$
###### Proof.
The proof is essentially the standard approximation argument to reduce to the
$C^{\infty}_{0}$ case. We begin by observing that after translating in space-
time, for all limits in (6.5) it is enough to consider the case
$t_{n}^{\prime}=x_{n}^{\prime}=0$. For the first limit, as
$\lim_{R\to\infty}\|P_{\geqslant
R}V\|_{\underline{W}^{0}}=\lim_{R\to\infty}\|P_{\geqslant
R}u\|_{S^{\frac{1}{2}}}=0$
an application of Theorem 2.4 shows that we only have to consider bounded
frequencies. In particular, since
$\big{\|}\mathcal{I}_{0}\big{[}\Re(V_{<R})P_{<R}u_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}\lesssim
R^{\frac{1}{2}}\|V_{<R}(P_{<R}u)_{n}\|_{L^{2}_{t}L^{\frac{4}{3}}_{x}}$
it suffices to prove that for the bounded frequency contribution we have
$\lim_{n\to\infty}\|V_{<R}P_{<R}u_{n}\|_{L^{2}_{t}L^{\frac{4}{3}}_{x}}=0.$
This can be reduced further after noting that as $u\in
S^{\frac{1}{2}}\subset L^{2}_{t}L^{4}_{x}$ we have
$\lim_{R^{\prime}\to\infty}\big{\|}\mathbbold{1}_{\\{|t|+|x|\geqslant
R^{\prime}\\}}P_{<R}u\big{\|}_{L^{2}_{t}L^{4}_{x}}=0.$
Letting $B_{R^{\prime}}=\\{|t|+|x|<R^{\prime}\\}$ we have
$\displaystyle\big{\|}V_{\leqslant
R}\big{(}\mathbbold{1}_{B_{R^{\prime}}}P_{\leqslant
R}u\big{)}_{n}\big{\|}_{L^{2}_{t}L^{\frac{4}{3}}_{x}}$
$\displaystyle\lesssim\big{\|}V_{\leqslant
R}(t,x)\mathbbold{1}_{B_{R^{\prime}}}(t-t_{n},x-x_{n})\big{\|}_{L^{2}_{t}L^{\frac{4}{3}}_{x}}\|P_{\leqslant
R}u\|_{L^{\infty}_{t,x}}$ $\displaystyle\lesssim
R^{\frac{3}{2}}(R^{\prime})^{\frac{7}{3}}\|V_{\leqslant
R}(t,x)\mathbbold{1}_{B_{R^{\prime}}}(t-t_{n},x-x_{n})\|_{L^{2}_{t}L^{6}_{x}}\|u\|_{L^{\infty}_{t}H^{\frac{1}{2}}}$
and hence it is enough to prove that
$\lim_{n\to\infty}\|V_{\leqslant
R}(t,x)\mathbbold{1}_{B_{R^{\prime}}}(t-t_{n},x-x_{n})\|_{L^{2}_{t}L^{6}_{x}}=0.$
But this is immediate via the dominated convergence theorem since
$\|V_{\leqslant R}\|_{L^{2}_{t}L^{6}_{x}}\lesssim
R^{\frac{5}{6}}\|V\|_{\underline{W}^{0}}$ which is a consequence of (2.2).
Hence the first limit in (6.5) follows.
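(For the reader's convenience, we note where the numerology in the chain of inequalities above comes from; this is only a gloss on the estimate. The factor $R^{\frac{3}{2}}$ arises from Bernstein's inequality in $\mathbb{R}^{4}$, $\|P_{\leqslant R}u\|_{L^{\infty}_{x}}\lesssim R^{2-\frac{1}{2}}\|u\|_{H^{\frac{1}{2}}_{x}}$, while the factor $(R^{\prime})^{\frac{7}{3}}$ arises from Hölder's inequality in the spatial variable, $\frac{3}{4}=\frac{1}{6}+\frac{7}{12}$, together with the bound $\|\mathbbold{1}_{B_{R^{\prime}}}(t,\cdot)\|_{L^{\frac{12}{7}}_{x}}\lesssim(R^{\prime})^{\frac{7}{3}}$, which holds since the spatial sections of $B_{R^{\prime}}$ have measure $\lesssim(R^{\prime})^{4}$.)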
To prove the second limit in (6.5), an analogous argument via Theorem 2.4
shows that again it suffices to consider bounded frequencies. After observing
that an application of Bernstein’s inequality gives
$\big{\|}\mathcal{J}_{0}\big{[}|\nabla|(\overline{w_{\leqslant
R}}P_{\leqslant R}u_{n})\big{]}\big{\|}_{W^{0}}\lesssim
R\|\overline{w_{\leqslant R}}P_{\leqslant R}u_{n}\|_{L^{1}_{t}L^{2}_{x}\cap
L^{2}_{t}L^{\frac{4}{3}}_{x}},$
and noting that $u,w\in S^{\frac{1}{2}}\subset L^{2}_{t}L^{4}_{x}$, it only
remains to prove that
$\lim_{n\to\infty}\|\overline{w}u_{n}\|_{L^{1}_{t}L^{2}_{x}\cap
L^{2}_{t}L^{\frac{4}{3}}_{x}}=0.$
However this is a consequence of the fact that $w,u\in L^{2}_{t}L^{4}_{x}$
together with the argument used to prove the first limit in (6.5). This also
completes the proof of the final limit in (6.5). ∎
## 7\. The Ground State Constraint
Define the functional
$\mathcal{K}(f)=\|f\|_{\dot{H}^{1}}^{2}-\|f\|_{L^{4}}^{4}=\frac{d}{d\lambda}\mathcal{E}_{NLS}(\lambda
f)\big{|}_{\lambda=1}.$
This functional can be seen as the scaling derivative of the NLS energy
$\mathcal{E}_{NLS}(f)$ and appears in the Virial identity for both the NLS and
Zakharov equations, see for instance the discussion in [16, 17]. The
functional $\mathcal{K}$ is also closely related to the Zakharov energy, for
instance we have the identity
$\frac{d}{d\lambda}\mathcal{E}_{Z}(\lambda
f,\lambda^{2}g)=\lambda^{-1}\mathcal{K}(\lambda
f)+\lambda^{3}\big{\|}g+|f|^{2}\big{\|}_{L^{2}_{x}}^{2}.$ (7.1)
This identity is particularly useful as it shows that provided
$\mathcal{K}(\lambda f)>0$, the energy increases along the curve
$\lambda\mapsto(\lambda f,\lambda^{2}g)$ in $\dot{H}^{1}\times L^{2}$.
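For the reader's convenience we sketch how (7.1) can be checked; this is only a sketch, under the assumption that the Zakharov energy takes the standard form
$\mathcal{E}_{Z}(f,g)=\frac{1}{2}\|f\|_{\dot{H}^{1}}^{2}+\frac{1}{4}\|g\|_{L^{2}}^{2}+\frac{1}{2}\int_{\mathbb{R}^{4}}\Re(g)|f|^{2}dx,$
which is the normalisation consistent with the bounds used in this section. With this convention
$\frac{d}{d\lambda}\mathcal{E}_{Z}(\lambda f,\lambda^{2}g)=\lambda\|f\|_{\dot{H}^{1}}^{2}+\lambda^{3}\|g\|_{L^{2}}^{2}+2\lambda^{3}\int_{\mathbb{R}^{4}}\Re(g)|f|^{2}dx,$
and the right hand side of (7.1) expands to the same expression, since $\lambda^{-1}\mathcal{K}(\lambda f)=\lambda\|f\|_{\dot{H}^{1}}^{2}-\lambda^{3}\|f\|_{L^{4}}^{4}$ and $\lambda^{3}\|g+|f|^{2}\|_{L^{2}}^{2}=\lambda^{3}\big{(}\|g\|_{L^{2}}^{2}+2\int_{\mathbb{R}^{4}}\Re(g)|f|^{2}dx+\|f\|_{L^{4}}^{4}\big{)}$.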
An application of Hölder and Sobolev embedding gives
$4\mathcal{E}_{Z}(f,g)\leqslant
2\|f\|_{\dot{H}^{1}}^{2}+\|g\|_{L^{2}}^{2}+2\|Q\|_{\dot{H}^{1}}^{-1}\|g\|_{L^{2}}\|f\|_{\dot{H}^{1}}^{2}\lesssim\|f\|_{\dot{H}^{1}}^{2}(1+\|g\|_{L^{2}})+\|g\|_{L^{2}}^{2}$
and thus the energy is always finite provided $(f,g)\in\dot{H}^{1}\times
L^{2}$. The reverse inequality is false in general, in particular the energy
is not necessarily positive. However, if we impose a size constraint on the
functions $(f,g)\in\dot{H}^{1}\times L^{2}$, then the energy is always
positive. A natural condition to ensure that the energy is coercive can be
phrased in terms of the ground state (or Aubin-Talenti function)
$Q(x)=(1+\frac{|x|^{2}}{8})^{-1}$. Recall that the ground state
$Q\in\dot{H}^{1}$ satisfies the properties
$\Delta
Q=-Q^{3},\qquad\|Q\|_{L^{4}}^{2}=\|Q\|_{\dot{H}^{1}},\qquad\|Q\|_{\dot{H}^{1}}^{-\frac{1}{2}}=\sup_{\|f\|_{\dot{H}^{1}}=1}\|f\|_{L^{4}},\qquad\mathcal{E}_{Z}(Q,-Q^{2})=\mathcal{E}_{NLS}(Q)=\frac{1}{4}\|Q\|_{\dot{H}^{1}}^{2}.$
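These identities can be verified directly from the equation $\Delta Q=-Q^{3}$; we record the short computation, using the convention $\mathcal{E}_{NLS}(f)=\frac{1}{2}\|f\|_{\dot{H}^{1}}^{2}-\frac{1}{4}\|f\|_{L^{4}}^{4}$ (consistent with the identity for $4\mathcal{E}_{NLS}$ in Remark 7.2 below) and the relation $\mathcal{E}_{Z}(f,g)=\mathcal{E}_{NLS}(f)+\frac{1}{4}\|g+|f|^{2}\|_{L^{2}}^{2}$, which we assume here. Multiplying $\Delta Q=-Q^{3}$ by $Q$ and integrating by parts gives
$\|Q\|_{\dot{H}^{1}}^{2}=\int_{\mathbb{R}^{4}}|\nabla Q|^{2}dx=\int_{\mathbb{R}^{4}}Q^{4}dx=\|Q\|_{L^{4}}^{4},$
which is equivalent to the second identity, and hence
$\mathcal{E}_{NLS}(Q)=\frac{1}{2}\|Q\|_{\dot{H}^{1}}^{2}-\frac{1}{4}\|Q\|_{L^{4}}^{4}=\frac{1}{4}\|Q\|_{\dot{H}^{1}}^{2},\qquad\mathcal{E}_{Z}(Q,-Q^{2})=\mathcal{E}_{NLS}(Q)+\frac{1}{4}\big{\|}-Q^{2}+Q^{2}\big{\|}_{L^{2}}^{2}=\mathcal{E}_{NLS}(Q).$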
The properties of the ground state quickly give the implications
$\|g\|_{L^{2}}\leqslant\|Q\|_{\dot{H}^{1}}\qquad\Longrightarrow\qquad\|g\|_{L^{2}}^{2}\leqslant
4\mathcal{E}_{Z}(f,g)$ (7.2)
and
$\|f\|_{\dot{H}^{1}}\leqslant\|Q\|_{\dot{H}^{1}}\qquad\Longrightarrow\qquad\|f\|_{\dot{H}^{1}}^{2}\leqslant
4\mathcal{E}_{Z}(f,g).$ (7.3)
More precisely, we simply note that the (sharp) Sobolev embedding gives
$\displaystyle 4\mathcal{E}_{Z}(f,g)$ $\displaystyle\geqslant
2\|f\|_{\dot{H}^{1}}^{2}+\|g\|_{L^{2}}^{2}-2\|g\|_{L^{2}}\|f\|_{L^{4}}^{2}\geqslant
2\|f\|_{\dot{H}^{1}}^{2}+\|g\|_{L^{2}}^{2}-2\|Q\|_{\dot{H}^{1}}^{-1}\|g\|_{L^{2}}\|f\|_{\dot{H}^{1}}^{2}$
and hence rearranging we have the lower bound
$\begin{split}4\mathcal{E}_{Z}(f,g)&\geqslant\|g\|_{L^{2}}^{2}+2\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}\big{(}\|Q\|_{\dot{H}^{1}}-\|g\|_{L^{2}}\big{)}\\\
&=\|f\|_{\dot{H}^{1}}^{2}+\big{(}\|g\|_{L^{2}}-\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}\big{)}^{2}+\|Q\|_{\dot{H}^{1}}^{-2}\|f\|_{\dot{H}^{1}}^{2}\big{(}\|Q\|_{\dot{H}^{1}}^{2}-\|f\|_{\dot{H}^{1}}^{2}\big{)}.\end{split}$
(7.4)
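(The second expression in (7.4) follows from the first by completing the square; for convenience we record the elementary check:
$\|g\|_{L^{2}}^{2}+2\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}\big{(}\|Q\|_{\dot{H}^{1}}-\|g\|_{L^{2}}\big{)}=\big{(}\|g\|_{L^{2}}-\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}\big{)}^{2}+2\|f\|_{\dot{H}^{1}}^{2}-\|Q\|_{\dot{H}^{1}}^{-2}\|f\|_{\dot{H}^{1}}^{4},$
and the last two terms can be rewritten as $\|f\|_{\dot{H}^{1}}^{2}+\|Q\|_{\dot{H}^{1}}^{-2}\|f\|_{\dot{H}^{1}}^{2}\big{(}\|Q\|_{\dot{H}^{1}}^{2}-\|f\|_{\dot{H}^{1}}^{2}\big{)}$.)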
From (7.4), the implications (7.2) and (7.3) easily follow. These implications
can be improved if we assume that the energy is at most the energy
$\mathcal{E}_{Z}(Q,-Q^{2})=\frac{1}{4}\|Q\|_{\dot{H}^{1}}^{2}$ of the ground
state solution $(Q,-Q^{2})$.
###### Lemma 7.1 (Energy coercive below ground state [16]).
Let $(f,g)\in\dot{H}^{1}\times L^{2}$ and suppose that
$\min\\{\|f\|_{\dot{H}^{1}},\|g\|_{L^{2}}\\}\leqslant\|Q\|_{\dot{H}^{1}}\qquad\text{
and }\qquad 4\mathcal{E}_{Z}(f,g)\leqslant\|Q\|_{\dot{H}^{1}}^{2}.$
Then we have the improved bounds
$\max\\{\|f\|_{\dot{H}^{1}}^{2},\|g\|_{L^{2}}^{2}\\}\leqslant
4\mathcal{E}_{Z}(f,g)\qquad\text{ and
}\qquad\mathcal{K}(f)+\big{\|}g+|f|^{2}\big{\|}_{L^{2}_{x}}^{2}\geqslant
4\mathcal{E}_{Z}(f,g)\frac{\|Q\|_{\dot{H}^{1}}^{2}-4\mathcal{E}_{Z}(f,g)}{2\|Q\|_{\dot{H}^{1}}^{2}}.$
###### Proof.
This is essentially contained in [16, Lemma 6.1], but we give a slightly more
direct proof by arguing directly from the properties of the ground state
function $Q$. More precisely, by rearranging the lower bound (7.4) and
applying the assumption
$4\mathcal{E}_{Z}(f,g)\leqslant\|Q\|_{\dot{H}^{1}}^{2}$ we have
$\big{(}\|g\|_{L^{2}}-\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}\big{)}^{2}\leqslant\|Q\|_{\dot{H}^{1}}^{-2}\big{(}\|Q\|_{\dot{H}^{1}}^{2}-\|f\|_{\dot{H}^{1}}^{2}\big{)}^{2}$
and
$2\|f\|_{\dot{H}^{1}}^{2}\|Q\|_{\dot{H}^{1}}^{-1}\big{(}\|Q\|_{\dot{H}^{1}}-\|g\|_{L^{2}}\big{)}\leqslant\|Q\|_{\dot{H}^{1}}^{2}-\|g\|_{L^{2}}^{2}.$
In particular, a short computation shows that, provided
$4\mathcal{E}_{Z}(f,g)\leqslant\|Q\|_{\dot{H}^{1}}^{2}$, we have the
implication
$\min\\{\|f\|_{\dot{H}^{1}},\|g\|_{L^{2}}\\}\leqslant\|Q\|_{\dot{H}^{1}}\qquad\Longrightarrow\qquad\max\\{\|f\|_{\dot{H}^{1}},\|g\|_{L^{2}}\\}\leqslant\|Q\|_{\dot{H}^{1}}.$
In view of the implications (7.2) and (7.3), this completes the proof of the
first bound.
Finally, the second bound in the statement of the lemma follows by observing
that (7.4) and $\|f\|_{\dot{H}^{1}}^{2}\leqslant 4\mathcal{E}_{Z}(f,g)$ in fact
imply the slightly sharper bound
$\|f\|_{\dot{H}^{1}}^{2}\leqslant\frac{\|Q\|_{\dot{H}^{1}}^{2}}{2\|Q\|_{\dot{H}^{1}}^{2}-\|f\|_{\dot{H}^{1}}^{2}}4\mathcal{E}_{Z}(f,g)\leqslant\frac{\|Q\|_{\dot{H}^{1}}^{2}}{2\|Q\|_{\dot{H}^{1}}^{2}-4\mathcal{E}_{Z}(f,g)}4\mathcal{E}_{Z}(f,g)$
and hence
$\displaystyle\mathcal{K}(f)+\big{\|}g+|f|^{2}\big{\|}_{L^{2}_{x}}^{2}=4\mathcal{E}_{Z}(f,g)-\|f\|_{\dot{H}^{1}}^{2}$
$\displaystyle\geqslant
4\mathcal{E}_{Z}(f,g)\Big{(}\frac{\|Q\|_{\dot{H}^{1}}^{2}-4\mathcal{E}_{Z}(f,g)}{2\|Q\|_{\dot{H}^{1}}^{2}-4\mathcal{E}_{Z}(f,g)}\Big{)}$
which clearly suffices as $\mathcal{E}_{Z}(f,g)\geqslant 0$. ∎
###### Remark 7.2 (Characterising ground state condition).
As observed in [16], there are a number of ways to characterise the ground
state condition in Lemma 7.1. For instance, as long as the energy is below the
energy of the ground state, the sign of $\mathcal{K}(f)$ can be used to
determine the coercivity of the energy for both $\mathcal{E}_{Z}$ and
$\mathcal{E}_{NLS}$. This is well known in the case of the NLS [20]. In fact,
for any $f\in\dot{H}^{1}$ with $4\mathcal{E}_{NLS}(f)<\|Q\|_{\dot{H}^{1}}^{2}$ we
have the implications
$\mathcal{K}(f)>0\,\,\,\,\Longleftrightarrow\,\,\,\,0<\|f\|_{L^{4}}<\|Q\|_{L^{4}}\,\,\,\,\Longleftrightarrow\,\,\,\,0<\|f\|_{\dot{H}^{1}}<\|Q\|_{\dot{H}^{1}}\,\,\,\,\Longleftrightarrow\,\,\,\,\|f\|_{\dot{H}^{1}}^{2}<4\mathcal{E}_{NLS}(f)$
(7.5)
and
$\mathcal{K}(f)=0\qquad\Longleftrightarrow\qquad f=0.$ (7.6)
As in the proof of Lemma 7.1, the implications (7.5) and (7.6) follow from the
properties of the ground state solution, together with the identity
$4\mathcal{E}_{NLS}(f)=2\|f\|_{\dot{H}^{1}}^{2}-\|f\|_{L^{4}}^{4}=\|f\|_{\dot{H}^{1}}^{2}+\mathcal{K}(f).$
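For instance, to see the equivalence between the first and third conditions in (7.5) (we only sketch the argument, under the stated assumption $4\mathcal{E}_{NLS}(f)<\|Q\|_{\dot{H}^{1}}^{2}$ and for $f\not=0$): if $\mathcal{K}(f)>0$, then the identity above gives $\|f\|_{\dot{H}^{1}}^{2}<4\mathcal{E}_{NLS}(f)<\|Q\|_{\dot{H}^{1}}^{2}$; conversely, if $0<\|f\|_{\dot{H}^{1}}<\|Q\|_{\dot{H}^{1}}$, then the sharp Sobolev inequality $\|f\|_{L^{4}}^{2}\leqslant\|Q\|_{\dot{H}^{1}}^{-1}\|f\|_{\dot{H}^{1}}^{2}$ recorded in Section 7 gives
$\mathcal{K}(f)=\|f\|_{\dot{H}^{1}}^{2}-\|f\|_{L^{4}}^{4}\geqslant\|f\|_{\dot{H}^{1}}^{2}\Big{(}1-\frac{\|f\|_{\dot{H}^{1}}^{2}}{\|Q\|_{\dot{H}^{1}}^{2}}\Big{)}>0.$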
A similar characterisation holds in the case of the Zakharov equation. In fact
since $\mathcal{E}_{NLS}(f)\leqslant\mathcal{E}_{Z}(f,g)$, somewhat trivially,
the implications (7.5) and (7.6) also hold for any $(f,g)\in\dot{H}^{1}\times
L^{2}$ with $4\mathcal{E}_{Z}(f,g)<\|Q\|_{\dot{H}^{1}}^{2}$.
On the other hand, the ground state condition can also be characterised using
the wave data $g\in L^{2}$ [16, Lemma 6.1]. More precisely for any $(f,g)\in
H^{1}\times L^{2}$ with $4\mathcal{E}_{Z}(f,g)<\|Q\|_{\dot{H}^{1}}^{2}$ we
have
$\mathcal{K}(f)\geqslant
0\qquad\Longleftrightarrow\qquad\|g\|_{L^{2}}<\|Q\|_{\dot{H}^{1}}\quad\Longleftrightarrow\quad\|g\|_{L^{2}}^{2}\leqslant 4\mathcal{E}_{Z}(f,g)$
(7.7)
and
$\mathcal{K}(f)=0\qquad\Longleftrightarrow\qquad f=0.$ (7.8)
The implications (7.7) and (7.8) follow directly from Lemma 7.1 together with
the Schrödinger counterparts (7.5) and (7.6). Note that the lack of strict
inequalities in (7.7) is a consequence of the fact that
$4\mathcal{E}_{Z}(0,g)=\|g\|_{L^{2}}^{2},\qquad
4\mathcal{E}_{Z}(f,0)=2\|f\|_{\dot{H}^{1}}^{2}$
and in particular, $f=0$ (or $g=0$) does not necessarily imply that
$\mathcal{E}_{Z}(f,g)=0$.
## 8\. A Palais-Smale Type Condition
In this section our goal is to apply the results obtained in the previous
section to show that any bounded sequence of solutions to the Zakharov
equation which lies below the ground state and for which the dispersive norm
$\|\cdot\|_{D}\to\infty$ must be precompact modulo translations.
This type of result is the key step in the proof of Theorem 1.6. The arguments
used in this section are largely adapted from [22, 20, 23]. The key point is
that if a mass/energy threshold exists (see Definition 1.5), then via the
profile decomposition in Theorem 6.1, together with the well-posedness theory
in Section 5, we can extract some compactness from any sequence of solutions
approaching the critical threshold.
###### Theorem 8.1 (Palais-Smale type condition).
Let $(M_{c},E_{c})$ be a mass/energy threshold with
$4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$. Suppose that $(u_{n},V_{n})\in
C(\mathbb{R},H^{1}\times L^{2})$ is a sequence of global solutions to (1.2)
such that
$u_{n}\in
L^{2}_{t,loc}W^{\frac{1}{2},4}_{x},\qquad\lim_{n\to\infty}\mathcal{E}_{Z}(u_{n},V_{n})=E_{c},\qquad\lim_{n\to\infty}\mathcal{M}(u_{n})=M_{c},\qquad\sup_{n\in\mathbb{N}}\|V_{n}(0)\|_{L^{2}_{x}}\leqslant\|Q\|_{\dot{H}^{1}}$
(8.1)
and
$\lim_{n\to\infty}\|u_{n}\|_{D([0,\infty))}=\lim_{n\to\infty}\|u_{n}\|_{D((-\infty,0])}=\infty.$
(8.2)
Then there exists $x_{n}\in\mathbb{R}^{4}$ such that the translated sequence
$(u_{n},V_{n})(0,x+x_{n})$ has a convergent subsequence in $H^{1}\times
L^{2}$.
###### Proof.
Define $(f_{n},g_{n})=(u_{n},V_{n})(0)\in H^{1}\times L^{2}$. The first step
is to verify that the sequence $(f_{n},g_{n})$ is bounded in $H^{1}\times
L^{2}$. Since
$\lim_{n\to\infty}4\mathcal{E}_{Z}(f_{n},g_{n})=4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$,
the assumption (8.1) implies that for all sufficiently large $n$ we have
$4\mathcal{E}_{Z}(f_{n},g_{n})<\|Q\|_{\dot{H}^{1}}^{2},\qquad\|g_{n}\|_{L^{2}}\leqslant\|Q\|_{\dot{H}^{1}}.$
Consequently, the variational properties of the ground state (see Lemma 7.1)
give the upper bounds
$\limsup_{n\to\infty}\|f_{n}\|_{\dot{H}^{1}}^{2}\leqslant
4E_{c},\qquad\limsup_{n\to\infty}\|g_{n}\|_{L^{2}}^{2}\leqslant 4E_{c}.$ (8.3)
Together with the assumed boundedness of the mass, we conclude that
$\sup_{n\in\mathbb{N}}\|(f_{n},g_{n})\|_{H^{1}\times L^{2}}<\infty$. In other
words the sequence $(f_{n},g_{n})$ is a bounded sequence in $H^{1}\times
L^{2}$.
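(Quantitatively, if the mass is given by $\mathcal{M}(f)=\|f\|_{L^{2}}^{2}$, which is the normalisation consistent with the bound (8.9) below, then (8.1) and (8.3) give the explicit bound $\limsup_{n\to\infty}\|(f_{n},g_{n})\|_{H^{1}\times L^{2}}^{2}\leqslant M_{c}+8E_{c}$.)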
We now apply the profile decomposition in Theorem 6.1 to the sequence
$(f_{n},g_{n})$, and obtain $J^{*}\in\mathbb{N}\cup\\{\infty\\}$ and for each
$1\leqslant j\leqslant J^{*}$ group elements
$\mathfrak{g}^{(j)}_{n}=\mathfrak{g}[t_{n}^{(j)},x_{n}^{(j)}]$ and profiles
$(f^{(j)},g^{(j)})\not=(0,0)$ such that (after replacing $(f_{n},g_{n})$ with
a suitable subsequence) for any $0\leqslant J\leqslant J^{*}$ we can write
$(f_{n},g_{n})=\sum_{j=1}^{J}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})+(w^{(J)}_{n},e^{(J)}_{n})$
where the profiles and errors $(w^{(J)}_{n},e^{(J)}_{n})$ satisfy the
conditions (i), (ii), and (iii) in the statement of Theorem 6.1. In
particular, we have the $L^{2}$ decoupling of the wave profiles
$\sum_{j=1}^{J}\|g^{(j)}\|_{L^{2}}^{2}+\limsup_{n\to\infty}\|e^{(J)}_{n}\|_{L^{2}}^{2}\leqslant\limsup_{n\to\infty}\|g_{n}(0)\|_{L^{2}}^{2}\leqslant
4E_{c}$ (8.4)
the decoupling of the energy
$\limsup_{n\to\infty}\Big{[}\sum_{j=1}^{J}\mathcal{E}_{Z}\big{(}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\big{)}+\mathcal{E}_{Z}(w^{(J)}_{n},e^{(J)}_{n})\Big{]}\leqslant\lim_{n\to\infty}\mathcal{E}_{Z}(f_{n},g_{n})(0)=E_{c}$
(8.5)
and the decoupling of the Schrödinger mass
$\sum_{j=1}^{J}\mathcal{M}(f^{(j)})+\limsup_{n\to\infty}\mathcal{M}(w^{(J)}_{n})\leqslant\lim_{n\to\infty}\mathcal{M}(f_{n})=M_{c}.$
(8.6)
The above limits quickly imply that each profile has energy below the ground
state. More precisely, as $4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$, the $L^{2}$
decoupling (8.4) implies that for every $1\leqslant j\leqslant J$ we have
$\|g^{(j)}\|_{L^{2}}<\|Q\|_{\dot{H}^{1}}$, and for all sufficiently large $n$,
the error satisfies $\|e^{(J)}_{n}\|_{L^{2}}<\|Q\|_{\dot{H}^{1}}$. Hence the
implication (7.2) implies that
$\limsup_{n\to\infty}4\mathcal{E}_{Z}\big{(}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\big{)}\geqslant\|g^{(j)}\|_{L^{2}}^{2}\qquad\text{
and
}\qquad\limsup_{n\to\infty}4\mathcal{E}_{Z}(w^{(J)}_{n},e^{(J)}_{n})\geqslant\limsup_{n\to\infty}\|e^{(J)}_{n}\|_{L^{2}}^{2}.$
Consequently, as each profile is non-zero, the energies of each of the
(translated) profiles $\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})$ must be
strictly positive. Together with (8.6), we conclude that both the profiles and
the error term have energy below the threshold. Namely we have the bounds
$0\leqslant\mathcal{M}(f^{(j)})\leqslant M_{c},\qquad
0<\limsup_{n\to\infty}\mathcal{E}_{Z}\big{(}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\big{)}\leqslant
E_{c},\qquad\|g^{(j)}\|_{L^{2}}^{2}\leqslant 4E_{c},$ (8.7)
and
$\limsup_{n\to\infty}\mathcal{E}_{Z}\big{(}w^{(J)}_{n},e^{(J)}_{n}\big{)}\leqslant
E_{c},\qquad\limsup_{n\to\infty}\|e^{(J)}_{n}\|_{L^{2}}^{2}\leqslant 4E_{c}.$
(8.8)
Moreover, the $H^{1}$ decoupling of the profiles together with (8.3) gives the
upper bound
$\sum_{j=1}^{J}\|f^{(j)}\|_{H^{1}}^{2}+\limsup_{n\to\infty}\|w^{(J)}_{n}\|_{H^{1}}^{2}\leqslant\limsup_{n\to\infty}\|f_{n}\|_{H^{1}}^{2}\leqslant
M_{c}+4E_{c}.$ (8.9)
The next step is to evolve the profiles $(f^{(j)},g^{(j)})$ via the Zakharov
equation. More precisely, in view of the energy constraint (8.7) and the
assumption $4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$, we can apply Theorem 1.1 (if
$t_{n}=0$) or Theorem 3.5 (if $t_{n}\to\pm\infty$, potentially after
reflecting in time) and obtain global solutions $(u^{(j)},V^{(j)})\in
C(\mathbb{R};H^{1}\times L^{2})$ to (1.2) satisfying
$\lim_{n\to\infty}\big{\|}(u^{(j)},V^{(j)})(-t_{n})-\big{(}e^{-it_{n}\Delta}f^{(j)},e^{-it_{n}|\nabla|}g^{(j)}\big{)}\big{\|}_{H^{1}\times
L^{2}}=0$
with mass $\mathcal{M}(u^{(j)})=\mathcal{M}(f^{(j)})\leqslant M_{c}$ and
energy
$\begin{split}\mathcal{E}_{Z}(u^{(j)},V^{(j)})=\lim_{n\to\infty}\mathcal{E}_{Z}\big{(}e^{-it_{n}\Delta}f^{(j)},e^{-it_{n}|\nabla|}g^{(j)}\big{)}&=\lim_{n\to\infty}\mathcal{E}_{Z}\big{(}\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\big{)}\leqslant
E_{c},\\\
\max\big{\\{}\|u^{(j)}\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}^{2},\|V^{(j)}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}\big{\\}}&\leqslant
4\mathcal{E}_{Z}(u^{(j)},V^{(j)}).\end{split}$ (8.10)
We now translate in space-time, and define the translated solutions
$u_{n}^{(j)}(t,x)=u^{(j)}(t-t_{n},x-x_{n}),\qquad
V^{(j)}_{n}(t,x)=V^{(j)}(t-t_{n},x-x_{n})$
the key point being that we now have
$\lim_{n\to\infty}\big{\|}\big{(}u_{n}^{(j)},V_{n}^{(j)}\big{)}(0)-\mathfrak{g}^{(j)}_{n}(f^{(j)},g^{(j)})\|_{H^{1}\times
L^{2}}=0.$ (8.11)
We now consider two cases: either there exists a profile with energy precisely
$E_{c}$, or all profiles have energy strictly below $E_{c}$. In the former
case, the decoupling inequalities (8.4), (8.5), and (8.6), imply that there is
only one profile and moreover the error goes to zero in $\dot{H}^{1}\times
L^{2}$. Upgrading the convergence to $H^{1}\times L^{2}$ relies on the fact
that $(M_{c},E_{c})$ is a mass/energy threshold together with a short argument
via Theorem 5.2. In the latter case, where the initial profile has energy
strictly less than $E_{c}$, we show that each profile is uniformly bounded in
the dispersive norm $\|\cdot\|_{D}$, and moreover only interacts weakly.
Applying the stability result in Theorem 5.2 we eventually conclude that
$\limsup_{n\to\infty}\|u_{n}\|_{D}<\infty$ which contradicts our initial
assumption (8.2).
Case 1: $\mathcal{E}_{Z}(u^{(1)},V^{(1)})=E_{c}$.
Our goal is to show that if $\mathcal{E}_{Z}(u^{(1)},V^{(1)})=E_{c}$, then we
have
$\lim_{n\to\infty}\big{\|}(f_{n},g_{n})(x)-(f^{(1)},g^{(1)})(x-x_{n})\big{\|}_{H^{1}\times
L^{2}}=0.$ (8.12)
We start by observing that the decoupling of the energy (8.5) together with
(8.10) implies that for all $1<j\leqslant J$
$\mathcal{E}_{Z}(u^{(j)},V^{(j)})=0=\limsup_{n\to\infty}\mathcal{E}_{Z}(w^{(J)}_{n},e^{(J)}_{n}).$
Hence, as the energies of all profiles lie below the ground state solution
(namely (8.7) holds), an application of Lemma 7.1 gives
$\big{\|}\big{(}u^{(j)},V^{(j)}\big{)}\big{\|}_{\dot{H}^{1}\times
L^{2}}^{2}\leqslant 4\mathcal{E}_{Z}(u^{(j)},V^{(j)})=0$
and
$\limsup_{n\to\infty}\|(w^{(J)}_{n},e^{(J)}_{n})\|_{\dot{H}^{1}\times
L^{2}}^{2}\leqslant
4\limsup_{n\to\infty}\mathcal{E}_{Z}(w^{(J)}_{n},e^{(J)}_{n})=0.$
In other words there is only one profile, and moreover the error goes to zero
in $\dot{H}^{1}\times L^{2}$.
This is almost what we want, but to obtain (8.12) we need to upgrade the
convergence to $H^{1}\times L^{2}$. To this end, suppose for the moment we
have $\mathcal{M}(u^{(1)})\leqslant M_{c}-\delta$ for some $\delta>0$. Since
$(M_{c},E_{c})$ is a mass/energy threshold, we then see that
$\|u^{(1)}\|_{D}\leqslant L(M_{c}-\delta,E_{c})<\infty.$
In particular, applying translation invariance, Theorem 3.1, and collecting
the above bounds/limits, we have
$\limsup_{n\to\infty}\Big{(}\|f_{n}\|_{H^{1}}+\|u^{(1)}_{n}\|_{S^{\frac{1}{2}}}\Big{)}\lesssim_{L,M_{c},E_{c}}1,\qquad\limsup_{n\to\infty}\|V^{(1)}_{n}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}\leqslant
4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$
and
$\lim_{n\to\infty}\Big{(}\big{\|}f_{n}-u_{n}^{(1)}(0)\big{\|}_{\dot{H}^{1}}+\big{\|}g_{n}-V^{(1)}_{n}(0)\big{\|}_{L^{2}}\Big{)}=0.$
Thus in view of (5.3), an application of Theorem 5.2 with $F=G=0$ implies the
uniform dispersive bound $\limsup_{n\to\infty}\|u_{n}\|_{D}<\infty$ which
clearly contradicts the assumption (8.2). Therefore it is not possible for
both the dispersive norm to blow-up and the mass of the profile
$(u^{(1)},V^{(1)})$ to remain strictly less than $M_{c}$. In other words, we
must have the mass constraint $\mathcal{M}(u^{(1)})=M_{c}$. In view of (8.6),
we then conclude that $\limsup_{n\to\infty}\mathcal{M}(w^{(J)}_{n})=0$.
Consequently, unpacking the definition of $u^{(1)}_{n}$, we have the
$H^{1}\times L^{2}$ limit
$\lim_{n\to\infty}\big{\|}(u_{n},V_{n})(0,x)-\big{(}e^{-it_{n}^{(1)}\Delta}f^{(1)}(x-x_{n}^{(1)}),e^{-it_{n}^{(1)}|\nabla|}g^{(1)}(x-x_{n}^{(1)})\big{)}\big{\|}_{H^{1}\times
L^{2}}=0.$
To complete the proof of (8.12), it only remains to rule out the case
$t_{n}^{(1)}\to-\infty$ (the case $t_{n}^{(1)}\to\infty$ would then also be
excluded by time reversibility). We start by noting that since
$t_{n}^{(1)}\to-\infty$, the dispersive decay of the free Schrödinger and wave
evolutions implies that
$\lim_{n\to\infty}\big{\|}\mathfrak{g}^{(1)}_{n}\big{(}e^{it\Delta}f^{(1)},e^{it|\nabla|}g^{(1)}\big{)}\big{\|}_{Y\times
Y([0,\infty))}=\lim_{n\to\infty}\big{\|}\big{(}e^{it\Delta}f^{(1)},e^{it|\nabla|}g^{(1)}\big{)}\big{\|}_{Y\times
Y([-t_{n}^{(1)},\infty))}=0.$
Therefore, applying the bound (2.4), we have
$\displaystyle\limsup_{n\to\infty}\big{\|}\big{(}e^{it\Delta}f_{n},e^{it|\nabla|}g_{n}\big{)}\big{\|}_{Y\times
Y([0,\infty))}$
$\displaystyle\leqslant\limsup_{n\to\infty}\big{\|}\big{(}e^{it\Delta}f_{n},e^{it|\nabla|}g_{n}\big{)}-\mathfrak{g}^{(1)}_{n}(e^{it\Delta}f^{(1)},e^{it|\nabla|}g^{(1)})\big{\|}_{Y\times
Y([0,\infty))}$
$\displaystyle\qquad+\limsup_{n\to\infty}\|\mathfrak{g}^{(1)}_{n}(e^{it\Delta}f^{(1)},e^{it|\nabla|}g^{(1)})\|_{Y\times
Y([0,\infty))}$
$\displaystyle\lesssim\limsup_{n\to\infty}\|(f_{n},g_{n})-\mathfrak{g}^{(1)}_{n}(f^{(1)},g^{(1)})\|_{H^{1}\times
L^{2}}=0.$
Consequently, we can apply the small data theory in Theorem 1.7 to the
interval $[0,\infty)$, and conclude that
$\limsup_{n}\|u_{n}\|_{D([0,\infty))}<\infty$. But this contradicts the
assumption (8.2) and hence we cannot have $t_{n}^{(1)}\to\pm\infty$.
Case 2: $\mathcal{E}_{Z}(u^{(1)},V^{(1)})<E_{c}$.
As all profiles are non-zero and have positive energy, we conclude from (8.5),
(8.7), and (8.8) that we have the mass/energy bounds
$\sup_{1\leqslant j\leqslant J^{*}}\mathcal{M}(u^{(j)})\leqslant
M_{c},\qquad\sup_{1\leqslant j\leqslant
J^{*}}\mathcal{E}_{Z}\big{(}u^{(j)},V^{(j)}\big{)}\leqslant E_{c}-\delta$
for some $\delta>0$. In particular, in view of the definition of $E_{c}$, we
have the global dispersive bound
$\sup_{1\leqslant j\leqslant J^{*}}\|u^{(j)}\|_{D}\leqslant
L=L(M_{c},E_{c}-\delta)<\infty.$
To upgrade this bound to control over the $\underline{S}^{s}$ norm, we first
note that the bound (8.10) together with the conservation of mass gives
$\|u^{(j)}\|_{L^{\infty}_{t}H^{1}_{x}}^{2}\leqslant
M_{c}+4E_{c},\qquad\|V^{(j)}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}\leqslant
4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}.$
Consequently an application of Theorem 3.1 implies that for any
$\frac{1}{2}\leqslant s<1$ we have
$\|u^{(j)}\|_{\underline{S}^{s}}\lesssim_{s,L,E_{c},M_{c}}\|f^{(j)}\|_{H^{s}},\qquad\|V^{(j)}\|_{\underline{W}^{0}}\lesssim_{s,L,E_{c},M_{c}}\|g^{(j)}\|_{L^{2}}+\|f^{(j)}\|_{H^{\frac{1}{2}}}^{2}$
(8.13)
where the implied constants depend only on $s$, $L$, $E_{c}$, and $M_{c}$. We
now define
$\Psi^{(J)}_{n}=\sum_{j=1}^{J}u_{n}^{(j)}+e^{it\Delta}w^{(J)}_{n},\qquad\Phi^{(J)}_{n}=\sum_{j=1}^{J}V^{(j)}_{n}+e^{it|\nabla|}e^{(J)}_{n}.$
We claim that we have the properties:
1. (i)
(Data agrees asymptotically) For every $0\leqslant J\leqslant J^{*}$ we have
$\lim_{n\to\infty}\big{\|}(f_{n},g_{n})-(\Psi^{(J)}_{n},\Phi_{n}^{(J)})(0)\big{\|}_{H^{1}\times
L^{2}}=0,\qquad\limsup_{n\to\infty}\|\Phi^{(J)}_{n}\|_{L^{\infty}_{t}L^{2}_{x}}^{2}\leqslant
4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}.$
2. (ii)
(Uniformly bounded in $\underline{S}^{s}\times\underline{W}^{0}$) For every
$\frac{1}{2}\leqslant s<1$ we have
$\sup_{0\leqslant J\leqslant
J^{*}}\limsup_{n\to\infty}\big{\|}(\Psi^{(J)}_{n},\Phi^{(J)}_{n})\big{\|}_{\underline{S}^{s}\times\underline{W}^{0}}\lesssim_{s,L,E_{c},M}1.$
3. (iii)
(Approximate solution) We have
$\lim_{J\to
J^{*}}\limsup_{n\to\infty}\Big{\|}\mathcal{I}_{0}\Big{[}\Re(\Phi^{(J)}_{n})\Psi^{(J)}_{n}-\sum_{j=1}^{J}\Re(V^{(j)}_{n})u^{(j)}_{n}\Big{]}\Big{\|}_{S^{\frac{1}{2}}}=0$
and
$\lim_{J\to
J^{*}}\limsup_{n\to\infty}\Big{\|}|\nabla|\mathcal{J}_{0}\Big{[}|\Psi^{(J)}_{n}|^{2}-\sum_{j=1}^{J}|u_{n}^{(j)}|^{2}\Big{]}\Big{\|}_{W^{0}}=0.$
Assuming these properties hold for the moment, an application of the stability
result in Theorem 5.2 together with (5.3) implies that we have the global
bound $\limsup_{n\to\infty}\|u_{n}\|_{D}<\infty$. But this contradicts the
assumption that $\|u_{n}\|_{D([0,\infty))}\to\infty$ as $n\to\infty$. Hence
Case 2 cannot occur.
It only remains to verify the properties (i), (ii), and (iii). The property
(i) follows immediately from the construction of the profiles
$(f^{(j)},g^{(j)})$. To prove (ii), we start by observing that (8.13) together
with (8.4) and (8.9) implies that
$\sup_{0\leqslant J\leqslant
J^{*}}\Big{(}\sum_{j=1}^{J}\|u^{(j)}\|_{S^{s}}^{2}\Big{)}^{\frac{1}{2}}\lesssim_{s,L,E_{c},M_{c}}\sup_{0\leqslant
J\leqslant
J^{*}}\Big{(}\sum_{j=1}^{J}\|f^{(j)}\|_{H^{s}}^{2}\Big{)}^{\frac{1}{2}}\lesssim_{s,L,E_{c},M_{c}}1$
and
$\sup_{0\leqslant J\leqslant
J^{*}}\Big{(}\sum_{j=1}^{J}\|V^{(j)}\|_{W^{0}}^{2}\Big{)}^{\frac{1}{2}}\lesssim_{s,L,E_{c},M_{c}}\sup_{0\leqslant
J\leqslant
J^{*}}\Big{[}\Big{(}\sum_{j=1}^{J}\|g^{(j)}\|_{L^{2}}^{2}\Big{)}^{\frac{1}{2}}+\sum_{j=1}^{J}\|f^{(j)}\|_{H^{s}}^{2}\Big{]}\lesssim_{s,L,E_{c},M_{c}}1.$
Therefore, (i) and the bound (8.3), together with an application of the energy
inequality (2.5) and the bilinear estimate in Theorem 2.4 gives
$\displaystyle\sup_{0\leqslant J\leqslant
J^{*}}\limsup_{n\to\infty}\big{\|}\Psi^{(J)}_{n}\big{\|}_{\underline{S}^{s}}$
$\displaystyle\lesssim\sup_{0\leqslant J\leqslant
J^{*}}\Big{[}\limsup_{n\to\infty}\big{\|}\Psi^{(J)}_{n}(0)\big{\|}_{H^{s}}+\limsup_{n\to\infty}\Big{\|}\sum_{j=1}^{J}\Re(V^{(j)}_{n})u^{(j)}_{n}\Big{\|}_{N^{s}}\Big{]}$
$\displaystyle\lesssim\sup_{0\leqslant J\leqslant
J^{*}}\Big{[}\limsup_{n\to\infty}\big{\|}\Psi^{(J)}_{n}(0)\big{\|}_{H^{s}}+\Big{(}\sum_{j=1}^{J}\|V^{(j)}\|_{W^{0}}^{2}\Big{)}^{\frac{1}{2}}\Big{(}\sum_{j=1}^{J}\|u^{(j)}\|_{S^{s}}^{2}\Big{)}^{\frac{1}{2}}\Big{]}$
$\displaystyle\lesssim_{s,L,E_{c},M_{c}}1$
where we used the fact that the norms $\|\cdot\|_{W^{0}}$ and
$\|\cdot\|_{\underline{S}^{s}}$ are translation invariant. Similarly, to bound
the wave contribution we have
$\displaystyle\sup_{0\leqslant J\leqslant
J^{*}}\limsup_{n\to\infty}\big{\|}\Phi^{(J)}_{n}\big{\|}_{\underline{W}^{0}}$
$\displaystyle\lesssim\sup_{0\leqslant J\leqslant
J^{*}}\Big{[}\limsup_{n\to\infty}\big{\|}\Phi^{(J)}_{n}(0)\big{\|}_{L^{2}}+\sum_{j=1}^{J}\Big{\|}\mathcal{J}_{0}\big{[}|\nabla||u^{(j)}|^{2}\big{]}\Big{\|}_{\underline{W}^{0}}\Big{]}$
$\displaystyle\lesssim\sup_{0\leqslant J\leqslant
J^{*}}\Big{[}\limsup_{n\to\infty}\big{\|}\Phi^{(J)}_{n}(0)\big{\|}_{L^{2}_{x}}+\sum_{j=1}^{J}\|u^{(j)}\|_{S^{s}}^{2}\Big{]}$
$\displaystyle\lesssim_{s,L,E_{c},M_{c}}1$
and hence (ii) follows.
Finally to prove (iii), we note that provided $s>\frac{1}{2}$, (ii) together
with (8.4), (8.9), and an application of Theorem 4.1 gives $\theta>0$ such
that
$\displaystyle\big{\|}\mathcal{I}_{0}\big{[}\Re(\Phi^{(J)}_{n}$
$\displaystyle-e^{it|\nabla|}e^{(J)}_{n})e^{it\Delta}w^{(J)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}+\big{\|}\mathcal{I}_{0}\big{[}\Re(e^{it|\nabla|}e^{(J)}_{n})\Psi^{(J)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}$
$\displaystyle\lesssim\Big{(}\|\Phi^{(J)}_{n}\|_{\underline{W}^{0}}+\|e^{(J)}_{n}\|_{L^{2}}\Big{)}\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}\|w^{(J)}_{n}\|_{H^{1}}^{1-\theta}+\|e^{it|\nabla|}e^{(J)}_{n}\|_{Y}^{\theta}\|e^{(J)}_{n}\|_{L^{2}}^{1-\theta}\|\Psi^{(J)}_{n}\|_{\underline{S}^{s}}$
$\displaystyle\lesssim_{s,L,E_{c},M_{c}}\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}+\|e^{it|\nabla|}e^{(J)}_{n}\|_{Y}^{\theta}.$
Therefore, the asymptotic decoupling provided by Lemma 6.2 and the fact that
the error vanishes in the dispersive norm $Y$ implies that
$\displaystyle\limsup_{J\to J^{*}}$
$\displaystyle\limsup_{n\to\infty}\Big{\|}\mathcal{I}_{0}\Big{[}\Re(\Phi^{(J)}_{n})\Psi^{(J)}_{n}-\sum_{j=1}^{J}\Re(V^{(j)}_{n})u^{(j)}_{n}\Big{]}\Big{\|}_{S^{\frac{1}{2}}}$
$\displaystyle\lesssim\limsup_{J\to
J^{*}}\limsup_{n\to\infty}\Big{(}\sum_{\begin{subarray}{c}1\leqslant
j,k\leqslant J\\\
j\not=k\end{subarray}}\big{\|}\mathcal{I}_{0}\big{[}\Re(V^{(j)}_{n})u^{(k)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}+\big{\|}\mathcal{I}_{0}\big{[}\Re(\Phi^{(J)}_{n}-e^{it|\nabla|}e^{(J)}_{n})e^{it\Delta}w^{(J)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}$
$\displaystyle\qquad\qquad\qquad\qquad+\big{\|}\mathcal{I}_{0}\big{[}\Re(e^{it|\nabla|}e^{(J)}_{n})\Psi^{(J)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}\Big{)}$
$\displaystyle\lesssim_{s,L,E_{c},M_{c}}\limsup_{J\to
J^{*}}\limsup_{n\to\infty}\Big{(}\sum_{\begin{subarray}{c}1\leqslant
j,k\leqslant J\\\
j\not=k\end{subarray}}\big{\|}\mathcal{I}_{0}\big{[}\Re(V^{(j)}_{n})u^{(k)}_{n}\big{]}\big{\|}_{S^{\frac{1}{2}}}+\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}+\|e^{it|\nabla|}e^{(J)}_{n}\|_{Y}^{\theta}\Big{)}=0.$
Similarly, by another application of Theorem 4.1 we see that
$\displaystyle\big{\|}|\nabla|\mathcal{J}_{0}\big{[}\Re\big{(}\overline{\Psi}^{(J)}_{n}e^{it\Delta}w^{(J)}_{n}\big{)}\big{]}\big{\|}_{W^{0}}+$
$\displaystyle\big{\|}|\nabla|\mathcal{J}_{0}\big{[}|e^{it\Delta}w^{(J)}_{n}|^{2}\big{]}\big{\|}_{W^{0}}$
$\displaystyle\lesssim\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}\|w^{(J)}_{n}\|_{H^{1}}^{1-\theta}\Big{(}\|\Psi^{(J)}_{n}\|_{\underline{S}^{s}}+\|w^{(J)}_{n}\|_{H^{1}}\Big{)}\lesssim_{s,L,E_{c},M_{c}}\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}$
and hence
$\displaystyle\limsup_{J\to J^{*}}$
$\displaystyle\limsup_{n\to\infty}\Big{\|}|\nabla|\mathcal{J}_{0}\Big{[}|\Psi^{(J)}_{n}|^{2}-\sum_{j=1}^{J}|u_{n}^{(j)}|^{2}\Big{]}\Big{\|}_{W^{0}}$
$\displaystyle\lesssim\limsup_{J\to
J^{*}}\limsup_{n\to\infty}\Big{(}\sum_{\begin{subarray}{c}1\leqslant
j,k\leqslant J\\\
j\not=k\end{subarray}}\big{\|}|\nabla|\mathcal{J}_{0}\big{[}\Re\big{(}\overline{u}^{(j)}_{n}u^{(k)}_{n}\big{)}\big{]}\big{\|}_{W^{0}}+\big{\|}|\nabla|\mathcal{J}_{0}\big{[}\Re\big{(}\overline{\Psi}^{(J)}_{n}e^{it\Delta}w^{(J)}_{n}\big{)}\big{]}\big{\|}_{W^{0}}$
$\displaystyle\qquad\qquad\qquad\qquad+\big{\|}|\nabla|\mathcal{J}_{0}\big{[}|e^{it\Delta}w^{(J)}_{n}|^{2}\big{]}\big{\|}_{W^{0}}\Big{)}$
$\displaystyle\lesssim_{s,L,E_{c},M_{c}}\limsup_{J\to
J^{*}}\limsup_{n\to\infty}\Big{(}\sum_{\begin{subarray}{c}1\leqslant
j,k\leqslant J\\\
j\not=k\end{subarray}}\big{\|}|\nabla|\mathcal{J}_{0}\big{[}\Re\big{(}\overline{u}^{(j)}_{n}u^{(k)}_{n}\big{)}\big{]}\big{\|}_{W^{0}}+\|e^{it\Delta}w^{(J)}_{n}\|_{Y}^{\theta}\Big{)}=0.$
Consequently (iii) also holds. ∎
## 9\. Almost Periodic Solutions
In this section we give the construction of the critical elements (or almost
periodic solutions) in Theorem 1.6. The first step is to show that if
Conjecture 1.4 failed, then there must exist a mass/energy threshold with
energy below the ground state.
###### Lemma 9.1 (Existence of a mass/energy threshold).
Suppose Conjecture 1.4 failed. Then there exists a mass/energy threshold
$(M_{c},E_{c})$ with $4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$.
###### Proof.
Given $M>0$ we let
$E^{*}(M)=\sup\\{E>0\mid L(M,E)<\infty\\}$
where we recall that $L(M,E)$ is defined in (1.7). Note that if
$4E<\|Q\|_{\dot{H}^{1}}^{2}$ and $(u,V)\in\Omega(E)$ (see (1.6)) then Lemma
7.1 gives the $\dot{H}^{1}$ bound
$\|u\|_{L^{\infty}_{t}\dot{H}^{1}_{x}}^{2}\leqslant
4\mathcal{E}_{Z}(u,V)\leqslant 4E.$
In particular, Theorem 1.7 together with (1.8) implies that for fixed $M>0$
and all sufficiently small $E>0$, we have $L(M,E)<\infty$ and hence
$E^{*}(M)>0$. Moreover, by construction we have the implication
$E<E^{*}(M)\qquad\Longrightarrow\qquad L\big{(}M,E\big{)}<\infty.$ (9.1)
If Conjecture 1.4 failed, then there must exist some $M_{0}>0$ such that
$4E^{*}(M_{0})<\|Q\|_{\dot{H}^{1}}^{2}$. We now define
$E_{c}=E^{*}(M_{0}),\qquad M_{c}=\inf\\{M>0\mid E^{*}(M)=E^{*}(M_{0})\\}.$
In view of (1.8) and Theorem 1.7, we have global well-posedness and scattering
whenever the initial data satisfies
$\min\\{\|f\|_{L^{2}},\|f\|_{\dot{H}^{1}}\\}\ll_{\|f\|_{H^{1}}}1$
and hence $E_{c},M_{c}>0$. Moreover, as $L(M,E)$ is increasing in both $E$ and
$M$, the definition of $E^{*}$ together with (9.1) implies that if
$E<E_{c}=E^{*}(M_{0})$ then $L(M_{c},E)\leqslant L(M_{0},E)<\infty$. On the
other hand, if $M<M_{c}$, then by construction $E^{*}(M)$ is decreasing in $M$
and hence $E^{*}(M)>E^{*}(M_{0})=E_{c}$ and so another application of the
implication (9.1) gives $L(M,E_{c})<\infty$.
Therefore, to show that $(M_{c},E_{c})$ is a mass/energy threshold, it only
remains to prove that $L(M_{c},E_{c})=\infty$. As usual, this is a consequence
of the (right) continuity of $L(M,E)$. More precisely, we claim that for any
$0<4E<\|Q\|_{\dot{H}^{1}}^{2}$ and $M>0$, there exists $C=C(M,E)>0$,
$\theta=\theta(M,E)>0$, and $\epsilon_{0}=\epsilon_{0}(M,E)>0$ such that for
any $0<\epsilon<\epsilon_{0}$ we have
$L(M,E)\leqslant L(M+\epsilon,E+\epsilon)\leqslant L(M,E)+C\epsilon^{\theta}.$
(9.2)
Clearly (9.2) implies that $L(M_{c},E_{c})=\infty$. The right continuity of
$L$ is a consequence of the stability result in Theorem 5.2 together with the
observation in [16] that $\lambda\mapsto\mathcal{E}_{Z}(\lambda
f,\lambda^{2}g)$ is an increasing function under the assumption that $(f,g)$
lie below the ground state $Q$. Roughly speaking, the point is that if we are at energy
$E+\epsilon$, then flowing back in $\lambda$ reduces the energy, at which point
we can apply the definition of $L(M,E)$. Provided $\epsilon>0$ is sufficiently
small, applying Theorem 5.2 then bounds $L(M+\epsilon,E+\epsilon)$ in terms of
$L(M,E)$. Making this argument precise relies on the variational properties of
the ground state contained in Lemma 7.1.
We now turn to the details. The first inequality in (9.2) is immediate from
the definition. To prove the second, it is enough to consider the case
$L(M,E)<\infty$. Let $(u,V)\in\Omega(E+\epsilon)$ with
$\mathcal{M}(u)\leqslant M+\epsilon$. Our goal is to prove that
$\|u\|_{D}-L(M,E)\lesssim_{E,M}\epsilon^{\theta}$ with the implied constant
only depending on $E$ and $M$. Define
$(f,g)=(\lambda u,\lambda^{2}V)(0)\in H^{1}\times L^{2}$
where $0<\lambda<1$ is to be chosen later. Note that
$\mathcal{M}(u)-\mathcal{M}(f)=(1-\lambda^{2})\mathcal{M}(u)\gtrsim_{M}(1-\lambda).$
On the other hand, for any
$0<\epsilon\leqslant\frac{1}{8}(\|Q\|^{2}_{\dot{H}^{1}}-4E)$, as
$4E<\|Q\|_{\dot{H}^{1}}^{2}$, Lemma 7.1 gives for any $0<\lambda\leqslant 1$
the lower bound
$\displaystyle\mathcal{K}\big{(}\lambda
u(0)\big{)}+\lambda^{4}\big{\|}V(0)+|u(0)|^{2}\big{\|}_{L^{2}_{x}}^{2}$
$\displaystyle\geqslant\lambda^{4}\Big{(}\mathcal{K}\big{(}u(0)\big{)}+\big{\|}V(0)+|u(0)|^{2}\big{\|}_{L^{2}_{x}}^{2}\Big{)}$
$\displaystyle\geqslant\lambda^{4}4\mathcal{E}_{Z}(u,V)\frac{\|Q\|_{\dot{H}^{1}}^{2}-4\mathcal{E}_{Z}(u,V)}{2\|Q\|_{\dot{H}^{1}}^{2}}\gtrsim_{E}\lambda^{4}\mathcal{E}_{Z}(u,V).$
Consequently the identity (7.1) implies that
$\displaystyle\mathcal{E}_{Z}(u,V)-\mathcal{E}_{Z}(f,g)=\int_{\lambda}^{1}a^{-1}\mathcal{K}\big{(}au(0)\big{)}+a^{3}\big{\|}V(0)+|u(0)|^{2}\big{\|}_{L^{2}_{x}}^{2}da\gtrsim_{E}(1-\lambda)\mathcal{E}_{Z}(u,V).$
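(Here the equality is simply (7.1) together with the fundamental theorem of calculus along the curve $a\mapsto(au(0),a^{2}V(0))$, and the final inequality is a sketch of the following estimate: writing the integrand as $a^{-1}\big{[}\mathcal{K}\big{(}au(0)\big{)}+a^{4}\|V(0)+|u(0)|^{2}\|_{L^{2}_{x}}^{2}\big{]}$ and applying the preceding display with $\lambda$ replaced by $a$, the integrand is bounded below by a constant depending only on $E$ times $a^{3}\mathcal{E}_{Z}(u,V)\geqslant\lambda^{3}\mathcal{E}_{Z}(u,V)$, and integrating over $a\in[\lambda,1]$ with, say, $\lambda\geqslant\frac{1}{2}$ gives the claim.)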
In particular, the energy and mass decrease as $\lambda\to 0$, and hence
choosing $(1-\lambda)\approx_{E}\epsilon$ we have
$\mathcal{E}_{Z}(f,g)\leqslant E$ and $\mathcal{M}(f)\leqslant M$. Therefore,
letting $(\psi,\phi)\in\Omega(E)$ denote the corresponding global solution to
(1.2) with data $(\psi,\phi)(0)=(f,g)$ given by Theorem 1.1, we see that
$\|\psi\|_{D}\leqslant L(M,E)$. To conclude the corresponding dispersive bound
for $u$ we apply the stability result from Theorem 5.2. More precisely,
Theorem 3.1 and Lemma 7.1 implies that
$\|u(0)\|_{H^{1}}+\|\psi\|_{S^{\frac{1}{2}}}\lesssim_{E,M}1,\qquad\|\phi\|_{L^{\infty}_{t}L^{2}_{x}}\leqslant
2E^{\frac{1}{2}}<\|Q\|_{\dot{H}^{1}}$
(strictly speaking the implied constant here also depends on $L(M,E)$). On the
other hand, the choice of $(f,g)$ gives
$\|u(0)-\psi(0)\|_{\dot{H}^{1}}+\|V(0)-\phi(0)\|_{L^{2}}\lesssim_{E,M}1-\lambda\lesssim_{E,M}\epsilon.$
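(Indeed, since $(\psi,\phi)(0)=(f,g)=(\lambda u,\lambda^{2}V)(0)$, we have $u(0)-\psi(0)=(1-\lambda)u(0)$ and $V(0)-\phi(0)=(1-\lambda^{2})V(0)$ with $1-\lambda^{2}\leqslant 2(1-\lambda)$, so the left hand side is bounded by $2(1-\lambda)\big{(}\|u(0)\|_{\dot{H}^{1}}+\|V(0)\|_{L^{2}}\big{)}$, which is $\lesssim_{E,M}1-\lambda$ by the bounds recorded above, together with the standing assumption that solutions in $\Omega(E+\epsilon)$ lie below the ground state so that Lemma 7.1 also bounds $\|V(0)\|_{L^{2}}$. This is only a sketch of the computation behind the display.)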
Hence provided $\epsilon>0$ is sufficiently small (depending only on $M$ and
$E$, albeit via $L(M,E)$), (5.3) together with Theorem 5.2 implies that $u\in
S^{\frac{1}{2}}$ with the bound
$\|u-\psi\|_{D}\lesssim\|u-\psi-e^{it\Delta}(u-\psi)(0)\|_{S^{\frac{1}{2}}}+\|(u-\psi)(0)\|_{H^{1}}\lesssim_{E,M}\epsilon^{\theta}.$
In other words, we have a constant $C=C(M,E)>0$ depending only on $E$ and $M$
(and $L(M,E)$) such that $\|u\|_{D}\leqslant L(M,E)+C\epsilon^{\theta}$.
Taking the sup over all $(u,V)\in\Omega(E+\epsilon)$ with $\mathcal{M}(u)=M$
we conclude the required bound. ∎
The proof of Theorem 1.6 is now an application of the Palais-Smale type
property together with the stability result obtained earlier.
###### Proof of Theorem 1.6.
Suppose Conjecture 1.4 failed. Applying Lemma 9.1 we would then conclude that
there exists a mass/energy threshold $(M_{c},E_{c})$ with
$4E_{c}<\|Q\|_{\dot{H}^{1}}^{2}$. In particular, as $L(M_{c},E_{c})=\infty$,
there exists a sequence $(u_{n},V_{n})\in\Omega(E_{c})$ such that
$\lim_{n\to\infty}\mathcal{E}_{Z}(u_{n},V_{n})=E_{c},\qquad\lim_{n\to\infty}\mathcal{M}(u_{n})=M_{c},\qquad\lim_{n\to\infty}\|u_{n}\|_{D}=\infty.$
Choose $t_{n}\in\mathbb{R}$ such that
$\lim_{n\to\infty}\|u_{n}\|_{D((-\infty,t_{n}])}=\lim_{n\to\infty}\|u_{n}\|_{D([t_{n},\infty))}=\infty.$
After replacing $u_{n}(t)$ with $u_{n}(t+t_{n})$, we may assume that $t_{n}=0$
for all $n\in\mathbb{N}$. Theorem 8.1 then implies that, up to a subsequence,
we have $(f_{c},g_{c})\in H^{1}\times L^{2}$ and $x_{n}\in\mathbb{R}^{4}$ such
that
$\lim_{n\to\infty}\big{\|}(u_{n},V_{n})(0,x+x_{n})-(f_{c},g_{c})(x)\big{\|}_{H^{1}\times
L^{2}}=0.$ (9.3)
Note that $\mathcal{E}_{Z}(f_{c},g_{c})=E_{c}$, $\mathcal{M}(f_{c})=M_{c}$,
and $\|g_{c}\|_{L^{2}}\leqslant\|Q\|_{\dot{H}^{1}}$. Hence applying Theorem
1.1 with data $(f_{c},g_{c})\in H^{1}\times L^{2}$ we obtain a global solution
$(\psi,\phi)\in C(\mathbb{R};H^{1}\times L^{2})$ to (1.2) with
$\psi\in
L^{2}_{t,loc}W^{\frac{1}{2},4}_{x},\qquad\mathcal{E}_{Z}(\psi,\phi)=E_{c},\qquad\mathcal{M}(\psi)=M_{c},\qquad\|\phi\|_{L^{\infty}_{t}L^{2}_{x}}\leqslant\|Q\|_{\dot{H}^{1}}.$
(9.4)
Moreover, the limit (9.3) together with the stability result in Theorem 5.2
gives
$\|\psi\|_{D((-\infty,0])}=\|\psi\|_{D([0,\infty))}=\infty.$ (9.5)
It only remains to verify that there exists $x(t):\mathbb{R}\to\mathbb{R}^{4}$
such that the orbit
$\big{\\{}(\psi,\phi)\big{(}t,x+x(t)\big{)}\,\,\big{|}\,\,t\in\mathbb{R}\big{\\}}$
(9.6)
is precompact in $H^{1}\times L^{2}$. To prove the existence of the
translations $x(t)$, one option is to argue abstractly as in [32].
Alternatively, and this is the approach we take here, we can give a concrete
definition of the translations (an observation kindly communicated to us by
Kenji Nakanishi) by noting that $x(t)$ should essentially be the
“centre of mass” of the solution $(\psi,\phi)(t)$. To this end, we choose the
components $x_{j}(t)\in\mathbb{R}$ of $x(t)\in\mathbb{R}^{4}$ as
$\int_{x_{j}(t)}^{\infty}\int_{\mathbb{R}^{3}}\big{(}|\nabla\psi|^{2}+|\psi|^{2}+|\phi|^{2}\big{)}(t,y)dy^{\prime}dy_{j}=\frac{1}{2}\|(\psi,\phi)(t)\|_{H^{1}\times
L^{2}}^{2}$
where $y^{\prime}\in\mathbb{R}^{3}$ denotes the remaining spatial variables.
In other words, $x(t)$ is roughly the centre of the $H^{1}\times L^{2}$ mass
of $(\psi,\phi)$.
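(To see that this definition makes sense, note that for each fixed $t$ with $(\psi,\phi)(t)\not=0$ the left hand side above is a continuous, non-increasing function of the threshold $x_{j}(t)$, tending to $\|(\psi,\phi)(t)\|_{H^{1}\times L^{2}}^{2}$ as the threshold goes to $-\infty$ and to $0$ as it goes to $+\infty$; by the intermediate value theorem a value achieving exactly half of the total exists, and if it is not unique we simply fix one choice, say the infimum. This remark is only meant to make the definition of $x(t)$ precise.)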
Suppose (9.6) is not precompact. Then there exists a sequence
$t_{n}\in\mathbb{R}$ and $C>0$ such that for all $n\not=m$ we have
$\big{\|}(\psi,\phi)\big{(}t_{n},x+x_{n}\big{)}-(\psi,\phi)\big{(}t_{m},x+x_{m}\big{)}\big{\|}_{H^{1}\times
L^{2}}\geqslant C$ (9.7)
where for ease of notation we let $x_{n}=x(t_{n})$. Applying Theorem 8.1 to
the sequence $(\psi,\phi)(t+t_{n})$, the properties (9.4) and (9.5) imply that
there exists $\tilde{x}_{n}\in\mathbb{R}^{4}$ and $(\tilde{f},\tilde{g})\in
H^{1}\times L^{2}$ such that up to a subsequence, the translated sequence
$(\psi,\phi)(t_{n},x+\tilde{x}_{n})$ converges to $(\tilde{f},\tilde{g})\in
H^{1}\times L^{2}$. In particular, after translating once more, we have
$\lim_{n\to\infty}\big{\|}(\psi,\phi)(t_{n},x+x_{n})-(\tilde{f},\tilde{g})(x+x_{n}-\tilde{x}_{n})\big{\|}_{H^{1}\times
L^{2}}=0.$ (9.8)
If $\sup_{n}|x_{n}-\tilde{x}_{n}|<\infty$, then after taking a further
subsequence we can assume $x_{n}-\tilde{x}_{n}$ converges. But then (9.8) is
clearly a contradiction to (9.7). Similarly, if $(\tilde{f},\tilde{g})=0$,
then again (9.8) contradicts (9.7). Thus, we may assume that
$(\tilde{f},\tilde{g})\not=0$, and writing the components of the vectors
$x_{n},\tilde{x}_{n}\in\mathbb{R}^{4}$ as $x_{n,j}$ and $\tilde{x}_{n,j}$,
there must exist some $1\leqslant j\leqslant 4$ such that
$|x_{n,j}-\tilde{x}_{n,j}|\to\infty$. Observe that, by our choice of $x(t)$
and (9.8), we have
$\displaystyle\lim_{n\to\infty}\int_{x_{n,j}-\tilde{x}_{n,j}}^{\infty}\int_{\mathbb{R}^{3}}\big{(}|\nabla\tilde{f}|^{2}+|\tilde{f}|^{2}+|\tilde{g}|^{2}\big{)}(y)\,dy^{\prime}dy_{j}$
$\displaystyle=\lim_{n\to\infty}\int_{x_{n,j}}^{\infty}\int_{\mathbb{R}^{3}}\big{(}|\nabla\psi|^{2}+|\psi|^{2}+|\phi|^{2}\big{)}(t_{n},y)dy^{\prime}dy_{j}$
$\displaystyle=\frac{1}{2}\lim_{n\to\infty}\big{\|}(\psi,\phi)(t_{n})\big{\|}_{H^{1}\times
L^{2}}^{2}=\frac{1}{2}\big{\|}(\tilde{f},\tilde{g})\big{\|}_{H^{1}\times
L^{2}}^{2}.$
But this is again a contradiction: along a further subsequence on which
$x_{n,j}-\tilde{x}_{n,j}\to+\infty$ or $-\infty$, the left hand side converges to either
$0$ or $\|(\tilde{f},\tilde{g})\|_{H^{1}\times L^{2}}^{2}$, neither of which equals
$\frac{1}{2}\|(\tilde{f},\tilde{g})\|_{H^{1}\times L^{2}}^{2}\not=0$. Therefore
(9.7) cannot hold, and hence the orbit (9.6) is precompact as claimed. ∎
## Acknowledgements
The author would like to thank Kenji Nakanishi for many helpful conversations
on the concentration compactness argument for dispersive PDE. Financial
support from the Marsden Fund Council grant 19-UOO-142, managed by Royal
Society Te Apārangi is gratefully acknowledged.
## References
* [1] Thierry Aubin, _Problèmes isopérimétriques et espaces de Sobolev_ , Journal of Differential Geometry 11 (1976), no. 4, 573–598.
* [2] Hajer Bahouri and Patrick Gérard, _High frequency approximation of solutions to critical nonlinear wave equations_ , American Journal of Mathematics 121 (1999), no. 1, 131–175.
* [3] Ioan Bejenaru, Zihua Guo, Sebastian Herr, and Kenji Nakanishi, _Well-posedness and scattering for the Zakharov system in four dimensions_ , Anal. PDE 8 (2015), no. 8, 2029–2055. MR 3441212
* [4] Ioan Bejenaru and Sebastian Herr, _Convolutions of singular measures and applications to the Zakharov system_ , (2010).
* [5] Ioan Bejenaru, Sebastian Herr, Justin Holmer, and Daniel Tataru, _On the 2d Zakharov system with $l^{2}$ Schrödinger data_, Nonlinearity 22 (2008), 1063–1089.
* [6] J. Bourgain, _Global wellposedness of defocusing critical nonlinear Schrödinger equation in the radial case_ , J. Amer. Math. Soc. 12 (1999), no. 1, 145–171. MR 1626257
* [7] J. Bourgain and J. Colliander, _On wellposedness of the Zakharov system_ , International Mathematics Research Notices 1996 (1996), no. 11, 515–546.
* [8] Timothy Candy, _Multi-scale bilinear restriction estimates for general phases_ , Math. Ann. 375 (2019), no. 1, 777–843.
* [9] Timothy Candy, Sebastian Herr, and Kenji Nakanishi, _Global wellposedness for the energy-critical Zakharov system below the ground state_ , Advances in Mathematics 384 (2021), 107746.
* [10] Timothy Candy, Sebastian Herr, and Kenji Nakanishi, _The Zakharov system in dimension $d\geqslant 4$_, to appear in Journal of the European Mathematical Society (2021).
# WiCV 2022: The Tenth Women In Computer Vision Workshop
Doris Antensteiner1, Silvia Bucci2, Arushi Goel3, Marah Halawa4, Niveditha
Kalavakonda5,
Tejaswi Kasarla6, Miaomiao Liu7, Nermin Samet8, Ivaxi Sheth9
1Austrian Institute of Technology, 2Polytechnic of Turin, 3University of
Edinburgh,
4Technical University of Berlin, 5University of Washington, 6University of
Amsterdam,
7Australian National University, 8Ecole des Ponts ParisTech, 9Mila-Quebec AI,
ETS Montreal
<EMAIL_ADDRESS>
###### Abstract
In this paper, we present the details of Women in Computer Vision Workshop -
WiCV 2022, organized alongside the hybrid CVPR 2022 in New Orleans, Louisiana.
It provides a voice to a minority (female) group in the computer vision
community and focuses on increasing the visibility of these researchers, both
in academia and industry. WiCV believes that such an event can play an
important role in reducing the gender imbalance in the field of computer
vision. WiCV is organized each year and provides a) opportunities for
collaboration between researchers from minority groups, b) mentorship for
junior female researchers, c) financial support to presenters to lessen the
monetary burden of attendance, and d) a large and diverse set of role models
who can serve as examples for younger researchers at the beginning of their
careers. In this paper, we present a report on the workshop program, trends
over the past years, and a summary of statistics regarding presenters,
attendees, and sponsorship for the WiCV 2022 workshop.
## 1 Introduction
While excellent progress has been made in a wide variety of computer vision
research areas in recent years, similar progress has not been made in the
increase of diversity in the field and the inclusion of all members of the
computer vision community. Despite the rapid expansion of our field, females
still only account for a small percentage of the researchers in both academia
and industry. Due to this, many female computer vision researchers can feel
isolated in workspaces which remain unbalanced due to the lack of inclusion.
The Women in Computer Vision workshop is a gathering for both women and men
working in computer vision. It aims to appeal to researchers at all levels,
including established researchers in both industry and academia (e.g. faculty
or postdocs), graduate students pursuing a Masters or PhD, as well as
undergraduates interested in research. The workshop aims to raise the profile and
visibility of female computer vision researchers at each of these levels,
seeking to reach women from diverse backgrounds at universities and industry
located all over the world.
There are three key objectives of the WiCV workshop. The first is to increase the
WiCV network and promote interactions between members of this network, so that
female students may learn from professionals who are able to share career
advice and past experiences. A mentoring banquet is run alongside the
workshop. This provides a casual environment where both junior and senior
women in computer vision can meet, exchange ideas and even form mentoring or
research relationships.
The workshop’s second objective is to raise the visibility of women in
computer vision. This is done at both the junior and senior levels. Several
senior researchers are invited to give high quality keynote talks on their
research, while junior researchers are invited to submit recently published or
ongoing works with many of these being selected for oral or poster
presentation through a peer review process. This allows junior female
researchers to gain experience presenting their work in a professional yet
supportive setting. We strive for diversity in both research topics and
presenters’ backgrounds. The workshop also includes a panel, where the topics
of inclusion and diversity can be discussed between female colleagues.
Finally, the third objective is to offer junior female researchers the
opportunity to attend a major computer vision conference which they otherwise
may not have the means to attend. This is done through travel grants awarded
to junior researchers who present their work in the workshop via a poster
session. These travel grants allow the presenters to not only attend the WiCV
workshop, but also the rest of the CVPR conference.
## 2 Workshop Program
The workshop program consisted of 4 keynotes, 7 oral presentations, 34 poster
presentations, a panel discussion, and a mentoring session. As with previous
years, our keynote speakers were selected to have diversity among topic,
background, whether they work in academia or industry, as well as their
seniority. It is crucial to provide a diverse set of speakers so that junior
researchers have many different potential role models who they can relate to
in order to help them envision their own career paths.
The workshop schedule was as follows:
* •
Introduction
* •
Invited Talk 1: Marina Marie-Claire Höhne (Technische Universität Berlin,
Germany), Improving Explainable AI by using Bayesian Neural Networks
* •
Oral Session 1
* –
Jennifer Hobbs, Deep Density Estimation Based on Multi-Spectral Remote Sensing
Data for In-Field Crop Yield Forecasting
* –
Ranya Almohsen, Generative Probabilistic Novelty Detection with Isometric
Adversarial Autoencoders.
* –
Sarah A. Schneider, A Comparative Analysis in the Realm of Anomaly Detection
* •
Invited Talk 2: Tatiana Tommasi (Polytechnic University of Turin, Italy),
Reliable 2D and 3D Models for Open World Applications
* •
Poster Session (in person)
* •
Invited Talk 3: Michal Irani (Weizmann Institute of Science, Israel), “Mind
Reading”: Self-supervised decoding of visual data from brain activity
* •
Oral Session 2
* –
Maxine A Perroni-Scharf, Material Swapping for 3D Scenes using a Learnt
Material Similarity Measure
* –
Mengyuan Zhang, Enriched Robust Multi-View Kernel Subspace Clustering
* –
Asra Aslam, Detecting Objects in Less Response Time for Processing Multimedia
Events in Smart Cities
* –
Sonam Gupta, RV-GAN: Recurrent GAN for Unconditional Video Generation
* •
Invited Talk 4: Angela Yao (School of Computing at the National University of
Singapore), Capturing and Understanding 3D Hands in Action
* •
Panel Discussion
* •
Closing Remarks
* •
Mentoring Session and Dinner (in person)
* –
Speaker: Angela Dai (Technical University of Munich, Germany)
* –
Speaker: Djamila Aouada (University of Luxembourg)
### 2.1 Hybrid Setting
This year, the organization was slightly modified as CVPR 2022 was held in a
hybrid setting. We had to make two plans, one for in-person attendance and one
for virtual attendance, and we made sure the virtual WiCV workshop was as
engaging and interactive as possible. Talks, oral sessions, and the panel were
shared via Zoom for virtual attendees, while the poster session was repeated
virtually a week after the conference, similar to the main conference setting.
We also provided online mentoring sessions, held via Zoom, for mentors and
mentees who were only able to attend virtually.
## 3 Workshop Statistics
Originally, the first workshop for WiCV was held in conjunction with CVPR
2015. Since then, the participation rate and the number and quality of
submissions to WiCV have been steadily increasing. Following the examples of
the editions held in previous years [1, 2, 3, 4, 5], we were encouraged to
collect the top-quality submissions into workshop proceedings. By providing
oral and poster presenters with the opportunity to publish their work in the
conference’s proceedings, we believe that the visibility of female researchers
will be further increased. This year, the workshop was held as a half-day
hybrid gathering; the virtual setting was hosted on Gather.Town and Zoom, and
the in-person setting was in New Orleans, Louisiana. Senior and junior
researchers were invited to present their work, and oral and poster
presentations were included as already described in Section 2.
The organizers of this year’s WiCV workshop work in both academia and
industry, at various institutions located in different time zones. Their
varied backgrounds and research areas give the organizing committee a diverse
perspective. Their research interests in computer vision
and machine learning include video understanding, representation learning, 3D
reconstruction, domain adaptation, domain generalization, vision and language,
and semi/self-supervised learning in different application areas such as
vision for robotics, and healthcare.
Figure 1: WiCV Submissions. The number of submissions over the past years of
WiCV.
This year we had 70 high-quality submissions from a wide range of topics and
institutions, on par with WiCV@CVPR21. The most popular topics were deep
learning architectures and techniques, followed by video action and event
recognition, segmentation and shape analysis, and medical applications. Of the
70 submissions, 64 went into the review process; 7 papers were selected to be
presented as oral talks and appeared in the CVPR22 workshop’s proceedings, and
34 papers were selected to be presented as posters. The comparison with
previous years is presented in Figure 1. Thanks to the great effort of an
interdisciplinary program committee consisting of 41 reviewers, the submitted
papers were evaluated and received valuable feedback.
This year we kept the WiCV tradition of previous years’ workshops [1, 2, 3, 4,
5] of providing grants to help the authors of accepted submissions participate
in the workshop. The grants covered the conference registration fees, the
itinerary (round-trip flight), and two days of accommodation for all authors
of accepted submissions who requested funding.
The total amount of sponsorship this year was $62,000 USD from 10 sponsors,
reaching a very good target. Figure 2 presents the details in comparison with
past years.
Figure 2: WiCV Sponsors. The number of sponsors and the amount of sponsorship
for WiCV. The amount is expressed in US dollars (USD).
## 4 Conclusions
WiCV at CVPR 2022 has continued to be a valuable opportunity for presenters,
participants and organizers in providing a platform to bring the community
together. It continues to address the prevailing issue of gender imbalance,
and we hope that it has played an important part in making the community even
stronger. It also provided an opportunity for people to connect from all over
the world from the comfort of their own locations. With a high number of paper
submissions and an even higher number of attendees, we foresee that the
workshop will continue on the path marked by previous years and foster
stronger community building with increased visibility, support, and
encouragement for all female researchers in academia and industry.
## 5 Acknowledgments
First of all, we would like to thank our sponsors. We are very grateful to our
Platinum sponsors: Toyota Research Institute, Google, and Apple. We would also
like to thank our Gold sponsors: Microsoft and DeepMind; Silver sponsors:
Meta, Disney Research, and Zalando; and Bronze sponsors: Meshcapade and
Nvidia. We would also like to thank San Francisco Study Center as our
fiscal sponsor, which helped to process our sponsorships and travel awards. We
would also like to thank and acknowledge the organizers of previous WiCV
workshops; without the information flow and support from the previous WiCV
organizers, this edition would not have been possible. Finally, we would like
to acknowledge the time and efforts of our program committee, authors,
reviewers, submitters, and prospective participants for being part of the WiCV
network community.
## 6 Contact
Website: https://sites.google.com/view/wicvcvpr2022/home
E-mail<EMAIL_ADDRESS>
Facebook: https://www.facebook.com/WomenInComputerVision/
Twitter: https://twitter.com/wicvworkshop
Google group<EMAIL_ADDRESS>
## References
* [1] Z. Akata, D. Bazazian, Y. Hasson, A. Kanazawa, H. Kuehne, and G. Varol. WiCV at ECCV2018: The Fifth Women in Computer Vision Workshop. In Proceedings of European Conference on Computer Vision Workshops, 2018.
* [2] I. Amerini, E. Balashova, S. Ebrahimi, K. Leonard, A. Nagrani, and A. Salvador. WiCV 2019: The Sixth Women In Computer Vision Workshop. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019.
* [3] I. Demir, D. Bazazian, A. Romero, V. Sharmanska, and L. Tchapmi. WiCV 2018: The Fourth Women In Computer Vision Workshop. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 1860–1862, 2018.
* [4] H. Doughty, N. Karessli, K. Leonard, B. Li, C. Martinez, A. Mobasher, A. Nagrani, and S. Yadav. WiCV 2020: The Seventh Women in Computer Vision Workshop. arXiv preprint arXiv:2101.03787, 2021.
* [5] A. Goel, N. Kalavakonda, N. Karessli, T. Kasarla, K. Leonard, B. Li, N. Samet, and G. Zamzmi. WiCV 2021: The Eighth Women in Computer Vision Workshop, 2022.
|
# FAST: Fidelity-Adjustable Semantic Transmission over Heterogeneous Wireless
Networks
Peichun Li1,2, Guoliang Cheng1, Jiawen Kang1, Rong Yu1, Liping Qian3, Yuan
Wu2, and Dusit Niyato4 1School of Automation, Guangdong University of
Technology, Guangzhou, China
2State Key Laboratory of Internet of Things for Smart City, University of
Macau, Macau, China
3College of Information Engineering, Zhejiang University of Technology,
Hangzhou, China
4School of Computer Science and Engineering, Nanyang Technological University,
Singapore
Email<EMAIL_ADDRESS><EMAIL_ADDRESS>{kavinkang,
<EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
In this work, we investigate the challenging problem of on-demand semantic
communication over heterogeneous wireless networks. We propose a fidelity-
adjustable semantic transmission framework (FAST) that empowers wireless
devices to send data efficiently under different application scenarios and
resource conditions. To this end, we first design a dynamic sub-model training
scheme to learn the flexible semantic model, which enables edge devices to
customize the transmission fidelity with different widths of the semantic
model. After that, we focus on the FAST optimization problem to minimize the
system energy consumption with latency and fidelity constraints. Following
that, the optimal transmission strategies including the scaling factor of the
semantic model, computing frequency, and transmitting power are derived for
the devices. Experiment results indicate that, compared to the baseline
transmission schemes, the proposed framework can reduce the system energy
consumption and data size by up to one order of magnitude while maintaining
reasonable data fidelity.
###### Index Terms:
Semantic communications, dynamic neural networks, on-demand communications,
resource management.
## I Introduction
By 2030, 17.1 billion wireless devices equipped with versatile sensors will
produce 5 zettabytes of data per month [1]. The explosive growth of edge-
generated data raises challenges in achieving efficient information exchange
among massive numbers of devices. Semantic communication is an emerging data
transmission paradigm that aims to extract and deliver the explicative meaning
of the data [2]. The semantic-aware communication systems can reveal the
intrinsic information of the raw data by leveraging the knowledge of prior
models [3, 4]. By integrating semantic communication into wireless networks,
the required data traffic will be significantly reduced, leading to a green
and reliable communication pattern [5, 6].
By leveraging the capacity of neural networks, learning-based semantic
communication systems can extract compact and accurate information from the
image and speech [7, 8, 9]. To improve the freshness of status updates, the
age of semantics is incorporated into the semantic communication systems [10].
Recently, system-level methods have focused on improving the efficiency of
semantic communication, such as a spectrum-efficient method that assigns the
optimal channel to the wireless devices [11] and adaptive resource scheduling
that maximizes the probability of successful transmission [12]. However, these
methods employ fixed neural networks to accomplish the extraction of semantic
information during the running time, which hinders the flexibility of
semantic-aware transmission over heterogeneous networks.
Using a fixed learning-based semantic model is a stringent limitation for the
communication system over heterogeneous wireless networks. This setting
degrades the ability of the communication system to handle different
application scenarios. As illustrated in Figure 1, a typical semantic model
may be designed to concurrently support multiple vision-related tasks under
different resource conditions. Compared with image classification that only
identifies the category of the image, object detection needs to additionally
analyze the location of the object of interest [13]. Thus, given limited
computation and communication resources, high-fidelity semantic data with
fine-grained information should be reserved for object detection [14], while
the low-fidelity one is sufficient for image classification. Also, the quality
of semantic communication should be adapted to the energy status of the
battery-powered devices. Employing a high-fidelity mode in performance-first
settings and switching to a low-fidelity mode for energy saving is an
effective way to maintain the transmission quality while prolonging the
battery lifetime [15, 16].
Figure 1: FAST over heterogeneous wireless networks.
In this paper, we propose FAST, a fidelity-adjustable semantic transmission
framework, to improve the flexibility of learning-based semantic communication
systems over heterogeneous wireless networks. We focus on the image
transmission task with an autoencoder as the semantic model [17]. Our goal is
to train a flexible semantic model that enables wireless devices to select
sub-models of different sizes at running time. Then, we propose a
fidelity-aware resource management approach, where the optimal transmission
strategy is designed to meet the quality and efficiency constraints.
Specifically, a full-size model with powerful capacity is preferred in the
high-fidelity scenario, and a small sub-model is adopted in the low-fidelity
scenario to save the system cost [18].
However, determining the optimal transmission strategies for FAST with
personalized constraints is a non-trivial task, as it is not known in advance
how to train the flexible semantic models or how the fidelity of the semantic
data is affected by the model size. To address these issues, we first design a dynamic sub-model
training scheme to concurrently support flexible encoding and decoding with
different model widths. Meanwhile, the relationship between model size and the
expected fidelity is empirically quantified. Following that, we study the FAST
optimization problem to improve energy efficiency with given latency and
fidelity budgets. Based on the theoretical analysis, the problem is
transformed into a convex problem. Finally, we develop a hierarchical
bisection algorithm to solve the problem, where the size of the semantic sub-
model, CPU computing frequency, and transmitting power are determined
according to the fidelity constraint and resource status.
Our main contributions are summarized as follows.
* •
We propose a novel semantic communication framework, named FAST that enables
wireless devices to perform fidelity-adjustable data transmission.
* •
We investigate the fidelity-aware resource management problem for FAST, and
the optimal transmission strategy is devised to minimize the system energy
cost.
* •
Extensive experiments demonstrate the efficiency and effectiveness of FAST,
which outperforms the existing baselines in terms of resource utilization and
data fidelity.
The remainder of this paper is organized as follows. Section II details the
main components of FAST to fulfill flexible semantic communication. The
problem formulation, theoretical analysis, and the corresponding solution are
provided in Section III. The experiment simulations are presented in Section
IV, and we finally conclude the paper in Section V.
## II System Model
### II-A Outline of FAST
We consider the scenario of wireless semantic communication between two
physical entities, i.e., the transmitter and the receiver. As shown in Figure
2, unlike traditional methods that use a fixed model at running time, FAST
employs a flexible semantic model to accomplish fidelity-adjustable
transmission. Specifically, the semantic model comprises two parts: the
encoder and the decoder. For the full-size model, we use
$\boldsymbol{\theta}$ and $\boldsymbol{\vartheta}$ to parameterize the weights
of the encoder and decoder, respectively. Here, we introduce a scaling factor
$\pi\in(0,1]$ for the width of each layer in the flexible model. Given a
scaling factor $\pi$, we can derive a pair of small encoder and decoder from
the full-size model, denoted as $\boldsymbol{\theta}_{\pi}$ and
$\boldsymbol{\vartheta}_{\pi}$, respectively. The process of the FAST is
divided into the following three phases.
#### II-A1 Phase I for encoding
With the pre-determined scaling factor $\pi$, the source device switches from
the full-size encoder to a small one parameterized by
$\boldsymbol{\theta}_{\pi}$. Let $\boldsymbol{x}$ denote the raw data, and let
${\boldsymbol{h}}_{\pi}$ represent the corresponding semantic data. The
function of semantic encoding can be expressed as
${\boldsymbol{h}}_{\pi}=\texttt{enc}({\boldsymbol{x}};\boldsymbol{\theta}_{\pi}).$
(1)
#### II-A2 Phase II for transmission
After obtaining the semantic data ${\boldsymbol{h}}_{\pi}$, the source device
transmits it to the destination. Here, we consider that the semantic data is
converted into binary symbols. Thus, the transmission for the semantic
information still follows the Shannon capacity [12].
#### II-A3 Phase III for decoding
After receiving the semantic data, the destination device decodes it to
reconstruct the data $\hat{{\boldsymbol{x}}}$. Specifically, the encoder and
decoder have the symmetry structures, i.e., the decoder shares the same
scaling factor $\pi$ as the encoder does. The process of semantic decoding can
be represented as
$\hat{{\boldsymbol{x}}}=\texttt{dec}({\boldsymbol{h}}_{\pi};\boldsymbol{\vartheta}_{\pi}).$
(2)
Figure 2: FAST with flexible semantic model.
### II-B Flexible Semantic Model
We aim to train a flexible semantic model that supports nearly continuous
scaling factor $\pi\in(0,1]$. We first focus on deriving a pair of small-size
encoder and decoder
$\\{\boldsymbol{\theta}_{\pi},\boldsymbol{\vartheta}_{\pi}\\}$ from
$\\{\boldsymbol{\theta},\boldsymbol{\vartheta}\\}$. Then, we propose a dynamic
sub-model training scheme to train the flexible semantic model efficiently.
#### II-B1 Sub-model derivation
Given a scaling factor $\pi$, we aim to derive the small-size sub-model
$\\{\boldsymbol{\theta}_{\pi},\boldsymbol{\vartheta}_{\pi}\\}$ from the full-
size one $\\{\boldsymbol{\theta},\boldsymbol{\vartheta}\\}$. Specifically, the
sub-model derivation is performed in a layer-by-layer manner. For a
convolution layer with $C$ filters (i.e., the width of the layer), we select
the weights of the first $\lfloor\pi C\rfloor$ filters to construct the
corresponding layer of the small-size sub-model. Given a scaling factor $\pi$,
the derivations for $\boldsymbol{\theta}_{\pi}$ and
$\boldsymbol{\vartheta}_{\pi}$ are respectively expressed by
$\boldsymbol{\theta}_{\pi}=\texttt{sel}(\boldsymbol{\theta},{\pi})\quad\textrm{and
\quad}\boldsymbol{\vartheta}_{\pi}=\texttt{sel}(\boldsymbol{\vartheta},{\pi}),$
(3)
where $\texttt{sel}(\cdot,\cdot)$ denotes the function that selects the
weights of the full-size model for the small one.
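As an illustration, a minimal PyTorch-style sketch of such a width-selection function is given below; the layer traversal and weight slicing are our own illustrative assumptions rather than the authors' implementation, and in practice the final output layer would keep its full width so that the reconstructed image retains its 3 channels.

```python
import math
import torch.nn as nn

def sel(full_model: nn.Sequential, pi: float) -> nn.Sequential:
    """Illustrative sketch of sel(.,.): keep the first floor(pi * C) filters of
    every convolution layer of the full-size model."""
    layers, prev_out = [], None
    for layer in full_model:
        if isinstance(layer, nn.Conv2d):
            in_ch = prev_out or layer.in_channels      # first layer keeps its input width
            out_ch = max(1, math.floor(pi * layer.out_channels))
            sub = nn.Conv2d(in_ch, out_ch, layer.kernel_size,
                            layer.stride, layer.padding)
            # Copy the leading slice of the full-size weights and biases.
            sub.weight.data = layer.weight.data[:out_ch, :in_ch].clone()
            sub.bias.data = layer.bias.data[:out_ch].clone()
            layers.append(sub)
            prev_out = out_ch
        else:
            layers.append(layer)                        # activations etc. are reused as-is
    return nn.Sequential(*layers)
```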
#### II-B2 Learning objective
Our goal is to minimize the data reconstruction error for any sub-models
derived from the full-size model. Let
$\ell(\boldsymbol{x},\hat{\boldsymbol{x}})$ be the pre-determined loss
function. The learning objective can be expressed as
$\mathop{\min}\limits_{\boldsymbol{\theta},\boldsymbol{\vartheta}}\int_{\pi_{\min}}^{1}\sum\limits_{\boldsymbol{x}\in\boldsymbol{X}_{\textrm{test}}}{\ell\big{(}{\boldsymbol{x}},F({\boldsymbol{x}};\boldsymbol{\theta},\boldsymbol{\vartheta},\pi)\big{)}}d\pi,$
(4)
where the function
$F({\boldsymbol{x}};\boldsymbol{\theta},\boldsymbol{\vartheta},\pi)=\texttt{dec}\big{(}\texttt{enc}({\boldsymbol{x}};\boldsymbol{\theta}_{\pi});\boldsymbol{\vartheta}_{\pi}\big{)}$
computes the reconstructed data with given
$\boldsymbol{x},\boldsymbol{\theta},\boldsymbol{\vartheta}$ and $\pi$.
#### II-B3 Dynamic sub-model training
Note that the process of optimizing Eqn. (4) requires enumerating all sub-
models, which incurs a prohibitive cost for the model training. Inspired by
the study in [19], an efficient training method via sub-model sampling is
proposed to reduce the computation cost. As presented in Algorithm 1, we
propose to dynamically sample a sub-model at each iteration (i.e., Step 3).
Specifically, the total training loss is computed as the sum of the losses of
the random sub-model and the full-size model (i.e., Step 5). In this way, we
maintain the performance of different sub-models while reducing the training
overhead.
Input: Training dataset $\boldsymbol{X}_{\textrm{train}}$, and
$\\{\boldsymbol{\theta},\boldsymbol{\vartheta}\\}$.
1 for _each epoch $i=1,2,\ldots,I$_ do
2 for _each batch of training data
$\boldsymbol{x}_{\textrm{batch}}\in\boldsymbol{X}_{\textrm{train}}$_ do
3 Randomly sample a scaling factor $\pi$. Perform forward propagation with
$\pi$:
$\hat{\boldsymbol{x}}_{\textrm{sub}}=F(\boldsymbol{x}_{\textrm{batch}};\boldsymbol{\theta},\boldsymbol{\vartheta},\pi)$;
4 Perform forward propagation with full-size model:
$\hat{\boldsymbol{x}}_{\textrm{full}}=F(\boldsymbol{x}_{\textrm{batch}};\boldsymbol{\theta},\boldsymbol{\vartheta},1)$;
5 Compute the total reconstruction loss:
$Loss=\ell(\boldsymbol{x}_{\textrm{batch}},\hat{\boldsymbol{x}}_{\textrm{sub}})+\ell(\boldsymbol{x}_{\textrm{batch}},\hat{\boldsymbol{x}}_{\textrm{full}})$;
6 Apply backward propagation to update
$\\{\boldsymbol{\theta},\boldsymbol{\vartheta}\\}$;
7
8 end for
9
10 end for
Algorithm 1 Dynamic sub-model training
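A hedged PyTorch-style sketch of this loop is shown below; the L1 reconstruction loss, the optimizer, and the assumption that the encoder/decoder modules accept a width argument $\pi$ are illustrative choices, not the authors' exact implementation.

```python
import random
import torch.nn.functional as nnF

def reconstruct(encoder, decoder, x, pi):
    # Assumption: the flexible modules accept a width argument, i.e. they apply
    # the sub-model derived by sel(., pi) internally (Eqns. (1)-(3)).
    return decoder(encoder(x, pi), pi)

def train_flexible_model(encoder, decoder, loader, optimizer,
                         epochs=30, pi_min=0.25):
    """Sketch of Algorithm 1: dynamic sub-model training."""
    for _ in range(epochs):
        for x_batch, _ in loader:
            pi = random.uniform(pi_min, 1.0)                        # Step 3: sample pi
            x_sub = reconstruct(encoder, decoder, x_batch, pi)      # sub-model forward pass
            x_full = reconstruct(encoder, decoder, x_batch, 1.0)    # Step 4: full-size pass
            loss = nnF.l1_loss(x_sub, x_batch) + nnF.l1_loss(x_full, x_batch)  # Step 5
            optimizer.zero_grad()
            loss.backward()                                         # Step 6: update weights
            optimizer.step()
```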
### II-C Characterizing the Semantic Fidelity
We next investigate how the scaling factor $\pi$ affects the performance of
the corresponding sub-model. We first present the definition of semantic
fidelity of the given semantic model, and then reveal the relationship between
the scaling factor $\pi$ and the semantic fidelity of the sub-model.
#### II-C1 Definition of semantic fidelity
We define the semantic fidelity of the semantic model as the capability of
reconstructing data over the testing dataset. Formally, given a scaling factor
of $\pi$, the semantic fidelity $\phi_{\pi}$ of the corresponding sub-model is
calculated by
$\phi_{\pi}=1-\frac{1}{M|\boldsymbol{X}_{\textrm{test}}|}\sum\limits_{\boldsymbol{x}\in\boldsymbol{X}_{\textrm{test}}}{{\|{\boldsymbol{x}}-F({\boldsymbol{x}};\boldsymbol{\theta},\boldsymbol{\vartheta},\pi)\|}},$
(5)
where $M$ denotes the number of pixels in the image,
$|\boldsymbol{X}_{\textrm{test}}|$ measures the number of samples of the
testing dataset, and $\|\cdot\|$ calculates the L1 norm for the given vector.
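A short sketch of how $\phi_{\pi}$ could be estimated in practice is given below, assuming pixel values normalized to $[0,1]$ and encoder/decoder modules that accept a width argument; both are our illustrative assumptions.

```python
import torch

@torch.no_grad()
def semantic_fidelity(encoder, decoder, test_loader, pi):
    """Estimate phi_pi of Eqn. (5): one minus the average per-pixel L1
    reconstruction error over the testing dataset."""
    err_sum, pixel_count = 0.0, 0
    for x, _ in test_loader:
        x_hat = decoder(encoder(x, pi), pi)        # reconstruct with the width-pi sub-model
        err_sum += (x - x_hat).abs().sum().item()  # accumulate the L1 error
        pixel_count += x.numel()                   # M times the number of samples seen
    return 1.0 - err_sum / pixel_count
```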
#### II-C2 The impact of the scaling factor on semantic fidelity
Intuitively, larger sub-models with strong representation capabilities can
extract more latent information from the raw data, resulting in higher
semantic fidelity. Consistent with the existing studies in [20], we
employ the parameter fitting method to empirically investigate the
relationship between semantic fidelity $\phi_{\pi}$ and the scaling factor
$\pi$. We adopt the CIFAR-10 and CINIC-10 datasets, and the experiment
settings are provided in Section IV. Then, we sample a subset of sub-models
from the flexible semantic model trained by Algorithm 1, and evaluate their
corresponding performance by Eqn. (5). The relationship between $\phi_{\pi}$
and $\pi$ is formulated as
$\phi_{\pi}=\kappa_{1}\ln(\frac{\kappa_{2}}{\pi}+\kappa_{3})+\kappa_{4},$ (6)
where $\\{\kappa_{1},\kappa_{2},\kappa_{3},\kappa_{4}\\}$ are constant hyper-
parameters that can be experimentally fitted. The experiment results are
provided in Figure 3. As $\pi$ increases, the fidelity of the semantic data
increases, allowing it to carry more detailed information.
Figure 3: Semantic fidelity $\phi_{\pi}$ of sub-model with respect to scaling
factor $\pi$.
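These constants can be obtained, for example, with a least-squares fit; the sketch below uses SciPy, and the sampled $(\pi,\phi_{\pi})$ pairs are placeholders rather than the paper's measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def fidelity_model(pi, k1, k2, k3, k4):
    # Eqn. (6): phi_pi = kappa1 * ln(kappa2 / pi + kappa3) + kappa4
    # (the clip only guards against negative log arguments during the fit)
    return k1 * np.log(np.clip(k2 / pi + k3, 1e-12, None)) + k4

# Placeholder (pi, phi_pi) measurements; in practice these come from Eqn. (5)
# evaluated on sub-models sampled from the flexible model of Algorithm 1.
pi_samples = np.array([0.25, 0.4, 0.55, 0.7, 0.85, 1.0])
phi_samples = np.array([0.78, 0.82, 0.85, 0.87, 0.885, 0.895])

kappas, _ = curve_fit(fidelity_model, pi_samples, phi_samples, p0=[-0.1, 1.0, 1.0, 0.9])
print("fitted kappas:", kappas)
```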
## III Problem and Solution
### III-A FAST over Wireless Networks
For semantic communication with the full-size model and a single image sample,
we use $W_{e}$ and $W_{d}$ to denote the computation workloads for encoding
and decoding, respectively, and the size of the semantic information is $S$.
For FAST with a scaling factor of $\pi$, the encoding workloads, decoding
workloads, and data size to be transmitted are reduced as $\pi^{2}W_{e}$,
$\pi^{2}W_{d}$ and $\pi S$, respectively.
#### III-A1 Computation model
Let $f_{e}$ and $f_{d}$ denote the computing frequency for the encoding and
decoding, respectively. For the semantic-based transmission with $K$ samples,
given the sub-model scaling factor $\pi$, the overall time taken for the model
inference can be measured by
$T_{\text{cmp}}=K\pi^{2}\bigg{(}\frac{{W_{e}}}{f_{e}}+\frac{{W_{d}}}{f_{d}}\bigg{)}.$
(7)
Meanwhile, the overall energy consumption is estimated by
$E_{\text{cmp}}=K\pi^{2}(\epsilon_{e}f_{e}^{2}{W_{e}}+\epsilon_{d}f_{d}^{2}{W_{d}}),$
(8)
where $\epsilon_{e}$ and $\epsilon_{d}$ are the hardware energy coefficients
of the source and destination devices, respectively.
#### III-A2 Communication model
Let $B$ denote the available bandwidth, $P$ the transmitting power of the
source device and $N_{0}$ be the power spectral density of the Gaussian noise.
For the transmission of semantic data from the source to the destination, the
achievable transmitting rate is estimated by
$r={B}{\log_{2}}\Big{(}1+\frac{{|h|^{2}d^{-\eta}P}}{{{N_{0}}{B}}}\Big{)},$ (9)
where $d$ represents the distance between the transmitter and receiver, $\eta$
is the pathloss exponent, and $h$ denotes the Rayleigh channel coefficient.
With a given scaling factor $\pi$, the required time $T_{\text{com}}$ and
energy consumption $E_{\text{com}}$ for the transmission of semantic data can
be respectively calculated by
$T_{\text{com}}=\frac{K\pi S}{r},~{}\text{and
}E_{\text{com}}=PT_{\text{com}}.$ (10)
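For reference, Eqns. (7)-(10) can be collected into a small cost helper; the sketch below is ours, with default constants that mirror the settings listed in Section IV but are purely illustrative here.

```python
import math

def system_cost(pi, f_e, f_d, P, *, K=512, W_e=0.65e6, W_d=3.25e6, S=4096,
                B=1e6, h2=1e-3, d=200.0, eta=3.76,
                N0=10 ** (-95 / 10) * 1e-3 / 1e6,  # -95 dBm/MHz expressed in W/Hz
                eps_e=1e-26, eps_d=1e-26):
    """Total latency T_tot and energy E_tot for a given transmission strategy."""
    T_cmp = K * pi ** 2 * (W_e / f_e + W_d / f_d)                            # Eqn. (7)
    E_cmp = K * pi ** 2 * (eps_e * f_e ** 2 * W_e + eps_d * f_d ** 2 * W_d)  # Eqn. (8)
    rate = B * math.log2(1 + h2 * d ** (-eta) * P / (N0 * B))                # Eqn. (9)
    T_com = K * pi * S / rate                                                # Eqn. (10)
    E_com = P * T_com
    return T_cmp + T_com, E_cmp + E_com
```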
#### III-A3 Problem formulation
Given a pair of source-destination devices with different local resources, our
goal is to optimize the transmission strategy for these two devices to
minimize the total energy cost with latency and fidelity constraints. To this
end, we formulate the following optimization problem.
$\displaystyle({\text{P1}})$ $\displaystyle\min$
$\displaystyle\;E_{\text{tot}}$ (11) subject to: $\displaystyle
T_{\text{tot}}\leq$ $\displaystyle\;{T^{\max}},$ (11a)
$\displaystyle\phi_{\pi}\geq$ $\displaystyle\;\phi^{{\min}},$ (11b)
$\displaystyle{\pi^{\min}}\leq$ $\displaystyle\;\pi\leq 1,$ (11c)
$\displaystyle 0\leq f_{e}\leq f_{e}^{\max}$ $\displaystyle,\;0\leq f_{d}\leq
f_{d}^{\max},$ (11d) $\displaystyle 0\leq P$ $\displaystyle{}\leq P^{\max},$
(11e) variables: $\displaystyle\pi,\;f_{e}$ $\displaystyle,\;f_{d},\;P,$
where $E_{\text{tot}}=E_{\text{cmp}}+E_{\text{com}}$ and
$T_{\text{tot}}=T_{\text{cmp}}+T_{\text{com}}$ are the total energy cost and
the total system latency, respectively.
### III-B Problem Simplification
In this subsection, we transform Problem (P1) into a tractable yet equivalent
form via constraint simplification and variable substitution. We first derive
the following lemma.
###### Lemma 1.
The equality always holds for Constraints (11a) and (11b) under the optimal
solution $\\{\pi^{\ast},f_{e}^{\ast},f_{d}^{\ast},P^{\ast}\\}$, namely, we
always have $T_{\text{tot}}^{\ast}={T^{\max}}$ and
$\phi_{\pi}^{\ast}=\phi^{{\min}}$.
* Proof.
We prove the lemma by showing contradictions. Suppose that there exists an
optimal solution such that $T_{\text{tot}}^{\ast}<T^{\max}$. Then, we
construct a new solution by replacing $f_{e}^{\ast}$ with $f_{e}^{\prime}$ in
the optimal solution such that $f_{e}^{\prime}<f_{e}^{\ast}$ and
$T_{\text{tot}}^{\prime}=T^{\max}$. Let $E_{\text{tot}}^{\prime}$ denote the
corresponding system energy cost of the new solution. Since the system energy
cost decreases with the decrease of $f_{e}$, then we obtain
$E_{\text{tot}}^{\prime}<E_{\text{tot}}^{\ast}$. Similarly, the contradiction
also applies for $\phi_{\pi}>\phi^{\min}$, and thus we complete the proof. ∎
Based on Lemma 1 and Eqn. (6), we can derive the optimal width scaling factor
$\pi^{\ast}$ as
$\pi^{\ast}=\frac{\kappa_{2}}{\exp{\big{(}\frac{\phi^{\min}-\kappa_{4}}{\kappa_{1}}\big{)}}-\kappa_{3}}.$
(12)
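In code, Eqn. (12) amounts to the following small helper; the clipping to $[\pi^{\min},1]$ enforces Constraint (11c), and the $\kappa$ values come from the earlier fit of Eqn. (6).

```python
import math

def optimal_scaling_factor(phi_min, k1, k2, k3, k4, pi_min=0.25):
    """Optimal width scaling factor pi* of Eqn. (12), clipped to [pi_min, 1]
    so that Constraint (11c) is respected (illustrative helper)."""
    pi_star = k2 / (math.exp((phi_min - k4) / k1) - k3)
    return min(max(pi_star, pi_min), 1.0)
```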
Moreover, we introduce three intermediate variables $\alpha>0,\beta>0$, and
$\gamma>0$ such that $\alpha+\beta+\gamma=1$, and they denote the time
splitting factors of the latency for encoding, semantic transmission, and
decoding, respectively. Thus, we obtain the following equations.
$\displaystyle\small\begin{split}\alpha
T^{\max}=K(\pi^{\ast})^{2}\frac{{W_{e}}}{f_{e}}&,\quad\beta
T^{\max}=\frac{K\pi^{\ast}S}{r},\\\ \gamma
T^{\max}=&\,K(\pi^{\ast})^{2}\frac{{W_{d}}}{f_{d}}.\end{split}$ (13)
By combining Eqns. (7)-(10) and (13), the total system energy cost
$E_{\text{tot}}$ can be re-expressed with respect to
$\\{\alpha,\beta,\gamma\\}$ as
$E_{\text{tot}}=\frac{\tau_{1}}{\alpha^{2}}+\tau_{2}\beta\big{(}2^{\frac{\tau_{3}}{\beta}}-1\big{)}+\frac{\tau_{4}}{\gamma^{2}},$
(14)
where the constants $\\{\tau_{1},\tau_{2},\tau_{3},\tau_{4}\\}$ can be
calculated by
$\displaystyle\small\begin{split}\tau_{1}={\epsilon_{e}}\frac{{K^{3}W_{e}^{3}{(\pi^{\ast})^{6}}}}{{{{\left({{T^{\max}}}\right)}^{2}}}}>0,\;\;\tau_{2}=\frac{{B{N_{0}}{T^{\max}}}}{|h|^{2}d^{-\eta}}>0,\\\
\tau_{3}=\frac{{K\pi^{\ast}S}}{{B{T^{\max}}}}>0,\;\;\tau_{4}={\epsilon_{d}}\frac{{K^{3}W_{d}^{3}{(\pi^{\ast})^{6}}}}{{{{\left({{T^{\max}}}\right)}^{2}}}}>0.\end{split}$
(15)
Therefore, given the optimal scaling factor $\pi^{\ast}$, Problem (P1) can be
transformed into the following problem.
$\displaystyle({\text{P2}})$
$\displaystyle\min\;\frac{\tau_{1}}{\alpha^{2}}+\tau_{2}\beta$
$\displaystyle\big{(}2^{\frac{\tau_{3}}{\beta}}-1\big{)}+\frac{\tau_{4}}{\gamma^{2}}$
(16) subject to: $\displaystyle\alpha+\beta\,+$ $\displaystyle\,\gamma=1,$
(16a) $\displaystyle\alpha^{\min}\leq\alpha,\;\beta^{\min}$
$\displaystyle\leq\beta,\;\gamma^{\min}\leq\gamma,$ (16b) variables:
$\displaystyle\alpha,\beta,$ $\displaystyle\;\gamma,$
where the lower limits of $\\{\alpha,\beta,\gamma\\}$ can be acquired by
$\displaystyle\small\begin{split}\alpha^{\min}=\frac{K(\pi^{\ast})^{2}{W_{e}}}{f_{e}^{\max}T^{\max}},\;\gamma^{\min}\,=\frac{K(\pi^{\ast})^{2}{W_{d}}}{f_{d}^{\max}T^{\max}},\\\
\beta^{\min}=\frac{K\pi^{\ast}S}{T^{\max}{B}{\log_{2}}\Big{(}1+\frac{{|h|^{2}d^{-\eta}P^{\max}}}{{{N_{0}}{B}}}\Big{)}}.\end{split}$
(17)
Notably, the optimal solution of Problem (P1) can be obtained directly with
the help of $\\{\alpha^{\ast},\beta^{\ast},\gamma^{\ast}\\}$ according to Eqn.
(13). It can be verified that Problem (P2) is a convex optimization problem,
and we discuss the solution in next subsection.
### III-C Hierarchical Bisection Search
To solve Problem (P2), we first apply Karush–Kuhn–Tucker (KKT) conditions to
derive necessary equations for achieving the optimality. By utilizing
$\lambda$ as the Lagrange multiplier for the equality Constraint (16a), and
$\\{\mu_{\alpha},\mu_{\beta},\mu_{\gamma}\\}$ as the multipliers for the
inequality Constraint (16b), we obtain
$\displaystyle{\mu_{\alpha}}={\frac{{-2{\tau_{1}}}}{{{\alpha^{3}}}}+\lambda},\,{\mu_{\gamma}}={\frac{{-2{\tau_{4}}}}{{{\gamma^{3}}}}+\lambda},$
(18a)
$\displaystyle{\mu_{\beta}}={\big{(}{{\tau_{2}}-\frac{{{\tau_{2}}{\tau_{3}}\ln
2}}{\beta}}\big{)}{2^{\frac{{{\tau_{3}}}}{\beta}}}-{\tau_{2}}+\lambda},$ (18b)
$\displaystyle{\mu_{\alpha}}(\alpha-{{\alpha^{\min}}})={\mu_{\beta}}(\beta-{{\beta^{\min}}})={\mu_{\gamma}}(\gamma-{{\gamma^{\min}}})=0,$
(18c) $\displaystyle
0\leq{\mu_{\alpha}},0\leq{\mu_{\beta}},0\leq{\mu_{\gamma}},$ (18d)
$\displaystyle\text{Constraints~{}(\ref{eqn:p2-ctr-1}) and
(\ref{eqn:p2-ctr-2})}.$
By substituting $\\{\mu_{\alpha},\mu_{\beta},\mu_{\gamma}\\}$ from Eqns (18a)
and (18b) into Eqn. (18c), we have
$\displaystyle\small\begin{split}&\Big{(}{\frac{{-2{\tau_{1}}}}{{{\alpha^{3}}}}+\lambda}\Big{)}(\alpha-\alpha^{\min})=\Big{(}{\frac{{-2{\tau_{4}}}}{{{\gamma^{3}}}}+\lambda}\Big{)}(\gamma-\gamma^{\min})=0,\end{split}$
(19)
$\displaystyle\begin{split}\Big{(}\underbrace{{\big{(}{{\tau_{2}}-\frac{{{\tau_{2}}{\tau_{3}}\ln
2}}{\beta}}\big{)}{2^{\frac{{{\tau_{3}}}}{\beta}}}-{\tau_{2}}+\lambda}}_{g_{\lambda}(\beta)}\Big{)}(\beta-\beta^{\min})=0.\end{split}$
(20)
According to Constraint (16b), the discussion on the value of $\alpha^{\ast}$
can be divided into two cases, i.e., $\alpha^{\ast}>\alpha^{\min}$ and
$\alpha^{\ast}=\alpha^{\min}$. Based on Eqn. (19), we have
$\alpha^{\ast}=\sqrt[3]{{\frac{{2{\tau_{1}}}}{\lambda}}}$ if
$\alpha^{\ast}>\alpha^{\min}$. Similarly, the optimal values of
$\\{\beta^{\ast},\gamma^{\ast}\\}$ can be analyzed on the same basis.
Therefore, we have
$\displaystyle\small\begin{split}\alpha^{\ast}_{\lambda}=\max\big{\\{}\sqrt[3]{{\frac{{2{\tau_{1}}}}{\lambda}}},\alpha^{\min}\big{\\}}&{},\;\beta^{\ast}_{\lambda}=\max\\{\beta_{\lambda},\beta^{\min}\\},\\\
\gamma^{\ast}_{\lambda}=\max\big{\\{}&{}\sqrt[3]{{\frac{{2{\tau_{4}}}}{\lambda}}},\gamma^{\min}\big{\\}},\end{split}$
(21)
where $\beta_{\lambda}$ is the zero of function $g_{\lambda}(\beta)$ in Eqn.
(20) such that $g_{\lambda}(\beta_{\lambda})=0$. Given a specific $\lambda$,
we define that
$z_{\lambda}=\alpha^{\ast}_{\lambda}+\beta^{\ast}_{\lambda}+\gamma^{\ast}_{\lambda}$.
According to Constraint (16a), the solution of Problem (P2) can be acquired by
searching an optimal Lagrange multiplier $\lambda^{\ast}$ such that
$z_{\lambda^{\ast}}=1$.
It can be verified that $\alpha^{\ast}_{\lambda}$, $\beta^{\ast}_{\lambda}$
and $\gamma^{\ast}_{\lambda}$ are monotonically non-increasing with respect to
$\lambda$. Hence, $\lambda^{\ast}$ can be efficiently obtained by the
bisection search as shown in Algorithm 2. Specifically, given a $\lambda$,
$\alpha^{\ast}_{\lambda}$ and $\gamma^{\ast}_{\lambda}$ can be directly calculated
while $\beta^{\ast}_{\lambda}$ involves another bisection search (i.e., Steps
8-12). Given the tolerance value $\varepsilon$ and searching range
$\\{\lambda^{\min},\lambda^{\max},\beta^{\min},\beta^{\max}\\}$ and
$J=\max\\{\lambda^{\max}-\lambda^{\min},\beta^{\max}-\beta^{\min}\\}$, the
computational complexity of Algorithm 2 is estimated by ${\cal O}(\log_{2}^{2}J)$.
Input: $\lambda^{\min},\lambda^{\max},\beta^{\min},\beta^{\max}$, and
$\varepsilon$.
Output: The optimal Lagrange multiplier $\lambda^{\ast}$.
1 repeat
2 $\lambda=(\lambda^{\max}+\lambda^{\min})/2$;
3 Compute $\alpha^{\ast}_{\lambda}$ and $\gamma^{\ast}_{\lambda}$ based on
Eqn. (21);
4 Search for $\beta^{\ast}_{\lambda}$, and compute
$z_{\lambda}=\alpha^{\ast}_{\lambda}+\beta^{\ast}_{\lambda}+\gamma^{\ast}_{\lambda}$;
5 if $z_{\lambda}<1$ then $\lambda^{\max}=\lambda$ else
$\lambda^{\min}=\lambda$;
6
7until _$|\lambda^{\max}-\lambda^{\min}|\leq\varepsilon$_ ;
8return $\lambda^{\ast}$
/* Function for searching $\beta^{\ast}_{\lambda}$. */
9 repeat
10 $\beta=(\beta^{\max}+\beta^{\min})/2$;
11 Compute $g_{\lambda}(\beta)$ based on Eqn. (20);
12 if $g_{\lambda}(\beta)>0$ then $\beta^{\max}=\beta$ else
$\beta^{\min}=\beta$;
13
14until _$|\beta^{\max}-\beta^{\min}|\leq\varepsilon$_ ;
15return $\beta^{\ast}_{\lambda}$
Algorithm 2 Hierarchical bisection search
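A hedged Python transcription of Algorithm 2 is sketched below; the search ranges, overflow guard, and tolerance are arbitrary illustrative defaults, with $\tau_{1},\dots,\tau_{4}$ taken from Eqn. (15) and the lower limits from Eqn. (17).

```python
import math

def hierarchical_bisection(tau1, tau2, tau3, tau4, a_min, b_min, c_min,
                           lam_lo=1e-9, lam_hi=1e9, eps=1e-9):
    """Find lambda* such that alpha*(lambda) + beta*(lambda) + gamma*(lambda) = 1,
    then return the optimal time-splitting factors (sketch of Algorithm 2)."""
    def g(lam, beta):                       # g_lambda(beta) from Eqn. (20)
        e = tau3 / beta
        if e > 700:                         # guard: g -> -inf as beta -> 0+
            return -math.inf
        return (tau2 - tau2 * tau3 * math.log(2) / beta) * 2 ** e - tau2 + lam

    def beta_star(lam, b_lo=1e-9, b_hi=1e3):
        while b_hi - b_lo > eps:            # inner bisection: zero of g_lambda
            b = 0.5 * (b_lo + b_hi)
            if g(lam, b) > 0:
                b_hi = b
            else:
                b_lo = b
        return max(0.5 * (b_lo + b_hi), b_min)

    while lam_hi - lam_lo > eps:            # outer bisection on lambda
        lam = 0.5 * (lam_lo + lam_hi)
        a = max((2 * tau1 / lam) ** (1 / 3), a_min)     # Eqn. (21)
        c = max((2 * tau4 / lam) ** (1 / 3), c_min)
        if a + beta_star(lam) + c < 1:      # Step 5 of Algorithm 2
            lam_hi = lam
        else:
            lam_lo = lam
    lam = 0.5 * (lam_lo + lam_hi)
    return (max((2 * tau1 / lam) ** (1 / 3), a_min), beta_star(lam),
            max((2 * tau4 / lam) ** (1 / 3), c_min))
```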
## IV Simulation Results
### IV-A Experiment Settings
We consider semantic communication for image transmission with the CIFAR-10
dataset. For the semantic model, we use two three-layer convolutional neural
networks, with kernel size 4, stride 2, and padding 1, as the encoder and
decoder. Specifically, the widths of the encoder and decoder are {12, 24, 32}
and {24, 12, 3}, respectively. The semantic data is represented by
$32\times 4\times 4=512$ numbers in 8-bit unsigned integer format, i.e.,
$S=4096$ bits. For the training hyper-parameters, the batch size, total epochs
and $\pi_{\min}$ are set to 16, 30 and 0.25, respectively. The
hyper-parameters for communication $\\{B,|h|^{2},d,\eta,N_{0}\\}$ are set to
{1 MHz, $10^{-3}$ W, 200 m, 3.76, $-95$ dBm/MHz} by default. The
hyper-parameters for computation
$\\{W_{e},W_{d},\epsilon_{e},\epsilon_{d},K\\}$ are empirically set to {0.65
MCycles, 3.25 MCycles, $10^{-26}$, $10^{-26}$, 512} by default.
TABLE I: Performance comparison between FAST and baseline methods on image transmission with the CIFAR-10 dataset.
Method | Data size (Mbit) | Comp. Cost (GFLOPs) | $E_{\text{cmp}}$ (J) | $E_{\text{com}}$ (J) | $E_{\text{tot}}$ (J) | Fidelity
---|---|---|---|---|---|---
Raw | 12.58 (1$\times$) | 0 | 0 | 2.24 | 2.24 | 1
JPEG | 2.76 (4.56$\times$) | — | — | — | — | 0.73
Prune ($\rho$=0.3) | 2.31(5.5$\times$) | 3.31 | 1.65 | 0.53 | 2.18 | 0.80
Quant (3 bits) | 1.18 (10.7$\times$) | 3.31 | 1.44 | 0.26 | 1.70 | 0.80
FAST ($\pi$=0.3) | 0.92 (13.7$\times$) | 0.97 | 0.01 | 0.10 | 0.11 | 0.80
Prune ($\rho$=0.1) | 2.71(4.6$\times$) | 3.31 | 1.73 | 0.64 | 2.37 | 0.85
Quant (4 bits) | 1.57 (8.0$\times$) | 3.31 | 1.51 | 0.35 | 1.86 | 0.85
FAST ($\pi$=0.5) | 1.67 (7.5$\times$) | 1.76 | 0.06 | 0.21 | 0.27 | 0.85
### IV-B Performance Evaluations
We first show that the hierarchical bisection search algorithm can converge to
the optimal solution of Problem (P2). Figure 4(a) presents the evolution of
the total energy cost with respect to the number of iterations. We observe
that the algorithm can achieve the optimum after about 30 iterations.
We next compare the proposed FAST with the following three baseline methods
under $T^{\max}=8$ seconds. (1) JPEG: we reduce the size of the raw data with
a radical compression ratio of about 4.5. (2) Prune: we employ filter-wise
feature pruning for the semantic data, and $\rho\in[0,1]$ is the pruning rate.
(3) Quant: we quantize the semantic data with fewer bits (from 1 to 8 bits)
before the transmission.
Table I provides the comparison results of FAST against the baseline methods
in terms of the size of semantic data, computation cost, and energy
consumption under two types of fidelity constraints. Compared with Prune and
Quant, FAST reduces the total energy consumption by up to 15 times and 6.9
times for realizing semantic communication with fidelities of 0.80 and 0.85,
respectively. In particular, the proposed FAST reduces the data size by 13.7
times in the low-fidelity scenario. Meanwhile, one important advantage of FAST
is that the proposed flexible semantic model enables on-demand computation to
mitigate the computation cost.
Figure 4(b) shows the total energy consumption of different methods with
different fidelity constraints. With a given fidelity constraint, the proposed
FAST consistently outperforms the baseline methods to mitigate the total
energy consumption. Specifically, FAST can switch to the small sub-model to
significantly reduce the computation cost in the low-fidelity scenario. Figure
4(c) provides the image plots of reconstructed samples with different fidelity
constraints. Specifically, the proposed FAST can achieve the best semantic
fidelity with the least total energy cost, which strikes the balance between
transmission quality and resource utilization.
Figure 4: The main advantages of FAST. ((a): convergence of the searching
algorithm; (b-c): performance of different methods; (d-f): impact of key
settings.)
Figures 4(d)-(f) show how the energy efficiency is affected by the key system
configurations, including the transmission distance (i.e., channel state),
computational energy coefficient, and latency constraint. It can be observed
that the proposed FAST consistently outperforms the baseline methods under a
variety of system settings. The experiments show that FAST is more resilient
than the other baselines in achieving green transmission under heterogeneous
scenarios.
## V Conclusion
In this paper, we proposed FAST, a fidelity-adjustable semantic transmission
framework for green communication. We presented the dynamic model training
scheme to enable wireless devices to adopt different sub-models on demand. To
improve the energy efficiency of FAST, we focused on minimizing the total
energy consumption under personalized latency and fidelity constraints. By
leveraging the theoretical analysis, we transformed the optimization problem
into a tractable form and designed an algorithm to efficiently search for the
optimal transmission strategy. Experimental results demonstrate the advantage
of FAST in improving energy efficiency and semantic quality against the
baseline methods.
## Acknowledgment
Rong Yu and Yuan Wu are the corresponding authors. This work was supported in
part by National Natural Science Foundation of China under Grants 61971148,
U22A2054 and 62102099, in part by National Key R&D Program of China under
Grants 2020YFB1807802 and 2020YFB1807800, in part by Science and Technology
Development Fund of Macau SAR under Grant 0162/2019/A3, in part by FDCT-MOST
Joint Project under Grant 0066/2019/AMJ, and in part by Research Grant of
University of Macau under Grant MYRG2020-00107-IOTSC. This work was also
supported in part by the National Research Foundation (NRF), Singapore and
Infocomm Media Development Authority under the Future Communications Research
Development Programme (FCP), and DSO National Laboratories under the AI
Singapore Programme (AISG Award No: AISG2-RP-2020-019).
## References
* [1] M. Z. Chowdhury, M. Shahjalal, S. Ahmed, and Y. M. Jang, “6g wireless communication systems: Applications, requirements, technologies, challenges, and research directions,” _IEEE Open J. Commun. Soc._ , vol. 1, pp. 957–975, 2020.
* [2] J. Bao, P. Basu, M. Dean, C. Partridge, A. Swami, W. Leland, and J. A. Hendler, “Towards a theory of semantic communication,” in _IEEE Netw. Sci. Workshop_. IEEE, 2011, pp. 110–117.
* [3] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” _IEEE Trans. Signal Process._ , vol. 69, pp. 2663–2675, 2021.
* [4] Y. Wang _et al._ , “Performance optimization for semantic communications: An attention-based reinforcement learning approach,” _IEEE J. Sel. Areas Commun._ , vol. 40, no. 9, pp. 2598–2613, 2022.
* [5] G. Shi, Y. Xiao, Y. Li, and X. Xie, “From semantic communication to semantic-aware networking: Model, architecture, and open problems,” _IEEE Commun. Mag._ , vol. 59, no. 8, pp. 44–50, 2021.
* [6] W. Yang _et al._ , “Semantic communication meets edge intelligence,” _arXiv preprint arXiv:2202.06471_ , 2022.
* [7] Z. Weng and Z. Qin, “Semantic communication systems for speech transmission,” _IEEE J. Sel. Areas Commun._ , vol. 39, no. 8, pp. 2434–2444, 2021.
* [8] X. Luo, B. Yin, Z. Chen, B. Xia, and J. Wang, “Autoencoder-based semantic communication systems with relay channels,” in _Proc. of IEEE ICC’2022 Workshop_.
* [9] D. Huang, X. Tao, F. Gao, and J. Lu, “Deep learning-based image semantic coding for semantic communications,” in _Proc. of IEEE GLOBECOM’2021_.
* [10] X. Chen _et al._ , “Age of semantics in cooperative communications: To expedite simulation towards real via offline reinforcement learning,” _arXiv preprint arXiv:2209.08947_ , 2022.
* [11] L. Yan, Z. Qin, R. Zhang, Y. Li, and G. Y. Li, “Resource allocation for text semantic communications,” _IEEE Wireless Commun. Lett._ , vol. 11, no. 7, pp. 1394–1398, 2022.
* [12] C. Liu, C. Guo, Y. Yang, and N. Jiang, “Adaptable semantic compression and resource allocation for task-oriented communications,” _arXiv preprint arXiv:2204.08910_ , 2022.
* [13] C. Szegedy, A. Toshev, and D. Erhan, “Deep neural networks for object detection,” in _Proc. of NeurIPS’2013_.
* [14] L. Song, Y. Li, Z. Jiang, Z. Li, H. Sun, J. Sun, and N. Zheng, “Fine-grained dynamic head for object detection,” in _Proc. of NeurIPS’2020_.
* [15] R. Yu and P. Li, “Toward resource-efficient federated learning in mobile edge computing,” _IEEE Netw._ , vol. 35, no. 1, pp. 148–155, 2021.
* [16] Y. Wu, Y. Song, T. Wang, L. Qian, and T. Q. Quek, “Non-orthogonal multiple access assisted federated learning via wireless power transfer: A cost-efficient approach,” _IEEE Trans. Commun._ , vol. 70, no. 4, pp. 2853–2869, 2022.
* [17] H. Tong _et al._ , “Federated learning based audio semantic communication over wireless networks,” in _Proc. of IEEE GLOBECOM’2021_.
* [18] P. Li _et al._ , “Anycostfl: Efficient on-demand federated learning over heterogeneous edge devices,” in _Proc. of IEEE INFOCOM’2023_.
* [19] J. Lin, R. Zhang, F. Ganz, S. Han, and J.-Y. Zhu, “Anycost gans for interactive image synthesis and editing,” in _Proc. of CVPR’2021_.
* [20] P. Li, X. Huang, M. Pan, and R. Yu, “Fedgreen: Federated learning with fine-grained gradient compression for green mobile edge computing,” in _Proc. of IEEE GLOBECOM’2021_.
|
# On the positivity of the hypergeometric Veneziano amplitude
###### Abstract
Recently, an infinite one-parameter generalisation of the Veneziano amplitude
was bootstrapped using as input assumptions an integer mass spectrum, crossing
symmetry, high-energy boundedness, and exchange of finite spins. This new
result was dubbed the hypergeometric Veneziano amplitude, with the deformation
parameter $r$ being a real number. Using the partial-wave decomposition and
the positivity of said decomposition’s coefficients we are able to bound the
deformation parameter to $r\geq 0$ and, also, to obtain an upper bound on the
number of spacetime dimensions $D\leq 26$, which is the critical dimension of
bosonic string theory.
###### Contents
1. 1 Prologue
2. 2 Generalities and setup
3. 3 The partial wave coefficients in $D=4$ dimensions
1. 3.1 The leading Regge trajectory
2. 3.2 More on the Regge trajectories
3. 3.3 The general coefficients
4. 3.4 A simpler expression for the general coefficients
4. 4 The partial wave coefficients in $D$ dimensions
1. 4.1 The leading Regge trajectory
2. 4.2 More on the Regge trajectories
3. 4.3 The general coefficients
5. 5 Comments on unitarity
6. 6 Epilogue
7. A Partial-wave coefficients that are equal to zero
1. A.1 The effect of a non-vanishing value for the r-parameter
2. A.2 Vanishing coefficients in $D=4$
3. A.3 Vanishing coefficients in any $D$
8. B The polynomials for the Regge trajectories in $D=4$ dimensions
## 1 Prologue
Bootstrapping scattering amplitudes of massless and massive particles, see [1]
for a recent summary of advances and developments in the S-matrix bootstrap,
is an old theme in theoretical high-energy physics. The
idea behind it is to provide an alternative approach to understanding and
examining physics theories. Concretely, we can formulate specific mathematical
questions about the S-matrix and attempt to answer these kinds of questions,
rather than resorting to Lagrangian descriptions and sophisticated geometrical
approaches. This, in turn, implies that we can understand the theories of
interest as being fixed by constraints and conditions that are imposed on
scattering amplitudes.
In order to employ any bootstrap algorithm, we have to choose a set of
assumptions and conditions and then impose them on a landscape of objects. For
the purposes of bootstrapping scattering amplitudes such a set can consist of
crossing symmetry, polynomial residues, and high-energy boundedness.
An explicit four-point amplitude that satisfies the above and is a meromorphic
function, with only simple poles, was constructed by Veneziano [2] and is
given by:
$\mathcal{M}(s,t)=\frac{\Gamma(-(1+\alpha^{\prime}s))\Gamma(-(1+\alpha^{\prime}t))}{\Gamma(-(1+\alpha^{\prime}s)-(1+\alpha^{\prime}t))}\,.$
(1.1)
Today we know, of course, that it describes the $2\rightarrow 2$ scattering of
open-string tachyons of mass $\alpha^{\prime}m^{2}=-1$ (we work with
conventions in which $\alpha^{\prime}=1$ for the open string).
It is worthwhile stressing that a scattering amplitude violating the
requirement of tame ultraviolet behaviour is an indication of the breakdown of
unitarity and causality of the theory [3, 4]. Very robust expressions have
been derived describing bounds for gapped theories: the Regge and the
Froissart bounds.
The Veneziano amplitude has been studied quite extensively, with the recent
works of [5, 6] focusing on the coefficients of the partial-wave decomposition
of the amplitude and discussing its unitarity from tree-level considerations,
as well as the critical dimension of string theory. Questions regarding its
uniqueness have been posed since the early days of its discovery.
Along this particular line of investigation, an answer was given in [7, 8, 9]
that is nowadays known as the Coon amplitude. This is another amplitude that
satisfies the criteria of polynomial boundedness, finite-spin exchange, and
meromorphicity. It comes with logarithmic Regge trajectories, and for many
years it was disregarded by virtually everyone. Recently, however, it has
received revived attention [10, 11, 12, 13, 14, 15]. The Coon amplitude is a
one-parameter deformation of the Veneziano amplitude and it exhibits a mass
spectrum with discrete levels converging to an infinite density at an
accumulation point that is followed by a branch cut.
While the work of [16] raises concerns regarding the status of unitarity of
the Coon amplitude, it was realised in [17] that string theory admits
amplitudes behaving in this way, since they arise from the scattering of open
strings having their endpoints on a D-brane in AdS.
With the Coon amplitude being able to provide us with an explicit and
consistent generalisation of the Veneziano amplitude, people have been
revisiting the question of constructing more general four-point amplitudes
that are consistent with the principles of the S-matrix bootstrap [18, 19, 20,
21]. This can, also, be phrased as a question about the uniqueness of string
theory. Phrased in simple terms, since string theory amplitudes satisfy
particular constraints, are they the only objects doing so?
In this work we focus our attention on an infinite generalisation of the
Veneziano amplitude, recently derived in [21]. Similarly to the Coon
amplitude, it is, also, written as a one-parameter deformation and has been
dubbed the hypergeometric Veneziano amplitude; this name will become perfectly
clear in the next section. Using the partial-wave decomposition of the
amplitude, we wish to impose the positivity of the coefficients in order to
derive bounds on the allowed values of the parameter $r$ and the spacetime
dimension $D$. (The authors of [21] have discussed constraints and bounds
resulting from unitarity using a numerical evaluation of the partial-wave
coefficients. More specifically, starting from the integral representation of
the coefficients, they re-expressed them as a double sum and, upon an explicit
numerical evaluation, constrained the allowed values of the deformation
parameter, $r$, and the number of allowed spacetime dimensions, $D$. Our
analysis is a more systematic examination of the partial-wave coefficients.)
The approach we take here, in order to derive bounds on the allowed values of
the deformation parameter $r$ and the spacetime dimension $D$, is to examine
the positivity of the coefficients of the partial-wave decomposition of the
amplitude. To do so, we compute the residues of the amplitude at the location
of its poles. Then, we proceed to decompose the residues in a basis spanned by
Gegenbauer polynomials, which is valid for any number of spacetime dimensions,
$D\geq 3$ (in the special cases of $D=4$ and $D=5$ the Gegenbauer polynomials
reduce to the Legendre polynomials and the Chebyshev polynomials of the second
kind, respectively). As we have already mentioned, the task at hand is to find the numbers
that multiply these polynomials, since their non-negativity is tied to the
unitarity of the underlying theory. We proceed by utilizing the orthogonality
relations that these polynomials obey, and we obtain a relation for the
partial-wave coefficients as an integral of those special polynomials and some
non-trivial function. Using the generating functions for the special
polynomials we can define a “pseudo-generating function”. After some algebraic
manipulations, which consist of writing our expressions as power series
expansions, we effectively obtain two polynomial expansions for the original
equation of the partial-wave coefficients. From that we can read off the terms
of appropriate scaling in order to derive the coefficients.
In addition to the above, we also resort to some experimental guess-work, in
order to derive additional analytic expressions for the partial-wave
coefficients of sub-leading Regge trajectories. This means that, starting from
the original expression for the partial-wave coefficients given as an integral
of the special polynomials times some non-trivial function, we compute the
integral for some values, we make a guess for the general form of the answer
and we proceed to verify our claim by checking explicitly some non-trivial
values (in this context, by non-trivial we mean values that were not used in
order to conjecture the general form of the partial-wave coefficients).
The structure of this work is the following: in section 2 we set up our
notation and conventions. We move on to section 3, where we specialise the
discussion to the $D=4$ case and derive the partial-wave coefficients for
the leading Regge trajectory, $a_{n,n+1}$. We, also, provide expressions for
the partial-wave coefficients of the sub-leading Regge trajectories,
$a_{n,n-\gamma}$, with $\gamma=\\{0,1,\dots,10\\}$. Finally, we re-write the
original integral representation of the partial-wave coefficients as multiple
sums. In section 4 we analyse the partial-wave coefficients for general
dimensions. We proceed to analyse the positivity constraints of those
coefficients in section 5 and derive bounds on the parameter $r$ and the
spacetime dimensions $D$. We conclude in section 6.
## 2 Generalities and setup
The new infinite family of hypergeometric amplitudes is given by [21]
$\mathcal{A}(s,t)=\frac{\Gamma(-s-1)\Gamma(-t-1)}{\Gamma(-s-t-2)}{}_{3}F_{2}\left(-s-1,-t-1,r;-s-t-2,1+r;1\right)\,,$
(2.1)
where in the above ${}_{3}F_{2}(a;b;z)$ is the generalised hypergeometric
function (note that relative to [21] we have a shift
$\\{s,t\\}\rightarrow\\{s+1,t+1\\}$).
It is obvious that the amplitude, $\mathcal{A}(s,t)$, has poles in $s$ at
$s=n$ for $n=-1,0,1,\ldots$, and of course the same is true for $t$.
We start by calculating the residues at the $s$ poles. We have
$\mathop{\mathrm{Res}}_{s=n}\mathcal{A}(s,t)=-\frac{r!}{(n+r+1)!}(t+2+r)_{n+1}\,.$
(2.2)
From the Gegenbauer expansion for arbitrary dimensions $D$ we know
$\mathop{\mathrm{Res}}_{s=n}\mathcal{A}(s,t)=-\sum_{l=0}^{n+1}a_{n,l}C_{l}^{(\alpha)}\left(1+\frac{2t}{n+4}\right)\,,$
(2.3)
where in the above the parameter $\alpha$ is related to the spacetime
dimensions $D$ via $\alpha=\tfrac{D-3}{2}$. Now we can just equate equations
2.2 and 2.3 to get
$\frac{r!}{(n+r+1)!}(t+2+r)_{n+1}=\sum_{l=0}^{n+1}a_{n,l}C_{l}^{(\alpha)}\left(1+\frac{2t}{n+4}\right)\,.$
(2.4)
We can use the fact that the Gegenbauer polynomials satisfy the following
orthogonality condition
$\int_{-1}^{+1}dxC_{\ell}^{(\alpha)}(x)C_{\ell^{\prime}}^{(\alpha)}(x)(1-x^{2})^{\alpha-\frac{1}{2}}=2\mathcal{K}(\ell,\alpha)\delta_{\ell\ell^{\prime}}\,,$
(2.5)
where in the above we have defined
$\mathcal{K}(\ell,\alpha)=\frac{\pi\Gamma(\ell+2\alpha)}{2^{2\alpha}\ell!(\ell+\alpha)\Gamma^{2}(\alpha)}\,,$
(2.6)
in order to derive the following expression for the partial-wave coefficients
of equation 2.4
$\displaystyle a_{n,\ell}=$
$\displaystyle\frac{r!}{(n+1+r)!}\frac{1}{\mathcal{K}(\ell,\alpha)}\frac{1}{n+4}\left[\frac{4}{(n+4)^{2}}\right]^{\alpha-\tfrac{1}{2}}$
(2.7)
$\displaystyle\int^{n+4}_{0}dtC_{\ell}^{(\alpha)}\left(1-\frac{2t}{n+4}\right)(t(n+4-t))^{\alpha-\tfrac{1}{2}}(-t+2+r)_{n+1}\,.$
By examining equation 2.7 we make the following observations:
* •
Unlike the case of the Veneziano amplitude where $a_{n,\ell}=0$ for $n+\ell$
equal to an even number, here we do not have that. This is due to the presence
of the deformation parameter $r$. In the special case $r=0$ the coefficients
with $n+\ell$ even are equal to $0$, as they should be.
* •
We have, however, that $a_{n,\ell}=0$ for $\ell\geq n+2$, as is the case for
the Veneziano amplitude as well.
We conclude this section here and discuss more the above two points in
appendix A.
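Before moving on, we remark that equation 2.7 lends itself to a direct numerical evaluation, independently of the manipulations that follow. The sketch below is a minimal illustration (using SciPy; the function names are ours and purely illustrative) that computes $a_{n,\ell}$ by quadrature for any $D>3$:

```python
# A minimal numerical sketch (ours, not from [21]) of Eq. (2.7): the coefficients a_{n,l}
# by direct quadrature, for any D > 3. All function names here are our own.
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer, gamma, poch

def K(l, alpha):
    # normalisation of Eq. (2.6)
    return np.pi * gamma(l + 2*alpha) / (2**(2*alpha) * gamma(l + 1) * (l + alpha) * gamma(alpha)**2)

def a_nl(n, l, r, D=4):
    alpha = (D - 3) / 2
    C = gegenbauer(l, alpha)
    pref = (gamma(r + 1) / gamma(n + r + 2) / K(l, alpha) / (n + 4)
            * (4 / (n + 4)**2)**(alpha - 0.5))
    integrand = lambda t: C(1 - 2*t/(n + 4)) * (t*(n + 4 - t))**(alpha - 0.5) * poch(-t + 2 + r, n + 1)
    val, _ = quad(integrand, 0, n + 4)
    return pref * val

# e.g. in D = 4 this reproduces a_{0,1} = 2/(r+1), cf. Eq. (5.1):
print(a_nl(0, 1, r=0.7), 2 / 1.7)
```

A direct evaluation of this kind provides an independent cross-check of the closed-form expressions derived in the following sections.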
## 3 The partial wave coefficients in $D=4$ dimensions
Here we specialise the discussion to the $D=4$, or equivalently
$\alpha=\tfrac{1}{2}$, case. Equation 2.7 simplifies to the following
expression
$a_{n,\ell}=\frac{r!}{(n+1+r)!}\frac{1+2\ell}{n+4}\int^{n+4}_{0}dtP_{\ell}\left(1-\frac{2t}{n+4}\right)(-t+2+r)_{n+1}\,,$
(3.1)
where in the above $P_{\ell}(x)$ denotes the Legendre polynomials.
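For concreteness, equation 3.1 can be evaluated exactly for a few low-lying levels. The following is a minimal sympy sketch (ours, purely illustrative); its outputs reproduce the closed forms quoted later in equations 5.1 and 5.2:

```python
# A minimal sympy sketch (ours, not from [21]) evaluating Eq. (3.1) exactly for low levels.
import sympy as sp

t, r = sp.symbols('t r', positive=True)

def a_nl_D4(n, l):
    poch = sp.Mul(*[-t + 2 + r + j for j in range(n + 1)])   # (-t+2+r)_{n+1}
    integral = sp.integrate(sp.legendre(l, 1 - 2*t/(n + 4)) * poch, (t, 0, n + 4))
    # r!/(n+r+1)! = 1/(r+1)_{n+1}
    return sp.simplify((1 + 2*l) / ((n + 4) * sp.rf(r + 1, n + 1)) * integral)

print(a_nl_D4(0, 1))   # expected: 2/(r + 1)
print(a_nl_D4(0, 0))   # expected: r/(r + 1)
print(a_nl_D4(1, 2))   # expected: 25/(6*(r + 1)*(r + 2))
```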
We can proceed by utilizing the generating function of the Legendre
polynomials
$\sum^{\infty}_{\ell=0}P_{\ell}(x)h^{-\ell-1}=\frac{1}{\sqrt{1-2xh+h^{2}}}\,,$
(3.2)
and the representation of the Pochhammer symbol in terms of the Stirling
number of the first kind, $s^{(b)}_{a}$ (in some places in the literature it
is written as $s(a,b)$, but we opt for the notation closer to the Mathematica
implementation),
$(x)_{n}=\sum^{n}_{k=0}(-1)^{n-k}s^{(k)}_{n}x^{k}\,,$ (3.3)
in order to obtain
$\sum^{\infty}_{j=0}\frac{1}{2j+1}\frac{a_{n,j}}{h^{j+1}}=\frac{r!}{(n+r+1)!}\frac{1}{n+4}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\underbrace{\int^{n+4}_{0}dt\frac{(-t+2+r)^{k}}{\sqrt{(h-1)^{2}+\tfrac{4ht}{n+4}}}}_{\mathcal{G}^{(r)}_{n,k}(h)}\,.$
(3.4)
In the above, equation 3.4, we have defined a “pseudo generating function”
$\mathcal{G}^{(r)}_{n,k}(h)$ which we can evaluate explicitly and is given by:
$\mathcal{G}^{(r)}_{n,k}(h)=\frac{1}{2^{2k+1}}\frac{(n+4)^{k+1}}{h^{k+1}}\left[(h-1)^{2}+\frac{4h}{n+4}(2+r)\right]^{k}\Big{(}\mathcal{E}^{+}-\mathcal{E}^{-}\Big{)}\,,$
(3.5)
with the shorthands
$\displaystyle\mathcal{E}^{\pm}$ $\displaystyle=(h\pm
1)~{}{}_{2}F_{1}(\tfrac{1}{2},-k,\tfrac{3}{2};f^{(\pm)})\,,$ (3.6)
$\displaystyle f^{(\pm)}$ $\displaystyle=\frac{(h\pm
1)^{2}}{(h-1)^{2}+\tfrac{4h}{n+4}(2+r)}\,.$
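As a spot-check, equations 3.5 and 3.6 can be compared numerically against the defining integral. The sketch below (ours, using mpmath, and taking $h>1$ so that $|h\pm 1|=h\pm 1$) does precisely this for one representative choice of parameters:

```python
# A numerical spot-check (ours) of Eqs. (3.5)-(3.6) against the defining integral.
import mpmath as mp

def G_integral(n, k, r, h):
    f = lambda t: (-t + 2 + r)**k / mp.sqrt((h - 1)**2 + 4*h*t/(n + 4))
    return mp.quad(f, [0, n + 4])

def G_closed(n, k, r, h):
    A2 = (h - 1)**2 + 4*h*(2 + r)/(n + 4)                               # bracket in Eq. (3.5)
    E = lambda s: (h + s) * mp.hyp2f1(0.5, -k, 1.5, (h + s)**2 / A2)     # E^{+-} of Eq. (3.6)
    return (n + 4)**(k + 1) / (2**(2*k + 1) * h**(k + 1)) * A2**k * (E(1) - E(-1))

print(G_integral(2, 3, 0.4, 1.7))
print(G_closed(2, 3, 0.4, 1.7))    # the two numbers should agree
```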
Using the formal power-series definition for the hypergeometric function
${}_{2}F_{1}(a,b,c;z)$
${}_{2}F_{1}(a,b,c;x)=\sum^{\infty}_{y=0}\frac{1}{y!}\frac{(a)_{y}(b)_{y}}{(c)_{y}}x^{y}\,,$
(3.7)
we can re-write equation 3.5 as (note that the $\mathfrak{p}$-sum terminates;
this is natural since $(-a)_{b}=0$ holds $\forall b>a$):
$\displaystyle\mathcal{G}^{(r)}_{n,k}(h)=$
$\displaystyle\frac{1}{2^{2k+1}}\frac{(n+4)^{k+1}}{h^{k+1}}$ (3.8)
$\displaystyle\sum^{k}_{\mathfrak{p}=0}\frac{(-k)_{\mathfrak{p}}}{(2\mathfrak{p}+1)\mathfrak{p}!}\left((h-1)^{2}+\frac{4h}{n+4}(2+r)\right)^{k-\mathfrak{p}}\sum^{2\mathfrak{p}+1}_{\mathfrak{m}=0}\binom{2\mathfrak{p}+1}{\mathfrak{m}}h^{\mathfrak{m}}(1+(-1)^{\mathfrak{m}})\,.$
### 3.1 The leading Regge trajectory
In equation 3.8 the function $\mathcal{G}^{(r)}_{n,k}(h)$ has in total $k+1$
terms that scale according to
$\tfrac{1}{h},\tfrac{1}{h^{2}},\dots,\tfrac{1}{h^{k+1}}$. It is a rather
straightforward exercise to extract the $\tfrac{1}{h^{k+1}}$ coefficient,
which is given by
$\mathcal{D}^{(r)}_{n,k}=\frac{(n+4)^{k+1}}{2^{2k+1}}\sqrt{\pi}\frac{k!}{(k+\tfrac{1}{2})!}\,.$
(3.9)
Let us recall at this point that the goal of the exercise is to extract the
partial-wave coefficient for the leading Regge trajectory, $a_{n,n+1}$. This
is the term that scales like $\tfrac{1}{h^{n+2}}$ in equation 3.4. We have
$\frac{1}{2n+3}a_{n,n+1}=\frac{1}{n+4}\frac{r!}{(n+r+1)!}s^{(n+1)}_{n+1}\mathcal{D}^{(r)}_{n,n+1}\,,$
(3.10)
which yields
$a_{n,n+1}=\frac{\sqrt{\pi}}{4^{n+1}}\frac{(n+4)^{n+1}\,(n+1)!}{(n+\tfrac{1}{2})!}\frac{r!}{(n+r+1)!}\,,$
(3.11)
and equation 3.11 holds for any integer $n\geq-1$. Note also that for $r=0$,
since the amplitude reduces to the Veneziano amplitude, the partial-wave
coefficient of the leading Regge trajectory, given by equation 3.11, should
reduce to the result derived for that case. We have checked against the result
derived in [5] and this is indeed the case.
### 3.2 More on the Regge trajectories
In this section we wish to provide some, hopefully, useful expressions for the
partial-wave coefficients of the Regge trajectories. We start the discussion
by considering the case $a_{n,n}$. We work in an experimental manner. That is,
we compute the coefficients of interest using equation 3.1 for some low-lying
values of $n$ and manage to spot the pattern. Subsequently, we perform
numerous non-trivial tests of our expression. By non-trivial we mean against
values for the quantum number $n$ that lie outside the data-range used to
obtain the original expression. We find the following
$a_{n,n}=a_{n,n+1}~{}\frac{2n+1}{n+4}~{}2r\,.$ (3.12)
Let us make the comment that in the $r=0$ case the coefficients given by
equation 3.12 should coincide with those of the undeformed Veneziano
amplitude. It is useful at this point to remind ourselves of the fact that for
the undeformed Veneziano amplitude the partial-wave coefficients,
$a_{n,\ell}$, are always equal to zero when $n+\ell$ is an even number. This
agreement is manifest in equation 3.12.
Working in a similar vein for a couple more Regge trajectories, we observe
that there is a striking pattern. All the coefficients can be written as
$a_{n,n-\gamma}=a_{n,n+1}~{}\frac{2(n-\gamma)+1}{(n+4)^{\gamma+1}}~{}\frac{1}{2}\left((-1)^{\gamma}(r-1)+r+1\right)~{}\mathcal{P}_{\gamma}(n,r)$
(3.13)
where in the above $\mathcal{P}_{\gamma}(n,r)$ is a polynomial in $n$ and $r$.
The $r$-degree of the polynomial is
$\tfrac{1}{2}(-1)^{\gamma}\left((-1)^{\gamma}(2\gamma+1)-1\right)$ and only
the even powers in $r$ appear. The degree in $n$ is
$\tfrac{1}{4}(-1)^{\gamma}\left((-1)^{\gamma}(6\gamma+1)-1\right)$.
Unfortunately, we do not have a closed-form expression for all Regge
trajectories, however, we have closed-form expressions for
$\gamma=\\{0,1,\ldots,10\\}$. Since the polynomials become quite unwieldy,
below we provide the expressions for those that we will need in order to
constrain the parameter $r$ and more expressions can be found in appendix B.
$\displaystyle\mathcal{P}_{0}(n,r)$ $\displaystyle=2\,,$ (3.14)
$\displaystyle\mathcal{P}_{1}(n,r)$
$\displaystyle=(4n+2)r^{2}+\frac{n^{2}}{6}+\frac{19n}{6}+\frac{23}{3}\,,$
$\displaystyle\mathcal{P}_{2}(n,r)$
$\displaystyle=\left(\frac{16n^{2}}{3}-\frac{4}{3}\right)r^{2}+\frac{2n^{3}}{3}+\frac{43n^{2}}{3}+\frac{121n}{3}+\frac{50}{3}\,,$
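These experimentally obtained relations can be tested directly against the integral representation of equation 3.1. The short sympy sketch below (ours, purely illustrative) checks equation 3.12 and the $\gamma=1$ case of equation 3.13 at $n=1$; for odd $\gamma$ the factor $\tfrac{1}{2}\left((-1)^{\gamma}(r-1)+r+1\right)$ equals $1$:

```python
# A small sympy spot-check (ours, not from [21]) of Eqs. (3.12)-(3.14) at n = 1.
import sympy as sp

t, r = sp.symbols('t r', positive=True)
n = 1
poch = (-t + 2 + r) * (-t + 3 + r)          # (-t+2+r)_{n+1} for n = 1

def a_1l(l):
    integrand = sp.legendre(l, 1 - 2*t/(n + 4)) * poch
    return sp.simplify((1 + 2*l) / ((n + 4) * sp.rf(r + 1, n + 1))
                       * sp.integrate(integrand, (t, 0, n + 4)))

a12, a11, a10 = a_1l(2), a_1l(1), a_1l(0)
# Eq. (3.12): a_{n,n} = a_{n,n+1} * (2n+1)/(n+4) * 2r
print(sp.simplify(a11 - a12 * sp.Rational(2*n + 1, n + 4) * 2*r))            # expected: 0
# Eq. (3.13) with gamma = 1 and the polynomial P_1 of Eq. (3.14)
P1 = (4*n + 2)*r**2 + sp.Rational(n**2, 6) + sp.Rational(19*n, 6) + sp.Rational(23, 3)
print(sp.simplify(a10 - a12 * sp.Rational(2*(n - 1) + 1, (n + 4)**2) * P1))  # expected: 0
```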
### 3.3 The general coefficients
It is possible, with the relations that we have derived so far, to re-write
the general partial-wave coefficients given by equation 3.1. Algorithmically
we will work in the same way as above for the leading Regge trajectory,
$a_{n,n+1}$. The task at hand, now, is to compute the general term that scales
like $h^{-\ell-1}$ from both sides of the $h$-expansion. We re-state below
what we have obtained, for the reader’s convenience. The relation that we need
to use in order to get the $a_{n,\ell}$ is given by:
$\displaystyle\sum^{\infty}_{j=0}\frac{1}{2j+1}\frac{1}{h^{j+1}}a_{n,j}=\frac{r!}{(n+r+1)!}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\frac{1}{2^{2k+1}}(n+4)^{k}\frac{1}{h^{k+1}}$
(3.15)
$\displaystyle\sum^{k}_{\mathfrak{p}=0}\frac{(-k)_{\mathfrak{p}}}{\mathfrak{p}!(2\mathfrak{p}+1)}\left[(h-1)^{2}+\frac{4h}{n+4}(r+2)\right]^{k-\mathfrak{p}}\sum^{2\mathfrak{p}+1}_{\mathfrak{m}=0}\binom{2\mathfrak{p}+1}{\mathfrak{m}}h^{\mathfrak{m}}(1+(-1)^{\mathfrak{m}})\,.$
We will use the binomial expansion
$(a+b)^{c}=\sum^{c}_{d=0}\binom{c}{d}a^{c-d}b^{d}\,,$ (3.16)
in order to re-write
$\left[(h-1)^{2}+\frac{4h}{n+4}(2+r)\right]^{k-\mathfrak{p}}=\sum^{k-\mathfrak{p}}_{\mathfrak{r}=0}\binom{k-\mathfrak{p}}{\mathfrak{r}}\left(\frac{4}{n+4}(2+r)\right)^{k-\mathfrak{p}-\mathfrak{r}}h^{k-\mathfrak{p}-\mathfrak{r}}(h-1)^{2\mathfrak{r}}\,,$
(3.17)
and subsequently
$(h-1)^{2\mathfrak{r}}=\sum^{2\mathfrak{r}}_{\mathfrak{z}=0}\binom{2\mathfrak{r}}{\mathfrak{z}}(-1)^{2\mathfrak{r}-\mathfrak{z}}h^{\mathfrak{z}}\,,$
(3.18)
thus arriving at
$\displaystyle\sum^{\infty}_{j=0}\frac{1}{2j+1}\frac{1}{h^{j+1}}a_{n,j}=$
(3.19)
$\displaystyle\frac{r!}{(n+r+1)!}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\frac{1}{2^{2k+1}}(n+4)^{k}~{}\frac{1}{h^{k+1}}~{}\times~{}$
$\displaystyle\sum^{k}_{\mathfrak{p}=0}\sum^{2\mathfrak{p}+1}_{\mathfrak{m}=0}\sum^{k-\mathfrak{p}}_{\mathfrak{r}=0}\sum^{2\mathfrak{r}}_{\mathfrak{z}=0}\binom{2\mathfrak{p}+1}{\mathfrak{m}}\binom{k-\mathfrak{p}}{\mathfrak{r}}\binom{2\mathfrak{r}}{\mathfrak{z}}\left(\frac{4}{n+4}(r+2)\right)^{k-\mathfrak{p}-\mathfrak{r}}$
$\displaystyle\frac{(-k)_{\mathfrak{p}}}{\mathfrak{p}!(2\mathfrak{p}+1)}(-1)^{\mathfrak{z}}h^{\mathfrak{z}+\mathfrak{m}+k-\mathfrak{p}-\mathfrak{r}}(1+(-1)^{\mathfrak{m}})\,.$
From the above we can read off the terms proportional to $h^{-\ell-1}$ on both
sides, and the result yields (we have checked that our result, equation 3.20,
matches the expression for the partial-wave coefficients in [21]; note that
there is a shift by one in the definition of $n$ between the two works, so the
$n$ here has to be shifted as $n\rightarrow n+1$ to match the result in [21]):
$\displaystyle a_{n,\ell}=$
$\displaystyle\frac{\left(2\ell+1\right)r!}{(n+r+1)!}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\frac{1}{2^{2k+1}}(n+4)^{k}$
(3.20)
$\displaystyle\sum^{k}_{\mathfrak{p}=0}\sum^{2\mathfrak{p}+1}_{\mathfrak{m}=0}\sum^{k-\mathfrak{p}}_{\mathfrak{r}=0}\binom{2\mathfrak{p}+1}{\mathfrak{m}}\binom{k-\mathfrak{p}}{\mathfrak{r}}\binom{2\mathfrak{r}}{\mathfrak{p}+\mathfrak{r}-\ell-\mathfrak{m}}\frac{(-k)_{\mathfrak{p}}}{\mathfrak{p}!(2\mathfrak{p}+1)}$
$\displaystyle\left(\frac{4}{n+4}(r+2)\right)^{k-\mathfrak{p}-\mathfrak{r}}(-1)^{\mathfrak{p}+\mathfrak{r}-\ell-\mathfrak{m}}(1+(-1)^{\mathfrak{m}})$
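As a consistency check, equation 3.20 can be implemented directly as a finite sum; the following sympy sketch (ours, with the standard convention $\binom{a}{b}=0$ for $b<0$ or $b>a$) reproduces the closed forms found above:

```python
# A sketch (ours) implementing the quadruple sum of Eq. (3.20).
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

r = sp.symbols('r', positive=True)

def a_nl_sum(n, l):
    total = 0
    for k in range(n + 2):
        s1 = stirling(n + 1, k, kind=1, signed=True)          # signed Stirling numbers s^{(k)}_{n+1}
        pref_k = sp.Integer(-1)**(n + 1 - k) * s1 * sp.Rational(1, 2**(2*k + 1)) * (n + 4)**k
        inner = 0
        for p in range(k + 1):
            for m in range(2*p + 2):
                for rr in range(k - p + 1):
                    z = p + rr - l - m
                    inner += (sp.binomial(2*p + 1, m) * sp.binomial(k - p, rr)
                              * sp.binomial(2*rr, z)
                              * (sp.Rational(4, n + 4) * (r + 2))**(k - p - rr)
                              * sp.rf(-k, p) / (sp.factorial(p) * (2*p + 1))
                              * sp.Integer(-1)**z * (1 + (-1)**m))
        total += pref_k * inner
    return sp.simplify((2*l + 1) / sp.rf(r + 1, n + 1) * total)

print(a_nl_sum(0, 1))   # expected: 2/(r + 1)
print(a_nl_sum(1, 2))   # expected: 25/(6*(r + 1)*(r + 2))
```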
### 3.4 A simpler expression for the general coefficients
We will derive a simpler expression for the $a_{n,\ell}$ coefficients compared
to equation 3.20. To do so, we remind ourselves of the definition of the
“pseudo generating function” $\mathcal{G}^{(r)}_{n,k}(h)$ in $D=4$ which is
given by:
$\mathcal{G}^{(r)}_{n,k}(h)=\int^{n+4}_{0}dt\frac{(-t+2+r)^{k}}{\sqrt{(h-1)^{2}+\tfrac{4ht}{n+4}}}\,.$
(3.21)
As we have seen, the integral in equation 3.21 can be performed analytically
in terms of ordinary Gauss hypergeometric functions,
${}_{2}F_{1}(a,b,c;x)$. There exists another way of obtaining the integral in
terms of the Appell hypergeometric function, $F_{1}(a;b,c;d;x,y)$. The answer
is given by the following:
$\mathcal{G}^{(r)}_{n,k}(h)=(r+2)^{k}\frac{n+4}{h-1}F_{1}\left(1;-k,\tfrac{1}{2};2;\tfrac{n+4}{2+r},-\tfrac{4h}{(h-1)^{2}}\right)\,.$
(3.22)
We recall now the formal definition of the Appell hypergeometric function as a
power series,
$F_{1}(a;b,c;d;x,y)=\sum^{\infty}_{e=0}\sum^{\infty}_{f=0}\frac{1}{e!}\frac{1}{f!}\frac{(a)_{e+f}(b)_{e}(c)_{f}}{(d)_{e+f}}x^{e}y^{f}\,,$
(3.23)
and we also make note of the simplification:
$\frac{(1)_{x+y}}{(2)_{x+y}}=\frac{1}{1+x+y}\ ,$ (3.24)
in order to obtain:
$\mathcal{G}^{(r)}_{n,k}(h)=(r+2)^{k}\frac{n+4}{h-1}\sum^{k}_{\mathfrak{u}=0}\sum^{\infty}_{\mathfrak{v}=0}\frac{1}{\mathfrak{u}!}\frac{1}{\mathfrak{v}!}\frac{(-k)_{\mathfrak{u}}(\tfrac{1}{2})_{\mathfrak{v}}}{1+\mathfrak{u}+\mathfrak{v}}\left(\frac{n+4}{r+2}\right)^{\mathfrak{u}}\left(-\frac{4h}{(h-1)^{2}}\right)^{\mathfrak{v}}\,.$
(3.25)
To proceed we perform the $\mathfrak{u}$-sum in the above relation and we
obtain
$\mathcal{G}^{(r)}_{n,k}(h)=(r+2)^{k}(n+4)\sum^{k}_{\mathfrak{v}=0}\frac{(\tfrac{1}{2})_{\mathfrak{v}}}{(\mathfrak{v}+1)!}{}_{2}F_{1}\left(-k,1+\mathfrak{v},2+\mathfrak{v};\tfrac{n+4}{r+2}\right)(-4)^{\mathfrak{v}}\frac{h^{\mathfrak{v}}}{(h-1)^{2\mathfrak{v}+1}}\,.$
(3.26)
Now, we can use, once more, the binomial expansion
$\frac{1}{(1+x)^{n}}=\sum^{\infty}_{y=0}\binom{n+y-1}{y}(-1)^{y}x^{y}\,,$
(3.27)
as well as the relation between the binomial coefficients and the Pochhammer
symbol
$\binom{x+y-1}{y}=\frac{(x)_{y}}{y!}\,,$ (3.28)
in order to re-write
$\frac{1}{(h-1)^{2\mathfrak{v}+1}}=(-1)^{-2\mathfrak{v}-1}\sum^{\infty}_{\mathfrak{r}=0}\frac{(2\mathfrak{v}+1)_{\mathfrak{r}}}{\mathfrak{r}!}(-1)^{2\mathfrak{r}}h^{\mathfrak{r}}\,,$
(3.29)
and thus the expression for $\mathcal{G}^{(r)}_{n,k}(h)$ becomes
$\displaystyle\mathcal{G}^{(r)}_{n,k}(h)=(r+2)^{k}(n+4)$
$\displaystyle\sum^{k}_{\mathfrak{v}=0}\frac{(\tfrac{1}{2})_{\mathfrak{v}}}{(\mathfrak{v}+1)!}{}_{2}F_{1}\left(-k,1+\mathfrak{v},2+\mathfrak{v};\tfrac{n+4}{r+2}\right)(-4)^{\mathfrak{v}}(-1)^{-2\mathfrak{v}-1}~{}h^{\mathfrak{v}}~{}$
(3.30)
$\displaystyle\sum^{\infty}_{\mathfrak{r}=0}\frac{(2\mathfrak{v}+1)_{\mathfrak{r}}}{\mathfrak{r}!}(-1)^{2\mathfrak{r}}h^{\mathfrak{r}}\,.$
Having obtained an explicit power-series form in $h$ for the “pseudo
generating function”, $\mathcal{G}^{(r)}_{n,k}(h)$, equation 3.30, it is quite
straightforward to extract the coefficient $a_{n,\ell}$. It is given by (we
have checked, in this case as well, that equation 3.31 matches the expression
for the partial-wave coefficients in [21], as we did previously):
$\displaystyle a_{n,\ell}=$
$\displaystyle\frac{(2\ell+1)r!}{(n+4)(n+r+1)!}\sum_{k=0}^{n+1}(-1)^{n+1-k}s^{(k)}_{n+1}(r+2)^{k}(n+4)$
(3.31)
$\displaystyle\sum_{\mathfrak{v}=0}^{\ell}\frac{(\frac{1}{2})_{\mathfrak{v}}}{(\mathfrak{v}+1)!}{}_{2}F_{1}\left(-k,1+\mathfrak{v};2+\mathfrak{v};\tfrac{n+4}{r+2}\right)(-4)^{\mathfrak{v}}\frac{(2\mathfrak{v}+1)_{\ell-\mathfrak{v}}}{(\ell-\mathfrak{v})!}\,.$
Note that, while equation 3.31 is completely equivalent to equation 3.20, the
two are distinct parametrisations of the partial-wave coefficients. Equation
3.31 comes with only two sums, instead of the four that appear in equation
3.20. An argument can be made in favour of both of these equations in
comparison to equation 3.1, as they are plain sums rather than integrals of
Legendre polynomials against non-trivial functions.
Also, note that for $\ell=0$ we have just a single sum
$\displaystyle a_{n,0}=$
$\displaystyle\frac{r!}{(n+4)(n+r+1)!}\sum_{k=0}^{n+1}(-1)^{n+1-k}s^{(k)}_{n+1}(r+2)^{k}$
(3.32)
$\displaystyle\left(\frac{r+2}{k+1}+(-1)^{k}\frac{(r+2)^{-k}(n-r+2)^{k+1}}{k+1}\right)\,.$
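The two-sum parametrisation can be implemented just as directly; the sketch below (ours) evaluates equation 3.31 with the terminating ${}_{2}F_{1}$ written as an explicit finite sum:

```python
# A sketch (ours) of the two-sum form, Eq. (3.31).
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

r = sp.symbols('r', positive=True)

def hyp2f1_terminating(k, b, c, x):
    # 2F1(-k, b; c; x) as a finite sum, for a non-negative integer k
    return sum(sp.rf(-k, j) * sp.rf(b, j) / (sp.rf(c, j) * sp.factorial(j)) * x**j
               for j in range(k + 1))

def a_nl_twosum(n, l):
    x = sp.Integer(n + 4) / (r + 2)
    total = 0
    for k in range(n + 2):
        s1 = stirling(n + 1, k, kind=1, signed=True)
        inner = sum(sp.rf(sp.Rational(1, 2), v) / sp.factorial(v + 1)
                    * hyp2f1_terminating(k, 1 + v, 2 + v, x) * (-4)**v
                    * sp.rf(2*v + 1, l - v) / sp.factorial(l - v)
                    for v in range(l + 1))
        total += (-1)**(n + 1 - k) * s1 * (r + 2)**k * (n + 4) * inner
    return sp.simplify((2*l + 1) / ((n + 4) * sp.rf(r + 1, n + 1)) * total)

print(a_nl_twosum(0, 1))   # expected: 2/(r + 1)
```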
## 4 The partial wave coefficients in $D$ dimensions
In this section we wish to derive expressions similar to those obtained
previously in $D=4$ dimensions, section 3, but without specifying the number
of spacetime dimensions. The steps and basic relations that we need in order
to manipulate the expressions have already appeared in the previous section,
and hence we will proceed at a faster pace here.
For the reader’s convenience we record here again the basic relation, as an
integral over the Gegenbauer polynomials, that gives the partial-wave
coefficients $a_{n,\ell}$
$\displaystyle a_{n,\ell}=$
$\displaystyle\frac{r!}{(n+r+1)!}\frac{1}{\mathcal{K}(\ell,\alpha)}\frac{1}{n+4}\left(\frac{4}{(n+4)^{2}}\right)^{\alpha-\tfrac{1}{2}}$
(4.1)
$\displaystyle\int^{n+4}_{0}dtC^{(\alpha)}_{\ell}\left(1-\frac{2t}{n+4}\right)(t(n+4-t))^{\alpha-\tfrac{1}{2}}(-t+2+r)_{n+1}\,.$
In order to proceed, we want to make use of the generating function of the
Gegenbauer polynomials
$\sum^{\infty}_{\ell=0}C^{(\alpha)}_{\ell}(x)h^{\ell}=\frac{1}{(1-2xh+h^{2})^{\alpha}}\,,$
(4.2)
the representation of the Pochhammer symbol in terms of the Stirling number of
the first kind, see equation 3.3, and also
$(n+4-t)^{\alpha-\tfrac{1}{2}}=\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}(n+4)^{\alpha-\tfrac{1}{2}-p}t^{p}\,.$
(4.3)
After using the above, we obtain the following:
$\displaystyle\sum^{\infty}_{j=0}\mathcal{K}(j,\alpha)a_{n,j}h^{j}$ (4.4)
$\displaystyle=\frac{r!}{(n+r+1)!}\frac{1}{n+4}\left[\frac{4}{(n+4)^{2}}\right]^{\alpha-\tfrac{1}{2}}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}(n+4)^{\alpha-\tfrac{1}{2}-p}\times$
$\displaystyle\underbrace{\int^{n+4}_{0}dt~{}\frac{(-t+2+r)^{k}t^{p+\alpha-\tfrac{1}{2}}}{\left[(h-1)^{2}+\frac{4ht}{n+4}\right]^{\alpha}}}_{\mathcal{G}^{(\alpha)(r)}_{n,k,p}(h)}\,,$
where in the above relation, equation 4.4, we have defined the “pseudo
generating function” $\mathcal{G}^{(\alpha)(r)}_{n,k,p}(h)$. The integral can
be performed analytically and we obtain
$\displaystyle\mathcal{G}^{(\alpha)(r)}_{n,k,p}(h)=$
$\displaystyle\frac{2(r+2)^{k}}{2\alpha+2p+1}(n+4)^{\alpha+p+\tfrac{1}{2}}\frac{1}{(h-1)^{2\alpha}}$
(4.5) $\displaystyle
F_{1}\left(\alpha+p+\tfrac{1}{2};-k,\alpha;\alpha+p+\tfrac{3}{2};\tfrac{n+4}{r+2},-\tfrac{4h}{(h-1)^{2}}\right)\,.$
Now, we can proceed as we did in section 3.4, in order to re-write equation
4.5 in a form that is appropriate for our manipulations. Namely, we can use
the definition of the Appell hypergeometric function as a power series, which
is given by equation 3.23, along with the simplification
$\frac{(\alpha+p+\tfrac{1}{2})_{x+y}}{(\alpha+p+\tfrac{3}{2})_{x+y}}=\frac{1+2\alpha+2p}{1+2\alpha+2p+2x+2y}\,,$
(4.6)
and then analytically perform the $\mathfrak{u}$-sum, after which we need to
use the binomial expansion, equation 3.27, and the relation between the
binomial coefficients and the Pochhammer symbol given by equation 3.28 in
order to obtain
$\displaystyle\mathcal{G}^{(\alpha)(r)}_{n,k,p}(h)=2(r+2)^{k}(n+4)^{\alpha+p+\tfrac{1}{2}}$
$\displaystyle\sum^{k}_{\mathfrak{v}=0}\frac{(\alpha)_{\mathfrak{v}}}{\mathfrak{v}!}{}_{2}F_{1}\left(-k,\alpha+p+\tfrac{1}{2}+\mathfrak{v},\alpha+p+\tfrac{3}{2}+\mathfrak{v};\tfrac{n+4}{r+2}\right)$
(4.7)
$\displaystyle(-4)^{\mathfrak{v}}(-1)^{-2\mathfrak{v}-2\alpha}~{}h^{\mathfrak{v}}~{}\sum^{\infty}_{\mathfrak{r}=0}\frac{(2\mathfrak{v}+2\alpha)_{\mathfrak{r}}}{\mathfrak{r}!}(-1)^{2\mathfrak{r}}h^{\mathfrak{r}}\,.$
After the above simplifications, the equation we need to consider in order to
extract the partial-wave coefficients is given by:
$\displaystyle\sum^{\infty}_{j=0}\mathcal{K}(j,\alpha)a_{n,j}h^{j}=\frac{r!}{(n+r+1)!}4^{\alpha-\tfrac{1}{2}}\sum^{n+1}_{k=0}(-1)^{n+1-k}s^{(k)}_{n+1}\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}2(r+2)^{k}$
(4.8)
$\displaystyle\sum^{k}_{\mathfrak{v}=0}\frac{(\alpha)_{\mathfrak{v}}}{\mathfrak{v}!}\frac{1}{1+2\alpha+2p+2\mathfrak{v}}(-4)^{\mathfrak{v}}{}_{2}F_{1}\left(-k,\alpha+p+\tfrac{1}{2}+\mathfrak{v},\alpha+p+\tfrac{3}{2}+\mathfrak{v};\tfrac{n+4}{r+2}\right)$
$\displaystyle
h^{\mathfrak{v}}\sum^{\infty}_{\mathfrak{r}=0}\frac{(2\mathfrak{v}+2\alpha)_{\mathfrak{r}}}{\mathfrak{r}!}(-1)^{2\mathfrak{r}}h^{\mathfrak{r}}\,.$
### 4.1 The leading Regge trajectory
Using equation 4.8 we can read-off from both sides the terms that scale as
$h^{n+1}$ in order to extract the expression for the partial-wave coefficients
on the leading Regge trajectory, $a_{n,n+1}$. We have that:
$\displaystyle a_{n,n+1}=$
$\displaystyle\frac{r!}{(n+r+1)!}\frac{2(r+2)^{n+1}}{\mathcal{K}(n+1,\alpha)}4^{\alpha-\tfrac{1}{2}}\sum^{\infty}_{p=0}\sum^{n+1}_{\mathfrak{v}=0}\binom{\alpha-\tfrac{1}{2}}{p}\frac{(\alpha)_{\mathfrak{v}}}{\mathfrak{v}!}\frac{(-1)^{p}(-4)^{\mathfrak{v}}}{1+2\alpha+2p+2\mathfrak{v}}$
(4.9)
$\displaystyle{}_{2}F_{1}\left(-n-1,\alpha+p+\tfrac{1}{2}+\mathfrak{v},\alpha+p+\tfrac{3}{2}+\mathfrak{v};\tfrac{n+4}{r+2}\right)\frac{(2\mathfrak{v}+2\alpha)_{n+1-\mathfrak{v}}}{(n+1-\mathfrak{v})!}\,.$
### 4.2 More on the Regge trajectories
While equation 4.9 is a formal derivation for the partial-wave coefficients
of the leading Regge trajectories in arbitrary $D$-dimensions, it looks more
like a simplified re-writing of the integral, rather than a helpful
expression. Again, we can work experimentally in order to derive
$a_{n,n+1}=\frac{(n+4)^{n+1}(n+1)!}{4^{n}}\frac{1}{(D-3)(D-1)}\frac{r!}{(n+r+1)!}\frac{1}{\left(\frac{D+2}{2}\right)_{n-1}}\,,$
(4.10)
for the leading Regge trajectory. It turns out that a similar behaviour to the
one in the $D=4$ case is observed here as well. The sub-leading trajectories
can be expressed as
$a_{n,n-\gamma}=a_{n,n+1}~{}\frac{1}{(n+4)^{\gamma+1}}~{}\frac{1}{2}\left(1+(-1)^{\gamma}(r-1)+r\right)~{}\left(D-3+2(n-\gamma)\right)~{}\mathbb{P}_{\gamma}(n,D,r)\,,$
(4.11)
where in the above $\mathbb{P}_{\gamma}(n,D,r)$ is a polynomial, which we do
not have in a closed-form for all levels. For the first few we have the
expressions:
$\displaystyle\mathbb{P}_{0}(n,D,r)$ $\displaystyle=2\,,$ (4.12)
$\displaystyle\mathbb{P}_{1}(n,D,r)$
$\displaystyle=\frac{12r^{2}(D+2n-3)-Dn-2D+n^{2}+23n+54}{6(n+4)^{2}}\,,$
$\displaystyle\mathbb{P}_{2}(n,D,r)$
$\displaystyle=\frac{(D+2n-3)\left(-Dn-2D+n^{2}+25n+58\right)+4r^{2}(D+2n-5)(D+2n-3)}{3(n+4)^{3}}\,.$
### 4.3 The general coefficients
Finally, it is a straightforward exercise to extract the $h^{\ell}$ term from
both sides of equation 4.8 in order to obtain the expression for all
partial-wave coefficients. It is given by (we have checked in this case also
that equation 4.13 matches the expression for the partial-wave coefficients in
[21], as in the previous cases):
$\displaystyle a_{n,\ell}=$
$\displaystyle\frac{r!}{(n+r+1)!}\frac{1}{\mathcal{K}(\ell,\alpha)}4^{\alpha-\tfrac{1}{2}}\sum_{k=0}^{n+1}(-1)^{n+1-k}s^{(k)}_{n+1}\,2(r+2)^{k}\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}$
(4.13)
$\displaystyle\sum_{\mathfrak{v}=0}^{\ell}\frac{(\alpha)_{\mathfrak{v}}}{\mathfrak{v}!}\frac{1}{1+2\alpha+2p+2\mathfrak{v}}{}_{2}F_{1}\left(-k,\alpha+p+\tfrac{1}{2}+\mathfrak{v},\alpha+p+\tfrac{3}{2}+\mathfrak{v};\tfrac{n+4}{r+2}\right)$
$\displaystyle(-4)^{\mathfrak{v}}\frac{(2\mathfrak{v}+2\alpha)_{\ell-\mathfrak{v}}}{(\ell-\mathfrak{v})!}\,.$
## 5 Comments on unitarity
The unitarity of the underlying theory, whatever that theory may be, is
directly related to the positivity of the partial-wave coefficients that we
derived in the previous sections. The reason is that negative partial-wave
coefficients indicate the exchange of ghost states. Hence, requiring that the
partial-wave coefficients are non-negative numbers is equivalent to the
requirement that the theory is ghost-free.
The deformation parameter $r$ that enters equation 2.1 can be any real number.
With that in mind, we start from the expressions we derived for the $D=4$
case. From equation 3.11 and for $n=0$ we obtain
$a_{0,1}=\frac{2}{r+1}\,,$ (5.1)
from which we conclude that $r>-1$ and this is our first unitarity bound.
A second more stringent bound comes from the study of $a_{n,n}$ given by
equation 3.12 already at $n=0$. We obtain
$a_{0,0}=\frac{r}{r+1}\,,$ (5.2)
and requiring positivity of the above yields
$r<-1\lor r\geq 0\,.$ (5.3)
From the above, we conclude that $r\geq 0$, since the $r<-1$ solution does not
allow the deformation parameter $r$ to become $0$ and thus undo the
deformation of the Veneziano amplitude. The study of coefficients derived from
the sub-leading trajectories does not lead to further bounds on the allowed
values of the parameter $r$.
We continue to the general $D$ dimensions and check if we can derive any
bounds on the allowed value of $D$. Here, we have already derived $r\geq 0$
and we will use this as an input. Note, also, that we are interested in
integer spacetime dimensions $D$. The first non-trivial constraints in this
case come from $a_{n,n-1}$ at $n=1$. We have the expression
$a_{1,0}=\frac{12(D-1)r^{2}-3D+78}{12(D-1)(r+1)(r+2)}\,,$ (5.4)
the positivity of which leads to
$\left(r>0\land D\leq 4(D-1)r^{2}+26\right)\lor 2r\geq 1\lor D\leq 26\,.$
(5.5)
Clearly, the first part, which reads $\left(r>0\land D\leq
4(D-1)r^{2}+26\right)$, is inconsistent, as it does not allow the deformation
parameter to return to the value $r=0$ and thus to recover the un-deformed
Veneziano amplitude. For the same reason we can exclude the second solution,
$2r\geq 1$, and we are left only with $D\leq 26$. Namely, the underlying theory
has to live at or below the critical dimension of bosonic string theory. Since
$r\geq 0$, one could imagine that the partial-wave coefficients of the
Veneziano amplitude, which go to negative values above the critical dimension,
could become positive for some appropriate value of $r$; however, this is not
the case.
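The last step can also be made explicit with a one-line computer-algebra check (ours): setting $r=0$ in equation 5.4 and demanding positivity reproduces the $D\leq 26$ bound.

```python
# A small sympy check (ours) that the r -> 0 limit of Eq. (5.4) enforces D <= 26.
import sympy as sp

D = sp.symbols('D', positive=True)
a_10 = (12*(D - 1)*0**2 - 3*D + 78) / (12*(D - 1)*(0 + 1)*(0 + 2))   # Eq. (5.4) at r = 0
print(sp.solve(a_10 >= 0, D))   # expected: 1 < D <= 26
```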
Before concluding this section we would like to add some clarifying comments.
As we have already mentioned, in [21] the authors provided an expression for
all partial-wave coefficients as a double-sum. Upon explicit evaluations of
the relation, they were able to derive bounds on the allowed values for the
deformation parameter, $r$, and the number of spacetime dimensions, $D$.
The results we have obtained here are in agreement with those presented in
[21] upon the appropriate shift that we have already mentioned in the previous
section and for the special case of $m^{2}_{0}=-1$. While our formulae given
by equations 3.20 and 3.31 for the special case of $D=4$ and equation 4.13 for
general-$D$ dimensions are equivalent to the result obtained in [21], they are
distinct parametrisations of the partial-wave coefficients.
Furthermore, the general expressions for all partial-wave coefficients that
appear both here and in [21] are not expressed in terms of simple analytic
functions, but rather as sums. This is more of a formal re-writing of the
coefficients, rather than a straightforward expression that allows the non-
negativity of the said coefficients to be manifest. Taking that into
consideration, the formulae describing the partial-wave coefficients on the
Regge trajectories, while not formally derived, appear to be more useful in a
practical sense.
## 6 Epilogue
In this work, we focused on examining the following hypergeometric deformation
of the Veneziano amplitude
$\mathcal{A}(s,t)=\frac{\Gamma(-s-1)\Gamma(-t-1)}{\Gamma(-s-t-2)}{}_{3}F_{2}\left(-s-1,-t-1,r;-s-t-2,1+r;1\right)\,,$
(6.1)
that was derived in [21]. Using the decomposition into partial waves, we were
able to derive bounds on the deformation parameter $r$ and the number of
spacetime dimensions $D$, namely
$r\geq 0\,,\qquad\text{and}\qquad D\leq 26\,,$ (6.2)
based on the requirement that the coefficients in the partial-wave
decomposition are non-negative numbers. This requirement is a consequence of
the unitarity of the underlying theory.
We find it quite remarkable that even though the deformation parameter $r$
could initially be any real number, and naively one could expect that this
would not allow one to derive any bounds on the dimensionality of the
underlying theory, the positivity of the coefficients in the expansion
requires that $D$ is bounded from above. More extraordinary is the fact that
this upper bound matches precisely the critical dimension of the bosonic string.
As we have mentioned already in the introduction, generalising the Veneziano
amplitude is motivated from many different points of view, one of which is a
question on the uniqueness of string theory. Being able to derive the critical
dimension of the string from the examination of the hypergeometric Veneziano
amplitude is not a proof that string theory is unique; however, it can be
seen as suggestive evidence. Of course, as was explained in [20], one can
argue that the input assumptions used as constraints to bootstrap equation 6.1
were neither strong nor restrictive enough, and hence it is not a big surprise
that new mathematical functions were found that satisfy the bootstrap
conditions.
We hope that this work is a first step towards the more systematic study of
these new and exciting hypergeometric amplitudes, supplementing and extending
the unitarity analysis of [21].
There are many exciting and interesting avenues for future work.
The first and most straightforward path would be to perform a similar analysis
for different values of $m^{2}_{0}$. There is already numerical evidence from
[21] that for different values of $m^{2}_{0}$ the deformation parameter can be
negative without violating the unitarity of the underlying theory (we are
grateful to Grant Remmen for stressing this possibility to us).
We still do not know the underlying theory of the amplitude given by equation
6.1, if any. Taking our findings into consideration, as well as the fact that
this four-point amplitude has an integral representation in terms of the
Koba-Nielsen formula [21], perhaps the first and most natural place to look
for an answer would be some $2\rightarrow 2$ scattering process within string
theory itself.
Furthermore, it is well known that the Veneziano and the Virasoro-Shapiro
amplitudes, given by (we are considering the scattering of four tachyons of
mass $\alpha^{\prime}m^{2}=-1$ and $\alpha^{\prime}m^{2}=-4$ in open and
closed string theory respectively, and we follow conventions in which
$\alpha^{\prime}=1$ for open strings and $\alpha^{\prime}=4$ for closed
strings; we have used $s+t+u=4m^{2}$):
$\displaystyle\mathcal{A}_{\text{ven}}$
$\displaystyle=\frac{\Gamma(-s-1)\Gamma(-t-1)}{\Gamma(-s-t-2)}\,,$ (6.3)
$\displaystyle\mathcal{A}_{\text{vs}}$
$\displaystyle=\frac{\Gamma(-s-1)\Gamma(-t-1)\Gamma(s+t+3)}{\Gamma(s+2)\Gamma(t+2)\Gamma(-s-t-2)}\,,$
are related via the KLT relation [22]
$\mathcal{A}_{\text{vs}}=\underbrace{\frac{\sin(\pi s)\sin(\pi
t)}{\pi\sin(\pi(-s-t))}}_{\text{kernel}}\mathcal{A}^{2}_{\text{ven}}\,.$ (6.4)
It would be very interesting to examine if a similar relation holds true for
the hypergeometric deformations of the Veneziano and Virasoro-Shapiro
amplitudes, to derive the kernel in this generalised context, and thus the
generalised KLT relation.
Finally, a straightforward path is to consider the hypergeometric Coon
amplitude that was, also, derived in [21] and attempt to obtain the
corresponding unitarity bounds for the deformation parameters, $q$ and $r$,
and perhaps the spacetime dimensions $D$ in that case. Note that this is the
most general construction in terms of the hypergeometric deformations that
were discussed in that article and certain limits can be taken in order to
derive equation 6.1 from that. We believe that the bounds derived here can be
used as useful input in order to derive bounds on the allowed region of values
that the parameters can have in the case of the hypergeometric Coon amplitude.
## Acknowledgments
We are grateful to James M. Drummond for bringing [21] to our attention and
suggesting that we carry out the unitarity analysis. We have greatly benefited from
discussions with James M. Drummond, Pronobesh Maity, and Theodoros Nakas
throughout the various stages of this project. We are, also, indebted to Grant
Remmen, and Xinan Zhou for reading a draft of this work and offering their
valuable insight and comments. Finally, we would like to acknowledge the
hospitality of ShanghaiTech University where parts of this work were
completed. The work of KCR is supported by starting funds from University of
Chinese Academy of Sciences (UCAS), the Kavli Institute for Theoretical
Sciences (KITS), and the Fundamental Research Funds for the Central
Universities.
## Appendix A Partial-wave coefficients that are equal to zero
### A.1 The effect of a non-vanishing value for the r-parameter
Let us discuss the first point of section 2 in a bit more depth in order to
showcase our argument. To do so, we remind ourselves that in the case of the
Veneziano amplitude, in order to prove that $a_{n,\ell}=0$ when $n+\ell$ is
equal to an even number, we have to consider the shift $t=n+4-t^{\prime}$ [5].
Using the properties $(-x)_{y}=(-1)^{y}(x-y+1)_{y}$ and
$C^{(\alpha)}_{\ell}(-x)=(-1)^{\ell}C^{(\alpha)}_{\ell}(x)$, the integral of
equation 2.7 becomes
$\displaystyle\int^{n+4}_{0}dtC_{\ell}^{(\alpha)}\left(1-\frac{2t}{n+4}\right)(t(n+4-t))^{\alpha-\tfrac{1}{2}}(-t+2+r)_{n+1}=$
(A.1) $\displaystyle-(-1)^{n+\ell}$
$\displaystyle\int^{n+4}_{0}dtC_{\ell}^{(\alpha)}\left(1-\frac{2t}{n+4}\right)(t(n+4-t))^{\alpha-\tfrac{1}{2}}(-t+2-r)_{n+1}\,,$
where in the above we renamed $t^{\prime}$ as $t$ after using the properties.
Now, in the Veneziano case, which is the case $r=0$, the original integral is
just re-written as $-(-1)^{n+\ell}$ times itself, and hence for $n+\ell$ any
even number the result is zero. It is clear, that due to the presence of a
non-zero $r$ this is no longer the case.
### A.2 Vanishing coefficients in $D=4$
Here we specialise the discussion to the case $D=4$. In this case, we have
seen that the expression for the partial-wave coefficient simplifies
drastically, see equation 3.1. Let us focus on the integral of that
expression, given by
$\int^{n+4}_{0}dtP_{\ell}\left(1-\frac{2t}{n+4}\right)(-t+2+r)_{n+1}\,.$ (A.2)
We will use that the Legendre polynomials satisfy
$P_{\ell}\left(1-\frac{2t}{n+4}\right)=\sum^{\ell}_{k=0}\binom{\ell}{k}\binom{\ell+k}{k}\left(-\frac{t}{n+4}\right)^{k}\,,$
(A.3)
in order to re-write equation A.2 as:
$\sum^{\ell}_{k=0}\binom{\ell}{k}\binom{\ell+k}{k}\left(-\frac{1}{n+4}\right)^{k}\int^{n+4}_{0}dt\,t^{k}\underbrace{\left(-t+2+r\right)\left(-t+3+r\right)\ldots\left(-t+2+n+r\right)}_{(n+1)~\text{factors}}\,.$
(A.4)
Let us consider that $\mathfrak{a}_{j}$ is the coefficient of the term $t^{j}$
in the above and hence we have that equation A.4 becomes
$\sum^{n+1}_{j=0}\mathfrak{a}_{j}(n+4)^{j+1}\mathcal{T}\left(\ell,j\right)\,,$
(A.5)
where in the above we have defined:
$\mathcal{T}(\ell,j)=\sum^{\ell}_{k=0}(-1)^{k}\binom{\ell}{k}\binom{\ell+k}{k}\frac{1}{k+j+1}\,.$
(A.6)
Notice that equation A.6 can be written as:
$\mathcal{T}(\ell,j)=\int^{1}_{0}dz~{}z^{j}\widetilde{P}_{\ell}(z)\,,$ (A.7)
with $\widetilde{P}_{\ell}(z)$ being the shifted Legendre polynomial that
satisfy
$\widetilde{P}_{\ell}(z)=P_{\ell}(1-2z)=\sum^{\ell}_{k=0}(-1)^{k}\binom{\ell}{k}\binom{\ell+k}{k}z^{k}\,.$
(A.8)
Recall that the task at hand was to evaluate $\mathcal{T}(\ell,j)$. The
integral in equation A.7 can be evaluated to be
$\mathcal{T}(\ell,j)=\frac{(-j)_{\ell}}{(j+1)(j+2)_{\ell}}\,.$ (A.9)
From equation A.9 we conclude that $\mathcal{T}(\ell\geq n+2,j)=0$ for any
$j=0,1,\ldots,n+1$, and hence that $a_{n,\ell}=0$ for $\ell\geq n+2$.
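This closed form is straightforward to verify symbolically; a small sympy check (ours) of equation A.9 for a few low values of $\ell$ and $j$ reads:

```python
# A quick sympy verification (ours) of Eq. (A.9) for a few values of l and j.
import sympy as sp

z = sp.symbols('z')
for l in range(5):
    for j in range(4):
        lhs = sp.integrate(z**j * sp.legendre(l, 1 - 2*z), (z, 0, 1))   # Eq. (A.7)
        rhs = sp.rf(-j, l) / ((j + 1) * sp.rf(j + 2, l))                # Eq. (A.9)
        assert sp.simplify(lhs - rhs) == 0
print("Eq. (A.9) checked for l < 5, j < 4; it vanishes whenever l > j.")
```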
### A.3 Vanishing coefficients in any $D$
Now, we proceed to compute the integral and show the vanishing of the
partial-wave coefficients in any number of dimensions for $\ell\geq n+2$. We
begin by
considering the following representation of the Gegenbauer polynomials
$C^{(\alpha)}_{\ell}(x)=\frac{\left(2\alpha\right)_{\ell}}{\ell!}{}_{2}F_{1}\left(-\ell,2\alpha+\ell,\alpha+\tfrac{1}{2};\tfrac{1-x}{2}\right)\,,$
(A.10)
which can be written in the more convenient, for our purposes, form
$C^{(\alpha)}_{\ell}(x)=\frac{\left(2\alpha\right)_{\ell}}{\ell!}\sum^{\ell}_{j=0}\binom{\ell}{j}\frac{\left(2\alpha+\ell\right)_{j}}{\left(\alpha+\tfrac{1}{2}\right)_{j}}\left(\frac{x-1}{2}\right)^{j}\,.$
(A.11)
Using equation A.11, the integral appearing in equation 2.7 becomes
$\int^{n+4}_{0}dt\frac{\left(2\alpha\right)_{\ell}}{\ell!}\sum^{\ell}_{k=0}\binom{\ell}{k}\frac{\left(2\alpha+\ell\right)_{k}}{\left(\alpha+\tfrac{1}{2}\right)_{k}}\left(-\frac{1}{n+4}\right)^{k}t^{k}t^{\alpha-\tfrac{1}{2}}\left(-t+n+4\right)^{\alpha-\tfrac{1}{2}}\left(-t+2+r\right)_{n+1}\,.$
(A.12)
In the above, $\left(-t+2+r\right)_{n+1}$ has in total $(n+1)$ factors:
$\left(-t+2+r\right)\left(-t+3+r\right)\ldots\left(-t+2+n+r\right)$.
Additionally, we can use the binomial theorem to express
$\left(-t+n+4\right)^{\alpha-\tfrac{1}{2}}$ as:
$\left(-t+n+4\right)^{\alpha-\tfrac{1}{2}}=\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}(n+4)^{\alpha-\tfrac{1}{2}-p}t^{p}\,.$
(A.13)
Now, we consider that $\mathfrak{a}_{j}$ is the coefficient of $t^{j}$ in
$\left(-t+2+r\right)\left(-t+3+r\right)\ldots\left(-t+2+n+r\right)$ and
equation A.12 becomes
$\frac{\left(2\alpha\right)_{\ell}}{\ell!}\sum^{n+1}_{j=0}\mathfrak{a}_{j}(n+4)^{j+2\alpha}\mathcal{T}(\ell,j,\alpha)\,,$
(A.14)
where in the above
$\mathcal{T}(\ell,j,\alpha)=\sum^{\ell}_{k=0}\binom{\ell}{k}\frac{\left(2\alpha+\ell\right)_{k}}{\left(\alpha+\tfrac{1}{2}\right)_{k}}(-1)^{k}\sum^{\infty}_{p=0}\binom{\alpha-\tfrac{1}{2}}{p}(-1)^{p}\frac{2}{1+2\alpha+2j+2k+2p}\,.$
(A.15)
The sums in equation A.15 can be performed analytically and we obtain
$\mathcal{T}(\ell,j,\alpha)=\left[\Gamma\left(\alpha+\tfrac{1}{2}\right)\right]^{2}\Gamma\left(\alpha+j+\tfrac{1}{2}\right){}_{3}\mathcal{F}_{2}\left(\\{\alpha+j+\tfrac{1}{2},-\ell,\ell+2\alpha\\};\\{\alpha+\tfrac{1}{2},2\alpha+j+1\\};1\right)\,,$
(A.16)
where in the above we have used
${}_{p}\mathcal{F}_{q}(\\{a_{1},a_{2},\ldots,a_{p}\\};\\{b_{1},b_{2},\ldots,b_{q}\\};z)$
to denote the regularised hypergeometric function, which is given in terms of
the generalised hypergeometric function as:
${}_{p}\mathcal{F}_{q}(\\{a_{1},a_{2},\ldots,a_{p}\\};\\{b_{1},b_{2},\ldots,b_{q}\\};z)=\frac{1}{\Gamma(b_{1})\Gamma(b_{2})\ldots\Gamma(b_{q})}{}_{p}F_{q}(\\{a_{1},a_{2},\ldots,a_{p}\\};\\{b_{1},b_{2},\ldots,b_{q}\\};z)\,,$
(A.17)
From equation A.16 we can conclude that $\mathcal{T}(\ell\geq n+2,j,\alpha)=0$
for any number of spacetime dimensions $D$ and any $j=0,1,\ldots,n+1$.
## Appendix B The polynomials for the Regge trajectories in $D=4$ dimensions
In this appendix we provide some additional examples for the polynomials
governing the Regge trajectories in $D=4$ dimensions from section 3.2.
$\displaystyle\mathcal{P}_{3}(n,r)=$
$\displaystyle\left(\frac{16n^{3}}{3}-8n^{2}-\frac{4n}{3}+2\right)r^{4}+\left(\frac{4n^{4}}{3}+\frac{92n^{3}}{3}+\frac{215n^{2}}{3}-\frac{23n}{3}-18\right)r^{2}+$
(B.1)
$\displaystyle\frac{n^{5}}{36}+\frac{401n^{4}}{360}+\frac{1367n^{3}}{90}+\frac{23503n^{2}}{360}+\frac{5923n}{60}+\frac{523}{15}\,,$
$\displaystyle\mathcal{P}_{4}(n,r)=$
$\displaystyle\left(\frac{64n^{4}}{15}-\frac{256n^{3}}{15}+\frac{224n^{2}}{15}+\frac{64n}{15}-4\right)r^{4}+$
$\displaystyle\left(\frac{16n^{5}}{9}+\frac{376n^{4}}{9}+36n^{3}-\frac{1486n^{2}}{9}-\frac{82n}{9}+\frac{116}{3}\right)r^{2}+$
$\displaystyle\frac{n^{6}}{9}+\frac{218n^{5}}{45}+\frac{4157n^{4}}{60}+\frac{24743n^{3}}{90}+\frac{55921n^{2}}{180}-\frac{2071n}{30}-82\,,$
$\displaystyle\mathcal{P}_{5}(n,r)=$
$\displaystyle\left(\frac{128n^{5}}{45}-\frac{64n^{4}}{3}+\frac{448n^{3}}{9}-32n^{2}-\frac{568n}{45}+\frac{28}{3}\right)r^{6}+$
$\displaystyle\left(\frac{16n^{6}}{9}+\frac{368n^{5}}{9}-\frac{680n^{4}}{9}-\frac{2440n^{3}}{9}+\frac{3889n^{2}}{9}+\frac{587n}{9}-\frac{310}{3}\right)r^{4}+$
$\displaystyle\left(\frac{2n^{7}}{9}+\frac{461n^{6}}{45}+\frac{13139n^{5}}{90}+\frac{14515n^{4}}{36}-\frac{2221n^{3}}{9}-\frac{224179n^{2}}{180}+\frac{1577n}{30}+286\right)r^{2}+$
$\displaystyle\frac{n^{8}}{324}+\frac{103n^{7}}{540}+\frac{72011n^{6}}{15120}+\frac{31553n^{5}}{560}+\frac{285899n^{4}}{1008}+\frac{1033789n^{3}}{1680}+$
$\displaystyle\frac{4928053n^{2}}{11340}-\frac{594829n}{3780}-\frac{1138}{9}\,,$
$\displaystyle\mathcal{P}_{6}(n,r)=$
$\displaystyle\left(\frac{512n^{6}}{315}-\frac{2048n^{5}}{105}+\frac{5248n^{4}}{63}-\frac{1024n^{3}}{7}+\frac{23648n^{2}}{315}+\frac{3968n}{105}-24\right)r^{6}+$
$\displaystyle\left(\frac{64n^{7}}{45}+\frac{1376n^{6}}{45}-\frac{8576n^{5}}{45}+\frac{16n^{4}}{9}+\frac{52756n^{3}}{45}-\frac{55546n^{2}}{45}-\frac{1406n}{5}+308\right)r^{4}+$
$\displaystyle\left(\frac{8n^{8}}{27}+\frac{1904n^{7}}{135}+\frac{5108n^{6}}{27}+\frac{8144n^{5}}{135}-\frac{105671n^{4}}{54}-\frac{126079n^{3}}{135}+\right.$
$\displaystyle\left.\frac{87601n^{2}}{18}+\frac{10327n}{45}-\frac{3292}{3}\right)r^{2}+$
$\displaystyle\frac{n^{9}}{81}+\frac{221n^{8}}{270}+\frac{16141n^{7}}{756}+\frac{69851n^{6}}{280}+\frac{2588081n^{5}}{2520}+$
$\displaystyle\frac{272901n^{4}}{280}-\frac{11443967n^{3}}{4536}-\frac{7777853n^{2}}{1890}+\frac{356711n}{630}+964\,,$
$\displaystyle\mathcal{P}_{7}(n,r)=$
$\displaystyle\left(\frac{256n^{7}}{315}-\frac{128n^{6}}{9}+\frac{4288n^{5}}{45}-\frac{2720n^{4}}{9}+\frac{19792n^{3}}{45}-\frac{1688n^{2}}{9}-\frac{12172n}{105}+66\right)r^{8}+$
(B.2)
$\displaystyle\left(\frac{128n^{8}}{135}+\frac{2432n^{7}}{135}-\frac{32096n^{6}}{135}+\frac{16864n^{5}}{27}+\right.$
$\displaystyle\left.\frac{107992n^{4}}{135}-\frac{620152n^{3}}{135}+\frac{504206n^{2}}{135}+\frac{9982n}{9}-980\right)r^{6}+$
$\displaystyle\left(\frac{8n^{9}}{27}+\frac{1924n^{8}}{135}+\frac{22508n^{7}}{135}-\frac{94526n^{6}}{135}-\frac{677699n^{5}}{270}+\right.$
$\displaystyle\left.\frac{3716929n^{4}}{540}+\frac{1240327n^{3}}{135}-\frac{3456947n^{2}}{180}-\frac{64279n}{30}+4382\right)r^{4}+$
$\displaystyle\left(\frac{2n^{10}}{81}+\frac{698n^{9}}{405}+\frac{8641n^{8}}{189}+\frac{468323n^{7}}{945}+\right.$
$\displaystyle\left.\frac{75233n^{6}}{72}-\frac{1451983n^{5}}{360}-\frac{8123005n^{4}}{648}+\frac{18345539n^{3}}{3240}+\right.$
$\displaystyle\left.\frac{10273985n^{2}}{378}-\frac{737869n}{630}-6028\right)r^{2}+$
$\displaystyle\frac{n^{11}}{3888}+\frac{841n^{10}}{38880}+\frac{2152081n^{9}}{2721600}+\frac{5665843n^{8}}{362880}+\frac{5406743n^{7}}{32400}+\frac{30769397n^{6}}{36288}+$
$\displaystyle\frac{972735997n^{5}}{544320}-\frac{38975383n^{4}}{1088640}-\frac{7255356281n^{3}}{1360800}-\frac{460545103n^{2}}{90720}+\frac{15354599n}{12600}+\frac{18871}{15}\,,$
$\displaystyle\mathcal{P}_{8}(n,r)=$
$\displaystyle\left(\frac{1024n^{8}}{2835}-\frac{8192n^{7}}{945}+\frac{11264n^{6}}{135}-\frac{2048n^{5}}{5}+\frac{144256n^{4}}{135}-\frac{60928n^{3}}{45}+\right.$
(B.3)
$\displaystyle\left.\frac{1390016n^{2}}{2835}+\frac{344192n}{945}-\frac{572}{3}\right)r^{8}+$
$\displaystyle\left(\frac{512n^{9}}{945}+\frac{7936n^{8}}{945}-\frac{13184n^{7}}{63}+\frac{53696n^{6}}{45}-\frac{74464n^{5}}{45}-\frac{241136n^{4}}{45}+\right.$
$\displaystyle\left.\frac{3306440n^{3}}{189}-\frac{11112316n^{2}}{945}-\frac{1344076n}{315}+3256\right)r^{6}+$
$\displaystyle\left(\frac{32n^{10}}{135}+\frac{7616n^{9}}{675}+\frac{7592n^{8}}{75}-\frac{303824n^{7}}{225}+\frac{134866n^{6}}{225}+\right.$
$\displaystyle\left.\frac{3756572n^{5}}{225}-\frac{27364001n^{4}}{1350}-\frac{36813233n^{3}}{675}+\frac{34407883n^{2}}{450}+\frac{945907n}{75}-\frac{89292}{5}\right)r^{4}+$
$\displaystyle\left(\frac{8n^{11}}{243}+\frac{964n^{10}}{405}+\frac{531536n^{9}}{8505}+\frac{1619614n^{8}}{2835}-\frac{1250071n^{7}}{1134}-\frac{2318551n^{6}}{180}+\right.$
$\displaystyle\left.\frac{12720941n^{5}}{4860}+\frac{133287227n^{4}}{1620}+\frac{77483591n^{3}}{6804}-\frac{449387851n^{2}}{2835}-\frac{2828951n}{945}+\frac{104056}{3}\right)r^{2}+$
$\displaystyle\frac{n^{12}}{972}+\frac{112n^{11}}{1215}+\frac{1196773n^{10}}{340200}+\frac{11905489n^{9}}{170100}+\frac{70047469n^{8}}{100800}+$
$\displaystyle\frac{548246269n^{7}}{226800}-\frac{66696559n^{6}}{38880}-\frac{48406423n^{5}}{1944}-\frac{79199036753n^{4}}{2721600}+\frac{31022789003n^{3}}{680400}+$
$\displaystyle\frac{17353447943n^{2}}{226800}-\frac{20749049n}{2100}-\frac{259286}{15}\,,$
$\displaystyle\mathcal{P}_{9}(n,r)=$
$\displaystyle\left(\frac{2048n^{9}}{14175}-\frac{1024n^{8}}{225}+\frac{280576n^{7}}{4725}-\frac{93184n^{6}}{225}+\right.$
(B.4)
$\displaystyle\left.\frac{1117952n^{5}}{675}-\frac{843136n^{4}}{225}+\frac{60356992n^{3}}{14175}-\frac{99392n^{2}}{75}-\frac{368216n}{315}+572\right)r^{10}+$
$\displaystyle\left(\frac{256n^{10}}{945}+\frac{2816n^{9}}{945}-\frac{15104n^{8}}{105}+\frac{433408n^{7}}{315}-\frac{228128n^{6}}{45}+\frac{138272n^{5}}{45}+\right.$
$\displaystyle\left.\frac{25833776n^{4}}{945}-\frac{62429264n^{3}}{945}+\frac{12001739n^{2}}{315}+\frac{1711723n}{105}-11154\right)r^{8}+$
$\displaystyle\left(\frac{64n^{11}}{405}+\frac{14752n^{10}}{2025}+\frac{24752n^{9}}{675}-\frac{1002248n^{8}}{675}+\frac{4159412n^{7}}{675}+\right.$
$\displaystyle\left.\frac{6835234n^{6}}{675}-\frac{172516181n^{5}}{2025}+\frac{181332943n^{4}}{4050}+\right.$
$\displaystyle\left.\frac{189778778n^{3}}{675}-\frac{413439001n^{2}}{1350}-\frac{2927713n}{45}+73612\right)r^{6}+$
$\displaystyle\left(\frac{8n^{12}}{243}+\frac{328n^{11}}{135}+\frac{521078n^{10}}{8505}+\frac{1100114n^{9}}{2835}-\frac{25659041n^{8}}{5670}-\frac{19007911n^{7}}{1890}+\right.$
$\displaystyle\left.\frac{145457699n^{6}}{1944}+\frac{93677327n^{5}}{1080}-\frac{30023039233n^{4}}{68040}-\frac{5731391107n^{3}}{22680}+\right.$
$\displaystyle\left.\frac{1636399621n^{2}}{1890}+\frac{36484051n}{630}-190028\right)r^{4}+$
$\displaystyle\left(\frac{n^{13}}{486}+\frac{941n^{12}}{4860}+\frac{1288333n^{11}}{170100}+\frac{50049121n^{10}}{340200}+\frac{557418061n^{9}}{453600}+\right.$
$\displaystyle\left.\frac{137847077n^{8}}{907200}-\frac{933350689n^{7}}{34020}-\frac{2027689027n^{6}}{38880}+\frac{212491115287n^{5}}{1360800}+\right.$
$\displaystyle\left.\frac{1087556641399n^{4}}{2721600}-\frac{50810550289n^{3}}{226800}-\frac{2094388529n^{2}}{2800}+\frac{11761469n}{252}+162838\right)r^{2}+$
$\displaystyle\frac{n^{14}}{58320}+\frac{107n^{13}}{58320}+\frac{362693n^{12}}{4082400}+\frac{10121317n^{11}}{4082400}+\frac{1667002441n^{10}}{39916800}+\frac{3207436571n^{9}}{7983360}+$
$\displaystyle\frac{163569574211n^{8}}{89812800}+\frac{68609574763n^{7}}{35925120}-\frac{4576464369611n^{6}}{359251200}-\frac{14953602874033n^{5}}{359251200}-$
$\displaystyle\frac{927033772163n^{4}}{59875200}+\frac{2596960946917n^{3}}{29937600}+\frac{441409885271n^{2}}{4989600}-\frac{2649524531n}{138600}-\frac{314354}{15}\,,$
$\displaystyle\mathcal{P}_{10}(n,r)=$
$\displaystyle\left(\frac{8192n^{10}}{155925}-\frac{65536n^{9}}{31185}+\frac{370688n^{8}}{10395}-\frac{3473408n^{7}}{10395}+\frac{13976576n^{6}}{7425}-\frac{1925120n^{5}}{297}+\right.$
(B.5)
$\displaystyle\left.\frac{409485056n^{4}}{31185}-\frac{425455616n^{3}}{31185}+\frac{63601568n^{2}}{17325}+\frac{13240064n}{3465}-1768\right)r^{10}+$
$\displaystyle\left(\frac{1024n^{11}}{8505}+\frac{5632n^{10}}{8505}-\frac{137728n^{9}}{1701}+\frac{666112n^{8}}{567}-\frac{2309504n^{7}}{315}+\frac{7955776n^{6}}{405}+\right.$
$\displaystyle\left.\frac{2012032n^{5}}{1701}-\frac{214121696n^{4}}{1701}+\frac{2118827684n^{3}}{8505}-\frac{119186042n^{2}}{945}-\frac{11763526n}{189}+\frac{117260}{3}\right)r^{8}+$
$\displaystyle\left(\frac{256n^{12}}{2835}+\frac{55808n^{11}}{14175}-\frac{24064n^{10}}{14175}-\frac{1096192n^{9}}{945}+\frac{1090592n^{8}}{105}-\frac{92613952n^{7}}{4725}-\right.$
$\displaystyle\left.\frac{1380237872n^{6}}{14175}+\frac{1098647552n^{5}}{2835}-\frac{8138435n^{4}}{567}-\frac{2140478078n^{3}}{1575}+\frac{646434493n^{2}}{525}+\right.$
$\displaystyle\left.\frac{33164102n}{105}-305448\right)r^{6}+$
$\displaystyle\left(\frac{32n^{13}}{1215}+\frac{11888n^{12}}{6075}+\frac{1942184n^{11}}{42525}+\frac{3789124n^{10}}{42525}-\frac{6254986n^{9}}{945}+\frac{200483701n^{8}}{14175}+\right.$
$\displaystyle\left.\frac{10897731679n^{7}}{85050}-\frac{7691537891n^{6}}{24300}-\frac{30358969699n^{5}}{34020}+\frac{359628196349n^{4}}{170100}+\right.$
$\displaystyle\left.\frac{122628506327n^{3}}{56700}-\frac{21427008719n^{2}}{4725}-\frac{153371047n}{315}+1006456\right)r^{4}+$
$\displaystyle\left(\frac{2n^{14}}{729}+\frac{976n^{13}}{3645}+\frac{897097n^{12}}{85050}+\frac{981730n^{11}}{5103}+\frac{1102788149n^{10}}{1020600}-\frac{15745694n^{9}}{1701}-\right.$
$\displaystyle\left.\frac{207637186087n^{8}}{4082400}+\frac{25174204033n^{7}}{204120}+\frac{157940397331n^{6}}{226800}-\frac{10725272695n^{5}}{20412}-\right.$
$\displaystyle\left.\frac{14608972386719n^{4}}{4082400}+\frac{10269635113n^{3}}{22680}+\frac{681990515411n^{2}}{113400}-\frac{17273033n}{210}-1290532\right)r^{2}+$
$\displaystyle\frac{n^{15}}{14580}+\frac{227n^{14}}{29160}+\frac{11153n^{13}}{28350}+\frac{22796371n^{12}}{2041200}+\frac{16378424261n^{11}}{89812800}+\frac{89710139017n^{10}}{59875200}+$
$\displaystyle\frac{488830522129n^{9}}{179625600}-\frac{1146948964289n^{8}}{44906400}-\frac{1733919836167n^{7}}{14968800}+\frac{1953314586743n^{6}}{179625600}+$
$\displaystyle\frac{137895712720613n^{5}}{179625600}+\frac{8473021434481n^{4}}{9979200}-\frac{2101852716961n^{3}}{1663200}-\frac{1626950129029n^{2}}{831600}+$
$\displaystyle\frac{747757867n}{2772}+435964\,.$
## References
* [1] M. Kruczenski, J. Penedones, and B. C. van Rees, “Snowmass White Paper: S-matrix Bootstrap,” 2203.02421.
* [2] G. Veneziano, “Construction of a crossing - symmetric, Regge behaved amplitude for linearly rising trajectories,” Nuovo Cim. A 57 (1968) 190–197.
* [3] X. O. Camanho, J. D. Edelstein, J. Maldacena, and A. Zhiboedov, “Causality Constraints on Corrections to the Graviton Three-Point Coupling,” JHEP 02 (2016) 020, 1407.5597.
* [4] K. Häring and A. Zhiboedov, “Gravitational Regge bounds,” 2202.08280.
* [5] P. Maity, “Positivity of the Veneziano amplitude in D = 4,” JHEP 04 (2022) 064, 2110.01578.
* [6] N. Arkani-Hamed, L. Eberhardt, Y.-t. Huang, and S. Mizera, “On unitarity of tree-level string amplitudes,” JHEP 02 (2022) 197, 2201.11575.
* [7] D. D. Coon, “Uniqueness of the veneziano representation,” Phys. Lett. B 29 (1969) 669–672.
* [8] M. Baker and D. D. Coon, “Dual resonance theory with nonlinear trajectories,” Phys. Rev. D 2 (1970) 2349–2358.
* [9] D. D. Coon, U. P. Sukhatme, and J. Tran Thanh Van, “Duality and proton proton scattering at all angles,” Phys. Lett. B 45 (1973) 287–291.
* [10] C. B. Jepsen, “Cutting the Coon amplitude,” JHEP 06 (2023) 114, 2303.02149.
* [11] R. Bhardwaj, S. De, M. Spradlin, and A. Volovich, “On unitarity of the Coon amplitude,” JHEP 08 (2023) 082, 2212.00764.
* [12] Y. Li and H.-Y. Sun, “Towards $\alpha^{\prime}$-finiteness: $q$-deformed open string amplitude,” 2307.13117.
* [13] F. Figueroa and P. Tourkine, “Unitarity and Low Energy Expansion of the Coon Amplitude,” Phys. Rev. Lett. 129 (2022), no. 12 121602, 2201.12331.
* [14] J. Chakravarty, P. Maity, and A. Mishra, “On the positivity of Coon amplitude in D = 4,” JHEP 10 (2022) 043, 2208.02735.
* [15] N. Geiser and L. W. Lindwasser, “Properties of infinite product amplitudes: Veneziano, Virasoro, and Coon,” JHEP 12 (2022) 112, 2207.08855.
* [16] S. Caron-Huot, Z. Komargodski, A. Sever, and A. Zhiboedov, “Strings from Massive Higher Spins: The Asymptotic Uniqueness of the Veneziano Amplitude,” JHEP 10 (2017) 026, 1607.04253.
* [17] J. Maldacena and G. N. Remmen, “Accumulation-point amplitudes in string theory,” JHEP 08 (2022) 152, 2207.06426.
* [18] N. Geiser and L. W. Lindwasser, “Generalized Veneziano and Virasoro amplitudes,” JHEP 04 (2023) 031, 2210.14920.
* [19] C. Cheung and G. N. Remmen, “Veneziano variations: how unique are string amplitudes?,” JHEP 01 (2023) 122, 2210.12163.
* [20] C. Cheung and G. N. Remmen, “Bespoke Dual Resonance,” 2308.03833.
* [21] C. Cheung and G. N. Remmen, “Stringy dynamics from an amplitudes bootstrap,” Phys. Rev. D 108 (2023), no. 2 026011, 2302.12263.
* [22] H. Kawai, D. C. Lewellen, and S. H. H. Tye, “A Relation Between Tree Amplitudes of Closed and Open Strings,” Nucl. Phys. B 269 (1986) 1–23.
Combining (<ref>) and (<ref>), using Lemma <ref> and the penultimate item of Lemma <ref> yields
\begin{equation*}
\inf_{\hat{\sigma}^{2}} \sup_{P \in \mathcal{P}_{\text{Gauss}}(\mu)} R_{n, \delta}(P, \hat{\sigma}^{2}) \geq \sup_{k \in \N} p_{\alpha_k}^{-1}(1-\delta) = p^{-1}_{n/2}(1-\delta)
\end{equation*}
This proves the lower bound. For the upper bound, we have, for any $\sigma^2 \in (0, \infty)$
\begin{equation*}
\frac{n \cdot \sigma^2}{\sum_{i=1}^{n}(X_i - \mu)^{2}} \sim \text{Inv-Gamma}(n/2, n/2)
\end{equation*}
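As a quick sanity check (not part of the proof), this distributional identity can be verified by simulation; a minimal NumPy/SciPy sketch:

```python
# A Monte Carlo sanity check (ours) of the Inv-Gamma(n/2, n/2) claim above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, mu, sigma2 = 7, 1.3, 2.5
X = rng.normal(mu, np.sqrt(sigma2), size=(100_000, n))
T = n * sigma2 / np.sum((X - mu)**2, axis=1)
print(stats.kstest(T, stats.invgamma(a=n/2, scale=n/2).cdf))   # KS statistic should be small
```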
therefore, for $\hat{\sigma}^{2}$ as defined in the theorem, we have
\begin{align*}
&\Prob\paren*{\abs*{\log(\sigma^2/\hat{\sigma}^{2}((X_i)_{i=1}^{n}))} \leq p_{n/2}^{-1}(1-\delta)} \\
&= \Prob\paren*{\exp(-p_{n/2}^{-1}(1-\delta)) \leq \frac{\sigma^2}{\hat{\sigma}^{2}((X_i)_{i=1}^{n})} \leq \exp(p_{n/2}^{-1}(1-\delta))}\\
&= \Prob\paren*{\frac{1-\exp(-2p^{-1}_{n/2}(1-\delta))}{2p^{-1}_{n/2}(1-\delta)} \leq \frac{n \cdot \sigma^{2}}{\sum_{i=1}^{n}(X_i - \mu)^{2}} \leq \frac{\exp(2p^{-1}_{n/2}(1-\delta)) - 1}{2p^{-1}_{n/2}(1-\delta)}} \\
&= p_{n/2}(p_{n/2}^{-1}(1 - \delta))\\
&= 1-\delta
\end{align*}
from which we conclude that for all $\sigma^{2} \in (0, \infty)$
\begin{equation*}
R_{n, \delta}(P, \hat{\sigma}^{2}) \leq p_{n/2}^{-1}(1-\delta)
\end{equation*}
completing the proof of the minimaxity of $\hat{\sigma}^{2}$ and of the upper bound on the minimax risk.
§ PROOFS OF SECTION <REF>
§.§ Proof of Theorem <ref>
Before we proceed with the proof, we start with a simple lemma.
Under the setup of Theorem <ref>, the functions $\widetilde{\mathcal{E}}, \widetilde{E}$ are strictly convex and symmetric with unique minimizer $0$. Furthermore, if $(X, Y) \sim P \in \mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^{2})$ so that $Y = \inp{w^{*}}{X} + \eta$, then $E(v) = \widetilde{E}(v - w^{*})$ for all $v$, and $w^{*}$ is the unique minimizer of $E(w)$.
We prove the convexity and symmetry of $\widetilde{E}$ first. We start with the symmetry.
\begin{equation*}
\widetilde{E}(-\Delta) = \Exp\brack*{e(- \inp{\Delta}{X} + \eta)}
= \Exp\brack*{e(\inp{\Delta}{X} - \eta)}
= \Exp\brack*{e(\inp{\Delta}{X} + \eta)} = \widetilde{E}(\Delta),
\end{equation*}
where the second equality follows from the symmetry of $e$, and the third equality follows from $\eta \overset{d}{=} -\eta$.
For the strict convexity, let $t \in (0, 1)$ and $\Delta, \Delta' \in \R^{d}$. Then
\begin{equation*}
\widetilde{E}((1-t) \Delta + t \Delta') = \Exp\brack*{e\paren*{(1-t)\brace*{\inp{\Delta}{X} + \eta} + t\brace*{\inp{\Delta'}{X} + \eta}}} < (1-t) \widetilde{E}(\Delta) + t \widetilde{E}(\Delta'),
\end{equation*}
where the inequality follows from the strict convexity of $e$. Therefore $\widetilde{E}$ is strictly convex and symmetric, and since $\widetilde{\mathcal{E}}$ and $\widetilde{E}$ differ by a constant, the same holds for $\widetilde{\mathcal{E}}$.
For the second statement, notice that, by symmetry of $\eta$,
\begin{equation*}
E(v) = \Exp\brack*{e(\inp{v}{X} - Y)} = \Exp\brack*{e(\inp{v-w^{*}}{X} - \eta)} = \Exp\brack*{e(\inp{v-w^{*}}{X} + \eta)} = \widetilde{E}(v-w^{*}).
\end{equation*}
After routine calculations and an application of the chain rule, this also shows that $E$ is strictly convex, symmetric, and differentiable at $w^{*}$ with $\nabla E(w^{*}) = \nabla \widetilde{E}(0)$. We compute
\begin{equation*}
\nabla E(w^{*}) = \nabla \widetilde{E}(0) = \Exp\brack*{e'(\eta) X} = \Exp\brack*{e'(\eta)} \Exp\brack*{X},
\end{equation*}
where $\eta \sim \mathcal{N}(0, \sigma^{2})$ and the last equality follows from the independence of $\eta$ and $X$. Now
\begin{equation*}
\Exp\brack*{e'(\eta)} = \frac{1}{\sigma^{2}}\Exp\brack*{e(\eta) \eta} = \frac{1}{\sigma^{2}} \Exp\brack*{e(-\eta) \cdot (-\eta)} = -\frac{1}{\sigma^{2}}\Exp\brack*{e(\eta)\eta} = -\Exp\brack*{e'(\eta)}
\end{equation*}
where the first and last equalities are by Stein's lemma, the second since $\eta \overset{d}{=} -\eta$, and the third by the symmetry of $e$. This proves that $\Exp\brack{e'(\eta)} = 0$, hence $\nabla E(w^{*}) = 0$, and therefore $w^{*}$ is the unique minimizer of $E$ by strict convexity.
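As an aside, the conclusion $\Exp\brack{e'(\eta)} = 0$ and the Stein identity used above are easy to check numerically. A minimal sketch, assuming numpy is available and taking the illustrative choice $e(t) = |t|^{3}/6$ (symmetric and strictly convex, but not assumed to be the error function of the paper):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 1.5
eta = rng.normal(0.0, np.sqrt(sigma2), size=2_000_000)

e = lambda t: np.abs(t) ** 3 / 6           # symmetric, strictly convex
e_prime = lambda t: np.sign(t) * t ** 2 / 2

lhs = e_prime(eta).mean()                   # E[e'(eta)]
rhs = (e(eta) * eta).mean() / sigma2        # Stein's lemma expression
print(lhs, rhs)                             # both should be close to 0
\end{verbatim}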
We now present the main proof of the theorem.
Our strategy is to use Theorem <ref> with a properly chosen sequence of distributions $(\pi_k)_{k \in \N}$. Notice that, associated to each $P \in \mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^2)$ is a unique minimizer $w^{*} \in \R^{d}$ of the expected error $E(w)$. So putting a distribution on the set of the latter, $\R^{d}$, induces a distribution on the set of the former, $\mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^2)$. Specifically, let $(\lambda_k)_{k \in \N}$ be a strictly positive decreasing sequence converging to $0$, and define $\pi_{k} \defeq \mathcal{N}(0, \lambda_k^{-1} \cdot (\sigma^2/n) \cdot I_{d \times d})$. With the goal of applying Theorem <ref>, we need to compute
\begin{equation*}
p_{k}(t) \defeq \sup_{\hat{w}} \Prob\paren*{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n})) \leq t},
\end{equation*}
where $w^{*} \sim \pi_k$, $(X_i)_{i=1}^{n} \sim P_{X}^{n}$, and $Y_{i} \mid (w^{*}, X_i) \sim \mathcal{N}(\inp{w^{*}}{X_i}, \sigma^2)$ for all $i \in [n]$, and independently. A basic calculation shows that $w^{*} \mid (X_i, Y_i)_{i=1}^{n} \sim \mathcal{N}(w_{k}, (\sigma^2/n) \Sigma_{k}^{-1})$, where
\begin{equation*}
w_{k} \defeq \Sigma_{k}^{-1} \paren*{\frac{1}{n}\sum_{i=1}^{n}Y_{i}X_i}, \quad \Sigma_{k} \defeq \widehat{\Sigma}_{n} + \lambda_k I_{d}, \quad \widehat{\Sigma}_{n} \defeq \frac{1}{n}\sum_{i=1}^{n} X_{i}X_{i}^{T}
\end{equation*}
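As an aside, $w_{k}$ coincides with the familiar ridge-regression estimator (with penalty $\lambda_{k}$ in the normalized least-squares objective), so the posterior-mean computation above can be checked against the standard ridge formula. A minimal sketch, assuming numpy is available (all names are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, d, sigma, lam = 200, 5, 0.5, 0.1
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
Y = X @ w_star + sigma * rng.normal(size=n)

Sigma_hat = X.T @ X / n
Sigma_k = Sigma_hat + lam * np.eye(d)
w_k = np.linalg.solve(Sigma_k, X.T @ Y / n)        # posterior mean above

# Ridge regression with penalty n * lam gives the same vector.
w_ridge = np.linalg.solve(X.T @ X + n * lam * np.eye(d), X.T @ Y)
print(np.allclose(w_k, w_ridge))                    # True
\end{verbatim}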
Therefore, using Lemma <ref>,
\begin{align*}
p_{k}(t) &= \sup_{\hat{w}} \Prob\paren*{\mathcal{E}(\hat{w}) \leq t} = \sup_{\hat{w}} \Prob\paren*{\widetilde{\mathcal{E}}(\hat{w} - w^{*}) \leq t} \\
&= \sup_{\hat{w}} \Exp\brack*{\Prob\paren*{\widetilde{\mathcal{E}}(\hat{w} - w^{*}) \leq t \st (X_i, Y_i)_{i=1}^{n}}} \\
&= \Exp\brack*{\sup_{v \in \R^{d}} \Prob\paren*{w^{*} - v \in \widetilde{\mathcal{E}}^{-1}((-\infty,t]) \st (X_i, Y_i)_{i=1}^{n}}} \\
&= \Exp\brack*{\Prob\paren*{w^{*} - w_k \in \widetilde{\mathcal{E}}^{-1}((-\infty,t]) \st (X_i, Y_i)_{i=1}^{n}}} \\
&= \Prob\paren*{\widetilde{\mathcal{E}}(Z_k) \le t} = F_{\widetilde{\mathcal{E}}(Z_k)}(t)
\end{align*}
where $Z_{k} \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, (\sigma^{2}/n)\Sigma_{k}^{-1})$. The fifth equality is obtained by combining the first item of Lemma <ref> with the first item of Lemma <ref>, and an application of Lemma <ref>. With the goal of applying Theorem <ref>, we verify the needed properties on the sequence $(p_{k})_{k \in \N}$.
First, since each $p_{k}$ is a CDF, it is right-continuous. To show that $(p_k)_{k \in \N}$ is decreasing, let $k \in \N$. Since $\lambda_{k} \geq \lambda_{k+1}$ by assumption, $\Sigma_{k} \succeq \Sigma_{k+1}$, and therefore $\Sigma_{k}^{-1} \preceq \Sigma_{k+1}^{-1}$. We conclude that $Z_{k+1} \overset{d}{=} Z_{k} + Y_{k}$ where $Y_{k} \indep Z_{k} \mid (X_i)_{i=1}^{n}$ and $Y_k \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \frac{\sigma^{2}}{n} \brace*{\Sigma_{k+1}^{-1} - \Sigma_{k}^{-1}})$. Now
\begin{align*}
F_{\widetilde{\mathcal{E}}(Z_{k+1}) \mid (X_i)_{i=1}^{n}}(t) &= \Prob\paren*{\widetilde{\mathcal{E}}(Z_{k+1}) \leq t \st (X_i)_{i=1}^{n}} \\
&= \Prob\paren*{Z_{k+1} \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} \\
&= \Prob\paren*{Z_{k} + Y_{k} \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} \\
&= \Exp\brack*{\Prob\paren*{Z_k + Y_{k} \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}, Y_k}} \\
&\leq \Exp\brack*{\sup_{a \in \R^{d}}\Prob\paren*{Z_k + a \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}, Y_k}} \\
&= \Exp\brack*{\Prob\paren*{Z_k \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}, Y_k}} \\
&= F_{\widetilde{\mathcal{E}}(Z_{k}) \mid (X_i)_{i=1}^{n}}(t),
\end{align*}
where the penultimate equality follows from Lemma <ref> and the fact that, given $((X_i)_{i=1}^{n}, Y_k)$, $Z_k$ is a centred Gaussian vector. Taking expectation of both sides with respect to $(X_i)_{i=1}^{n}$ proves that the sequence $(p_k)_{k \in \N}$ is decreasing. It remains to compute its limit.
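The monotonicity step above rests on what is essentially an Anderson-type fact: adding independent centred Gaussian noise to a centred Gaussian vector can only decrease the probability of landing in a sublevel set of a symmetric quasiconvex function. A small Monte Carlo illustration (not a proof), assuming numpy and taking $\widetilde{\mathcal{E}}(z) = \norm{z}_{2}^{2}$ purely for illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, reps, t = 3, 500_000, 2.0
Z = rng.normal(size=(reps, d))             # Z ~ N(0, I_d)
Y = 0.7 * rng.normal(size=(reps, d))       # independent extra Gaussian noise

p_Z  = np.mean(np.sum(Z ** 2, axis=1) <= t)
p_ZY = np.mean(np.sum((Z + Y) ** 2, axis=1) <= t)
print(p_Z, p_ZY, p_ZY <= p_Z)              # expect p_ZY <= p_Z
\end{verbatim}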
By the monotone convergence theorem, we have
\begin{align}
\lim_{k \to \infty} F_{\widetilde{\mathcal{E}}(Z_k)}(t) &= \lim_{k \to \infty} \Prob\paren*{\widetilde{\mathcal{E}}(Z_k) \leq t} \nonumber \\
&= \lim_{k \to \infty} \Exp\brack*{\Prob\paren*{\widetilde{\mathcal{E}}(Z_k) \leq t \st (X_i)_{i=1}^{n}}} \nonumber \\
&= \Exp\brack*{\lim_{k \to \infty} \Prob\paren*{\widetilde{\mathcal{E}}(Z_k) \leq t \st (X_i)_{i=1}^{n}}}. \label{eq:pf_thm_3_6}
\end{align}
Furthermore, letting $Z \sim \mathcal{N}(0, I_{d \times d})$, we have
\begin{align}
\lim_{k\to \infty} \Prob\paren*{\widetilde{\mathcal{E}}(Z_k) \leq t \st (X_i)_{i=1}^{n}} &= \lim_{k\to \infty} \Prob\paren*{Z_k \in \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} \nonumber \\
&= \lim_{k\to \infty} \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{Z \in \bigcap_{k=1}^{\infty} \brace*{\frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t])} \mid (X_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}_{n}^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}}, \label{eq:pf_thm_3_7}
\end{align}
where the second line follows from the fact that $Z_k \overset{d}{=} \frac{\sigma}{\sqrt{n}}\Sigma^{-1/2}_{k} Z$ conditionally on $(X_i)_{i=1}^{n}$, and the third line from the continuity of probability and the fact that for all $k \in \N$,
\begin{equation*}
\frac{\sqrt{n}}{\sigma}\Sigma_{k+1}^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \subset \frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]).
\end{equation*}
Indeed, by the spectral theorem, there exists an orthogonal matrix $Q$ and a diagonal matrix $\Lambda$ such that $\widehat{\Sigma}_{n} = Q \Lambda Q^{T}$, so $\Sigma_{k}^{1/2} = Q (\Lambda + \lambda_k I)^{1/2} Q^{T}$. Now since $\lambda_{k+1} \leq \lambda_{k}$, we have by Lemma <ref>
\begin{equation*}
(\Lambda + \lambda_{k+1} I)^{1/2} Q^{T}\widetilde{\mathcal{E}}^{-1}((-\infty, t]) \subset (\Lambda + \lambda_{k} I)^{1/2} Q^{T}\widetilde{\mathcal{E}}^{-1}((-\infty, t]).
\end{equation*}
Mapping the above sets through $Q$ yields the desired statement. Now if $\rank({\widehat{\Sigma}_{n}}) < d$, then $\dim(\im(\widehat{\Sigma}_{n}^{1/2})) < d$ and
\begin{equation}
\label{eq:pf_thm_3_8}
0 \leq \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}_{n}^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} \leq \Prob\paren*{Z \in \im(\widehat{\Sigma}_{n}^{1/2})} = 0
\end{equation}
where the last equality follows since $Z$ is a standard normal vector, so its distribution is absolutely continuous with respect to Lebesgue measure on $\R^{d}$, and Lebesgue measure assigns zero measure to all hyperplanes. Otherwise, $\rank(\widehat{\Sigma}_{n}) = d$, and we get
\begin{equation}
\label{eq:pf_thm_3_9}
\Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}_{n}^{1/2} \widetilde{\mathcal{E}}^{-1}((-\infty, t]) \st (X_i)_{i=1}^{n}} = \Prob\paren*{\widetilde{\mathcal{E}}\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}_{n}^{-1/2}Z} \leq t \st (X_i)_{i=1}^{n}}
\end{equation}
Combining (<ref>), (<ref>), (<ref>), and (<ref>) proves that
\begin{equation*}
\lim_{k \to \infty} p_{k}(t) = \Exp\brack*{\Prob\paren*{\widetilde{\mathcal{E}}\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}_{n}^{-1/2}Z} \leq t \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\brace*{\rank(\widehat{\Sigma}_{n}) = d}}((X_i)_{i=1}^{n})}
\end{equation*}
which can be interpreted as the CDF of the random variable
\begin{equation*}
A((X_i)_{i=1}^{n}, Z) \defeq \begin{dcases*}
\widetilde{\mathcal{E}}\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}_{n}^{-1/2} Z} & if $\rank(\widehat{\Sigma}_{n}) = d$ \\
\infty & otherwise
\end{dcases*}
\end{equation*}
so we write $\lim_{k \to \infty} p_{k}(t) = F_{A}(t)$.
It remains to show that the worst case risk of the procedures defined in the theorem is $Q_{A}(1-\delta)$. Let $\hat{w}$ be a procedure satisfying the condition stated in the theorem and fix $w^{*} \in \R^{d}$. Then, on the event that $\rank(\widehat{\Sigma}_{n}) = d$, and through an elementary explicit calculation, we have $\hat{w} - w^{*} = \widehat{\Sigma}_{n}^{-1} (\frac{1}{n} \sum_{i=1}^{n} \eta_i X_i)$ where $\eta_{i} \sim \mathcal{N}(0, \sigma^{2})$ are i.i.d. and independent of $(X_i)_{i=1}^{n}$. Therefore, $\frac{1}{n} \sum_{i=1}^{n} \eta_i X_i \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \sigma^{2}/n \cdot \widehat{\Sigma}_{n})$, and hence $\hat{w} - w^{*} \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \sigma^{2}/n \cdot \widehat{\Sigma}_{n}^{-1})$, so the worst case risk of this procedure is upper bounded by $Q_{A}(1-\delta)$. Applying Theorem <ref> concludes the proof.
We start with the lower bound. Let $(\lambda_{k})_{k \in \N}$ be a strictly positive decreasing sequence converging to $0$, and fix $k \in \N$. We have
\begin{align}
\sup_{P \in \mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^2)} R_{n, \delta}(P, \hat{w}) &= \sup_{P \in \mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^2)} Q_{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n}))}(1 - \delta) \nonumber \\
&= \sup_{w^{*} \in \R^{d}} F^{-1}_{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n}))}(1 - \delta) \nonumber \\
&= \paren*{\inf_{w^{*} \in \R^{d}} F_{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n}))}}^{-1}(1-\delta) \nonumber \\
&\geq \paren*{\Exp_{w^{*} \sim \mathcal{N}(0, \sigma^{2}(\lambda_k n)^{-1} I_{d})}\brack*{F_{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n})) \mid w^{*}}}}^{-1}(1-\delta) \nonumber \\
&= F^{-1}_{\mathcal{E}(\hat{w}((X_i, Y_i)_{i=1}^{n}))}(1-\delta) \nonumber \\
&= F^{-1}_{\ell(\hat{w}((X_i, Y_i)_{i=1}^{n}) - w^{*})}(1-\delta) \label{eq:pf_thm_3_1}
\end{align}
where the first five steps are justified by the same reasoning as in the analogous argument in the proof of Theorem <ref>, and where the last step uses the definition of $\ell$ given in the statement of the theorem. Now define
\begin{equation*}
\hat{w}_{k} \defeq \Sigma_{k}^{-1} \paren*{\frac{1}{n}\sum_{i=1}^{n}Y_{i}X_i}, \quad \Sigma_{k} \defeq \widehat{\Sigma} + \lambda_k I_{d}, \quad \widehat{\Sigma} \defeq \frac{1}{n}\sum_{i=1}^{n} X_{i}X_{i}^{T}
\end{equation*}
Then by a classical Bayesian calculation, we have
\begin{equation*}
w^{*} \mid (X_i, Y_i)_{i=1}^{n} \sim \mathcal{N}\paren*{\hat{w}_{k}, \frac{\sigma^2}{n}\Sigma_{k}^{-1}}.
\end{equation*}
Now for any $\hat{w}$, we have
\begin{align}
F_{\ell\paren*{\hat{w}((X_i, Y_i)_{i=1}^{n}) - w^{*}} \mid (X_i, Y_i)_{i=1}^{n}}(r) &= \Prob\paren*{\ell\paren*{\hat{w}((X_i, Y_i)_{i=1}^{n}) - w^{*}} \leq r \mid (X_i, Y_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{w^{*} - \hat{w}((X_i, Y_i)_{i=1}^{n}) \in \ell^{-1}((-\infty, r]) \st (X_i, Y_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{w^{*} - \hat{w}_{k} \in \ell^{-1}((-\infty, r]) + \hat{w}((X_i, Y_i)_{i=1}^{n}) - \hat{w}_{k} \st (X_i, Y_i)_{i=1}^{n}} \nonumber \\
&\leq \Prob\paren*{w^{*} - \hat{w}_{k} \in \ell^{-1}((-\infty, r]) \st (X_i, Y_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{\ell(\hat{w}_{k} - w^{*}) \leq r \st (X_i, Y_i)_{i=1}^{n}} \nonumber \\
&= F_{\ell(\hat{w}_{k} - w^{*}) \mid (X_i, Y_i)_{i=1}^{n}}(r) \label{eq:pf_thm_3_2}
\end{align}
where again the steps are justified by the same reasoning as the analogous argument in the proof of Theorem <ref>, with the only modification that the quasiconvexity and symmetry of $\ell$ are deduced from Lemma <ref>. Taking expectation with respect to $(X_i, Y_i)_{i=1}^{n}$ on both sides of (<ref>), and using the sixth item of Lemma <ref>, we obtain
\begin{equation}
\label{eq:pf_thm_3_3}
\inf_{\hat{w}} F^{-1}_{\ell(\hat{w}((X_i, Y_i)_{i=1}^{n}) - w^{*})}(1-\delta) = F^{-1}_{\ell(\hat{w}_{k} - w^{*})}(1-\delta) = F^{-1}_{\ell(Z_k)} (1-\delta) = Q_{\ell(Z_k)}(1-\delta),
\end{equation}
where $Z_k \mid (X_i)_{i=1}^{n} \sim \mathcal{N}\paren*{0, \frac{\sigma^2}{n}\Sigma^{-1}_{k}}$.
Now we claim that for all $r \in \R$ and all $k \in \N$
\begin{equation}
\label{eq:pf_thm_3_4}
F_{\ell(Z_k)}(r) \geq F_{\ell(Z_{k+1})}(r)
\end{equation}
Indeed, let $k \in \N$. Note that since $\lambda_{k} \geq \lambda_{k+1}$, $\Sigma_{k} \succeq \Sigma_{k+1}$, and therefore $\Sigma_{k}^{-1} \preceq \Sigma_{k+1}^{-1}$. We conclude that $Z_{k+1} \overset{d}{=} Z_{k} + Y_{k}$ where $Y_{k} \indep Z_{k} \mid (X_i)_{i=1}^{n}$ and $Y_k \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \frac{\sigma^{2}}{n} \brace*{\Sigma_{k+1}^{-1} - \Sigma_{k}^{-1}})$. Now
\begin{align*}
F_{\ell(Z_{k+1}) \mid (X_i)_{i=1}^{n}}(r) &= \Prob\paren*{\ell(Z_{k+1}) \leq r \st (X_i)_{i=1}^{n}} \\
&= \Prob\paren*{Z_{k+1} \in \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} \\
&= \Prob\paren*{Z_{k} + Y_{k} \in \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} \\
&= \Exp\brack*{\Prob\paren*{Z_k \in \ell^{-1}((-\infty, r]) - Y_k \st (X_i)_{i=1}^{n}, Y_k}} \\
&\leq \Exp\brack*{\Prob\paren*{Z_k \in \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}, Y_k}} \\
&= F_{\ell(Z_{k}) \mid (X_i)_{i=1}^{n}}(r),
\end{align*}
where the inequality follows from Lemma <ref> and the fact that, given $(X_i)_{i=1}^{n}, Y_k$, $Z_k$ is a centred Gaussian vector. Taking expectation of both sides with respect to $(X_i)_{i=1}^{n}$ finishes the proof of the claim.
We now further claim that
\begin{equation}
\label{eq:pf_thm_3_5}
\lim_{k \to \infty} F_{\ell(Z_k)}(r) = \Exp\brack*{\Prob\paren*{\ell\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}^{-1/2}Z} \leq r \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\brace*{\rank\paren*{\widehat{\Sigma}} = d}}((X_i)_{i=1}^{n})} \eqdef \phi_{n}(r)
\end{equation}
Indeed, by the monotone convergence theorem, we have
\begin{align}
\lim_{k \to \infty} F_{\ell(Z_k)}(r) &= \lim_{k \to \infty} \Prob\paren*{\ell(Z_k) \leq r} \nonumber \\
&= \lim_{k \to \infty} \Exp\brack*{\Prob\paren*{\ell(Z_k) \leq r \st (X_i)_{i=1}^{n}}} \nonumber \\
&= \Exp\brack*{\lim_{k \to \infty} \Prob\paren*{\ell(Z_k) \leq r \st (X_i)_{i=1}^{n}}}. \label{eq:pf_thm_3_6}
\end{align}
Furthermore, letting $Z \sim \mathcal{N}(0, I_{d \times d})$, we have
\begin{align}
\lim_{k\to \infty} \Prob\paren*{\ell(Z_k) \leq r \st (X_i)_{i=1}^{n}} &= \lim_{k\to \infty} \Prob\paren*{Z_k \in \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} \nonumber \\
&= \lim_{k\to \infty} \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{Z \in \bigcap_{k=1}^{\infty} \brace*{\frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \ell^{-1}((-\infty, r])} \mid (X_i)_{i=1}^{n}} \nonumber \\
&= \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}^{1/2} \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}}, \label{eq:pf_thm_3_7}
\end{align}
where the second line follows from the fact that $Z_k \overset{d}{=} \frac{\sigma}{\sqrt{n}}\Sigma^{-1/2}_{k} Z$ conditionally on $(X_i)_{i=1}^{n}$, and the third line from the continuity of probability and the fact that for all $k \in \N$,
\begin{equation*}
\frac{\sqrt{n}}{\sigma}\Sigma_{k+1}^{1/2} \ell^{-1}((-\infty, r]) \subset \frac{\sqrt{n}}{\sigma}\Sigma_k^{1/2} \ell^{-1}((-\infty, r]).
\end{equation*}
Indeed, by the spectral theorem, there exists an orthogonal matrix $Q$ and a diagonal matrix $\Lambda$ such that $\widehat{\Sigma} = Q \Lambda Q^{T}$, so that $\Sigma_{k}^{1/2} = Q (\Lambda + \lambda_k I)^{1/2} Q^{T}$. Now since $\lambda_{k+1} \leq \lambda_{k}$, we have by Lemma <ref>
\begin{equation*}
(\Lambda + \lambda_{k+1} I)^{1/2} Q^{T}\ell^{-1}((-\infty, r]) \subset (\Lambda + \lambda_{k} I)^{1/2} Q^{T}\ell^{-1}((-\infty, r]).
\end{equation*}
Mapping the above sets through $Q$ yields the desired statement. Now if $\rank\paren*{\widehat{\Sigma}} < d$, then $\dim(\im(\widehat{\Sigma}^{1/2})) < d$ and
\begin{equation}
\label{eq:pf_thm_3_8}
0 \leq \Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}^{1/2} \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} \leq \Prob\paren*{Z \in \im(\widehat{\Sigma}^{1/2})} = 0
\end{equation}
where the last equality follows since $Z$ is a standard normal vector, so its distribution is absolutely continuous with respect to Lebesgue measure on $\R^{d}$, and Lebesgue measure assigns zero measure to all hyperplanes. Otherwise, $\rank\paren*{\widehat{\Sigma}} = d$, and we get
\begin{equation}
\label{eq:pf_thm_3_9}
\Prob\paren*{Z \in \frac{\sqrt{n}}{\sigma} \widehat{\Sigma}^{1/2} \ell^{-1}((-\infty, r]) \st (X_i)_{i=1}^{n}} = \Prob\paren*{\ell\paren*{\frac{\sigma}{\sqrt{n}} \widehat{\Sigma}^{-1/2}Z} \leq r \st (X_i)_{i=1}^{n}}
\end{equation}
Combining (<ref>), (<ref>), (<ref>), and (<ref>) proves (<ref>). Combining (<ref>) and (<ref>) and appealing to the last item of Lemma <ref>, we obtain
\begin{equation}
\label{eq:pf_thm_3_10}
\sup_{k \in \N} Q_{\ell(Z_k)}(1 - \delta) = \phi_{n}^{-}(1 - \delta)
\end{equation}
Finally, recalling that (<ref>) holds for all $k \in \N$, and using (<ref>) and (<ref>) yields
\begin{equation*}
\inf_{\hat{w}} \sup_{P \in \mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^{2})} R_{n, \delta}(P, \hat{w}) \geq \sup_{k \in \N} Q_{\ell(Z_k)}(1 - \delta) = \phi_{n}^{-}(1 - \delta).
\end{equation*}
The upper bound follows from the following reasoning. (ADD PROOF).
§.§ Proof of Proposition <ref>
The proof is a simple application of the second-order delta method. Let $(Z_{n}, (X_i)_{i=1}^{n})$ be such that $(X_i)_{i=1}^{n} \sim P_{X}^{n}$ and $Z_{n} \mid (X_i)_{i=1}^{n} \sim \mathcal{N}(0, \frac{\sigma^2}{n} \widehat{\Sigma}_{n}^{-1})$ whenever $\widehat{\Sigma}_{n}$ is invertible and set $Z_{n} = 0$ otherwise. The conclusion of Theorem <ref> can then be rewritten as
\begin{equation*}
R^{*}_{n, \delta}(\mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^{2})) = Q_{\widetilde{\mathcal{E}}(Z_n)}(1-\delta),
\end{equation*}
with the additional specification that $\widetilde{\mathcal{E}}(Z_{n}) \defeq \infty$ whenever $(X_i)_{i=1}^{n}$ is such that $\widehat{\Sigma}_{n}$ is singular. Recall that $Z \sim \mathcal{N}(0, I_{d \times d})$. By a property of Gaussian vectors, we have that on the event that $\widehat{\Sigma}_{n}$ is invertible, $Z_{n} \overset{d}{=} \frac{\sigma}{\sqrt{n}} \widehat{\Sigma}_{n}^{-1/2} Z$.
By the weak law of large numbers and the continuous mapping theorem, we have $
\widehat{\Sigma}_{n}^{-1/2} \overset{p}{\to} \Sigma^{-1/2}$, so that an application of Slutsky's theorem yields $\sqrt{n} \cdot Z_{n} \overset{d}{\to} \mathcal{N}(0, \sigma^{2} \Sigma^{-1})$. Now by assumption, $\widetilde{\mathcal{E}}$ is twice differentiable at $0$ where its gradient vanishes by Lemma <ref>, and where its Hessian is given by $\nabla^{2} \widetilde{\mathcal{E}}(0) = \Exp\brack*{e''(\eta) XX^{T}} = 2 \alpha \Sigma$ by independence of $\eta$ and $X$. Therefore, by an application of the delta method, we obtain
\begin{equation*}
n \cdot \widetilde{\mathcal{E}}(Z_n) \overset{d}{\to}
\sigma^{2} \alpha \norm{Z}_2^2, \quad n \to \infty.
\end{equation*}
Since convergence in distribution implies convergence of quantiles at every continuity point of the limiting quantile function, and the CDF of $\sigma^{2}\alpha\norm{Z}_{2}^{2}$ is continuous and strictly increasing on $[0, \infty)$, we obtain the first equality in the proposition. The second statement follows from Lemma <ref>.
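As an illustration of the delta-method limit (an aside, not part of the proof), consider the squared error $e(t) = t^{2}/2$, for which $\widetilde{\mathcal{E}}(\Delta) = \Delta^{T}\Sigma\Delta/2$ exactly and $\alpha = 1/2$, so the claimed limit of $n \cdot \widetilde{\mathcal{E}}(Z_n)$ is $(\sigma^{2}/2)\chi^{2}_{d}$. A minimal Monte Carlo sketch, assuming numpy and scipy are available (all parameter values and identifiers are illustrative):
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
d, n, sigma, reps = 2, 2000, 1.3, 2000
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])

vals = np.empty(reps)
for r in range(reps):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    Sigma_hat = X.T @ X / n
    evals, evecs = np.linalg.eigh(Sigma_hat)
    Z = rng.normal(size=d)
    # Z_n has the law of (sigma/sqrt(n)) * Sigma_hat^{-1/2} Z given the X_i.
    Zn = (sigma / np.sqrt(n)) * (evecs @ ((evecs.T @ Z) / np.sqrt(evals)))
    vals[r] = n * 0.5 * Zn @ Sigma @ Zn    # n * E_tilde(Z_n) for e(t) = t^2/2

# Compare a quantile of the simulated values with the claimed limit law.
print(np.quantile(vals, 0.9), (sigma ** 2 / 2) * stats.chi2(df=d).ppf(0.9))
\end{verbatim}
For large $n$ the two printed numbers should be close.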
§.§ Proof of Lemma <ref>
We start with the first statement. Let $\delta \in (\eps_{n}, 1)$. By the monotone convergence theorem and the fact that $\widetilde{\mathcal{E}}(w) < \infty$ for all $w \in \R^{d}$ by assumption on $P_{X}$, we have
\begin{equation*}
\lim_{t \to \infty} F_{\widetilde{\mathcal{E}}(Z)}(t) = \lim_{t \to \infty} \Exp\brack*{\Prob\paren*{\widetilde{\mathcal{E}}(Z) \leq t \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\brace*{\rank(\widehat{\Sigma}_{n}) = d}}((X_i)_{i=1}^{n})} = 1 - \eps_{n}
\end{equation*}
Therefore, since $1-\delta < 1-\eps_{n}$, there exists $t \in \R$ such that $F_{\widetilde{\mathcal{E}}(Z)}(t) \geq 1-\delta$, so $Q_{\widetilde{\mathcal{E}}(Z)}(1-\delta) < \infty$. On the other hand, for all $t \in \R$, $F_{\widetilde{\mathcal{E}}(Z)}(t) < 1-\eps_{n}$, so for any $\delta \in [0, \eps_{n}]$, $Q_{\widetilde{\mathcal{E}}(Z)}(1-\delta) = \infty$. As for the lower bound on $\eps_{n}$, <cit.> proved that there exists a $w_{0} \in S^{d-1}$ such that $\rho(P_{X}) = \sup_{w \in \R^{d} \setminus \brace{0}} \Prob\paren*{\inp{w}{X} = 0} = \Prob\paren*{\inp{w_0}{X} = 0} < 1$. Therefore
\begin{equation*}
\eps_{n} = \Prob\paren*{\lambdamin(\widehat{\Sigma}_{n}) = 0} \geq \Prob\paren*{\bigcap_{i=1}^{n} \brace*{\inp{w_0}{X_i} = 0}} = \rho(P_{X})^{n}.
\end{equation*}
The upper bound on $\eps_{n}$ follows from the proof of <cit.>.
§.§ Proof of Proposition <ref>
In this proof we will let $Z \sim \mathcal{N}(0, \frac{\sigma^2}{n}\widetilde{\Sigma}^{-1}_{n})$, so that the minimax risk is given by $Q_{\norm{Z}_2^2}(1-\eps_{n} - \delta)/2$.
Define the random variables
\begin{equation*}
M((X_i)_{i=1}^{n}) \defeq \begin{dcases*}
\Exp\brack*{\norm{Z}_{2} \st (X_i)_{i=1}^{n}} & if $\rank(\widehat{\Sigma}_{n}) = d$ \\
\infty & otherwise
\end{dcases*}
\end{equation*}
\begin{equation*}
R((X_i)_{i=1}^{n}) \defeq \begin{dcases*}
\lambdamax\paren*{\frac{\sigma^2}{n}\widetilde{\Sigma}_{n}^{-1}} & if $\rank(\widehat{\Sigma}_{n}) = d$ \\
\infty & otherwise
\end{dcases*}
\end{equation*}
To simplify notation, we will write $M$ and $R$ only, and leave the dependence on $(X_i)_{i=1}^{n}$ implicit.
Upper bound. We have, for all $r \in \R$,
\begin{align*}
F_{\norm{Z}_2^{2}}(r^2) &= \Exp\brack*{\Prob\paren*{\norm{Z}_2 \leq r \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\brace*{\rank(\widehat{\Sigma}_{n}) = d}}((X_{i})_{i=1}^{n})} \\
&= \Exp\brack*{\brace*{1 - \Prob\paren*{\norm{Z}_{2} > r \st (X_i)_{i=1}^{n}}} \mathbbm{1}_{\brace*{\rank(\widehat{\Sigma}_{n}) = d}}((X_{i})_{i=1}^{n})} \\
&\geq \Exp\brack*{\brace*{1 - \exp\paren*{-\frac{\abs{r-M}^2}{2R}}}\mathbbm{1}_{[M, \infty)}(r) \mathbbm{1}_{\brace*{\rank(\widehat{\Sigma}_{n}) = d}}((X_{i})_{i=1}^{n})} \\
&= \Exp\brack*{\brace*{1 - \exp\paren*{-\frac{\abs{r-M}^2}{2R}}}\mathbbm{1}_{[0, r]}(M)} \\
&= \Prob\paren*{M \leq r} - \Exp\brack*{\exp\paren*{-\frac{\abs{r - M}^2}{2R}} \mathbbm{1}_{[0, r]}(M)} \eqdef L(r)
\end{align*}
where the inequality follows from the Gaussian concentration (Lemma <ref>), and where the expression inside the expectation is defined to be $0$ whenever $\rank(\widehat{\Sigma}) < d$. The penultimate equality follows from the fact that $\mathbbm{1}_{[M, \infty)}(r) \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n}) = \mathbbm{1}_{[0, r]}(M)$.
Now let $0 < c \leq 1$ and define $q \defeq Q_{M}(1 - \eps_{n} - c\delta)$. Then, recalling the definition of $W$ from the statement,
\begin{align*}
L(r + q) &\geq \Prob\paren*{M \leq q} + \Prob\paren*{M \in (q, r + q]} \\ &\quad - \Exp\brack*{\exp\paren*{-\frac{r^2}{2R}} \mathbbm{1}_{[0, q]}(M)} - \Exp\brack*{\underbrace{\exp\paren*{-\frac{\abs{r - M}^2}{2R}}}_{\textstyle \leq 1}\mathbbm{1}_{(q, r + q]}(M)} \\
&\geq \Prob\paren*{M \leq q} - \Exp\brack*{\exp\paren*{-\frac{r^2}{2R}} \mathbbm{1}_{[0, q]}(M)} \\
&\geq 1 - \eps_{n} - c\delta - \Exp\brack*{\exp\paren*{-\frac{r^2}{2R}}\mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&=\Exp\brack*{\brace*{1 - \exp\paren*{-\frac{r^{2}}{2R}}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} - c\delta \\
&= \Prob\paren*{\sqrt{\frac{2\sigma^{2}}{n}W} \leq r} - c\delta
\end{align*}
hence taking $r = \sqrt{\frac{2\sigma^{2}}{n}Q_{W}(1 - \eps_{n} - c \delta)}$ and $c = 1/2$ in the last display yields
\begin{equation*}
L\paren*{Q_{M}(1 - \eps_{n} - \delta/2) + \sqrt{\frac{2\sigma^{2}}{n}Q_{W}(1 - \eps_{n} - \delta/2)}} \geq 1 - \eps_{n} - \delta
\end{equation*}
And since $F_{\norm{Z}_2^2} \circ \varphi \geq L$ where $\varphi(r) = r^2$, we get by the second item of Lemma <ref> that $\varphi^{-1} \circ Q_{\norm{Z}_2^2} \leq L^{-}$. Applying $\varphi$ to both sides and using Lemma <ref>, we obtain
\begin{align*}
Q_{\norm{Z}_2^2}(1 - \eps_{n} - \delta) &\leq (L^{-}(1 - \eps_{n} - \delta))^{2} \\
&\leq \paren*{Q_{M}(1 - \eps_n - \delta/2) + \sqrt{\frac{2\sigma^{2}}{n}Q_{W}(1 - \eps_{n} - \delta/2)}}^2 \\
&\leq 2 \brack*{Q_{M^{2}}(1 - \eps_{n} - \delta/2) + \frac{2\sigma^{2}}{n}Q_{W}(1 - \eps_{n} - \delta/2)} \\
&\leq \frac{2\sigma^{2}}{n} \paren*{Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1-\eps_{n}-\delta/2) + 2 Q_{W}(1 - \eps_{n} - \delta/2)} \\
&\leq 4 \cdot \frac{\sigma^{2}}{n} \paren*{Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1-\eps_{n}-\delta/2) + Q_{W}(1 - \eps_{n} - \delta/2)},
\end{align*}
where in the penultimate inequality, we used the fact that $M^2 \leq \frac{\sigma^{2}}{n}\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}$ by Jensen's inequality.
Lower bound. For any $(X_i)_{i=1}^{n}$, define $v((X_i)_{i=1}^{n})$ to be a unit eigenvector corresponding to the smallest eigenvalue of $\widetilde{\Sigma}_{n}$. Then we have
\begin{align*}
F_{\norm{Z}_2^2}(r^2) &= \Exp\brack*{\Prob\paren*{\norm{Z}_2 \leq r \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&\leq \Exp\brack*{\Prob\paren*{\abs{\inp{v((X_i)_{i=1}^{n})}{Z}} \leq r \st (X_i)_{i=1}^{n}}\mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&\leq \Exp\brack*{\sqrt{1 - \exp\paren*{-\frac{2r^{2}}{\pi R}}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&\leq \Exp\brack*{\brace*{1 - \frac{1}{2}\exp\paren*{-\frac{2r^{2}}{\pi R}}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&= \frac{1}{2}(1 - \eps_{n}) + \frac{1}{2} \Exp\brack*{\brace*{1 - \exp\paren*{-\frac{2r^{2}}{\pi R}}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&= \frac{1}{2}\paren*{1 - \eps_{n} + \Prob\paren*{\sqrt{\frac{\pi \sigma^2}{2n}W} \leq r}} \eqdef U_1(r)
\end{align*}
where the third line follows from Lemma <ref>. Now let $\eps > 0$ and define
\begin{equation*}
r(\eps) \defeq \sqrt{\frac{\pi \sigma^2}{2n} Q_{W}(1 - \eps_{n} - 2\delta)} - \eps
\end{equation*}
Then
\begin{equation*}
U_1(r(\eps)) < \frac{1}{2}\paren*{1 - \eps_{n} + 1 - \eps_{n} - 2\delta} = 1 - \eps_{n} - \delta.
\end{equation*}
Since this holds for all $\eps > 0$, we obtain $U_{1}^{-}(1 - \eps_{n} - \delta) \geq r(0)$. Therefore,
\begin{equation*}
Q_{\norm{Z}_2^2}(1 - \eps_{n} - \delta) \geq (U_{1}^{-}(1 - \eps_{n} - \delta))^{2} \geq r^2(0) = \frac{\pi \sigma^2}{2n} Q_{W}(1 - \eps_{n} - 2\delta)
\end{equation*}
This finishes the proof of the first part of the lower bound.
For the second part of the lower bound, we also have by Gaussian concentration, and in particular Lemma <ref>,
\begin{align*}
F_{\norm{Z}_2^2}(r^2) &= \Exp\brack*{\Prob\paren*{\norm{Z}_2 \leq r \st (X_i)_{i=1}^{n}} \mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&\leq \Exp\brack*{\exp\paren*{-\frac{\abs{M - r}^{2}}{\pi M^2}} \mathbbm{1}_{[0, M]}(r)\mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n}) + \mathbbm{1}_{(M, \infty)}(r)\mathbbm{1}_{\rank(\widehat{\Sigma}) = d}((X_{i})_{i=1}^{n})} \\
&= \Exp\brack*{\exp\paren*{-\frac{\abs{M-r}^2}{\pi M^2}} \mathbbm{1}_{[r, \infty)}(M) + \mathbbm{1}_{[0, r)}(M)} \\
&= \Prob\paren*{M < r} + \Exp\brack*{\exp\paren*{-\frac{\abs{M-r}^2}{\pi M^2}} \mathbbm{1}_{[r, \infty)}(M)} \eqdef U_2(r)
\end{align*}
Let $a \in (0, 1)$, $c > 1$, and $q \defeq Q_{M}(1 - \eps_n - c\delta)$. Then we have,
\begin{align*}
U_2((1-a)q) &= \Exp\brack*{\underbrace{\exp\paren*{-\frac{\abs*{M - (1-a)q}^2}{\pi M^2}}}_{\textstyle \leq 1} \mathbbm{1}_{[(1-a)q, q)}(M)} + \Exp\brack*{\exp\paren*{-\frac{\abs*{M-(1-a)q}^2}{\pi M^2}} \mathbbm{1}_{[q, \infty)}(M)} \\
&\quad + \Prob\paren*{M < q} - \Prob\paren*{(1-a)q \leq M < q} \\
&\leq \Prob\paren*{M < q} + \Exp\brack*{\exp\paren*{-\frac{\abs{M-(1-a)q}^2}{\pi M^2}} \mathbbm{1}_{[q, \infty)}(M)} \\
&\leq \Prob\paren*{M < q} + \exp\paren*{-\frac{a^2}{\pi}} \Prob\paren*{q \leq M < \infty} \\
&\leq \Prob\paren*{M < q} + \exp\paren*{-\frac{a^2}{\pi}} (1 - \eps_{n} - \Prob\paren*{M < q}) \\
&\leq \paren*{1 - \exp\paren*{-\frac{a^2}{\pi}}} \Prob\paren*{M < q} + \exp\paren*{-\frac{a^2}{\pi}}(1 - \eps_{n}) \\
&\leq \paren*{1 - \exp\paren*{-\frac{a^2}{\pi}}} (1 - \eps_{n} - c\delta) + \exp\paren*{-\frac{a^2}{\pi}}(1-\eps_{n}) \\
&= 1 - \eps_{n} - \paren*{1 - \exp\paren*{-\frac{a^2}{\pi}}} c \delta \\
&< 1 - \eps_{n} - \delta
\end{align*}
where the last line follows from taking $a = 0.96$, and $c = 4$, and noticing that with these choices $c \paren*{1 - \exp\paren*{-\frac{a^2}{\pi}}} > 1$. Now since $F_{\norm{Z}_2^2} \circ \varphi \leq U_2$ where $\varphi(r) = r^2$, we get by the second item of Lemma <ref> and an application of Lemma <ref>,
\begin{align*}
Q_{\norm{Z}_2^2}(1 - \eps_{n} - \delta) &\geq (U_2^{-}(1 - \eps_{n} - \delta))^2 \\
&\geq \paren*{\frac{1}{25}Q_{M}(1 - \eps_{n} - 4\delta)}^2 \\
&= \frac{1}{625} Q_{M^{2}}(1 - \eps_{n} - 4\delta) \\
& \geq \frac{1}{625 (1 + \pi/2)} \frac{\sigma^2}{n} Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1 - \eps_{n} - 4\delta).
\end{align*}
Averaging the two lower bounds yields the result.
§.§ Are the bounds in Proposition <ref> tight?
We start with a general lemma.
Let $X \in [0, \infty]$ be a random variable and let $p \defeq \Prob\paren*{X = \infty}$. Assume that
\begin{equation*}
\lim_{x \to \infty} x^{\alpha}(1 - p - F_{X}(x)) = 0,
\end{equation*}
for some $\alpha > 0$. Then, for all $c > 1$, we have
\begin{equation*}
\liminf_{\delta \downarrow 0} \frac{Q_{X}(1 - p - \delta/c)}{Q_{X}(1 - p - \delta)} \leq c^{1/\alpha}.
\end{equation*}
Suppose not, and define the function
\begin{equation*}
r(\delta) \defeq \frac{Q_{X}(1 - p - \delta/c)}{Q_{X}(1 - p - \delta)}.
\end{equation*}
Then, setting $M \defeq \liminf_{\delta \downarrow 0} r(\delta) > c^{1/\alpha}$ and $\Delta \defeq M - c^{1/\alpha} > 0$, there exists $\delta_{*} \in (0, 1)$ such that
\begin{equation*}
\inf\brace*{r(\delta) \st \delta \in (0, \delta_*]} \geq M - \frac{\Delta}{2}
\end{equation*}
On the other hand, there exists $x_{*} \in [0, \infty)$ such that for all $x \geq x_{*}$, we have $x^{\alpha}(1 - p - F_{X}(x)) \leq 1$, or equivalently
\begin{equation*}
F_{X}(x) \geq \paren*{1 - p - \frac{1}{x^{\alpha}}} \mathbbm{1}_{[x_{*}, \infty)}(x)
\end{equation*}
This in turn implies, by Lemma (REFERENCE), that
\begin{equation*}
Q_{X}(1 - p - \delta) \leq \max\brace*{x_{*}, \frac{1}{\delta^{1/\alpha}}}
\end{equation*}
so that, for $\delta \in (0, x_{*}^{-\alpha}]$, we obtain
\begin{equation*}
Q_{X}(1 - p - \delta) \leq \frac{1}{\delta^{1/\alpha}}
\end{equation*}
Define $\delta_{0} \defeq \min\brace*{x_{*}^{-\alpha}, \delta_{*}} > 0$, and let $k \in \N$. Then by induction, we have
\begin{equation*}
Q_{X}(1 - p - \delta_{0}/c^{k}) = Q_{X}(1 - p - \delta_{0}/c^{k-1})\, r(\delta_{0}/c^{k-1}) \geq \dotsc \geq (M - \Delta/2)^k Q_{X}(1 - p - \delta_{0})
\end{equation*}
On the other hand, we have
\begin{equation*}
Q_{X}(1 - p - \delta_{0}/c^k) \leq \paren*{\frac{c^{k}}{\delta_{0}}}^{1/\alpha} = \frac{c^{k/\alpha}}{\delta_{0}^{1/\alpha}}
\end{equation*}
which leads to the contradiction
\begin{equation*}
0 < \delta_{0}^{1/\alpha} Q_{X}(1 - p - \delta_{0}) \leq \paren*{\frac{c^{1/\alpha}}{M - \Delta/2}}^{k} \to 0 \text{ as } k \to \infty,
\end{equation*}
where the last limit converges to $0$ since $M - \Delta/2 > M - \Delta = c^{1/\alpha}$.
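A simple numerical illustration of the lemma (not a proof): take a Pareto tail $\Prob(X > x) = x^{-\beta}$ for $x \geq 1$, so that $p = 0$; for any $\alpha < \beta$ the hypothesis holds, and the quantile ratio equals $c^{1/\beta} \leq c^{1/\alpha}$ for every $\delta$, so in particular the $\liminf$ bound is satisfied. The values below are illustrative.
\begin{verbatim}
beta, alpha, c = 3.0, 2.5, 4.0
Q = lambda u: (1.0 - u) ** (-1.0 / beta)   # exact quantile of the Pareto(beta) law

for delta in (1e-1, 1e-2, 1e-3):
    ratio = Q(1 - delta / c) / Q(1 - delta)
    print(delta, ratio, c ** (1 / alpha))  # ratio = c^(1/beta) <= c^(1/alpha)
\end{verbatim}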
Suppose that there exists an $\alpha > 0$ such that
\begin{equation*}
\lim_{t \to \infty} t^{\alpha} \Prob\paren*{\lambdamin(\widetilde{\Sigma}_{n}) < \frac{1}{t}} = 0
\end{equation*}
Then there exists a sequence $(\delta_k)_{k=1}^{\infty}$ in $(0, 1-\eps_n)$ satisfying $\delta_k \to 0$ as $k \to \infty$ such that
\begin{equation*}
R_{n, \eps_{n} + \delta_{k}}^{*}(\mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^2)) \asymp Q_{\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}}}(1 - \eps_n - \delta_{k}) + Q_{W}(1 - \eps_n - \delta_{k}).
\end{equation*}
We note that a sufficient condition for the hypothesis of Corollary <ref> to hold is the finiteness of $\Exp\brack*{\lambdamax(\widetilde{\Sigma}^{-1}_{n})^{\alpha} \mathbbm{1}_{[0, \infty)}(\lambdamax(\widetilde{\Sigma}_{n}^{-1}))}$ for some $\alpha > 0$.
§.§ Proof of Lemma <ref>
We start with the bounds on $Q_{\Tr(\widetilde{\Sigma}^{-1}_{n})}(1 - \delta)$, and in particular the lower bound. If $(a_i)_{i=1}^{d}$ is a finite sequence of positive real numbers, then applying the AM-GM inequality twice we obtain
\begin{equation*}
\frac{d}{\sum_{i=1}^{d}{\frac{1}{a_i}}} \leq \paren*{\prod_{i=1}^{d} a_i}^{1/d} \leq \frac{\sum_{i=1}^{d}a_{i}}{d} \implies \sum_{i=1}^{d} \frac{1}{a_i} \geq \frac{d^{2}}{\sum_{i=1}^{d}a_i}.
\end{equation*}
Using this, we have
\begin{equation*}
\Tr\paren*{\widetilde{\Sigma}_{n}^{-1}} = \sum_{i=1}^{d} \lambda_i(\widetilde{\Sigma}_{n}^{-1}) = \sum_{i=1}^{d} \frac{1}{\lambda_i(\widetilde{\Sigma}_{n})} \geq \frac{d^{2}}{\Tr(\widetilde{\Sigma}_{n})}.
\end{equation*}
Now, since $\Exp\brack{\Tr(\widetilde{\Sigma}_{n})} = d$, we have
\begin{equation*}
\Prob\paren*{\frac{d^2}{\Tr(\widetilde{\Sigma}_{n})} \leq t} = \Prob\paren*{\Tr(\widetilde{\Sigma}_{n}) \geq \frac{d^2}{t}} \leq \frac{\Exp\brack{\Tr(\widetilde{\Sigma}_{n})}}{d^{2}/t} = \frac{t}{d}.
\end{equation*}
Applying the second item of Lemma <ref>, we obtain the desired lower bound
\begin{equation*}
Q_{\Tr(\widetilde{\Sigma}_{n}^{-1})}(1 - \delta) \geq Q_{d^{2}/\Tr(\widetilde{\Sigma}_{n})}(1-\delta) \geq d \cdot (1-\delta).
\end{equation*}
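As a quick numerical sanity check of the AM-GM-type bound used above (an aside, not part of the proof), one can test $\Tr(S^{-1}) \geq d^{2}/\Tr(S)$ on a random symmetric positive definite matrix. A minimal sketch, assuming numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d = 6
B = rng.normal(size=(d, d))
S = B @ B.T + np.eye(d)              # random symmetric positive definite matrix

lhs = np.trace(np.linalg.inv(S))
rhs = d ** 2 / np.trace(S)
print(lhs, rhs, lhs >= rhs)          # expect True
\end{verbatim}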
The upper bound follows from the simple observation $\Tr(\widetilde{\Sigma}_{n}^{-1}) \leq d \cdot \lambdamax(\widetilde{\Sigma}^{-1}_{n})$. We now move to bounds on $Q_{W}(1-\delta)$, and we start with the lower bound. By definition, we have
\begin{equation*}
1-\delta \leq \Prob\paren*{W \leq Q_{W}(1-\delta)} = 1 - \Exp\brack*{\exp(-Q_{W}(1-\delta) \cdot \lambdamin(\widetilde{\Sigma}_{n}))},
\end{equation*}
hence, by Jensen's inequality
\begin{equation*}
\delta \geq \Exp\brack*{\exp(-Q_{W}(1-\delta) \cdot \lambdamin(\widetilde{\Sigma}_{n}))} \geq \exp\paren*{-Q_{W}(1-\delta) \cdot \Exp\brack*{\lambdamin(\widetilde{\Sigma}_{n})}},
\end{equation*}
and using the variational characterization of the smallest eigenvalue we get, for any $v \in S^{d-1}$,
\begin{equation*}
\Exp\brack*{\lambdamin(\widetilde{\Sigma}_{n})} = \Exp\brack*{\inf_{v \in S^{d-1}} \frac{1}{n} \sum_{i=1}^{n} \inp{v}{\Sigma^{-1/2}X_i}^{2}} \leq \Exp\brack*{\inp{v}{\Sigma^{-1/2}X}^{2}} = 1.
\end{equation*}
Therefore $Q_{W}(1-\delta) \geq \log(1/\delta)$ as desired. For the upper bound, let $q \defeq Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1-\delta/2)$ and define the event $A \defeq \brace*{\lambdamax(\widetilde{\Sigma}^{-1}_{n}) \leq q}$ which satisfies $\Prob\paren*{A} \geq 1-\delta/2$. Notice that
\begin{equation*}
\Prob\paren*{W \leq t} \geq \Exp\brack*{\brace*{1 - \exp\paren*{-\frac{t}{\lambdamax(\widetilde{\Sigma}^{-1}_{n})}}} \mathbbm{1}_{A}((X_i)_{i=1}^{n})} \geq (1-\delta/2)\paren*{1 - \exp(-t/q)}.
\end{equation*}
Taking $t \geq q \cdot \log(2/\delta)$ ensures that the above probability is at least $1-\delta$. By the minimality of the quantile, we get that $Q_{W}(1-\delta) \leq Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1-\delta/2) \cdot \log(2/\delta)$, which is the desired upper bound.
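A small Monte Carlo sketch of the lower bound $Q_{W}(1-\delta) \geq \log(1/\delta)$ (an aside, not part of the proof), using the representation $\Prob(W \leq t) = 1 - \Exp[\exp(-t \cdot \lambdamin(\widetilde{\Sigma}_{n}))]$ from the display above. For simplicity the sketch takes $X \sim \mathcal{N}(0, I_{d})$, so that $\widetilde{\Sigma}_{n}$ is just the sample covariance; it assumes numpy, and all parameter values are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n, d, reps, delta = 100, 5, 20_000, 0.05

lmins = np.empty(reps)
for r in range(reps):
    X = rng.normal(size=(n, d))
    lmins[r] = np.linalg.eigvalsh(X.T @ X / n)[0]   # smallest eigenvalue

ts = np.linspace(0.0, 50.0, 2001)
F_W = np.array([1.0 - np.mean(np.exp(-t * lmins)) for t in ts])
q_W = ts[np.argmax(F_W >= 1 - delta)]               # approximate Q_W(1 - delta)
print(q_W, np.log(1 / delta))                        # expect q_W >= log(1/delta)
\end{verbatim}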
§.§ Proof of Corollary <ref>
We claim that for all the allowed sample sizes,
\begin{equation*}
Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1-\delta/2) \leq 2.
\end{equation*}
Indeed, the restriction on the sample size $n$ is chosen in such a way that by the upper bound in Proposition <ref>, we have
\begin{equation*}
Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1-\delta/2) \leq \frac{1}{2}
\end{equation*}
Now if $1 - \lambdamin(\widetilde{\Sigma}_{n}) \leq 1/2$, then $\lambdamin(\widetilde{\Sigma}_{n}) \geq 1/2$ and $\lambdamax(\widetilde{\Sigma}_{n}^{-1}) = \lambdamin^{-1}(\widetilde{\Sigma}_{n}) \leq 2$. Therefore
\begin{equation*}
\Prob\paren*{\lambdamax(\widetilde{\Sigma}^{-1}_{n}) \leq 2} \geq \Prob\paren*{1 - \lambdamin(\widetilde{\Sigma}_{n}) \leq 1/2} \geq \Prob\paren*{1 - \lambdamin(\widetilde{\Sigma}_{n}) \leq Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1-\delta/2)} \geq 1-\delta/2.
\end{equation*}
which finishes the proof of the bound on $Q_{\lambdamax(\widetilde{\Sigma}_{n}^{-1})}(1-\delta/2)$. Now appealing to Lemma <ref> proves the result.
§.§ Proof of Proposition <ref>
Using Lemma 2.5 in [Adil et al., 2023], we have for the $p$-th power error $e(t) = \abs{t}^{p}/[p(p-1)]$,
\begin{equation*}
\widetilde{\mathcal{E}}(\Delta) = \Exp\brack*{e(\inp{\Delta}{X} + \eta)} - \Exp\brack*{e(\eta)} \geq \frac{1}{8(p-1)} \Delta^{T}\Exp\brack*{e''(\eta) XX^{T}}\Delta.
\end{equation*}
Since $e''(t) = \abs{t}^{p-2}$, and $\eta$ and $X$ are independent,
\begin{equation*}
\widetilde{\mathcal{E}}(\Delta) \geq \frac{1}{8(p-1)} \cdot m(p-2) \sigma^{p-2} \Delta^{T} \Sigma \Delta.
\end{equation*}
Therefore, by Theorem <ref>,
\begin{equation*}
R^{*}_{n, \delta}(\mathcal{P}_{\text{Gauss}}(P_{X}, \sigma^{2})) = Q_{\widetilde{\mathcal{E}}(Z)}(1-\delta) \geq \frac{m(p-2) \sigma^{p-2}}{8(p-1)} \cdot \frac{\sigma^{2}}{n} Q_{\norm{A}_{2}^{2}}(1 - \delta)
\end{equation*}
where $A \sim \mathcal{N}(0, \widetilde{\Sigma}_{n}^{-1})$. Now noting that $\frac{\sigma^2}{n} Q_{\norm{A}_2^{2}}(1-\delta)$ is the minimax risk under the square error, applying Proposition <ref> and Lemma <ref>, and using the constraint on $\delta$, we obtain the desired lower bound.
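The pointwise bound from Lemma 2.5 of [Adil et al., 2023] that drives this proof can also be checked numerically on a grid (an aside, not part of the proof). A minimal sketch, assuming numpy and an illustrative value of $p$:
\begin{verbatim}
import numpy as np

p = 3.5
e    = lambda t: np.abs(t) ** p / (p * (p - 1))
e_p  = lambda t: np.sign(t) * np.abs(t) ** (p - 1) / (p - 1)   # e'
e_pp = lambda t: np.abs(t) ** (p - 2)                          # e''

t = np.linspace(-5, 5, 401)[:, None]
s = np.linspace(-5, 5, 401)[None, :]
lhs = e(t) - e(s) - e_p(s) * (t - s)
rhs = e_pp(s) * (t - s) ** 2 / (8 * (p - 1))
print(np.all(lhs >= rhs - 1e-12))        # expect True on the whole grid
\end{verbatim}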
§ PROOFS OF SECTION <REF>
§.§ Proof of Theorem <ref>
Fix a distribution $P \in \mathcal{P}_{2}(P_{X}, \sigma^2)$. We will prove an upper bound on the risk of the proposed procedure under $P$. We follow the approach developed by Lugosi and Mendelson, 2019. Define
\begin{equation*}
\phi(w) = \max_{v \in \R^{d}} \psi_{k}(w, v),
\end{equation*}
and note that by definition of $\hat{w}_{n, k}$, we have
\begin{equation}
\label{eq:g_0_1}
\psi_{k}(\hat{w}_{n, k}, w^{*})\leq \phi(\hat{w}_{n, k}) \leq \phi(w^{*}).
\end{equation}
The key idea of the proof is to show that $\norm{\hat{w}_{n, k} - w^{*}}_{\Sigma}$ is small, by simultaneously showing that
* For all $w \in \R^{d}$, if $\norm{w - w^{*}}_{\Sigma}$ is large, then so is $\psi_{k}(w, w^{*})$,
* $\phi(w^{*})$ is small.
These statements, combined with (<ref>), will show that $\norm{\hat{w}_{n,k} - w^{*}}_{\Sigma}$ is indeed small. Define
\begin{equation*}
\Delta(\delta) \defeq 50 \sqrt{\frac{\sigma^{2} [d + \log(4/\delta)]}{n}}
\end{equation*}
All the following lemmas are stated under the conditions of Theorem <ref>.
The first step of the proof is a simple application of Lemma <ref>.
\begin{equation*}
\Prob\paren*{\sup_{\norm{v}_{\Sigma} \leq 1} n^{-1}\varphi_{k}\brack*{(\inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v})_{i=1}^{n}} > \Delta(\delta)} \leq \delta/2
\end{equation*}
For $v \in \R^{d}$ such that $\norm{v}_{\Sigma} \leq 1$ and $i \in [n]$, define
\begin{equation*}
Z_{i, v} \defeq \frac{1}{n} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v} = \frac{1}{n} \xi_i \inp{X_{i}}{v}
\end{equation*}
Our aim is to apply Lemma <ref>, so we make the necessary computations here. We have
\begin{align*}
\Exp\brack*{Z_{i, v}} &= \frac{1}{n} \inp{\Exp\brack*{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}}{v} = \frac{1}{n}\inp{\nabla E(w^{*})}{v} = 0, \\
\sup_{\norm{v}_{\Sigma} \leq 1} \sum_{i=1}^{n} \Exp\brack*{Z_{i, v}^{2}} &= \frac{1}{n} \sup_{\norm{v}_{\Sigma} = 1} \Exp\brack*{\xi^{2}\inp{X}{v}^{2}} \leq \frac{\sigma^{2}}{n}.
\end{align*}
where the last inequality follows from the assumption $\Exp\brack*{\xi^{2} \mid X} \leq \sigma^{2}$. Now, for independent Rademacher variables $(\eps_i)_{i=1}^{n}$, we have
\begin{align*}
\Exp\brack*{\sup_{\norm{v}_{\Sigma}=1} \sum_{i=1}^{n} \eps_i Z_{i, v}}
&= \Exp\brack*{\sup_{\norm{v}_{\Sigma} = 1} \inp*{\frac{1}{n}\sum_{i=1}^{n} \eps_{i} \xi_i X_{i}}{v}} \\
&= \Exp\brack*{\norm*{\frac{1}{n} \sum_{i=1}^{n} \eps_i \xi_i X_i}_{\Sigma^{-1}}} \\
&\leq \Exp\brack*{\norm*{\frac{1}{n} \sum_{i=1}^{n} \eps_i \xi_i X_i}_{\Sigma^{-1}}^{2}}^{1/2} \\
&= \sqrt{\frac{\Exp\brack{\xi^{2} \norm{X}_{\Sigma^{-1}}^{2}}}{n}} \leq \sqrt{\frac{\sigma^{2} d}{n}}.
\end{align*}
where again we have used the assumption $\Exp\brack*{\xi^{2} \mid X} \leq \sigma^{2}$. Recalling that $k = 8\log(2/(\delta/2))$ from the statement of the theorem and applying Lemma <ref> with the above constants yields the result.
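As an aside, the variance computation above is easy to confirm by simulation: one has $\Exp\norm{\frac{1}{n}\sum_{i} \eps_i \xi_i X_i}_{\Sigma^{-1}}^{2} = \Exp\brack{\xi^{2}\norm{X}_{\Sigma^{-1}}^{2}}/n \leq \sigma^{2} d / n$, with equality when $\xi \sim \mathcal{N}(0, \sigma^{2})$ is independent of $X$. A minimal Monte Carlo sketch, assuming numpy (the noise model and all parameter values are illustrative choices):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
n, d, sigma, reps = 50, 4, 0.8, 20_000
Sigma = np.diag([0.5, 1.0, 2.0, 4.0])
Sig_inv = np.linalg.inv(Sigma)

total = 0.0
for _ in range(reps):
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    xi = sigma * rng.normal(size=n)            # E[xi^2 | X] = sigma^2
    eps = rng.choice([-1.0, 1.0], size=n)      # Rademacher signs
    g = (eps * xi) @ X / n
    total += g @ Sig_inv @ g
print(total / reps, sigma ** 2 * d / n)        # approximately equal here
\end{verbatim}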
From this result, we can deduce the following estimate, which will help us bound $\phi(w^{*})$ later on.
For any $r \in (0, \infty)$,
\begin{equation*}
\Prob\paren*{\sup_{\norm{v - w^{*}}_{\Sigma} < r} \psi_{k}(w^{*}, v) > r \cdot \Delta(\delta)} \leq \delta/2.
\end{equation*}
We have
\begin{align*}
\sup_{\norm{v - w^{*}}_{\Sigma} < r} \psi_{k}(w^{*}, v) &= \sup_{\norm{v - w^{*}}_{\Sigma} < r} n^{-1} \varphi_{k}\brack*{\paren*{e(\inp{w^{*}}{X_{i}} - Y_{i}) - e(\inp{v}{X_{i}} - Y_{i})}_{i=1}^{n}} \\
&= \sup_{\norm{v - w^{*}}_{\Sigma} < r} n^{-1} \varphi_{k}\brack*{\paren*{-\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}} - \frac{1}{2}\inp{X_i}{v - w^{*}}^{2}}_{i=1}^{n}} \\
&\leq \sup_{\norm{v - w^{*}}_{\Sigma} < r} n^{-1} \varphi_{k}\brack*{\paren*{-\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}}}_{i=1}^{n}} \\
&= \sup_{\norm{v - w^{*}}_{\Sigma} < r} r \cdot n^{-1} \varphi_{k}\brack*{\paren*{\inp*{\nabla e(\inp*{w^{*}}{X_i} - Y_i)}{\frac{v - w^{*}}{r}}}_{i=1}^{n}} \\
&= r \cdot \sup_{\norm{v}_{\Sigma} < 1} n^{-1} \varphi_{k}\brack*{\paren*{\inp{\nabla e(\inp*{w^{*}}{X_i} - Y_i)}{v}}_{i=1}^{n}}
\end{align*}
where the first line is by definition, the second holds since $e$ is quadratic so its second order Taylor expansion is exact, the third by the third item of Lemma <ref>, and the fourth by the first item of Lemma <ref>. Applying Lemma <ref> to the last line yields the result.
The key technical novelty of this proof is the following lemma, which uses our new results Proposition <ref> and Lemma <ref>.
Let $r \in [8\Delta(\delta), \infty)$. Then
\begin{equation*}
\Prob\paren*{\inf_{\norm{v - w^{*}}_{\Sigma} \geq r} \psi_{k}(v, w^{*}) < \frac{r^{2}}{8} - r\Delta(\delta)} \leq \delta
\end{equation*}
We start with the case $\norm{v - w^{*}}_{\Sigma} = r$. We have
\begin{align*}
\inf_{\norm{v - w^{*}}_{\Sigma} = r} \psi_{k}(v, w^{*}) &= \inf_{\norm{v - w^{*}}_{\Sigma} = r} n^{-1} \varphi_{k}\brack*{\paren*{e(\inp{v}{X_{i}} - Y_{i}) - e(\inp{w^{*}}{X_{i}} - Y_{i})}_{i=1}^{n}} \\
&= \inf_{\norm{v - w^{*}}_{\Sigma} = r} n^{-1} \varphi_{k}\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}} + \frac{1}{2} \inp{v - w^{*}}{X_i}^{2} }_{i=1}^{n}} \\
&= \inf_{v \in S^{d-1}} n^{-1} \varphi_{k}\brack*{\paren*{r \cdot \inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{\Sigma^{-1/2}v} + \frac{r^{2}}{2} \inp{v}{\Sigma^{-1/2}X_i}^{2} }_{i=1}^{n}}.
\end{align*}
Define $\widetilde{X}_{i} = \Sigma^{-1/2}X_{i}$, and $Z_{i, v} \defeq \inp{v}{\widetilde{X}_i}^{2}$ for $(i, v) \in [n] \times S^{d-1}$. Then we have by Lemma <ref>,
\begin{align*}
&\inf_{\norm{v - w^{*}}_{\Sigma} = r} \psi_{k}(v, w^{*}) \\
&\geq r \cdot \inf_{v \in S^{d-1}} n^{-1} \varphi_k\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{\Sigma^{-1/2}v}}_{i=1}^{n}} + \frac{r^{2}}{2} \cdot \inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} \\
&= \frac{r^{2}}{2} \cdot \inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} - r\cdot \sup_{\norm{v}_{\Sigma} = 1} n^{-1} \varphi_k\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v}}_{i=1}^{n}}
\end{align*}
The second term is bounded with probability $1-\delta/2$ by $r \cdot \Delta(\delta)$ by Lemma <ref>. For the first term, the restriction on the sample size in Theorem <ref> is chosen such that by Proposition <ref>, with probability at least $1 - \delta^{2}/2 \geq 1-\delta/2$
\begin{equation*}
\inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} \geq \frac{1}{4}
\end{equation*}
Therefore, with probability at least $1-\delta$
\begin{equation*}
\inf_{\norm{v - w^{*}}_{\Sigma} = r} \psi_{k}(v, w^{*}) \geq \frac{r^{2}}{8} - r \Delta(\delta).
\end{equation*}
We now extend this to all vectors $w \in \R^{d}$ such that $\norm{w - w^{*}}_{\Sigma} \geq r$. On the same event, if $\norm{w - w^{*}}_{\Sigma} = R > r$, then $v \defeq w^{*} + \frac{r}{R} (w-w^{*})$ satisfies $\norm{v - w^{*}}_{\Sigma} = r$, and
\begin{align*}
\psi_k(w, w^{*}) &= n^{-1} \varphi_{k}\paren*{(\inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{w - w^{*}} + \frac{1}{2} \inp{w - w^{*}}{X_i}^{2})_{i=1}^{n}} \\
&= n^{-1} \varphi_{k}\brack*{\paren*{ \frac{R}{r} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v - w^{*}} + \frac{R^2}{r^2}\frac{1}{2} \inp{v - w^{*}}{X_i}^{2}}_{i=1}^{n}} \\
&\geq n^{-1} \varphi_{k}\brack*{\paren*{ \frac{R}{r} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v-w^{*}} + \frac{R}{r}\frac{1}{2} \inp{v - w^{*}}{X_i}^{2}}_{i=1}^{n}} \\
&= \frac{R}{r} \cdot \psi_{k}(v, w^{*}) \\
&\geq \psi_{k}(v, w^{*})
\end{align*}
where the first inequality follows from the fact that $R/r > 1$, $\inp{v - w^{*}}{X_{i}}^{2} \geq 0$, and Lemma <ref>, and the second inequality follows from the fact that, by the condition on $r$, we have $\psi_{k}(v, w^{*}) \geq 0$ on the event we are considering.
We are now ready to state the proof of Theorem <ref>. Set $r \defeq 20 \Delta(\delta)$, and recall from (<ref>) that
\begin{align*}
\psi_{k}(\hat{w}_{n, k}, w^{*}) \leq \phi(w^{*}) &= \sup_{v \in \R^{d}} \psi_k(w^{*}, v) \\
&= \max\brace*{\sup_{\norm{v - w^{*}}_{\Sigma} \geq r} \psi_{k}(w^{*}, v), \sup_{\norm{v - w^{*}}_{\Sigma} < r} \psi_{k}(w^{*}, v)} \\
&= \max\brace*{- \inf_{\norm{v - w^{*}}_{\Sigma} \geq r} \psi_{k}(v, w^{*}), \sup_{\norm{v - w^{*}}_{\Sigma} < r} \psi_{k}(w^{*}, v)},
\end{align*}
where the last line uses the fact that $\psi_k(w, v) = -\psi_{k}(v, w)$.
Now by combining Corollary <ref> and Lemma <ref>, we have with probability $1-\delta$ that the first term in the above maximum is negative, while the second is bounded by $r \cdot \Delta(\delta) = 20 \Delta^{2}(\delta)$. On the other hand, we have on the same event by Lemma <ref> that
\begin{equation*}
\inf_{\norm{v - w^{*}}_{\Sigma} \geq r} \psi_{k}(v, w^{*}) \geq \frac{r^{2}}{8} - r \Delta(\delta) = 30 \Delta^{2}(\delta)
\end{equation*}
Therefore, we conclude that with probability at least $1-\delta$
\begin{equation*}
\norm{\hat{w}_{n, k} - w^{*}}_{\Sigma} \leq 20 \Delta(\delta).
\end{equation*}
Finally, noticing that this implies, with probability at least $1-\delta$,
\begin{equation*}
\mathcal{E}(\hat{w}_{n, k}) = \frac{1}{2}\norm{\hat{w}_{n,k} - w^{*}}_{\Sigma}^{2} \leq 20^{2} \Delta^{2}(\delta),
\end{equation*}
finishes the proof.
§.§ Proof of Theorem <ref>
The high-level idea behind the proof of Theorem <ref> is similar to that of Theorem <ref>, but with a few more challenges. Fix $P \in \mathcal{P}_{p}(P_{X}, \sigma^{2}, \mu)$. We prove an upper bound on the risk of $\hat{w}_{n, k}$ under this fixed $P$. Define $H \defeq \nabla^{2} E(w^{*})$, $c \defeq \essinf(\Exp\brack*{\abs{\xi}^{p-2}\mid X})$, $C \defeq \esssup(\Exp\brack*{\abs{\xi}^{2(p-1)} \mid X})$. Note that
\begin{equation}
\label{eq:tired_1}
H = \Exp\brack*{\abs{\xi}^{p-2} XX^{T}} \succeq c \cdot \Sigma
\end{equation}
and define
\begin{equation*}
\Delta_{p}(\delta) \defeq 50 \sqrt{\frac{m(2p-2)}{m(p-2)} \cdot \frac{\sigma^{p}[d + \log(4/\delta)]}{n}}
\end{equation*}
Our first statement is an analogue of Lemma <ref>.
\begin{equation*}
\Prob\paren*{\sup_{\norm{v}_{H} \leq 1} n^{-1}\varphi_{k}\brack*{(\inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v})_{i=1}^{n}} > \Delta_{p}(\delta)} \leq \delta/2
\end{equation*}
For $v \in \R^{d}$ such that $\norm{v}_{H} \leq 1$ and $i \in [n]$, define
\begin{equation*}
Z_{i, v} \defeq \frac{1}{n} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v}
\end{equation*}
Our aim is to apply Lemma <ref>, so we make the necessary computations here. We have
\begin{align*}
\Exp\brack*{Z_{i, v}} &= \frac{1}{n} \inp{\Exp\brack*{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}}{v} = \frac{1}{n}\inp{\nabla E(w^{*})}{v} = 0
\end{align*}
\begin{align*}
\sup_{\norm{v}_{H} \leq 1} \sum_{i=1}^{n} \Exp\brack*{Z_{i, v}^{2}} &= \frac{1}{n} \sup_{\norm{v}_{H} \leq 1} \Exp\brack*{\xi^{2(p-1)}\inp{X}{v}^{2}} \\
&\leq \frac{1}{n} \sup_{\norm{v}_{\Sigma} = 1} \frac{\esssup\paren*{\Exp\brack*{\abs{\xi}^{2(p-1)} \mid X}}}{\essinf\paren*{\Exp\brack*{\abs{\xi}^{p-2} \mid X}}} \Exp\brack*{\inp{X}{v}^{2}} \\
&\leq \frac{m(2p-2)}{m(p-2)} \frac{\sigma^{p}}{n}
\end{align*}
where the first inequality follows from (<ref>), and the second from the assumption on the class of distributions. Now, for independent Rademacher variables $(\eps_i)_{i=1}^{n}$, we have
\begin{align*}
\Exp\brack*{\sup_{\norm{v}_{H}=1} \sum_{i=1}^{n} \eps_i Z_{i, v}}
&= \Exp\brack*{\sup_{\norm{v}_{H} = 1} \inp*{\frac{1}{n}\sum_{i=1}^{n} \eps_{i} \nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v}} \\
&= \Exp\brack*{\norm*{\frac{1}{n} \sum_{i=1}^{n} \eps_i \nabla e(\inp{w^{*}}{X_i} - Y_{i})}_{H^{-1}}} \\
&\leq \Exp\brack*{\norm*{\frac{1}{n} \sum_{i=1}^{n} \eps_i \nabla e(\inp{w^{*}}{X_i} - Y_{i})}_{H^{-1}}^{2}}^{1/2} \\
&= \sqrt{\frac{\Exp\brack{\xi^{2(p-1)} \norm{X}_{H^{-1}}^{2}}}{n}} \\
&\leq \sqrt{\frac{\esssup\paren*{\Exp\brack*{\abs{\xi}^{2(p-1)} \mid X}}}{\essinf\paren*{\Exp\brack*{\abs{\xi}^{p-2} \mid X}}} \cdot \frac{\Exp\brack*{\norm{X}_{\Sigma^{-1}}^{2}}}{n}} \\
&\leq \sqrt{\frac{m(2p-2)}{m(p-2)} \cdot \frac{\sigma^{p} d}{n}}
\end{align*}
where again we have used the assumption on the class of distributions, and where we used (<ref>) in the penultimate line. Recalling that $k = 8\log(2/(\delta/2))$ from the statement of the theorem and applying Lemma <ref> with the above constants yields the result.
The second statement is also similar to Corollary <ref>. The additional challenge here is that the second-order Taylor expansion is not exact.
\begin{equation*}
\Prob\paren*{\sup_{\norm{v - w^{*}}_{H} < r} \psi_{k}(w^{*}, v) > r \cdot \Delta_{p}(\delta)} \leq \delta/2.
\end{equation*}
By Lemma 2.5 in [Adil et al., 2023], we have that, for all $t, s \in \R$,
\begin{equation*}
e(t) - e(s) - e'(s)(t-s) \geq \frac{1}{8(p-1)} e''(s) (t-s)^{2}
\end{equation*}
Therefore
\begin{align*}
\sup_{\norm{v - w^{*}}_{H} < r} \psi_{k}(w^{*}, v) &= \sup_{\norm{v - w^{*}}_{H} < r} n^{-1} \varphi_{k}\brack*{\paren*{e(\inp{w^{*}}{X_{i}} - Y_{i}) - e(\inp{v}{X_{i}} - Y_{i})}_{i=1}^{n}} \\
&\leq \sup_{\norm{v - w^{*}}_{H} < r} n^{-1} \varphi_{k}\brack*{\paren*{-\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}} - \frac{\abs{\xi_i}^{p-2}}{8(p-1)}\inp{X_i}{v - w^{*}}^{2}}_{i=1}^{n}} \\
&\leq \sup_{\norm{v - w^{*}}_{H} < r} n^{-1} \varphi_{k}\brack*{\paren*{-\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}}}_{i=1}^{n}} \\
&= \sup_{\norm{v - w^{*}}_{H} < r} r \cdot n^{-1} \varphi_{k}\brack*{\paren*{\inp*{\nabla e(\inp*{w^{*}}{X_i} - Y_i)}{\frac{v - w^{*}}{r}}}_{i=1}^{n}} \\
&= r \cdot \sup_{\norm{v}_{H} < 1} n^{-1} \varphi_{k}\brack*{\paren*{\inp{\nabla e(\inp*{w^{*}}{X_i} - Y_i)}{v}}_{i=1}^{n}}
\end{align*}
where the second line is by the inequality cited above, and the third by dropping negative terms. Applying Lemma <ref> to the last line finishes the proof.
It remains to show the analogue of Lemma <ref>. This is the most technical part of the proof.
Let $r \in [32(p-1)\Delta_{p}(\delta), \infty)$. Then
\begin{equation*}
\Prob\paren*{\inf_{\norm{v - w^{*}}_{H} \geq r} \psi_{k}(v, w^{*}) < \frac{r^{2}}{32(p-1)} - r\Delta_{p}(\delta)} \leq \delta
\end{equation*}
We start with the case $\norm{v - w^{*}}_{H} = r$. We have, using the quoted lemma in the proof of Corollary <ref>,
\begin{align*}
&\inf_{\norm{v - w^{*}}_{H} = r} \psi_{k}(v, w^{*}) \\
&= \inf_{\norm{v - w^{*}}_{H} = r} n^{-1} \varphi_{k}\brack*{\paren*{e(\inp{v}{X_{i}} - Y_{i}) - e(\inp{w^{*}}{X_{i}} - Y_{i})}_{i=1}^{n}} \\
&\geq \inf_{\norm{v - w^{*}}_{H} = r} n^{-1} \varphi_{k}\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v - w^{*}} + \frac{\abs{\xi_i}^{p-2}}{8(p-1)} \inp{v - w^{*}}{X_i}^{2} }_{i=1}^{n}} \\
&= \inf_{v \in S^{d-1}} n^{-1} \varphi_{k}\brack*{\paren*{r \cdot \inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{H^{-1/2}v} + r^{2} \cdot \frac{\abs{\xi_i}^{p-2}}{8(p-1)} \inp{v}{H^{-1/2}X_i}^{2} }_{i=1}^{n}}.
\end{align*}
Now define the random vector $W \defeq \abs{\xi}^{(p-2)/2} \cdot X$, whose (uncentered) covariance matrix is $H$. Further define $\widetilde{W}_{i} \defeq H^{-1/2} W_i$, and $Z_{i, v} \defeq \inp{v}{\widetilde{W}_i}^{2}$ for $(i, v) \in [n] \times S^{d-1}$. Then we have by Lemma <ref>,
\begin{align*}
&\inf_{\norm{v - w^{*}}_{H} = r} \psi_{k}(v, w^{*}) \\
&\geq r \cdot \inf_{v \in S^{d-1}} n^{-1} \varphi_k\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{H^{-1/2}v}}_{i=1}^{n}} + \frac{r^{2}}{8(p-1)} \cdot \inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} \\
&= \frac{r^{2}}{8(p-1)} \cdot \inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} - r\cdot \sup_{\norm{v}_{H} = 1} n^{-1} \varphi_k\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_i)}{v}}_{i=1}^{n}}
\end{align*}
The second term is bounded with probability $1-\delta/2$ by $r \cdot \Delta_{p}(\delta)$ by Lemma <ref>. For the first term, we claim
that the restriction on the sample size in Theorem <ref> is chosen such that by Proposition <ref>, with probability at least $1 - \delta^{2}/2 \geq 1-\delta/2$
\begin{equation}
\label{eq:exh}
\inf_{v \in S^{d-1}} n^{-1} \sum_{i=1}^{n - 2k} Z_{i, v}^{*} \geq \frac{1}{4}
\end{equation}
Let us show why this is true. Let $P_{W}$ be the distribution of $W$, and notice that
\begin{align*}
\lambdamax(S(P_{W})) + 1 &= \sup_{v \in S^{d-1}} \Exp\brack*{\norm{W}^{2}_{H^{-1}} \inp{v}{H^{-1/2} W}^{2}} \\
&= \sup_{v \in S^{d-1}} \Exp\brack*{\abs{\xi}^{2(p-2)} \cdot \norm{X}^{2}_{H^{-1}} \inp{H^{-1/2} v}{X}^{2}} \\
&\leq \esssup(\Exp\brack{\abs{\xi}^{2(p-2)} \mid X}) \cdot \sup_{\norm{v}_{H} = 1} \Exp\brack*{\norm{X}^{2}_{H^{-1}} \inp{v}{X}^{2}} \\
&\leq \frac{C^{(p-2)/(p-1)}}{c^{2}} \cdot \sup_{\norm{v}_{\Sigma} = 1} \Exp\brack*{\norm{X}^{2}_{\Sigma^{-1}} \inp{v}{X}^{2}} \\
&\leq \paren*{\frac{m(2p-2) \sigma^{p}}{m(p-2)}}^{\frac{p-2}{p-1}} \frac{1}{\mu^{p/(p-1)}} \cdot [\lambdamax(S(P_{X})) + 1]
\end{align*}
where the fourth line follows by Jensen's inequality, and the last line by the properties of the class of distributions. Note that this upper bound holds uniformly over all members of $\mathcal{P}_{p}(P_{X}, \sigma^{2}, \mu)$. Through a very similar argument, one may show
\begin{equation*}
R(P_{W}) + 1 \leq \paren*{\frac{m(2p-2) \sigma^{p}}{m(p-2)}}^{\frac{p-2}{p-1}} \frac{1}{\mu^{p/(p-1)}} \cdot [R(P_{X}) + 1]
\end{equation*}
It is then straightforward to apply Proposition <ref> under the above bounds and the sample size restriction and conclude that the claim (<ref>) is true.
Therefore, with probability at least $1-\delta$
\begin{equation*}
\inf_{\norm{v - w^{*}}_{H} = r} \psi_{k}(v, w^{*}) \geq \frac{r^{2}}{32(p-1)} - r \Delta_{p}(\delta).
\end{equation*}
We now extend this to all vectors $w \in \R^{d}$ such that $\norm{w - w^{*}}_{H} \geq r$. On the same event, if $\norm{w - w^{*}}_{H} = R > r$, then $v \defeq w^{*} + \frac{r}{R} (w-w^{*})$ satisfies $\norm{v - w^{*}}_{H} = r$, and
\begin{align*}
\psi_k(w, w^{*}) &\geq n^{-1} \varphi_{k}\brack*{\paren*{\inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{w - w^{*}} + \frac{\abs{\xi_i}^{p-2}}{8(p-1)} \inp{w - w^{*}}{X_i}^{2}}_{i=1}^{n}} \\
&= n^{-1} \varphi_{k}\brack*{\paren*{ \frac{R}{r} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v - w^{*}} + \frac{R^2}{r^2}\frac{\abs{\xi_i}^{p-2}}{8(p-1)} \inp{v - w^{*}}{X_i}^{2}}_{i=1}^{n}} \\
&\geq n^{-1} \varphi_{k}\brack*{\paren*{ \frac{R}{r} \inp{\nabla e(\inp{w^{*}}{X_i} - Y_{i})}{v-w^{*}} + \frac{R}{r}\frac{\abs{\xi_i}^{p-2}}{8(p-1)} \inp{v - w^{*}}{X_i}^{2}}_{i=1}^{n}} \\
&\geq \frac{R}{r} \cdot \paren*{\frac{r^{2}}{32(p-1)} - r\Delta_{p}(\delta)} \\
&\geq \frac{r^{2}}{32(p-1)} - r\Delta_{p}(\delta)
\end{align*}
where the first inequality follows from the inequality of [Adil et al., 2023] quoted above together with the monotonicity of $\varphi_{k}$, the second from the facts that $R/r > 1$, $\inp{v - w^{*}}{X_{i}}^{2} \geq 0$, and Lemma <ref>, the third from the homogeneity of $\varphi_{k}$ and the lower bound established in the first part of the proof, which holds on the event we are considering, and the last from $R/r > 1$ together with the fact that, by the condition on $r$, we have $\frac{r^{2}}{32(p-1)} - r\Delta_{p}(\delta) \geq 0$.
Finally, we present the main proof. For the first step, we localize $\hat{w}_{n,k}$ using the lemmas we just proved. In particular, let $r \defeq 96(p-1)\Delta_{p}(\delta)$. Then following the same argument as in the proof of Theorem <ref>, we obtain that with probability at least $1-\delta$
\begin{equation*}
\psi_k(\hat{w}_{n, k}, w^{*}) \leq r \cdot \Delta_{p}(\delta) = 96(p-1)\Delta^{2}_{p}(\delta)
\end{equation*}
On the other hand, and on the same event,
\begin{equation*}
\inf_{\norm{v - w^{*}}_{H} \geq r} \psi_{k}(v, w^{*}) \geq \frac{r^{2}}{32(p-1)} - r \Delta_p(\delta) = 192 (p-1) \Delta^{2}_{p}(\delta).
\end{equation*}
Since $192(p-1)\Delta^{2}_{p}(\delta) > 96(p-1)\Delta^{2}_{p}(\delta)$, we conclude that, on the same event,
\begin{equation*}
\norm{\hat{w}_{n, k} - w^{*}}_{H} \leq 96(p-1)\Delta_{p}(\delta)
\end{equation*}
It remains to bound the excess expected error. By Lemma 2.5 in [Adil et al., 2023], we have the upper bound
\begin{equation*}
e(t) - e(s) - e'(s)(t-s) \leq 4 e''(s)(t-s)^{2} + 2 p^{p-2} \abs*{t-s}^{p}
\end{equation*}
Integrating this bound we obtain
\begin{equation*}
\mathcal{E}(\hat{w}_{n,k}) \leq 4 \norm{\hat{w}_{n,k} - w^{*}}_{H}^{2} + 2p^{p-2} \Exp\brack*{\abs{\inp{\hat{w}_{n,k} - w^{*}}{X}}^{p}}.
\end{equation*}
We have control over the first term. We need to control the second, in a noise-independent way. We have, for any $w \in \R^{d}$, by (<ref>)
\begin{equation*}
\norm{w}_{H} \geq \sqrt{c} \cdot \norm{w}_{\Sigma} \geq \sqrt{\mu} \cdot \Exp\brack*{\inp{w}{X}^{2}}^{1/2}
\end{equation*}
and therefore
\begin{equation*}
\sup_{w \in \R^{d} \setminus \brace*{0}} \frac{\Exp\brack*{\abs*{\inp{w}{X}}^{p}}^{1/p}}{\norm{w}_{H}} \leq \frac{1}{\sqrt{\mu}} \sup_{w \in \R^{d} \setminus \brace*{0}} \frac{\Exp\brack*{\abs*{\inp{w}{X}}^{p}}^{1/p}}{\Exp\brack*{\inp{w}{X}^{2}}^{1/2}} = \frac{N(P_{X}, p)}{\sqrt{\mu}}.
\end{equation*}
Using this we obtain
\begin{equation*}
\mathcal{E}(\hat{w}_{n,k}) \leq 4\norm{\hat{w}_{n,k} - w^{*}}_{H}^{2} + 2p^{p-2} \frac{N^{p}(P_{X}, p)}{\mu^{p/2}} \cdot \norm{\hat{w}_{n,k} - w^{*}}_{H}^{p}
\end{equation*}
Under the restriction on the sample size stated in the theorem (in particular, its second term), we have on the same event
\begin{equation*}
2p^{p-2} \frac{N^{p}(P_{X}, p)}{\mu^{p/2}} \cdot \norm{\hat{w}_{n,k} - w^{*}}_{H}^{p} \leq 4 \norm{\hat{w}_{n,k} - w^{*}}_{H}^{2}.
\end{equation*}
Hence with probability at least $1-\delta$
\begin{equation*}
\mathcal{E}(\hat{w}_{n,k}) \leq 8 \cdot \brack*{96 (p-1)\Delta_{p}(\delta)}^{2}
\end{equation*}
Replacing $\Delta_{p}(\delta)$ with its value we recover the desired bound.
We start with the following Lemma.
Let $\delta \in (0, 1)$ be such that $k \defeq 8\log(2/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/2}$. Then
\begin{equation*}
\Prob\paren*{\sup_{v \in S^{d-1}} \frac{\varphi_{k}\paren*{(\eta_i \inp{v}{\tilde{X}_i})_{i=1}^{n}}}{n} > \Delta((X_i)_{i=1}^{n}, \delta) \st (X_i)_{i=1}^{n}} < \delta,
\end{equation*}
where
\begin{equation*}
\Delta((X_i)_{i=1}^{n}, \delta) \defeq 50 \max\brace*{\sqrt{\frac{\sigma^{2}\Tr(\tilde{\Sigma})}{n}}, \sqrt{\frac{\sigma^{2}\lambdamax(\tilde{\Sigma})\log(2/\delta)}{n}}}.
\end{equation*}
We have
\begin{align*}
&\Exp\brack*{\eta_i \inp{v}{\tilde{X}_i} \st (X_i)_{i=1}^{n}} = 0, \\
&\sup_{v \in S^{d-1}} \sum_{i=1}^{n} \Exp\brack*{\eta_i^{2}\inp{v}{\tilde{X}_{i}}^{2} \st (X_i)_{i=1}^{n}} \leq n \sigma^{2} \sup_{v \in S^{d-1}}\frac{1}{n}\sum_{i=1}^{n}\inp{v}{\tilde{X}_{i}}^{2} = n\sigma^{2} \lambdamax(\tilde{\Sigma}), \\
& \Exp\brack*{\sup_{v \in S^{d-1}} \inp*{v}{\frac{1}{n} \sum_{i=1}^{n}\eps_i \eta_i \tilde{X}_{i}} \st (X_i)_{i=1}^{n}} = \Exp\brack*{\norm*{\frac{1}{n} \sum_{i=1}^{n} \eps_i \eta_i \tilde{X}_{i}}_{2} \st (X_i)_{i=1}^{n}} \leq \sqrt{n \sigma^{2} \Tr(\tilde{\Sigma})}
\end{align*}
Therefore, applying Lemma <ref> and dividing both sides by $n$ yields the result.
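For intuition, the following small numerical sketch (ours, in Python) illustrates the statistic $\varphi_{k}$ and the robustness it buys on heavy-tailed data. The definition used below, clipping every entry at the $(k+1)$-th smallest and $(n-k)$-th largest order statistics before summing, is our reading of the truncated sum used throughout this appendix (it is consistent with the order-statistic inequality invoked in a later proof); it is an assumption, not a quotation of the formal definition.
\begin{verbatim}
import numpy as np

def truncated_sum(x, k):
    # phi_k(x): clip each entry at the (k+1)-th smallest and (n-k)-th largest
    # order statistics of the sample, then sum (assumed definition, see above).
    xs = np.sort(x)
    lo, hi = xs[k], xs[-k - 1]
    return float(np.clip(x, lo, hi).sum())

rng = np.random.default_rng(0)
x = rng.standard_t(df=2.5, size=1000)   # heavy-tailed sample with mean 0
k = int(8 * np.log(2 / 0.01))           # k = 8 log(2/delta) with delta = 0.01
print(x.mean(), truncated_sum(x, k) / len(x))
\end{verbatim}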
Let $\delta \in (0, 1)$ be such that $k \defeq 8\log(2/\delta)$ is an integer satisfying $1 \leq k \leq \floor{n/8}$. For $(i, v) \in [n] \times S^{d-1}$, define $Z_{i, v} \defeq \inp{v}{\tilde{X}_{i}}^{2}$. Let $(X_i)_{i=1}^{n}$ be such that
\begin{equation*}
\inf_{v \in S^{d-1}} \frac{1}{n} \sum_{i=2k+1}^{n-2k} Z_{i,v}^{*} \geq \frac{1}{4}
\end{equation*}
Let $r > 0$ satisfy
\begin{equation*}
r \geq 4 \Delta((X_i)_{i=1}^{n}, \delta)
\end{equation*}
Then
\begin{equation*}
\Prob\paren*{\inf_{\norm{w - w^{*}}_{\Sigma} \geq r} \psi_{k}(w, w^{*}) < -r \cdot \Delta((X_i)_{i=1}^{n}, \delta) + \frac{r^{2}}{4} \st (X_i)_{i=1}^{n}} < \delta
\end{equation*}
\begin{align}
\inf_{\norm{w - w^{*}}_{\Sigma} = r} \psi_{k}(w, w^{*}) &= \inf_{\norm{w - w^{*}}_{\Sigma} = r} n^{-1} \varphi_{k}\paren*{(-\eta_i \inp{w - w^{*}}{X_i} + \frac{1}{2} \inp{w - w^{*}}{X_i}^2)_{i=1}^{n}} \nonumber \\
&= \inf_{\norm{v}_{\Sigma} = r} n^{-1} \varphi_{k}\paren*{(-\eta_i\inp{v}{X_i} + \frac{1}{2}\inp{v}{X_i}^{2})_{i=1}^{n}} \nonumber \\
&= \inf_{v \in S^{d-1}} n^{-1}\varphi_{k}\paren*{(-r\eta_i\inp{v}{\tilde{X}_i} + \frac{r^{2}}{2}\inp{v}{\tilde{X}_i}^{2})_{i=1}^{n}} \nonumber \\
&\geq - r \cdot \sup_{v \in S^{d-1}} \frac{\varphi_{k}\paren*{(\eta_i \inp{v}{\tilde{X}_{i}})_{i=1}^{n}}}{n} + \frac{r^{2}}{4}
\label{eq:pf_lem8_1}
\end{align}
so that applying Lemma <ref> shows the statement holds for $\norm{w - w^{*}}_{\Sigma} = r$. Now on the same event, if $\norm{w - w^{*}}_{\Sigma} = R > r$, then $v \defeq w^{*} + \frac{r}{R} (w-w^{*})$ satisfies $\norm{v - w^{*}}_{\Sigma} = r$, and
\begin{align*}
\psi_k(w, w^{*}) &= n^{-1} \varphi_{k}\paren*{(-\eta_i \inp{w - w^{*}}{X_i} + \frac{1}{2} \inp{w - w^{*}}{X_i}^{2})_{i=1}^{n}} \\
&= n^{-1} \varphi_{k}\paren*{(- \frac{R}{r}\eta_i \inp{v - w^{*}}{X_i} + \frac{R^2}{r^2}\frac{1}{2} \inp{v - w^{*}}{X_i}^{2})_{i=1}^{n}} \\
&\geq n^{-1} \varphi_{k}\paren*{(- \frac{R}{r}\eta_i \inp{v - w^{*}}{X_i} + \frac{R}{r}\frac{1}{2} \inp{v - w^{*}}{X_i}^{2})_{i=1}^{n}} \\
&= \frac{R}{r} \cdot \psi_{k}(v, w^{*}) \\
&\geq \psi_{k}(v, w^{*})
\end{align*}
where the first inequality follows from the fact that $R/r > 1$ and $\inp{v - w^{*}}{X_{i}}^{2} > 0$, and the second inequality follows from the fact that by the condition on $r$, we have $\psi_{k}(v, w^{*}) > 0$.
Under the conditions of Lemma <ref>, with probability at least $1-\delta$
\begin{equation*}
\norm{\hat{w}_{k} - w^{*}}_{\Sigma} \leq 8 \Delta((X_i)_{i=1}^{n}, \delta)
\end{equation*}
Set $r = 8 \Delta((X_i)_{i=1}^{n}, \delta)$ and define $\gamma(w) \defeq \sup_{v \in \R^{d}} \psi_{k}(w, v)$. Notice that by definition of $\hat{w}_{k}$, we have
\begin{equation}
\psi_{k}(\hat{w}_{k}, w^{*}) \leq \gamma(\hat{w}_k) \leq \gamma(w^{*})
\end{equation}
Now on the one hand we have
\begin{equation*}
\gamma(w^{*}) = \sup_{v \in \R^{d}} \psi_{k}(w^{*}, v) = - \inf_{v \in \R^{d}}\psi_{k}(v, w^{*}) \leq - \min \brace*{\inf_{\norm{v - w^{*}}_{\Sigma} \geq r} \psi_{k}(v, w^{*}), \inf_{\norm{v - w^{*}}_{\Sigma} < r} \psi_{k}(v, w^{*})}
\end{equation*}
By Lemmas (REFERENCE) and (REFERENCE), we obtain on event (REFERENCE)
\begin{equation}
\gamma(w^{*}) < r \Delta((X_i)_{i=1}^{n}, \delta)
\end{equation}
and by Lemma (REFERENCE) we obtain
\begin{equation}
\inf_{\norm{w - w^{*}}_{\Sigma} \geq r} \psi_{k}(w, w^{*}) \geq r \Delta((X_i)_{i=1}^{n}, \delta)
\end{equation}
which, combined with $\psi_{k}(\hat{w}_{k}, w^{*}) \leq \gamma(w^{*})$, implies that $\norm{\hat{w}_{k} - w^{*}}_{\Sigma} \leq r$, as claimed.
Define the events
\begin{align*}
A &\defeq \brace*{(x_i)_{i=1}^{n} \st \Tr\paren*{\widetilde{\Sigma}^{-1}} \leq Q_{\Tr\paren*{\widetilde{\Sigma}^{-1}}}(1 - \eps_{n} - \delta/6)} \\
B(a) &\defeq \brace*{(x_i)_{i=1}^{n} \st Q_{W \mid (X_i)_{i=1}^{n}}(1 - \delta) \leq a Q_{W}(1 - \eps_{n} - \delta/6)} \\
C &\defeq \brace*{(x_i)_{i=1}^{n} \st \inf_{v \in S^{d-1}} \frac{1}{n}\sum_{i=2k+1}^{n-2k} Z_{i,v}^{*} \geq \frac{1}{4}} \\
D &\defeq \brace*{(x_i)_{i=1}^{n} \st \rank(\widetilde{\Sigma}) = d}
\end{align*}
where $a > 0$ is a free parameter we set later. Note that $A, B(a), C \subset D$ for all $a > 0$. Now by definition of the event $A$, we have
\begin{equation*}
\Prob\paren*{A \st D} \Prob\paren*{D} = \Prob\paren*{A \cap D} = \Prob\paren*{A} \geq 1 - \eps_{n} - \delta/3 \Rightarrow \Prob\paren*{A \st D} \geq 1 - \frac{\delta/3}{1 - \eps_{n}}
\end{equation*}
Furthermore by the condition on $n$, we have
\begin{equation*}
\Prob\paren*{C \st D} \Prob\paren*{D} = \Prob\paren*{C} \geq 1 - \delta^2/3 \geq 1 - \delta/3 \Rightarrow \Prob\paren*{C \st D} \geq \frac{1 - \delta/3}{1 - \eps_{n}} \geq 1 - \frac{\delta/3}{1 - \eps_{n}}
\end{equation*}
Define $a_{*} \defeq \frac{\log(1/\delta)}{\log(2)}$ and $B_{*} \defeq B(a_{*})$. We claim that $\Prob\paren*{B_{*}} \geq 1 - \eps_{n} - \delta/3$. Indeed, we have on the one hand
\begin{equation*}
\Prob\paren*{\brace*{W \leq Q_{W}(1 - \eps_{n} - \delta/6)}} \geq 1 - \eps_{n} - \delta/6,
\end{equation*}
On the other, we have
\begin{align*}
&\Prob\paren*{W \leq Q_{W}(1 - \eps_{n} - \delta/6)} \\
&= \Exp\brack*{\Prob\paren*{W \leq Q_{W}(1 - \eps_{n} - \delta/6) \st (X_i)_{i=1}^{n}}} \\
&= \Exp\brack*{\Prob\paren*{W \leq Q_{W}(1 - \eps_{n} - \delta/6) \st (X_i)_{i=1}^{n}} \mathbbm{1}_{B_{*}}((X_i)_{i=1}^{n})} + \Exp\brack*{\Prob\paren*{W \leq Q_{W}(1 - \eps_{n} - \delta/6) \st (X_i)_{i=1}^{n}} \mathbbm{1}_{B_{*}^{c}}((X_i)_{i=1}^{n})} \\
&\leq \Prob\paren*{B_{*} \cap D} + \Exp\brack*{\Prob\paren*{W < a_{*}^{-1} \cdot Q_{W \mid (X_i)_{i=1}^{n}}(1 - \delta) \st (X_i)_{i=1}^{n}} \mathbbm{1}_{B_{*}^{c} \cap D}((X_i)_{i=1}^{n})} \\
&= \Prob\paren*{B_{*} \cap D} + \Exp\brack*{\Prob\paren*{W < Q_{W \mid (X_i)_{i=1}^{n}}(1 -\delta^{1/a_{*}}) \st (X_i)_{i=1}^{n}} \mathbbm{1}_{B_{*}^{c} \cap D}((X_i)_{i=1}^{n})} \\
&\leq \Prob\paren*{B_{*} \cap D} + (1 -\delta^{1/a_{*}}) \Prob\paren*{B_{*}^{c} \cap D} \\
&= (1 - \delta^{1/a_{*}})(1 - \eps_{n}) + \delta^{1/a_{*}} \Prob\paren*{B_{*}} \\
&= \frac{1}{2}(1 - \eps_{n}) + \frac{1}{2}\Prob\paren*{B_{*}}
\end{align*}
Combining the upper and lower bounds yields the desired lower bound on $\Prob\paren*{B_{*}}$, and by the same argument as for the events $A$ and $C$, we obtain $\Prob\paren*{B_{*} \st D} \geq 1 - \frac{\delta/3}{1 - \eps_{n}}$.
Now by the union bound
\begin{equation*}
\Prob\paren*{A \cap B_{*} \cap C} = \Prob\paren*{A \cap B_{*} \cap C \st D} \Prob\paren*{D} \geq \paren*{1 - \frac{\delta}{1-\eps_{n}}} \paren*{1 - \eps_{n}} = 1 - \eps_{n} - \delta
\end{equation*}
From which the claim follows (ADD JUSTIFICATION).
§ PROOFS OF SECTION <REF>
§.§ Proof of Proposition <ref>
Asymptotic lower bound. By the central limit theorem, as $n \to \infty$, and by the finiteness of the fourth moments of $P_{X}$,
\begin{equation*}
\sqrt{n} (\widetilde{\Sigma}_{n} - I) \overset{d}{\to} G,
\end{equation*}
where $G$ is a centred symmetric Gaussian matrix with covariance
\begin{equation*}
\Exp\brack*{g_{ij}g_{st}} = \Exp\brack*{(\widetilde{X}_i\widetilde{X}_j - I_{i,j})(\widetilde{X}_s\widetilde{X}_t - I_{s, t})},
\end{equation*}
for $i,j,s,t \in [d]$. Now since $G$ is Gaussian and centred, we have $G \overset{d}{=} -G$. On the one hand, by the continuous mapping theorem, this implies
\begin{equation*}
\sqrt{n}(1 - \lambdamin(\widetilde{\Sigma}_{n}))= \sqrt{n}\lambdamax(I - \widetilde{\Sigma}_{n}) \overset{d}{\to} \lambdamax(G).
\end{equation*}
On the other, $\lambdamax(G) \overset{d}{=} \lambdamax(-G) = -\lambdamin(G)$, and therefore for $t \geq 0$,
\begin{multline*}
\Prob\paren*{\norm{G}_{\text op} > t} = \Prob\paren*{\lambdamax(G) > t \text{ or } \lambdamin(G) < -t} \\ \leq \Prob\paren*{\lambdamax(G) > t} + \Prob\paren*{\lambdamin(G) < -t} = 2 \Prob\paren*{\lambdamax(G) > t}.
\end{multline*}
so we conclude that
\begin{equation*}
\lim_{n \to \infty} \Prob\paren*{1 - \lambdamin(\widetilde{\Sigma}_{n}) \leq \frac{t}{\sqrt{n}}} = \Prob\paren*{\lambdamax(G) \leq t} \leq \frac{1}{2} \paren*{1 + \Prob\paren*{\norm{G}_{\text op} \leq t}}.
\end{equation*}
Now since convergence in distribution implies the pointwise convergence of quantiles, we obtain
\begin{equation*}
\lim_{n \to \infty} \sqrt{n} \cdot Q_{1 - \lambdamin(\widetilde{\Sigma}_{n})}(1-\delta) = Q_{\lambdamax(G)}(1-\delta) \geq Q_{\norm{G}_{\text{op}}}(1-2\delta)
\end{equation*}
It remains to lower bound this last quantile. We do this by deriving two upper bounds on the CDF of $\norm{G}_{\text{op}}$.
Let $v_{*} \defeq \argmax_{v \in S^{d-1}} \Exp\brack*{\paren*{\inp{v}{\widetilde{X}}^2 - 1}^2}$ and note that $v^{T}_{*} G v_{*} \sim \mathcal{N}(0, R(P_{X}))$. Therefore by Lemma <ref>,
\begin{equation*}
\Prob\paren*{\norm{G}_{\text{op}} \leq t} \leq \Prob\paren*{\abs*{v_{*}^{T} G v_{*}} \leq t} \leq \sqrt{1 - \exp\paren*{-\frac{2t^2}{\pi R(P_{X})}}}
\end{equation*}
On the other hand, it can be shown that $\norm{G}_{\text{op}}$ is a Lipschitz function of a standard normal vector (see e.g. Van Handel, 2017), with Lipschitz constant $\sqrt{R(P_{X})}$. Therefore by Gaussian concentration (Lemma <ref>)
\begin{equation*}
\Prob\paren*{\norm{G}_{op} \leq t} = \Prob\paren*{\Exp\brack*{\norm{G}_{op}} - \norm{G}_{op} > \Exp\brack*{\norm{G}_{op}} - t } \leq \exp\paren*{-\frac{(\Exp\brack*{\norm{G}_{op}} - t)^2}{2R(P_{X})}}.
\end{equation*}
Now note that since $v^{T} G v$ is a Gaussian random variable for any $v \in \R^{d}$,
\begin{equation*}
\Exp\brack*{\norm{G}_{\text{op}}} \geq \sup_{v \in S^{d-1}} \Exp\brack*{\abs*{v^{T}Gv}} = \sqrt{\frac{2}{\pi}} \sup_{v \in S^{d-1}} \Exp\brack*{(v^{T}Gv)^{2}}^{1/2} = \sqrt{\frac{2}{\pi}} \sqrt{R(P_{X})}
\end{equation*}
where the first equality is an explicit calculation of the first absolute moment of a Gaussian random variable. Bounding the right-most term in the previous display, we obtain
\begin{equation*}
\Prob\paren*{\norm{G}_{\text{op}} \leq t} \leq \exp\paren*{-\frac{(\Exp\brack*{\norm{G}_{op}} - t)^2}{\pi \Exp\brack*{\norm{G}_{\text{op}}}^{2}}}
\end{equation*}
Using the two bounds on the CDF of $\norm{G}_{\text{op}}$ and the second item of Lemma <ref>, we obtain the following lower bound
\begin{equation*}
Q_{\norm{G}_{op}}(1-2\delta) \geq \frac{1}{2} \Exp\brack*{\norm{G}_{\text{op}}} \paren*{1 - \sqrt{\pi \log\paren*{\frac{1}{1-2\delta}}}} + \frac{1}{2} \sqrt{\frac{\pi}{2}} \sqrt{R(P_{X})\log\paren*{\frac{1}{4\delta}}}
\end{equation*}
using the restriction on $\delta \in (0, 0.1)$, we obtain
\begin{equation*}
Q_{\norm{G}_{op}}(1-2\delta) \geq \frac{1}{20} \Exp\brack*{\norm{G}_{\text{op}}} + \frac{1}{2}\sqrt{R(P_{X}) \log(1/4\delta)}
\end{equation*}
Finally by the Gaussian Poincare inequality (Lemma <ref>)
\begin{equation*}
\Exp\brack*{\norm{G}^{2}_{\text{op}}} - (\Exp\brack*{\norm{G}_{\text{op}}})^{2} \leq R(P_{X}) \leq \frac{\pi}{2} (\Exp\brack*{\norm{G}_{\text{op}}})^{2}
\end{equation*}
rearranging yields
\begin{equation*}
\Exp\brack*{\norm{G}_{\text{op}}} \geq \frac{1}{\sqrt{1 + \pi/2}} \Exp\brack*{\norm{G}_{\text{op}}^{2}}^{1/2} \geq \frac{\norm*{\Exp\brack*{G^2}}^{1/2}_{\text{op}}}{\sqrt{1+\pi/2}} = \sqrt{\frac{\lambdamax(S)}{1+\pi/2}}
\end{equation*}
and therefore
\begin{equation*}
Q_{\norm{G}_{\text{op}}}(1-2\delta) \geq \frac{\sqrt{\lambdamax(S)}}{40} + \frac{1}{2} \sqrt{R(P_{X}) \log(1/4\delta)}
\end{equation*}
This concludes the proof of the lower bound.
Upper bound.
We have the variational representation
\begin{equation}
\label{eq:one}
1 - \lambdamin(\widetilde{\Sigma}_{n}) = \lambdamax(I - \widetilde{\Sigma}_{n}) = \sup_{v \in S^{d-1}} \sum_{i=1}^{n} \underbrace{\frac{1}{n} (\Exp\brack*{\inp{v}{\widetilde{X}}^{2}} - \inp{v}{\widetilde{X}_{i}}^{2})}_{\textstyle Z_{i,v} \defeq}.
\end{equation}
Now the processes $(\brace{Z_{i, v}}_{v \in S^{d-1}})_{i=1}^{n}$ are i.i.d., $\Exp\brack*{Z_{i, v}} = 0$, and $Z_{i, v} \leq n^{-1}$ for all $(i, v) \in [n] \times S^{d-1}$, so that by Bousquet's inequality [Bousquet, 2002], with probability at least $1-\delta$
\begin{equation}
\label{eq:two}
\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v} < 2 \Exp\brack*{\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v}} + \sqrt{\frac{2 R(P_{X}) \log(1/\delta)}{n}} + \frac{4 \log(1/\delta)}{3n}
\end{equation}
It remains to bound the expectation in (<ref>). We may rewrite it as
\begin{equation*}
\Exp\brack*{\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v}} = \Exp\brack*{\sup_{v \in S^{d-1}} v^{T} \brace*{\sum_{i=1}^{n}\frac{1}{n}(I - \widetilde{X}_{i}\widetilde{X}_{i}^{T})}v} = \Exp\brack*{\lambdamax\paren*{\sum_{i=1}^{n}\frac{1}{n}(I - \widetilde{X}_{i}\widetilde{X}_{i}^{T})}}
\end{equation*}
Define the matrices $Y_{i} \defeq \frac{1}{n}(I - \widetilde{X}_{i}\widetilde{X}_{i}^{T})$ and notice that they are i.i.d. and satisfy $\lambdamax(Y_i) = n^{-1}$, so that by the Matrix Bernstein inequality <cit.> we obtain
\begin{equation}
\label{eq:three}
\Exp\brack*{\lambdamax\paren*{\sum_{i=1}^{n} Y_{i}}} \leq \sqrt{\frac{2\lambdamax(S)\log(3d)}{n}} + \frac{\log(3d)}{3n}.
\end{equation}
Combining (<ref>), (<ref>), and (<ref>) yields the desired result.
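As a sanity check on the rate just derived, the following Monte Carlo sketch (ours, with arbitrary illustrative dimensions) compares the typical value of $\lambdamax(I - \widetilde{\Sigma}_{n})$ for whitened Gaussian data with the leading $\sqrt{2\lambdamax(S)\log(3d)/n}$ term; the empirical definition of $S$ below follows the convention $S(P_{X}) = \Exp[\norm{\widetilde{X}}_{2}^{2}\widetilde{X}\widetilde{X}^{T}] - I$ used later in this appendix and is stated here as an assumption.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n, reps = 20, 2000, 200
devs = []
for _ in range(reps):
    X = rng.normal(size=(n, d))                 # whitened data, E[X X^T] = I
    Sigma_n = X.T @ X / n
    devs.append(np.linalg.eigvalsh(np.eye(d) - Sigma_n).max())

# Empirical S = E[||X||^2 X X^T] - I (assumed convention, see lead-in).
Xs = rng.normal(size=(200_000, d))
S_hat = (Xs * (Xs ** 2).sum(1, keepdims=True)).T @ Xs / len(Xs) - np.eye(d)
rate = np.sqrt(2 * np.linalg.eigvalsh(S_hat).max() * np.log(3 * d) / n)
print(np.mean(devs), rate)   # same order of magnitude
\end{verbatim}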
We start by proving the asymptotic lower bound. By the central limit theorem, as $n \to \infty$, and by the finiteness of the fourth moments of $P_{X}$,
\begin{equation*}
\sqrt{n} (\widehat{\Sigma}_{n} - \Sigma) \overset{d}{\to} G,
\end{equation*}
where $G$ is a centred symmetric Gaussian matrix with covariance
\begin{equation*}
\Exp\brack*{g_{ij}g_{st}} = \Exp\brack*{(X_iX_j - \Sigma_{i,j})(X_sX_t - \Sigma_{s, t})},
\end{equation*}
for $i,j,s,t \in [d]$. By the continuous mapping theorem and the continuity of the smallest eigenvalue,
\begin{equation*}
\sqrt{n}\lambdamin(\widehat{\Sigma}_{n} - \Sigma) \overset{d}{\to} \lambdamin(G).
\end{equation*}
Now $\norm{G}_{\text op} = \max\brace*{\lambdamax(G), -\lambdamin(G)}$, and
\begin{equation*}
\lambdamin(G) = \inf_{v \in S^{d-1}} v^{T} G v = - \sup_{v \in S^{d-1}} - v^{T}Gv \overset{d}{=} - \sup_{v \in S^{d-1}} v^{T} G v = -\lambdamax(G),
\end{equation*}
where the penultimate equality follows from the fact that $G$ is a centred Gaussian random matrix, so all the random variables $v^{T}Gv$ are symmetric, and therefore so is their supremum over a countable dense set of $S^{d-1}$. Therefore, for $t > 0$,
\begin{equation*}
\Prob\paren*{\norm{G}_{\text op} > t} = \Prob\paren*{\lambdamax(G) > t \text{ or } \lambdamin(G) < -t} \leq \Prob\paren*{\lambdamax(G) > t} + \Prob\paren*{\lambdamin(G) < -t} = 2 \Prob\paren*{\lambdamin(G) < -t}.
\end{equation*}
so that
\begin{equation*}
\lim_{n \to \infty} \Prob\paren*{\lambdamin(\widehat{\Sigma}_{n} - \Sigma) < -\frac{t}{\sqrt{n}}} = \Prob\paren*{\lambdamin(G) < -t} \geq \frac{1}{2} \Prob\paren*{\norm{G}_{\text op} > t}.
\end{equation*}
Now note that $\lambdamin(\widehat{\Sigma}_{n} - \Sigma) = - \lambdamax(\Sigma - \widehat{\Sigma}_{n})$. Therefore
\begin{equation*}
\lim_{n \to \infty} \Prob\paren*{\lambdamax(\Sigma - \widehat{\Sigma}_{n}) \leq \frac{t}{\sqrt{n}}} \leq \frac{1}{2}\paren*{1 + \Prob\paren*{\norm{G}_{\text op} \leq t}}
\end{equation*}
It remains to upper bound the last probability. Let $v_{*} \defeq \argmax_{v \in S^{d-1}} \Exp\brack*{\paren*{\inp{v}{X}^2 - \Exp\brack*{\inp{v}{X}^{2}}}^2}$ and note that $v^{T}_{*} G v_{*} \sim \mathcal{N}(0, R)$ (this follows from the fact that it is a linear combination of jointly Gaussian variables and that the variance of $v^{T}_{*}Gv_{*}$ is the same as that of $v^{T}_{*}(XX^{T} - \Sigma)v_{*}$). Therefore
\begin{equation*}
\Prob\paren*{\norm{G}_{\text op} \leq t} \leq \Prob\paren*{\abs*{v_{*}^{T} G v_{*}^{T}} \leq t} \leq \sqrt{1 - \exp\paren*{-\frac{2t^2}{\pi R}}}
\end{equation*}
We also have that $-\norm{G}_{\text op}$ is a $\sqrt{R}$-Lipschitz function of standard Gaussian random variables (REFERENCE Van Handel), therefore, for $t < \Exp\brack*{\norm{G}_{\text op}}$,
\begin{equation*}
\Prob\paren*{\norm{G}_{op} \leq t} = \Prob\paren*{\Exp\brack*{\norm{G}_{op}} - \norm{G}_{op} > \Exp\brack*{\norm{G}_{op}} - t } \leq \exp\paren*{-\frac{(\Exp\brack*{\norm{G}_{op}} - t)^2}{2R}}.
\end{equation*}
Lower bounding $\Exp\brack*{\norm{G}_{\text{op}}} \geq C \cdot \sqrt{\lambdamax(S)}$ as before, inverting the two bounds on the CDF (that is, passing to quantiles), and noting that the distribution of $\lambdamax(G)$ has a density (so that the quantiles converge at every point, since every point is a continuity point of the CDF) yields the result.
§.§ Proof of Proposition <ref>
We have
\begin{equation}
\label{eq:one}
\lambdamax(\Sigma - \widehat{\Sigma}_{n}) = \sup_{v \in S^{d-1}} \sum_{i=1}^{n} \underbrace{\frac{1}{n} (\Exp\brack*{\inp{v}{X}^{2}} - \inp{v}{X_{i}}^{2})}_{\textstyle Z_{i,v} \defeq}
\end{equation}
Now the processes $(\brace{Z_{i, v}}_{v \in S^{d-1}})_{i=1}^{n}$ are i.i.d., $\Exp\brack*{Z_{i, v}} = 0$, and $Z_{i, v} \leq n^{-1}\lambdamax(\Sigma)$ for all $(i, v) \in [n] \times S^{d-1}$, so that by Bousquet's inequality (REFERENCE), with probability at least $1-\delta$
\begin{equation}
\label{eq:two}
\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v} < 2 \Exp\brack*{\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v}} + \sqrt{\frac{2 R \log(1/\delta)}{n}} + \frac{4 \lambdamax(\Sigma)\log(1/\delta)}{3n}
\end{equation}
It remains to bound the expectation in (<ref>). We may rewrite it as
\begin{equation*}
\Exp\brack*{\sup_{v \in S^{d-1}} \sum_{i=1}^{n} Z_{i, v}} = \Exp\brack*{\sup_{v \in S^{d-1}} v^{T} \brace*{\sum_{i=1}^{n}\frac{1}{n}(\Sigma - X_{i}X_{i}^{T})}v} = \Exp\brack*{\lambdamax\paren*{\sum_{i=1}^{n}\frac{1}{n}(\Sigma - X_{i}X_{i}^{T})}}
\end{equation*}
Define the matrices $Y_{i} \defeq \frac{1}{n}(\Sigma - X_{i}X_{i}^{T})$ and notice that they are i.i.d. and satisfy $\lambdamax(Y_i) \leq n^{-1}\lambdamax(\Sigma)$, so that by the Matrix Bernstein inequality (REFERENCE TROPP), we obtain
\begin{equation}
\label{eq:three}
\Exp\brack*{\lambdamax\paren*{\sum_{i=1}^{n} Y_{i}}} \leq \sqrt{\frac{2\lambdamax(S)\log(d)}{n}} + \frac{\lambdamax(\Sigma)\log(d)}{3n}
\end{equation}
Combining (<ref>), (<ref>), and (<ref>) yields the desired result.
§.§ Proof of Proposition <ref>
Define $\tilde{S} \defeq S(P_{X}) + I = \Exp\brack*{\norm{\widetilde{X}}_2^{2}\widetilde{X}\widetilde{X}^{T}}$ and $\tilde{R} \defeq R(P_{X}) + 1 = \sup_{v \in S^{d-1}}\Exp\brack*{\inp{v}{\widetilde{X}}^{4}}$. Let
\begin{equation*}
B \defeq \sqrt{\frac{n \lambdamax(\tilde{S})}{4(1 + 2 \ceil{\log(d)})}},
\end{equation*}
and define $X_{B} \defeq \widetilde{X} \cdot \mathbbm{1}_{[0, B)}(\norm{\widetilde{X}}_{2}^{2})$, $\Sigma_{B} \defeq \Exp\brack*{X_{B}X_{B}^{T}}$, $\tilde{S}_{B} \defeq \Exp\brack*{(X_{B}X_{B}^{T})^{2}}$, and $\tilde{R}_{B} \defeq \sup_{v \in S^{d-1}}\Exp\brack*{\inp{v}{X_{B}}^{4}}$. Note that $\lambdamax(\tilde{S}_{B}) \leq \lambdamax(\tilde{S})$ and $\tilde{R}_{B} \leq \tilde{R}$. For $(i, v) \in [n] \times S^{d-1}$, define $Z_{i,v} \defeq \inp{v}{X_{B, i}}^{2}$, and note that $(Z_{i, v})_{i=1}^{n}$ are i.i.d. with mean $m(v) \defeq \Exp\brack{\inp{v}{X_{B}}^{2}}$ and $Y_{i, v} \geq Z_{i, v}$.
Now we have by Lemma <ref>
\begin{equation*}
\sup_{v \in S^{d-1}} \sum_{i=k+1}^{n-k} \Exp\brack*{\inp{v}{\widetilde{X}}^{2}} - Y_{i, v}^{*} \leq (n-2k) \sup_{v \in S^{d-1}} \Exp\brack*{\inp{v}{\widetilde{X}}^{2}} - \Exp\brack*{\inp{v}{X_{B}}^{2}} + \sup_{v \in S^{d-1}} \sum_{i=k+1}^{n-k} \Exp\brack*{\inp{v}{X_{B}}^{2}} - Z_{i,v}^{*}
\end{equation*}
The first term is bounded by
\begin{align*}
\sup_{v \in S^{d-1}} \Exp\brack*{\inp{v}{\widetilde{X}}^{2}} - \Exp\brack*{\inp{v}{X_{B}}^{2}} &= \sup_{v \in S^{d-1}} \Exp\brack*{\inp{v}{\widetilde{X}}^{2} \mathbbm{1}_{[B, \infty)}(\norm{\widetilde{X}}_{2}^{2})} \\
&= \sup_{v \in S^{d-1}} \Exp\brack*{\inp{v}{\widetilde{X}}^{2} \norm{\widetilde{X}}_2^2 \frac{1}{\norm{\widetilde{X}}_{2}^{2}} \mathbbm{1}_{[B, \infty)}(\norm{\widetilde{X}}_{2}^{2})} \\
&\leq \frac{\lambdamax(\tilde{S})}{B} = \sqrt{4(1 + 2 \ceil{\log(d)})} \sqrt{\frac{\lambdamax(\tilde{S})}{n}}
\end{align*}
For the second term, define, for $(i,v) \in [n] \times S^{d-1}$, $W_{i,v} \defeq \Exp\brack*{\inp{v}{X_{B}}^2} - Z_{i, v}$, and note that $\Exp\brack*{W_{i, v}} = 0$, $\Exp\brack*{W_{i, v}^{2}} \leq \tilde{R}$. Furthermore
\begin{align*}
&2\Exp\brack*{\sup_{v \in S^{d-1}} \sum_{i=1}^{n} \eps_i W_{i, v}} \\
&=2\Exp\brack*{\sup_{v \in S^{d-1}} v^{T} \brace*{\sum_{i=1}^{n} \eps_i (\Sigma_{B} - X_{B, i}X_{B, i}^{T})}v} \\
&\leq 2\Exp\brack*{\norm*{\sum_{i=1}^{n} \eps_i (\Sigma_{B} - X_{B, i}X_{B, i}^{T})}_{op}} \\
&\leq \sqrt{4 (1+ 2\ceil{\log(d)})} \sqrt{n \lambdamax(\tilde{S}_{B})} + 4(1 + 2\ceil{\log(d)}) \Exp\brack*{\max_{i \in [n]} \norm{X_{B, i}X_{B, i}^{T} - \Sigma_{B}}_{op}^{2}}^{1/2} \\
%&\leq \sqrt{n (1+ 2\ceil{\log(d)}) \lambdamax(\tilde{S})} + (1 + 2\ceil{\log(d)}) \cdot B \\
&\leq \sqrt{4(1+ 2\ceil{\log(d)})} \sqrt{n \lambdamax(\tilde{S})} + 4(1 + 2\ceil{\log(d)}) \Exp\brack*{\max\brace*{\max_{i \in [n]}\norm{X_{B, i}}_2^2, \lambdamax(\Sigma_{B})}^2}^{1/2} \\
&\leq \sqrt{4(1+ 2\ceil{\log(d)})} \sqrt{n \lambdamax(\tilde{S})} + 4(1 + 2\ceil{\log(d)}) \max\brace*{B, 1} \\
%&\leq \sqrt{1+ 2\ceil{\log(d)}} \sqrt{n \lambdamax(\tilde{S})} + (1 + 2\ceil{\log(d)}) B \\
&= 4 \sqrt{1+ 2\ceil{\log(d)}} \sqrt{n \lambdamax(\tilde{S})}
\end{align*}
where the fourth line follows from the proof of the second item of Theorem 5.1 in Tropp, 2015, the sixth line follows from the fact that $\norm{X_{B}}_{2}^{2} \leq B$, and the last line follows from the condition on $n$ and the fact that $\lambdamax(\tilde{S}) \geq 1$, which itself follows from the positive semi-definiteness of $S(P_{X})$. Defining $W_{v} = (W_{i, v})_{i=1}^{n}$ for $v \in S^{d-1}$, and appealing to Lemma <ref>, we may therefore bound the second term as follows, with probability at least $1-\delta$
\begin{align*}
\sup_{v \in S^{d-1}} \sum_{i=k+1}^{n-k} \Exp\brack*{\inp{v}{X_{B}}^{2}} - Z_{i,v}^{*} &= \sup_{v \in S^{d-1}} \sum_{i=k+1}^{n-k} W_{i, v}^{*} \\
&\leq \sup_{v \in S^{d-1}} \varphi_{k}(W_{v}) + \sup_{v \in S^{d-1}} k (\abs{W_{1+k, v}} + \abs{W_{n-k, v}}) \\
&\leq 96 \cdot \paren*{\sqrt{4(1 + 2 \ceil{\log(d)})} \sqrt{n \lambdamax(\tilde{S})} + \sqrt{n \tilde{R} \log(2/\delta)}}
\end{align*}
Combining the bounds and bounding $\sqrt{4(1+2\ceil{\log(d)})} \leq 8 \log(6d)$ yields the result.
# Causal Mediation Analysis with Multi-dimensional and Indirectly Observed
Mediators
Ziyang Jiang1 Yiling Liu1 Michael H. Klein1 Ahmed Aloui1 Yiman Ren2
Keyu Li1 Vahid Tarokh1 David Carlson1
1Duke University 2University of Michigan Ross School of Business
###### Abstract
Causal mediation analysis (CMA) is a powerful method to dissect the total
effect of a treatment into direct and mediated effects within the potential
outcome framework. This is important in many scientific applications to
identify the underlying mechanisms of a treatment effect. However, in many
scientific applications the mediator is unobserved, but there may exist
related measurements. For example, we may want to identify how changes in
brain activity or structure mediate an antidepressant’s effect on behavior,
but we may only have access to electrophysiological or imaging brain
measurements. To date, most CMA methods assume that the mediator is one-
dimensional and observable, which oversimplifies such real-world scenarios. To
overcome this limitation, we introduce a CMA framework that can handle complex
and indirectly observed mediators based on the identifiable variational
autoencoder (iVAE) architecture. We prove that the true joint distribution
over observed and latent variables is identifiable with the proposed method.
Additionally, our framework captures a disentangled representation of the
indirectly observed mediator and yields accurate estimation of the direct and
mediated effects in synthetic and semi-synthetic experiments, providing
evidence of its potential utility in real-world applications.
## 1 Introduction
Causal inference methods are powerful tools to understand and quantify the
causal relationships between treatments and outcomes, motivating studies in
many areas [1, 2, 3, 4]. Causal inference has been combined with machine
learning in recent years to make powerful and flexible frameworks [5, 6].
While these frameworks are highly useful to estimate the total treatment
effect on an outcome, many scientific applications require understanding _how_
a treatment impacts outcomes. This knowledge can then be used to design
interventions that target the mediators to influence the outcome of interest.
For example, we may want to identify neural changes that mediate a behavioral
outcome when studying a treatment for a psychiatric disorder. Recent work has
in fact found and manipulated neural changes related to depression [7] and
social processing [8].
This need motivates the usage of causal mediation analysis (CMA), which
estimates the causal effect on an outcome of interest that is due to changes
in intermediate variables (the “mediators”) versus directly from the treatment
[9]. In specific contexts, understanding the role of the mediator is crucial
as it tells us how nature works and provides insights into the underlying
mechanisms that link variables, which enables a more accurate assessment of
the treatment’s effectiveness. In the above case, this means estimating how
much of the behavior change is explained by the treatment’s impact on the
brain, as well as how much behavioral change is unexplained by that
relationship. Early studies on mediation analysis mainly adopted linear
structural equation models (SEMs) including Wright’s method of path analysis
[10, 11] and Baron and Kenny’s method for testing mediation hypotheses [12].
In the past few decades, researchers have come up with nonparametric
generalizations for SEMs [13, 14] which do not impose any functional or
distributional forms on the causal relationships and therefore offer greater
flexibility in modeling complex dependencies between variables.
Despite these advances, a key challenge is that causal mediation analysis
typically assumes a low-dimensional, often one-dimensional, mediator, whereas
in many cases we want to identify mediation effects of complex data, such as
neuroimaging, electrophysiology, and myriad -omics studies. In this paper, we
build upon the concept of the identifiable variational autoencoder (iVAE) [15]
and introduce a novel framework for CMA that can handle _multi-dimensional_
and _indirectly observed_ mediators. We assume that there is a latent space
that generates the high-dimensional observed data (e.g., a smaller latent
space can generate the observed neural dynamics). By using an identifiable
model structure, we show that we can recover the latent space prior
conditioned on the treatment and any available covariates. In summary, our
main contributions are:
* •
We propose a causal graph that involves both an _indirectly observed_ mediator
and observed covariates that acts as a confounder for the treatment, the
mediator, and the outcome.
* •
We build a framework for CMA that can handle _multi-dimensional_ and
_indirectly observed_ mediators based on the proposed causal graph.
* •
We theoretically prove that the joint distribution over observed and latent
variables in our framework is identifiable.
* •
We show that our framework learns a disentangled representation of the
_indirectly observed_ mediator between control and treatment groups.
* •
We empirically demonstrate the effectiveness of our framework on complex
synthetic and semi-synthetic datasets.
## 2 Related Work
#### Causal Mediation Analysis
As mentioned in the introduction, traditional mediation analysis was mainly
based on linear SEMs where the direct, mediated, and total effects are
determined by linear regression coefficients [10, 11, 12, 16, 17]. Despite its
simplicity, this approach relies on several assumptions such as normally
distributed residuals [18] and often leads to ambiguities when either the
mediator or the outcome variable is not continuous [19]. To address this
limitation, researchers formulated the causal mediation analysis (CMA)
framework based on counterfactual thinking [9, 20, 21], which can accommodate
nonlinear or nonparametric models such as targeted maximum likelihood
estimation [22], inverse propensity weighting (IPW) [23], and natural effect
models (NEMs) [24]. Within the counterfactual framework, the causal effects
are calculated as the difference between two counterfactual combinations of
mediators and outcomes, for which we will provide formal definitions in the
next section. Although causal effects are defined at the individual level, in
practice, we usually relax our estimation to their expected values over the
population as we do not generally observe both potential outcomes
simultaneously [25].
#### Causal Mediation Effect Estimation with Deep Models
Deep learning models have gained increasing attention for their capability in
estimating causal effects within the potential outcome framework [26, 27, 28,
29]. In contrast, the use of deep learning models for mediation effect
estimation has received comparatively less exploration. Xu et al. [30]
developed a semiparametric neural network-based framework to reduce the bias
in CMA. Cheng et al. [31] and Xu et al. [32] used variational autoencoders
(VAEs) to estimate the mediation effect based on a causal graph with hidden
confounders. Although these VAE-based methods share some similarities with our
proposed method, we distinguish ourselves by modeling the _mediator_ as the
latent variable rather than the covariates, resulting in a different causal
graph. Furthermore, these approaches assume that the mediator is observable
and one-dimensional, which is not necessarily the case in many scientific
applications.
#### Multi-dimensional Mediators
Compared to the many CMA methods proposed, significantly less research has
been conducted on scenarios where the mediator is multi-dimensional and not
directly observable. The majority of investigations on this subject are
situated within the domains of neuroscience [33, 34], biostatistics [35], and
bioinformatics [36, 37, 38, 39]. The approach proposed by Nath et al. [34] is
the most relevant work to our research, where the high-dimensional mediator is
first transformed into a one-dimensional variable, and the mediation effect is
estimated using an iterative maximization algorithm. Nevertheless, all these
methods primarily rely on linear SEMs and neglect the impact of any
confounding variables, thereby limiting their applicability.
## 3 Problem Setup
Figure 1: Graphs of CMA for (a) case without observed covariates and (b) case
with observed covariates, where $T$ is the treatment assignment, $Y$ is the
outcome, $Z$ is the unobserved true mediator, $W$ is a set of observed
covariates, and $X$ is a feature caused by the unobserved true mediator $Z$
with a much higher dimension. The observed variables are colored in grey.
We assume that our causal model belongs to one of the two cases as displayed
in Figure 1. To ensure consistency with previous studies on mediation analysis
[18, 40, 41], we further assume that the treatment assignment $T$ is binary
for each observed sample, with $T=0$ indicating an assignment to the control
group and $T=1$ indicating an assignment to the treatment group. Consider the
$n^{th}$ individual in an experiment with a total of $N$ units (i.e.
$n=1,...,N$). Let
$\boldsymbol{z}_{n}(t_{n})\in\mathcal{Z}\subset\mathbb{R}^{d}$ denote the
potential value of the unobserved true mediator under the treatment assignment
$t_{n}$. Since $Y$ depends on both $T$ and $Z$, we denote
$y(t_{n},\boldsymbol{z}_{n}(t_{n}))\in\mathcal{Y}\subset\mathbb{R}$ as the
potential outcome of the $n^{th}$ individual under treatment $t_{n}$ and true
mediator $\boldsymbol{z}_{n}(t_{n})$. Following [9, 40, 42], we can define the
average causal mediation effects (ACME), the average direct effects (ADE), and
the average total effect (ATE) as follows:
$\displaystyle ACME(t)$
$\displaystyle\coloneqq\mathbb{E}\left[y(t,\boldsymbol{z}(1))-y(t,\boldsymbol{z}(0))\right],$
(1) $\displaystyle ADE(t)$
$\displaystyle\coloneqq\mathbb{E}\left[y(1,\boldsymbol{z}(t))-y(0,\boldsymbol{z}(t))\right],$
(2) $\displaystyle ATE$
$\displaystyle\coloneqq\mathbb{E}\left[y(1,\boldsymbol{z}(1))-y(0,\boldsymbol{z}(0))\right],$
(3)
where the expectations are taken over all the samples in our experiment. Our
main objective is to recover these quantities as accurately as possible. As
$\boldsymbol{z}_{n}$ is unobserved, we must infer $\boldsymbol{z}_{n}$ from
the related observed feature
$\boldsymbol{x}_{n}\in\mathcal{X}\subset\mathbb{R}^{D}$ with a much higher
dimension, i.e. $D\gg d$, as well as any other available information. In
practice, there often exists a set of observed covariates
$\boldsymbol{w}_{n}\in\mathcal{W}\subset\mathbb{R}^{m}$ that also acts as
confounders for $T$, $Z$, and $Y$ as shown in Figure 1(b). With the presence
of observed covariates, we need to make the following assumptions to make
valid inferences about the causal effects:
###### Assumption 3.1.
There exists an observed variable $X\in\mathcal{X}\subset\mathbb{R}^{D}$ that
is caused by the unobserved true mediator
$Z\in\mathcal{Z}\subset\mathbb{R}^{d}$ as shown in Figure 1.
###### Assumption 3.2.
The following two conditional independence assumptions hold sequentially.
$\displaystyle\left\\{Y(t^{\prime},z),Z(t)\right\\}$
$\displaystyle\mathchoice{\mathrel{\hbox
to0.0pt{$\displaystyle\perp$\hss}\mkern
2.0mu{\displaystyle\perp}}}{\mathrel{\hbox
to0.0pt{$\textstyle\perp$\hss}\mkern 2.0mu{\textstyle\perp}}}{\mathrel{\hbox
to0.0pt{$\scriptstyle\perp$\hss}\mkern
2.0mu{\scriptstyle\perp}}}{\mathrel{\hbox
to0.0pt{$\scriptscriptstyle\perp$\hss}\mkern
2.0mu{\scriptscriptstyle\perp}}}T|W=w,$ (4) $\displaystyle Y(t^{\prime},z)$
$\displaystyle\mathchoice{\mathrel{\hbox
to0.0pt{$\displaystyle\perp$\hss}\mkern
2.0mu{\displaystyle\perp}}}{\mathrel{\hbox
to0.0pt{$\textstyle\perp$\hss}\mkern 2.0mu{\textstyle\perp}}}{\mathrel{\hbox
to0.0pt{$\scriptstyle\perp$\hss}\mkern
2.0mu{\scriptstyle\perp}}}{\mathrel{\hbox
to0.0pt{$\scriptscriptstyle\perp$\hss}\mkern
2.0mu{\scriptscriptstyle\perp}}}Z(t)|T=t,W=w,$ (5)
where $0<p(T=t|W=w)<1$, $0<p(Z(t)=z|T=t,W=w)<1$, and
$t,t^{\prime}\in\\{0,1\\}$.
Assumption 3.2 is first introduced by Imai et al. [3], which is also known as
_sequential ignorability_. Note that Equation 4 is equivalent to the strong
ignorability assumption common in causal inference [43, 44]. It states that
the treatment assignment $T$ is statistically independent of potential outcome
$Y$ and potential mediators $Z$ given covariates $W$. Equation 5 states that
given the treatment and covariates, the mediator $Z$ can be viewed as if it
was randomized (in other words, there are no unblocked “backdoor” paths
between the mediator and outcome [18]).
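For concreteness, the following minimal Python sketch (ours, not part of the paper) turns Equations 1-3 into plug-in estimates, taking as input arrays of model-based counterfactual outcome predictions; the nested layout `y[t][m]`, holding the predicted $y(t,\boldsymbol{z}(m))$ for every unit, is an assumption made for this example.

```python
import numpy as np

def mediation_effects(y):
    """Plug-in ACME(t), ADE(t), ATE from predicted counterfactual outcomes.

    y[t][m] is an array over units of predicted y(t, z(m)); both potential
    outcomes are never observed jointly, so these come from a fitted model.
    """
    acme = {t: float(np.mean(y[t][1] - y[t][0])) for t in (0, 1)}  # Eq. (1)
    ade = {t: float(np.mean(y[1][t] - y[0][t])) for t in (0, 1)}   # Eq. (2)
    ate = float(np.mean(y[1][1] - y[0][0]))                        # Eq. (3)
    return acme, ade, ate

# Toy usage with made-up predictions for 5 units:
rng = np.random.default_rng(0)
y = {t: {m: rng.normal(size=5) for m in (0, 1)} for t in (0, 1)}
print(mediation_effects(y))
```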
## 4 Method
We leverage the model structure of identifiable variational autoencoder (iVAE)
to estimate the causal mediation effects based on the causal graphs
illustrated in Figure 1. Our primary objective is to learn a disentangled
representation of the true mediator in the latent space so that the
statistical distance between $p(\boldsymbol{z}|t=0)$ and
$p(\boldsymbol{z}|t=1)$ can be better estimated. In the following sections, we
briefly review the concepts of identifiable variational autoencoder (iVAE) in
Section 4.1, present our framework in Section 4.2, and formally state the
identifiability of our framework in Section 4.3.
Figure 2: Illustration of an iVAE where the blue nodes correspond to
probabilistic distributions.
### 4.1 Identifiable Variational Autoencoder (iVAE)
To begin with, here we provide a brief overview of iVAE [15]. We abuse the
notation slightly by redefining $\boldsymbol{x}$ and $\boldsymbol{z}$ to refer
to the observed data and the latent feature learned by a general variational
autoencoder (VAE), respectively. The primary claim made by iVAE is that a VAE
becomes identifiable up to _a linear invertible transformation_ (see Section
4.3 for full definitions) if we introduce a factorized prior distribution over
the latent variable $\boldsymbol{z}$ conditioned on an auxiliary variable
$\boldsymbol{u}$. Specifically, we have $\boldsymbol{z}$ sampled from
$p(\boldsymbol{z}|\boldsymbol{u})$ which is assumed to be conditionally
factorial with each $z_{i}\in\boldsymbol{z}$ belonging to a univariate
exponential family as specified by the following probability density function:
$p_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})=\prod_{i}\frac{Q_{i}(z_{i})}{C_{i}(\boldsymbol{u})}\exp\left[\sum_{j=1}^{k}S_{i,j}(z_{i})\lambda_{i,j}(\boldsymbol{u})\right],$
(6)
where $Q_{i}$ is the base measure, $C_{i}(\boldsymbol{u})$ is the normalizing
constant, $k$ is a pre-defined number of sufficient statistics,
$\boldsymbol{S}_{i}=(S_{i,1},...,S_{i,k})$ are the sufficient statistics, and
$\boldsymbol{\lambda}_{i}(\boldsymbol{u})=\left(\lambda_{i,1}(\boldsymbol{u}),...,\lambda_{i,k}(\boldsymbol{u})\right)$
are the natural parameters.
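As one concrete member of the family in Equation 6, a factorized Gaussian prior corresponds to $k=2$ sufficient statistics $\boldsymbol{S}_{i}(z_{i})=(z_{i},z_{i}^{2})$. The sketch below is ours; the layer sizes and interface are illustrative assumptions, not the authors' architecture. It parameterizes the prior's mean and variance with a small network taking the auxiliary variable $\boldsymbol{u}$ as input.

```python
import torch
import torch.nn as nn

class ConditionalGaussianPrior(nn.Module):
    """p(z | u) as a conditionally factorial Gaussian, a k = 2 member of the
    exponential family in Equation 6, with parameters produced by an MLP of u."""

    def __init__(self, u_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(u_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * z_dim))

    def forward(self, u):
        mean, log_var = self.net(u).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

# Example for case (a): the binary treatment alone is the auxiliary variable.
prior = ConditionalGaussianPrior(u_dim=1, z_dim=2)
p_z = prior(torch.tensor([[0.0], [1.0]]))  # one prior per treatment arm
z = p_z.rsample()                          # reparameterized samples, shape (2, 2)
```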
The architecture of the iVAE framework is displayed in Figure 2, which
consists of a variational posterior
$q_{\boldsymbol{\phi}}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{u})$ and a
conditional generative model
$p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z}|\boldsymbol{u})=p_{\textbf{f}}(\boldsymbol{x}|\boldsymbol{z})p_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$
where f is an injective function such that
$p_{\textbf{f}}(\boldsymbol{x}|\boldsymbol{z})=p_{\boldsymbol{\epsilon}}(\boldsymbol{x}-\textbf{f}(\boldsymbol{z}))$
and $\boldsymbol{\epsilon}$ is an independent noise variable with probability
density function $p(\boldsymbol{\epsilon})$. The parameters of the generative
model are denoted as
$\boldsymbol{\theta}=\\{\textbf{f},\boldsymbol{S},\boldsymbol{\lambda}\\}$.
When fitting iVAE on observed data, the parameter vector
$(\boldsymbol{\theta},\boldsymbol{\phi})$ is learned by maximizing the
evidence lower bound (ELBO)
$\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{u})$:
$\log
p_{\boldsymbol{\theta}}(\boldsymbol{x}|\boldsymbol{u})\geq\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{u})\coloneqq\mathbb{E}_{q_{\boldsymbol{\phi}}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{u})}\left[\log
p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z}|\boldsymbol{u})-\log
q_{\boldsymbol{\phi}}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{u})\right],$
(7)
where we use the reparameterization trick to sample from
$q_{\boldsymbol{\phi}}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{u})$. Briefly
speaking, both the model structure and the learning process of iVAE are
similar to conventional VAEs except that the prior, the variational posterior,
and the decoder are additionally conditioned on the auxiliary variable
$\boldsymbol{u}$. However, it is important to note that $\boldsymbol{u}$ must
have some association with $\boldsymbol{x}$ and $\boldsymbol{z}$.
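A minimal sketch of how Equation 7 is estimated in practice is shown below; it is ours, and the module interfaces (returning `torch` distributions) are assumptions for the example. A single reparameterized sample from the variational posterior gives an unbiased one-sample Monte Carlo estimate of the ELBO.

```python
import torch

def ivae_elbo(x, u, encoder, decoder, prior):
    """One-sample Monte Carlo estimate of the ELBO in Equation 7.

    encoder(x, u) and prior(u) are assumed to return torch Normal distributions
    over z, and decoder(z) a distribution over x.
    """
    q_z = encoder(x, u)                       # q_phi(z | x, u)
    z = q_z.rsample()                         # reparameterization trick
    log_px = decoder(z).log_prob(x).sum(-1)   # log p_f(x | z)
    log_pz = prior(u).log_prob(z).sum(-1)     # log p_{S,lambda}(z | u)
    log_qz = q_z.log_prob(z).sum(-1)          # log q_phi(z | x, u)
    return (log_px + log_pz - log_qz).mean()  # maximize over (theta, phi)
```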
### 4.2 Estimating Mediation Effect with VAE
(a)
(b)
Figure 3: Illustration of the overall architecture of IMAVAE for (a) case
without observed covariates and (b) case with observed covariates. Note that
in case (b) the treatment assignment $T$ and the observed covariates $W$ are
first concatenated and then passed into the prior, encoder, and decoder.
In this section, we formally present our approach — Identifiable Mediation
Analysis with Variational Autoencoder (IMAVAE), with the overall architecture
displayed in Figure 3. The encoder, decoder, and prior components in IMAVAE
have exactly the same probabilistic form as specified in Section 4.1 and share
a similar structure with iVAE, where we take the high-dimensional feature $X$
as the input to the encoder to learn the unobserved mediator $Z$ and generate
a reconstruction $\hat{X}$ with the decoder. Importantly, we further include a
parametric model $g_{\boldsymbol{\gamma}}$ to predict the outcome $\hat{Y}$.
Figures 3(a) and 3(b) depict two variants of our framework, corresponding to
the two cases outlined in the causal graphs in Figure 1:
* •
_Case (a)_ : Without observed covariates, the treatment assignment $T$ is
employed as the auxiliary variable and serves as input to the encoder, prior,
and predictor, as illustrated in Figure 3(a).
* •
_Case (b)_ : With observed covariates, we first concatenate the observed
covariates $W$ and the treatment assignment $T$. The concatenated vector
$(W,T)$ is then passed into the encoder, prior, and predictor as the auxiliary
variable, as illustrated in Figure 3(b).
Similar to iVAE, we denote the parameter vector of IMAVAE as
$(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\gamma})$ where
$\boldsymbol{\theta}=\\{\textbf{f},\boldsymbol{S},\boldsymbol{\lambda}\\}$.
When fitting IMAVAE to the observed data, we optimize the parameter vector by
minimizing the following objective:
$\boldsymbol{\theta}^{*},\boldsymbol{\phi}^{*},\boldsymbol{\gamma}^{*}\coloneqq\arg\min_{\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\gamma}}\left\\{\alpha\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\hat{\boldsymbol{x}},\boldsymbol{x})-\beta\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{u})+\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{S},\boldsymbol{\lambda},\boldsymbol{\gamma}}(\hat{y},y)\right\\},$
(8)
where $\boldsymbol{u}=t$ for case (a), $\boldsymbol{u}=(\boldsymbol{w},t)$ for
case (b),
$\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\hat{\boldsymbol{x}},\boldsymbol{x})$
is the discrepancy between the input feature $\boldsymbol{x}$ and its
reconstruction $\hat{\boldsymbol{x}}$,
$\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{S},\boldsymbol{\lambda},\boldsymbol{\gamma}}(\hat{y},y)$
is the error between the predicted outcome $\hat{y}$ and the true outcome $y$.
$\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\boldsymbol{x},\boldsymbol{u})$
represents the same loss term as Equation 7. We note that this creates some
overlap, as the reconstruction term on $\boldsymbol{x}$ also appears in Equation 7,
but we choose this form to highlight each term independently (it does not change
the overall loss with appropriately chosen weights). $\alpha$ and $\beta$ are
hyperparameters representing the importance of the reconstruction error and
the ELBO, respectively. In our experiments, we use mean squared error (MSE)
loss for both
$\mathcal{L}_{\boldsymbol{\theta},\boldsymbol{\phi}}(\hat{\boldsymbol{x}},\boldsymbol{x})$
and
$\mathcal{L}_{\boldsymbol{\phi},\boldsymbol{S},\boldsymbol{\lambda},\boldsymbol{\gamma}}(\hat{y},y)$.
The prior
$p_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$ is
set to be a multivariate normal distribution whose mean and covariance are
parameterized as a function of $\boldsymbol{u}$ using a neural network.
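Putting the pieces together, a minimal sketch of the objective in Equation 8 is given below; it is ours, with the module interfaces following the ELBO sketch above and the MSE choices stated in the text, and should be read as an illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def imavae_loss(x, y, u, encoder, decoder, prior, predictor, alpha=1.0, beta=1.0):
    """One-sample estimate of Equation 8: weighted reconstruction MSE,
    negated ELBO, and outcome-prediction MSE."""
    q_z = encoder(x, u)
    z = q_z.rsample()
    p_x = decoder(z)

    recon = F.mse_loss(p_x.mean, x)                   # L(x_hat, x)
    elbo = (p_x.log_prob(x).sum(-1)
            + prior(u).log_prob(z).sum(-1)
            - q_z.log_prob(z).sum(-1)).mean()         # Equation 7
    outcome = F.mse_loss(predictor(z, u), y)          # L(y_hat, y)
    return alpha * recon - beta * elbo + outcome
```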
To give an estimation on the direct, mediated, and total effects after fitting
the parameters, we repeatedly sample $\boldsymbol{z}(t)$ from the learned
distributions (i.e.,
$p_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|t)$ for case (a) and
$p_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{w},t)$ for
case (b)). Next, we feed both $\boldsymbol{z}(t)$ and the auxiliary variables
into the predictor $g_{\boldsymbol{\gamma}}$ to obtain
$y(t,\boldsymbol{z}(t))$ for case (a) or
$y(t,\boldsymbol{w},\boldsymbol{z}(t))$ for case (b). Finally, we estimate the
ACME, ADE, and ATE according to Equations 1-3 using estimated values of $y$.
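The estimation step described in this paragraph can be sketched as follows (ours; the interfaces of `prior` and `predictor` and the shape conventions are assumptions): sample the mediator from the learned prior under each treatment arm, push it through the outcome predictor under each treatment arm, and average according to Equations 1-3.

```python
import torch

@torch.no_grad()
def estimate_effects(prior, predictor, w=None, n_samples=1000):
    """Monte Carlo ACME/ADE/ATE after fitting IMAVAE (case (a) if w is None,
    case (b) otherwise, with u = concat(w, t) as in the text)."""
    def u(t):
        # w, if given, is assumed to be a covariate row of shape (1, m).
        t_col = torch.full((n_samples, 1), float(t))
        if w is None:
            return t_col
        return torch.cat([w.expand(n_samples, -1), t_col], dim=-1)

    y = {}
    for t in (0, 1):        # treatment level seen by the predictor
        for m in (0, 1):    # treatment level generating the mediator
            z = prior(u(m)).sample()
            y[t, m] = predictor(z, u(t)).squeeze(-1)

    acme = {t: (y[t, 1] - y[t, 0]).mean().item() for t in (0, 1)}  # Eq. (1)
    ade = {t: (y[1, t] - y[0, t]).mean().item() for t in (0, 1)}   # Eq. (2)
    ate = (y[1, 1] - y[0, 0]).mean().item()                        # Eq. (3)
    return acme, ade, ate
```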
### 4.3 Identifiability of IMAVAE
In this section, we use similar definitions and assumptions stated by
Khemakhem et al. [15]. Specifically, let $\mathcal{Z}\subset\mathbb{R}^{d}$ be
the support of distribution of $\boldsymbol{z}$. The support of distribution
of $\boldsymbol{u}$ is $\mathcal{U}=\\{0,1\\}$ for case (a) and
$\mathcal{U}=\\{0,1\\}\times\mathcal{W}\subset\mathbb{R}^{m+1}$ for case (b).
We denote by
$\textbf{S}\coloneqq(\textbf{S}_{1},...,\textbf{S}_{d})=(S_{1,1},...,S_{d,k})\in\mathbb{R}^{dk}$
the vector of sufficient statistics of Equation 6 and
$\boldsymbol{\lambda}({\boldsymbol{u}})=(\boldsymbol{\lambda}_{1}(\boldsymbol{u}),...,\boldsymbol{\lambda}_{d}(\boldsymbol{u}))=(\lambda_{1,1}(\boldsymbol{u}),...,\lambda_{d,k}(\boldsymbol{u}))\in\mathbb{R}^{dk}$
the vector of its parameters. Following the same notations in [15], we define
$\mathcal{X}\subset\mathbb{R}^{D}$ as the image of f in Equation 6 and denote
by $\textbf{f}^{-1}:\mathcal{X}\rightarrow\mathcal{Z}$ the inverse of f.
Furthermore, we make the following assumption on the predictor:
###### Assumption 4.1.
The predictor $g_{\boldsymbol{\gamma}}(\boldsymbol{z},\boldsymbol{u})$ takes
the following form:
$g_{\boldsymbol{\gamma}}(\boldsymbol{z},\boldsymbol{u})\coloneqq
p_{\textbf{h}}(y|\boldsymbol{z},\boldsymbol{u})=p_{\boldsymbol{\xi}}(y-\textbf{h}(\boldsymbol{z},\boldsymbol{u})),$
(9)
where the function
$\textbf{h}:\mathcal{Z}\times\mathcal{U}\rightarrow\mathcal{Y}$ is injective,
$\mathcal{Y}\subset\mathbb{R}$ is the image of h, and $\boldsymbol{\xi}$ is an
independent noise variable with probability density function
$p_{\boldsymbol{\xi}}(\boldsymbol{\xi})$.
Similar to [15], for the sake of analysis, we treat h as a parameter of the
entire model and define
$\boldsymbol{\psi}\coloneqq(\textbf{f},\textbf{h}):\mathcal{Z}\times\mathcal{U}\rightarrow\mathcal{X}\times\mathcal{Y}$.
$\boldsymbol{\psi}$ remains injective since both f and h are injective, and we
consider the projection $\boldsymbol{\psi}^{-1}$ on $\mathcal{Z}$ to be
$\boldsymbol{\psi}_{|\boldsymbol{z}}^{-1}$. The domain of parameters is thus
$\Theta=\\{\boldsymbol{\theta}\coloneqq(\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda})\\}$.
To formally present our claim, we give the following definitions:
###### Definition 4.2.
Let $\sim$ be an equivalence relation on $\Theta$. We say that
$p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z},y|\boldsymbol{u})$ is
identifiable up to $\sim$ if
$p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z},y|\boldsymbol{u})=p_{\boldsymbol{\tilde{\theta}}}(\boldsymbol{x},\boldsymbol{z},y|\boldsymbol{u})\Longrightarrow\boldsymbol{\theta}\sim\boldsymbol{\tilde{\theta}}$.
###### Definition 4.3.
Let $\sim_{A}$ be the equivalence relation on $\Theta$ defined as follows:
$(\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda})\sim(\tilde{\textbf{f}},\tilde{\textbf{h}},\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}})\Longleftrightarrow\exists
A,\textbf{c}\,|\,\textbf{S}(\boldsymbol{\psi}^{-1}_{\boldsymbol{|z}}(\boldsymbol{x},y))=A\tilde{\textbf{S}}(\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}(\boldsymbol{x},y))+\textbf{c},\forall\boldsymbol{x}\in\mathcal{X};y\in\mathcal{Y},$
(10)
where $A$ is an invertible $dk\times dk$ matrix and c is a vector.
With all the assumptions and definitions stated above, we state our theorem
below as an extension of the results in [15]. The detailed proof will be
provided in Appendix A.
###### Theorem 4.4.
(Extension to Theorem 1 in Khemakhem et al. [15]) Assume that we observe data
sampled from the generative model
$p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z},y|\boldsymbol{u})=p_{\textbf{f}}(\boldsymbol{x}|\boldsymbol{z})p_{\textbf{h}}(y|\boldsymbol{z},\boldsymbol{u})p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$
where $p_{\textbf{f}}(\boldsymbol{x}|\boldsymbol{z})$,
$p_{\textbf{h}}(y|\boldsymbol{z},\boldsymbol{u})$ and
$p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$ follow
the distributional form defined in Section 4.1, Equation 9, and Equation 6,
respectively. Then the parameters
$(\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda})$ will be
$\sim_{A}$-identifiable if we assume the following holds:
1. 1.
The set
$\\{(\boldsymbol{x},y)\in\mathcal{X}\times\mathcal{Y}\,|\,\varphi_{\epsilon}(\boldsymbol{x})=0,\varphi_{\xi}(y)=0\\}$
has measure zero, where $\varphi_{\boldsymbol{\epsilon}}$ and
$\varphi_{\boldsymbol{\xi}}$ are the characteristic functions of
$p_{\boldsymbol{\epsilon}}$ and $p_{\boldsymbol{\xi}}$ defined in Section 4.1
and Equation 9, respectively.
2. 2.
The functions f and h are both injective.
3. 3.
The sufficient statistics $S_{i,j}$ in Equation 6 are differentiable almost
everywhere, and $(S_{i,j})_{1\leq j\leq k}$ are linearly independent on any
subset of $\mathcal{Y}$ of measure greater than zero.
4. 4.
There exists $dk+1$ distinct points
$\boldsymbol{u}_{0},...,\boldsymbol{u}_{dk}$ such that the matrix
$L=(\boldsymbol{\lambda}(\boldsymbol{u}_{1})-\boldsymbol{\lambda}(\boldsymbol{u}_{0}),...,\boldsymbol{\lambda}(\boldsymbol{u}_{dk})-\boldsymbol{\lambda}(\boldsymbol{u}_{0}))$
of size $dk\times dk$ is invertible.
With the aforementioned theorem, we state that the joint distribution learned
by the generative model
$p_{\boldsymbol{\theta}}(\boldsymbol{x},\boldsymbol{z},y|\boldsymbol{u})$ is
identifiable. Moreover, it is important to highlight that our extension of the
identifiability theorem, originally presented in [15], incorporates the
additional conditioning of $y$ on $\boldsymbol{u}$, thereby broadening the
scope of iVAE.
## 5 Experiments
We follow the approach using synthetic and semi-synthetic datasets used in
recent causal inference manuscripts to allow for benchmarking and comparison
of the results. We test IMAVAE on 3 datasets: 1 synthetic dataset and 2 semi-
synthetic datasets (code will be released upon acceptance). This allows us to
evaluate how well we estimate counterfactual values of the treatment
assignment and the mediator, and the direct, mediated, and total effects under
reasonable assumptions. The detailed experimental setup (e.g. training
details, computing resources, licenses, etc.) is given in Appendix B.
Figure 4: Distribution of the true and the estimated
$p(\boldsymbol{z}|\boldsymbol{u})$ in the latent space where the upper row
corresponds to case (a) without observed covariates, i.e. $\boldsymbol{u}=t$
and the bottom row corresponds for case (b) with observed covariates, i.e.
$\boldsymbol{u}=(\boldsymbol{w},t)$. From left to right, we present (left) the
true distribution of $p(\boldsymbol{z}|\boldsymbol{u})$, (middle left) the
estimated distribution
$\hat{p}_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$
by IMAVAE, (middle right) the estimated distribution
$\hat{p}_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$
without the reconstruction term, i.e. $\alpha=-1$, and (right) the estimated
distribution
$\hat{p}_{\boldsymbol{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$
without the ELBO term, i.e. $\beta=0$. The blue dots denote samples in control
group and the orange dots denote samples in treatment group.
### 5.1 Synthetic Dataset
|  | IMAVAE | IMAVAE ($\alpha=-1$) | IMAVAE ($\beta=0$) |
|---|---|---|---|
| ACME ($t=1$) | 0.056 $\pm$ .007 | 0.078 $\pm$ .007 | 2.875 $\pm$ .016 |
| ADE ($t=0$) | 0.058 $\pm$ .000 | 0.052 $\pm$ .000 | 0.043 $\pm$ .000 |
| ATE | 0.003 $\pm$ .007 | 0.025 $\pm$ .006 | 2.917 $\pm$ .015 |

Table 1: Absolute error of ACME under treated, ADE under control, and ATE on
the synthetic dataset for IMAVAE in case (a) _without_ observed covariates.
|  | IMAVAE | IMAVAE ($\alpha=-1$) | IMAVAE ($\beta=0$) |
|---|---|---|---|
| ACME ($t=1$) | 0.214 $\pm$ .016 | 0.379 $\pm$ .014 | 5.912 $\pm$ .028 |
| ADE ($t=0$) | 0.194 $\pm$ .000 | 0.385 $\pm$ .000 | 0.127 $\pm$ .000 |
| ATE | 0.019 $\pm$ .015 | 0.011 $\pm$ .016 | 6.036 $\pm$ .028 |

Table 2: Absolute error of ACME under treated, ADE under control, and ATE on
the synthetic dataset for IMAVAE in case (b) _with_ observed covariates.
We first construct a synthetic dataset following the causal graphs in Figure
1, where we give the details of data generation process in Appendix C. We set
the unobserved true mediator to be two-dimensional (i.e. $d=2$) for easier
visualization. We display the distributions of the true and estimated
unobserved mediator in Figure 4, where we note that IMAVAE effectively learns
_disentangled representations_ of $Z$ for the control and treatment groups in
the latent space, up to trivial indeterminacies such as rotations and sign
flips, for cases both with and without observed covariates. If we remove the
reconstruction term (i.e. $\alpha=-1$ due to the overlap of reconstruction
terms in Equation 8), the shape and orientation of the distributions become
slightly different but remain disentangled. However, if we discard the ELBO
term (i.e. $\beta=0$), the model fails to separate the distributions of
control and treatment groups. We also compute the absolute errors between the
estimated ACME, ADE, ATE, and their corresponding ground truths as shown in
Tables 1 and 2. It can be observed that IMAVAE yields slightly larger errors
when the reconstruction term is removed (i.e., $\alpha=-1$). However, without
the ELBO term (i.e., $\beta=0$), the model produces significantly larger
errors on ACME and ATE. From the obtained results, we conclude that the iVAE-
like structure in our framework is essential for learning a better
representation of the unobserved mediator, which, in turn, improves the
accuracy of mediation effect estimation.
### 5.2 Electrophysiological Dataset
As described in the introduction, causal mediation analysis holds significant
relevance for applications in systems neuroscience. One such area is the
emerging field of targeted neurostimulation (see [45, 46, 47] for a
description), where the brain is manipulated by optical, electrical, or
mechanical stimulation with the goal of manipulating behavior in many brain
conditions. However, while the mechanism of behavioral change is the brain,
identifying such changes is challenging with existing causal mediation
techniques due to the high-dimensional and complex nature of the brain data.
Accurate appraisal of neural changes causing behavioral change will provide a
deeper understanding of mechanisms driving neural activity and potentially
lead to more efficacious treatments.
We demonstrate the applicability of our method to this domain with a semi-synthetic
dataset by post-processing real multi-site brain recordings using local field
potential (LFP) data from 26 mice [48], which is publicly available [49]. Each
mouse is recorded for a certain number of time steps, resulting in a total of
43,882 data points. We take the LFP signals as the observed feature $X$, while
the true mediator $Z$ is manually generated by applying principal component
analysis (PCA) to map $X$ into a lower-dimensional representation. The
treatment assignment $T$ indicates whether the mouse is recorded during an
open field exploration $(T=0)$ or a tail suspension test $(T=1)$. Furthermore,
we consider the genotype of the mouse, a binary variable, as an observed
covariate, denoted by $W\in\{0,1\}$. Lastly, we construct the outcome $Y$
manually as a function of the treatment, the mediator, and the genotype (only
for case (b) with observed covariate). The detailed procedure of dataset
generation is given in Appendix D.
We compare our method with two baseline models that are designed to handle
high-dimensional mediators: an integrated framework of shallow or deep neural
network and linear SEM (Shallow/Deep LSEM) [34] and a high-dimensional
mediation analysis (HIMA) framework [39]. Notably, HIMA considers each
component of $Z$ as an individual mediator instead of a multidimensional
mediator. As such, we report the mediation effect using the component with the
highest correlation. We compute and display the absolute errors of ACME, ADE,
and ATE in Table 3. Our results indicate that IMAVAE outperforms both
benchmarks by a very wide margin on all estimations except the ATE in case
(a) without covariates. The two benchmarks used in this experiment yield
significantly larger errors on ACME and ADE. We believe this is reasonable, as
both benchmarks are designed based on linear SEMs and are thus not able to
capture the correlation between the components of $Z$.
Table 3: Absolute error of ACME under treated, ADE under control, and ATE on
the tail suspension test dataset for IMAVAE and other benchmarks.
| Case (a) | | | | Case (b) | | | |
---|---|---|---|---|---|---|---|---
| IMAVAE (ours) | Shallow LSEM | Deep LSEM | HIMA | IMAVAE (ours) | Shallow LSEM | Deep LSEM | HIMA
ACME $(t=1)$ | 2.348 $\pm$ 0.003 | 15.14 $\pm$ 0.03 | 15.48 $\pm$ 0.07 | 13.95 $\pm$ 0.03 | 0.559 $\pm$ 0.002 | 3.06 $\pm$ 2.16 | 4.80 $\pm$ 0.44 | 2.67 $\pm$ 0.02
ADE $(t=0)$ | 1.603 $\pm$ 0.000 | 14.71 $\pm$ 0.03 | 15.06 $\pm$ 0.07 | 3.82 $\pm$ 0.01 | 0.782 $\pm$ 0.000 | 5.03 $\pm$ 1.20 | 5.16 $\pm$ 0.51 | 3.21 $\pm$ 0.01
ATE | 0.744 $\pm$ 0.003 | 0.42 $\pm$ 0.06 | 0.42 $\pm$ 0.14 | 17.77 $\pm$ 0.03 | 0.223 $\pm$ 0.002 | 1.96 $\pm$ 3.36 | 0.36 $\pm$ 0.94 | 0.54 $\pm$ 0.02
### 5.3 Jobs II Dataset
To evaluate whether our method can generalize to real-world scenarios used in
recent causal mediation analysis frameworks, we test IMAVAE on the Jobs II
dataset [50], which aims to explore the impact of unemployment on workers’
stress and mental health and evaluate the potential benefits of participation
in a job-search skills seminar. The dataset includes a binary treatment
assignment $T$, which indicates whether a participant was assigned to attend a
job-search skills seminar ($T=1$) or to receive a booklet ($T=0$). The
mediator $Z$ is a continuous variable that measures the job-search efficacy.
All other attributes are treated as the observed covariates $W$. The outcome
variable $Y$ is also a continuous variable that represents the level of
depression reported by each participant during follow-up interviews. To obtain
the ground truth for direct and mediated effects, we followed a simulation
procedure similar to [51] to make ACMEs, ADEs, and ATE all equal to zero. The
detailed simulation procedure is given in Appendix E.
We compare the performance of our method with several benchmarks: linear
SEM with interaction (LSEM-I) [3], imputation-based natural effect model (NEM-I)
[24], IPW [23], and Causal Mediation Analysis with Variational Autoencoder
(CMAVAE) [31]. It is worth noting that the Jobs II dataset presents an
observable mediator variable $Z$, which is _not_ the optimal scenario for our
proposed framework, as IMAVAE is specifically designed for CMA with
_implicitly_ observed mediators. Nonetheless, according to the results shown
in Tables 4 and 5 (where $N$ is the total number of simulated samples and
$\eta$ is a simulation parameter which stands for the magnitude of selection
into the mediator), our method still mostly outperforms the benchmarks in
terms of the estimation on ACME, ADE, and ATE with a reasonable level of
uncertainty.
## 6 Discussion
#### Design Choice of the Predictor
In Section 4.3, we prove that the true joint distribution over observed and
latent variables learned by IMAVAE is identifiable if we specify the predictor
$g_{\boldsymbol{\gamma}}$ to be a conditional distribution reparameterized by
the function $\textbf{h}$. However, in practice, $g_{\boldsymbol{\gamma}}$ can be as simple
as a linear or logistic regression model since we believe the identifiability
on $(\textbf{f},\textbf{S},\boldsymbol{\lambda})$ is enough to disentangle the
representations between control and treatment groups and give an accurate
estimation of the mediation effects. We nevertheless encourage readers to
explore alternative designs for $g_{\boldsymbol{\gamma}}$ when optimal performance is required.
#### Limitations
As discussed in Section 3, our method relies on sequential ignorability, a
condition that is not directly testable using the observed data. However,
recent studies [31, 32] propose a potential solution by considering $X$ as a
proxy variable and accounting for hidden confounders. Exploring this approach
represents an intriguing direction for our future research.
#### Applications and Broader Impacts
We believe the proposed model architecture can be very useful for improving
interpretability for neuroscience applications. For instance, the disentangled
mediator representations obtained by IMAVAE can be used to investigate the
brain activities of individuals under different interventions. It can also be
combined with other interpretable methods such as linear factor models to
better illustrate the high-dimensional dynamics in brain networks as proposed
by Talbot et al. [52]. We have not identified any potential negative societal
consequences specific to this manuscript.
Table 4: Absolute error of ACME under treated, ADE under control, and ATE on
simulated Jobs II data for IMAVAE and other benchmarks where 10% of the data
are mediated (i.e. $Z>3$).
| LSEM-I | | NEM-I | | IPW | | CMAVAE | | IMAVAE (ours) |
---|---|---|---|---|---|---|---|---|---|---
$N$ | 500 | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000
| ACME under treated $(t=1)$
$\eta=10$ | 0.9 $\pm$ .04 | 0.6 $\pm$ .02 | 0.6 $\pm$ .03 | 0.8 $\pm$ .01 | 0.6 $\pm$ .04 | 0.8 $\pm$ .02 | 0.2 $\pm$ .00 | 0.3 $\pm$ .00 | 0.1 $\pm$ .02 | 0.1 $\pm$ .01
$\eta=1$ | 0.0 $\pm$ .01 | 0.1 $\pm$ .01 | 0.0 $\pm$ .00 | 0.1 $\pm$ .01 | 0.0 $\pm$ .01 | 0.1 $\pm$ .01 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00 | 0.1 $\pm$ .01 | 0.0 $\pm$ .01
| ADE under control $(t=0)$
$\eta=10$ | 1.3 $\pm$ .07 | 1.6 $\pm$ .06 | 1.2 $\pm$ .06 | 1.8 $\pm$ .05 | 1.2 $\pm$ .06 | 0.2 $\pm$ .06 | 0.1 $\pm$ .00 | 0.0 $\pm$ .03 | 0.3 $\pm$ .00 | 0.3 $\pm$ .00
$\eta=1$ | 3.3 $\pm$ .08 | 0.0 $\pm$ .07 | 1.1 $\pm$ .03 | 0.2 $\pm$ .07 | 3.3 $\pm$ .08 | 0.3 $\pm$ .06 | 0.5 $\pm$ .02 | 0.4 $\pm$ .01 | 0.2 $\pm$ .00 | 0.3 $\pm$ .00
| ATE
$\eta=10$ | 2.2 $\pm$ .05 | 1.0 $\pm$ .06 | 1.8 $\pm$ .05 | 0.9 $\pm$ .06 | 0.5 $\pm$ .05 | 1.0 $\pm$ .06 | 0.3 $\pm$ .01 | 0.3 $\pm$ .03 | 0.2 $\pm$ .02 | 0.2 $\pm$ .01
$\eta=1$ | 3.3 $\pm$ .08 | 0.1 $\pm$ .07 | 3.4 $\pm$ .03 | 0.1 $\pm$ .06 | 3.2 $\pm$ .07 | 0.2 $\pm$ .05 | 0.4 $\pm$ .02 | 0.3 $\pm$ .01 | 0.2 $\pm$ .01 | 0.2 $\pm$ .01
Table 5: Absolute error of ACME under treated, ADE under control, and ATE on
simulated Jobs II data for IMAVAE and other benchmarks where 50% of the data
are mediated (i.e. $Z>3$)
| LSEM-I | | NEM-I | | IPW | | CMAVAE | | IMAVAE (ours) |
---|---|---|---|---|---|---|---|---|---|---
$N$ | 500 | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000 | 500 | 1000
| ACME under treated $(t=1)$
$\eta=10$ | 0.9 $\pm$ .03 | 0.6 $\pm$ .03 | 0.2 $\pm$ .03 | 0.4 $\pm$ .03 | 0.2 $\pm$ .03 | 0.4 $\pm$ .03 | 0.0 $\pm$ .00 | 0.1 $\pm$ .00 | 0.0 $\pm$ .05 | 0.0 $\pm$ .03
$\eta=1$ | 0.1 $\pm$ .01 | 0.0 $\pm$ .01 | 0.2 $\pm$ .00 | 0.1 $\pm$ .01 | 0.1 $\pm$ .01 | 0.0 $\pm$ .01 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00 | 0.0 $\pm$ .05 | 0.0 $\pm$ .03
| ADE under control $(t=0)$
$\eta=10$ | 0.6 $\pm$ .06 | 0.1 $\pm$ .04 | 0.1 $\pm$ .06 | 0.1 $\pm$ .04 | 0.7 $\pm$ .07 | 0.2 $\pm$ .05 | 0.3 $\pm$ .01 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00
$\eta=1$ | 0.1 $\pm$ .10 | 0.3 $\pm$ .10 | 0.1 $\pm$ .10 | 0.3 $\pm$ .04 | 0.3 $\pm$ .10 | 0.2 $\pm$ .04 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00 | 0.1 $\pm$ .00
| ATE
$\eta=10$ | 0.3 $\pm$ .05 | 0.8 $\pm$ .03 | 0.1 $\pm$ .05 | 0.5 $\pm$ .03 | 0.9 $\pm$ .05 | 0.2 $\pm$ .04 | 0.3 $\pm$ .01 | 0.0 $\pm$ .01 | 0.1 $\pm$ .04 | 0.1 $\pm$ .03
$\eta=1$ | 0.1 $\pm$ .09 | 0.3 $\pm$ .04 | 0.3 $\pm$ .10 | 0.3 $\pm$ .04 | 0.2 $\pm$ .10 | 0.2 $\pm$ .04 | 0.0 $\pm$ .01 | 0.2 $\pm$ .01 | 0.1 $\pm$ .04 | 0.1 $\pm$ .03
## 7 Conclusion
This work makes a contribution to the field of causal mediation analysis (CMA)
by proposing a novel method, IMAVAE, that can handle situations where the
mediator is indirectly observed and observed covariates are likely to be
present. Our approach builds on existing CMA methods and leverages the
identifiable variational autoencoder (iVAE) model architecture to provide a
powerful tool for estimating direct and mediated effects. We have demonstrated
the effectiveness of IMAVAE in mediation effect estimation through theoretical
analysis and empirical evaluations. Specifically, we have proved the
identifiability of the joint distribution learned by IMAVAE and demonstrated
the disentanglement of mediators in control and treatment groups. Overall, our
proposed method offers a promising avenue for CMA in settings with much more
complex data, where traditional methods may struggle to provide accurate
estimates.
## References
* Athey and Imbens [2017] Susan Athey and Guido W Imbens. The state of applied econometrics: Causality and policy evaluation. _Journal of Economic perspectives_ , 31(2):3–32, 2017.
* Glass et al. [2013] Thomas A Glass, Steven N Goodman, Miguel A Hernán, and Jonathan M Samet. Causal inference in public health. _Annual review of public health_ , 34:61–75, 2013.
* Imai et al. [2010] Kosuke Imai, Luke Keele, and Dustin Tingley. A general approach to causal mediation analysis. _Psychological methods_ , 15(4):309, 2010.
* Rothman and Greenland [2005] Kenneth J Rothman and Sander Greenland. Causation and causal inference in epidemiology. _American journal of public health_ , 95(S1):S144–S150, 2005.
* Guo et al. [2020] Ruocheng Guo, Lu Cheng, Jundong Li, P Richard Hahn, and Huan Liu. A survey of learning causality with data: Problems and methods. _ACM Computing Surveys (CSUR)_ , 53(4):1–37, 2020.
* Li and Zhu [2022] Zongyu Li and Zhenfeng Zhu. A survey of deep causal model. _arXiv preprint arXiv:2209.08860_ , 2022.
* Hultman et al. [2018] Rainbo Hultman, Kyle Ulrich, Benjamin D. Sachs, Cameron Blount, David E. Carlson, Nkemdilim Ndubuizu, Rosemary C. Bagot, Eric M. Parise, Mai-Anh T. Vu, Neil M. Gallagher, Joyce Wang, Alcino J. Silva, Karl Deisseroth, Stephen D. Mague, Marc G. Caron, Eric J. Nestler, Lawrence Carin, and Kafui Dzirasa. Brain-wide electrical spatiotemporal dynamics encode depression vulnerability. _Cell_ , 173(1):166–180.e14, 2018. ISSN 0092-8674. doi: https://doi.org/10.1016/j.cell.2018.02.012. URL https://www.sciencedirect.com/science/article/pii/S0092867418301569.
* Mague et al. [2022] Stephen D. Mague, Austin Talbot, Cameron Blount, Kathryn K. Walder-Christensen, Lara J. Duffney, Elise Adamson, Alexandra L. Bey, Nkemdilim Ndubuizu, Gwenaëlle E. Thomas, Dalton N. Hughes, Yael Grossman, Rainbo Hultman, Saurabh Sinha, Alexandra M. Fink, Neil M. Gallagher, Rachel L. Fisher, Yong-Hui Jiang, David E. Carlson, and Kafui Dzirasa. Brain-wide electrical dynamics encode individual appetitive social behavior. _Neuron_ , 110(10):1728–1741.e7, 2022. ISSN 0896-6273. doi: https://doi.org/10.1016/j.neuron.2022.02.016. URL https://www.sciencedirect.com/science/article/pii/S0896627322001817.
* Pearl [2001] Judea Pearl. Direct and indirect effects. _Probabilistic and Causal Inference: The Works of Judea Pearl_ , page 373, 2001.
* Wright [1923] Sewall Wright. The theory of path coefficients: a reply to Niles's criticism. _Genetics_ , 8(3):239, 1923.
* Wright [1934] Sewall Wright. The method of path coefficients. _The annals of mathematical statistics_ , 5(3):161–215, 1934.
* Baron and Kenny [1986] Reuben M Baron and David A Kenny. The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. _Journal of personality and social psychology_ , 51(6):1173, 1986.
* Balke and Pearl [2013] Alexander Balke and Judea Pearl. Counterfactuals and policy analysis in structural models. _arXiv preprint arXiv:1302.4929_ , 2013.
* Jöreskog et al. [1996] Karl G Jöreskog, Fan Yang, G Marcoulides, and R Schumacker. Nonlinear structural equation models: The kenny-judd model with interaction effects. _Advanced structural equation modeling: Issues and techniques_ , 3:57–88, 1996.
* Khemakhem et al. [2020] Ilyes Khemakhem, Diederik Kingma, Ricardo Monti, and Aapo Hyvarinen. Variational autoencoders and nonlinear ica: A unifying framework. In _International Conference on Artificial Intelligence and Statistics_ , pages 2207–2217. PMLR, 2020.
* MacKinnon [2012] David P MacKinnon. _Introduction to statistical mediation analysis_. Routledge, 2012.
* MacKinnon and Dwyer [1993] David P MacKinnon and James H Dwyer. Estimating mediated effects in prevention studies. _Evaluation review_ , 17(2):144–158, 1993.
* Pearl [2014] Judea Pearl. Interpretation and identification of causal mediation. _Psychological methods_ , 19(4):459, 2014.
* Rijnhart et al. [2019] Judith JM Rijnhart, Jos WR Twisk, Iris Eekhout, and Martijn W Heymans. Comparison of logistic-regression based methods for simple mediation analysis with a dichotomous outcome variable. _BMC medical research methodology_ , 19:1–10, 2019.
* Holland [1988] Paul W Holland. Causal inference, path analysis and recursive structural equations models. _ETS Research Report Series_ , 1988(1):i–50, 1988.
* Rubin [1974] Donald B Rubin. Estimating causal effects of treatments in randomized and nonrandomized studies. _Journal of educational Psychology_ , 66(5):688, 1974.
* Zheng and van der Laan [2012] Wenjing Zheng and Mark J van der Laan. Targeted maximum likelihood estimation of natural direct effects. _The international journal of biostatistics_ , 8(1):1–40, 2012.
* Huber et al. [2013] Martin Huber, Michael Lechner, and Conny Wunsch. The performance of estimators based on the propensity score. _Journal of Econometrics_ , 175(1):1–21, 2013.
* Lange et al. [2012] Theis Lange, Stijn Vansteelandt, and Maarten Bekaert. A simple unified approach for estimating natural direct and indirect effects. _American journal of epidemiology_ , 176(3):190–195, 2012.
* Holland [1986] Paul W Holland. Statistics and causal inference. _Journal of the American statistical Association_ , 81(396):945–960, 1986.
* Alaa and Van Der Schaar [2017] Ahmed M Alaa and Mihaela Van Der Schaar. Bayesian inference of individualized treatment effects using multi-task gaussian processes. _Advances in neural information processing systems_ , 30, 2017.
* Jiang et al. [2023] Ziyang Jiang, Zhuoran Hou, Yiling Liu, Yiman Ren, Keyu Li, and David Carlson. Estimating causal effects using a multi-task deep ensemble. _arXiv preprint arXiv:2301.11351_ , 2023.
* Louizos et al. [2017] Christos Louizos, Uri Shalit, Joris M Mooij, David Sontag, Richard Zemel, and Max Welling. Causal effect inference with deep latent-variable models. _Advances in neural information processing systems_ , 30, 2017.
* Shalit et al. [2017] Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: generalization bounds and algorithms. In _International Conference on Machine Learning_ , pages 3076–3085. PMLR, 2017.
* Xu et al. [2022] Siqi Xu, Lin Liu, and Zhonghua Liu. Deepmed: Semiparametric causal mediation analysis with debiased deep learning. _arXiv preprint arXiv:2210.04389_ , 2022.
* Cheng et al. [2022] Lu Cheng, Ruocheng Guo, and Huan Liu. Causal mediation analysis with hidden confounders. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_ , pages 113–122, 2022.
* Xu et al. [2023] Ziqi Xu, Debo Cheng, Jiuyong Li, Jixue Liu, Lin Liu, and Ke Wang. Disentangled representation for causal mediation analysis. _arXiv preprint arXiv:2302.09694_ , 2023.
* Chén et al. [2018] Oliver Y Chén, Ciprian Crainiceanu, Elizabeth L Ogburn, Brian S Caffo, Tor D Wager, and Martin A Lindquist. High-dimensional multivariate mediation with application to neuroimaging data. _Biostatistics_ , 19(2):121–136, 2018.
* Nath et al. [2023] Tanmay Nath, Brian Caffo, Tor Wager, and Martin A Lindquist. A machine learning based approach towards high-dimensional mediation analysis. _NeuroImage_ , 268:119843, 2023.
* Zhang et al. [2021a] Haixiang Zhang, Jun Chen, Yang Feng, Chan Wang, Huilin Li, and Lei Liu. Mediation effect selection in high-dimensional and compositional microbiome data. _Statistics in medicine_ , 40(4):885–896, 2021a.
* Perera et al. [2022] Chamila Perera, Haixiang Zhang, Yinan Zheng, Lifang Hou, Annie Qu, Cheng Zheng, Ke Xie, and Lei Liu. Hima2: high-dimensional mediation analysis and its application in epigenome-wide dna methylation data. _BMC bioinformatics_ , 23(1):1–14, 2022.
* Yang et al. [2021] Tianzhong Yang, Jingbo Niu, Han Chen, and Peng Wei. Estimation of total mediation effect for high-dimensional omics mediators. _BMC bioinformatics_ , 22:1–17, 2021.
* Zhang et al. [2021b] Haixiang Zhang, Yinan Zheng, Lifang Hou, Cheng Zheng, and Lei Liu. Mediation analysis for survival data with high-dimensional mediators. _Bioinformatics_ , 37(21):3815–3821, 2021b.
* Zhang et al. [2016] Haixiang Zhang, Yinan Zheng, Zhou Zhang, Tao Gao, Brian Joyce, Grace Yoon, Wei Zhang, Joel Schwartz, Allan Just, Elena Colicino, et al. Estimating and testing high-dimensional mediation effects in epigenetic studies. _Bioinformatics_ , 32(20):3150–3154, 2016.
* Hicks and Tingley [2011] Raymond Hicks and Dustin Tingley. Causal mediation analysis. _The Stata Journal_ , 11(4):605–619, 2011.
* MacKinnon et al. [2007] David P MacKinnon, Amanda J Fairchild, and Matthew S Fritz. Mediation analysis. _Annu. Rev. Psychol._ , 58:593–614, 2007.
* Robins and Greenland [1992] James M Robins and Sander Greenland. Identifiability and exchangeability for direct and indirect effects. _Epidemiology_ , 3(2):143–155, 1992.
* Rosenbaum and Rubin [1983] Paul R Rosenbaum and Donald B Rubin. The central role of the propensity score in observational studies for causal effects. _Biometrika_ , 70(1):41–55, 1983.
* Rubin [2005] Donald B Rubin. Causal inference using potential outcomes: Design, modeling, decisions. _Journal of the American Statistical Association_ , 100(469):322–331, 2005.
* Deisseroth [2011] Karl Deisseroth. Optogenetics. _Nature Methods_ , 8(1):26–29, Jan 2011. ISSN 1548-7105. doi: 10.1038/nmeth.f.324. URL https://doi.org/10.1038/nmeth.f.324.
* Limousin and Foltynie [2019] Patricia Limousin and Tom Foltynie. Long-term outcomes of deep brain stimulation in parkinson disease. _Nature Reviews Neurology_ , 15(4):234–242, Apr 2019. ISSN 1759-4766. doi: 10.1038/s41582-019-0145-9. URL https://doi.org/10.1038/s41582-019-0145-9.
* Tufail et al. [2011] Yusuf Tufail, Anna Yoshihiro, Sandipan Pati, Monica M. Li, and William J. Tyler. Ultrasonic neuromodulation by brain stimulation with transcranial ultrasound. _Nature Protocols_ , 6(9):1453–1470, Sep 2011. ISSN 1750-2799. doi: 10.1038/nprot.2011.371. URL https://doi.org/10.1038/nprot.2011.371.
* Gallagher et al. [2017] Neil Gallagher, Kyle R Ulrich, Austin Talbot, Kafui Dzirasa, Lawrence Carin, and David E Carlson. Cross-spectral factor analysis. _Advances in neural information processing systems_ , 30, 2017.
* Carlson et al. [2023] David Carlson, Sunil Kumar, and Kafui Dzirasa. Multi-region local field potential recordings during a tail-suspension test. _Duke Research Data Repository_ , 2023.
* Vinokur et al. [1995] Amiram D Vinokur, Richard H Price, and Yaacov Schul. Impact of the jobs intervention on unemployed workers varying in risk for depression. _American journal of community psychology_ , 23(1):39–74, 1995.
* Huber et al. [2016] Martin Huber, Michael Lechner, and Giovanni Mellace. The finite sample performance of estimators for mediation analysis under sequential conditional independence. _Journal of Business & Economic Statistics_, 34(1):139–160, 2016.
* Talbot et al. [2020] Austin Talbot, David Dunson, Kafui Dzirasa, and David Carlson. Supervised autoencoders learn robust joint factor models of neural activity. _arXiv preprint arXiv:2004.05209_ , 2020.
* Ben-Israel [1999] Adi Ben-Israel. The change-of-variables formula using matrix volume. _SIAM Journal on Matrix Analysis and Applications_ , 21(1):300–312, 1999.
## Appendix A Detailed Proof of Identifiability
The proof of Theorem 4.4 closely resembles the proof presented in [15], which
consists of 3 main steps.
#### Step 1
The first step is to transform the equality of observed data distributions
into equality of noise-free distributions using a convolutional trick based on
the $1^{st}$ assumption in Theorem 4.4. Similar to [15], we introduce the
volume of a matrix denoted by $\text{vol}\,A$ as the product of singular
values of $A$. When $A$ is full column rank,
$\text{vol}\,A=\sqrt{\text{det}\,A^{\text{T}}A}$, and when $A$ is invertible,
$\text{vol}\,A=|\text{det}\,A|$. We use this matrix volume as a replacement
for the absolute determinant of Jacobian [53] in the change of variables
formula, which is most useful when the Jacobian is a rectangular matrix
$(d<D)$. Suppose we have two sets of parameters
$(\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda})$ and
$(\tilde{\textbf{f}},\tilde{\textbf{h}},\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}})$
such that
$p_{\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{x},y|\boldsymbol{u})=p_{\tilde{\textbf{f}},\tilde{\textbf{h}},\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}}}(\boldsymbol{x},y|\boldsymbol{u})$.
Recall that
$\boldsymbol{\psi}=(\textbf{f},\textbf{h}):\mathcal{Z}\times\mathcal{U}\rightarrow\mathcal{X}\times\mathcal{Y}$
and define the concatenated vector of $\boldsymbol{x}$ and $y$ as
$\boldsymbol{v}$. We have:
$\displaystyle\int_{\mathcal{Z}}p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})p_{\boldsymbol{\psi}}(\boldsymbol{v}|\boldsymbol{z},\boldsymbol{u})d\boldsymbol{z}$
$\displaystyle=\int_{\mathcal{Z}}p_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}}}(\boldsymbol{z}|\boldsymbol{u})p_{\tilde{\boldsymbol{\psi}}}(\boldsymbol{v}|\boldsymbol{z},\boldsymbol{u})d\boldsymbol{z},$
(11)
$\displaystyle\int_{\mathcal{Z}}p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\boldsymbol{\psi}(\boldsymbol{z},\boldsymbol{u}))d\boldsymbol{z}$
$\displaystyle=\int_{\mathcal{Z}}p_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}}}(\boldsymbol{z}|\boldsymbol{u})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\tilde{\boldsymbol{\psi}}(\boldsymbol{z},\boldsymbol{u}))d\boldsymbol{z}.$
(12)
Next, we apply change of variables
$\bar{\boldsymbol{v}}=\boldsymbol{\psi}(\boldsymbol{z},\boldsymbol{u})$ on the
left hand side (LHS) and
$\bar{\boldsymbol{v}}=\tilde{\boldsymbol{\psi}}(\boldsymbol{z},\boldsymbol{u})$
on the right hand side (RHS):
$\displaystyle\int_{\mathcal{X}\times\mathcal{Y}}p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})|\boldsymbol{u})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\bar{\boldsymbol{v}})\text{vol}\,J_{\boldsymbol{\psi}^{-1}}(\bar{\boldsymbol{v}})d\bar{\boldsymbol{v}}$
(13)
$\displaystyle=\int_{\mathcal{X}\times\mathcal{Y}}p_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}}}(\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})|\boldsymbol{u})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\bar{\boldsymbol{v}})\text{vol}\,J_{\tilde{\boldsymbol{\psi}}^{-1}}(\bar{\boldsymbol{v}})d\bar{\boldsymbol{v}},$
where $J$ denotes the Jacobian and recall that
$\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}$ is the projection of
$\boldsymbol{\psi}^{-1}$ on $\mathcal{Z}$. Next, we introduce
$\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}(\boldsymbol{v})=p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})|\boldsymbol{u})\text{vol}\,J_{\boldsymbol{\psi}^{-1}}(\boldsymbol{v})\mathbb{I}_{\mathcal{X}\times\mathcal{Y}}(\boldsymbol{v})$
on the LHS and similarly on the RHS, then, following [15], Equation 13 reduces
to:
$\displaystyle\int_{\mathcal{X}\times\mathcal{Y}}\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}(\bar{\boldsymbol{v}})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\bar{\boldsymbol{v}})d\bar{\boldsymbol{v}}$
$\displaystyle=\int_{\mathcal{X}\times\mathcal{Y}}\tilde{p}_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}},\tilde{\boldsymbol{\psi}},\boldsymbol{u}}(\bar{\boldsymbol{v}})p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\boldsymbol{v}-\bar{\boldsymbol{v}})d\bar{\boldsymbol{v}},$
(14)
$\displaystyle(\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}*p_{\boldsymbol{\epsilon},\boldsymbol{\xi}})(\boldsymbol{v})$
$\displaystyle=(\tilde{p}_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}},\tilde{\boldsymbol{\psi}},\boldsymbol{u}}*p_{\boldsymbol{\epsilon},\boldsymbol{\xi}})(\boldsymbol{v}),$
(15) $\displaystyle
F[\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}](\omega)\varphi_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\omega)$
$\displaystyle=F[\tilde{p}_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}},\tilde{\boldsymbol{\psi}},\boldsymbol{u}}](\omega)\varphi_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\omega),$
(16) $\displaystyle
F[\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}](\omega)$
$\displaystyle=F[\tilde{p}_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}},\tilde{\boldsymbol{\psi}},\boldsymbol{u}}](\omega),$
(17)
$\displaystyle\tilde{p}_{\textbf{S},\boldsymbol{\lambda},\boldsymbol{\psi},\boldsymbol{u}}(\boldsymbol{v})$
$\displaystyle=\tilde{p}_{\tilde{\textbf{S}},\tilde{\boldsymbol{\lambda}},\tilde{\boldsymbol{\psi}},\boldsymbol{u}}(\boldsymbol{v}),$
(18)
where we use $*$ for the convolution operator in Equation 15, we use
$F[\cdot]$ to designate the Fourier transform in Equation 16,
$\varphi_{\boldsymbol{\epsilon},\boldsymbol{\xi}}=F[p_{\boldsymbol{\epsilon},\boldsymbol{\xi}}]$
according to the definition of the characteristic function, and we drop
$\varphi_{\boldsymbol{\epsilon},\boldsymbol{\xi}}(\omega)$ from both sides in
Equation 17 as it is non-zero almost everywhere by the $1^{st}$ assumption in
Theorem 4.4.
Equation 18 is valid for all
$(\boldsymbol{v},\boldsymbol{u})\in\mathcal{X}\times\mathcal{Y}\times\mathcal{U}$.
It indicates that for the observed data distributions to be the same, the
noise-free distributions have to be the same.
#### Step 2
The second step in [15] is about removing all terms that are either a function
of observations $\boldsymbol{x}$ and $y$ or auxiliary variables
$\boldsymbol{u}$, which is done by introducing the points provided by the
$4^{th}$ assumption in Theorem 4.4, and using $\boldsymbol{u}_{0}$ as a
“pivot”. Specifically, by taking the logarithm on both sides of Equation 18
and plugging in Equation 6 for $p_{\textbf{S},\boldsymbol{\lambda}}$, we get:
$\begin{split}&\log\text{vol}\,J_{\boldsymbol{\psi}^{-1}}(\boldsymbol{v})+\sum_{i=1}^{d}\left(\log
Q_{i}(\psi^{-1}_{i\,|\boldsymbol{z}}(\boldsymbol{v}))-\log
C_{i}(\boldsymbol{u})+\sum_{j=1}^{k}S_{i,j}(\psi^{-1}_{i\,|\boldsymbol{z}}(\boldsymbol{v}))\lambda_{i,j}(\boldsymbol{u})\right)\\\
&=\log\text{vol}\,J_{\tilde{\boldsymbol{\psi}}^{-1}}(\boldsymbol{v})+\sum_{i=1}^{d}\left(\log\tilde{Q}_{i}(\tilde{\psi}^{-1}_{i\,|\boldsymbol{z}}(\boldsymbol{v}))-\log\tilde{C}_{i}(\boldsymbol{u})+\sum_{j=1}^{k}\tilde{S}_{i,j}(\tilde{\psi}^{-1}_{i\,|\boldsymbol{z}}(\boldsymbol{v}))\tilde{\lambda}_{i,j}(\boldsymbol{u})\right).\end{split}$
(19)
Let $\boldsymbol{u}_{0},...,\boldsymbol{u}_{dk}$ be the points provided by the
$4^{th}$ assumption of Theorem 4.4, and define
$\bar{\boldsymbol{\lambda}}(\boldsymbol{u})\coloneqq\boldsymbol{\lambda}(\boldsymbol{u})-\boldsymbol{\lambda}(\boldsymbol{u}_{0})$.
Then for $l=1,...,dk$, we plug each of those $\boldsymbol{u}_{l}$ in Equation
19 to obtain $dk+1$ equations and subtract the first equation for
$\boldsymbol{u}_{0}$ from the remaining $dk$ equations to get:
$\langle\textbf{S}(\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})),\bar{\boldsymbol{\lambda}}(\boldsymbol{u}_{l})\rangle+\sum_{i=1}^{d}\log\frac{C_{i}(\boldsymbol{u}_{0})}{C_{i}(\boldsymbol{u}_{l})}=\langle\tilde{\textbf{S}}(\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})),\bar{\tilde{\boldsymbol{\lambda}}}(\boldsymbol{u}_{l})\rangle+\sum_{i=1}^{d}\log\frac{\tilde{C}_{i}(\boldsymbol{u}_{0})}{\tilde{C}_{i}(\boldsymbol{u}_{l})},$
(20)
where $\langle\cdot,\cdot\rangle$ denotes inner product. Let $L$ be the matrix
defined in the $4^{th}$ assumption of Theorem 4.4, and $\tilde{L}$ similarly
for $\tilde{\boldsymbol{\lambda}}$ ($\tilde{L}$ is not necessarily
invertible). Define
$b_{l}=\sum_{i=1}^{d}\log\frac{\tilde{C}_{i}(\boldsymbol{u}_{0})C_{i}(\boldsymbol{u}_{l})}{C_{i}(\boldsymbol{u}_{0})\tilde{C}_{i}(\boldsymbol{u}_{l})}$
and $\textbf{b}$ the vector of all $b_{l}$ for $l=1,...,dk$. Expressing Equation 20 in
matrix form, we get:
$L^{\text{T}}\textbf{S}(\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v}))=\tilde{L}^{\text{T}}\tilde{\textbf{S}}(\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v}))+\textbf{b}.$
(21)
Multiplying both sides of Equation 21 from the left by the inverse of
$L^{\text{T}}$, we get:
$\textbf{S}(\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v}))=A\tilde{\textbf{S}}(\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v}))+\textbf{c},$
(22)
where $A=L^{-\text{T}}\tilde{L}$ and $\textbf{c}=L^{-\text{T}}\textbf{b}$.
Note that here $\boldsymbol{\psi}^{-1}_{|\boldsymbol{z}}(\boldsymbol{v})$ can
refer to the projection obtained from either $\textbf{f}$ or $\textbf{h}$,
since we map the same $Z$ into $\hat{X}$ and $Y$ through $p_{\textbf{f}}$ and
$p_{\textbf{h}}$, respectively.
#### Step 3
In the last step, Khemakhem et al. [15] show that the linear transformation
$A$ is invertible for both $k=1$ and $k>1$, resulting in an equivalence
relation in the iVAE framework. This is mainly based on the $3^{rd}$
assumption in Theorem 4.4 which guarantees the existence of the $dk\times d$
Jacobian matrix of S with rank $d$. This also implies that the Jacobian of
$\tilde{\textbf{S}}\circ\tilde{\boldsymbol{\psi}}^{-1}_{|\boldsymbol{z}}$
exists and is of rank $d$ and so is $A$. We argue that the proof of
invertibility in our framework follows the same line of reasoning as that of
the iVAE.
Therefore, with Equation 22 and the invertibility of $A$, we prove that the
parameters $(\textbf{f},\textbf{h},\textbf{S},\boldsymbol{\lambda})$ are
$\sim_{A}$-identifiable.
## Appendix B Details of Experimental Setup
#### Model Configuration
In all experiments, the encoder
$q_{\boldsymbol{\phi}}(\boldsymbol{z}|\boldsymbol{x},\boldsymbol{u})$, decoder
$p_{\textbf{f}}(\boldsymbol{x}|\boldsymbol{z})$, and prior
$p_{\textbf{S},\boldsymbol{\lambda}}(\boldsymbol{z}|\boldsymbol{u})$ of IMAVAE
are configured as multivariate normal distributions whose mean and covariance
are parameterized as a function of the conditioned variables using a feed-
forward neural network. As for the predictor
$g_{\boldsymbol{\gamma}}(\boldsymbol{z},\boldsymbol{u})$, it is implemented as
a simple linear regression model. It is worth noting that while alternative
stochastic models can also be employed to replace $g_{\boldsymbol{\gamma}}$,
our experiments demonstrate that the linear regression model already achieves
superior performance. See the included code link for details on reproduction
of all experiments.
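To make the configuration concrete, here is a minimal sketch (not the authors' released code) of how such components could be wired together in PyTorch; all dimensions, layer widths, and names are illustrative assumptions.

```python
# Minimal sketch of the IMAVAE components described above (illustrative only).
import torch
import torch.nn as nn

def gaussian_mlp(d_in, d_out, d_hidden=64):
    """Feed-forward network outputting mean and log-variance of a Gaussian."""
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                         nn.Linear(d_hidden, 2 * d_out))

class IMAVAESketch(nn.Module):
    def __init__(self, d_x, d_z, d_u):
        super().__init__()
        self.encoder = gaussian_mlp(d_x + d_u, d_z)      # q_phi(z | x, u)
        self.decoder = gaussian_mlp(d_z, d_x)            # p_f(x | z)
        self.prior = gaussian_mlp(d_u, d_z)              # p_{S,lambda}(z | u)
        self.predictor = nn.Linear(d_z + d_u, 1)         # g_gamma: simple linear regression

    @staticmethod
    def as_normal(params):
        mean, logvar = params.chunk(2, dim=-1)
        return torch.distributions.Normal(mean, torch.exp(0.5 * logvar))

    def forward(self, x, u):
        q_z = self.as_normal(self.encoder(torch.cat([x, u], dim=-1)))
        z = q_z.rsample()                                 # reparameterization trick
        p_x = self.as_normal(self.decoder(z))             # reconstruction term
        p_z = self.as_normal(self.prior(u))               # conditional prior (KL/ELBO term)
        y_hat = self.predictor(torch.cat([z, u], dim=-1)) # outcome prediction
        return q_z, p_x, p_z, y_hat
```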
#### Training Details
When training IMAVAE to minimize the objective in Equation 8, we use the Adam
optimizer and adopt parameter annealing so that the KL divergence will
gradually dominate the reconstruction error in ELBO.
#### Computing Resources
The experiments conducted on the synthetic data and the Jobs II data in
Sections 5.1 and 5.3 are of a relatively small scale and can be executed
locally. The experiment on the electrophysiological data is performed on a
computer cluster equipped with a GeForce RTX 2080 Ti GPU.
#### Data Availability
The dataset “Multi-region local field potential recordings during a tail-suspension test”
comes from an experiment comparing the electrical neural activity and behaviors of
wild-type and Clock-$\Delta$19 genotypes of mice in the tail-suspension test.
This dataset is available to download at
https://research.repository.duke.edu/concern/datasets/zc77sr31x?locale=en for
free under a Creative Commons Attribution-NonCommercial 4.0 International
(CC BY-NC 4.0) license.
Jobs II is a randomized field experiment that investigates the efficacy of a
job training intervention on unemployed workers, which can be downloaded from
the R package "mediation" or the following URL:
https://r-data.pmagunia.com/dataset/r-dataset-package-mediation-jobs.
## Appendix C Synthetic Data Generation
The synthetic data contains $N=6000$ data points and is generated according to
the causal graphs shown in Figure 1. Specifically, for case (a) without
observed covariates, we generate $t_{i}$, $\boldsymbol{z}_{i}$,
$\boldsymbol{x}_{i}$, and $y_{i}$ for $i=1,...,N$ as follows:
$\displaystyle t_{i}$ $\displaystyle\sim\text{Bernoulli}(p),$
$\displaystyle\boldsymbol{z}_{i}$
$\displaystyle\sim\mathcal{N}(\boldsymbol{0},\sigma_{z}^{2}\textbf{I}_{d})+c\mathbb{I}(t_{i}=1)\textbf{1}_{d},$
$\displaystyle\boldsymbol{x}_{i}$
$\displaystyle=f(\boldsymbol{z}_{i})+\boldsymbol{\epsilon}_{x},$
$\displaystyle y_{i}$
$\displaystyle=\mu_{t}t_{i}+\boldsymbol{\mu}_{z}^{\text{T}}\boldsymbol{z}_{i}+\epsilon_{y},$
where $p$ is a probability parameter, $\sigma_{z}^{2}$ is a variance
parameter, $\textbf{I}_{d}$ is a $d\times d$ identity matrix,
$\mathbb{I}(\cdot)$ denotes the indicator function, $\textbf{1}_{d}$ is a
$d$-dimensional vector of all ones,
$f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}$ is a nonlinear function modeled by
an un-trained neural network, $\boldsymbol{\epsilon}_{x}\in\mathbb{R}^{D}$ and
$\epsilon_{y}\in\mathbb{R}$ are Gaussian noise terms, and $c$, $\mu_{t}$, and
$\boldsymbol{\mu}_{z}$ are coefficient constants/vector. For case (b) with
observed covariates, we generate $t_{i}$, $\boldsymbol{z}_{i}$,
$\boldsymbol{x}_{i}$, $\boldsymbol{w}_{i}$, and $y_{i}$ for $i=1,...,N$ as
follows:
$\displaystyle\boldsymbol{w}_{i}$
$\displaystyle\sim\mathcal{N}(\boldsymbol{0},\sigma_{w}^{2}\textbf{I}_{m}),$
$\displaystyle t_{i}$
$\displaystyle\sim\text{Bernoulli}(\text{sigmoid}(\boldsymbol{\mu}_{s}^{\text{T}}\boldsymbol{w}_{i})),$
$\displaystyle\boldsymbol{z}_{i}$
$\displaystyle\sim\mathcal{N}(\boldsymbol{0},\sigma_{z}^{2}\textbf{I}_{d})+c_{1}\mathbb{I}(t_{i}=1)\textbf{1}_{d}+c_{2}f_{1}(\boldsymbol{w}_{i}),$
$\displaystyle\boldsymbol{x}_{i}$
$\displaystyle=f_{2}(\boldsymbol{z}_{i})+\boldsymbol{\epsilon}_{x},$
$\displaystyle y_{i}$
$\displaystyle=\mu_{t}t_{i}+\boldsymbol{\mu}_{z}^{\text{T}}\boldsymbol{z}_{i}+\boldsymbol{\mu}_{w}^{\text{T}}\boldsymbol{w}_{i}+\epsilon_{y},$
where $\sigma_{w}^{2}$, $\sigma_{z}^{2}$ are variance parameters,
$\textbf{I}_{m},\textbf{I}_{d}$ are identity matrices with dimension $m\times
m$ and $d\times d$, respectively, $\mathbb{I}(\cdot)$ denotes the indicator
function, $\textbf{1}_{d}$ is a $d$-dimensional vector of all ones,
$f_{1}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{d},f_{2}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{D}$
are nonlinear functions modeled by un-trained neural networks,
$\boldsymbol{\epsilon}_{x}\in\mathbb{R}^{D}$ and $\epsilon_{y}\in\mathbb{R}$
are Gaussian noise terms, and
$c_{1},c_{2},\mu_{t},\boldsymbol{\mu}_{s},\boldsymbol{\mu}_{z},\boldsymbol{\mu}_{w}$
are coefficient constants/vectors.
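As an illustration of the case (a) recipe above, a possible NumPy implementation is sketched below; the parameter values and the tanh layer standing in for the un-trained network $f$ are assumptions, not the settings used in the paper.

```python
# Illustrative generation of the case (a) synthetic data (parameter values are placeholders).
import numpy as np

rng = np.random.default_rng(0)
N, d, D = 6000, 2, 20
p, sigma_z, c, mu_t = 0.5, 1.0, 2.0, 1.0
mu_z = rng.normal(size=d)
W1 = rng.normal(size=(d, D))                                   # stand-in for the un-trained network f

t = rng.binomial(1, p, size=N)                                 # t_i ~ Bernoulli(p)
z = rng.normal(0.0, sigma_z, size=(N, d)) + c * t[:, None]     # mean shift for treated units
x = np.tanh(z @ W1) + 0.1 * rng.normal(size=(N, D))            # x_i = f(z_i) + eps_x
y = mu_t * t + z @ mu_z + 0.1 * rng.normal(size=N)             # y_i = mu_t t_i + mu_z^T z_i + eps_y
```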
## Appendix D Post-processing for Tail Suspension Test Data
As elaborated in Section 5.2, we have the LFP signals from 26 mice. The full
dataset contains a total of $N=43882$ data points. The observed feature, i.e.
$\boldsymbol{x}_{i}$, represents the power spectral densities of the LFPs
recorded at 11 brain regions. These densities are evaluated from $1$ to
$56Hz$, resulting in a total of $616$ attributes, i.e.
$\boldsymbol{x}_{i}\in\mathbb{R}^{616}$. We also have treatment assignment
$t_{i}$ indicating whether the mouse corresponding to the $i^{th}$ data point
is recorded during an open field exploration $(t_{i}=0)$ or a tail suspension
test $(t_{i}=1)$.
To generate the semi-synthetic dataset, we do some post-processing on these
data. For case (a) without observed covariates, we first apply PCA to map
$\boldsymbol{x}_{i}$ into a lower-dimensional representation
$\boldsymbol{s}_{i}$. Then the true mediator $\boldsymbol{z}_{i}$ is generated
by $\boldsymbol{z}_{i}=\boldsymbol{s}_{i}+\mathbb{I}(t_{i}=1)$ where
$\mathbb{I}(\cdot)$ denotes the indicator function. The final outcome $y_{i}$
is modeled by
$y_{i}=\mu_{t}t_{i}+\boldsymbol{\mu}_{z}^{\text{T}}\boldsymbol{z}_{i}+\epsilon_{y}$ where
$\mu_{t},\boldsymbol{\mu}_{z}$ are coefficient constant/vector and
$\epsilon_{y}$ is a Gaussian noise term. For case (b), we use the genotype of
the mouse $w_{i}\in\\{0,1\\}$ as an observed covariate where $w_{i}=0$ denotes
wild type and $w_{i}=1$ denotes the Clock-$\Delta$19 mutation. The true mediator
$\boldsymbol{z}_{i}$ is then generated by
$\boldsymbol{z}_{i}=\boldsymbol{s}_{i}+\mathbb{I}(t_{i}=1)+f(\boldsymbol{w}_{i})$
where $f:\mathbb{R}\rightarrow\mathbb{R}^{d}$ is a nonlinear function modeled
by a neural network. The final outcome $y_{i}$ is modeled by
$y_{i}=\mu_{t}t_{i}+\boldsymbol{\mu}_{z}^{\text{T}}\boldsymbol{z}_{i}+\mu_{w}w_{i}+\epsilon_{y}$
where $\mu_{t},\mu_{w},\boldsymbol{\mu}_{z}$ are coefficient constants/vector
and $\epsilon_{y}$ is a Gaussian noise term.
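A compact sketch of the case (a) post-processing is shown below, assuming the spectral features are held in an array `X` of shape (N, 616) and the treatment labels in `t`; the coefficient values and the use of scikit-learn's PCA are illustrative assumptions.

```python
# Sketch of the semisynthetic construction for case (a); coefficients are placeholders.
import numpy as np
from sklearn.decomposition import PCA

def make_semisynthetic(X, t, d=2, mu_t=1.0, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    s = PCA(n_components=d).fit_transform(X)        # low-dimensional representation s_i
    z = s + (t == 1)[:, None].astype(float)         # z_i = s_i + I(t_i = 1)
    mu_z = rng.normal(size=d)
    y = mu_t * t + z @ mu_z + noise * rng.normal(size=len(t))   # outcome as a function of t and z
    return z, y
```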
## Appendix E Simulation Procedure for Jobs II Data
To achieve _zero_ direct, mediation, and total effects, we adopt the following
simulation procedure, similar to [31, 51], on the Jobs II dataset. Note that
all attributes other than $T,Z,Y$ are treated as observed covariates $W$ in
this analysis. $X$ in our causal graphs does not exist in this dataset.
1. 1.
Estimate probit specifications in which we regress (a) $T$ on $W$ to get an
estimated probit coefficient $\hat{\beta}_{\text{pop}}$ and (b) $Z$ on $T$ and
$W$ to get estimated probit coefficients $\hat{\gamma}_{\text{pop}}$ and
$\hat{\delta}_{\text{pop}}$, respectively.
2. 2.
Apply the indicator function to $Z$ so that the mediator becomes a binary
variable $Z\coloneqq\mathbb{I}(Z\geq 3)$.
3. 3.
Discard all samples with either $T=1$ or $Z=1$, resulting in a dataset
containing only non-treated and non-mediated samples.
4. 4.
Draw independent Monte Carlo samples (500 and 1000 samples for Tables 4 and 5,
respectively) with replacement $(T^{\prime},Z^{\prime},W^{\prime},Y^{\prime})$
from the resulting dataset.
5. 5.
Simulate the (pseudo-)treatment and (pseudo-)mediator using the following
formula:
$\displaystyle T^{\prime}$
$\displaystyle\coloneqq\mathbb{I}(W^{\prime}\hat{\beta}_{\text{pop}}+U>0),$
$\displaystyle M^{\prime}$
$\displaystyle\coloneqq\eta(T^{\prime}\hat{\gamma}_{\text{pop}}+W^{\prime}\hat{\delta}_{\text{pop}})+\alpha+V,$
where $U\sim\mathcal{N}(0,1)$, $V\sim\mathcal{N}(0,1)$, $\eta$ is a simulation
parameter which stands for the magnitude of selection into the mediator, and
we manually set $\alpha$ such that either $10\%$ or $50\%$ of the samples are
mediated (i.e. $M^{\prime}\geq 3$). Note that here the (pseudo-)mediator
$M^{\prime}$ is continuous after simulation.
With this simulation design, the ground truth of the direct, mediated, and
total effects are all _zero_.
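The following is a rough sketch of how these steps could be coded, assuming NumPy arrays `T`, `Z`, `W`, and `Y` extracted from the Jobs II data and statsmodels for the probit fits; the handling of intercepts, the probit on the binarized mediator, and the calibration of $\alpha$ are assumptions made for illustration.

```python
# Rough sketch of the zero-effect simulation; not the exact procedure used for the tables.
import numpy as np
import statsmodels.api as sm

def simulate_zero_effect(T, Z, W, Y, eta=10.0, n=500, mediated_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    z_bin = (Z >= 3).astype(int)                                    # step 2: binarize the mediator
    beta = sm.Probit(T, sm.add_constant(W)).fit(disp=0).params      # step 1(a): T on W
    gd = sm.Probit(z_bin, sm.add_constant(np.column_stack([T, W]))).fit(disp=0).params  # step 1(b)
    keep = (T == 0) & (z_bin == 0)                                  # step 3: non-treated, non-mediated
    idx = rng.choice(np.where(keep)[0], size=n, replace=True)       # step 4: Monte Carlo draw
    Wp, Yp = W[idx], Y[idx]
    U, V = rng.normal(size=n), rng.normal(size=n)
    Tp = (sm.add_constant(Wp) @ beta + U > 0).astype(int)           # pseudo-treatment
    lin = eta * (sm.add_constant(np.column_stack([Tp, Wp])) @ gd)
    alpha = 3.0 - np.quantile(lin + V, 1.0 - mediated_frac)         # calibrate the mediated share
    Mp = lin + alpha + V                                            # pseudo-mediator (continuous)
    return Tp, Mp, Wp, Yp
```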
# ViPErLEED package II: Spot tracking, extraction and processing of I(V)
curves
Michael Schmid Institute of Applied Physics, TU Wien, Vienna, Austria
Florian Kraushofer Institute of Applied Physics, TU Wien, Vienna, Austria
Department of Chemistry, TUM School of Natural Sciences, Technical University
of Munich, D-85748 Garching bei München, Germany Alexander M. Imre Institute
of Applied Physics, TU Wien, Vienna, Austria Tilman Kißlinger Solid State
Physics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
Lutz Hammer Solid State Physics, Friedrich-Alexander-Universität Erlangen-
Nürnberg, Erlangen, Germany Ulrike Diebold Institute of Applied Physics, TU
Wien, Vienna, Austria Michele Riva Institute of Applied Physics, TU Wien,
Vienna, Austria
###### Abstract
As part of the ViPErLEED project (Vienna package for Erlangen LEED, low-energy
electron diffraction), computer programs have been developed for facile and
user-friendly data extraction from movies of LEED images. The programs make
use of some concepts from astronomical image processing and analysis. As a
first step, flat-field and dark-frame corrections reduce the effects of
inhomogeneities of the camera and screen. In a second step, for identifying
all diffraction maxima (“spots”), it is sufficient to manually mark and label
a single spot or very few spots. Then the program can automatically identify
all other spots and determine the distortions of the image. This forms the
basis for automatic spot tracking (following the “beams” as they move across
the LEED screen) and intensity measurement. Even for complex structures with
hundreds to a few thousand diffraction beams, this step takes less than a
minute. The package also includes a program for further processing of these
$I(V)$ curves (averaging of equivalent beams, manual and/or automatic
selection, smoothing) as well as several utilities. The software is
implemented as a set of plugins for the public-domain image processing program
ImageJ and provided as an open-source package.
## I Introduction
Analysis of energy-dependent low-energy electron diffraction intensities [LEED
$I(V)$ data, also named $I(E)$] is the oldest technique for obtaining high-
accuracy data in surface crystallography and has the advantage that it
requires rather simple instrumentation (LEED optics), which is available in
many ultrahigh-vacuum surface science systems [1, 2, 3, 4, 5, 6]. LEED $I(V)$
analysis is based on the comparison of calculated diffraction intensities $I$
as a function of the kinetic energy $E$ of the electrons (or acceleration
voltage $V$) with the experimental ones. The agreement between calculated and
experimental $I(V)$ curves is described by an $R$ factor, such as Pendry’s $R$
factor $R_{\mathrm{P}}$ [7], which takes values between 0 for perfect
agreement and 1 for uncorrelated curves. (Higher values up to 2 can in
principle occur for anti-correlated curves, but are rare.)
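For orientation, a minimal sketch of the textbook form of Pendry's $R$ factor is given below (this is not the ViPErLEED implementation); both curves are assumed to be sampled on the same uniform energy grid, and `v0i` stands for the imaginary part of the inner potential, typically a few eV.

```python
# Minimal sketch of Pendry's R factor between two I(V) curves (textbook definition).
import numpy as np

def pendry_r(energies, i_exp, i_calc, v0i=5.0):
    de = energies[1] - energies[0]                  # assumes a uniform energy grid
    def y_function(i):
        l = np.gradient(i, de) / i                  # logarithmic derivative L = I'/I
        return l / (1.0 + (l * v0i) ** 2)           # Pendry Y function
    y1, y2 = y_function(i_exp), y_function(i_calc)
    return np.sum((y1 - y2) ** 2) / np.sum(y1 ** 2 + y2 ** 2)
```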
In most LEED $I(V)$ studies, symmetry-equivalent beams are averaged. (Most
LEED $I(V)$ studies use normal incidence. Since LEED $I(V)$ spectra depend
sensitively on the incidence angle, one can verify normal incidence by
comparing the $I(V)$ curves of symmetry-equivalent beams, provided that the
surface has sufficient symmetry (at least a rotation axis). Advantages of
normal incidence are (i) noise reduction by averaging of symmetry-equivalent
beams (also reduction of errors due to residual misalignment) and (ii) the
exact incidence angle is known. Off-normal incidence usually requires handling
the exact incidence angle as one or two additional fit parameter(s) in the
structure search.) The sum of the energy ranges of all the resulting
inequivalent beams that enter the analysis is usually named the size of the
experimental database. It is well known that care must be taken to provide a
sufficiently large experimental database for the structure search. Increasing
the size of the experimental database lowers the risk of the structural
analysis becoming stuck in a local minimum of the $R$ factor, improves the
accuracy, and increases the trustworthiness of the final result [9, 10]. For
obtaining a large database, it is desirable to obtain intensity data with
sufficient quality not only for bright spots but also for the weak ones.
Therefore, a major goal of both the data acquisition and the analysis should
be minimizing the noise.
Acquisition and analysis of experimental LEED $I(V)$ data is not only useful
for structure determination. $I(V)$ curves are also valuable as fingerprints
of structures, especially in cases where different surface structures share
the same qualitative appearance of the LEED pattern. This is the case, for
instance, if two different terminations of the same bulk crystal have a
$(1\times 1)$ LEED pattern, or two different adsorbate coverages lead to the
same superstructure. In such a case, LEED $I(V)$ data can verify that a
surface preparation can be reproduced. This is useful if other methods to
distinguish these two structures are not available in a given vacuum system,
or these other methods are not sensitive enough to detect the difference.
The ViPErLEED (Vienna Package for Erlangen LEED) project aims at drastically
reducing the effort for LEED $I(V)$ studies, both on the computational and on
the experimental side. The package consists of (i) hardware and software for
data acquisition [11], (ii) software for extracting $I(V)$ curves from the
experimental data, as well as (iii) software for calculation of $I(V)$ curves
for a given structure and structure optimization, by minimizing the difference
between the calculated and experimental $I(V)$ data [12]. Part (ii) is the
topic of this paper.
Experimentally, $I(V)$ curves are obtained by acquiring images of the LEED
screen with a digital camera for a range of energies (usually, several hundred
electronvolts, with 0.5 or 1 eV steps). This results in so-called LEED movies,
where the diffraction maxima (the “spots”) move radially, in the ideal case
with a distance from the (0,0) spot proportional to $1/\sqrt{E}$. These LEED
movies are processed by following the motion of the diffraction maxima with
energy (spot tracking) and evaluation of the intensity of each diffraction
maximum (each “beam”) as a function of energy — the $I(V)$ curves. For other
types of LEED investigations, it is also useful to determine the beam
intensities over time or temperature at a fixed energy (e.g., for studying
phase transitions); a program for the analysis of LEED intensities should also
provide this option.
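As a toy illustration of the ideal $1/\sqrt{E}$ behavior, the sketch below predicts a spot position at a new energy from a position measured at a reference energy; real data additionally require the distortion corrections discussed below, so this is only the idealized geometry.

```python
# Idealized radial spot motion: the offset from the (0,0) spot scales as 1/sqrt(E).
import numpy as np

def predicted_position(xy_ref, e_ref, e, xy_00):
    """Scale the offset from the (0,0) spot by sqrt(e_ref / e)."""
    offset = np.asarray(xy_ref, float) - np.asarray(xy_00, float)
    return np.asarray(xy_00, float) + offset * np.sqrt(e_ref / e)

# A spot 200 px from the (0,0) spot at 100 eV sits about 141 px away at 200 eV:
print(predicted_position((500, 300), 100.0, 200.0, (300, 300)))   # -> [441.42..., 300.]
```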
Commercial programs for extraction of $I(V)$ curves from LEED movies usually
require selecting each diffraction maximum manually and are often restricted
to rectangular regions of interest (ROIs) for intensity integration. Integer
ROI coordinates can also lead to jumps of the measured intensity when the ROI
moves by a single pixel. Some older programs are also restricted to 8-bit
images, and do not take advantage of the high dynamic range of modern cameras
(in our experience, about 13–14 bits with the Sony IMX174 sensor and
2$\times$2 binning of pixels). Among developments by scientific groups, the
EasyLEED program by Mayer _et al._ [13, 14] is probably the most suitable
development in this field. It is based on a Kalman filter for spot tracking
and fitting Gaussians for intensity measurement. This open-source program
requires manually selecting each diffraction maximum for measurement, which is
a time-consuming and potentially error-prone task in the case of complex
superstructures. The work of Sojka _et al._ [15, 16, 17] is based on carefully
modeling the relation between the reciprocal lattice and the position on the
LEED screen. After a manual step of roughly superimposing the experimentally
measured and ideal lattice, this makes it possible to automatically assign
$(h,k)$ indices to each spot. While this program is mainly motivated by the
desire for accurate measurements of positions in reciprocal space, it could
also be extended for $I(V)$ measurements. To our knowledge, though, currently
no full solution for $I(V)$ curve extraction based on this program is
available.
One problem in obtaining high-quality LEED $I(V)$ data comes from the grids of
the LEED optics. There are at least two grids, a grounded grid facing the
sample and a suppressor grid at negative voltage that repels electrons that
have undergone substantial energy losses by inelastic scattering. It is more
common to have three grids, and four grids are used in LEED optics that also
serve as retarding-field analyzers [6]. The grids absorb diffracted electrons
hitting a grid wire, and moiré effects can occur from the stacking of
differently rotated grids, which results in a spatially inhomogeneous
transmission. In addition, further inhomogeneities can result from particles
on the grids and dust particles on the camera sensor. In MCP (microchannel
plate)-LEED systems, the MCP contributes to the inhomogeneous response. The
grids also slightly deflect the electrons, which further complicates the
problem. The current work shows how these issues can be mitigated by suitable
calibration images (dark screen, flat field).
Our set of programs was written with the aims of (i) making the extraction of
$I(V)$ curves as user-friendly as possible, and (ii) obtaining the best data
quality with respect to noise and artifacts. The program package is written in
Java and based on the public-domain image processing program ImageJ [18],
which ensures good performance and operation on all major operating systems
(Windows, Linux and MacOS). Details on the installation and use of the
programs are provided in the Supplemental Material [19]; updates will be
published on GitHub (https://github.com/viperleed/viperleed-imagej).
## II Program description
### II.1 Data input
_Input files_ — The main ImageJ plugin is the Spot Tracker (Fig. 1). The main
input are LEED movies (named image stacks in ImageJ); these can be opened by
appropriate ImageJ commands (File$>$Open or File$>$Import$>$Image Sequence),
thus any image format that can be read by ImageJ or one of its plugins can be
used. The plugin package can also open .zip archives (containing images and an
index file with the list of images and metadata) created by the ViPErLEED data
acquisition [11], as well as .vid files of the “AIDA” (Automatic Image and
Data Acquisition) EE2000/EE2010 program [21]. For these formats, the metadata
such as energy, time, beam current $I_{0}$ and additional analog input
channels are also read. When reading a collection of single images with
File$>$Import$>$Image Sequence, these data can be decoded from the file names.
The “Set Energies I0, t” button of the spot tracker panel also includes an
option to enter these values as a linear function or read them from an ImageJ
table. (In ImageJ, opening a comma- or tab-delimited file, .csv or .tsv,
creates a table). It is also possible to specify an independent variable other
than the energy. This is useful for intensity measurements during a phase
transition as a function of time or temperature, at fixed energy.
Figure 1: The graphical user interface of the ViPErLEED spot tracker (here
under a Linux operating system), with (a) the input image stack, (b) the
processed stack after dark-frame and flat-field correction, and (c) the main
spot-tracker panel. The image stacks (d) and (e) are the dark frames and flat
field, respectively, (f) is the mask of the usable screen area [also visible
as orange outline in (b)], and (g) is a plot of the raw and smoothed beam
current $I_{0}$ (available via the “Set Energies, I0, t” button; the $I_{00}$
line is invisible because it coincides with the $x$ axis at this scale). The
red item in (c) indicates that user input is required. In (a) and (b), the
LEED images of the Cu(111)-$(5\times\sqrt{3}_{\text{rect}})$-4Te structure
[22] are displayed with high contrast, leading to saturation of the bright
spots. The magnified and contrast-enhanced insets in (a) and (b) show the
improvement of the background uniformity with the dark-frame and flat-field
correction.
_The mask_ — The spot tracker requires that the user provides a mask, which is
an image that defines the usable area of the LEED screen [Fig. 1(f)]. This is
a binary image, implemented in ImageJ as an 8-bit image with only two
different pixel values occurring: 255 (black) for the foreground (usable) area
and 0 (white) for the unused area. A utility for creation of such an image is
available via the “More$\gg$” button of the spot-tracker panel. The standard
ImageJ selection and image-modification commands can be used to edit the mask.
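For illustration, a mask of this kind (pixel value 255 inside the usable area, 0 outside) could also be generated programmatically; the circular geometry below is just an assumed example.

```python
# Example binary mask: 255 inside a circular usable screen area, 0 elsewhere.
import numpy as np

def circular_mask(height, width, center, radius):
    yy, xx = np.mgrid[0:height, 0:width]
    inside = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    return np.where(inside, 255, 0).astype(np.uint8)
```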
### II.2 Dark-frame and flat-field correction
The spot tracker has provisions for dark-frame and flat-field correction of
the LEED images, which can substantially improve the data quality. These
corrections are standard in astronomical image processing [23] and in some
applications of light microscopy, but not widely used in the LEED community.
The aim of these corrections is reducing the effect of inhomogeneities of the
LEED optics (grids, and microchannel plate, if any) and camera as well as
subtracting background illumination, for instance, from the filament of the
electron source. In the standard method, intensities are corrected pixel by
pixel,
$I_{\mathrm{corr}}=\frac{I_{\mathrm{main}}-I_{\mathrm{dark}}}{I_{\mathrm{flat}}-I_{\mathrm{dark}}}\
,$ (1)
where $I_{\mathrm{main}}$ is the pixel intensity of the LEED image with the
diffraction pattern and $I_{\mathrm{dark}}$ is the pixel intensity of a dark
frame. The dark frame is an image obtained without electrons reflected at the
sample. $I_{\mathrm{flat}}$ is the pixel intensity of the flat field, which is
an image with uniform illumination.
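In array form, the standard correction of Eq. (1) is a single element-wise operation; the snippet below is a minimal NumPy version, where the small epsilon is only an assumed guard against division by zero in dead pixels.

```python
# Pixel-by-pixel dark-frame and flat-field correction as in Eq. (1).
import numpy as np

def flat_field_correct(i_main, i_dark, i_flat, eps=1e-6):
    return (i_main - i_dark) / np.maximum(i_flat - i_dark, eps)
```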
The _dark frame_ is best obtained with the same filament current and screen
voltage as the main image, but with a highly negative Wehnelt voltage to
suppress all electrons. All other settings (exposure time, camera gain) should
be the same as for the main LEED images. This ensures that the intensity of
any stray light of the filament is the same for both the main LEED images and
the dark frames, and, hence, this background intensity will be subtracted
(together with the dark current of the camera). In some cases, there can be
also a background due to field-emitted electrons from asperities on a grid
[such as the bright spots in Fig. 1(d)], which will be subtracted by this
procedure. If a full movie of energy-dependent dark frames is recorded and the
LEED electronics provide a beam-current ($I_{0}$) output, the $I_{0}$
measurement acquired with this movie conveniently provides the energy-
dependent offset $I_{00}$ of the beam current (cf. Sec. II.8).
For obtaining a _flat field_, one should have uniform illumination of the
LEED screen with electrons coming from the same position as the reflected
electrons forming the usual LEED image. This is not easy to achieve. The best
option we found is placing a polycrystalline surface (e.g., the sample holder)
acting as a diffuse scatterer at the same position as the sample [25]. (Note
that annealing polycrystalline materials can lead to grain growth and, thus,
the appearance of LEED spots for such materials. To ensure that this is not
the case, it is a good practice to inspect the stack of flat-field images as
it will be applied to the main input. This image stack, processed with the
appropriate dark frame, the normalization polynomial in Eq. (2), and averaging
for noise reduction, is available with the "Show processed flat field" option
in the "Dark&Flat Processing" dialog.) The distance from the electron source to the
surface must be the same for the main LEED $I(V)$ movie and the flat field. In
other words, both, the sample and the polycrystalline surface must be exactly
in the same plane for the respective measurements. Since the flat-field
intensity is spread out over the whole screen, the flat-field images taken
with the same settings as the main $I(V)$ movie of the sample might be rather
noisy due to low intensity. In that case, one may use a higher beam current
and/or longer exposure times than for the main $I(V)$ movie to ensure a
sufficiently high intensity. Obtaining flat fields from a polycrystalline
surface has the problem of angle-dependent scattering, typically with a
maximum at 180° scattering angle (backscattering); see Fig. 1(e) for an
example. Thus, the correction in Eq. (1) would introduce a bias,
attenuating diffraction intensities near the center compared with those at the
periphery. For standard LEED $I(V)$ experiments, this will also lead to an
apparent decrease of intensity towards high energies, because the beams move
inwards. We therefore use a modified correction
$I_{\mathrm{corr}}=\left.(I_{\mathrm{main}}-I_{\mathrm{dark}})\middle/\left(\frac{I_{\mathrm{flat}}-I_{\mathrm{dark2}}}{\exp(\sum{a_{ij}x^{i}y^{j}})}\right)\right.\
,$ (2)
where the polynomial $\sum{a_{ij}x^{i}y^{j}}$ is a fit to the logarithm of the
background-corrected $I_{\mathrm{flat}}-I_{\mathrm{dark2}}$ values inside the
area defined by the mask. A second-order polynomial would correspond to a 2D
Gaussian distribution of the flat-field intensity; typically, we use a 4th-
order polynomial for better uniformity of the flat-field correction while
still maintaining the high spatial frequencies of the inhomogeneities. As the
fit is done in the logarithmic domain we use fit weights proportional to the
$I_{\mathrm{flat}}-I_{\mathrm{dark2}}$ value; otherwise low values would gain
too much weight (especially if the logarithm is highly negative). The flat-
field images should have sufficient brightness to ensure low noise; therefore,
as mentioned above, the camera settings (gain, exposure time) for recording
the flat field might be different from those used for the main LEED movie. In
such a case, it will be necessary to obtain a separate set of dark frames with
these settings, different from the dark frames for the main $I(V)$ movie. This
is indicated by the “2” in $I_{\mathrm{dark2}}$. If the same camera settings
are used for the main $I(V)$ movie and for the flat field, the same dark
frame(s) can be selected for both (i.e.,
$I_{\mathrm{dark2}}=I_{\mathrm{dark}}$).
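As a concrete illustration of Eq. (2), the following sketch applies the dark-frame/flat-field correction pixel by pixel, assuming the polynomial coefficients $a_{ij}$ have already been fitted to the logarithm of $I_{\mathrm{flat}}-I_{\mathrm{dark2}}$. It is only a minimal sketch in plain Java; the class and method names are ours, not those of the ViPErLEED plugin, and masking and caching are ignored:

```java
/** Minimal sketch of the dark-frame/flat-field correction of Eq. (2); not the actual plugin code. */
public final class FlatFieldCorrection {

    /** Evaluates the fitted polynomial sum_{i,j} a[i][j] * x^i * y^j. */
    static double poly(double[][] a, double x, double y) {
        double s = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = 0; j < a[i].length; j++)
                s += a[i][j] * Math.pow(x, i) * Math.pow(y, j);
        return s;
    }

    /**
     * I_corr = (I_main - I_dark) / ((I_flat - I_dark2) / exp(poly)).
     * All images are float[width*height] in row-major order; pixels where the
     * denominator vanishes (e.g., outside the mask) are set to zero.
     */
    static float[] correct(float[] main, float[] dark, float[] flat, float[] dark2,
                           double[][] a, int width, int height) {
        float[] corr = new float[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = y * width + x;
                double denom = (flat[p] - dark2[p]) / Math.exp(poly(a, x, y));
                corr[p] = denom != 0 ? (float) ((main[p] - dark[p]) / denom) : 0f;
            }
        }
        return corr;
    }
}
```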
The dark frames depend at most weakly on the beam energy. (A weak dependence
is possible if field emission from the last grid to the screen causes a
background intensity and the voltage between the grid and screen varies with
the beam energy.) If the dark frames are energy-independent, it is enough to
average over a few dark-frame images to reduce the noise; otherwise a linear
fit of each pixel intensity over energy is usually sufficient. These options
are accessible via the “Dark&Flat Processing” button of the spot tracker.
In the flat-field images, as mentioned above, the intensity is spread out over
the whole screen, which leads to an intensity below that of the spots in the
main LEED $I(V)$ movie, and, hence, higher noise. Therefore, noise reduction
should be applied by smoothing the pixel intensity vs. energy; also this
function is available in the “Dark&Flat Processing” options. It requires that
the flat fields are acquired as an image stack with the same energy steps as
the main LEED image stack. When using energy-dependent flat fields, but not a
2D fit for the flat-field intensity, the ($I_{\mathrm{dark2}}$-corrected) flat
field should be normalized, to avoid influencing the $I(V)$ data by the energy
dependence of the diffuse backscattering, which creates the flat-field images.
In our experience, the dark-frame/flat-field correction has a profound impact
on the data quality. This is especially true for LEED measurements with the
sample at room temperature, where the background from scattering by phonons is
high and therefore its variations due to the grid wires and other
inhomogeneities of the grids are clearly visible. The improvement of the
background uniformity is also evident in LEED movies recorded at low
temperature, where the background is low [compare the insets in Figs. 1(a) and
1(b)]. The correction especially improves the quality of the $I(V)$ curves of
weak spots, where the background fluctuations have a comparably strong impact
on the intensity measurements. Unfortunately, the correction cannot fully
eliminate the influence of the grids on the beam intensities: The electrons
get deflected by the lateral electric-field components of the suppressor grid;
the grid meshes act similarly to electrostatic lenses. Since the flat-field
correction is based on the position of the diffracted beam on the screen
(recorded by the camera), which can deviate from the original direction of the
diffracted electrons before they reach the grids, the flat-field intensity
distribution at the screen cannot accurately describe intensity variations
depending on where the electrons reach the grid. This is mainly a problem with
highly focused beams (very sharp spots). The intensity noise caused by the
modulation by the grids can be reduced by slightly defocusing the electron
beam. This is only possible if the spots are sufficiently far apart for
accurate determination of the background (see Sec. II.4), and the background
of inelastically scattered electrons is low. (Otherwise, weak, smeared-out
spots will not stand out high enough over the background.) A further method to
reduce the noise due to the grid structure is averaging LEED $I(V)$ curves
obtained from movies with slightly different azimuthal rotation of the sample
(if the sample manipulator allows this) or slightly different distance to the
sample (1–2 mm shift is sufficient; in this case also a flat field should be
recorded for each distance).
The flat-field correction also increases the usable screen area near the
electron source. Since camera lenses with a large aperture are required for
good photon collection efficiency, the outline of the electron source appears
blurred in the images because it is out of focus [Fig. 1(a)]. The flat-field
correction compensates for the reduced intensity recorded where the screen is
partly hidden by the electron source. Thus the mask of usable screen area (see
below) can extend closer to the edge of the electron source than without flat-
field correction, and the usable energy range of spots disappearing behind the
electron source is extended.
_Implementation notes_ — All operations on the input image stacks are
implemented as ImageJ VirtualStacks, which means that the processed images are
not necessarily kept in memory but rather read from disk and calculated on the
fly as required. Only the final result $I_{\mathrm{corr}}$ is cached in memory
as long as there is enough RAM (using the Java SoftReference mechanism and
prefetching). This ensures that even very large image stacks can be handled
while good performance is achieved when there is sufficient memory.
### II.3 Distortion correction for identification of the spots
_Polynomial fits_ — A LEED pattern is essentially a 2D map of the reciprocal
space, with some distortions that come from various sources: The point where
the incident beam hits the sample may not exactly coincide with the center of
curvature of the LEED grids and screen, the camera is not at infinite distance
and not necessarily aligned with the axis of the incident electron beam, the
grids and/or screen may deviate from the ideal spherical-cap shape, there may
be residual electric and magnetic fields, and the sample may be tilted. Some
of these sources of distortion should obviously be minimized when acquiring
LEED $I(V)$ data. (Usually normal incidence of the electrons on the sample is
desired, and stray fields must be avoided.) Nevertheless, it is not possible
to avoid all sources of distortion. Many sources of distortion can be modelled
[15, 16], but this is rather cumbersome for the general case. Therefore, we
took a more simplistic approach: We fit the pixel coordinates $x,y$ with a
polynomial function of the reciprocal-space coordinates $k_{x},k_{y}$,
$x=\sum_{i,j;i+j\leq N}{a_{ij}k_{x}^{i}k_{y}^{j}}\ ,\quad y=\sum_{i,j;i+j\leq N}{b_{ij}k_{x}^{i}k_{y}^{j}}\ .$ (3)
The polynomial order $N$ is chosen adaptively (see below); the maximum order
supported is 5th order. In addition to polynomials with all coefficients up to
a given order, the program also includes models where the highest-order terms
only depend on the reciprocal-space distance from the (0,0) spot, but other
high-order coefficients are left out:
$x=\sum_{i,j;i+j\leq 1}{a_{ij}k_{x}^{i}k_{y}^{j}}+(k_{x}^{2}+k_{y}^{2})(a_{\mathrm{rx}}k_{x}+a_{\mathrm{ry}}k_{y})\ ,\quad y=\sum_{i,j;i+j\leq 1}{b_{ij}k_{x}^{i}k_{y}^{j}}+(k_{x}^{2}+k_{y}^{2})(b_{\mathrm{rx}}k_{x}+b_{\mathrm{ry}}k_{y})\ ,$ (4)
and
$x=\sum_{i,j;i+j\leq 3}{a_{ij}k_{x}^{i}k_{y}^{j}}+(k_{x}^{2}+k_{y}^{2})^{2}(a_{\mathrm{rx}}k_{x}+a_{\mathrm{ry}}k_{y})\ ,\quad y=\sum_{i,j;i+j\leq 3}{b_{ij}k_{x}^{i}k_{y}^{j}}+(k_{x}^{2}+k_{y}^{2})^{2}(b_{\mathrm{rx}}k_{x}+b_{\mathrm{ry}}k_{y})\ .$ (5)
These two types of fit polynomials are suitable in case of normal incidence
and mainly radial distortions. They offer the advantage of handling radial
distortions with fewer fit parameters (10 and 24) than the full third- and
fifth-order polynomial fits (20 and 42 parameters, respectively), thus they do
not require as many spots as the full third- and fifth-order polynomials in
Eq. (3).
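Since all of these models are linear in the coefficients, the fit is an ordinary linear least-squares problem. The following sketch (our own illustration, not the program's code) shows the forward evaluation of the 10-parameter model of Eq. (4); the coefficients themselves would be obtained by least squares from the identified spot positions:

```java
/** Sketch of the reduced radial-distortion model of Eq. (4); illustrative only. */
final class RadialDistortionModel {
    // Linear part (a00 + a10*kx + a01*ky, likewise for y) plus one cubic radial term per coordinate.
    double a00, a10, a01, aRx, aRy;   // coefficients for x
    double b00, b10, b01, bRx, bRy;   // coefficients for y

    /** Maps reciprocal-space coordinates (kx, ky) to pixel coordinates (x, y). */
    double[] toPixel(double kx, double ky) {
        double r2 = kx * kx + ky * ky;
        double x = a00 + a10 * kx + a01 * ky + r2 * (aRx * kx + aRy * ky);
        double y = b00 + b10 * kx + b01 * ky + r2 * (bRx * kx + bRy * ky);
        return new double[] {x, y};
    }
}
```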
_The spot pattern file_ — Correlating the spots on the screen and the
reciprocal-space coordinates requires a list of beams for the structure. This
list must be provided as a spot pattern file. For each beam, it lists the
designation and the indices $h$ and $k$, the Cartesian reciprocal-space
coordinates $g_{x},g_{y}$ (in arbitrary units) and a beam-group index
(symmetry-equivalent beams belong to the same group). This file can be created
with the LEED pattern simulator of the ViPErLEED GUI, supplied with the
ViPErLEED data acquisition [11]. There, the lattice and overlayer symmetry can
be entered manually or taken from the output of the ViPErLEED simulation
program [12] (experiment-symmetry.ini file).
_Identification of the spots_ — In practice, for determination of the fit
coefficients of Eq. (3), the user has to select an energy where many spots can
be seen, preferably including spots near the edges in many different
directions from the center. To aid this procedure, spots with sufficient
brightness are marked by circles after pressing the “Set Indices” button. The
spots are found as local maxima with a threshold (for noise suppression),
based on the Find Maxima function of ImageJ, and their positions are refined
as described in Sec. II.4. In the next step, the user has to select one of
these spots and enter its $(h,k)$ indices. If only one spot is known, the
program will assume that the (0,0) spot is in the center of the screen (given
by the bounding box of the mask area described above) and search for
additional spots, starting with those closest in reciprocal space to the
initial spots. At first, a purely linear relationship will be tried. Whenever
a new spot has been identified, the fit in Eq. (3) is repeated with the new
spot included. If there are enough spots for obtaining higher-order polynomial
coefficients, the program attempts fitting with a higher-order polynomial
[including the functions in eqs. (4) and (5); in the sequence of increasing
number of fit parameters] and uses the higher order if the goodness of fit
improves (taking into account that a larger number of fit parameters will
reduce the residuals).
Since the polynomials in Eq. (3) are not necessarily monotonic, it can happen
that the polynomials map high-order spots far outside the screen (or even non-
existent at a given energy) to a position inside the LEED screen. If such a
position happens to coincide with a lower-order spot (or a defect of the
screen that is mistaken as a spot), this will cause misidentification of that
spot (or defect). The program therefore contains provisions to discard the
calculated positions in such a case: A spot position obtained from Eq. (3) is
accepted only if the nonlinear terms in the polynomial do not “deflect” the
direction of spot motion with increasing energy by more than 30° (with respect
to the direction calculated from the linear terms). This prevents
misidentification of spots in cases where the calculated position folds back
to the screen area (like the “down” branches of an $x-x^{3}$ polynomial). For
5th-order polynomials, this condition is not sufficient. For spots actually
visible on the screen, the 5th-order terms are only a small correction (even
for off-normal incidence). The 5th-order terms can become large for spots that
are actually far outside the screen area, and make the polynomial fold back
and forth across the LEED screen (like $x-x^{3}+x^{5}/5$, which has three
zeros with positive slope). Therefore, for 5th-order polynomials, it is also
required that the derivatives of the respective 4th-order polynomial (without
the 5th-order terms) fulfill the same condition as the full polynomial.
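One way to picture the deflection criterion is to compare, for a small energy step, the direction of spot motion predicted by the full polynomial with that predicted by its linear terms alone. The sketch below is our simplified reading of this test (the interface and names are hypothetical, not the actual implementation):

```java
/** Simplified sketch of the 30-degree deflection check; not the actual implementation. */
final class DeflectionCheck {

    /** Any of the models of Eqs. (3)-(5), mapping (kx, ky) to pixel coordinates. */
    interface Poly2D { double[] toPixel(double kx, double ky); }

    /**
     * Returns true if the direction of spot motion between energies e1 and e2 (in eV),
     * as predicted by the full polynomial, deviates by at most maxAngleDeg from the
     * direction predicted by the linear terms only. (gx, gy) are the reciprocal-space
     * coordinates of the beam from the spot pattern file (arbitrary units).
     */
    static boolean accept(Poly2D full, Poly2D linearOnly, double gx, double gy,
                          double e1, double e2, double maxAngleDeg) {
        double s1 = 1.0 / Math.sqrt(e1), s2 = 1.0 / Math.sqrt(e2);
        double[] f1 = full.toPixel(gx * s1, gy * s1), f2 = full.toPixel(gx * s2, gy * s2);
        double[] l1 = linearOnly.toPixel(gx * s1, gy * s1), l2 = linearOnly.toPixel(gx * s2, gy * s2);
        double fdx = f2[0] - f1[0], fdy = f2[1] - f1[1];      // motion from full model
        double ldx = l2[0] - l1[0], ldy = l2[1] - l1[1];      // motion from linear terms
        double angle = Math.abs(Math.atan2(fdx * ldy - fdy * ldx, fdx * ldx + fdy * ldy));
        return Math.toDegrees(angle) <= maxAngleDeg;
    }
}
```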
For (almost) normal incidence, it is usually sufficient to manually enter the
$(h,k)$ indices of one spot; the program then automatically identifies all the
others. In some cases, especially far from normal incidence, it may be
required to select several spots and enter their $(h,k)$ indices manually. We
have successfully tested the program with up to 20° off-normal incidence,
where correct identification of all spots usually requires manual input of the
$(h,k)$ indices for 3–4 spots. Apart from the spot labels shown on the image,
a correct identification can also be inferred from low values of the root-
mean-square (rms) residuals of the pixel positions with respect to the
polynomial fit, as calculated and displayed by the program. Typical rms
residuals are $\lesssim 0.2$% of the image width (1 pixel for a 512$\times$512
pixel image).
### II.4 Analysis and intensity evaluation of a single diffraction maximum
Analysis of a single spot has two major aims, determination of (i) the
position and (ii) the intensity. The problem of spot analysis is comparable to
photometry of single stars in astronomy, and there are two basic approaches
[26]: Aperture photometry and fitting of a point spread function (PSF). The
PSF is the intensity distribution that one would obtain for an idealized
($\delta$-like) maximum. In principle, PSF fitting has the potential of better
accuracy in terms of both position and intensity, but it requires knowledge of
the PSF. (In astronomy, stars are almost perfect $\delta$ functions, all
smeared out the same way by atmospheric turbulence and the optics; thus the
PSF can be obtained by averaging the images of a few bright stars without
nearby background objects.) For LEED diffraction maxima, the PSF cannot be
determined for several reasons. (i) Due to deflection of electrons by the
suppressor grid and electron capture by grid wires, the spot intensity
profiles show modulations caused by the grid. These modulations depend on the
position on the screen. (ii) Due to the curvature of the screen, spots are
distorted towards the edges of the screen. (iii) On samples with a high step
density (step–step separation less than or comparable to the transfer width of
the instrument), the width of the spots depends on their index and the energy
in a non-trivial way. (Assuming kinematic theory, broadening occurs at out-of-
phase conditions [27].) (iv) The background due to scattering by phonons is
not constant but increases towards the spots [28]. This makes it difficult to
separate the contributions of the spot and the phonon background. Instead of
using a pre-determined PSF, one can also use independent 2D Gaussian fit
functions for each spot [13], but this approach becomes difficult at low
intensities, where the fit is ill-defined and further complicated in case of a
sloping background.
The other approach to spot analysis is known as aperture photometry (Fig. 2)
[29], and in astronomical image processing it was shown that its accuracy for
intensity measurements can be comparable to or even surpass PSF fitting [30].
Aperture photometry integrates the intensity over a (usually circular) disk;
the background intensity is taken from an annular area around the integration
disk. In the simplest case, the average of the pixel intensities in the
background area is taken, but other schemes like median, histogram centroid or
statistical mode are also common [29]. For the background of LEED spots,
different methods of evaluating the background intensity were compared in Ref.
31; good results were achieved with fitting a linear or 2nd-order polynomial
in $x$ and $y$. The profile of a LEED spot decays rather slowly at large
distances $r$ from its center ($1/r$ or $1/r^{2}$) [28]. A 2nd-order
polynomial does not provide a good description of this decay. Compared to a
linear fit, 2nd order also has the disadvantage of more free parameters, which
tends to increase the noise. Therefore, the program uses a linear function in
$x$ and $y$ to fit the intensity in the background area. This ensures that the
measurement is independent of the gradient of the background. [Instead of
fitting a linear background, one could simply use the average over the
background area if the spot is exactly centered (vanishing first moments over
the integration disk after subtraction of the linear background) and the
shapes of the integration areas for the spot and the background have at least
twofold rotation symmetry around the center. Subtracting the linear background
is required for obtaining the spot position via the first moments, and also
for the intensity measurement if the spot is not perfectly centered or
there is an asymmetry of the integration areas. Slight asymmetry can occur due
to the spatial quantization (image pixels). We use a one-pixel-wide transition
zone where the weight of the background evaluation decreases to zero; this
transition zone is not required to be fully inside the foreground area of the
mask. If pixels in the transition zone have to be excluded because they are
outside the mask foreground area, this also causes asymmetry.]
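The core of this scheme can be sketched in plain Java as a least-squares fit of a plane $I \approx c_{0}+c_{1}x+c_{2}y$ to the pixels flagged as background, followed by integration of the background-subtracted intensity over the pixels of the integration disk. This is an illustration only (the actual program additionally uses subpixel weighting at the area borders, see below), and the array names are our own:

```java
/** Minimal sketch of aperture photometry with a linear background; illustrative only. */
final class AperturePhotometry {

    /** Fits I = c0 + c1*x + c2*y to the pixels flagged as background (least squares). */
    static double[] fitPlane(float[] img, boolean[] isBackground, int width, int height) {
        double[][] m = new double[3][3];
        double[] rhs = new double[3];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int p = y * width + x;
                if (!isBackground[p]) continue;
                double[] basis = {1, x, y};
                for (int i = 0; i < 3; i++) {
                    rhs[i] += basis[i] * img[p];
                    for (int j = 0; j < 3; j++) m[i][j] += basis[i] * basis[j];
                }
            }
        return solve3x3(m, rhs);
    }

    /** Integrates the background-subtracted intensity over the pixels of the integration disk. */
    static double spotIntensity(float[] img, boolean[] isDisk, double[] c, int width, int height) {
        double sum = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                int p = y * width + x;
                if (isDisk[p]) sum += img[p] - (c[0] + c[1] * x + c[2] * y);
            }
        return sum;
    }

    /** Gaussian elimination with partial pivoting for the 3x3 normal equations. */
    static double[] solve3x3(double[][] a, double[] b) {
        for (int i = 0; i < 3; i++) {
            int piv = i;
            for (int k = i + 1; k < 3; k++) if (Math.abs(a[k][i]) > Math.abs(a[piv][i])) piv = k;
            double[] row = a[i]; a[i] = a[piv]; a[piv] = row;
            double tb = b[i]; b[i] = b[piv]; b[piv] = tb;
            for (int k = i + 1; k < 3; k++) {
                double f = a[k][i] / a[i][i];
                for (int j = i; j < 3; j++) a[k][j] -= f * a[i][j];
                b[k] -= f * b[i];
            }
        }
        double[] x = new double[3];
        for (int i = 2; i >= 0; i--) {
            double s = b[i];
            for (int j = i + 1; j < 3; j++) s -= a[i][j] * x[j];
            x[i] = s / a[i][i];
        }
        return x;
    }
}
```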
In astronomy, where the distance between the stars is often much larger than
the size of the PSF, it is common to use an inner radius of the background
annulus that is larger than the outer radius of the integration disk for the
star intensity. Thus, there is a dead zone in between. For the analysis of
LEED spots, we use a few modifications of this scheme. The program offers
three different geometries for the integration and background areas.
The first is an _annular background_ with the inner radius equal to the radius
$r_{\mathrm{i}}$ of the integration disk, and an outer radius of
$\sqrt{2}r_{\mathrm{i}}$ [Fig. 2(a), named “circular” in the program]. Thus,
the background area is equal to the integration area. This choice was
motivated by the following consideration about noise: If the spot intensity
vanishes, the noise obtained in the integration disk and that in the background
annulus (by averaging) are equal. Thus, the spot-minus-background noise is
higher by a factor of $\sqrt{2}$ than the noise obtained from integration over
the inner disk. [Since the noise of different pixels is usually uncorrelated,
one can use the rules of error propagation to estimate the noise. If the areas
of the integration disk and background annulus are the same, at vanishing spot
intensity (i.e., without intensity-dependent shot noise), the noise-related
errors of the two integrals over these areas will be the same. Thus, the error
of the difference of these two integrals equals $\sqrt{2}$ times the error of
one of the integrals.] If the spot intensity is higher, the influence of the
background noise on the $I(V)$ curves will be less, since both the shot noise
and spot intensity modulations due to the grid increase with intensity. In
contrast to astronomical aperture photometry, we do not use a dead zone
between the integration disk and the background area. The main reason is the
non-uniform background in LEED (see Fig. 1). If the background is a nonlinear
function of $x$ and $y$, the non-uniformity induces a background error that
increases with increasing radius of the background annulus. The other reason
for having a small background annulus is trivial: For complex LEED patterns
and high energies, the distance between the spots becomes small, and the
background area of one spot must not overlap with the neighboring spots.
Figure 2: Aperture photometry. Integration disk and area for background
intensity evaluation with (a) circular and (b) oval outline of the background
area. (c) Shapes of the integration and background areas in mode “azimuth
blur” for spots with different distances from the (0,0) spot. (d) Illustration
of the proposed minimum integration area for LEED spots, assuming a Gaussian
profile and an annular background (light blue), as shown in panel (a).
The _oval background_ is the second type of background area available in our
program. It does not use a circular outer boundary but rather an ellipse with
the semiminor axis equal to the radius of the integration disk and the
semimajor axis twice as large [Fig. 2(b)]. As in the case of the annular
background, the inner boundary is given by the integration disk, and the
background is averaged over as many pixels as the integration disk of the
spot. The major axis of the ellipse is in the tangential direction (we simply
take the center of the bounding box of the mask as the center). The main
advantage of this background type (named “oval” for short) is better
suppression of radial variations of the background intensity compared with the
annular background. This is especially valuable for some channel-plate LEED
optics, where concentric ringlike artifacts in the background intensity occur.
The oval background is also valuable for standard LEED instruments, due to the
(nonlinear) decrease of background intensity from the center of the screen. An
additional bonus of the oval background is a slightly increased usable energy
range in most cases: If a spot is close to the outer edge of the LEED screen
or the electron source, but its integration disk is still inside the usable
screen area (the mask), an annular background area may reach beyond the screen
and prevent a well-defined measurement. The oval background area protrudes
only in the tangential direction and can be fully evaluated until the
integration disk of the spot touches the edge. [For spots moving along the
arm holding the electron source, which is essentially a radial “spoke” (to the
bottom right in Fig. 1), the oval background touches that arm before an
annular background area with equal area would touch it. This can reduce the
amount of data available with the oval background as compared with a circular
background. This case occurs less often than that of spots close to the inner
or outer boundary. If it occurs, it often affects only one of several
symmetry-equivalent beams, so it does not affect the size of the experimental
database of the symmetry-averaged $I(V)$ curves.] The oval background is
inferior to the annular one in case of very crowded LEED patterns, because
neighboring spots will typically enter the oval background area due to its
larger extension in the tangential direction before they would affect the
annular background. The oval background is also less suitable than the annular
one if non-radial background variations are dominating, such as background
variations due to short-range order or phonons: The oval background reaches
out further than the annular background area, thus it is more sensitive to
non-radial background variations that are a nonlinear function of $x$ or $y$.
_Azimuth blur mode_ — Finally, there are situations where the spots are
blurred in the azimuthal direction. This happens in the case of overlayers
with poorly determined azimuthal orientation. For this case, we offer an
option that is close to the circular geometry in the vicinity of the (0,0)
spot, but the integration area becomes elongated in the tangential direction
at larger distances from (0,0), see Fig. 2(c). For simplicity, we use an
elliptical integration area, not an arc; this limits the blur angle to small
values (a few degrees). In this geometry, one should not measure the
background intensity all around the integration ellipse; in the tangential
direction the background evaluation area would be too far from the spot center
and therefore the measurement would become very sensitive to spatial
variations of the background. For high eccentricity, we therefore use the
geometry shown at the right side of Fig. 2(c), where the outer border of the
background area is an ellipse touching the integration ellipse at the vertices
(in the tangential direction). For this geometry, we take a smaller ratio
between the background and integration areas than in the circular and oval
case, to limit the influence of nonlinear variations of the background in the
radial direction: As soon as the ratio between the major and minor axes of the
integration ellipse exceeds $\sqrt{2}$, the minor axis of the outer background
border is limited to $\sqrt{2}$ times the minor axis $2r_{\mathrm{i}}$ of the
integration disk. This results in a ratio between background area and
integration area of 0.41. For more circular integration ellipses [closer to
the (0,0) spot], we use a circular outline of the background, with a radius of
$\sqrt{2}$ times the semiminor axis of the integration ellipse, see the two
left cases in Fig. 2(c). This results in our usual “circular” geometry close
to the (0,0) spot, where azimuthal blurring is negligible.
Both the position of the integration and background areas and their borders
are calculated with subpixel accuracy. For integration, pixels in a one-pixel-
wide zone at the border are weighted between 1 (inside) and 0 (outside that
zone). This avoids jumps that could otherwise occur in case of very sharp
spots and small integration areas.
Spot analysis is not only used for intensity measurements but also for
determining the exact spot position; this is required for fitting the
polynomial model in Sec. II.3 and when tracking the spots (section II.5). For
this purpose, it is important to fit a linear background in the (oval or
annular) background area. After subtraction of this background, we determine
the position of the center of mass inside the integration disk. This process
is repeated iteratively until convergence (iteration step less than 0.3
pixels) or aborted if the new position deviates too much from the initial
guess (this can happen when searching for a spot with vanishing intensity).
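A minimal sketch of this iterative refinement is given below (illustrative only; it assumes the linear background has already been subtracted and uses whole pixels, whereas the actual implementation works with subpixel border weighting; all names are our own):

```java
/** Sketch of the iterative centroid refinement; simplified (integer pixels, no border weighting). */
final class CentroidRefinement {

    /**
     * Refines the spot position by repeatedly computing the center of mass of the
     * background-subtracted intensity inside a disk of radius r around the current guess.
     * Returns null if the position runs away too far from the initial guess (x0, y0).
     */
    static double[] refine(float[] imgMinusBackground, int width, int height,
                           double x0, double y0, double r, double maxShift) {
        double cx = x0, cy = y0;
        for (int iter = 0; iter < 20; iter++) {
            double sum = 0, sx = 0, sy = 0;
            int xMin = (int) Math.floor(cx - r), xMax = (int) Math.ceil(cx + r);
            int yMin = (int) Math.floor(cy - r), yMax = (int) Math.ceil(cy + r);
            for (int y = Math.max(0, yMin); y <= Math.min(height - 1, yMax); y++)
                for (int x = Math.max(0, xMin); x <= Math.min(width - 1, xMax); x++) {
                    double dx = x - cx, dy = y - cy;
                    if (dx * dx + dy * dy > r * r) continue;
                    double v = imgMinusBackground[y * width + x];
                    sum += v; sx += v * x; sy += v * y;
                }
            if (sum <= 0) return null;                       // no usable intensity
            double nx = sx / sum, ny = sy / sum;
            double step = Math.hypot(nx - cx, ny - cy);
            cx = nx; cy = ny;
            if (Math.hypot(cx - x0, cy - y0) > maxShift) return null;  // ran away
            if (step < 0.3) break;                           // converged (cf. text)
        }
        return new double[] {cx, cy};
    }
}
```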
In addition to integrated intensity and position, spot analysis also yields an
estimate for the spot size derived from the second moments inside the
integration disk. The program gives the spot size as standard deviation
$\sigma$ assuming a 2D Gaussian; the sizes in the radial and tangential
directions are given separately. [For the calculation of $\sigma$ from the
moments we use a heuristic correction to take into account that the spot
intensity is not fully inside the integration disk. Since experimental LEED
spot profiles are non-Gaussian and typically have slowly decaying tails at low
intensity [28], the spot size is typically overestimated, especially if the
integration radius is large.] Finally, we also extract a measure of
significance, which depends on the ratio of the spot intensity to the
standard deviation of the background (after subtraction of a linear
background); this value is used to obtain smoothed spot positions (section
II.5).
_The integration radius_ — The choice of the integration radius depends on the
spot size and whether spots come close to each other at high energies. It has
been suggested to use an adaptive integration area that only encompasses the
region where the spot can be clearly discerned from the background, thus
shrinking the area with decreasing intensity [36]. This approach has the
advantage of reducing the noise for weak spots, but it is problematic because
it introduces a bias: For weak spots, only the very center will be inside the
integration area and the remaining intensity will be discarded. Thus the intensity of
weak spots will be underestimated. Therefore, we use an integration disk with
a radius that does not depend on the intensity. For a 2D Gaussian with a
standard deviation of $\sigma$, illustrated in Fig. 2(d), 86% of the intensity
is contained in a circle with a radius of $2\sigma$. Using an annular
background as described above, most of the remaining intensity will spill into
the background annulus and increase the background, reducing the integral-
minus-background measurement to 75% of the total intensity. As long as the
shape of the intensity distribution stays the same, this factor is constant
and has no detrimental effect on the $I(V)$ curves, where absolute intensity
is not important. While the optimal radius of the integration disk in
astronomical photometry is lower ($\approx 1.6\sigma$, Ref. 26), we consider
$2\sigma$ the minimum integration radius for a good LEED intensity
measurement. The reason for the difference lies in the fact that astronomy
deals with a roughly constant PSF of stars, while the shapes of the LEED
diffraction maxima change, due to deflection and capturing of the electrons by
the grid. Furthermore, astronomy uses a dead zone between the integration disk
and the background annulus, which is impractical for LEED (see above). The
grid-related noise increases with decreasing size of the integration area. In
our experience, this increase becomes significant at $r_{\mathrm{i}}\lesssim
2\sigma$. Therefore, if the distance between the spots at high energy allows
it, the integration radius should be chosen slightly larger than $2\sigma$. On
the other hand, for weak spots, the noise of the measured intensities
increases with increasing size of the integration disk (the image noise is
integrated over a larger area). The signal-to-noise ratio of the measured
intensities will typically have a maximum at a radius close to or somewhat
larger than $2\sigma$. In addition, as mentioned above, the impact of
nonlinear background variations increases as the background evaluation area
becomes larger. Even in cases of extremely low background and low camera
noise, the integration radius should not be chosen larger than about $3\sigma$
(1.5 times the lower limit), since the increased sensitivity to background
variations outweighs any advantage from the marginally reduced grid-related
noise.
Usually, the spot size is energy dependent. It increases towards lower
energies because of less perfect focusing of the electron beam (phase space
and space charge effects), but also due to the increasing influence of finite
sizes of the domains or terraces on the sample surface. We therefore use an
energy-dependent radius $r_{\mathrm{i}}$ of the integration disk
$r_{\mathrm{i}}^{2}=r_{\infty}^{2}+r_{1}^{2}/E\ ,$ (6)
where $r_{\infty}$ is the radius at very high energies (assumed to approach a
constant value) and $r_{1}$ describes the increase of the radius towards low
energies (for $E$ in electronvolts and $r_{1}\gg r_{\infty}$, $r_{1}$ would be
the radius at 1 eV). In the case of superstructure domains, the superstructure
spots may be less sharp than substrate spots; this can be accounted for by
choosing separate $r_{1}$ values for integer-order and superstructure spots.
(In the “Set Integration Radius” input, for convenience, the user is asked to
enter $r_{\infty}$ and the radius $r_{\mathrm{i}}$ at the lowest energy of the
LEED image stack, not $r_{1}$.) In “azimuth blur” mode, the semimajor axis of
the integration (and background) ellipse is calculated by essentially the same
equation; we only add $(\alpha_{\mathrm{az}}d_{\mathrm{spot-(0,0)}})^{2}$ to
Eq. (6). Here, $d_{\mathrm{spot-(0,0)}}$ is the distance between the spot of
interest and the (0,0) spot, and $\alpha_{\mathrm{az}}$ is the blur angle in
radians (assumed to be small,
$\tan\alpha_{\mathrm{az}}\approx\alpha_{\mathrm{az}}$). Thus, the major axis
of the integration ellipse is $\approx r_{\mathrm{i}}$ near the (0,0) spot and
dominated by the $\alpha_{\mathrm{az}}$-dependent term for large distances
from the (0,0) spot, as shown in Fig. 2(c).
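In code, the radius and the azimuth-blur semimajor axis follow directly from Eq. (6) and the extra term described above. This is a sketch with our own naming ($E$ in electronvolts, lengths in pixels), not the plugin code:

```java
/** Sketch of the energy-dependent integration radius, Eq. (6), and the azimuth-blur major axis. */
final class IntegrationRadius {

    /** Radius of the integration disk at energy E (eV), Eq. (6). */
    static double radius(double rInfinity, double r1, double energy) {
        return Math.sqrt(rInfinity * rInfinity + r1 * r1 / energy);
    }

    /** Semimajor axis in "azimuth blur" mode: Eq. (6) plus (alphaAz * distanceTo00)^2. */
    static double semimajorAxis(double rInfinity, double r1, double energy,
                                double alphaAzRad, double distanceTo00) {
        double blur = alphaAzRad * distanceTo00;   // blur angle (rad) times distance to the (0,0) spot
        return Math.sqrt(rInfinity * rInfinity + r1 * r1 / energy + blur * blur);
    }
}
```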
### II.5 Spot tracking
In a standard LEED experiment, the spots move radially with energy, with the
distance from the (0,0) spot proportional to $1/\sqrt{E}$. Since spots can
disappear over some energy range (if the intensity vanishes), it is necessary
to take this motion into account when tracking the spots. Our approach starts
at the energy where the fit for the spot pattern was obtained (Section II.3),
then continues searching at successively higher energies; thereafter it
descends through the full energy range and ascends again. At each energy, we make
use of the spot pattern file to search for all spots in that file. Searching
the full energy range both up and down makes sure that each spot will be
tracked, provided that it can be found at any energy. When searching for spots
that were not detected so far (or not detected at nearby energies), we follow
two strategies, trying (i) the positions calculated by the polynomial fit in
Eq. (3), scaled with $1/\sqrt{E}$, and (ii) a corrected position obtained from
nearby spots already found. For these nearby spots, we calculate the
deviations of their positions from the polynomial model. A linear fit of these
deviations (as a function of $k_{x}$ and $k_{y}$, with weights decreasing with
distance) yields a correction for the coordinates of the spot that we search
for. This procedure corresponds to setting up a local coordinate system
determined by the nearby spots, but still taking the overall nonlinear
distortions of the LEED pattern into account. The latter approach is
especially valuable at very low energies, where the deviation between
calculated and actual positions can be large (because of residual magnetic or
electric fields, but also due to the large $1/\sqrt{E}$ scale factor between
reciprocal-space coordinates and real-space positions). As soon as a given
spot is found, its deviations from the polynomial model are kept for searching
it at the next energies. If a spot has not been detected over a large energy
range (default 30 eV), these deviations may be unreliable and the polynomial
model with corrections from the neighbors is also tried to find the spot (as
if it were a spot never detected before). The code also includes plausibility
checks for the spot positions. For example, the position is considered invalid
if large jumps occur or if the position deviates too much from the polynomial
fit. In case of doubt, uncertain positions are marked; these beams can be
deleted from the analysis (“More$\gg$” menu).
The spot positions obtained this way are smoothed in two passes. Smoothing
uses a linear fit of the deviations from the polynomial distortion model in
Eq. (3) as a function of $1/\sqrt{E}$, in a neighborhood of typically 30 eV
from each energy. Apart from the choice of $1/\sqrt{E}$ as the independent
variable, this smoothing method is akin to a first-order Savitzky-Golay filter
[37]. To avoid a large impact of inaccurately determined spot positions near
the intensity minima, the fit uses weights related to the spot significance,
as introduced in Sec. II.4. The first pass bridges gaps where a spot is
invisible or only weak. (If there are no or not enough valid points at both
sides, the energy range for fitting is extended to include enough points.) The
linear fit also provides some extrapolation beyond the energy range where the
spot was observed with sufficient significance to determine its positions.
Extrapolation to high energies can sometimes lead to large slopes and,
therefore, large deviations from the polynomial model. To avoid this problem
it is beneficial to use an additional low-weight data point with zero
deviation from the polynomial model for $1/\sqrt{E}=0$. The second pass
provides additional smoothing, with fit weights derived from the uncertainty
of the fit results from the first pass. For obtaining the final (smoothed)
spot positions, the fit results for the deviations are added to the positions
calculated from the polynomial model in Eq. (3).
The choice of $1/\sqrt{E}$ as the independent variable in the linear fits is
justified by our experience that the deviations from the polynomial model can
be usually approximated as linear functions of $1/\sqrt{E}$. Thus, especially
at low energies, this method provides much better results than smoothing
methods not taking this $1/\sqrt{E}$-dependence into account.
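The core of this smoothing step can be sketched as a weighted linear regression of the deviations against $1/\sqrt{E}$ within a local window. The following illustration uses hypothetical names and omits the gap-bridging, the extra low-weight point at $1/\sqrt{E}=0$, and the second pass:

```java
/** Sketch of smoothing spot-position deviations as a linear function of 1/sqrt(E); illustrative only. */
final class DeviationSmoothing {

    /**
     * Weighted least-squares fit dev ≈ a + b / sqrt(E) over the energies within halfWindow of e0;
     * returns the smoothed deviation at e0. Weights would be derived from the spot significance
     * (see Sec. II.4); NaN deviations (spot not found) are skipped.
     */
    static double smoothedDeviation(double[] energies, double[] deviations, double[] weights,
                                    double e0, double halfWindow) {
        double sw = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < energies.length; i++) {
            if (Double.isNaN(deviations[i]) || Math.abs(energies[i] - e0) > halfWindow) continue;
            double x = 1.0 / Math.sqrt(energies[i]), w = weights[i];
            sw += w; sx += w * x; sy += w * deviations[i];
            sxx += w * x * x; sxy += w * x * deviations[i];
        }
        double det = sw * sxx - sx * sx;
        if (sw == 0 || det == 0) return Double.NaN;          // not enough valid points
        double b = (sw * sxy - sx * sy) / det;
        double a = (sy - b * sx) / sw;
        return a + b / Math.sqrt(e0);
    }
}
```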
The spot tracker also has a mode for LEED movies obtained at constant energy,
typically used for analyzing a phase transition as a function of time or
temperature. In this case, spot tracking may be necessary because thermal
expansion of the sample or small movements of the sample holder due to its
thermal expansion can cause the spots to move. This case is handled the same
way as a standard tracking experiment, but without the $1/\sqrt{E}$ radial
motion, and the image number replaces $1/\sqrt{E}$ as the independent variable
in the smoothing of spot positions.
The spot tracker also features a LEEM (low-energy electron microscope) mode.
In LEEM diffraction movies as a function of the energy, the spot pattern
remains essentially stationary and spots move a few pixels at most. Thus, the
LEEM mode works the same way as the analysis of LEED experiments at constant
energy. Currently there are no provisions for automatic handling of the
energy-dependent range of reciprocal space imaged by a LEEM instrument. (This
would require a mask that changes with energy.) Therefore, if beams are
invisible at low energies, their low-energy limit must be manually selected
when editing the $I(V)$ curves.
### II.6 I(V) curve measurements
The extraction of $I(V)$ curves uses aperture photometry (Section II.4) at the
smoothed spot positions described in the previous section. Having smoothly
varying positions for the integration disk (and background) helps to obtain
smooth $I(V)$ curves, without artifacts from jumps of the position of the
integration area. As described in the previous section, these smoothed
positions are also available for energy ranges with very low intensity of a
given spot, where the images do not provide a reliable position.
Especially for superstructures, a frequent problem is having weak spots close
to a strong one. In this case, the tails of the intensity distribution of the
strong spot will lead to a curvature of the background intensity for nearby
spots. Since we use a linear fit for the background, this can lead to
apparently negative intensities of the weak spot. In this context, it is
important that LEED diffraction maxima can be approximated by Gaussians only
near the center (when ignoring the modulation by the grid). In the periphery
we typically find a Lorentzian-like decay of the intensity with $1/r^{2}$,
where $r$ is the distance from the center of the spot. This is in agreement
with the expectation for kinematic scattering from phonons above the Debye
temperature [28]. [The experimental results in Ref. 28 rather indicate a
$1/r$ decay, which we cannot confirm.] The $1/r^{2}$ background implies that
the tails of bright spots reach out rather far; this is the reason why they
often affect the intensity measurement of nearby weak spots. Therefore, our
program provides an option to subtract the tails of the strong spots before
measuring the weak ones. For this purpose, the spots are measured in order of
decreasing intensity, and for each spot, after intensity measurement, a
$1/r^{2}$ background is fitted in an annular region between $r_{\mathrm{i}}$
and $2r_{\mathrm{i}}$. This background is subtracted from the image in a large
region around the spot before the next (weaker) spot is measured. In our
experience, this background subtraction procedure eliminates the majority of
minima reaching below zero in the $I(V)$ curves.
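A simple reading of this tail subtraction is a one-parameter least-squares fit of $A/r^{2}$ in the annulus between $r_{\mathrm{i}}$ and $2r_{\mathrm{i}}$, followed by subtraction of $A/r^{2}$ in a region around the spot. The sketch below illustrates this; the names and the exact treatment of the fitting region are our assumptions, not the program's code:

```java
/** Sketch of fitting and subtracting the 1/r^2 tail of a bright spot; illustrative only. */
final class TailSubtraction {

    /** Fits I ≈ A/r^2 (least squares) in the annulus rI <= r <= 2*rI around (cx, cy). */
    static double fitAmplitude(float[] img, int width, int height, double cx, double cy, double rI) {
        double num = 0, den = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                double r2 = (x - cx) * (x - cx) + (y - cy) * (y - cy);
                if (r2 < rI * rI || r2 > 4 * rI * rI) continue;
                double basis = 1.0 / r2;                     // model: I = A * (1/r^2)
                num += basis * img[y * width + x];
                den += basis * basis;
            }
        return den > 0 ? num / den : 0;
    }

    /** Subtracts A/r^2 in a large region around the spot before weaker spots are measured. */
    static void subtractTail(float[] img, int width, int height,
                             double cx, double cy, double amplitude, double maxRadius) {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                double r2 = (x - cx) * (x - cx) + (y - cy) * (y - cy);
                if (r2 == 0 || r2 > maxRadius * maxRadius) continue;
                img[y * width + x] -= (float) (amplitude / r2);
            }
    }
}
```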
### II.7 Assessment of the data quality
Besides the $I(V)$ curves, spot tracking produces a number of diagnostic
plots, which help the user to assess the validity and quality of the data and
optimize the choice of the parameters. One such plot shows the spot radii
$\sigma$ (see Sec. II.4) as a function of energy, together with a line at half
the integration radius (separate for integer-order and superstructure spots,
when different). This plot can be used to verify that the integration radius
is chosen such that it is at least $2\sigma$, as discussed in Sec. II.4.
A further plot (Fig. 3) shows statistics useful to assess the quality of the
$I(V)$ data. [Since the “$I(V)$ Quality Statistics” plot is based on the
mutual $R$-factors of symmetry-equivalent beams, it is created only if there
are at least two symmetry-equivalent beams. In case of non-normal incidence,
for a valid result, only beams that are symmetry-equivalent at the given beam
incidence should belong to the same group in the spot pattern file.] The blue
points show Pendry’s $R$ factor [7] between pairs of symmetry-equivalent beams
as a function of average beam intensity. (The $I(V)$ curves are smoothed for
this using a 4th-degree modified sinc smoother [40]). Since beam intensities
typically decrease with increasing energy, and the intensity affects the
signal-to-noise ratio, the $I(V)$ curves are split into sections of $\approx
100$ eV and each of these sections is analyzed and plotted individually.
Figure 3: Output plot of the spot tracker for assessing the quality of the
$I(V)$ curves. This plot combines several aspects of the data. $R$-factors
between pairs of symmetry-equivalent beams are blue dots. Typically, high-
intensity beams (at the right) have lower $R$-factors than low-intensity ones.
(The latter are more affected by the noise.) A summary of $I(V)$ curve regions
with negative intensity is in red. For the red and blue data points, the $x$
axis is the intensity (average over $\approx 100$ eV regions for the blue
points of $R$ vs. intensity); the intensity is normalized such that the
highest intensity in any $I(V)$ curve is 1000. The dark-blue curve
“$R_{\mathrm{Pendry}}$ vs. total pair $E$ overlap” allows a quick comparison
of different data sets (lower is better). For this curve, the $x$ axis is the
cumulative energy range of all pairs of symmetry-equivalent beams with an $R$
factor better than the $y$-axis value at that position of the curve. (ImageJ
currently does not support dual $x$ or $y$ axes, thus the double use of the
axes.)
The $I(V)$ data quality plot also includes an additional curve useful for
judging the data quality (dark blue in Fig. 3): This curve displays the total
energy range of all pairs of symmetry-equivalent curves (split into $\approx
100$ eV sections) where the $R$ factor does not exceed a given value (this
value is the $y$ coordinate). The lower this curve, the better the agreement
between equivalent beams. This curve is helpful for comparing data taken for
the same system with different acquisition parameters or different spot
tracking parameters. For instance, one can investigate the influence of the
radius of the integration disk on the noise. Since the $x$ axis of this curve
is the total energy range of the “good” data, most of this curve is not
influenced by parameters that lead to elimination of the worst (e.g., most
noisy) parts of the $I(V)$ curves. This makes it easy to obtain a valid
comparison of data sets that include different energy ranges, such as one set
containing low-intensity regions that are missing in the other set. (For
comparison, curves can be copied from one ImageJ plot to another by
“Data$\gg$Add from plot…”.)
The quality plot can also help when optimizing the voltage of the suppressor
grid. This can be done by comparing the mutual $R$-factors obtained from
$I(V)$ movies acquired with different suppressor voltages. When the influence
of electron capture and/or deflection by the grids on the $I(V)$ curves is
minimized, the agreement between symmetry-equivalent beams is best. Since
insufficient electron repulsion by the suppressor grid leads to an increase of
the inelastic background, it is advisable to select the suppressor voltage on
the strong-suppression side of the $R$ factor minimum, especially if the
inelastic background is high and defocusing by the suppressor grid is not an
issue.
Of course, comparing equivalent beams does not provide information on
systematic effects that affect the equivalent beams the same way. For example,
if the integration radius is too large, nonlinear variations of the inelastic
background or the intensity tails of neighboring bright spots may affect the
intensity of symmetry-equivalent curves in the same way; this cannot be
detected via the $R$ factors between symmetry-equivalent curves. To diagnose
at least one of these problems, the plot also contains statistics on $I(V)$
curve regions where the smoothed intensity is negative (red in Fig. 3). The
$x$ axis of these points gives the absolute value of the most negative
intensity, and the $y$ axis gives the total energy range (per beam, in
kiloelectronvolts) where the intensity is negative. Thus, high-quality data
are characterized by few (or no) red points. If there are any red points, they
should be close to the bottom left. If data obtained with an oval background
(see Sec. II.4) are badly plagued by negative intensities it is usually better
to choose a circular background instead.
The plots generated when tracking spots also include a set of selected $I(V)$
curves of symmetry-equivalent spots (useful to check alignment). There is also
a plot stack (i.e., a set of plots) of the deviations of the spot positions
from the polynomial model in Eq. (3) to check spot tracking. In addition, the
user can plot a large number of quantities for a single beam, a few beams, or
the overall measurement (available via the “More$\gg$” button of the spot
tracker panel).
### II.8 The beam current I0
Since the electron beam current $I_{0}$ usually changes during a LEED
experiment, the raw intensities measured should be normalized by dividing by
$I_{0}$. The beam current $I_{0}$ measured by LEED electronics can have an
offset, which may be a substantial fraction of $I_{0}$ at low currents
(especially for microchannel-plate LEED optics, where beam currents are a few
orders of magnitudes lower than for standard LEED). This offset is named
$I_{00}$ and may depend on the energy; the program can subtract it from the
measured $I_{0}$ values.
Any noise of $I_{0}$ will affect the final $I(V)$ curves. To avoid this
problem, it is possible to smooth the $I_{0}-I_{00}$ curve. These options are
available via the “Set Energies, I0, t” button [Fig. 1(c); typical data are
shown in Fig. 1(g)]. Smoothing uses a 4th-degree modified-sinc kernel, which
is similar to Savitzky–Golay filtering but provides better noise suppression
and is less prone to overshoot at the boundaries [40].
In some cases, sudden jumps of the electron intensity occur. (One possible
reason is thermal expansion of some part of the electron source, leading to
sudden movement when it overcomes static friction.) In such a case, smoothing
of the electron current would smooth out the jump and result in an improper
normalization. Such jumps can be also detected in the background intensity of
the LEED images far from the spots, especially if the background is high
(e.g., if the sample temperature is comparable to or higher than the Debye
temperature). For these cases, the user can choose to take the rapid
variations of the background (which tends to have low noise because it is the
average over a large number of pixels) [as the background intensity, we use
the average intensity of the 40% darkest pixels in the screen area as defined
by the mask, excluding the integration areas of the spots] and apply these
variations to the smoothed $I_{0}$ values,
$I_{0}^{\text{corr}}=S(I_{0}-I_{00})\frac{I_{\mathrm{b}}}{S(I_{\mathrm{b}})}\ ,$ (7)
where $I_{\mathrm{b}}$ is the background intensity and $S$ represents the
smoothing operator. The fraction ${I_{\mathrm{b}}}/{S(I_{\mathrm{b}})}$ in Eq.
(7) is similar to a high-pass-filtered version of the background intensity
with a baseline shifted to unity. If $I_{\mathrm{b}}$ is proportional to
$I_{0}-I_{00}$ (i.e., if $I_{\mathrm{b}}$ has the same energy dependence as
the measured beam current), Eq. (7) ensures that the corrected
$I_{0}^{\text{corr}}$ values will be proportional to $I_{0}-I_{00}$; otherwise
the slow variations of $I_{0}-I_{00}$ are combined with the fast variations of
$I_{\mathrm{b}}$. In many cases, we find that the background intensity varies
with energy in a manner similar to $I(V)$ curves, albeit with a lower relative
amplitude of $\approx 20$%. A large portion of these background variations
comes from stray light from very bright spots. If these variations are
substantial, only rapid variations of the background intensity should be used
for $I_{0}$ correction. According to Eq. (7), this means that only mild
smoothing should be used for $I_{0}-I_{00}$ (e.g., a smoothing parameter of 10
points for 0.5 eV energy steps). [The number of points entered as a
smoothing parameter is the number of points of a moving-average filter with
the same suppression of white noise. In other words, if $n$ is entered as the
number of points, the noise suppression factor is $1/\sqrt{n}$. The program
calculates the filter kernel required for this noise suppression.] The effect
of the correction can be examined after spot tracking by plotting $I_{0}$ and
$I_{0}^{\text{corr}}$ (available via the “More$\gg$” button).
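Eq. (7) can be sketched as follows, with the smoothing operator $S$ left abstract (e.g., the modified-sinc smoother of Ref. [40]); this is an illustration with hypothetical names, not the plugin code:

```java
/** Sketch of the I0 correction in Eq. (7); the smoothing operator S is left abstract. */
final class I0Correction {

    interface Smoother { double[] smooth(double[] values); }   // e.g. a modified-sinc kernel [40]

    /** Combines slow variations of I0 - I00 with fast variations of the background Ib. */
    static double[] correctedI0(double[] i0, double[] i00, double[] iBackground, Smoother s) {
        int n = i0.length;
        double[] net = new double[n];
        for (int i = 0; i < n; i++) net[i] = i0[i] - i00[i];
        double[] netSmooth = s.smooth(net);
        double[] bgSmooth = s.smooth(iBackground);
        double[] corr = new double[n];
        for (int i = 0; i < n; i++)
            corr[i] = netSmooth[i] * iBackground[i] / bgSmooth[i];   // Eq. (7)
        return corr;
    }
}
```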
### II.9 More spot-tracker features and utilities
To enhance usability, the spot tracker contains additional features, available
in the “More$\gg$” menu of the spot tracker panel. These include highlighting
specific beams (to find and follow them easily in the LEED movie), and
deleting beams in the output (fully or within some energy range). The
“More$\gg$” menu also has an entry to list the current values of all
parameters, together with the respective default values. The parameters are
also written to a .log file upon saving data. The parameters can be also read
from .log files, for processing the same or related data with identical
parameters.
A further function of the spot tracker is undistorting one of the LEED images
in the movie or the full movie, based on the polynomial model in Eq. (3). The
undistorted image stack can be created with a fixed $k$-space scale; then the
spots do not move with energy. Averaging the images (“slices”) of such a stack
over some energy range (or even the whole stack) provides an “average” LEED
image with a high signal-to-noise ratio; also the adverse effects of the grids
will be averaged out.
In addition, the package includes several utility plugins for handling $I(V)$
curves: averaging $I(V)$ curves, stitching curves with different (overlapping)
energy ranges, resampling to a different energy step, (energy-dependent)
intensity corrections, and $R$ factor calculations. A typical application of
these utilities is mentioned in Sec. II.2: Noise reduction by averaging the
$I(V)$ curves obtained with slightly different distance between the LEED
optics and the sample. This is an efficient way to reduce the influence of the
grids: In each movie, the electron beam of a given diffraction maximum reaches
the grids at a slightly different position. Averaging uses the algorithm
described in Sec. II.10, which includes smooth fading in or fading out if the
energy ranges of the curves differ.
Most functions of the spot tracker and utilities can be controlled via the
ImageJ macro language. Automation is simplified by the ImageJ Macro Recorder,
which records the macro commands corresponding to a given workflow during
manual operation.
### II.10 The I(V) curve editor
Figure 4: Screenshot of the $I(V)$ curve editor. “Group” refers to a set of
symmetry-equivalent beams. The buttons marked by arrows at the right side
provide additional functionality with right-clicking (e.g., setting a
parameter for all groups). Note that the intensity ($y$ axis) scale is chosen
such that the highest spot intensity in the set of all $I(V)$ curves is
$10^{3}$; even an intensity of 1 on this scale (0.1% of the brightest spot) is
sufficient for reasonable data quality. This $y$ axis scale only applies to
the intensities, not to the $Y$ function of the $R$ factor, which is always
shown in the same place at the top of the plot, irrespective of the y-axis
scale.
The $I(V)$ curve editor (Fig. 4) is used for the final steps before the
experimental $I(V)$ curves can be used for comparison with “theoretical”
curves in structure optimization. These steps include averaging between
symmetry-equivalent beams, selection of the useful data (sufficiently low
noise, reasonable agreement between inequivalent beams), smoothing, and
examination of the data. As described in the following, these steps are at
least partly automated. This allows handling large data sets with hundreds of
symmetry-inequivalent beams in a short time.
_Curve averaging_ — When averaging the $I(V)$ curves of symmetry-equivalent
beams, the individual curves usually encompass different energy ranges. If a
curve begins or ends within the energy range selected for the output, it is
important to avoid jumps of the averaged intensity where that curve begins or
ends. Therefore, before averaging, the curves are normalized and slow trends
in the intensity ratio between the beginning and end of these curves are
equalized. In addition, we use smooth fading in and/or fading out of the
$I(V)$ curves that do not span the full energy range required for the output:
Shorter curves use a linear increase or decrease of the weight in the
averaging process at the respective end. (We use the imaginary part
$V_{\mathrm{0i}}$ of the inner potential as an indication of a typical energy
scale for variations in the $I(V)$ curves [7]; e.g., the increase or decrease
of the weight is over an energy interval of $4|V_{\mathrm{0i}}|$.)
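The fading scheme can be sketched as a per-energy weight that rises and falls linearly over an interval such as $4|V_{\mathrm{0i}}|$ at the ends of each curve. The following illustration uses hypothetical names and omits the normalization and trend equalization described above:

```java
/** Sketch of averaging symmetry-equivalent I(V) curves with linear fade-in/fade-out weights. */
final class CurveAveraging {

    /**
     * Weight of a curve at energy e: 1 inside its range, fading linearly from 0 to 1 over
     * fadeLength (e.g. 4*|V0i|, assumed > 0) at each end, and 0 outside the range.
     */
    static double weight(double e, double curveStart, double curveEnd, double fadeLength) {
        if (e < curveStart || e > curveEnd) return 0;
        double w = 1;
        if (e < curveStart + fadeLength) w = Math.min(w, (e - curveStart) / fadeLength);
        if (e > curveEnd - fadeLength) w = Math.min(w, (curveEnd - e) / fadeLength);
        return w;
    }

    /** Weighted average of curves tabulated on a common energy grid; NaN marks missing data. */
    static double[] average(double[][] curves, double[] starts, double[] ends,
                            double[] energies, double fadeLength) {
        double[] out = new double[energies.length];
        for (int i = 0; i < energies.length; i++) {
            double sum = 0, sw = 0;
            for (int k = 0; k < curves.length; k++) {
                double w = weight(energies[i], starts[k], ends[k], fadeLength);
                if (w > 0 && !Double.isNaN(curves[k][i])) { sum += w * curves[k][i]; sw += w; }
            }
            out[i] = sw > 0 ? sum / sw : Double.NaN;
        }
        return out;
    }
}
```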
_Selection of data_ — As mentioned in the introduction, a large database of
experimental beams is important for a reliable structure analysis. [Therefore,
the $I(V)$ curve editor displays the total energy range of symmetry-
inequivalent beams selected in the status line at the bottom, unless the mouse
pointer is at a button; then the status line displays information related to
that button.] Selecting the data range is a compromise between a large
database and rejecting low-quality data that increase the $R$ factor and do
not help in the structure optimization.
As an aid for selecting the useful data, the $I(V)$ curve editor does not only
plot the original data and the (smoothed and unsmoothed) average over the
symmetry-equivalent beams, but also the $Y$ function of Pendry’s $R$ factor
$R_{\mathrm{P}}$ [7]. According to current knowledge, Pendry’s $R$ factor is
the method of choice for experiment–simulation comparison; in structure
optimization it yields more accurate results than $R_{2}$, which is based on
the squared difference of the normalized $I(V)$ curves [43]. $R_{\mathrm{P}}$
is based on a comparison of the $Y$ function between experiment and theory.
The $Y$ function is given by
$Y=\frac{L}{1+\left(V_{\mathrm{0i}}L\right)^{2}}\quad\text{with}\quad L=\frac{1}{I}\frac{\mathrm{d}I}{\mathrm{d}V}\ ,$ (8)
where $V_{\mathrm{0i}}$ is the imaginary part of the inner potential
(typically, $|V_{\mathrm{0i}}|\approx 4$ to $5$ eV) [7]. $Y$ is a nonlinear
function of the logarithmic derivative $L$ of the $I(V)$ curves. Thus, it is
not directly obvious to what degree the $Y$ function is influenced by noise
and how it depends on smoothing. Plotting the $Y$ function allows the user to
avoid data regions where $R_{\mathrm{P}}$ is strongly influenced by
experimental noise and to examine the impact of smoothing on $Y$.
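For illustration, the $Y$ function of Eq. (8) can be evaluated from a tabulated $I(V)$ curve with a simple central-difference derivative. This is a sketch with our own naming; the actual $R$-factor code may handle the derivative and the curve ends differently:

```java
/** Sketch of Pendry's Y function, Eq. (8), from a tabulated I(V) curve; illustrative only. */
final class PendryY {

    /** Computes Y(E) = L / (1 + (V0i*L)^2) with L = (dI/dV)/I, using central differences. */
    static double[] yFunction(double[] intensity, double energyStep, double v0i) {
        int n = intensity.length;                            // assumed n >= 3, all intensities > 0
        double[] y = new double[n];
        for (int i = 1; i < n - 1; i++) {
            double dIdV = (intensity[i + 1] - intensity[i - 1]) / (2 * energyStep);
            double l = dIdV / intensity[i];
            y[i] = l / (1 + v0i * v0i * l * l);
        }
        y[0] = y[1];                                         // simple end-point handling
        y[n - 1] = y[n - 2];
        return y;
    }
}
```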
Manual selection of the “good” beams and their useful energy ranges can be a
cumbersome task, especially if there are hundreds of symmetry-inequivalent
beams. The $I(V)$ curve editor therefore provides an option for automatic
selection. The main basis for this analysis is an estimate of the noise of the
$Y$ function, which is done by an $R$ factor-like comparison between the $Y$
function of the raw data and that after slight smoothing. As explained in Ref.
7, the width of the features in an $I(V)$ curve is determined by
$|V_{\mathrm{0i}}|$. A smoothing parameter of $0.55\,|V_{\mathrm{0i}}|$ leads
to almost no noticeable change of low-noise $I(V)$ curves; therefore we use
this smoothing parameter for the comparison with the $Y$ function of the raw
curve. The comparison is made point by point and then smoothed by a running-
average filter with a window length of $2|V_{\mathrm{0i}}|$. With proper
scaling, this procedure gives an estimate $r_{\mathrm{n}}(E)$ of the local
contributions of the noise in the $I(V)$ data to the overall $R$ factor,
assuming that the $R$ factor is dominated by noise. [Like the $R$ factor,
the estimate of the local noise contribution $r_{\mathrm{n}}(E)$ is based on
the squared difference of the $Y$ function of the unsmoothed and slightly
smoothed $I(V)$ curve. To calculate the noise, we assume white Gaussian noise
of the $Y$ function and make use of the known bandwidth of the smoothing
filter [40]. For the impact of the noise on the final $R$ factor we assume
that a smoothing parameter of $1.0\,|V_{\mathrm{0i}}|$ will be chosen for the
final curves.] [The user can select in the options menu to plot the noise
function $r_{\mathrm{n}}(E)$.] Automatic selection of the beams and energy
ranges (i) selects only curves or energy ranges where the average of the noise
contributions $r_{\mathrm{n}}(E)$ is below a given limit $R_{\mathrm{limit}}$
and (ii) maximizes a figure of merit (FoM), given by
$F=\frac{E_{\mathrm{max}}-E_{\mathrm{min}}}{\langle
r_{\mathrm{n}}(E)+c\rangle}$ (9)
where $E_{\mathrm{min}}$ and $E_{\mathrm{max}}$ are the bounds of the energy
range selected for a given beam and the angle brackets denote the average over
that energy range. For very low noise values $r_{\mathrm{n}}(E)$, the constant
$c$ ensures that the FoM depends only on the energy range, not on minor
changes of the noise (we use $c=0.5\,R_{\mathrm{limit}}$). Maximizing the FoM
results in a large energy range, but penalizes energy regions with a high
noise that would strongly increase the $R$ factor (high values of the
denominator). For low-noise data, the denominator is dominated by the constant
$c$, and only the energy range is maximized. A typical choice of the noise
limit is $R_{\mathrm{limit}}\approx 0.05$; lower values are required for data
with excellent agreement between calculated and experimental $I(V)$ curves
($R_{\mathrm{P}}\lesssim 0.1$) to avoid compromising the $R$ factor.
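As an illustration of criteria (i) and (ii), the following sketch (our own simplified reimplementation, not the plugin's actual search; all names are ours) finds, for a single beam, the contiguous energy range that maximizes the figure of merit of Eq. (9) while keeping the mean noise estimate below $R_{\mathrm{limit}}$:

```python
import numpy as np

def select_energy_range(energies, r_noise, r_limit=0.05):
    """Brute-force search over contiguous index ranges for the range that
    maximizes F = (E_max - E_min) / <r_n(E) + c>  (Eq. 9), with
    c = 0.5 * r_limit, subject to <r_n(E)> <= r_limit (criterion (i))."""
    c = 0.5 * r_limit
    best_range, best_fom = None, -np.inf
    n = len(energies)
    for i in range(n):
        for j in range(i + 1, n):
            mean_noise = np.mean(r_noise[i:j + 1])
            if mean_noise > r_limit:          # reject too-noisy ranges
                continue
            fom = (energies[j] - energies[i]) / (mean_noise + c)
            if fom > best_fom:
                best_range, best_fom = (i, j), fom
    return best_range, best_fom
```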
In addition to the noise-dependent part, automatic selection of beams and
energy ranges also takes care of regions of negative intensity that result
from uneven background. (We only consider negative intensities after
smoothing, since noise may also cause negative intensities.) $R_{\mathrm{P}}$
is only defined for non-negative intensities $I$. To avoid negative values, a
simple strategy is adding the absolute value of the most negative intensity to
the $I(V)$ curve. Since $Y$ is a nonlinear function of the intensities (and
their derivatives), such an upshift of the intensities affects $Y$ also in the
regions where negative intensities do not occur. We therefore calculate the
$R$ factor between the original and the upshifted curve in the positive-
intensity regions. If this $R$ factor is too high (above
$R_{\mathrm{limit}}$), instead of upshifting, we exclude the negative-
intensity region and its immediate vicinity. The noise-dependent selection
then proceeds as described above. Since we limit ourselves to $I(V)$ curves
with a contiguous energy range, the FoM determines whether the energy range
below or above the negative-intensity region is used. Typically, the ratio
between background and intensities increases with energy, thus negative
intensities occur more often at high energies and the low-energy side gets
selected.
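A minimal sketch of the upshift-versus-exclude decision (our own simplification, with all names and helper functions ours; it assumes the positive-intensity region is contiguous) could look as follows:

```python
import numpy as np

def pendry_y(E, I, v0i=5.0):
    L = np.gradient(I, E) / (I + 1e-12 * np.max(np.abs(I)))
    return L / (1.0 + (v0i * L) ** 2)

def pendry_r(y1, y2):
    """Pendry R factor between two Y-function arrays."""
    return np.sum((y1 - y2) ** 2) / np.sum(y1 ** 2 + y2 ** 2)

def handle_negative_intensities(E, I, v0i=5.0, r_limit=0.05):
    """Upshift the curve by |most negative intensity| if this changes the
    Y function only mildly in the positive region; otherwise exclude the
    negative-intensity region (crude version of the strategy in the text)."""
    if I.min() >= 0:
        return "keep", E, I
    shifted = I + abs(I.min())
    pos = I > 0                               # compare only where I was positive
    r = pendry_r(pendry_y(E[pos], I[pos], v0i),
                 pendry_y(E[pos], shifted[pos], v0i))
    if r <= r_limit:
        return "upshift", E, shifted
    return "exclude", E[pos], I[pos]
```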
_Noise-dependent smoothing_ — For large data sets, noise-dependent adjustment
of the smoothing parameter can be a lengthy task. By comparing the $R$ factor
between experimental and simulated data, as well as by analyzing manually
selected smoothing parameters, we found that a good choice is a smoothing
parameter proportional to the average of the noise estimate
$r_{\mathrm{n}}(E)$ of a given $I(V)$ curve raised to the power of 0.15 [45].
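A sketch of this scaling (our own; the reference curve and its manually chosen smoothing parameter are hypothetical inputs) is simply:

```python
import numpy as np

def adapted_smoothing(r_noise_curve, r_noise_ref, smoothing_ref_eV):
    """Scale a reference smoothing parameter (chosen manually for a curve
    with mean noise estimate r_noise_ref) to another curve, proportionally
    to the mean noise estimate raised to the power 0.15."""
    return smoothing_ref_eV * (np.mean(r_noise_curve) / r_noise_ref) ** 0.15
```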
As a guide for choosing the smoothing parameter, the program also contains a
facility to find the smoothing setting that optimizes the $R$ factor between
smoothed experimental data and theoretical intensities (calculated, e.g., with
viperleed.calc [12]). When calculating such an optimal smoothing parameter for
the whole data set or a subset thereof, the overall smoothing parameter can be
adapted to the noise of each curve, as described above. The minimum of the $R$
factor against the smoothing parameter is rather shallow. To avoid
oversmoothing, we do not use the value exactly at the minimum but the weakest
smoothing that does not lead to an $R$ factor more than 1% above the minimum.
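The rule can be sketched as follows (our own illustration; `smooth` and `r_factor` stand for the modified-sinc smoothing and the Pendry $R$ factor and are assumed to be supplied, so this is not the plugin's actual code):

```python
import numpy as np

def weakest_acceptable_smoothing(params_eV, exp_curve, theo_curve, smooth, r_factor):
    """Return the weakest smoothing parameter whose R factor against the
    theoretical curve is at most 1% above the minimum over all candidates.
    params_eV must be sorted in ascending order (weakest smoothing first)."""
    r_values = np.array([r_factor(smooth(exp_curve, p), theo_curve)
                         for p in params_eV])
    acceptable = np.nonzero(r_values <= 1.01 * r_values.min())[0]
    return params_eV[int(acceptable[0])]
```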
_Finding “bad” regions in the data_ — For examination of the data quality, the
$I(V)$ curve editor provides a function to selectively examine the cases of
poor agreement of symmetry-equivalent beams with their average, based on
Pendry’s $R$ factor. It has to be noted that $R_{\mathrm{P}}$ is extremely
sensitive to the exact shape of the curves at each minimum; even tiny
deviations at the minima can lead to high values of the difference of the $Y$
functions, which determines the $R$ factor. Since such deviations are
unavoidable, we first eliminate sharp peaks of the $Y$ function difference by
a minimum filter [46], followed by smoothing with a running-average filter. If
the maxima of the local $R$ factor filtered this way exceed a user-defined
threshold, they are flagged as regions of bad agreement. These regions are
sometimes related to a beam passing over a defect of the LEED screen; in such
a case the affected beam can be excluded from averaging in this energy
range.
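This detection step can be sketched as follows (our own simplified version; the window lengths are illustrative guesses, not the plugin's defaults):

```python
import numpy as np
from scipy.ndimage import minimum_filter1d, uniform_filter1d

def flag_bad_regions(y_beam, y_average, threshold, min_win=5, avg_win=15):
    """Flag energies where a beam disagrees strongly with the average of its
    symmetry-equivalent beams.  The pointwise Y-function difference (local
    contribution to R_P) is passed through a minimum filter, which removes
    isolated sharp peaks, and then a running-average filter before being
    compared with the user-defined threshold."""
    local = (y_beam - y_average) ** 2 / (y_beam ** 2 + y_average ** 2 + 1e-12)
    local = minimum_filter1d(local, size=min_win)
    local = uniform_filter1d(local, size=avg_win)
    return local > threshold
```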
_Workflow_ — The workflow for editing a large set of $I(V)$ curves can thus be
reduced to (i) selection of a suitable noise limit $R_{\mathrm{limit}}$ and
(ii) choice of the smoothing parameter for a curve with medium noise, followed
by applying noise-dependent smoothing to all other curves. For some materials
(notably 5d elements) or if very low energies (below 50 eV) are included, the
low-energy peaks may be substantially sharper than expected from the
$V_{\mathrm{0i}}$ value. In such a case, weaker smoothing should be selected
for beams including low energies. This can be done by first selecting the
smoothing parameter for the first beams, which tolerate less smoothing due to
their sharp peaks at low energies. Then, starting with a “higher” beam (with
the onset at higher energy), stronger smoothing can be applied to this beam
and those further up. The final step in the workflow is the examination of the
data quality; in many cases this can be restricted to the cases of poor
agreement of symmetry-equivalent beams mentioned above (a small fraction of
the whole data set).
When editing is complete, the final set of $I(V)$ curves can be saved into a
file that is directly suitable as experimental input for structure
optimization with viperleed.calc [12]. The edit parameters (data ranges
selected, smoothing strengths) are saved in an edit log file, which allows the
user to (i) interrupt an editing session at any point and resume it later,
(ii) modify a previous edit (changing the smoothing or adding/removing some
beams), and (iii) apply the same editing parameters to a different data set
with the same symmetry. This is useful, for instance, as a starting point when
analyzing a later measurement of the same sample.
Apart from selecting and editing data, the $I(V)$ curve editor can also be
used to compare different data sets, by opening two (or more) editor windows.
Multiple editor windows are synchronized, which means that they show the same
group of symmetry-equivalent beams over the same energy range (prior to any
manual zooming). In addition, the $I(V)$ curve editor can be synchronized with
the LEED movie shown in the spot tracker. Then, the energy selected in the
spot tracker is marked in the $I(V)$ curve editor, and the beams selected in
the $I(V)$ curve editor are marked in the spot tracker.
## III Conclusions
We have developed an open-source software package [47], implemented as a set
of ImageJ plugins, for the analysis of LEED movies and the extraction and
processing of $I(V)$ curves. The package was designed for (i) efficient and
fast workflow and (ii)
optimal data quality. For reaching these aims it includes many features not
available in previous software solutions. The package currently contains about
15000 lines of code written over more than four years. The screenshots in
Figs. 3 and 4 give an indication of the typical data quality obtained with our
system. Even spots that have 0.1% of the maximum spot intensity can be
evaluated with acceptable noise (Fig. 4). This data quality is substantially
better than what could be achieved with legacy systems based on 8-bit images.
With our system, we have successfully performed spot tracking and $I(V)$
measurements for LEED movies acquired with four different experimental setups
from more than 20 different structures: from simple Cu(111)-$(1\times 1)$ to
complex ones with a dense arrangement of spots. The upper limit in complexity
processed so far was a $(10\times 10)$ superstructure on Pt(111) [10], where
about 2000 $I(V)$ curves could be measured; about 350 inequivalent beams with
sufficient quality were finally selected. For this structure, the proximity of
spots limits the usable energy range to $E\leq 400$ eV. Taking advantage of
parallelization for multi-core machines, the processing times for spot
tracking and $I(V)$ measurements on a contemporary desktop computer are below
one minute even for such a large data set. Including all user intervention
(from opening the files and creation of the mask for a new LEED setup, setting
all parameters, up to and including assessing the quality of the results),
extraction of the raw $I(V)$ data from a typical LEED movie takes about 10–15
minutes (less for subsequent similar movies with the same setup). The time
required for selection, processing and examination of the data in the $I(V)$
curve editor is of the same order of magnitude. This has to be compared with a
week of work for a structure with somewhat lower complexity [48] when manually
selecting and tracking spots one by one. Besides reduction of manual work
(which also reduces the risk of human errors), our software contains many
features that improve the data quality. Together with the other parts of the
ViPErLEED project, we consider it an important step towards making LEED $I(V)$
studies more accessible.
###### Acknowledgements.
The authors would like to thank Maximilian Buchta for providing test data for
the “azimuth blur” mode and Alessandro Sala for providing LEEM image stacks
for testing. This research was funded in part by the Austrian Science Fund
(FWF) under doi 10.55776/F81, Taming Complexity in Materials Modeling (TACO).
For the purpose of open access, the authors have applied a CC BY public
copyright license to any Author Accepted Manuscript version arising from this
submission.
## References
* Van Hove _et al._ [1986] M. A. Van Hove, W. H. Weinberg, and C.-M. Chan, _Low-energy electron diffraction: Experiment, theory and surface structure determination_ , softcover reprint of the hardcover 1. edition 1986 ed., Springer Series in Surface Sciences No. 6 (Springer, Berlin, 1986).
* Van Hove _et al._ [1993] M. A. Van Hove, W. Moritz, H. Over, P. J. Rous, A. Wander, A. Barbieri, N. Materer, U. Starke, and G. A. Somorjai, Automated determination of complex surface structures by LEED, Surf. Sci. Rep. 19, 191 (1993).
* Heinz and Hammer [1998] K. Heinz and L. Hammer, Surface crystallography by low energy electron diffraction, Z. Kristallogr. 213, 615 (1998).
* Held [2010] G. Held, Low-energy electron diffraction – Crystallography of surfaces and interfaces, Bunsen-Magazin 12, 124 (2010).
* Heinz [2013] K. Heinz, Electron based methods: 3.2.1 Low-energy electron diffraction (LEED), in _Surface and Interface Science_, Vol. 1, edited by K. Wandelt (John Wiley & Sons, Ltd, 2013) pp. 93–150.
* Fauster _et al._ [2020] T. Fauster, L. Hammer, K. Heinz, and M. A. Schneider, _Surface Physics: Fundamentals and Methods_ (De Gruyter Oldenbourg, 2020).
* Pendry [1980] J. B. Pendry, Reliability factors for LEED calculations, J. Phys. C: Solid State Phys. 13, 937 (1980).
* Note [1] Most LEED $I(V)$ studies use normal incidence. Since LEED $I(V)$ spectra depend sensitively on the incidence angle, one can verify normal incidence by comparing the $I(V)$ curves of symmetry-equivalent beams, provided that the surface has sufficient symmetry (at least a rotation axis). Advantages of normal incidence are (i) noise reduction by averaging of symmetry-equivalent beams (also reduction of errors due to residual misalignment) and (ii) the exact incidence angle is known. Off-normal incidence usually requires to handle the exact incidence angle as one or two additional fit parameter(s) in the structure search.
* Schmidt _et al._ [2002] A. Schmidt, W. Meier, L. Hammer, and K. Heinz, Deep-going reconstruction of Ir(100)-$5\times 1$, J. Phys.: Condens. Matter 14, 12353 (2002).
* Kißlinger _et al._ [2023] T. Kißlinger, A. Schewski, A. Raabgrund, H. Loh, L. Hammer, and M. A. Schneider, Surface telluride phases on Pt(111): Reconstructive formation of unusual adsorption sites and well-ordered domain walls, Phys. Rev. B 108, 205412 (2023).
* Dörr _et al._ [2024] F. Dörr, M. Schmid, F. Kraushofer, T. Kißlinger, L. Hammer, U. Diebold, and M. Riva, ViPErLEED package III: Data acquisition for low-energy electron diffraction, to be published (2024).
* Kraushofer _et al._ [2024] F. Kraushofer, A. M. Imre, T. Kißlinger, G. Francheschi, M. Schmid, U. Diebold, L. Hammer, and M. Riva, ViPErLEED package I: Calculation of $I(V)$ curves and structural optimization, Phys. Rev. Res., submitted (2024).
* Mayer _et al._ [2012] A. Mayer, H. Salopaasi, K. Pussi, and R. D. Diehl, A novel method for the extraction of intensity–energy spectra from low-energy electron diffraction patterns, Computer Physics Communications 183, 1443 (2012).
* [14] A. Mayer, H. Salopaasi, and N. Ferralis, EasyLEED: LEED I(E)-spectra analysis – EasyLEED 2.5.2 documentation, https://andim.github.io/easyleed/, accessed: 2021-06-21.
* Sojka _et al._ [2013a] F. Sojka, M. Meissner, C. Zwick, R. Forker, and T. Fritz, Determination and correction of distortions and systematic errors in low-energy electron diffraction, Rev. Sci. Instrum. 84, 015111 (2013a).
* Sojka _et al._ [2013b] F. Sojka, M. Meissner, C. Zwick, R. Forker, M. Vyshnepolsky, C. Klein, M. Horn-von Hoegen, and T. Fritz, To tilt or not to tilt: Correction of the distortion caused by inclined sample surfaces in low-energy electron diffraction, Ultramicroscopy 133, 35 (2013b).
* [17] F. Sojka, LEEDCal, http://fritz-sojka-gbr.de/leedcal/, accessed: 2021-08-05.
* Schneider _et al._ [2012] C. A. Schneider, W. S. Rasband, and K. W. Eliceiri, NIH Image to ImageJ: 25 years of image analysis, Nat. Methods 9, 671 (2012).
* [19] See Supplemental Material at [URL] for the program and program documentation, including installation instructions.
* Note [2] https://github.com/viperleed/viperleed-imagej.
* [21] M. F. Opheys, http://www.ee2000.de, accessed: 2021-06-21.
* Kißlinger _et al._ [2021] T. Kißlinger, M. A. Schneider, and L. Hammer, Submonolayer copper telluride phase on Cu(111): Ad-chain and trough formation, Phys. Rev. B 104, 155426 (2021).
* Buil [1991] C. Buil, _CCD Astronomy_ (Willmann-Bell, Richmond, 1991).
* Note [3] Note that annealing polycrystalline materials can lead to grain growth and, thus, the appearance of LEED spots for such materials. To ensure that this is not the case, it is a good practice to inspect the stack of flat-field images as it will be applied to the main input. This image stack, processed with the appropriate dark frame, the normalization polynomial in Eq. (2), and averaging for noise reduction, is available with the “Show processed flat field” option in the “Dark&Flat Processing” dialog.
* Koller _et al._ [2002] R. Koller, W. Bergermayer, G. Kresse, C. Konvicka, M. Schmid, J. Redinger, R. Podloucky, and P. Varga, The structure of the oxygen-induced c(6×2) reconstruction of V(110), Surf. Sci. 512, 16 (2002).
* Mighell [1999] K. J. Mighell, Algorithms for CCD Stellar Photometry, in _ASP Conference Series_ , Vol. 172, edited by D. M. Mehringer, R. L. Plante, and D. A. Roberts (Astronomical Society of the Pacific, San Francisco, 1999) pp. 317–328.
* Henzler [1978] M. Henzler, Quantitative evaluation of random distributed steps at interfaces and surfaces, Surf. Sci. 73, 240 (1978).
* McKinney _et al._ [1967] J. T. McKinney, E. R. Jones, and M. B. Webb, Surface lattice dynamics of silver. II. Low-energy electron thermal diffuse scattering, Phys. Rev. 160, 523 (1967).
* Howell [1989] S. B. Howell, Two-dimensional aperture photometry: Signal-to-noise ratio of point-source observations and optimal data-extraction techniques, Publ. Astron. Soc. Pac. 101, 616 (1989).
* Sonnett _et al._ [2013] S. Sonnett, K. Meech, R. Jedicke, S. Bus, J. Tonry, and O. Hainaut, Testing accuracy and precision of existing photometry algorithms on moving targets, Publ. Astron. Soc. Pac. 125, 456 (2013).
* Roučka _et al._ [2002] R. Roučka, J. Jiruše, and T. Šikola, Spot intensity processing in LEED images, Vacuum 65, 121 (2002).
* Note [4] Instead of fitting a linear background one could simply use the average over the background area if the spot is exactly centered (vanishing first moments over the integration disk after subtraction of the linear background) and the shapes of the integration areas for the spot and the background have at least twofold rotation symmetry around the center. Subtracting the linear background is required for obtaining the spot position via the first moments, and also for the intensity measurement if the spot is not perfectly centered spot or there is an asymmetry of the integration areas. Slight asymmetry can occur due to the spatial quantization (image pixels). We use a one-pixel-wide transition zone where the weight of the background evaluation decreases to zero; this transition zone is not required to be fully inside the foreground area of the mask. If pixels in the transition zone have to be excluded because they are outside the mask foreground area, this also causes asymmetry.
* Note [5] Since the noise of different pixels is usually uncorrelated, one can use the rules of error propagation to estimate the noise. If the areas of the integration disk and background annulus are the same, at vanishing spot intensity (i.e., without intensity-dependent shot noise), the noise-related errors of the two integrals over these areas will be the same. Thus, the error of the difference of these two integrals equals $\sqrt{2}$ times the error of one of the integrals.
* Note [6] For spots moving along the arm holding the electron source, which is essentially a radial “spoke” (to the bottom right in Fig. 1), the oval background touches that arm before an annular background area with equal area would touch it. This can reduce the amount of data available with the oval background as compared with a circular background. This case occurs less often than that of spots close to the inner or outer boundary. If it occurs, it often affects only one of several symmetry-equivalent beams, so it does not affect the size of the experimental database of the symmetry-averaged $I(V)$ curves.
* Note [7] For the calculation of $\sigma$ from the moments we use a heuristic correction to take into account that the spot intensity is not fully inside the integration disk. Since experimental LEED spot profiles are non-Gaussian and typically have slowly decaying tails at low intensity [28], the spot size is typically overestimated, especially if the integration radius is large.
* Toofan and Watson [1994] J. Toofan and P. R. Watson, A new image processing method for extracting integrated intensities from low‐energy electron diffraction spots, Rev. Sci. Instrum. 65, 3382 (1994).
* Savitzky and Golay [1964] A. Savitzky and M. J. E. Golay, Smoothing and differentiation of data by simplified least squares procedures, Anal. Chem. 36, 1627 (1964).
* Note [8] The experimental results in Ref. [28] rather indicate a $1/r$ decay, which we cannot confirm.
* Note [9] Since the “$I(V)$ Quality Statistics” plot is based on the mutual $R$-factors of symmetry-equivalent beams, it is created only if there are at least two symmetry-equivalent beams. In case of non-normal incidence, for a valid result, only beams that are symmetry-equivalent at the given beam incidence should belong to the same group in the spot pattern file.
* Schmid _et al._ [2022] M. Schmid, D. Rath, and U. Diebold, Why and how Savitzky–Golay filters should be replaced, ACS Meas. Sci. Au 2, 185 (2022).
* Note [10] As a background intensity, we use the average intensity of the 40% darkest pixels in the screen area as defined by the mask, excluding the integration areas of the spots.
* Note [11] The number of points entered as a smoothing parameter is the number of points of a moving-average filter with the same suppression of white noise. In other words, if $n$ is entered as the number of points, the noise suppression factor is $1/\sqrt{n}$. The program calculates the filter kernel required for this noise suppression.
* Sporn _et al._ [1998] M. Sporn, E. Platzgummer, S. Forsthuber, M. Schmid, W. Hofer, and P. Varga, The accuracy of quantitative LEED in determining chemical composition profiles of substitutionally disordered alloys: a case study, Surf. Sci. 416, 423 (1998).
* Note [12] Like the $R$ factor, the estimate of the local noise contribution $r_{\mathrm{n}}(E)$ is based on the squared difference of the $Y$ function of the unsmoothed and slightly smoothed $I(V)$ curve. To calculate the noise, we assume white Gaussian noise of the $Y$ function and make use of the known bandwidth of the smoothing filter [40]. For the impact of the noise on the final $R$ factor we assume that a smoothing parameter of $1.0\,|V_{\mathrm{0i}}|$ will be chosen for the final curves.
* Note [13] Here, the smoothing parameter of the modified sinc filter [40] is given in electronvolts and corresponds to the kernel length of a moving-average filter with equal suppression of white noise, as already described in Ref. [42] above. This smoothing parameter is roughly proportional to the width of the smoothing kernel and inversely proportional to the bandwidth.
* Burger and Burge [2016] W. Burger and M. J. Burge, _Digital Image Processing – An Algorithmic Introduction Using Java_ , 2nd ed., Texts in Computer Science (Springer-Verlag, London, 2016).
* Note [14] The code is licensed under GNU GPLv3 or any later version. The documentation is licensed under Creative Commons Attribution CC BY 4.0.
* von Witte _et al._ [2019] G. von Witte, T. Kißlinger, J. G. Horstmann, K. Rossnagel, M. A. Schneider, C. Ropers, and L. Hammer, Surface structure and stacking of the commensurate $(\sqrt{13}\times\sqrt{13})$R13.9∘ charge density wave phase of 1T-TaS2(0001), Phys. Rev. B 100, 155407 (2019).
since $1\leq2(|\xi_1|+\dots+|\xi_r|+\ell_1+\dots+\ell_s)$, the claim follows.
Let $L$ be a differential operator on $\G$ with constant coefficients, with symbol $\sigma_L$, that is, $\sigma_L:\mathbb{Z}^{r+1}\times\frac{1}{2}\Z^s\to\mathbb{C}$ such that for any $f\in \mathscr{D}'(\G)$:
$$\widehat{Lf}(\tau,\xi,\ell)_{\alpha\beta} = \sigma_L(\tau,\xi,\alpha)\widehat{f}(\tau,\xi,\ell)_{\alpha\beta}$$
$$\widehat{\Lt f}(\tau,\xi,\ell)_{\alpha\beta}=\sigma_L(-\tau,-\xi,-\alpha)\widehat{f}(\tau,\xi,\ell)_{\alpha\beta}$$
for each $(\tau,\xi)\in\Z^{r+1}, \ell\in\frac{1}{2}\N_0^s, -\ell\leq\alpha,\beta\leq \ell$. Then
$$(\ker \Lt)^0= \left\{f\in \mathscr{D}'(\G)\;|\; \widehat{f}(\tau,\xi,\ell)_{\alpha\beta} = 0 \text{ if } \sigma_L(\tau,\xi,\alpha)=0\right\}.$$
Suppose first that $\widehat{f}(\tau,\xi,\ell)_{\alpha\beta} = 0 \text{ if }\sigma_L(\tau,\xi,\alpha)=0$. Let $v\in C^\infty(\G)$ be such that $v\in\ker \Lt$. Then, by Lemma <ref>:
\begin{align*}
\langle f,v\rangle &= (2\pi)^{r+1}\sum_{(\tau,\xi)\in\Z^{r+1}}\sum_{\ell\in\frac{1}{2}\N_0^s}d_l\sum_{-\ell\leq\alpha,\beta\leq \ell}\widehat{f}(\tau,\xi,\ell)_{\alpha\beta}\widehat{v}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)}(-1)^{\sum \beta_j-\alpha_j}.
\end{align*}
If $\sigma_L(\tau,\xi,\alpha) = 0$, then $\widehat{f}(\tau,\xi,\ell)_{\alpha\beta} = 0$. If not, then since
\begin{align*}
0 &= \widehat{\Lt v}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)} = \sigma_L(\tau,\xi,\alpha)\widehat{v}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)}
\end{align*}
this implies $\widehat{v}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)}=0$, so every term in the sum above is zero and $\langle f,v\rangle=0$, so that $f\in(\ker \Lt)^0$. Now let $f\in(\ker \Lt)^0$ and suppose $\sigma_L(\tau,\xi,\alpha)=0$. For each $-\ell\leq\beta\leq \ell$, $\ell-\beta\in\N_0^s$, take $v_{\tau,\xi,\ell,\alpha,\beta}\in C^{\infty}(\G)$ given by:
$$ \widehat{v_{\tau,\xi,\ell,\alpha,\beta}}(-\tau,-\xi,\ell)_{(-\alpha)(-\beta)} = 1,$$
and $\widehat{v_{\tau,\xi,\ell,\alpha,\beta}}(\tau',\xi',\ell')_{\alpha'\beta'}=0$ otherwise. Then
\begin{align*}
\widehat{\Lt v_{\tau,\xi,\ell,\alpha,\beta}}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)} &= \sigma_L(\tau,\xi,\alpha)\widehat{v_{\tau,\xi,\ell,\alpha,\beta}}(-\tau,-{\xi},\ell)_{(-\alpha)(-\beta)} = \sigma_L(\tau,\xi,\alpha) = 0,
\end{align*}
and all other Fourier coefficients of $\Lt v_{\tau,\xi,\ell,\alpha,\beta}$ vanish because the corresponding coefficients of $v_{\tau,\xi,\ell,\alpha,\beta}$ do, so $v_{\tau,\xi,\ell,\alpha,\beta}\in\ker \Lt$. Therefore, by Lemma <ref>:
$$0 = \langle f,v_{\tau,\xi,\ell,\alpha,\beta}\rangle = (2\pi)^{r+1}d_l\widehat{f}(\tau,\xi,\ell)_{\alpha\beta}(-1)^{\sum\beta_j-\alpha_j},$$ hence $\widehat{f}(\tau,\xi,\ell)_{\alpha\beta}=0$,
so we conclude the other inclusion also holds.
Let $g,\theta \in C^{\infty}(\T^1)$ and $\theta_0 \doteq \frac{1}{2\pi}\int_0^{2\pi}\theta(t)\mathop{dt}$. If $\theta_0\not\in i\mathbb{Z}$, then the differential equation
\begin{equation}\label{ode}
\partial_t u(t)+\theta(t)u(t)=g(t),\,\qquad t\in\T^1
\end{equation}
admits unique solution in $C^\infty(\T^1)$ given by:
\begin{equation}\label{sol-}
u(t) = \frac{1}{1-e^{-2\pi\theta_0}}\int_0^{2\pi}g(t-s)e^{-\int_{t-s}^t\theta(\tau)d\tau}ds
\end{equation}
or equivalently by:
\begin{equation}\label{sol+}
u(t) = \frac{1}{e^{2\pi\theta_0}-1}\int_0^{2\pi}g(t+s)e^{\int_{t}^{t+s}\theta(\tau)d\tau}ds.
\end{equation}
If $\theta_0\in i\mathbb{Z}$, then equation <ref> admits infinitely many solutions given by:
\begin{equation}\label{solz}
u_\lambda(t) = \lambda e^{-\int_0^t\theta(\tau)d\tau}+\int_0^t g(s)e^{-\int_s^t\theta(\tau)d\tau}ds
\end{equation}
for every $\lambda\in\mathbb{R}$, if and only if
$$ \int_0^{2\pi}g(t)e^{\int_0^t\theta(\tau)d\tau}\mathop{dt}=0.$$
Since the functions on the torus may be seen as $2\pi$-periodic functions on $\R$, the proof follows from simple differentiation of the formulas above and from the periodicity.
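As a quick consistency check (our own illustration, not part of the original argument), take $\theta\equiv\theta_0\notin i\mathbb{Z}$ constant and $g(t)=e^{int}$ with $n\in\mathbb{Z}$. The first formula above gives
$$u(t) = \frac{1}{1-e^{-2\pi\theta_0}}\int_0^{2\pi}e^{in(t-s)}e^{-\theta_0 s}ds = \frac{e^{int}}{1-e^{-2\pi\theta_0}}\cdot\frac{1-e^{-2\pi(\theta_0+in)}}{\theta_0+in} = \frac{e^{int}}{\theta_0+in},$$
since $e^{-2\pi in}=1$, and indeed $\partial_t u+\theta_0 u = e^{int} = g$.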
Let $\phi\in C^\infty(\T^1)$ be a non-null function, and let $\Phi$ be a function such that $\Phi'=\phi$. Suppose there exists $m\in\R$ such that the sublevel set
$$\Omega_m = \{t\in\T^1; \Phi(t)<m\}$$
is not connected.
Then, there exists $m_0<m$ such that $\Omega_{m_0}$ has two connected components with disjoint closures. Consequently, we can define functions $g_0,v_0\in C^{\infty}(\T^1)$ such that:
$$\int_0^{2\pi}g_0(t)\mathop{dt}=0,\ \supp(g_0)\cap\Omega_{m_0}=\varnothing, \ \supp(v_0')\subset\Omega_{m_0}\text{ and } \int_0^{2\pi} g_0(t)v_0(t)\mathop{dt}>0.$$
Let $C_1\subset \T^1$ be a connected component of $\Omega_m$. Notice that $C_1$ is homeomorphic to an open interval and has two distinct boundary points: $\partial C_1=\{t_1,t_2\}$. Choose $t_3\in C_1$ such that $\Phi(t_3)<m$. Since $\Omega_m$ is not connected, there exists another connected component $C_2$ of $\Omega_m$ such that $C_1\cap C_2=\emptyset$. Similar to $C_1$, the component $C_2$ is also homeomorphic to an open interval and its boundary is given by two distinct points: $\partial C_2=\{t_4,t_5\}$. Choose $t_6\in C_2$ such that $\Phi(t_6)<m$.
Now, choose $\epsilon>0$ such that $m_0 \doteq \max\{\Phi(t_3),\Phi(t_6)\}+\epsilon<m$. Since $\Phi(t_1)=m$, by the continuity of $\Phi$, there exists an open set $U_1\subset \T^1$ containing $t_1$ such that $\Phi(t)>m_0$ for each $t\in U_1$. Similarly, we can find an open set $U_2$ containing $t_2$ with the same property.
Let $I$ and $J$ be the connected components of $\Omega_{m_0}$ that contain $t_3$ and $t_6$, respectively. It is important to note that $U_1$ and $U_2$ are contained in $\T^1\backslash(I\cup J)$. Moreover, $I\subset C_1$ and $J\subset C_2$ are “separated" by $U_1$ and $U_2$, which implies that their closures do not intersect. In other words, if $x\in \overline{I}\cap\overline{J}$, then there exist sequences $(x_n)_n\subset I$ and $(y_n)_n\subset J$ such that $x_n\to x$ and $y_n\to x$. However, since $x_n\in I\subset C_1$, it follows that $\Phi(x_n)< m_0$ for all $n$, which implies $\Phi(x)\leq m_0<m$. Therefore, we have $x\in C_1$. The same logic applies to $y_n$, $J$, and $C_2$, which leads to $x\in C_1\cap C_2$, which is a contradiction.
Let us consider the previously defined sets as contained in the interval $K = [t_1,t_1+2\pi]\subset \R$. Without loss of generality, we can assume that
$$t_1<t_3<t_2\leq t_4<t_6<t_5\leq t_1+2\pi$$
$$t_3\in I\subset C_1=(t_1,t_2), \quad t_6\in J\subset C_2=(t_4,t_5)$$
$$U_1 = [t_1,t_1+\epsilon')\cup (t_1+2\pi-\epsilon',t_1+2\pi]$$
where $0<\epsilon'$ and $t_1+\epsilon'<t_3$ and
$$U_2 = (t_2-\epsilon'',t_2+\epsilon'')$$
where $0<\epsilon''$ and $t_3<t_2-\epsilon''<t_2+\epsilon''<t_6$.
Now, for $j=1,2$, let $g_j\in C_c^\infty(U_j)$ be a bump function such that $\int_0^{2\pi}g_j(t)\mathop{dt}=1$. Set $g_0 = g_2-g_1$, so that $\supp(g_0)\subset U_1\cup U_2$ and so $\supp(g_0)\cap \Omega_{m_0}=\emptyset$. Also,
$$\int_0^{2\pi}g_0(t)\mathop{dt} = \int_0^{2\pi}g_2(t)\mathop{dt}-\int_0^{2\pi}g_1(t)\mathop{dt} = 1-1 = 0.$$
Finally, let $\delta>0$ be such that $t_3+\delta\in I$ and $t_6-\delta\in J$. Choose $v_0\in C_c^{\infty}((t_3,t_6))$ such that $v_0\equiv1$ in $[t_3+\delta,t_6-\delta]$. In this case,
$$\int_0^{2\pi}g_0(t)v_0(t)\mathop{dt} = \int_0^{2\pi}g_2(t)v_0(t)\mathop{dt}-\int_0^{2\pi}g_1(t)v_0(t)\mathop{dt} = 1-0 = 1>0$$
and $\supp(v_0')\subset I\cup J\subset\Omega_{m_0}$.
Let $\psi\in C^{\infty}(\T^1)$ be a smooth real function such that $\psi(s)\geq 0$ for all $s$, and let $s_0\in\T^1$ be a zero of order greater than one for $\psi$, i.e., $\psi(s_0)=0=\psi'(s_0)$. Then, there exists $M>0$ such that for all $\lambda>0$ sufficiently large and $\delta>0$:
$$\int_{s_0-\delta}^{s_0+\delta}e^{-\lambda\psi(s)}ds\geq \left(\int_{-\delta}^{\delta}e^{-s^2}ds\right)\lambda^{-1/2}M^{-1/2}.$$
Let us consider the Taylor expansion of $\psi$ around $s_0$. For each $s\in (s_0-\delta,s_0+\delta)$, there exists $s'\in (s_0-\delta,s_0+\delta)$ such that
$$\psi(s) = \frac{\psi''(s')}{2}(s-s_0)^2$$
Let $\tilde{M}=\sup_{s\in [s_0-\delta,s_0+\delta]}\left|\frac{\psi''(s)}{2}\right|\geq0$. If $\tilde{M}=0$, then $\psi\equiv0$ on $[s_0-\delta,s_0+\delta]$ and the inequality is trivial with $M=1$. Otherwise, let $M=\tilde{M}$ and then for $\lambda M>1$ we have:
\begin{align*}
\int_{s_0-\delta}^{s_0+\delta}e^{-\lambda\psi(s)}ds&\geq \int_{s_0-\delta}^{s_0+\delta}e^{-(\sqrt{\lambda M}(s-s_0))^2}ds\geq \frac{1}{\sqrt{\lambda M}}\left(\int_{-\delta\sqrt{\lambda M}}^{\delta\sqrt{\lambda M}}e^{-s^2}ds\right) \\
&\geq \frac{1}{\sqrt{\lambda M}}\left(\int_{-\delta}^{\delta}e^{-s^2}ds\right).
\end{align*}
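For instance (our own sanity check, not part of the original text), for $\psi(s)=(s-s_0)^2$ we have $\psi''\equiv 2$, so $\tilde{M}=M=1$, and the substitution $u=\sqrt{\lambda}(s-s_0)$ gives exactly
$$\int_{s_0-\delta}^{s_0+\delta}e^{-\lambda(s-s_0)^2}ds = \frac{1}{\sqrt{\lambda}}\int_{-\delta\sqrt{\lambda}}^{\delta\sqrt{\lambda}}e^{-u^2}du\geq \frac{1}{\sqrt{\lambda}}\int_{-\delta}^{\delta}e^{-s^2}ds\qquad(\lambda\geq 1),$$
in agreement with the bound of the lemma.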
Let $\phi\in C^\infty(\T^1)$ be such that $\int_0^{2\pi}\phi(t)\mathop{dt} = 0$ and for every $r\in\R$, the set: $\Omega_r = \left\{t\in\T^1|\int_0^t\phi(\tau)d\tau<r\right\}$ is connected. Then so is the set:
$$\tilde{\Omega}_r = \left\{t\in\T^1|\int_0^t\phi(\tau)d\tau\geq r\right\}=\left\{t\in\T^1|-\int_0^t\phi(\tau)d\tau\leq -r\right\}.$$
This follows from $\tilde{\Omega}_r=\T^1\backslash\Omega_r$ and the general fact that any $A\subset\T^1$ is connected if and only if $\T^1\backslash A$ is connected. To see this, let $A$ be connected. Note that the claim that $\T^1\backslash A$ is connected is trivially true if $A=\T^1$. Otherwise, $\T^1\backslash A$ contains at least one point, which, without loss of generality, we may assume is $0=0+2\pi\Z$. If we consider the map
\begin{align*}
f:(0,2\pi)&\to\T^1\backslash\{0+2\pi\Z\}\\
x&\mapsto x+2\pi\Z
\end{align*}
then $f$ is a homeomorphism. Since $A$ is connected, $I=f^{-1}(A)$ also is. But then $I$ is an interval, so $I^c = (0,2\pi)\backslash I$ is either an interval containing $(0,\epsilon)$ or $(2\pi-\epsilon,2\pi)$ for some $\epsilon>0$ or the disjoint union of two intervals containing $(0,\epsilon)\cup(2\pi-\epsilon,2\pi)$, for some $\epsilon>0$. In both cases, since $\T^1\backslash A = f(I^c)\cup\{0+2\pi\Z\}$ it is clearly connected. Switching the roles of $A$ and $A^c$, the converse also follows.
# Rational Transformations and Invariant Polynomials
Max Schulz
University of Rostock Germany
<EMAIL_ADDRESS>
###### Abstract
Rational transformations of polynomials are extensively studied in the context
of finite fields, especially for the construction of irreducible polynomials.
In this paper, we consider the factorization of rational transformations with
(normalized) generators of the field $K(x)^{G}$ of $G$-invariant rational
functions for $G$ a finite subgroup of $\operatorname{PGL}_{2}(K)$, where $K$
is an arbitrary field. Our main theorem shows that the factorization is
related to a well-known group action of $G$ on a subset of monic polynomials.
With this, we are able to extend a result by Lucas Reis for $G$-invariant
irreducible polynomials. Additionally, some new results about the number of
irreducible factors of rational transformations for $Q$ a generator of
$\mathbb{F}_{q}(x)^{G}$ are given when $G$ is non-cyclic.
## Introduction
Let $K$ be an arbitrary field, $K^{\ast}=K\setminus\\{0\\}$ the set of its
units, $K[x]$ the set of polynomials with coefficients in $K$ and
$\mathcal{I}_{K}$ the set of monic irreducible polynomials in $K[x]$, $K(x)$
the rational function field over $K$ and $\mathbb{F}_{q}$ the field with $q$
elements. For a rational function $Q(x)\in K(x)$ we always denote its
numerator and denominator as $g$ and $h$, i.e. $Q(x)=g(x)/h(x)$. Furthermore,
we assume that rational functions are represented as reduced fractions, so
$\gcd(g,h)=1$. Recall that the degree of $Q$ is
$\deg(Q)=\max\\{\deg(g),\deg(h)\\}$. The $Q$-transform of a polynomial
$F(x)=\sum_{i=0}^{k}a_{i}x^{i}\in K[x]$ is defined as
$F^{Q}(x):=h(x)^{\deg(F)}F\left(\frac{g(x)}{h(x)}\right)=\sum\limits_{i=0}^{k}a_{i}g(x)^{i}h(x)^{k-i}.$
This is not yet well-defined since for all $a\in K^{\ast}$ we have
$Q(x)=\frac{a\cdot g(x)}{a\cdot h(x)}$
which leads to
$F^{Q}(x)=\sum\limits_{i=0}^{k}a_{i}(ag(x))^{i}(ah(x))^{k-i}=a^{k}\cdot\sum\limits_{i=0}^{k}a_{i}g(x)^{i}h(x)^{k-i}.$
One might make this transformation unambiguous by normalizing either the
numerator $g$ of $Q$ or the resulting polynomial $F^{Q}$. In our setup we most
often have that $Q$ satisfies $\deg(g)>\deg(h)$, and if $F$ and $g$ are monic,
then so is $F^{Q}$.
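For a concrete instance (our own small example, not taken from the literature), let $Q(x)=(x^{2}+1)/x$ and $F(x)=x^{2}+x+1$; then $\deg(g)>\deg(h)$, both $F$ and $g$ are monic, and
$F^{Q}(x)=(x^{2}+1)^{2}+x(x^{2}+1)+x^{2}=x^{4}+x^{3}+3x^{2}+x+1,$
which is again monic, of degree $\deg(F)\cdot\deg(Q)=4$.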
The transformation $F^{Q}$ is often used for constructing irreducible
polynomials of high degree over finite fields starting with an irreducible
polynomial and a rational function. There is a rich literature on this topic,
for example [1], [3], [7], [17], [18] and [19]. The main criterion in use is
###### Lemma ([6, Lemma 1]).
Let $Q(x)=g(x)/h(x)\in K(x)$ and $F\in K[x]$. Then $F^{Q}$ is irreducible if
and only if $F\in K[x]$ is irreducible and $g(x)-\alpha h(x)$ is irreducible
in $K(\alpha)[x]$, where $\alpha$ is a root of $F$.
The original version is only stated for finite fields, but the proof does work
for arbitrary fields as well. The concrete application of this lemma for
arbitrary rational functions and starting polynomials $F$ is very hard, which
is why the best one can do is to focus on specific rational functions or
“small” families of rational functions.
This paper considers two specific $Q$-transformations: The first is $Q$ being
a rational function of degree 1, i.e.
$Q(x)=\frac{ax+b}{cx+d}$
where $ad-bc\neq 0$. The $Q$-transform of $F$ looks like this
$F^{Q}(x)=\lambda_{Q,F}(cx+d)^{\deg(F)}F\left(\frac{ax+b}{cx+d}\right),$
where $\lambda_{Q,F}\in K^{\ast}$ makes the resulting polynomial monic. This
transformation preserves the irreducibility and degree of $F$ if $\deg(F)\geq
2$ by the previous lemma. There is another way to interpret this particular
$Q$-transformation: Let $\operatorname{GL}_{2}(K)$ be the set of invertible
$2\times 2$-matrices over $K$ and let
$A=\left(\begin{array}[]{cc}a&b\\\
c&d\end{array}\right)\in\operatorname{GL}_{2}(K).$ (1)
We make the convention that if we write $A\in\operatorname{GL}_{2}(K)$ then we
assume that $A$ is of the form (1). We consider the projective general linear
group $\operatorname{PGL}_{2}(K)=\operatorname{GL}_{2}(K)/Z$ over $K$, where
$Z=K^{\ast}I_{2}$ is the set of invertible scalar multiples of the identity
matrix $I_{2}$, which is the center of $\operatorname{GL}_{2}(K)$. The group
$\operatorname{PGL}_{2}(K)$ is isomorphic to the set of degree 1 rational
functions in $K(x)$, where the multiplication is composition. We denote by
$[A]$ the coset of $A$ in $\operatorname{PGL}_{2}(K)$, that is,
$[A]:=\\{\alpha\cdot A|\alpha\in K^{\ast}\\}.$
We define $\ast:\operatorname{PGL}_{2}(K)\times K[x]\to K[x]$ by
$[A]\ast
f(x):=\lambda_{A,f}\cdot(cx+d)^{\deg(f)}f\left(\frac{ax+b}{cx+d}\right),$ (2)
where $\lambda_{A,f}\in K^{\ast}$ makes the output-polynomial monic. We call
$f\in K[x]$ $[A]$-invariant for an $[A]\in\operatorname{PGL}_{2}(K)$ if
$[A]\ast f(x)=f(x)$. Moreover $f$ is called $G$-invariant for a subgroup
$G\leq\operatorname{PGL}_{2}(K)$ if it is $[A]$-invariant for all $[A]\in G$.
It can be shown that an $[A]$-invariant polynomial is also
$\langle[A]\rangle$-invariant. There is a substantial amount of literature on
this transformation and its variations in the context of finite fields, for
example [12], [20], [21], [22], [25], [26]. For instance, it is shown that
this transformation induces a (right) group action of
$\operatorname{PGL}_{2}(K)$ on the set of monic polynomials with no roots in
$K$. The following theorem shows that $[A]$-invariant irreducible monic
polynomials over finite fields are always $Q_{A}$-transformations for specific
rational functions $Q_{A}$ depending on
$A\in\operatorname{GL}_{2}(\mathbb{F}_{q})$:
###### Theorem R ([20, Theorem 6.0.7.]).
Let $[A]\in\operatorname{PGL}_{2}(\mathbb{F}_{q})$ be an element of order
$D=\operatorname{ord}([A])$. Then there exists a rational function
$Q_{A}(x)=g_{A}(x)/h_{A}(x)$ of degree $D$ with the property that the
$[A]$-invariant monic irreducible polynomials of degree $Dm>2$ are exactly the
monic irreducible polynomials of the form
$F^{Q_{A}}(x)=h_{A}(x)^{m}\cdot F\left(\frac{g_{A}(x)}{h_{A}(x)}\right),$
where $\deg(F)=m$. In addition, $Q_{A}$ can be explicitly computed from $A$.
This theorem is proved by dividing $\operatorname{PGL}_{2}(\mathbb{F}_{q})$
into four types of conjugacy classes and showing it for a nice representative
of each class.
Let $G\leq\operatorname{PGL}_{2}(K)$ be a finite subgroup and for
$A\in\operatorname{GL}_{2}(K)$ set
$[A]\circ x:=\frac{ax+b}{cx+d}.$ (3)
There exists a rational function $Q_{G}\in K(x)$ of degree $|G|$ so that
$K(x)^{G}=K(Q_{G}(x))$ where
$K(x)^{G}:=\\{Q\in K(x)|~{}Q([A]\circ x)=Q(x)\text{ for all }[A]\in G\\}$
is the fixed field of $G$ (for reference see [4]). Moreover, every rational
function $Q\in K(x)^{G}$ of degree $|G|$ is a generator of $K(x)^{G}$, so we
can normalize $Q_{G}$ in such a way that $Q_{G}(x)=g(x)/h(x)$ with
$0\leq\deg(h)<\deg(g)=|G|$ and $g$ monic. Based on [4], we call these
generators quotient maps for $G$ and this is the second class of rational
functions we consider in this paper. In [20] it is noted that for some
$[A]\in\operatorname{PGL}_{2}(\mathbb{F}_{q})$ the functions $Q_{A}$ in
Theorem R are in fact generators of the fixed field
$K(x)^{\langle[A]\rangle}$.
A natural question to ask is whether the function $Q_{A}$ in Theorem R is
always a generator of $K(x)^{\langle[A]\rangle}$ for all
$[A]\in\operatorname{PGL}_{2}(\mathbb{F}_{q})$. An understanding of this
question is of interest since many constructions of irreducible polynomials
over finite fields via $Q$-transformations that work very well use specific
generators of specific fields of invariant functions, see for example [7],
[18] and [19]. Another natural question is whether the theorem still holds if
we consider $G$-invariant and not necessarily irreducible polynomials for
arbitrary finite subgroups of $\operatorname{PGL}_{2}(K)$. These two questions
led us to study the $Q_{G}$-transformations of irreducible polynomials and
their factorization. We did not want to necessarily restrict ourselves to the
case that $K$ is finite, so we formulate the results for arbitrary fields.
However, the theory is especially beautiful in characteristic $p>0$ because
the finite subgroups of $\operatorname{PGL}_{2}(K)$ are more diverse there
(see [10], [11] and [27]).
The main result and starting point of this paper can be summarized as the
following theorem about the factorization of $F^{Q_{G}}$ for $F$ an
irreducible monic polynomial and $Q_{G}$ a quotient map for $G$:
###### Main Theorem.
Let $F\in K[x]$ be monic and irreducible, $G\leq\operatorname{PGL}_{2}(K)$ a
finite subgroup and $Q_{G}=g/h\in K(x)$ a quotient map for $G$. Then there is
an irreducible monic polynomial $r\in K[x]$ with $\deg(F)|\deg(r)$ and an
integer $k>0$ such that
$F^{Q_{G}}(x)=\left(\prod\limits_{t\in G\ast r}t(x)\right)^{k},$
where $G\ast r:=\\{[A]\ast r|[A]\in G\\}$ is the $G$-orbit of $r$.
Additionally $k=1$ for all but finitely many irreducible monic polynomials
$F\in K[x]$.
The main difficulty of the proof is to show that $k=1$ for all but finitely
many irreducible and monic $F\in K[x]$ in non-perfect fields.
We want to point out that a very similar result is known for the case that
$F\in\mathcal{I}_{K}$ is of degree 1 and $K=\mathbb{F}_{q}$; we state said
theorem for convenience:
###### Theorem ([13, Theorem 26]).
Let $G$ be a subgroup of $\operatorname{PGL}_{2}(\mathbb{F}_{q})$ and
$Q(x)=g(x)/h(x)$ a generator for $\mathbb{F}_{q}(x)^{G}$. Let
$\alpha\in\overline{\mathbb{F}}_{q}$ have the property that
$Q(\alpha)\in\mathbb{F}_{q}$ and assume that $G$ acts regularly on the roots
of $F_{\alpha}(T):=g(T)-Q(\alpha)h(T)\in\mathbb{F}_{q}[T]$ via Möbius-
Transformation, then
1. $F_{\alpha}$ will factor into irreducible polynomials of the same degree over $\mathbb{F}_{q}[T]$
2. The minimal polynomial of $\alpha$ is one of the factors of $F_{\alpha}$
3. The degree of each factor must be the order of an element of $G$.
To see that both theorems are connected, notice that for
$\beta=Q(\alpha)\in\mathbb{F}_{q}$ we have that $F^{Q_{G}}(T)=g(T)-\beta h(T)$
for $F=T-\beta$, which factors into a $G$-orbit of an irreducible polynomial
by our Main Theorem and all elements in a $G$-orbit have the same degree,
which explains item 1. The second item is also true in our setup, that is, if
$\beta\in\overline{K}$ is a root of $F$, then $\alpha\in Q_{G}^{-1}(\beta)$ is
a root of $F^{Q_{G}}$. The third item, however, is a finite field specific
result and generalizes to arbitrary fields and irreducible polynomials $F$ of
arbitrary degree as follows: Every irreducible factor of $F^{Q_{G}}$ has
degree $\deg(F)$ times the size of a subgroup of $G$. The condition that the
set of roots of $F_{\alpha}$ only contains regular $G$-orbits is a crucial one
for the case that $k=1$ in the Main Theorem. All of this will be explained in
depth in this paper. The phenomenon that $F^{Q_{G}}$ factorizes into a
$G$-orbit of an irreducible polynomial was, until now, only noted for some
instances of generators of specific invariant rational function fields over
finite fields.
###### Example 1.
1. We start with $Q_{1}=x+1/x\in\mathbb{F}_{q}(x)$ and look at the factorization
of $F^{Q_{1}}$, where $F\in\mathcal{I}_{q}:=\mathcal{I}_{\mathbb{F}_{q}}$. It
is proved in [18, Lemma 4] that $F^{Q_{1}}$ is either irreducible and self-
reciprocal or factorizes into a reciprocal pair. For
$r\in\mathcal{I}_{q}\setminus\\{x\\}$ we set
$r^{\ast}(x):=a_{0}^{-1}x^{\deg(r)}r(1/x)$ as its reciprocal polynomial, where
$a_{0}$ is the constant term of $r$. A polynomial is said to be self-
reciprocal if $r(x)=r^{\ast}(x)$ and a reciprocal pair is a pair $r,r^{\ast}$
such that $r\neq r^{\ast}$. This result can be explained with our Main
Theorem: Let
$G_{1}=\left\langle\left[\left(\begin{array}[]{cc}0&1\\\
1&0\end{array}\right)\right]\right\rangle.$
This is a subgroup of order 2 and a generator of $G_{1}$ is
$Q_{G_{1}}(x)=x+1/x=(x^{2}+1)/x\in K(x)$. Then, for all but finitely many
irreducible monic polynomials $F\in\mathcal{I}_{K}$ we obtain that there
exists an irreducible monic polynomial $r\in K[x]$ such that
$F^{Q_{G_{1}}}(x)=\begin{cases}r(x),&\text{ if }F^{Q_{G_{1}}}\text{ is
irreducible}\\\ r(x)\cdot a_{0}^{-1}x^{\deg(r)}r(1/x),&\text{ if
}F^{Q_{G_{1}}}\text{ is not irreducible}\end{cases}.$
2. The factorization of $F(x^{n})$ in $\mathbb{F}_{q}[x]$ for $n|q-1$ leads to
another nice example. Let $a\in\mathbb{F}_{q}^{\ast}$ be a primitive $n$-th
root of unity. It can be shown that for all
$F\in\mathcal{I}_{q}\setminus\\{x\\}$ there exists $m\mid n$ and
$r\in\mathcal{I}_{q}$ such that
$F(x^{n})=\prod\limits_{i=0}^{m-1}a^{-i\cdot\deg(r)}\cdot r(a^{i}x).$
For reference see [2] and [8]; for a nice application of this see [15]. The
rational function $Q_{2}(x)=x^{n}$ is a quotient map for the subgroup
$G_{2}:=\left\\{\left[\left(\begin{array}[]{cc}a^{i}&0\\\
0&1\end{array}\right)\right]|i\in\mathbb{N}\right\\}$
and $F^{Q_{2}}(x)=F(x^{n})$. The factors belong to the same $G_{2}$-orbit,
since
$\left[\left(\begin{array}[]{cc}a^{i}&0\\\ 0&1\end{array}\right)\right]\ast
r(x)=a^{-i\deg(r)}\cdot r(a^{i}x).$
3. In [5] it is noted that if $F(x^{p}-x)$ is not irreducible for
$F\in\mathcal{I}_{q}$ and $p^{l}=q$, then $F(x^{p}-x)$ factorizes into exactly
$p$ irreducible polynomials of degree $\deg(F)$ and, more precisely, there
exists $r\in\mathcal{I}_{q}$ with $\deg(r)=\deg(F)$ such that
$F(x^{p}-x)=r(x)\cdot r(x+1)\cdot\ldots\cdot r(x+(p-1)).$
The rational function $Q_{3}(x)=x^{p}-x$ is a quotient map for
$G_{3}:=\left\\{\left[\left(\begin{array}[]{cc}1&a\\\
0&1\end{array}\right)\right]|a\in\mathbb{F}_{p}\right\\}$
and for $r\in K[x]$ the transformation with an element of $G_{3}$ looks like
this
$\left[\left(\begin{array}[]{cc}1&a\\\ 0&1\end{array}\right)\right]\ast
r(x)=r(x+a).$
All of these examples still hold in every field $K$ in which the corresponding
subgroups of $\operatorname{PGL}_{2}(K)$ exist. The factorization of
$F^{Q_{G}}$ can be easily obtained by finding just one irreducible factor and
calculating the $G$-orbit of this factor. We also see that in the literature,
apart from [13, Theorem 26], only small cyclic subgroups of
$\operatorname{PGL}_{2}(\mathbb{F}_{q})$ of prime order were considered. In
contrast, we want to look at big subgroups of
$\operatorname{PGL}_{2}(\mathbb{F}_{q})$ instead. We can obtain the following
new result over finite fields:
###### Theorem 2.
Let $K=\mathbb{F}_{q}$ and $G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})$ with
quotient map $Q_{G}\in\mathbb{F}_{q}(x)$. Moreover, set $\mu_{G}\in\mathbb{N}$
as the maximal order of an element in $G$ and let $F\in\mathbb{F}_{q}[x]$ be
an irreducible monic polynomial such that $F^{Q_{G}}$ is separable. Then we
have that $F^{Q_{G}}$ has at least $|G|/\mu_{G}$ irreducible factors and every
such factor has degree at most $\mu_{G}\cdot\deg(F)$.
###### Remark 3.
The polynomial $F^{Q_{G}}$ is separable if $F\in\mathcal{I}_{q}$ and
$\deg(F)\geq 3$, so the only exception polynomials for which the theorem does
not necessarily hold are irreducible polynomials of degree less than 3. For an
explanation see Theorem 17, Theorem 22 and Lemma 28.
For example, take $\\{0\\}\neq V\leq_{p}\mathbb{F}_{q}$ as a
$\mathbb{F}_{p}$-subspace of $\mathbb{F}_{q}$, then define
$\overset{\sim}{V}:=\left\\{\left[\left(\begin{array}[]{cc}1&v\\\
0&1\end{array}\right)\right]|v\in V\right\\}.$
We call $\overset{\sim}{V}$ the subgroup of $\operatorname{PGL}_{2}(\mathbb{F}_{q})$
associated with $V$. Observe that $V\cong\overset{\sim}{V}$ as groups. A quotient
map for $\overset{\sim}{V}$ is the subspace polynomial associated with $V$, that is,
$Q_{V}(x)=\prod\limits_{v\in V}(x-v)\in\mathbb{F}_{q}[x].$
Every non-trivial element in $\overset{\sim}{V}$ has order $p$, so
$\mu_{\overset{\sim}{V}}=p$ and therefore we obtain the following corollary
###### Corollary 4.
Let $\\{0\\}\neq V\leq_{p}\mathbb{F}_{q}$ be an $\mathbb{F}_{p}$-subspace of
$\mathbb{F}_{q}$ and $Q_{V}\in\mathbb{F}_{q}[x]$ the associated subspace
polynomial. For every irreducible (monic) polynomial $F\in \mathbb{F}_{q}[x]$ we have that
$F(Q_{V}(x))$ has at least $|V|/p$ irreducible factors and every irreducible
factor has the same degree, which is at most $p\cdot\deg(F)$.
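As a quick numerical illustration (our own sketch, not part of the paper; it relies on SymPy's factorization over prime fields, so it covers only $q=p$), one can factor $F(Q_{V}(x))$ for $V=\mathbb{F}_{p}$, i.e. $Q_{V}(x)=x^{p}-x$, and check that all irreducible factors have the same degree, in line with the corollary and with Example 1(3):

```python
from sympy import symbols, Poly, factor_list

x = symbols('x')
p = 5
F = Poly(x**2 + x + 1, x, modulus=p)        # irreducible over GF(5)
# Q_V(x) = x^p - x is the subspace polynomial of V = F_p
FQ = F.as_expr().subs(x, x**p - x)
factors = factor_list(FQ, modulus=p)[1]     # list of (factor, multiplicity) pairs
degrees = [Poly(fac, x, modulus=p).degree() for fac, _ in factors]
print(degrees)                              # all equal, each at most p*deg(F)
```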
In the last part of this paper we consider two further examples of big
subgroups of $\operatorname{PGL}_{2}(\mathbb{F}_{q})$ and show how to apply
Theorem 2 to them.
The Main Theorem shows that the irreducible factors of $F^{Q_{G}}$ belong to
the same $G$-orbit. Together with the fact that for every $G$-orbit $G\ast r$
in $\mathcal{I}_{K}$ there exists an irreducible $F\in\mathcal{I}_{K}$ such
that $F^{Q_{G}}$ has all polynomials in $G\ast r$ as its factors we can prove
a generalization of Theorem R:
###### Theorem 5.
All but finitely many $G$-invariant irreducible monic polynomials $f$ can be
written as a $Q_{G}$-transformation, i.e. there is $F\in\mathcal{I}_{K}$ such
that $f=F^{Q_{G}}$.
This result does not say anything about the existence of $G$-invariant
irreducible polynomials; it only makes a statement about them if they exist in
$K[x]$!
Our proof of a general version of Theorem R avoids the original idea of
dividing $\operatorname{PGL}_{2}(K)$ into different types of conjugacy classes
and showing the theorem for each type, which should be hard as such a list can
become quite large depending on the field, see [11] or [27].
The last result shows that the $G$-invariant but not-necessarily irreducible
polynomials are a product of a $Q_{G}$-transformation and some exception
polynomials:
###### Theorem 6.
Let $G\leq\operatorname{PGL}_{2}(K)$ be a finite subgroup and $Q_{G}$ a
quotient map. There exists $k\in\mathbb{N}\setminus\\{0\\}$ and irreducible
monic polynomials $r_{1},\ldots r_{k}\in K[x]$ and $n_{1},\ldots
n_{k}\in\mathbb{N}\setminus\\{0\\}$ such that for every $G$-invariant monic
polynomial $f\in K[x]$ there is a unique monic $F\in K[x]$ and integers
$k_{i}<n_{i}$ such that
$f=\left(\prod\limits_{i=1}^{k}(\prod\limits_{t\in G\ast
r_{i}}t)^{k_{i}}\right)\cdot F^{Q_{G}}.$
We give full explanations about what the polynomials $r_{1},\ldots,r_{k}$ and
the integers $n_{i}$ are in section 3.
## 1 Preliminaries
### 1.1 Invariant Polynomials
We denote by
$\circ:\operatorname{PGL}_{2}(K)\times(\overline{K}\cup\\{\infty\\})\to\overline{K}\cup\\{\infty\\}$
the Möbius-Transformation on $\overline{K}\cup\\{\infty\\}$, that is,
$[A]\circ v=\frac{av+b}{cv+d}.$
This equation is self-explanatory if $v\notin\\{\infty,-\frac{d}{c}\\}$. For
$c\neq 0$ we set $[A]\circ\infty=\frac{a}{c}$ and
$[A]\circ(-\frac{d}{c})=\infty$; $[A]\circ\infty=\infty$ if $c=0$. The Möbius-
Transformation is a left group action of $\operatorname{PGL}_{2}(K)$ on
$\overline{K}\cup\\{\infty\\}$ and thus every subgroup of
$\operatorname{PGL}_{2}(K)$ acts on $\overline{K}\cup\\{\infty\\}$ too. For
$G\leq\operatorname{PGL}_{2}(K)$ we denote the $G$-orbit of
$v\in\overline{K}\cup\\{\infty\\}$ as $G\circ v$. Let
$G\leq\operatorname{PGL}_{2}(K)$ then define
$\mathcal{NR}_{K}^{G}:=\\{f\in K[x]|f\text{ monic and }f(\alpha)\neq 0\text{
for all }\alpha\in G\circ\infty\\}.$
This set is closed under multiplication, i.e. it is a submonoid of $K[x]$. We
make the convention that $f(\infty)=\infty$ for all polynomials of degree
greater than 0 and $a(\infty)=a$ for $a\in K$. The following basic result
about $\ast$ holds:
###### Lemma 7.
Let $G\leq\operatorname{PGL}_{2}(K)$. For all $f,g\in\mathcal{NR}_{K}^{G}$ and
$[A],[B]\in G$ the following hold:
1. 1.
$\deg([A]\ast f)=\deg(f)$
2. 2.
$[AB]\ast f=[B]\ast([A]\ast f)$ and $[I_{2}]\ast f=f$, so $\ast$ is a right
group action of $G$ on $\mathcal{NR}_{K}^{G}$
3. 3.
$[A]\ast(fg)=([A]\ast f)([A]\ast g)$
4. 4.
$f$ is irreducible if and only if $[A]\ast f$ is irreducible
We omit the proof as it can be done almost exactly as in [12] or [26]. Because
of the fourth item of the previous lemma we know that $G$ induces a group
action on
$\mathcal{I}_{K}^{G}:=\mathcal{I}_{K}\cap\mathcal{NR}_{K}^{G}.$
Remember that we write $G\ast f$ for the $G$-orbit of $f$. Note that $G\ast
f\subset\mathcal{NR}_{K}^{G}$ if $f\in\mathcal{NR}_{K}^{G}$ and every
polynomial in the orbit has the same degree as $f$. The following lemma
explains the connection between $G$-invariant polynomials and the Möbius-
Transformation and the proof can be done similarly as in [12] or [26] again:
###### Lemma 8.
Let $G\leq\operatorname{PGL}_{2}(K)$ and $f\in\mathcal{NR}_{K}^{G}$. Further
we denote by
$R_{f}:=\\{v\in\overline{K}|f(v)=0\\}$ (4)
the set of roots of $f$ in $\overline{K}$. Then the following hold:
1. 1.
If $f$ is $G$-invariant, then $[A]\circ R_{f}=R_{f}$ for all $[A]\in G$. Here
$[A]\circ R_{f}:=\\{[A]\circ v|v\in R_{f}\\}$
2. 2.
If $f$ is irreducible the converse is also true, more precisely: $[A]\circ
R_{f}=R_{f}$ for all $[A]\in G$ implies that $f$ is $G$-invariant
From now on we use $R_{f}$ as the set of roots of a polynomial $f\in K[x]$ as
defined in (4). In the lemma above we did not make the assumption that $G$ has
to be a finite subgroup of $\operatorname{PGL}_{2}(K)$. So now, we want to
explore what happens if $G$ is infinite. Let
$[A]\in\operatorname{PGL}_{2}(K)$, then it is quite obvious that all fixed
points of $[A]$ in $\overline{K}$ under $\circ$ are in $K\cup\\{\infty\\}$ or
in a quadratic extension of $K$. Let $v\in\overline{K}$ with $[K(v):K]\geq 3$,
then $[A]\circ v\neq v$ for all $[A]\in\operatorname{PGL}_{2}(K)\setminus\\{[I_{2}]\\}$; hence, if $G$ is infinite, $G\circ v$ contains infinitely many elements. Therefore there cannot exist
$G$-invariant irreducible monic polynomials of degree greater than 2 for $G$
an infinite subgroup of $\operatorname{PGL}_{2}(K)$, since otherwise it would
have infinitely many roots by Lemma 8. This is one of the reasons why we focus
on finite subgroups of $\operatorname{PGL}_{2}(K)$. Thus, from now on, $G$
denotes a finite subgroup of $\operatorname{PGL}_{2}(K)$. The following
corollary helps us to understand the factorization of $G$-invariant
polynomials:
###### Corollary 9.
Let $f,s,t\in\mathcal{NR}_{K}^{G}$, where $f$ is a $G$-invariant polynomial
with irreducible factor $r\in\mathcal{I}_{K}^{G}$. Then the following hold:
1. 1.
$[A]\ast r$ divides $f$ for all $[A]\in G$
2. 2.
If $\gcd(s,t)=1$, then $\gcd([A]\ast s,[A]\ast t)=1$ for all $[A]\in G$
3. 3.
If $r^{n}|f$ and $r^{n+1}\nmid f$, then $([A]\ast r)^{n}|f$ and $([A]\ast
r)^{n+1}\nmid f$ for all $[A]\in G$. So all polynomials in $G\ast r$ divide
$f$ with the same multiplicity
###### Proof.
The first statement is immediate, since $[A^{-1}]\circ R_{r}=R_{[A]\ast
r}\subset R_{f}$ by Lemma 8, so $[A]\ast r$ divides $f$ as well.
The condition $\gcd(s,t)=1$ is equivalent to $R_{s}\cap R_{t}=\varnothing$ in
$\overline{K}$ and again $[A^{-1}]\circ R_{s}=R_{[A]\ast s}$ and
$[A^{-1}]\circ R_{t}=R_{[A]\ast t}$. With the fact that every $[A]\in G$
induces a bijection on $\overline{K}\cup\\{\infty\\}$ we obtain $R_{[A]\ast
s}\cap R_{[A]\ast t}=\varnothing$.
For the last item we use the previous statement: Write $f=r^{n}\cdot P$ where
$\gcd(r,P)=1$. Then, with the third item of Lemma 7
$\displaystyle f=[A]\ast f=[A]\ast(r^{n}\cdot P)=([A]\ast r)^{n}\cdot([A]\ast
P)$
and $\gcd([A]\ast r,[A]\ast P)=1$. ∎
We just showed that every $G$-invariant polynomial $f\in\mathcal{NR}_{K}^{G}$
consists of powers of $G$-orbits in $\mathcal{I}_{K}^{G}$ that are glued
together by multiplication. So, in a nutshell, $G$-orbits in
$\mathcal{I}_{K}^{G}$ are the atoms of $G$-invariant polynomials and thus are
quite important for this paper; hence the following definition:
###### Definition 10.
We call $f\in\mathcal{NR}_{K}^{G}$ a $G$-orbit polynomial (or simply an orbit polynomial) if there exists an irreducible polynomial
$r\in\mathcal{I}_{K}^{G}$ such that
$f=\prod\limits_{t\in G\ast r}t=:\prod(G\ast r).$
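As a small computational illustration (our own sketch, not part of the original exposition; the explicit monic normalization of $\ast$ is the one used later in the proof of Corollary 16), take $K=\mathbb{Q}$ and $G=\\{[I_{2}],[A]\\}$ where $[A]$ induces $x\mapsto 1/x$. The orbit polynomial of $r=x-2$ can be computed with sympy:

```python
from sympy import symbols, expand, degree

x = symbols('x')

def moebius_star(a, b, c, d, f):
    # A sketch of [A] * f for A = ((a, b), (c, d)): expand
    # (c*x + d)^deg(f) * f((a*x + b)/(c*x + d)) and rescale to be monic
    # (this normalization is the one used in the proof of Corollary 16).
    n = degree(f, x)
    g = expand((c*x + d)**n * f.subs(x, (a*x + b)/(c*x + d)))
    return expand(g / g.coeff(x, n))

# K = Q and G = {[I_2], [A]} with A = ((0, 1), (1, 0)), i.e. x -> 1/x
r = x - 2
t = moebius_star(0, 1, 1, 0, r)
orbit_poly = expand(r * t)
print(t)                                                           # x - 1/2
print(orbit_poly)                                                  # x**2 - 5*x/2 + 1
# the orbit polynomial is G-invariant: [A] * orbit_poly == orbit_poly
print(expand(moebius_star(0, 1, 1, 0, orbit_poly) - orbit_poly))   # 0
```

Both factors occur exactly once, in line with Corollary 9.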
For the sake of completeness we state our observation about the factorization
of $G$-invariant polynomials as a corollary, but we omit the proof since it is
a trivial consequence of Corollary 9.
###### Corollary 11.
Let $f\in\mathcal{NR}_{K}^{G}$ be a $G$-invariant polynomial. Then there are
$r_{1},\ldots,r_{k}\in\mathcal{I}_{K}^{G}$ and
$n_{1},\ldots,n_{k}\in\mathbb{N}\setminus\\{0\\}$ such that
$f=\prod\limits_{i=1}^{k}(\prod(G\ast r_{i}))^{n_{i}}.$
### 1.2 Quotient Maps and Rational Transformations
Throughout the rest of the paper we assume that $Q_{G}\in K(x)$ is a quotient
map for $G$ with monic numerator polynomial $g$. Note that such a rational
function exists for all finite subgroups of $\operatorname{PGL}_{2}(K)$ and if
$Q_{G}^{\prime}$ is another quotient map for $G$, then there are constants
$a,b\in K$ such that $Q_{G}^{\prime}(x)=aQ_{G}(x)+b$ (see [4]). We denote by
$(\overline{K}\cup\\{\infty\\})/G$ the set of $G$-orbits in
$\overline{K}\cup\\{\infty\\}$. The following theorem is an important tool for
proving our Main Theorem and explains the name “quotient map”:
###### Theorem 12 ([4, Proposition 3.9]).
The quotient map $Q_{G}$ induces a bijection between
$(\overline{K}\cup\\{\infty\\})/G$ and $\overline{K}\cup\\{\infty\\}$, more
precisely
$\displaystyle\psi:\begin{cases}(\overline{K}\cup\\{\infty\\})/G\to\overline{K}\cup\\{\infty\\},\\\
G\circ v\mapsto Q_{G}(G\circ v)=Q_{G}(v)\end{cases}$
is a bijection and $\psi(G\circ\infty)=Q_{G}(\infty)=\infty$.
There are essentially two ways that we know of to calculate a quotient map for a given finite subgroup $G$. One of them is explained in [13] and works as follows: calculate the polynomial
$F_{G}(y):=\prod\limits_{[A]\in G}(y-([A]\circ x))\in K(x)[y].$
One of the coefficients of $F_{G}(y)$ has to be a generator of $K(x)^{G}$ and
thus can be normalized so that it becomes a quotient map. Another method is explained in subsection 3.3 of [4].
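As an illustration of the first method (a sketch of our own; the group, the prime and the normalization are chosen for the example), let $G$ consist of the translations $x\mapsto x+b$ with $b\in\mathbb{F}_{3}$. Expanding $F_{G}(y)$ and reducing the coefficients modulo $3$, the $y^{0}$-coefficient is, up to sign, a generator of $K(x)^{G}$; normalized it becomes the subspace polynomial $x^{3}-x$ (compare subsection 4.1):

```python
from functools import reduce
from sympy import symbols, expand, trunc

p = 3
x, y = symbols('x y')

# G = the translations x -> x + b with b in F_p, a unipotent subgroup of PGL_2(F_p)
F_G = reduce(lambda u, v: u * v, (y - (x + b) for b in range(p)))
F_G = trunc(expand(F_G), p)        # reduce the integer coefficients modulo p
print(F_G)                         # equals y**3 - y - (x**3 - x) modulo 3

# the y^0-coefficient is, up to sign, a generator of K(x)^G;
# normalised it is the subspace polynomial of V = F_p
Q = trunc(expand(-F_G.subs(y, 0)), p)
print(Q)                           # x**3 - x
```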
For an arbitrary rational function $Q(x)=g(x)/h(x)$ with $g,h$ having leading
coefficients $a(g),a(h)\in K^{\ast}$ we set
$Q(\infty):=\begin{cases}\infty,&\deg(g)>\deg(h)\\\
\frac{a(g)}{a(h)},&\deg(g)=\deg(h)\\\ 0,&\deg(h)>\deg(g).\end{cases}$
We collect some known facts about rational transformations in the next lemma:
###### Lemma 13.
Let $Q=\frac{g}{h}$ be such that $g$ is monic and $F\in K[x]$ such that
$F(Q(\infty))\neq 0$, then the following hold:
1. 1.
$\deg(F^{Q})=\deg(F)\cdot\deg(Q)$
2. 2.
If $F$ is reducible, so is $F^{Q}$; more precisely, if $F=rt$ with $\deg(r),\deg(t)\geq 1$, then $F^{Q}=r^{Q}t^{Q}$
3. 3.
If $F$ is monic and $\deg(g)>\deg(h)$, then $F^{Q}$ is monic as well
The following lemma shows that rational transformations with $Q_{G}$ yield
$G$-invariant polynomials:
###### Lemma 14.
Let $F\in K[x]$ be a monic polynomial, then $F^{Q_{G}}\in\mathcal{NR}_{K}^{G}$
and $F^{Q_{G}}$ is $G$-invariant.
###### Proof.
By Theorem 3.10 and Proposition 3.4 in [4], the denominator of $Q_{G}$ is of
the form
$h(x)=\prod\limits_{v\in(G\circ\infty)\setminus\\{\infty\\}}(x-v)^{m_{\infty}},$
where
$m_{\infty}:=|\operatorname{Stab}_{G}(\infty)|=\frac{|G|}{|G\circ\infty|}$ is
the cardinality of the stabilizer of $\infty$ in $G$. First of all we want to
show that $F^{Q_{G}}\in\mathcal{NR}_{K}^{G}$. For that we write
$F(x)=\sum\limits_{i=0}^{k}a_{i}x^{i}$ with $a_{k}=1$, then
$F^{Q_{G}}(x)=\sum\limits_{i=0}^{k}a_{i}g(x)^{i}h(x)^{k-i}.$
Since the set of roots of $h$ is $G\circ\infty\setminus\\{\infty\\}$ we obtain
for all $w\in G\circ\infty\setminus\\{\infty\\}$:
$\displaystyle
F^{Q_{G}}(w)=\sum\limits_{i=0}^{k}a_{i}g(w)^{i}h(w)^{k-i}=g(w)^{k}\neq 0.$
The last step in the calculation is a consequence of $\gcd(h,g)=1$ which
implies $g(w)\neq 0$. Thus we get $F^{Q_{G}}\in\mathcal{NR}_{K}^{G}$, since $\mathcal{NR}_{K}^{G}$ is the set of monic polynomials with no roots in the orbit of $\infty$ and $f(\infty)=\infty\neq 0$ for all polynomials with $\deg(f)\geq 1$ (note that the only polynomial of degree 0 in $\mathcal{NR}_{K}^{G}$ is $1$).
To see that $F^{Q_{G}}$ is indeed $G$-invariant we verify that for all $[A]\in G$ there exists $\alpha_{A}\in K^{\ast}$ such that
$(cx+d)^{\deg(F^{Q_{G}})}F^{Q_{G}}(\frac{ax+b}{cx+d})=\alpha_{A}F^{Q_{G}}(x)$
Writing the left side out gives
$\displaystyle(cx+d)^{\deg(F^{Q_{G}})}F^{Q_{G}}([A]\circ x)=(cx+d)^{\deg(F^{Q_{G}})}h(\frac{ax+b}{cx+d})^{\deg(F)}F(Q_{G}(\frac{ax+b}{cx+d})).$
Since $Q_{G}(\frac{ax+b}{cx+d})=Q_{G}(x)$, we can focus on
$(cx+d)^{\deg(F^{Q_{G}})}h(\frac{ax+b}{cx+d})^{\deg(F)}$. For that we have to
consider two separate cases, namely $c\neq 0$ and $c=0$. Here we only consider
the first as the latter can be done similarly. So now $c\neq 0$, then we get:
$\displaystyle(cx+d)^{\deg(F^{Q_{G}})}h(\frac{ax+b}{cx+d})^{\deg(F)}$
$\displaystyle=(cx+d)^{\deg(F)\deg(Q_{G})}\prod\limits_{v\in(G\circ\infty)\setminus\\{\infty\\}}(\frac{ax+b}{cx+d}-v)^{m_{\infty}\deg(F)}$
$\displaystyle=\frac{(cx+d)^{\deg(F)\cdot|G|}}{(cx+d)^{\deg(F)\cdot(|G\circ\infty|-1)\cdot\frac{|G|}{|G\circ\infty|}}}\prod\limits_{v\in(G\circ\infty)\setminus\\{\infty\\}}\left((a-cv)x+(b-dv))\right)^{m_{\infty}\cdot\deg(F)}$
$\displaystyle=\left((cx+d)\left(b-d\cdot\frac{a}{c}\right)\prod\limits_{v\in
G\circ\infty\setminus\\{\infty,\frac{a}{c}\\}}(a-cv)\left(x+\frac{b-dv}{a-cv}\right)\right)^{m_{\infty}\cdot\deg(F)}$
$\displaystyle=\alpha_{A}\left(\left(x+\frac{d}{c}\right)\prod\limits_{v\in
G\circ\infty\setminus\\{\infty,\frac{a}{c}\\}}\left(x-[A^{-1}]\circ
v\right)\right)^{m_{\infty}\cdot\deg(F)}$
$\displaystyle=\alpha_{A}\left(\left(x-[A^{-1}]\circ\infty\right)\prod\limits_{v\in
G\circ\infty\setminus\\{\infty,\frac{a}{c}\\}}\left(x-[A^{-1}]\circ
v\right)\right)^{m_{\infty}\cdot\deg(F)}$
$\displaystyle=\alpha_{A}\left(\prod\limits_{v\in
G\circ\infty\setminus\\{\frac{a}{c}\\}}\left(x-[A^{-1}]\circ
v\right)\right)^{m_{\infty}\cdot\deg(F)}$
$\displaystyle=\alpha_{A}\left(\prod\limits_{u\in
G\circ\infty\setminus\\{\infty\\}}\left(x-u\right)\right)^{m_{\infty}\cdot\deg(F)}=\alpha_{A}h(x)^{\deg(F)}.$
The factor $\alpha_{A}$ has the form
$\alpha_{A}=\left(\underbrace{c(b-d\cdot\frac{a}{c})}_{=-\det(A)}\prod\limits_{v\in
G\circ\infty\setminus\\{\infty,\frac{a}{c}\\}}(a-cv)\right)^{{m_{\infty}\cdot\deg(F)}},$
so it is non-zero, since all its factors are non-zero. Moreover note that the
inverse of $[A]$ in $\operatorname{PGL}_{2}(K)$ is $[B]$ with
$B=\left(\begin{array}[]{cc}d&-b\\\ -c&a\end{array}\right),$
because $A\cdot B=\det(A)I_{2}$. This finishes the proof. ∎
This lemma does not hold in general if we consider a general generator instead
of quotient maps. Consider a quotient map $Q_{G}=g/h$, then
$Q:=Q_{G}^{-1}=h/g$ is a generator of $K(x)^{G}$. However, for $F=x$ we get
$F^{Q}=h(x)$, which is not in $\mathcal{NR}_{K}^{G}$ because $h$ has
$G\circ\infty\setminus\\{\infty\\}$ as its roots; therefore $F^{Q}$ can not be
$G$-invariant. This example suggests that the only exceptions are the monic
polynomials $F\in K[x]$ with root $Q(\infty)$. To show that we need a lemma
first.
###### Lemma 15.
Let $Q_{G}\in K(x)$ be a quotient map for $G$ and $Q\in K(x)$ another
generator of $K(x)^{G}$, then $\deg(Q)=\deg(Q_{G})=|G|$ and there is
$[C]\in\operatorname{PGL}_{2}(K)$ such that $Q=[C]\circ Q_{G}$. More precisely
there is
$C=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(K)$
such that
$Q(x)=\frac{aQ_{G}(x)+b}{cQ_{G}(x)+d}.$
###### Proof.
The proof is a combination of Lemma 3.1 and Proposition 3.3 in [4]. ∎
###### Corollary 16.
Let $Q_{G}=\frac{g}{h}$ be a quotient map, $Q\in K(x)$ an arbitrary generator
of $K(x)^{G}$ and $F\in K[x]$ monic. Write $Q=[C]\circ Q_{G}$, which is
possible by the lemma above. If $F([C]\circ\infty)\neq 0$, or equivalently
$F(Q(\infty))\neq 0$, then $a\cdot F^{Q}\in\mathcal{NR}_{K}^{G}$ and $a\cdot F^{Q}$ is $G$-invariant (where the factor $a\in K^{\ast}$ is chosen so that $a\cdot F^{Q}$ is monic).
###### Proof.
Let $H=[C]\ast F$, then $\deg(H)=\deg(F)$ by the assumption that
$F([C]\circ\infty)\neq 0$ (see Lemma 7). Write
$C=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(K),$
then $H(x)=\lambda_{C,H}(cx+d)^{\deg(F)}F(\frac{ax+b}{cx+d})$ and
$Q(x)=\frac{ag(x)+bh(x)}{cg(x)+dh(x)}$. Moreover
$\displaystyle H^{Q_{G}}(x)$
$\displaystyle=h(x)^{\deg(H)}H(\frac{g(x)}{h(x)})$
$\displaystyle=\lambda_{C,H}h(x)^{\deg(H)}(c\frac{g(x)}{h(x)}+d)^{\deg(F)}F\left(\frac{a\frac{g(x)}{h(x)}+b}{c\frac{g(x)}{h(x)}+d}\right)$
$\displaystyle=\lambda_{C,H}(cg(x)+dh(x))^{\deg(F)}F\left(\frac{ag(x)+bh(x)}{cg(x)+dh(x)}\right)=\lambda_{C,H}F^{Q}(x).$
With Lemma 14 we have that $H^{Q_{G}}$ is $G$-invariant and an element of
$\mathcal{NR}_{K}^{G}$, thus both facts are also true for
$\lambda_{C,H}F^{Q}$. ∎
## 2 Proof of the Main Theorem
Here $\overline{K}$ denotes a fixed algebraic closure of $K$ and for a
polynomial $P\in K[x]$ we denote its splitting field in $\overline{K}$ as
$L_{P}$. Further we define for an extension $K\subset L\subset\overline{K}$ the set $\operatorname{hom}_{K}(L)$ as the set of $K$-homomorphisms $\sigma:L\to\overline{K}$. If $L$ is normal over $K$ then
$\operatorname{hom}_{K}(L)=\operatorname{Aut}_{K}(L)$. We are going to prove
the first part of the Main Theorem:
###### Theorem 17.
Let $F\in\mathcal{I}_{K}$, then there is $k\in\mathbb{N}\setminus\\{0\\}$ and
$r\in\mathcal{I}_{K}^{G}$ with $\deg(F)|\deg(r)$ such that
$F^{Q_{G}}=(\prod G\ast r)^{k}.$
In words: The $Q_{G}$-transform of an irreducible monic polynomial is a power
of an orbit polynomial.
###### Proof.
We know that $F^{Q_{G}}\in\mathcal{NR}_{K}^{G}$ is $G$-invariant by Lemma 14
and therefore all its irreducible factors are contained in
$\mathcal{I}_{K}^{G}$. First we prove the degree condition for the irreducible
factors of $F^{Q_{G}}$. For that let $r\in\mathcal{I}_{K}^{G}$ be an arbitrary
irreducible factor of $F^{Q_{G}}$ and $v\in\overline{K}$ a root of $r$, then
$0=F^{Q_{G}}(v)=h(v)^{\deg(F)}F(Q_{G}(v))$
and $h(v)\neq 0$ because $r\in\mathcal{NR}_{K}^{G}$. Thus $F(Q_{G}(v))=0$
which shows that $Q_{G}(v)=\alpha\in R_{F}$ and with that
$K(Q_{G}(v))\subseteq K(v)$. We conclude
$\deg(r)=[K(v):K]=[K(v):K(\alpha)]\cdot[K(\alpha):K]=[K(v):K(\alpha)]\cdot\deg(F).$
For the rest note that $G\ast r$ divides $F^{Q_{G}}$ by Corollary 9. So our
goal now is to show that every irreducible factor of $F^{Q_{G}}$ belongs to
$G\ast r$. With Lemma 8 we know that the set of roots of $F^{Q_{G}}$ can be
partitioned into $G$-orbits under the Möbius-Transformation, i.e. there exist
$v_{1},\ldots,v_{l}\in R_{F^{Q_{G}}}$ such that
$R_{F^{Q_{G}}}=\bigcup\limits_{i=1}^{l}(G\circ v_{i})$
and $(G\circ v_{i})\cap(G\circ v_{j})=\varnothing$ for $i\neq j$. We set
w.l.o.g. $v=v_{1}$. By Theorem 12 we know that there are
$\alpha_{1},\ldots,\alpha_{l}\in\overline{K}$ such that $Q_{G}(G\circ
v_{i})=\alpha_{i}$. Note that $R_{F}=\\{\alpha_{1},\ldots,\alpha_{l}\\}$. Now
consider the splitting fields $L_{F},L_{F^{Q_{G}}}$ of $F$ and $F^{Q_{G}}$
over $K$. The extensions $L_{F}/K,L_{F^{Q_{G}}}/K$ and $L_{F^{Q_{G}}}/L_{F}$
are normal and finite. It can be shown that for all $\alpha_{i},\alpha_{j}\in
R_{F}$ there is
$\sigma_{i,j}\in\operatorname{hom}_{K}(L_{F})=\operatorname{Aut_{K}}(L_{F})$
such that $\sigma_{i,j}(\alpha_{i})=\alpha_{j}$ because $F$ is irreducible
(for reference see [24, Theorem 2.8.3] for example). Now let $\beta\in R_{F}$
be arbitrary, then there is $\sigma_{\beta}\in\operatorname{Aut}_{K}(L_{F})$
such that $\sigma_{\beta}(\alpha_{1})=\beta$. The automorphism
$\sigma_{\beta}$ can be extended to an automorphism in
$\operatorname{Aut}_{K}(L_{F^{Q_{G}}})$, we denote it by
$\overline{\sigma}_{\beta}$ (for reference see [24, Theorem 2.8.4]). Finally,
we put everything together: Let $w\in R_{F^{Q_{G}}}$ and $Q_{G}(w)=\gamma\in
R_{F}$, then
$\displaystyle
Q_{G}(w)=\gamma=\overline{\sigma}_{\gamma}(\alpha_{1})=\overline{\sigma}_{\gamma}(Q_{G}(v))=Q_{G}(\overline{\sigma}_{\gamma}(v)),$
so $w$ and $\overline{\sigma}_{\gamma}(v)$ are contained in the same
$G$-orbit. We just showed that every $G$-orbit in $R_{F^{Q_{G}}}$ contains at
least one root of $r$, since $\sigma(v)$ is always a root of $r$ for all
$\sigma\in\operatorname{Aut}_{K}(L_{F^{Q_{G}}})$. To finish the proof let
$t\in\mathcal{I}_{K}^{G}$ be an arbitrary irreducible factor of $F^{Q_{G}}$
and $w$ a root of $t$, then there is $[A]\in G$ and $v\in R_{r}$ such that
$[A]^{-1}\circ v=w$, thus $t=[A]\ast r$. ∎
###### Remark 18.
This theorem still holds for arbitrary generators $Q=\frac{g}{h}$ of
$K(x)^{G}$ if $F\in\mathcal{I}_{K}$ satisfies $F(Q(\infty))\neq 0$, because of Corollary 16 and its proof. Notice that $Q(\infty)\in K\cup\\{\infty\\}$, thus for $\deg(F)\geq 2$ the condition $F(Q(\infty))\neq 0$ always holds. But $F^{Q}$ is not guaranteed to be monic, so we have to normalize it on occasion.
What is left to show is that $k=1$ for all but finitely many
$F\in\mathcal{I}_{K}$. The next corollary is very helpful:
###### Corollary 19.
Let $F\in\mathcal{I}_{K}$, then every $G$-orbit in $R_{F^{Q_{G}}}$ is of the
same size. So for $v\in R_{F^{Q_{G}}}$ we obtain:
$|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|$
###### Proof.
Let $v\in R_{F^{Q_{G}}}$ be such that $Q_{G}(G\circ v)=\alpha\in R_{F}$.
Additionally, for $\beta\in R_{F}$ let
$\overline{\sigma}_{\beta}:L_{F^{Q_{G}}}\to L_{F^{Q_{G}}}$ be an automorphism
of $L_{F^{Q_{G}}}$ such that $\overline{\sigma}_{\beta}(\alpha)=\beta$ as in
the proof of Theorem 17. Moreover let $w_{\beta}$ be a root of $F^{Q_{G}}$
such that $Q_{G}(w_{\beta})=\beta$. We have
$\overline{\sigma}_{\beta}(G\circ v)\subseteq G\circ w_{\beta}$
and since
$\overline{\sigma}_{\beta}^{-1}\in\operatorname{Aut}_{K}(L_{F^{Q_{G}}})$ with
$\overline{\sigma}_{\beta}^{-1}(\beta)=\alpha$ also $G\circ
w_{\beta}\subseteq\overline{\sigma}_{\beta}(G\circ v)$. Hence $G\circ
w_{\beta}=\overline{\sigma}_{\beta}(G\circ v)$ and $|G\circ
v|=|\overline{\sigma}_{\beta}(G\circ v)|$ since $\overline{\sigma}_{\beta}$ is
bijective on $L_{F^{Q_{G}}}$. Thus we obtain
$\displaystyle|R_{F^{Q_{G}}}|=|\bigcup\limits_{\beta\in
R_{F}}Q_{G}^{-1}(\beta)|=\sum\limits_{\beta\in
R_{F}}|Q_{G}^{-1}(\beta)|=|R_{F}|\cdot|Q_{G}^{-1}(\alpha)|=|R_{F}|\cdot|G\circ
v|.$
∎
It follows that if $K$ is perfect, then $F^{Q_{G}}$ is separable if and only
if $G\circ v$ is regular, because then
$|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|=\deg(F)\cdot|G|=\deg(F^{Q_{G}}).$
Later we will see that there are only finitely many non-regular $G$-orbits in $\overline{K}\cup\\{\infty\\}$ and consequently there are only finitely many $F\in\mathcal{I}_{K}$ such that $F^{Q_{G}}$ is a proper power of an orbit polynomial. But before we do that we want to show that, over every field, if $G\circ v$ is regular for a root $v$ of $F^{Q_{G}}$, then $F^{Q_{G}}$ is a $G$-orbit polynomial and not a proper power thereof.
### 2.1 Proof of the Second Part of the Main Theorem, Theorem R and Theorem 6
Let $L/K$ be a finite field extension. The separable degree of $L$ over $K$ is
defined as
$[L:K]_{s}:=|\operatorname{hom}_{K}(L)|.$
Recall that it behaves in the same way as the degree of field extensions, that
is, for $M/L/K$ we have
$[M:K]_{s}=[M:L]_{s}\cdot[L:K]_{s}.$
If $K$ is perfect, then $[L:K]_{s}=[L:K]$ for every finite field extension $L$
of $K$. Now let $\operatorname{char}(K)=p>0$ and $r\in\mathcal{I}_{K}$ with
root $v\in R_{r}$. Then there is a natural number $d$ such that
$[K(v):K]=p^{d}\cdot[K(v):K]_{s},$
this $d$ is called the radical exponent of $r$ or $v$ over $K$. It can be
shown that the radical exponent of $r$ is the smallest positive integer such
that there is an irreducible and separable polynomial $s\in K[x]$ such that
$r(x)=s(x^{p^{d}}).$ For a nice reference on this topic see [24]. For the sake
of convenience we use the notation
$\operatorname{rad}(r):=p^{d}=\frac{[K(v):K]}{[K(v):K]_{s}}$
for $r\in\mathcal{I}_{K}$ with radical exponent $d$ and $v\in R_{r}$.
The essential part of the proof is to show that
$\operatorname{rad}(F)=\operatorname{rad}(r)$ for all irreducible factors
$r\in\mathcal{I}_{K}^{G}$ of $F^{Q_{G}}$. So let $F\in\mathcal{I}_{K}$,
$r\in\mathcal{I}_{K}^{G}$ an irreducible factor of $F^{Q_{G}}$ and $v\in
R_{r}$ a root of $r$ with $Q_{G}(v)=\alpha\in R_{F}$, then
$\displaystyle\operatorname{rad}(r)$
$\displaystyle=\frac{[K(v):K]}{[K(v):K]_{s}}=\frac{[K(v):K(\alpha)]}{[K(v):K(\alpha)]_{s}}\cdot\frac{[K(\alpha):K]}{[K(\alpha):K]_{s}}$
$\displaystyle=\frac{[K(v):K(\alpha)]}{[K(v):K(\alpha)]_{s}}\cdot\operatorname{rad}(F),$
so $\operatorname{rad}(F)\leq\operatorname{rad}(r)$. For
$\operatorname{rad}(F)\geq\operatorname{rad}(r)$ we need to work a bit.
###### Lemma 20.
Let $F\in\mathcal{I}_{K}$ be such that $|G\circ v|=|G|$ for $v\in
R_{F^{Q_{G}}}$ and $\operatorname{char}(K)=p>0$. Further, write $F(x)=H(x^{q})$ with $q$ a power of $p$ and $H\in\mathcal{I}_{K}$ separable, so that $\operatorname{rad}(F)=q$. Then there is a separable polynomial $S\in K[x]$
such that
$F^{Q_{G}}(x)=S(x^{q}).$
###### Proof.
This proof has two parts. At first we show that we can write $F^{Q_{G}}$ as
$S(x^{q})$ and afterwards show that the polynomial $S$ is separable. The first
part is a calculation exercise. For that let $Q_{G}=\frac{g}{h}$ and
$H=\sum_{i=0}^{n}a_{i}x^{i}$. Additionally, for an arbitrary polynomial
$P:=\sum_{i=0}^{m}c_{i}x^{i}$ we define
$P^{(q)}:=\sum\limits_{i=0}^{m}c_{i}^{q}x^{i}.$
Observe that $P(x)^{q}=P^{(q)}(x^{q})$ for $q$ a power of
$\operatorname{char}(K)=p$. With that we obtain the following:
$\displaystyle F^{Q_{G}}(x)$
$\displaystyle=h(x)^{\deg(F)}F(\frac{g(x)}{h(x)})=h(x)^{q\deg(H)}H(\frac{g(x)^{q}}{h(x)^{q}})$
$\displaystyle=\sum\limits_{i=0}^{n}a_{i}g(x)^{iq}h(x)^{(n-i)q}=\sum\limits_{i=0}^{n}a_{i}g^{(q)}(x^{q})^{i}h^{(q)}(x^{q})^{(n-i)}$
Since $K[x^{q}]\subseteq K[x]$ is a subring there exists a polynomial $S$ such
that $F^{Q_{G}}(x)=S(x^{q})$, which is exactly what we wanted. The polynomial
$S$ is of degree $\deg(S)=\deg(F^{Q_{G}})/q=|G|\cdot\deg(H)$. For every $v\in
R_{F^{Q_{G}}}$ holds $v^{q}\in R_{S}$. The map $y\mapsto y^{q}$ is bijective
on $\overline{K}$, thus $\rho:R_{F^{Q_{G}}}\to R_{S}$ with $v\mapsto v^{q}$ is
injective and therefore $|R_{F^{Q_{G}}}|\leq|R_{S}|$. Conversely, for
$\alpha\in R_{S}$ the $q$-th root $\alpha^{1/q}$ is a root of $F^{Q_{G}}$,
because $F^{Q_{G}}(\alpha^{1/q})=S((\alpha^{1/q})^{q})=S(\alpha)=0$. This shows that
$\rho$ is actually a bijection and $|R_{S}|=|R_{F^{Q_{G}}}|$. We finish this
proof by applying Corollary 19 and using $\deg(H)=|R_{F}|$ as well as our
assumption that $|G|=|G\circ v|$:
$|R_{S}|=|R_{F^{Q_{G}}}|=|R_{F}|\cdot|G\circ v|=\deg(H)\cdot|G|=\deg(S)$
So $S$ is separable because it has $\deg(S)$ many roots in $\overline{K}$. ∎
If $S\in K[x]$ is the separable polynomial such that $F^{Q_{G}}=S(x^{q})$ as
in the lemma above, then $S$ factorizes into separable irreducible factors
$S=s_{1}\cdot\ldots\cdot s_{l}$. Hence $S(x^{q})=s_{1}(x^{q})\cdot\ldots\cdot
s_{l}(x^{q})$, so it should be beneficial to study the factorization of
polynomials of the form $s(x^{q})$ for $s\in\mathcal{I}_{K}$ irreducible and
separable. The next lemma shows that such polynomials consist of only one
irreducible factor. We give a proof of this lemma as it is a crucial tool for
the following theorem. In the proof we employ a similar method as in the proof
of Theorem 17. We want to point out that there is no finite subgroup
$G\leq\operatorname{PGL}_{2}(K)$ with $x^{q}\in K(x)$ as a quotient map for
$q$ a power of $\operatorname{char}(K)$, so this is not a particular case of
Theorem 17.
###### Lemma 21.
Let $s\in K[x]$ be an irreducible, separable and monic polynomial. Furthermore
let $\operatorname{char}(K)=p>0$ and $q=p^{d}$ for $d>0$. Then there is an
irreducible, separable and monic polynomial $f\in K[x]$ with $\deg(f)=\deg(s)$
and $a,b\in\mathbb{N}$ with $a+b=d$ such that
$s(x^{q})=(f(x^{p^{a}}))^{p^{b}}$
and $f(x^{p^{a}})$ is irreducible.
###### Proof.
At first we show that $s(x^{q})$ only has one irreducible factor. Since $s$ is
both irreducible and separable, $L_{s}/K$ is a Galois extension. If we set
$P(x):=s(x^{q})$ and consider the splitting field $L_{P}/K$ of $P$, then, with
similar arguments as in the proof of Theorem 17, we can extend every
$\sigma\in\operatorname{Gal}(L_{s}/K)$ to a $K$-homomorphism
$\overline{\sigma}\in\operatorname{hom}_{K}(L_{P})$. Since splitting fields
are normal, every such $K$-homomorphism is actually an automorphism on
$L_{P}$. Let $F\in\mathcal{I}_{K}$ be an irreducible factor of $P$ and $v\in
R_{F}$ one of its roots. Then $v^{q}=:\alpha$ is a root of $s$ and similarly
$w^{q}=:\beta\in R_{s}$ for $w\in R_{P}$. Let
$\overline{\sigma}\in\operatorname{hom}(L_{P})$ be the extension of the
homomorphism $\sigma\in\operatorname{Gal}(L_{s}/K)$ with
$\sigma(\alpha)=\beta$, then
$w^{q}=\beta=\overline{\sigma}(\alpha)=\overline{\sigma}(v^{q})=(\overline{\sigma}(v))^{q}.$
As $y\mapsto y^{q}$ is injective on $\overline{K}$ we obtain that
$w=\overline{\sigma}(v)$. Therefore $w$ has to be a root of $F\in K[x]$ as
well, since $\overline{\sigma}$ is an automorphism of $L_{P}$ that fixes $K$.
So we just showed that all roots of $P$ are also roots of the irreducible
factor $F$, thus $P=F^{k}$ for $k\in\mathbb{N}$. With the degree formula for
field extensions we get
$\deg(F)=[K(v):K]=[K(v):K(\alpha)]\cdot[K(\alpha):K]=[K(v):K(\alpha)]\cdot\deg(s),$
which shows $\deg(s)|\deg(F)$. Together with the fact that
$k\cdot\deg(F)=\deg(P)=q\cdot\deg(s)$ we get $\deg(F)=p^{a}\cdot\deg(s)$ for
an $a\in\\{0,\ldots,d\\}$ and $k=p^{b}$ such that $a+b=d$. Moreover
$p^{a}=[K(v):K(\alpha)]$, which is the degree of the minimal polynomial
$m_{v}\in K(\alpha)[x]$ of $v$ in $K(\alpha)$. Since $v^{q}=\alpha$, it is
also a root of $x^{q}-\alpha$ and therefore $m_{v}|x^{q}-\alpha$. A simple
calculation shows that $m_{v}=x^{p^{a}}-\alpha^{\frac{1}{p^{b}}}$, so
$\alpha^{\frac{1}{p^{b}}}\in K(\alpha)$. Conversely $\alpha\in
K(\alpha^{\frac{1}{p^{b}}})$, since $\alpha$ is the $p^{b}$-th power of
$\alpha^{\frac{1}{p^{b}}}$, hence $K(\alpha)=K(\alpha^{\frac{1}{p^{b}}})$. Let
$f\in\mathcal{I}_{K}$ be the minimal polynomial of $\alpha^{\frac{1}{p^{b}}}$
in $K[x]$. We just showed that $\deg(f)=[K(\alpha):K]=\deg(s)$. All that shows
that $v$ is also root of the monic polynomial $F^{\prime}:=f(x^{p^{a}})\in
K[x]$. Since $F$ is the minimal polynomial of $v$ over $K$ it has to divide
$F^{\prime}$. Moreover, $F$ and $F^{\prime}$ have the same degree and are
monic, thus have to be equal. ∎
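For instance (our own small check), over the perfect field $K=\mathbb{F}_{3}$ one necessarily has $a=0$ and $b=d$ in the notation of the lemma; for the separable irreducible polynomial $s=x^{2}+1$ and $q=3$ this reads $s(x^{3})=(x^{2}+1)^{3}$, which sympy confirms:

```python
from sympy import symbols, factor

x = symbols('x')
# K = F_3 is perfect, s = x^2 + 1 is irreducible and separable, q = 3:
print(factor(x**6 + 1, modulus=3))   # (x**2 + 1)**3, i.e. a = 0 and b = 1 in Lemma 21
```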
This is enough to prove that $k=1$ if $G\circ v$ is regular for $v\in
R_{F^{Q_{G}}}$:
###### Theorem 22.
Let $F\in\mathcal{I}_{K}$ be such that $|G\circ v|=|G|$ for $v\in
R_{F^{Q_{G}}}$. Then $F^{Q_{G}}=\prod(G\ast r)$, i.e. it is an orbit
polynomial.
###### Proof.
If $F$ is separable and $|G\circ v|=|G|$ for a root of $F^{Q_{G}}$, then all $G$-orbits in $R_{F^{Q_{G}}}$ are of the same size by Corollary 19 and as a consequence $F^{Q_{G}}$ is also separable since $\deg(F^{Q_{G}})=|R_{F^{Q_{G}}}|$; in particular, $F^{Q_{G}}$ has no repeated irreducible factors, so $k=1$ in this case. If $\operatorname{rad}(F)=q>1$ then by
Lemma 20
$F^{Q_{G}}(x)=S(x^{q})=\prod\limits_{i=1}^{l}s_{i}(x^{q})=\prod\limits_{i=1}^{l}(f_{i}(x^{p^{a_{i}}}))^{p^{b_{i}}},$
where $s_{i}\in\mathcal{I}_{K}$ are the irreducible and separable factors of
$S$ and $f_{i}\in\mathcal{I}_{K}$ the separable and irreducible polynomials as
in the lemma above, so $a_{i}+b_{i}=d$ for $q=p^{d}$. Observe that
$\gcd(f_{i}(x^{p^{a_{i}}}),f_{j}(x^{p^{a_{j}}}))=1$ for $i\neq j$. The reason
for that is that since $S$ is separable, $s_{i}$ and $s_{j}$ have to be
different irreducible polynomials, i.e. $\gcd(s_{i},s_{j})=1$ which is
equivalent to $R_{s_{i}}\cap R_{s_{j}}=\varnothing$. Now, the roots of
$s_{i}(x^{q})$ and $s_{j}(x^{q})$ are the preimages of $R_{s_{i}}$ and
$R_{s_{j}}$ under the map $y\mapsto y^{q}$ on $\overline{K}$. This map is
bijective and therefore also injective, so the preimages are also different
and thus $s_{i}(x^{q})$ and $s_{j}(x^{q})$ have no roots in common. This small
observation is very important because now, with the help of Theorem 17, we can
deduce that
$\displaystyle S(x^{q})$
$\displaystyle=\prod\limits_{i=1}^{l}(f_{i}(x^{p^{a_{i}}}))^{p^{b_{i}}}$
$\displaystyle=(\prod\limits_{t\in G\ast r}t)^{k}.$
The irreducible factors $f_{i}(x^{p^{a_{i}}})$ and $t\in G\ast r$ have to
coincide with each other because $K[x]$ is a factorial ring. So we obtain with
the remarks above (and the fact that all $t_{1},t_{2}\in G\ast r$ with
$t_{1}\neq t_{2}$ also satisfy $\gcd(t_{1},t_{2})=1$ since they are
irreducible) that for all $t\in G\ast r$ there is exactly one $i\in[l]$ such
that $t(x)=f_{i}(x^{p^{a_{i}}})$. This also implies $k=p^{b_{i}}$ for all
$i\in[l]$ and thus $p^{a_{i}}=p^{a_{j}}$ for all $i,j\in[l]$. To summarize
what we could obtain:
$\displaystyle
S(x^{q})=\prod\limits_{i=1}^{l}(f_{i}(x^{p^{a}}))^{k}=\prod\limits_{t\in G\ast
r}t^{k}=F^{Q_{G}}(x).$
This shows that $\operatorname{rad}(t)=p^{a}\leq q=\operatorname{rad}(F)$ for
all $t\in G\ast r$, thus $\operatorname{rad}(t)=\operatorname{rad}(F)$, since
we already explained that $\operatorname{rad}(r)\geq\operatorname{rad}(F)$.
Hence $p^{a}=q$ and $b_{i}=0$, which shows $k=1$. ∎
Before we finish the proof of the Main Theorem we want to state an immediate consequence of this result:
###### Corollary 23.
Let $F\in\mathcal{I}_{K}$ be such that $|G\circ v|=|G|$ for a root $v$ of
$F^{Q_{G}}$, then
1. 1.
The degree of every irreducible factor $r$ of $F^{Q_{G}}$ satisfies
$\deg(r)=\frac{|G|}{|G\ast r|}\cdot\deg(F)$
2. 2.
If $\deg(F)<\deg(r)$ for an irreducible factor $r$ of $F^{Q_{G}}$, then
$\\{[I_{2}]\\}\neq\operatorname{Stab}_{G}(r)\leq G$ is non-trivial
###### Proof.
For the first we use Theorem 22 (so $k=1$) and calculate:
$|G|\cdot\deg(F)=\deg(F^{Q_{G}})=|G\ast r|\cdot\deg(r).$
The second statement is obvious because $\deg(r)=\frac{|G|}{|G\ast
r|}\cdot\deg(F)>\deg(F)$, so $|\operatorname{Stab}_{G}(r)|=\frac{|G|}{|G\ast
r|}>1$. ∎
###### Remark 24.
Observe that for $Q$ an arbitrary generator of $K(x)^{G}$ Theorem 22 and
Corollary 23 still hold if $F\in\mathcal{I}_{K}$ and $F(Q(\infty))\neq 0$. The
reason for this is again the proof of Corollary 16: Write
$Q(x)=[C]\circ Q_{G}(x)=\frac{aQ_{G}(x)+b}{cQ_{G}(x)+d}$
as in Corollary 16, where $Q_{G}$ is a quotient map, then there is
$H\in\mathcal{I}_{K}$ such that $H^{Q_{G}}=a\cdot F^{Q}$. Since we always
assume something about the roots of the resulting polynomial, i.e. about
$F^{Q}$ and thus also about $H^{Q_{G}}$, both Theorem 22 and Corollary 23 hold
because if $F^{Q}$ only contains roots in regular $G$-orbits so does
$H^{Q_{G}}$.
We state an analogue of Theorem 12 for polynomials:
###### Corollary 25.
The map $\delta_{Q_{G}}:\mathcal{I}_{K}\to\mathcal{I}_{K}^{G}/G$ with
$F\mapsto G\ast r$ such that $F^{Q_{G}}=(\prod(G\ast r))^{k}$ is a bijection.
###### Proof.
By Theorem 17 $\delta_{Q_{G}}$ defines a mapping between $\mathcal{I}_{K}$ and
$\mathcal{I}_{K}^{G}/G$. First we show that $\delta_{Q_{G}}$ is surjective.
Let $r\in\mathcal{I}_{K}^{G}$ and $v\in R_{r}$ a root of $r$. Then $r$ is the
minimal polynomial of $v$ over $K$. Moreover, let $\alpha\in\overline{K}$ be
such that $Q_{G}(v)=\alpha$ and denote by $F\in\mathcal{I}_{K}$ the minimal
polynomial of $\alpha$. Then $F^{Q_{G}}$ has $v$ as a root, thus $r|F^{Q_{G}}$
and $F^{Q_{G}}=(\prod(G\ast r))^{k}$ by Theorem 17, so
$\delta_{Q_{G}}(F)=G\ast r$. Now onto the injectivity: Let
$F,H\in\mathcal{I}_{K}$ be such that
$\delta_{Q_{G}}(F)=\delta_{Q_{G}}(H)=G\ast r$ for $r\in\mathcal{I}_{K}^{G}$,
so $F^{Q_{G}}=(\prod(G\ast r))^{k}$ and $H^{Q_{G}}=(\prod(G\ast r))^{l}$ and
both $F^{Q_{G}}$ and $H^{Q_{G}}$ have the same roots. With the help of what we
observed in the proof of Theorem 17 we obtain:
$\bigcup\limits_{\alpha\in
R_{F}}Q_{G}^{-1}(\alpha)=R_{F^{Q_{G}}}=R_{H^{Q_{G}}}=\bigcup\limits_{\beta\in
R_{H}}Q_{G}^{-1}(\beta).$
Therefore $R_{F}=R_{H}$ since $Q_{G}$ induces a bijection between
$\overline{K}\cup\\{\infty\\}$ and $(\overline{K}\cup\\{\infty\\})/G$ by
Theorem 12. As $F$ and $H$ are irreducible, monic and share the same roots
they have to be equal. ∎
As an immediate consequence of this corollary we obtain the main part of the
general version of Theorem R:
###### Theorem 26.
If $f\in\mathcal{I}_{K}^{G}$ is a $G$-invariant monic irreducible polynomial
with root $v\in R_{f}$ that is contained in a regular $G$-orbit, then there is
$F\in\mathcal{I}_{K}$ such that $f=F^{Q_{G}}$.
###### Proof.
If $f\in\mathcal{I}_{K}^{G}$ is $G$-invariant, then $G\ast f=\\{f\\}$. By
Corollary 25 we get that there is $F\in\mathcal{I}_{K}$ such that
$\delta_{Q_{G}}(F)=\\{f\\}$, which translates to $F^{Q_{G}}=f^{k}$. Further
$k=1$ because of Theorem 22 and the assumption that $v\in R_{f}$ is contained
in a regular $G$-orbit. ∎
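A concrete instance (our own example): let $K=\mathbb{F}_{3}$ and let $G$ be the group of translations $x\mapsto x+b$, $b\in\mathbb{F}_{3}$, with quotient map $Q_{G}(x)=x^{3}-x$ (see subsection 4.1; the only non-regular orbit is $\\{\infty\\}$). The Artin-Schreier polynomial $f=x^{3}-x-1$ is irreducible, $G$-invariant and has its roots in regular $G$-orbits, and indeed $f=F^{Q_{G}}$ with $F=x-1$:

```python
from sympy import symbols, Poly

x = symbols('x')
p = 3
f = Poly(x**3 - x - 1, x, modulus=p)   # irreducible (Artin-Schreier) over F_3

# f is invariant under every translation x -> x + b with b in F_3
print(all(Poly(f.as_expr().subs(x, x + b), x, modulus=p) == f for b in range(p)))  # True

# Theorem 26: f = F^{Q_G} with Q_G(x) = x^3 - x and F = x - 1
print(Poly((x**3 - x) - 1, x, modulus=p) == f)                                     # True
```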
To complete the proofs of the Main Theorem and the general Theorem R we need
to show that the number of irreducible polynomials for which $F^{Q_{G}}=(\prod
G\ast r)^{k}$ with $k>1$ is finite. By Theorem 22 this is equivalent to
showing that the number of irreducible polynomials $F\in\mathcal{I}_{K}$ for
which $F^{Q_{G}}$ has roots in non-regular $G$-orbits is finite. We give these
polynomials the following name:
###### Definition 27.
We call $F\in\mathcal{I}_{K}$ $Q_{G}$-non-conformal if $F^{Q_{G}}=(\prod(G\ast
r))^{k}$ for $r\in\mathcal{I}_{K}^{G}$, $k\in\mathbb{N}$ and $k>1$. For the
set of $Q_{G}$-non-conformal polynomials we write $\mathcal{NC}^{Q_{G}}$.
Further we set
$P_{G}:=\\{u\in\overline{K}\cup\\{\infty\\}|~{}|G\circ u|<|G|\\}$
as the set of elements in $\overline{K}\cup\\{\infty\\}$ contained in non-
regular $G$-orbits. We have the following:
###### Lemma 28 ([4, Lemma 2.1]).
Let $G\leq\operatorname{PGL}_{2}(K)$ be finite and
$v\in\overline{K}\cup\\{\infty\\}$. Then $G\circ v$ is non-regular if and only
if there is $[A]\in G\setminus\\{[I_{2}]\\}$ such that $[A]\circ v=v$, thus
$P_{G}=\\{u\in\overline{K}\cup\\{\infty\\}|~{}\exists[A]\in
G\setminus\\{[I_{2}]\\}:~{}[A]\circ u=u\\}$
and this set is finite; more precisely $|P_{G}|\leq 2(|G|-1)$. Furthermore
$[K(u):K]\leq 2$ for all $u\in P_{G}\setminus\\{\infty\\}$.
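For a toy example over $K=\mathbb{Q}$ (our own illustration): for $G=\\{[I_{2}],[A]\\}$ with $[A]$ inducing $x\mapsto 1/x$, the fixed points of the non-trivial element are $\pm 1$, so $P_{G}=\\{-1,1\\}$ and the bound $|P_{G}|\leq 2(|G|-1)=2$ is attained:

```python
from sympy import symbols, solve

x = symbols('x')
# fixed points of x -> 1/x, the only non-trivial element of G = {[I_2], [A]}:
print(solve(1/x - x, x))   # [-1, 1], hence P_G = {-1, 1} and |P_G| = 2 = 2*(|G| - 1)
```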
We denote by $\mathcal{P}_{G}$ the set of minimal polynomials of elements in
$P_{G}$, that is,
$\mathcal{P}_{G}:=\\{r\in\mathcal{I}_{K}^{G}|\exists\alpha\in
P_{G}:~{}r(\alpha)=0\\}.$ (5)
Notice that $\mathcal{P}_{G}\subseteq\mathcal{I}_{K}^{G}$ by definition, thus
$\mathcal{P}_{G}$ does not contain polynomials with roots in
$(G\circ\infty)\setminus\\{\infty\\}$, even if this orbit is non-regular. We
obtain
###### Lemma 29.
We have (note that the set $\mathcal{NC}^{Q_{G}}$ can be empty; in that case we define the left side of the following inclusion to be empty as well)
$\bigcup\limits_{F\in\mathcal{NC}^{Q_{G}}}\delta_{Q_{G}}(F)\subseteq\mathcal{P}_{G}.$
In particular
$|\mathcal{NC}^{Q_{G}}|\leq|\mathcal{P}_{G}|\leq|P_{G}|\leq 2(|G|-1),$
so there are only finitely many $Q_{G}$-non-conformal polynomials.
###### Proof.
The map $\mathcal{NC}^{Q_{G}}\to\mathcal{P}_{G}/G$ with
$F\mapsto\delta_{Q_{G}}(F)$ defines an injective mapping by Corollary 25 and Theorem 22, thus the inclusion
$\bigcup\limits_{F\in\mathcal{NC}^{Q_{G}}}\delta_{Q_{G}}(F)\subseteq\mathcal{P}_{G}$
follows. Since $\mathcal{P}_{G}$ only contains minimal polynomials of elements
in $P_{G}$ and the degree of $r\in\mathcal{P}_{G}$ is either 1 or 2 by Lemma
28 we get that $|\mathcal{P}_{G}|\leq|P_{G}|$, so $\mathcal{P}_{G}$ is finite
because $|P_{G}|\leq 2(|G|-1)$ by Lemma 28. Hence $\mathcal{NC}^{Q_{G}}$ is
finite as well and
$|\mathcal{P}_{G}|\geq\sum\limits_{F\in\mathcal{NC}^{Q_{G}}}|\delta_{Q_{G}}(F)|\geq\sum\limits_{F\in\mathcal{NC}^{Q_{G}}}1=|\mathcal{NC}^{Q_{G}}|.$
∎
We close this section by proving Theorem 6:
###### Theorem 30.
Let $F_{1},\ldots,F_{l}\in\mathcal{NC}^{Q_{G}}$ be all $Q_{G}$-non-conformal
polynomials. Further let $r_{1},\ldots,r_{l}\in\mathcal{I}_{K}^{G}$ be such
that $\delta_{Q_{G}}(F_{i})=G\ast r_{i}$ and $n_{i}\in\mathbb{N}$ such that
$F_{i}^{Q_{G}}=(\prod G\ast r_{i})^{n_{i}}$. Then for every $G$-invariant
polynomial $f\in\mathcal{NR}_{K}^{G}$ there is a unique monic polynomial $F\in
K[x]$ and unique natural numbers $0\leq k_{i}<n_{i}$ such that
$f=\left(\prod\limits_{i=1}^{l}(\prod\limits G\ast r_{i})^{k_{i}}\right)\cdot
F^{Q_{G}}.$
###### Proof.
By Corollary 11 we have
$f=\prod\limits_{i=1}^{e}(\prod(G\ast g_{i}))^{m_{i}}$
with $g_{i}\in\mathcal{I}_{K}^{G}$ and
$m_{1},\ldots,m_{e}\in\mathbb{N}\setminus\\{0\\}$. We refine this
factorization by grouping the orbit polynomials according to whether their irreducible factors belong to $\mathcal{P}_{G}$ or not, which gives
$f=\prod\limits_{i=1}^{l}(\prod(G\ast
r_{i}))^{l_{i}}\cdot\prod\limits_{j=1}^{c}(\prod(G\ast h_{j}))^{d_{j}};$
here we allow $l_{i}=0$. Since
$h_{j}\in\mathcal{I}_{K}^{G}\setminus\mathcal{P}_{G}$ for all $j\in[c]$ there
exists a unique monic polynomial $H_{1}\in K[x]$ such that
$H_{1}^{Q_{G}}=\prod\limits_{j=1}^{c}(\prod(G\ast h_{j}))^{d_{j}}$
by Theorem 22 together with Lemma 13 item 2. For the remaining factor we
divide $l_{i}$ by $n_{i}$ and write $k_{i}$ for the remainder, so
$l_{i}=a_{i}\cdot n_{i}+k_{i}$ and $0\leq k_{i}<n_{i}$. We have
$(F_{i}^{a_{i}})^{Q_{G}}=(\prod(G\ast r_{i}))^{a_{i}\cdot n_{i}}.$
Hence we define
$H_{2}:=\prod_{i=1}^{l}F_{i}^{a_{i}}$
and get
$f=\left(\prod\limits_{i=1}^{l}(\prod\limits G\ast
r_{i})^{k_{i}}\right)\cdot(H_{2}\cdot H_{1})^{Q_{G}}.$
Set $F=H_{2}\cdot H_{1}$, which is unique since both $H_{1}$ and $H_{2}$ are
unique. ∎
## 3 Some Notes on the Galois Theory of Invariant Polynomials
In this section we want to explain some statements about the Galois theory of
$G$-invariant polynomials and their implications for finite fields. In
particular, we give an alternative proof of the fact that all irreducible
monic polynomials $f\in\mathbb{F}_{q}[x]$ of degree $\deg(f)\geq 3$ have
cyclic stabilizers in $\operatorname{PGL}_{2}(\mathbb{F}_{q})$ ([22, Theorem
1.3]). This means that if $f\in\mathcal{I}_{\mathbb{F}_{q}}=:\mathcal{I}_{q}$
is of degree $\deg(f)\geq 3$ and $G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})$
is non-cyclic, then $|G\ast f|>1$. We will exploit this in the proof of
Theorem 2.
Consider a $G$-invariant, separable, monic polynomial $f\in\mathcal{I}_{K}^{G}$ whose roots belong to regular $G$-orbits, and its splitting field $L_{f}$ in a fixed algebraic closure of $K$. As seen before we
can partition the set of roots of $f$ into $G$-orbits
$R_{f}=\bigcup_{i=1}^{k}(G\circ v_{i})$
where $v_{1},\ldots v_{k}\in R_{f}$ is a set of representatives of $R_{f}/G$.
Thus $\deg(f)=|G|\cdot k$, since all $G$-orbits in $R_{f}$ are regular. Let
$\sigma\in\operatorname{Gal}(f):=\operatorname{Gal}(L_{f}/K)$ and
$A=\begin{pmatrix}a&b\\\ c&d\end{pmatrix}$
such that $[A]\in G$, then
$\sigma([A]\circ
v)=\sigma(\frac{av+b}{cv+d})=\frac{a\sigma(v)+b}{c\sigma(v)+d}=[A]\circ\sigma(v)$
for all $v\in R_{f}$. This shows that the actions of $G$ and
$\operatorname{Gal}(f)$ on $R_{f}$ commute and that $G\circ
v_{1},\ldots,G\circ v_{k}$ is a non-trivial block system for
$\operatorname{Gal}(f)$:
###### Definition 31 (See [9]).
Let $G$ be a finite group acting transitively on a non-empty finite set $X$.
We say that a subset $Y\subseteq X$ is a block for $G$ if for every $g\in G$ either $g\cdot Y=Y$ or $g\cdot Y\cap Y=\varnothing$. Moreover, $Y$ is a non-trivial block if
$1<|Y|<|X|$. If $Y$ is a block then $\\{g\cdot Y|g\in G\\}$ is a partition of
$X$ and is called a block system of $X$ for $G$.
We define the point-wise stabilizer subgroup of a block $G\circ v$ as
$\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ
v):=\\{\sigma\in\operatorname{Gal}(f)|\sigma(w)=w\text{ for all }w\in G\circ
v\\}.$
Similarly, the set-wise stabilizer is
$\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v):=\\{\sigma\in\operatorname{Gal}(f)|\sigma(w)\in G\circ v\text{ for all
}w\in G\circ v\\}.$
Notice that $\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ
v)\trianglelefteq\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ v)$. Our
first goal is to show that $G$ is isomorphic to the quotient of these
stabilizers. For that, we need a nice lemma about commuting group actions
stated in [13] and [14]:
###### Lemma 32 ([13] & [14]).
Let $X$ be a finite non-empty set and $G,H$ groups acting transitively and faithfully on $X$ (a group $G$ acts faithfully on $X$ if $g\cdot x=x$ for all $x\in X$ implies $g=1$). Moreover, assume $G$ acts regularly on $X$, that is, $\operatorname{Stab}_{G}(x)=\\{1\\}$ for all $x\in X$, and that the actions of $G$ and $H$ commute, i.e.
$g(h(x))=h(g(x))$
for all $h\in H$, $g\in G$ and $x\in X$. Then $H$ acts regularly on $X$ and is
isomorphic to $G$.
Additionally we prove the following:
###### Lemma 33.
Let $f\in\mathcal{I}_{K}^{G}$ be $G$-invariant and separable, and assume that $R_{f}$ only contains regular $G$-orbits. Moreover let $v\in R_{f}$ be a root of $f$. Then
we have:
1. 1.
$G$ acts transitively and regularly on $G\circ v$
2. 2.
$U:=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)/\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)$ acts faithfully and
transitively on $G\circ v$
3. 3.
$\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ
v)=\operatorname{Stab}_{\operatorname{Gal}(f)}(v)$, thus $U$ acts regularly on
$G\circ v$
###### Proof.
For the first item note that the action of $G$ restricted to any of its orbits
in $R_{f}$ is always transitive. Additionally, $G$ acts regularly on $G\circ
v$ since all orbits are regular. That the induced action of $U$ on $G\circ v$
is faithful and transitive follows from standard facts about group actions, so
onto the last item: The inclusion
$\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ
v)\subseteq\operatorname{Stab}_{\operatorname{Gal}(f)}(v)$ is obvious. For the
other let $\sigma\in\operatorname{Stab}_{\operatorname{Gal}(f)}(v)$, so
$\sigma(v)=v$. Moreover, by the first item, $G$ acts transitively on $G\circ
v$, so for all $w\in G\circ v$ there is $[A]\in G$ such that $[A]\circ v=w$.
As a consequence
$\sigma(w)=\sigma([A]\circ v)=[A]\circ\sigma(v)=[A]\circ v=w,$
so both sets are equal. Moreover, all stabilizers of elements in $G\circ v$
are equal to $\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ v)$. So for
all $w\in G\circ v$ we get
$U=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)/\operatorname{Stab}_{\operatorname{Gal}(f)}(w),$
thus $U$ acts regularly on $G\circ v$. ∎
We apply Lemma 32 to our setup: we take $G$ itself for the group $G$ in Lemma 32 and $H=U$ as in the previous lemma, and obtain
$G\cong U=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)/\operatorname{Stab}_{\operatorname{Gal}(f)}(v).$ (6)
With that we obtain:
###### Corollary 34.
Let $f\in\mathcal{I}_{K}^{G}$ be $G$-invariant and separable such that all $G$-orbits in $R_{f}$ are regular, and let $v\in R_{f}$ be a root of $f$.
1. 1.
If $\operatorname{Gal}(f)$ is abelian, then $G\cong
U\leq\operatorname{Gal}(f)$
2. 2.
If $\deg(f)=|G|$, then $G\cong\operatorname{Gal}(f)$
###### Proof.
We want to show that $\operatorname{Gal}(f)$ is abelian implies
$\operatorname{Stab}_{\operatorname{Gal}(f)}(v)=\\{\operatorname{id}\\}$. To
see this let $\sigma\in\operatorname{Stab}_{\operatorname{Gal}(f)}(v)$ and
$w\in R_{f}$. Additionally set $\tau\in\operatorname{Gal}(f)$ such that
$\tau(v)=w$ (exists since $f$ is irreducible and thus $\operatorname{Gal}(f)$
acts transitively on $R_{f}$). Then we obtain
$\sigma(w)=\sigma(\tau(v))=\tau(\sigma(v))=\tau(v)=w,$
so $\sigma(w)=w$ for all $w\in R_{f}$ and thus $\sigma=\operatorname{id}$
because $\operatorname{Gal}(f)$ acts faithfully on $R_{f}$. Consequently
$G\cong\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)\leq\operatorname{Gal}(f)$, which finishes the first part.
Now let $f$ be of degree $|G|$, so also $|R_{f}|=|G|$ and $R_{f}=G\circ v$ for
all $v\in R_{f}$. Therefore
$\operatorname{Gal}(f)=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)$ by definition and $\operatorname{p-Stab}_{\operatorname{Gal}(f)}(G\circ
v)=\\{\operatorname{id}\\}$ because $\operatorname{Gal}(f)$ acts faithfully on
$R_{f}$. This shows
$\operatorname{Gal}(f)=\operatorname{s-Stab}_{\operatorname{Gal}(f)}(G\circ
v)\cong G.$
∎
Further, we get:
###### Corollary 35.
Let $f\in\mathcal{I}_{q}$. If $f$ is $G$-invariant for a subgroup
$G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})$ and $R_{f}$ only contains
regular $G$-orbits then $G$ has to be cyclic and $|G|\mid\deg(f)$.
###### Proof.
It is well-known that $\operatorname{Gal}(f)\cong C_{\deg(f)}$, where $C_{n}$
is the cyclic group of order $n$, so $\operatorname{Gal}(f)$ is also abelian.
If $f$ is $G$-invariant for $G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})$,
then $G$ has to be isomorphic to a subgroup of $C_{\deg(f)}$ by Corollary 34
(1) and therefore has to be cyclic as well. By Lagrange’s theorem $|G|$
divides $|C_{\deg(f)}|=\deg(f)$. ∎
###### Remark 36.
This result does not hold for quadratic irreducible polynomials over finite
fields. Define $\mathcal{I}_{q}^{n}$ as the set of monic irreducible
polynomials over $\mathbb{F}_{q}$ of degree $n$. It can be shown that for
$g\in\mathcal{I}_{q}^{2}$ we have
$\operatorname{PGL}_{2}(\mathbb{F}_{q})\ast g=\mathcal{I}_{q}^{2}$
and thus
$|\operatorname{Stab}_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}(g)|=\frac{|\operatorname{PGL}_{2}(\mathbb{F}_{q})|}{|\mathcal{I}_{q}^{2}|}=2(q+1).$
Since the biggest order of an element in
$\operatorname{PGL}_{2}(\mathbb{F}_{q})$ is $q+1$, the stabilizer cannot be
cyclic. In fact, it is dihedral. The reason why Corollary 35 fails is that the
set of roots of a quadratic irreducible polynomial $g\in\mathcal{I}_{q}^{2}$
does not contain regular
$\operatorname{Stab}_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}(g)$-orbits.
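A brute-force check of this remark for $q=3$ (our own sketch with sympy; the action $\ast$ is implemented with the monic normalization used in the proof of Corollary 16):

```python
from itertools import product
from sympy import symbols, Poly

x = symbols('x')
p = 3

def star(a, b, c, d, f):
    # [A] * f over F_p for A = ((a, b), (c, d)): expand
    # (c*x + d)^deg(f) * f((a*x + b)/(c*x + d)) and rescale to be monic.
    n = f.degree()
    num = Poly(a*x + b, x, modulus=p)
    den = Poly(c*x + d, x, modulus=p)
    g = Poly(0, x, modulus=p)
    for i, coeff in enumerate(reversed(f.all_coeffs())):   # coeff of x^i
        g += coeff * num**i * den**(n - i)
    return g.monic()

f = Poly(x**2 + 1, x, modulus=p)
orbit = {star(a, b, c, d, f)
         for a, b, c, d in product(range(p), repeat=4)
         if (a*d - b*c) % p != 0}                          # [A] in PGL_2(F_3)

print(len(orbit))      # 3, i.e. the orbit is the whole of I_3^2, so |Stab| = 24/3 = 8 = 2(q+1)
# sympy prints GF(3) coefficients in symmetric form, e.g. x**2 + x - 1 stands for x**2 + x + 2
print(sorted(str(g.as_expr()) for g in orbit))
```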
We now have enough to prove Theorem 2.
###### Proof. (Theorem 2).
Since $F^{Q_{G}}$ is separable it has $\deg(F^{Q_{G}})$ many roots and together with
Corollary 19 we have that all $G$-orbits in $R_{F^{Q_{G}}}$ are regular. Note
that $K=\mathbb{F}_{q}$ is perfect so $|R_{r}|=\deg(r)$ for all irreducible
polynomials in $\mathbb{F}_{q}[x]$. With Theorem 17 and 22 we know that there
exists an irreducible polynomial $r\in\mathcal{I}_{K}^{G}$ such that
$F^{Q_{G}}=\prod G\ast r$. Observe that $r$ is invariant under $\operatorname{Stab}_{G}(r)\leq G$ and the $\operatorname{Stab}_{G}(r)$-orbits in $R_{r}$ are regular because the $G$-orbits in $R_{F^{Q_{G}}}$ are regular. Consequently
$\operatorname{Stab}_{G}(r)$ has to be cyclic by Corollary 35 and with
Corollary 23 we obtain
$\deg(r)=|\operatorname{Stab}_{G}(r)|\cdot\deg(F)\leq\mu_{G}\cdot\deg(F)$
and
$|G\ast r|=\frac{|G|}{|\operatorname{Stab}_{G}(r)|}\geq\frac{|G|}{\mu_{G}}.$
∎
###### Corollary 37.
Let $G\leq\operatorname{PGL}_{2}(\mathbb{F}_{q})$ be non-cyclic and let $Q_{G}\in\mathbb{F}_{q}(x)$ be a quotient map for $G$. Then $F^{Q_{G}}$ is reducible for all $F\in\mathbb{F}_{q}[x]$.
###### Proof.
Note that “$F^{Q_{G}}$ is irreducible” implies “$F$ is irreducible”, so we can
focus on $F$ being irreducible. If
$F\in\mathcal{I}_{K}\setminus\mathcal{NC}^{Q_{G}}$, then $F^{Q_{G}}$ has at
least $|G|/\mu_{G}$ irreducible factors by Theorem 2 and $|G|/\mu_{G}>1$ if
$G$ is non-cyclic. If $F\in\mathcal{NC}^{Q_{G}}$, then
$F^{Q_{G}}=\prod(G\ast r)^{k}$
for $r\in\mathcal{I}_{K}^{G}$ and $k>1$. ∎
## 4 Examples of Invariant Polynomials
In this section we show how our results apply to specific subgroups of
$\operatorname{PGL}_{2}(K)$ where $\operatorname{char}(K)>0$.
### 4.1 Unipotent Subgroups
Consider a field $K$ with $\operatorname{char}(K)=p>0$ and $q=p^{l}$ for
$l>0$. Moreover assume $\mathbb{F}_{q}\subseteq K$ and let $V\leq_{q}K$ be an
$\mathbb{F}_{q}$-subspace of $K$ of dimension
$n\in\mathbb{N}\setminus\\{0\\}$. For the subspace $V$ we define
$\overset{\sim}{V}:=\left\\{\left[\left(\begin{array}[]{cc}1&v\\\
0&1\end{array}\right)\right]:v\in V\right\\}\leq\operatorname{PGL}_{2}(K).$
Observe that $\overset{\sim}{V}\cong\mathbb{F}_{q}^{n}$ as groups, so
$\overset{\sim}{V}$ is abelian and every non-trivial element
$[A]\in\overset{\sim}{V}$ has order $p$. Additionally,
$\overset{\sim}{V}\subseteq\operatorname{Stab}_{\operatorname{PGL}_{2}(K)}(\infty)$,
so $\mathcal{NR}_{K}^{\overset{\sim}{V}}$ is just the set of monic polynomials
over $K$. A quotient map is the subspace polynomial associated with $V$ (see [4, §10])
$Q_{\overset{\sim}{V}}(x)=\prod_{v\in V}(x-v).$ (7)
The set $P_{\overset{\sim}{V}}$ only contains $\infty$, so
$\mathcal{P}_{\overset{\sim}{V}}=\varnothing=\mathcal{NC}^{Q_{\overset{\sim}{V}}}$.
This makes the classification of $\overset{\sim}{V}$-invariant polynomials
especially nice:
###### Corollary 38.
For every monic $\overset{\sim}{V}$-invariant polynomial
$f\in\mathcal{NR}_{K}^{\overset{\sim}{V}}$ there exists a unique monic polynomial
$F\in K[x]$ such that
$f(x)=F\left(\prod_{v\in V}(x-v)\right).$
###### Remark 39.
This result is already known, see [23, Theorem 2.5.]. There,
$\overset{\sim}{V}$-invariant polynomials are called $V$-translation invariant
polynomials since
$[A]\ast f(x)=f(x+v)$
for
$A=\left(\begin{array}[]{cc}1&v\\\ 0&1\end{array}\right).$
Even though the proof of Theorem 2.5 is only stated for finite fields, the assumption that $K$ is finite is not used at all, so it also holds if $K$ is infinite. We gave an alternative proof of this result.
Next we look at the factorization of $F(Q_{\overset{\sim}{V}}(x))$.
###### Lemma 40.
Let $F\in\mathcal{I}_{K}$, then we obtain:
1. 1.
All irreducible factors of $F(Q_{\overset{\sim}{V}}(x))$ have the same
stabilizer in $\overset{\sim}{V}$, i.e. there is $W\leq V$ such that all
irreducible factors of $F(Q_{\overset{\sim}{V}}(x))$ are
$\overset{\sim}{W}$-invariant.
2. 2.
Let $F(Q_{\overset{\sim}{V}}(x))=\prod(\overset{\sim}{V}\ast r)$ for an
$r\in\mathcal{I}_{K}$ and
$\operatorname{Stab}_{\overset{\sim}{V}}(r)=\overset{\sim}{W}$ for $W\leq V$.
Moreover let $v_{1},\ldots,v_{k}\in V$ be a complete set of representatives
for $V/W$, then
$F(Q_{\overset{\sim}{V}}(x))=\prod\limits_{i=1}^{k}r(x+v_{i}).$ (8)
###### Proof.
By Theorem 17 and 22 we know that
$F(Q_{\overset{\sim}{V}}(x))=\prod(\overset{\sim}{V}\ast r)$ for an
$r\in\mathcal{I}_{K}^{\overset{\sim}{V}}=\mathcal{I}_{K}$. All elements in the
same orbit have conjugated stabilizers. Since $\overset{\sim}{V}$ is abelian
every subgroup is normal, thus
$\operatorname{Stab}_{\overset{\sim}{V}}(t)=\operatorname{Stab}_{\overset{\sim}{V}}(r)$
for all $t\in\overset{\sim}{V}\ast r$, so the first part is proved.
For the second item notice that for $v,u\in V$ we have
$r(x+v)=r(x+u)\Leftrightarrow r(x)=r(x+(u-v))\Leftrightarrow u-v\in W,$
hence all $r(x+v_{i})$ are different irreducible polynomials for
$v_{1},\ldots,v_{k}$ a complete set of representatives of $V/W$. Since
$F(Q_{\overset{\sim}{V}}(x))$ has exactly $|V|/|W|$ irreducible factors by
Corollary 23 equation (8) follows. ∎
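As a concrete instance of equation (8) (our own example), take $K=\mathbb{F}_{3}$ and $V=\mathbb{F}_{3}$, so that $Q_{\overset{\sim}{V}}(x)=x^{3}-x$, and $F=x^{2}+1$. Here $W=\\{0\\}$, and $F(Q_{\overset{\sim}{V}}(x))$ splits into the three translates $r(x)$, $r(x+1)$, $r(x+2)$ of $r=x^{2}+1$:

```python
from sympy import symbols, expand, factor

x = symbols('x')
p = 3
FQ = expand((x**3 - x)**2 + 1)   # F(Q_V(x)) with F = x^2 + 1 and Q_V(x) = x^3 - x
print(factor(FQ, modulus=p))
# the result is a product of three quadratics which, modulo 3, are exactly
# r(x), r(x + 1) and r(x + 2) for r = x^2 + 1 (sympy prints coefficients in
# symmetric form, e.g. x**2 + x - 1 stands for x**2 + x + 2)
```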
Note that Corollary 4 is an immediate consequence of Theorem 2.
### 4.2 Borel-Subgroup
Here we consider the Borel-subgroup of $\operatorname{PGL}_{2}(q)$ in fields
$K$ with $\mathbb{F}_{q}\subseteq K$. This group is defined as
$B(q)=\left\\{\left[\left(\begin{array}[]{cc}a&b\\\
0&1\end{array}\right)\right]:a\in\mathbb{F}_{q}^{\ast},b\in\mathbb{F}_{q}\right\\}.$
The transformation with $[A]\in B(q)$ looks like
$[A]\ast f(x)=a^{-\deg(f)}\cdot f(ax+b)$
for
$A=\left(\begin{array}[]{cc}a&b\\\ 0&1\end{array}\right).$ (9)
The group can be seen as
$B(q)\cong\mathbb{F}_{q}\rtimes\mathbb{F}_{q}^{\ast}$
where the multiplication is defined as
$(b,a)\cdot(d,c)=(b+ad,ac)$
and for $A\in\operatorname{GL}_{2}(K)$ as in (9) we have
$\operatorname{ord}([A])=\begin{cases}\operatorname{ord}_{\mathbb{F}_{q}^{\ast}}(a),&\text{
if }a\neq 1\\\ p,&\text{ if }a=1\text{ and }b\neq 0\end{cases}$
Moreover $B(q)=\operatorname{Stab}_{\operatorname{PGL}_{2}(q)}(\infty)$, so
$\mathcal{NR}_{K}^{B(q)}$ is the set of monic polynomials in $K[x]$. We calculate a quotient map for $B(q)$ using
###### Lemma 41 ([4, Theorem 3.10]).
Let $G\leq\operatorname{PGL}_{2}(K)$ be a finite subgroup and for
$v\in\overline{K}$ let
$g_{v}(x):=\prod\limits_{u\in G\circ v}(x-u)^{m_{v}}\text{ and
}h_{\infty}(x)=\prod\limits_{u\in
G\circ\infty\setminus\\{\infty\\}}(x-u)^{m_{\infty}},$
where $m_{u}:=|\operatorname{Stab}_{G}(u)|$ for
$u\in\overline{K}\cup\\{\infty\\}$. Then there is $w\in\overline{K}$ such that
$Q_{G}(x):=\frac{g_{v}(x)}{h_{\infty}(x)}+w\in K(x)$
is a quotient map for $G$. Conversely, if there is $w\in\overline{K}$ with
$\frac{g_{v}(x)}{h_{\infty}(x)}+w\in K(x)$, then
$Q(x):=\frac{g_{v}(x)}{h_{\infty}(x)}+w$ is a quotient map for $G$.
Let $v\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}$, then $B(q)\circ
v=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}$ because $\\{1,v\\}$ is an $\mathbb{F}_{q}$-basis of $\mathbb{F}_{q^{2}}$, thus
$\\{a\cdot
v+b|a\in\mathbb{F}_{q}^{\ast},b\in\mathbb{F}_{q}\\}=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}.$
Hence, a quotient map is given by
$\displaystyle Q_{B(q)}(x)$ $\displaystyle=\prod\limits_{w\in B(q)\circ
v}(x-w)^{|\operatorname{Stab}_{B(q)}(v)|}=\prod\limits_{w\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}}(x-w)^{|\operatorname{Stab}_{B(q)}(v)|}$
$\displaystyle=\prod\limits_{g\in\mathcal{I}_{q}^{2}}g(x).$
Recall that $\mathcal{I}_{q}^{2}$ is the set of monic irreducible polynomials
of degree $2$ over $\mathbb{F}_{q}$. That $|\operatorname{Stab}_{B(q)}(v)|=1$
holds is a consequence of $av+b=v$ only having solutions
$v\in\mathbb{F}_{q}\cup\\{\infty\\}$ if $(a,b)\neq(1,0)$.
If $q\neq 2$ then $P_{B(q)}$ is equal to $\mathbb{F}_{q}\cup\\{\infty\\}$
because, for $a\neq 1$, the element $[A]$ with $A$ as in (9) fixes $-b/(a-1)$. For $q=2$ we have
$B(2)\cong\mathbb{F}_{2}$ and $P_{B(2)}=\\{\infty\\}$; so this case belongs to
the previous example. The group $B(q)$ acts transitively on
$P_{B(q)}\setminus\\{\infty\\}=\mathbb{F}_{q}$ because for fixed $c\in\mathbb{F}_{q}$ and arbitrary $b\in\mathbb{F}_{q}$ we can take the matrix
$B=\left(\begin{array}[]{cc}1&b-c\\\ 0&1\end{array}\right)$
and we see $[B]\circ c=c+(b-c)=b$. Consequently, $B(q)$ also acts transitively on $\mathcal{P}_{B(q)}$, which consists of the polynomials $x-c$ for $c\in\mathbb{F}_{q}$. Hence there is a monic irreducible polynomial
$F$ of degree $1$ (so $F=x-\alpha$ for $\alpha\in K$) and an exponent $k>1$
such that
$\left(\prod\limits_{g\in\mathcal{I}_{q}^{2}}g(x)\right)-\alpha=F^{Q_{B(q)}}(x)=\left(\prod\limits_{v\in\mathbb{F}_{q}}(x-v)\right)^{k}=(x^{q}-x)^{k}.$
We can deduce the exponent $k$ from comparing the degree of the polynomials on
both sides of the equality. The left side has degree $2\cdot
N_{q}(2)=2\cdot(\frac{1}{2}(q^{2}-q))=q^{2}-q=q(q-1)$, so $k=q-1$. To obtain
$\alpha$ we calculate, using $(x^{q}-x)^{q}=x^{q^{2}}-x^{q}$,
$\displaystyle(x^{q}-x)^{q-1}=\frac{x^{q^{2}}-x^{q}}{x^{q}-x}=\frac{x^{q^{2}}-x}{x^{q}-x}-\frac{x^{q}-x}{x^{q}-x}=\prod\limits_{g\in\mathcal{I}_{q}^{2}}g(x)-1$
so $\alpha=1$ and thus
$\left(\prod\limits_{g\in\mathcal{I}_{q}^{2}}g(x)\right)-1=(x^{q}-x)^{q-1}.$
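A quick computational check of this identity for $q=3$ (our own sketch; the monic irreducible quadratics are found by brute force):

```python
from sympy import symbols, Poly

x = symbols('x')
p = 3

# the monic irreducible quadratics over F_p are exactly the x^2 + b*x + c without a root in F_p
irr2 = [(b, c) for b in range(p) for c in range(p)
        if all((a*a + b*a + c) % p != 0 for a in range(p))]

prod = Poly(1, x, modulus=p)
for b, c in irr2:
    prod *= Poly(x**2 + b*x + c, x, modulus=p)

lhs = prod - Poly(1, x, modulus=p)
rhs = Poly((x**p - x)**(p - 1), x, modulus=p)
print(len(irr2), lhs == rhs)   # 3 True
```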
Hence $(x-1)\in\mathcal{NC}^{Q_{B(q)}}$ and $\delta_{Q_{B(q)}}(x-1)=\mathcal{P}_{B(q)}$, so $\mathcal{NC}^{Q_{B(q)}}=\\{x-1\\}$ by Lemma 29. With this we can characterize
all $B(q)$-invariant polynomials as follows:
###### Corollary 42.
For every monic $B(q)$-invariant polynomial $f\in\mathcal{NR}_{K}^{B(q)}$
there exists a unique monic polynomial $F\in K[x]$ and a unique $m\in\mathbb{N}$ with $0\leq
m<q-1$ such that
$f(x)=(x^{q}-x)^{m}\cdot F\left(\prod_{g\in\mathcal{I}_{q}^{2}}g(x)\right).$
###### Remark 43.
The polynomial
$Q(x)=(x^{q}-x)^{q-1}$
is another quotient map for $B(q)$. For $Q$ the sets $P_{B(q)}$ and
$\mathcal{P}_{B(q)}$ remain the same (notice that these sets are always the
same regardless of which quotient map we choose); only $\mathcal{NC}^{Q}=\\{x\\}$
is different. Therefore we can reformulate the previous Corollary in the
following way:
> For every monic $B(q)$-invariant polynomial $f\in\mathcal{NR}_{K}^{B(q)}$
> there exists a unique monic polynomial $F\in K[x]$ and a unique $m\in\mathbb{N}$ with
> $0\leq m<q-1$ such that
>
> $f(x)=(x^{q}-x)^{m}\cdot F\left((x^{q}-x)^{q-1}\right)$
Changing the quotient map for $B(q)$ in the representation of
$B(q)$-invariant polynomials is like changing the basis of a vector space. The
polynomials $F$ are, in this analogy, like the coefficients of the vectors
written as the linear combination of the basis elements.
The factorization over finite $K$ can be explained with Theorem 2 again:
###### Corollary 44.
Let $K=\mathbb{F}_{q^{s}}$ and $F\in\mathcal{I}_{q^{s}}$ for an
$s\in\mathbb{N}\setminus\\{0\\}$ and $q=p^{n}$. If $F\neq x-1$ then
$F^{Q_{B(q)}}$ has at least
1. 1.
$q$ irreducible factors if $q$ is not prime, i.e. $n>1$, or
2. 2.
$q-1$ irreducible factors if $q$ is prime, i.e. $n=1$
Every such factor has a cyclic stabilizer in $B(q)$ and thus has degree at
most $(q-1)\cdot\deg(F)$ if $q$ is not prime and $q\cdot\deg(F)$ if $q$ is
prime.
### 4.3 Projective General Linear Groups
Let $\operatorname{char}(K)=p>0$ and assume that $\mathbb{F}_{q}\subseteq K$
for $q$ a power of $p$, then
$G:=\operatorname{PGL}_{2}(\mathbb{F}_{q})\leq\operatorname{PGL}_{2}(K)$.
First of all we need to calculate a quotient map for $G$. As shown in [4,
Example 3.12]
$Q_{G}(x)=\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(\prod_{h\in\mathcal{I}_{q}^{1}}h(x))^{(q-1)q}}=\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(x^{q}-x)^{(q-1)q}}$
(10)
is a quotient map for $G$ over every field that contains $\mathbb{F}_{q}$, so
also over $K$. The sets $\mathcal{I}_{q}^{1}$ and $\mathcal{I}_{q}^{3}$ are
the sets of monic irreducible polynomials in $\mathbb{F}_{q}[x]$ of degree 1
and 3 respectively. We want to determine $P_{G},\mathcal{P}_{G}$ and
$\mathcal{NC}^{Q_{G}}$. By Lemma 28 we know that
$P_{G}\subseteq\mathbb{F}_{q^{2}}\cup\\{\infty\\}$ since the equation
$[A]\circ v=v$
for $[A]\in\operatorname{PGL}_{2}(q)$ is, in essence, a polynomial equation
over $\mathbb{F}_{q}$, hence all solutions are algebraic over $\mathbb{F}_{q}$
(except $\infty$) and thus $[\mathbb{F}_{q}(v):\mathbb{F}_{q}]\leq 2$. Indeed,
$P_{G}=\mathbb{F}_{q^{2}}\cup\\{\infty\\}$ since for $a\in\mathbb{F}_{q}$ and
$v\in\mathbb{F}_{q^{2}}$ we have
$\displaystyle G\circ a=\mathbb{F}_{q}\cup\\{\infty\\},\qquad G\circ v=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}.$
Therefore $\mathcal{NR}_{K}^{G}$ consists of monic polynomials in $K[x]$ with
no roots in $\mathbb{F}_{q}$. The set $\mathcal{P}_{G}$ contains minimal
polynomials of elements in
$P_{G}\setminus(G\circ\infty)=\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}$, thus
we have two cases:
1. 1.
If $\mathbb{F}_{q^{2}}\not\subseteq K$, then
$\mathcal{P}_{G}=\mathcal{I}_{q}^{2}$ and every $g\in\mathcal{I}_{q}^{2}$ is
also irreducible over $K[x]$
2. 2.
If $\mathbb{F}_{q^{2}}\subseteq K$, then
$\mathcal{P}_{G}=\\{x-v|v\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}\\}.$
Since $B(q)\subseteq G$ and $B(q)$ acts transitively on $\mathcal{P}_{G}$, so
does $G$, and thus $G\ast g=\mathcal{P}_{G}$ for all $g\in\mathcal{P}_{G}$.
Since $\delta_{Q_{G}}(\mathcal{NC}^{Q_{G}})\subseteq\mathcal{P}_{G}$, we are
looking for an irreducible polynomial $F\in\mathcal{I}_{K}$ such that
$F^{Q_{G}}=(\prod G\ast h)^{k}=(\prod\mathcal{P}_{G})^{k}=(\prod\limits_{g\in\mathcal{I}_{q}^{2}}g)^{k}$
for an $h\in\mathcal{P}_{G}$ and $k>1$. Looking back at Example 3.12 in [4]
gives $F=x+1$ and $k=q+1$, thus $\mathcal{NC}^{Q_{G}}=\\{x+1\\}$. With Theorem
30 we obtain
###### Corollary 45.
For every monic $\operatorname{PGL}_{2}(q)$-invariant polynomial
$f\in\mathcal{NR}_{K}^{G}$ there exist a unique monic polynomial $F\in K[x]$ and
a unique $m\in\mathbb{N}$ with $0\leq m<q+1$ such that
$f(x)=\left(\prod\limits_{g\in\mathcal{I}_{q}^{2}}g(x)\right)^{m}\cdot\left((x^{q}-x)^{(q-1)q\cdot\deg(F)}F\left(\frac{\prod_{r\in\mathcal{I}_{q}^{3}}r(x)}{(x^{q}-x)^{(q-1)q}}\right)\right).$
For the factorization over finite fields we briefly recall the three types of
conjugacy classes of cyclic subgroups of
$\operatorname{PGL}_{2}(\mathbb{F}_{q})$ (for reference see [4, Proposition
11.1] or [16, §8]).
Every $[A]\in\operatorname{PGL}_{2}(q)$ is contained in one of the following
three types of conjugacy classes:
1. 1.
$[A]$ fixes a unique element in $\mathbb{F}_{q}\cup\\{\infty\\}$, i.e.
$[A]\circ v=v$ for a unique $v\in\mathbb{F}_{q}\cup\\{\infty\\}$
2. 2.
$[A]$ fixes two different elements in $\mathbb{F}_{q}\cup\\{\infty\\}$ under
Möbius-transformation
3. 3.
$[A]$ fixes $\lambda,\lambda^{q}\in\mathbb{F}_{q^{2}}\setminus\mathbb{F}_{q}$
under Möbius-transformation
We then say that $[A]$ is of type $1,2$ or $3$ respectively. If $[A]$ is of
type 1 then $\operatorname{ord}([A])=p$ and $p$ is the prime dividing $q$.
Every element $[B]$ of type 2 has an order dividing $q-1$ and if $[C]$ is of
type 3 then $\operatorname{ord}([C])|q+1$. So
$\mu_{\operatorname{PGL}_{2}(\mathbb{F}_{q})}=q+1$ and we obtain
###### Corollary 46.
Let $K=\mathbb{F}_{q^{s}}$ for $s>0$ and $F\in\mathcal{I}_{q^{s}}$ and
$G=\operatorname{PGL}_{2}(\mathbb{F}_{q})$. If $F\neq x+1$ then $F^{Q_{G}}$
has at least $q^{2}-q$ irreducible factors and every such factor has degree at
most $(q+1)\cdot\deg(F)$.
###### Proof.
Follows immediately from Theorem 2 together with $\mu_{G}=q+1$ and
$|G|=q^{3}-q$. ∎
## Acknowledgements
I want to thank Alev Topuzoğlu and Henning Stichtenoth for their helpful
remarks and the advice they have given me. I am especially grateful to Henning
Stichtenoth for helping me with some technicalities of Section 2.1 and for making me
aware of the paper [27].
I am also very grateful for all of the invaluable help my supervisor Gohar
Kyureghyan has given me. Without her, this paper would probably not exist.
## References
* [1] Sergey Abrahamyan, Mahmood Alizadeh and Melsik K. Kyureghyan “Recursive constructions of irreducible polynomials over finite fields” In _Finite Fields and Their Applications_ 18.4, 2012, pp. 738–745
* [2] A. Albert “Fundamental Concepts of Higher Algebra” University of Chicago Press, 1956
* [3] Alp Bassa and Ricardo Menares “The R-transform as a power map and its generalisations to higher degree”, 2019 arXiv:1909.02608 [math.NT]
* [4] Antonia W. Bluher “Explicit Artin maps into $PGL_{2}$” In _Expositiones Mathematicae_ 40.1, 2022, pp. 45–93
* [5] Xiwang Cao and Lei Hu “On the reducibility of some composite polynomials over finite fields” In _Designs, Codes and Cryptography_ 64, 2012
* [6] Stephen D. Cohen “On irreducible polynomials of certain types in finite fields” In _Mathematical Proceedings of the Cambridge Philosophical Society_ 66.2 Cambridge University Press, 1969, pp. 335–344
* [7] Stephen D. Cohen “The Explicit Construction of Irreducible Polynomials over Finite Fields.” In _Designs, Codes and Cryptography_ 2, 1992, pp. 169–174
* [8] David E. Daykin “Generation of Irreducible Polynomials Over a Finite Field” In _The American Mathematical Monthly_ 72.6 Mathematical Association of America, 1965, pp. 646–648
* [9] John D. Dixon and Brian Mortimer “Permutation Groups”, Graduate Texts in Mathematics Springer New York, 1996
* [10] Gregory P. Dresden “There Are Only Nine Finite Groups of Fractional Linear Transformations with Integer Coefficients” In _Mathematics Magazine_ 77.3 Mathematical Association of America, 2004, pp. 211–218
* [11] Xander Faber “Finite p-Irregular Subgroups of PGL(2,k)”, 2021 arXiv:1112.1999 [math.NT]
* [12] Theodoulos Garefalakis “On the action of $GL_{2}(\mathbb{F}_{q})$ on irreducible polynomials over $\mathbb{F}_{q}$” In _Journal of Pure and Applied Algebra_ 215, 2011, pp. 1835–1843
* [13] Rod Gow and Gary McGuire “Invariant rational functions, linear fractional transformations and irreducible polynomials over finite fields” In _Finite Fields and Their Applications_ 79, 2022, pp. 101991
* [14] Rod Gow and Gary McGuire “On the realization of subgroups of $PGL(2,F)$, and their automorphism groups, as Galois groups over function fields” arXiv, 2021 URL: https://arxiv.org/abs/2109.06693
* [15] Anna-Maurin Graner and Gohar M. Kyureghyan “Constructing irreducible polynomials recursively with a reverse composition method”, 2023 arXiv:2301.09373 [math.NT]
* [16] Bertram Huppert “Endliche Gruppen 1” 134, Grundlehren der mathematischen Wissenschaft Springer-Verlag Berlin Heidelberg, 1967
* [17] Melsik K. Kyuregyan and Gohar Kyureghyan “Irreducible Compositions of Polynomials over Finite Fields” In _Designs, Codes and Cryptography_ 61 arXiv, 2011, pp. 301–314
* [18] Helmut Meyn “On the Construction of Irreducible Self-Reciprocal Polynomials Over Finite Fields” In _AAECC 1_ , 1990, pp. 43–53
* [19] Daniel Panario, Lucas Reis and Qiang Wang “Construction of irreducible polynomials through rational transformations” In _Journal of Pure and Applied Algebra_ 224.5, 2020, pp. 106241
* [20] Lucas Reis “Contemporary topics in Finite Fields: Existence, characterization, construction and enumeration problems”, 2018
* [21] Lucas Reis “Möbius-like maps on irreducible polynomials and rational transformations” In _Journal of Pure and Applied Algebra_ 224, 2019, pp. 169–180
* [22] Lucas Reis “On the existence and number of invariant polynomials” In _Finite Fields and Their Applications_ 61, 2020, pp. 101605
* [23] Lucas Reis “The action of $GL_{2}(\mathbb{F}_{q})$ on irreducible polynomials over $\mathbb{F}_{q}$, revisited” In _Journal of Pure and Applied Algebra_ 222.5, 2018, pp. 1087–1094
* [24] Steven Roman “Field Theory”, Graduate Texts in Mathematics Springer-Verlag New York, 2006
* [25] Vladimir M. Sidel’nikov “On normal bases of a finite field” In _MATH USSR SB_ 61 (2), 1988, pp. 485–494
* [26] Henning Stichtenoth and Alev Topuzoğlu “Factorization of a class of polynomials over finite fields” In _Finite Fields and Their Applications_ 18.1, 2012, pp. 108–122
* [27] Robert C. Valentini and Manohar L. Madan “A Hauptsatz of L. E. Dickson and Artin-Schreier extensions.” In _Journal für die reine und angewandte Mathematik_ 318, 1980, pp. 156–177
# Substitution-based Semantic Change Detection using Contextual Embeddings
Dallas Card
University of Michigan School of Information, Ann Arbor, MI
<EMAIL_ADDRESS>
###### Abstract
Measuring semantic change has thus far remained a task where methods using
contextual embeddings have struggled to improve upon simpler techniques
relying only on static word vectors. Moreover, many of the previously proposed
approaches suffer from downsides related to scalability and ease of
interpretation. We present a simplified approach to measuring semantic change
using contextual embeddings, relying only on the most probable substitutes for
masked terms. Not only is this approach directly interpretable, it is also far
more efficient in terms of storage, achieves superior average performance
across the most frequently cited datasets for this task, and allows for more
nuanced investigation of change than is possible with static word vectors.
## 1 Introduction
Measuring semantic change is one of the few areas of NLP where contextual
embeddings have not yet led to a definitive improvement over previous methods.
In particular, the commonly used approach of aligning static embeddings
trained on different time periods (Hamilton et al., 2016b) continues to be a
surprisingly hard-to-beat baseline.
Given that contextual embeddings provide a representation for each occurrence
of a word in context, they would seem to be ideally suited to a more nuanced
investigation of semantic change. Most attempts to leverage them for this
purpose, however, produce quantitatively worse results, while being less
interpretable and requiring more resources.
Here, we present a simplified and improved approach to scalable,
interpretable, semantic change detection using contextual embeddings. Inspired
by Eyal et al. (2022), we work only with the most probable replacements for
masked words, and measure semantic change in terms of the distributions of
replacements in each time period. Not only does this better match human
judgements, it is highly space efficient, works seamlessly for out-of-
vocabulary words, and helps intuitively characterize meaning change and
variation.
## 2 Background
Measuring semantic change involves a set of tasks related to determining if
and how a term’s meaning has changed over time. Here, we focus on the task of
measuring the amount of change that has occurred from one time period to
another Gulordava and Baroni (2011); Schlechtweg et al. (2020).111For surveys
of computational approaches to lexical semantic change detection, see Kutuzov
et al. (2018), Tang (2018), and Tahmasebi et al. (2021).
Existing approaches to this task are mostly of two types. The first is
associating each term with a single vector per time period and measuring the
distance between vectors, of which we take Hamilton et al. (2016b) to be
representative. As a variation on this, several authors have proposed
averaging the output of contextual embedding models to get a single vector per
term in each time period, but this has generally not led to an improvement
over using static vectors (Martinc et al., 2020a; Kurtyigit et al., 2021; Liu
et al., 2021). A related approach is to represent words in terms of their
nearest neighbors using static word vectors (Hamilton et al., 2016a; Gonen et
al., 2020), but this does not show a clear improvement over other static
embedding methods (Montariol et al., 2021).
A second type of approach begins with various methods for word sense
induction, then measures change in terms of the relative prevalence of a
term’s different senses (Frermann and Lapata, 2016; Hu et al., 2019; Arefyev
and Zhikov, 2020; Arefyev and Bykov, 2021). In some cases, authors simply
cluster contextual representations for each term, and measure differences in
the distributions of clusters between two time periods, rather than dealing
with explicit word senses (Giulianelli et al., 2020; Martinc et al., 2020b;
Montariol et al., 2021).
Despite the additional information provided by contextual embedding models,
methods using type embeddings (as opposed to token embeddings) continue to be
competitive. For example, on the recent SemEval multilingual semantic change
detection task, none of the top four systems used token embeddings
(Schlechtweg et al., 2020). Methods using contextual embeddings have done
better on some more recent mono-lingual shared tasks (Kutuzov and Pivovarova,
2021; Zamora-Reina et al., 2022), but have not yet been evaluated with a
consistent setup across multiple languages.
## 3 Methods
Building on Eyal et al. (2022), we represent each token in the corpus (or a
sufficiently large sample of them) by a small set of probable replacement
terms from a contextual embedding model. However, whereas Eyal et al. (2022)
did this for the purpose of word sense disambiguation, we do so for the
purpose of measuring semantic change.
For each sampled occurrence of each term, we mask the term of interest, feed
the masked context through a model, and obtain the predicted token
probabilities corresponding to the mask token.222Words that get tokenized into
multiple word pieces are replaced by a single mask token. From these, we save
only the top-$k$ most probable words (excluding stopwords and partial word
pieces), and discard the rest.
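As a concrete illustration of this step, the sketch below (an added example, not the authors' released code) uses HuggingFace transformers and PyTorch with an assumed model name and a placeholder stopword list; it masks one occurrence of a target word and keeps only its top-$k$ substitutes.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-uncased"  # assumed model; the paper uses larger / language-specific models
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in"}  # placeholder list

def top_k_substitutes(context: str, target: str, k: int = 5):
    """Mask the first occurrence of `target` in `context` and return the k most
    probable whole-word substitutes (stopwords and partial word pieces excluded)."""
    masked = context.replace(target, tokenizer.mask_token, 1)
    inputs = tokenizer(masked, return_tensors="pt", truncation=True)
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos[0]]
    substitutes = []
    for idx in torch.argsort(logits, descending=True):
        token = tokenizer.convert_ids_to_tokens(int(idx))
        if token.startswith("##") or token in STOPWORDS or not token.isalpha():
            continue
        substitutes.append(token)
        if len(substitutes) == k:
            break
    return substitutes

print(top_k_substitutes("The plane landed safely at the airport.", "plane"))
```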
For a given term in a particular time period, we then count how many times
each word in the model vocabulary has appeared as a top-$k$ replacement for
that term, and normalize this by its sum, giving us a distribution over
replacements. To obtain a raw score of semantic change between two time
periods, we compute the Jensen-Shannon Divergence (JSD) between the two
distributions representing the same term in different time periods. However,
as we show below, the raw JSD scores are strongly correlated with term
frequency. Thus, to obtain a scaled metric, we convert the raw JSD scores into
a quantile, comparing the raw score for a term of interest to other terms with
similar frequency.
Compared to saving the full output vector per token, this approach only
requires a miniscule amount of storage per token, and thus does not require
the kind of heuristic dropping of tokens employed by Montariol et al. (2021).
In addition, the dominant meanings of a word in each context can be summarized
by the terms which occur most frequently among the top-$k$ replacements.
Although such replacements are limited to the terms which exist in the model
vocabulary, in practice this is sufficient to represent a nuanced set of
meanings, and works even for words which get tokenized into multiple word
pieces, as we show below.
More formally, given two corpora C1 and C2, let the count of token $v$ as a
top-$k$ replacement for term $t$ in corpus $c$ be:
$\textrm{count}(v,t,c)=\Sigma_{i=1}^{N_{c}(t)}\mathbb{I}[v\in R(t,i,k)],$ (1)
where $R(t,i,k)$ is the set of top-$k$ most probable replacements for
occurrence $i$ of term $t$ (excluding stopwords and partial word pieces in the
model vocabulary), and $N_{c}(t)$ is the number of sampled occurrences of term
$t$ in corpus $c$.333Unlike Eyal et al. (2022), we do not combine
probabilities for different forms of the same lemmas in the model vocabulary.
In addition, we do not exclude the target term from the top-$k$ replacements,
except implicitly for terms which get split into multiple word pieces.
Let $\Delta_{t}^{c}$ be the distribution of top-$k$ replacement counts for
term $t$ in corpus $c$, obtained by dividing the corresponding vector of
counts (i.e., [$\textrm{count}(\cdot,t,c)$]) by the sum over the model
vocabulary. The raw change score for term $t$ is given by the JSD between the
two distributions:
$\textrm{raw}(t)=\textrm{JSD}\left(\Delta_{t}^{C1},\Delta_{t}^{C2}\right).$
(2)
Finally, we correct for frequency effects by rescaling the raw JSD scores
against the scores for terms with similar frequency as the target term, giving
us a quantile scaled in [0, 1]:
$\textrm{scaled}(t)=\Sigma_{s\in T(t)}\mathbb{I}[\textrm{raw}(t)\geq\textrm{raw}(s)]/|T(t)|,$ (3)
where $T(t)$ is the set of terms with similar frequency to term $t$ (excluding
term $t$ itself). More specifically, we compare against all terms within a
fixed factor of the target frequency:
$T(t)=\\{s:\textrm{fr}(t)/F\leq\textrm{fr}(s)\leq\textrm{fr}(t)\times F,s\neq t\\},$ (4)
where $\textrm{fr}(t)$ is the frequency of term $t$ in the corpus, with window
factor $F$.
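A compact numpy sketch of the scoring steps defined by Eqs. (1)–(4) might look as follows; the dictionary-based bookkeeping and the neutral default for terms with no frequency neighbours are assumptions of this illustration rather than details taken from the paper.

```python
import numpy as np

def replacement_distribution(substitute_sets, vocab_index):
    """Eq. (1), normalized: how often each vocabulary word appears among the
    top-k substitutes of one term in one corpus, as a probability distribution."""
    counts = np.zeros(len(vocab_index))
    for subs in substitute_sets:            # one top-k set per sampled mention
        for w in subs:
            counts[vocab_index[w]] += 1
    return counts / counts.sum()

def jsd(p, q, eps=1e-12):
    """Eq. (2): Jensen-Shannon divergence between two distributions."""
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def scaled_score(term, raw, freq, F=2.0):
    """Eqs. (3)-(4): quantile of raw JSD among terms of comparable frequency."""
    comparison = [s for s in raw
                  if s != term and freq[term] / F <= freq[s] <= freq[term] * F]
    if not comparison:
        return 0.5   # no comparable background terms; a neutral default (assumption)
    return float(np.mean([raw[term] >= raw[s] for s in comparison]))
```

Here `raw` and `freq` would map each term (the targets plus the sampled background terms) to its raw JSD score and its corpus frequency, respectively.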
## 4 Experiments
To evaluate our method we make use of datasets for which there have been prior
evaluations of methods across multiple languages, following standards
established by past work for the sake of a head-to-head comparison.444Code to
replicate these experiments is available at
https://github.com/dallascard/SBSCD
### 4.1 Data
We use five datasets with words labeled in terms of semantic change between
two time periods. Four of these are from SemEval 2020 Task 1: Unsupervised
Lexical Semantic Change Detection (SE; Schlechtweg et al., 2020). These
datasets contain 31 to 48 terms from four languages, graded in terms of change
by human raters, along with accompanying corpora to be used in estimating the
amount of change. The fifth dataset (GEMS) comes from Gulordava and Baroni
(2011), and contains 100 words labeled in terms of semantic change from the
1960s to 1990s. As with most recent papers which use this dataset, we use the
Corpus of Historical American English (COHA; Davies, 2010) for measuring
change in the GEMS words.
### 4.2 Experimental Details
For each dataset, we fine-tune an appropriate BERT model on the union of the
two associated unlabeled corpora using continued masked language model
training with the HuggingFace transformers package. We then index the corpora
to find all occurrences of each word. For all target words, along with a
random set of 10,000 background terms, we randomly sample up to 4,000
occurrences of each from the associated corpora. We process all sampled tokens
as described above to obtain and store the top-$k$ replacements for each, with
$k=5$. Using the replacements obtained from the model, we compute raw JSD
scores for each term. Finally, we convert these to scaled scores by comparing
to the background terms that have frequency within a factor of two of the
target term (i.e., $F=2$).
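The continued masked-language-model adaptation mentioned at the start of this paragraph can be set up with the standard HuggingFace Trainer API; the snippet below is a minimal assumed configuration (the file name, sequence length, and batch size are illustrative, not the paper's exact settings).

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL = "bert-large-uncased"   # per-dataset model choice, see Appendix A
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)

# union of the two time-period corpora, one document per line (assumed file layout)
raw = load_dataset("text", data_files={"train": "corpus_union.txt"})
tokenized = raw["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="mlm_adapted", num_train_epochs=5,
                         per_device_train_batch_size=16, save_strategy="no")
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
model.save_pretrained("mlm_adapted")
```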
Following past work, we evaluate using Spearman correlation with human
ratings, comparing against the best results from recent papers. In particular,
we include two results based on slight variations on Hamilton et al. (2016b),
one of which was the best performing method in the SemEval competition (Pömsl
and Lyapin, 2020), as well as methods using contextual embeddings (Martinc et
al., 2020b; Montariol et al., 2021). For full experimental details, please
refer to Appendix A.
### 4.3 Results
Full results are given in Table 1. Although our method is not uniformly better
than all previous methods on all datasets, it does produce the best result on
average, as well as improvements on GEMS, SE English and SE Latin.
| GEMS | SE Eng | SE Ger | SE Lat | SE Swe | Average | Average (weighted)
---|---|---|---|---|---|---|---
Number of words | 96∗ | 37 | 40 | 48 | 31 | |
_Static Embedding Methods_ | | | | | | |
Pömsl and Lyapin (2020) | - | 0.422 | 0.725 | 0.412 | 0.547 | - | -
Montariol et al. (2021) [static] | 0.347 | 0.321 | 0.712 | 0.372 | 0.631 | 0.477 | 0.452
_Contextual Embedding Methods_ | | | | | | |
Martinc et al. (2020b) | 0.510 | 0.313 | 0.436 | 0.467 | -0.026 | 0.340 | 0.394
Montariol et al. (2021) [contextual] | 0.352 | 0.437 | 0.561 | 0.488 | 0.321 | 0.432 | 0.422
Scaled JSD | 0.535 | 0.547 | 0.563 | 0.533 | 0.310 | 0.498 | 0.514
Table 1: Spearman correlation results on five datasets, including both an
unweighted average and an average weighted by number of words. Pömsl and
Lyapin (2020) was the best submission from SemEval 2020 Task 1, but did not
evaluate on GEMS. Montariol et al. (2021) included results using static
vectors, as well as several variations on their own method using contextual
embeddings, of which we take the one with the highest average performance.
Martinc et al. (2020b) only evaluated on GEMS, so we report the replication
results from Montariol et al. (2021). ∗We exclude four terms from GEMS to
match past work; for full results on GEMS, please refer to Appendix D.
As an example to better understand these results, the raw JSD scores from our
method are shown in Figure 1 (top) for the SE English data, with select terms
labeled. As can be seen, there is a strong relationship between term frequency
and raw JSD, hence the need to rescale the raw scores relative to terms with
similar frequency. After rescaling, we see a strong correlation between our
final semantic change scores and the human ratings, as shown in Figure 1
(bottom) for the SE English data.
Figure 1: Top: Raw JSD scores for both target and randomly chosen background terms in the SE English dataset, plotted against term counts. Bottom: Human ratings for SE English, plotted against scaled JSD scores, along with a fitted regression line (solid) and the 1:1 diagonal (dotted). Select terms in Table 2 are labeled.
Word | SE rating | SE rank | Scaled JSD | Scaled JSD rank | Corpus A substitutes (1810–1860) | Corpus B substitutes (1960–2010)
---|---|---|---|---|---|---
plane | 0.88 | 1 | 0.97 | 1 | plane line planes point surface lines | plane aircraft planes jet airplane car
graft | 0.55 | 4 | 0.97 | 2 | tree plant stock vine fruit wood | corruption bribery fraud crime violence
tip | 0.68 | 2 | 0.85 | 7 | tipped tip covered end filled tips give | tip tips end tipped edge point top ends
gas | 0.16 | 23 | 0.72 | 14 | gas gases vapor air fire water | gas gasoline oil gases fuel water air
head | 0.30 | 10 | 0.68 | 16 | head face hand heads hands eyes | head face heads hand body hands eyes
bit | 0.31 | 9 | 0.51 | 23 | bit piece sort little pieces bits kind | bit little lot touch tad piece bits pieces
fiction | 0.02 | 35 | 0.41 | 27 | fiction history literature art poetry | fiction fact fantasy story stories novels
tree | 0.07 | 33 | 0.22 | 33 | trees tree plants branches plant wood | trees tree plants woods branches bushes
ounce | 0.28 | 11 | 0.08 | 37 | ounce inch pounds hour acre dollars | ounce pounds inch inches cups pieces
Table 2: Example terms from the SE English dataset, showing the most common
substitutes from our approach.
As with the approach of Hamilton et al. (2016b), our method supports direct
interpretation of semantic change. To understand the change in a word’s
typical usage, we can look at the overall most common replacements from each
time period. Table 2 shows the scores and rankings of several selected terms
from SE English, along with the most common substitutes from each time period.
Looking at the results, we can see, for example, strong agreement with human
annotators on a dramatic change in the meaning of _plane_ (comparing 1810–1860
vs. 1960–2010), from the geometric concept to the flying machine. On the other
hand, our results suggest that human raters may have slightly underestimated
the amount of change in the meaning of _graft_ , which was previously used
mostly in reference to vegetation, but now most commonly refers to
corruption.555Note that because _graft_ is not a term in the BERT vocabulary,
the term itself does not appear as a potential substitute, but the results
remain interpretable nonetheless.
By contrast, _ounce_ may be a case where our method has underestimated the
change that has taken place. Older usages seem to map more generically to a
wider range of quantities (hence the appearance among the early substitutes of
_hour_ , _acre_ , and _dollars_), whereas modern usage seems more restricted.
Indeed, we do find some difference in the distribution of substitutes between
the two time periods, but less of a difference than is typical for words with
similar frequency, hence the low final score from our method (see Figure 1).
Although we do not emphasize it in this paper, our method can easily be
combined with the approach of Eyal et al. (2022) to further investigate
meaning changes, by inferring senses from the term replacements, and looking
at how their usage varies by time period. In particular, for each target term,
we can construct a graph from the set of term substitutes (as nodes), where
edge weights represent the number of top-$k$ replacement sets in which two substitutes
co-occur. Following Eyal et al. (2022), we experiment with Louvain community
detection to identify sense clusters from these graphs for each term of
interest, and use Jaccard similarity to associate each mention with a sense
cluster, based on substitute overlap (see Appendix A for details).
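A minimal sketch of this clustering step (assuming networkx 2.8+ for `louvain_communities`; the helper names are illustrative, not the authors' code) could look like this:

```python
from itertools import combinations
import networkx as nx

def sense_clusters(substitute_sets, seed=0):
    """Build a weighted co-occurrence graph over substitutes (edge weight = number
    of top-k sets in which two substitutes appear together) and return Louvain communities."""
    G = nx.Graph()
    for subs in substitute_sets:                       # one top-k set per mention
        for a, b in combinations(sorted(set(subs)), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)
    return list(nx.community.louvain_communities(G, weight="weight", seed=seed))

def assign_sense(substitutes, clusters):
    """Associate one mention with the cluster maximizing Jaccard similarity
    between the mention's top-k substitutes and the cluster's terms."""
    s = set(substitutes)
    jaccard = lambda c: len(s & c) / len(s | c)
    return max(range(len(clusters)), key=lambda i: jaccard(clusters[i]))
```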
Inspecting the distribution of these senses over time helps to distinguish the
gradual adoption of existing senses from the creation of new ones. For
example, the most common sense of _plane_ is captured by the sense cluster
{_aircraft_ , _jet_ , _airplane_ , _car_}, and as expected, this sense is not
found in the 1810–1860 English data, except for two instances which appear to
be errors in the inferred sense. By contrast, the second most common
sense—{_planes_ , _line_ , _point_ , _surface_}—appears in both time periods,
but is much more common in the earlier time.
This approach also provides more insight into how the meaning of _graft_ has
changed. The most common sense cluster is the horticultural meaning {_tree_ ,
_plant_ , _stock_ , _vine_}, and this meaning occurs in both time periods, but
is much more common in the earlier one. A second cluster, corresponding to
illicit activity—{_corruption_ , _violence_ , _bribery_ , _fraud_}—occurs only
in the later time period. This clustering method also surfaces a third sense
with a medical meaning—{_transplant_ , _surgery_ , _disease_ , _drug_}—which
is not revealed by the top few overall most common replacements given in Table
2.
## 5 Discussion and Related Work
As noted by others, new and larger datasets for rigorously evaluating semantic
change are badly needed (Tahmasebi et al., 2021). Existing datasets are
relatively small, and are mostly based on inspecting a limited number of
examples per term. Unfortunately, determining ground truth for semantic change
is challenging, and producing such resources is costly. Ideally, future
datasets for evaluation should be larger, both to allow for more robust
evaluation, and to have sufficient targets for both hyperparameter tuning and
evaluation.
In addition to the dataset we have used in this paper, two others are
available from shared tasks on Spanish and Russian, respectively (Kutuzov and
Pivovarova, 2021; Zamora-Reina et al., 2022). Both of these are comparable in
size to the GEMS dataset used here. Unfortunately, they are less useful for
evaluation because most submissions to these shared tasks only evaluated on
the task data, and not on other datasets. As shown by the replication of
Martinc et al. (2020b) in Montariol et al. (2021), a method can sometimes
perform well on one language but fail to generalize to others. As such, we
have based our evaluation on datasets for which there has been a consistent
evaluation of methods across multiple languages. As future work, a careful
replication study of all methods from each competition on all available
datasets, including an assessment of sensitivity to hyperparameters, would be
highly informative.
Besides Eyal et al. (2022), the closest prior work to ours is Kudisov and
Arefyev (2022), who use dynamic patterns to generate many variations on
example usages sampled from the given corpora. These variations are then used
to generate hundreds of replacement terms from a masked language model with
associated probabilities. These probabilities are averaged (heuristically
combining replacements with differing numbers of word pieces) to obtain a mean
vector for each sampled instance. Finally, semantic change is computed as the
average cosine distance between all pairs of vectors across corpora. This
method was evaluated as part of the LSCDiscovery shared task on Spanish
(Zamora-Reina et al., 2022). Preliminary work on this method was described in
Arefyev and Bykov (2021), where a slightly different version of it was
evaluated on the RuShiftEval shared task on Russian (Kutuzov and Pivovarova,
2021).
Compared to Kudisov and Arefyev (2022), our approach is considerably simpler,
and better suited to storing representations of a complete corpus for
subsequent analysis and exploration. In particular, we only consider a small
number of substitutes for each example (storing only the top-$k$ most probable
terms, without the associated probabilities). We do not use dynamic patterns,
and only consider terms in the model vocabulary as potential substitutes. We
also associate each term with a single distribution over the model vocabulary
per time period (not per mention), and use Jensen-Shannon divergence to more
naturally measure the distance between distributions. Importantly, we also
correct for frequency effects, as described above.
Although our approach avoids the onerous storage requirements of methods which
save full contextual vectors, it still requires considerable processing time
to obtain the top-$k$ replacements for all tokens. Future work could explore
smaller or more efficient models for this purpose.666See Appendix B for
results using various model sizes.
Finally, despite its simplicity, measuring the cosine distance between aligned
static vectors remains a strong and efficient baseline (Hamilton et al.,
2016b). More work is needed to determine where contextual embeddings can offer
sufficient advantage in measuring semantic change to justify their greater
computational cost.
Compared to static embeddings, our approach is weakest on the German and
Swedish datasets, which could relate to the quality of the pretrained models
that are available for those languages, the data used for pretraining, or
perhaps issues that arise in tokenization of the reference corpora. For a
tentative exploration of some possible factors, please refer to Appendix C.
## 6 Conclusion
We have presented a simplified and improved approach to measuring semantic
change using contextual embeddings, based on the Jensen-Shannon Divergence
between the distributions of the most probable replacements for masked tokens
in different time periods, corrected for frequency effects. This approach
achieves superior performance on average, while remaining directly
interpretable, with vastly reduced storage requirements.
## Limitations
There are several limitations to this work which should be kept in mind. First
and foremost, the datasets for evaluating the measurement of semantic change
are relatively small, meaning that any estimates of correlation with human
judgements will be relatively high variance. In addition, although the SemEval
data includes text from four languages, there is no guarantee that these
methods will work as well as they do on other languages or other time periods.
Moreover, our approach depends on the use of pretrained language models, and
the quality (or existence) of these and other relevant resources will vary by
language.
In addition, like all methods, our approach involves numerous small choices,
such as the number of background terms to sample, the number of samples taken,
and the value of $k$ in choosing top substitutes. We have kept our choices for
these consistent across all five datasets, and these values have not been
tuned. As such, different choices could result in better or worse correlation
with human judgements. It is also worth noting that the human judgements
collected by the creators of these datasets may involve errors or noise. It is
possible that a different sample of data, or having different people evaluate
the same data, would produce different judgements.
For exploring the variation in word meanings, we have used the approach of
Eyal et al. (2022) directly, with the only differences being that we mask
terms of interest (allowing us to work with terms that do not exist in the
model vocabulary), and do not combine multiple forms of lemmas when getting
the top-$k$ terms. We adopt this approach because it is especially easy to
combine with our own work, but different methods for word sense induction
might lead to different conclusions about the different meanings of a term
that existed in any particular time period. In addition, any conclusions drawn
are necessarily limited to the corpora that are used, most of which will be a
highly biased sample of all text that was produced by all people for any given
period of time.
## Ethical Considerations
This work only uses well established datasets for the purposes for which they
were designed (studying changes in languages and evaluating measurement of
semantic change), thus poses few ethical concerns that did not already exist
for these data. Nevertheless, it is worth emphasizing that all of the methods
discussed in this paper only return, at best, a noisy estimate of semantic
change. Words are used differently by different people, and attempts to
measure changes in language inevitably simplify the diversity of uses into a
single number, which discards a great deal of nuance. As such, any work
applying these methods to measure semantic change should be aware of their
limitations and proceed carefully.
## Acknowledgements
Many thanks to Kaitlyn Zhou and anonymous reviewers for helpful comments and
suggestions.
## References
* Arefyev and Zhikov (2020) Nikolay Arefyev and Vasily Zhikov. 2020. BOS at SemEval-2020 task 1: Word sense induction via lexical substitution for lexical semantic change detection. In _Proceedings of the Fourteenth Workshop on Semantic Evaluation_.
* Arefyev and Bykov (2021) Nikolay V. Arefyev and D. A. Bykov. 2021. An interpretable approach to lexical semantic change detection with lexical substitution. In _Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies (Dialogue)_.
* Davies (2010) Mark Davies. 2010. The corpus of historical American English (COHA). Available online at https://www.english-corpora.org/coha/.
* Eyal et al. (2022) Matan Eyal, Shoval Sadde, Hillel Taub-Tabib, and Yoav Goldberg. 2022. Large scale substitution-based word sense induction. In _Proceedings of ACL_.
* Frermann and Lapata (2016) Lea Frermann and Mirella Lapata. 2016. A Bayesian model of diachronic meaning change. _Transactions of the Association for Computational Linguistics_.
* Giulianelli et al. (2020) Mario Giulianelli, Marco Del Tredici, and Raquel Fernández. 2020. Analysing lexical semantic change with contextualised word representations. In _Proceedings of ACL_.
* Gonen et al. (2020) Hila Gonen, Ganesh Jawahar, Djamé Seddah, and Yoav Goldberg. 2020. Simple, interpretable and stable method for detecting words with usage change across corpora. In _Proceedings of ACL_.
* Gulordava and Baroni (2011) Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books ngram corpus. In _Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics_.
* Hamilton et al. (2016a) William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016a. Cultural shift or linguistic drift? Comparing two computational measures of semantic change. In _Proceedings of EMNLP_.
* Hamilton et al. (2016b) William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016b. Diachronic word embeddings reveal statistical laws of semantic change. In _Proceedings of ACL_.
* Hu et al. (2019) Renfen Hu, Shen Li, and Shichen Liang. 2019. Diachronic sense modeling with deep contextualized word embeddings: An ecological view. In _Proceedings of ACL_.
* Kudisov and Arefyev (2022) Artem Kudisov and Nikolay Arefyev. 2022. BOS at LSCDiscovery: Lexical substitution for interpretable lexical semantic change detection. In _Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change_.
* Kurtyigit et al. (2021) Sinan Kurtyigit, Maike Park, Dominik Schlechtweg, Jonas Kuhn, and Sabine Schulte im Walde. 2021. Lexical semantic change discovery. In _Proceedings of ACL_.
* Kutuzov et al. (2018) Andrey Kutuzov, Lilja Øvrelid, Terrence Szymanski, and Erik Velldal. 2018. Diachronic word embeddings and semantic shifts: A survey. In _Proceedings of the International Conference on Computational Linguistics_.
* Kutuzov and Pivovarova (2021) Andrey Kutuzov and Lidia Pivovarova. 2021. RuShiftEval: A shared task on semantic shift detection for Russian. In _Proceedings of the International Conference on Computational Linguistics and Intellectual Technologies (Dialogue)_.
* Liu et al. (2021) Yang Liu, Alan Medlar, and Dorota Glowacka. 2021. Statistically significant detection of semantic shifts using contextual word embeddings. In _Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems_.
* Martinc et al. (2020a) Matej Martinc, Petra Kralj Novak, and Senja Pollak. 2020a. Leveraging contextual embeddings for detecting diachronic semantic shift. In _Proceedings of the Twelfth Language Resources and Evaluation Conference_.
* Martinc et al. (2020b) Matej Martinc, Syrielle Montariol, Elaine Zosa, and Lidia Pivovarova. 2020b. Capturing evolution in word usage: Just add more clusters? In _Proceedings of the Web Conference 2020_.
* Montariol et al. (2021) Syrielle Montariol, Matej Martinc, and Lidia Pivovarova. 2021. Scalable and interpretable semantic change detection. In _Proceedings of NAACL_.
* Pömsl and Lyapin (2020) Martin Pömsl and Roman Lyapin. 2020. CIRCE at SemEval-2020 task 1: Ensembling context-free and context-dependent word representations. In _Proceedings of the Fourteenth Workshop on Semantic Evaluation_.
* Schlechtweg et al. (2020) Dominik Schlechtweg, Barbara McGillivray, Simon Hengchen, Haim Dubossarsky, and Nina Tahmasebi. 2020. SemEval-2020 task 1: Unsupervised lexical semantic change detection. In _Proceedings of the Fourteenth Workshop on Semantic Evaluation_.
* Tahmasebi et al. (2021) Nina Tahmasebi, Lars Borin, and Adam Jatowt. 2021. Survey of computational approaches to lexical semantic change detection. In Nina Tahmasebi, Lars Borin, Adam Jatowt, Yang Xu, and Simon Hengchen, editors, _Computational approaches to semantic change_ , chapter 1, pages 1–91. Language Science Press.
* Tang (2018) Xuri Tang. 2018. A state-of-the-art of semantic change computation. _Natural Language Engineering_ , 24(5):649–676.
* Zamora-Reina et al. (2022) Frank D. Zamora-Reina, Felipe Bravo-Marquez, and Dominik Schlechtweg. 2022. LSCDiscovery: A shared task on semantic change discovery and detection in Spanish. In _Proceedings of the 3rd Workshop on Computational Approaches to Historical Language Change_.
## Appendix A Experimental Details
For each dataset, we use a BERT model, preferring a high quality monolingual
model where available. For GEMS and SE English, we use bert-large-uncased. For
SE Latin we use bert-base-multilingual-uncased, deepset/gbert-large for SE
German, and KB/bert-base-swedish-cased for SE Swedish, with all models
available through HuggingFace. In all cases, we first adapt the model to the
dataset by doing continued masked language model training for five epochs on
the union of the two associated corpora.
For the SemEval data, the corpora are provided in both raw and lemmatized
formats, with the target terms given as lemmas. Because the contextual
embedding models have been trained on non-lemmatized text, we prefer to embed
mentions using the raw (non-lemmatized data). However, because of uncertainty
about how the raw text was lemmatized, we begin by aligning the lemmatized
data to the non-lemmatized text. We then index terms in the lemmatized data
(for both target terms and random background terms), and then map these
indices to indices in the corresponding non-lemmatized data, which we then
sample to get replacements.
To do the alignment, we begin by tokenizing the text, and then removing the
punctuation from both the lemmatized and non-lemmatized text, storing indices
to allow mapping back to the original token sequences in the non-lemmatized
data. For each pair of texts (a raw and a lemmatized form), we first identify
tokens that occur exactly once in each, and align the positions of these to
each other, as long as the ordering of these tokens is consistent. We then
recursively do this for the subsequences between each adjacent pair of aligned
tokens. Given these landmark alignments, (using exact matches), we then
attempt to align all remaining substrings between each pair of aligned tokens,
(adding padding tokens as necessary), using Levenshtein distance as a
heuristic way to evaluate possible token alignments. Finally, we do a post-
alignment correction to consider inserting a padding token in each position to
correct for occasional off-by-one errors, and taking the best scoring overall
alignment.
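The landmark step of this alignment can be illustrated with a simplified, non-recursive sketch (the function and variable names are assumptions of this illustration; the released code additionally handles the recursion, padding tokens, and Levenshtein scoring):

```python
from collections import Counter

def landmark_anchors(raw_tokens, lemma_tokens):
    """Pair up tokens that occur exactly once in both sequences, greedily keeping
    only pairs whose relative order is consistent across the two sequences."""
    raw_counts, lemma_counts = Counter(raw_tokens), Counter(lemma_tokens)
    unique = [t for t in raw_counts
              if raw_counts[t] == 1 and lemma_counts.get(t) == 1]
    pairs = sorted((raw_tokens.index(t), lemma_tokens.index(t)) for t in unique)
    anchors, last = [], -1
    for raw_idx, lemma_idx in pairs:
        if lemma_idx > last:                 # keep only order-consistent anchors
            anchors.append((raw_idx, lemma_idx))
            last = lemma_idx
    return anchors

# The substrings between consecutive anchors are then aligned recursively,
# falling back to Levenshtein-based scoring when no exact landmarks remain.
```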
By inspecting the target tokens in the raw (non-lemmatized text) that are
obtained using this alignment (based on indexing target terms in the
lemmatized version, then mapping these indices to the non-lemmatized text
using the alignment), we find that the vast majority of mentions are properly
aligned. To eliminate the small number of alignment errors, we only keep
tokens that are at least two characters in length where the non-lemmatized
form comprises at least 0.02% of the total number of indexed terms for a given
lemma, and where the first letter of the indexed token matches the first
letter of the target lemma. To account for a small number of special cases
(such as examples in SE Latin where a word sometimes starts with “j” and
sometimes with "i", presumably due to OCR errors), we create a handful of
exceptions to the first letter rule. For full details of this alignment
process and exceptions, please refer to replication
code.777https://github.com/dallascard/SBSCD
In addition, for the SE English data, target terms (only) are given with
specific part of speech tags. However, to better match a random sample of
background lemmas, we ignore part of speech in our experiments, and index all
occurrences of each target term in the lemmatized data. Future work could
explore the impact of restricting measurements to certain parts of speech,
both for target and background terms.
For GEMS, where the targets are not lemmatized, we ignore lemmatization and
simply sample from all exact matches of the target terms as tokens in the raw
text. As with past work, we combine the multiple annotations for the GEMS data
by averaging their scores.
All masked tokens are fed into the appropriate model with up to 50 tokens to
either side from the original context, which returns a probability
distribution over the model vocabulary. When computing the top-$k$ most
probable substitutes, we follow Eyal et al. (2022) and exclude stopwords and
partial word pieces (i.e., those that start with ##). For GEMS and SE English,
we use the stopword list from the Snowball
stemmer.888http://snowball.tartarus.org/algorithms/english/stop.txt For SE
Latin, we use a Latin stopword list from the Perseus Digital
Library.999https://www.perseus.tufts.edu/hopper/stopwords For SE German and SE
Swedish, we use the respective stopword lists from
NLTK.101010https://www.nltk.org/
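For concreteness, the masked-context construction described at the start of this paragraph can be sketched as follows (a simplified illustration; the function name and whitespace tokenization are assumptions, not the exact implementation):

```python
def masked_context(tokens, target_idx, mask_token="[MASK]", width=50):
    """Build the masked-LM input: up to `width` tokens on either side of the
    target occurrence, with the target itself replaced by the mask token."""
    left = tokens[max(0, target_idx - width):target_idx]
    right = tokens[target_idx + 1:target_idx + 1 + width]
    return " ".join(left + [mask_token] + right)
```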
For the exploration of sense clusters in the main paper using Louvain
community detection, we use the same data as used in measuring semantic
change, keeping $k=5$, but we exclude the target term itself when gathering
the top-$k$ substitutes.111111In practice, this is done by initially saving
the top-($k+1$) substitutes, and dropping the target term for the purpose of
clustering, where necessary. We then construct a weighted graph for each
target term, where nodes represent substitutes, and edge weights correspond to
the number of top-$k$ replacement sets in which each pair of replacements
appear together.
To obtain sense clusters, we use the implementation of Louvain community
detection in networkx with default parameter settings, to detect clusters in
the graph.121212https://networkx.org/ Finally, we associate each instance of a
target term with a corresponding cluster using Jaccard similarity between the
instance’s set of top-$k$ replacements and the terms in the cluster.
All of these experiments were run on either an NVidia RTX A6000 or A5000 GPU.
## Appendix B Alternative Models
In order to investigate the effect of model size on the performance of our
approach to measuring semantic change, we try a range of model sizes for BERT
on the English datasets, all available from HuggingFace. The results are shown
in Table 3. As can be seen, there is a clear correlation between model size
and task performance for the SE English data, but this is not the case for the
GEMS dataset, perhaps because the COHA corpus used for GEMS provides longer
contexts for term mentions (see Appendix C).
Model | GEMS | SE English
---|---|---
google/bert_uncased_L-4_H-256_A-4 (mini) | 0.559 | 0.433
google/bert_uncased_L-4_H-512_A-8 (small) | 0.544 | 0.495
google/bert_uncased_L-8_H-512_A-8 (medium) | 0.538 | 0.522
google/bert_uncased_L-12_H-768_A-12 (base) | 0.541 | 0.512
bert-base-uncased | 0.509 | 0.525
bert-large-uncased | 0.535 | 0.547
Table 3: Results on the English datasets (Spearman correlation) using a range
of BERT model sizes on HuggingFace.
We also demonstrate the effect of using a multilingual model, rather than a
language specific model, for all datasets other than SE Latin (for which we
are already using a multilingual model in the main paper). As can be seen in
Table 4, the multilingual model uniformly results in worse performance,
demonstrating the importance of having a strong language-specific model for
measuring semantic change in this way.
Model | GEMS | SE Eng | SE Ger | SE Swe
---|---|---|---|---
bert-base-multilingual-uncased | 0.524 | 0.480 | 0.481 | 0.209
Language specific model (from Table 1 in main paper) | 0.535 | 0.547 | 0.563 | 0.310
Table 4: Results when using a multilingual model, compared to the language
specific models used in the paper.
## Appendix C Exploring Performance Differences Across Languages
Using the method presented in the main paper, our results were better than
using static word vectors for English and Latin, but worse for German and
Swedish. Unfortunately, we do not yet have a satisfactory explanation for this
discrepancy in performance. Notably, other approaches using contextual
embeddings (e.g., Montariol et al., 2021), have also performed worse on these
languages (relative to approaches based on Hamilton et al., 2016b).
Several possible explanations suggest themselves for why methods based on
contextual embeddings might struggle. For example, tokenization used for these
models breaks some words into multiple word pieces, which is not an issue for
static embeddings. Another consideration is the amount of context in which the
examples occur in the reference corpora (since static vectors typically only
use very small context windows, whereas contextual embedding models are
capable of using much longer contexts). We might also consider factors
relevant to all methods, such as the number of examples given for each target
term, or the number of different word forms in which each lemma occurs in the
corpora provided.
Although several of these factors perhaps help to explain why performance on
English is especially good (relative to static vectors), they do not provide a
convincing way to explain the differences in performance observed on the other
languages. In particular, the SE English data has the highest proportion of
target words that occur in the model vocabulary (without being broken into
multiple word pieces), and these lemmas occur in text using the fewest number
of surface forms per target.
By contrast, the other languages tend to have more surface forms, on average,
with fewer of the target terms occurring in the corresponding model
vocabulary, but Swedish is mid-range on the latter (with German being lowest).
Latin, by contrast, tends to have more examples of target terms per corpus in
both time periods (with German again the lowest), but Swedish is between
English and Latin. The Swedish model does have a larger vocabulary, but it is
not as large as the multilingual model we used for Latin. Quantitative
summaries of these factors are presented for reference in Table 5.
Ultimately, perhaps the best explanation has to do with the quality of the
underlying pretrained models available for each language. Given that different
models for different languages were trained on entirely different data, this
seems like a highly relevant source of potential differences. Unfortunately,
is it difficult to assess the overall quality of pretrained models across
languages, so all of these explanations essentially remain no more than
hypotheses for further investigation.
Dataset | Model | Median lower target count | Median target forms | Median context length | % targets as whole words | Vocab size
---|---|---|---|---|---|---
GEMS | bert-large-uncased | 93 | 1 | 191 | 97.0 | 30522
SE Eng | bert-large-uncased | 209 | 4 | 26 | 95.6 | 30522
SE Ger | deepset/gbert-large | 101 | 7 | 28 | 22.9 | 31102
SE Lat | bert-base-multilingual-uncased | 472 | 8 | 28 | 25.0 | 105879
SE Swe | KB/bert-base-swedish-cased | 249 | 9 | 25 | 74.2 | 50325
Table 5: Quantitative summary statistics of various factors which might be
expected to affect differences in performance across languages (relative to
approaches based on static word embeddings). Median lower target count is the
median across target terms of the number of examples of each target term in
the corpus with the lower count (early or later). Median target forms is the
median across examples of the number of surface forms corresponding to each
target lemma. Median context length is the median number of tokens in which
target terms occur. % targets as whole words is the percent of target terms
which exist in the model vocabulary. Vocab size is the number of words in the
model vocabulary. Ultimately, none of these provides a convincing explanation
for observed differences.
## Appendix D Additional Results on GEMS
The GEMS dataset has been used for evaluation by many additional papers,
beyond those discussed in the main body of this paper. However, these have not
all used consistent metrics and corpora, making comparison difficult. For
completeness, we include additional results here, as shown in Table 6.
The GEMS dataset was originally introduced by Gulordava and Baroni (2011),
from whom we obtained the labeled data. These authors reported results in
terms of Pearson correlation, and used multiple datasets for measuring
semantic change, including the Google Books Corpus. Frermann and Lapata (2016)
also used this dataset for evaluation, but used different additional data
(beyond COHA), and reported results in terms of Spearman correlation.
More recent papers using this dataset (from Giulianelli et al., 2020 onwards)
have tended to make use of the COHA data from the 1960s and 1990s as the
corpus in which to measure change, to correspond to the periods used in the
annotation process, which we also use for our results in this paper. Martinc
et al. (2020b) reported very strong results on this dataset, but subsequent
work from the same authors (Montariol et al., 2021) revealed that this method
performed relatively poorly on the SemEval datasets, as reported in Table 1 in
the main paper.
Paper | Pearson | Spearman
---|---|---
Gulordava and Baroni (2011) | 0.386 | -
Frermann and Lapata (2016) | - | 0.377
Giulianelli et al. (2020) [99] | 0.231 | 0.293
Martinc et al. (2020b) [96] | 0.560 | 0.510
Montariol et al. (2021) [96] | - | 0.352
Scaled JSD [96] | 0.532 | 0.535
Scaled JSD [99] | 0.541 | 0.553
Table 6: Additional results on the GEMS dataset from Gulordava and Baroni
(2011). Note that not all papers reporting results on this dataset used the
same corpora or evaluation metric, hence we report both Pearson and Spearman
correlation, and restrict ourselves to the COHA dataset, which was used by all
authors. Numbers in brackets show the number of target terms retained after exclusions. We
evaluate using the exclusions of both Giulianelli et al. (2020) [99] and
Martinc et al. (2020b) [96] to enable a full comparison. Note that the high
correlation reported on this dataset by Martinc et al. (2020b) did not seem to
transfer to the SemEval datasets, as shown by Montariol et al. (2021) and
Table 1 in the main paper.
Different authors have excluded different numbers of words from the 100 target
terms in evaluation. Giulianelli et al. (2020) excluded _extracellular_ due to
insufficient occurrences in COHA during the 1960 and 1990s, which we also
exclude for the same reason. Martinc et al. (2020b) and Montariol et al.
(2021) excluded _assay_ , _extracellular_ , _mediaeval_ , and _sulphate_
because they were split into multiple tokens by BERT. Because we mask the
target terms, multi-piece words are not a problem, but for completeness we
evaluate using the exclusions of both Giulianelli et al. (2020) and Martinc et
al. (2020b) and report both in Table 6.
###### Abstract
This work concerns the evolutionary approaches to distributed stochastic
black-box optimization, in which each worker can individually solve an
approximation of the problem with nature-inspired algorithms. We propose a
distributed evolution strategy (DES) algorithm grounded on a proper
modification to evolution strategies, a family of classic evolutionary
algorithms, as well as a careful combination with existing distributed
frameworks. On smooth and nonconvex landscapes, DES has a convergence rate
competitive to existing zeroth-order methods, and can exploit the sparsity, if
applicable, to match the rate of first-order methods. The DES method uses a
Gaussian probability model to guide the search and avoids the numerical issue
resulted from finite-difference techniques in existing zeroth-order methods.
The DES method is also fully adaptive to the problem landscape, as its
convergence is guaranteed with any parameter setting. We further propose two
alternative sampling schemes which significantly improve the sampling
efficiency while leading to similar performance. Simulation studies on several
machine learning problems suggest that the proposed methods show much promise
in reducing the convergence time and improving the robustness to parameter
settings.
###### Index Terms:
Evolution strategies, distributed optimization, black-box optimization,
stochastic optimization, zeroth-order methods.
## 1 Introduction
We consider the following stochastic optimization problem:
$\min_{\bm{x}\in\mathbb{R}^{n}}f(\bm{x})=\mathbb{E}\left[F(\bm{x};\bm{\xi})\right]$
(1)
where $\bm{x}\in\mathbb{R}^{n}$ is the decision vector, $\bm{\xi}$ is a random
variable, $F$ is an unconstrained real-valued function, and
$\mathbb{E}\left[\cdot\right]$ denotes the expectation taken over the
distribution of $\bm{\xi}$. Problems of this type have a long history dating
back to the 1950s [1] and are still at the heart of many modern applications in
machine learning [2, 3], signal processing [4], and automatic control [5]. For
example, we can let $\bm{\xi}$ be a data point and $F$ a loss assessing how
$\bm{\xi}$ fits to a statistic model parametrized by $\bm{x}$; the problem
(1), in this way, then provides a universal formulation that captures a wide
range of machine learning tasks [6]. The hardness of stochastic optimization
mainly comes from the inherent noisiness, the possibly high dimensionality,
and the complexity of objective landscapes. Despite this hardness, significant
progress in the resolution of problem (1) has been made via exploring the
gradient (first-order) or Hessian (second-order) information of the component
function $F$. A variety of first-order and second-order stochastic
optimization methods have been developed in recent years, enjoying both
theoretical and practical benefits; see [7, 8] for a comprehensive survey.
However, when the landscape characteristics (differentiability, smoothness,
convexity, etc.) are unknown, stochastic optimization remains a challenging
task.
In this paper, we are particularly interested in solving problem (1) in
distributed black-box settings. Concretely, there are $M$ workers having
access to the distribution of $\bm{\xi}$, but, for a given $\bm{x}$, they can
only evaluate the stochastic objective value $F(\bm{x};\bm{\xi})$. The workers
may run individually and exchange information periodically through a parameter
server, so they can minimize $f$ in a collaborative manner. But apart from the
decision vector $\bm{x}$, what they can share during the collaboration is
limited to the function values (zeroth-order information), excluding the
sharing of gradient or curvature information (which is case of existing
first-/second-order distributed methods). The consideration of this setting is
motivated by two scenarios in the real world. The first scenario is related to
the on-device machine learning, sometimes referred to as federated learning
[9, 10] or edge intelligence [11]. It is known that machine learning
practitioners seldom derive gradients manually; instead, they rely on
automatic differentiation [12] which computes the gradient during the function
evaluation using the chain rule. The automatic differentiation tools work well
on usual PCs, but they have a relatively large memory cost which may be
prohibitive on mobile devices. In addition, automatic differentiation only
runs in certain software environments and may cause compatibility issues in
distributed settings.
The second applicable scenario is the parallel solving of stochastic black-box
problems. Consider minimizing a time-consuming black-box function defined over
a massive amount of data, where the goal is to achieve acceleration with a
multicore machine. Due to its black-box nature, the objective function may not
be thread-safe, and therefore we have to use the process-level parallelization
where the data is distributed to multiple processes. Sensor selection [13] and
high-dimensional Cox regression [14] are representative examples that are
suitable for this scenario. These problems are in fact white-box, but the
gradient evaluation is much more expensive than the function evaluation;
treating them as black-box would heavily reduce the demand on computational
resources. Generally, distributed black-box optimization offers a powerful
search paradigm when calculating the gradient is expensive or infeasible, and
it also retains the advantages from classical distributed computing frameworks
in handling big data.
Black-box optimization methods, sometimes known as zeroth-order or derivative-
free optimization methods, require only the availability of objective function
values. They were among the earliest optimization methods in history,
while having attracted renewed interest recently due to the ubiquity of black-
box models. The community has made several efforts in bringing the simplicity
and universality of black-box optimization methods to the distributed world. A
cornerstone of this research line is the Gaussian smoothing technique [15], a
randomized finite-difference method that admits building a smooth surrogate of
the original objective function with only zeroth-order information. With
Gaussian smoothing, we can get a computationally cheap gradient estimator in
the black-box setting, thereby making it possible to reuse existing first-
order methods. Various distributed black-box optimization (DBO) methods have
been proposed, based on the idea of hybridizing Gaussian smoothing with
established distributed optimization methods [16, 17, 18, 19, 20, 21, 22].
These methods are typically easy to implement: the only work to do is to
replace the real gradient with the one produced by Gaussian smoothing. The
disadvantage is that they suffer a dimension-dependent slowdown in convergence
rate, which is the cost that must be paid for the absence of gradient information
[23].
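For concreteness, the following is a minimal sketch of such a (central-difference) Gaussian-smoothing gradient estimator; the function name and parameters are illustrative, not taken from any of the cited works.

```python
import numpy as np

def gaussian_smoothing_grad(f, x, mu=1e-6, num_dirs=10, rng=None):
    """Zeroth-order gradient estimate of f at x via Gaussian smoothing.

    f returns a (possibly noisy) scalar; mu is the smoothing radius."""
    rng = np.random.default_rng() if rng is None else rng
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(x.size)                   # random Gaussian direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs
```

The estimate can then be plugged into any first-order update rule, which is exactly the source of the smoothing-parameter and step-size issues discussed next.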
Current development in DBO methods has not yet been entirely successful;
several common issues can be identified and should be carefully addressed. The
first issue is the introduction of the smoothing parameter, which keeps a
trade-off in improving the gradient estimation accuracy while avoiding
roundoff errors [24, Chapter 8]. Tuning the smoothing parameter is onerous,
and could become even more tricky in the distributed setting, because its
optimal value depends on the computing environment and different workers may
have different environments. The second issue is the lack of
adaptivity, in the sense that decision makers have to tune the step-sizes in
workers or in the server or at both sides. Existing step-size adaptation rules
cannot be generalized to black-box settings easily, as the gradient estimators
produced by Gaussian smoothing do not meet the usual assumptions designed
for first-order methods. There also exist algorithm-specific issues. For
example, in [20], the authors found that a well-developed distributed algorithm,
signSGD [25], may fail to approach optimality when extended to black-box
settings, probably because of the unaffordable sampling effort required for
reducing the bias caused by Gaussian smoothing. For the above reasons, schemes
that are based on Gaussian smoothing do not provide a truly seamless
transformation of first-order distributed methods to the black-box setting.
This calls for the need in developing new DBO methods based on completely
different frameworks.
We propose in this work a new DBO method based on evolution strategies (ESs)
[26, 27, 28], a popular family of nature-inspired methods that excel in black-
box real-valued optimization. Unlike Gaussian smoothing, ESs do not try to
approximate the gradient or its surrogate, but instead guide the search with a
probability distribution and gradually update this distribution on the fly.
Moreover, the update of the distribution is adaptive, requiring no knowledge of
the landscape characteristics and very few user-supplied parameters. These
features make ESs a strong candidate in designing new DBO methods and seem
promising in addressing the aforementioned issues involved in Gaussian
smoothing. ESs also possess a useful feature that they only use the comparison
results of the objective function values among solutions, rather than their
exact values [29]. This is likely to improve the robustness in the presence of
noise, as the noise would not matter unless it changes the comparison results
[30]. On the other hand, ESs are originally designed for non-distributed
noise-less optimization and have not been extended to distributed settings. In
fact, the major components of ESs are grounded on heuristics and a rigorous
convergence analysis is still missing when applied on problems like (1). The
goal of this paper is to help bridge this gap by describing ideas that can
improve the applicability and rigorousness of ESs in the distributed
stochastic setting. In particular, we propose a distributed evolution strategy
(DES) with characteristics highlighted below:
* •
DES adopts a synchronous architecture that employs ESs to perform worker-side
local updates and allows delayed averaging of individual decision vectors to
reduce the communication overhead. It also supports server-side momentum,
which is found to improve the performance in practice.
* •
When the local ES update is driven by an isotropic Gaussian distribution and
when the function landscape is nonconvex, DES is competitive with existing
zeroth-order methods in terms of iteration complexity. When certain sparsity
assumption is met, DES can even align with the convergence rate of first-order
methods.
* •
DES is fully adaptive in the sense that its convergence is guaranteed with any
initial settings. Moreover, no finite differencing is involved, so users need
not worry about roundoff errors.
* •
We propose two alternative probability distributions for generating mutation
vectors in local updates. This significantly reduces the computation cost in
high-dimensional settings.
In the remainder of this article, we first describe some related work in
Section 2. In Section 3 we describe the details of DES and analyze its
convergence properties. We then provide in Section 4 two alternative sampling
methods and discuss their impact on the algorithm performance. Section 5 uses
simulation studies to investigate the performance of our proposals. The
article is concluded in Section 6. This paper has a supplement containing all
proofs of our theoretical findings, as well as additional experimental
results.
Notation Vectors are written in bold lowercase. We use
$\left\|\bm{x}\right\|_{p}$ to denote the $\ell_{p}$ norm of $\bm{x}$. In
addition, we use $\left\|\bm{x}\right\|$ to denote a generic vector norm and
$\left\|\bm{x}\right\|_{\ast}$ its dual norm. We use
$\mathbb{E}\left[\cdot\right]$ to denote the expectation,
$\mathbb{V}\left[\cdot\right]$ the variance,
$\mathbb{I}\left\\{\cdot\right\\}$ the indicator function, and
$\mathbb{P}\left\\{\cdot\right\\}$ the probability. We use $\bm{0}$ and
$\bm{I}$ to denote respectively the zero vector and the identity matrix of
appropriate dimensions. $\mathcal{N}(\bm{0},\bm{I})$ denotes the multivariate
isotropic Gaussian distribution. We use $\bm{e}_{i}$ to denote the $i$-th
column of $\bm{I}$, i.e., the vector with 1 at the $i$-th coordinate and 0s
elsewhere.
## 2 Related work
Distributed optimization is an active field in the optimization and machine
learning communities. Our proposal belongs to the class of synchronous
distributed optimization methods, which has been extensively studied in the
first-order setting. Representative works include [31, 32, 33] and they are
all based on the federated averaging (FedAvg) framework [34]. These methods
usually use stochastic gradient descent (SGD) as the worker-side solver while
adopting different schemes to improve communication efficiency. Theoretically,
these first-order methods could be generalized straightforwardly to the black-
box setting via Gaussian smoothing; but, to the best of our knowledge, there
currently exist no generic zeroth-order approaches in the synchronous
distributed setting. The closest work is the zeroth-order version of signSGD,
ZO-signSGD, described in [20]; however, this method requires communication per
iteration and does not guarantee global convergence. Another relevant method
is FedProx [35] which does not rely on the specification of local solvers.
FedProx technically admits using zeroth-order solvers at the worker-side, but
it requires an additional regularization parameter to guarantee that the local
functions become strongly convex; in this sense, it is not applicable when
the function is black-box. On the other hand, there exist several DBO methods
built on asynchronous parallelism [36, 19] or multi-agent architectures [18, 21];
but they are not applicable in the synchronous distributed setting which is
the main focus of this work.
Choosing the step-size is critical in implementing stochastic optimization
methods, as one cannot simply use a line search when the landscape is noisy.
To avoid the tedious step-size tuning phase, a variety of adaptation schemes
have been proposed for first-order stochastic methods, where the step-size is
updated with historical first-order information. Notable examples include
[37, 38, 39, 40]. However, only a few of these adaptation schemes have been
extended to the distributed setting, e.g., in [41, 42, 43], and they still
require a manually selected step-size for each worker. The method proposed in
this work, on the contrary, can automatically choose step-sizes for both the
server and the workers, and seems to be the first one that achieves such “full
adaptivity”.
Moving beyond the classical approaches that are based on rigorous mathematical
tools, studies on stochastic optimization are very scarce in the evolutionary
computation community. Almost all existing studies consider a more generic
setting, noisy optimization, and do not exploit the expectation structure
of problem (1); see [44] for a survey. It is found in [45, 46] that, via
simple resampling, modern ESs originally designed for deterministic
optimization may achieve the best known convergence rate on noisy landscapes
[47]. These studies, however, require assumptions that are completely
different from the ones used in the classical literature. It is still unknown how
evolutionary algorithms perform on problem (1) with more generic assumptions.
Various studies on distributed optimization exist in the evolutionary
community; related methodologies and tools have been nicely summarized in [48,
49]. As evolutionary approaches are usually population-based, these studies
mostly focus on the parallel acceleration of the function evaluations of the
population, but have seldom touched on data decentralization (which is the
focus of this study). In this study, the distributed framework is mainly
designed to achieve data decentralization; but parallelization is also
supported in a synchronous manner.
## 3 The Proposed Method: DES
In this section, we first propose a modified ES method for non-distributed
deterministic optimization and then use it as a building block to develop the
DES algorithm. Although this work focuses on black-box optimization, we need the
following assumptions to analyze the performance of DES. Unless stated
otherwise, we assume $\mathbb{R}^{n}$ is equipped with some generic vector
norm $\|\cdot\|$ and its dual norm is denoted by $\|\cdot\|_{*}$.
###### Assumption 1.
The function $F$ has Lipschitz continuous gradient with constant $L$ for any
$\bm{\xi}$, i.e.,
$\left\|\nabla F\left(\bm{x};\bm{\xi}\right)-\nabla
F\left(\bm{y};\bm{\xi}\right)\right\|_{*}\leq
L\left\|\bm{x}-\bm{y}\right\|\;\;\;\forall\bm{x},\bm{y}\in\mathbb{R}^{n}.$
###### Assumption 2.
The gradient of $F$ has bounded variance, i.e.,
$\mathbb{E}\left[\left\|\nabla F\left(\bm{x};\bm{\xi}\right)-\nabla
f\left(\bm{x}\right)\right\|_{*}^{2}\right]\leq\sigma^{2}\;\;\;\forall\bm{x}\in\mathbb{R}^{n}.$
###### Assumption 3.
Every worker has access to the distribution of $\bm{\xi}$ independently and
identically.
Assumptions 1 and 2 are customary in the analysis of stochastic optimization.
They are useful when using gradients in measuring the optimality on nonconvex
landscapes. Assumption 3 is somewhat restrictive; but it is required to reduce
the global variance via minibatching at the worker-side. On the other hand, as
an adaptive method, our method does not assume the gradients to be universally
bounded, and this is an advantage over several existing methods (e.g., [37,
38]).
### 3.1 A modified ES for deterministic optimization
We first consider the simplest ES framework, usually termed as $(1+1)$-ES in
the literature, where in each iteration a parent produces a single offspring
using mutation and the one with a better objective value becomes the new
parent. The mutation is typically performed with an isotropic Gaussian
perturbation and its variance is gradually updated. The pseudo-code of this
method is given in Algorithm 1. Specifically, it maintains a vector
$\bm{x}_{k}\in\mathbb{R}^{n}$ to encode the parent solution and a scalar
$\alpha_{k}$ encoding the standard deviation. The vector $\bm{u}_{k}\in\mathbb{R}^{n}$
(called mutation vector) is drawn from the standard Gaussian distribution and
then used to construct the offspring given by
$\bm{x}_{k}+\alpha_{k}\bm{u}_{k}$. Hereinafter we call $\alpha_{k}$ the step-
size because it (approximately) determines the length of the descent step.
The only difference between our implementation and existing ones lies in the
specification of the step-size: here we use a pre-defined diminishing rule (in
Line 2) while almost all modern ESs adopt a comparison-based adaptation rule.
Precisely, most ESs obtain $\alpha_{k+1}$ via multiplying $\alpha_{k}$ by some
factor that depends on whether the offspring is better than the parent. This
allows the step-size to shrink exponentially fast, so ESs may achieve linear
convergence on certain landscapes [50]. In this work, however, the objective
landscape is generally nonconvex, so we cannot expect more than sublinear
convergence [51]. This suggests a thorough redesign of the step-size rule.
Our choice of the step-size rule $\alpha_{k}=\alpha_{0}/\sqrt{k+1}$ is to
align with the known convergence rate on deterministic nonconvex functions,
$\mathcal{O}\left(1/K\right)$, measured by the squared gradient norm. This is
illustrated in the following theorem.
Algorithm 1 A modified ES implementation for deterministic nonconvex
optimization
1:$\bm{x}_{0}\in\mathbb{R}^{n}$: initial solution;
$\alpha_{0}\in\mathbb{R}_{+}$: initial step-size
2:for $k=0,1,\cdots,K-1$ do
3: $\alpha_{k}=\alpha_{0}/\sqrt{k+1}$
4: Sample $\bm{u}_{k}$ from $\mathcal{N}(\bm{0},\bm{I})$
5: if $f(\bm{x}_{k}+\alpha_{k}\bm{u}_{k})\leq f(\bm{x}_{k})$ then
6: $\bm{x}_{k+1}=\bm{x}_{k}+\alpha_{k}\bm{u}_{k}$
7: else
8: $\bm{x}_{k+1}=\bm{x}_{k}$
9: end if
10:end for
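A minimal Python sketch of Algorithm 1 may look as follows (assuming `f` is a deterministic objective implemented as a NumPy-friendly callable; all names are illustrative):

```python
import numpy as np

def modified_es(f, x0, alpha0, K, rng=None):
    """(1+1)-ES with the pre-defined diminishing step-size rule of Algorithm 1."""
    rng = np.random.default_rng() if rng is None else rng
    x, fx = np.asarray(x0, dtype=float).copy(), f(x0)
    for k in range(K):
        alpha = alpha0 / np.sqrt(k + 1)          # diminishing step-size
        u = rng.standard_normal(x.size)          # isotropic Gaussian mutation
        y = x + alpha * u                        # offspring
        fy = f(y)
        if fy <= fx:                             # greedy (1+1) selection
            x, fx = y, fy
    return x
```

Caching the parent objective value `fx` means only one new function evaluation is needed per iteration, consistent with the accounting used later in the experiments.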
###### Theorem 1.
Let Assumption 1 hold with the self-dual $\ell_{2}$ norm, i.e.,
$\|\cdot\|=\|\cdot\|_{*}=\|\cdot\|_{2}$. Assume the function $f$ is bounded
below by $f_{*}$. The iterations generated by Algorithm 1 satisfy
$\begin{split}\frac{1}{K}\sum_{k=0}^{K-1}&\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\\\
\leq&\sqrt{\frac{2\pi}{K}}\left(\frac{f(\bm{x}_{0})-f_{*}}{\alpha_{0}}+\alpha_{0}Ln\left(1+\log
K\right)\right).\end{split}$ (2)
Define $\Delta_{f}=f\left(\bm{x}_{0}\right)-f_{*}$. The bound in (2) is
minimized at $\alpha_{0}=\Theta\left(\sqrt{\frac{\Delta_{f}}{Ln}}\right)$; in
this case, we have, via taking the square on both sides, the following rate
for ES:
$\left(\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\right)^{2}\leq\tilde{\mathcal{O}}\left(\frac{\Delta_{f}Ln}{K}\right)$
(3)
where $\tilde{\mathcal{O}}$ hides the negligible $\log K$ term in the
$\mathcal{O}$ notation. For comparison, the best known bound for
gradient descent is
$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}^{2}\right]\leq\mathcal{O}\left(\frac{\Delta_{f}L}{K}\right),$
(4)
or, if the gradient is estimated using Gaussian smoothing,
$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}^{2}\right]\leq\mathcal{O}\left(\frac{\Delta_{f}Ln}{K}\right).$
(5)
See [52] for these results. These bounds are quite similar, except for the
difference in measuring the optimality. It suggests that 1) the proposed
modified ES is competitive with zeroth-order gradient descent methods that are
based on Gaussian smoothing, and 2) is only $n$ times slower than first-order
gradient descent methods. The slowdown compared to first-order methods is
probably because the mutation in ES is not necessarily a descent step and
has a dimension-dependent variance. The advantage of ES is twofold: it does
not need to estimate the gradient and it converges with any step-size setting.
### 3.2 Implementation of DES
We now describe the DES method for handling distributed stochastic problems.
Algorithm 2 provides the pseudo-code for our method. DES adopts the well-known
federated averaging framework and uses the deterministic ES proposed in
Section 3.1 as worker-side solvers. Its search process is divided into $T$
rounds, and in the $t$-th round, the server maintains a solution
$\bm{x}_{t}\in\mathbb{R}^{n}$, a step-size $\alpha_{0}^{t}\in\mathbb{R}_{+}$,
and an optional momentum term $\bm{m}_{t}\in\mathbb{R}^{n}$. The step-size
should decrease at a $1/T^{0.25}$ rate to achieve convergence. The momentum
term is to enhance the robustness of the server-side updates.
At the beginning of the $t$-th round, the server broadcasts $\bm{x}_{t}$ and
$\alpha_{0}^{t}$ to all $M$ workers, and the workers use them as their initial
solutions and step-sizes respectively (in Lines 3-4). Each worker $i$ then
draws a minibatch $\mathcal{D}_{i}$ of size $b$ randomly111To simplify the
analysis, throughout this work, we assume the minibatch to be drawn uniformly
with replacement. and constructs a stochastic approximated function $f_{i}$
(in Lines 5-6). The minibatch $\mathcal{D}_{i}$ is fixed during this round and
thus the function $f_{i}$ is considered as deterministic. The $i$-th worker
then optimizes $f_{i}$ using the deterministic ES with a budget of $K$
iterations. At the $k$-th iteration of the $i$-th worker, we denote
respectively the solution and step-size as $\bm{v}_{i,k}^{t}$ and
$\alpha_{k}^{t}$. After the worker-side search phase terminates, all workers
upload their final output (i.e., $\bm{v}_{i,K}^{t}$), and then the server
computes an averaged descent step, denoted by $\bm{d}_{t+1}$, in Line 17.
Before the end of the $t$-th round, as shown in Lines 18-19, the server
accumulates the descent step into the momentum $\bm{m}_{t+1}$, with a
parameter $\beta$ controlling the rate, and finally obtains the new solution
$\bm{x}_{t+1}$ via moving $\bm{x}_{t}$ along the momentum direction. Note that
in the final step we do not specify a step-size; the magnitude of the
solution update is implicitly controlled by the deterministic ES at the
worker-side. This is the critical step for achieving full adaptivity.
Algorithm 2 DES
1:$\bm{x}_{0}\in\mathbb{R}^{n}$: initial solution; $\alpha\in\mathbb{R}_{+}$:
initial step-size; $\beta\in\left[0,\sqrt{\frac{1}{2\sqrt{2}}}\right)$:
momentum parameter; $b\geq\sqrt{T}$: minibatch size
2:for $t=0,1,\cdots,T-1$ do
3: for $i=1,2,\cdots,M$ in parallel do
4: $\bm{v}_{i,0}^{t}=\bm{x}_{t}$
5: $\alpha_{0}^{t}=\alpha/(t+1)^{0.25}$
6: Draw a minibatch $\mathcal{D}_{i}$ of size $b$
7: Define
$f_{i}(\bm{x})=\frac{1}{b}\sum_{\bm{\xi}\in\mathcal{D}_{i}}F(\bm{x};\bm{\xi})$
8: for $k=0,1,\cdots,K-1$ do
9: $\alpha_{k}^{t}=\alpha_{0}^{t}/(k+1)^{0.5}$
10: Sample $\bm{u}_{i,k}^{t}$ from $\mathcal{N}(\bm{0},\bm{I})$
11: if $f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\leq
f_{i}(\bm{v}_{i,k}^{t})$ then
12: $\bm{v}_{i,k+1}^{t}=\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}$
13: else
14: $\bm{v}_{i,k+1}^{t}=\bm{v}_{i,k}^{t}$
15: end if
16: end for
17: end for
18: $\bm{d}_{t+1}=\frac{1}{M}\sum_{i=1}^{M}\bm{v}_{i,K}^{t}-\bm{x}_{t}$
19: $\bm{m}_{t+1}=\beta\bm{m}_{t}+(1-\beta)\bm{d}_{t+1}$
20: $\bm{x}_{t+1}=\bm{x}_{t}+\bm{m}_{t+1}$
21:end for
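For illustration, a compact single-process simulation of Algorithm 2 could be written as follows; `F(x, xi)` and `sample_minibatch(b, rng)` are assumed interfaces for the stochastic objective and the data source, and all names are illustrative rather than part of the method itself.

```python
import numpy as np

def des(F, sample_minibatch, x0, alpha, beta, b, M, K, T, rng=None):
    """Serial simulation of DES: M workers run the modified ES on minibatch
    objectives; the server averages their outputs and applies momentum."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    for t in range(T):
        alpha0_t = alpha / (t + 1) ** 0.25               # round-wise step-size
        finals = []
        for _ in range(M):                               # worker-side local search
            data = sample_minibatch(b, rng)
            f_i = lambda v, d=data: np.mean([F(v, xi) for xi in d])
            v, fv = x.copy(), f_i(x)
            for k in range(K):
                a = alpha0_t / np.sqrt(k + 1)
                w = v + a * rng.standard_normal(x.size)  # Gaussian mutation
                fw = f_i(w)
                if fw <= fv:                             # greedy selection
                    v, fv = w, fw
            finals.append(v)
        d_step = np.mean(finals, axis=0) - x             # averaged descent step
        m = beta * m + (1 - beta) * d_step               # server-side momentum
        x = x + m
    return x
```

In a real deployment the inner loop over workers would run in parallel and only $\bm{v}_{i,K}^{t}$ would be communicated back to the server, once per round.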
### 3.3 Convergence properties
We now analyze the convergence behavior of DES. Firstly we consider a general
setting where the optimality is measured by the $\ell_{2}$ norm of the
gradient.
###### Theorem 2.
Let Assumptions 1, 2 and 3 hold with the self-dual $\ell_{2}$ norm, i.e.,
$\|\cdot\|=\|\cdot\|_{*}=\|\cdot\|_{2}$. Assume the function $f$ is bounded
below by $f_{*}$ and choose
$0\leq\beta<\sqrt{\frac{1}{2\sqrt{2}}},b\geq\sqrt{T}$. The iterations
generated by Algorithm 2 satisfy
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}\right]\leq\frac{\sqrt{2\pi}}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}\\\
+\frac{\sqrt{n}}{T^{1/4}}\left(2\alpha L\left(\sqrt{2\pi
n}\Psi+\frac{80\beta\sqrt{K}}{3}\right)+\frac{8\sqrt{2\pi}\sigma}{3}\right)$
(6)
where
$\Psi=\left(\left(\frac{2}{1-2\sqrt{2}\beta^{2}}+\frac{1}{2}\right)\sqrt{K}+\frac{1}{2\sqrt{K}}\right)(1+\log
K)+\sqrt{K}$ (7)
Here we briefly discuss our theoretical result and its implications.
###### Remark (Convergence rate).
When $K$ is fixed, the DES method achieves $\mathcal{O}\left(T^{-1/4}\right)$
rate in terms of $\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}\right]$. If, in addition, we set
$\alpha=\Theta(n^{-1/2}L^{-1})$, we achieve
$\left(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}\right]\right)^{2}\leq\mathcal{O}\left(\sigma^{2}\frac{n}{\sqrt{T}}\right).$
(8)
The dependence on $T$ aligns with the best known bound for zeroth-order
stochastic methods, e.g., in [52], which can be rewritten as
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}^{2}\right]\leq\mathcal{O}\left(\sigma\sqrt{\frac{\Delta_{f}Ln}{T}}\right)$
(9)
where $\Delta_{f}=f\left(\bm{x}_{0}\right)-f_{*}$. Our method has a worse
dependence on $\sigma$. However, the best known bound in (9) requires $\sigma$
to be known when setting the step-size, so it remains unknown whether the
dependence on $\sigma$ is improvable in a truly black-box setting. Our rate
(8), in fact, matches the rate of adaptive gradient methods [41] in terms
of the $\sigma$-dependence. The convergence of DES is less dependent on the
function landscape characteristics (e.g., $\Delta_{f}$ and $L$), at the cost
of having a worse dimension-dependence. This indicates that DES might suffer
from the curse of dimensionality but could be better at handling
ill-conditioning and more robust to initialization.
###### Remark (Minibatching).
The setting $b\geq\sqrt{T}$ is critical in achieving convergence. This
requirement is not usual for first-order methods or Gaussian smoothing based
zeroth-order methods, since for these methods the gradient variance can be
scaled down by choosing a sufficiently small step-size. The DES method only
relies on the comparison results among solutions and does not try to estimate
the gradient, so the bias of the descent step could accumulate and prevent
convergence unless a large minibatch is used to explicitly reduce the noise.
Note that similar issues are encountered in the signSGD method [25] where the
descent step becomes biased due to the sign operation. signSGD, however,
requires $b\geq T$ to achieve convergence whereas in our method it is relaxed
to $b\geq\sqrt{T}$.
###### Remark (Adaptivity).
The DES method is fully adaptive in the sense that it converges with any valid
parameter setting and requires no knowledge of landscape characteristics
(e.g., values of $L$ and $\sigma$). In contrast to existing distributed
adaptive gradient methods such as [41, 42], DES does not need the gradient to
be uniformly bounded and does not involve a non-adaptive worker-side step-
size.
###### Remark (Momentum).
The bound in (6) suggests that the optimal $\beta$ is 0, but in experiments we
found choosing $\beta>0$ in most cases leads to better performance. This is
probably because the suggested rate is overestimated, so it does not reflect
how the momentum influences the algorithm performance. The impact of this
parameter will be investigated using simulation studies.
It is found from (8) that the DES method suffers a dimension-dependent slowdown
in convergence. We note, however, that when the landscape exhibits certain
sparse structure, DES may automatically exploit such sparsity and achieve
speedup. This is formally stated below:
###### Theorem 3.
Let Assumptions 1, 2 and 3 hold with the $\ell_{\infty}$ norm, i.e.,
$\|\cdot\|=\|\cdot\|_{\infty}$ and $\|\cdot\|_{*}=\|\cdot\|_{1}$. Assume the
function $f$ is bounded below by $f_{*}$ and choose
$0\leq\beta<\sqrt{\frac{1}{2\sqrt{2}}},b\geq\sqrt{T}$. If $\|\nabla
f(\bm{x})\|_{0}\leq s$ for any $\bm{x}\in\mathbb{R}^{n}$ and some constant
$s\leq n$, then the iterations generated by Algorithm 2 satisfy
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{1}]\leq\frac{\sqrt{2\pi
s}}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}\\\
+\frac{8\sqrt{\log(\sqrt{2}n)}}{T^{1/4}}\Bigg{\\{}\alpha L\left(\sqrt{2\pi
s\log(\sqrt{2}n)}\Psi+\frac{10\beta\sqrt{K}}{3}\right)\\\ +\frac{2\sqrt{2\pi
s}\sigma}{3}\Bigg{\\}}$ (10)
where $\Psi$ is defined in (7).
###### Remark (Adaptation to sparsity).
The rate established above only poly-logarithmically depends on the dimension.
With any setting of $\alpha$ and noting the fact $\|\nabla
f(\bm{x})\|_{1}\geq\|\nabla f(\bm{x})\|_{2}$, we have
$\left(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{2}]\right)^{2}\leq\tilde{\mathcal{O}}\left(\frac{\sigma^{2}}{\sqrt{T}}\right)$
which is nearly independent of the dimension, as in most first-order methods.
We emphasize the improvement in the dimension-dependence is achieved
automatically when the landscape is sparse, without any modification made to
the algorithm.
## 4 Alternative Sampling Schemes
One bottleneck of the DES method implemented in Section 3 is the generation of
mutation vectors, in which a huge amount of Gaussian random numbers are
required. It is known that generating Gaussian random numbers is usually
expensive, and it may cause efficiency issues in high-dimensional settings. In
this section we propose two alternative probability models which can be used
in DES for improving the sampling efficiency.
### 4.1 Mixture sampling for fast mutation
In Algorithm 2, each worker has to perturb its maintained solution in all
coordinates, leading to $O(n)$ per-iteration complexity. Our scheme to
improve this is to only perturb a small subset of the coordinates.
Specifically, at each worker’s iteration we uniformly and randomly sample a
subset of $l$ coordinates with replacement, where $l\ll n$ is a small integer.
Then, on each selected coordinate, we perturb the current solution with a
univariate random noise. This two-level sampling strategy yields a mixture
distribution since its samples follow a mixture of $n$ univariate probability
models defined individually on each coordinate. The statistical characteristics of
this mixture distribution are completely determined by the parameter $l$ and
the underlying univariate model. In the following we provide two ways of
designing the mixture sampling scheme.
The first scheme is to use Gaussian distribution on each selected coordinate
and we call it “mixture Gaussian sampling”. This scheme works via replacing
the Gaussian distribution (e.g., $\mathcal{N}(\bm{0},\bm{I})$ in Algorithm 2)
with the probability model defined below:
###### Definition 1.
We say a random vector $\bm{u}\in\mathbb{R}^{n}$ is obtained from the mixture
Gaussian sampling if it can be expressed as
$\bm{u}=\sqrt{\frac{n}{l}}\sum_{j=1}^{l}\bm{e}_{r_{j}}z_{j}$
where $z_{1},\cdots,z_{l}$ are scalars drawn independently from
$\mathcal{N}(0,1)$, and $r_{1},\cdots,r_{l}$ are integers drawn uniformly from
$\\{1,\cdots,n\\}$ with replacement. We denote its underlying probability
model by $\mathcal{M}_{l}^{G}$.
The second scheme is to use, on each selected coordinate, the Rademacher
distribution which belongs to the sub-Gaussian family. We call this scheme
“mixture Rademacher sampling”. In this case, the mutation vector is drawn from
the following distribution:
###### Definition 2.
We say a random vector $\bm{u}\in\mathbb{R}^{n}$ is obtained from the mixture
Rademacher sampling if it can be expressed as
$\bm{u}=\sqrt{\frac{n}{l}}\sum_{j=1}^{l}\bm{e}_{r_{j}}z_{j}$
where $z_{1},\cdots,z_{l}$ are independent scalars to be either 1 or -1 with
50% chance, and $r_{1},\cdots,r_{l}$ are integers drawn uniformly from
$\\{1,\cdots,n\\}$ with replacement. We denote its underlying probability
model by $\mathcal{M}_{l}^{R}$.
The coefficient $\sqrt{\frac{n}{l}}$ in the above definitions is to normalize
the probability model to achieve the identity covariance matrix, which will be
illustrated in the subsequent analyses. When $l\leq n$, we can implement the
above sampling schemes efficiently in DES, via a loop of length $l$ applied on
the solutions maintained at the worker-side. Algorithm 3 gives the detailed
implementations of this idea. When $l\ll n$, the time complexity for sampling
can be reduced to $O(l)$, and this will save the computing time considerably
when $n$ is large.
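As an illustration, a sketch of the two mixture sampling schemes as standalone functions is given below (the dense vector is materialized here only for clarity; Algorithm 3 applies the same update in place at $O(l)$ cost):

```python
import numpy as np

def mixture_sample(n, l, scheme="gaussian", rng=None):
    """Draw a mutation vector from M_l^G ('gaussian') or M_l^R ('rademacher')."""
    rng = np.random.default_rng() if rng is None else rng
    u = np.zeros(n)
    idx = rng.integers(0, n, size=l)             # coordinates drawn with replacement
    if scheme == "gaussian":
        z = rng.standard_normal(l)
    else:                                        # Rademacher: +1 or -1 with prob 1/2
        z = rng.choice([-1.0, 1.0], size=l)
    np.add.at(u, idx, np.sqrt(n / l) * z)        # repeated coordinates accumulate
    return u
```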
Algorithm 3 DES with mixture sampling
1:$\bm{x}_{0}\in\mathbb{R}^{n}$: initial solution; $\alpha\in\mathbb{R}_{+}$:
initial step-size; $\beta\in\left[0,\sqrt{\frac{1}{2\sqrt{2}}}\right)$:
momentum parameter; $b\geq\sqrt{T}$: minibatch size; $l\in\mathbb{Z}_{+}$:
mixture parameter
2:for $t=0,1,\cdots,T-1$ do
3: for $i=1,2,\cdots,M$ in parallel do
4: $\bm{v}_{i,0}^{t}=\bm{x}_{t}$
5: $\alpha_{0}^{t}=\alpha/(t+1)^{0.25}$
6: Draw a minibatch $\mathcal{D}_{i}$ of size $b$
7: Define
$f_{i}(\bm{x})=\frac{1}{b}\sum_{\bm{\xi}\in\mathcal{D}_{i}}F(\bm{x};\bm{\xi})$
8: for $k=0,1,\cdots,K-1$ do
9: $\alpha_{k}^{t}=\alpha_{0}^{t}/(k+1)^{0.5}$
10: $\bm{w}=\bm{v}_{i,k}^{t}$
11: for $j=1,\cdots,l$ do
12: Draw $r$ randomly uniformly from $\\{1,\cdots,n\\}$ with replacement
13: Option I (mixture Gaussian sampling):
14: $z\sim\mathcal{N}(0,1)$
15: Option II (mixture Rademacher sampling):
16: $z$ is either -1 or 1 with 50% chance
17: $w_{r}=w_{r}+\alpha_{k}^{t}\sqrt{\frac{n}{l}}z$
18: end for
19: if $f_{i}(\bm{w})\leq f_{i}(\bm{v}_{i,k}^{t})$ then
20: $\bm{v}_{i,k+1}^{t}=\bm{w}$
21: else
22: $\bm{v}_{i,k+1}^{t}=\bm{v}_{i,k}^{t}$
23: end if
24: end for
25: end for
26: $\bm{d}_{t+1}=\frac{1}{M}\sum_{i=1}^{M}\bm{v}_{i,K}^{t}-\bm{x}_{t}$
27: $\bm{m}_{t+1}=\beta\bm{m}_{t}+(1-\beta)\bm{d}_{t+1}$
28: $\bm{x}_{t+1}=\bm{x}_{t}+\bm{m}_{t+1}$
29:end for
### 4.2 Behavior of DES with mixture sampling
We first discuss the statistical characteristics of the two proposed sampling
schemes.
The mixture Gaussian sampling, in the case of $l\rightarrow\infty$, will
degenerate to the standard Gaussian sampling. This limiting case, to some
extent, is useless as it will make the sampling even more expensive.
Therefore, we are more interested in the $l\ll n$ case. The following
describes the statistical properties that are required in understanding the
mixture sampling schemes. Since the probability model is symmetric by design,
we will focus on its second-order and fourth-order moments.
###### Proposition 1.
Let $l\in\mathbb{Z}_{+}$ and $\bm{u}\in\mathbb{R}^{n}$. If
$\bm{u}\sim\mathcal{M}^{G}_{l}$, we have $\mathbb{V}[\bm{u}]=\bm{I}$ and
$\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]=3\left(\frac{n}{l}\|\bm{y}\|_{4}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right),\;\;\forall\bm{y}\in\mathbb{R}^{n}.$
(11)
The above shows that the mixture Gaussian sampling will generate mutation
vectors having exactly the same covariance matrix as the standard Gaussian
sampling, regardless of the $l$ value. In addition, since
$n\|\bm{y}\|_{4}^{4}\geq\|\bm{y}\|_{2}^{4}\geq\|\bm{y}\|_{4}^{4}$, we know
$\frac{\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]}{\|\bm{y}\|_{2}^{4}}\in\left[3,3\frac{n+l-1}{l}\right],$
which then indicates that any 1-dimensional projection of
$\mathcal{M}_{l}^{G}$ will have a larger kurtosis than Gaussian. Implications
of this property are twofold. Firstly, the mixture Gaussian sampling method is
more likely to generate outliers in the mutation phase, so if the landscape is
highly multimodal, DES equipped with $\mathcal{M}_{l}^{G}$ would have a
greater chance to escape local optima. Secondly, this makes DES favor
exploration over exploitation and hence may degrade the performance. We
note, as will be demonstrated later, that such a performance degradation is
insignificant when the gradient is dense.
Similarly, the mixture Rademacher sampling can be characterized as below.
###### Proposition 2.
Let $l\in\mathbb{Z}_{+}$ and $\bm{u}\in\mathbb{R}^{n}$. If
$\bm{u}\sim\mathcal{M}^{R}_{l}$, we have $\mathbb{V}[\bm{u}]=\bm{I}$ and
$\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]=\frac{n}{l}\|\bm{y}\|_{4}^{4}+3\frac{l-1}{l}\|\bm{y}\|_{2}^{4},\;\;\forall\bm{y}\in\mathbb{R}^{n}.$
(12)
Again, the mixture Rademacher sampling is more likely to produce outlier
mutation vectors than the standard Gaussian sampling, while they have the same
covariance matrix. But it is found, by comparing (12) with (11), that
$\mathcal{M}_{l}^{R}$ scales down the kurtosis of $\mathcal{M}_{l}^{G}$ to
roughly $1/3$ for sufficiently large $n$. In this sense, the mixture
Rademacher sampling can be considered as a trade-off between the standard
Gaussian sampling and the mixture Gaussian sampling.
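The fourth-moment formulas in Propositions 1 and 2 can be checked numerically with a quick Monte Carlo experiment; the snippet below is an illustrative sketch that reuses the `mixture_sample` helper sketched in Section 4.1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, trials = 50, 4, 100_000
y = rng.standard_normal(n)
# c = 3 for the mixture Gaussian model, c = 1 for the mixture Rademacher model
for scheme, c in [("gaussian", 3.0), ("rademacher", 1.0)]:
    m4 = np.mean([np.dot(y, mixture_sample(n, l, scheme, rng)) ** 4
                  for _ in range(trials)])
    pred = c * (n / l) * np.sum(y ** 4) + 3 * ((l - 1) / l) * np.sum(y ** 2) ** 2
    print(scheme, m4, pred)   # empirical and predicted fourth moments should roughly agree
```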
In the following, we analyze the convergence performance of DES when equipped
with the mixture sampling schemes. For expository purposes, we assume
$\beta=0$ and only consider the $\ell_{2}$ norm case, though our analysis can
be extended directly to a more general setting.
###### Theorem 4.
Let Assumptions 1, 2 and 3 hold with the self-dual $\ell_{2}$ norm, i.e.,
$\|\cdot\|=\|\cdot\|_{*}=\|\cdot\|_{2}$. Assume the function $f$ is bounded
below by $f_{*}$ and choose $\beta=0,b\geq\sqrt{T}$. If $\|\nabla
f(\bm{x})\|_{2}^{4}/\|\nabla f(\bm{x})\|_{4}^{4}\geq\tilde{s}$ for any
$\bm{x}\in\mathbb{R}^{n}$ and some constant $\tilde{s}\in[1,n]$, then the
iterations generated by Algorithm 3 with mixture Gaussian sampling satisfy
$\frac{1}{T}{\sum_{t=0}^{T-1}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]\leq{\sqrt{3+\frac{3n}{\tilde{s}l}}}\left\\{\frac{2}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}\right.\\\
\left.+\frac{4\sqrt{n}}{T^{1/4}}\left(\frac{4}{3}\sigma+L\sqrt{n}\hat{\Psi}\alpha\right)\right\\}$
(13)
where
$\hat{\Psi}=\left(\frac{1}{2\sqrt{K}}+\frac{5}{2}\sqrt{K}\right)(1+\log
K)+\frac{1}{\sqrt{K}}.$
###### Remark (Impact of the denseness).
The bound in (13) is generally looser than that for DES with standard Gaussian
sampling. For example, consider setting $\alpha=\Theta(n^{-1/2}L^{-1})$, then
we obtain the convergence rate
$\left(\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{2}]\right)^{2}\leq\mathcal{O}\left(\frac{\sigma^{2}n^{2}}{\tilde{s}l\sqrt{T}}\right),$
which could be $\mathcal{O}\left(\frac{n}{\tilde{s}l}\right)$ times slower
than the rate given in (8). The constant $\tilde{s}$, by the definition
of vector norms, always exists in the range $[1,n]$. In fact, as has been
pointed out in [53], the quantity $\|\bm{y}\|_{4}^{4}/\|\bm{y}\|_{2}^{4}$
measures the sparseness of a vector $\bm{y}\in\mathbb{R}^{n}$, so the constant
$\tilde{s}$ here can be viewed as a lower bound of the denseness of the
gradient $\nabla f(\bm{x})$. If the gradient is relatively dense, e.g., all
coordinates in the gradient are of a similar magnitude, then $\tilde{s}$ will
be close to $n$. In this case, the convergence rate with mixture Gaussian
sampling will coincide with that with standard Gaussian sampling. We may
therefore conclude, by comparing Theorems 4 and 3, that the mixture sampling
is more suitable for dense problems whereas the standard Gaussian sampling is
preferred for sparse problems.
###### Theorem 5.
Let Assumptions 1, 2 and 3 hold with the self-dual $\ell_{2}$ norm, i.e.,
$\|\cdot\|=\|\cdot\|_{*}=\|\cdot\|_{2}$. Assume the function $f$ is bounded
below by $f_{*}$ and choose $\beta=0,b\geq\sqrt{T}$. If $\|\nabla
f(\bm{x})\|_{2}^{4}/\|\nabla f(\bm{x})\|_{4}^{4}\geq\tilde{s}$ for any
$\bm{x}\in\mathbb{R}^{n}$ and some constant $\tilde{s}\in[1,n]$, then the
iterations generated by Algorithm 3 with mixture Rademacher sampling satisfy
$\frac{1}{T}{\sum_{t=0}^{T-1}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]\leq{\sqrt{3+\frac{n}{\tilde{s}l}}}\left\\{\frac{2}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}\right.\\\
\left.+\frac{4\sqrt{n}}{T^{1/4}}\left(\frac{4}{3}\sigma+L\sqrt{n}\hat{\Psi}\alpha\right)\right\\}$
(14)
where $\hat{\Psi}$ is defined as in Theorem 4.
The bound corresponding to the mixture Rademacher sampling is slightly tighter
than that for the mixture Gaussian sampling. This could make a considerable
difference in practice when $n$ is large. Our empirical studies show that in
certain cases the mixture Rademacher sampling could be better than the mixture
Gaussian sampling, while their performance is in general similar.
## 5 Simulation Study
In this section we perform simulations to investigate the empirical
performance of the proposed methods.
### 5.1 Experimental settings
We consider three binary classification problems arising in machine learning
and statistics. They include logistic regression (LR), nonconvex support
vector machine (NSVM), and linear support vector machine (LSVM) with a hinge
loss. For these problems, the random sample $\bm{\xi}$ corresponds to a pair
of input vector $\bm{z}$ and target label $y$, and the objective function
takes the finite-sum form:
$f(\bm{x})=\frac{1}{N}\sum_{i=1}^{N}F(\bm{x};\bm{\xi}_{i})={\frac{1}{N}\sum_{i=1}^{N}}loss(\bm{x};\bm{z}_{i},y_{i})+\frac{\lambda_{p}}{2}\|\bm{x}\|^{2}_{2},$
where $loss$ is the loss function and $\lambda_{p}$ is the regularization
parameter. We fix $\lambda_{p}=10^{-6}$ throughout this study. The loss
function is defined as
* •
Logistic Regression (LR)
$loss(\bm{x};\bm{z},y)=\log(1+\exp(-y(\bm{x}^{T}\bm{z})))$
* •
Nonconvex Support Vector Machine (NSVM)
$loss(\bm{x};\bm{z},y)=1-\tanh(y(\bm{x}^{T}\bm{z}))$
* •
Linear Support Vector Machine (LSVM)
$loss(\bm{x};\bm{z},y)=\max\left\\{0,1-y(\bm{x}^{T}\bm{z})\right\\}.$
LR is the simplest, being strongly convex and smooth. NSVM is nonconvex but
smooth. LSVM is not smooth so it does not meet our assumption; we choose it to
verify the robustness of our proposals.
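For reference, a minimal sketch of these per-sample losses and the regularized stochastic objective is given below (names are illustrative; data loading is omitted):

```python
import numpy as np

LAMBDA_P = 1e-6   # regularization parameter used throughout

def per_sample_loss(x, z, y, kind):
    """Loss for LR, NSVM, or LSVM as defined above; z is the input, y in {-1, +1}."""
    margin = y * np.dot(x, z)
    if kind == "LR":
        return np.logaddexp(0.0, -margin)         # log(1 + exp(-margin)), stable form
    if kind == "NSVM":
        return 1.0 - np.tanh(margin)
    return max(0.0, 1.0 - margin)                 # LSVM hinge loss

def F(x, xi, kind="LR"):
    """Stochastic objective F(x; xi) with the l2 regularizer folded in per sample."""
    z, y = xi
    return per_sample_loss(x, z, y, kind) + 0.5 * LAMBDA_P * np.dot(x, x)
```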
Six datasets widely used for benchmarking stochastic optimization methods are
selected and their properties are briefly summarized in Table I. All datasets
are available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets; the
mnist dataset is transformed into binary classes based on whether the label
(digit) is greater than 4. For each
dataset, 80% of the data are chosen for training and the remaining 20% are for
testing. We partition the training samples uniformly into $M$ pieces with no
overlap, and each piece is stored at a counterpart worker.
TABLE I: Statistics of the used datasets.
dataset | $n$ | $N$
---|---|---
ijcnn1 | 22 | 49990
SUSY | 18 | 5000000
covtype | 54 | 581012
mnist | 780 | 60000
real-sim | 20958 | 72309
rcv1 | 47236 | 677399
We implement four algorithms for comparison, including federated zeroth-order
gradient method (Fed-ZO-GD), federated zeroth-order SGD (Fed-ZO-SGD), zeroth-
order signSGD method (ZO-signSGD), and the standard ES with cumulative step-
size adaptation (ES-CSA). Fed-ZO-GD, Fed-ZO-SGD, and ZO-signSGD are
distributed algorithms based on gradient estimation. ES-CSA is non-distributed
but we have made some modifications to enable distributed optimization. Their
configurations are described below:
* •
Fed-ZO-GD. It is implemented by replacing the worker-side solver of DES with
the Gaussian smoothing based gradient descent method, so it can be considered
as a plain combination of FedAvg and the zeroth-order gradient descent method.
Each worker individually chooses a random minibatch of size $b$ in each round
and runs zeroth-order gradient descent for $K^{\prime}=K/2$ iterations with
the step-size
$\alpha_{k}^{t}=\frac{\alpha_{0}^{t}}{k+1}=\frac{\alpha}{(k+1)\sqrt{t+1}}$. We
choose the central-difference in Gaussian smoothing, so each worker takes
about $Kb$ function evaluations per round.
* •
Fed-ZO-SGD. It is a zeroth-order extension of the standard federated SGD
algorithm, where each worker’s iteration uses an individually random minibatch
of size $b$. Each worker’s SGD runs for $K^{\prime}=K/2$ iterations with the
step-size
$\alpha_{k}^{t}=\frac{\alpha_{0}^{t}}{\sqrt{k+1}}=\frac{\alpha}{\sqrt{(k+1)(t+1)}}$.
It uses the same setting for Gaussian smoothing as in Fed-ZO-GD.
* •
ZO-signSGD. This method is originally proposed in [20] and we adopt its
variant with majority vote for distributed optimization. In each round, each
worker computes $K^{\prime}=K/2$ gradient estimators, takes the sign of their
average, and then uploads the result to the server. Each gradient estimator is
obtained from a central-difference Gaussian smoothing over a minibatch
of size $b$. The server performs global updates using the sign vector with
step-size $\alpha^{t}=\frac{\alpha}{\sqrt{t+1}}$.
* •
ES-CSA. We use the standard $(\mu;\lambda)$-ES described in [27] with slight
modifications for data decentralization. In each round, the server generates a
population of $\lambda$ solutions with a standard multivariate Gaussian
distribution and broadcasts the whole population to each worker. The workers
then evaluate the population with their local data. The server sums up, for
each solution, the results collected from the workers and obtains the
corresponding objective value. The best $\mu$ solutions in the population are
chosen and their recombination becomes the new population mean. In this
setting, each worker takes $\lambda\frac{N}{M}$ function evaluations per
round. The standard cumulative step-size adaptation is used and the initial
step-size is set to $\alpha$.
For the three gradient-based methods, we use the central-difference Gaussian
smoothing which takes two function evaluations on each data sample; so the
setting $K^{\prime}=K/2$ ensures that the total number of function evaluations
per round and per worker is $Kb$, consistent with DES. For ES-CSA, the
population size is set to $\lambda=MKb/N$; under this setting, all algorithms
have exactly the same number of function evaluations per round.
For all algorithms, we choose $b=1000$, $M=10$. We choose $K=100$ if $n\leq
100$ and 500 if $n>100$. Each algorithm is assigned with a budget of $EN$
function evaluations, where $E=1000$ if $n\leq 100$ and 5000 if $n>100$. For
algorithms relying on Gaussian smoothing, the finite-difference radius is
$\mu=10^{-6}$. The momentum parameter in DES is set to $\beta=0.5$. All
algorithms are run for 8 times individually for each pair of dataset and
problem and the median results are reported. DES with the mixture Gaussian
sampling and the mixture Rademacher sampling schemes are denoted by DES-mG and
DES-mR, respectively; their mixture parameter is set to $l=8$.
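For readability, the settings above can be summarized in a small configuration dictionary (an illustrative sketch only, not part of any released code):

```python
# Illustrative summary of the experimental settings described above.
config = dict(
    b=1000, M=10,                                  # minibatch size and number of workers
    K=lambda n: 100 if n <= 100 else 500,          # worker-side iterations per round
    budget=lambda n, N: (1000 if n <= 100 else 5000) * N,  # total function evaluations
    mu=1e-6,                                       # finite-difference radius (smoothing baselines)
    beta=0.5,                                      # DES momentum parameter
    l=8,                                           # mixture parameter for DES-mG / DES-mR
    alpha_grid=(0.1, 1, 10),                       # grid for the initial step-size
    runs=8,                                        # independent runs per instance
)
```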
### 5.2 Overall performance
We first test DES as well as the competitors on all three problems and over
all six datasets. The initial step-size $\alpha$ for each algorithm is chosen
from $\\{0.1,1,10\\}$ using a grid-search. Figures 1 and 2 report the
convergence behavior of the algorithms, measured with the median training
error versus the number of rounds. It is found that the DES methods with
either standard Gaussian sampling or mixture sampling are the best performers
in all cases. Belonging to the same ES family, our implementation of DES is
significantly better than the non-distributed implementation of ES-CSA, where
the latter performs the worst in most cases. This is because the
standard ES is designed for deterministic optimization and does not exploit the
stochastic structure of the objective function. Fed-ZO-SGD is the best
one among the competitors and is competitive with DES in certain cases. Fed-
ZO-GD, in most cases, is not competitive with DES, ZO-signSGD, or Fed-ZO-SGD.
(a) LR, rcv1
(b) NSVM, rcv1
(c) LSVM, rcv1
(d) LR, SUSY
(e) NSVM, SUSY
(f) LSVM, SUSY
(g) LR, mnist
(h) NSVM, mnist
(i) LSVM, mnist
Figure 1: Comparison on rcv1, SUSY, and mnist datasets. The curve displays the
training error versus the number of rounds and the corresponding shaded area
extends from the 25th to 75th percentiles over the results obtained from all
independent runs.
(a) LR, real-sim
(b) NSVM, real-sim
(c) LSVM, real-sim
(d) LR, ijcnn1
(e) NSVM, ijcnn1
(f) LSVM, ijcnn1
(g) LR, covtype
(h) NSVM, covtype
(i) LSVM, covtype
Figure 2: Comparison on ijcnn1, covtype, and real-sim datasets. The curve
displays the training error versus the number of rounds and the corresponding
shaded area extends from the 25th to 75th percentiles over the results
obtained from all independent runs.
We also observe that, when implemented in DES, the standard Gaussian sampling,
the mixture Gaussian sampling, and the mixture Rademacher sampling do not show
significant differences in performance. Although our analyses in Theorems 4 and
5 suggest the possibility that mixture sampling might degrade the convergence,
the results here show that the degradation, if it exists, is in general
negligible. In many cases, in fact, mixture sampling can even improve the
performance. This suggests that mixture sampling could be used as the default
scheme for DES, given its ability to improve the sampling efficiency.
The experimental results obtained on testing sets are reported in the
supplement. In general, the generalization performance of the DES methods is
consistent with their training performance.
### 5.3 Adaptation of step-size
The theoretical analyses have demonstrated that DES converges with any initial
step-size, and in this subsection we provide more empirical evidence. We first
verify the performance of DES and the other competitors under different
initial settings. In order to evaluate their performance over all problems and
all datasets, we adopt the performance profile [54], a classic tool for visual
comparison. The profile of an algorithm is the curve of the fraction of its
solved test instances (a test instance denotes a pair of problem and dataset),
denoted by $\rho(\tau)$, versus the amount of allocated computational budget
(denoted by $\tau$). The computational budget is measured by the ratio of the
required number of rounds to that required by the best performer. We say an
algorithm can solve a test instance if its obtained objective function value
$f^{\prime}$ satisfies
$f(\bm{x}_{0})-f^{\prime}>\delta(f(\bm{x}_{0})-f_{*}^{\prime})$ where
$\delta\in(0,1)$ controls the accuracy and $f_{*}^{\prime}$ is the best
objective value obtained among all algorithms. An algorithm with high values
of $\rho(\tau)$ or one that is located at the top left of the figure is
preferable. In this section, the objective function value is measured by the
training loss.
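A sketch of how the performance profiles could be computed is given below, assuming `rounds_needed[alg]` stores, for each test instance, the number of rounds the algorithm needs to satisfy the $\delta$-criterion above (infinity if it is never satisfied); all names are illustrative.

```python
import numpy as np

def performance_profile(rounds_needed, taus):
    """rounds_needed: dict alg -> array over test instances (np.inf if unsolved).
    Returns dict alg -> list of rho(tau) values, one per tau in taus."""
    algs = list(rounds_needed)
    R = np.vstack([rounds_needed[a] for a in algs])   # shape: (num_algs, num_instances)
    best = np.min(R, axis=0)                          # best performer per instance
    ratios = R / best                                 # performance ratios
    return {a: [float(np.mean(ratios[i] <= tau)) for tau in taus]
            for i, a in enumerate(algs)}
```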
Figure 3 plots the performance profiles of DES (with the standard Gaussian
sampling) as well as the three competitors, with initial step-sizes chosen
from $\\{0.1,1,10\\}$. We choose $\delta=0.1$ in plotting the profiles. The
curves of DES mostly lie to the left of the others, demonstrating that the
relative performance of DES is in general robust to the step-size setting. For
small $\tau$, the profile of DES with $\alpha=0.1$ lies to the right of Fed-
ZO-SGD with $\alpha=10$, and overlaps with that of the other methods; this
indicates that $\alpha=0.1$ is too small for DES to achieve fast decrease in
the early stage. But when a sufficient amount of computational budget (e.g.,
$\tau\geq 10$) is allowed, such a step-size setting can nevertheless lead
to performance comparable to Fed-ZO-SGD with the best-tuned step-size.
Fed-ZO-GD is not robust to the step-size setting. Its profile for $\alpha=0.1$
is not shown in the plot, implying that with this setting Fed-ZO-GD cannot
solve any test instance.
Figure 3: Performance profiles ($\delta=0.1$) of different algorithms with
different initial step-sizes. Results are obtained on all test instances.
Figure 4 provides, as a representative example, the convergence trajectories of the
algorithms with different initial step-sizes. In general, on the two convex
problems (i.e., LR and LSVM), the performance of DES is quite insensitive to
the initial step-size; all three settings admit approaching similar results in
the long run. ES-CSA exhibits similar adaptation ability, albeit with
relatively poor performance. The other gradient based methods are sensitive to
step-size settings, leading to quite different solutions even in the convex
problems. On the nonconvex problem NSVM, the initial value of the step-size
seems to have a considerable influence on all methods, possibly because
the step-size setting is critical in escaping local optima. In this case,
large initial step-sizes seem to yield faster convergence, but may also lead
to early stagnation.
(a) LR
(b) NSVM
(c) LSVM
Figure 4: Convergence on SUSY with different initial step-size settings. The
curve displays the training error versus the number of rounds and the
corresponding shaded area extends from the 25th to 75th percentiles over the
results obtained from all independent runs.
### 5.4 Impact of momentum
The convergence rate established previously does not reflect its dependence
on the momentum parameter, so here we investigate this empirically. Consider
the mixture Rademacher sampling based DES method, with $\beta$ chosen from
$\\{0,0.2,0.4,0.6,0.8\\}$ and $\alpha$ fixed to 1. All other settings are the
same as those in Section 5.2. Note that in the theoretical analyses we have
required
$\beta<\sqrt{\frac{1}{2\sqrt{2}}}\lessapprox 0.6$ (15)
for technical reasons. So the choice $\beta=0.8$ is to verify whether the
above requirement is necessary in practice.
Figure 5 gives the profile plot obtained on all test instances, measured with
two different $\delta$ settings. Note that the smaller $\delta$ is, the higher
solution-accuracy the curve reflects. It is found that the momentum mechanism
is of little use in the low-accuracy domain, as setting $\beta$ to 0 is enough
to solve nearly 80% of the test instances within a very limited amount of
computational budget. In this case, setting $\beta$ to 0.8 is indeed harmful
to the performance. To approach good performance at high accuracy, on the
contrary, an appropriate setting of this parameter is generally helpful and
could influence the final results. Again, we observe that the setting
$\beta=0.8$ leads to poor performance, indicating that the requirement in (15)
seems to be necessary in practice. It is worth noting, however, that the choice
of $\beta$ is not critical to the relative performance of DES compared with the
other competitors; we suggest fixing its value in the range $[0.2,0.6]$ in all
situations.
(a) Low solution-accuracy case: $\delta=0.05$
(b) High solution-accuracy case: $\delta=0.001$
Figure 5: Performance profiles of DES with different momentum parameters.
Results are obtained on all test instances. The mixture Rademacher sampling
scheme is used in implementing DES.
### 5.5 Impact of minibatch size
Here we verify the impact of minibatch size on the algorithm performance. We
consider the mixture Rademacher sampling based DES method and choose the
minibatch size $b$ from $\\{100,500,1000,1500,2000\\}$. All other settings are the same as in
Section 5.2.
Figure 6 reports the results obtained on all test instances via performance
profile. It is clear that whether the minibatch size matters depends on which
accuracy level we would like to achieve. In the low-accuracy case ($\delta=0.05$),
choosing a small minibatch $b=100$ can solve at least 50% of the test instances very
quickly, although it stagnates at a later stage. In this case, using a
large minibatch does not lead to significant improvement in performance.
In contrast, the impact of minibatch size becomes quite clear in the high-accuracy
case ($\delta=0.001$) where increasing $b$ consistently improves the number of
test instances that can be solved. This observation matches our theoretical
analyses and suggests that a large minibatch is generally better if the
computational cost is affordable at the worker-side.
(a) Low solution-accuracy case: $\delta=0.05$
(b) High solution-accuracy case: $\delta=0.001$
Figure 6: Performance profiles of DES with different minibatch sizes. Results
are obtained on all test instances. The mixture Rademacher sampling scheme is
used in implementing DES.
## 6 Conclusion
In this work we propose the DES method via modifying the classic evolution
strategy method and adapting it to the distributed setting. Our method uses a
Gaussian probability model to guide the worker’s local update, so it avoids
finite-difference based smoothing techniques which might cause numerical
issues. We have analyzed its convergence properties compared to existing
zeroth-order and first-order methods, demonstrating its adaptivity to
objective landscapes and its ability to exploit sparsity. Two
alternative sampling schemes have been suggested and we find they lead to an
improvement in sampling efficiency with no obvious degradation in performance.
The current implementation of DES, however, does not support heterogeneous
data distributions, which seems to be a common issue for methods based on biased
descent steps; see [20, 25] for examples. The idea of bias correction
suggested in [55] seems to address this issue and is worth trying in the further
development of DES; this idea, nevertheless, may be incompatible with the
comparison-based nature of the ES family. We would like to continue resolving
this in the future. We hope our work on DES will serve as a starting point for
generalizing the rich studies in evolutionary computation communities to the
distributed world.
## References
* [1] H. Robbins and S. Monro, “A Stochastic Approximation Method,” _The Annals of Mathematical Statistics_ , vol. 22, no. 3, pp. 400–407, 1951.
* [2] L. Bottou, F. E. Curtis, and J. Nocedal, “Optimization Methods for Large-Scale Machine Learning,” _SIAM Review_ , vol. 60, no. 2, pp. 223–311, Jan. 2018.
* [3] S. Sun, Z. Cao, H. Zhu, and J. Zhao, “A Survey of Optimization Methods From a Machine Learning Perspective,” _IEEE Transactions on Cybernetics_ , vol. 50, no. 8, pp. 3668–3681, Aug. 2020.
* [4] M. Pereyra, P. Schniter, É. Chouzenoux, J.-C. Pesquet, J.-Y. Tourneret, A. O. Hero, III, and S. McLaughlin, “A Survey of Stochastic Simulation and Optimization Methods in Signal Processing.” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 10, no. 2, pp. 224–241, 2016.
* [5] D. Lee, N. He, P. Kamalaruban, and V. Cevher, “Optimization for Reinforcement Learning: From a single agent to cooperative agents,” _IEEE Signal Processing Magazine_ , vol. 37, no. 3, pp. 123–135, May 2020.
* [6] C. Gambella, B. Ghaddar, and J. Naoum-Sawaya, “Optimization problems for machine learning: A survey,” _European Journal of Operational Research_ , vol. 290, no. 3, pp. 807–828, May 2021.
* [7] F. E. Curtis and K. Scheinberg, “Adaptive Stochastic Optimization: A Framework for Analyzing Stochastic Optimization Algorithms,” _IEEE Signal Processing Magazine_ , vol. 37, no. 5, pp. 32–42, Sep. 2020.
* [8] A. Mokhtari and A. Ribeiro, “Stochastic Quasi-Newton Methods,” _Proceedings of the IEEE_ , vol. 108, no. 11, pp. 1906–1922, Nov. 2020.
* [9] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated Learning: Strategies for Improving Communication Efficiency,” _arXiv:1610.05492 [cs]_ , Oct. 2016.
* [10] J. Konečný, H. B. McMahan, D. Ramage, and P. Richtárik, “Federated Optimization: Distributed Machine Learning for On-Device Intelligence,” _arXiv:1610.02527 [cs]_ , Oct. 2016.
* [11] X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, “In-Edge AI: Intelligentizing Mobile Edge Computing, Caching and Communication by Federated Learning,” _IEEE Network_ , vol. 33, no. 5, pp. 156–165, Sep. 2019.
* [12] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, “Automatic Differentiation in Machine Learning: A Survey,” _Journal of Machine Learning Research_ , vol. 18, no. 153, pp. 1–43, 2018.
* [13] S. Liu, S. P. Chepuri, M. Fardad, E. Masazade, G. Leus, and P. K. Varshney, “Sensor Selection for Estimation with Correlated Measurement Noise,” _IEEE Transactions on Signal Processing_ , vol. 64, no. 13, pp. 3509–3522, Jul. 2016.
* [14] H. Kvamme, Ø. Borgan, and I. Scheel, “Time-to-Event Prediction with Neural Networks and Cox Regression.” _J. Mach. Learn. Res._ , vol. 20, pp. 129:1–129:30, 2019.
* [15] Y. Nesterov and V. Spokoiny, “Random Gradient-Free Minimization of Convex Functions,” _Foundations of Computational Mathematics_ , vol. 17, no. 2, pp. 527–566, Apr. 2017.
* [16] J. Li, C. Wu, Z. Wu, and Q. Long, “Gradient-free method for nonsmooth distributed optimization,” _Journal of Global Optimization_ , vol. 61, no. 2, pp. 325–340, Feb. 2015.
* [17] D. Yuan, S. Xu, and J. Lu, “Gradient-free method for distributed multi-agent optimization via push-sum algorithms,” _International Journal of Robust and Nonlinear Control_ , vol. 25, no. 10, pp. 1569–1580, 2015.
* [18] D. Yuan, D. W. C. Ho, and S. Xu, “Zeroth-Order Method for Distributed Optimization With Approximate Projections,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 27, no. 2, pp. 284–294, Feb. 2016.
* [19] B. Gu, Z. Huo, C. Deng, and H. Huang, “Faster Derivative-Free Stochastic Algorithm for Shared Memory Machines,” in _International Conference on Machine Learning_. PMLR, Jul. 2018, pp. 1812–1821.
* [20] S. Liu, P.-Y. Chen, X. Chen, and M. Hong, “signSGD via Zeroth-Order Oracle,” in _7th International Conference on Learning Representations_ , New Orleans, LA, USA, May 2019.
* [21] A. K. Sahu and S. Kar, “Decentralized Zeroth-Order Constrained Stochastic Optimization Algorithms: Frank–Wolfe and Variants With Applications to Black-Box Adversarial Attacks,” _Proceedings of the IEEE_ , vol. 108, no. 11, pp. 1890–1905, Nov. 2020.
* [22] D. Wang, J. Yin, and W. Wang, “Distributed Randomized Gradient-Free Optimization Protocol of Multiagent Systems Over Weight-Unbalanced Digraphs.” _IEEE Trans. Cybern._ , vol. 51, no. 1, pp. 473–482, 2021.
* [23] J. C. Duchi, M. I. Jordan, M. J. Wainwright, and A. Wibisono, “Optimal Rates for Zero-Order Convex Optimization: The Power of Two Function Evaluations,” _IEEE Transactions on Information Theory_ , vol. 61, no. 5, pp. 2788–2806, May 2015.
* [24] J. Nocedal and S. J. Wright, _Numerical Optimization_ , 2nd ed., ser. Springer Series in Operations Research. New York: Springer, 2006.
* [25] J. Bernstein, Y.-X. Wang, K. Azizzadenesheli, and A. Anandkumar, “signSGD: Compressed Optimisation for Non-Convex Problems,” in _International Conference on Machine Learning_ , Jul. 2018, pp. 560–569.
* [26] H.-G. Beyer and H.-P. Schwefel, “Evolution strategies–A comprehensive introduction,” _Natural computing_ , vol. 1, no. 1, pp. 3–52, 2002.
* [27] N. Hansen, D. V. Arnold, and A. Auger, “Evolution strategies,” in _Springer Handbook of Computational Intelligence_ , J. Kacprzyk and W. Pedrycz, Eds. Springer Dordrecht Heidelberg London New York, 2015, pp. 871–898.
* [28] Z. Li, X. Lin, Q. Zhang, and H. Liu, “Evolution strategies for continuous optimization: A survey of the state-of-the-art,” _Swarm and Evolutionary Computation_ , vol. 56, p. 100694, Aug. 2020.
* [29] A. Auger and N. Hansen, “Linear Convergence of Comparison-based Step-size Adaptive Randomized Search via Stability of Markov Chains,” _SIAM Journal on Optimization_ , vol. 26, no. 3, pp. 1589–1624, Jan. 2016.
* [30] S. Astete-Morales, M.-L. Cauwet, and O. Teytaud, “Evolution Strategies with Additive Noise: A Convergence Rate Lower Bound,” in _Proceedings of the 2015 ACM Conference on Foundations of Genetic Algorithms XIII_ , ser. FOGA ’15. New York, NY, USA: Association for Computing Machinery, Jan. 2015, pp. 76–84.
* [31] F. Zhou and G. Cong, “On the convergence properties of a k-step averaging stochastic gradient descent algorithm for nonconvex optimization,” in _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, July 13-19, 2018, Stockholm, Sweden_ , J. Lang, Ed. ijcai.org, 2018, pp. 3219–3227.
* [32] J. Zhang, C. De Sa, I. Mitliagkas, and C. Ré, “Parallel SGD: When does averaging help?” _arXiv:1606.07365 [cs, stat]_ , Jun. 2016.
* [33] J. Wang and G. Joshi, “Cooperative SGD: A unified Framework for the Design and Analysis of Communication-Efficient SGD Algorithms,” _arXiv:1808.07576 [cs, stat]_ , Jan. 2019.
* [34] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in _Artificial Intelligence and Statistics_ , Apr. 2017, pp. 1273–1282.
* [35] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated Learning: Challenges, Methods, and Future Directions,” _IEEE Signal Processing Magazine_ , vol. 37, no. 3, pp. 50–60, May 2020.
* [36] X. Lian, Y. Huang, Y. Li, and J. Liu, “Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization.” in _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_ , 2015, pp. 2737–2745.
* [37] R. Ward, X. Wu, and L. Bottou, “AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes,” in _International Conference on Machine Learning_ , May 2019, pp. 6677–6686.
* [38] S. J. Reddi, S. Kale, and S. Kumar, “On the Convergence of Adam and Beyond,” in _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net, 2018.
* [39] K. Y. Levy, “Online to Offline Conversions, Universality and Adaptive Minibatch Sizes.” in _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , 2017, pp. 1613–1622.
* [40] J. Duchi, E. Hazan, and Y. Singer, “Adaptive Subgradient Methods for Online Learning and Stochastic Optimization,” _J. Mach. Learn. Res._ , vol. 12, pp. 2121–2159, Jul. 2011.
* [41] S. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konečný, S. Kumar, and H. B. McMahan, “Adaptive Federated Optimization,” _arXiv:2003.00295 [cs, math, stat]_ , Dec. 2020.
* [42] C. Xie, O. Koyejo, I. Gupta, and H. Lin, “Local AdaAlter: Communication-Efficient Stochastic Gradient Descent with Adaptive Learning Rates,” in _12th Annual Workshop on Optimization for Machine Learning_ , Dec. 2020.
* [43] Q. Tong, G. Liang, and J. Bi, “Effective Federated Adaptive Gradient Methods with Non-IID Decentralized Data,” _arXiv:2009.06557 [cs, stat]_ , Dec. 2020.
* [44] P. Rakshit, A. Konar, and S. Das, “Noisy evolutionary optimization algorithms – A comprehensive survey,” _Swarm and Evolutionary Computation_ , vol. 33, pp. 18–45, Apr. 2017.
* [45] H. Beyer and B. Sendhoff, “Toward a Steady-State Analysis of an Evolution Strategy on a Robust Optimization Problem With Noise-Induced Multimodality,” _IEEE Transactions on Evolutionary Computation_ , vol. 21, no. 4, pp. 629–643, Aug. 2017.
* [46] M. Hellwig and H.-G. Beyer, “On the steady state analysis of covariance matrix self-adaptation evolution strategies on the noisy ellipsoid model,” _Theoretical Computer Science_ , vol. 832, pp. 98–122, Sep. 2020.
* [47] C. Qian, Y. Yu, K. Tang, Y. Jin, X. Yao, and Z.-H. Zhou, “On the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments,” _Evolutionary Computation_ , vol. 26, no. 2, pp. 237–267, Jun. 2018.
* [48] Y.-J. Gong, W.-N. Chen, Z.-H. Zhan, J. Zhang, Y. Li, Q. Zhang, and J.-J. Li, “Distributed evolutionary algorithms and their models: A survey of the state-of-the-art,” _Applied Soft Computing_ , vol. 34, pp. 286–300, Sep. 2015.
* [49] T. Harada and E. Alba, “Parallel Genetic Algorithms: A Useful Survey,” _ACM Computing Surveys_ , vol. 53, no. 4, pp. 86:1–86:39, Aug. 2020.
* [50] Y. Akimoto, A. Auger, and T. Glasmachers, “Drift theory in continuous search spaces: Expected hitting time of the (1 + 1)-ES with 1/5 success rule,” in _Proceedings of the Genetic and Evolutionary Computation Conference_. Kyoto Japan: ACM, Jul. 2018, pp. 801–808.
* [51] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright, “Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization,” _IEEE Transactions on Information Theory_ , vol. 58, no. 5, pp. 3235–3249, May 2012.
* [52] S. Ghadimi and G. Lan, “Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming,” _SIAM Journal on Optimization_ , vol. 23, no. 4, pp. 2341–2368, Jan. 2013.
* [53] N. Hurley and S. Rickard, “Comparing Measures of Sparsity,” _IEEE Transactions on Information Theory_ , vol. 55, no. 10, pp. 4723–4741, 2009.
* [54] E. D. Dolan and J. J. Moré, “Benchmarking optimization software with performance profiles,” _Mathematical Programming_ , vol. 91, no. 2, pp. 201–213, Jan. 2002.
* [55] S. P. Karimireddy, Q. Rebjock, S. Stich, and M. Jaggi, “Error Feedback Fixes SignSGD and other Gradient Compression Schemes,” in _International Conference on Machine Learning_. PMLR, May 2019, pp. 3252–3261.
* [56] E. Chlebus, “An approximate formula for a partial sum of the divergent p-series,” _Applied Mathematics Letters_ , vol. 22, no. 5, pp. 732–737, May 2009.
Supplementary Appendices
## Appendix A Proof of Theorem 1
###### Proof.
For convenience define the following scalar operations
$\text{sign}(a)=\begin{cases}1&\text{\; if $a\geq 0$}\\\ -1&\text{\; if
$a<0$}\end{cases}\;\;\;\text{and}\;\;\;\text{sign}_{+}(a)=\frac{\text{sign}(a)+1}{2}=\begin{cases}1&\text{\;
if $a\geq 0$}\\\ 0&\text{\; if $a<0$}\end{cases}.$ (16)
Note that $\text{sign}(\cdot)$ differs from the usual sign operation, as in our definition it returns 1 when evaluated at 0. In addition, we have the following useful identities:
$\text{sign}(a)b=\left(-1+2\mathbb{I}\left\\{\text{sign}(a)=\text{sign}(b)\right\\}\right)|b|$
(17)
and
$\mathbb{I}\left\\{\text{sign}(a)=\text{sign}(b)\right\\}\leq\mathbb{I}\left\\{|a+b|\geq|b|\right\\}$
(18)
which can be verified easily: (17) follows by checking the four sign combinations, and (18) holds because equal signs imply $|a+b|=|a|+|b|\geq|b|$.
With the sign operation defined in (16), the iterations generated by Algorithm
1 can be rewritten as
$\bm{x}_{k+1}=\bm{x}_{k}+\alpha_{k}\text{sign}_{+}\left(f\left(\bm{x}_{k}\right)-f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)\right)\bm{u}_{k}.$
(19)
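For concreteness, the comparison-based update (19) can be sketched in a few lines of Python. This is an illustrative toy implementation only: the objective, dimension, and step-size schedule below are placeholders of ours, not taken from the experiments; the only ingredients taken from (16) and (19) are the $\text{sign}_{+}$ gate and the additive move.

```python
import numpy as np

def sign_plus(a):
    # Definition (16): 1 if a >= 0, and 0 otherwise.
    return 1.0 if a >= 0 else 0.0

def es_step(f, x, alpha, rng):
    # One update of the form (19): move along a Gaussian mutation u only if
    # the trial point does not increase the objective.
    u = rng.standard_normal(x.shape)
    return x + alpha * sign_plus(f(x) - f(x + alpha * u)) * u

# Illustrative usage on a toy quadratic (placeholder objective).
rng = np.random.default_rng(0)
f = lambda x: 0.5 * float(np.sum(x ** 2))
x = rng.standard_normal(10)
for k in range(200):
    x = es_step(f, x, alpha=0.5 / np.sqrt(k + 1), rng=rng)
```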
With Assumption 1, we can bound the per-iteration progress as
$\begin{split}f\left(\bm{x}_{k+1}\right)-f\left(\bm{x}_{k}\right)&\leq\nabla
f\left(\bm{x}_{k}\right)^{T}(\bm{x}_{k+1}-\bm{x}_{k})+\frac{L}{2}\left\|\bm{x}_{k+1}-\bm{x}_{k}\right\|^{2}\\\
&\overset{(\ref{eq:update-rule-simple-
ES})}{\leq}\alpha_{k}\text{sign}_{+}\left(f\left(\bm{x}_{k}\right)-f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)\right)\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}+\frac{L\alpha_{k}^{2}}{2}\left\|\bm{u}_{k}\right\|^{2}\\\
&\overset{(\ref{eq:definition-sign-
signplus})}{=}\frac{1}{2}\alpha_{k}\nabla f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}+\frac{1}{2}\alpha_{k}\underbrace{\left(\text{sign}\left(f\left(\bm{x}_{k}\right)-f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)\right)\right)\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}}_{\overset{\Delta}{=}\mathfrak{A}}+\frac{L\alpha_{k}^{2}}{2}\left\|\bm{u}_{k}\right\|^{2}.\end{split}$
Taking expectation with respect to $\bm{u}_{k}$ on both sides, noting that the first term vanishes since $\mathbb{E}_{k}[\bm{u}_{k}]=\bm{0}$, and applying Lemma 7 to the last term, we have
$\mathbb{E}_{k}\left[f\left(\bm{x}_{k+1}\right)\right]-f\left(\bm{x}_{k}\right)\leq\frac{1}{2}\alpha_{k}\mathbb{E}_{k}\left[\mathfrak{A}\right]+\frac{L\alpha_{k}^{2}}{2}U$
(20)
where $\mathbb{E}_{k}$ denotes the expectation conditioned on the randomness
at the $k$-th iteration.
We now bound the term $\mathfrak{A}$ using identities (17) and (18):
$\begin{split}\mathfrak{A}&\overset{(\ref{eq:sign_identity})}{=}\left(-1+2\mathbb{I}\left\\{\text{sign}\left(f\left(\bm{x}_{k}\right)-f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right)\right\\}\right)\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\\\
&=\left(-1+2\mathbb{I}\left\\{\text{sign}\left(f\left(\bm{x}_{k}\right)-f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)\right)=\text{sign}\left(\alpha_{k}\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right)\right\\}\right)\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\\\ &\overset{(\ref{eq:indicator-
neq})}{\leq}\left(-1+2\mathbb{I}\left\\{\left|f\left(\bm{x}_{k}+\alpha_{k}\bm{u}_{k}\right)-f\left(\bm{x}_{k}\right)-\alpha_{k}\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\geq\alpha_{k}\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right\\}\right)\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\\\
&\leq\left(-1+2\mathbb{I}\left\\{\frac{L}{2}\left\|\alpha_{k}\bm{u}_{k}\right\|^{2}\geq\alpha_{k}\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right\\}\right)\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\end{split}$ (21)
where the last inequality is due to Assumption 1.
Substituting (21) into (20) gives
$\begin{split}\mathbb{E}_{k}\left[f\left(\bm{x}_{k+1}\right)\right]&-f\left(\bm{x}_{k}\right)\\\
&\leq\frac{\alpha_{k}}{2}\mathbb{E}_{k}\left[\left(-1+2\mathbb{I}\left\\{\frac{\alpha_{k}L}{2}\left\|\bm{u}_{k}\right\|^{2}\geq\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right\\}\right)\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right]+\frac{L\alpha_{k}^{2}}{2}U\\\
&=-\frac{\alpha_{k}}{2}\mathbb{E}_{k}\left[\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right]+\alpha_{k}\underbrace{\mathbb{E}_{k}\left[\mathbb{I}\left\\{\frac{\alpha_{k}L}{2}\left\|\bm{u}_{k}\right\|^{2}\geq\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right\\}\left|\nabla
f\left(\bm{x}_{k}\right)^{T}\bm{u}_{k}\right|\right]}_{\overset{\Delta}{=}\mathfrak{B}}+\frac{L\alpha_{k}^{2}}{2}U\\\
&=-\frac{\alpha_{k}}{\sqrt{2\pi}}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}+\alpha_{k}\mathfrak{B}+\frac{L\alpha_{k}^{2}}{2}U\\\
\end{split}$ (22)
where the last equality uses the fact
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]=\sqrt{\frac{2}{\pi}}\|\bm{y}\|_{2}\text{ for
}\bm{u}\sim\mathcal{N}(\bm{0},\bm{I}).$ (23)
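Identity (23) is the standard expression for the first absolute moment of a Gaussian projection; if desired it can be sanity-checked numerically with a short Monte Carlo snippet (illustrative only, not part of the proof; the dimension and sample count are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
y = rng.standard_normal(n)
u = rng.standard_normal((200_000, n))      # rows are i.i.d. draws of u ~ N(0, I)
lhs = np.mean(np.abs(u @ y))               # Monte Carlo estimate of E[|y^T u|]
rhs = np.sqrt(2.0 / np.pi) * np.linalg.norm(y)
print(lhs, rhs)                            # the two numbers should nearly coincide
```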
Since the distribution of $\bm{u}_{k}$ is isotropic, we can assume $\nabla
f(\bm{x}_{k})=\left\|\nabla f(\bm{x}_{k})\right\|_{2}\bm{e}_{1}$ where
$\bm{e}_{1}=(1,0,\cdots,0)^{T}$. Denoting $u_{k,i}$ as the $i$-th element of
$\bm{u}_{k}$ and noting the assumption $\|\cdot\|=\|\cdot\|_{2}$, we have
$\mathfrak{B}=\mathbb{E}_{k}\left[\mathbb{I}\left\\{\frac{\alpha_{k}L}{2}\sum_{i=1}^{n}u_{k,i}^{2}\geq\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right\\}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right].$ (24)
Now we decompose the expectation operation $\mathbb{E}_{k}$ into two steps:
firstly taking the expectation over $u_{k,2},\cdots,u_{k,n}$ and secondly over
$u_{k,1}$. That is,
$\begin{split}\mathfrak{B}&=\mathbb{E}_{u_{k,1}}\mathbb{E}_{u_{k,2},\cdots,u_{k,n}}\left[\mathbb{I}\left\\{\frac{\alpha_{k}L}{2}\sum_{i=1}^{n}u_{k,i}^{2}\geq\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right\\}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right]\\\
&=\mathbb{E}_{u_{k,1}}\left[\mathbb{P}_{u_{k,2},\cdots,u_{k,n}}\left\\{\frac{\alpha_{k}L}{2}\sum_{i=1}^{n}u_{k,i}^{2}\geq\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right\\}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right]\\\
&\leq\mathbb{E}_{u_{k,1}}\left[\frac{\alpha_{k}L}{2}\frac{u_{k,1}^{2}+\sum_{i=2}^{n}\mathbb{E}_{u_{k,i}}\left[u_{k,i}^{2}\right]}{\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\left|u_{k,1}\right|\right]\\\
&=\frac{\alpha_{k}L}{2}\mathbb{E}_{u_{k,1}}\left[{u_{k,1}^{2}+\sum_{i=2}^{n}\mathbb{E}_{u_{k,i}}\left[u_{k,i}^{2}\right]}\right]=\frac{\alpha_{k}L}{2}\mathbb{E}_{k}\left[\left\|\bm{u}_{k}\right\|^{2}\right].\end{split}$
Here the inequality follows from the Markov inequality applied over the randomness of the components $u_{k,2},\cdots,u_{k,n}$.
Substituting the above bound into (22) and using Lemma 7, we get
$\mathbb{E}_{k}\left[f\left(\bm{x}_{k+1}\right)\right]-f\left(\bm{x}_{k}\right)\leq-\frac{\alpha_{k}}{\sqrt{2\pi}}\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}+L\alpha_{k}^{2}U.$
Taking the total expectation and summing over $k=0,1,\cdots,K-1$ give
$\sum_{k=0}^{K-1}\alpha_{k}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\leq\sqrt{2\pi}\left(f\left(\bm{x}_{0}\right)-f_{*}+LU\sum_{k=0}^{K-1}\alpha_{k}^{2}\right)\overset{\lx@cref{creftype~refnum}{eq:bound-
sum-1-series}}{\leq}\sqrt{2\pi}\left(f\left(\bm{x}_{0}\right)-f_{*}+LU\alpha_{0}^{2}(1+\log
K)\right).$ (25)
On the other hand, we can lower bound the left-hand side as
$\sum_{k=0}^{K-1}\alpha_{k}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\overset{\lx@cref{creftype~refnum}{eq:bound-
sum-0.5-series-2}}{\geq}{\sqrt{K}\alpha_{0}\left(\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\right)}.$
Combining this with (25) yields
$\frac{1}{K}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{k}\right)\right\|_{2}\right]\leq\sqrt{\frac{2\pi}{K}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha_{0}}+LU\alpha_{0}(1+\log
K)\right).$
The bound (2) can then be obtained by specifying $U=n$ according to Lemma 7. ∎
## Appendix B A Unified Implementation of DES and Fundamental Lemmas
Before proving the main results Theorems 2, 3, 4 and 5, we provide in this
section some lemmas which will be used several times in the subsequent proofs.
Since the two DES implementations (i.e., Algorithms 2 and 3) differ only in how the mutation vectors are generated, we analyze them in a unified manner. To this end, we provide in Algorithm 4 a unified implementation of DES that recovers both Algorithm 2 and Algorithm 3. In particular, it recovers Algorithm 2 if the mutation vector $\bm{u}_{i,k}^{t}$ in Line 9 is drawn from the Gaussian distribution $\mathcal{N}(\bm{0},\bm{I})$, and it is logically equivalent to Algorithm 3 when $\bm{u}_{i,k}^{t}$ is drawn from the mixture Gaussian distribution $\mathcal{M}_{l}^{G}$ or the mixture Rademacher distribution $\mathcal{M}_{l}^{R}$. Note that the lemmas derived in this section do not rely on the specific distribution of the mutation vectors, nor do they specify the vector norm used in the assumptions. The only requirement is that the second moment of the mutation vector $\bm{u}_{i,k}^{t}$ is bounded by some constant $U$ (see Line 9 in Algorithm 4). We will show in the subsequent sections that this requirement indeed holds.
Algorithm 4 Unified implementation of DES for convergence analyses
Input: $\bm{x}_{0}\in\mathbb{R}^{n}$: initial solution; $\alpha\in\mathbb{R}_{+}$: initial step-size; $\beta\in\left[0,\sqrt{\frac{1}{2\sqrt{2}}}\right)$: momentum parameter; $b\geq\sqrt{T}$: minibatch size; $l\in\mathbb{Z}_{+}$: mixture parameter
1:for $t=0,1,\cdots,T-1$ do
2: for $i=1,2,\cdots,M$ in parallel do
3: $\bm{v}_{i,0}^{t}=\bm{x}_{t}$
4: $\alpha_{0}^{t}=\alpha/(t+1)^{0.25}$
5: Draw a minibatch $\mathcal{D}_{i}$ of size $b$
6: Define $f_{i}(\bm{x})=\frac{1}{b}\sum_{\bm{\xi}\in\mathcal{D}_{i}}F(\bm{x};\bm{\xi})$
7: for $k=0,1,\cdots,K-1$ do
8: $\alpha_{k}^{t}=\alpha_{0}^{t}/(k+1)^{0.5}$
9: Generate a random vector $\bm{u}_{i,k}^{t}$ satisfying $\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|^{2}\right]\leq U$ for some positive constant $U$ and some generic norm $\|\cdot\|$
10: $\bm{v}_{i,k+1}^{t}=\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\text{sign}_{+}\left(f_{i}(\bm{v}_{i,k}^{t})-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\bm{u}_{i,k}^{t}$ where $\text{sign}_{+}$ is defined in (16)
11: end for
12: end for
13: $\bm{d}_{t+1}=\frac{1}{M}\sum_{i=1}^{M}\bm{v}_{i,K}^{t}-\bm{x}_{t}$
14: $\bm{m}_{t+1}=\beta\bm{m}_{t}+(1-\beta)\bm{d}_{t+1}$
15: $\bm{x}_{t+1}=\bm{x}_{t}+\bm{m}_{t+1}$
16:end for
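For readers who prefer code, the control flow of Algorithm 4 can be sketched as follows. This is a sequential, single-process illustration under several assumptions of ours: the worker loop (which Algorithm 4 runs in parallel) is executed serially, `data` is assumed to be an array of samples, `F(x, xi)` is the per-sample loss, the momentum buffer is assumed to start at zero, and `sample_mutation` stands in for whichever mutation distribution (Gaussian, mixture Gaussian, or mixture Rademacher) is being analyzed.

```python
import numpy as np

def sign_plus(a):
    return 1.0 if a >= 0 else 0.0

def des_sketch(F, data, x0, alpha, beta, b, T, K, M, sample_mutation, rng):
    """Sequential sketch of Algorithm 4; the loop over i would run in parallel."""
    x = x0.copy()
    m = np.zeros_like(x0)                       # momentum buffer (assumed to start at 0)
    for t in range(T):
        alpha0_t = alpha / (t + 1) ** 0.25
        v_final = []
        for i in range(M):                      # workers ("in parallel" in Algorithm 4)
            v = x.copy()
            batch = data[rng.choice(len(data), size=b, replace=False)]
            f_i = lambda z: float(np.mean([F(z, xi) for xi in batch]))
            for k in range(K):
                alpha_k = alpha0_t / (k + 1) ** 0.5
                u = sample_mutation(rng)        # mutation vector with E[||u||^2] <= U
                if sign_plus(f_i(v) - f_i(v + alpha_k * u)):
                    v = v + alpha_k * u
            v_final.append(v)
        d = np.mean(v_final, axis=0) - x        # descent step d_{t+1}
        m = beta * m + (1 - beta) * d           # momentum m_{t+1}
        x = x + m                               # x_{t+1}
    return x

# Example mutation sampler for the Gaussian case of Algorithm 2:
# sample_mutation = lambda rng: rng.standard_normal(x0.shape)
```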
In the following we give some lemmas regarding the iterates generated by Algorithm 4. Due to the momentum mechanism, it is difficult to work directly
with the solutions $\left\\{\bm{x}_{t}\right\\}$. Instead, we introduce a
virtual sequence $\left\\{\bm{z}_{t}\right\\}$ which can be regarded as a
counterpart of $\left\\{\bm{x}_{t}\right\\}$ without momentum:
$\bm{z}_{t+1}=\frac{1}{1-\beta}\bm{x}_{t+1}-\frac{\beta}{1-\beta}\bm{x}_{t}.$
To make it well-defined, we specify $\bm{x}_{-1}=\bm{x}_{0}$ such that
$\bm{z}_{0}=\bm{x}_{0}$. We will characterize the algorithm behavior with
$\left\\{\bm{z}_{t}\right\\}$ and relate it to $\left\\{\bm{x}_{t}\right\\}$
in the last step. Note that by this definition and according to the momentum
rule (Lines 14-15 in Algorithm 4) we have
$\bm{z}_{t+1}-\bm{z}_{t}=\bm{d}_{t+1}\text{\;\;\; and \;\;\;
}\left\|\bm{x}_{t}-\bm{z}_{t}\right\|=\frac{\beta}{1-\beta}\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|.$
(26)
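For completeness, both relations in (26) follow directly from the definition of $\bm{z}_{t}$ and the momentum rule:
$\bm{z}_{t+1}-\bm{z}_{t}=\frac{(\bm{x}_{t+1}-\bm{x}_{t})-\beta(\bm{x}_{t}-\bm{x}_{t-1})}{1-\beta}=\frac{\bm{m}_{t+1}-\beta\bm{m}_{t}}{1-\beta}=\bm{d}_{t+1},\qquad\bm{x}_{t}-\bm{z}_{t}=-\frac{\beta}{1-\beta}\left(\bm{x}_{t}-\bm{x}_{t-1}\right).$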
###### Lemma 1.
The descent step $\bm{d}_{t+1}$ in Algorithm 4 can be bounded as
$\displaystyle\mathbb{E}\left[\left\|\bm{d}_{t+1}\right\|^{2}\right]$
$\displaystyle\leq\left(\alpha_{0}^{t}\right)^{2}UK\left(1+\log K\right),$
(27) $\displaystyle\mathbb{E}\left[\left\|\bm{d}_{t+1}\right\|\right]$
$\displaystyle\leq 2\alpha_{0}^{t}\sqrt{KU}.$ (28)
###### Proof.
According to Line 13 of Algorithm 4 we have
$\begin{split}\mathbb{E}\left[\left\|\bm{d}_{t+1}\right\|^{2}\right]&\overset{(*)}{\leq}\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\left[\left\|\bm{v}_{i,K}^{t}-\bm{x}_{t}\right\|^{2}\right]\\\
&=\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\left[\left\|\sum_{k=0}^{K-1}\bm{v}_{i,k+1}^{t}-\bm{v}_{i,k}^{t}\right\|^{2}\right]\\\
&\overset{(*)}{\leq}\frac{K}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\bm{v}_{i,k+1}^{t}-\bm{v}_{i,k}^{t}\right\|^{2}\right]\\\
&\leq\frac{K}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\left(\alpha_{k}^{t}\right)^{2}\mathbb{E}\left[\left\|\bm{u}_{i,k}^{t}\right\|^{2}\right]\\\
&\leq\left(\alpha_{0}^{t}\right)^{2}\frac{K}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\frac{U}{k+1}\end{split}$
where $(*)$ is due to Jensen’s inequality. Applying (42) in Lemma 8 gives (27).
Similarly, the bound (28) can be obtained as
$\begin{split}\mathbb{E}\left[\left\|\bm{d}_{t+1}\right\|\right]&\leq\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\left[\left\|\bm{v}_{i,K}^{t}-\bm{x}_{t}\right\|\right]\\\
&=\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\left[\left\|\sum_{k=0}^{K-1}\bm{v}_{i,k+1}^{t}-\bm{v}_{i,k}^{t}\right\|\right]\\\
&\overset{(*)}{\leq}\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\mathbb{E}\left[\left\|\bm{v}_{i,k+1}^{t}-\bm{v}_{i,k}^{t}\right\|\right]\\\
&\leq\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left\|\bm{u}_{i,k}^{t}\right\|\right]\\\
&\leq\alpha_{0}^{t}\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\sqrt{\frac{U}{k+1}}.\end{split}$
where $(*)$ is due to Jensen’s inequality and the last inequality is due to
$\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|\right]\leq\sqrt{\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|^{2}\right]}\leq\sqrt{U}$.
We can then reach (28) using (43) from Lemma 8. ∎
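Throughout these proofs, sums of the decaying step sizes are controlled by elementary integral-comparison estimates; the two used above read
$\sum_{k=0}^{K-1}\frac{1}{k+1}\leq 1+\int_{1}^{K}\frac{dx}{x}=1+\log K,\qquad\sum_{k=0}^{K-1}\frac{1}{\sqrt{k+1}}\leq\int_{0}^{K}\frac{dx}{\sqrt{x}}=2\sqrt{K},$
which is what the bounds (42) and (43) from Lemma 8 appear to provide.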
###### Lemma 2.
Assume $0\leq\beta<\sqrt{\frac{1}{2\sqrt{2}}}$. The change of the sequence
$\\{\bm{x}_{t}\\}$ in Algorithm 4 can be bounded as
$\displaystyle\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|\right]$
$\displaystyle\leq\frac{160(1-\beta)\alpha\sqrt{KU}}{3T^{1/4}},$ (29)
$\displaystyle\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|^{2}\right]$
$\displaystyle\leq\frac{(1-\beta)^{2}}{\frac{1}{2\sqrt{2}}-\beta^{2}}UK\left(1+\log
K\right)\left(\alpha_{0}^{t}\right)^{2}.$ (30)
###### Proof.
We first prove (29). By construction, we have for $t\geq 1$
$\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|=\left\|\bm{m}_{t}\right\|=\left\|\beta\bm{m}_{t-1}+(1-\beta)\bm{d}_{t}\right\|\leq\beta\left\|\bm{m}_{t-1}\right\|+(1-\beta)\left\|\bm{d}_{t}\right\|.$
Expanding the above recursive bound gives
$\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|\leq\left(\beta^{t-1}\|\bm{d}_{1}\|+\cdots+\beta\|\bm{d}_{t-1}\|+\|\bm{d}_{t}\|\right)(1-\beta).$
Taking expectation at both sides yields
$\begin{split}\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|\right]&\leq(1-\beta)\sum_{j=1}^{t}\beta^{t-j}\mathbb{E}\left[\left\|\bm{d}_{j}\right\|\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:descent-step-
bound}}{\leq}(1-\beta)\sum_{j=1}^{t}\beta^{t-j}2\alpha_{0}^{j-1}\sqrt{KU}\\\
&=2(1-\beta)\alpha\sqrt{KU}\sum_{j=1}^{t}\frac{\beta^{t-j}}{j^{0.25}}\\\
&\overset{\lx@cref{creftype~refnum}{eq:beta-series-
bound-1}}{\leq}\frac{40(1-\beta)\alpha\sqrt{KU}}{t^{0.25}}\end{split}$
Recall that we have defined $\bm{x}_{0}=\bm{x}_{-1}$, so
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|\right]\leq\frac{1}{T}\sum_{t=1}^{T-1}\frac{40(1-\beta)\alpha\sqrt{KU}}{t^{0.25}}{\overset{\lx@cref{creftype~refnum}{eq:bound-
sum-0.25-series}}{\leq}}\frac{160(1-\beta)\alpha\sqrt{KU}}{3T^{1/4}}$
and (29) is proved.
(30) is trivial for $t=0$. For $t\geq 1$, it can be proved in a way similar to the above.
Firstly, we obtain via Jensen’s inequality
$\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|^{2}=\left\|\bm{m}_{t}\right\|^{2}=\left\|\beta\bm{m}_{t-1}+(1-\beta)\bm{d}_{t}\right\|^{2}\leq
2\beta^{2}\left\|\bm{m}_{t-1}\right\|^{2}+2(1-\beta)^{2}\left\|\bm{d}_{t}\right\|^{2}.$
Expanding the momentum terms $\\{\bm{m}_{t-1}\\}$ and taking expectation give
$\begin{split}\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|^{2}\right]&\leq
2(1-\beta)^{2}\mathbb{E}\left[\left(2\beta^{2}\right)^{t-1}\left\|\bm{d}_{1}\right\|^{2}+\cdots+\left(2\beta^{2}\right)^{0}\left\|\bm{d}_{t}\right\|^{2}\right]\\\
&=2(1-\beta)^{2}\sum_{j=1}^{t}\left(2\beta^{2}\right)^{t-j}\mathbb{E}\left[\left\|\bm{d}_{j}\right\|^{2}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:descent-step-bound-
square}}{\leq}2(1-\beta)^{2}\sum_{j=1}^{t}\left(2\beta^{2}\right)^{t-j}\left(\alpha_{0}^{j-1}\right)^{2}UK\left(1+\log
K\right)\\\
&=2(1-\beta)^{2}\sum_{j=1}^{t}\alpha^{2}\frac{\left(2\beta^{2}\right)^{t-j}}{j^{0.5}}UK\left(1+\log
K\right)\\\ &\overset{\lx@cref{creftype~refnum}{eq:beta-series-
bound-2}}{\leq}\frac{2(1-\beta)^{2}\alpha^{2}UK\left(1+\log
K\right)}{\sqrt{t}\left(1-2\sqrt{2}\beta^{2}\right)}\\\
&=\sqrt{\frac{t+1}{t}}\frac{2(1-\beta)^{2}\left(\alpha_{0}^{t}\right)^{2}UK\left(1+\log
K\right)}{1-2\sqrt{2}\beta^{2}}.\end{split}$
The last step is due to the definition of $\alpha_{0}^{t}$. Using the assumption $t\geq 1$, we then reach (30).
∎
###### Lemma 3.
Assume $0\leq\beta<\sqrt{\frac{1}{2\sqrt{2}}}$. The worker drift in Algorithm
4 can be bounded as
$\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}\right]\leq\frac{2}{1-2\sqrt{2}\beta^{2}}UK\left(1+\log
K\right)\left(\alpha_{0}^{t}\right)^{2}.$ (31)
###### Proof.
$\begin{split}\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}\right]&\leq
2\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{x}_{t}\right\|^{2}\right]+2\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{z}_{t}\right\|^{2}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:properties-virtual-
sequence}}{=}2\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{x}_{t}\right\|^{2}\right]+2\left(\frac{\beta}{1-\beta}\right)^{2}\mathbb{E}\left[\left\|\bm{x}_{t}-\bm{x}_{t-1}\right\|^{2}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:x-change-
bound-2}}{\leq}2\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{x}_{t}\right\|^{2}\right]+\frac{2\beta^{2}}{\frac{1}{2\sqrt{2}}-\beta^{2}}UK\left(1+\log
K\right)\left(\alpha_{0}^{t}\right)^{2}\end{split}$
where
$\begin{split}\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{x}_{t}\right\|^{2}\right]&\leq
k\sum_{j=0}^{k-1}\mathbb{E}\left[\left\|\bm{v}_{i,j+1}^{t}-\bm{v}_{i,j}^{t}\right\|^{2}\right]\leq
k\sum_{j=0}^{k-1}\left(\alpha_{j}^{t}\right)^{2}\mathbb{E}\left[\left\|\bm{u}_{i,j}^{t}\right\|^{2}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:bound-
sum-1-series}}{\leq}Uk\left(\alpha_{0}^{t}\right)^{2}\left(1+\log k\right)\leq
UK\left(\alpha_{0}^{t}\right)^{2}\left(1+\log K\right).\end{split}$
We thus obtain
$\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-{\bm{z}_{t}}\right\|^{2}\right]\leq\left(2+\frac{2\beta^{2}}{\frac{1}{2\sqrt{2}}-\beta^{2}}\right)UK\left(1+\log
K\right)\left(\alpha_{0}^{t}\right)^{2}\leq\frac{2}{1-2\sqrt{2}\beta^{2}}UK\left(1+\log
K\right)\left(\alpha_{0}^{t}\right)^{2}.$
∎
###### Lemma 4.
Consider Algorithm 4. Let Assumptions 1, 2 and 3 hold for some generic vector
norm $\|\cdot\|$. Denote $\mathbb{E}_{\mathcal{D}_{i}}$ as the expectation
taken over the minibatch $\mathcal{D}_{i}$. We have
$\begin{split}\mathbb{E}_{\mathcal{D}_{i}}&\left[\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}\right]\\\
&\;\;\;\;\;\;\;\leq\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}+\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\sigma^{2}}{2\omega_{2}b}\end{split}$
(32)
###### Proof.
Define
$\mathfrak{A}=\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}.$
By (18), we have
$\mathfrak{A}\overset{\lx@cref{creftype~refnum}{eq:indicator-
neq}}{\leq}\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\underbrace{\left|f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}\right)-\alpha_{k}^{t}\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|}_{\overset{\Delta}{=}\mathfrak{B}}\geq\alpha_{k}^{t}\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\right\\},$
where
$\begin{split}\mathfrak{B}&\leq\underbrace{\left|f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}\right)-\alpha_{k}^{t}\nabla
f_{i}\left(\bm{v}_{i,k}^{t}\right)^{T}\bm{u}_{i,k}^{t}\right|}_{\mathfrak{C}_{1}}\\\
&+\alpha_{k}^{t}\underbrace{\left|\nabla
f_{i}\left(\bm{v}_{i,k}^{t}\right)^{T}\bm{u}_{i,k}^{t}-\nabla
f_{i}\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|}_{\mathfrak{C}_{2}}+\alpha_{k}^{t}\underbrace{\left|\nabla
f_{i}\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}-\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|}_{\mathfrak{C}_{3}}.\end{split}$
By Assumption 1 we have
$\mathfrak{C}_{1}\leq\frac{L}{2}\left\|\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right\|^{2}.$
Noting the inequality
$|\bm{a}^{T}\bm{b}|\leq\frac{1}{2c}\|\bm{a}\|^{2}_{*}+\frac{c}{2}\|\bm{b}\|^{2}$
for any $\bm{a},\bm{b}\in\mathbb{R}^{n}$ and $c\in\mathbb{R}_{+}$, we obtain
$\mathfrak{C}_{2}\leq\frac{1}{2\omega_{1}}\left\|\nabla
f_{i}\left(\bm{v}_{i,k}^{t}\right)-\nabla
f_{i}\left(\bm{z}_{t}\right)\right\|^{2}_{*}+\frac{\omega_{1}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}\leq\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\omega_{1}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}$
and
$\begin{split}\mathfrak{C}_{3}&\leq\frac{1}{2\omega_{2}}\left\|\nabla
f_{i}\left(\bm{z}_{t}\right)-\nabla
f\left(\bm{z}_{t}\right)\right\|^{2}_{*}+\frac{\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2},\end{split}$
for some $\omega_{1},\omega_{2}\in\mathbb{R}_{+}$.
Putting all these together, we reach
$\begin{split}\mathfrak{A}&\leq\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\mathfrak{B}\geq\alpha_{k}^{t}\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\right\\}\\\
&\leq\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}+\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\left\|\nabla
f_{i}\left(\bm{z}_{t}\right)-\nabla
f\left(\bm{z}_{t}\right)\right\|^{2}_{*}}{2\omega_{2}}\geq\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\right\\}\end{split}$
Now take expectation over $\mathcal{D}_{i}$. Since Assumptions 2 and 3 imply that the gradient variance is scaled down by a factor of $b=\left|\mathcal{D}_{i}\right|$, we have, based on the Markov inequality,
$\begin{split}\mathbb{E}_{\mathcal{D}_{i}}\left[\mathfrak{A}\right]&{\leq}\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{P}_{\mathcal{D}_{i}}\left\\{\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}+\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\left\|\nabla
f_{i}\left(\bm{z}_{t}\right)-\nabla
f\left(\bm{z}_{t}\right)\right\|^{2}_{*}}{2\omega_{2}}\geq\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\right\\}\\\
&\leq\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}+\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\mathbb{E}_{\mathcal{D}_{i}}\left[\left\|\nabla
f_{i}\left(\bm{z}_{t}\right)-\nabla
f\left(\bm{z}_{t}\right)\right\|^{2}_{*}\right]}{{2\omega_{2}}}\\\
&\leq\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\left\|\bm{u}_{i,k}^{t}\right\|^{2}+\frac{L^{2}}{2\omega_{1}}\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}+\frac{\sigma^{2}}{2\omega_{2}b}.\end{split}$
∎
## Appendix C Proof of Theorems 2 and 3
In this section we prove the convergence results for Algorithm 2. Since Algorithm 2 is a special case of Algorithm 4 with Gaussian mutation, we proceed in two steps. In the first step, we start from Lemmas 1, 3 and 4 (which are obtained for Algorithm 4) with the specification $\bm{u}_{i,k}^{t}\sim\mathcal{N}(\bm{0},\bm{I})$. This allows us to bound the gradient norm averaged over the virtual sequence $\\{\bm{z}_{t}\\}$ in terms of some constant $U$. The result is given in Lemma 5. Then, in the second step, we further specify the vector norm used in the assumptions, from which we can obtain the detailed values of $U$. In particular, based on Lemmas 5 and 2, we prove Theorem 2 with the specification $\|\cdot\|=\|\cdot\|_{2}$ and Theorem 3 with $\|\cdot\|=\|\cdot\|_{\infty}$.
###### Lemma 5.
Let Assumptions 1, 2 and 3 hold for some generic vector norm $\|\cdot\|$. The
virtual sequence $\bm{z}_{t}$ produced by Algorithm 2 satisfies, for some
$U\geq\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|^{2}\right]$,
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]\leq\frac{\sqrt{2\pi}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\sum_{t=0}^{T-1}\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}\right),$
(33)
where $\Psi$ is given in 7.
###### Proof.
First rewrite $\mathbb{E}\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]$ as
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]\\\ &=\mathbb{E}\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\left(\frac{1}{M}\sum_{i=1}^{M}\bm{v}_{i,K}^{t}-\bm{x}_{t}\right)\right]\\\
&=\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\text{sign}_{+}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:definition-sign-
signplus}}{=}\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left(1+\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\right)\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&\overset{(*)}{=}\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:sign_identity}}{=}\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\left(-1+2\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}\right)\right],\end{split}$
where $(*)$ is due to $\mathbb{E}\left[\bm{u}_{i,k}^{t}\right]=\bm{0}$.
Now specify $\bm{u}_{i,k}^{t}\sim\mathcal{N}(\bm{0},\bm{I})$. Using the
identity in (23), we have
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:expectation-half-
gaussian}}{\leq}-\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\sum_{k=0}^{K-1}\alpha_{k}^{t}\\\
&\;\;\;\;\;\;\;\;\;+\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\underbrace{\left|\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}}_{\overset{\Delta}{=}\mathfrak{A}}\right]\\\
&\overset{\lx@cref{creftype~refnum}{eq:bound-
sum-0.5-series}}{\leq}-\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}\sqrt{K}+\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\mathfrak{A}\right],\end{split}$
Now use Lemmas 7 and 4 to bound $\mathbb{E}\left[\mathfrak{A}\right]$:
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]+\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}\sqrt{K}\\\
&\leq\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\left\\{\left(\alpha_{k}^{t}L+\omega_{1}+\omega_{2}\right)U+\frac{L^{2}}{\omega_{1}}\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}\right]+\frac{\sigma^{2}}{\omega_{2}b}\right\\}\\\
&\leq\frac{L^{2}}{2M\omega_{1}}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}\right]+\frac{LU}{2}\sum_{k=0}^{K-1}\left(\alpha_{k}^{t}\right)^{2}+\left(\frac{\omega_{1}+\omega_{2}}{2}U+\frac{\sigma^{2}}{2\omega_{2}b}\right)\sum_{k=0}^{K-1}\alpha_{k}^{t}\\\
&\overset{(\ref{eq:bound-sum-1-series},\ref{eq:bound-
sum-0.5-series})}{\leq}\frac{L^{2}}{2M\omega_{1}}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\right\|^{2}\right]+\frac{LU}{2}(1+\log
K)\left(\alpha_{0}^{t}\right)^{2}+\left((\omega_{1}+\omega_{2})U+\frac{\sigma^{2}}{\omega_{2}b}\right)\sqrt{K}\alpha_{0}^{t}\end{split}$
Using Lemma 3 and (43) yields
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]+\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}\sqrt{K}\\\
&\leq
LU\sqrt{K}\left(\frac{\alpha_{0}^{t}L}{\omega_{1}}\frac{2}{1-2\sqrt{2}\beta^{2}}K+\frac{1}{2\sqrt{K}}\right)(1+\log
K)\left(\alpha_{0}^{t}\right)^{2}+\left((\omega_{1}+\omega_{2})U+\frac{\sigma^{2}}{\omega_{2}b}\right)\sqrt{K}\alpha_{0}^{t}\\\
\end{split}$
Consider now the setting
$\omega_{1}=\frac{L\alpha_{0}^{t}}{\sqrt{K}},\omega_{2}=\frac{\sigma}{\sqrt{Ub}}$,
and we can reach
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}\right]\\\
&\leq-\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}\sqrt{K}+LU\sqrt{K}\left(\left(\frac{2}{1-2\sqrt{2}\beta^{2}}\sqrt{K}+\frac{1}{2\sqrt{K}}\right)(1+\log
K)+\sqrt{K}\right)\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{K}\sqrt{\frac{U}{b}}\alpha_{0}^{t}.\end{split}$
(34)
Using Assumption 1, we have
$\begin{split}f\left(\bm{z}_{t+1}\right)&\leq f\left(\bm{z}_{t}\right)+\nabla
f\left(\bm{z}_{t}\right)^{T}(\bm{z}_{t+1}-\bm{z}_{t})+\frac{L}{2}\left\|\bm{z}_{t+1}-\bm{z}_{t}\right\|^{2}\\\
&\overset{\lx@cref{creftype~refnum}{eq:properties-virtual-
sequence}}{=}f\left(\bm{z}_{t}\right)+\nabla
f\left(\bm{z}_{t}\right)^{T}\bm{d}_{t+1}+\frac{L}{2}\left\|\bm{d}_{t+1}\right\|^{2}\end{split}$
(35)
Taking the total expectation, using (27) and (34), and rearranging yield
$\begin{split}\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}&\leq\frac{\mathbb{E}\left[f\left(\bm{z}_{t}\right)-f\left(\bm{z}_{t+1}\right)\right]}{\sqrt{K}}\\\
&+LU\left(\underbrace{\left(\left(\frac{2}{1-2\sqrt{2}\beta^{2}}+\frac{1}{2}\right)\sqrt{K}+\frac{1}{2\sqrt{K}}\right)(1+\log
K)+\sqrt{K}}_{\overset{\Delta}{=}\Psi}\right)\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\alpha_{0}^{t}.\end{split}$
Summing over $t=0,\cdots,T-1$ gives
$\sum_{t=0}^{T-1}\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]}{\sqrt{2\pi}}\alpha_{0}^{t}\leq\frac{f\left(\bm{z}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\sum_{t=0}^{T-1}\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}.$
By (46), the left-hand side is no smaller than $\frac{\alpha
T^{3/4}}{\sqrt{2\pi}}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\|\nabla
f\left(\bm{z}_{t}\right)\right\|_{2}\right]$. Noting that, by definition,
$\bm{z}_{0}=\bm{x}_{0}$, we then obtain (33). ∎
###### Proof of Theorem 2.
Under Assumption 1 and using the specification
$\|\cdot\|=\|\cdot\|_{*}=\|\cdot\|_{2}$, we have
$\|\nabla f(\bm{x}_{t})\|_{2}\leq\|\nabla f(\bm{x}_{t})-\nabla
f(\bm{z}_{t})\|_{2}+\|\nabla f(\bm{z}_{t})\|_{2}\leq
L\|\bm{x}_{t}-\bm{z}_{t}\|_{2}+\|\nabla
f(\bm{z}_{t})\|_{2}\overset{\lx@cref{creftype~refnum}{eq:properties-virtual-
sequence}}{{=}}\frac{L\beta}{1-\beta}\|\bm{x}_{t}-\bm{x}_{t-1}\|_{2}+\|\nabla
f(\bm{z}_{t})\|_{2}$
which gives, via taking expectation,
$\mathbb{E}\left[\|\nabla
f(\bm{z}_{t})\|_{2}\right]\geq\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}\right]-\frac{L\beta}{1-\beta}\mathbb{E}\left[\|\bm{x}_{t}-\bm{x}_{t-1}\|_{2}\right].$
Substituting this into (33) and using $b\geq\sqrt{T}$ yield
$\begin{split}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\nabla
f(\bm{x}_{t})\|_{2}\right]&\leq\frac{\sqrt{2\pi}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\sum_{t=0}^{T-1}\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}\right)+\frac{L\beta}{1-\beta}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\|\bm{x}_{t}-\bm{x}_{t-1}\|_{2}\right]\\\
&\overset{(\ref{eq:x-change-bound-1}),(\ref{eq:bound-
sum-0.5-series}),(\ref{eq:bound-
sum-0.25-series})}{\leq}\frac{\sqrt{2\pi}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\alpha^{2}2\sqrt{T}+2\sigma\sqrt{\frac{U}{b}}\alpha\frac{4}{3}T^{3/4}\right)+L\beta\frac{160\alpha\sqrt{KU}}{3T^{1/4}}\\\
&\leq\frac{\sqrt{2\pi}}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}+\frac{\sqrt{U}}{T^{1/4}}\left(2\alpha
L\left(\sqrt{2\pi
U}\Psi+\frac{80\beta\sqrt{K}}{3}\right)+\frac{8\sqrt{2\pi}\sigma}{3}\right).\end{split}$
where, when using (29), we have specified $\|\cdot\|=\|\cdot\|_{2}$. Finally,
according to Lemma 7, we have
$\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|_{2}^{2}\right]=n$ when
$\bm{u}_{i,k}^{t}\sim\mathcal{N}(\bm{0},\bm{I})$. We can therefore choose
$U=n$ and then reach the target bound. ∎
###### Proof of Theorem 3.
Firstly, the assumption $\|\nabla f(\bm{x})\|_{0}\leq s$ implies
$\|\nabla f(\bm{x})\|_{\infty}\leq\|\nabla f(\bm{x})\|_{2}\leq\|\nabla f(\bm{x})\|_{1}\leq\sqrt{s}\|\nabla f(\bm{x})\|_{2},$
and hence we have
$\begin{split}\|\nabla f(\bm{x}_{t})\|_{1}&\leq\|\nabla f(\bm{x}_{t})-\nabla
f(\bm{z}_{t})\|_{1}+\|\nabla f(\bm{z}_{t})\|_{1}\\\ &\leq\|\nabla
f(\bm{x}_{t})-\nabla f(\bm{z}_{t})\|_{1}+\sqrt{s}\|\nabla
f(\bm{z}_{t})\|_{2}\\\
&\overset{(*)}{\leq}L\|\bm{x}_{t}-\bm{z}_{t}\|_{\infty}+\sqrt{s}\|\nabla
f(\bm{z}_{t})\|_{2}\\\ &{\overset{\lx@cref{creftype~refnum}{eq:properties-
virtual-
sequence}}{=}}\frac{L\beta}{1-\beta}\|\bm{x}_{t}-\bm{x}_{t-1}\|_{\infty}+\sqrt{s}\|\nabla
f(\bm{z}_{t})\|_{2}\end{split}$
where $(*)$ uses Assumption 1 with the specification
$\|\cdot\|=\|\cdot\|_{\infty}$ and $\|\cdot\|_{*}=\|\cdot\|_{1}$. Taking
expectation and rearranging give
$\mathbb{E}[\|\nabla
f(\bm{z}_{t})\|_{2}]\geq\frac{1}{\sqrt{s}}\left(\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{1}]-\frac{L\beta}{1-\beta}\mathbb{E}[\|\bm{x}_{t}-\bm{x}_{t-1}\|_{\infty}]\right).$
Substituting this into the left-hand side of 33 in Lemma 5 yields
$\begin{split}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{1}]&\leq\frac{L\beta}{1-\beta}\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\bm{x}_{t}-\bm{x}_{t-1}\|_{\infty}]+\frac{\sqrt{2\pi
s}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\sum_{t=0}^{T-1}\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}\right)\\\
&\overset{\lx@cref{creftype~refnum}{eq:x-change-
bound-1}}{\leq}\frac{160L\beta\alpha\sqrt{KU}}{3T^{1/4}}+\frac{\sqrt{2\pi
s}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\sum_{t=0}^{T-1}\left(\alpha_{0}^{t}\right)^{2}+2\sigma\sqrt{\frac{U}{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}\right)\\\
&\overset{(\ref{eq:bound-sum-0.5-series},\ref{eq:bound-
sum-0.25-series})}{\leq}\frac{160L\beta\alpha\sqrt{KU}}{3T^{1/4}}+\frac{\sqrt{2\pi
s}}{\alpha
T^{3/4}}\left(\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\sqrt{K}}+LU\Psi\alpha^{2}2\sqrt{T}+2\sigma\sqrt{\frac{U}{b}}\alpha\frac{4}{3}T^{3/4}\right)\\\
&\leq\frac{\sqrt{2\pi
s}}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}+\frac{\sqrt{U}}{T^{1/4}}\left(2\alpha
L\left(\sqrt{2\pi Us}\Psi+\frac{80\beta\sqrt{K}}{3}\right)+\frac{8\sqrt{2\pi
s}\sigma}{3}\right)\end{split}$
where the last step uses the assumption $b\geq\sqrt{T}$.
Since we have used Lemma 5, we need
$U\geq\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|_{\infty}^{2}\right]$. According to
Lemma 7, we know that $U=4\log(\sqrt{2}n)$ is a valid choice. We then obtain the
final bound as
$\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[\|\nabla
f(\bm{x}_{t})\|_{1}]\leq\frac{\sqrt{2\pi
s}}{T^{3/4}}\frac{f\left(\bm{x}_{0}\right)-f_{*}}{\alpha\sqrt{K}}+\frac{8\sqrt{\log(\sqrt{2}n)}}{T^{1/4}}\left(\alpha
L\left(\sqrt{2\pi
s\log(\sqrt{2}n)}\Psi+\frac{10\beta\sqrt{K}}{3}\right)+\frac{2\sqrt{2\pi
s}\sigma}{3}\right)$
∎
## Appendix D Proof of Propositions 1 and 2
In the above proofs for DES with Gaussian mutation, we have repeatedly used a lower bound on $\mathbb{E}[|\bm{u}^{T}\bm{y}|]$, where $\bm{y}\in\mathbb{R}^{n}$ and $\bm{u}$ is random. This bound is trivial when $\bm{u}\sim\mathcal{N}(\bm{0},\bm{I})$, as given in (23). To prove Theorems 4 and 5 we need a similar bound when $\bm{u}$ is sampled from the mixture Gaussian distribution $\mathcal{M}_{l}^{G}$ or the mixture Rademacher distribution $\mathcal{M}_{l}^{R}$. This can be achieved by analyzing the second-order and fourth-order moments of the corresponding probability distributions; this is the reason why Propositions 1 and 2 are required.
###### Proof of Proposition 1.
We prove this proposition using the moment-generating function.
Denote the moment-generating function of $\mathcal{M}_{l}^{G}$ by $M(\bm{t})$.
By definition, $M(\bm{t})$ can be written as
$\begin{split}M(\bm{t})&=\mathbb{E}\left[\exp(\bm{t}^{T}\bm{u})\right]=\mathbb{E}\left[\exp\sqrt{\frac{n}{l}}\left(\bm{t}^{T}\sum_{j=1}^{l}\bm{e}_{r_{j}}z_{j}\right)\right]=\mathbb{E}\left[\exp\sqrt{\frac{n}{l}}\left(\sum_{j=1}^{l}t_{r_{j}}z_{j}\right)\right]\overset{(*)}{=}\prod_{j=1}^{l}\mathbb{E}\left[\exp\left(\sqrt{\frac{n}{l}}t_{r_{j}}z_{j}\right)\right]\\\
&=\prod_{j=1}^{l}\mathbb{E}\left[\sum_{k=1}^{n}\mathbb{I}\\{r_{j}=k\\}\exp\left(\sqrt{\frac{n}{l}}t_{r_{j}}z_{j}\right)\right]=\prod_{j=1}^{l}\sum_{k=1}^{n}\mathbb{P}\\{r_{j}=k\\}\mathbb{E}_{k}\left[\exp\left(\sqrt{\frac{n}{l}}t_{k}z_{j}\right)\right]\end{split}$
where $t_{r_{j}}$ denotes the $r_{j}$-th element of $\bm{t}$ and
$\mathbb{E}_{k}$ denotes the expectation conditioned on the event $r_{j}=k$.
Equation ($*$) is due to the independence of $\\{z_{j}\\}$ and $\\{r_{j}\\}$.
Since the coordinate index $r_{j}$ is sampled uniformly with replacement, we
have $\mathbb{P}\\{r_{j}=k\\}=\frac{1}{n}$. Note that
$\mathbb{E}_{k}\left[\exp\left(\sqrt{\frac{n}{l}}t_{k}z_{j}\right)\right]$ is in fact the (conditional) moment-generating function of the univariate Gaussian variable $\sqrt{\frac{n}{l}}z_{j}$ evaluated at $t_{k}$, which is given by $\exp\left(\frac{n}{2l}t_{k}^{2}\right)$. So we reach
$M(\bm{t})=\prod_{j=1}^{l}\sum_{k=1}^{n}\frac{1}{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)=\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)\right)^{l}.$
By construction, the covariance matrix must be diagonal, so we focus on its
diagonal elements. Firstly, take the partial derivative with respect to
$t_{j}$ and this yields
$\frac{\partial M(\bm{t})}{\partial
t_{j}}=\frac{l}{n}\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)\right)^{l-1}\frac{\partial}{\partial
t_{j}}\exp{\left(\frac{n}{2l}t_{j}^{2}\right)}=\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)\right)^{l-1}\exp\left(\frac{n}{{2l}}t_{j}^{2}\right)t_{j}.$
The second-order partial derivative is then
$\frac{\partial^{2}M(\bm{t})}{\partial
t_{j}^{2}}=\underbrace{\left\\{\frac{\partial}{\partial
t_{j}}\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)\right)^{l-1}\right\\}\exp\left(\frac{n}{{2l}}t_{j}^{2}\right)t_{j}}_{T_{1}}+\underbrace{\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}t_{k}^{2}\right)\right)^{l-1}}_{T_{2}}\underbrace{\frac{\partial}{\partial
t_{j}}\left(\exp\left(\frac{n}{2l}t_{j}^{2}\right)t_{j}\right)}_{T_{3}}.$
When setting $\bm{t}=\bm{0}$, $T_{1}$ vanishes and $T_{2}$ becomes 1. We also
have
$T_{3}=\exp\left(\frac{n}{{2l}}t_{j}^{2}\right)\left(\frac{\partial}{\partial
t_{j}}\left(\frac{n}{2l}t_{j}^{2}\right)\right)t_{j}+\exp\left(\frac{n}{2l}t_{j}^{2}\right)\overset{\bm{t}=\bm{0}}{\Rightarrow}1.$
So the $j$-th diagonal element is 1. We therefore conclude that $\bm{u}$ has
an identity covariance matrix.
In a similar manner, the moment-generating function of $\bm{y}^{T}\bm{u}$ is
$\tilde{M}(t)=\left(\frac{1}{n}\sum_{k=1}^{n}\exp\left(\frac{n}{2l}y_{k}^{2}t^{2}\right)\right)^{l}.$
Now expand the exponential term as Taylor series
$\tilde{M}(t)=\left(\frac{1}{n}\sum_{k=1}^{n}\left(1+\frac{n}{2l}y_{k}^{2}t^{2}+\frac{1}{2}\left(\frac{n}{2l}y_{k}^{2}t^{2}\right)^{2}+\mathcal{O}(t^{6})\right)\right)^{l}=\left(1+\frac{1}{2l}\|\bm{y}\|_{2}^{2}t^{2}+\frac{n}{8l^{2}}\|\bm{y}\|_{4}^{4}t^{4}+\mathcal{O}(t^{6})\right)^{l}.$
Using the binomial theorem, we get
$\begin{split}\tilde{M}(t)&=\sum_{j=0}^{l}\binom{l}{j}\left(\frac{1}{2l}\|\bm{y}\|_{2}^{2}t^{2}+\frac{n}{8l^{2}}\|\bm{y}\|_{4}^{4}t^{4}+\mathcal{O}(t^{6})\right)^{j}\\\ &=\binom{l}{1}\left(\frac{n}{8l^{2}}\|\bm{y}\|_{4}^{4}t^{4}\right)+\binom{l}{2}\left(\frac{1}{2l}\|\bm{y}\|_{2}^{2}t^{2}\right)^{2}+1+At^{2}+\mathcal{O}(t^{6})\\\ &=\frac{1}{8}\left(\frac{n}{l}\|\bm{y}\|_{4}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right)t^{4}+1+At^{2}+\mathcal{O}(t^{6})\end{split}$
where $A$ is some constant not depending on $t$. We can then reach the desired
result by taking the fourth-order derivative and setting $t=0$, i.e.,
$\mathbb{E}\left[|\bm{y}^{T}\bm{u}|^{4}\right]=\frac{\partial^{4}}{\partial
t^{4}}\tilde{M}(t)\Bigg{|}_{t=0}=3\left(\frac{n}{l}\|\bm{y}\|_{4}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right)+\mathcal{O}(t^{2})\Bigg{|}_{t=0}=3\left(\frac{n}{l}\|\bm{y}\|_{4}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right).$
∎
###### Proof of Proposition 2.
The proof is very similar to that of Proposition 1. First, we obtain the
moment-generating function for $\mathcal{M}_{l}^{R}$ as
$\begin{split}M(\bm{t})&=\mathbb{E}\left[\exp(\bm{t}^{T}\bm{u})\right]=\mathbb{E}\left[\exp\sqrt{\frac{n}{l}}\left(\bm{t}^{T}\sum_{j=1}^{l}\bm{e}_{r_{j}}z_{j}\right)\right]=\mathbb{E}\left[\exp\sqrt{\frac{n}{l}}\left(\sum_{j=1}^{l}t_{r_{j}}z_{j}\right)\right]\overset{(*)}{=}\prod_{j=1}^{l}\mathbb{E}\left[\exp\left(\sqrt{\frac{n}{l}}t_{r_{j}}z_{j}\right)\right]\\\
&=\prod_{j=1}^{l}\mathbb{E}\left[\sum_{k=1}^{n}\mathbb{I}\\{r_{j}=k\\}\exp\left(\sqrt{\frac{n}{l}}t_{r_{j}}z_{j}\right)\right]=\prod_{j=1}^{l}\sum_{k=1}^{n}\mathbb{P}\\{r_{j}=k\\}\mathbb{E}_{k}\left[\exp\left(\sqrt{\frac{n}{l}}t_{k}z_{j}\right)\right]\\\
&=\prod_{j=1}^{l}\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}t_{k}\right)+\exp\left(-\sqrt{\frac{n}{l}}t_{k}\right)\right)=\left(\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}t_{k}\right)+\exp\left(-\sqrt{\frac{n}{l}}t_{k}\right)\right)\right)^{l}.\end{split}$
Equation ($*$) in the above is due to the independence of $\\{z_{j}\\}$ and
$\\{r_{j}\\}$. The partial derivative with respect to $t_{j}$ is then
$\frac{\partial M(\bm{t})}{\partial
t_{j}}=\frac{1}{2}\sqrt{\frac{l}{n}}\left(\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}t_{k}\right)+\exp\left(-\sqrt{\frac{n}{l}}t_{k}\right)\right)\right)^{l-1}\left(\exp\left(\sqrt{\frac{n}{l}}t_{j}\right)-\exp\left(-\sqrt{\frac{n}{l}}t_{j}\right)\right)$
and
$\begin{split}&\frac{\partial^{2}M(\bm{t})}{\partial
t_{j}^{2}}=\frac{1}{2}\sqrt{\frac{l}{n}}\frac{\partial}{\partial
t_{j}}\left(\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}t_{k}\right)+\exp\left(-\sqrt{\frac{n}{l}}t_{k}\right)\right)\right)^{l-1}\underbrace{\left(\exp\left(\sqrt{\frac{n}{l}}t_{j}\right)-\exp\left(-\sqrt{\frac{n}{l}}t_{j}\right)\right)}_{=0\text{
when }\bm{t}=\bm{0}}\\\
&+\frac{1}{2}\sqrt{\frac{l}{n}}\underbrace{\left(\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}t_{k}\right)+\exp\left(-\sqrt{\frac{n}{l}}t_{k}\right)\right)\right)^{l-1}}_{=1\text{
when }\bm{t}=\bm{0}}\frac{\partial}{\partial
t_{j}}\left(\exp\left(\sqrt{\frac{n}{l}}t_{j}\right)-\exp\left(-\sqrt{\frac{n}{l}}t_{j}\right)\right).\end{split}$
We therefore obtain
$\begin{split}\frac{\partial^{2}M(\bm{t})}{\partial
t_{j}^{2}}\Bigg{|}_{\bm{t}=\bm{0}}=\frac{1}{2}\sqrt{\frac{l}{n}}\frac{\partial}{\partial
t_{j}}\left(\exp\left(\sqrt{\frac{n}{l}}t_{j}\right)-\exp\left(-\sqrt{\frac{n}{l}}t_{j}\right)\right)\Bigg{|}_{\bm{t}=\bm{0}}=1\end{split}$
As the covariance matrix is diagonal by construction, we conclude from the above that it is the identity matrix.
The moment-generating function of the random variable $\bm{y}^{T}\bm{u}$,
denoted by $\tilde{M}(t)$, can be obtained by substituting $\bm{t}=\bm{y}t$
into $M(\bm{t})$:
$\tilde{M}(t)=\left(\frac{1}{2n}\sum_{k=1}^{n}\left(\exp\left(\sqrt{\frac{n}{l}}y_{k}t\right)+\exp\left(-\sqrt{\frac{n}{l}}y_{k}t\right)\right)\right)^{l}=\left(\frac{1}{n}\sum_{k=1}^{n}\cosh\left(\sqrt{\frac{n}{l}}y_{k}t\right)\right)^{l}.$
Now expanding the $\cosh$ function using Taylor series, we obtain
$\begin{split}\tilde{M}(t)&=\left(\frac{1}{n}\sum_{k=1}^{n}\left(1+\frac{1}{2}\left(\sqrt{\frac{n}{l}}y_{k}t\right)^{2}+\frac{1}{4!}\left(\sqrt{\frac{n}{l}}y_{k}t\right)^{4}+\mathcal{O}(t^{6})\right)\right)^{l}\\\
&=\left(1+\frac{1}{2l}\|\bm{y}\|_{2}^{2}t^{2}+\frac{n}{24l^{2}}\|\bm{y}\|_{4}^{4}t^{4}+\mathcal{O}(t^{6})\right)^{l}\\\
&=1+At^{2}+\left(l\cdot\frac{n}{24l^{2}}\|\bm{y}\|_{4}^{4}+\frac{l(l-1)}{2}\left(\frac{1}{2l}\|\bm{y}\|_{2}^{2}\right)^{2}\right)t^{4}+\mathcal{O}(t^{6}),\end{split}$
where $A$ denotes the coefficient of $t^{2}$, whose exact value is not needed here. The fourth-order moment of $\bm{y}^{T}\bm{u}$ can then be obtained as
$\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]=\frac{\partial^{4}}{\partial
t^{4}}\tilde{M}(t)\Bigg{|}_{t=0}=\frac{n}{l}\|\bm{y}\|_{4}^{4}+3\frac{l-1}{l}\|\bm{y}\|_{2}^{4}.$
∎
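As a quick numerical sanity check of the fourth-moment formula above (not part of the proof), the following minimal numpy sketch estimates $\mathbb{E}[(\bm{y}^{T}\bm{u})^{4}]$ under mixture Rademacher sampling by Monte Carlo and compares it with the closed form; all variable names are illustrative.

```python
# Illustrative Monte Carlo check of the mixture Rademacher fourth moment.
import numpy as np

rng = np.random.default_rng(0)
n, l, trials = 20, 4, 200_000
y = rng.normal(size=n)

vals = np.empty(trials)
for t in range(trials):
    r = rng.integers(0, n, size=l)           # coordinates r_j, uniform on {1,...,n}
    z = rng.choice([-1.0, 1.0], size=l)      # Rademacher signs z_j
    u = np.zeros(n)
    np.add.at(u, r, np.sqrt(n / l) * z)      # u = sqrt(n/l) * sum_j e_{r_j} z_j
    vals[t] = y @ u

empirical = np.mean(vals**4)
predicted = (n / l) * np.sum(y**4) + 3 * (l - 1) / l * np.sum(y**2) ** 2
print(empirical, predicted)                  # should agree up to Monte Carlo error
```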
## Appendix E Proof of Theorems 4 and 5
We will require the following lemma, which can be derived from Propositions 1
and 2.
###### Lemma 6.
Let $\bm{y}\in\mathbb{R}^{n}$ be a vector satisfying
$\|\bm{y}\|_{2}^{4}/\|\bm{y}\|_{4}^{4}\geq\tilde{s}$ for some constant
$\tilde{s}\in[1,n]$. We have
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\|\bm{y}\|_{2}}{\sqrt{3n/(\tilde{s}l)+3}}\;\;\text{
for }\bm{u}\sim\mathcal{M}_{l}^{G}$
and
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\|\bm{y}\|_{2}}{\sqrt{n/(\tilde{s}l)+3}}\;\;\text{
for }\bm{u}\sim\mathcal{M}_{l}^{R}.$
###### Proof.
First, by Hölder’s inequality, we have
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\left(\mathbb{E}[|\bm{y}^{T}\bm{u}|^{2}]\right)^{3/2}}{\left(\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]\right)^{1/2}}=\frac{\|\bm{y}\|_{2}^{3}}{\left(\mathbb{E}[|\bm{y}^{T}\bm{u}|^{4}]\right)^{1/2}}.$
(36)
where the equality uses the fact $\mathbb{V}[\bm{u}]=\bm{I}$, according to
Propositions 1 and 2.
Now consider the case of mixture Gaussian sampling. In this case, we have,
from (11), that
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\|\bm{y}\|_{2}^{3}}{\sqrt{3\left(\frac{n}{l}\|\bm{y}\|_{4}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right)}}.$
(37)
Using the assumption $\|\bm{y}\|_{2}^{4}/\|\bm{y}\|_{4}^{4}\geq\tilde{s}$ then
yields
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\|\bm{y}\|_{2}^{3}}{\sqrt{3\left(\frac{n}{{\tilde{s}l}}\|\bm{y}\|_{2}^{4}+\frac{l-1}{l}\|\bm{y}\|_{2}^{4}\right)}}=\frac{\|\bm{y}\|_{2}}{\sqrt{3\left(\frac{n}{{\tilde{s}l}}+\frac{l-1}{l}\right)}}\geq\frac{\|\bm{y}\|_{2}}{\sqrt{3\left(n/{(\tilde{s}l)}+1\right)}}.$
Consider then the case of mixture Rademacher sampling. From (36), (12), and the
assumption $\|\bm{y}\|_{2}^{4}/\|\bm{y}\|_{4}^{4}\geq\tilde{s}$, we have
$\mathbb{E}[|\bm{y}^{T}\bm{u}|]\geq\frac{\|\bm{y}\|_{2}^{3}}{\sqrt{\frac{n}{l}\|\bm{y}\|_{4}^{4}+3\frac{l-1}{l}\|\bm{y}\|_{2}^{4}}}\geq\frac{\|\bm{y}\|_{2}^{3}}{\sqrt{\frac{n}{{\tilde{s}l}}\|\bm{y}\|_{2}^{4}+3\frac{l-1}{l}\|\bm{y}\|_{2}^{4}}}=\frac{\|\bm{y}\|_{2}}{\sqrt{{n/(\tilde{s}l)}+3\frac{l-1}{l}}}\geq\frac{\|\bm{y}\|_{2}}{\sqrt{{n/(\tilde{s}l)}+3}}.$
(38)
∎
###### Proof of Theorem 4.
Recall that the DES with mixture Gaussian sampling is a special case of
Algorithm 4, so we can reuse Lemmas 1, 2, 3 and 4 which are derived for
Algorithm 4.
The first step in this proof is to obtain a similar bound as in Lemma 5. We
begin with rewriting $\mathbb{E}\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]$. For $\beta=0$, we have
$\bm{z}_{t}=\bm{x}_{t}$ and
$\begin{split}\mathbb{E}&\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]\\\ &=\mathbb{E}\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\left(\frac{1}{M}\sum_{i=1}^{M}\bm{v}_{i,K}^{t}-\bm{x}_{t}\right)\right]\\\
&=\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\text{sign}_{+}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&\overset{\eqref{eq:definition-sign-signplus}}{=}\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left(1+\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\right)\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&=\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right]\\\
&\overset{\eqref{eq:sign_identity}}{=}\frac{1}{2M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\left|\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\left(-1+2\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}\right)\right]\\\
&\overset{\eqref{eq:bound-sum-0.5-series-2}}{\leq}-\frac{\alpha_{0}^{t}}{2M\sqrt{K}}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\mathbb{E}\left[\left|\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\right]\\\
&\;\;\;\;\;\;\;\;\;+\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\underbrace{\left|\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|\mathbb{I}\left\\{\text{sign}\left(f_{i}\left(\bm{v}_{i,k}^{t}\right)-f_{i}\left(\bm{v}_{i,k}^{t}+\alpha_{k}^{t}\bm{u}_{i,k}^{t}\right)\right)=\text{sign}\left(\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right)\right\\}}_{\overset{\Delta}{=}\mathfrak{A}}\right]\\\
&=-\frac{\alpha_{0}^{t}}{2M\sqrt{K}}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\mathbb{E}\left[{\left|\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{u}_{i,k}^{t}\right|}\right]+\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\mathfrak{A}\right]\end{split}$
(39)
Using the assumption $\|\nabla f(\bm{x})\|_{2}^{4}/\|\nabla
f(\bm{x})\|_{4}^{4}\geq\tilde{s}$ and Lemma 6, we have
$\mathbb{E}\left[\nabla f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]\\\
\leq-\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]+\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\mathbb{E}\left[\mathfrak{A}\right]$
where $V$ is a constant that can be set to
$V=\sqrt{3+3n/(\tilde{s}l)}.$ (40)
Note that Lemma 4 gives an upper bound for the term
$\mathbb{E}[\mathfrak{A}]$. We therefore have
$\begin{split}\mathbb{E}\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]&+\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]\\\
&\leq\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\left(\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}\mathbb{E}\left[\|\bm{u}_{i,k}^{t}\|^{2}\right]+\frac{L^{2}}{2\omega_{1}}\mathbb{E}\left[\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\|^{2}\right]+\frac{\sigma^{2}}{2\omega_{2}b}\right)\\\
&\leq\frac{1}{M}\sum_{i=1}^{M}\sum_{k=0}^{K-1}\alpha_{k}^{t}\left(\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}U+\frac{L^{2}}{2\omega_{1}}\mathbb{E}\left[\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\|^{2}\right]+\frac{\sigma^{2}}{2\omega_{2}b}\right).\end{split}$
Now use Lemma 3 to bound
$\mathbb{E}\left[\|\bm{v}_{i,k}^{t}-\bm{z}_{t}\|^{2}\right]$ and use the
setting $\beta=0$:
$\begin{split}\mathbb{E}\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]&+\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]\\\
&\overset{\eqref{eq:client-drift-bound},\beta=0}{\leq}\sum_{k=0}^{K-1}\alpha_{k}^{t}\left(\frac{\alpha_{k}^{t}L+\omega_{1}+\omega_{2}}{2}U+\frac{L^{2}}{\omega_{1}}UK(1+\log
K)(\alpha_{0}^{t})^{2}+\frac{\sigma^{2}}{2\omega_{2}b}\right)\\\
&=\frac{LU}{2}\sum_{k=0}^{K-1}(\alpha_{k}^{t})^{2}+\left(\frac{L^{2}}{\omega_{1}}UK(1+\log
K)(\alpha_{0}^{t})^{2}+\frac{\omega_{1}+\omega_{2}}{2}U+\frac{\sigma^{2}}{2\omega_{2}b}\right)\sum_{k=0}^{K-1}\alpha_{k}^{t}.\end{split}$
Letting
$\omega_{1}=\frac{L\alpha_{0}^{t}}{\sqrt{K}},\omega_{2}=\frac{\sigma}{\sqrt{Ub}}$
yields
$\begin{split}\mathbb{E}\left[\nabla
f\left(\bm{x}_{t}\right)^{T}\bm{d}_{t+1}\right]&+\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]\\\
&\leq\frac{LU}{2}\sum_{k=0}^{K-1}(\alpha_{k}^{t})^{2}+\left(LU\left(\sqrt{K}(1+\log
K)+\frac{1}{2\sqrt{K}}\right)\alpha_{0}^{t}+\frac{\sqrt{U}\sigma}{\sqrt{b}}\right)\sum_{k=0}^{K-1}\alpha_{k}^{t}\\\
&\overset{(\ref{eq:bound-sum-1-series},\ref{eq:bound-
sum-0.5-series})}{\leq}\frac{LU}{2}(1+\log
K)(\alpha_{0}^{t})^{2}+\left(LU\left(\sqrt{K}(1+\log
K)+\frac{1}{2\sqrt{K}}\right)\alpha_{0}^{t}+\frac{\sqrt{U}\sigma}{\sqrt{b}}\right)2\sqrt{K}\alpha_{0}^{t}\\\
&=\sqrt{K}LU\left(\left(\frac{1}{2\sqrt{K}}+2\sqrt{K}\right)(1+\log
K)+\frac{1}{\sqrt{K}}\right)(\alpha_{0}^{t})^{2}+\frac{\sqrt{U}\sigma}{\sqrt{b}}2\sqrt{K}\alpha_{0}^{t}.\end{split}$
(41)
On the other hand, by the smoothness assumption, we have
$\begin{split}\mathbb{E}&\left[f\left(\bm{x}_{t+1}\right)-f\left(\bm{x}_{t}\right)\right]\leq\mathbb{E}\left[\nabla
f(\bm{x}_{t})^{T}\bm{d}_{t+1}\right]+\frac{L}{2}\mathbb{E}\left[\|\bm{d}_{t+1}\|^{2}\right]\\\
&\overset{\eqref{eq:inner-product-bound-mixture-tmp}}{\leq}-\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]+\sqrt{K}LU\left(\left(\frac{1}{2\sqrt{K}}+2\sqrt{K}\right)(1+\log
K)+\frac{1}{\sqrt{K}}\right)(\alpha_{0}^{t})^{2}+\frac{\sqrt{U}\sigma}{\sqrt{b}}2\sqrt{K}\alpha_{0}^{t}+\frac{L}{2}\mathbb{E}\left[\|\bm{d}_{t+1}\|^{2}\right]\\\
&\overset{\eqref{eq:descent-step-bound-square}}{\leq}-\frac{\alpha_{0}^{t}\sqrt{K}}{2{V}}\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]+\sqrt{K}LU\underbrace{\left(\left(\frac{1}{2\sqrt{K}}+\frac{5}{2}\sqrt{K}\right)(1+\log
K)+\frac{1}{\sqrt{K}}\right)}_{\overset{\Delta}{=}\hat{\Psi}}(\alpha_{0}^{t})^{2}+\frac{\sqrt{U}\sigma}{\sqrt{b}}2\sqrt{K}\alpha_{0}^{t},\end{split}$
where in the last step we have reused the bound in Lemma 1. Summing the above
up for $t=0,\cdots,T-1$ gives
$\begin{split}\sum_{t=0}^{T-1}\alpha_{0}^{t}\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]}{{V}}&\leq
2\frac{f(\bm{x}_{0})-f_{*}}{\sqrt{K}}+4\frac{\sqrt{U}\sigma}{\sqrt{b}}\sum_{t=0}^{T-1}\alpha_{0}^{t}+2LU\hat{\Psi}\sum_{t=0}^{T-1}(\alpha_{0}^{t})^{2}\\\
&\overset{(\ref{eq:bound-sum-0.5-series},\ref{eq:bound-
sum-0.25-series})}{\leq}2\frac{f(\bm{x}_{0})-f_{*}}{\sqrt{K}}+\frac{16}{3}\frac{\sqrt{U}\sigma}{\sqrt{b}}\alpha
T^{\frac{3}{4}}+4LU\hat{\Psi}\alpha^{2}\sqrt{T}\\\
&\overset{b\geq\sqrt{T}}{\leq}2\frac{f(\bm{x}_{0})-f_{*}}{\sqrt{K}}+\frac{16}{3}\sqrt{U}\sigma\alpha\sqrt{T}+4LU\hat{\Psi}\alpha^{2}\sqrt{T}.\end{split}$
The left-hand side is bounded from below as
$\sum_{t=0}^{T-1}\alpha_{0}^{t}\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]}{{V}}{\overset{(\ref{eq:bound-
sum-0.25-series-2})}{\geq}}\alpha
T^{\frac{3}{4}}\frac{1}{T}\sum_{t=0}^{T-1}\frac{\mathbb{E}\left[\left\|\nabla
f\left(\bm{x}_{t}\right)\right\|_{2}\right]}{{V}}.$
We therefore have
$\frac{1}{T}\sum_{t=0}^{T-1}\frac{\mathbb{E}\left[\left\|\nabla f\left(\bm{x}_{t}\right)\right\|_{2}\right]}{{V}}\leq\frac{2(f(\bm{x}_{0})-f_{*})}{\alpha\sqrt{K}T^{3/4}}+\frac{16\sqrt{U}\sigma}{3T^{1/4}}+\frac{4LU\hat{\Psi}\alpha}{T^{1/4}}.$
|
# Efficacy of Bayesian Neural Networks in Active Learning
Vineeth Rakesh
Interdigital AI Lab
USA
<EMAIL_ADDRESS>Swayambhoo Jain
Interdigital AI Lab
USA
<EMAIL_ADDRESS>
###### Abstract
Obtaining labeled data for machine learning tasks can be prohibitively
expensive. Active learning mitigates this issue by exploring the unlabeled
data space and prioritizing the selection of data that can best improve the
model performance. A common approach to active learning is to pick a small
sample of data for which the model is most uncertain. In this paper, we
explore the efficacy of Bayesian neural networks for active learning, which
naturally models uncertainty by learning distribution over the weights of
neural networks. By performing a comprehensive set of experiments, we show
that Bayesian neural networks are more efficient than ensemble-based
techniques in capturing uncertainty. Our findings also reveal some key
drawbacks of ensemble techniques, which were recently shown to be more
effective than Monte Carlo dropouts.
## 1 Introduction
Although machine learning techniques have achieved a major breakthrough in
recent years, their performance comes at a cost of acquiring large volumes of
training data. This is especially true for supervised deep learning models
that demand a substantial amount of labeled data to achieve a reasonable
performance. For applications that require expert knowledge such as medical
and biological images, labels are extremely hard and expensive to obtain.
Active learning (AL) aims to mitigate this problem by smartly selecting data
points to label (from an expert) from a large pool of unlabeled data to
improve model performance. This sampling is typically based on some
acquisition function (AF), which provides a score for each unlabeled data point that
signifies its level of importance. While there are many approaches to
implementing an AF [1, 2], uncertainty-based approaches have been shown to be the
most effective [3, 4, 5, 6].
A Bayesian neural network (BNN) naturally models uncertainty by learning a
probability distribution over the neural network weights. Therefore, for a
given input, as we take multiple realizations of the network, the variance
captured by the weights is reflected as variation in the output, which in
turn models uncertainty. BNNs learn by placing a prior distribution over the
weights and performing variational inference to approximate the posterior
distribution. In [7], the authors proved that applying dropout to neural
networks is equivalent to a BNN. This theory was further leveraged by
proposing Monte Carlo Dropout (MCD) for uncertainty estimation in AL [5]. In a
recent work, [6] showed that ensembles of neural networks (EN) outperform MCD
when it comes to uncertainty estimation, thus proving to be the method of choice for
active learning. Consequently, it is natural to assume that EN performs better
than BNN since MCD is equivalent to BNN. However, dropout neural networks form
a special class of BNN where the posterior distribution is a special case of
the _spike-and-slab_ distribution. Contrary to this, BNNs allow for a broader
class of prior and posterior distributions on the weights.
In this paper, we re-establish the efficacy of BNNs in active learning over
ensembles and MCD by using the more general scaled normal prior based BNN
proposed in [8]. The scaled normal prior is a continuous relaxation of the
_spike-and-slab_ distribution and subsumes Dropout as a special case. Through
extensive experiments on multiple datasets, namely MNIST, Fashion MNIST,
CIFAR10 and CIFAR100, and a regression dataset on housing price prediction, we
show that the scaled normal prior based BNN provides more robust and efficient
active learning than EN and MCD. We perform several experiments to demonstrate
the pros and cons of BNN over EN and MCD. For each round of active learning,
the models are trained using two different settings: (1) re-use the trained
state of the model from the previous round and retrain on the newly appended
datapoints (termed continual training) and (2) reset the model parameters
and retrain from scratch. Our results show that BNN performs significantly
better than EN in terms of classification accuracy when it comes to continual
training. In fact, the performance of EN is worse than MCD, which can be
attributed to overfitting and catastrophic forgetting. That being said, when
retrained from scratch, BNN and EN perform on a similar level, which is still
an advantage for BNN since estimating uncertainty using ensembles is a costly
process. We found that EN requires about five ensembles in order to achieve
good active learning performance. This implies training five different
i.i.d. networks and storing the trained state of every single network instance.
BNN, on the other hand, achieves similar yet more robust performance with
the trade-off of just doubling the parameter size of a conventional neural network.
Besides illustrating the overall effectiveness of BNN for active learning, we
answer the following questions: (1) do acquisition functions behave the same
for Bayesian, ensemble and MC dropouts? (2) how does model capacity affect
the outcome, do BNNs with lower model capacity perform worse than EN (or MCD)?
(3) are BNNs better than EN when predicting challenging class labels?
Inspired by the performance of BNNs, we also propose a computationally
efficient uncertainty estimation method for fully connected dense layers with
ReLU non-linearity. Since AL involves repeated uncertainty estimation over a
large unlabeled dataset, efficient uncertainty estimation is of huge practical
importance. In the proposed method, instead of taking multiple instantiations
of neural networks to estimate the uncertainty, we perform just one forward
pass. In this forward pass, at each neuron, we approximate the probability
distribution parametrically. We show that this algorithm is capable of
achieving performance that is on par with the traditional uncertainty
estimation in BNNs.
To the best of our knowledge, we are the first to perform a comprehensive
empirical analysis demonstrating the efficacy of BNNs for active learning.
While most existing studies limit themselves to experiments on small
architectures and datasets, ours does not have such constraints.
## 2 Related Work
Active learning has been explored extensively in classical machine learning
literature [9]. Much of the focus of classical literature has been on the high
dimensional data arising in the context of linear models such as support
vector machine [10, 11, 12]. That being said, recently, there has been a
significant interest in AL for deep neural networks (DNNs). While there are
many ways to perform AL in DNNs, uncertainty-based sampling techniques are the
preferred choice due to their ease of implementation and computational
efficiency. Uncertainty in the output of neural networks can be estimated
using (a) Bayesian neural networks or (b) ensemble of neural networks.
A popular technique for AL using BNNs is called Monte Carlo dropout which was
first proposed in [5]. The basic idea is to pass the new unlabeled data
through the DNNs multiple times while retaining the dropout layer. This
results in different realizations at the output of the DNNs. These
realizations can be used to estimate the uncertainty in output using various
measures such as entropy or the variance ratio. The estimated uncertainty can
then be utilized for acquiring new unlabeled data points thus performing
active learning.
On the other hand, ensemble-based uncertainty estimation involves having an
ensemble of neural networks which typically share the same architecture but are trained
with different random initializations [4]. Uncertainty is estimated by passing
unlabeled examples through the individual ensemble members, and their outputs are then
used to estimate the uncertainty. It was shown in [6] that ensemble methods
outperform Monte Carlo dropout-based estimation. This performance is
primarily attributed to the higher capacity and diversity of the ensemble
models as compared to the different realizations of neural networks in dropout-based
networks. While there are some studies that leverage BNNs for active
learning [4, 13, 14, 15], they have a few or all of the following
shortcomings: (1) experiments are restricted to simple architectures with a
few dense or convolutional neural network (CNN) layers; (2) evaluation is
restricted to basic datasets such as MNIST; (3) there is no comparison of BNNs with
ensemble models; (4) they claim to use BNNs, but actually use Monte Carlo dropouts
as an approximation to BNNs.
## 3 Active Learning Via Bayesian Neural Networks
Bayesian neural network: For a given dataset
$\mathcal{D}=\\{(\mathbf{x}_{i},\mathbf{y}_{i})\\}_{i=1}^{D}$, a Bayesian neural
network involves calculating the distribution of the weights given the
training data, $p(\mathbf{w}|\mathcal{D})$. The predictive distribution for a
test data point $\mathbf{x}$ can be obtained by marginalizing over $\mathbf{w}$ as
follows:
$p(\mathbf{y}|\mathbf{x},\mathcal{D})=\int{p(\mathbf{y}|\mathbf{x},\mathbf{w})p(\mathbf{w}|\mathcal{D})d\mathbf{w}}$.
This is equivalent to averaging predictions from an ensemble of neural
networks weighted by the posterior probabilities of their parameters
$\mathbf{w}$. However, exact computation of the posterior is intractable,
therefore, we resort to variational inference. That is, we wish to approximate
$p(\mathbf{w}|\mathcal{D})=p(\mathcal{D}|\mathbf{w})p(\mathbf{w})/p(\mathcal{D})$
by positing an approximate posterior $q_{\phi}(\mathbf{w})$ with variational
parameters $\phi$. The problem then reduces to optimizing the evidence lower
bound (ELBO) defined as follows:
$\displaystyle\mathcal{L}(\phi)=\underbrace{\mathbb{E}_{q_{\phi}(\mathbf{w})}[\log
p(\mathcal{D}|\mathbf{w})]}_{(a)}-\underbrace{\textrm{KL}[q_{\phi}(\mathbf{w})||p(\mathbf{w})]}_{(b)}$
(1)
where term (a) is the data-dependent likelihood term, and (b) is the
regularizer that measures the KL divergence between the posterior and prior.
For the prior we use the scaled normal prior proposed in [8], in which the scales
$z$ follow a log-uniform prior: $p(z)\propto|z|^{-1}$. For a given
weight matrix $\mathbf{W}\in\mathbb{R}^{m\times n}$ of a fully connected
layer of the neural network with input dimension $n$ and output dimension $m$, the
scales are shared across the input dimension as
$\displaystyle
p(\mathbf{W},\mathbf{z})\propto\prod_{j=1}^{n}\frac{1}{|z_{j}|}\prod_{i,j}^{m,n}\mathcal{N}(w_{ij}|0,z_{j}^{2}).$
(2)
The main rationale behind the scaled normal prior is that it is a continuous
relaxation of the _spike-and-slab_ distribution. Dropout, which is the basis of
MCD-based active learning, is a special case of the _spike-and-slab_ distribution
[8], and we hope to get better performance with the scaled normal prior. We
consider the following joint approximate posterior
$\displaystyle
q_{\phi}(\mathbf{W},\mathbf{z})=\prod_{j}^{n}\mathcal{N}(z_{j}|\mu_{z_{j}},\sigma^{2}_{z_{j}})\prod_{i,j}^{m,n}\mathcal{N}(w_{ij}|z_{j}\mu_{ij},z_{j}^{2}\sigma_{ij}^{2}),$
(3)
and the corresponding ELBO is given by:
$\displaystyle\mathcal{L}(\phi)$
$\displaystyle=\mathbb{E}_{q_{\phi(\mathbf{z})}q_{\phi}(\mathbf{W}|\mathbf{z})}[\text{log}\,p(\mathcal{D}|\mathbf{W})]$
(4) $\displaystyle-$
$\displaystyle\mathbb{E}_{q_{\phi}(\mathbf{z})}[\textrm{KL}(q_{\phi}(\mathbf{W}|\mathbf{z})||p(\mathbf{W}|\mathbf{z}))]-\textrm{KL}(q_{\phi}(\mathbf{z})||p(\mathbf{z})).$
The KL divergences can be replaced with the closed-form expressions
$\displaystyle\textrm{KL}(q_{\phi}(\mathbf{W}|\mathbf{z})||p(\mathbf{W}|\mathbf{z}))=\frac{1}{2}\sum_{i,j}^{m,n}\log\frac{e^{\sigma_{ij}^{2}+\mu_{ij}^{2}-1}}{\sigma_{ij}^{2}},$
$\displaystyle\textrm{KL}(q_{\phi}(z)||p(z))\approx\sum_{j}^{n}k_{1}\left(1-\gamma\left(k_{2}-k_{3}\alpha_{j}\right)-\frac{m(\alpha_{j})}{2k_{1}}\right),$
where $\alpha_{j}=-\log(\sigma_{z_{j}}^{2}/\mu_{z_{j}}^{2})$, $\gamma(\cdot)$
and $m(\cdot)$ are the sigmoid and soft-plus functions respectively and the
constants $k_{1}=0.63576,k_{2}=1.87320,k_{3}=1.48695$ [16].
From a practical implementation point of view, this scaled normal prior BNN is
easier to implement than other priors. By virtue of the closed-form
KL divergences, the ELBO in (4) can be optimized within the framework of
standard stochastic gradient ascent. In addition to this, at test time it
can be implemented as a single feedforward pass where we replace $\mathbf{W}$ at
each layer with its mean
$\tilde{\mathbf{W}}=\mathbf{M}_{W}\textrm{diag}(\mathbf{\mu}_{z})$, where
$\mathbf{M}_{W}$ is the matrix of means $\mathbf{\mu}_{ij}$ and
$\mathbf{\mu}_{z}$ is the vector of means $\mu_{z_{j}}$.
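To make the closed-form regularizer concrete, the following is a minimal numpy sketch (not the authors' implementation) of the two KL terms above for a single fully connected layer; the parameter names `mu_w`, `sigma_w`, `mu_z`, `sigma_z` are assumptions made for illustration.

```python
# Illustrative sketch (assumed parameter names, not the authors' code):
# closed-form KL terms of the scaled normal posterior for one dense layer.
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_kl(mu_w, sigma_w, mu_z, sigma_z):
    """mu_w, sigma_w: (m, n) variational means/stds of W given z.
    mu_z, sigma_z: (n,) variational means/stds of the scales z."""
    k1, k2, k3 = 0.63576, 1.87320, 1.48695
    # KL(q(W|z) || p(W|z)) = 0.5 * sum(sigma^2 + mu^2 - 1 - log sigma^2)
    kl_w = 0.5 * np.sum(sigma_w**2 + mu_w**2 - 1.0 - np.log(sigma_w**2))
    # Approximate KL(q(z) || p(z)) with alpha_j = -log(sigma_z^2 / mu_z^2)
    alpha = -np.log(sigma_z**2 / mu_z**2)
    kl_z = np.sum(k1 * (1.0 - sigmoid(k2 - k3 * alpha) - softplus(alpha) / (2.0 * k1)))
    return kl_w + kl_z
```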
Acquisition functions: Once the model is trained on a small dataset, we use
acquisition functions (AF) to fetch the most uncertain datapoints. In a recent
work [6], the authors experimented with several acquisition functions and
found entropy [17] and variation-ratio to be the best candidates [18].
Therefore, we use these two metrics as our choice of AF. A BNN trained for a
multi-class classification problem with $C$ classes maps a given data vector
$\mathbf{x}$ to a $C$-dimensional vector containing the probabilities of the various
classes in its components. For a given unlabelled data vector $\mathbf{x}$, the
entropy is calculated using $T$ forward passes, each time with a new weight
realization $\mathbf{W}_{t}$ from the trained posterior. First, the $T$
output vectors are averaged to obtain the probability for a given class $c$
as $\hat{p}(y=c|\mathbf{x})=1/T\sum_{t}p(y=c|\mathbf{x},\mathbf{W}_{t})$ where
$\mathbf{W}_{t}$ is the $t^{th}$ realization of weights obtained from the
trained posterior. Next, with these probability estimates the entropy can be
calculated as follows:
$\displaystyle
H(y|\mathbf{x})=-\sum_{c}\hat{p}(y=c|\mathbf{x})\log\left(\hat{p}(y=c|\mathbf{x})\right).$
(5)
The variation ratio can be calculated as $v=1-f_{m}/T$ where $f_{m}$ is the
number of predictions falling into the modal class category.
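As an illustration of how these two acquisition scores can be computed from the $T$ stochastic forward passes, here is a minimal numpy sketch (assumed array shapes, not the authors' code):

```python
# Minimal sketch: acquisition scores from probs of shape (T, N, C),
# i.e., T stochastic forward passes over N unlabeled points with C classes.
import numpy as np

def entropy_score(probs):
    p_hat = probs.mean(axis=0)                             # average over T realizations
    return -np.sum(p_hat * np.log(p_hat + 1e-12), axis=1)  # Eq. (5), per data point

def variation_ratio_score(probs):
    preds = probs.argmax(axis=2)                           # (T, N) predicted classes
    T = preds.shape[0]
    f_m = np.array([np.bincount(preds[:, i]).max() for i in range(preds.shape[1])])
    return 1.0 - f_m / T                                   # v = 1 - f_m / T

# Picking the k most uncertain points, e.g.:
# idx = np.argsort(-entropy_score(probs))[:k]
```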
Active learning algorithm: With the Bayesian neural networks and acquisition
functions defined, we now describe our active learning methodology in
Algorithm 1. The procedure starts by training the model with some seed sample
$\mathcal{S}$ (line 13). The size of this sample could be anywhere between 2-5
percent of the training set (depending on the complexity of the data); we call
this step seed training. At each round $r\in R$, we add $k$ new samples by
calling the active learning function (line 15). The function first removes the
chosen sample $\mathcal{S}$ from the main dataset $\mathcal{D}$ and creates
$T$ instances of the network by sampling the weights that were learned during the
training phase. Each instance is tested on unseen data points to obtain an
ensemble of outputs (lines 5-7). Depending on the type of AF (i.e., variation-
ratio or entropy), the uncertainty over the ensemble is calculated and the new
data-points are chosen based on the largest uncertainty scores. Once the model
is trained on the newly appended data points, the algorithm proceeds by validating
on the held-out training dataset $\mathcal{D}_{v}$ (line 19). Note that it is
not necessary to use the validation data $D_{v}$; it is entirely optional.
We assumed that even in the active learning setting, we can afford to use a very
small percentage of unseen data for validation. In the subsequent rounds, the
best weights from the previous round are loaded (i.e., based on the validation
set) and the algorithm resumes performing AL and re-training. Unlike [6],
after each round we do not retrain the model from scratch. In our experiments,
we found that retraining from scratch does not perform as well as re-using
weights from previous rounds and re-training.
Accelerating uncertainty estimation: As discussed earlier, while the scaled
normal prior BNN allows inference in a single feedforward pass, uncertainty
estimation still requires passing a given unlabelled example
through multiple weight realizations sampled from the trained posterior. This
is computationally expensive, and we alleviate it by approximating the
probability distribution of the input to the last layer directly. Equipped
with this distribution, we propose to directly generate random realizations
of the input before the last layer's non-linearity and use those to estimate the
uncertainty.
For the scaled normal posterior in (3), deriving an analytical expression for the
output distribution is challenging due to the non-linear transformations in the
neural network and the dependencies induced by structured layers such as convolution
layers. To illustrate this, consider a simple linear transformation of
$\mathbf{x}$ by a matrix $\mathbf{W}\in\mathbb{R}^{m\times n}$ and bias vector
$\mathbf{b}\in\mathbb{R}^{m}$ given by $\mathbf{y}=\mathbf{Wx}+\mathbf{b}$,
where $\mathbf{W}$ follows the posterior in (3). Each entry of $\mathbf{y}$ is
a sum of $n$ independent random variables with a scaled normal distribution, as
$y_{i}=\sum_{j=1}^{n}W_{ij}x_{j}+b_{i}$. Considering the fact that a typical
neural network has multiple layers and non-linearities, the calculation of the
analytical distribution is non-trivial. However, under the assumption that
$y_{i}$ is Gaussian, the mean and variance of $y_{i}$ after passing through the
ReLU non-linearity are analytically tractable. Based on this, for a multi-layer
neural network comprising $L$ dense layers we can obtain the distribution
of the pre-activation vector components from the distribution of the layer
input. Suppose the weights of the $l^{th}$ layer are represented by
$\mathbf{W}^{l}\in\mathbb{R}^{n^{l+1}\times n^{l}}$, where $n^{l}$ and $n^{l+1}$ are
the input and output dimensions of this layer. Then the expectation and
variance of the components of the vector
$\mathbf{y}^{l}=\mathbf{W}^{l}\mathbf{x}^{l-1}+\mathbf{b}^{l}$ can be obtained in
terms of the first and second order moments of the components of $\mathbf{x}^{l-1}$ as
follows
$\displaystyle\mathbb{E}\left[y_{i}^{l}\right]$
$\displaystyle=\sum_{j=1}^{n^{l}}\mathbb{E}\left[w^{l}_{ij}\right]\mathbb{E}\left[x^{l-1}_{j}\right]+b_{i}^{l},$
(6) $\displaystyle\mathbb{V}[y_{i}^{l}]$
$\displaystyle=\sum_{j=1}^{n^{l}}\mathbb{E}\left[\left(w^{l}_{ij}\right)^{2}\right]\mathbb{E}\left[\left(x^{l-1}_{j}\right)^{2}\right]$
$\displaystyle\quad\quad\quad-\mathbb{E}\left[x^{l-1}_{j}\right]^{2}\mathbb{E}\left[w^{l}_{ij}\right]^{2},$
(7)
where
$\mathbb{E}\left[\left(w^{l}_{ij}\right)^{2}\right]=(\mathbf{\sigma}_{z_{j}}^{2}+\mu_{z_{j}}^{2})(\sigma_{ij}^{2}+\mu_{ij}^{2})$ and $\mathbb{E}\left[w^{l}_{ij}\right]=\mu_{z_{j}}\mu_{ij}$, both of which follow from the posterior in (3).
Further, the input to next layer is obtained passing $\mathbf{y}^{l}$ through
a ReLU non-linearity as
$\mathbf{x}^{l}=\textrm{ReLU}\left(\mathbf{y}^{l}\right)$. We assume that non-
linearity until the last layer is ReLU. Finally, assuming that components of
$\mathbf{y}^{l}$ are Gaussian with mean and variance computed as above, the
mean and variance of components of $\mathbf{x}^{l}$ are given below
$\displaystyle\mathbb{E}\left[x_{i}^{l}\right]$
$\displaystyle=\mathbb{E}\left[y_{i}^{l}\right]\Phi(\delta_{i}^{l})+\sqrt{\mathbb{V}[y_{i}^{l}]}\,f(\delta_{i}^{l}),$
(8) $\displaystyle\mathbb{E}[(x_{i}^{l})^{2}]$
$\displaystyle=\left(\mathbb{E}\left[y_{i}^{l}\right]^{2}+\mathbb{V}\left[y_{i}^{l}\right]\right)\Phi(\delta_{i}^{l})$
$\displaystyle\quad+\mathbb{E}\left[y_{i}^{l}\right]\sqrt{\mathbb{V}\left[y_{i}^{l}\right]}\,f(\delta_{i}^{l})$
(9)
where
$\delta_{i}^{l}=\mathbb{E}\left[y_{i}^{l}\right]/\sqrt{\mathbb{V}\left[y_{i}^{l}\right]}$,
and $\Phi$ and $f$ are the c.d.f. and p.d.f. of the standard Gaussian distribution.
Using the above equations (6) to (9), the mean and variance of the pre-activation of
each layer can be computed iteratively up to the last layer
$\mathbf{y}^{L}$. Equipped with this information, the uncertainty can then be
computed by directly generating samples of $\mathbf{y}^{L}$ and passing them through
the last layer’s non-linearity.
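A minimal numpy sketch of this moment propagation for a stack of dense+ReLU layers is given below; it relies on the Gaussian pre-activation assumption described above, and the variable names (`mu_x`, `ex2_x`, `e_w`, `ew2`) are illustrative rather than the authors' implementation.

```python
# Illustrative sketch of accelerated uncertainty estimation: propagate first/second
# moments through dense+ReLU layers, then sample the last pre-activation y^L directly.
import numpy as np
from scipy.stats import norm

def propagate_layer(mu_x, ex2_x, e_w, ew2, b):
    """mu_x, ex2_x: (n,) first/second moments of the layer input.
    e_w, ew2: (m, n) elementwise E[w] and E[w^2]; b: (m,) bias."""
    mu_y = e_w @ mu_x + b                                   # Eq. (6)
    var_y = ew2 @ ex2_x - (e_w**2) @ (mu_x**2)              # Eq. (7)
    std_y = np.sqrt(np.maximum(var_y, 1e-12))
    delta = mu_y / std_y
    mu_x_next = mu_y * norm.cdf(delta) + std_y * norm.pdf(delta)                      # Eq. (8)
    ex2_next = (mu_y**2 + var_y) * norm.cdf(delta) + mu_y * std_y * norm.pdf(delta)   # Eq. (9)
    return mu_y, var_y, mu_x_next, ex2_next

# Iterating propagate_layer over the hidden layers yields mu_y, var_y for y^L;
# samples y^L ~ N(mu_y, var_y) can then be drawn and passed through the final
# non-linearity to estimate uncertainty without repeated full forward passes.
```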
1 Inputs: Training dataset $\mathcal{D}$, validation dataset
$\mathcal{D}_{v}$, test dataset $\mathcal{D}_{t}$, seed dataset $\mathcal{S}$,
model $\mathcal{M}$, number of epochs $E$, mini batch size $b$, number of
rounds $R$, acquisition size $k$, number of instances $T$.
2 ActiveSubSelectData _(typ)_
3 $\mathcal{D}\leftarrow\mathcal{D}-\mathcal{S}$
4 $ENO\leftarrow[\,]$(array to hold ensemble outputs)
5 for _$j\in T$_ do
6 $\mathcal{M}_{s}\sim\mathcal{M}(\mathbf{w})$(sample from a weight instance)
7 $ENO\leftarrow$ append the model output $\mathcal{M}_{s}(\mathcal{D})$
8 $new\leftarrow$ get uncertainty of ENO with eq(5) if $typ$ is ”Entropy” else
use variation-ratio $v$
9 $new\leftarrow$ sort $new$ and get top-$k$ datapoints with the most
uncertainty
10 $\mathcal{S}\leftarrow$ $\mathcal{S}\cup new$
11
12
13Train($\mathcal{M}$) with seed sample
14 for _each $r\in R$_ do
15 ActiveSubSelectData(”Entropy/Variation-Ratio”)
16 for _each $e\in E$_ do
17
18 Train($\mathcal{M}$) on $\mathcal{S}$
19 $v\\_loss\leftarrow$ Evaluate($\mathcal{M}$) on $\mathcal{D}_{v}$ and get
validation loss
20 if EarlyStoppingCriteria($v\\_loss$): break
21 $\mathcal{M}\leftarrow$ Load best weights based on $\mathcal{D}_{v}$ (if
continual training)
Test($\mathcal{M}$) on $\mathcal{D}_{t}$
Algorithm 1 Uncertainty-based smart data sampling
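For readers who prefer code to pseudocode, the following compact Python sketch mirrors the loop in Algorithm 1 under the continual-training setting; `train`, `evaluate`, `predict_probs`, and `model.sample_weights()` are placeholder names assumed for illustration only, and the acquisition step inlines the entropy score of Eq. (5).

```python
# Compact sketch of Algorithm 1 (continual training); the dataset objects and the
# train/evaluate/predict_probs/sample_weights helpers are illustrative placeholders.
import numpy as np

def active_learning_loop(model, D_pool, S_seed, D_val, rounds, k, T):
    S = list(S_seed)
    train(model, S)                                   # seed training
    for r in range(rounds):
        # draw T weight realizations and score the remaining pool by entropy
        probs = np.stack([predict_probs(model.sample_weights(), D_pool) for _ in range(T)])
        p_hat = probs.mean(axis=0)
        scores = -np.sum(p_hat * np.log(p_hat + 1e-12), axis=1)
        top_k = np.argsort(-scores)[:k]               # most uncertain points
        S += [D_pool[i] for i in top_k]
        D_pool = [x for i, x in enumerate(D_pool) if i not in set(top_k)]
        train(model, S)                               # retrain on the appended data
        evaluate(model, D_val)                        # optional validation / early stopping
    return model, S
```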
## 4 Experiments
Ranging from simple multi-layer perceptrons to deep CNN models, we perform
several experiments to understand the effectiveness and shortcomings of BNNs
relative to EN and MCD. Our objective is to analyze BNNs from the following
perspectives: (1) overall efficiency in the acquisition of new data points, (2)
robustness of BNNs during minimal retraining (i.e., retraining by reusing the
trained model from the previous round), and (3) impact of model capacity and
ensemble size. In addition to this, we also show the outcome of the proposed
accelerated uncertainty estimation on dense neural networks.
Dataset and models: The experiments are performed on four image classification
datasets and one regression dataset. For classification, we chose MNIST,
Fashion MNIST (FMNIST) [19], CIFAR10 and CIFAR100 [20]. For regression, we
chose the housing price prediction dataset introduced by [21]. It consists of
535 unique houses sampled from the state of California. Each house is
represented by both visual and textual data with the visual features
representing the front side of the house, the kitchen, the bedroom and the
bathroom. The following neural network architectures are used in this paper
(for a detailed description please refer to Appendix 6.1). (a) LeNetD2: a simple
densely connected network with 300 and 100 neurons in the first and second
layer respectively, (b) LeNet5 [22]: consists of 2 CNN layers followed by a
classifier network with three dense layers, (c) AlexNet Light (ANL): this is a
simplified version of the Alexnet architecture that is similar to the K-CNN
used in [6]. It has 4 CNN layers followed by two dense layers, (d) VGG: there
are several variations of VGG. For our experiments, we use VGG19 [23] with 16
CNN layers followed by the classifier network, and (e) Densenet: we use
Densenet121 [24] with a growth rate of 32. The Bayesian versions of these
models were implemented from scratch. At the time of writing, there is no
standardized module for Pytorch (models were implemented using the
Pytorch (pytorch.org) and Numpy (numpy.org) libraries) that can seamlessly
convert a conventional neural network to its Bayesian counterpart. The
complete active learning library consisting of all the models and experiments
performed in this paper will be publicly hosted via Github
(github.com/VRM1/ActiveLearning).
Active learning setting: Our training procedure was explained in Algorithm 1.
Depending on the model complexity and the dataset, the number of rounds $R$ is
set anywhere between $40$ and $80$. The number of data-points added during each
round (i.e., $k$) varies from $5$ to $250$. The number of neural network
instances (NNI) to compute uncertainty (i.e., the variable $T$) is set to $15$
for Densenet and $25$ for all other neural networks. For BNN, the instances
are created by sampling from the posterior distribution of the weights, and for
MCD, the dropouts are activated during the active learning phase (but turned
off during the testing phase). Similar to [6], for EN, the number of NNI is set
to 5 and the weights are initialized using the default Pytorch setting, which
follows the Kaiming uniform distribution. The details of the neural network
architectures, datasets, epochs, acquisition sizes, etc., are summarized in Table
1. Experiments are performed by either reusing the state of the model from the
previous round and retraining, termed continual training (CT), or completely
resetting the model and retraining from scratch (RFS). In CT, in each new
round, models are trained for a significantly lower number of epochs, typically
around 30-50, depending on the type of model and dataset. On the contrary, in
RFS the number of epochs ranges anywhere between 100-200, which makes the
training time much greater than in CT.
Evaluation metrics: For the classification task, the following metrics are used.
(1) Top-1 Accuracy: the ratio between the number of correct predictions and
the total number of predictions. We take the class that corresponds to the
highest probability (i.e., from the softmax layer) as the predicted label,
(2) Pr: precision is the fraction of true positives among the retrieved
instances, and (3) F1: F1 is the harmonic mean of precision and recall,
where recall is defined as the fraction of all relevant instances (i.e.,
true positives plus false negatives) that were retrieved.
Aside from the aforementioned performance metrics, it is also important to
know the calibration of our models since a high accuracy does not imply good
calibration and vice versa [25]. Therefore, for classification tasks, we use
expected calibration error (ECE) [26] as the metric of choice. ECE
approximates the level of calibration by partitioning predictions into $M$
equally-spaced bins and taking a weighted average of the bin’s
accuracy/confidence difference. More precisely,
$\displaystyle ECE=\sum_{m=1}^{M}\cfrac{|B_{m}|}{n}|acc(B_{m})-conf(B_{m})|$
(10)
where n is the total number of samples and $B_{m}$ indicates a group of
samples whose prediction confidence falls into a certain interval.
$acc(B_{m})$ is the ground truth accuracy and $conf(B_{m})$ is predicted
probability (termed confidence) of $B_{m}$. For the regression task, we use
the coefficient of determination $R^{2}$, which is a measure of the closeness
of the predicted model relative to the actual model [27]. $R^{2}$ is defined
as $1-SSE/SST$, where $SSE=\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}$ and
$SST=\sum_{i=1}^{n}(\bar{y}-y_{i})^{2}$, $\hat{y}_{i}$ is the predicted value
of $i$ and $\bar{y}$ is the observed average.
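As an illustration, the following minimal numpy sketch (not the authors' evaluation code) implements the ECE computation in (10) with equally-spaced confidence bins:

```python
# Minimal sketch: ECE as defined in Eq. (10), with M equally-spaced confidence bins.
import numpy as np

def expected_calibration_error(probs, labels, num_bins=15):
    """probs: (N, C) predicted class probabilities; labels: (N,) true labels."""
    conf = probs.max(axis=1)                  # predicted confidence
    pred = probs.argmax(axis=1)               # predicted class
    correct = (pred == labels).astype(float)
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece, n = 0.0, len(labels)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.sum() / n * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```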
Dataset | Model | Epoch(CT/RFS) | #Rnd | Aq Size
---|---|---|---|---
F/MNIST | LenetD2 | 30/100 | 40 | 1K/100/4K
F/MNIST | Lenet5 | 30/100 | 40 | 100/100/4K
CIFAR10 | VGG16 | 50/200 | 80 | 1K/250/20K
CIFAR100 | VGG16 | 50/200 | 80 | 1K/250/20K
Housing | LenetD2 | 50 | 40 | 50/5/200
Table 1: Settings of the active learning experiments for various datasets. The
acquisition size (Aq Size) is the number of new datapoints added in each round
and #Rnd is the number of rounds. Aq Size is divided into 3 parts (separated
by /), where the first indicates the seed sample size, the second is the number of
datapoints added in each round, and the third is the total acquisition size
across all rounds.
### 4.1 Results
Figure 1: Active learning performance on the LeNetD2 and Lenet5 architectures with
the CT setting. For MNIST, BNN and EN perform on a similar level, while MCD trails
behind. For FMNIST, BNN clearly outperforms the rest, and although EN performs
similarly to BNN up to round 10, it then starts to trail behind BNN and
surprisingly even loses to MCD.
Performance on shallow neural networks: We begin by looking at the accuracy
of BNN and the other baselines on the LeNetD2 and Lenet5 architectures.
In Figure 1, the x-axis is the number of samples at each round, which is indicated
as $(\times 100)$. For instance, at $x=5$ we have added 500 samples, and round
0 marks the beginning of the active learning procedure. Each model is followed
by a hyphen and a letter, which indicates the type of acquisition function.
For example, E is entropy and VR is variation ratio. For presentation
purposes we do not show random sampling, but in our experiments it
substantially trailed behind every acquisition function. Overall, across all
models, VR tends to outperform entropy when it comes to acquisition functions.
Nonetheless, this difference is more pronounced during the first half of the
rounds; during the final rounds the difference becomes quite narrow. This
shows that entropy tends to improve its acquisition quality when it sees more data.
When it comes to MNIST, BNN and EN perform on a similar level, while MCD
trails behind. However, due to the simplicity of the dataset, the difference in
performance is not that discernible. For FMNIST, which is a more challenging
dataset, we start to see some interesting differences. In LenetD2 (Figure 1
(b)), while the performance of BNN and EN is on par with each other (for VR),
BNN seems to yield better results for entropy. In Lenet5 (Figure 1 (d)),
although EN performs similarly to BNN up to round 10, BNN clearly produces better
accuracy than all other models up to round 50. Additionally, we observe EN’s
accuracy lagging quite significantly behind the rest after about 30 rounds.
This was quite surprising to us since in [6], the authors claim EN to perform
better than MCD. Upon further investigation, we found that the poor
performance of EN (compared to MCD) is only observed with CT. In the upcoming
experiments, we will show the results on both CT and RFS.
Figure 2: Active learning performance of ANL and VGG over Cifar10 –
classification accuracy and model calibration. BNN yields a more robust
performance when compared to EN and MCD, irrespective of the training
methodology (a)-(d), while EN suffers quite significantly in the CT setting.
When retraining from scratch, EN gets a major boost in performance and just
slightly lags behind BNN for ANL and performs on par with BNN on VGG.
Unfortunately, BNN suffers from poor calibration (e)-(h), but it can be quite
easily corrected. The calibrated model BNN-VRc is shown as the light blue bar.
Performance on Cifar10: The active learning performance of ANL and VGG is
represented in Figure 2 (a)-(d). The results unravel some important
characteristics of the models. First, when it comes to CT, BNN's performance is
significantly better than both EN and MCD, especially for VGG. Here, BNN
achieves about 75% and MCD about 67% accuracy, but similar to the previous results
(i.e., Lenet5) EN clearly underperforms with just 55%. A possible reason for
this outcome could be the lack of regularization. ENs are full capacity models
without any dropouts, and we observe quite substantial overfitting in the CT
setting. This in turn results in incorrect decisions when acquiring new
data points during the active learning phase. Another reason could be
catastrophic forgetting, which is a well known problem in continual training
of neural networks [28, 29]. BNN, on the other hand, seems to be more robust to
such perturbations. That being said, when retraining from scratch EN starts to
outperform MCD as it doesn’t overfit the data acquired in previous rounds,
which aligns with the recent study [6]. For ANL (Figure 2 (b)) we clearly see
EN outperforming MCD; however, BNN still achieves better performance. When it
comes to VGG (Figure 2 (c)) we do not find any distinct performance gaps
between the three models. Overall, when retrained from scratch, all models
perform better compared to CT, but we observe a lot more variation in the
accuracy scores, while in CT the increase in accuracy is quite smooth.
Calibration characteristics: When it comes to measuring robustness of machine
learning models, relying solely on performance metrics such as accuracy is not
sufficient. In our experiments, the calibration score is measured as expected
calibration error (ECE) using equation (10). In Figure 2 (e)-(h), the y-axis
is the ECE and the x-axis is the elapsed round number. For instance, for both
ANL and VGG since the active learning is performed until round 80 (Figure 2
(a)-(d)), 25% implies the 20th round of the corresponding experiment. From
calibration plots, we observe that ENs have the best out-of-the-box ECE scores
followed by MCD. Although BNN offers great performance in terms of accuracy,
they seem to be quite poorly calibrated. Thankfully, calibration is a post-
training step and there are several ways to calibrate a model, such as
histogram binning and temperature scaling; in this paper we adopt the latter.
The ECE of the calibrated BNN model is shown as the blue bar. It is important
to note that even though we have significantly reduced the calibration error,
the accuracy of BNN still remains unaltered.
Figure 3: Active learning performance of Densenets on Cifar100 dataset.
Similar to the results on Cifar10, BNN produces a more robust performance than
EN and MCD whether it is CT or retraining from scratch.
Performance on Cifar100: From Figure 3, one can observe that even for larger
neural networks such as Densenets, BNN produces significantly better accuracy
compared to MCD and EN. As for EN, it appears that as the neural networks become
more complex, the CT methodology starts to substantially cripple the
performance. When retraining from scratch, both BNN and EN have similar performance,
while MCD trails behind.
Performance on the regression dataset: The robust performance of BNN is not just
restricted to classification, but also extends to regression. Figure 4 shows
the performance of BNN in terms of $R^{2}$ on housing price prediction. For
this dataset, we used the LenetD2 architecture, where the input layer is a
feature vector formed by concatenating the image and descriptive text
features of the individual houses [21]. For this dataset, we start with a seed
sample of 50, and in each round we add five new data points selected
via active learning. Here there is no major performance difference between BNN
and EN, but MCD clearly underperforms. Additionally, when it comes to BNN
there seems to be a lot more variance in the $R^{2}$ score in each round, while EN
tends to be smoother.
Figure 4: Housing price prediction dataset using LenetD2. There is no major
performance difference between BNN and EN, but MCD clearly underperforms.
### 4.2 Ablation Study
Dataset | Class | Method | Pr (25%) | F1 (25%) | Pr (50%) | F1 (50%) | Pr (100%) | F1 (100%)
---|---|---|---|---|---|---|---|---
Cifar10 (VGG) | Bird | BNN-VR | 51.2 | 47.9 | 63.8 | 67.1 | 79.3 | 76.9
Cifar10 (VGG) | Bird | EN-VR | 48.8 | 46.2 | 62.8 | 63.4 | 75.7 | 75.2
Cifar10 (VGG) | Cat | BNN-VR | 50.6 | 45.4 | 59.6 | 57.8 | 67.3 | 68.4
Cifar10 (VGG) | Cat | EN-VR | 49.3 | 44.9 | 58.4 | 55.6 | 69 | 68.2
Cifar10 (VGG) | Dog | BNN-VR | 36.4 | 44.0 | 62.7 | 65.1 | 77.8 | 77.2
Cifar10 (VGG) | Dog | EN-VR | 54.1 | 58.2 | 60.9 | 64.9 | 74 | 74.9
FMNIST (Lenet5) | Pullover | BNN-VR | 74.7 | 75.3 | 81 | 79.8 | 81.8 | 80.2
FMNIST (Lenet5) | Pullover | EN-VR | 71.4 | 72.4 | 74.4 | 75.2 | 78.3 | 77
FMNIST (Lenet5) | Shirt | BNN-VR | 57.1 | 62.8 | 62.7 | 64.5 | 66 | 66.3
FMNIST (Lenet5) | Shirt | EN-VR | 56.6 | 56.2 | 61.9 | 60.5 | 62.7 | 63.4
Table 2: Precision (Pr) and F1 score of BNN and EN on challenging class labels. The
percentages indicate the elapsed rounds. Even though
for VGG, EN and BNN have similar accuracy (Figure 2(d)), the precision and F1
measure on challenging labels indicate that performing AL via BNN leads to
better results.
We perform data-centric and model-centric ablation studies to get a deeper
understanding of BNN’s performance.
Performance on challenging class labels: When retraining from scratch, we see
that the performance gap between BNN and EN is quite narrow, especially on VGG
(Figure 2(d)). Nonetheless, accuracy is a global score and when it comes to
AL, it is important to measure the performance on labels that are hard to
classify. Table 2 shows the precision and F1 measure of FMNIST and Cifar10
datasets for classes that are most challenging to predict. The corresponding
accuracy plots can be seen from Figure 1(d) and Figure 2(d) respectively. Even
though BNN and EN produce similar accuracy, for more challenging class labels,
the former outshines the later. Across all selected rounds of active learning,
both precision and F1 measures of BNN are clearly better than EN. This shows
that BNN’s uncertainty estimation is more effective in acquiring datapoints
that improve the model performance on challenging class labels.
Figure 5: Impact of the number of NNI and model capacity on Cifar10. BNN-VR25
indicates 25 network instances used to estimate the uncertainty, while BNN-VR5
implies five. BNN-VR- is the ANL network with reduced capacity that matches
that of MCD when dropouts are activated.
Impact of NNI: The performance of AL depends on how robust the uncertainty
estimation is. When it comes to BNN, a key factor that determines the goodness of
estimation is the number of NNI (i.e., lines 5-7 of Algorithm 1). A major
advantage of EN over BNN (and MCD) is the low number of NNI that is needed to
estimate uncertainty, which is set to five in our experiments. On the other
hand, for BNNs we have 25 instances. Therefore, we wanted to test how good
BNN is when the NNI is reduced to that of EN. The results of this experiment are
shown in Figure 5. Here, one can observe that during CT, there is some
performance dip, but it is mostly towards the final rounds. On the other hand,
for RFS, there is no noticeable loss in accuracy. We observed similar results
even with entropy as the acquisition function. A possible reason for this
outcome could be the Bayesian model’s ability to learn a distribution over the
weights (instead of a point estimate), which naturally models uncertainty during
the training process (which EN and MCD lack).
Impact of reduced model capacity: Besides NNI, another factor that determines
the performance of AL is the model capacity. Both BNN and EN enjoy the benefit
of being full capacity models (during training). However, for MCD, due to
dropouts a significant portion of the neurons in the dense and CNN layers remain
inactive, which is one of the main reasons for MCD’s inferior performance. In
fact, [6] show that when ENs are capacity-limited, their performance drops
roughly to that of MCD. To see this effect on BNN, we reduced the number of
CNN filters and dense layers to that of MCD (i.e., 50% less for dense and 25%
less for CNN). The outcome of this experiment is shown in Figure 5, where BNN-
VR- is the capacity-limited BNN. While we do see a noticeable performance dip
when compared to the regular BNN, the accuracy still manages to be on par with
EN for both CT and RFS.
Performance of accelerated uncertainty estimation: Finally, we compare the
performance of the proposed Bayesian accelerated uncertainty estimation
(AUE) on the LenetD2 architecture in Figure 6. It is interesting to see
that for MNIST, our approximation is as effective as uncertainties estimated
through iterative realizations of neural networks. For FMNIST (which is a more
challenging dataset), AUE still produces respectable results. It is important
to note that the iterative procedure creates multiple instantiations of the
network, and every instance needs to see the entire dataset to estimate the
uncertainty. Although AUE cannot match the performance of the iterative
approach, it significantly reduces the time taken to calculate the uncertainty,
and this might be a worthy trade-off between performance and speed.
Figure 6: Active learning performance of iterative uncertainty estimation vs.
accelerated uncertainty estimation (BNN-AUE) on the MNIST and FMNIST datasets.
## 5 Conclusion
In this paper, we analyzed Bayesian neural networks for active learning and
showed that they are more efficient than ensembles and Monte Carlo dropouts in
capturing uncertainty. Our experiments revealed the following characteristics
of BNNs: (a) BNNs are more robust when it comes to continual learning, and EN
performs worse than MCD in this setting, (b) they produce respectable uncertainty
estimates even with reduced model capacity, and (c) they produce better precision
and F1 on difficult class labels compared to EN. To overcome the computational
cost of repeated uncertainty estimation in active learning, we proposed an
accelerated uncertainty estimation technique for dense layers.
## References
* [1] Ozan Sener and Silvio Savarese. A geometric approach to active learning for convolutional neural networks. arXiv preprint arXiv, 1708:1, 2017.
* [2] Lin Yang, Yizhe Zhang, Jianxu Chen, Siyuan Zhang, and Danny Z Chen. Suggestive annotation: A deep active learning framework for biomedical image segmentation. In International conference on medical image computing and computer-assisted intervention, pages 399–407. Springer, 2017.
* [3] Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113–127, 2015.
* [4] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402–6413, 2017.
* [5] Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1183–1192. JMLR. org, 2017.
* [6] William H Beluch, Tim Genewein, Andreas Nürnberger, and Jan M Köhler. The power of ensembles for active learning in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9368–9377, 2018.
* [7] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pages 1050–1059, 2016.
* [8] Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3288–3298, 2017.
* [9] Burr Settles. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
* [10] Alex Holub, Pietro Perona, and Michael C Burl. Entropy-based active learning for object recognition. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 1–8. IEEE, 2008.
* [11] Ajay J Joshi, Fatih Porikli, and Nikolaos Papanikolopoulos. Multi-class active learning for image classification. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 2372–2379. IEEE, 2009.
* [12] Xin Li and Yuhong Guo. Adaptive active learning for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 859–866, 2013.
* [13] Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. Batchbald: Efficient and diverse batch acquisition for deep bayesian active learning. In Advances in Neural Information Processing Systems, pages 7026–7037, 2019.
* [14] Parmida Atighehchian, Frédéric Branchaud-Charron, and Alexandre Lacoste. Bayesian active learning for production, a systematic study and a reusable library. arXiv preprint arXiv:2006.09916, 2020.
* [15] Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, and Shadab Khan. Towards robust and reproducible active learning using neural networks. arXiv, pages arXiv–2002, 2020.
* [16] Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2498–2507. JMLR. org, 2017.
* [17] Claude E Shannon. A mathematical theory of communication. Bell system technical journal, 27(3):379–423, 1948.
* [18] Linton C Freeman. Elementary applied statistics: for students in behavioral science. John Wiley & Sons, 1965.
* [19] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
* [20] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [21] Eman H Ahmed and Mohamed Moustafa. House price estimation from visual and textual features.(2016). arXiv preprint cs.CV/1609.08399, 2016.
* [22] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* [23] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [24] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017.
* [25] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
* [26] Mahdi Pakdaman Naeini, Gregory F Cooper, and Milos Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In Proceedings of the… AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence, volume 2015, page 2901. NIH Public Access, 2015.
* [27] Nico JD Nagelkerke et al. A note on a general definition of the coefficient of determination. Biometrika, 78(3):691–692, 1991.
* [28] Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211, 2013.
* [29] Ronald Kemker, Marc McClure, Angelina Abitino, Tyler Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. arXiv preprint arXiv:1708.02072, 2017.
|
# On Strong Small Loop Transfer Spaces Relative to Subgroups of Fundamental
Groups
S.Z. Pashaei<EMAIL_ADDRESS>B. Mashayekhy<EMAIL_ADDRESS>M. Abdullahi Rashid<EMAIL_ADDRESS>Department of Pure Mathematics,
Center of Excellence in Analysis on Algebraic Structures, Ferdowsi University
of Mashhad,
P.O.Box 1159-91775, Mashhad, Iran.
###### Abstract
Let $H$ be a subgroup of the fundamental group $\pi_{1}(X,x_{0})$. By
extending the concept of strong SLT space to a relative version with respect
to $H$, strong $H$-SLT space, first, we investigate the existence of a
covering map for strong $H$-SLT spaces. Moreover, we show that a semicovering
map is a covering map in the presence of the strong $H$-SLT property. Second, we
present conditions under which the whisker topology agrees with the lasso
topology on $\widetilde{X}_{H}$. Also, we study the relationship between open
subsets of $\pi_{1}^{wh}(X,x_{0})$ and $\pi_{1}^{l}(X,x_{0})$. Finally, we
give some examples to justify the definition and study of strong $H$-SLT
spaces.
###### keywords:
Strong small loop transfer space, quasitopological fundamental group, whisker
topology, lasso topology, covering map, semicovering map.
###### MSC:
[2010]57M10, 57M12, 57M05, 55Q05.
## 1 Introduction and Motivation
Throughout this article, we consider a path connected topological space $X$
with a base point $x_{0}\in{X}$. Given a pointed topological space
$(X,x_{0})$, we denote the set of all paths in $X$ starting at $x_{0}$ by
$P(X,x_{0})$. Let $H$ be a subgroup of $\pi_{1}(X,x_{0})$ and
$\widetilde{X}_{H}=P(X,x_{0})/\sim$, where $\alpha\sim\beta$ if and only if
$\alpha(1)=\beta(1)$ and $[\alpha\ast\beta^{-1}]\in{H}$. The equivalence class
of $\alpha$ is denoted by $[\alpha]_{H}$. We denote the constant path at
$x_{0}$ by $c_{x_{0}}$. Note that $[\alpha]_{H}=[c_{x_{0}}]_{H}$ if and only
if $[\alpha]\in{H}$. The map $p_{H}:\widetilde{X}_{H}\rightarrow X$ is defined
to be the endpoint projection $p_{H}([\alpha]_{H})=\alpha(1)$. We denote
$p_{e}$ and $\widetilde{X}_{e}$ instead of $p_{H}$ and $\widetilde{X}_{H}$,
respectively, when $H$ is the trivial subgroup.
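For later use we record a standard identification (implicit, for instance, in Corollary 3.3 below): the fibre of $p_{H}$ over the base point can be identified with the coset space $\pi_{1}(X,x_{0})/H$, that is, $p_{H}^{-1}(x_{0})=\\{[\alpha]_{H}\ |\ \alpha(1)=x_{0}\\}\cong\pi_{1}(X,x_{0})/H$, since for loops $\alpha,\beta$ based at $x_{0}$ one has $[\alpha]_{H}=[\beta]_{H}$ if and only if $[\alpha\ast\beta^{-1}]\in{H}$.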
There are three famous topologies on $\widetilde{X}_{H}$. One of them is the
quotient topology induced by the compact-open topology on $P(X,x_{0})$. We
denote the space $\widetilde{X}_{H}$ equipped with this topology by
$\widetilde{X}^{top}_{H}$. The second topology is the whisker topology which
was introduced by Spanier [14, Theorem 2.5.13] and named by Brodskiy et al.
[6], as follows.
###### Definition 1.1.
For any pointed topological space $(X,x_{0})$ the whisker topology on the set
$\widetilde{X}_{H}$ is defined by the collection of all the following sets as
a basis
$B_{H}([\alpha]_{H},U)=\\{[\beta]_{H}\in{\widetilde{X}_{H}}\ |\
\beta\simeq\alpha\ast\lambda\ for\ some\ \lambda:I\rightarrow U,\
\lambda(0)=\alpha(1)\\},$
where $[\alpha]_{H}\in{\widetilde{X}_{H}}$ and $U$ is an open neighborhood of
$\alpha(1)$. In the case $H=1$, we denote the basis elements of
$\widetilde{X}_{e}$ by $B([\alpha],U)$.
The Spanier group $\pi(\mathcal{U},x)$ [14] with respect to an open cover
$\mathcal{U}=\\{U_{i}\ |\ i\in{I}\\}$ is defined to be the subgroup of
$\pi_{1}(X,x)$ which contains all homotopy classes having representatives of
the type $\prod_{j=1}^{n}\alpha_{j}\beta_{j}\alpha^{-1}_{j}$, where
$\alpha_{j}$’s are arbitrary paths starting at $x$ and each $\beta_{j}$ is a
loop inside one of the open sets $U_{j}\in{\mathcal{U}}$.
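Purely as a notational restatement of the above, one may write
$\pi(\mathcal{U},x)=\langle\ [\alpha_{j}\ast\beta_{j}\ast\alpha^{-1}_{j}]\ |\ \alpha_{j}\in{P(X,x)},\ \beta_{j}\ a\ loop\ in\ some\ U_{j}\in{\mathcal{U}}\ based\ at\ \alpha_{j}(1)\ \rangle\leq\pi_{1}(X,x).$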
The third topology is the lasso topology, which was introduced and studied in
[6].
###### Definition 1.2.
For any pointed topological space $(X,x_{0})$ the lasso topology on the set
$\widetilde{X}_{H}$ is defined by the collection of all the following sets as
a basis
$B_{H}([\alpha]_{H},\mathcal{U},U)=\\{[\beta]_{H}\in{\widetilde{X}_{H}}\ |\
\beta\simeq\alpha\ast\gamma\ast\delta\ for\ some\
[\gamma]\in{\pi(\mathcal{U},\alpha(1))}$ $\ \ \ \ \ \ \ and\ for\ some\
\delta:I\rightarrow U,\ \delta(0)=\alpha(1)\\},$
where $[\alpha]_{H}\in{\widetilde{X}_{H}}$, $\mathcal{U}$ is an open cover of
$X$ and $U\in{\mathcal{U}}$ is an open neighborhood of $\alpha(1)$. In the
case $H=1$, we denote the basis elements of $\widetilde{X}_{e}$ by
$B([\alpha],\mathcal{U},U)$.
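As a quick comparison with Definition 1.1 (an elementary observation, recorded here only for orientation): for every $U\in{\mathcal{U}}$ one has $B_{H}([\alpha]_{H},U)\subseteq B_{H}([\alpha]_{H},\mathcal{U},U)$, obtained by taking $\gamma$ to be the constant path at $\alpha(1)$; this is precisely why $\widetilde{X}^{wh}_{H}$ is finer than $\widetilde{X}^{l}_{H}$ (see below).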
We denote the space $\widetilde{X}_{H}$ equipped with whisker and lasso
topologies by $\widetilde{X}^{wh}_{H}$ and $\widetilde{X}^{l}_{H}$,
respectively. Moreover, we denote $\widetilde{X}^{top}_{e}$,
$\widetilde{X}^{wh}_{e}$ and $\widetilde{X}^{l}_{e}$ instead of
$\widetilde{X}^{top}_{H}$, $\widetilde{X}^{wh}_{H}$ and
$\widetilde{X}^{l}_{H}$, respectively, when $H$ is the trivial subgroup. Note
that $\pi_{1}^{qtop}(X,x_{0})$ and $\pi_{1}^{wh}(X,x_{0})$ and
$\pi_{1}^{l}(X,x_{0})$ can be considered as subspaces of
$\widetilde{X}^{top}_{e}$, $\widetilde{X}^{wh}_{e}$ and
$\widetilde{X}^{l}_{e}$, respectively. The relation between these three
different topologies is as follows, when $X$ is a connected, locally path
connected space (see [18]).
“$\widetilde{X}^{wh}_{e}$ is finer than $\widetilde{X}^{top}_{e}$” and “
$\widetilde{X}^{top}_{e}$ is finer than $\widetilde{X}^{l}_{e}$”
Note that similar statements to the above hold for $\widetilde{X}_{H}$ when
$H$ is a nontrivial subgroup (see [4]).
Small loop transfer (SLT for short) spaces were introduced for the first time
by Brodskiy et al. [7, Definition 4.7]. The main motivation of the definition
of SLT spaces is to determine the condition for coincidence of the compact-
open topology and the whisker topology on $\widetilde{X}_{e}$. Indeed,
Brodskiy et al. [7, Theorems 4.11, 4.12] proved that a locally path connected
space $X$ is an SLT space if and only if for every $x\in{X}$,
$\widetilde{X}^{top}_{e}=\widetilde{X}^{wh}_{e}$. Also, they defined a strong
version of this notion, strong SLT space [7, Definition 4.18], and showed that
a path connected space $X$ is a strong SLT space if and only if for every
$x\in{X}$, $\widetilde{X}^{l}_{e}=\widetilde{X}^{wh}_{e}$ [7, Proposition
4.19]. Moreover, Pashaei et al. [13] introduced and studied SLT spaces with
respect to a subgroup $H$ of $\pi_{1}(X,x_{0})$ ($H$-SLT for short) at a point
$x_{0}$, and using this notion, presented a condition for the coincidence of
the whisker and the compact-open topology on $\widetilde{X}_{H}$. In this
paper by introducing a relative version of strong small loop transfer spaces
with respect to a subgroup $H$ of $\pi_{1}(X,x_{0})$ at a point $x_{0}$
(strong $H$-SLT at $x_{0}$), we are going to determine when the whisker and
the lasso topologies on $\widetilde{X}_{H}$ are identical. Also, we study the
relationship between covering and semicovering spaces of strong $H$-SLT spaces
at a point in $X$.
###### Definition 1.3.
Let $H$ be a subgroup of $\pi_{1}(X,x_{0})$. A topological space $X$ is called
strong $H$-small loop transfer (strong $H$-SLT for short) space at $x_{0}$ if
for every $x\in X$ and for every open neighborhood $U$ in $X$ containing
$x_{0}$ there is an open neighborhood $V$ containing $x$ such that for every
loop $\beta:I\rightarrow V$ based at $x$ and for every path
$\alpha:I\rightarrow X$ from $x_{0}$ to $x$ there is a loop
$\lambda:I\rightarrow U$ based at $x_{0}$ such that,
$[\alpha\ast\beta\ast\alpha^{-1}]_{H}=[\lambda]_{H}$. Also, $X$ is called a
strong $H$-SLT space if for every $x\in{X}$ and for every path $\delta$ from
$x_{0}$ to $x$, $X$ is a strong $[\delta^{-1}H\delta]$-SLT space at $x$. Note
that if $H$ is the trivial subgroup, then a strong $H$-SLT space is a strong
SLT space.
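For a simple illustration of the definition at the other extreme, if $H=\pi_{1}(X,x_{0})$, then every path connected space $X$ is a strong $H$-SLT space at $x_{0}$: for any $\alpha$ and $\beta$ as above one may take $\lambda=c_{x_{0}}$, since $[\alpha\ast\beta\ast\alpha^{-1}]_{H}=[c_{x_{0}}]_{H}$ holds automatically when $H$ is the whole fundamental group (this observation reappears in Section 3).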
It is well known that covering spaces of a path connected, locally path
connected and semilocally simply connected space $X$ are classified by
subgroups of the fundamental group $\pi_{1}(X,x_{0})$. When $X$ has more
complicated local structure, there need not be a simply connected cover
corresponding to the trivial subgroup. Many people have attempted to extend
the covering-theoretic approach to more general spaces. A common approach is
to single out those properties of a covering map which are considered essential.
One of them is semicoverings [2] which are defined to be local homeomorphisms
with continuous lifting of paths and homotopies which are related to
topological group structures on fundamental groups [3, 10]. The other one is
generalized universal coverings which were introduced by Fischer and Zastrow
[9] and provide combinatorial information about fundamental groups of spaces
which are not semilocally simply connected such as the Hawaiian earring, the
Menger curve, and the Sierpinski carpet.
The following theorem determines the existence of coverings for locally path
connected spaces via Spanier groups [14, Theorem 2.5.13].
###### Theorem 1.4.
Let $X$ be connected, locally path connected and $H\leq\pi_{1}(X,x_{0})$. Then
there exists a covering map $p:\widetilde{X}\rightarrow X$ with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H$ if and only if there is an
open cover $\mathcal{U}$ of $X$ such that $\pi(\mathcal{U},x_{0})\leq H$.
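As an illustrative special case (recalled here only for orientation): if $X$ is in addition semilocally simply connected, then every point has an open neighborhood whose loops are null-homotopic in $X$; for the cover $\mathcal{U}$ by all such neighborhoods every generator $[\alpha\ast\beta\ast\alpha^{-1}]$ is trivial, so $\pi(\mathcal{U},x_{0})=1\leq H$ for every $H\leq\pi_{1}(X,x_{0})$, and Theorem 1.4 recovers the classical existence of coverings for all subgroups.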
Brazas [3, Theorem 4.8] and Torabi et al. [15, Theorem 2.1] have addressed the
existence of covering spaces of locally path connected spaces via algebraic and
topological structures of fundamental groups. For instance, in [15, Theorem
2.1] it was shown that any open normal subgroup in $\pi_{1}^{qtop}(X,x_{0})$
contains a Spanier group. In Section 2, we show the existence of covering
spaces of a strong $H$-SLT space at point $x_{0}$ provided that $H$ is open in
$\pi_{1}^{qtop}(X,x_{0})$. In fact, we prove that if $K$ is an open subgroup
in $\pi_{1}^{qtop}(X,x_{0})$ containing $H$, then there is an open cover
$\mathcal{U}$ of $X$ such that $\pi(\mathcal{U},x_{0})$ is contained in $K$
when $X$ is a strong $H$-SLT space at $x_{0}$ (see Proposition 2.1).
It is obvious that every covering map $p:\widetilde{X}\rightarrow X$ is a
semicovering map but not vice versa [10]. Brazas showed that if $X$ is a
connected, locally path connected and semilocally simply connected space, then
these two concepts are the same (see [2, Corollary 7.2]). Moreover, Torabi et
al. [15, Theorem 4.4] proved this fact for connected, locally path connected
and semilocally small generated spaces. In Corollary 2.3, we show that if
$p:(\widetilde{X},\tilde{x}_{0})\rightarrow(X,x_{0})$ is a semicovering map
with $p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H\leq\pi_{1}(X,x_{0})$,
then $p$ is a covering map when $X$ is a connected, locally path connected and
strong $H$-SLT space at $x_{0}$. Consequently, we can show that every
semicovering map is a covering map in strong SLT spaces at $x_{0}$. Recall
that Brazas introduced the notion of $\textbf{lpc}_{0}$-covering maps in terms
of unique lifting property [4, Definition 5.3]. Note that
$\textbf{lpc}_{0}$-covering maps were inspired by the concept of generalized
universal covering maps, which were introduced by Fischer and Zastrow [9]. By the
definition, it can be easily seen that every covering map is an
$\textbf{lpc}_{0}$-covering map. Since the fibers of a covering map are
discrete, Example 4.15 in [9] implies that an $\textbf{lpc}_{0}$-covering map
is not necessarily a covering map. In Proposition 2.5, we prove that if $X$ is
a strong $H$-SLT space at $x_{0}$, then an $\textbf{lpc}_{0}$-covering map
$p:(\widetilde{X},\tilde{x}_{0})\rightarrow(X,x_{0})$ with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H$ is a covering map when the
fiber $p^{-1}(x_{0})$ is finite. Finally, we address the relationship between
some famous subgroups of $\pi_{1}(X,x_{0})$ in strong SLT and SLT spaces at
$x_{0}$.
The aim of Section 3 is to clarify the relationship between the whisker
topology and the lasso topology on $\widetilde{X}_{H}$. We show that these two
topologies on $\widetilde{X}_{H}$ are identical if and only if the space $X$
is a strong $H$-SLT space when $H$ is a normal subgroup (see Theorem 3.2).
Moreover, we show that if $X$ is strong $H$-SLT at $x_{0}$, then all open
subsets of $\pi_{1}^{wh}(X,x_{0})$ and $\pi_{1}^{l}(X,x_{0})$ containing
normal subgroup $H$ are the same. In the case that $H$ is not normal, we show
that open subgroups of $\pi_{1}^{wh}(X,x_{0})$ and $\pi_{1}^{l}(X,x_{0})$
containing $H$ are the same (see Proposition 3.6). However, we prove that if
$X$ is a strong $H$-SLT space at $x_{0}$, then closed normal subgroups of
$\pi_{1}^{wh}(X,x_{0})$ and $\pi_{1}^{l}(X,x_{0})$ containing $H$ are the
same. In Corollary 3.11, we show that a semicovering map can transfer the
property of being strong SLT from its codomain to its domain. Finally, in
order to justify the definition of strong $H$-SLT spaces, we give an example
of an strong $H$-SLT space which is not strong SLT and consequently, it is not
semilocally simply connected (see Example 3.13). Also, we give an example to
show that some results of the paper do not necessarily hold, for instance
Proposition 3.6, for $H$-SLT spaces at $x_{0}$ (see Example 3.14).
## 2 Relationship Between Strong SLT Spaces and Covering Maps
Since existence of covering maps have a significant relation with Spanier
groups (see Theorem 1.4), it is interesting to find conditions under which for
a subgroup $H$ of $\pi_{1}(X,x_{0})$ there is an open cover $\mathcal{U}$ of
$X$ such that $\pi(\mathcal{U},x_{0})\leq H$. Recall that Torabi et al. in
[15, Theorem 2.1] proved that if $H$ is an open normal subgroup of
$\pi_{1}^{qtop}(X,x_{0})$, then there is a Spanier group which is contained in
$H$. In the following proposition, we show that if $X$ is a strong $H$-SLT
space at $x_{0}$, then any open subgroup of $\pi_{1}^{qtop}(X,x_{0})$
containing $H$ contains a Spanier group.
###### Proposition 2.1.
Let $H\leq\pi_{1}(X,x_{0})$ and $X$ be a connected, locally path connected and
strong $H$-SLT space at $x_{0}$. If $K$ is an open subgroup of
$\pi_{1}^{qtop}(X,x_{0})$ containing $H$, then there is an open cover
$\mathcal{U}$ of $X$ such that $\pi(\mathcal{U},x_{0})\leq K$, i.e., there exists
a covering map $p:\widetilde{X}\rightarrow X$ with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=K$.
###### Proof.
Let $K$ be an open subgroup of $\pi_{1}^{qtop}(X,x_{0})$ containing $H$. By
the definition of the quotient topology $\pi_{1}^{qtop}(X,x_{0})$, there is an
open basis neighborhood $W=\bigcap_{i=1}^{n}\langle I_{i},U_{i}\rangle$ of the
constant path $c_{x_{0}}$ in $\Omega(X,x_{0})=\\{\alpha\ |\
\alpha(0)=\alpha(1)=x_{0}\\}$ such that $W\subseteq\pi^{-1}(K)$, where
$\pi:\Omega(X,x_{0})\rightarrow\pi_{1}(X,x_{0})$ is the quotient map defined
by $\pi(\alpha)=[\alpha]$. Put $U=\bigcap_{i=1}^{n}U_{i}$. By the structure of
$W$, we have $x_{0}\in{U}$ and so $U\neq\emptyset$. Since $X$ is a connected,
locally path connected and strong $H$-SLT space at $x_{0}$, for every
$x\in{X}$ there is a path connected open neighborhood $V$ containing $x$ such
that for every loop $\beta:I\rightarrow V$ based at $x$ and for every path
$\alpha:I\rightarrow X$ from $x_{0}$ to $x$, there is a loop
$\lambda:I\rightarrow U$ based at $x_{0}$ such that
$[\alpha\ast\beta\ast\alpha^{-1}]_{H}=[\lambda]_{H}$. Assume $\mathcal{U}$ is
the open cover of $X$ consisting of all such neighborhoods $V$. We show that the
Spanier group $\pi(\mathcal{U},x_{0})$ with respect to the open cover
$\mathcal{U}$ is contained in $K$. It is enough to show that this relation
holds for any generator of $\pi(\mathcal{U},x_{0})$. Let
$[\alpha\ast\beta\ast\alpha^{-1}]$ be an arbitrary generator of
$\pi(\mathcal{U},x_{0})$. Since all elements of $\mathcal{U}$ are path connected,
it is not hard to see that for any path $\alpha$ from $x_{0}$ to any $y\in{V}$
and for every loop $\beta$ inside $V$ based at $y$, there is a loop $\lambda$
inside $U$ such that $[\alpha\ast\beta\ast\alpha^{-1}]_{H}=[\lambda]_{H}$,
that is, $[\alpha\ast\beta\ast\alpha^{-1}\ast\lambda^{-1}]\in{H}$ or
equivalently, $[\alpha\ast\beta\ast\alpha^{-1}]\in H[\lambda]$. Since
$Im(\lambda)\subseteq U$, we have $\lambda\in{W}$, i.e.,
$\pi(\lambda)=[\lambda]\in K$. On the other hand, since $H\leq K$,
$[\alpha\ast\beta\ast\alpha^{-1}]\in K$. Hence the result holds. ∎
Recall that for any subgroup $H$ of a group $G$, the core of $H$ in $G$,
denoted by $H_{G}$, is defined to be the join of all normal subgroups of $G$ that
are contained in $H$. Note that $H_{G}=\bigcap_{g\in{G}}g^{-1}Hg$ is the
largest normal subgroup of $G$ which is contained in $H$. By the structure of
quasitopological group $\pi_{1}^{qtop}(X,x_{0})$, if $H_{\pi_{1}(X,x_{0})}$ is
open in $\pi_{1}^{qtop}(X,x_{0})$ then so is $H$, but the converse is not true
in general. Note that if the openness of $H$ in $\pi_{1}^{qtop}(X,x_{0})$
implies the openness of $H_{\pi_{1}(X,x_{0})}$, then Theorem 3.7 of [15]
implies that every semicovering map is a covering map. But there is a
semicovering map which is not a covering map (see [2, Example 3.8]). The
following corollary shows that the converse holds in the relative version of
strong SLT spaces at a point.
###### Corollary 2.2.
Let $X$ be a connected, locally path connected and strong $H$-SLT space at
$x_{0}$. Then $H_{\pi_{1}(X,x_{0})}$ is open in $\pi_{1}^{qtop}(X,x_{0})$ if
and only if $H$ is an open subgroup in $\pi_{1}^{qtop}(X,x_{0})$.
###### Proof.
It is easy to see that if $H_{\pi_{1}(X,x_{0})}$ is open in
$\pi_{1}^{qtop}(X,x_{0})$, then so is $H$.
To prove the other direction, by Proposition 2.1, there is an open cover
$\mathcal{U}$ of $X$ such that $\pi(\mathcal{U},x_{0})\leq H$. On the other hand,
since $H_{\pi_{1}(X,x_{0})}$ is the largest normal subgroup of
$\pi_{1}(X,x_{0})$ contained in $H$, we have $\pi(\mathcal{U},x_{0})\leq
H_{\pi_{1}(X,x_{0})}$. Since $\pi(\mathcal{U},x_{0})$ is an open subgroup and
$H_{\pi_{1}(X,x_{0})}$ is a subgroup in $\pi_{1}^{qtop}(X,x_{0})$,
$H_{\pi_{1}(X,x_{0})}$ is open in $\pi_{1}^{qtop}(X,x_{0})$. ∎
###### Corollary 2.3.
Let $H\leq\pi_{1}(X,x_{0})$ and $p:\widetilde{X}\rightarrow X$ be a
semicovering map with $p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H$. If $X$ is a
connected, locally path connected and strong $H$-SLT space at $x_{0}$, then
$p$ is a covering map.
###### Proof.
Let $p:\widetilde{X}\rightarrow X$ be a semicovering map with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H$. By [3, Theorem 3.5], $H$ is an open
subgroup in $\pi_{1}^{qtop}(X,x_{0})$. Hence, Corollary 2.2 implies that
$H_{\pi_{1}(X,x_{0})}$ is open in $\pi_{1}^{qtop}(X,x_{0})$. Therefore, using
[15, Theorem 3.7], $p:\widetilde{X}\rightarrow X$ is a covering map. ∎
The classification of semicovering maps was given by Brazas in [3] when $X$
is connected and locally path connected. By the definition of a semicovering
map, it can be observed that every covering map is a semicovering map but the
converse does not hold outside the class of semilocally simply connected spaces [3, Example
3.8]. The following corollary, which is a direct consequence of Corollary 2.3, shows that
the converse does hold in a class of spaces wider than that of semilocally simply
connected spaces.
###### Corollary 2.4.
Let $X$ be a connected, locally path connected and strong SLT space at
$x_{0}$. Then every semicovering map is a covering map.
It turns out that every covering map is an $\textbf{lpc}_{0}$-covering map but
not vice versa. For example, the Hawaiian Earring admits a generalized universal
covering space [9, Proposition 3.6] but does not admit a simply connected covering space
because it is not semilocally simply connected (see [14, Corollary 2.5.14]).
The following proposition provides some conditions under which any
$\textbf{lpc}_{0}$-covering map is a covering map.
###### Proposition 2.5.
Let $p:\widetilde{X}\rightarrow X$ be an $\textbf{lpc}_{0}$-covering map with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H\leq\pi_{1}(X,x_{0})$. Let $X$
be a strong $H$-SLT space at $x_{0}$. If $|p^{-1}(x_{0})|<\infty$, then $p$ is
a covering map.
###### Proof.
Let $p:\widetilde{X}\rightarrow X$ be an $\textbf{lpc}_{0}$-covering map with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H$. In [4, Lemma 5.10] it was
shown that $p$ is equivalent to the endpoint projection map
$p_{H}:\widetilde{X}_{H}^{wh}\rightarrow X$. Moreover, it was shown that the
fiber $p^{-1}(x_{0})$ is Hausdorff and hence $(p_{H}^{-1}(x_{0}))^{wh}$ is
Hausdorff (see [13, Corollary 3.10]). On the other hand, since
$|p^{-1}(x_{0})|<\infty$, we have $|p_{H}^{-1}(x_{0})|<\infty$. Thus,
$(p_{H}^{-1}(x_{0}))^{wh}$ is discrete. It is not hard to see that
$(p_{H}^{-1}(x_{0}))^{wh}$ agrees with $\frac{\pi_{1}^{wh}(X,x_{0})}{H}$ [13,
p. 246]. Therefore, the discreteness of $(p_{H}^{-1}(x_{0}))^{wh}$ implies
that $\frac{\pi_{1}^{wh}(X,x_{0})}{H}$ is discrete, that is, $H$ is open in
$\pi_{1}^{wh}(X,x_{0})$. Moreover, since $X$ is a strong $H$-SLT at $x_{0}$,
by [13, Proposition 3.7], $H$ is open in $\pi_{1}^{qtop}(X,x_{0})$.
Consequently, by Proposition 2.1, there exists an open cover $\mathcal{U}$ of
$X$ such that $\pi(\mathcal{U},x_{0})\leq H$ which implies that
$p:\widetilde{X}\rightarrow X$ is a covering map (see Theorem 1.4). ∎
In what follows in this section we investigate the relationship between some
subgroups of $\pi_{1}(X,x_{0})$ in strong SLT and SLT spaces. Based on some
works of [1, 12, 17] there is a chain of some effective subgroups of the
fundamental group $\pi_{1}(X,x_{0})$ as follows:
$\pi_{1}^{s}(X,x_{0})\leq\pi_{1}^{sg}(X,x_{0})\leq\widetilde{\pi}_{1}^{sp}(X,x_{0})\leq\pi_{1}^{sp}(X,x_{0}),\
\ \ \ (\star)$
where $\pi_{1}^{s}(X,x_{0})$ is the subgroup of all small loops at $x_{0}$
[17], $\pi_{1}^{sg}(X,x_{0})$ is the subgroup of all small generated loops,
i.e., the subgroup generated by the set of
$\\{[\alpha\ast\beta\ast\alpha^{-1}]\ |\
[\beta]\in{\pi_{1}^{s}(X,\alpha(1))}\ and\ \alpha\in{P(X,x_{0})}\\}$. Also,
$\pi_{1}^{sp}(X,x_{0})$ is the Spanier group of $X$, the intersection of the
Spanier subgroups relative to open covers of $X$ [8, Definition 2.3], and
$\widetilde{\pi}_{1}^{sp}(X,x_{0})$ is the path Spanier group, i.e., the
intersection of all path Spanier subgroups
$\widetilde{\pi}(\mathcal{V},x_{0})$ [15, Definition 3.1], where $\mathcal{V}$
is a path open cover of $X$.
Recall that Jamali et al. in [11, Proposition 3.2] proved that equality holds in
the first inequality of the above chain for SLT spaces at $x_{0}$. In the
following we present some conditions under which equality holds in the
other inequalities.
###### Theorem 2.6.
Let $H$ be a subgroup of $\pi_{1}(X,x_{0})$ which is contained in
$\pi_{1}^{s}(X,x_{0})$. If $X$ is an $H$-SLT space at $x_{0}$, then
$\pi_{1}^{s}(X,x_{0})=\widetilde{\pi}_{1}^{sp}(X,x_{0})$.
###### Proof.
By the chain ($\star$), it is sufficient to show that
$\widetilde{\pi}_{1}^{sp}(X,x_{0})\leq\pi_{1}^{s}(X,x_{0})$. Consider
$[\lambda]\in{\widetilde{\pi}_{1}^{sp}(X,x_{0})}$. We show that $\lambda$ is a
small loop. Let $U$ be an open neighborhood in $X$ containing $x_{0}$. By
assumption, since $X$ is an $H$-SLT space at $x_{0}$, there is a path Spanier
subgroup $\widetilde{\pi}(\mathcal{V},x_{0})$ such that for
$[\lambda]\in{\widetilde{\pi}_{1}^{sp}(X,x_{0})}\leq\widetilde{\pi}(\mathcal{V},x_{0})$
there is a loop $\lambda_{U}:I\rightarrow U$ based at $x_{0}$ such that
$[\lambda]_{H}=[\lambda_{U}]_{H}$, i.e.,
$[\lambda\ast\lambda_{U}^{-1}]\in{H}$. On the other hand, since
$H\leq\pi_{1}^{s}(X,x_{0})$, there is a small loop $h:I\rightarrow U$ based at
$x_{0}$ such that $[\lambda\ast\lambda_{U}^{-1}]=[h]$, that is,
$[\lambda]=[h\ast\lambda_{U}]$ which implies that
$[\lambda]\in{\pi_{1}^{s}(X,x_{0})}$. Consequently,
$\pi_{1}^{s}(X,x_{0})=\widetilde{\pi}_{1}^{sp}(X,x_{0})$. ∎
###### Corollary 2.7.
Let $X$ be an SLT space at $x_{0}$. Then
$\pi_{1}^{s}(X,x_{0})=\widetilde{\pi}_{1}^{sp}(X,x_{0})$.
###### Theorem 2.8.
Let $H$ be a subgroup of $\pi_{1}(X,x_{0})$ which is contained in
$\pi_{1}^{s}(X,x_{0})$. If $X$ is a locally path connected strong $H$-SLT
space at $x_{0}$, then $\pi_{1}^{s}(X,x_{0})={\pi}_{1}^{sp}(X,x_{0})$.
###### Proof.
The proof is similar to that of Theorem 2.6. ∎
###### Corollary 2.9.
Let $X$ be a locally path connected strong SLT space at $x_{0}$. Then
$\pi_{1}^{s}(X,x_{0})={\pi}_{1}^{sp}(X,x_{0})$.
## 3 Relationship Between Open Subsets of $\pi_{1}^{wh}$ and $\pi_{1}^{l}$
Brodskiy et al. [7, Proposition 4.19] showed that there is a remarkable
relation between the whisker topology and the lasso topology on
$\widetilde{X}_{e}$ in strong SLT spaces. Indeed, they showed that $X$ is a
strong SLT space if and only if for every $x\in{X}$,
$\widetilde{X}_{e}^{wh}=\widetilde{X}_{e}^{l}$. Similarly, it is of interest
to determine when these topologies coincide on $\widetilde{X}_{H}$. In other
words,
“What is a necessary and sufficient condition for the equality
$\widetilde{X}_{H}^{wh}=\widetilde{X}_{H}^{l}$?”
Pashaei et al. in [13] gave a necessary and sufficient condition for the
coincidence of $\widetilde{X}_{H}^{wh}$ and $\widetilde{X}_{H}^{top}$. To
answer the above question, we need the notion of strong $H$-SLT space. For a
subgroup $H$ of $\pi_{1}(X,x_{0})$ we recall that $X$ is a strong $H$-SLT
space if for every $x\in{X}$ and for every path $\delta$ from $x_{0}$ to $x$,
$X$ is a strong $[\delta^{-1}H\delta]$-SLT space at $x$.
###### Remark 3.1.
Note that if $X$ is a strong $H$-SLT space, then $X$ is a strong
$[\delta^{-1}H\delta]$-SLT space for every path $\delta$ from $x_{0}$ to $x$.
Let $X$ be a strong $H$-SLT space and $H$ be a normal subgroup of
$\pi_{1}(X,x_{0})$. Consider the isomorphism
$\varphi_{\delta}:\pi_{1}^{qtop}(X,x_{0})\rightarrow\pi_{1}^{qtop}(X,x)$
defined by $\varphi_{\delta}([\beta])=[\delta^{-1}\ast\beta\ast\delta]$. Then
the normality of $H$ implies that $\varphi_{\delta}(H)=[\delta^{-1}H\delta]$
is a normal subgroup of $\pi_{1}^{qtop}(X,x)$. Moreover, since
$\varphi_{\delta}$ is a homeomorphism, openness of $H$ implies that
$\varphi_{\delta}(H)=[\delta^{-1}H\delta]$ is open in $\pi_{1}^{qtop}(X,x)$.
###### Theorem 3.2.
Let $X$ be a path connected space and $H$ be a normal subgroup of
$\pi_{1}(X,x_{0})$. Then $X$ is a strong $H$-SLT space if and only if
$\widetilde{X}^{l}_{[\delta^{-1}H\delta]}=\widetilde{X}^{wh}_{[\delta^{-1}H\delta]}$
for every $x\in{X}$ and for any path $\delta$ from $x_{0}$ to $x$.
###### Proof.
Let $X$ be a strong $H$-SLT space. By definitions, it is routine to check that
$\widetilde{X}^{wh}_{K}$ is finer than $\widetilde{X}^{l}_{K}$ for any
subgroup $K\leq\pi_{1}(X,y)$. By Remark 3.1, it is sufficient to show that
$\widetilde{X}^{l}_{H}$ is finer than $\widetilde{X}^{wh}_{H}$. Consider an
open basis neighborhood $B_{H}([\alpha]_{H},U)$ in $\widetilde{X}^{wh}_{H}$,
where $\alpha$ is a path from $x_{0}$ to $x$. We find an open basis
neighborhood $B_{H}([\alpha]_{H},\mathcal{U},W)$ in $\widetilde{X}^{l}_{H}$
which is contained in $B_{H}([\alpha]_{H},U)$. By the definition of strong
$H$-SLT space, any point of $X$ has an open neighborhood $V$ defined by the
strong $H$-SLT space property which is applied to the point $\alpha(1)=x$ and
$U$, that is, for every loop $\gamma$ in $V$ and for every path $\sigma$ from
$x$ to $\gamma(0)$ there is a loop $\lambda$ in $U$ based at $x$ such that
$[\sigma\ast\gamma\ast\sigma^{-1}]_{[\alpha^{-1}H\alpha]}=[\lambda]_{[\alpha^{-1}H\alpha]}$.
Let $\mathcal{U}$ be the open cover of $X$ consisting of all neighborhoods
$V$’s. Put $W=U$ and consider $[\alpha\ast
l\ast\beta]_{H}\in{B_{H}([\alpha]_{H},\mathcal{U},U)}$. Assume $l$ is equal to
a finite concatenation of loops
$l=\prod_{i=1}^{n}\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}$, where
$\alpha_{i}$’s are paths from $x$ to $\alpha_{i}(1)$ and $\gamma_{i}$’s are
loops in some $V\in{\mathcal{U}}$ based at $\alpha_{i}(1)$. Since $X$ is
strong $H$-SLT, there are loops $\lambda_{i}$ in $U$ for $1\leq i\leq n$ such
that
$[\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}]_{[\alpha^{-1}H\alpha]}=[\lambda_{i}]_{[\alpha^{-1}H\alpha]}$,
i.e.,
$[\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\lambda^{-1}_{i}]\in{[\alpha^{-1}H\alpha]}$
for $1\leq i\leq n$. We have
$\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha_{i}^{-1}\ast\alpha^{-1}\simeq
h_{i}$ rel $\dot{I}$, where $[h_{i}]\in{H}$ ($1\leq i\leq n$). It is enough to
show that
$\big{[}\alpha\ast\big{(}\prod_{i=1}^{n}\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\big{)}\ast\beta\big{]}_{H}=\big{[}\alpha\ast\lambda\ast\beta\big{]}_{H}$,
or equivalently, it is sufficient to show that
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}\in{H}$,
where $\lambda=\lambda_{1}\ast\lambda_{2}\ast...\ast\lambda_{n}$ is a loop in
$U$ based at $x$. Clearly, it can be seen that
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}=[(\alpha\ast\alpha_{1}\ast\gamma_{1}\ast\alpha_{1}^{-1}\ast\lambda_{1}^{-1}\ast\alpha^{-1})\ast\alpha\ast\lambda_{1}\ast\alpha^{-1}\ast(\alpha\ast\alpha_{2}\ast\gamma_{2}\ast\alpha_{2}^{-1}\ast\lambda_{2}^{-1}\ast\alpha^{-1})\ast\alpha\ast\lambda_{2}\ast\alpha^{-1}....\ast(\alpha\ast\alpha_{n}\ast\gamma_{n}\ast\alpha_{n}^{-1}\ast\lambda_{n}^{-1}\ast\alpha^{-1})\ast\alpha\ast\lambda_{n}\ast\alpha^{-1}\ast(\alpha\ast\lambda_{n}^{-1}\ast\alpha^{-1})\ast(\alpha\ast\lambda_{n-1}^{-1}\ast\alpha^{-1})\ast...\ast(\alpha\ast\lambda_{1}^{-1}\ast\alpha^{-1})]$.
However, by the above observations, we have
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}=[h_{1}\ast(\alpha\ast\lambda_{1}\ast\alpha^{-1})\ast
h_{2}\ast(\alpha\ast\lambda_{2}\ast\alpha^{-1})....\ast
h_{n}\ast(\alpha\ast\lambda_{n}\ast\alpha^{-1})\ast(\alpha\ast\lambda_{n}^{-1}\ast\alpha^{-1})\ast(\alpha\ast\lambda_{n-1}^{-1}\ast\alpha^{-1})\ast...\ast(\alpha\ast\lambda_{1}^{-1}\ast\alpha^{-1})]$.
By normality of $H$, it is easy but laborious to obtain $[h]\in{H}$ such that
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}=[h_{1}\ast(\alpha\ast\gamma_{1}\ast\alpha^{-1})\ast
h\ast(\alpha\ast\gamma_{1}^{-1}\ast\alpha^{-1})]$. Hence
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}=[h_{1}\ast
h^{\prime}]$ which implies that
$\big{[}\big{(}\prod_{i=1}^{n}\alpha\ast\alpha_{i}\ast\gamma_{i}\ast\alpha^{-1}_{i}\ast\alpha^{-1}\big{)}\ast\alpha\ast\lambda^{-1}\ast\alpha^{-1}\big{]}\in{H}$,
where $[\alpha\ast\gamma_{1}\ast\alpha^{-1}\ast
h\ast\alpha\ast\gamma_{1}^{-1}\ast\alpha^{-1}]=[h^{\prime}]$.
Conversely, let $x\in{X}$ and $\delta$ be a path from $x_{0}$ to $x$. It is
enough to show that $X$ is a strong $[\delta^{-1}H\delta]$-SLT space at $x$.
Let $U$ be an open neighborhood in $X$ containing $x$. Consider open basis
neighborhood
$B_{[\delta^{-1}H\delta]}\big{(}[c_{x}]_{[\delta^{-1}H\delta]},U\big{)}$ in
$\widetilde{X}^{wh}_{[\delta^{-1}H\delta]}$. By
$\widetilde{X}^{l}_{[\delta^{-1}H\delta]}=\widetilde{X}^{wh}_{[\delta^{-1}H\delta]}$,
there is an open basis neighborhood
$B_{[\delta^{-1}H\delta]}\big{(}[c_{x}]_{[\delta^{-1}H\delta]},\mathcal{U},W\big{)}$
in $\widetilde{X}^{l}_{[\delta^{-1}H\delta]}$ such that
$B_{[\delta^{-1}H\delta]}\big{(}[c_{x}]_{[\delta^{-1}H\delta]},\mathcal{U},W\big{)}\subseteq
B_{[\delta^{-1}H\delta]}\big{(}[c_{x}]_{[\delta^{-1}H\delta]},U\big{)}$. Pick
$y\in{X}$. We know that there is an open neighborhood $V$ belonging to
$\mathcal{U}$ such that $y\in{V}$. Let $\alpha$ be a path from $x$ to $y$ and
$\beta$ be a loop in $V$ based at $y$. By the above relation,
$[c_{x}\ast\alpha\ast\beta\ast\alpha^{-1}\ast
w]_{[\delta^{-1}H\delta]}\in{B_{[\delta^{-1}H\delta]}\big{(}[c_{x}]_{[\delta^{-1}H\delta]},U\big{)}}$.
Put $w=c_{x}$. Therefore, there is a loop $\lambda$ in $U$ based at $x$ such
that
$[\alpha\ast\beta\ast\alpha^{-1}]_{[\delta^{-1}H\delta]}=[\lambda]_{[\delta^{-1}H\delta]}$
which implies that $X$ is a strong $H$-SLT space. ∎
One can easily get the following corollary of Theorem 3.2.
###### Corollary 3.3.
Let $X$ be path connected and $H$ be any normal subgroup of
$\pi_{1}(X,x_{0})$. Then $X$ is a strong $H$-SLT at $x_{0}$ if and only if
$(p^{-1}_{H}(x_{0}))^{wh}=(p^{-1}_{H}(x_{0}))^{l}$ or equivalently,
$\frac{\pi_{1}^{wh}(X,x_{0})}{H}=\frac{\pi_{1}^{l}(X,x_{0})}{H}$.
Using Corollary 3.3, one of the main results of this section can be concluded
as follows.
###### Corollary 3.4.
Let $H$ be any normal subgroup of $\pi_{1}(X,x_{0})$. If $X$ is a path
connected strong $H$-SLT space at $x_{0}$, then any subset $U$ of
$\pi_{1}(X,x_{0})$ containing $H$ is open in $\pi_{1}^{wh}(X,x_{0})$ if and
only if it is open in $\pi_{1}^{l}(X,x_{0})$.
The following proposition shows that the property of being strong $H$-SLT is a
necessary condition for the openness of the subgroup $H$ in
$\pi_{1}^{l}(X,x_{0})$.
###### Proposition 3.5.
Let $H$ be an open subgroup of $\pi_{1}^{l}(X,x_{0})$. Then $X$ is a strong
$H$-SLT space.
###### Proof.
Since $H$ is an open subgroup of $\pi_{1}^{l}(X,x_{0})$, there is an open
basis neighborhood $B([c_{x_{0}}],\mathcal{U},W)$ in $\pi_{1}^{l}(X,x_{0})$
which is contained in $H$. Let $\delta$ be a path from $x_{0}$ to $x$ and $U$
be an open neighborhood containing $x$. Pick $y\in{X}$. By the definition of
open cover $\mathcal{U}$, there is an open neighborhood $V$ in $\mathcal{U}$
containing $y$. Let $\alpha$ be a path from $x$ to $y$ and $\beta$ be a loop
inside $V$ based at $y$. By the definition of $B([c_{x_{0}}],\mathcal{U},W)$,
we have
$[c_{x_{0}}\ast\delta\ast\alpha\ast\beta\ast\alpha^{-1}\ast\delta^{-1}\ast
w]\in{H}$. Put $w=c_{x_{0}}$. Hence,
$[\delta\ast\alpha\ast\beta\ast\alpha^{-1}\ast\delta^{-1}]\in{H}$ or
equivalently, $[\alpha\ast\beta\ast\alpha^{-1}]\in{[\delta^{-1}H\delta]}$,
that is,
$[\alpha\ast\beta\ast\alpha^{-1}]_{[\delta^{-1}H\delta]}=[c_{x}]_{[\delta^{-1}H\delta]}$.
Therefore, $X$ is a strong $H$-SLT space. ∎
###### Proposition 3.6.
Let $H\leq\pi_{1}(X,x_{0})$ and $X$ be a locally path connected strong $H$-SLT
space at $x_{0}$. Then $H$ is open in $\pi_{1}^{wh}(X,x_{0})$ if and only if
$H$ is open in $\pi_{1}^{l}(X,x_{0})$.
###### Proof.
As mentioned in the introduction, $\pi_{1}^{wh}(X,x_{0})$ is finer than
$\pi_{1}^{l}(X,x_{0})$; hence, if $H$ is open in $\pi_{1}^{l}(X,x_{0})$, then it is open in $\pi_{1}^{wh}(X,x_{0})$.
To prove the other direction, let $H$ be open in $\pi_{1}^{wh}(X,x_{0})$. It
is easy to check that all strong $H$-SLT spaces at $x_{0}$ are $H$-SLT spaces
at $x_{0}$. Using [13, Proposition 3.7], $H$ is open in
$\pi_{1}^{qtop}(X,x_{0})$. On the other hand, by Proposition 2.1, there is an
open cover $\mathcal{U}$ of $X$ such that $\pi(\mathcal{U},x_{0})\leq H$.
Choose an arbitrary element $[\alpha]\in{H}$. Consider open basis neighborhood
$B([\alpha],\mathcal{U},W)$ in $\pi_{1}^{l}(X,x_{0})$, where
$W\in{\mathcal{U}}$ containing $x_{0}$. Let $[\alpha\ast l\ast
w]\in{B([\alpha],\mathcal{U},W)}$. Since $[l],[w]\in{\pi(\mathcal{U},x_{0})}$ and
$\pi(\mathcal{U},x_{0})\leq H$, we have $[l][w]=[l\ast w]\in{H}$. Hence, by
$[\alpha]\in{H}$, we can conclude that $B([\alpha],\mathcal{U},W)\subseteq H$.
Therefore, $H$ is open in $\pi_{1}^{l}(X,x_{0})$. ∎
In the following, we show that closedness of $H$ in $\pi_{1}^{l}(X,x_{0})$ has
a remarkable relation to the fibers of the endpoint projection map
$p_{H}:\widetilde{X}^{l}_{H}\rightarrow X$.
###### Proposition 3.7.
Let $X$ be path connected and $H$ be a normal subgroup of $\pi_{1}(X,x_{0})$.
Then $(p_{H}^{-1}(x_{0}))^{l}$ is Hausdorff if and only if $H$ is closed in
$\pi_{1}^{l}(X,x_{0})$.
###### Proof.
Let $(p_{H}^{-1}(x_{0}))^{l}$ be Hausdorff. It is sufficient to show that
$\pi_{1}(X,x_{0})\setminus H$ is open in $\pi_{1}^{l}(X,x_{0})$. Let
$[\alpha]\in{\pi_{1}(X,x_{0})\setminus H}$, that is, $[\alpha]\notin{H}$ or
equivalently, $[\alpha]_{H}\neq[c_{x_{0}}]_{H}$. Since
$(p_{H}^{-1}(x_{0}))^{l}$ is Hausdorff, there are open basis neighborhoods
$M=B_{H}([\alpha]_{H},\mathcal{U},U)$ and
$N=B_{H}([c_{x_{0}}]_{H},\mathcal{V},V)$ in $(p_{H}^{-1}(x_{0}))^{l}$ such
that $M\cap N=\emptyset$, where $U$ and $V$ are open neighborhoods containing
$x_{0}$. Consider open cover $\mathcal{W}=\\{U\cap V\ |\ U\in{\mathcal{U}}\ ,\
V\in{\mathcal{V}}\\}$ and open basis neighborhood
$B([\alpha],\mathcal{W},U\cap V)$ containing $[\alpha]$. We show that
$B([\alpha],\mathcal{W},U\cap V)\cap H=\emptyset$. Suppose, on the contrary, that
$[\alpha\ast l\ast w]\in{H}$, where $[l]\in{\pi(\mathcal{W},x_{0})}$ and $w$
is a loop inside $U\cap V$ based at $x_{0}$. Note that $[\alpha\ast l\ast
w]\in{H}$ is equivalent to $[\alpha\ast l\ast w]_{H}=[c_{x_{0}}]_{H}$, i.e.,
$[\alpha\ast l\ast w]_{H}\in{M\cap N}$ which is a contradiction.
Conversely, assume $[\alpha]_{H}\neq[\beta]_{H}$, that is,
$[\alpha\ast\beta^{-1}]\notin{H}$, where $[\alpha]_{H}$,
$[\beta]_{H}\in{(p_{H}^{-1}(x_{0}))^{l}}$. Note that since $H$ is a normal
subgroup of $\pi_{1}(X,x_{0})$, it is easy to see that
$[\alpha^{-1}\ast\beta]\notin{H}$. Since $H$ is a closed subgroup of
$\pi_{1}^{l}(X,x_{0})$, there is an open cover $\mathcal{U}$ of $X$ such that
$B([\alpha^{-1}\ast\beta],\mathcal{U},U)\cap H=\emptyset$, where
$U\in{\mathcal{U}}$. Consider open basis neighborhoods
$M=B_{H}([\beta]_{H},\mathcal{U},U)$ and
$N=B_{H}([\alpha]_{H},\mathcal{U},U)$. It is enough to show that $M\cap
N=\emptyset$. Suppose, on the contrary, that $[\gamma]_{H}\in{M\cap N}$. Then there are
$[l]$, $[s]\in{\pi(\mathcal{U},x_{0})}$ and loops $w$ and $v$ inside $U$ based
at $x_{0}$ such that $[\gamma]_{H}=[\beta\ast l\ast w]_{H}=[\alpha\ast s\ast
v]_{H}$, that is, $[\beta\ast l\ast w\ast v^{-1}\ast s^{-1}\ast\alpha^{-1}]\in{H}$.
By the normality of $H$, we have $[\alpha^{-1}\ast\beta\ast l\ast w\ast
v^{-1}\ast s^{-1}]\in{H}$ or equivalently, $[\alpha^{-1}\ast\beta\ast l\ast
w\ast v^{-1}\ast s^{-1}\ast c_{x_{0}}]\in{H}$, where $[l\ast w\ast v^{-1}\ast
s^{-1}]\in{\pi(\mathcal{U},x_{0})}$ and the constant path $c_{x_{0}}$ can be
seen as loop inside $U$ based at $x_{0}$. This is a contradiction because
$B([\alpha^{-1}\ast\beta],\mathcal{U},U)\cap H=\emptyset$. ∎
###### Corollary 3.8.
Let $H$ be any normal subgroup of $\pi_{1}(X,x_{0})$ and $X$ be a locally path
connected strong $H$-SLT space at $x_{0}$. Then $H$ is closed in
$\pi_{1}^{wh}(X,x_{0})$ if and only if $H$ is closed in
$\pi_{1}^{l}(X,x_{0})$.
###### Proof.
Note that $\pi_{1}^{wh}(X,x_{0})$ is finer than $\pi_{1}^{l}(X,x_{0})$, in
general [18]; hence, if $H$ is closed in $\pi_{1}^{l}(X,x_{0})$, then it is closed in $\pi_{1}^{wh}(X,x_{0})$.
To prove the other direction, let $H$ be closed in $\pi_{1}^{wh}(X,x_{0})$. We
know that all strong $H$-SLT spaces at $x_{0}$ are $H$-SLT spaces at $x_{0}$.
Hence by [13, Corollary 2.8], $H$ is closed in $\pi_{1}^{qtop}(X,x_{0})$. On
the other hand, by [5, Theorem 11], $p_{H}:\widetilde{X}^{wh}_{H}\rightarrow
X$ has unique path lifting property, that is, $p_{H}$ is
$\textbf{lpc}_{0}$-covering map (see [4, Theorem 5.11]). Moreover, by [13,
Corollary 3.10], $(p_{H}^{-1}(x_{0}))^{wh}$ is Hausdorff. Therefore, Corollary
3.3 and Proposition 3.7 imply that $(p_{H}^{-1}(x_{0}))^{l}$ is Hausdorff and
accordingly, $H$ is closed in $\pi_{1}^{l}(X,x_{0})$. ∎
In [4, Lemma 5.10] it was shown that every $\textbf{lpc}_{0}$-covering map is
equivalent to a certain endpoint projection map. On the other hand, the unique
path lifting property of the endpoint projection map
$p_{H}:\widetilde{X}^{wh}_{H}\rightarrow X$ implies that $p_{H}$ is an
$\textbf{lpc}_{0}$-covering map [4, Theorem 5.11]. The main advantage of the
following theorem is that the map $p:\widetilde{X}\rightarrow X$ with
$p_{\ast}\pi_{1}(\widetilde{X},\tilde{x}_{0})=H\leq\pi_{1}(X,x_{0})$ is an
$\textbf{lpc}_{0}$-covering map when $p_{H}:\widetilde{X}^{l}_{H}\rightarrow
X$ has unique path lifting property.
###### Proposition 3.9.
Let $H\unlhd\pi_{1}(X,x_{0})$. If $X$ is a locally path connected and strong
$H$-SLT space at $x_{0}$, then the following statements are equivalent.
* 1
. $p_{H}:\widetilde{X}^{wh}_{H}\rightarrow X$ has unique path lifting
property.
* 2
. $p_{H}:\widetilde{X}^{l}_{H}\rightarrow X$ has unique path lifting property.
###### Proof.
$1.\Rightarrow 2.$ It follows from the statement “$\widetilde{X}^{wh}_{H}$ is
finer than $\widetilde{X}^{l}_{H}$”.
$2.\Rightarrow 1.$ The unique path lifting property of
$p_{H}:\widetilde{X}^{l}_{H}\rightarrow X$ is equivalent to the closedness of
$H$ in $\pi_{1}^{l}(X,x_{0})$ (see [6, Theorem 5.6]). So, by Corollary 3.8,
$H$ is closed in $\pi_{1}^{wh}(X,x_{0})$. Therefore, using [13, Corollary 2.8],
$H$ is closed in $\pi_{1}^{qtop}(X,x_{0})$ which implies that
$p_{H}:\widetilde{X}^{wh}_{H}\rightarrow X$ has unique path lifting property
[5, Theorem 11]. ∎
###### Proposition 3.10.
Let $p:\widetilde{X}\rightarrow X$ be a semicovering map and $\widetilde{H}$
be a subgroup of $\pi_{1}(\widetilde{X},\tilde{x}_{0})$, where
$p(\tilde{x}_{0})=x_{0}$. Let $H=p_{\ast}(\widetilde{H})$ where
$p_{\ast}:\pi_{1}(\widetilde{X},\tilde{x}_{0})\rightarrow\pi_{1}(X,x_{0})$ is
the induced homomorphism by $p$. If $X$ is a strong $H$-SLT space, then
$\widetilde{X}$ is a strong $\widetilde{H}$-SLT space.
###### Proof.
Assume $\tilde{\lambda}$ is an arbitrary path from $\tilde{x}_{0}$ to
$\tilde{\lambda}(1)=\tilde{x}$. It suffices to show that $\widetilde{X}$ is a strong
$[(\tilde{\lambda})^{-1}\widetilde{H}\tilde{\lambda}]$-SLT space at $\tilde{x}$. Let
$\widetilde{S}$ be an open subset in $\widetilde{X}$ containing $\tilde{x}$
and $\tilde{y}$ be an arbitrary point in $\widetilde{X}$. Put
$p\circ\tilde{\lambda}=\lambda$ and $p(\tilde{x})=x$, where $\lambda$ is a
path in $X$ from $x_{0}$ to $x$. Since $p:\widetilde{X}\rightarrow X$ is a
local homeomorphism, there is an open subset $\widetilde{W}$ containing $\tilde{x}$
such that $p|_{\widetilde{W}}:\widetilde{W}\rightarrow W$ is a homeomorphism.
Put $\widetilde{U}=\widetilde{S}\cap\widetilde{W}$. Note that
$p|_{\widetilde{U}}:\widetilde{U}\rightarrow U$ is a homeomorphism as well,
where $p(\widetilde{U})=U$ is an open subset of $X$ containing $x$. By assumption,
since $X$ is a strong $[\lambda^{-1}H\lambda]$-SLT space at $x$, for point
$p(\tilde{y})=y$ there is an open subset $V$ containing $y$ such that for
every path $\alpha$ from $x$ to $y$ and for every loop $\beta$ at $y$ in $V$
there is a loop $\delta$ based at $x$ in $U$ such that
$[\alpha\ast\beta\ast\alpha^{-1}]_{[\lambda^{-1}H\lambda]}=[\delta]_{[\lambda^{-1}H\lambda]}$.
By the local homeomorphism property of $p:\widetilde{X}\rightarrow X$, we have
an open subset $\widetilde{V}$ containing $\tilde{y}$ such that
$p|_{\widetilde{V}}:\widetilde{V}\rightarrow V$ is a homeomorphism. Now let
$\tilde{\alpha}$ be a path from $\tilde{x}$ to $\tilde{y}$ and $\tilde{\beta}$
be a loop based at $\tilde{y}$ in $\widetilde{V}$. Put
$p\circ\tilde{\alpha}=\alpha$ and $p\circ\tilde{\beta}=\beta$, where $\alpha$
is a path from $x$ to $y$ and $\beta$ is a loop at $y$ in $V$. Hence there is
a loop $\delta:I\rightarrow U$ at $x$ such that
$[\alpha\ast\beta\ast\alpha^{-1}]_{[\lambda^{-1}H\lambda]}=[\delta]_{[\lambda^{-1}H\lambda]}$
or equivalently,
$[\lambda\ast\alpha\ast\beta\ast\alpha^{-1}\ast\delta^{-1}\ast\lambda^{-1}]\in{H}$.
By the homeomorphism $p|_{\widetilde{U}}:\widetilde{U}\rightarrow U$, there is
a loop $\tilde{\delta}$ at $\tilde{x}$ in $\widetilde{U}$ such that
$p(\tilde{\delta})=\delta$. On the other hand, we know that
$p_{\ast}([\tilde{\lambda}\ast\tilde{\alpha}\ast\tilde{\beta}\ast(\tilde{\alpha})^{-1}\ast(\tilde{\delta})^{-1}\ast(\tilde{\lambda})^{-1}])=[(p\circ\tilde{\lambda})\ast(p\circ\tilde{\alpha})\ast(p\circ\tilde{\beta})\ast(p\circ(\tilde{\alpha})^{-1})\ast(p\circ(\tilde{\delta})^{-1})\ast(p\circ(\tilde{\lambda})^{-1})]=[\lambda\ast\alpha\ast\beta\ast\alpha^{-1}\ast\delta^{-1}\ast\lambda^{-1}]\in{H}$.
Moreover, by the definition of semicovering map [2, Definition 3.1],
$p_{\ast}$ is injective. Thus, by the definition of $\widetilde{H}$, we have
$[\tilde{\lambda}\ast\tilde{\alpha}\ast\tilde{\beta}\ast(\tilde{\alpha})^{-1}\ast(\tilde{\delta})^{-1}\ast(\tilde{\lambda})^{-1}]\in{\widetilde{H}}$
or equivalently,
$[\tilde{\alpha}\ast\tilde{\beta}\ast(\tilde{\alpha})^{-1}]_{[(\tilde{\lambda})^{-1}\widetilde{H}\tilde{\lambda}]}=[\tilde{\delta}]_{[(\tilde{\lambda})^{-1}\widetilde{H}\tilde{\lambda}]}$.
Therefore, $\widetilde{X}$ is a strong $\widetilde{H}$-SLT space. ∎
###### Corollary 3.11.
Let $p:\widetilde{X}\rightarrow X$ be a semicovering map. If $X$ is a strong
SLT space, then so is $\widetilde{X}$.
It is known that any semilocally simply connected space is a strong SLT space
and any strong SLT space is a strong $H$-SLT space for every subgroup $H$ of
$\pi_{1}(X,x_{0})$. Note that any space $X$ is a strong $\pi_{1}(X,x_{0})$-SLT
space. The following theorem helps us to give an example of a strong $H$-SLT
space which is not a strong SLT space and hence, it is not semilocally simply
connected, where $H\neq\pi_{1}(X,x_{0})$.
###### Theorem 3.12.
Let $X$ be a locally path connected space and $H$ be an open normal subgroup
in $\pi_{1}^{qtop}(X,x_{0})$. Then $X$ is a strong $H$-SLT space.
###### Proof.
Let $H$ be an open normal subgroup in $\pi_{1}^{qtop}(X,x_{0})$. By Theorem
2.1 of [15], there is an open cover $\mathcal{U}$ of $X$ such that
$\pi(\mathcal{U},x_{0})\leq H$. On the other hand, by [6, Proposition 4.4],
$\widetilde{X}^{wh}_{H}=\widetilde{X}^{l}_{H}$. So, using Remark 3.1, we can
conclude that for each path $\delta$ from $x_{0}$ to $x$,
$\widetilde{X}^{l}_{[\delta^{-1}H\delta]}=\widetilde{X}^{wh}_{[\delta^{-1}H\delta]}$.
Therefore, Theorem 3.2 implies that $X$ is a strong $H$-SLT space. ∎
The following example can justify introducing the relative version of strong
SLT spaces with respect to subgroups of the fundamental group.
###### Example 3.13.
Let $(S^{1},0)$ be the unit circle, $(HA,x)$ be the Harmonic Archipelago,
where $x$ is the common point of boundary circles. We consider the wedge space
$X=\frac{S^{1}\sqcup HA}{0\sim x}$. In [16, Example 4.4] it is shown that
$\pi_{1}(X,x_{0})\neq\pi_{1}^{sg}(X,x_{0})$. On the other hand, $X$ is a
semilocally small generated space [16]. Accordingly, $\pi_{1}^{sg}(X,x_{0})$,
introduced by Virk [17], is an open subgroup of $\pi_{1}^{qtop}(X,x_{0})$.
Using Theorem 3.12, we conclude that $X$ is a strong
$\pi_{1}^{sg}(X,x_{0})$-SLT space. It is not hard to show that $X$ is not a
strong SLT space: consider an arbitrary path in $X$ inside $HA$ from any
semilocally simply connected point to the wedge point.
In the following example we show that some results of the paper do not
necessarily hold, for instance Proposition 3.6, for $H$-SLT spaces at a point.
Also, note that the example below shows that the relative versions
of strong SLT and SLT spaces are not necessarily the same.
###### Example 3.14.
In [10], Fischer and Zastrow presented an open subgroup $H$ of
$\pi_{1}^{qtop}(HE,x_{0})$ which does not contain any open normal subgroup and
does not correspond to a covering space [3, Theorem 4.8], where $x_{0}$ is the
common point of circles. On the other hand, by Definition 1.2 and Theorem
1.4, it is not hard to observe that covering subgroups correspond to open
subgroups of $\pi_{1}^{l}(X,x_{0})$. Moreover, since $H$ is an open subgroup
of $\pi_{1}^{qtop}(HE,x_{0})$, Proposition 3.6 of [13] implies that $HE$ is an
$H$-SLT space at $x_{0}$ and hence, using [13, Proposition 3.7], $H$ is an
open subgroup of $\pi_{1}^{wh}(HE,x_{0})$. One can see that $H$ is not an open
subgroup in $\pi_{1}^{l}(HE,x_{0})$ because it is not a covering subgroup.
Therefore, the property of $H$-SLT at $x_{0}$ is not strong enough to prove
some results of this paper. Also, we can conclude that $HE$ is an $H$-SLT
space which is not a strong $H$-SLT space.
## References
* [1] M. Abdullahi Rashid, B. Mashayekhy, H. Torabi, S.Z. Pashaei, On topologized fundamental subgroups and generalized coverings, to appear in Bull. Iranian Math. Soc.
* [2] J. Brazas, Semicoverings: a generalization of covering space theory, Homol. Homotopy Appl. 14 (2012) 33–63.
* [3] J. Brazas, Semicoverings, coverings, overlays and open subgroups of the quasitopological fundamental group, Topology Proc. 44 (2014) 285–313.
* [4] J. Brazas, Generalized covering space theories, Theory Appl. Categ. 30 (2015) 1132–1162.
* [5] J. Brazas, P. Fabel, On fundamental group with the quotient topology, Homotopy Rel. Struc. 10 (2015) 71–91.
* [6] N. Brodskiy, J. Dydak, B. Labuz, A. Mitra, Covering maps for locally path connected spaces, Fund. Math. 218 (2012) 13–46.
* [7] N. Brodskiy, J. Dydak, B. Labuz, A. Mitra, Topological and uniform structures on universal covering spaces, arXiv:1206.0071.
* [8] H. Fischer, D. Repovš, Ž. Virk, A. Zastrow, On semilocally simply connected spaces, Topol. Appl. 158 (2011) 397–408.
* [9] H. Fischer, A. Zastrow, Generalized universal covering spaces and the shape group, Fund. Math. 197 (2007) 167–196.
* [10] H. Fischer, A. Zastrow, A core-free semicovering of the Hawaiian Earring, Topol. Appl. 160 (2013) 1957–1967.
* [11] N. Jamali, B. Mashayekhy, H. Torabi, S.Z. Pashaei, M. Abdullahi Rashid, On topologized fundamental groups with small loop transfer viewpoints, arXiv:1708.02606.
* [12] B. Mashayekhy, A. Pakdaman and H. Torabi, Spanier spaces and covering theory of non-homotopically path Hausdorff spaces, Georgian Math. J. 20 (2013) 303–317.
* [13] S.Z. Pashaei, B. Mashayekhy, H. Torabi and M. Abdullahi Rashid, Small loop transfer spaces with respect to subgroups of fundamental groups, Topol. Appl. 232 (2017) 242–255.
* [14] E.H. Spanier, Algebraic Topology, McGraw-Hill, New York, 1966.
* [15] H. Torabi, A. Pakdaman, B. Mashayekhy, On the Spanier groups and covering and semicovering spaces, arXiv:1207.4394.
* [16] H. Torabi, A. Pakdaman, B. Mashayekhy, Topological Fundamental Groups and Small Generated coverings, Math. Slovaca 65 (2015) 1153–1164.
* [17] Ž. Virk, Small loop spaces, Topol. Appl. 157 (2010) 451–455.
* [18] Ž. Virk, A. Zastrow, The comparison of topologies related to various concepts of generalized covering spaces, Topol. Appl. 170 (2014) 52–62.
# Dirac Radiative Neutrino Mass with Modular Symmetry and Leptogenesis
Arnab Dasgupta<EMAIL_ADDRESS>School of Liberal Arts, Seoul-Tech,
Seoul 139-743, Korea PITT PACC, Department of Physics and Astronomy,
University of Pittsburgh, Pittsburgh, PA 15260, USA Takaaki Nomura
<EMAIL_ADDRESS>College of Physics, Sichuan University, Chengdu 610065,
China Hiroshi Okada<EMAIL_ADDRESS>Asia Pacific Center for
Theoretical Physics, Pohang 37673, Republic of Korea Department of Physics,
Pohang University of Science and Technology, Pohang 37673, Republic of Korea
Oleg Popov<EMAIL_ADDRESS>Institute of Convergence Fundamental Studies,
Seoul National University of Science and Technology, Seoul 139-743, Korea
Department of Physics, Korea Advanced Institute of Science and Technology, 291
Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea Scuola Normale
Superiore, Piazza dei Cavalieri 7, 56126 Pisa, Italy Morimitsu Tanimoto
<EMAIL_ADDRESS>Department of Physics, Niigata University,
Niigata 950-2181, Japan
###### Abstract
A minimal Dirac radiative neutrino mass model based on modular symmetry is
proposed. We predict the maximum possible number of observables, including the neutrino
mass splittings, the neutrino mass scale, the lepton mixing angles, and the Dirac phases
in the leptonic sector, with the minimum possible number of input parameters. The model
is capable of accommodating multicomponent dark matter, thanks to
$R$-parity and an accidental scotogenic $\mathbb{Z}_{2}$ discrete
symmetry. Furthermore, even though neutrinos are Dirac in our model, the matter-
antimatter asymmetry of the Universe is achieved via the neutrinogenesis
mechanism. The phenomenology of the dark sector, including the various dark matter
candidates, is briefly discussed.
neutrino mass, modular symmetry, flavor symmetry, dark matter
###### pacs:
14.60.Pq, 95.35.+d, 12.60.-i, 14.60.St
Preprint: APCTP Pre2021-029, CTP-SCU/2021033. Prepared for
submission to Phys. Rev. D.
###### Contents
1. I Introduction
2. II Model
1. Boson sector
2. Charged-lepton masses
3. III Leptogenesis
4. IV Neutrino masses
1. Heavy neutral masses
2. Neutrino mass generation
5. V Neutrino analysis and discussion
6. VI Discussion
7. VII Conclusion
## I Introduction
Physics beyond the standard model (SM) is required to explain issues
such as non-zero neutrino masses, the existence of dark matter (DM) and the matter-
antimatter asymmetry of the universe. In extending the SM, a new symmetry
plays an important role in restricting the structure of new physics; it can realize,
for example, the stability of DM, neutrino mass generation at loop level while
forbidding a tree-level mass, and the origin of the structure of the neutrino mass matrix.
Thus it is interesting to find a symmetry providing such properties with high
predictability.
For controlling the flavor structure, an attractive framework of symmetries was
proposed in Feruglio (2019); de Adelhart Toorop _et al._ (2012), in
2017, where modular non-Abelian discrete flavor symmetries were applied to the
quark and lepton sectors. Remarkably, this framework has the advantage that any
dimensionless coupling can also transform as a non-trivial representation
under those symmetries. As a result, we do not need copious scalars to obtain a
predictive mass matrix. Furthermore, we have a modular weight of modular
origin that can play a role in stabilizing DM when appropriate charge
assignments are given to each of the fields in the model. Along the line of
this idea, many approaches have appeared in the literature, e.g., based on
modular $A_{4}$ Feruglio (2019); Criado and Feruglio (2018); Kobayashi _et
al._ (2018a); Okada and Tanimoto (2019a); Nomura and Okada (2019); Okada and
Tanimoto (2019b); de Anda _et al._ (2018); Novichkov _et al._ (2019a);
Nomura and Okada (2021a); Okada and Orikasa (2019a); Ding _et al._ (2019a);
Nomura _et al._ (2020); Kobayashi _et al._ (2019a); Asaka _et al._ (2020a);
Zhang (2019); Ding _et al._ (2019b); Kobayashi _et al._ (2020a); Nomura _et
al._ (2021a); Wang (2020); Okada and Shoji (2020); Okada and Tanimoto (2020);
Behera _et al._ (2020a, b); Nomura and Okada (2020a, b); Asaka _et al._
(2020b); Okada and Tanimoto (2021a); Nagao and Okada (2020); Okada and
Tanimoto (2021b); Yao _et al._ (2021a); Chen _et al._ (2021); Kashav and
Verma (2021); Okada _et al._ (2021); de Medeiros Varzielas and Lourenço
(2021); Nomura _et al._ (2021b); Hutauruk _et al._ (2020); Ding _et al._
(2021a); Nagao and Okada (2021); Okada and Qi (2021), $S_{3}$ Kobayashi _et
al._ (2018b, 2019b, 2019c); Okada and Orikasa (2019b); Mishra (2020); Du and
Wang (2021), $S_{4}$ Penedo and Petcov (2019); Novichkov _et al._ (2019b);
Kobayashi _et al._ (2019d); King and Zhou (2019); Okada and Orikasa (2019c);
Criado _et al._ (2020); Wang and Zhou (2020); Zhao and Zhang (2021); King and
Zhou (2021); Ding _et al._ (2021b); Zhang and Zhou (2021); Qu _et al._
(2021); Nomura and Okada (2021b), $A_{5}$ Novichkov _et al._ (2019c); Ding
_et al._ (2019c); Criado _et al._ (2020), double covering of $A_{5}$ Wang
_et al._ (2021); Yao _et al._ (2021b); Wang and Zhou (2021); Behera and
Mohanta (2021), larger groups Baur _et al._ (2019a), multiple modular
symmetries De Medeiros Varzielas _et al._ (2019), and double covering of
$A_{4}$ Liu and Ding (2019); Chen _et al._ (2020a); Li _et al._ (2021),
$S_{4}$ Novichkov _et al._ (2021a); Liu _et al._ (2021), and the other types
of groups Kikuchi _et al._ (2020); Almumin _et al._ (2021); Ding _et al._
(2021c); Feruglio _et al._ (2021); Kikuchi _et al._ (2021); Novichkov _et
al._ (2021b) in which masses, mixing, and CP phases for the quark and/or
lepton have been predicted. (For interested readers, some literature reviews
would be useful to understand the non-Abelian group and its applications to
flavor structure Altarelli and Feruglio (2010); Ishimori _et al._ (2010,
2012); Hernandez and Smirnov (2012); King and Luhn (2013); King _et al._
(2014); King (2017); Petcov (2018).) A Majorana neutrino mass matrix with two
texture zeros can also be realized by applying modular $\mathcal{A}_{4}$ symmetry
Zhang (2019). In addition to the lepton sector, stability of DM can be
realized at fixed points under the modular $A_{4}$ symmetry Kobayashi _et
al._ (2021).
Further studies can be found: a systematic approach to understanding the origin
of CP transformations has been discussed in Ref. Baur _et al._ (2019b),
CP/flavor violation in models with modular symmetry was discussed in Refs.
Kobayashi _et al._ (2020b); Novichkov _et al._ (2019d), a possible
correction from Kähler potential was discussed in Ref. Chen _et al._ (2020b),
and systematic analysis of the fixed points (stabilizers) has been discussed
in Ref. de Medeiros Varzielas _et al._ (2020).
In this paper, we construct a model realizing Dirac neutrino masses based on the
framework of modular $A_{4}$ symmetry with supersymmetry, in which we try to
find a minimal content of new particles and modular forms. The manuscript is
organized as follows: section II describes the model at hand and the generation of
particle masses, including bosons and fermions; section III describes the
generation of the matter-antimatter asymmetry of the universe with the relevant
constraints on the model; the generation of the neutrino masses and other heavy
neutral states is described in section IV; the analysis and predictions for the
leptonic sector are carried out in section V; section VI discusses possible dark
matter candidates and various phenomenological aspects of the model; section VII
concludes the work.
## II Model
In this section we introduce our model, which is supersymmetric (SUSY) and
applies modular $A_{4}$ symmetry. The model contains no flavon fields and is built
on the Minimal Supersymmetric Standard Model (MSSM) by appending the
$\bar{\nu},\eta,\eta^{\prime},\chi,$ and $N$ superfields. The particle content
is given in Tab. 1, where we summarize the representation under modular $A_{4}$, the
modular weight $k$ and the $3(B-L)$ value of each superfield. While $\bar{\nu}$ is
added to make neutrinos Dirac, the $\eta,\chi$, and $N$ superfields are needed for the
scotogenic (radiative) neutrino mass mechanism, where the scotogenic symmetry is the
modular $\mathcal{A}_{4}$ symmetry together with $\mathcal{R}$-parity. Here
$\eta^{\prime}$ is added to cancel the gauge anomaly, since our model is
supersymmetric. The model Lagrangian, i.e., the superpotential, is given as
$\displaystyle\mathcal{W}$
$\displaystyle=\bar{u}\boldsymbol{y_{u}}QH_{u}-\bar{d}\boldsymbol{y_{d}}QH_{d}+\mu
H_{u}H_{d}$ (1)
$\displaystyle-\delta_{1}L_{1}(\boldsymbol{y_{e}}\bar{e})_{1}H_{d}-\delta_{2}L_{1^{\prime}}(\boldsymbol{y_{e}}\bar{e})_{1^{\prime\prime}}H_{d}-\delta_{3}L_{1^{\prime\prime}}(\boldsymbol{y_{e}}\bar{e})_{1^{\prime}}H_{d}$
$\displaystyle+\alpha_{1}L_{1}(\boldsymbol{y_{l}}N)_{1}\eta+\alpha_{2}L_{1^{\prime}}(\boldsymbol{y_{l}}N)_{1^{\prime\prime}}\eta+\alpha_{3}L_{1^{\prime\prime}}(\boldsymbol{y_{l}}N)_{1^{\prime}}\eta$
$\displaystyle+\beta_{1}(N\boldsymbol{y_{\nu}})_{1}\bar{\nu}_{1}\chi_{1}+\beta_{2}(N\boldsymbol{y_{\nu}})_{1^{\prime}}\bar{\nu}_{1^{\prime\prime}}\chi_{1}+\beta_{3}(N\boldsymbol{y_{\nu}})_{1^{\prime\prime}}\bar{\nu}_{1^{\prime}}\chi_{1}$
$\displaystyle+\tilde{\beta}_{1}(N\boldsymbol{y_{\nu}})_{1}\bar{\nu}_{1}\chi_{2}+\tilde{\beta}_{2}(N\boldsymbol{y_{\nu}})_{1^{\prime}}\bar{\nu}_{1^{\prime\prime}}\chi_{2}+\tilde{\beta}_{3}(N\boldsymbol{y_{\nu}})_{1^{\prime\prime}}\bar{\nu}_{1^{\prime}}\chi_{2}$
$\displaystyle+\gamma_{h}\boldsymbol{y_{3}}\eta\chi_{1}H_{d}+\tilde{\gamma}_{h}\boldsymbol{y_{3}}\eta\chi_{2}H_{d}+\gamma_{N}\Lambda(N\boldsymbol{y_{n}}N)_{1}+\sum_{ij}\epsilon^{ij}\mu_{\chi}\boldsymbol{y_{\chi}}\chi_{i}\chi_{j}+\text{H.c.}\,,$
where we are using two-component notation, following Dreiner _et al._
(2010). Here $\boldsymbol{y_{X}}(\boldsymbol{X}=e,l,\nu,3,\chi,\chi^{\prime})$
denotes modular forms whose representations and corresponding modular weights
are summarized in Table 2. We write the modular forms
$\boldsymbol{y_{3}^{(2)}}=(y_{1},y_{2},y_{3})^{T}$,
$\boldsymbol{y_{1}^{(4)}}=y^{2}_{1}+2y_{2}y_{3}$ and
$\boldsymbol{y_{3}^{(4)}}=(y^{2}_{1}-y_{2}y_{3},y^{2}_{3}-y_{1}y_{2},y^{2}_{2}-y_{1}y_{3})^{T}$,
where $y_{i}$ is given by the Dedekind eta function $\eta(\tau)$ of the modulus
$\tau$ and its derivative $\eta^{\prime}(\tau)$, as given in Ref. Feruglio
(2019) ($y_{i}$ is written as $Y_{i}$ in the reference). On the other hand,
$\\{\delta_{a},\alpha_{a},\beta_{a},\tilde{\beta}_{a},\gamma_{h},\tilde{\gamma}_{h},\epsilon^{ij}\\}$
are coupling constants. Terms that are forbidden by various symmetries are
$\displaystyle\mathcal{W}_{\not{P_{R}}}$
$\displaystyle=\mathcal{W}_{\not{P_{R}}}^{\text{MSSM}}+LH_{u}+\eta
H_{d}+\bar{\nu}N+N\chi+\boldsymbol{y_{\chi}^{\prime}}\chi\chi\chi,$ (2a)
$\displaystyle\mathcal{W}_{\not{\mathcal{A}_{4}}}$
$\displaystyle=y_{1}^{(2)}L\bar{\nu}H_{u}+\Lambda
y_{1}^{(2)}\bar{\nu}\bar{\nu}+y_{1,1^{\prime},1^{\prime\prime}}^{(3)}\Lambda\eta
L+y_{1}^{(3)}\Lambda\bar{\nu}\chi+y_{3}^{(1)}H_{u}H_{d}N,$ (2b)
$\displaystyle\mathcal{W}_{\not{P_{R}}\&\not{\mathcal{A}_{4}}}$
$\displaystyle=y_{1}^{(2)}\chi H_{u}H_{d},$ (2c)
where matter parity and $\mathcal{R}$-parity are defined as
$P_{M}=(-1)^{3(B-L)}$ and $P_{\mathcal{R}}=(-1)^{3(B-L)+2s}$, with $s$ being the
spin of the particle, respectively. The parameters and observables of the leptonic
sector are listed in Tab. 3.
If the scalar $N$ somehow acquired a non$-$zero VEV (it is $\mathcal{R}$-parity even),
then neutrinos would obtain a tree$-$level (still Dirac) mass via the Dirac seesaw
(as explained in the model$-$I tree-level scenario of Ma and Popov (2017)). This
does not happen in our case because the $y_{3}^{(1)}H_{u}H_{d}N$ term (eq. 2b) is
forbidden by modular invariance of the super$-$potential;
$y_{3}^{(1)}H_{u}H_{d}N$ is the only possible source term that could induce
$\left\langle N\right\rangle\neq 0$. In other words, the VEV of $N$ is not
induced by the VEVs of $H_{u,d}$, and since $\mathcal{R}$-parity alone does not protect
$\left\langle N\right\rangle=0$, there must be another
induced or accidental symmetry in the Lagrangian responsible for
$\left\langle N\right\rangle=0$.
The extra symmetry is a $\mathbb{Z}_{2}$, which can be seen from the $\Lambda
N\boldsymbol{y_{n}}N$ term of eq. 1. The $\mathbb{Z}_{2}$-odd particles under
this accidental symmetry (present because of the modular $\mathcal{A}_{4}$
invariance of the superpotential) are
$\hat{N},\hat{\chi},\hat{\eta}$. From this we conclude that the lightest
of these $P_{\mathcal{R}}$-even states is a dark matter (DM) candidate, which
makes our model a multi-particle dark matter model.
S$-$Field | SU(3)c | SU(2)L | U(1)Y | $\mathcal{A}_{4}$ | $-k$ | $3(B-L)$
---|---|---|---|---|---|---
Q | 3 | 2 | $\frac{1}{6}$ | $1$ | $0$ | $1$
$\bar{u}$ | $\boldsymbol{\bar{3}}$ | 1 | $-\frac{2}{3}$ | $1$ | $0$ | $-1$
$\bar{d}$ | $\boldsymbol{\bar{3}}$ | 1 | $\frac{1}{3}$ | $1$ | $0$ | $-1$
L | 1 | 2 | $-\frac{1}{2}$ | $\boldsymbol{1,1^{\prime},1^{\prime\prime}}$ | $-1$ | $-3$
$\bar{e}$ | 1 | 1 | $1$ | $\boldsymbol{3}$ | $-1$ | $3$
$\bar{\nu}$ | 1 | 1 | $0$ | $\boldsymbol{1,1^{\prime},1^{\prime\prime}}$ | $-1$ | $3$
$N$ | 1 | 1 | $0$ | $\boldsymbol{3}$ | $-1$ | $0$
$H_{u}$ | 1 | 2 | $\frac{1}{2}$ | $\boldsymbol{1}$ | $0$ | $0$
$H_{d}$ | 1 | 2 | $-\frac{1}{2}$ | $\boldsymbol{1}$ | $0$ | $0$
$\eta$ | 1 | 2 | $\frac{1}{2}$ | $\boldsymbol{1}$ | $-2$ | $3$
$\eta^{\prime}$ | 1 | 2 | $-\frac{1}{2}$ | $\boldsymbol{1}$ | $-2$ | $3$
$\chi$ | 1 | 1 | $0$ | $\boldsymbol{1}$ | $-2$ | $-3$
Table 1: Model particle content. The matter$-$parity $(3(B-L))$ twin ($-$,$+$) particles are $(L,H_{d}),(\eta,H_{u}),(\bar{\nu}\chi,N)$. $k$ is the modular$-$weight.

Field | $\mathcal{A}_{4}$ | $-k$
---|---|---
$\boldsymbol{y_{e}}=\boldsymbol{y_{3}^{(2)}}$ | $\boldsymbol{3}$ | $2$
$\boldsymbol{y_{l}}=\boldsymbol{y_{3}^{(4)}}$ | $\boldsymbol{3}$ | $4$
$\boldsymbol{y_{\nu}}=\boldsymbol{y_{3}^{(4)}}$ | $\boldsymbol{3}$ | $4$
$\boldsymbol{y_{3}}=\boldsymbol{y_{1}^{(4)}}$ | $\boldsymbol{1}$ | $4$
$\boldsymbol{y_{n}}=\boldsymbol{y_{3}^{(2)}}$ | $\boldsymbol{3}$ | $2$
$\boldsymbol{y_{\chi}}=\boldsymbol{y_{1}^{(4)}}$ | $\boldsymbol{1}$ | $4$
$\boldsymbol{y_{\chi}^{\prime}}=\boldsymbol{y_{1}^{(6)}}=6y_{1}y_{2}y_{3}$ | $\boldsymbol{1}$ | $6$
Table 2: Modular transformations of the Yukawas and dimensionful parameters of the model.

Observable | Predicted/Input/Constrained
---|---
$\Delta m^{2}_{\text{sol}}$ | P
$\Delta m^{2}_{\text{atm}}$ | I
$m_{1}$ | C
$\sin^{2}\theta_{12}$ | P
$\sin^{2}\theta_{23}$ | P
$\sin^{2}\theta_{13}$ | P
$\delta_{CP}$ | P
$m_{ee}$ | P/C
$\sum m_{i}$ | P
$m_{e},m_{\mu},m_{\tau}$ | I
Model parameter | Constrained by/Free
$\boldsymbol{y_{e}}$ | $\alpha_{i},\tau$
$\boldsymbol{y_{\nu}}$ | $\alpha_{i},\tau$
$\tau$ | Scan
$\mu_{H}$ | $m_{\nu}$
$\mu$ | $G_{F},m_{h},\frac{\partial V}{\partial H_{u}^{0}}=0,\frac{\partial V}{\partial H_{d}^{0}}=0$
$v_{u},v_{d}\leftrightarrow v,\tan\beta$ | $m_{e},m_{\mu},m_{\tau},G_{F}$
Table 3: Parameters and observables of the leptonic sector.
### Boson sector
Here we discuss the mass eigenstates and mixings of the neutral scalar bosons which
are $\mathcal{R}$-parity odd. Because of the term
$\gamma_{h}y_{3}\eta\chi_{1}H_{d}+\tilde{\gamma}_{h}y_{3}\eta\chi_{2}H_{d}$,
the neutral components of the inert bosons $\eta$ and $\chi_{1,2}$ mix with each
other. We formulate their mixing as
$\displaystyle\left(\begin{matrix}\chi_{1}^{0}\\\ \chi_{2}^{0}\\\ \eta_{0}\\\
\end{matrix}\right)$ $\displaystyle=\begin{pmatrix}1&0&0\\\
0&c_{H_{23}}&-s_{H_{23}}\\\ 0&s_{H_{23}}&c_{H_{23}}\\\
\end{pmatrix}\begin{pmatrix}c_{H_{13}}&0&-s_{H_{13}}\\\ 0&1&0\\\
s_{H_{13}}&0&c_{H_{13}}\\\
\end{pmatrix}\begin{pmatrix}c_{H_{12}}&-s_{H_{12}}&0\\\
s_{H_{12}}&c_{H_{12}}&0\\\ 0&0&1\\\ \end{pmatrix}\left(\begin{matrix}H_{1}\\\
H_{2}\\\ H_{3}\\\ \end{matrix}\right),$ $\displaystyle\equiv
U_{H}\left(\begin{matrix}H_{1}\\\ H_{2}\\\ H_{3}\\\ \end{matrix}\right),$ (3)
where we consider the mixing angles
$s_{H_{ij}}\equiv\sin\theta_{H_{ij}}$, $c_{H_{ij}}\equiv\cos\theta_{H_{ij}}$,
and the mass eigenvalues $m_{H_{1,2,3}}$ as free parameters. In this paper, we do
not discuss charged scalar bosons since they are irrelevant for leptogenesis
and neutrino mass generation. The Higgs sector in our model is the same as that of
the MSSM, so we do not discuss it here.
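As a minimal numerical sketch (our own illustration, not part of the original text), the mixing matrix $U_{H}$ of Eq. (3) can be assembled as the product of the three rotations; the angle values below are placeholders.

```python
import numpy as np

def mixing_matrix(th12, th13, th23):
    """Build U_H of Eq. (3) as R23(th23) @ R13(th13) @ R12(th12)."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    R23 = np.array([[1, 0, 0], [0, c23, -s23], [0, s23, c23]])
    R13 = np.array([[c13, 0, -s13], [0, 1, 0], [s13, 0, c13]])
    R12 = np.array([[c12, -s12, 0], [s12, c12, 0], [0, 0, 1]])
    return R23 @ R13 @ R12

U_H = mixing_matrix(0.39, 0.01, 0.10)        # placeholder angles
print(np.allclose(U_H @ U_H.T, np.eye(3)))   # orthogonality check
```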
### Charged-lepton masses
In this subsection we discuss the charged-lepton masses. The charged-lepton mass
matrix is obtained from the following Lagrangian:
$\displaystyle{\cal L}_{\ell}$
$\displaystyle=\delta_{1}(\bar{e}_{1}y_{1}+\bar{e}_{2}y_{3}+\bar{e}_{3}y_{2})H_{d}L_{1}+\delta_{2}(\bar{e}_{2}y_{2}+\bar{e}_{1}y_{3}+\bar{e}_{3}y_{1})H_{d}L_{2}$
$\displaystyle+\delta_{3}(\bar{e}_{3}y_{3}+\bar{e}_{1}y_{2}+\bar{e}_{2}y_{1})H_{d}L_{3}+{\rm
h.c.},$ (4)
where $\boldsymbol{y_{3}^{(2)}}\equiv[y_{1},y_{2},y_{3}]^{T}$ is applied to the
terms in the second line of Eq. 1. After electroweak symmetry is spontaneously broken, we find
$\displaystyle
m_{\ell}=\left[\frac{v_{d}}{\sqrt{2}}\left(\begin{matrix}y_{1}&y_{3}&y_{2}\\\
y_{3}&y_{2}&y_{1}\\\ y_{2}&y_{1}&y_{3}\\\
\end{matrix}\right)\left(\begin{matrix}\delta_{1}&0&0\\\ 0&\delta_{2}&0\\\
0&0&\delta_{3}\\\ \end{matrix}\right)\right]_{\bar{e}L}.$ (5)
Then, the mass matrix is diagonalized by $D_{\ell}\equiv
V_{e_{R}}^{\dagger}m_{\ell}V_{e_{L}}$;
$|D_{\ell}|^{2}=V_{e_{L}}^{\dagger}m^{\dagger}_{\ell}m_{\ell}V_{e_{L}}$.
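As a rough numerical sketch (ours, with placeholder inputs rather than the model's actual modular forms), the diagonalization of Eq. (5) can be carried out with a singular value decomposition.

```python
import numpy as np

# placeholder modular-form values and couplings (illustrative, not the benchmark of Table 4)
y1, y2, y3 = 1.0 + 0.0j, 0.5 + 0.1j, 0.2 - 0.05j
deltas = np.diag([1e-5, 2e-3, 3e-2])
vd = 50.0  # GeV, illustrative

Y = np.array([[y1, y3, y2],
              [y3, y2, y1],
              [y2, y1, y3]])
m_ell = vd / np.sqrt(2) * Y @ deltas           # Eq. (5)

# SVD: m_ell = V_eR @ diag(D_ell) @ V_eL^dagger, so the singular values are the masses
V_eR, D_ell, V_eL_dag = np.linalg.svd(m_ell)
print(D_ell)                                    # charged-lepton masses (GeV)
```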
## III Leptogenesis
The resultant leptonic asymmetry is realized by satisfying Sakharov's
conditions Sakharov (1967). Now, in order to get a non-zero $CP$ violation one
needs to satisfy the Nanopoulos-Weinberg theorem Nanopoulos and Weinberg
(1979), along with the condition pointed out by Adhikari-Rangarajan
Adhikari and Rangarajan (2002). The first condition specifies at least how many
$B/L$-violating couplings one needs, and the second condition tells us precisely
where to place such couplings.
In our scenario the only particle which breaks lepton number is $\chi$,
through its mass term. The decay of $\chi$ will create an asymmetry in the
right$-$handed sector. Then, eventually, the asymmetry from the right$-$handed
sector is transferred to the left$-$handed sector through the process
$\bar{\nu}\widetilde{\chi}\rightarrow\nu^{\dagger}\widetilde{\eta}^{\dagger}$.
The processes responsible for the generation of an asymmetry are given as
follows:
[Feynman diagrams: tree-level and one-loop $\chi\to N\bar{\nu}$ decay amplitudes]
Figure 1: The processes responsible for generating the $CP$ violation.
The diagrams shown in Fig. 1 give a non-zero $CP$ asymmetry, which eventually
produces an asymmetry in the right-handed sector. The asymmetry is then
communicated to the left-handed sector through the channel shown in
Fig. 2.
[Feynman diagram: $\widetilde{\chi}\bar{\nu}\rightarrow\tilde{\eta}^{0}\nu$ scattering mediated by $\widetilde{N}$]
Figure 2: The process responsible for transferring the asymmetry from
the right-handed sector to the left-handed sector.
The above process acts both as a source term and as the major
wash-out channel. Hence, in order to maximize the asymmetry, one requires
this wash-out process to go out of equilibrium at early times, which boils
down to satisfying the following relation
$\displaystyle\frac{\gamma^{eq}_{Scatt}(\widetilde{\chi}\bar{\nu}\rightarrow\tilde{\eta}^{0}\nu)}{Hs}<1.$
(6)
In order to get the resultant asymmetry we need to solve the coupled Boltzmann
equations:
$\displaystyle\frac{dY_{\chi}}{dz}$
$\displaystyle=\frac{-1}{zsH}\left[\left(\frac{Y_{\chi}}{Y^{eq}_{\chi}}-1\right)\gamma_{D}(\chi\rightarrow\nu_{R}\widetilde{N})+\left(\frac{Y^{2}_{\chi}}{(Y^{eq}_{\chi})^{2}}-1\right)\gamma^{eq}_{Scatt}(\chi\chi\rightarrow
all)\right],$ (7a) $\displaystyle\frac{dY_{\Delta R}}{dz}$
$\displaystyle=\frac{1}{zsH}\left[\left(\frac{Y_{\chi}}{Y^{eq}_{\chi}}-1\right)\epsilon\gamma_{D}(\chi\rightarrow\nu_{R}\widetilde{N})-\frac{Y_{\Delta
R}}{Y^{eq}_{l}}\gamma_{D}(\chi\rightarrow\nu_{R}\widetilde{N})\right.$ (7b)
$\displaystyle-\left.2\frac{Y_{\Delta
R}}{Y^{eq}_{l}}\left[\gamma^{eq}_{scatt}(\nu_{R}\widetilde{N}\rightarrow\bar{\nu}_{R}\widetilde{N})+\gamma^{eq}_{scatt}(\nu_{R}\nu_{R}\rightarrow\widetilde{N}\widetilde{N})\right]\right.$
$\displaystyle+\left.\left(\frac{Y_{\Delta L}-Y_{\Delta
R}}{Y^{eq}_{l}}\right)\gamma^{eq}_{Scatt}(\widetilde{\chi}\nu_{R}\rightarrow
H\nu_{L})\right],$ $\displaystyle\frac{dY_{\Delta L}}{dz}$
$\displaystyle=\frac{1}{zsH}\left[\left(\frac{Y_{\Delta R}-Y_{\Delta
L}}{Y^{eq}_{l}}\right)\gamma^{eq}_{Scatt}(\widetilde{\chi}\nu_{R}\rightarrow
H\nu_{L})\right],$ (7c)
where $z=M_{\chi}/T$, $\gamma_{D}(i_{1}\rightarrow f_{1}+f_{2}+\cdots)$ and
$\gamma^{eq}_{scatt}(i_{1}i_{2}\rightarrow f_{1}+f_{2}+\cdots)$ is given as,
$\displaystyle\gamma_{D}(i_{1}\rightarrow f_{1}+f_{2}+\cdots)$
$\displaystyle=\frac{g_{i}}{2\pi^{2}}m^{2}_{i_{1}}TK_{1}(m_{i_{1}}/T)\Gamma(i_{1}\rightarrow
f_{1}+f_{2}+\cdots),$ (8a)
$\displaystyle\gamma^{eq}_{Scatt}(i_{1}i_{2}\rightarrow f_{1}+f_{2}+\cdots)$
$\displaystyle=\frac{g_{i_{1}}g_{i_{2}}T}{8\pi^{4}}\int^{\infty}_{s_{in}}ds\frac{p_{in}p_{out}}{\sqrt{s}}|\mathcal{M}(i_{1}i_{2}\rightarrow
f_{1}+f_{2}+\cdots)|^{2}K_{1}(\sqrt{s}/T),$ (8b)
in which $K_{1}$ is the modified Bessel function of the second kind. To estimate
the asymmetry we start by calculating the CP asymmetry parameter $\varepsilon$
for the $\chi\rightarrow N\nu_{R}$ decays, which is given as
$\displaystyle\varepsilon_{i}$
$\displaystyle=\frac{1}{8\pi(Y_{\nu}^{\dagger}Y_{\nu})_{ii}}\Im[(Y^{\dagger}_{\nu}Y_{\nu})^{2}_{ij}]\frac{1}{\sqrt{x_{ji}}}\mathcal{F}(x_{ji}),$
(9a) $\displaystyle\mathcal{F}(x_{ji})$
$\displaystyle=\sqrt{x_{ji}}\left[f(x_{ji})-\frac{\sqrt{x_{ji}}}{x_{ji}-1}\right],$
(9b) $\displaystyle f(x_{ji})$
$\displaystyle=\sqrt{x_{ji}}\left[1+(1+x_{ji})\ln\left(\frac{x_{ji}}{x_{ji}+1}\right)\right],$
(9c)
with $x_{ji}=M^{2}_{j}/M^{2}_{i}$ and
$(Y_{\nu})_{1(2)}\equiv\beta_{a}\boldsymbol{y_{\nu}}(\tilde{\beta}_{a}\boldsymbol{y_{\nu}})$
omitting flavor indices. Furthermore, the decay width $\Gamma_{i}$ is given as
$\displaystyle\Gamma_{i}$
$\displaystyle=\frac{M_{\chi_{i}}}{8\pi}(Y^{\dagger}_{\nu}Y_{\nu})_{ii}.$ (10)
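As a rough numerical sketch (our own illustration, not part of the original analysis), Eqs. (9)-(10) can be evaluated as below; the Yukawa matrix, the $\chi$ masses and the implicit sum over $j\neq i$ are illustrative assumptions.

```python
import numpy as np

def loop_F(x):
    """F(x) of Eqs. (9b)-(9c): F = sqrt(x)*[f(x) - sqrt(x)/(x-1)], f = sqrt(x)*[1+(1+x)*ln(x/(1+x))]."""
    f = np.sqrt(x) * (1.0 + (1.0 + x) * np.log(x / (1.0 + x)))
    return np.sqrt(x) * (f - np.sqrt(x) / (x - 1.0))

def cp_asymmetry(Ynu, M, i=0):
    """eps_i of Eq. (9a) for chi_i -> N nu_R, summing over the other chi states j != i."""
    H = Ynu.conj().T @ Ynu                      # (Y_nu^dagger Y_nu), indexed by chi
    eps = 0.0
    for j in range(len(M)):
        if j != i:
            x = (M[j] / M[i])**2
            eps += np.imag(H[i, j]**2) / (8*np.pi*H[i, i].real) * loop_F(x) / np.sqrt(x)
    return eps

def decay_width(Ynu, M, i=0):
    """Gamma_i of Eq. (10)."""
    H = Ynu.conj().T @ Ynu
    return M[i] / (8*np.pi) * H[i, i].real

# toy inputs (placeholders, not the benchmark of Table 4)
Ynu = 1e-8 * (np.random.rand(3, 2) + 1j * np.random.rand(3, 2))   # flavor x chi
M   = np.array([1.0e3, 1.5e3])                                    # GeV
print(cp_asymmetry(Ynu, M, 0), decay_width(Ynu, M, 0))
```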
Now, assuming the resonant case and taking the mass difference of the
decaying $\chi$ states to be $M_{\chi_{j}}-M_{\chi_{i}}=\Gamma_{i}/2$, the
total asymmetry simplifies to
$\displaystyle\varepsilon$
$\displaystyle=\sum_{i}\varepsilon_{i}=\sin(2\phi).$ (11)
If we demand that the departure from equilibrium occurs around $T\sim M_{\chi}$, we can
safely put a constraint on the relevant Yukawa couplings through the following relation
$\displaystyle\frac{\Gamma_{\chi}}{H}$
$\displaystyle=\left(\frac{8\pi^{3}g_{*}}{90}\right)^{-1/2}\frac{M_{pl}}{M_{\chi}}\frac{(Y^{\dagger}_{\nu}Y_{\nu})_{11}}{8\pi}=1\quad\textrm{assuming}\quad
M_{\chi_{1}}>M_{\chi_{2}},$ (12a) $\displaystyle(Y_{\nu})^{2}_{1}$
$\displaystyle=\frac{M_{\chi_{1}}}{M_{pl}}\sqrt{\frac{8\pi^{3}g_{*}}{90}},$
(12b) $\displaystyle(Y_{\nu})^{2}_{1}$ $\displaystyle=1.43\times
10^{-15}\left(\frac{M_{\chi_{1}}}{1{\rm TeV}}\right)=(Y_{\nu})^{2}_{2},$ (12c)
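A quick numerical check of Eq. (12) can be done as follows (a sketch of ours; the values of $g_{*}$ and $M_{pl}$ are assumptions consistent with the quoted coefficient).

```python
import numpy as np

M_pl   = 1.22e19     # GeV, Planck mass (assumed)
g_star = 106.75      # relativistic degrees of freedom (assumed)
M_chi1 = 1.0e3       # GeV

Ynu_sq = (M_chi1 / M_pl) * np.sqrt(8 * np.pi**3 * g_star / 90.0)   # Eq. (12b)
print(Ynu_sq, np.sqrt(Ynu_sq))   # ~1.4e-15 and ~3.8e-8, cf. Eq. (12c) and Sec. IV
```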
Now, in order to get the estimate, we assume that the processes leading to left-
right equilibration include
$\widetilde{\eta}\nu_{L}\rightarrow\widetilde{\chi}\nu_{R}$, mediated by
s-channel exchange of an $\widetilde{N}$. Approximately, at high temperatures
these processes have a rate
$\displaystyle\Gamma_{L-R}$ $\displaystyle\sim
2\frac{|(Y_{\nu})_{1}|^{2}|Y_{l}|^{2}}{M^{4}_{\widetilde{N}}}T^{5},$ (13)
where $Y_{l}\equiv\alpha_{a}\boldsymbol{y_{l}}$. This should be compared with
the Hubble rate
$\displaystyle H$
$\displaystyle=\sqrt{\frac{8\pi^{3}g_{*}}{90}}\frac{T^{2}}{M_{pl}}.$ (14)
The strongest constraint comes from the highest temperatures,
$T\simeq M_{\chi}$, i.e., those at which the asymmetry is generated
$\displaystyle
2\frac{|(Y_{\nu})_{1}|^{2}|Y_{l}|^{2}}{M_{\widetilde{N}}}M^{3}_{\chi_{1}}$
$\displaystyle\lesssim\frac{1}{M_{pl}}\sqrt{\frac{8\pi^{3}g_{*}}{90}}.$ (15)
The ratio boils down to
$\displaystyle|Y_{l}|^{2}$
$\displaystyle\lesssim\frac{M^{4}_{\widetilde{N}}}{2M^{4}_{\chi_{1}}}|Y_{l}|^{2}\leq
4\pi,$ (16a) $\displaystyle M_{\widetilde{N}}$
$\displaystyle=(8\pi)^{1/4}M_{\chi_{1}},$ (16b)
which basically tells us that $M_{\widetilde{N}}\sim M_{\chi_{1}}$.
Hence, the final asymmetry can be given as
$\displaystyle\eta_{B}$ $\displaystyle=a_{sph}\frac{86}{2387}\varepsilon
Y^{eq}_{\chi_{1}}(z=1),$ (17a) $\displaystyle\eta_{B}$
$\displaystyle=4.479\times 10^{-5}\sin(2\phi),$ (17b)
where $a_{sph}=28/79$, and the observed baryonic asymmetry
$\eta^{obs}_{B}=6\times 10^{-10}$ translates into
$\displaystyle\sin(2\phi)$ $\displaystyle=1.34\times
10^{-5}\quad\sim\phi=6.7\times 10^{-6}.$ (18)
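The numerical inversion of Eqs. (17b)-(18) is elementary; the short check below is our own illustration.

```python
import numpy as np

coeff   = 4.479e-5        # prefactor of Eq. (17b): a_sph * (86/2387) * Y_chi1^eq(z=1)
eta_obs = 6.0e-10         # observed baryon asymmetry used in the text

sin2phi = eta_obs / coeff
phi     = 0.5 * np.arcsin(sin2phi)
print(sin2phi, phi)       # ~1.34e-5 and ~6.7e-6, cf. Eq. (18)
```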
## IV Neutrino masses
Here the generation of the Dirac neutrino masses is discussed; we start with
the masses of the heavy neutral fermions.
### Heavy neutral masses
Before formulating the active neutrino mass matrix, let us first formulate the
heavier Majorana neutral fermions $N$. The explicit form of the Lagrangian is
found as
$\displaystyle{\cal L}_{N}$
$\displaystyle=M_{0}\left[y_{1}(2N_{1}N_{1}-N_{2}N_{3}-N_{3}N_{2})+y_{2}(2N_{2}N_{2}-N_{1}N_{3}-N_{3}N_{1})\right.$
$\displaystyle\left.+y_{3}(2N_{3}N_{3}-N_{1}N_{2}-N_{2}N_{1})\right]+{\rm
h.c.}$ (19)
Thus, the Majorana mass matrix is given by
$\displaystyle M_{N}=M_{0}\left(\begin{matrix}2y_{1}&-y_{3}&-y_{2}\\\
-y_{3}&2y_{2}&-y_{1}\\\ -y_{2}&-y_{1}&2y_{3}\\\ \end{matrix}\right).$ (20)
Then, this is diagonalized by $D_{N}\equiv U^{T}M_{N}U$; $|D_{N}|^{2}\equiv
U^{\dagger}M^{\dagger}_{N}M_{N}U$; furthermore $N\equiv U\psi$, where $\psi$
denotes the mass eigenstates of $N$.
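A minimal numerical sketch (our own, with placeholder modular-form values rather than the ones fixed by the modulus $\tau$) of Eq. (20) and its diagonalization is given below.

```python
import numpy as np

def majorana_mass_matrix(M0, y):
    """M_N of Eq. (20), built from the weight-2 modular form y = (y1, y2, y3)."""
    y1, y2, y3 = y
    return M0 * np.array([[2*y1, -y3, -y2],
                          [-y3, 2*y2, -y1],
                          [-y2, -y1, 2*y3]], dtype=complex)

# placeholder modular-form values (not the benchmark of Table 4)
y  = np.array([1.0 + 0.0j, 0.5 + 0.1j, 0.2 - 0.05j])
MN = majorana_mass_matrix(8.1e3, y)                     # M0 in GeV

# |D_N|^2 = U^dagger M_N^dagger M_N U: the masses are the square roots of the eigenvalues
masses_sq, U = np.linalg.eigh(MN.conj().T @ MN)
print(np.sqrt(masses_sq))                               # heavy fermion masses D_{N_a}
```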
### Neutrino mass generation
In our model the tree-level Dirac neutrino mass term is forbidden by modular
invariance of the superpotential, while the scotogenic Dirac neutrino mass term is
allowed due to the presence of the Majorana fermion $N$. The relevant interactions that
generate the Dirac neutrino mass matrix are given by
$\displaystyle{\cal L}_{\nu}$
$\displaystyle=\alpha_{1}(N_{1}^{T}y^{\prime}_{1}+N_{2}^{T}y^{\prime}_{3}+N_{3}^{T}y^{\prime}_{2})\eta
L_{1}+\alpha_{2}(N_{2}^{T}y^{\prime}_{2}+N_{1}^{T}y^{\prime}_{3}+N_{3}^{T}y^{\prime}_{1})\eta
L_{2}+\alpha_{3}(N_{3}^{T}y^{\prime}_{3}+N_{1}^{T}y^{\prime}_{2}+N_{2}^{T}y^{\prime}_{1})\eta
L_{3}$
$\displaystyle+\beta_{1}\bar{\nu}_{1}(N_{1}y^{\prime}_{1}+N_{2}y^{\prime}_{3}+N_{3}y^{\prime}_{2})\chi_{1}+\beta_{2}\bar{\nu}_{2}(N_{2}y^{\prime}_{2}+N_{1}y^{\prime}_{3}+N_{3}y^{\prime}_{1})\chi_{1}+\beta_{3}\bar{\nu}_{3}(N_{3}y^{\prime}_{3}+N_{1}y^{\prime}_{2}+N_{2}y^{\prime}_{1})\chi_{1}+{\rm
h.c.}$
$\displaystyle+\tilde{\beta}_{1}\bar{\nu}_{1}(N_{1}y^{\prime}_{1}+N_{2}y^{\prime}_{3}+N_{3}y^{\prime}_{2})\chi_{2}+\tilde{\beta}_{2}\bar{\nu}_{2}(N_{2}y^{\prime}_{2}+N_{1}y^{\prime}_{3}+N_{3}y^{\prime}_{1})\chi_{2}+\tilde{\beta}_{3}\bar{\nu}_{3}(N_{3}y^{\prime}_{3}+N_{1}y^{\prime}_{2}+N_{2}y^{\prime}_{1})\chi_{2}+{\rm
h.c.},$ $\displaystyle\supset
N^{T}y_{\eta}\nu\eta^{0}+\bar{\nu}y_{\chi}N\chi^{0}_{1}+\bar{\nu}\tilde{y}_{\chi}N\chi^{0}_{2}+{\rm
h.c.},$
$\displaystyle=\psi^{T}U^{T}y_{\eta}\nu((U_{H})_{31}H_{1}+(U_{H})_{32}H_{2}+(U_{H})_{33}H_{3})+\bar{\nu}y_{\chi}U\psi((U_{H})_{11}H_{1}+(U_{H})_{12}H_{2}+(U_{H})_{13}H_{3})$
$\displaystyle+\bar{\nu}\tilde{y}_{\chi}U\psi((U_{H})_{21}H_{1}+(U_{H})_{22}H_{2}+(U_{H})_{23}H_{3})+{\rm
h.c.},$ (21)
where ${y_{3}^{(4)}}\equiv[y_{1}^{\prime},y_{2}^{\prime},y_{3}^{\prime}]^{T}$,
and in the last line we rewrite the interactions in terms of the mass eigenstates. Then,
we find each of the Yukawa matrices to be
$\displaystyle y_{\eta}=\alpha_{1}\tilde{y}_{\eta}$
$\displaystyle=\alpha_{1}\left[\left(\begin{matrix}y_{1}^{\prime}&y_{3}^{\prime}&y_{2}^{\prime}\\\
y_{3}^{\prime}&y_{2}^{\prime}&y_{1}^{\prime}\\\
y_{2}^{\prime}&y_{1}^{\prime}&y_{3}^{\prime}\\\
\end{matrix}\right)\left(\begin{matrix}1&0&0\\\ 0&\tilde{\alpha}_{2}&0\\\
0&0&\tilde{\alpha}_{3}\\\ \end{matrix}\right)\right]_{N^{T}L},$ (22)
$\displaystyle y_{\chi}(\tilde{y}_{\chi})$
$\displaystyle=\left[\left(\begin{matrix}y_{1}^{\prime}&y_{3}^{\prime}&y_{2}^{\prime}\\\
y_{3}^{\prime}&y_{2}^{\prime}&y_{1}^{\prime}\\\
y_{2}^{\prime}&y_{1}^{\prime}&y_{3}^{\prime}\\\
\end{matrix}\right)\left(\begin{matrix}\beta_{1}(\tilde{\beta}_{1})&0&0\\\
0&\beta_{2}(\tilde{\beta}_{2})&0\\\ 0&0&\beta_{3}(\tilde{\beta}_{3})\\\
\end{matrix}\right)\right]_{\bar{\nu}N},$ (23)
where $\tilde{\alpha}_{2,3}\equiv\alpha_{2,3}/\alpha_{1}$ and $\alpha_{1}$ is
factored out for convenience in the numerical analysis. In terms of these
interactions, the Dirac scotogenic neutrino mass diagram is given in Fig. 3.
The analytic form of the mass matrix is estimated to be
$\displaystyle
m_{\nu_{ij}}=-\frac{\alpha_{1}}{(4\pi)^{2}}\sum_{a=1}^{3}\sum_{A=1}^{3}(U^{T}\tilde{y}_{\eta})_{ia}D_{N_{a}}\left[(y_{\chi}U)_{aj}(U_{H})_{3A}(U_{H})_{1A}+(\tilde{y}_{\chi}U)_{aj}(U_{H})_{3A}(U_{H})_{2A}\right]f(r^{a}_{A}),$
(24a) $\displaystyle f(r^{a}_{A})=\frac{r^{a}_{A}\ln r^{a}_{A}}{1-r_{A}^{a}},$
(24b)
where $r^{a}_{A}\equiv\frac{m_{H_{A}}^{2}}{D_{N_{a}}^{2}}$. Here, we redefine
$\tilde{m}_{\nu}\equiv\frac{m_{\nu}}{\alpha_{1}}$. Then the neutrino mass
matrix is diagonalized by
$U_{\nu}^{T}\tilde{m}_{\nu}U_{\nu}\equiv$diag.($\tilde{m}_{1},\tilde{m}_{2},\tilde{m}_{3}$).
Finally, we find
$\displaystyle\alpha_{1}^{2}=\frac{\Delta m^{2}_{\rm
atm}}{\tilde{m}^{2}_{3}-\tilde{m}^{2}_{1}},\quad\Delta m^{2}_{\rm
sol}=\frac{\tilde{m}^{2}_{2}-\tilde{m}^{2}_{1}}{\tilde{m}^{2}_{3}-\tilde{m}^{2}_{1}}\Delta
m^{2}_{\rm atm},\quad({\rm NH}),$ (25a)
$\displaystyle\alpha_{1}^{2}=\frac{\Delta m^{2}_{\rm
atm}}{\tilde{m}^{2}_{2}-\tilde{m}^{2}_{3}},\quad\Delta m^{2}_{\rm
sol}=\frac{\tilde{m}^{2}_{2}-\tilde{m}^{2}_{1}}{\tilde{m}^{2}_{2}-\tilde{m}^{2}_{3}}\Delta
m^{2}_{\rm atm},\quad({\rm IH}),$ (25b)
where we require $\alpha_{1}^{2}\leq 4\pi$ to guarantee perturbativity of the
Yukawa coupling. Then, one finds $U_{PMNS}=V^{\dagger}_{eL}U_{\nu}$. Each
mixing angle is given in terms of the components of $U_{PMNS}$ as follows:
$\displaystyle\sin^{2}\theta_{13}=|(U_{PMNS})_{13}|^{2},\quad\sin^{2}\theta_{23}=\frac{|(U_{PMNS})_{23}|^{2}}{1-|(U_{PMNS})_{13}|^{2}},\quad\sin^{2}\theta_{12}=\frac{|(U_{PMNS})_{12}|^{2}}{1-|(U_{PMNS})_{13}|^{2}}.$
(26)
Figure 3: Dirac scotogenic neutrino mass diagram
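A compact numerical sketch (ours; all inputs below are random placeholders rather than the model's modular forms) of the loop function and of the assembly of Eq. (24a) might look as follows; the returned matrix would then be diagonalized as $U_{\nu}^{T}\tilde{m}_{\nu}U_{\nu}$ and $\alpha_{1}$ fixed from Eq. (25).

```python
import numpy as np

def loop_f(r):
    """Loop function f(r) = r ln(r) / (1 - r) of Eq. (24b)."""
    return r * np.log(r) / (1.0 - r)

def mnu_tilde(y_eta_t, y_chi, y_chi_t, U, DN, UH, mH):
    """Dimensionless neutrino mass matrix m_nu/alpha_1 of Eq. (24a)."""
    m = np.zeros((3, 3), dtype=complex)
    left = U.T @ y_eta_t                        # (U^T y~_eta)_{ia}
    r1, r2 = y_chi @ U, y_chi_t @ U             # (y_chi U)_{aj}, (y~_chi U)_{aj}
    for a in range(3):
        for A in range(3):
            r = mH[A]**2 / DN[a]**2
            coup = r1[a, :] * UH[2, A] * UH[0, A] + r2[a, :] * UH[2, A] * UH[1, A]
            m += -DN[a] * loop_f(r) / (4*np.pi)**2 * np.outer(left[:, a], coup)
    return m

# example call with random placeholder inputs (for shape checking only)
rnd = lambda *s: np.random.rand(*s) + 1j * np.random.rand(*s)
print(mnu_tilde(rnd(3, 3), rnd(3, 3), rnd(3, 3),
                np.linalg.qr(rnd(3, 3))[0],             # a unitary U
                np.array([6.3e3, 1.0e4, 1.6e4]),        # D_{N_a} in GeV
                np.linalg.qr(np.random.rand(3, 3))[0],  # an orthogonal U_H
                np.array([4.7e3, 4.9e3, 5.5e3])))       # m_{H_A} in GeV
```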
Neutral fermion mass matrices are given by
$\displaystyle\left(\begin{matrix}\begin{matrix}0&m_{d}\\\ m_{d}&0\end{matrix}&\Large{\boldsymbol{0}_{\mathcal{\not{A}}_{4}}}\\\ \Large{\boldsymbol{0}_{\mathcal{\not{A}}_{4}}}&\begin{matrix}0&\boldsymbol{y_{3}}v_{d}\\\ \boldsymbol{y_{3}}v_{d}&\varepsilon\mu_{\chi}\boldsymbol{y_{\chi}}\end{matrix}\end{matrix}\right)$
$\displaystyle\left(\begin{matrix}\begin{matrix}0&\mu\\\ \mu&0\end{matrix}&\Large{\boldsymbol{0}_{\mathcal{\not{A}}_{4}}}\\\ \Large{\boldsymbol{0}_{\mathcal{\not{A}}_{4}}}&\gamma_{N}\Lambda\boldsymbol{y_{n}}\end{matrix}\right)$
(27)
in the $(\nu_{L},\bar{\nu}_{L},\eta_{L},\chi_{L})$ $(P_{\mathcal{R}}=+)$ and
$(\tilde{h}_{uL}^{0},\tilde{h}_{dL}^{0},\tilde{N}_{L})$ $(P_{\mathcal{R}}=-)$
bases, respectively, where the bar over $\nu$ is notational, as in Martin (1997).
The entries marked $\boldsymbol{0}_{\mathcal{\not{A}}_{4}}$ are forbidden by $\mathcal{A}_{4}$ modular
invariance. Therefore, we conclude that the neutrinos are of Dirac type and do not
mix with the other neutral fermions, owing to the modular $\mathcal{A}_{4}$ symmetry and
$P_{\mathcal{R}}$-parity conservation.
Considering the constraints from Eq. (12b) for $(Y_{\nu})_{1,2}$, and also
assuming $M_{\widetilde{N}}=M_{\chi_{1}}$, we have $\widetilde{y}_{l}\lesssim
1/\sqrt{2}$ from Eq. (16a). Now, assuming the mass of the $\chi$'s to be of
$\mathcal{O}(1)$ TeV, we need $Y_{\nu}\simeq 3.78\times 10^{-8}$ to obtain the observed
baryon asymmetry. We thus choose small $\beta_{a}$ and $\tilde{\beta}_{a}$
values in our numerical calculation below to satisfy this condition for
$(Y_{\nu})_{1(2)}\equiv\beta_{a}\boldsymbol{y_{\nu}}(\tilde{\beta}_{a}\boldsymbol{y_{\nu}})$.
## V Neutrino analysis and discussion
In this section, we perform a numerical $\Delta\chi^{2}$ analysis searching for the
allowed region satisfying the neutrino oscillation data and the LFV constraints. We also show
our predictions, where we apply the best-fit values for the charged-lepton masses.
Here, we concentrate on the NH case, since the IH case is disfavored, as suggested
by the analytic estimation of the previous section.
In our numerical analysis, we randomly scan free parameters in the following
ranges
$\displaystyle\\{\tilde{\alpha}_{2},\tilde{\alpha}_{3}\\}\in[10^{-4},1.0],\quad\\{\beta_{1,2,3},\tilde{\beta}_{1,2,3}\\}\in[10^{-10},10^{-6}],\quad\sin\theta_{H_{12,13,23}}\in[-0.5,0.5],$
$\displaystyle\\{m_{H_{1}},M_{0}\\}\in[10^{3},10^{4}]\ {\rm GeV},\quad
m_{H_{2}}\in[1.0,1.1]M_{0},\quad m_{H_{3}}\in[1.1,1.2]M_{0},$ (28)
where $\tau$ runs over the fundamental region. Here we choose the small
$\beta_{a}(\tilde{\beta}_{a})$ values that are required to realize the baryon
asymmetry, as indicated by Eq. 12c. Then, we perform the numerical analysis and
discuss the results below. In Fig. 4, we show the allowed region in the real and
imaginary parts of $\tau$, where the blue points are allowed within
$\sqrt{\Delta\chi^{2}}\leq 2$, the green ones within $3$, and the red ones within $5$,
for the five accurately known observables $\Delta m^{2}_{\rm atm},\Delta m^{2}_{\rm
sol},s_{12}^{2},s_{23}^{2},s_{13}^{2}$ of NuFit 5.0 Esteban _et al._ (2019,
2020, ). The real part of $\tau$ runs over the whole range of the fundamental
region, while the imaginary part lies in the range $[1.0,1.7]$.
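A skeleton of such a random scan is sketched below (our own illustration; `predict_observables` is a hypothetical routine that would implement Eqs. (24)-(26), and the oscillation targets are approximate NuFit 5.0 numbers quoted only for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)

# approximate NuFit 5.0 NH best-fit values and 1-sigma errors of
# (dm2_atm, dm2_sol, s12^2, s23^2, s13^2) -- illustrative numbers only
obs_bf  = np.array([2.51e-3, 7.42e-5, 0.304, 0.573, 0.0222])
obs_err = np.array([0.03e-3, 0.21e-5, 0.012, 0.020, 0.0007])

def chi2(pred):
    return np.sum(((pred - obs_bf) / obs_err) ** 2)

def sample_point():
    """Draw one parameter point from the scan ranges of Eq. (28) (simplified)."""
    return {
        "alpha_t": 10 ** rng.uniform(-4, 0, size=2),            # alpha~_2, alpha~_3
        "betas":   10 ** rng.uniform(-10, -6, size=6),          # beta_a, beta~_a
        "sinH":    rng.uniform(-0.5, 0.5, size=3),              # s_H12, s_H13, s_H23
        "mH1_M0":  10 ** rng.uniform(3, 4, size=2),             # m_H1, M_0 in GeV
        "tau":     rng.uniform(-0.5, 0.5) + 1j * rng.uniform(1.0, 2.0),
    }

# the scan loop; predict_observables(p) would implement Eqs. (24)-(26)
# best = min((chi2(predict_observables(p)), p) for p in (sample_point() for _ in range(10**5)))
```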
Figure 4: Allowed region of the modulus $\tau$, where the blue points are allowed
within $\sqrt{\Delta\chi^{2}}\leq 2$, the green ones within $3$, and the red ones within $5$ in the
$\Delta\chi^{2}$ analysis.
Fig. 5 demonstrates the correlation between the sum of the neutrino mass
eigenvalues ($\sum m_{i}$ in eV) and the Dirac CP phase $\delta_{CP}$. The color legend is
the same as in Fig. 4. The Dirac CP phase runs over the whole range, while $\sum m_{i}$ tends to
be localized around $0.06$ eV. This implies that the lightest neutrino mass
is very small compared to the other two masses.
Figure 5: Correlation between the sum of the neutrino mass eigenvalues ($\sum
m_{i}$ in eV) and the Dirac CP phase $\delta_{CP}$, where the color legend is the same as in
Fig. 4.
Fig. 6 shows the relations between the LFV branching ratios and $s^{2}_{23}$. All three
LFV branching ratios are below $10^{-19}$, which is much smaller than the current
experimental upper bounds.
Figure 6: Relations between the LFV branching ratios and $s^{2}_{23}$, where the color legend is the
same as in Fig. 4.
Finally, we show a benchmark point in Table 4, selected so that
$\sqrt{\Delta\chi^{2}}$ is minimized. The diagonalization matrices of the dimensionless
neutrino and charged-lepton mass matrices are found to be
$\displaystyle V_{e_{L}}$
$\displaystyle=\left[\begin{array}[]{ccc}-0.200&0.959&0.200\\\
-0.0596+0.0129i&0.187-0.0407i&-0.957+0.208i\\\
-0.955-0.211i&-0.203-0.0450i&0.0197+0.00429i\\\ \end{array}\right],$ (32)
$\displaystyle U_{\nu}$
$\displaystyle=\left[\begin{array}[]{ccc}0.338&0.730&-0.594\\\
-0.206-0.132i&-0.355-0.412i&-0.554-0.582i\\\
-0.906-0.0648i&0.408+0.0710i&-0.0147+0.0505i\\\ \end{array}\right].$ (36)
$\tau$ | $0.102244+1.61566i$
---|---
$[s_{H_{12}},s_{H_{23}},s_{H_{13}}]$ | $[0.393,0.0975,0.00692]$
$[\alpha_{\ell},\beta_{\ell},\gamma_{\ell}]$ | $[0.0596,0.000298,0.979]$
$[\alpha_{1},\tilde{\alpha}_{2},\tilde{\alpha}_{3}]$ | $[6.64\times 10^{-7},0.00966+0.000711i,0.00272+0.758i]$
$[\beta_{1},\beta_{2},\beta_{3}]$ | $[(17.9-3.90i)\times 10^{-10},(16.1-2.82i)\times 10^{-9},(208+2.83i)\times 10^{-9}]$
$[\tilde{\beta}_{1},\tilde{\beta}_{2},\tilde{\beta}_{3}]$ | $[(7.42+5.70i)\times 10^{-10},(175-2.74i)\times 10^{-9},(769+9.91i)\times 10^{-10}]$
$[m_{H_{1}},m_{H_{2}},m_{H_{3}}]$ | $[4661,4871,5493]\ {\rm GeV}$
$[M_{0},D_{N_{1}},D_{N_{2}},D_{N_{3}}]$ | $[8100,6340,10062,16402]\ {\rm GeV}$
$\Delta m^{2}_{\rm atm}$ | $2.52\times 10^{-3}{\rm eV}^{2}$
$\Delta m^{2}_{\rm sol}$ | $7.53\times 10^{-5}{\rm eV}^{2}$
$\sin^{2}\theta_{12}$ | $0.295$
$\sin^{2}\theta_{23}$ | $0.451$
$\sin^{2}\theta_{13}$ | $0.0219$
$\delta_{CP}$ | $18.0^{\circ}$
$\sum m_{i}$ | $59.0$ meV
$\sqrt{\Delta\chi^{2}}$ | $1.32$
Table 4: A benchmark point of our input parameters and observables, where we
select it so that $\sqrt{\Delta\chi^{2}}$ is minimum.
## VI Discussion
Our model is capable of accommodating multiparticle dark matter (DM) due to the
presence of the $P_{\mathcal{R}}$ and modular $\mathcal{A}_{4}$ symmetry
invariances. A $P_{\mathcal{R}}$ _odd_ candidate
($\widetilde{N}_{L},\widetilde{\eta}_{s},\widetilde{\chi}_{s}$) is the
canonical SUSY WIMP, which in our model is connected to radiative neutrino
mass generation via the scotogenic mechanism. The second component of the
multiparticle DM is guaranteed by the accidental $\mathbb{Z}_{2}$ discrete
symmetry which is induced by the modular $\mathcal{A}_{4}$ invariance of the
model. Here the possible DM candidates are odd under this $\mathbb{Z}_{2}$
symmetry and even under $P_{\mathcal{R}}$; these are the
$N_{s},\eta^{0}_{L},\chi_{L}$ superfields. Furthermore, in our scenario we
assume $\chi_{L}$ to be heavier in order to generate the asymmetry in the right-
handed sector. The mixture of $\eta^{0}_{L},\chi_{L}$ (as shown in eq. 27) is
not a singlet under the SM gauge symmetry; therefore $N_{s}$, which is an SM singlet,
is the best DM candidate odd under the accidental $\mathbb{Z}_{2}$ symmetry.
Further details on the dark matter abundance and direct detection are left for
future work.
## VII Conclusion
In this work a Dirac radiative neutrino mass model based on modular
$\mathcal{A}_{4}$ symmetry was presented. Being a scotogenic neutrino mass
model, it demonstrates a natural connection between the origin of naturally small Dirac
neutrino masses and the existence of dark matter. The modular $\mathcal{A}_{4}$
symmetry in this work achieves three main goals simultaneously: first, the
Dirac nature of the neutrinos, i.e., it forbids all Majorana neutrino mass terms;
second, it builds a connection between neutrinos and dark matter through the well-
known scotogenic mechanism; and finally, it predicts and reproduces the Dirac phase
as well as the neutrino mass splittings in the leptonic sector. This model favors the
normal neutrino mass hierarchy, while it disfavors the inverted one. The heavy
neutral, dark fermions are of the order of $\mathcal{O}(1-10)$ TeV, the dark
neutral scalars are of the order of $\mathcal{O}(1-5)$ TeV, whereas the light
neutrino masses are of the order of $0.1$ eV, and the sum of the neutrino masses
is around $60$ meV. Even though the neutrinos are Dirac and lepton number is
conserved in our model, we achieve the matter asymmetry of the universe by
means of lepton number violation in the right$-$handed neutrino sector via
a mechanism known as _neutrinogenesis_. The required phase in the combination
of the lepton-sector Yukawa couplings is of the order $\mathcal{O}(10^{-6})$.
The model is able to predict the preferred Dirac phase in the leptonic sector, as well
as the neutrino mass splittings and mass ordering, and to accommodate multicomponent dark
matter, thanks to _R_ $-$parity and the accidental scotogenic $\mathbb{Z}_{2}$
symmetry, with a minimal set of input parameters. This is the first model of
Dirac scotogenic neutrino mass based on modular symmetries.
###### Acknowledgements.
This research was supported by an appointment to the JRG Program at the APCTP
through the Science and Technology Promotion Fund and Lottery Fund of the
Korean Government. This was also supported by the Korean Local Governments -
Gyeongsangbuk-do Province and Pohang City (H.O.). A.D. and O.P. are supported
by the National Research Foundation of Korea Grants No. 2017K1A3A7A09016430
and No. 2017R1A2B4006338. OP is also supported by the Samsung Science and
Technology Foundation under Grant No. SSTF-BA1602-04 and National Research
Foundation of Korea under Grant Number 2018R1A2B6007000. Feynman diagrams were
created using the TikZ-Feynman package Ellis (2017).
## References
* Feruglio (2019) F. Feruglio, in _From My Vast Repertoire …: Guido Altarelli’s Legacy_ , edited by A. Levy, S. Forte, and G. Ridolfi (2019) pp. 227–266, arXiv:1706.08749 [hep-ph] .
* de Adelhart Toorop _et al._ (2012) R. de Adelhart Toorop, F. Feruglio, and C. Hagedorn, Nucl. Phys. B858, 437 (2012), arXiv:1112.1340 [hep-ph] .
* Criado and Feruglio (2018) J. C. Criado and F. Feruglio, SciPost Phys. 5, 042 (2018), arXiv:1807.01125 [hep-ph] .
* Kobayashi _et al._ (2018a) T. Kobayashi, N. Omoto, Y. Shimizu, K. Takagi, M. Tanimoto, and T. H. Tatsuishi, JHEP 11, 196 (2018a), arXiv:1808.03012 [hep-ph] .
* Okada and Tanimoto (2019a) H. Okada and M. Tanimoto, Phys. Lett. B791, 54 (2019a), arXiv:1812.09677 [hep-ph] .
* Nomura and Okada (2019) T. Nomura and H. Okada, Phys. Lett. B 797, 134799 (2019), arXiv:1904.03937 [hep-ph] .
* Okada and Tanimoto (2019b) H. Okada and M. Tanimoto, (2019b), arXiv:1905.13421 [hep-ph] .
* de Anda _et al._ (2018) F. J. de Anda, S. F. King, and E. Perdomo, (2018), arXiv:1812.05620 [hep-ph] .
* Novichkov _et al._ (2019a) P. P. Novichkov, S. T. Petcov, and M. Tanimoto, Phys. Lett. B793, 247 (2019a), arXiv:1812.11289 [hep-ph] .
* Nomura and Okada (2021a) T. Nomura and H. Okada, Nucl. Phys. B 966, 115372 (2021a), arXiv:1906.03927 [hep-ph] .
* Okada and Orikasa (2019a) H. Okada and Y. Orikasa, (2019a), arXiv:1907.13520 [hep-ph] .
* Ding _et al._ (2019a) G.-J. Ding, S. F. King, and X.-G. Liu, (2019a), arXiv:1907.11714 [hep-ph] .
* Nomura _et al._ (2020) T. Nomura, H. Okada, and O. Popov, Phys. Lett. B 803, 135294 (2020), arXiv:1908.07457 [hep-ph] .
* Kobayashi _et al._ (2019a) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, and T. H. Tatsuishi, Phys. Rev. D 100, 115045 (2019a), [Erratum: Phys.Rev.D 101, 039904 (2020)], arXiv:1909.05139 [hep-ph] .
* Asaka _et al._ (2020a) T. Asaka, Y. Heo, T. H. Tatsuishi, and T. Yoshida, JHEP 01, 144 (2020a), arXiv:1909.06520 [hep-ph] .
* Zhang (2019) D. Zhang, (2019), arXiv:1910.07869 [hep-ph] .
* Ding _et al._ (2019b) G.-J. Ding, S. F. King, X.-G. Liu, and J.-N. Lu, JHEP 12, 030 (2019b), arXiv:1910.03460 [hep-ph] .
* Kobayashi _et al._ (2020a) T. Kobayashi, T. Nomura, and T. Shimomura, Phys. Rev. D 102, 035019 (2020a), arXiv:1912.00637 [hep-ph] .
* Nomura _et al._ (2021a) T. Nomura, H. Okada, and S. Patra, Nucl. Phys. B 967, 115395 (2021a), arXiv:1912.00379 [hep-ph] .
* Wang (2020) X. Wang, Nucl. Phys. B 957, 115105 (2020), arXiv:1912.13284 [hep-ph] .
* Okada and Shoji (2020) H. Okada and Y. Shoji, Nucl. Phys. B 961, 115216 (2020), arXiv:2003.13219 [hep-ph] .
* Okada and Tanimoto (2020) H. Okada and M. Tanimoto, (2020), arXiv:2005.00775 [hep-ph] .
* Behera _et al._ (2020a) M. K. Behera, S. Singirala, S. Mishra, and R. Mohanta, (2020a), arXiv:2009.01806 [hep-ph] .
* Behera _et al._ (2020b) M. K. Behera, S. Mishra, S. Singirala, and R. Mohanta, (2020b), arXiv:2007.00545 [hep-ph] .
* Nomura and Okada (2020a) T. Nomura and H. Okada, (2020a), arXiv:2007.04801 [hep-ph] .
* Nomura and Okada (2020b) T. Nomura and H. Okada, (2020b), arXiv:2007.15459 [hep-ph] .
* Asaka _et al._ (2020b) T. Asaka, Y. Heo, and T. Yoshida, Phys. Lett. B 811, 135956 (2020b), arXiv:2009.12120 [hep-ph] .
* Okada and Tanimoto (2021a) H. Okada and M. Tanimoto, Phys. Rev. D 103, 015005 (2021a), arXiv:2009.14242 [hep-ph] .
* Nagao and Okada (2020) K. I. Nagao and H. Okada, (2020), arXiv:2010.03348 [hep-ph] .
* Okada and Tanimoto (2021b) H. Okada and M. Tanimoto, JHEP 03, 010 (2021b), arXiv:2012.01688 [hep-ph] .
* Yao _et al._ (2021a) C.-Y. Yao, J.-N. Lu, and G.-J. Ding, JHEP 05, 102 (2021a), arXiv:2012.13390 [hep-ph] .
* Chen _et al._ (2021) P. Chen, G.-J. Ding, and S. F. King, JHEP 04, 239 (2021), arXiv:2101.12724 [hep-ph] .
* Kashav and Verma (2021) M. Kashav and S. Verma, JHEP 09, 100 (2021), arXiv:2103.07207 [hep-ph] .
* Okada _et al._ (2021) H. Okada, Y. Shimizu, M. Tanimoto, and T. Yoshida, JHEP 07, 184 (2021), arXiv:2105.14292 [hep-ph] .
* de Medeiros Varzielas and Lourenço (2021) I. de Medeiros Varzielas and J. a. Lourenço, (2021), arXiv:2107.04042 [hep-ph] .
* Nomura _et al._ (2021b) T. Nomura, H. Okada, and Y. Orikasa, Eur. Phys. J. C 81, 947 (2021b), arXiv:2106.12375 [hep-ph] .
* Hutauruk _et al._ (2020) P. T. P. Hutauruk, D. W. Kang, J. Kim, and H. Okada, (2020), arXiv:2012.11156 [hep-ph] .
* Ding _et al._ (2021a) G.-J. Ding, S. F. King, and J.-N. Lu, (2021a), arXiv:2108.09655 [hep-ph] .
* Nagao and Okada (2021) K. I. Nagao and H. Okada, (2021), arXiv:2108.09984 [hep-ph] .
* Okada and Qi (2021) H. Okada and Y.-h. Qi, (2021), arXiv:2109.13779 [hep-ph] .
* Kobayashi _et al._ (2018b) T. Kobayashi, K. Tanaka, and T. H. Tatsuishi, Phys. Rev. D 98, 016004 (2018b), arXiv:1803.10391 [hep-ph] .
* Kobayashi _et al._ (2019b) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, T. H. Tatsuishi, and H. Uchida, Phys. Lett. B 794, 114 (2019b), arXiv:1812.11072 [hep-ph] .
* Kobayashi _et al._ (2019c) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, and T. H. Tatsuishi, (2019c), arXiv:1906.10341 [hep-ph] .
* Okada and Orikasa (2019b) H. Okada and Y. Orikasa, Phys. Rev. D 100, 115037 (2019b), arXiv:1907.04716 [hep-ph] .
* Mishra (2020) S. Mishra, (2020), arXiv:2008.02095 [hep-ph] .
* Du and Wang (2021) X. Du and F. Wang, JHEP 02, 221 (2021), arXiv:2012.01397 [hep-ph] .
* Penedo and Petcov (2019) J. T. Penedo and S. T. Petcov, Nucl. Phys. B939, 292 (2019), arXiv:1806.11040 [hep-ph] .
* Novichkov _et al._ (2019b) P. P. Novichkov, J. T. Penedo, S. T. Petcov, and A. V. Titov, JHEP 04, 005 (2019b), arXiv:1811.04933 [hep-ph] .
* Kobayashi _et al._ (2019d) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, and T. H. Tatsuishi, (2019d), arXiv:1907.09141 [hep-ph] .
* King and Zhou (2019) S. F. King and Y.-L. Zhou, (2019), arXiv:1908.02770 [hep-ph] .
* Okada and Orikasa (2019c) H. Okada and Y. Orikasa, (2019c), arXiv:1908.08409 [hep-ph] .
* Criado _et al._ (2020) J. C. Criado, F. Feruglio, and S. J. D. King, JHEP 02, 001 (2020), arXiv:1908.11867 [hep-ph] .
* Wang and Zhou (2020) X. Wang and S. Zhou, JHEP 05, 017 (2020), arXiv:1910.09473 [hep-ph] .
* Zhao and Zhang (2021) Y. Zhao and H.-H. Zhang, JHEP 03, 002 (2021), arXiv:2101.02266 [hep-ph] .
* King and Zhou (2021) S. F. King and Y.-L. Zhou, JHEP 04, 291 (2021), arXiv:2103.02633 [hep-ph] .
* Ding _et al._ (2021b) G.-J. Ding, S. F. King, and C.-Y. Yao, Phys. Rev. D 104, 055034 (2021b), arXiv:2103.16311 [hep-ph] .
* Zhang and Zhou (2021) X. Zhang and S. Zhou, JCAP 09, 043 (2021), arXiv:2106.03433 [hep-ph] .
* Qu _et al._ (2021) B.-Y. Qu, X.-G. Liu, P.-T. Chen, and G.-J. Ding, Phys. Rev. D 104, 076001 (2021), arXiv:2106.11659 [hep-ph] .
* Nomura and Okada (2021b) T. Nomura and H. Okada, (2021b), arXiv:2109.04157 [hep-ph] .
* Novichkov _et al._ (2019c) P. P. Novichkov, J. T. Penedo, S. T. Petcov, and A. V. Titov, JHEP 04, 174 (2019c), arXiv:1812.02158 [hep-ph] .
* Ding _et al._ (2019c) G.-J. Ding, S. F. King, and X.-G. Liu, (2019c), arXiv:1903.12588 [hep-ph] .
* Wang _et al._ (2021) X. Wang, B. Yu, and S. Zhou, Phys. Rev. D 103, 076005 (2021), arXiv:2010.10159 [hep-ph] .
* Yao _et al._ (2021b) C.-Y. Yao, X.-G. Liu, and G.-J. Ding, Phys. Rev. D 103, 095013 (2021b), arXiv:2011.03501 [hep-ph] .
* Wang and Zhou (2021) X. Wang and S. Zhou, JHEP 07, 093 (2021), arXiv:2102.04358 [hep-ph] .
* Behera and Mohanta (2021) M. K. Behera and R. Mohanta, (2021), arXiv:2108.01059 [hep-ph] .
* Baur _et al._ (2019a) A. Baur, H. P. Nilles, A. Trautner, and P. K. S. Vaudrevange, Phys. Lett. B 795, 7 (2019a), arXiv:1901.03251 [hep-th] .
* De Medeiros Varzielas _et al._ (2019) I. De Medeiros Varzielas, S. F. King, and Y.-L. Zhou, (2019), arXiv:1906.02208 [hep-ph] .
* Liu and Ding (2019) X.-G. Liu and G.-J. Ding, JHEP 08, 134 (2019), arXiv:1907.01488 [hep-ph] .
* Chen _et al._ (2020a) P. Chen, G.-J. Ding, J.-N. Lu, and J. W. F. Valle, Phys. Rev. D 102, 095014 (2020a), arXiv:2003.02734 [hep-ph] .
* Li _et al._ (2021) C.-C. Li, X.-G. Liu, and G.-J. Ding, JHEP 10, 238 (2021), arXiv:2108.02181 [hep-ph] .
* Novichkov _et al._ (2021a) P. P. Novichkov, J. T. Penedo, and S. T. Petcov, Nucl. Phys. B 963, 115301 (2021a), arXiv:2006.03058 [hep-ph] .
* Liu _et al._ (2021) X.-G. Liu, C.-Y. Yao, and G.-J. Ding, Phys. Rev. D 103, 056013 (2021), arXiv:2006.10722 [hep-ph] .
* Kikuchi _et al._ (2020) S. Kikuchi, T. Kobayashi, H. Otsuka, S. Takada, and H. Uchida, JHEP 11, 101 (2020), arXiv:2007.06188 [hep-th] .
* Almumin _et al._ (2021) Y. Almumin, M.-C. Chen, V. Knapp-Pérez, S. Ramos-Sánchez, M. Ratz, and S. Shukla, JHEP 05, 078 (2021), arXiv:2102.11286 [hep-th] .
* Ding _et al._ (2021c) G.-J. Ding, F. Feruglio, and X.-G. Liu, SciPost Phys. 10, 133 (2021c), arXiv:2102.06716 [hep-ph] .
* Feruglio _et al._ (2021) F. Feruglio, V. Gherardi, A. Romanino, and A. Titov, JHEP 05, 242 (2021), arXiv:2101.08718 [hep-ph] .
* Kikuchi _et al._ (2021) S. Kikuchi, T. Kobayashi, and H. Uchida, Phys. Rev. D 104, 065008 (2021), arXiv:2101.00826 [hep-th] .
* Novichkov _et al._ (2021b) P. P. Novichkov, J. T. Penedo, and S. T. Petcov, JHEP 04, 206 (2021b), arXiv:2102.07488 [hep-ph] .
* Note (1) For interested readers, some literature reviews would be useful to understand the non-Abelian group and its applications to flavor structure Altarelli and Feruglio (2010); Ishimori _et al._ (2010, 2012); Hernandez and Smirnov (2012); King and Luhn (2013); King _et al._ (2014); King (2017); Petcov (2018).
* Kobayashi _et al._ (2021) T. Kobayashi, H. Okada, and Y. Orikasa, (2021), arXiv:2111.05674 [hep-ph] .
* Baur _et al._ (2019b) A. Baur, H. P. Nilles, A. Trautner, and P. K. S. Vaudrevange, Nucl. Phys. B 947, 114737 (2019b), arXiv:1908.00805 [hep-th] .
* Kobayashi _et al._ (2020b) T. Kobayashi, Y. Shimizu, K. Takagi, M. Tanimoto, T. H. Tatsuishi, and H. Uchida, Phys. Rev. D 101, 055046 (2020b), arXiv:1910.11553 [hep-ph] .
* Novichkov _et al._ (2019d) P. P. Novichkov, J. T. Penedo, S. T. Petcov, and A. V. Titov, JHEP 07, 165 (2019d), arXiv:1905.11970 [hep-ph] .
* Chen _et al._ (2020b) M.-C. Chen, S. Ramos-Sánchez, and M. Ratz, Phys. Lett. B 801, 135153 (2020b), arXiv:1909.06910 [hep-ph] .
* de Medeiros Varzielas _et al._ (2020) I. de Medeiros Varzielas, M. Levy, and Y.-L. Zhou, JHEP 11, 085 (2020), arXiv:2008.05329 [hep-ph] .
* Dreiner _et al._ (2010) H. K. Dreiner, H. E. Haber, and S. P. Martin, Phys. Rept. 494, 1 (2010), arXiv:0812.1594 [hep-ph] .
* Ma and Popov (2017) E. Ma and O. Popov, Phys. Lett. B764, 142 (2017), arXiv:1609.02538 [hep-ph] .
* Note (2) The matter$-$parity $(3(B-L))$ twin ($-$,$+$) particles are $(L,H_{d}),(\eta,H_{u}),(\bar{\nu}\chi,N)$.
* Sakharov (1967) A. D. Sakharov, Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967), [Usp. Fiz. Nauk161,no.5,61(1991)].
* Nanopoulos and Weinberg (1979) D. V. Nanopoulos and S. Weinberg, Phys. Rev. D20, 2484 (1979).
* Adhikari and Rangarajan (2002) R. Adhikari and R. Rangarajan, Phys. Rev. D65, 083504 (2002), arXiv:hep-ph/0110387 [hep-ph] .
* Martin (1997) S. P. Martin, , 1 (1997), [Adv. Ser. Direct. High Energy Phys.18,1(1998)], arXiv:hep-ph/9709356 [hep-ph] .
* Esteban _et al._ (2019) I. Esteban, M. C. Gonzalez-Garcia, A. Hernandez-Cabezudo, M. Maltoni, and T. Schwetz, JHEP 01, 106 (2019), arXiv:1811.05487 [hep-ph] .
* Esteban _et al._ (2020) I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz, and A. Zhou, JHEP 09, 178 (2020), arXiv:2007.14792 [hep-ph] .
* (95) I. Esteban, M. C. Gonzalez-Garcia, A. Hernandez-Cabezudo, M. Maltoni, and T. Schwetz, NuFIT 5.0 (2029), www.nu-fit.org, (2020) .
* Ellis (2017) J. Ellis, Comput. Phys. Commun. 210, 103 (2017), arXiv:1601.05437 [hep-ph] .
* Altarelli and Feruglio (2010) G. Altarelli and F. Feruglio, Rev. Mod. Phys. 82, 2701 (2010), arXiv:1002.0211 [hep-ph] .
* Ishimori _et al._ (2010) H. Ishimori, T. Kobayashi, H. Ohki, Y. Shimizu, H. Okada, and M. Tanimoto, Prog. Theor. Phys. Suppl. 183, 1 (2010), arXiv:1003.3552 [hep-th] .
* Ishimori _et al._ (2012) H. Ishimori, T. Kobayashi, H. Ohki, H. Okada, Y. Shimizu, and M. Tanimoto, _An introduction to non-Abelian discrete symmetries for particle physicists_ , Vol. 858 (2012).
* Hernandez and Smirnov (2012) D. Hernandez and A. Y. Smirnov, Phys. Rev. D 86, 053014 (2012), arXiv:1204.0445 [hep-ph] .
* King and Luhn (2013) S. F. King and C. Luhn, Rept. Prog. Phys. 76, 056201 (2013), arXiv:1301.1340 [hep-ph] .
* King _et al._ (2014) S. F. King, A. Merle, S. Morisi, Y. Shimizu, and M. Tanimoto, New J. Phys. 16, 045018 (2014), arXiv:1402.4271 [hep-ph] .
* King (2017) S. F. King, Prog. Part. Nucl. Phys. 94, 217 (2017), arXiv:1701.04413 [hep-ph] .
* Petcov (2018) S. T. Petcov, Eur. Phys. J. C 78, 709 (2018), arXiv:1711.10806 [hep-ph] .
# On the structure vector field in lightlike hypersurfaces
Samuel Ssekajja School of Mathematics
University of the Witwatersrand
Private Bag 3, Wits 2050
South Africa<EMAIL_ADDRESS>
###### Abstract.
We study lightlike hypersurfaces of an indefinite almost contact metric-
manifold $\bar{M}$. We prove that there are only two types of such
hypersurfaces, known as ascreen and inascreen, with respect to the position of
the structure vector field of $\bar{M}$. We also show that the second class of
hypersurfaces naturally admits an almost Hermitian structure.
###### Key words and phrases:
Lightlike hypersurfaces, Ascreen hypersurfaces, Inascreen hypersurfaces
###### 2010 Mathematics Subject Classification:
Primary 53C25; Secondary 53C40, 53C50
## 1\. Introduction
Lightlike hypersurfaces, $M$, of indefinite almost contact metric manifolds,
$\bar{M}$, have been studied by many authors. Among those are the following
[1], [3], [4], [5], [6], [7], [8], [9] and [10]. A lot of research has been
done on such hypersurfaces to date, and most of the work is on tangential
hypersurfaces, that is, those which are tangent to the structure vector field
of $\bar{M}$. Although it is relatively easy to study tangential lightlike
hypersurfaces, it has been shown that most of the well-known classes of lightlike
hypersurfaces are non-existent in this case. For example, these hypersurfaces
cannot be totally umbilical (although they have been assumed to be totally
umbilical in the paper [9]), totally screen umbilical or screen conformal.
This shows that the position of the structure vector field relative to each
hypersurface has a great impact on the underlying geometry. In an effort to
extend the study of lightlike hypersurfaces to non-tangential ones, D. H. Jin
introduced a new class of lightlike hypersurfaces and named it ascreen [5, 6,
7, 8]. In this class of hypersurfaces, the structure vector field is nowhere
tangent to the hypersurface. In fact, this vector field lies in the orthogonal
complement of the screen distribution, over $M$, in the ambient space. Some
results have been proved regarding this class of hypersurfaces. Some of those
results can be seen in the above articles. A natural question arises here:
###### Question 1.1.
Given a lightlike hypersurface $M$ of an indefinite almost contact metric-
manifold $\bar{M}$. Is it possible to precisely locate the position of the
structure vector field of $\bar{M}$ relative to $M$?
Our answer to this question is affirmative. In fact, we prove, in Theorem 3.5,
that there are only two types of such hypersurfaces, i.e. the ascreen ones and what
we have named inascreen. The second class (inascreen) also includes the well-
known tangential lightlike hypersurfaces. Finally, we prove that this class of
hypersurfaces admits an almost Hermitian structure (see Theorem 4.5).
## 2\. Preliminaries
An odd-dimensional semi-Riemannian manifold $(\bar{M},\bar{g})$ is called an
almost contact metric-manifold [11, 12] if there exist a $(1,1)$ tensor field
$\bar{\phi}$, a vector field $\zeta$, called the structure vector field, and a
1-form $\eta$ such that
$\displaystyle\eta(\zeta)=1\quad\mbox{and}\quad\bar{\phi}^{2}=-I+\eta\otimes\zeta,$
(2.1)
where $I$ denotes the identity transformation. From (2.1), we have
$\displaystyle\bar{\phi}\zeta=0\quad\mbox{and}\quad\eta\circ\bar{\phi}=0.$
(2.2)
Furthermore, on an almost contact metric manifold $\bar{M}$ the following holds
$\displaystyle\bar{g}(\bar{\phi}X,\bar{\phi}Y)$
$\displaystyle=\bar{g}(X,Y)-\eta(X)\eta(Y),$ (2.3)
for any $X$ and $Y$ tangent to $\bar{M}$. It follows from (2.1) and (2.3) that
$\displaystyle\bar{g}(X,\zeta)=\eta(X).$ (2.4)
We can also see from (2.1), (2.2) and (2.3) that $\bar{\phi}$ is skew-
symmetric with respect to $\bar{g}$, i.e.
$\displaystyle\bar{g}(\bar{\phi}X,Y)=-\bar{g}(X,\bar{\phi}Y),$ (2.5)
for any $X$ and $Y$ tangent to $\bar{M}$.
Let $(\bar{M},\bar{g})$ be a semi-Riemannian manifold, and let $(M,g)$ be a
hypersurface of $\bar{M}$, where $g=\bar{g}|_{M}$ is the induced metric tensor
on $M$. We call $M$ a lightlike hypersurface if the normal bundle $TM^{\perp}$
of $M$ is a vector subbundle of the tangent bundle $TM$, of rank 1. Moreover,
it is known [2, 3] that the complementary bundle to $TM^{\perp}$ in $TM$,
called the screen distribution and denoted by $S(TM)$, is non-degenerate and
the following decomposition holds;
$\displaystyle TM=S(TM)\perp TM^{\perp},$ (2.6)
where $\perp$ denotes the orthogonal direct sum. A lightlike hypersurface $M$
with a chosen screen distribution will be denoted by $M=(M,g,S(TM))$. Then,
there exists a unique vector bundle $\mathrm{tr}(TM)$, called the lightlike
transversal bundle of $M$ with respect to $S(TM)$, of rank 1 over $M$ such
that for any non-zero section $\xi$ of $TM^{\perp}$ on a coordinate
neighbourhood $\mathcal{U}\subset M$, there exists a unique section $N$ of
$\mathrm{tr}(TM)$ on $\mathcal{U}$ satisfying the conditions
$\displaystyle\bar{g}(\xi,N)=1\quad\mbox{and}\quad\bar{g}(N,N)=\bar{g}(N,Z)=0,$
(2.7)
for any $Z$ tangent to $S(TM)$. Consequently, we have the following
decomposition.
$\displaystyle T\bar{M}|_{M}$
$\displaystyle=S(TM)\perp\\{TM^{\perp}\oplus\mathrm{tr}(TM)\\}$ (2.8)
$\displaystyle=TM\oplus\mathrm{tr}(TM),$
where $\oplus$ denotes a direct sum, not necessarily orthogonal.
Let $\bar{\nabla}$ be the Levi-Civita connection of $\bar{M}$ and let $P$ be
the projection morphism of $TM$ onto $S(TM)$, with respect to (2.6). Then the
local Gauss-Weingarten equations of $M$ and $S(TM)$ are given by [2, 3].
$\displaystyle\bar{\nabla}_{X}Y=\nabla_{X}Y+B(X,Y)N,\quad\bar{\nabla}_{X}N=-A_{N}X+\tau(X)N,$
and
$\displaystyle\nabla_{X}PY=\nabla^{*}_{X}PY+C(X,PY)\xi,\quad\nabla_{X}\xi=-A^{*}_{\xi}X-\tau(X)\xi,$
respectively, for all $X$ and $Y$ tangent to $M$, $\xi$ tangent to
$TM^{\perp}$ and $N$ tangent to $\mathrm{tr}(TM)$. $\nabla$ and $\nabla^{*}$
are the induced connections on $TM$ and $S(TM)$, respectively. $B$ and $C$ are
the local second fundamental forms of $M$ and $S(TM)$, respectively.
Furthermore, $A_{N}$ and $A^{*}_{\xi}$ are the shape operators of $TM$ and
$S(TM)$ respectively, while $\tau$ is a 1-form on $TM$. Moreover,
$\displaystyle g(A^{*}_{\xi}X,Y)=B(X,Y)\quad\mbox{and}\quad
g(A_{N}X,PY)=C(X,PY),$
for any $X$ and $Y$ tangent to $M$. Although $\nabla^{*}$ is a metric
connection, $\nabla$ is generally not, and it satisfies the relation
$\displaystyle(\nabla_{X}g)(Y,Z)=B(X,Y)\theta(Z)+B(X,Z)\theta(Y),$
where $\theta$ is the 1-form given by $\theta(X)=\bar{g}(X,N)$, for all $X$, $Y$
and $Z$ tangent to $M$. For more details on lightlike hypersurfaces, we refer
the reader to the books [2, 3].
## 3\. A precise location of the structure vector field $\zeta$
Let $M$ be a lightlike hypersurface of an almost contact metric manifold
$\bar{M}$. Let $\xi$ and $N$ be the lightlike sections spanning the normal
bundle $TM^{\perp}$ and transversal bundle $\mathrm{tr}(TM)$ over $M$,
respectively. Since the structure vector field $\zeta$ is tangent to
$\bar{M}$, we decompose it according to (2.8) as
$\displaystyle\zeta=W+a\xi+bN,$ (3.1)
where $W$ is a smooth section of $S(TM)$, while $a$ and $b$ are smooth
functions on $M$. Using (2.7) and (2.4), we have
$\displaystyle a=\eta(N)\quad\mbox{and}\quad b=\eta(\xi).$ (3.2)
Furthermore, using the first relation in (2.1), together with (2.4) and (3.1),
we derive $g(W,W)+2ab=1$. We say that $M$ is tangent to $\zeta$ whenever
$b=\eta(\xi)=0$. In this case, C. Calin [1] has shown that $a=0$ too, i.e.
$\zeta$ belongs to $S(TM)$.
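For completeness, the identity $g(W,W)+2ab=1$ follows from a short computation (our own remark): by (2.1) and (2.4), $\bar{g}(\zeta,\zeta)=\eta(\zeta)=1$, while substituting (3.1) and using $\bar{g}(\xi,\xi)=\bar{g}(N,N)=0$, $\bar{g}(\xi,N)=1$ and $\bar{g}(W,\xi)=\bar{g}(W,N)=0$ (since $W$ is a section of $S(TM)$) gives
$\displaystyle 1=\bar{g}(W+a\xi+bN,W+a\xi+bN)=g(W,W)+2ab.$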
Let $M$ be a lightlike hypersurface of an indefinite almost contact metric
manifold $\bar{M}$. By virtue of relation (2.5), we have
$\bar{g}(\bar{\phi}\xi,\xi)=-\bar{g}(\xi,\bar{\phi}\xi)$, where $\xi$ is
tangent to $TM^{\perp}$. It follows that $\bar{g}(\bar{\phi}\xi,\xi)=0$, and
hence $\bar{\phi}\xi$ is always tangent to $M$. Let us choose a screen
distribution $S(TM)$ such that $\bar{\phi}\xi$ is tangent to it. Thus, by
(2.5), we have $\bar{g}(\bar{\phi}N,N)=0$ and
$\bar{g}(\bar{\phi}N,\xi)=-\bar{g}(N,\bar{\phi}\xi)=0$. These relations show
that $\bar{\phi}N$ is also tangent to $M$, and in particular belongs to
$S(TM)$. Furthermore, we have $g(\bar{\phi}\xi,\bar{\phi}N)=1-ab$. Hence,
$\bar{\phi}TM^{\perp}$ and $\bar{\phi}\mathrm{tr}(TM)$ are vector subbundles
of $S(TM)$ of rank 1. Thus, there exists a non-degenerate distribution
$D^{\prime}$, such that
$\displaystyle
S(TM)=\\{\bar{\phi}TM^{\perp}\oplus\bar{\phi}\mathrm{tr}(TM)\\}\perp
D^{\prime}.$ (3.3)
From (2.6), (2.8) and (3.3), the decompositions of $TM$ and $T\bar{M}$ become
$\displaystyle TM$
$\displaystyle=\\{\bar{\phi}TM^{\perp}\oplus\bar{\phi}\mathrm{tr}(TM)\\}\perp
D^{\prime}\perp TM^{\perp};$ (3.4) $\displaystyle T\bar{M}_{|M}$
$\displaystyle=\\{\bar{\phi}TM^{\perp}\oplus\bar{\phi}\mathrm{tr}(TM)\\}\perp
D^{\prime}\perp\\{TM^{\perp}\oplus\mathrm{tr}(TM)\\}.$ (3.5)
Furthermore, from (3.3), we can decompose $W$ as
$\displaystyle W=W^{\prime}+f_{1}\bar{\phi}N+f_{2}\bar{\phi}\xi,$ (3.6)
where $W^{\prime}$ is a smooth section of $D^{\prime}$, while $f_{1}$ and
$f_{2}$ are smooth functions on $M$.
###### Proposition 3.1.
$\bar{\phi}D^{\prime}\subset S(TM)$.
###### Proof.
On one hand, using (2.5) and (3.3), we derive
$\displaystyle\bar{g}(\bar{\phi}X^{\prime},\xi)=-\bar{g}(X^{\prime},\bar{\phi}\xi)=0,$
(3.7)
for every $X^{\prime}$ tangent to $D^{\prime}$. Relation (3.7) shows that
$\bar{\phi}X^{\prime}$ is tangent to $M$. On the other hand, we have
$\displaystyle\bar{g}(\bar{\phi}X^{\prime},N)=-\bar{g}(X^{\prime},\bar{\phi}N)=0,$
which, indeed, shows that $\bar{\phi}X^{\prime}$ is tangent to $S(TM)$. ∎
###### Proposition 3.2.
$D^{\prime}$ is $\bar{\phi}$-invariant, i.e. $\bar{\phi}D^{\prime}\subseteq
D^{\prime}$, if and only if one of the following holds:
1. (1)
$a=b=0$;
2. (2)
$W^{\prime}=0$.
###### Proof.
In view of the first relation in (2.3), we have
$\displaystyle\bar{g}(\bar{\phi}X^{\prime},\bar{\phi}N)=-a\eta(X^{\prime})\quad\mbox{and}\quad\bar{g}(\bar{\phi}X^{\prime},\bar{\phi}\xi)=-b\eta(X^{\prime}),$
(3.8)
for every $X^{\prime}$ tangent to $D^{\prime}$. Now, if $D^{\prime}$ is
$\bar{\phi}$-invariant then (3.8) gives
$\displaystyle a\eta(X^{\prime})=b\eta(X^{\prime})=0.$ (3.9)
Relation (3.9) shows that either $a=b=0$ or
$\eta(X^{\prime})=g(X^{\prime},W^{\prime})=0$, that is $W^{\prime}=0$. The
converse is obvious. ∎
###### Definition 3.3.
A lightlike hypersurface $M$ of an indefinite almost contact metric-manifold
$\bar{M}$ is said to be an ascreen [5, 6, 7, 8] lightlike hypersurface of
$\bar{M}$ if the vector field $\zeta$ belongs to
$S(TM)^{\perp}=TM^{\perp}\oplus\mathrm{tr}(TM)$.
In any ascreen lightlike hypersurface, $W^{\prime}=0$ and $a,b\neq 0$. It
follows that $D^{\prime}$ is $\bar{\phi}$-invariant. Moreover, the following
result about ascreen hypersurfaces is known.
###### Theorem 3.4 ([5]).
Let $M$ be a lightlike hypersurface of an indefinite almost contact metric-
manifold $\bar{M}$. Then $M$ is an ascreen lightlike hypersurface of $\bar{M}$
if and only if $\bar{\phi}TM^{\perp}=\bar{\phi}\mathrm{tr}(TM)$.
In the next theorem, we show that there are only two types of lightlike
hypersurfaces $M$ of an indefinite almost contact manifold $\bar{M}$ according
to the position of the structure vector field $\zeta$.
###### Theorem 3.5.
Let $M$ be a lightlike hypersurface of an indefinite almost contact metric-
manifold $\bar{M}$. Then, $\zeta$ takes exactly one of the following forms:
1. (1)
$\zeta=a\xi+bN$, i.e. $M$ is an ascreen hypersurface;
2. (2)
$\zeta=W^{\prime}+a\xi+bN$, where $W^{\prime}$ is a non-zero section of
$D^{\prime}$.
###### Proof.
From relations (3.1) and (3.6), $\zeta$ can be written as
$\displaystyle\zeta=W^{\prime}+f_{1}\bar{\phi}N+f_{2}\bar{\phi}\xi+a\xi+bN.$
(3.10)
Taking the inner product of (3.10) with $\bar{\phi}\xi$ and $\bar{\phi}N$ in
turn, and considering (2.3) and (2.2), we have
$\displaystyle b^{2}f_{2}=f_{1}(1-ab)\quad\mbox{and}\quad
a^{2}f_{1}=f_{2}(1-ab),$ (3.11)
respectively. It follows from (3.11), that
$\displaystyle(2ab-1)f_{1}=0\quad\mbox{and}\quad(2ab-1)f_{2}=0.$ (3.12)
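To make the passage from (3.11) to (3.12) explicit (a routine elimination, written out here for readability), multiply the first relation in (3.11) by $a^{2}$ and substitute the second relation:
$\displaystyle a^{2}b^{2}f_{2}=a^{2}f_{1}(1-ab)=f_{2}(1-ab)^{2},$
so that $f_{2}\left[(1-ab)^{2}-a^{2}b^{2}\right]=(1-2ab)f_{2}=0$; the relation for $f_{1}$ follows in the same way.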
From (3.12), we have the following cases:
1. (1)
$2ab=1$: Taking the inner product on both sides of (3.10) with $\zeta$, we get
$g(W^{\prime},W^{\prime})+2ab=1$. Thus, we have $g(W^{\prime},W^{\prime})=0$,
which implies that $W^{\prime}=0$ by the fact that $D^{\prime}$ is non-
degenerate. Then $\zeta$ in (3.10) reduces to
$\displaystyle\zeta=f_{1}\bar{\phi}N+f_{2}\bar{\phi}\xi+a\xi+bN.$ (3.13)
Applying $\bar{\phi}$ to (3.13), and remembering that $\bar{\phi}\zeta=0$, we
get
$\displaystyle a\bar{\phi}\xi+b\bar{\phi}N-f_{2}\xi-f_{1}N+f\zeta=0,$ (3.14)
where $f=af_{1}+bf_{2}$. Then using (3.13) and (3.14), we get
$\displaystyle(ff_{2}+a)\bar{\phi}\xi+$
$\displaystyle(ff_{1}+b)\bar{\phi}N=0,$ (3.15) $\displaystyle af$
$\displaystyle=f_{2},$ (3.16) $\displaystyle bf$ $\displaystyle=f_{1}.$ (3.17)
Note that neither of the functions $ff_{2}+a$ and $ff_{1}+b$ appearing in (3.15)
vanishes. In fact, if one assumes that $ff_{2}+a=0$, then (3.16) leads to
$\displaystyle aff_{2}+a^{2}=f^{2}_{2}+a^{2}=0.$ (3.18)
Relation (3.18) shows that $f_{2}=a=0$. But this is impossible since $2ab=1$.
Also, if $ff_{1}+b=0$, relation (3.17) leads to $f_{1}=b=0$, which is again
impossible. Thus, we may rewrite (3.15) as
$\displaystyle\bar{\phi}\xi=\lambda\bar{\phi}N,$ (3.19)
where $\lambda=-\frac{ff_{1}+b}{ff_{2}+a}$ is a non-zero smooth function on
$M$. It is clear from (3.19) that
$\bar{\phi}TM^{\perp}=\bar{\phi}\mathrm{tr}(TM)$, and hence by Theorem 3.4,
$M$ is ascreen.
2. (2)
$2ab\neq 1$: In this case, we see that $f_{1}=f_{2}=0$ and therefore
$\zeta=W^{\prime}+a\xi+bN$, from which we get
$g(W^{\prime},W^{\prime})+2ab=1$. Furthermore, $W^{\prime}\neq 0$, since taking
$W^{\prime}=0$ would give $2ab=1$, which is a contradiction. This completes the
proof. ∎
###### Corollary 3.6.
Every lightlike hypersurface of an indefinite almost contact metric-manifold
in which $D^{\prime}$ is $\bar{\phi}$-invariant, i.e.
$\bar{\phi}D^{\prime}\subseteq D^{\prime}$, is either ascreen or tangential
with $\zeta$ tangent to $D^{\prime}$.
Using part (2) of Theorem 3.5, we have the following definition.
###### Definition 3.7.
A lightlike hypersurface $M$ of an indefinite almost contact metric-manifold
$\bar{M}$ is called inascreen if $W^{\prime}\neq 0$. In addition, if $b\neq 0$
then $M$ will be called a proper inascreen lightlike hypersurface.
###### Example 3.8.
Any lightlike hypersurface of an indefinite almost contact metric-manifold
that is tangent to the structure vector field $\zeta$ is inascreen with
$a=b=0$.
In view of Proposition 3.2 and Definition 3.7, we have the following.
###### Proposition 3.9.
The only inascreen lightlike hypersurfaces of an indefinite almost contact
metric-manifold with a $\bar{\phi}$-invariant $D^{\prime}$ are those tangent
to the structure vector field $\zeta$.
## 4\. Proper inascreen lightlike hypersurfaces
Since inascreen lightlike hypersurfaces with $b=0$ are the well-known
tangential lightlike hypersurfaces, in this section we shall focus only on the
proper ones, i.e. those in which $b\neq 0$.
###### Proposition 4.1.
Let $M$ be a proper inascreen lightlike hypersurface of an indefinite almost
contact metric-manifold $\bar{M}$. Then the following hold:
1. (1)
$D^{\prime}$ is never a $\bar{\phi}$-invariant distribution;
2. (2)
$\bar{\phi}N$ and $\bar{\phi}\xi$ are linearly independent vector fields;
3. (3)
$\zeta$ is linearly independent with any $X$ tangent to $M$.
###### Proof.
In a proper inascreen lightlike hypersurface, we know that $W^{\prime}\neq 0$
and $b\neq 0$. It follows from Proposition 3.2 that $D^{\prime}$ is never
$\bar{\phi}$-invariant. Suppose that
$\displaystyle l_{1}\bar{\phi}N+l_{2}\bar{\phi}\xi=0,$ (4.1)
for some smooth functions $l_{1}$ and $l_{2}$ on $M$. Taking the inner product
of (4.1) with $\bar{\phi}N$ and $\bar{\phi}\xi$ by turns, we get
$\displaystyle a^{2}l_{1}=l_{2}(1-ab)\quad\mbox{and}\quad
b^{2}l_{2}=l_{1}(1-ab),$ (4.2)
respectively. The two relations in (4.2) lead to
$\displaystyle l_{1}(1-2ab)=0\quad\mbox{and}\quad l_{2}(1-2ab)=0.$ (4.3)
Since $2ab\neq 1$, (4.3) gives $l_{1}=l_{2}=0$. Finally, suppose that
$\displaystyle l_{3}X+l_{4}\zeta=0,$ (4.4)
for any tangent vector field $X$, where $l_{3}$ and $l_{4}$ are some smooth
functions on $M$. Then, taking the inner product of this relation with $\xi$
leads to $l_{4}\eta(\xi)=l_{4}b=0$. Since $b\neq 0$, we get $l_{4}=0$.
Consequently, $l_{3}X=0$, which gives $l_{3}=0$. ∎
Using part (2) of Proposition 4.1, we deduce the following.
###### Proposition 4.2.
Let $M$ be a proper inascreen lightlike hypersurface of an indefinite almost
contact metric-manifold $\bar{M}$. Then $\dim M\geq 4$ and $\dim\bar{M}\geq
5$.
As $\zeta$ is nowhere tangent to $M$, we put
$\displaystyle\bar{\phi}X=\phi X+\omega(X)\zeta,$ (4.5)
for any $X$ tangent to $M$. Here, $\phi X$ is the tangential part (with
respect to $\zeta$) of $\bar{\phi}X$ to $M$. It is easy to see that $\phi$ and
$\omega$ are tensor fields of type $(1,1)$ and $(0,1)$, respectively, on $M$.
We say that $M$ is invariant if $\omega=0$. Since $\bar{\phi}\xi$ is tangent
to $M$, it follows from (4.5) that
$\displaystyle\bar{\phi}\xi=\phi\xi\quad\mbox{and}\quad\omega(\xi)=0.$ (4.6)
Applying $\bar{\phi}$ to (4.5), and using (2.1) and (2.2), we have
$\displaystyle-X+\eta(X)\zeta=\phi^{2}X+\omega(\phi X)\zeta.$ (4.7)
Comparing components in (4.7) we get
$\displaystyle\phi^{2}X$ $\displaystyle=-X\quad\mbox{and}\quad\omega(\phi
X)=\eta(X),$ (4.8)
for any $X$ tangent to $M$. Thus, the tensor $\phi$ is an almost complex
structure on $M$.
###### Proposition 4.3.
There exists no invariant inascreen lightlike hypersurface of an indefinite
almost contact metric-manifold $\bar{M}$.
###### Proof.
$M$ is invariant whenever $\omega=0$. It follows from (4.8) that $\eta(X)=0$,
for any $X$ tangent to $M$. Taking $X=\xi$ in the last relation, we get
$0=\eta(\xi)=b$, which is a contradiction to $b\neq 0$. ∎
By a direct calculation, while using (2.3), (2.5) and (4.5), we derive
$\displaystyle g(\phi X,\phi Y)$
$\displaystyle=g(X,Y)-\eta(X)\eta(Y)+\omega(X)\omega(Y),$ (4.9) $\displaystyle
g(\phi X,Y)+\omega(X)\eta(Y)$ $\displaystyle=-g(X,\phi Y)-\omega(Y)\eta(X),$
(4.10)
for any $X$ and $Y$ tangent to $M$.
###### Proposition 4.4.
There exists no proper inascreen lightlike hypersurface of an indefinite almost
contact metric-manifold such that:
1. (1)
$(g,\phi)$ is an almost Hermitian structure on $M$;
2. (2)
$\phi$ is skew-symmetric with respect to $g$,
for any $X$ and $Y$ tangent to $M$.
###### Proof.
Suppose that $(g,\phi)$ is an almost Hermitian structure on $M$. Then $g$ must
satisfy the relation $g(\phi X,\phi Y)=g(X,Y)$, for any $X$ and $Y$ tangent to
$M$. In view of (4.9), this implies that
$\displaystyle\eta(X)\eta(Y)=\omega(X)\omega(Y),$ (4.11)
for any $X$ and $Y$ tangent to $M$. With $X=Y=\xi$ in (4.11), and considering
relation (4.6), we get
$\displaystyle b^{2}=\eta(\xi)\eta(\xi)=\omega(\xi)\omega(\xi)=0,$
which is impossible for a proper inascreen lightlike hypersurface. Next,
suppose that $\phi$ is skew-symmetric with respect to $g$. Then relation
(4.10) implies that
$\displaystyle\omega(X)\eta(Y)=-\omega(Y)\eta(X),$ (4.12)
for any $X$ and $Y$ tangent to $M$. Taking $Y=\xi$ in (4.12) and considering
(4.6), we have $\omega(X)b=0$. Since $b\neq 0$, we get $\omega=0$ meaning that
$M$ is invariant. However, this is not possible by Proposition 4.3. ∎
Let us set $\tilde{g}=g+\omega\otimes\omega$. This metric $\tilde{g}$ is
degenerate on $M$ since $\tilde{g}(X,\xi)=0$ and $\tilde{g}(X,\phi\xi)=0$, for
every $X$ tangent to $M$. Moreover, using (4.8) and (4.9) we have
$\displaystyle\tilde{g}(\phi X,\phi Y)$ $\displaystyle=g(\phi X,\phi
Y)+\omega(\phi X)\omega(\phi Y)$
$\displaystyle=g(X,Y)-\eta(X)\eta(Y)+\omega(X)\omega(Y)+\omega(\phi
X)\omega(\phi Y)$ $\displaystyle=g(X,Y)+\omega(X)\omega(Y)$
$\displaystyle=\tilde{g}(X,Y).$ (4.13)
Therefore, in view of (4.13), we have the following.
###### Theorem 4.5.
Every proper inascreen lightlike hypersurface of an indefinite almost contact
metric-manifold admits an almost Hermitian structure $(\tilde{g},\phi)$.
## References
* [1] C. Calin, Contributions to geometry of CR-submanifold, Thesis, University of Iasi, Iasi, Romania, 1998.
* [2] K. L. Duggal and A. Bejancu, Lightlike Submanifolds of Semi-Riemannian Manifolds and Applications, vol. 364 of Mathematics and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
* [3] K. L. Duggal and B. Sahin, Differential Geometry of Lightlike Submanifolds, Birkhauser Veriag AG Basel-Boston-Berlin, 2010.
* [4] K. L. Duggal and B. Sahin, Generalized Cauchy-Riemann lightlike submanifolds of Kaehler manifolds, Acta Mathematica Hungarica, vol. 112, no. 1-2, pp. 107-130, 2006.
* [5] D. H. Jin, Ascreen lightlike hypersurfaces of indefinite Sasakian manifold, J. Korean Soc. Math. Educ. Ser. B: Pure Appl. Math, 20(1), pp. 25-35, 2013.
* [6] D. H. Jin, Special Lightlike Hypersurfaces of Indefinite Kaehler Manifolds, Filomat 30:7, 1919-1930, 2016.
* [7] D. H. Jin, Geometry of lightlike hypersurfaces of an indefinite Sasakian manifold, Indian J Pure Appl Math 41, pp. 569-581, 2010.
* [8] D. H. Jin, Geometry of lightlike hypersurfaces of an indefinite cosymplectic manifold, Commun. Korean Math. Soc. 27, No. 1, pp. 185-195, 2012.
* [9] T. H. Kang, S. D. Jung, B. H. Kim, H. K. Pak and J. S. Pak, Lightlike hypersurfaces of indefinite Sasakian manifolds, Indian Journal of Pure and Applied Mathematics, vol. 34, no. 9, pp. 1369-1380, 2003.
* [10] F. Massamba, Totally contact umbilical lightlike hypersurfaces of indefinite Sasakian manifolds, Kodai Math. J. 31(3), pp. 338-358, 2008.
* [11] T. Takahashi, Sasakian manifold with pseudo-Riemannian metric, The Tohoku Mathematical Journal, vol. 21, pp. 271-290, 1969.
* [12] S. Tanno, Sasakian manifolds with constant $\phi$-holomorphic sectional curvature, The Tohoku Mathematical Journal, vol. 21, pp. 501-507, 1969.
|
# Blow-up solutions of damped Klein-Gordon equation on the Heisenberg group
Michael Ruzhansky Michael Ruzhansky: Department of Mathematics: Analysis,
Logic and Discrete Mathematics Ghent University, Belgium and School of
Mathematical Sciences Queen Mary University of London United Kingdom E-mail
address<EMAIL_ADDRESS>and Bolys Sabitbek Bolys Sabitbek:
School of Mathematical Sciences Queen Mary University of London United Kingdom
and Al-Farabi Kazakh National University Almaty, Kazakhstan E-mail address
<EMAIL_ADDRESS>
###### Abstract.
In this note, we prove the blow-up of solutions of the semilinear damped
Klein-Gordon equation in a finite time for arbitrary positive initial energy
on the Heisenberg group. This work complements the paper [21] by the first
author and Tokmagambetov, where the global in time well-posedness was proved
for the small energy solutions.
###### Key words and phrases:
Blow-up, sub-Laplacian, Heisenberg group, damped Klein-Gordon equation
###### 1991 Mathematics Subject Classification:
35L71; 35R03, 35B44.
The first and second authors were supported by EPSRC grant EP/R003025/2. The
first author was also supported by FWO Odysseus 1 grant G.0H94.18N: Analysis
and Partial Differential Equations and the Methusalem programme of the Ghent
University Special Research Fund (BOF) (Grant number 01M01021).
## 1\. Introduction
### 1.1. Setting of the problem
This note is devoted to studying the blow-up of solutions of the Cauchy problem
for the semilinear damped Klein-Gordon equation for the sub-Laplacian
$\mathcal{L}$ on the Heisenberg group $\mathbb{H}^{n}$:
$\displaystyle\begin{cases}u_{tt}(t)-\mathcal{L}u(t)+bu_{t}(t)+mu(t)=f(u),&t>0,\\\
u(x,0)=u_{0}(x),\,\,\,&u_{0}\in H^{1}_{\mathcal{L}}(\mathbb{H}^{n}),\\\
u_{t}(x,0)=u_{1}(x),\,\,\,&u_{1}\in L^{2}(\mathbb{H}^{n}),\end{cases}$ (1.1)
with the damping term determined by $b>0$ and the mass $m>0$. The total energy
of problem (1.1) is defined as
$\displaystyle E(t)$
$\displaystyle=\frac{1}{2}||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m}{2}||u||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{2}||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-\int_{\mathbb{H}^{n}}F(u)dx,$
where we assume the following
$\displaystyle g:[0,\infty]\rightarrow\mathbb{R},$ $\displaystyle
F(z)=g(|z|)\,\,\text{ for }\,\,z\in\mathbb{C}^{n},$ (1.2) $\displaystyle
f(z)=\frac{g^{\prime}(|z|)z}{|z|}.$
Then we have that
$\displaystyle\frac{\partial}{\partial\varepsilon}F(z+\varepsilon\xi)|_{\varepsilon=0}$
$\displaystyle=\frac{\partial}{\partial\varepsilon}g(|z+\varepsilon\xi|)|_{\varepsilon=0}$
$\displaystyle=g^{\prime}(|z+\varepsilon\xi|)\frac{\partial}{\partial\varepsilon}(|z+\varepsilon\xi|)|_{\varepsilon=0}$
$\displaystyle=\frac{g^{\prime}(|z|)}{|z|}\frac{1}{2}(\overline{z}\xi+z\overline{\xi})$
$\displaystyle={\rm Re}\left(f(z)\overline{\xi}\right),$
and
$\displaystyle\frac{\partial}{\partial x_{j}}F(u(x))$
$\displaystyle=g^{\prime}(|u(x)|)\frac{\partial|u(x)|}{\partial x_{j}}$
$\displaystyle=\frac{g^{\prime}(|u(x)|)}{2|u(x)|}\left(u(x)\frac{\partial\overline{u}(x)}{\partial
x_{j}}+\frac{\partial u(x)}{\partial x_{j}}\overline{u}(x)\right)$
$\displaystyle={\rm Re}\left(f(u(x))\frac{\partial\overline{u}(x)}{\partial
x_{j}}\right).$
The conservation of energy law follows from
$\displaystyle\frac{\partial E(t)}{\partial t}$
$\displaystyle=\frac{\partial}{\partial
t}\left[\frac{1}{2}||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m}{2}||u||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{2}||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-\int_{\mathbb{H}^{n}}F(u)dx\right]$
$\displaystyle={\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}_{t}[u_{tt}+mu-\mathcal{L}u-f(u)]dx$
$\displaystyle=-b\int_{\mathbb{H}^{n}}|u_{t}|^{2}dx,$
this gives
$E(t)+b\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds=E(0),$ (1.3)
where
$E(0):=\frac{1}{2}||u_{1}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m}{2}||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{2}||\nabla_{H}u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}-\int_{\mathbb{H}^{n}}F(u_{0})dx.$
Also, let us define the Nehari functional
$\displaystyle
I(u)=m||u||^{2}_{L^{2}(\mathbb{H}^{n})}+||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-{\rm
Re}\int_{\mathbb{H}^{n}}f(u)\overline{u}dx.$
We assume that the nonlinear term $f(u)$ satisfies the condition
$f(0)=0,\,\,\,\text{ and }\,\,\alpha F(u)\leq{\rm Re}[f(u)\overline{u}],$
where $\alpha>2$. In particular, this includes the case
$f(u)=|u|^{p-1}u\,\,\,\text{ for }p>1.$
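As a quick check of this example (our verification, in the notation of (1.2)): taking $g(r)=\frac{r^{p+1}}{p+1}$ gives
$\displaystyle f(z)=\frac{g^{\prime}(|z|)z}{|z|}=|z|^{p-1}z,\qquad F(z)=g(|z|)=\frac{|z|^{p+1}}{p+1},\qquad{\rm Re}[f(z)\overline{z}]=|z|^{p+1},$
so the condition $\alpha F(u)\leq{\rm Re}[f(u)\overline{u}]$ holds (with equality) for $\alpha=p+1>2$ whenever $p>1$.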
### 1.2. Literature overview
The study of the damped wave equation on the Heisenberg group started in
Bahouri-Gerrard-Xu [1] to prove the dispersive and Strichartz inequalities
based on the analysis in Besov-type spaces. Later, Greiner-Holcman-Kannai [4]
explicitly computed the wave kernel for the class of second-order subelliptic
operators, where their class contains degenerate elliptic and hypoelliptic
operators such as the sub-Laplacian and the Grushin operator. Also, Müller-
Stein [15] established $L^{p}$-estimates for the wave equation on the
Heisenberg group. Recently, Müller-Seeger [16] obtained the sharp version of
$L^{p}$ estimates on the $H$-type groups.
The blow-up solutions of evolution equations on the Heisenberg group were
considered by Georgiev-Palmieri [5] where they proved the global existence and
nonexistence results of the Cauchy problem for the semilinear damped wave
equation on the Heisenberg group with the power nonlinear term. The proof of
blow-up solutions is based on the test function method. The first author and
Yessirkegenov [22] established the existence and non-existence of global
solutions for semilinear heat equations and inequalities on sub-Riemannian
manifolds. In [23], by using the comparison principle they obtain blow-up type
results and global in $t$-boundedness of solutions of nonlinear equations for
the heat $p$-sub-Laplacian on the stratified Lie groups. The global existence
and nonexistence for the nonlinear porous medium equation were studied by the
authors in [19] on the stratified Lie groups.
This work is motivated by the paper [21] of the first author and Tokmagambetov
where the global existence of solutions for small data of problem (1.1) was
shown on the Heisenberg group and on general graded Lie groups. In the language
of potential wells theory, this result can be understood as follows: when the
initial energy is below the mountain pass level, $E(0)<d$, and the Nehari
functional is positive, $I(u_{0})>0$, there exists a global solution of
problem (1.1). A natural question is whether the solution of problem (1.1)
blows up in finite time when $E(0)>0$ and $I(u_{0})<0$.
The main aim of this paper is to obtain the blow-up solutions of problem (1.1)
in a finite time for arbitrary positive initial energy. Our proof is based on
an adopted concavity method, which was introduced by Levine [9] to establish
the blow-up solutions of the abstract wave equation of the form
$Pu_{tt}=-Au+F(u)$ (including the Klein-Gordon equation) for the negative
initial energy. It was also used for parabolic type equations (see [10, 12,
11, 13, 14]). Modifying the concavity method, Wang [29] proved the
nonexistence of global solutions to nonlinear damped Klein-Gordon equation for
arbitrary positive initial energy under sufficient conditions. Later, Yang-Xu
[27] extended this result by introducing a new auxiliary function and the
adopted concavity method.
### 1.3. Preliminaries on the Heisenberg group
Let us give a brief introduction to the Heisenberg group. Let $\mathbb{H}^{n}$
be the Heisenberg group, that is, the set $\mathbb{R}^{2n+1}$ equipped with
the group law
$\xi\circ\widetilde{\xi}:=(x+\widetilde{x},y+\widetilde{y},t+\widetilde{t}+2\sum_{i=1}^{n}(\widetilde{x}_{i}y_{i}-x_{i}\widetilde{y}_{i})),$
where $\xi:=(x,y,t)\in\mathbb{H}^{n}$, $x:=(x_{1},\ldots,x_{n})$,
$y:=(y_{1},\ldots,y_{n})$, and $\xi^{-1}=-\xi$ is the inverse element of $\xi$
with respect to the group law. The dilation operation of the Heisenberg group
with respect to the group law has the following form (see e.g. [7], [20])
$\delta_{\lambda}(\xi):=(\lambda x,\lambda
y,\lambda^{2}t)\,\,\text{for}\,\,\lambda>0.$
The Lie algebra $\mathfrak{h}$ of the left-invariant vector fields on the
Heisenberg group $\mathbb{H}^{n}$ is spanned by
$X_{i}:=\frac{\partial}{\partial x_{i}}+2y_{i}\frac{\partial}{\partial
t}\,\,\text{for}\,\,1\leq i\leq n,$ $Y_{i}:=\frac{\partial}{\partial
y_{i}}-2x_{i}\frac{\partial}{\partial t}\,\,\text{for}\,\,1\leq i\leq n,$
and with their (non-zero) commutator
$[X_{i},Y_{i}]=-4\frac{\partial}{\partial t}.$
The horizontal gradient of $\mathbb{H}^{n}$ is given by
$\nabla_{H}:=(X_{1},\ldots,X_{n},Y_{1},\ldots,Y_{n}),$
so the sub-Laplacian on $\mathbb{H}^{n}$ is given by
$\mathcal{L}:=\sum_{i=1}^{n}\left(X_{i}^{2}+Y_{i}^{2}\right).$
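As an illustrative aside (not part of the original argument), the commutation relation $[X_{i},Y_{i}]=-4\frac{\partial}{\partial t}$ can be checked symbolically; the sketch below assumes SymPy is available and treats the case $n=1$.
```python
# Sketch: verify [X, Y] = -4 d/dt for the left-invariant fields on H^1,
# X = d/dx + 2y d/dt and Y = d/dy - 2x d/dt, applied to a generic f(x, y, t).
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

X = lambda h: sp.diff(h, x) + 2*y*sp.diff(h, t)
Y = lambda h: sp.diff(h, y) - 2*x*sp.diff(h, t)

commutator = sp.expand(X(Y(f)) - Y(X(f)))
# the mixed second derivatives cancel, leaving -4 * df/dt
assert sp.simplify(commutator + 4*sp.diff(f, t)) == 0
print(commutator)
```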
###### Definition 1.1 (Weak solution).
A function
$\displaystyle u\in C([0,T_{1});H^{1}_{\mathcal{L}}(\mathbb{H}^{n}))\cap
C^{1}([0,T_{1});L^{2}(\mathbb{H}^{n})),$ $\displaystyle u_{t}\in
L^{2}([0,T_{1});H^{1}_{\mathcal{L}}(\mathbb{H}^{n})),$ $\displaystyle
u_{tt}\in L^{2}([0,T_{1});H^{-1}_{\mathcal{L}}(\mathbb{H}^{n})),$
satisfying
${\rm Re}\langle u_{tt},v\rangle+{\rm
Re}\int_{\mathbb{H}^{n}}\nabla_{H}u\cdot\nabla_{H}vdx+m{\rm
Re}\int_{\mathbb{H}^{n}}uvdx+b{\rm Re}\int_{\mathbb{H}^{n}}u_{t}vdx\ ={\rm
Re}\int_{\mathbb{H}^{n}}f(u)vdx,$ (1.4)
for all $v\in H^{1}_{\mathcal{L}}(\mathbb{H}^{n})$ and a.e. $t\in[0,T_{1})$
with $u(0)=u_{0}(x)$ and $u_{t}(0)=u_{1}(x)$ represents a weak solution of
problem (1.1).
Note that $T_{1}$ denotes the lifespan of the solution $u(x,t)$ and
$\langle\cdot,\cdot\rangle$ is the duality between
$H^{-1}_{\mathcal{L}}(\mathbb{H}^{n})$ and
$H^{1}_{\mathcal{L}}(\mathbb{H}^{n})$. Here
$H^{1}_{\mathcal{L}}(\mathbb{H}^{n})$ denotes the sub-Laplacian Sobolev space,
analysed by Folland [6], see also [7].
## 2\. Main Result
We now present the main result of this paper.
###### Theorem 2.1.
Let $b>0$, $m>0$ and $\mu=\max\\{b,m,\alpha\\}$. Assume that the nonlinearity
$f(u)$ satisfies
$\alpha F(u)\leq{\rm Re}[f(u)\overline{u}]\,\,\,\text{ for }\,\,\alpha>2,$
(2.1)
where $F(u)$ is as in (1.1). Assume that the Cauchy data $u_{0}\in
H_{\mathcal{L}}^{1}(\mathbb{H}^{n})$ and $u_{1}\in L^{2}(\mathbb{H}^{n})$
satisfy
$I(u_{0})=m||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}+||\nabla_{H}u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}-{\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}_{0}f(u_{0})dx<0,$ (2.2)
and
${\rm
Re}(u_{0},u_{1})_{L^{2}(\mathbb{H}^{n})}\geq\frac{\alpha(\mu+1)}{m(\alpha-2)}E(0).$
(2.3)
Then the solution of equation (1.1) blows up in finite time $T^{*}$ such that
$0<T^{*}\leq\frac{2(\mu+1)(bT_{0}+1)}{(\alpha-2)(\mu+1-m)}\frac{||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}}{{\rm
Re}(u_{0},u_{1})},$
where the blow-up time $T^{*}\in(0,T_{0})$ with $T_{0}<+\infty$.
###### Remark 2.2.
* (i)
Note that we have the times $T^{*}$, $T_{0}$ and $T_{1}$. The relationship between
these times is $T^{*}\in(0,T_{0})\subset(0,T_{1})$, where
$T_{0}<+\infty$ and $T_{1}=+\infty$.
* (ii)
The local existence for the Klein-Gordon equation was shown in [2] and [3].
The global in time well-posedness of problem (1.1) was proved by the first
author and Tokmagambetov [21] for the small energy solutions and the
nonlinearity $f(u)$ satisfying
$\displaystyle|f(u)-f(v)|\leq C(|u|^{p-1}+|v|^{p-1})|u-v|,$
with $1<p\leq 1+1/n$.
###### Proof of Theorem 2.1.
First, recall the Nehari functional
$\displaystyle
I(u)=m||u||^{2}_{L^{2}(\mathbb{H}^{n})}+||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-{\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}f(u)dx.$
Then the proof includes two steps.
Step I. In this step, we claim that
$I(u(t))<0,\,\,\,\text{ and
}\,\,\,A(t)>\frac{2\alpha(\mu+1)}{m(\alpha-2)}E(0),$
for $0\leq t<T_{1}$ where $\mu=\max\\{b,m,\alpha\\}$ and
$\displaystyle A(t)=2{\rm Re}(u,u_{t})+b||u||^{2}_{L^{2}(\mathbb{H}^{n})}.$
By using (1.4) along with $v=\overline{u}$ we get
$\displaystyle A^{\prime}(t)$
$\displaystyle=2||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+2{\rm Re}\langle
u_{tt},u\rangle+2b{\rm Re}\int_{\mathbb{H}^{n}}\overline{u}u_{t}dx$
$\displaystyle=2||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}-2I(u),\,\,\,0\leq
t<T_{1}.$ (2.4)
In the last line we have used that
$\displaystyle{\rm Re}\langle u_{tt},u\rangle$ $\displaystyle={\rm
Re}\int_{\mathbb{H}^{n}}f(u)\overline{u}dx-\int_{\mathbb{H}^{n}}|\nabla_{H}u|^{2}dx-m\int_{\mathbb{H}^{n}}|u|^{2}dx-b{\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}u_{t}dx$ $\displaystyle=-I(u)-b{\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}u_{t}dx.$
Now let us suppose by contradiction that
$I(u(t))<0\,\,\,\text{ for all }\,\,0\leq t<t_{0},$
and
$I(u(t_{0}))=0.$
Hereafter $0<t_{0}<T_{1}$. It is easy to see that $A^{\prime}(t)>0$ over
$[0,t_{0})$ and
$\displaystyle A(t)>A(0)\geq 2{\rm
Re}(u_{0},u_{1})\geq\frac{2\alpha(\mu+1)}{m(\alpha-2)}E(0).$ (2.5)
Since $u(t)$ and $u_{t}(t)$ are both continuous in $t$, this gives
$\displaystyle A(t_{0})\geq\frac{2\alpha(\mu+1)}{m(\alpha-2)}E(0).$ (2.6)
Next we need to show a contradiction to (2.6). Using (1.3) and (2.1), we have
$\displaystyle E(0)$
$\displaystyle=E(t)+b\int_{0}^{t}||u_{s}||^{2}_{L^{2}(\mathbb{H}^{n})}ds$
$\displaystyle=\frac{1}{2}||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m}{2}||u||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{2}||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle-\int_{\mathbb{H}^{n}}F(u)dx+b\int_{0}^{t}||u_{s}||^{2}_{L^{2}(\mathbb{H}^{n})}ds$
$\displaystyle\geq\frac{1}{2}||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m}{2}||u||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{2}||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle-\frac{1}{\alpha}{\rm
Re}\int_{\mathbb{H}^{n}}\overline{u}f(u)dx+b\int_{0}^{t}||u_{s}||^{2}_{L^{2}(\mathbb{H}^{n})}ds$
$\displaystyle=\frac{1}{2}||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{1}{\alpha}I(u)+\left(\frac{m}{2}-\frac{m}{\alpha}\right)||u||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle+\left(\frac{\alpha-2}{2\alpha}\right)||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}+b\int_{0}^{t}||u_{s}||^{2}_{L^{2}(\mathbb{H}^{n})}ds.$
If we use $I(u(t_{0}))=0$ and $\frac{m(\alpha-2)}{\alpha(\mu+1)}<1$, then
$\displaystyle E(0)$
$\displaystyle\geq\frac{1}{2}||u_{t}(t_{0})||^{2}_{L^{2}(\mathbb{H}^{n})}+\frac{m(\alpha-2)}{2\alpha}||u(t_{0})||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle\geq\frac{m(\alpha-2)}{2\alpha(\mu+1)}\left(||u_{t}(t_{0})||^{2}_{L^{2}(\mathbb{H}^{n})}+(\mu+1)||u(t_{0})||^{2}_{L^{2}(\mathbb{H}^{n})}\right)$
$\displaystyle>\frac{m(\alpha-2)}{2\alpha(\mu+1)}\left(2{\rm
Re}(u(t_{0}),u_{t}(t_{0}))+\mu||u(t_{0})||^{2}_{L^{2}(\mathbb{H}^{n})}\right)$
$\displaystyle\geq\frac{m(\alpha-2)}{2\alpha(\mu+1)}A(t_{0}).$ (2.7)
Note that for the strict inequality above we use that the assumption (2.2)
implies that $||u_{0}||_{L^{2}(\mathbb{H}^{n})}\neq 0$. We have also used the
fact $a^{2}+b^{2}-2ab\geq 0$, where
$a=||u_{t}(t_{0})||_{L^{2}(\mathbb{H}^{n})}$ and
$b=||u(t_{0})||_{L^{2}(\mathbb{H}^{n})}$. It gives the contradiction to (2.6).
This proves our claim.
Step II. Define the functional
$\displaystyle
M(t)=||u||^{2}_{L^{2}(\mathbb{H}^{n})}+b\int_{0}^{t}||u(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds+b(T_{0}-t)||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})},$
for $0\leq t\leq T_{0}$. Then
$\displaystyle M^{\prime}(t)$ $\displaystyle=2{\rm
Re}(u,u_{t})+b||u(t)||^{2}_{L^{2}(\mathbb{H}^{n})}-b||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle=2{\rm Re}(u,u_{t})+2b\int_{0}^{t}{\rm Re}(u(s),u_{s}(s))ds,$
since
$\displaystyle\int_{0}^{t}\frac{d}{ds}||u(s)||_{L^{2}(\mathbb{H}^{n})}^{2}ds=||u(t)||_{L^{2}(\mathbb{H}^{n})}^{2}-||u(0)||_{L^{2}(\mathbb{H}^{n})}^{2}.$
We observe the following estimates
$\displaystyle|{\rm Re}(u,u_{t})|^{2}$
$\displaystyle\leq||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}||u||^{2}_{L^{2}(\mathbb{H}^{n})},$
$\displaystyle\left(\int_{0}^{t}|{\rm Re}(u(s),u_{s}(s))|ds\right)^{2}$
$\displaystyle\leq\left(\int_{0}^{t}||u(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right)\left(\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right),$
and
$\displaystyle 2{\rm Re}(u,u_{t})\int_{0}^{t}{\rm Re}(u(s),u_{s}(s))ds$
$\displaystyle\leq
2||u||_{L^{2}(\mathbb{H}^{n})}||u_{t}||_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle\times\left(\int_{0}^{t}||u(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right)^{1/2}\left(\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right)^{1/2}$
$\displaystyle\leq||u||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds+||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}\int_{0}^{t}||u(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds.$
Using the above inequalities, we calculate
$\displaystyle(M^{\prime}(t))^{2}$ $\displaystyle=4\left(|{\rm
Re}(u,u_{t})|^{2}+2b{\rm Re}(u,u_{t})\int_{0}^{t}{\rm
Re}(u(s),u_{s}(s))ds+b^{2}\left(\int_{0}^{t}{\rm
Re}(u(s),u_{s}(s))ds\right)^{2}\right)$ $\displaystyle\leq
4\left(||u||^{2}_{L^{2}(\mathbb{H}^{n})}+b\int_{0}^{t}||u(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right)\left(||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right),$
for all $0\leq t\leq T_{0}$. The second derivative of $M(t)$ with respect to
time is
$\displaystyle
M^{\prime\prime}(t)=2||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}-2I(u),$
for all $0\leq t\leq T_{0}$, where we used the equality from (2.4). Then we
construct the differential inequality as follows
$\displaystyle
M^{\prime\prime}(t)M(t)-\frac{\omega+3}{4}(M^{\prime}(t))^{2}\geq
M(t)\left(M^{\prime\prime}(t)-(\omega+3)\left(||u_{t}||^{2}+b\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds\right)\right)$
$\displaystyle=M(t)\left(-(\omega+1)||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}-(\omega+3)b\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds-2I(u)\right),$
where we assume that $\omega>1$. We shall now show that the following term is
nonnegative
$\displaystyle\eta(t)$
$\displaystyle=-(\omega+1)||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}-(\omega+3)b\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds-2I(u)$
$\displaystyle\geq(\alpha-\omega-1)||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+b(2\alpha-\omega-3)\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds$
$\displaystyle+m(\alpha-2)||u||^{2}_{L^{2}(\mathbb{H}^{n})}+(\alpha-2)||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-2\alpha
E(0)$
$\displaystyle=(\alpha-\omega-1)\left[||u_{t}||^{2}_{L^{2}(\mathbb{H}^{n})}+(b+1)||u||^{2}_{L^{2}(\mathbb{H}^{n})}\right]+(\alpha-2)||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-2\alpha
E(0)$
$\displaystyle+b(2\alpha-\omega-3)\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds+(m(\alpha-2)-(b+1)(\alpha-\omega-1))||u||^{2}_{L^{2}(\mathbb{H}^{n})}$
$\displaystyle\geq(\alpha-\omega-1)\left[2{\rm
Re}(u,u_{t})+b||u||^{2}_{L^{2}(\mathbb{H}^{n})}\right]+(\alpha-2)||\nabla_{H}u||^{2}_{L^{2}(\mathbb{H}^{n})}-2\alpha
E(0)$
$\displaystyle+b(2\alpha-\omega-3)\int_{0}^{t}||u_{s}(s)||^{2}_{L^{2}(\mathbb{H}^{n})}ds+(m(\alpha-2)-(b+1)(\alpha-\omega-1))||u||^{2}_{L^{2}(\mathbb{H}^{n})}.$
In the second line of the estimate for $\eta(t)$ we have used the energy
estimate obtained above from (1.3) and (2.1). By selecting
$\omega=\alpha-1-\frac{m(\alpha-2)}{\mu+1}$, which satisfies $\omega>1$ since
$\mu+1>m$, and using the argument from Step I, we obtain
$\displaystyle\eta(t)$ $\displaystyle>\frac{m(\alpha-2)}{\mu+1}(2{\rm
Re}(u,u_{t})+b||u||^{2}_{L^{2}(\mathbb{H}^{n})})-2\alpha E(0)$
$\displaystyle>\frac{m(\alpha-2)}{\mu+1}(2{\rm
Re}(u_{0},u_{1})+b||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})})-2\alpha E(0)$
$\displaystyle>\left(\frac{m(\alpha-2)}{\mu+1}\right)2{\rm
Re}(u_{0},u_{1})-2\alpha E(0)$ $\displaystyle\geq 0.$
Note that we have used the fact that $A^{\prime}(t)>0$, the expression (2.5)
with $A(t)=2{\rm Re}(u,u_{t})+b||u||^{2}_{L^{2}(\mathbb{H}^{n})}$, and the
condition (2.3), respectively. So we obtain the inequality
$M^{\prime\prime}(t)M(t)-\frac{\omega+3}{4}(M^{\prime}(t))^{2}>0.$
Then
$\frac{d}{dt}\left[\frac{M^{\prime}(t)}{M^{\frac{\omega+3}{4}}(t)}\right]>0\Rightarrow\begin{cases}M^{\prime}(t)\geq\left[\frac{M^{\prime}(0)}{M^{\frac{\omega+3}{4}}(0)}\right]M^{\frac{\omega+3}{4}}(t),\\\
M(0)=(bT_{0}+1)||u_{0}||_{L^{2}(\mathbb{H}^{n})}^{2}.\end{cases}$
Let us denote $\sigma=\frac{\omega-1}{4}$. Then we have
$-\frac{1}{\sigma}\left[M^{-\sigma}(t)-M^{-\sigma}(0)\right]\geq\frac{M^{\prime}(0)}{M^{\sigma+1}(0)}t,$
that gives
$\displaystyle M(t)\geq\left(\frac{1}{M^{\sigma}(0)}-\frac{\sigma
M^{\prime}(0)}{M^{\sigma+1}(0)}t\right)^{-\frac{1}{\sigma}}.$
Then the blow-up time $T^{*}$ satisfies
$0<T^{*}\leq\frac{M(0)}{\sigma M^{\prime}(0)},$
where $M^{\prime}(0)=2{\rm Re}(u_{0},u_{1})$. This completes the proof. ∎
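For the reader's convenience we note (this check is ours, not part of the original argument) that the bound above is exactly the one stated in Theorem 2.1: with the choice of $\omega$ made in Step II,
$\displaystyle\sigma=\frac{\omega-1}{4}=\frac{(\alpha-2)(\mu+1-m)}{4(\mu+1)},\qquad M(0)=(bT_{0}+1)||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})},\qquad M^{\prime}(0)=2{\rm Re}(u_{0},u_{1}),$
so that
$\displaystyle\frac{M(0)}{\sigma M^{\prime}(0)}=\frac{2(\mu+1)(bT_{0}+1)}{(\alpha-2)(\mu+1-m)}\cdot\frac{||u_{0}||^{2}_{L^{2}(\mathbb{H}^{n})}}{{\rm Re}(u_{0},u_{1})}.$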
## References
* [1] Bahouri H., Gérard P., Xu C.-J.: Espaces de Besov et estimations de Strichartz généralisées sur le groupe de Heisenberg. J. Anal. Math., 82, 93–118 (2000)
* [2] Cazenave T.: Uniform estimates for solutions of nonlinear Klein-Gordon equations. Journal of Functional Analysis, 60, 36–55 (1985)
* [3] Cazenave T., Haraux A.: An introduction to semilinear evolution equations. Oxford Lecture Series in Mathematics and its Applications, 13. The Clarendon Press, Oxford University Press, New York, 1998
* [4] Greiner P.C., Holcman D., Kannai Y.: Wave kernels related to second-order operators. Duke Math. J., 114 (2), 329–386 (2002)
* [5] Georgiev V., Palmieri A.: Critical exponent of Fujita-type for the semilinear damped wave equation on the Heisenberg group with power nonlinearity. J. Differential Equations, 269, no. 1, 420–448 (2020)
* [6] Folland G. B.: Subelliptic estimates and function spaces on nilpotent Lie groups. Ark. Mat., 13(2), 161–207 (1975)
* [7] Fischer V., Ruzhansky M.: Quantization on nilpotent Lie groups, volume 314 of Progress in Mathematics. Birkhäuser/Springer, [Open access book], 2016
* [8] Folland G. B., Stein E. M.: Hardy spaces on homogeneous groups, volume 28 of Mathematical Notes. Princeton University Press, Princeton, N.J.; University of Tokyo Press, Tokyo, 1982
* [9] Levine H. A.: Some additional remarks on nonexistence of global solutions to nonlinear wave equations of the form $Pu_{tt}=-Au+\mathcal{F}(u)$. Trans. Amer. Math. Soc., 55, 52–72 (1974)
* [10] Levine H. A.: A note on a nonexistence theorem for some nonlinear wave equations. SIAM J. Math. Anal., 5, 138–146 (1974)
* [11] Levine H. A.: The role of critical exponents in blow-up theorems. SIAM Rev., 32, 262–288 (1990)
* [12] Levine H. A.: Some nonexistence and instability theorems for formally parabolic equations of the form $Pu_{t}=-Au+\mathcal{F}(u)$. Arch. Ration. Mech. Anal., 51, 277–284 (1973)
* [13] Levine H. A., Payne L. E.: Nonexistence theorems for the heat equation with nonlinear boundary conditions and for the porous medium equation backward in time. J. Differential Equations, 16, 319–334 (1974)
* [14] Levine H. A., Payne L. E.: Some nonexistence theorems for initial-boundary value problems with nonlinear boundary constraints. Proc. Amer. Math. Soc., 46, 277–284 (1974)
* [15] Müller D., Stein E.M.: $L^{p}$-estimates for the wave equation on the Heisenberg group. Rev. Mat. Iberoam., 15 (2), 297–334 (1999)
* [16] Müller D., Seeger A.: Sharp $L^{p}$ bounds for the wave equation on groups of Heisenberg type. Anal. PDE, 8 (5) 1051–1100 (2015)
* [17] Pang Y., Yang Y.: A note on finite time blow-up for dissipative Klein-Gordon equation. Nonlinear Analysis, 195, 111729 (2020)
* [18] Payne L. E., Sattinger D. H.: Saddle points and instability of nonlinear hyperbolic equations. Israel J. Math., 22, 273–303 (1975)
* [19] Ruzhansky M., Sabitbek B., Torebek B.: Global existence and blow-up of solutions to porous medium equation and pseudo-parabolic equation, I. Stratified Groups. Manuscripta Math., to appear, (2021)
* [20] Ruzhansky M., Suragan D.: Hardy inequalities on homogeneous groups. Progress in Math. Vol. 327, Birkhäuser, 588 pp, 2019. (open access book)
* [21] Ruzhansky M., Tokmagambetov N.: Nonlinear damped wave equations for the sub-Laplacian on the Heisenberg group and for Rockland operators on graded Lie groups. J. Differential Equations, 265, 5212–5236 (2018)
* [22] Ruzhansky M., Yessirkegenov N.: Existence and non-existence of global solutions for semilinear heat equations and inequalities on sub-Riemannian manifolds, and Fujita exponent on unimodular Lie groups. J. Differential Equations, 308, 455–473 (2022)
* [23] Ruzhansky M., Yessirkegenov N.: A comparison principle for higher order nonlinear hypoelliptic heat operators on graded Lie groups. Nonlinear Analysis, 215, 112621 (2022)
* [24] Sattinger D. H. On global solution of nonlinear hyperbolic equations. Arch. Rat. Mech. Anal., 30, 148–172 (1968)
* [25] Xu R., Ding Y.: Global solutions and finite time blow up for damped Klein-Gordon equation. Acta Mathematica Scientia, 33B(3), 643–652 (2013)
* [26] Xu R. Z., Zhang M. Y., Chen S.H., Yang Y.B., Shen J. H.: The initial-boundary value problems for a class of six order nonlinear wave equation. Discrete Contin. Dyn. Syst., 37 (11), 5631–5649 (2017)
* [27] Yang Y., Xu R.: Finite time blowup for nonlinear Klein–Gordon equations with arbitrarily positive initial energy. Applied Mathematics Letters, 77, 21–26 (2018)
* [28] Zhang J.: Sharp conditions of global existence of nonlinear Schrödinger and Klein-Gordon equations. Nonlinear Analysis, 48, 191–207 (2002)
* [29] Wang Y.: A sufficient condition for finite time blow-up of the nonlinear Klein-Gordon equations with arbitrary positive initial energy. Proc. Amer. Math. Soc., 136, 3477–3482 (2008)
|
# Is the universe static?
David F. Crawford Astronomical Society of Australia, 44 Market St., Naremburn
NSW, 2065, Australia
###### Abstract
A fundamental property of an expanding universe is that any time dependent
characteristic of distant objects must appear to scale by the factor $(1+z)$.
This is called time dilation. Light curves of type Ia supernovae and the
duration of Gamma-Ray Bursts (GRB) are the only observations that can directly
measure time dilation over a wide range of redshifts. An analysis of raw
observations of 2,333 type Ia supernovae light-curves shows that their widths,
relative to a standard template, have a power-law exponent as a function of
${(1+z)}$, of (0.083 +/- 0.024) which is consistent with no time dilation and
inconsistent with standard time dilation. In addition, it is shown that the
standard method for calibrating the type Ia supernovae light curves (SALT2) is
flawed, which explains why this lack of time dilation has not been previously
observed.
Nearby observations show that the peak absolute magnitude of type Ia
supernovae is also constant. Here it is shown that the peak absolute magnitude
is independent of redshift if a static universe cosmology, Curvature
Cosmology, is used to provide the distance moduli. Furthermore it is explained
why the modified $\Lambda$-CDM model provides similar results.
Analysis of the duration of GRB shows that they are consistent with no time
dilation and have no support for standard time dilation. Consequently, this
paper argues for a fundamental change from the current paradigm of an
expanding universe to one for a static universe. Some of the major
consequences of Curvature Cosmology are listed.
###### keywords:
cosmology: large-scale structure of universe,gamma-ray bursts: general, stars:
supernovae
## 1 INTRODUCTION
Nearby type Ia supernovae are well known to have essentially identical light
curves that make excellent cosmological probes. It is argued in Section 3 that
the only characteristics of the light curve that change with redshift are the
scaling parameters of peak luminosity and width. In particular, the width must
vary with redshift in exactly the same way as time dilation, and the peak
absolute magnitude should be independent of redshift. Investigation of the
variation of the peak magnitude with redshift requires a cosmological model to
provide the distance moduli. This is done by using Curvature Cosmology
(Crawford, 2010) which will be described later (Section 7).
#### Time dilation
It is convenient to assume that the width dependence on redshift can be
described by a parameter $\alpha$ such that the redshift dependence of time
dilation is equal to $(1+z)^{\alpha}$ and then to estimate $\alpha$ from the
observations as a test of time dilation. For all current expansion models
$\alpha$ must be one and for static models, it must be zero. Any significant
difference of $\alpha$ from either of these two values could only be explained
by a completely new cosmology.
#### Photon energy
In Section 2 it is argued that in quantum mechanics the apparent wavelength of
a photon is a measurement of its energy and as a consequence its redshift may
be due to any process that causes a loss of energy. Thus in quantum mechanics,
the rigid nexus between the shift of spectral lines and other time variations
is broken.
#### Intrinsic widths
The observational evidence for standard time dilation has a long history with
notable papers being Goldhaber et al. (2001) and Blondin et al. (2008). The
observed width of any light curve from a distant object is the product of an
intrinsic width, with a cosmological width due to time dilation. If the
observed wavelength is $\lambda$ then the intrinsic wavelength is
$\epsilon=\lambda/(1+z)$ which is shorter than the observed wavelength. Since
many of the intrinsic wavelengths are beyond the visible range, their
intrinsic widths cannot be easily determined from nearby supernovae. A
suitable method of solving this problem is to generate a reference template
from all the supernovae light curves that provides a complete light curve for
each intrinsic wavelength, and then to use these templates to accurately
calibrate the observations by eliminating any intrinsic effects.
#### Salt2
The SALT2 method (Guy et al., 2007, 2010) determines these intrinsic templates
by combining a large number of observations over a wide range of redshifts and
has been used by Betoule et al. (2014), Conley et al. (2011), Foley et al.
(2018), Scolnic et al. (2017) and Jones et al. (2018). In other words, the
reference template is the average of the light curves from many supernovae as
a function of intrinsic wavelength. Consequently the measurement of the
intrinsic width at a particular intrinsic wavelength can come from many
supernovae with different redshifts.
It was first shown by Crawford (2017), and is shown here in Section 4, that there is
a fundamental problem with the SALT2 analysis in that a systematic variation
in width as a function of redshift is included in the template as a systematic
variation of the width as a function of intrinsic wavelength. The SALT2
calibration process is very good at removing the intrinsic variations but at
the same time, it removes systematic redshift variations such as time
dilation. Thus supernovae light curves that have been calibrated by SALT2, or
a similar method, have all the cosmological information that is a power-law
function of redshift removed.
#### Type Ia supernova light curve widths
A major part of this paper (Section 5) is an examination of the raw
observations for 2,333 supernovae to investigate how the widths of their light
curves vary with redshift. This was done separately for each filter so that
observations of each supernova can provide up to five distinct and unrelated
values for the width of the observed light curve.
The first step of the analysis is to determine the intrinsic width as a
function of intrinsic wavelength and the second step is to use the intrinsic
width to remove the intrinsic component from the raw observations in order to
obtain a cosmological width that is entirely due to cosmological processes.
#### Type Ia supernovae absolute magnitudes
It has also been observed that nearby type Ia supernovae have peak absolute
magnitudes that are independent of redshift. The aim here is to extend this
analysis of peak absolute magnitudes to higher redshifts. However there is a
major problem in that the estimation of the absolute magnitude requires a
distance modulus to convert the apparent flux density to an absolute magnitude
and this requires the use of a cosmological model. This analysis is done for
two different models: the standard $\Lambda$-CDM model (summarised in Appendix
C) and a static model, Curvature Cosmology (Section 7).
The first step of the analysis is to get the average absolute intrinsic
magnitude as a function of intrinsic wavelength. Then to remove these
intrinsic effects from the observations to provide cosmological peak absolute
magnitudes for each filter and each supernova. This is done, separately, for
both cosmological models.
#### Gamma-Ray Bursts
Gamma-Ray Bursts (GRB) are the only other observations that can provide direct
measurements of time dilation over a wide range of redshifts. The analysis
used here in Section 6 investigates the observed durations of the bursts as a
function of redshift and again finds that they are consistent with no time
dilation.
#### Curvature Cosmology
Following the analysis a brief description of Curvature Cosmology is provided
(Section 7) as well as the description of the cosmological consequences of
Curvature Cosmology (Section 7.10).
#### Basic assumptions
It is assumed for this analysis that the intrinsic properties of the type Ia
supernovae light curves and the GRB burst durations are the same at all
redshifts. In other words, we are assuming that there is no evolution. Part of
this assumption for type Ia supernovae is that minor differences in the
subtypes and effects of the host galaxy do not have a significant dependence
on redshift. Hence their main effect is to increase the background noise.
Furthermore it is assumed that the wavelength dependence of a filter can be
replaced by a single value at the effective wavelength of the filter.
## 2 REDSHIFTS AND TIME DILATION
The Hubble redshift law states that distant objects appear, on average, to
have an apparent velocity of recession that is proportional to their distance.
Since this is consistent with models in General Relativity that have universal
expansion, such expanding models have become the standard cosmological
paradigm. Classically, this redshift was obvious because in these models
spectral lines are shifted in wavelength exactly like any other time-dependent
phenomena.
However quantum mechanics tells us that light is transmitted by photons whose
effective wavelength is determined from their momentum by the de Broglie
equation $\lambda=hc/E$ where $E$ is their energy and $\lambda$ is their
effective wavelength. Thus their effective wavelength is simply a measurement
of their energy and is not a proper wavelength in the classical sense.
Nevertheless it does describe how photons can be diffracted and their energy
measured by an interferometer. The Doppler effect and the universal expansion
are explained by an actual loss (or gain) of photon energy. A consequence is
that redshifts may be due to any process that causes a loss of photon energy.
Thus in quantum mechanics, the rigid nexus between the shifts in wavelength of
spectral lines and other time variations is broken.
## 3 COSMOLOGICAL CHARACTERISTICS OF LIGHT CURVES
Let us assume that the intrinsic radiation characteristics of type Ia
supernovae are independent of redshift and that in an expanding universe the
rate of universal expansion is constant for the duration of the light curve.
Since cosmology only controls the transmission of the light, it follows that
the shape of the received light-curve must be the same as the shape of the
intrinsic light curve but with different scale factors. In other words, the
cosmology can only change the peak flux density and the width of the light
curve. Consequently, all of the cosmological information is contained in the
dependence of these two variables with redshift. Thus, it is only necessary to
measure these two scaling parameters in order to investigate the cosmology of
supernovae light curves. Furthermore, we only need determine these two
parameters for intrinsic light curves in order to distinguish them from
cosmological effects.
Since the observed light curve is the intrinsic light curve multiplied by any
time dilation, then the observed width is the product of the intrinsic width
and the time dilation width. We assume that the time dilation width has the
power law of ${(1+z)}^{\alpha}$, where $\alpha$ is the exponent and has a
value of one for standard time dilation. By definition, the redshift, $z$, is
defined by $\epsilon=\lambda/{(1+z)}$, where $\lambda$ is the observed
wavelength and $\epsilon$ is the intrinsic (emitted) wavelength.
The observed wavelength is determined by the filter used and the redshift of
the supernova is usually measured from the observed wavelength shift of
emission or absorption lines. The intrinsic width is a function of the
intrinsic wavelength and we assume for part of this work that it has the power
law $\epsilon^{\beta}$, where $\beta$ is determined by observations. Although
the intrinsic light curve width almost certainly has a more complicated
function of wavelength, it is only that part of it that can be described by
this power law that will enable it to be confused with time dilation. Hence
the model used here is that the observed width, $w(\lambda)$ is
$w(\lambda)={(1+z)}^{\alpha}\epsilon^{\beta},$ (1)
where the reference width is one. Substituting for $\epsilon$ provides the
more informative equation
$w(\lambda)=\lambda^{\beta}{(1+z)}^{(\alpha-\beta)}.$ (2)
This shows the close relationship between intrinsic and cosmological widths.
Note that there is a common intrinsic variation in either light curve width or
absolute peak magnitude for all filters. This intrinsic curve replaces all the
k-corrections, colour corrections, and similar methods used to correct
observations to standard filters or redshifts.
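To make the use of Eq. 2 concrete, the following sketch (illustrative only; the toy widths, filter wavelengths and noise level are assumptions, not survey data) shows how the exponents $\alpha$ and $\beta$ could be estimated jointly by a linear regression of $\log w$ on $\log\lambda$ and $\log(1+z)$.
```python
# Minimal sketch: recover alpha and beta from per-filter widths via Eq. 2,
# log w = beta*log(lambda) + (alpha - beta)*log(1 + z).
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.uniform(0.01, 1.0, size=n)                          # redshifts
lam = rng.choice([4400.0, 5500.0, 6600.0, 8000.0], size=n)  # filter effective wavelengths
alpha_true, beta_true = 0.0, 0.25                           # e.g. no time dilation
w = lam**beta_true * (1.0 + z)**(alpha_true - beta_true)    # Eq. 2
w *= np.exp(rng.normal(0.0, 0.05, size=n))                  # multiplicative scatter

X = np.column_stack([np.log(lam), np.log1p(z)])             # regressors
coef, *_ = np.linalg.lstsq(X, np.log(w), rcond=None)
beta_hat = coef[0]
alpha_hat = coef[1] + beta_hat
print(f"alpha ~ {alpha_hat:.3f}, beta ~ {beta_hat:.3f}")
```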
## 4 INTRINSIC DEPENDENCE OF LIGHT CURVES
Observations of local type Ia supernovae show that the emission from the
expanding gas cloud is multicoloured and the intensity is a function of both
wavelength and time. A major practical problem is that the emitted wavelengths
are often much shorter than the observed wavelengths and since the shape and
size of the intrinsic light curve is a function of the wavelength the analysis
of observations requires that this intrinsic dependence is known. For high
redshift supernovae, many of the emitted wavelengths are outside the visual
range, which means that we cannot, in general, use nearby supernovae to obtain
the required calibrations.
An ingenious solution, exemplified by the SALT2 method (Guy et al., 2007,
2010), is to determine the calibration spectra from averaging the light curves
of many supernovae at many different redshifts. Because the only observations
available are from filters that cover a large wavelength range this is a
difficult process. This and similar methods carefully deconstruct the average
light curve, as a function of intrinsic wavelengths from a large number of
observations, and then generate a light-curve template for each intrinsic
wavelength. Thus the light curve for any particular intrinsic wavelength will
have contributions from supernovae at many observed wavelengths.
### 4.1 A flaw in the SALT2 method
However, there is a problem first described by Crawford (2017) with the SALT2
method of determining the characteristics of the intrinsic light curve. Let
$w(\lambda)$ be the observed width at, $\lambda$, and let $W(\epsilon)$ be the
width at the intrinsic (rest frame) wavelength $\epsilon$. (The use of $w$ and
$W$ was chosen to mimic the familiar use of $m$ and $M$ for magnitudes.)
Similarly, let $f(\lambda,z)$ and $F(\epsilon,z)$ be the observed and emitted
flux densities.
Equation 2 shows that there is a close correspondence between a systematic
variation in width with intrinsic wavelength and time dilation. Although this
is interesting, the extension to a wide range of wavelengths needs a more
refined analysis.
The supernovae observations typically consist of the flux-density measure in
filters that essentially cover the visual wavelengths. Since each filter has a
filter gain function, $g(\lambda)$ which is the fraction of power transmitted
per unit wavelength, then the flux density observed by a particular filter at
the wavelength $\lambda$ is given by
$f(\lambda,z)\propto\int g(\lambda^{*})F(\epsilon,z)\,d\lambda^{*},$ (3)
where $\lambda^{*}$ is a dummy integration variable. Now the width of the
light curve can be determined by measuring the difference between the two half
peak flux density points. To the first order, this process is a linear
function of the flux densities and will certainly be sufficiently linear in
the last step of an iterative process of measuring the width. Thus we can
apply Eq. 3 to the widths to get
$w(\lambda,z)=\int g(\lambda^{*})W(\epsilon)\,d\lambda^{*}.$ (4)
Now suppose the intrinsic light curves have a power-law wavelength dependence
so that $W(\epsilon)=\epsilon^{\beta}$ where $\beta$ is a constant. Then
including this power law in Eq. 4 gives
$w(\lambda,z)=\int
g(\lambda^{*})\left(\frac{\lambda^{*}}{1+z}\right)^{\beta}\,d\lambda^{*}.$ (5)
Since the $(1+z)$ term is independent of $\lambda^{*}$ it can be taken outside
of the integral to get
$w(\lambda,z)=A(1+z)^{-\beta},$ (6)
where
$A=\int g(\lambda)\lambda^{\beta}\,d\lambda.$ (7)
Clearly, $A$ is only a function of $\beta$ and the filter characteristics and
it is the same for all observations with this filter. Consequently, if the
intrinsic widths have a power-law dependence on wavelength, proportional to
$\epsilon^{\beta}$ and from Eq. 6, this will be seen as a power-law dependence
of the observed widths that is proportional to $(1+z)^{-\beta}$.
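A small numerical illustration of Eqs. 5-7 (the Gaussian filter response and the value of $\beta$ below are assumptions made for the demonstration, not an actual survey bandpass): with an intrinsic width law $\epsilon^{\beta}$ and no time dilation, the filter-integrated width scales with redshift exactly as $(1+z)^{-\beta}$.
```python
# Check Eq. 6: an intrinsic wavelength slope is indistinguishable from a
# power law in (1+z) after integration over the filter response.
import numpy as np

beta = 0.25
lam = np.linspace(3000.0, 9000.0, 2001)              # observed wavelengths (Angstrom)
g = np.exp(-0.5 * ((lam - 6000.0) / 500.0) ** 2)     # toy filter response g(lambda)
dlam = lam[1] - lam[0]

def filtered_width(z):
    # Eq. 5 with W(eps) = eps**beta and eps = lambda/(1+z)
    return np.sum(g * (lam / (1.0 + z)) ** beta) * dlam

for z in (0.0, 0.5, 1.0):
    print(z, filtered_width(z) / filtered_width(0.0), (1.0 + z) ** (-beta))
# the two printed columns agree to machine precision
```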
Conversely, if there is no intrinsic variation of the widths of the light
curve with wavelength but there is a time dilation with exponent $\alpha$,
then the derived intrinsic wavelength dependence in the SALT2 template from
multiple supernovae will have a power-law dependence with $\beta=-\alpha$.
In practice, this means that during the generation of a reference spectrum,
any observed time dilation is recorded in the templates as an intrinsic
wavelength dependence. When this is used to calibrate new observations that
(by definition) have the same time dilation, this redshift dependence in the
observations will be cancelled by the wavelength dependence in the template
and the calibrated widths will be independent of redshift.
Consequently, if the SALT2 or similar calibration method is used then any
cosmological information that was in the calibration observations in the form
of a power law of $(1+z)$, will be removed from subsequent analyses. Simply
put, the SALT2 calibration removes all power laws as function of $(1+z)$,
whether artificial or genuine, leaving the calibrated light curve without any
of this power-law information and therefore without any cosmological
information.
### 4.2 SALT2 template analysis
In order to remove the expected time dilation, the first step of the standard
SALT2 analysis is to divide all the epoch differences by ${(1+z)}$. If there
is no time dilation, this will produce an effective time dilation of
${(1+z)}^{-1}$. Figure 1 shows the relative width (in black and yellow points)
for each intrinsic wavelength of the light curves in the SALT2 template (cf.
Appendix A). Since there are clearly problems with some of the widths, shown
in yellow, the analysis was confined to the black points. The explanation for
the bad widths is unknown but one contributing factor could be poor data for
wavelengths between the filters. Shown in Figure 1 are some filter response
curves for the nearby supernovae where this effect would be most pronounced.
Figure 1: A plot, in black, of the relative widths of the light curves for
the SALT2 templates as a function of intrinsic wavelength. The blue line is
the best fit power law with an exponent $1.240\pm 0.014$. Yellow points are
assumed to be invalid. For comparison some of the filter response curves for
nearby supernovae are also shown.
The blue line is the best power-law fit of the black points and has an
exponent of $\beta=1.240\pm 0.011$. Then allowing for the initial division of
the epoch differences by ${(1+z)}$, either there is no time dilation
($\alpha=0$) and an intrinsic dependence with exponent $\beta=0.240\pm 0.011$,
or there is the standard time dilation ($\alpha=1$) and a large intrinsic
dependence with exponent $\beta=1.240\pm 0.011$. Since the initial division of
the epoch differences by ${(1+z)}$ could produce the slope $\beta=1.0$ this
analysis shows strong support for zero time dilation.
Note that if there is no time dilation and the effects of auxiliary parameters
are small, then the SALT2 stretch factors are estimates of the true width.
## 5 THE ANALYSIS OF TYPE Ia SUPERNOVAE LIGHT CURVES
### 5.1 The raw observations
Crawford (2017) describes the selection and analysis of the original
observations of type Ia supernovae light curves that have been selected by
Betoule et al. (2014), who have provided an update of the Conley et al. (2011)
analysis with better optical calibrations and more supernovae. This JLA (Joint
Light-curve Analysis) list sample has supernovae from the Supernova Legacy
Survey (SNLS),nearby supernovae (LowZ), the Sloan Digital Sky Survey (SDSS)
(Holtzman et al., 2008; Kessler et al., 2009) and those observed by the (HST)
(Hubble Space Telescope) (Riess et al., 2007; Jones et al., 2013). Also
included are 1169 supernova from the Pan-STARRS supernova survey (Kaiser et
al., 2010; Jones et al., 2018; Scolnic et al., 2018). The sources of the raw
observations are listed in the Appendix A.
For each type Ia supernova, the data used here was, for each filter, a set of
epochs with calibrated flux densities and uncertainties. The observations
taken with the $U$ and $u$ filters are very noisy, and following Conley et al.
(2011) and Betoule et al. (2014), the observations for these filters were not
used. Table 1 shows the number of type Ia supernovae, number of accepted
supernovae, and number of accepted filter data for each survey. Most of the
missing filter sets were because they did not have at least one observation
prior to five days before the peak flux density epoch and at least one five
days after the peak.
Table 1: Surveys and numbers
Survey | Nsupernovae | Naccepted | Nfilters
---|---|---|---
LOWZ | 241 | 96 | 290
SDSS | 625 | 485 | 1563
SNLS | 252 | 196 | 463
PS1MD | 1169 | 731 | 1362
HST | 46 | 6 | 6
Total | 2333 | 1514 | 3684
### 5.2 Type Ia supernovae reference template
The essential aim of this analysis is to determine $\alpha$ and $\beta$ in Eq.
1, by examining the raw observations of type Ia supernovae. A critical part of
any investigation into type Ia supernovae light curves is to have a reference
template. In order to remove any possible bias, a standard independent
template, the $B$ band Parab-18 from Table 2 of Goldhaber et al. (2001), which
has a half-peak width of 22.3 days, has been used. Then the procedure is (for each
supernova) to determine the observed width of the light curve for each filter,
relative to the template light curve, and then estimate $\alpha$ and $\beta$
from all the widths as a function of redshift.
### 5.3 The analysis of raw observations
As an example of a typical type Ia supernova Figure 2 shows the light curves
for four filters for the SNLS supernova SN2007af with the filters used being
shown in the legend (Goobar & Leibundgut, 2011). Accepted data points are
shown as coloured symbols whereas the rejected points are shown with an open
square.
The first feature to notice is that the epoch of the peak flux density depends
on the filter type and is therefore a function of the intrinsic wavelength.
Secondly, there is a secondary peak at about 25 days after the first peak for
the longer wavelength filters. Although this second peak is intrinsic to the
supernova, it does not appear to be very consistent (Elias et al., 1981;
Meikle & Hernandez, 2000; Goobar & Leibundgut, 2011). Consequently, as shown
in Figure 2, all filters, except $B$, $j$, and HST, had their epochs more than
15 days after the main peak rejected.
Figure 2: The light curves for the SNLS type Ia supernova SN2007af. Valid
points are shown as full squares and invalid points as open squares. To avoid
confusion, the filter results have been vertically displaced. The secondary
peak is clearly apparent.
Unfortunately, the direct analysis of the data to obtain the epoch of the peak
flux density, the value of the peak flux density and the light curve width
using a $\chi^{2}$ method has an intrinsic problem in that the position of the
peak flux density and the width are not completely independent. However the
value of the peak flux density is almost independent of the width estimate.
The first step was therefore to estimate the value of the peak flux density using a
minimum $\chi^{2}$ procedure. Next the program uses the reference light curve
and the ratio of the flux density to the peak flux density to obtain a flux-density
epoch. This epoch has an uncertainty equal to the flux density
uncertainty divided by the absolute value of the slope of the reference light
curve at that epoch. Then a simple weighted regression of the observed epochs
versus the flux-density epochs provides the peak flux-density offset and the
width of the light curve. This regression estimate of the peak flux-density offset
was ignored; instead, the offset was found by minimising the standard $\chi^{2}$ for
the flux densities, calculated using the estimates for the widths and the flux
densities.
This method has the bonus of providing uncertainty estimates for the widths.
Note that each supernova had separate values for the peak flux density and the
width for each filter.
One problem that was noticed is that the range of the uncertainties in the
flux densities for many filters was too large. There were 336 cases out of
42,818 filter sets where the ratio of the smallest uncertainty to the largest
uncertainty was less than 10%. The problem is that one of these very precise
flux densities could have a weight one hundred times larger than other flux
densities which could produce anomalous results. The observation method for
most of these supernovae is to observe the same patch of sky with the same
telescope and settings for each epoch. Although there can be nights with bad
seeing, the expected uncertainty in the flux density is the same for each
observation. Consequently all the flux density uncertainties were replaced by
a common value of 3% of the peak flux density.
After all the parameters were estimated for a particular filter and the
supernova, the flux density for each epoch was tested to see if it was an
outlier. This was done by computing a value
$l_{i}=f_{i}/f_{peak}-h_{i},$ (8)
where for each epoch, $f_{i}$ is its flux density, $f_{peak}$ is the peak flux
density, $h_{i}$ is the height of the reference light curve at that epoch, and
$i$ is its index. An epoch was then rejected as an outlier if the value
$|l_{i}-\bar{l}|$, where $\bar{l}$ is the average of $l_{i}$ over all the other epochs,
was greater than five times the rms of $l_{i}$ over all the other epochs. The
epoch with the largest discrepancy was eliminated, then a full analysis was
repeated and this continued until there were no more outliers. Out of 30,850
accepted epochs, there were 964 (3%) outliers.
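A minimal sketch of this iterative rejection (illustrative Python with hypothetical names; the full analysis re-estimates all fit parameters after each rejection, which is omitted here):

```python
import numpy as np

def reject_outliers(fluxes, f_peak, template_heights, n_sigma=5.0):
    """Iteratively flag outlier epochs using l_i = f_i/f_peak - h_i (Eq. 8)."""
    keep = np.ones(len(fluxes), dtype=bool)
    while keep.sum() > 2:
        l = fluxes / f_peak - template_heights
        worst, worst_dev = None, 0.0
        for i in np.flatnonzero(keep):
            others = keep.copy()
            others[i] = False
            l_others = l[others]
            dev = abs(l[i] - l_others.mean())
            if dev > n_sigma * l_others.std() and dev > worst_dev:
                worst, worst_dev = i, dev
        if worst is None:            # no remaining outliers
            break
        keep[worst] = False          # drop the most discrepant epoch, then repeat
    return keep
```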
The major selection criterion for each valid supernova was that, for each
filter, there was at least one epoch more than five days before the peak epoch
and at least one more than five days after the peak epoch, and that there were at
least 4 valid epochs. In addition, in order to show a reasonable fit to the light
curve, the width uncertainty had to be greater than 0.005 and less than 0.3.
### 5.4 Light curve widths
Figure 3: The $\chi^{2}$ fit for light curve widths as a function of $\alpha$.
For each $\alpha$ the intrinsic width is set to be $\beta=\alpha-0.352$.
Figure 4: Plots of the observed widths of the type Ia light curves
as a function of intrinsic wavelength (left) and redshift (right). The symbol and
colour for each filter are shown in the legend. In the left figure the black line
is the average value as a function of $\epsilon$. In the right figure the black
line shows the width for $\alpha=0$ and the red line shows the width for standard
time dilation $\alpha=1$. In order to avoid clutter only the HST observations
(shown in black) have uncertainty estimates.
Because of Eq. 2 there is a degeneracy in measuring $\alpha$ and $\beta$ from the
raw observations. However Eq. 2 shows that a regression of the logarithm of
the width versus the logarithm of $(1+z)$ will provide an estimate of the
combined exponent $\alpha-\beta$. The result for this raw width is
$\alpha-\beta=0.324\pm 0.025.$ (9)
One way to determine which time dilation is appropriate for these observations
is to plot the $\chi^{2}$ of the fit as a function of $\alpha$, where for each
$\alpha$ we put $\beta=\alpha-0.324$, thus preserving the raw width. The
$\chi^{2}$ function is
$\chi^{2}=\sum_{i}[\log(w_{i})-A-\alpha\log(1+z_{i})-\beta\log(\lambda_{i}/(1+z_{i}))]^{2},$
(10)
where the summation is over all the observations with index $i$, $w_{i}$ is
the width, $\lambda_{i}$ is the observed wavelength of the filter, and $A$ is a
dummy variable that is determined for each $\alpha$.
The result is shown in Figure 3. Clearly the best fit is at $\alpha\approx
0.2$, which is compatible with zero time dilation and implies that
$\beta\approx-0.124$.
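The scan itself is straightforward; the sketch below implements Eq. 10 with $\beta$ tied to $\alpha$ so that $\alpha-\beta=0.324$ (illustrative Python with assumed variable names, not the author's code):

```python
import numpy as np

def chi2_scan(widths, z, lam_obs, alphas, raw_exponent=0.324):
    """Scan chi^2 of Eq. 10 over alpha, with beta = alpha - raw_exponent."""
    logw = np.log10(widths)
    chi2 = []
    for alpha in alphas:
        beta = alpha - raw_exponent
        model = alpha * np.log10(1 + z) + beta * np.log10(lam_obs / (1 + z))
        A = np.mean(logw - model)          # dummy constant, fitted per alpha
        chi2.append(np.sum((logw - A - model) ** 2))
    return np.array(chi2)

# Example usage with toy arrays (for illustration only):
# alphas = np.linspace(-0.5, 1.5, 81)
# best_alpha = alphas[np.argmin(chi2_scan(widths, z, lam_obs, alphas))]
```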
This is supported by another estimate of $\beta$ from the average value of the
logarithm of the ratio of the widths of two filters from the same supernova.
Although this method assumes that both filters share the same cosmological width,
it is independent of the value of that width. For the 496 supernovae that had valid
observations in the $g$ and $z$ filters the estimate is $\beta=-0.302\pm
0.023$.
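One plausible reading of this two-filter estimator is sketched below (illustrative Python; the $g$ and $z$ wavelengths are taken from Table 7, and the estimator form is an assumption based on the width model $w\propto(1+z)^{\alpha}(\lambda/(1+z))^{\beta}$, in which the $(1+z)$ factors cancel in the ratio):

```python
import numpy as np

def beta_from_filter_pair(w_g, w_z, lam_g=0.472, lam_z=0.888):
    """Estimate beta from the g- and z-filter widths of the same supernovae.

    If width ~ (1+z)^alpha * (lambda/(1+z))^beta, then the ratio of the two
    filter widths for one supernova is (lam_g/lam_z)^beta, independent of
    alpha and of the redshift.
    """
    log_ratio = np.log10(np.asarray(w_g) / np.asarray(w_z))
    scale = abs(np.log10(lam_g / lam_z))
    beta = log_ratio.mean() / np.log10(lam_g / lam_z)
    beta_err = log_ratio.std(ddof=1) / np.sqrt(len(log_ratio)) / scale
    return beta, beta_err
```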
A more general and better analysis is to assume that $\alpha=0$ and then obtain
the intrinsic widths as a function of $\epsilon$ rather than restrict them to a
simple function $(1+z)^{\beta}$. This is easily done by averaging the widths
for each value of $\epsilon$. Figure 4 shows a scatter distribution of the
intrinsic width for each observation together with its average value. For
reference purposes the value of the average exponent $\beta$ was also
estimated. For 3,684 accepted widths the result was $\beta=0.212\pm 0.007$.
This value is in excellent agreement with the SALT2 value of $\beta=0.240\pm
0.011$ (Section 4.2).
The final step is to estimate $\alpha$ by a regression of the logarithm of
the raw widths, corrected for the intrinsic width, versus $\log(1+z)$. The correction
was done by dividing the observed width by the intrinsic width. The regression
equation is
$w_{cosmological}=(-0.044\pm 0.004)(1+z)^{0.083\pm 0.024}.$ (11)
This exponent is close to zero, which strongly favours a static
universe with no time dilation. Figure 4 shows the intrinsic distribution and
the plot of the cosmological widths for the 3,684 filters from 2,333 type Ia
supernovae.
### 5.5 Redshift dependence of the light curve widths
The observed light curve for each of four redshift ranges was computed for
each epoch in the range from -15 days to 40 days from the peak flux-density
epoch. This was done by selecting all relative flux densities that were within
one day of this epoch and setting the value of the light curve to be the
median of these selected flux densities. The median was used because it is
insensitive to extreme values. Although this method is using the same data,
its advantage is that it depends only on the relative flux density for each
epoch and does not depend on the fitting procedure. This is similar to the
type of analysis done by Goldhaber et al. (2001) and Blondin et al. (2008).
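A minimal sketch of this composite-curve construction (illustrative Python; the epochs and relative flux densities are assumed to be given as arrays, with the epochs already corrected for the intrinsic width):

```python
import numpy as np

def median_light_curve(epochs, rel_fluxes, grid=np.arange(-15, 41), half_window=1.0):
    """Median composite light curve: for each grid epoch, take the median of
    all relative flux densities observed within one day of that epoch."""
    curve = np.full(len(grid), np.nan)
    for k, t in enumerate(grid):
        sel = np.abs(epochs - t) <= half_window
        if sel.any():
            curve[k] = np.median(rel_fluxes[sel])
    return grid, curve
```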
Figure 5: The type Ia supernovae light curves for four redshift ranges for
the static model. The legend shows the colour for each redshift range. The
template light curve is shown in black. The epochs have been corrected for the
intrinsic width by multiplying each epoch difference by the appropriate
intrinsic width.
The results are presented graphically in Figure 5 which shows the average
light curve for four ranges of redshift. The black curve shows the master
template light curve. Table 2 shows the redshift range, the mean redshift, the
number of points, and the average width for each range. Note that the observed
width for each epoch has been corrected for its intrinsic component by
multiplying the epoch distance from the peak flux-density epoch by the
intrinsic width shown by the black line in Figure 4.
Table 2: Light curve widths for four redshift ranges
Range | $\bar{z}$ | number | Width
---|---|---|---
0.00-0.15 | 0.069 | 25 | $0.903\pm 0.005$
0.15-0.30 | 0.225 | 25 | $0.954\pm 0.009$
0.30-0.50 | 0.383 | 25 | $1.059\pm 0.024$
0.50-1.30 | 0.649 | 25 | $1.195\pm 0.033$
The power law fit for these four widths with respect to $(1+z)$ has an
exponent of $0.023\pm 0.012$, showing a negligible dependence on redshift.
### 5.6 Type Ia supernovae peak magnitudes
Figure 6: Scatter plots of the intrinsic absolute magnitude as a function of
$\epsilon$ and the cosmological peak absolute magnitude as a function of
redshift for the $\Lambda$-CDM model. The black line in the left figure shows
the average value of the intrinsic magnitude.
Figure 7: Scatter plots of the intrinsic absolute magnitude as a function of
$\epsilon$ and the cosmological peak absolute magnitude as a function of
redshift for Curvature Cosmology. The black line in the left figure shows the
average value of the intrinsic magnitude.
Since the observed light curve is the intrinsic light curve multiplied by time
dilation, the observed flux density is the product of the intrinsic flux
density and the cosmological scaling factor. There is strong evidence from
nearby supernovae that the expected peak absolute magnitude is the same for
each supernova. Clearly, this commonality requires a valid distance modulus
coming from a valid cosmological model. The analysis is done for the standard
$\Lambda$-CDM cosmology and for a static cosmology. A suitable static model is
Curvature Cosmology which is a complete cosmology that shows excellent
agreement with all major cosmological observations. A brief description of the
cosmology is given later (Section 7).
For convenience all the flux densities were converted into AB magnitudes. The
analysis procedure used is identical to that used above for the light curve
widths which starts with the measurement of the intrinsic absolute magnitudes
which are then subtracted from the raw magnitudes to get the cosmological
magnitudes. Table 3 shows the results for both cosmologies for the exponent
$\alpha$ from the regression of the absolute peak magnitudes versus
$-2.5\log_{10}(1+z)$. The top row is for the raw peak magnitudes and the
bottom row is for the cosmological peak magnitudes.
Table 3: Peak magnitude exponents
Parameter | $\Lambda$-CDM | Curvature Cosmology
---|---|---
$\alpha_{raw}$ | $0.182\pm 0.030$ | $-0.385\pm 0.043$
$\alpha_{cosmological}$ | $-0.014\pm 0.030$ | $0.050\pm 0.030$
Both models are consistent with the peak absolute magnitude being independent
of redshift. Since the type Ia supernovae observations have been a major
contribution to the $\Lambda$-CDM model it is not surprising that it provides a
good fit to this data.
The good fit of Curvature Cosmology is a strong endorsement of this model.
Note that it was formulated long before good supernovae observations became
available and its distance modulus has no fitted parameters except for $H$
(Eq. 20), which is an additive constant. Figure 6 shows the intrinsic absolute
magnitude as a function of $\epsilon$ and the cosmological peak absolute
magnitude as a function of redshift for the $\Lambda$-CDM model. Figure 7
shows the same results for Curvature Cosmology.
## 6 GAMMA RAY BURSTS
The website of the Neil Gehrels Swift Observatory, which runs the Swift
satellite, that contains the Burst Alert Telescope (BAT) describes them as:
“Gamma-ray bursts (GRBs) are the most powerful explosions the Universe has
seen since the Big Bang. They occur approximately once per day and are brief,
but intense, flashes of gamma radiation. They come from all different
directions of the sky and last from a few milliseconds to a few hundred
seconds.” An important characteristic of the BAT is that it has a photon
counting detector (Barthelmy et al., 2005) that detects photons in the 15-150
keV energy range with a resolution of about 7 keV. It can also image up to 350
keV without position information. An important parameter for each burst is T90,
which is a measure of the burst duration. The start and end times of T90 are
defined as the times at which the fraction of photons in the accumulated light
curve reaches 5% and 95%.
The Third Swift Burst Alert Telescope Gamma-Ray Burst Catalog (Lien et al.,
2016) states that “Many studies have shown that the observed burst durations
do not present a clear-cut effect of time dilation for GRBs at higher
redshift.” Indeed the upper panel of their Figure 25 shows that there is no
obvious trend of the burst length with redshift except for a decrease in the
number of short bursts with larger redshifts. This shows some support for the
“tip-of-the-iceberg” effect which is sometimes used to explain the lack of
strong time dilation in the GRB durations. However, there is no obvious change
in the duration of longer bursts with redshift.
Now the number distribution of photons in GRB bursts is close to a power law
with an exponent of about -1.6. As the redshift increases, the number of
detectable photons will rapidly decrease as many more photons will be below
the detector limit. If we assume that the distribution of photons as a
function of energy is independent of the position of the photon in the burst,
then there should be no expected change in T90 with redshift. On the other
hand, if the higher energy photons are clustered towards the centre of the
GRB, then the intrinsic T90 should decrease with increasing redshift.
Consequently, we would expect to see the normal time dilation or maybe a
little less in the T90 measurements.
This analysis directly examines the exponent of a power-law regression of the
measured T90 of the raw GRB data (cf. Appendix B), restricted to burst durations
above 2 seconds, as a function of $(1+z)$ (Lien et al., 2016). Since no
T90 uncertainties were provided, the analysis used an unweighted regression. The
power-law fit for the T90 duration gives the exponent shown in row 1
of Table 4, which is consistent with no time dilation. The problem with this
and similar analyses is that the variables have a very large scatter in values,
which would require a very large number of GRBs to achieve absolutely conclusive
results.
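For reference, an unweighted log-log regression of this kind is only a few lines (illustrative Python; the data arrays are assumed to come from the Swift table of Appendix B):

```python
import numpy as np

def t90_exponent(t90, z):
    """Unweighted power-law fit T90 ~ (1+z)^a via a log-log linear regression."""
    x = np.log10(1.0 + np.asarray(z))
    y = np.log10(np.asarray(t90))
    a, intercept = np.polyfit(x, y, 1)
    resid = y - (a * x + intercept)
    a_err = np.sqrt(resid.var(ddof=2) / np.sum((x - x.mean()) ** 2))
    return a, a_err
```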
In a recent analysis Zhang et al. (2013) claim that the GRB T90 widths are
consistent with an expanding universe. They measured T90 in the observed
energy range between $140/(1+z)\,$keV and $350/(1+z)\,$keV, corresponding to
an intrinsic energy range of $140-350\,$keV. Their exponent for these selected
widths is $0.94\pm 0.26$, which is consistent with the standard expanding model.
My reanalysis of their raw T90 widths, using the data in their Table 1, is
shown in rows 2 and 3 of Table 4 and is consistent with no time dilation.
The Swift and Zhang et al. (2013) unselected T90 widths are displayed in
Figure 8. Although they have many GRBs in common, there are small differences in
the T90 widths. This is because Zhang et al. (2013) have used their own
analysis of the original data to get their own values for the T90 widths.
Examination of Figure 8 shows the large scatter of the T90 widths; it also
shows that they are consistent with no time dilation and are unlikely to be
consistent with standard time dilation.
My determination of the exponents of their energy selected widths as a
function of $(1+z)$ is shown in rows 4 and 5 of Table 4. The unweighted result
in row 4 agrees with their result. However the exponent for the weighted
analysis shown in row 5 is consistent with no time dilation.
Table 4: Exponents for redshift dependence of GRB
Row | Data | Weight$^{a}$ | N | Exponent
---|---|---|---|---
1 | Swift | U | 298 | $0.39\pm 0.17$
2 | Zhang$^{b}$ | U | 139 | $0.10\pm 0.26$
3 | Zhang$^{b}$ | W | 139 | $-0.16\pm 0.20$
4 | Zhang$^{c}$ | U | 139 | $0.94\pm 0.26$
5 | Zhang$^{c}$ | W | 139 | $0.31\pm 0.23$
$^{a}$ U denotes an unweighted fit, W denotes a weighted fit.
$^{b}$ Raw T90 from Zhang et al. (2013) in their Table 1.
$^{c}$ T90,z for the intrinsic energy range of 140-350 keV.
The use of energy selection for T90 implies that there is an intrinsic
dependence of burst duration on the photon energy. Since the BAT has a photon
counting detector, any measurement of T90 is independent of the selected
photon energies. The only restriction is that the photon energies must be
within the detector limits. Thus BAT does not have the energy selection that
is necessary for this analysis. Furthermore, it is difficult to understand how
any subset of photons that are detected can have a different time dilation
from the rest of the photons in the same GRB. If we ignore the energy-selected
Zhang et al. (2013) results, the conclusion is that the burst lengths of GRBs are
consistent with no time dilation and provide very little support for the standard
model.
Figure 8: A plot of T90 as a function of redshift. The green line shows the
line for no time dilation, the red line shows the line for standard time
dilation, and the black line shows a time dilation with the fitted exponent of
$0.39$. The Swift data are shown in blue filled circles and the Zhang (Zhang
et al., 2013) data are shown as red diamonds.
## 7 CURVATURE COSMOLOGY
Curvature Cosmology (Crawford, 1987a, b, 1991, 1993, 1995a, 1995b, 1999, 2006,
2009b, 2009a, 2010) is a complete cosmology for a static universe that shows
excellent agreement with all major cosmological observations without needing
dark matter or dark energy. (Note that Crawford (2010) is an update with
corrections to the previous work.) This cosmology depends on the hypotheses of
Curvature Redshift and Curvature Pressure described below.
The basic cosmological model is one in which the cosmic plasma dominates the
mass distribution and hence the curvature of space-time. In this first-order
model, the effects of galaxies and stars are neglected. The geometry of this
cosmology is that of a three-dimensional surface of a four-dimensional hyper-
sphere. It is almost identical to that for Einstein’s static universe. For a
static universe, there is no ambiguity in the definition of distances and
times. One can use a cosmic time and define distances in light travel times or
any other convenient measure. Curvature Cosmology obeys the perfect
cosmological principle of being statistically the same at all places and at
all times.
### 7.1 Curvature Redshift
The derivation of Curvature Redshift is based on the fundamental hypothesis of
Einstein’s general theory of relativity that space-time is curved. As a
consequence, for positive curvature, the trajectories of initially parallel
point particles, geodesics, will move closer to each other as time increases.
Consequently the cross-sectional area of a bundle of geodesics will slowly
decrease.
In applying this idea to photons, we assume that a photon is described in
quantum mechanics as a localised wave where the geodesics correspond to the
rays of the wave. Note that this wave is quite separate from an
electromagnetic wave that corresponds to the effects of many photons. It is
fundamental to the hypothesis that we can consider the motion in space-time of
individual photons. Because the curvature of space-time causes the focussing
of a bundle of geodesics, this focussing also applies to a wave. As the photon
progresses, the cross-sectional area of the wave associated with it will
decrease. However, in quantum mechanics properties such as angular momentum
are computed by an integration of a radial coordinate over the volume of the
wave. If the cross-sectional area of the wave decreases, then the angular
momentum will also decrease. However, angular momentum is a quantised
parameter that has a fixed value.
The solution to this dilemma is that the photon splits into two very low-
energy photons and a third that has the same direction as the original photon
and nearly all the energy. It is convenient to consider the interaction as a
primary photon losing a small amount of energy to two secondary photons. This
energy loss will be perceived as a small decrease in frequency. By symmetry
the two secondary photons with identical energies are emitted at right angles
to the trajectory, which means that there is no apparent angular scattering.
Since in quantum mechanics electrons and other particles are considered as
waves, a similar process will also apply. It is argued that electrons will
interact with curved space-time to lose energy by the emission of very low-
energy photons.
From Crawford (2010) we get the basic equation for the fractional change in
energy of the photon. This is based on the equation of geodesic deviation
(Misner et al., 1973).
$\frac{1}{E}\frac{dE}{ds}=-\left(\frac{8\pi
G\rho}{c^{2}}\right)^{1/2}=-1.366\times 10^{-13}\sqrt{\rho}\,\mbox{m}^{-1}.$
(12)
For many astrophysical types of plasma, it is useful to measure density by the
equivalent number of hydrogen atoms per cubic metre: that is, we can put
$\rho=NM_{\rm H}$ and get
$\frac{1}{E}\frac{dE}{ds}=-\sqrt{\left(\frac{8{\pi}GNM_{H}}{c^{2}}\right)}=-5.59\times
10^{-27}\sqrt{N}\,\mbox{m}^{-1}.$ (13)
The rate of energy loss per distance travelled depends only on the square root
of the density of the material, which may consist of gas, plasma, or dust. For
many astrophysical plasmas the frequency of the emitted photons will be less
than the plasma frequency and they will be absorbed and heat the plasma.
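As a quick numerical check of the coefficient in Eq. 13 (illustrative Python; the density $N=1.55$ is the value used later in Section 7.9):

```python
import numpy as np

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1
M_H = 1.6726e-27     # mass of a hydrogen atom, kg
Mpc = 3.0857e22      # one megaparsec in metres

coeff = np.sqrt(8 * np.pi * G * M_H / c**2)   # the 5.59e-27 m^-1 of Eq. 13
N = 1.55                                       # cosmic plasma density

print(coeff)                      # ~5.59e-27 per metre (per sqrt(N))
print(coeff * np.sqrt(N) * Mpc)   # fractional energy loss per Mpc, ~2.1e-4
```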
Another important factor is that if there is any other competing interaction
which occurs before the secondary photons are produced, it will inhibit the
Curvature Redshift. Such an interaction is the coherent multiple scattering
that produces the refractive index. This can be important for ground-based
experiments and for radio frequency observations in the Galaxy. For example,
most lower frequency radio observations in our galaxy will be unaffected by
Curvature Redshift.
### 7.2 Curvature Pressure
The hypothesis of Curvature Pressure is that for moving particles there is a
pressure generated that acts back on the matter that causes the curved space-
time. In this case, Curvature Pressure acts on the matter that is producing
curved space-time in such a way as to try to decrease the curvature. In other
words, the plasma produces curved space-time through its density entering the
stress-energy tensor in Einstein's field equations, and the action of the
velocities of the plasma particles is to try to decrease this curvature.
A simple cosmological model using Newtonian physics illustrates some of the
basic physics subsequently used to derive the features of Curvature Pressure.
The model assumes that the universe is composed of gas confined to the three-
dimensional surface of a four-dimensional hyper-sphere. Since the
visualisation of four dimensions is difficult let us suppress one of the
normal dimensions and consider the gas to occupy the two-dimensional surface
of a normal sphere. From Gauss’s law (i.e. the gravitational effect of a
spherical distribution of particles with radial symmetry is identical to that
of a point mass equal in value to the total mass situated at the centre of
symmetry) the gravitational acceleration at the radius $r$ of the surface is
normal to the surface, directed inward and it has the magnitude
$\ddot{r}=-GM/r^{2}$ where $M$ is the total mass of the particles and the dots
denote a time derivative. For equilibrium, and assuming all the particles have
the same mass and velocity we can equate the radial acceleration to the
gravitational acceleration and get the simple equation from celestial
mechanics of
$v^{2}=\frac{{GM}}{{r}}.$
The effect of this balancing of the accelerations against the gravitational
potential is seen within the shell as a Curvature Pressure that is a direct
consequence of the geometric constraint of confining the particles to a shell.
If the radius $r$ decreases then there is an increase in this Curvature
Pressure that attempts to increase the surface area by increasing the radius.
For a small change in radius in a quasi-equilibrium process where the particle
velocities do not change the work done by this Curvature Pressure (two
dimensions) with an incremental increase of area $dA$ is $p_{\rm c}dA$ and
this must equal the gravitational force times the change in distance to give
$p_{\rm c}dA=\frac{{GM^{2}}}{{r^{2}}}\,dr,$
where $M=\sum{m_{i}}$ with the sum going over all the particles. Therefore,
using equation (7.2) we can rewrite the previous equation in terms of the
velocities as
$p_{\rm c}dA=\frac{{M\left\langle{v^{2}}\right\rangle}}{r}\,dr.$
Now $dA/dr=2A/r$, hence the two-dimensional Curvature Pressure is
$p_{\rm c}=\frac{{M\left\langle{v^{2}}\right\rangle}}{{2A}}.$
Thus in this two-dimensional model the Curvature Pressure is like the average
kinetic energy per unit area. This simple Newtonian model provides a guide as
to what the Curvature Pressure would be in the full General Relativistic
model.
The extension to different particle masses and velocities uses the basic
property of General Relativity that gravitation is an acceleration and not a
force. This is supported by Eötvös, Pekár & Fekete (1922), Dicke (1964), and
Braginskij & Panov (1971) who have shown that the passive gravitational mass
is equal to the inertial mass to about one part in $10^{12}$. The usual
interpretation of this agreement is that they are fundamentally the same
thing. However, an alternative viewpoint is that the basic equation is wrong
and that the passive gravitational mass and the inertial mass should not
appear in Newton’s gravitational equation. Consequently Newton’s gravitational
equation is an equation of accelerations and not of forces. The equation for
Curvature Pressure in a 3-dimensional high-temperature plasma is
$p_{\rm c}=\frac{1}{3}\left\langle{\gamma^{2}-1}\right\rangle\rho c^{2},$ (14)
where $\gamma$ is the Lorentz factor and $\langle\rangle$ denotes an average.
In effect, my hypothesis is that the cosmological model must include this
Curvature Pressure as well as thermodynamic pressure. Note that although this
has a similar form to thermodynamic pressure it is quite different. In
particular, it is proportional to an average over the squared velocities and
the thermodynamic pressure is proportional to an average over the kinetic
energies. This means that, for plasma with free electrons and approximate
thermodynamic equilibrium, the electrons will dominate the average due to
their much larger velocities.
Including Curvature Pressure in the Friedmann equations provides a stable
static cosmological model. With the Curvature Pressure from Eq. 14 the
modified Friedmann equations are
$\displaystyle\ddot{R}$ $\displaystyle=$ $\displaystyle-\frac{4\pi
G\rho}{3}\left[{1-\left\langle{\gamma^{2}-1}\right\rangle}\right]R,$
$\displaystyle\dot{R}^{2}$ $\displaystyle=$ $\displaystyle\frac{8\pi
G\rho}{3}R^{2}-c^{2}.$
Clearly, there is a static solution if $\langle\gamma^{2}-1\rangle=1$, in which case
$\ddot{R}=0$. The second equation, with $\dot{R}=0$, provides the radius of the
universe, which is given by
$R=\sqrt{\frac{3c^{2}}{8\pi G\rho}}{\mbox{\, }}=\sqrt{\frac{3c^{2}}{8\pi
GM_{\rm H}N}}.$ (15)
Thus, the model is a static cosmology with positive curvature. Although the
geometry is similar to the original Einstein static model, this cosmology
differs in that it is stable. The basic instability of the static Einstein
model is well known (Tolman, 1934; Ellis, 1984). On the other hand, the
stability of Curvature Cosmology is shown by considering a perturbation
$\Delta R$, about the equilibrium position. Then the perturbation equation is
$\Delta\ddot{R}\propto\left(\frac{d\langle\gamma^{2}-1\rangle}{dR}\right)\Delta
R.$ (16)
For any realistic equation of state for the cosmic plasma, the average
velocity will decrease as $R$ increases. Thus the right-hand side is negative,
showing that the result of a small perturbation is for the universe to return to
its equilibrium position. Thus, Curvature Cosmology is intrinsically stable.
Of theoretical interest is that Eq. 16 predicts that oscillations could occur
about the equilibrium position.
### 7.3 X-ray background radiation
Since Giacconi et al. (1962) observed the X-ray background, there have been
many suggestions made to explain its characteristics. Although much of the
unresolved X-ray emission comes from active galaxies, there is a part of the
spectrum between about 10 keV and 1 MeV that is not adequately explained by
emission from discrete sources.
Curvature Cosmology can explain the X-ray emission in the energy range from
about 10 keV to 300 keV as coming from a very hot intergalactic plasma. A
simple model has a mixture of hydrogen with 8% helium and a measured density
of $N=1.55\pm 0.01$ hydrogen atoms per cubic metre or $2.57\times
10^{-27}\,$kg m${}^{-3}$.
For this density the predicted temperature of the cosmic plasma is $2.56\times
10^{9}\,$K. The temperature estimated from fitting the X-ray data is
$(2.62\pm 0.04)\times 10^{9}\,$K which is a good fit. Although this is similar
to early explanations of the X-ray emission, it differs in that it depends on
the current plasma density. The earlier explanations required the X-ray
emission to come from a plasma with about three times that density which
conflicted with other observations.
### 7.4 Nuclear abundances
One of the successes of the standard model is in its explanation of the
primordial abundances of the light elements. In Curvature Cosmology, the
primordial abundance refers to the abundance in the cosmic plasma from which
the galaxies are formed.
The first point to note is that the predicted temperature of the cosmic plasma
is $2.56\times 10^{9}{\mbox{ K}}$ at which temperature nuclear reactions can
proceed. It is postulated that there is a continuous recycling of material
from the cosmic gas to galaxies and stars and then back to the gas. Because of
the high temperature, nuclear reactions will take place whereby the more
complex nuclei are broken down to hydrogen, deuterium, and helium.
### 7.5 Cosmic microwave background radiation
The Cosmic microwave background radiation (CMBR) is produced by very high
energy electrons via Curvature Redshift radiation in the cosmic plasma. With
$N=1.55$ the predicted temperature of the CMBR is 3.18 K to be compared with
an observed value of 2.725 K (Mather et al., 1990). This prediction does
depend on the nuclei mix in the cosmic plasma and could vary from this value
by several tenths of a degree.
Although the CMBR photons are subject to continuous Curvature Redshift they
will be quantised, and since all energy levels are freely available, the black
body (Planck) function is their thermal equilibrium spectrum.
Differences in the local environment, especially high density lower
temperature gas clouds, will decrease the flux density of the CMBR and could
explain some of the observed spatial fluctuations in the CMBR.
### 7.6 No Dark matter and galactic rotation
In 1937 Zwicky (1937) found in an analysis of the Coma cluster of galaxies
that the ratio of total mass obtained by using the virial theorem to the total
luminosity was 500 whereas the expected ratio was 3. The virial theorem is a
statistical theorem that states that for an inverse square law the average
kinetic energy of a bound system is equal to half the potential energy. This
huge discrepancy was the start of the concept of dark matter. It is surprising
that in more than eight decades since that time there is no direct evidence
for dark matter. Similarly the concept of dark energy (some prefer
quintessence) has been introduced to explain discrepancies in the observations
of type Ia supernovae.
X-ray observations show that the Coma cluster has a large plasma cloud in its
centre. The Curvature Cosmology model is that the galactic velocity dispersion
in the cluster is entirely due to Curvature Redshift of photons passing
through the central plasma cloud. For 583 galaxies the rms (root-mean-square)
velocity was 893 km s-1 and the computed theoretical value was $554$ km s-1.
Considering that it was assumed that both the galaxy distribution and the plasma
distribution had very simple geometries, this shows that Curvature Cosmology
can explain the velocity dispersion in the Coma cluster.
One of the most puzzling questions in astronomy is why the observed velocity
of rotation in spiral galaxies does not go to zero towards the edge of the
galaxy. Simple Keplerian mechanics suggests that there should be a rapid rise
to a maximum and then a decrease in velocity that is inversely proportional to
the square root of the radius once nearly all the mass has been passed.
Although the details vary between galaxies, the observations typically show a
rapid rise and then an essentially constant tangential velocity as a function
of radius out to distances where the velocity cannot be measured due to lack
of material. The standard explanation is that this is due to the gravitational
attraction of a halo of dark matter that extends well beyond the galaxy.
Observations show that our own Galaxy and other spiral galaxies have a gas
halo that is larger than the main concentration of stars. It is clear that if
the observed redshifts are due to Curvature Redshift acting within this halo,
the halo must be asymmetric; otherwise, it could not produce the asymmetric
rotation curve. Now the observed velocities in the flat part of the curves are
typically 100 to 200 km s-1. For realistic values of the densities and sizes
of the halo, the velocity is about 163 km s-1. Thus, the magnitude is
feasible. Although there could be a natural asymmetry in a particular galaxy,
the fact that the flattened rotation curve is seen for most spiral galaxies
suggests that there is a common cause for the asymmetry. One possibility is
that the asymmetry could arise from ram pressure due to the galaxy moving
through the intergalactic medium.
Although the explanation for the galactic rotation observations is limited, the
Coma cluster observations show no support for dark matter. Since Curvature
Cosmology can explain all the supernova observations, there is no support for
dark energy.
### 7.7 No Black Holes
A theory of Curvature Pressure in a very dense medium, where quantum mechanics
dominates and where general relativity may be required, is needed to develop
this model. Nevertheless it is clear that Curvature Pressure would prevent a
hot compact object from collapsing to a black hole. Because of the potential
energy released during collapse, it is extremely unlikely for a cold object to
stay cold long enough to overcome the Curvature Pressure and collapse to a
black hole.
What is expected is that the final stage of gravitational collapse is a very
dense object, larger than a black hole but smaller than a neutron star. This
compact object would have most of the characteristics of black holes. Such
objects could have large masses and be surrounded by accretion discs. Thus,
many of the observations that are thought to show the presence of a black hole
could equally show the presence of these compact objects.
If the compact object is rotating there is the tantalising idea that Curvature
Pressure may produce the emission of material in two jets along the spin axis.
This could be the "jet engine" that produces the astrophysical jets seen in
stellar-like objects and in many huge radio sources. Furthermore this could be
a mechanism to return material to the cosmic plasma. Currently there are no
accepted models for the origin of these jets.
### 7.8 Olbers' Paradox
In Curvature Cosmology Olbers' Paradox is not a problem. Visible light from
distant galaxies is shifted into the infrared where it is no longer seen and
the energy is eventually absorbed back into the cosmic plasma. Everything is
recycled. The plasma radiates energy into the microwave background radiation
and into X-rays. The galaxies develop from the cosmic plasma, stars are formed
which pass through their normal evolution. Eventually all their material is
returned to the cosmic plasma.
### 7.9 Basic equations for Curvature Cosmology
The geometry is that of a three-dimensional surface of a four-dimensional
hyper-sphere. For this geometry the radius is $r=R\chi$ where
$\chi=\ln(1+z)/\sqrt{3}.$ (17)
(NB: work prior to 2009 has $\chi=\ln(1+z)/\sqrt{2}$.)
The area is
$A(r)=4\pi R^{2}\sin^{2}(\chi).$ (18)
The surface is finite and $\chi$ can vary from 0 to $\pi$. The volume within a
redshift $z$ is given by
$V(z)=2\pi R^{3}\left[{\chi-\frac{1}{2}\sin(2\chi)}\right].$
Using the density $N=1.55\,\mbox{m}^{-3}$ the Hubble constant is predicted to be
$H=-\frac{c}{E}\frac{dE}{ds}=\left(8\pi GM_{\rm H}N\right)^{1/2}=51.69\,N^{1/2}\;\mbox{kms}^{-1}\;\mbox{Mpc}^{-1}=64.4\pm 0.2\;\mbox{kms}^{-1}\;\mbox{Mpc}^{-1}.$ (19)
The only other result required here is the equation for the distance modulus
($\mu=m-M$), which is
$\mu=5\log_{10}[(\sqrt{3}\sin(\chi))/h]+2.5\log_{10}(1+z)+42.384,$ (20)
where $h=H/(100\;\mbox{kms}^{-1}\;\mbox{Mpc}^{-1})$.
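A minimal numerical sketch of Eq. 17, the predicted Hubble constant, and the distance modulus of Eq. 20 (illustrative Python; the example redshift is arbitrary):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8          # SI units
M_H, Mpc = 1.6726e-27, 3.0857e22   # kg, m
N = 1.55                            # cosmic plasma density, m^-3

H = np.sqrt(8 * np.pi * G * M_H * N) * Mpc / 1e3   # km/s/Mpc; ~64.4
h = H / 100.0

def mu_cc(z):
    """Curvature Cosmology distance modulus, Eqs. 17 and 20."""
    chi = np.log(1 + z) / np.sqrt(3)
    return 5 * np.log10(np.sqrt(3) * np.sin(chi) / h) \
        + 2.5 * np.log10(1 + z) + 42.384

print(H)            # predicted Hubble constant
print(mu_cc(0.5))   # example distance modulus at z = 0.5
```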
### 7.10 Basic consequences of Curvature Cosmology
Since the ramifications of a static universe are quite profound, a list of the
major consequences of Curvature Cosmology is given here. All the numerical
results are derived using the cosmic plasma density $N=1.55$ H atoms m${}^{-3}$.
1. It obeys the perfect cosmological principle.
2. It is stable (Section 7.2).
3. There is no dark matter (Section 7.6).
4. There is no dark energy; the concept is meaningless.
5. There is no inflation; the concept is meaningless.
6. There is no horizon problem; the concept is meaningless.
7. The cosmic plasma has a density $N=1.55\pm 0.01\,$M${}_{H}\,$m${}^{-3}$.
8. The cosmic plasma has a temperature of $(2.64\pm 0.04)\times 10^{9}$ K.
9. The Hubble constant is $64.4\pm 0.2\;\mbox{kms}^{-1}\;\mbox{Mpc}^{-1}$.
10. It is consistent with supernovae observations.
11. It is consistent with GRB observations.
12. It is consistent with quasar luminosity observations.
13. It is consistent with galaxy luminosity observations.
14. It is consistent with Tolman surface brightness observations.
15. It is consistent with radio source counts.
16. It is consistent with quasar variability.
17. It is consistent with angular size observations.
18. It can explain the Cosmic Microwave Background Radiation (Section 7.5).
19. The CMBR has a temperature of $3.18$ K (Section 7.5).
20. It can provide a partial explanation for fluctuations in the CMBR.
21. It can provide a partial explanation for galactic rotation curves (Section 7.6).
22. It can explain the X-ray background radiation (Section 7.3).
23. It can possibly explain the cosmic nuclear abundances (Section 7.4).
24. Curvature Redshift can be investigated with laboratory measurements.
25. There are no black holes (Section 7.7).
26. Universal radius: $3.11\times 10^{26}$ m or $1.008\times 10^{10}$ pc.
27. Volume: $8.95\times 10^{80}$ m${}^{3}$ or $2.02\times 10^{31}$ pc${}^{3}$.
28. Mass: $2.54\times 10^{54}$ kg or $1.28\times 10^{23}$ $\cal M_{\bigodot}$.
Of interest is that in Curvature Cosmology (CC) distant objects will always
have a fainter absolute magnitude than in the standard model. Table 5 shows
the distance moduli for both cosmologies, their difference, and the absolute
flux density ratio for a range of redshifts.
Table 5: Relative absolute magnitudes
Redshift | $\Lambda$-CDM | CC | diff. | ratio
---|---|---|---|---
0 | 42.394 | 42.394 | 0.000 | 1.000
1.0 | 43.512 | 43.067 | 0.445 | 1.507
2.0 | 45.189 | 44.418 | 0.771 | 2.035
5.0 | 47.425 | 45.078 | 1.447 | 3.792
10.0 | 49.102 | 46.927 | 2.175 | 7.412
20.0 | 50.750 | 47.629 | 3.122 | 17.729
50.0 | 52.884 | 48.050 | 4.834 | 85.859
100.0 | 54.468 | 47.682 | 6.786 | 517.962
## 8 CONCLUSION
The first part of this paper argued that the only effect of cosmology on
supernovae light curves is to change the scaling parameters of peak flux
density and width. The shape of the light curve is intrinsic to the supernovae
and is unchanged by cosmology.
Next, it was argued that the redshift of photons is a measure of their energy
and could be caused by any systematic energy loss or by time dilation.
In Sections 4 and 5 it has been shown that there is a major problem in using
SALT2, and similar calibration methods, to remove the intrinsic wavelength
dependence of widths from type Ia supernovae light-curve observations. The
process of generating the templates means that if the observed light curves
have widths that contain the effects of time dilation, these effects are
incorporated into the template. The subsequent use of the template will remove
these time dilation effects, whether artificial or genuine, from the new
observations.
Consequently, SALT2 calibrated light curves cannot contain any cosmological
data that is in the form of a power law. It follows that previous analyses of
type Ia supernovae gave self-consistent results because of a flaw in the
standard analysis program SALT2.
The light curve widths of type Ia supernovae are consistent with no time
dilation, with an exponent of $0.083\pm 0.024$, which is completely inconsistent
with standard time dilation and implies that the universe is static.
The absolute magnitudes are consistent with Curvature Cosmology with an
exponent, $\alpha$, of $-0.050\pm 0.030$, whereas the $\Lambda$-CDM model has an
exponent of $-0.014\pm 0.030$. From the excellent agreement of the
$\Lambda$-CDM model it is apparent that the $\Lambda$-CDM distance modulus has
been modified to achieve this agreement. This has occurred because of the strong
belief that the standard time dilation and the $\Lambda$-CDM model are both
valid.
One way to partially validate these conclusions would be to redo the SALT2
analysis without the initial division of the epoch differences by $(1+z)$.
In addition the durations of Gamma Ray Bursts are completely consistent with a
static universe.
###### Acknowledgements.
This research has made use of the NASA/IPAC Extragalactic Database (NED) that
is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration. The calculations have used Ubuntu Linux and the graphics have
used the DISLIN plotting library provided by the Max-Planck-Institute in
Lindau.
## Appendix A SOURCE OF SUPERNOVAE OBSERVATIONS
All of the original type Ia supernovae observations were retrieved from SNANA
(Kessler et al., 2009) in the download package snana.tar.gz on the
website http://www.snana.uchicago.edu using the index files shown in Table 6.
Table 6: Index source files for SNANA data file
---
lcmerge/LOWZ_JRK07
lcmerge/JLA2014_CSP.LIST
lcmerge/JLA2014_CfAIII_KEPLERCAM.LIST
lcmerge/SNLS3year_JRK07.LIST
lcmerge/SDSS_allCandidates+BOSS_HEAD.FITS
lcmerge/JLA2014_SNLS.LIST
lcmerge/JLA2024_HST.LIST
lcmerge/SDSS_HOLTZ08
A current SALT2 template file for the JLA (Joint Light-curve Analysis)
analysis was taken from the SNANA website in the directory
models/SALT2/SALT2/JLA-B14.
The Pan-STARRS supernovae were accessed from the site
https://archive.stsci.edu/prepds/ps1cosmo/jones and the file datatable.html.
Basic information for all the filters used is shown in Table 7 where column 1
is the filter name, column 2 is the mean wavelength in $\mu\,$m, column 3 (N)
is the final number of supernovae with a valid light curve for this filter,
and column 4 is the HST filter name.
Table 7: Filter characteristics
Name | Wavelength/$\mu$m | N | HST
---|---|---|---
B | 0.436 | 202 |
V | 0.541 | 205 |
R | 0.619 | 130 |
I | 0.750 | 239 |
g | 0.472 | 1,933 |
r | 0.619 | 2,035 |
i | 0.750 | 2,071 |
z | 0.888 | 1,936 |
6 | 0.907 | 5 | F850LP
7 | 1.249 | 1 | F125W
## Appendix B SOURCE OF GRB OBSERVATIONS
The raw GRB data was taken from
https://swift.gsfc.nasa.gov/archive/grb_table, selecting bursts that had durations
longer than 2 seconds and valid measurements for the redshift, T90, the
fluence and the peak one-second photon flux rate. The data labelled "Zhang"
comes from Table 1 in Zhang et al. (2013).
## Appendix C Equations for $\Lambda$-CDM COSMOLOGY
The equations needed for the modified $\Lambda$-CDM model (Hogg, 1999; Goliath
et al., 2001; Barboza & Alcaniz, 2008), with $\Omega_{M}=0.27$, $\Omega_{K}=0$
and where $h$ is the reduced Hubble constant, are listed below. The symbol
$w^{*}$ is used for the acceleration parameter in order to avoid confusion
with the width $w$. These equations depend on the function $E(z)$ defined here
by
$E(z)=\int_{0}^{z}\frac{dz}{\sqrt{\Omega_{M}(1+z)^{3}+(1-\Omega_{M})(1+z)^{(1+w^{*})}}}.$
(21)
The distance modulus is
$\mu_{B}(z)=5\log_{10}(E(z)(1+z)/h)+42.384.$ (22)
The co-moving volume is
$v_{B}(z)=\frac{4\pi}{3}(2.998E(z)/h)^{3}{Gpc}^{3}.$ (23)
The equation of state parameter $w^{*}$ in the expansion model distance
modulus is included to investigate the effects of including the cosmological
constant. Conley et al. (2011) found that the parameter, $w^{*}$, has a value
$w^{*}=-0.91$, whereas Sullivan et al. (2011) found that $w^{*}=-1.069$.
Although its actual value is not critical for this paper, the value of $w^{*}$
is chosen to be $w^{*}=-1.11$, so that $E_{B}$ would be the best fiducial
constant with the values for the peak magnitudes and stretch factors provided
by B14.
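For reference, the following minimal sketch evaluates Eqs. 21-23 exactly as written above (illustrative Python; the value of $h$ used here is an assumption for the example, since the appendix does not fix it):

```python
import numpy as np
from scipy.integrate import quad

Omega_M, w_star, h = 0.27, -1.11, 0.644   # h is an assumed value for illustration

def E(z):
    """Dimensionless distance integral of Eq. 21 (as written in this appendix)."""
    integrand = lambda zp: 1.0 / np.sqrt(Omega_M * (1 + zp)**3
                                         + (1 - Omega_M) * (1 + zp)**(1 + w_star))
    return quad(integrand, 0.0, z)[0]

def mu_B(z):
    """Distance modulus of Eq. 22."""
    return 5 * np.log10(E(z) * (1 + z) / h) + 42.384

def v_B(z):
    """Co-moving volume of Eq. 23, in Gpc^3."""
    return 4 * np.pi / 3 * (2.998 * E(z) / h)**3

print(mu_B(0.5), v_B(0.5))   # example values at z = 0.5
```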
## References
* Barboza & Alcaniz (2008) Barboza E. M., Alcaniz J. S., 2008, Physics Letters B, 666, 415
* Barthelmy et al. (2005) Barthelmy S. D., et al., 2005, Space Sci. Rev., 120, 143
* Betoule et al. (2014) Betoule M., et al., 2014, A&A
* Blondin et al. (2008) Blondin S., et al., 2008, ApJ, 682, 724
* Braginskij & Panov (1971) Braginskij V. B., Panov V. I., 1971, Uspekhi Fizicheskikh Nauk, 105, 779
* Conley et al. (2011) Conley A., et al., 2011, ApJS, 192, 1
* Crawford (1987a) Crawford D. F., 1987a, Australian Journal of Physics, 40, 449
* Crawford (1987b) Crawford D. F., 1987b, Australian Journal of Physics, 40, 459
* Crawford (1991) Crawford D. F., 1991, ApJ, 377, 1
* Crawford (1993) Crawford D. F., 1993, ApJ, 410, 488
* Crawford (1995a) Crawford D. F., 1995a, ApJ, 440, 466
* Crawford (1995b) Crawford D. F., 1995b, ApJ, 441, 488
* Crawford (1999) Crawford D. F., 1999, Australian Journal of Physics, 52, 753
* Crawford (2006) Crawford D. F., 2006, Curvature Cosmology. BrownWalker Press
* Crawford (2009a) Crawford D. F., 2009a, preprint (arXiv:0901.4169)
* Crawford (2009b) Crawford D. F., 2009b, preprint (arXiv:0901.4172)
* Crawford (2010) Crawford D. F., 2010, preprint (arXiv:1009.0953)
* Crawford (2017) Crawford D. F., 2017, Open Astronomy, 26, 111
* Dicke (1964) Dicke R. H., 1964, Nature, 202, 432
* Elias et al. (1981) Elias J. H., Frogel J. A., Hackwell J. A., Persson S. E., 1981, ApJ, 251, L13
* Ellis (1984) Ellis G. F. R., 1984, Annual Review of Astronomy and Astrophysics, 22, 157
* Eötvös et al. (1922) Eötvös R. V., Pekár D., Fekete E., 1922, Annalen der Physik, 373, 11
* Foley et al. (2018) Foley R. J., et al., 2018, MNRAS, 475, 193
* Giacconi et al. (1962) Giacconi R., Gursky H., Paolini F. R., Rossi B. B., 1962, Phys. Rev. Lett., 9, 439
* Goldhaber et al. (2001) Goldhaber G., et al., 2001, ApJ, 558, 359
* Goliath et al. (2001) Goliath M., Amanullah R., Astier P., Goobar A., Pain R., 2001, A&A
* Goobar & Leibundgut (2011) Goobar A., Leibundgut B., 2011, Annual Review of Nuclear and Particle Science, 61, 251
* Guy et al. (2007) Guy J., et al., 2007, A&A
* Guy et al. (2010) Guy J., et al., 2010, A&A
* Hogg (1999) Hogg D. W., 1999, ArXiv Astrophysics e-prints (astro-ph/9905116)
* Holtzman et al. (2008) Holtzman J. A., et al., 2008, AJ, 136, 2306
* Jones et al. (2013) Jones D. O., et al., 2013, ApJ, 768, 166
* Jones et al. (2018) Jones D., et al., 2018, in American Astronomical Society Meeting Abstracts #231. p. 308.06
* Kaiser et al. (2010) Kaiser N., et al., 2010, in Ground-based and Airborne Telescopes III. p. 77330E, doi:10.1117/12.859188
* Kessler et al. (2009) Kessler R., et al., 2009, ApJS, 185, 32
* Lien et al. (2016) Lien A., et al., 2016, ApJ, 829, 7
* Mather et al. (1990) Mather J. C., et al., 1990, ApJ, 354, L37
* Meikle & Hernandez (2000) Meikle P., Hernandez M., 2000, Memorie della Societa Astronomica Italiana, 71, 299
* Misner et al. (1973) Misner C. W., Thorne K. S., Wheeler J. A., 1973, Gravitation
* Riess et al. (2007) Riess A. G., et al., 2007, ApJ, 659, 98
* Scolnic et al. (2017) Scolnic D. M., et al., 2017, preprint (arXiv:1710.00845)
* Scolnic et al. (2018) Scolnic D. M., et al., 2018, ApJ, 859, 101
* Sullivan et al. (2011) Sullivan M., et al., 2011, ApJ, 737, 102
* Tolman (1934) Tolman R. C., 1934, Relativity, Thermodynamics, and Cosmology
* Zhang et al. (2013) Zhang F.-W., Fan Y.-Z., Shao L., Wei D.-M., 2013, ApJ, 778, L11
* Zwicky (1937) Zwicky F., 1937, ApJ, 86, 217
Renormalization of the nonprojectable Hořava theory
Jorge Bellorín1,a, Claudio Bórquez2,b and Byron Droguett1,c
1Department of Physics, Universidad de Antofagasta, 1240000 Antofagasta,
Chile.
2Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián,
Lago Panguipulli 1390, Puerto Montt, Chile.
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
Abstract
> We present the proof of renormalization of the Hořava theory, in the
> nonprojectable version. We obtain a form of the quantum action that exhibits
> a manifest BRST-symmetry structure. Previous analysis have shown that the
> divergences produced by irregular loops cancel completely between them. The
> remaining divergences are local. The renormalization is achieved by using
> the approach developed by Barvinsky et al. with the background-field
> formalism.
## 1 Introduction
In this paper we present the proof of renormalization of the Hořava theory,
considering its nonprojectable version [1, 2]. This theory, whose gauge group
is given by the foliation-preserving diffeomorphisms (FDiff), is a proposal
for a quantum theory of gravitation. The theory is unitary [3], we are
presenting here its renormalization, and it might yield the classical dynamics
of general relativity at the large-distance limit.
We base the proof on three main aspects. First, the theory is quantized under
the Batalin-Fradkin-Vilkovisky (BFV) formalism, incorporating the second-class
constraints [4, 5, 6]. This formalism allows us to introduce a local gauge-
fixing condition which leads to regular propagators for most of the fields [7,
8], together with the measure of the second-class constraints [9, 10]. After
the integration on some ghost fields and the redefinition of the Becchi-Rouet-
Stora-Tyutin (BRST) symmetry transformations, we get a form of the quantum
action with a manifest BRST-symmetry structure. Second, it is known from
previous studies [11, 12] that the only divergences produced by the
integration along the frequency (called irregular loops) cancel exactly
among themselves. In the integration over the spatial momentum, the behavior of the
irregular propagators is equivalent to that of the regular ones. Hence, all
divergences are local [13, 14]. The highest superficial degree of divergence
is equal to the order of the bare Lagrangian. Third, we use the approach
developed by Barvinsky et al. [15] to undertake the renormalization of gauge
theories, which is based on the background-field formalism [16, 17].
The proof of renormalization of the projectable case, which is a version of
the Hořava theory defined by the restriction that the lapse function depends
exclusively on time, is known [7, 15]. The need for an anisotropic gauge-
fixing condition that leads to regular propagators was identified in this
case. The resulting Lagrangian is local when it is expressed in terms of
canonical variables. However, a fundamental difference in the quantization of
the projectable and nonprojectable cases is the absence of second-class
constraints in the former. In the nonprojectable case, a similar quantization
can be carried on, with the analogous gauge-fixing condition. The second-class
constraints lead to a modification of the measure of the path integral. The
measure has the effect of yielding irregular propagators on some auxiliary
fields, despite the fact that the rest of quantum fields acquire regular
propagators. Since the regular structure is important for the control of
divergences [14], a careful study of the consequences of the irregular
propagators is required. For this reason, the previously mentioned analysis
becomes essential: it shows not only that the divergences produced by the
irregular loops cancel, but also that these are the only divergences in the
frequency direction.
As with most modern approaches to the renormalization of gauge theories, the
proof relies on the BRST symmetry and the background-field formalism. The
Slavnov-Taylor and Ward identities are useful to determine the divergences of
the effective action. On the other hand, the BFV quantization is based on the
Hamiltonian formalism. Therefore, it is important to arrive at a form of the
quantum action that is separated in the usual way: a sector invariant under the
FDiff gauge symmetry, and another sector fixing the gauge symmetry by means of
the BRST operator. We find such a BRST-symmetry structure. This allows us to
apply the background-field formalism of Ref. [15].
The analysis of renormalization requires knowledge of the propagators. For the
vertices, the only feature we require is their highest order in spatial
derivatives, independently of their explicit form. For this reason, in this
study we only need to write explicitly the higher-order terms in the
Lagrangian that contribute to the propagators.
## 2 BFV Quantization of the Hořava theory
The initial assumption in the definition of the Hořava theory is the existence
of a foliation of spatial slices along a given direction of time with absolute
meaning. In the classical theory, the fields representing the gravitational
interaction are the Arnowitt-Deser-Misner (ADM) field variables
$N(t,\vec{x})$, $N^{i}(t,\vec{x})$ and $g_{ij}(t,\vec{x})$. We deal only with
the nonprojectable version, on which the lapse function $N$ can depend on time
and space. The corresponding gauge symmetry is given by the FDiff. In terms of
a given coordinate system $(t,\vec{x})$, the FDiff acts infinitesimally as
$\delta t=f(t)$ and $\delta x^{i}=-\zeta^{i}(t,\vec{x})$. In the quantum
theory we impose asymptotically-flat boundary conditions. Since $f(t)$ is
independent of the spatial point, the compatibility with the boundary
condition requires the restriction $f(t)=0$. After this restriction, the FDiff
transformations, which we denote by $\delta_{\zeta}$, are given by111The signs
of the FDiff transformations are the opposite of the standard diffeomorphisms.
$\displaystyle\delta_{\zeta}N=\zeta^{k}\partial_{k}N\,,$ (2.1)
$\displaystyle\delta_{\zeta}N^{i}=\zeta^{k}\partial_{k}N^{i}-N^{k}\partial_{k}\zeta^{i}+\dot{\zeta}^{i}\,,$
(2.2)
$\displaystyle\delta_{\zeta}g_{ij}=\zeta^{k}\partial_{k}g_{ij}+2g_{k(i}\partial_{j)}\zeta^{k}\,.$
(2.3)
It is important to compare with the spatial diffeomorphisms, since many
variables behave as spatial tensors that evolve in time, as is the case for $N$,
$g_{ij}$, and the arbitrary parameter $\zeta^{i}$ itself. For the case of a
time-dependent spatial tensor $T^{ij\cdots}$, its FDiff transformation is
functionally identical to a spatial diffeomorphism:
$\delta_{\zeta}T^{ij\cdots}=\zeta^{k}\partial_{k}T^{ij\cdots}-T^{kj\cdots}\partial_{k}\zeta^{i}-T^{ik\cdots}\partial_{k}\zeta^{j}-\cdots\,,$
(2.4)
and the analogous standard formula for the case of a time-dependent spatial
tensor density. Throughout this study, the term FDiff gauge symmetry of the
Hořava theory refers to the transformations (2.1) – (2.3), and (2.4) for the
case of time-dependent spatial tensors (with the extension for densities).
Among the ADM variables, only the shift vector $N^{i}$ has a FDiff
transformation that is functionally different from a spatial diffeomorphism.
The primary classical Hamiltonian of the nonprojectable Hořava theory is given
by [18, 19, 20]
$H_{0}=\int
d^{3}x\sqrt{g}N\left(\frac{\pi^{ij}\pi_{ij}}{g}+\bar{\sigma}\frac{\pi^{2}}{g}+\mathcal{V}\right)\,.$
(2.5)
The classical canonical conjugate pairs are $(g_{ij},\pi^{ij})$ and $N$ with
its conjugate momentum. We denote the trace $\pi\equiv g^{ij}\pi_{ij}$. The
canonical momentum of $N$ is zero due to the constraints of the theory; we
discard it from the phase space. $\mathcal{V}=\mathcal{V}[g_{ij},a_{i}]$,
where $a_{i}=\partial_{i}N/N$, is called the potential. It contains all the
terms with spatial derivatives that are compatible with the FDiff gauge
symmetry, including the higher-order ones characteristic of the Hořava theory,
which in the $(3+1)$-dimensional theory are of sixth order. For the evaluation
of the propagators we require all the sixth-order terms that contribute to the
second order in perturbations, which are [21] 222The coupling constants of the
theory are $\lambda,\alpha_{3},\alpha_{4},\beta_{3},\beta_{4}$. We use the
shorthand $\bar{\sigma}=\lambda/(1-3\lambda)$.
$\mathcal{V}=-\alpha_{3}\nabla^{2}R\nabla_{i}a^{i}-\alpha_{4}\nabla^{2}a_{i}\nabla^{2}a^{i}-\beta_{3}\nabla_{i}R_{jk}\nabla^{i}R^{jk}-\beta_{4}\nabla_{i}R\nabla^{i}R\,.$
(2.6)
The BFV quantization requires identifying the constraints that are involutive
under Dirac brackets [4, 5, 6]. In the case of the Hořava theory this is the
momentum constraint $\mathcal{H}_{i}=-2g_{ij}\nabla_{k}\pi^{kj}=0$. The
second-class constraints are given by the vanishing of the momentum conjugate
to $N$, which we have already considered as solved, and the constraint
$\begin{split}\theta_{1}\equiv\frac{N}{\sqrt{g}}\left(\pi^{ij}\pi_{ij}+\bar{\sigma}\pi^{2}\right)+\sqrt{g}N\mathcal{V}-\alpha_{3}\sqrt{g}\nabla^{2}(N\nabla^{2}R)+2\alpha_{4}\sqrt{g}\nabla^{i}\nabla^{2}(N\nabla^{2}a_{i})=0\,.\end{split}$
(2.7)
The primary Hamiltonian (2.5) is equivalent to the integral of this second-
class constraint, $H_{0}=\int d^{3}x\,\theta_{1}$. The BFV quantization adds
the canonical pair $(N^{i},\pi_{i})$ and the BFV ghost pairs
$(C^{i},\bar{\mathcal{P}}_{i})$, $(\bar{C}_{i},\mathcal{P}^{i})$. The quantum
action of the BFV path integral takes the form
$S=\int
dtd^{3}x\left(\pi^{ij}\dot{g_{ij}}+\pi_{i}\dot{N}^{i}+\bar{\mathcal{P}}_{i}\dot{C}^{i}+\mathcal{P}^{i}\dot{\bar{C}}_{i}-\mathcal{H}_{\Psi}+\mathcal{A}\theta_{1}-\bar{\eta}\frac{\delta\theta_{1}}{\delta
N}\eta\right)\,.$ (2.8)
The integration includes the auxiliary fields $\mathcal{A},\eta,\bar{\eta}$,
where $\mathcal{A}$ is a bosonic scalar and $\eta,\bar{\eta}$ is a pair of
scalar ghosts. The last two terms of the action (2.8) containing these
auxiliary fields are the contribution of the measure of the second-class
constraints (details can be found in Ref. [3]). The gauge-fixed Hamiltonian
density is defined by
$\mathcal{H}_{\Psi}=\mathcal{H}_{0}+\\{\Psi,\Omega\\}_{\text{D}}$, where
$\Omega$ is the generator of the BRST symmetry, given by
$\Omega=\int
d^{3}x\left(\mathcal{H}_{k}C^{k}+\pi_{k}\mathcal{P}^{k}-C^{k}\partial_{k}C^{l}\bar{\mathcal{P}}_{l}\right)\,.$
(2.9)
$\Psi$ is a gauge fermion and $\\{\,,\\}_{\text{D}}$ indicates Dirac brackets.
For the gauge fixing we may use the original BFV structure of the gauge
fermion [8, 22]. The gauge-fixing condition has the general form
$\dot{N}^{i}-\chi^{i}=0$, and its associated gauge fermion is
$\Psi=\bar{\mathcal{P}}_{i}N^{i}+\bar{C}_{i}\chi^{i}$, where $\chi^{i}$ is a
factor that must be chosen. To write $\chi^{i}$ explicitly, we introduce
perturbative variables around a flat background. The perturbation of the
metric tensor is denoted by $g_{ij}=\delta_{ij}+h_{ij}$. We choose $\chi^{i}$
to be the local expression333The coefficients in (2.10) are chosen to simplify
the resulting propagators (see Refs. [7, 8]).
$\chi^{i}=\rho\mathfrak{D}^{ij}\pi_{j}-2\rho\Delta^{2}\partial_{j}h_{ij}+2\rho\lambda(1+\kappa)\Delta^{2}\partial_{i}h-2\kappa\rho\Delta\partial_{i}\partial_{j}\partial_{k}h_{jk}\,,$
(2.10)
where
$\mathfrak{D}^{ij}=\delta_{ij}\Delta^{2}+\kappa\Delta\partial_{i}\partial_{j}$,
$\Delta=\partial_{k}\partial_{k}$ and $\rho,\kappa$ are independent constants.
## 3 The BRST-symmetry structure
In the BFV formalism, the BRST symmetry transformations on the canonical
fields are generated by $\Omega$, according to the rule
$\delta_{\Omega}\Phi=\\{\Phi\,,\Omega\\}_{\text{D}}\,\epsilon$, where
$\epsilon$ is the fermionic parameter of the transformation. The auxiliary
fields $\mathcal{A},\eta,\bar{\eta}$ are not canonical. We define their BRST
transformation in such a way that the measure is left invariant. The required
transformations are FDiff along $C^{i}\epsilon$:
$\delta_{\Omega}\mathcal{A}=\delta_{C\epsilon}\mathcal{A}$,
$\delta_{\Omega}\eta=\delta_{C\epsilon}\eta$, and
$\delta_{\Omega}\bar{\eta}=\delta_{C\epsilon}\bar{\eta}$.
On the action (2.8) we perform the integration on the ghost fields
$\mathcal{P}^{i},\bar{\mathcal{P}}_{i}$. The resulting action can be grouped
in two sectors, $S=S_{0}[\varphi^{a}]+S_{\Omega}$, where
$\displaystyle S_{0}[\varphi^{a}]=\int
dtd^{3}x\left(\pi^{ij}\dot{g_{ij}}-\mathcal{H}_{0}-N^{i}\mathcal{H}_{i}+\mathcal{A}\theta_{1}-\bar{\eta}\frac{\delta\theta_{1}}{\delta
N}\eta\right)\,,$ (3.1) $\displaystyle S_{\Omega}=\int
dtd^{3}x\left[\pi_{i}\left(\dot{N}^{i}-\chi^{i}\right)-\dot{\bar{C}}_{i}\left(\dot{C}^{i}+C^{j}\partial_{j}N^{i}-N^{j}\partial_{j}C^{i}\right)-\bar{C}_{i}\\{\chi^{i},\mathcal{H}_{j}\\}C^{j}\right]\,.$
(3.2)
$S_{0}[\varphi^{a}]$ depends exclusively on the set of fields
$\varphi^{a}=\\{g_{ij},\pi^{ij},N,N^{i},\mathcal{A},\eta,\bar{\eta}\\}$. At
this point it is useful to clarify that all the fields of the quantum theory
transform as time-dependent spatial tensors/densities under FDiff
transformations, except for $N^{i}$. Moreover, $S_{0}$ is invariant under
arbitrary FDiff gauge transformations. We remark that, for a FDiff
transformation with a time-dependent vector parameter $\zeta^{i}$, the first
and third terms of (3.1) combine to cancel the time derivatives of
$\zeta^{i}$,
$\delta_{\zeta}\int\left(\pi^{ij}\dot{g_{ij}}-N^{i}\mathcal{H}_{i}\right)=0$,
as is well known from the ADM formulation of general relativity. The remaining
terms in (3.1) contain no time derivatives and are independent of $N^{i}$.
Their invariance under FDiff is automatic since they are written entirely in
terms of spatial tensors/densities. In contrast, $S_{\Omega}$ is the gauge-
fixing sector of this symmetry.
As a consequence of the previous integration, the BRST symmetry must be
revised. Specifically, the transformation of $N^{i}$ is affected since the BFV
rule yields $\delta_{\Omega}N^{i}=\mathcal{P}^{i}\epsilon$. The second term in
(3.2) is the key to identifying the new transformation, since it has the form of a
FDiff (2.2) along $C^{i}$. Therefore, we define the new BRST transformation of
$N^{i}$ to be the FDiff
$\delta_{\Omega}N^{i}=\delta_{C\epsilon}N^{i}=\left(C^{j}\partial_{j}N^{i}-N^{j}\partial_{j}C^{i}+\dot{C}^{i}\right)\epsilon$.
This transformation is nilpotent. After the (re)definitions we have made, it
turns out that the BRST transformation of all the $\varphi^{a}$ fields
corresponds to a FDiff along $C^{i}\epsilon$. Therefore, the BRST invariance
of $S_{0}[\varphi^{a}]$ is automatic.
The quantum action (3.1) – (3.2) can be written in the standard notation of
the BRST symmetry. We denote by $\boldsymbol{s}$ the BRST operator. The action
of $\boldsymbol{s}$ on the $\varphi^{a}$ fields is a FDiff transformation with
vector parameter equal to $C^{i}$. $\bar{C}_{i}$ and $\pi_{i}$ are the usual
auxiliary fields of the BRST symmetry. The action of the BRST operator is
$\boldsymbol{s}\varphi^{a}=\delta_{C}\varphi^{a}\,,\quad\boldsymbol{s}C^{i}=-C^{j}\partial_{j}C^{i}\,,\quad\boldsymbol{s}\bar{C}_{i}=\pi_{i}\,,\quad\boldsymbol{s}\pi_{i}=0\,.$
(3.3)
The sector $S_{\Omega}$ (3.2) is equal to the action of the BRST operator on a
gauge fermion, $S_{\Omega}=\int\boldsymbol{s}\tilde{\Psi}$, where
$\tilde{\Psi}=\bar{C}_{i}\left(\dot{N}^{i}-\chi^{i}\right)\,,$ (3.4)
and $\chi^{i}$ is given in (2.10). Therefore, the quantum action (3.1) – (3.2)
has the BRST-invariant form
$S=S_{0}[\varphi^{a}]+\int dtd^{3}x\boldsymbol{s}\tilde{\Psi}\,.$ (3.5)
We highlight that the whole Lagrangian is completely local.
## 4 Propagators and locality of divergences
The propagators can be calculated from the action (3.1) – (3.2), expanded at
second order in perturbations. We obtain two classes of propagators: the
regular and the irregular propagators. The regularity condition is appropriate
for the study of ultraviolet divergences in Lorentz-violating theories
[14].444Throughout this study we assume that infrared divergences have been
regularized. The gauge condition (2.10) is intended to get regular propagators
for the quantum fields [7, 8]. Nevertheless, the persistence of irregular
propagators in the nonprojectable theory demands a careful study of the
divergences [11, 12]. Specifically, the irregular propagators are associated
with the fields $\mathcal{A},\eta,\bar{\eta}$. We remark that this effect is a
consequence of the measure of the second-class constraints.
The most important feature of the regular propagators of this theory is that
they are given in terms of products of the four factors (in Fourier space
$(\omega,\vec{k})$, after a Wick rotation):
$\mathcal{T}_{A}=\frac{1}{\omega^{2}+\sigma_{A}k^{6}}\,,\quad A=1,2,3,4\,,$
(4.1)
where $\sigma_{A}$ are combinations of the coupling constants, that must
satisfy the condition $\sigma_{A}>0$ in order to maintain the definition of
regular propagator. There are other polynomials in the numerators of the
regular propagators; their growth in the ultraviolet is controlled by the
factors $\mathcal{T}_{A}$. On the other hand, the three irregular propagators
are given by
$\langle\mathcal{A}\mathcal{A}\rangle=\langle\mathcal{A}n\rangle=\langle\bar{\eta}\eta\rangle=-\frac{1}{\alpha_{4}k^{6}}\,.$
(4.2)
These propagators are independent of $\omega$, violating the condition of
regularity in the time direction. But, as long as $\alpha_{4}\neq 0$, they
maintain a strict regular dependence on the spatial momentum $\vec{k}$.
In the action (3.1) – (3.2), time derivatives arise only in terms that are
of second order in perturbations. As a consequence, vertices do not depend on
the frequency $\omega$. Hence, for the integration on $\omega$ we only need to
consider propagators. We call a loop formed entirely of the irregular
propagators (4.2) an irregular loop. Since these propagators do not depend on
$\omega$, an irregular loop produces a divergence of the kind $\sim\int
d\omega$. Such a divergence multiplies any diagram containing (at least) one
irregular loop. In previous analyses [11, 12], we have shown that all the
diagrams with irregular loops cancel completely among themselves. Moreover, this
is the only divergence produced by the integration on $\omega$, due to the
fact that the regular propagators automatically render the integration on
$\omega$ finite. Consider first a loop composed entirely of regular
propagators. The regular propagators with the lowest scaling in $\omega^{-1}$
are of order $\sim\omega^{-1}$. If the loop consists only of one propagator of
this kind, then the integral is zero since these propagators are odd in
$\omega$. The next order is a product of two propagators of this kind, or a
single propagator with scaling $\sim\omega^{-2}$. In both cases the integral
in $\omega$ is finite. By increasing the number of regular propagators, the
convergence in the integration on $\omega$ becomes faster. Now consider the
presence of one or more irregular propagators (4.2) in the loop, but not all
since we know that irregular loops cancel completely. Since the irregular
propagators are independent of $\omega$, the analysis of the integration on
$\omega$ is identical to the previous case of a loop made exclusively of
regular propagators.
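As a simple illustration of this point (our own check with stand-in symbols, not part of the paper's argument), the following SymPy snippet confirms that the frequency integral of one or two regular factors of the form (4.1) is finite, whereas an $\omega$-independent irregular propagator (4.2) would contribute the divergent factor $\sim\int d\omega$ discussed above.

```python
# Illustrative check (assumed toy symbols): frequency integrals of regular
# propagators T_A = 1/(omega^2 + sigma_A k^6) are finite.
import sympy as sp

w = sp.symbols('omega', real=True)
m1, m2 = sp.symbols('m_1 m_2', positive=True)   # stand-ins for sigma_A * k^6

T1, T2 = 1 / (w**2 + m1), 1 / (w**2 + m2)

# One regular propagator (lowest scaling ~ omega^{-2}): finite integral.
print(sp.integrate(T1, (w, -sp.oo, sp.oo)))      # -> pi/sqrt(m_1)

# A loop with two regular propagators: also finite, and more convergent.
loop = sp.apart(T1 * T2, w)                      # partial fractions in omega
print(sp.simplify(sp.integrate(loop, (w, -sp.oo, sp.oo))))

# An irregular propagator is omega-independent, so a loop built only from
# such propagators would instead give the divergent factor  int domega.
```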
In the integration on spatial momentum $k^{i}$, all the propagators have a
regular structure on this variable, including the ones of the fields
$\mathcal{A},\eta,\bar{\eta}$. According to the analysis of Lorentz-violating
theories [13, 14], the locality of divergences produced by the integration on
$k^{i}$ is ensured.
On the basis of the scaling of propagators and the maximal number of spatial
derivatives in the vertices, we may compute the superficial degree of divergence
$D_{\text{div}}$. The diagrams with the highest divergence (those without
external legs for $N^{i}$ and $\pi^{ij}$, and no spatial derivatives on the
external legs) have $D_{\text{div}}=6$. This order is equal to the order of
the bare Lagrangian, in agreement with the power-counting criterion used in
the formulation of the classical theory.
## 5 The background fields and the renormalization
The aim of introducing background fields is to get a background-gauge symmetry
in the gauge-fixed quantum action. This symmetry transforms simultaneously the
quantum fields $\varphi^{a}$ and the background fields $\phi^{a}$ in the form
of the original FDiff gauge transformations (2.1) – (2.4), with the same
parameter for both classes of fields. Specifically, one must handle the
subset of fields $\varphi^{a}$ involved in the gauge-fixing condition, in
terms of the linear combination $\varphi^{a}-\phi^{a}$. In our case, we
require to introduce background fields only for $g_{ij}$ and $N^{i}$, which we
denote by $\bar{g}_{ij}$ and $\bar{N}^{i}$ respectively (hence
$\phi^{a}=\\{\bar{g}_{ij},\bar{N}^{i}\\}$). We use a notation for the
difference of fields: $h_{ij}=g_{ij}-\bar{g}_{ij}$ and
$n^{i}=N^{i}-\bar{N}^{i}$.555This $h_{ij}$ is not to be confused with the variable of
Section 2. Due to the linearity of the gauge transformations in the parameter
and the fields, $h_{ij}$ and $n^{i}$ transform exactly as time-dependent
spatial tensors under background-gauge transformations.
The gauge fermion (3.4) is replaced by a background-dependent one,
$\begin{split}\Psi_{\text{bg}}=\,&\bar{C}_{i}\left(D_{t}n^{i}-\rho\Theta^{ijk}h_{jk}-\rho\mathcal{D}^{ij}(\pi_{j}/\sqrt{\bar{g}})\right)-\mathbb{T}^{ij}h_{ij}-\mathbb{K}_{ij}\pi^{ij}-\mathbb{T}_{i}n^{i}\\\\[4.30554pt]
&-\mathbb{T}N-\mathbb{S}\mathcal{A}-\bar{\mathbb{N}}\eta-\bar{\eta}\mathbb{N}+\bar{J}_{i}C^{i}\,,\end{split}$
(5.1)
where666In the definition of the operator $\Theta^{ijk}$ we have chosen the
simplest combination of fifth-order covariant derivatives that reproduces the
flat case (2.10). Unitarity requires the operator $\mathcal{D}^{ij}$ be
invertible.
$\displaystyle
D_{t}n^{i}=\dot{n}^{i}-\bar{N}^{k}\bar{\nabla}_{k}n^{i}+n^{k}\bar{\nabla}_{k}\bar{N}^{i}\,,$
(5.2)
$\displaystyle\Theta^{ijk}=-2\bar{g}^{ij}\bar{\nabla}^{4}\bar{\nabla}^{k}+2\lambda(1+\kappa)\bar{g}^{jk}\bar{\nabla}^{4}\bar{\nabla}^{i}-2\kappa\bar{\nabla}^{2}\bar{\nabla}^{i}\bar{\nabla}^{j}\bar{\nabla}^{k}\,,$
(5.3)
$\displaystyle\mathcal{D}^{ij}=\bar{g}^{ij}\bar{\nabla}^{4}+\kappa\bar{\nabla}^{2}\bar{\nabla}^{i}\bar{\nabla}^{j}\,.$
(5.4)
All indices are raised and lowered with the background metric $\bar{g}_{ij}$,
and $\bar{\nabla}$ is its covariant derivative. In Eq. (5.1) we have inserted
external sources for the BRST transformations of the $(\varphi^{a}-\phi^{a})$
fields. We denote these sources collectively by
$\gamma_{a}=\\{\mathbb{T}^{ij},\mathbb{K}_{ij},\mathbb{T}_{i},\mathbb{T},\mathbb{S},\bar{\mathbb{N}},\mathbb{N}\\}$,
whereas $\bar{J}_{i}$ is the source for $\boldsymbol{s}C^{i}$. All these
sources transform as time-dependent spatial tensors/densities under FDiff.
$D_{t}n^{i}$ transforms as a spatial vector under background-gauge
transformations. The operators $\Theta^{ijk}$ and $\mathcal{D}^{ij}$ are made
completely of spatial covariant derivatives; hence $\Psi_{\text{bg}}$ is
invariant under background-gauge transformations.
To write the action in the background-field approach, one introduces the
operator
$\boldsymbol{Q}=\boldsymbol{s}+\Omega^{a}\frac{\delta}{\delta\phi^{a}}$, where
$\Omega^{a}=\\{\Omega_{ij},\Omega^{i}\\}$ are external Grassmann fields.
$\boldsymbol{Q}$ is a nilpotent operator. The quantum gauge-fixed action in
the presence of background fields takes the form
$\Sigma_{0}=S_{0}[\varphi^{a}]+\int dtd^{3}x\boldsymbol{Q}\Psi_{\text{bg}}\,.$
(5.5)
By following standard procedures, we may compute the identities on the
effective action $\Gamma$ due to the underlying gauge symmetry. These are the
Slavnov-Taylor identity, the Ward identity for the background-gauge symmetry,
and the field equation for the ghost field $\bar{C}_{i}$. $\Gamma$ is defined
in the standard way by means of a Legendre transformation on the generating
functional of connected diagrams $W$.
We collect the several results we have found here and in previous analyses:
the BRST-invariant form (3.5) of the quantum action, together with its
background-field extension (5.5); the completely local form of the gauge-fixed
Lagrangian; the regularity of all the propagators that do not involve the
fields $\mathcal{A},\eta,\bar{\eta}$; the cancellation of the irregular loops
formed by the propagators of these fields; the absence of divergences along
the $\omega$ direction, regardless of the presence of irregular propagators in
diagrams; the regular structure of all the propagators with respect to the
dependence on $k^{i}$, leading to the locality of the divergences; and the fact that the
superficial degree of divergence of every diagram is not greater than the
order of the bare Lagrangian. On the basis of these results, the
renormalization of the theory is achieved by following the procedure developed
in Ref. [15].
Using an inductive approach in the loop order, Ref. [15] finds a
field redefinition that brings the action at $L$-th order in loops to the
BRST-invariant form
$\Sigma_{L}=S_{L}[\varphi^{a}]+\int dtd^{3}x\,\boldsymbol{Q}\Psi_{L}\,,$ (5.6)
where $S_{L}[\varphi^{a}]$ is a FDiff gauge invariant local functional. The
gauge fermion is invariant under background-gauge transformations and has the
form
$\Psi_{L}=\bar{C}_{i}\left(D_{t}n^{i}-\rho\Theta^{ijk}h_{jk}-\mathcal{D}^{ij}\pi_{j}\right)-\gamma_{a}(\varphi^{a}-\phi^{a})+\bar{J}_{i}C^{i}+\mathcal{O}(\hbar^{L+1})\,.$
(5.7)
In the generating functional $W$, the fields that couple to the external
sources, denoted by $\tilde{\varphi}^{a}$ and $\tilde{C}^{i}$, are given by
the gauge fermion in the form
$\tilde{\varphi}^{a}_{L}=\phi^{a}-\frac{\delta\Psi_{L}}{\delta\gamma_{a}}$ and
$\tilde{C}^{i}_{L}=\frac{\delta\Psi_{L}}{\delta\bar{J}_{i}}$. This functional
relationship is preserved by the field redefinition at the $L$-th order.
## References
* [1] P. Hořava, Quantum Gravity at a Lifshitz Point, Phys. Rev. D 79 084008 (2009) [arXiv:0901.3775 [hep-th]].
* [2] D. Blas, O. Pujolas and S. Sibiryakov, Consistent Extension of Hořava Gravity, Phys. Rev. Lett. 104 181302 (2010) [arXiv:0909.3525 [hep-th]].
* [3] J. Bellorín, C. Bórquez and B. Droguett, BRST symmetry and unitarity of the Hořava theory, Phys. Rev. D 107 044059 (2023) [arXiv:2212.14079 [hep-th]].
* [4] E. S. Fradkin and G. A. Vilkovisky, Quantization of relativistic systems with constraints, Phys. Lett. B 55 224 (1975).
* [5] I. A. Batalin and G. A. Vilkovisky, Relativistic S Matrix of Dynamical Systems with Boson and Fermion Constraints, Phys. Lett. B 69 309 (1977).
* [6] E. S. Fradkin and T. E. Fradkina, Quantization of Relativistic Systems with Boson and Fermion First and Second Class Constraints, Phys. Lett. B 72 343 (1978).
* [7] A. O. Barvinsky, D. Blas, M. Herrero-Valea, S. M. Sibiryakov and C. F. Steinwachs, Renormalization of Hořava gravity, Phys. Rev. D 93 064022 (2016) [arXiv:1512.02250 [hep-th]].
* [8] J. Bellorín, C. Bórquez and B. Droguett, Quantum Lagrangian of the Hořava theory and its nonlocalities, Phys. Rev. D 105 024065 (2022) [arXiv:2112.10295 [hep-th]].
* [9] P. Senjanovic, Path Integral Quantization of Field Theories with Second Class Constraints, Annals Phys. 100 227 (1976) [erratum: Annals Phys. 209 248 (1991)].
* [10] E. S. Fradkin, Acta Universitatis Wratislaviensis No. 207, in Proceedings of X-th Winter School of Theoretical Physics in Karpacz (Wydawnictwo Uniwersytetu Wroclawskiego Sp., Poland, 1973).
* [11] J. Bellorín, C. Bórquez and B. Droguett, Cancellation of divergences in the nonprojectable Hořava theory, Phys. Rev. D 106 044055 (2022) [arXiv:2207.08938 [hep-th]].
* [12] J. Bellorín, C. Bórquez and B. Droguett, Effective action of the Hořava theory: Cancellation of divergences, Phys. Rev. D 109 084007 (2024) [arXiv:2312.16327 [hep-th]].
* [13] D. Anselmi and M. Halat, Renormalization of Lorentz violating theories, Phys. Rev. D 76 125011 (2007) [arXiv:0707.2480 [hep-th]].
* [14] D. Anselmi, Weighted power counting and Lorentz violating gauge theories. I. General properties, Annals Phys. 324 874 (2009) [arXiv:0808.3470 [hep-th]].
* [15] A. O. Barvinsky, D. Blas, M. Herrero-Valea, S. M. Sibiryakov and C. F. Steinwachs, Renormalization of gauge theories in the background-field approach, JHEP 07 035 (2018) [arXiv:1705.03480 [hep-th]].
* [16] B. S. DeWitt, Quantum Theory of Gravity. 2. The Manifestly Covariant Theory, Phys. Rev. 162 1195 (1967); Quantum Theory of Gravity. 3. Applications of the Covariant Theory, Phys. Rev. 162 1239 (1967).
* [17] L. F. Abbott, Introduction to the Background Field Method, Acta Phys. Polon. B 13 33 (1982).
* [18] J. Kluson, Note About Hamiltonian Formalism of Healthy Extended Hořava-Lifshitz Gravity, JHEP 07 038 (2010) [arXiv:1004.3428 [hep-th]].
* [19] W. Donnelly and T. Jacobson, Hamiltonian structure of Hořava gravity, Phys. Rev. D 84 104019 (2011) [arXiv:1106.2131 [hep-th]].
* [20] J. Bellorín and A. Restuccia, Consistency of the Hamiltonian formulation of the lowest-order effective action of the complete Hořava theory, Phys. Rev. D 84 104037 (2011) [arXiv:1106.5766 [hep-th]].
* [21] M. Colombo, A. E. Gumrukcuoglu and T. P. Sotiriou, Hořava gravity with mixed derivative terms, Phys. Rev. D 91 044021 (2015) [arXiv:1410.6360 [hep-th]].
* [22] J. Bellorín and B. Droguett, BFV quantization of the nonprojectable (2+1)-dimensional Hořava theory, Phys. Rev. D 103 064039 (2021) [arXiv:2102.04595 [hep-th]].
# Policies for elementary link generation in quantum networks
Sumeet Khatri Hearne Institute for Theoretical Physics, Department of Physics
and Astronomy, and Center for Computation and Technology, Louisiana State
University, Baton Rouge, Louisiana, 70803, USA
###### Abstract
Protocols in a quantum network involve multiple parties performing actions on
their quantum systems in a carefully orchestrated manner over time in order to
accomplish a given task. This sequence of actions over time is often referred
to as a strategy, or policy. In this work, we consider policy optimization in
a quantum network. Specifically, as a first step towards developing full-
fledged quantum network protocols, we consider policies for generating
elementary links in a quantum network. We start by casting elementary link
generation as a quantum partially observable Markov decision process, as
defined in [Phys. Rev. A 90, 032311 (2014)]. Then, we analyze in detail the
commonly used memory cutoff policy. Under this policy, once an elementary link
is established it is kept in quantum memory for some amount $t^{\star}$ of
time, called the cutoff, before it is discarded and the elementary link
generation is reattempted. For this policy, we determine the average quantum
state of the elementary link as a function of time for an arbitrary number of
nodes in the link, as well as the average fidelity of the link as a function
of time for any noise model for the quantum memories. Finally, we show how
optimal policies can be obtained in the finite-horizon setting using dynamic
programming. By casting elementary link generation as a quantum decision
process, this work goes beyond the analytical results derived here by
providing the theoretical framework for performing reinforcement learning of
practical quantum network protocols.
###### Table of Contents
1. Introduction
   1.1 Summary of results
   1.2 Relation to prior work
2. Elementary link generation
   2.1 Formulation as a quantum decision process
   2.2 Link quantities
3. The memory cutoff policy for elementary link generation
   3.1 Calculation of link quantities
   3.2 Waiting time
   3.3 Multiple parallel links
   3.4 Total number of active links
   3.5 Collective link status
4. Finite-horizon policy optimization
5. Summary and outlook
## 1 Introduction
A quantum network is a collection of nodes, each equipped with quantum
information processing capabilities, that are connected to each other by
quantum channels. The nodes in such a network can, in principle, perform tasks
such as quantum teleportation [1, 2], quantum key distribution [3, 4, 5, 6],
quantum clock synchronization [7, 8, 9], distributed quantum computation [10],
and distributed quantum metrology and sensing [11, 12, 13, 14, 15, 16]. The
future quantum internet [17, 18, 19, 20, 21] will be an interconnected network
of such quantum networks, much like today’s internet, that will enable these
applications to be performed on a global scale.
Figure 1: Representation of a quantum network as a hypergraph. The nodes
represent the senders, receivers, or repeaters depending on the situation.
Edges represent entangled states shared by the corresponding nodes. Edges
between two nodes (shown in red) represent bipartite entanglement, while
hyperedges (consisting of more than two nodes and indicated by a blue bubble)
represent multipartite entanglement. Nodes can be connected by multiple edges,
indicating that they can share multiple entangled states simultaneously.
As shown in Figure 1, a quantum network can be modeled as a graph. The nodes
of the graph represent the senders/receivers in the network, and the edges
represent elementary links, which in this work we take to be an entangled
state shared by the corresponding nodes. The edges can be between two nodes
only, as indicated by the red lines, or they can be hyperedges connecting
three or more nodes, as indicated by the blue bubbles. Groups of nodes can be
connected by more than one edge, and in this case the graph is called a
multigraph. Multiple edges between nodes are shown explicitly in Figure 1 for
two-node edges, although we can also have multiple hyperedges between a set of
adjacent nodes. Each of these edges is regarded as a distinct edge in the
graph.
In general, the goal in a quantum network is to transmit quantum information
between a collection of distant nodes, i.e., nodes that are not connected to
each other by a single elementary link. In this setting, any node in the
network that is neither a sender nor a receiver can function as a so-called
quantum repeater. A quantum repeater can be thought of as a helper node whose
task is to mitigate the effects of loss and noise along a path connecting a
sender and a receiver, thereby making the quantum information transmission
more reliable. Quantum repeaters are needed because directly transmitting
quantum information from a sender to a receiver is often too lossy and noisy
to be useful for the applications mentioned above. In fact, the loss in an
optical fiber, a commonly used medium for quantum information transmission,
increases exponentially with distance [22, 23], limiting direct transmission
distances to roughly hundreds of kilometers. The original quantum repeater
proposal in [24, 25] consists of placing quantum repeaters at intermediate
points along a straight line connecting the sender and receiver. The protocol
to generate sender-receiver entanglement then consists of first generating
bipartite entanglement along the elementary links between the repeaters. The
repeaters then perform entanglement distillation [26, 27, 28] and entanglement
swapping [1, 29] to iteratively extend the entanglement range to the desired
distance.
A vast body of literature exists on a variety of quantum repeater schemes [24,
25, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]. (See also [60, 61, 62] and
the references therein.) Considering a quantum network such as the one in
Figure 1, as opposed to just one line between a sender and a receiver, is a
much more complicated setting that leads to questions about, e.g., routing
[63, 64, 65, 66, 67, 68, 69, 70, 71, 72] and multicast communication
(communication between several senders and receivers simultaneously).
Consequently, protocols in a general quantum network can be much more varied
than protocols along a linear chain of nodes. General quantum network
protocols have been described in [73, 74, 75, 76, 69, 70, 77]. Linear
programs, and other techniques for obtaining optimal entanglement distribution
rates in a quantum network, have been explored in [78, 79, 80, 81, 77].
Figure 2: Elementary link generation in a quantum network as a quantum
partially observable Markov decision process (see Section 2.1 and Definition
2.1 for details). As shown in the case of a bipartite link, the agent consists
of the nodes belonging to the elementary link, and the environment is the
quantum systems distributed to the nodes from the entanglement source.
In this work, we view entanglement distribution protocols in quantum networks
through the lens of decision processes [82], which form the theoretical
foundation for reinforcement learning [83] and artificial intelligence [84].
In a decision process, an agent interacts with its environment through a
sequence of actions, and it receives rewards from the environment based on
these actions. The goal of the agent is to perform actions that maximize its
expected total reward. We consider a particular quantum generalization of a
decision process given in [85] (see also [86]), called a quantum partially
observable Markov decision process, in which the agent is classical and the
environment is quantum. The agent’s action at each time step results in a
transformation of the quantum state of the environment, and the agent receives
both partial (classical) information about the new quantum state of the
environment along with a reward. Such decision processes have been considered
previously in the context of quantum control [87, 88, 89, 90, 91], quantum-
enhanced parameter estimation [92, 93, 94, 95, 96], and quantum error
correction [89, 97, 98]. We now apply this concept to quantum networks.
Specifically, we consider elementary link generation, which is the first step
towards obtaining long-range entanglement distribution in quantum networks,
and we show that elementary link generation can be cast as a quantum partially
observable Markov decision process; see Figure 2. The advantage of viewing
elementary link generation from the point of view of decision processes is
that we are able to systematically study different policies and determine
which policy is optimal in terms of both the fidelity of the link and the
probability that the link is active at any given time. We can also keep track
of the quantum state of the link over time, which is useful for calculating
entanglement measures and determining rates for entanglement distillation.
Furthermore, because decision processes form the theoretical foundation for
reinforcement learning, our work provides the tools needed in order to perform
reinforcement learning in quantum networks.
### 1.1 Summary of results
The following is a summary of the main results of this work.
1. Our first result, in Section 2.1, is conceptual in nature and is captured by
Figure 2 and detailed in Definition 2.1. In Definition 2.1, we formally cast
elementary link generation as a quantum partially observable Markov decision
process (of the type considered in [85]) by establishing in the context of
elementary link generation all of the elements that define a quantum partially
observable Markov decision process. In this framework, at each time step, the
agent (which is all of the nodes in the elementary link as a collective
entity) either requests entanglement from a source station (which is the
environment), or keeps the entangled state currently stored in memory. The
agent’s choice of action can depend on, e.g., the quality of the initial
entanglement and coherence times of the quantum memories. We formally define a
policy for elementary link generation, and we describe mathematically how the
quantum state of the environment (i.e., the entangled state of the elementary
link) transforms based on the actions taken by the agent. In Section 2.2, we
define various quantities of interest relating to an elementary link.
2. With elementary link generation cast within the framework of a decision
process, we proceed to provide a closed-form expression for the average
quantum state of any elementary link in a network at any time (Theorem 2.1 and
Corollary 2.1), as well as the fidelity of the link at any time (Theorem 2.2).
These results hold for any policy and for any noise model for the quantum
memories.
3. In Section 3, we consider in detail the so-called memory cutoff policy. In
this policy, an elementary link, once established, is kept in quantum memories
at the nodes for some amount $t^{\star}$ of time, called the cutoff, before it
is discarded and the link is reattempted (a toy simulation of this policy is sketched just after this list). The memory cutoff policy has been
considered in prior work [31, 32, 33, 34, 99, 100, 101, 102, 103, 104, 105],
and it is a natural policy to consider for near-term protocols, in which
quantum memories have relatively short coherence times, and there is limited
capability to perform entanglement distillation. For this policy, given any
number of nodes in the elementary link and any noise model for the quantum
memories, our main results are expressions for the link activity probability
in the finite- and infinite-horizon settings111By definition, the finite-
horizon setting corresponds to a given, finite interaction time between the
agent and the environment. In the infinite-horizon setting, the interaction
proceeds indefinitely. (Proposition 3.2 and Theorem 3.2, respectively), which
immediately lead to expressions for the average quantum state of the link as a
function of time and for the average fidelity of the link as a function of
time, again in both the finite- and infinite-horizon settings. We also derive
in Section 3.1 formulas for the other elementary link quantities defined in
Section 2.2, and in Section 3.2 we derive formulas for the expected waiting
time for an elementary link.
4. In Section 4, we show how to obtain an optimal policy using the techniques of
dynamic programming in the finite-horizon setting, i.e., in the case that the
termination time of the elementary link generation procedure is fixed at the
outset and is finite. The main result of this section is Theorem 4.1, in which
we prove that the optimal policy can be obtained using a backward recursion
procedure such that the optimal action at each time is deterministic.
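The following toy Monte Carlo sketch (our own illustration with assumed parameters, referenced in item 3 above; it is not the analytical treatment of Section 3) estimates the probability that an elementary link is active at a given time under the memory cutoff policy: each attempt succeeds with probability $p$, and a successful link is held for $t^{\star}$ time steps before being discarded and reattempted.

```python
# Toy Monte Carlo sketch of the memory cutoff policy (assumed parameters).
import random

p, t_star, T, trials = 0.3, 4, 50, 20000   # success prob., cutoff, horizon, samples

def link_active_at_horizon():
    """Return 1 if the elementary link is active at time T, else 0."""
    age = None                      # None = no link; otherwise steps held in memory
    for _ in range(T):
        if age is None:             # action: request a new entangled link
            if random.random() < p:
                age = 0             # heralding succeeded; link becomes active
        else:                       # action: keep the link stored in memory
            age += 1
            if age >= t_star:       # cutoff reached: discard and start over
                age = None
    return 0 if age is None else 1

activity = sum(link_active_at_horizon() for _ in range(trials)) / trials
print(f"Estimated link activity probability at t = {T}: {activity:.3f}")
```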
We expect the results derived in this work to be useful as a building block
for large-scale quantum network protocols. For example, the policies for
elementary link generation obtained through the results of this work can be
used as an underlying policy layer over which routing policies can be applied
in order to obtain an overall policy for generating end-to-end entanglement in
a network. We elaborate on this and on further avenues for development of the
mathematical tools established in this work in Section 5. Furthermore, because
our results apply to elementary links consisting of an arbitrary number of
nodes and to any noise model for the quantum memories, they can be applied to
protocols that go beyond bipartite entanglement distribution, namely to
protocols for distributing multipartite entanglement [106, 107, 108, 39, 38,
44, 67, 47, 42]. We also expect our results to be useful in the analysis of
entanglement distribution using all-photonic quantum repeaters [36] and in the
analysis of entanglement distribution using satellite-based quantum networks
[109, 110, 111, 112, 113], in which an elementary link can easily be on the
order of 1000 km [114] while still having a high fidelity. Finally, since
decision processes are at the heart of reinforcement learning, the results of
this work provide the theoretical foundation for performing reinforcement
learning of (near-optimal) quantum network protocols that can be used in
practice.
### 1.2 Relation to prior work
Policy-based approaches to quantum network protocols have been considered
before in [115, 116, 99, 117] (see also [62]), where terms such as “rule-set”
or “schedule” have been used instead of “policy”. In [99], the authors
consider different control protocols for elementary link generation in a
quantum network based on different configurations of the sources and heralding
stations and the impact they have on end-to-end entanglement distribution
rates. In [115], the authors look at protocols for end-to-end entanglement
distribution along a chain of quantum repeaters and simulate different
scheduling protocols for entanglement distillation along elementary links.
Similarly, in [116], the authors use finite state machines to analyze the
different layers of an end-to-end entanglement distribution protocol in
quantum networks, such as entanglement distillation and entanglement swapping.
Finally, in [117], the authors use an approach based on rule-sets to determine
end-to-end entanglement distribution rates and fidelities of the end-to-end
pairs along a chain of quantum repeaters.
One of the goals of this work is to explicitly formalize the approaches taken
in the aforementioned works within the context of decision processes, because
this allows us to systematically study different policies and calculate
quantities that are relevant for quantum networks, such as entanglement
distribution rates and fidelities of the quantum states of the links.
This work is complementary to other prior work on using Markov chains to
analyze waiting times and entanglement distribution rates for a chain of
quantum repeaters [118, 119, 120, 102], and to prior work on analyzing the
quantum state in a quantum repeater chain with noisy quantum memories [121,
122, 123, 124, 125]. It is also complementary to [126], in which the authors
use reinforcement learning to discover protocols for quantum teleportation,
entanglement distillation, and end-to-end bipartite entanglement distribution
along a chain of quantum repeaters. While the work in [126] is largely
numerical, our work is focused on formally developing the mathematical tools
needed to perform reinforcement learning of protocols in general quantum
networks. The development of the mathematical tools is essential when an agent
acts in a quantum-mechanical environment, because it is important to
understand how the agent’s actions affect the quantum state of the
environment. Furthermore, we expect that the protocols learned in [126],
particularly those for entanglement distillation and entanglement swapping,
could be incorporated as subroutines within the mathematical framework of
decision processes developed in this work, so that large-scale quantum network
protocols (going beyond the elementary link level) can be discovered using
reinforcement learning.
This work is also related to the work in [127], in which the authors develop a
link-layer protocol for generating elementary links in a quantum network, and
they perform simulations of entanglement distribution using a discrete-event
simulator under various scenarios. The effect of different scheduling
strategies is also considered. The protocols in [127] consider actions in a
more fine-grained manner than what we consider in this work. In particular,
the steps required for heralding (namely, the communication signals for the
results of the heralding) are explicitly taken into account. In Remark 2.3, we
briefly explain how the approach of [127] can be viewed in terms of the
framework being considered here.
The approach to policy optimization taken in this work is similar to the
approach in [128], in the sense that both approaches make use of the principle
of dynamic programming. While in [128] the focus is on obtaining end-to-end
bipartite entanglement in a chain of quantum repeaters, the goal here is
simply to examine elementary link generation and to determine the optimal
sequence of actions that should be performed in order to maximize both the
fidelity of any given elementary link and the probability that any given
elementary link is active at any given time.
## 2 Elementary link generation
Let us go back to the graphical representation of a quantum network in Figure
1. In this work, we suppose that all of the edges in the graph represent
entangled states shared by the corresponding nodes. These entangled states are
distributed to the nodes by stations containing an entanglement source. These
source stations can be on the ground at fixed locations, they can be at one of
the nodes in the edge, or they can be on satellites orbiting the earth [109,
110].
The model for transmission of quantum states from the source stations to the
nodes is as follows. The source prepares a $k$-partite state $\rho^{S}$, where
$k$ is the number of nodes belonging to the edge. Each of the $k$ quantum
systems is encoded into $d$ bosonic modes, with $d\geq 1$. The source state
$\rho^{S}$ is typically of the form $|\psi^{S}\rangle\langle\psi^{S}|$, where
$|\psi^{S}\rangle=\sqrt{\smash[b]{p_{0}^{S}}}|\text{vac}\rangle+\sqrt{\smash[b]{p_{1}^{S}}}|\psi_{1}^{S}\rangle+\sqrt{\smash[b]{p_{2}^{S}}}|\psi_{2}^{S}\rangle+\dotsb,$
(2.1)
where $|\psi_{n}^{S}\rangle$ is a state vector with $n$ photons in total for
each of the $k$ parties and the numbers $p_{n}^{S}\geq 0$ are probabilities,
so that $\sum_{n=0}^{\infty}p_{n}^{S}=1$. For example, in the case $k=2$ and
$d=2$, the following source state is generated from a parametric down-
conversion process (see, e.g., [129, 124]):
$\displaystyle|\psi^{S}\rangle$
$\displaystyle=\sum_{n=0}^{\infty}\frac{\sqrt{n+1}r^{n}}{\text{e}^{q}}|\psi_{n}\rangle,$
(2.2) $\displaystyle|\psi_{n}\rangle$
$\displaystyle=\frac{1}{\sqrt{n+1}}\sum_{m=0}^{n}(-1)^{m}|n-m,m;m,n-m\rangle,$
(2.3)
where $r$ and $q$ are parameters characterizing the process. One often
considers a truncated version of this state as an approximation, so that [124]
$|\psi\rangle=\sqrt{p_{0}}|0,0;0,0\rangle+\sqrt{\frac{p_{1}}{2}}(|1,0;0,1\rangle+|0,1;1,0\rangle)\\\
+\sqrt{\frac{p_{2}}{3}}(|2,0;0,2\rangle+|1,1;1,1\rangle+|0,2;2,0\rangle),$
(2.4)
where $p_{0}+p_{1}+p_{2}=1$. Typically, a source state of the form (2.1) is
not ideal, in the sense that the desired state is given by one of the state
vectors $|\psi_{j}^{S}\rangle$, and the other terms arise due to the naturally
imperfect nature of the source. For example, for the state in (2.4), the
desired bipartite state is the maximally entangled state
$|\Psi^{+}\rangle\equiv\frac{1}{\sqrt{2}}(|1,0;0,1\rangle+|0,1;1,0\rangle)$.
Once the source state is prepared, each mode is sent through a bosonic pure-
loss/attenuation channel $\mathcal{L}_{\eta}$ [130], where $\eta\in(0,1]$ is
the transmissivity of the medium. This channel provides a good model for
transmission of photons through an optical fiber, in which case
$\eta=\text{e}^{-\frac{L}{L_{0}}}$ [22, 23], where $L$ is the transmission
distance and $L_{0}$ is the attenuation length of the fiber. Letting
$\mathcal{L}_{\eta}^{(d)}\coloneqq\underbrace{\mathcal{L}_{\eta}\otimes\mathcal{L}_{\eta}\otimes\dotsb\otimes\mathcal{L}_{\eta}}_{d\text{
times}}$ (2.5)
denote the quantum channel that acts on the $d$ modes of each of the $k$
systems, the overall quantum channel through which the source state $\rho^{S}$
is sent is
$\mathcal{L}_{\vec{\eta}}^{(k;d)}\coloneqq\underbrace{\mathcal{L}_{\eta_{1}}^{(d)}\otimes\mathcal{L}_{\eta_{2}}^{(d)}\otimes\dotsb\otimes\mathcal{L}_{\eta_{k}}^{(d)}}_{k\text{
times}},$ (2.6)
where $\vec{\eta}=(\eta_{1},\eta_{2},\dotsc,\eta_{k})$ and $\eta_{j}$ is the
transmissivity of the medium to the $j^{\text{th}}$ node in the edge. The
quantum state shared by the $k$ nodes after transmission from the source is
then
$\rho_{\text{out}}^{S}\coloneqq\mathcal{L}_{\vec{\eta}}^{(k;d)}(\rho^{S}).$
(2.7)
After transmission from the source to the nodes, the nodes typically have to
execute a heralding procedure, which is a sequence of local operations and
classical communication between the nodes that confirms whether all of the
nodes received their quantum systems and whether they are in the desired
subspace. If the heralding procedure succeeds, then the nodes store their
quantum systems in a quantum memory. Mathematically, the heralding procedure
can be described by a set $\\{\mathcal{M}_{0},\mathcal{M}_{1}\\}$ of
completely positive trace non-increasing maps such that
$\mathcal{M}_{0}+\mathcal{M}_{1}$ is trace preserving. The map
$\mathcal{M}_{0}$ corresponds to failure of the heralding procedure, and the
map $\mathcal{M}_{1}$ corresponds to success. The outcome of the heralding
procedure can then be captured by the following transformation of the state
$\rho_{\text{out}}^{S}$ to a classical-quantum state:
$\rho_{\text{out}}^{S}\mapsto|0\rangle\langle
0|\otimes\mathcal{M}_{0}(\rho_{\text{out}}^{S})+|1\rangle\langle
1|\otimes\mathcal{M}_{1}(\rho_{\text{out}}^{S})=|0\rangle\langle
0|\otimes\widetilde{\tau}^{\varnothing}+|1\rangle\langle
1|\otimes\widetilde{\rho}_{0},$ (2.8)
where the classical register holds the binary outcome of the heralding
procedure (1 for success and 0 for failure) and the quantum register holds the
quantum state of the nodes corresponding to the outcome. In particular,
$\widetilde{\tau}^{\varnothing}\coloneqq\mathcal{M}_{0}(\rho_{\text{out}}^{S})$
is the (unnormalized) quantum state corresponding to failure, and
$\widetilde{\rho}_{0}\coloneqq\mathcal{M}_{1}(\rho_{\text{out}}^{S})$ is the
(unnormalized) quantum state corresponding to success. The subscript “0” in
$\widetilde{\rho}_{0}$ indicates that the quantum memories of the nodes are in
their initial state immediately after success of the heralding procedure; we
expand on this below. The quantum states conditioned on success and failure,
respectively, are defined to be
$\rho_{0}\coloneqq\frac{\widetilde{\rho}_{0}}{\operatorname{Tr}[\widetilde{\rho}_{0}]},\quad\tau^{\varnothing}\coloneqq\frac{\widetilde{\tau}^{\varnothing}}{\operatorname{Tr}[\widetilde{\tau}^{\varnothing}]}.$
(2.9)
Throughout this work, we let
$p\coloneqq\operatorname{Tr}[\widetilde{\rho}_{0}]$ (2.10)
denote the overall probability of success of the transmission from the source
and of the heralding procedure.
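As a concrete, simplified illustration of Eqs. (2.7)–(2.10) (our own numerical sketch with assumed transmissivities; the heralding here is idealized as a projection onto the one-photon-per-node subspace rather than the general maps $\mathcal{M}_{0},\mathcal{M}_{1}$), consider a dual-rail Bell pair sent through single-mode pure-loss channels with a photon-number cutoff of one per mode. In this idealized model the success probability reduces to $p=\eta_{1}\eta_{2}$ and the heralded state has unit fidelity.

```python
# Simplified numerical sketch (assumed parameters): a dual-rail Bell pair through
# pure-loss channels, followed by an idealized "one photon per node" heralding.
import numpy as np
from functools import reduce

eta_A, eta_B = 0.8, 0.6                      # assumed transmissivities to the nodes

def loss_kraus(eta):
    """Single-mode pure-loss channel restricted to the {|0>, |1>} Fock subspace."""
    K0 = np.diag([1.0, np.sqrt(eta)])                            # photon survives
    K1 = np.sqrt(1 - eta) * np.array([[0.0, 1.0], [0.0, 0.0]])   # photon lost
    return [K0, K1]

def kron_all(ops):
    return reduce(np.kron, ops)

def apply_to_mode(rho, kraus, mode, n_modes=4):
    out = np.zeros_like(rho)
    for K in kraus:
        ops = [np.eye(2)] * n_modes
        ops[mode] = K
        Kf = kron_all(ops)
        out += Kf @ rho @ Kf.conj().T
    return out

def fock(*occ):                               # basis vector |n_a1, n_a2; n_b1, n_b2>
    return kron_all([np.eye(2)[:, n] for n in occ])

# Ideal dual-rail Bell pair on modes (a1, a2; b1, b2), cf. |Psi^+> of Eq. (2.4).
psi = (fock(1, 0, 0, 1) + fock(0, 1, 1, 0)) / np.sqrt(2)
rho = np.outer(psi, psi)

for mode, eta in zip(range(4), [eta_A, eta_A, eta_B, eta_B]):
    rho = apply_to_mode(rho, loss_kraus(eta), mode)

# Idealized heralding: project onto the subspace with one photon at each node.
P = sum(np.outer(fock(*occ), fock(*occ))
        for occ in [(1, 0, 0, 1), (1, 0, 1, 0), (0, 1, 0, 1), (0, 1, 1, 0)])
p = np.trace(P @ rho @ P).real                # success probability, Eq. (2.10)
rho0 = P @ rho @ P / p                        # conditional state, Eq. (2.9)
F = (psi @ rho0 @ psi).real
print(f"p = {p:.3f} (= eta_A*eta_B = {eta_A*eta_B:.3f}),  fidelity = {F:.3f}")
```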
Now, as mentioned above, once the heralding procedure succeeds, the nodes
store their quantum systems in their local quantum memory. Quantum memories
have been made using trapped ions [131], Rydberg atoms [132, 133], atom-cavity
systems [134, 135], NV centers in diamond [136, 137, 138, 100, 139, 101],
individual rare-earth ions in crystals [140], and superconducting processors
[141]. The quantum memories are in general imperfect, which means that the
quantum systems decohere over time. We describe this decoherence by a quantum
channel $\mathcal{N}_{j}$ acting on each quantum system
$j\in\\{1,2,\dotsc,k\\}$ of the elementary link. The decoherence channel is
applied at every time step in which the quantum system is in memory. The
overall quantum channel acting on all of the quantum systems in the elementary
link is
$\widehat{\mathcal{N}}\coloneqq\mathcal{N}_{1}\otimes\mathcal{N}_{2}\otimes\dotsb\otimes\mathcal{N}_{k}.$
(2.11)
The quantum state of the elementary link after $m$ time steps in the memories
is therefore given by
$\rho(m)\coloneqq\widehat{\mathcal{N}}^{\circ m}(\rho_{0}).$ (2.12)
For a particular target/desired quantum state of the elementary link, which we
assume to be a pure state $\psi=|\psi\rangle\langle\psi|$, we let
$f_{m}(\rho_{0};\psi)\coloneqq\langle\psi|\rho(m)|\psi\rangle=\langle\psi|\widehat{\mathcal{N}}^{\circ
m}(\rho_{0})|\psi\rangle$ (2.13)
denote the fidelity of the state $\rho(m)$ with respect to the target state
$\psi$. For brevity, we suppress the dependence of $f_{m}$ on the target state
$\psi$ whenever it is understood or is unimportant. We also suppress, for
brevity, the dependence of $f_{m}$ on the decoherence channels of the quantum
memories.
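For instance (a minimal sketch with an assumed noise model, not one singled out by this work), taking each $\mathcal{N}_{j}$ in (2.11) to be a single-qubit depolarizing channel with per-step parameter $q$, the state $\rho(m)$ of (2.12) and the fidelity $f_{m}$ of (2.13) for a two-qubit link can be computed directly:

```python
# Minimal sketch (assumed depolarizing memory noise) of Eqs. (2.12)-(2.13).
import numpy as np

q = 0.05                                      # assumed per-step depolarizing rate
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)     # target state |Phi^+> of the link
rho = np.outer(psi, psi).astype(complex)      # assume a perfect initial link rho_0

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
kraus = [np.sqrt(1 - 3 * q / 4) * I2] + [np.sqrt(q / 4) * P for P in (X, Y, Z)]

def memory_step(rho):
    """One time step of N_1 (x) N_2: depolarize each of the two qubits."""
    out = np.zeros_like(rho)
    for KA in kraus:
        for KB in kraus:
            K = np.kron(KA, KB)
            out += K @ rho @ K.conj().T
    return out

for m in range(1, 6):
    rho = memory_step(rho)                    # rho(m) = N^{o m}(rho_0), Eq. (2.12)
    f_m = (psi.conj() @ rho @ psi).real       # fidelity f_m, Eq. (2.13)
    print(f"m = {m}: f_m = {f_m:.4f}")
```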
### 2.1 Formulation as a quantum decision process
Let us now describe elementary link generation from the point of view of
decision processes. Specifically, in this section, we cast elementary link
generation as a quantum partially observable Markov decision process, as
defined in [85] (see also [86]), which is a particular quantum generalization
of Markov decision processes. We start by reviewing the general definition of
a classical Markov decision process. Then, we proceed to the general
definition of a quantum partially observable Markov decision process. We then
apply this definition to come up with a quantum partially observable Markov
decision process that is specific to elementary link generation.
Figure 3: Schematic diagrams of classical (left) and quantum partially
observable (right) Markov decision processes. See [82, 142] for details on
classical Markov decision processes. Our definition of quantum partially
observable Markov decision processes is based on the definition given in [85].
See the main text for details on the elements of both types of decision
processes.
A classical Markov decision process, depicted in the left panel of Figure 3,
is a sequence of interactions between an agent and its environment that is
defined by the following elements. (We follow the definition presented in [82,
Chapter 2].)
* A set $\mathcal{X}$ of states of the environment, with associated random
variables $X(t)$ for all $t\geq 1$ whose values are contained in
$\mathcal{X}$. We also have a set $\mathcal{A}$ of actions of the agent, with
associated random variables $A(t)$ for all $t\geq 1$ whose values are
contained in $\mathcal{A}$. The sequence
$H(t)\coloneqq(X(1),A(1),X(2),A(2),\dotsc,A(t-1),X(t))$ (2.14)
of state and action random variables tells us the history of the agent-
environment interaction up to some time $t\geq 1$. Any realization of the
history is a sequence of the form
$h^{t}\coloneqq(x_{1},a_{1},x_{2},a_{2},\dotsc,a_{t-1},x_{t}),$ (2.15)
where $x_{j}\in\mathcal{X}$ and $a_{j}\in\mathcal{A}$. Given any history
$h^{t}$ of the form shown above, we let
$h^{t}_{j}\coloneqq(x_{1},a_{1},x_{2},a_{2},\dotsc,a_{j-1},x_{j})$ (2.16)
denote the history up to time $j\geq 2$. For $j=1$, we let $h^{t}_{1}=x_{1}$.
Then, we can regard the state and action random variables as functions such
that, for any history $h^{t}$ as in (2.15),
$X(j)(h^{t})=x_{j},\quad A(j)(h^{t})=a_{j}$ (2.17)
for all $1\leq j\leq t$.
* A transition function
$T_{t}:\mathcal{X}\times\mathcal{A}\times\mathcal{X}\to[0,1]$ for all $t\geq
1$ such that
$T_{t}(x_{t},a_{t},x_{t+1})=\Pr[X(t+1)=x_{t+1}|X(t)=x_{t},A(t)=a_{t}]$. In
other words, the transition function gives us the probability that, at time
$t$, the environment transitions to a particular state at time $t+1$ given its
state at time $t$ and the agent’s action at time $t$.
* A reward function
$r_{t}:\mathcal{X}\times\mathcal{A}\times\mathcal{X}\to\mathbb{R}$ for $t\geq
2$ such that $r_{t}(x_{t-1},a_{t-1},x_{t})$ is the reward received by the
agent at time $t$ based on the state $x_{t-1}$ of the environment at time
$t-1$, the agent’s action $a_{t-1}$ at time $t-1$, and the new state $x_{t}$
of the environment at time $t$ based on the agent’s action. The reward $r_{1}$
at time $t=1$ is a given fixed value at the start of the decision process.
* A decision function $d_{t}$ for $t\geq 1$ such that
$d_{t}(h^{t})(a_{t})\coloneqq\Pr[A(t)=a_{t}|H(t)=h^{t}].$ (2.18)
In other words, the decision function gives us the probability that, at time
$t$, the agent takes the action $a_{t}$ conditioned on the history $h^{t}$ of
the interaction up to time $t$. The sequence
$\pi\coloneqq(d_{1},d_{2},\dotsc)$ is called a policy for the agent, and it
tells us how action decisions are made at each time step.
At each time step $t\geq 1$, the environment is in some state
$x_{t}\in\mathcal{X}$. The agent receives information about the state of the
environment and selects an action $a_{t}\in\mathcal{A}$. The environment,
based on this action, transitions to a different state $x_{t+1}\in\mathcal{X}$
according to the transition function $T_{t}$ and simultaneously provides the
agent with some reward according to the reward function $r_{t+1}$. The agent
also receives full (or partial) information about the new state of the
environment, which they can then use to select the next action. The agent’s
goal is to perform actions that maximize its long-term reward. Specifically,
in the finite-horizon setting, the agent’s goal is to maximize the expected
value of the quantity $\sum_{t=1}^{T}r_{t}$ up to a given amount $T<\infty$ of
time, called the horizon time. In the infinite-horizon setting, the agent’s
goal is to maximize the expected value of the quantity
$\sum_{t=1}^{\infty}\gamma^{t-1}r_{t}$, where $\gamma\in(0,1]$ is a discount
factor. A thorough introduction to classical Markov decision processes can be
found in [82, 142]. Note that what makes a classical Markov decision process
Markovian is the fact that the transition function and the reward function at
each time depend only on the state and action of the previous time step.
However, the decision function can in general depend on the entire history of
the interaction, even in a Markov decision process.
By the basic rules of probability, the probability of any history $h^{t}$ is
given by
$\displaystyle\Pr[H(t)=h^{t}]$
$\displaystyle=\Pr[X(t)=x_{t}|H(t-1)=h_{t-1}^{t},A(t-1)=a_{t-1}]\cdot$
$\displaystyle\qquad\qquad\Pr[A(t-1)=a_{t-1}|H(t-1)=h_{t-1}^{t}]\cdot\Pr[H(t-1)=h_{t-1}^{t}]$
(2.19)
$\displaystyle=\Pr[X(1)=x_{1}]\prod_{j=2}^{t}\left(\Pr[X(j)=x_{j}|H(j-1)=h_{j-1}^{t},A(j-1)=a_{j-1}]\right.\cdot$
$\displaystyle\qquad\qquad\qquad\qquad\left.\Pr[A(j-1)=a_{j-1}|H(j-1)=h_{j-1}^{t}]\right)$
(2.20)
$\displaystyle=\Pr[X(1)=x_{1}]\prod_{j=2}^{t}\left(T_{j-1}(x_{j-1},a_{j-1},x_{j})\cdot
d_{j-1}(h_{j-1}^{t})(a_{j-1})\right).$ (2.21)
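As an illustration of (2.21), the following short Python sketch evaluates the probability of a given history for a toy two-state, two-action process; the callables `T` and `d` are placeholders for the transition and decision functions defined above, and the numerical values are arbitrary assumptions made only for this example.

```python
# Illustrative sketch of (2.21): the probability of a history
# h^t = (x_1, a_1, x_2, a_2, ..., a_{t-1}, x_t) is the product of the initial
# state probability, the decision probabilities, and the transition
# probabilities. The callables T and d are placeholders for T_j and d_j.

def history_probability(history, p_initial, T, d):
    xs = history[0::2]           # x_1, ..., x_t
    acts = history[1::2]         # a_1, ..., a_{t-1}
    prob = p_initial[xs[0]]      # Pr[X(1) = x_1]
    for j in range(2, len(xs) + 1):
        partial = history[:2 * (j - 1) - 1]                  # h^t_{j-1}
        prob *= d(j - 1, partial, acts[j - 2])               # d_{j-1}(h^t_{j-1})(a_{j-1})
        prob *= T(j - 1, xs[j - 2], acts[j - 2], xs[j - 1])  # T_{j-1}(x_{j-1}, a_{j-1}, x_j)
    return prob

# Toy example: two states, two actions, a uniformly random policy, and a
# transition rule in which action 1 succeeds with probability p and action 0
# leaves the state unchanged (all numbers are arbitrary).
p = 0.6
p_initial = {0: 1 - p, 1: p}
T = lambda j, x, a, x_next: ((p if x_next == 1 else 1 - p) if a == 1
                             else float(x_next == x))
d = lambda j, h, a: 0.5
print(history_probability((0, 1, 1, 0, 1), p_initial, T, d))  # 0.06
```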
We now state the definition of a quantum partially observable Markov decision
process (which we refer to from now on as a “quantum decision process” for
brevity), as defined in [85]; see the right panel of Figure 3 for a schematic
diagram. Roughly speaking, a quantum decision process is similar to a
classical Markov decision process, with the main differences being that in the
quantum case the environment is a quantum system, and each of the agent’s
actions in the set $\mathcal{A}$ of actions corresponds to a physical
evolution of the quantum system, which is described by a completely positive
trace non-increasing map acting on the quantum state of the environment. At
each time step, the agent only receives classical information about the state
of the quantum system, called observations, which are elements of the set $\mathcal{X}$, hence making the process “partially observable”. In
detail, we have the following.
* •
We define an orthonormal basis $\\{|x\rangle\\}_{x\in\mathcal{X}}$ of vectors
corresponding to the set $\mathcal{X}$ of classical observations of the agent,
and an orthonormal basis $\\{|a\rangle\\}_{a\in\mathcal{A}}$ of vectors
corresponding to the set $\mathcal{A}$ of the agent’s actions. For every time
step $t\geq 1$, we define classical registers $X_{t}$ and $A_{t}$ for the
observation and action values, respectively, at time $t$. We denote the
collection of observation and action value classical registers up to time $t$
by $H_{t}\equiv X_{1}A_{1}\dotsb A_{t-1}X_{t}$. Then, based on the definition
of a history in (2.15), we define
$|h^{t}\rangle_{H_{t}}\coloneqq|x_{1}\rangle_{X_{1}}\otimes|a_{1}\rangle_{A_{1}}\otimes|x_{2}\rangle_{X_{2}}\otimes|a_{2}\rangle_{A_{2}}\otimes\dotsb\otimes|a_{t-1}\rangle_{A_{t-1}}\otimes|x_{t}\rangle_{X_{t}}.$
(2.22)
* •
The transition functions are completely positive trace non-increasing maps
such that, at time $t\geq 1$, $\mathcal{T}^{t;x_{t},a_{t},x_{t+1}}$ gives the
evolution of the quantum state of the environment under the given values
$x_{t}\in\mathcal{X}$, $a_{t}\in\mathcal{A}$, and $x_{t+1}\in\mathcal{X}$ of
the observation, action, and observation at the next time step, respectively.
The transition maps are such that the sum
$\sum_{x_{t+1}\in\mathcal{X}}\mathcal{T}^{t;x_{t},a_{t},x_{t+1}}$ is a
trace preserving map for all $t\geq 1$, all observations
$x_{t}\in\mathcal{X}$, and all actions $a_{t}\in\mathcal{A}$. At time $t=0$,
before the start of the interaction, we have a set
$\\{\mathcal{T}^{0;x_{1}}\\}_{x_{1}\in\mathcal{X}}$ of completely positive
trace non-increasing maps that give rise to the first observation
$x_{1}\in\mathcal{X}$ of the agent, and they have the property that
$\sum_{x_{1}\in\mathcal{X}}\mathcal{T}^{0;x_{1}}$ is a trace preserving map.
* •
The reward at times $t\geq 2$ is given by a set
$\\{R^{t;x_{t-1},a_{t-1},x_{t}}:x_{t-1},x_{t}\in\mathcal{X},a_{t-1}\in\mathcal{A}\\}$
of Hermitian operators acting on the Hilbert space corresponding to the
quantum system of the environment, such that the value of the reward is the
expectation value of the appropriate Hermitian operator. The reward at time
$t=1$ is given by a set $\\{R^{1;x_{1}}:x_{1}\in\mathcal{X}\\}$ of Hermitian
operators acting on the Hilbert space corresponding to the quantum system of
the environment. See (2.34) and (2.37) below for details on how the reward is
calculated.
* •
The decision function and policy are defined exactly as in the classical case:
for any history $h^{t}$, with $t\geq 1$, $d_{t}(h^{t}):\mathcal{A}\to[0,1]$ is
a probability distribution over actions, and $\pi=(d_{1},d_{2},\dotsc)$ is a
policy. In addition, to each element of the policy we associate a density operator as follows:
$\pi(t;h^{t})\coloneqq\sum_{a\in\mathcal{A}}d_{t}(h^{t})(a)|a\rangle\langle
a|,\quad t\geq 1,$ (2.23)
for all histories $h^{t}$.
* •
An additional defining element is the initial quantum state $\rho_{E_{0}}$ of
the environment, where $E_{0}$ is a label for the quantum system of the
environment before its interaction with the agent. We denote by $E_{t}$ the
quantum system of the environment at times $t\geq 1$ during its interaction
with the agent.
Figure 4: An alternative depiction of the quantum decision process shown in
the right panel of Figure 3. In this diagram, we explicitly illustrate the
interaction of the agent and the environment over time via a sequence of
quantum channels (shown here up to time $t=3$). The decision channels
$\mathcal{D}^{t}$ correspond to the decision functions of the agent (see
(2.24)), and the environment response channels $\mathcal{E}^{t}$ correspond to
the transition maps of the environment (see (2.25) and (2.26)). The operator
$\widehat{R}$ corresponding to the reward is defined in (2.42), and the states
$\widehat{\sigma}(t)$ are defined in (2.27).
An alternative way of depicting a quantum decision process is shown in Figure
4 up to time $t=3$. In this depiction, we explicitly show the progression
through time of the interaction between the agent and the environment. From
the diagram in Figure 4, we see that a quantum decision process falls into the
general paradigm of agent-environment interactions considered previously in
[143, 144], and more generally we have that it falls within the theoretical
framework of quantum combs/games [145, 146, 147] (see also [148]).
The decision channels $\mathcal{D}^{t}$ in Figure 4 are defined as
$\mathcal{D}_{H_{t}\to H_{t}A_{t}}^{t}(|h^{t}\rangle\langle
h^{t}|_{H_{t}})\coloneqq|h^{t}\rangle\langle
h^{t}|_{H_{t}}\otimes\sum_{a\in\mathcal{A}}d_{t}(h^{t})(a)|a\rangle\langle
a|_{A_{t}},$ (2.24)
and the quantum channels $\mathcal{E}^{t}$, called environment response
channels, are defined as
$\displaystyle\mathcal{E}_{E_{0}\to
H_{1}E_{1}}^{0}(\rho_{E_{0}})\coloneqq\sum_{x_{1}\in\mathcal{X}}|x_{1}\rangle\langle
x_{1}|_{H_{1}}\otimes\mathcal{T}_{E_{0}\to E_{1}}^{0;x_{1}}(\rho_{E_{0}}),$
(2.25) $\displaystyle\mathcal{E}_{H_{t}A_{t}E_{t}\to
H_{t+1}E_{t+1}}^{t}(\omega_{H_{t}}\otimes\pi_{A_{t}}\otimes\rho_{E_{t}})$
$\displaystyle\coloneqq\sum_{\begin{subarray}{c}x_{t},x_{t+1}\in\mathcal{X}\\\
a_{t}\in\mathcal{A}\end{subarray}}\operatorname{Tr}_{X_{t}A_{t}}[(\omega_{H_{t}}\otimes\pi_{A_{t}})|x_{t},a_{t}\rangle\langle
x_{t},a_{t}|_{X_{t}A_{t}}]|x_{t},a_{t},x_{t+1}\rangle\langle
x_{t},a_{t},x_{t+1}|_{X_{t}A_{t}X_{t+1}}\otimes\mathcal{T}_{E_{t}\to
E_{t+1}}^{t;x_{t},a_{t},x_{t+1}}(\rho_{E_{t}})$ (2.26)
for any states $\rho_{E_{0}},\omega_{H_{t}},\pi_{A_{t}},\rho_{E_{t}}$. Using
the decision channels and the environment response channels, it is
straightforward to show that the classical-quantum states
$\widehat{\sigma}(t)$ at the end of each time step $t\geq 1$ are given by
$\widehat{\sigma}_{H_{t}E_{t}}(t)=\sum_{h^{t}}|h^{t}\rangle\langle
h^{t}|_{H_{t}}\otimes\widetilde{\sigma}_{E_{t}}(t;h^{t}),$ (2.27)
where
$\widetilde{\sigma}_{E_{t}}(t;h^{t})=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)\left(\mathcal{T}_{E_{t-1}\to
E_{t}}^{t-1;x_{t-1},a_{t-1},x_{t}}\circ\dotsb\circ\mathcal{T}_{E_{1}\to
E_{2}}^{1;x_{1},a_{1},x_{2}}\circ\mathcal{T}_{E_{0}\to
E_{1}}^{0;x_{1}}\right)(\rho_{E_{0}}).$ (2.28)
The expected quantum state at time $t\geq 1$ is then
$\sigma_{E_{t}}(t)\coloneqq\operatorname{Tr}_{H_{t}}[\widehat{\sigma}_{H_{t}E_{t}}(t)]=\sum_{h^{t}}\widetilde{\sigma}_{E_{t}}(t;h^{t}).$
(2.29)
Any quantum decision process as defined above induces a classical Markov
decision process such that the probability of any history $h^{t}$ is given by
$\Pr[H(t)=h^{t}]=\operatorname{Tr}[\widetilde{\sigma}_{E_{t}}(t;h^{t})]$.
Using this, along with (2.20), it is straightforward to show that the
transition probabilities of the induced classical Markov decision process are
given by
$\Pr[X(t+1)=x_{t+1}|X(t)=x_{t},A(t)=a_{t}]=\operatorname{Tr}[\mathcal{T}_{E_{t}\to
E_{t+1}}^{t;x_{t},a_{t},x_{t+1}}(\sigma_{E_{t}}(t|h^{t}))],$ (2.30)
for all histories $h^{t}$, where
$\sigma_{E_{t}}(t|h^{t})\coloneqq\frac{\widetilde{\sigma}_{E_{t}}(t;h^{t})}{\Pr[H(t)=h^{t}]}$
(2.31)
is the conditional quantum state of the environment.
Finally, let us discuss how rewards are calculated. Throughout this work, we
focus our attention on so-called episodic processes, in which the decision
process proceeds for a given finite horizon time $T$ and the reward is given
only at this final time step. In particular, then, using the Hermitian
operators
$\\{R_{E_{t}}^{t;x_{t-1},a_{t-1},x_{t}}:x_{t-1},x_{t}\in\mathcal{X},a_{t-1}\in\mathcal{A},t\geq
2\\}$ and $\\{R_{E_{1}}^{1;x_{1}}:x_{1}\in\mathcal{X}\\}$ for the rewards as
described above, for any history $h^{t}=(x_{1},a_{1},\dotsc,a_{t-1},x_{t})$ we
have that
$\displaystyle t=1:$ $\displaystyle\quad
r_{1}(x_{1})\coloneqq\left\\{\begin{array}[]{l
l}\operatorname{Tr}[R_{E_{1}}^{1;x_{1}}\sigma_{E_{1}}(1|x_{1})]&\text{if
}T=1,\\\ 0&\text{otherwise},\end{array}\right.$ (2.34)
$\displaystyle\forall~{}t\geq 2:$ $\displaystyle\quad
r_{t}(x_{t-1},a_{t-1},x_{t})\coloneqq\left\\{\begin{array}[]{l l}0&\text{if
}t<T,\\\
\operatorname{Tr}[R_{E_{t}}^{t;x_{t-1},a_{t-1},x_{t}}\sigma_{E_{t}}(t|h^{t})]&\text{if
}t=T.\end{array}\right.$ (2.37)
The expected total reward up to time $T\geq 2$ is then
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}r_{t}\right]$
$\displaystyle=\mathbb{E}[r_{T}]$ (2.38)
$\displaystyle=\sum_{h^{T}}\Pr[H(T)=h^{T}]r_{T}(x_{T-1},a_{T-1},x_{T})$ (2.39)
$\displaystyle=\sum_{h^{T}}\operatorname{Tr}[R_{E_{T}}^{T;x_{T-1},a_{T-1},x_{T}}\widetilde{\sigma}_{E_{T}}(T;h^{T})]$
(2.40)
$\displaystyle=\operatorname{Tr}[\widehat{R}_{X_{T-1}A_{T-1}X_{T}E_{T}}(T)\widehat{\sigma}_{H_{T}E_{T}}(T)],$
(2.41)
where in the last line we defined the operator
$\widehat{R}_{X_{t-1}A_{t-1}X_{t}E_{t}}(t)\coloneqq\sum_{\begin{subarray}{c}x_{t-1},x_{t}\in\mathcal{X}\\\
a_{t-1}\in\mathcal{A}\end{subarray}}|x_{t-1},a_{t-1},x_{t}\rangle\langle
x_{t-1},a_{t-1},x_{t}|_{X_{t-1}A_{t-1}X_{t}}\otimes
R_{E_{t}}^{t;x_{t-1},a_{t-1},x_{t}}$ (2.42)
for all $t\geq 2$.
From now on, for simplicity of notation, we drop the subscripts containing the
labels of the classical registers of the history and the quantum systems of
the environment when writing quantum states and other operators, unless they
are needed for clarity.
We are now ready to take the generic definition of a quantum decision process
given above and apply it to the task of elementary link generation in a
quantum network. Roughly speaking, as outlined in Section 1.1 and illustrated
in Figure 2, the decision process for any elementary link is such that, at
each time step, the agent (which we define to be all of the nodes in the
elementary link as a collective entity) either requests entanglement from a
source station (which we define to be the environment) or keeps the entangled
state currently stored in memory, in which case the decoherence channel is
applied to each of the quantum systems comprising the entangled state of the
elementary link. This process goes on for a given time $T<\infty$, after which
a reward is given. Formally, we have the following.
###### Definition 2.1 (Quantum decision process for elementary link
generation).
Given any elementary link in a quantum network, as shown in Figure 2, we
define a quantum decision process for the elementary link by letting the
source station used to establish the entangled state of the elementary link be
the environment, and we let the nodes belonging to the elementary link
collectively be the agent. Then, the other elements of the quantum decision
process are defined as follows.
* •
We let $\mathcal{X}=\\{0,1\\}$ tell us whether or not the elementary link is
active at a particular time. In particular, then, for the random variables
$X(t)$ for all $t\geq 1$, we have:
* –
$X(t)=0$: link is inactive;
* –
$X(t)=1$: link is active.
We let $\mathcal{A}=\\{0,1\\}$ be the set of possible actions of the agent, so
that for the random variables $A(t)$ for all $t\geq 1$, we have:
* –
$A(t)=0$: wait/keep the entangled state;
* –
$A(t)=1$: discard the entangled state and request a new entangled state.
* •
The transition maps are defined to be time independent, and we denote them by
$\mathcal{T}^{x_{t},a_{t},x_{t+1}}\equiv\mathcal{T}^{t;x_{t},a_{t},x_{t+1}}$
for all $x_{t},a_{t},x_{t+1}\in\\{0,1\\}$ and all $t\geq 1$, where
$\displaystyle\mathcal{T}^{x_{t},1,1}(\sigma)$
$\displaystyle\coloneqq\operatorname{Tr}[\sigma]\,\widetilde{\rho}_{0}\quad\forall~{}x_{t}\in\\{0,1\\},$
(2.43) $\displaystyle\mathcal{T}^{x_{t},1,0}(\sigma)$
$\displaystyle\coloneqq\operatorname{Tr}[\sigma]\,\widetilde{\tau}^{\varnothing}\quad\forall~{}x_{t}\in\\{0,1\\},$
(2.44) $\displaystyle\mathcal{T}^{1,0,1}(\sigma)$
$\displaystyle\coloneqq\widehat{\mathcal{N}}(\sigma),$ (2.45)
$\displaystyle\mathcal{T}^{0,0,0}(\sigma)$ $\displaystyle\coloneqq\sigma$
(2.46)
for any linear operator $\sigma$, where we recall the definitions of
$\widetilde{\tau}^{\varnothing}$ and $\widetilde{\rho}_{0}$ from (2.8). The
maps $\mathcal{T}^{0;x_{1}}$, $x_{1}\in\\{0,1\\}$, are defined to be
$\displaystyle\mathcal{T}^{0;0}(\sigma)$
$\displaystyle\coloneqq(\mathcal{M}_{0}\circ\mathcal{L})(\sigma),$ (2.47)
$\displaystyle\mathcal{T}^{0;1}(\sigma)$
$\displaystyle\coloneqq(\mathcal{M}_{1}\circ\mathcal{L})(\sigma)$ (2.48)
for any linear operator $\sigma$, where we recall from the discussion at the
beginning of Section 2 that $\mathcal{L}$ is the transmission channel from the
source to the nodes and $\mathcal{M}_{0}$ and $\mathcal{M}_{1}$ are completely
positive trace non-increasing maps corresponding to the heralding procedure at
the nodes.
* •
The reward at time $t\geq 1$ is defined as follows. For any history
$h^{t}=(x_{1},a_{1},\dotsc,a_{t-1},x_{t})$ and horizon time $0<T<\infty$,
$\displaystyle t=1:$ $\displaystyle\quad
r_{1}(x_{1})=\left\\{\begin{array}[]{l l}0&\text{if }T>1,\\\
\delta_{x_{1},1}\langle\psi|\sigma(1|x_{1})|\psi\rangle&\text{if
}T=1,\end{array}\right.$ (2.51) $\displaystyle\forall~{}t\geq 2:$
$\displaystyle\quad r_{t}(x_{t-1},a_{t-1},x_{t})=\left\\{\begin{array}[]{l
l}0&\text{if }t<T,\\\
\delta_{x_{t},1}\langle\psi|\sigma(t|h^{t})|\psi\rangle&\text{if
}t=T,\end{array}\right.$ (2.54)
where $|\psi\rangle\langle\psi|$ is the target/desired entangled state of the
elementary link.
* •
The initial state $\rho_{E_{0}}$ of the environment is the state $\rho^{S}$
produced by the source station (see the discussion at the beginning of Section
2). $\blacktriangleleft$
###### Remark 2.1.
Let us make some remarks about our definition of the quantum decision process
for elementary link generation.
* •
Note that our definition of the transition maps is consistent with our
description of the decision process given before Definition 2.1: if the action
is to wait, and the link is currently active, then we apply the decoherence
channel $\widehat{\mathcal{N}}$ to the quantum state of the link; if the
action is to request a new entangled state, then the current quantum state of
the link is discarded and a new link is attempted. If the link is currently
not active and the action is to wait, then the quantum state stays as it is.
* •
Using (2.30) and the definition of the transition maps, we have the following
values for the transition probabilities for all $t\geq 1$ and for any history
$h^{t}=(x_{1},a_{1},\dotsc,a_{t-1},x_{t})$:
$\displaystyle\Pr[X(t+1)=0|X(t)=x_{t},A(t)=1]$ $\displaystyle=1-p,$ (2.55)
$\displaystyle\Pr[X(t+1)=1|X(t)=x_{t},A(t)=1]$ $\displaystyle=p,$ (2.56)
$\displaystyle\Pr[X(t+1)=x_{t+1}|X(t)=x_{t},A(t)=0]$
$\displaystyle=\delta_{x_{t},x_{t+1}}\quad\forall~{}x_{t+1}\in\\{0,1\\}.$
(2.57)
Observe that the transition probabilities are time independent.
* •
Our definition of the reward in (2.51) and (2.54) is similar to the definition
of the reward given in [88, Eq. (1)], in which the reward is equal to zero for
all times except for the final time, at which point the reward is simply the
fidelity of the quantum state of the environment with respect to a desired
pure state. Using the fidelity as the reward makes sense from the perspective
of entanglement generation in a quantum network, because having higher
fidelities at the elementary link level allows for more joining measurements,
and therefore entanglement distribution over longer distances. In Section 4,
when we determine optimal policies for elementary link generation, we provide
further justification for defining the reward as in (2.51) and (2.54), and we
discuss other possible quantities to use as the reward.
Using (2.39)–(2.41), the expected total reward after $T$ time steps is
$\mathbb{E}[r_{T}]=\sum_{h^{T}}\delta_{x_{T},1}\langle\psi|\widetilde{\sigma}(T;h^{T})|\psi\rangle=\operatorname{Tr}[(|1\rangle\langle
1|_{X_{T}}\otimes|\psi\rangle\langle\psi|)\widehat{\sigma}(T)].$ (2.58)
In Theorem 2.2 below, we show that this quantity is simply the expected
fidelity of the elementary link when the link is active. $\blacktriangleleft$
We now derive an explicit expression for the conditional quantum state
$\sigma(t|h^{t})$, as defined in (2.31), for any elementary link.
###### Theorem 2.1 (Quantum state of an elementary link).
For every time step $t\geq 1$ and for any history
$h^{t}=(x_{1},a_{1},\dotsc,a_{t-1},x_{t})$, we have
$\sigma(t|h^{t})=x_{t}\rho(M(t)(h^{t}))+(1-x_{t})\tau^{\varnothing},$ (2.59)
where $M(t)$ is defined to be the random variable that indicates the number of
time steps that the quantum state of the elementary link has been held in
memory at time $t\geq 1$, and it satisfies the recursion relation
$M(t)=\left\\{\begin{array}[]{l l}M(t-1)+X(t)&\text{if }A(t-1)=0,\\\
X(t)-1&\text{if }A(t-1)=1,\end{array}\right.$ (2.60)
where $A(0)\equiv 1$ and $M(0)\equiv-1$. Furthermore,
$\Pr[H(t)=h^{t}]=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)p^{N_{\text{succ}}(t)(h^{t})}(1-p)^{N_{\text{req}}(t)(h^{t})-N_{\text{succ}}(t)(h^{t})}$
(2.61)
for all histories $h^{t}$, where
$N_{\text{req}}(t)\coloneqq\sum_{j=1}^{t}A(j-1),\quad
N_{\text{succ}}(t)\coloneqq\sum_{j=1}^{t}A(j-1)X(j)$ (2.62)
are the number of link requests and the number of successful link requests,
respectively, up to time $t$.
###### Remark 2.2.
Intuitively, the quantity $M(t)$ is the number of consecutive time steps up to
the $t^{\text{th}}$ time step that the action “wait” is performed since the
most recent “request” action. The value $M(t)=-1$ can be thought of as the
resting state of the memory, when it is not loaded. The values that $M(t)$ can
take are $-1,0,1,\dotsc,t-1$. $\blacktriangleleft$
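For concreteness, the following is a minimal Python sketch of the recursion (2.60); it takes arbitrarily chosen sequences of link values and actions and returns the memory times $M(1),\dotsc,M(t)$.

```python
# Illustrative sketch of the memory-time recursion (2.60), with the
# conventions A(0) = 1 and M(0) = -1.

def memory_times(x, a):
    """x = (x_1, ..., x_t) are link values; a = (a_1, ..., a_{t-1}) are actions."""
    a = [1] + list(a)            # prepend A(0) = 1
    M, M_prev = [], -1           # M(0) = -1 (memory not loaded)
    for t in range(1, len(x) + 1):
        if a[t - 1] == 1:        # a request was made at the previous step
            M_t = x[t - 1] - 1   # 0 if it succeeded, -1 if it failed
        else:                    # the "wait" action keeps counting
            M_t = M_prev + x[t - 1]
        M.append(M_t)
        M_prev = M_t
    return M

# A link that fails once, succeeds, is held for two steps, then is re-requested
# unsuccessfully (the particular sequence is arbitrary):
print(memory_times(x=[0, 1, 1, 1, 0], a=[1, 0, 0, 1]))   # [-1, 0, 1, 2, -1]
```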
###### Proof of Theorem 2.1.
First, let us observe that the statement of the theorem is true for $t=1$,
since by (2.71) and (2.72) we can write
$\widetilde{\sigma}(1;x_{1})=x_{1}\widetilde{\rho}_{0}+(1-x_{1})\widetilde{\tau}^{\varnothing}.$
(2.63)
Then, indeed, we have $M(1)=0$ according to the definition in (2.60), as
required, if $x_{1}=1$. Furthermore,
$\operatorname{Tr}[\widetilde{\sigma}(1;x_{1})]=x_{1}p+(1-x_{1})(1-p)=p^{x_{1}}(1-p)^{1-x_{1}},$
(2.64)
so that
$\displaystyle\sigma(1|x_{1})$
$\displaystyle=\frac{x_{1}\widetilde{\rho}_{0}+(1-x_{1})\widetilde{\tau}^{\varnothing}}{p^{x_{1}}(1-p)^{1-x_{1}}}$
(2.65) $\displaystyle=\left\\{\begin{array}[]{l l}\rho_{0}&\text{if
}x_{1}=1,\\\ \tau^{\varnothing}&\text{if }x_{1}=0,\end{array}\right.$ (2.68)
$\displaystyle=x_{1}\rho_{0}+(1-x_{1})\tau^{\varnothing}$ (2.69)
where we recall the definitions of $\rho_{0}$ and $\tau^{\varnothing}$ from
(2.9).
Now, recall from (2.28) and Definition 2.1 that
$\widetilde{\sigma}(t;h^{t})=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)\left(\mathcal{T}^{x_{t-1},a_{t-1},x_{t}}\circ\dotsb\circ\mathcal{T}^{x_{1},a_{1},x_{2}}\right)(\widetilde{\sigma}(1;x_{1})),$
(2.70)
where
$\displaystyle\widetilde{\sigma}(1;0)$
$\displaystyle\coloneqq\widetilde{\tau}^{\varnothing},$ (2.71)
$\displaystyle\widetilde{\sigma}(1;1)$
$\displaystyle\coloneqq\widetilde{\rho}_{0}.$ (2.72)
Using the definition of the transition maps, for each time step $j>1$ in which
the action “wait” (i.e., $A(j)=0$) is performed and the link is active (i.e.,
$X(j)=1$), the link stays active at time step $j+1$, and thus by definition
the memory time must be incremented by one, which is consistent with the
definition of the memory time $M(t)$ given in (2.60), and the quantum state of
the link goes from $\rho(M(t))$ to $\rho(M(t)+1)$. If instead the link is
active at time $j$ and the action “request” is performed (i.e., $A(j)=1$),
then the quantum state of the link is discarded and is replaced either by the
state $\rho_{0}$ (if $X(j+1)=1$) with probability $p$ or by the state
$\tau^{\varnothing}$ (if $X(j+1)=0$) with probability $1-p$. In the former
case, the memory time must be reset to zero, consistent with (2.60), and in
the latter case, the memory time is $-1$, also consistent with (2.60).
Furthermore, by definition of the transition maps, each time the action
“request” is performed, we obtain a factor of $p$ (if the request succeeds) or
$1-p$ (if the request fails). If the action “wait” is performed, then we
obtain no additional multiplicative factors. The quantity
$N_{\text{succ}}(t-1)$ is, by definition, equal to the number of requests that
succeeded in $t-1$ time steps. Therefore, overall, we obtain a factor
$p^{N_{\text{succ}}(t-1)}$ at the $(t-1)^{\text{st}}$ time step for the number
of successful requests. The number of failed requests in $t-1$ time steps is
given by
$\displaystyle\sum_{j=1}^{t-1}A(j-1)(1-X(j))$
$\displaystyle=\sum_{j=1}^{t-1}A(j-1)-\sum_{j=1}^{t-1}A(j-1)X(j)$ (2.73)
$\displaystyle=N_{\text{req}}(t-1)-N_{\text{succ}}(t-1),$ (2.74)
so that we obtain an overall factor of
$(1-p)^{N_{\text{req}}(t-1)-N_{\text{succ}}(t-1)}$ at the $(t-1)^{\text{st}}$
time step for the failed requests. Also, the memory time at the
$(t-1)^{\text{st}}$ time step is $M(t-1)(h_{t-1}^{t})$, and the since the
quantum state is either $\rho(M(t-1)(h_{t-1}^{t}))$ or $\tau^{\varnothing}$,
we obtain
$\displaystyle\widetilde{\sigma}(t;h^{t})$
$\displaystyle=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)p^{N_{\text{succ}}(t-1)(h_{t-1}^{t})}(1-p)^{N_{\text{req}}(t-1)(h_{t-1}^{t})-N_{\text{succ}}(t-1)(h_{t-1}^{t})}$
$\displaystyle\qquad\qquad\times\left(x_{t-1}\mathcal{T}^{1,a_{t-1},x_{t}}(\rho(M(t-1)(h_{t-1}^{t})))+(1-x_{t-1})\mathcal{T}^{0,a_{t-1},x_{t}}(\tau^{\varnothing})\right)$
(2.75)
$\displaystyle=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)p^{N_{\text{succ}}(t-1)(h_{t-1}^{t})}(1-p)^{N_{\text{req}}(t-1)(h_{t-1}^{t})-N_{\text{succ}}(t-1)(h_{t-1}^{t})}$
$\displaystyle\qquad\qquad\times
p^{a_{t-1}x_{t}}(1-p)^{a_{t-1}(1-x_{t})}(x_{t}\rho(M(t)(h^{t}))+(1-x_{t})\tau^{\varnothing})$
(2.76)
$\displaystyle=\left(\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)p^{N_{\text{succ}}(t)(h^{t})}(1-p)^{N_{\text{req}}(t)(h^{t})-N_{\text{succ}}(t)(h^{t})}\right)(x_{t}\rho(M(t)(h^{t}))+(1-x_{t})\tau^{\varnothing}).$
(2.77)
Then, since $\Pr[H(t)=h^{t}]=\operatorname{Tr}[\widetilde{\sigma}(t;h^{t})]$,
we have
$\Pr[H(t)=h^{t}]=\left(\prod_{j=1}^{t-1}d_{j}(h_{j}^{t})(a_{j})\right)p^{N_{\text{succ}}(t)(h^{t})}(1-p)^{N_{\text{req}}(t)(h^{t})-N_{\text{succ}}(t)(h^{t})},$
(2.78)
as required. Finally,
$\sigma(t|h^{t})=\frac{\widetilde{\sigma}(t;h^{t})}{\operatorname{Tr}[\widetilde{\sigma}(t;h^{t})]}=x_{t}\rho(M(t)(h^{t}))+(1-x_{t})\tau^{\varnothing},$
(2.79)
which completes the proof. ∎
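As a numerical sanity check on (2.61) and (2.62), the following sketch simulates the elementary link process under an illustrative policy that requests with probability 1/2 at each step (this policy and all parameter values are arbitrary choices for the example) and compares the empirical frequency of a history with the formula.

```python
import random
from collections import Counter

# Monte Carlo check of the history probability formula (2.61)-(2.62) under an
# illustrative policy that requests with probability 1/2 at every time step.

def simulate(p, t, request_prob=0.5, rng=random):
    x = [1 if rng.random() < p else 0]          # X(1): initial request (A(0) = 1)
    a = []
    for _ in range(t - 1):
        a_t = 1 if rng.random() < request_prob else 0
        a.append(a_t)
        if a_t == 1:                            # request: succeed with probability p
            x.append(1 if rng.random() < p else 0)
        else:                                   # wait: the link value is unchanged
            x.append(x[-1])
    h = [x[0]]
    for a_j, x_j in zip(a, x[1:]):
        h += [a_j, x_j]
    return tuple(h)

def formula(h, p, t, request_prob=0.5):
    x, a = list(h[0::2]), [1] + list(h[1::2])   # prepend A(0) = 1
    n_req = sum(a[j - 1] for j in range(1, t + 1))
    n_succ = sum(a[j - 1] * x[j - 1] for j in range(1, t + 1))
    policy_factor = 1.0
    for a_j in a[1:]:                           # d_j(h^t_j)(a_j) for j = 1, ..., t-1
        policy_factor *= request_prob if a_j == 1 else 1 - request_prob
    return policy_factor * p ** n_succ * (1 - p) ** (n_req - n_succ)

p, t, n_samples = 0.7, 4, 200_000
counts = Counter(simulate(p, t) for _ in range(n_samples))
h, c = counts.most_common(1)[0]
print(h, c / n_samples, formula(h, p, t))       # empirical frequency vs. (2.61)
```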
Using Theorem 2.1, we immediately obtain an expression for the expected
quantum state of the link at any time $t\geq 1$.
###### Corollary 2.1 (Average quantum state of an elementary link).
For any $t\geq 1$, the average quantum state of any elementary link is
$\sigma(t)=(1-\Pr[X(t)=1])\tau^{\varnothing}+\sum_{m=0}^{t-1}\Pr[X(t)=1,M(t)=m]\rho(m).$
(2.80)
###### Proof.
Using the result of Theorem 2.1, along with (2.29), the expected quantum state
of the link at time $t\geq 1$ is given by
$\displaystyle\sigma(t)=\operatorname{Tr}_{H_{t}}[\widehat{\sigma}(t)]$
$\displaystyle=\sum_{h^{t}}\widetilde{\sigma}(t;h^{t})$ (2.81)
$\displaystyle=\sum_{h^{t}}\Pr[H(t)=h^{t}]\left(X(t)(h^{t})\rho(M(t)(h^{t}))+(1-X(t)(h^{t}))\tau^{\varnothing}\right)$
(2.82)
$\displaystyle=\sum_{h^{t}:x_{t}=0}\Pr[H(t)=h^{t}]\tau^{\varnothing}+\sum_{h^{t}:x_{t}=1}\Pr[H(t)=h^{t}]\rho(M(t)(h^{t}))$
(2.83)
$\displaystyle=(1-\Pr[X(t)=1])\tau^{\varnothing}+\sum_{m=0}^{t-1}\Pr[X(t)=1,M(t)=m]\rho(m),$
(2.84)
where to obtain the last equality we rearranged the sum over the set
$\\{h^{t}:x_{t}=1\\}$ so that the sum is over the possible values of the
memory time $m$, which are $0,1,\dotsc,t-1$ when the link is active. This
completes the proof. ∎
The expected quantum state of the link at time $t\geq 1$, given that the link
is active at time $t$, is defined to be
$\displaystyle\sigma(t|X(t)=1)$
$\displaystyle\coloneqq\frac{\operatorname{Tr}_{H_{t}}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]}{\operatorname{Tr}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]}$ (2.85)
$\displaystyle=\sum_{m=0}^{t-1}\Pr[M(t)=m|X(t)=1]\rho(m).$ (2.86)
Note that $\operatorname{Tr}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]=\Pr[X(t)=1]$; see Theorem 2.2 below.
Observe that the expressions in (2.80) and (2.86) hold for any policy of the
agent. Given a particular policy, determining the average quantum state means
determining the joint probability distribution of the random variables $X(t)$
and $M(t)$, i.e., determining the quantities $\Pr[X(t)=1,M(t)=m]$ for all
possible values of $m$. The probability distribution of $X(t)$ can then be
obtained via marginalization, i.e., via
$\Pr[X(t)=1]=\sum_{m}\Pr[X(t)=1,M(t)=m]$, where the sum is over all possible
values of the memory random variable $M(t)$ (which can depend on the policy).
###### Remark 2.3.
Throughout this section, we have assumed that there are only two actions that
the agent can perform during the elementary link generation process. In
practice, it might be necessary to add other actions to the decision process,
as done in, e.g., [127]. All that has to be done in this case is to
appropriately define the transition maps in order to accommodate the additional
actions, and the general formulas in (2.27)–(2.30) still hold. We can
similarly incorporate other classical discrete-valued properties of the link
into the link random variable $X(t)$ if needed. $\blacktriangleleft$
### 2.2 Link quantities
In the previous section, we defined two elementary link quantities, the status
$X(t)$ and the memory time $M(t)$. Throughout this work, we are also interested in several other quantities, which we now define.
###### Definition 2.2 (Link quantities).
Given any edge in a graph corresponding to a quantum network, we define the
following quantities for the associated elementary link.
* •
The random variable $X(t)$ for the status of the elementary link at time $t$:
$X(t)=0$ if the link is inactive, and $X(t)=1$ if the link is active.
* •
The random variable $M(t)$ for the amount of time that the quantum state of
the elementary link is held in memory at time $t$. It is defined by the
recursion relation in (2.60). For any $t\geq 1$, the values that $M(t)$ can
take are $-1,0,1,\dotsc,t-1$. An explicit expression for $M(t)$ is the
following:
$\displaystyle M(t)$
$\displaystyle=A(0)(X(1)+X(2)+\dotsb+X(t)-1)\overline{A(1)}\,\overline{A(2)}\dotsb\overline{A(t-1)}$
$\displaystyle\quad+A(1)(X(2)+X(3)+\dotsb+X(t)-1)\overline{A(2)}\,\overline{A(3)}\dotsb\overline{A(t-1)}$
$\displaystyle\quad+A(2)(X(3)+X(4)+\dotsb+X(t)-1)\overline{A(3)}\,\overline{A(4)}\dotsb\overline{A(t-1)}$
$\displaystyle\quad+\dotsb$ $\displaystyle\quad+A(t-1)(X(t)-1)$ (2.87)
$\displaystyle=\sum_{j=1}^{t}A(j-1)\left(\sum_{\ell=j}^{t}X(\ell)-1\right)\prod_{k=j}^{t-1}\overline{A(k)},$
(2.88)
where $A(0)\equiv 1$ and $\overline{A(k)}\coloneqq 1-A(k)$ for all $k\geq 1$.
It can be shown that this definition is equivalent to the recursive definition
given in (2.60).
* •
The random variable
$\widetilde{F}(t;\psi)\coloneqq X(t)f_{M(t)}(\rho_{0};\psi),$ (2.89)
which is the fidelity of the quantum state of the link with respect to the
target pure state $\psi$ when the link is active.
* •
The random variable
$F(t;\psi)\coloneqq\frac{\widetilde{F}(t;\psi)}{\Pr[X(t)=1]}=\frac{X(t)f_{M(t)}(\rho_{0};\psi)}{\Pr[X(t)=1]},$
(2.90)
which is the fidelity of the quantum state of the link with respect to the
target pure state $\psi$ given that the link is active.
* •
$N^{\max}$, which is the number of parallel edges between the nodes, and thus
the maximum number of entangled states that can be shared by the nodes of the
edge per time step (see Figure 1). We refer to each of the $N^{\max}$ parallel
edges as a parallel link. We then let
$N(t)\coloneqq\sum_{j=1}^{N^{\max}}X_{j}(t)$ (2.91)
be the number of active parallel links at time $t$, where $X_{j}(t)$ is the
status of the $j^{\text{th}}$ parallel link of the edge at time $t$.
* •
The success rate up to time $t$ of the link:
$S(t)\coloneqq\frac{\displaystyle\sum_{\ell=1}^{N^{\max}}\sum_{j=1}^{t}A_{\ell}(j-1)X_{\ell}(j)}{\displaystyle\sum_{\ell=1}^{N^{\max}}\sum_{j=1}^{t}A_{\ell}(j-1)},$
(2.92)
which is simply the ratio of the number of successful transmissions when a
request is made to the total number of requests made within time $t$. We let
$A(0)\equiv 1$.
* •
The link activity rate up to time $t$:
$R(t)\coloneqq\frac{1}{t}\sum_{j=1}^{t}N(j),$ (2.93)
which is the average number of active links along the edge per unit time up to
time $t$.
When we need to refer to a particular edge in the graph, we indicate the edge
on the associated elementary link quantities with a subscript, e.g.,
$X_{e}(t)$ for the status of the elementary link associated with the edge $e$.
When considering any distinct pair of edges in the graph, the corresponding
random variables defined above are independent by definition. For example, for
two edges $e\neq e^{\prime}$, the random variables $X_{e}(t)$ and
$X_{e^{\prime}}(t)$ are independent for all $t\geq 1$, and similarly for the
other random variables. $\blacktriangleleft$
###### Remark 2.4.
In a graph-theoretic setting, the quantity $N^{\max}$ can be interpreted as
the capacity of an edge. The quantity $N(t)$ is then called the flow along the
edge; see Section 3.3 for details. $\blacktriangleleft$
The success rate $S(t)$ and the link activity rate $R(t)$ are two rate
measures that we have defined for an elementary link. The first measure is the
number of successful requests per channel use up to time $t$ (indeed, notice
that the quantity $\sum_{\ell=1}^{N^{\max}}\sum_{j=1}^{t}A_{\ell}(j-1)$ in the
denominator of $S(t)$ is the number of uses of the transmission channel in $t$
time steps). The second rate measure is the average number of parallel links
obtained per unit time up to time $t$. When $N^{\max}=1$, $R(t)$ can be
thought of as the fraction of time that the link is active in $t$ time steps.
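A minimal sketch of how $S(t)$ and $R(t)$ can be estimated from recorded status and action sequences is the following; the data in the example is arbitrary (but consistent with the process), and the array layout is an assumption of the sketch.

```python
import numpy as np

# Illustrative computation of the success rate S(t) in (2.92) and the link
# activity rate R(t) in (2.93) from recorded data for the N_max parallel links
# of an edge, with the convention A_l(0) = 1.

def link_rates(X, A):
    """X[l, j-1] = X_l(j) and A[l, j-1] = A_l(j); both arrays have shape
    (N_max, t). Returns (S(t), R(t))."""
    N_max, t = X.shape
    A_prev = np.hstack([np.ones((N_max, 1)), A[:, :-1]])   # A_l(j-1), with A_l(0) = 1
    S = (A_prev * X).sum() / A_prev.sum()                   # successes per request
    R = X.sum(axis=0).mean()                                # average active links per step
    return S, R

# One parallel link over t = 6 steps: fail, succeed, hold twice, fail, succeed.
X = np.array([[0, 1, 1, 1, 0, 1]])
A = np.array([[1, 0, 0, 1, 1, 0]])
print(link_rates(X, A))   # (0.5, 0.666...)
```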
###### Theorem 2.2 (Link success probability and fidelity).
Given any elementary link in a quantum network, the probability distribution
of the link value $X(t)$ (equivalently, the expectation value
$\mathbb{E}[X(t)]$) can be written as
$\Pr[X(t)=1]=\operatorname{Tr}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]=\mathbb{E}[X(t)],$ (2.94)
where we recall the definition of the classical-quantum state
$\widehat{\sigma}(t)$ in (2.27). We also have that
$\displaystyle\mathbb{E}[\widetilde{F}(t;\psi)]$
$\displaystyle=\sum_{m=0}^{t-1}f_{m}(\rho_{0};\psi)\Pr[X(t)=1,M(t)=m]$ (2.95)
$\displaystyle=\operatorname{Tr}[(|1\rangle\langle
1|_{X_{t}}\otimes\psi)\widehat{\sigma}(t)],$ (2.96)
and
$\displaystyle\mathbb{E}[F(t;\psi)]$
$\displaystyle=\frac{\mathbb{E}[\widetilde{F}(t;\psi)]}{\Pr[X(t)=1]}$ (2.97)
$\displaystyle=\sum_{m=0}^{t-1}f_{m}(\rho_{0};\psi)\Pr[M(t)=m|X(t)=1]$ (2.98)
$\displaystyle=\frac{\operatorname{Tr}[(|1\rangle\langle
1|_{X_{t}}\otimes\psi)\widehat{\sigma}(t)]}{\operatorname{Tr}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]}.$ (2.99)
###### Proof.
To see the first equality in (2.94), observe that
$\operatorname{Tr}[|1\rangle\langle
1|_{X_{t}}\widehat{\sigma}(t)]=\sum_{h^{t}:X(t)(h^{t})=1}\Pr[H(t)=h^{t}].$
(2.100)
The expression on the right-hand side of this equation is equal to
$\Pr[X(t)=1]$ by definition of the random variable $X(t)$. The second equality
in (2.94) holds because $X(t)$ is a binary/Bernoulli random variable.
The equality in (2.95) follows by definition of $\widetilde{F}(t;\psi)$ and by
definition of expectation. The equality in (2.96) follows by considering that
$\displaystyle\operatorname{Tr}[(|1\rangle\langle
1|_{X_{t}}\otimes\psi)\widehat{\sigma}(t)]$
$\displaystyle=\sum_{h^{t}:x_{t}=1}\Pr[H(t)=h^{t}]\langle\psi|\rho(M(t)(h^{t}))|\psi\rangle$
(2.101)
$\displaystyle=\sum_{h^{t}:x_{t}=1}\Pr[H(t)=h^{t}]f_{M(t)(h^{t})}(\rho_{0};\psi),$
(2.102)
and by considering that the sum over $\\{h^{t}:x_{t}=1\\}$ can be rearranged
into a sum over the possible values of the memory time $M(t)$ when the link is
active, which are $0,1,\dotsc,t-1$. The expressions in (2.98) and (2.99) are
then immediate. ∎
In the following section, we consider a particular policy, the so-called
“memory cutoff” policy, and we determine analytic expressions for
$\Pr[X(t)=1]$, $\Pr[X(t)=1,M(t)=m]$, and for the expected values of
$\widetilde{F}(t)$ and $F(t)$, under this policy.
## 3 The memory cutoff policy for elementary link generation
A natural policy to consider, and one that has been considered extensively
previously [31, 32, 33, 34, 99, 100, 101, 102, 103, 104, 105], is the
following. A link is requested at every time step until the link is
established, and once the link is established, it is held in quantum memories
for some pre-specified amount $t^{\star}$ of time (usually called the “memory
cutoff” and not necessarily equal to the memory coherence time) before the
link is discarded and requested again. The cutoff $t^{\star}$ can be any non-
negative integer. There are two extreme cases of this policy: when
$t^{\star}=0$, a request is made at every time step regardless of whether the
previous request succeeded; if $t^{\star}=\infty$, then a link request is made
at every time step until the link request succeeds, and once the link request
succeeds the quantum systems remain in memory indefinitely—no further request
is ever made. In this section, we provide a complete analysis of this policy,
including analytic formulas for the link value probability and the expected
link fidelity as a function of time, along with the infinite-horizon
($t\to\infty$) behavior of the link.
For the memory cutoff policy with cutoff $t^{\star}$, we denote the memory
time random variable by $M_{t^{\star}}(t)$. It turns out to be more convenient
to use the following simpler formula for the memory time $M_{t^{\star}}(t)$
than the general formula given in (2.88) when $t^{\star}<\infty$:
$M_{t^{\star}}(t)=\left(\sum_{j=1}^{t}X(j)-1\right)\text{mod}(t^{\star}+1),\quad
t^{\star}<\infty.$ (3.1)
With this formula, the memory time is always in $\\{0,1,\dotsc,t^{\star}\\}$
when $t^{\star}<\infty$. Also note that, with this formula, we get a memory
value of $-1\text{mod}(t^{\star}+1)=t^{\star}$ even when the memory is not
loaded. The advantage of this is that, if $M_{t^{\star}}(t)<t^{\star}$, then
$X(t)=1$. When $t^{\star}=\infty$, we have
$M_{\infty}(t)=\sum_{j=1}^{t}X(j)-1,$ (3.2)
and so the values that the memory time can take are $-1,0,1,\dotsc,t-1$.
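A short sketch of the formulas (3.1) and (3.2) is given below; the link value sequences used in the example are assumed to have non-zero probability under the cutoff policy (cf. Table 1 below for $t^{\star}=3$).

```python
# Illustrative sketch of the memory time under the cutoff policy, computed
# from the link value sequence alone via (3.1) (finite cutoff) and (3.2)
# (infinite cutoff). The input is assumed to be a link value sequence with
# non-zero probability under the policy.

def memory_time_cutoff(x, t_star):
    s = sum(x) - 1
    if t_star == float("inf"):
        return s                   # values -1, 0, ..., t-1
    return s % (t_star + 1)        # values 0, 1, ..., t_star

# Examples matching Table 1 below (t_star = 3): the all-zeros sequence gives
# -1 mod 4 = 3, and a sequence with two trailing ones gives 1.
print(memory_time_cutoff([0] * 10, 3))                              # 3
print(memory_time_cutoff([0, 0, 0, 0, 0, 0, 0, 0, 1, 1], 3))        # 1
print(memory_time_cutoff([0, 0, 0, 1, 1], float("inf")))            # 1
```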
Mathematically, the memory cutoff policy is described as follows for all
$t^{\star}\geq 0$:
$d_{t}(h^{t})(a_{t})=\Pr[A(t)=a_{t}|H(t)=h^{t}]=\delta_{a_{t},M_{t^{\star}}^{\prime}(t)(h^{t})},$
(3.3)
where for all $t^{\star}<\infty$,
$M_{t^{\star}}^{\prime}(t)(h^{t})\coloneqq\delta_{M_{t^{\star}}(t)(h^{t}),t^{\star}}=\left\\{\begin{array}[]{l
l}0&\text{if }M_{t^{\star}}(t)(h^{t})<t^{\star},\\\ 1&\text{if
}M_{t^{\star}}(t)(h^{t})=t^{\star}\end{array}\right.$ (3.4)
is the function that tells us whether or not the memory time is equal to
$t^{\star}$. For $t^{\star}=\infty$, we have
$M_{\infty}^{\prime}(t)(h^{t})\coloneqq\left\\{\begin{array}[]{l l}1&\text{if
}M_{\infty}(t)(h^{t})=-1,\\\ 0&\text{otherwise}.\end{array}\right.$ (3.5)
From this, we see that the memory cutoff policy is deterministic and that the
action at each time step is determined by the value of
$M_{t^{\star}}^{\prime}(t)$ for all $t^{\star}\geq 0$. In particular,
$A(t)=0\quad\Longleftrightarrow\quad M_{t^{\star}}^{\prime}(t)=0,\qquad
A(t)=1\quad\Longleftrightarrow\quad M_{t^{\star}}^{\prime}(t)=1.$ (3.6)
In other words,
$\Pr[X(t+1)=x_{t+1}|H(t)=h^{t},A(t)=a_{t}]=\Pr[X(t+1)=x_{t+1}|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=a_{t}],$
(3.7)
for all histories $h^{t}$, all $a_{t},x_{t+1}\in\\{0,1\\}$, and all
$t^{\star}\geq 0$. In particular, we can use (2.57) to conclude that
$\Pr[X(t+1)=x_{t+1}|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=0]=\left\\{\begin{array}[]{l
l}0&\text{if }x_{t+1}=0,\\\ 1&\text{if }x_{t+1}=1.\end{array}\right.$ (3.8)
The transition probabilities given in (2.55)–(2.57) therefore reduce to the
following for the memory cutoff policy:
$\displaystyle\Pr[X(t+1)=0|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=1]$
$\displaystyle=1-p,$ (3.9)
$\displaystyle\Pr[X(t+1)=1|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=1]$
$\displaystyle=p,$ (3.10)
$\displaystyle\Pr[X(t+1)=0|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=0]$
$\displaystyle=0,$ (3.11)
$\displaystyle\Pr[X(t+1)=1|H(t)=h^{t},M_{t^{\star}}^{\prime}(t)=0]$
$\displaystyle=1,$ (3.12)
for all histories $h^{t}$ and all $t^{\star}\geq 0$. The following conditional
probabilities then hold for any $t^{\star}<\infty$:
$\displaystyle\Pr[X(t+1)=1,M_{t^{\star}}(t+1)=0|X(t)=0,M_{t^{\star}}(t)=m]$
$\displaystyle=p,\quad 0\leq m\leq t^{\star},$ (3.13)
$\displaystyle\Pr[X(t+1)=1,M_{t^{\star}}(t+1)=0|X(t)=1,M_{t^{\star}}(t)=t^{\star}]$
$\displaystyle=p,$ (3.14)
$\displaystyle\Pr[X(t+1)=0,M_{t^{\star}}(t+1)=t^{\star}|X(t)=0,M_{t^{\star}}(t)=m]$
$\displaystyle=1-p,\quad 0\leq m\leq t^{\star},$ (3.15)
$\displaystyle\Pr[X(t+1)=0,M_{t^{\star}}(t+1)=t^{\star}|X(t)=1,M_{t^{\star}}(t)=t^{\star}]$
$\displaystyle=1-p,$ (3.16)
$\displaystyle\Pr[X(t+1)=1,M_{t^{\star}}(t+1)=m+1|X(t)=1,M_{t^{\star}}(t)=m]$
$\displaystyle=1,\quad 0\leq m\leq t^{\star}-1,$ (3.17)
$\displaystyle\Pr[X(t+1)=0,M_{t^{\star}}(t+1)=t^{\star}|X(t)=0,M_{t^{\star}}(t)=t^{\star}]$
$\displaystyle=1,$ (3.18)
with all other possible conditional probabilities equal to zero. Since these
transition probabilities are time-independent, and since the pair
$(X(t+1),M_{t^{\star}}(t+1))$ depends only on $(X(t),M_{t^{\star}}(t))$, we
have that $((X(t),M_{t^{\star}}(t)):t\geq 1)$ is a stationary/time-homogeneous
Markov process. As such, the conditional probabilities can be organized into
the transition matrix $T(t^{\star})$, $t^{\star}<\infty$, defined as follows:
$\left(T(t^{\star})\right)_{\begin{subarray}{c}x,m\\\
x^{\prime},m^{\prime}\end{subarray}}\coloneqq\Pr[X(t+1)=x,M_{t^{\star}}(t+1)=m|X(t)=x^{\prime},M_{t^{\star}}(t)=m^{\prime}],\\\
x,x^{\prime}\in\\{0,1\\},~{}m,m^{\prime}\in\\{0,1,\dotsc,t^{\star}\\}.$ (3.19)
For $t^{\star}=\infty$, observe that the action at time $t\geq 1$ depends only on
the current value of the link, not on the entire history of the link. In other
words, the definition of $M_{\infty}^{\prime}(t)$ in (3.5) is equivalent to
$M_{\infty}^{\prime}(t)=1-X(t).$ (3.20)
Indeed, if $X(t)=0$, then by definition of the $t^{\star}=\infty$ cutoff
policy a request is made, so that $M_{\infty}^{\prime}(t)=1$, as required. If
$X(t)=1$, then the link is kept, meaning that $M_{\infty}^{\prime}(t)=0$. The
transition probabilities in (3.9)–(3.12) can therefore be simplified to the
following when $t^{\star}=\infty$:
$\displaystyle\Pr[X(t+1)=0|X(t)=0]$ $\displaystyle=1-p,$ (3.21)
$\displaystyle\Pr[X(t+1)=0|X(t)=1]$ $\displaystyle=0,$ (3.22)
$\displaystyle\Pr[X(t+1)=1|X(t)=0]$ $\displaystyle=p,$ (3.23)
$\displaystyle\Pr[X(t+1)=1|X(t)=1]$ $\displaystyle=1.$ (3.24)
These transition probabilities are time-independent and Markovian, so they can
be organized into the transition matrix $T(\infty)$ defined as follows:
$\left(T(\infty)\right)_{\begin{subarray}{c}x\\\
x^{\prime}\end{subarray}}\coloneqq\Pr[X(t+1)=x|X(t)=x^{\prime}],\quad
x,x^{\prime}\in\\{0,1\\}.$ (3.25)
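Since the pair $(X(t),M_{t^{\star}}(t))$ evolves as a time-homogeneous Markov chain, the distribution at any time can be obtained by iterating the transition matrix. The following Python sketch does this for the states that are actually reachable under the cutoff policy, namely $(1,0),\dotsc,(1,t^{\star})$ and $(0,t^{\star})$ (recall that, with the formula (3.1), an inactive link always has memory value $t^{\star}$); the chosen values of $p$ and $t^{\star}$ are arbitrary.

```python
import numpy as np

# Illustrative sketch: the Markov chain of (X(t), M_{t*}(t)) under the cutoff
# policy, restricted to the reachable states (1,0), ..., (1,t*), (0,t*).
# Columns index the current state and rows the next state, as in (3.19).

def cutoff_chain(p, t_star):
    n = t_star + 2                        # indices 0..t* -> (1, m); n-1 -> (0, t*)
    T = np.zeros((n, n))
    for m in range(t_star):               # wait while active: (1, m) -> (1, m+1)
        T[m + 1, m] = 1.0
    for s in (t_star, t_star + 1):        # request from (1, t*) or (0, t*)
        T[0, s] = p                       # success: -> (1, 0)
        T[t_star + 1, s] = 1.0 - p        # failure: -> (0, t*)
    return T

def link_probability(p, t_star, t):
    """Pr[X(t) = 1] under the t_star cutoff policy."""
    T = cutoff_chain(p, t_star)
    v = np.zeros(t_star + 2)
    v[0], v[-1] = p, 1.0 - p              # distribution at t = 1 (A(0) = 1)
    for _ in range(t - 1):
        v = T @ v
    return v[:-1].sum()                   # probability mass on the active states

p, t_star = 0.6, 3
print([round(link_probability(p, t_star, t), 4) for t in (1, 2, 5, 10, 50)])
print((t_star + 1) * p / (1 + t_star * p))   # the t -> infinity value, cf. Theorem 3.1
```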
To begin our analysis of the memory cutoff policy, let us consider what the
histories $h^{t}$ look like by considering a particular example. Consider a
link for which $t^{\star}=3$, and let us consider the link values up to time
$t=10$. Given that each link request succeeds with probability $p$ and fails
with probability $1-p$, in Table 1 we write down the probability for each
sequence of link values according to the formula in (2.61). Note that we only
include those histories that have non-zero probability (indeed, some sequences
$h^{t}=(x_{1},a_{1},\dotsc,a_{t-1},x_{t})\in\\{0,1\\}^{2t-1}$ will have zero
probability under the memory cutoff policy). We also include in the table the
memory times $M_{t^{\star}}(t)$, which are calculated using the formula in
(3.1). Since the memory cutoff policy is deterministic, it suffices to keep
track only of the link values and not of the action values, since the action
values are given deterministically by the link values. For the link value
sequences, we define two quantities that are helpful for obtaining analytic
formulas for the link quantities defined in Section 2.2. The first quantity is
$Y_{1}(t)$, which we define to be the number of full blocks of ones (having
length $t^{\star}+1$) in link value sequences up to time $t-1$. The values
that $Y_{1}(t)$ can take are
$0,1,\dotsc,\lfloor\frac{t-1}{t^{\star}+1}\rfloor$ if $t^{\star}<\infty$, and
0 if $t^{\star}=\infty$. We also define the quantity $Y_{2}(t)$ to be the
number of trailing ones in link value sequences up to time $t$. The values
that $Y_{2}(t)$ can take are $0,1,\dotsc,t^{\star}+1$ if $t^{\star}<\infty$,
and $0,1,\dotsc,t$ if $t^{\star}=\infty$.
$x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $x_{7}$ | $x_{8}$ | $x_{9}$ | $x_{10}$ | $Y_{1}(t)(h^{t})$ | $Y_{2}(t)(h^{t})$ | $\Pr[H(t)=h^{t}]$ | $M_{t^{\star}}(t)(h^{t})$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | $(1-p)^{10}$ | 3
1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | $p(1-p)^{6}$ | 3
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 2 | 0 | $p^{2}(1-p)^{2}$ | 3
1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 2 | 0 | $p^{2}(1-p)^{2}$ | 3
0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 2 | 0 | $p^{2}(1-p)^{2}$ | 3
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | $p(1-p)^{9}$ | 0
0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 2 | $p(1-p)^{8}$ | 1
0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 3 | $p(1-p)^{7}$ | 2
0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 4 | $p(1-p)^{6}$ | 3
1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | $p^{2}(1-p)^{5}$ | 0
1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 2 | $p^{2}(1-p)^{4}$ | 1
0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 2 | $p^{2}(1-p)^{4}$ | 1
0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 2 | $p^{2}(1-p)^{4}$ | 1
0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 2 | $p^{2}(1-p)^{4}$ | 1
0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | $p^{2}(1-p)^{4}$ | 1
1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 3 | $p^{2}(1-p)^{3}$ | 2
0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 3 | $p^{2}(1-p)^{3}$ | 2
0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 3 | $p^{2}(1-p)^{3}$ | 2
0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 3 | $p^{2}(1-p)^{3}$ | 2
1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 4 | $p^{2}(1-p)^{2}$ | 3
0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 4 | $p^{2}(1-p)^{2}$ | 3
0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 4 | $p^{2}(1-p)^{2}$ | 3
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 2 | 1 | $p^{3}(1-p)$ | 0
1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | $p^{3}(1-p)$ | 0
0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1 | $p^{3}(1-p)$ | 0
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 2 | $p^{3}$ | 1
Table 1: Link value sequences for a link with $t^{\star}=3$ up to time $t=10$.
The quantity $Y_{1}(t)$ is the number of full blocks of ones in link value
sequences up to time $t-1$, and $Y_{2}(t)$ is the number of trailing ones in
link value sequences up to time $t$. $M_{t^{\star}}(t)$ is the memory time at
time $t$, given by the formula in (3.1).
Using the quantities $Y_{1}(t)$ and $Y_{2}(t)$, along with the general formula
in (2.61), we obtain the following formula for the probability of histories
with non-zero probability.
###### Proposition 3.1.
For any $t\geq 1$, any $t^{\star}<\infty$, any $p\in[0,1]$, and for any
history $h^{t}=(x_{1},a_{1},x_{2},\allowbreak a_{2},\dotsc,a_{t-1},x_{t})$
with non-zero probability,
$\Pr[H(t)=h^{t}]=p^{Y_{1}(t)(h^{t})}(1-p)^{t-(t^{\star}+1)Y_{1}(t)(h^{t})}\delta_{Y_{2}(t)(h^{t}),0}\\\
+(1-\delta_{Y_{2}(t)(h^{t}),0})p^{Y_{1}(t)(h^{t})+1}(1-p)^{t-Y_{2}(t)(h^{t})-(t^{\star}+1)Y_{1}(t)(h^{t})},$
(3.26)
where $Y_{1}(t)(h^{t})$ is defined to be the number of full blocks of ones of
length $t^{\star}+1$ up to time $t-1$ in the sequence
$(x_{1},x_{2},\dotsc,x_{t})$ of link values, and $Y_{2}(t)(h^{t})$ is defined
to be the number of trailing ones in the sequence
$(x_{1},x_{2},\dotsc,x_{t})$. For $t^{\star}=\infty$,
$\Pr[H(t)=h^{t}]=(1-p)^{t}\delta_{Y_{2}(t)(h^{t}),0}+(1-\delta_{Y_{2}(t)(h^{t}),0})p(1-p)^{t-Y_{2}(t)(h^{t})}.$
(3.27)
###### Proof.
The result in (3.26) follows immediately from the formula in (2.61) by observing that $N_{\text{succ}}(t)=Y_{1}(t)+1-\delta_{Y_{2}(t),0}$ and that the number of failed requests is $N_{\text{req}}(t)-N_{\text{succ}}(t)=t-(t^{\star}+1)Y_{1}(t)-Y_{2}(t)$, which is the number of zeros in the link value sequence. For $t^{\star}=\infty$,
we only ever have trailing ones in the link value sequences, so that
$Y_{1}(t)(h^{t})=0$ for all $t\geq 1$ and all histories $h^{t}$. The result in
(3.27) then follows. ∎
Next, let us consider the number of link value sequences with non-zero
probability, which we need in order to calculate the link quantities defined
in Section 2.2. Using Table 1 as a guide, we obtain the following.
###### Lemma 3.1.
For any $t\geq 1$ and any $t^{\star}\geq 0$, let $\Omega(t;t^{\star})$ denote
the set of link value sequences for the $t^{\star}$ memory cutoff policy that
have non-zero probability. Then, for all $t^{\star}<\infty$,
$\left|\Omega(t;t^{\star})\right|=\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\sum_{k=0}^{t^{\star}+1}\left(\binom{t-1-xt^{\star}}{x}\delta_{k,0}+(1-\delta_{k,0})\binom{t-k-
xt^{\star}}{x}\boldsymbol{1}_{t-k-x(t^{\star}+1)\geq 0}\right).$ (3.28)
For $t^{\star}=\infty$, $\left|\Omega(t;\infty)\right|=1+t$.
###### Proof.
We start by counting the number of link value sequences when the number of
trailing ones is equal to zero, i.e., when $k\equiv Y_{2}(t)(h^{t})=0$. If we
also let the number $x\equiv Y_{1}(t)(h^{t})$ of full blocks of ones in time
$t-1$ be equal to one, then there are $t^{\star}+1$ ones and $t-t^{\star}-2$
zeros up to time $t-1$. The total number of link value sequences is then equal
to the number of ways that the single block of ones can be moved around in the
link value sequence up to time $t-1$. This quantity is equivalent to the
number of permutations of $t-1-t^{\star}$ objects with $t-t^{\star}-2$ of them
being identical (these are the zeros), which is given by
$\frac{(t-1-t^{\star})!}{(t-2-t^{\star})!(t-1-t^{\star}-t+t^{\star}+2)!}=\frac{(t-1-t^{\star})!}{(t-t^{\star}-2)!(1)!}=\binom{t-1-t^{\star}}{1}.$
(3.29)
We thus have the $x=1$ and $k=0$ term in the sum in (3.28) (the $x=0$ and $k=0$ term simply counts the single all-zeros sequence). If we stick to $k=0$ but now consider an arbitrary number of full blocks of ones in time $t-1$ (i.e., let $x\equiv Y_{1}(t)(h^{t})\geq 1$), then the number of link value sequences
is given by a similar argument as before: it is equal to the number of ways of
permuting $t-1-xt^{\star}$ objects, with $x$ of them being identical (the
blocks of ones) and the remaining $t-1-x(t^{\star}+1)$ objects also identical
(the number of zeros), i.e., $\binom{t-1-xt^{\star}}{x}$. The total number of
link value sequences with zero trailing ones is therefore
$\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\binom{t-1-xt^{\star}}{x}.$
(3.30)
Let us now consider the case $k\equiv Y_{2}(t)(h^{t})>0$. Then, the number of
time slots in which full blocks of ones can be shuffled around is $t-k$. If
there are $x$ blocks of ones in time $t-k$, then by the same arguments as
before, the number of such link value sequences is given by the number of ways
of permuting $t-k-xt^{\star}$ objects, with $x$ of them being identical (the
full blocks of ones) and the remaining $t-k-x(t^{\star}+1)$ of them also
identical (these are the zeros up to time $t-k$). In other words, the number
of link value sequences with $k>0$ and $x\geq 0$ is
$\binom{t-k-xt^{\star}}{x}\boldsymbol{1}_{t-k-x(t^{\star}+1)\geq 0}.$ (3.31)
We must put the indicator function $\boldsymbol{1}_{t-k-x(t^{\star}+1)\geq 0}$
in order to ensure that the binomial coefficient makes sense. This also means
that, depending on the time $t$, not all values of $k$ between 0 and
$t^{\star}+1$ can be considered in the total number of link value sequences
(simply because it might not be possible to fit all possible values of
trailing ones and full blocks of ones within that amount of time). By
combining (3.30) and (3.31), we obtain the desired result.
In the case $t^{\star}=\infty$, because there are never any full blocks of
ones and only trailing ones, we have $t$ link value sequences, each containing
$k$ trailing ones, where $1\leq k\leq t$. We also have a link value sequence
consisting of all zeros, giving a total of $t+1$ link value sequences. ∎
###### Remark 3.1.
Note that when $t^{\star}=0$, we get
$\displaystyle\left|\Omega(t;0)\right|$
$\displaystyle=\sum_{x=0}^{t-1}\sum_{k=0}^{1}\left(\binom{t-1}{x}\delta_{k,0}+(1-\delta_{k,0})\binom{t-k}{x}\boldsymbol{1}_{t-k-x\geq
0}\right)$ (3.32)
$\displaystyle=\sum_{x=0}^{t-1}\binom{t-1}{x}+\sum_{x=0}^{t-1}\binom{t-1}{x}\underbrace{\boldsymbol{1}_{t-1-x\geq
0}}_{1~{}\forall x}$ (3.33) $\displaystyle=2^{t-1}+2^{t-1}$ (3.34)
$\displaystyle=2^{t}.$ (3.35)
In other words, when $t^{\star}=0$, all $t$-bit strings are valid link value
sequences.
For $t\leq t^{\star}+1$, no full blocks of ones in time $t-1$ are possible, so
we get
$\displaystyle\left|\Omega(t;t^{\star})\right|$
$\displaystyle=\sum_{k=0}^{t^{\star}+1}\left(\binom{t-1}{0}\delta_{k,0}+(1-\delta_{k,0})\binom{t-k}{0}\boldsymbol{1}_{t-k\geq
0}\right)$ (3.36) $\displaystyle=\binom{t-1}{0}+\sum_{k=1}^{t}\binom{t-k}{0}$
(3.37) $\displaystyle=1+t.$ (3.38)
This coincides with the result for $t^{\star}=\infty$, because when
$t^{\star}=\infty$ the condition $t\leq t^{\star}+1$ is satisfied for all
$t\geq 1$. $\blacktriangleleft$
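The counting formula (3.28) can also be checked by brute force: the sketch below generates all link value sequences with non-zero probability under the cutoff policy (using the fact that a request is made exactly when the memory value in (3.1) equals $t^{\star}$) and compares the count with (3.28); the chosen $t$ and $t^{\star}$ are arbitrary.

```python
from math import comb, floor

# Brute-force check of the counting formula (3.28) for |Omega(t; t*)|.

def valid_sequences(t, t_star):
    seqs = [(0,), (1,)]                          # X(1) after the initial request
    for _ in range(t - 1):
        new = []
        for s in seqs:
            m = (sum(s) - 1) % (t_star + 1)      # memory time via (3.1)
            if m < t_star:                       # link is active and kept
                new.append(s + (1,))
            else:                                # a request is made
                new.extend([s + (0,), s + (1,)])
        seqs = new
    return seqs

def omega_formula(t, t_star):
    total = 0
    for x in range(floor((t - 1) / (t_star + 1)) + 1):
        for k in range(t_star + 2):
            if k == 0:
                total += comb(t - 1 - x * t_star, x)
            elif t - k - x * (t_star + 1) >= 0:
                total += comb(t - k - x * t_star, x)
    return total

t, t_star = 10, 3
print(len(valid_sequences(t, t_star)), omega_formula(t, t_star))   # both 36, the number of rows of Table 1
```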
### 3.1 Calculation of link quantities
We now provide analytic expressions for the link quantities defined in Section
2.2 in both the finite-horizon and infinite-horizon cases. All of the results
here apply to any individual elementary link along an edge of the graph
associated with the given quantum network, including individual parallel links
in the case that an edge has $N^{\max}>1$ parallel edges, since all of the
parallel elementary links corresponding to the parallel edges are mutually
independent. We discuss multiple parallel links in more detail in Section 3.3.
We begin by considering the expected fidelity of the link and the expected
quantum state of the link. Recall from Corollary 2.1 and Theorem 2.2 that the
main ingredient for the calculation of the expected fidelity and the expected
quantum state is the joint probability distribution of the random variables
$X(t)$ and $M(t)$.
###### Proposition 3.2.
For any $t\geq 1$, $t^{\star}\geq 0$, and $p\in[0,1]$,
$\Pr[M_{t^{\star}}(t)=m,X(t)=1]=p(1-p)^{t-(m+1)},\quad t\leq
t^{\star}+1,~{}0\leq m\leq t-1,$ (3.39)
and
$\Pr[M_{t^{\star}}(t)=m,X(t)=1]=\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\binom{t-(m+1)-xt^{\star}}{x}\boldsymbol{1}_{t-(m+1)-x(t^{\star}+1)\geq
0}p^{x+1}(1-p)^{t-(m+1)-x(t^{\star}+1)},\\\ t>t^{\star}+1,~{}0\leq m\leq
t^{\star}.$ (3.40)
###### Proof.
For $t\leq t^{\star}+1$, because no full blocks of ones up to time $t-1$ are
possible, the possible values for the memory time are $0,1,\dotsc,t-1$.
Furthermore, for each value of $m\in\\{0,1,\dotsc,t-1\\}$, there is only one
link value sequence for which $M_{t^{\star}}(t)=m$, and this sequence has
$Y_{2}(t)=m+1$ trailing ones and thus probability $p(1-p)^{t-1-m}$ by
Proposition 3.1.
For $t>t^{\star}+1$, we proceed similarly by considering the number $Y_{1}(t)$
of full blocks of ones in time $t-1$ and the number $Y_{2}(t)$ of trailing
ones in link value sequences $(x_{1},x_{2},\dotsc,x_{t})$ such that $x_{t}=1$.
Since we must have $x_{t}=1$, we require $Y_{2}(t)\geq 1$. Now, in order to
have a memory time of $M_{t^{\star}}(t)=m$, we can have link value sequences
consisting of any number $x=Y_{1}(t)$ of full blocks of ones ranging from 0 to
$\lfloor\frac{t-1}{t^{\star}+1}\rfloor$ as long as $Y_{2}(t)=m+1$. (Note that
at the end of each full block of ones the memory time is equal to
$t^{\star}$.) The number of such link value sequences is
$\binom{t-(m+1)-xt^{\star}}{x}\boldsymbol{1}_{t-(m+1)-x(t^{\star}+1)\geq 0},$
(3.41)
as given by (3.31), and the probability of each such link value sequence is
$p^{x+1}(1-p)^{t-(m+1)-x(t^{\star}+1)}$. By summing over all $0\leq
x\leq\lfloor\frac{t-1}{t^{\star}+1}\rfloor$, we obtain the desired result. ∎
As an immediate corollary of Proposition 3.2, we obtain the probability
distribution of the link value random variable $X(t)$.
###### Corollary 3.1.
For any $t\geq 1$, any $t^{\star}\geq 0$, and any $p\in[0,1]$,
$\Pr[X(t)=1]=\left\\{\begin{array}[]{l l}1-(1-p)^{t},&t\leq
t^{\star}+1,\\\\[14.22636pt]
\displaystyle\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\sum_{k=1}^{t^{\star}+1}\binom{t-k-
xt^{\star}}{x}\boldsymbol{1}_{t-k-x(t^{\star}+1)\geq
0}p^{x+1}(1-p)^{t-k-(t^{\star}+1)x},&t>t^{\star}+1\end{array}\right.$ (3.42)
###### Proof.
This follows immediately from the fact that
$\Pr[X(t)=1]=\sum_{m=0}^{t-1}\Pr[X(t)=1,M_{t^{\star}}(t)=m]$ for $t\leq
t^{\star}+1$ and that
$\Pr[X(t)=1]=\sum_{m=0}^{t^{\star}}\Pr[X(t)=1,M_{t^{\star}}(t)=m]$ for
$t>t^{\star}+1$. ∎
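For a numerical sanity check, the expressions (3.39), (3.40), and (3.42) are straightforward to evaluate; a short sketch (with arbitrarily chosen values of $p$, $t^{\star}$, and $t$) is the following.

```python
from math import comb, floor

# Illustrative evaluation of Pr[M_{t*}(t) = m, X(t) = 1] via (3.39)-(3.40);
# summing over m gives Pr[X(t) = 1] as in Corollary 3.1.

def joint_prob(p, t_star, t, m):
    if t <= t_star + 1:
        return p * (1 - p) ** (t - (m + 1)) if 0 <= m <= t - 1 else 0.0
    if not 0 <= m <= t_star:
        return 0.0
    total = 0.0
    for x in range(floor((t - 1) / (t_star + 1)) + 1):
        if t - (m + 1) - x * (t_star + 1) >= 0:
            total += (comb(t - (m + 1) - x * t_star, x) * p ** (x + 1)
                      * (1 - p) ** (t - (m + 1) - x * (t_star + 1)))
    return total

p, t_star, t = 0.6, 3, 10
pmf = [joint_prob(p, t_star, t, m) for m in range(t_star + 1)]
print(pmf, sum(pmf))    # the sum is Pr[X(10) = 1], cf. (3.42) and Figure 5
```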
Figure 5: (Left) The expected link value, given by (3.42), as a function of
the link success probability $p$ for various values of $t$ and $t^{\star}$.
(Right) The expected link value, given by (3.42), as a function of $t$ for
fixed values of $p$ and $t^{\star}$.
See Figure 5 for plots of $\mathbb{E}[X(t)]$ as a function of the time steps
$t$ and as a function of the elementary link generation probability $p$.
Let us now recall the following quantities:
$\widetilde{F}(t;\psi)=X(t)f_{M(t)}(\rho_{0};\psi),\quad
F(t;\psi)=\frac{\widetilde{F}(t;\psi)}{\Pr[X(t)=1]},$ (3.43)
the latter being the fidelity of the link given that the link is active. From
Proposition 3.2 and Corollary 3.1, along with (2.95) and (2.98), we
immediately obtain analytic expressions for the expectation values of these
quantities under the memory cutoff policy:
$\displaystyle\mathbb{E}[\widetilde{F}(t;\psi)]$
$\displaystyle=\left\\{\begin{array}[]{l
l}\displaystyle\sum_{m=0}^{t-1}f_{m}(\rho_{0};\psi)p(1-p)^{t-(m+1)}&t\leq
t^{\star}+1,\\\\[14.22636pt]
\displaystyle\sum_{m=0}^{t^{\star}}f_{m}(\rho_{0};\psi)\Pr[M_{t^{\star}}(t)=m,X(t)=1],&t>t^{\star}+1,\end{array}\right.$
(3.46) $\displaystyle\mathbb{E}[F(t;\psi)]$
$\displaystyle=\left\\{\begin{array}[]{l
l}\displaystyle\sum_{m=0}^{t-1}f_{m}(\rho_{0};\psi)\frac{p(1-p)^{t-(m+1)}}{1-(1-p)^{t}}&t\leq
t^{\star}+1,\\\\[14.22636pt]
\displaystyle\sum_{m=0}^{t^{\star}}f_{m}(\rho_{0};\psi)\frac{\Pr[M_{t^{\star}}(t)=m,X(t)=1]}{\Pr[X(t)=1]},&t>t^{\star}+1,\end{array}\right.$
(3.49)
where in (3.46) and (3.49) the expression for $\Pr[M_{t^{\star}}(t)=m,X(t)=1]$
for $t>t^{\star}+1$ is given in (3.40), and the expression for $\Pr[X(t)=1]$
for $t>t^{\star}+1$ is given in (3.42). Furthermore, from (2.80), we have that
the expected quantum state of the link at time $t\geq 1$ is
$\sigma(t)=\left\\{\begin{array}[]{l
l}\displaystyle(1-p)^{t}\tau^{\varnothing}+\sum_{m=0}^{t-1}p(1-p)^{t-(m+1)}\rho(m),&t\leq
t^{\star}+1,\\\\[14.22636pt]
\displaystyle(1-\Pr[X(t)=1])\tau^{\varnothing}+\sum_{m=0}^{t^{\star}}\Pr[M_{t^{\star}}(t)=m,X(t)=1]\rho(m)&t>t^{\star}+1,\end{array}\right.$
(3.50)
where for $t>t^{\star}+1$ the expressions for $\Pr[X(t)=1]$ and
$\Pr[M_{t^{\star}}(t)=m,X(t)=1]$ are given in (3.42) and (3.40), respectively.
From (2.86), we also have
$\sigma(t|X(t)=1)=\left\\{\begin{array}[]{l
l}\displaystyle\sum_{m=0}^{t-1}\frac{p(1-p)^{t-(m+1)}}{1-(1-p)^{t}}\rho(m),&t\leq
t^{\star}+1,\\\\[14.22636pt]
\displaystyle\sum_{m=0}^{t^{\star}}\frac{\Pr[M_{t^{\star}}(t)=m,X(t)=1]}{\Pr[X(t)=1]}\rho(m),&t>t^{\star}+1,\end{array}\right.$
(3.51)
where again for $t>t^{\star}+1$ the expressions for $\Pr[X(t)=1]$ and
$\Pr[M_{t^{\star}}(t)=m,X(t)=1]$ are given in (3.42) and (3.40), respectively.
Let us now consider the $t\to\infty$, or infinite-horizon, behavior of the link.
###### Theorem 3.1.
For all $t^{\star}\geq 0$ and $p\in[0,1]$,
$\lim_{t\to\infty}\mathbb{E}[X(t)]=\frac{(t^{\star}+1)p}{1+t^{\star}p}.$
(3.52)
###### Proof.
Since we consider the limit $t\to\infty$, it suffices to consider the
expression for $\Pr[X(t)=1]$ in (3.42) for $t>t^{\star}+1$. Also due to the
$t\to\infty$ limit, we can disregard the indicator function in (3.42), so that
$\lim_{t\to\infty}\mathbb{E}[X(t)]=\lim_{t\to\infty}\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\sum_{k=1}^{t^{\star}+1}\binom{t-k-
xt^{\star}}{x}p^{x+1}(1-p)^{t-k-(t^{\star}+1)x}.$ (3.53)
Next, consider the binomial expansion of $(1-p)^{t-k-(t^{\star}+1)x}$:
$(1-p)^{t-k-(t^{\star}+1)x}=\sum_{j=0}^{\infty}\binom{t-k-(t^{\star}+1)x}{j}(-1)^{j}p^{j}.$
(3.54)
Substituting this into (3.53) gives us
$\displaystyle\lim_{t\to\infty}\mathbb{E}[X(t)]$
$\displaystyle=p\lim_{t\to\infty}\sum_{x,j=0}^{\infty}\sum_{k=1}^{t^{\star}+1}\binom{t-k-t^{\star}x}{x}\binom{t-k-(t^{\star}+1)x}{j}(-1)^{j}p^{x+j}$
(3.55)
$\displaystyle=p\lim_{t\to\infty}\sum_{\ell=0}^{\infty}\sum_{j=0}^{\ell}\sum_{k=1}^{t^{\star}+1}\binom{t-k-t^{\star}j}{j}\binom{t-k-(t^{\star}+1)j}{\ell-j}(-1)^{\ell-j}p^{\ell}.$
(3.56)
Now, for brevity, let $a\equiv t-k$, and let us focus on the sum
$\sum_{j=0}^{\ell}(-1)^{\ell-j}\binom{a-t^{\star}j}{j}\binom{a-t^{\star}j-j}{\ell-j}.$
(3.57)
We start by expanding the binomial coefficients to get
$\displaystyle\binom{a-t^{\star}j}{j}\binom{a-t^{\star}j-j}{\ell-j}$
$\displaystyle=\frac{(a-t^{\star}j)!}{j!(\ell-j)!(a-t^{\star}j-\ell)!}$ (3.58)
$\displaystyle=\frac{1}{j!(\ell-j)!}\prod_{s=0}^{\ell-1}(a-t^{\star}j-s)$
(3.59)
$\displaystyle=\frac{1}{\ell!}\binom{\ell}{j}\prod_{s=0}^{\ell-1}(a-t^{\star}j-s).$
(3.60)
Next, we have
$\prod_{s=0}^{\ell-1}(a-t^{\star}j-s)=\sum_{n=0}^{\ell}(-1)^{\ell-n}\begin{bmatrix}\ell\\\
n\end{bmatrix}(a-t^{\star}j)^{n},$ (3.61)
where $\begin{bmatrix}\ell\\ n\end{bmatrix}$ is the (unsigned) Stirling
number of the first kind, defined as the number of permutations of $\ell$
elements with $n$ disjoint cycles. Performing the
binomial expansion of $(a-t^{\star}j)^{n}$, the sum in (3.57) becomes
$\sum_{j=0}^{\ell}\sum_{n=0}^{\ell}\sum_{i=0}^{n}(-1)^{\ell-j}\frac{1}{\ell!}\binom{\ell}{j}\begin{bmatrix}\ell\\\
n\end{bmatrix}\binom{n}{i}(-1)^{i}(t^{\star})^{i}j^{i}a^{n-i}.$ (3.62)
Now, it holds that
$\sum_{j=0}^{\ell}(-1)^{\ell-j}\frac{1}{\ell!}\binom{\ell}{j}j^{i}=(-1)^{2\ell}\begin{Bmatrix}i\\\
\ell\end{Bmatrix},$ (3.63)
where $\begin{Bmatrix}i\\ \ell\end{Bmatrix}$ is the Stirling number of the
second kind, defined as the number of ways to partition a set of $i$ objects
into $\ell$ non-empty subsets. For $i<\ell$, it holds that
$\begin{Bmatrix}i\\\ \ell\end{Bmatrix}=0$, and $\begin{Bmatrix}\ell\\\
\ell\end{Bmatrix}=1$. Since $i$ ranges from 0 to $n$, and $n$ itself ranges
from 0 to $\ell$, the sum in (3.63) is zero except for when $i=\ell$. The sum
in (3.63) is therefore effectively equal to $(-1)^{2\ell}\delta_{i,\ell}$.
Substituting this into (3.62) leads to
$\sum_{n=0}^{\ell}\sum_{i=0}^{n}(-1)^{2\ell}\delta_{i,\ell}\begin{bmatrix}\ell\\\
n\end{bmatrix}\binom{n}{i}(-1)^{i}(t^{\star})^{i}a^{n-i}=(-1)^{\ell}(t^{\star})^{\ell},$
(3.64)
where we have used the fact that $\begin{bmatrix}\ell\\\ \ell\end{bmatrix}=1$.
Altogether, we have shown that
$\sum_{j=0}^{\ell}(-1)^{\ell-j}\binom{a-t^{\star}j}{j}\binom{a-t^{\star}j-j}{\ell-j}=(-1)^{\ell}(t^{\star})^{\ell}$
(3.65)
for all $\ell\geq 0$. The sum is independent of $a=t-k$. Substituting this
result into (3.56), and using the fact that
$\sum_{\ell=0}^{\infty}(-1)^{\ell}x^{\ell}=\frac{1}{1+x},\quad x\neq-1,$
(3.66)
we get
$\lim_{t\to\infty}\mathbb{E}[X(t)]=p\sum_{\ell=0}^{\infty}\sum_{k=1}^{t^{\star}+1}(-1)^{\ell}(t^{\star}p)^{\ell}=p(t^{\star}+1)\sum_{\ell=0}^{\infty}(-1)^{\ell}(t^{\star}p)^{\ell}=\frac{(t^{\star}+1)p}{1+t^{\star}p},$
(3.67)
as required. ∎
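The combinatorial identity (3.65) is the crux of the proof above; the following short sketch verifies it numerically on a sample of parameters. The helper name `lhs` is ours and purely illustrative.

```python
from math import comb

def lhs(a, t_star, ell):
    """Left-hand side of (3.65), transcribed for a numerical spot-check."""
    s = 0
    for j in range(ell + 1):
        s += ((-1) ** (ell - j)
              * comb(a - t_star * j, j)
              * comb(a - t_star * j - j, ell - j))
    return s

# the sum should equal (-1)**ell * t_star**ell, independent of a
for a in (20, 35, 50):
    for t_star in (0, 1, 3):
        for ell in range(0, 5):
            assert lhs(a, t_star, ell) == (-1) ** ell * t_star ** ell
print("identity (3.65) verified on the sampled range")
```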
Note that if $t^{\star}=\infty$, then
$\lim_{t^{\star}\to\infty}\lim_{t\to\infty}\mathbb{E}[X(t)]=\lim_{t^{\star}\to\infty}\frac{(t^{\star}+1)p}{1+t^{\star}p}=1,$
(3.68)
which is what we expect, because if $t^{\star}=\infty$, then the link, once
established, never has to be dropped.
###### Theorem 3.2.
For any $t^{\star}<\infty$, $p\in[0,1]$, and $m\in\\{0,1,\dotsc,t^{\star}\\}$,
$\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m,X(t)=1]=\frac{p}{1+t^{\star}p}.$
(3.69)
###### Proof.
The proof is very similar to the proof of Theorem 3.1. Using the result of
Proposition 3.2, in the limit $t\to\infty$ we have
$\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m,X(t)=1]=\lim_{t\to\infty}\sum_{x=0}^{\infty}\binom{t-(m+1)-xt^{\star}}{x}p^{x+1}(1-p)^{t-(m+1)-x(t^{\star}+1)}.$
(3.70)
Using the binomial expansion of $(1-p)^{t-(m+1)-x(t^{\star}+1)}$, exactly as
in the proof of Theorem 3.1, we can write
$\displaystyle\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m,X(t)=1]$
$\displaystyle=\lim_{t\to\infty}\sum_{x=0}^{\infty}\sum_{j=0}^{\infty}p\binom{t-(m+1)-xt^{\star}}{x}\binom{t-(m+1)-(t^{\star}+1)x}{j}(-1)^{j}p^{x+j}$
(3.71)
$\displaystyle=\lim_{t\to\infty}\sum_{\ell=0}^{\infty}\sum_{j=0}^{\ell}p\binom{t-(m+1)-jt^{\star}}{j}\binom{t-(m+1)-(t^{\star}+1)j}{\ell-j}(-1)^{\ell-j}p^{\ell}.$
(3.72)
Then, using (3.65), we have that
$\sum_{j=0}^{\ell}(-1)^{\ell-j}\binom{t-(m+1)-jt^{\star}}{j}\binom{t-(m+1)-(t^{\star}+1)j}{\ell-j}=(-1)^{\ell}(t^{\star})^{\ell}$
(3.73)
for all $t\geq 1$ and all $m\in\\{0,1,\dotsc,t^{\star}\\}$. Finally, using
(3.66), we obtain
$\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m,X(t)=1]=p\sum_{\ell=0}^{\infty}(-1)^{\ell}(t^{\star}p)^{\ell}=\frac{p}{1+t^{\star}p},$
(3.74)
as required. ∎
For $t^{\star}<\infty$, the conditional probability
$\Pr[M_{t^{\star}}(t)=m|X(t)=1]$ in the limit $t\to\infty$ is equal to
$\displaystyle\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m|X(t)=1]$
$\displaystyle=\lim_{t\to\infty}\frac{\Pr[M_{t^{\star}}(t)=m,X(t)=1]}{\Pr[X(t)=1]}$
(3.75)
$\displaystyle=\frac{\lim_{t\to\infty}\Pr[M_{t^{\star}}(t)=m,X(t)=1]}{\lim_{t\to\infty}\Pr[X(t)=1]}$
(3.76) $\displaystyle=\frac{1}{t^{\star}+1}.$ (3.77)
As an immediate consequence of Theorem 3.1 and Theorem 3.2, we obtain the
following:
$\displaystyle\lim_{t\to\infty}\mathbb{E}[\widetilde{F}(t;\psi)]$
$\displaystyle=\frac{p}{1+t^{\star}p}\sum_{m=0}^{t^{\star}}f_{m}(\rho_{0};\psi),\quad
t^{\star}<\infty,$ (3.78)
$\displaystyle\lim_{t\to\infty}\mathbb{E}[F(t;\psi)]$
$\displaystyle=\frac{1}{t^{\star}+1}\sum_{m=0}^{t^{\star}}f_{m}(\rho_{0};\psi),\quad
t^{\star}<\infty.$ (3.79)
For the expected quantum state, we obtain
$\displaystyle\lim_{t\to\infty}\sigma(t)$
$\displaystyle=\frac{1-p}{1+t^{\star}p}\tau^{\varnothing}+\frac{p}{1+t^{\star}p}\sum_{m=0}^{t^{\star}}\rho(m),\quad
t^{\star}<\infty,$ (3.80) $\displaystyle\lim_{t\to\infty}\sigma(t|X(t)=1)$
$\displaystyle=\frac{1}{t^{\star}+1}\sum_{m=0}^{t^{\star}}\rho(m),\quad
t^{\star}<\infty.$ (3.81)
Let us also determine the probabilities $\Pr[M_{t^{\star}}(t)=m,X(t)=0]$ for
finite $t^{\star}$. Observe that this probability is non-zero only when
$M_{t^{\star}}(t)=t^{\star}$. This is due to the fact that we can have
$X(t)=0$ in only one of two possible ways: either there are some full blocks
of ones of length $t^{\star}+1$ before time $t$, or there are no full blocks
of ones before time $t$. In both cases, the value of the memory can only be
equal to $t^{\star}$. We thus obtain the following.
###### Proposition 3.3.
For any $t\geq 1$, $t^{\star}<\infty$, $p\in[0,1]$, and
$m\in\\{0,1,\dotsc,t^{\star}\\}$,
$\Pr[M_{t^{\star}}(t)=m,X(t)=0]=\left\\{\begin{array}[]{l
l}\delta_{m,t^{\star}}(1-p)^{t},\quad t\leq t^{\star}+1,\\\\[14.22636pt]
\displaystyle\delta_{m,t^{\star}}\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\binom{t-1-xt^{\star}}{x}p^{x}(1-p)^{t-(t^{\star}+1)x},\quad
t>t^{\star}+1.\end{array}\right.$ (3.82)
For $t^{\star}=\infty$,
$\Pr[M_{\infty}(t)=m,X(t)=0]=\delta_{m,-1}(1-p)^{t}$ (3.83)
for all $m\in\\{-1,0,1,\dotsc,t-1\\}$.
###### Proof.
For finite $t^{\star}$, when $t\leq t^{\star}+1$, there is only one link value
sequence ending with a zero, and that is the sequence consisting of all zeros,
which has probability $(1-p)^{t}$. Furthermore, since the value of the memory
for this sequence is equal to $t^{\star}$, only the case
$M_{t^{\star}}(t)=t^{\star}$ has non-zero probability. When $t>t^{\star}+1$,
we can again have non-zero probability only for $M_{t^{\star}}(t)=t^{\star}$.
In this case, because every link value sequence has to end with a zero, we
must have $Y_{2}(t)=0$. Therefore, using (3.26), along with (3.30), we obtain
the desired result.
For $t^{\star}=\infty$, only the link value sequence consisting of all zeros
ends with a zero, and in this case we have $M_{\infty}(t)=-1$. The result then
follows. ∎
By following arguments very similar to the proof of Theorem 3.2, we arrive at
the following infinite-horizon expression for $\Pr[M_{t^{\star}}(t)=m,X(t)=0]$
when $t^{\star}<\infty$:
$\lim_{t\to\infty}\Pr[M(t)=m,X(t)=0]=\delta_{m,t^{\star}}\frac{1-p}{1+t^{\star}p},\quad
m\in\\{0,1,\dotsc,t^{\star}\\}.$ (3.84)
Finally, let us consider the expected success rate $\mathbb{E}[S(t)]$. Letting
$N^{\max}=1$, recall that
$S(t)=\frac{\sum_{j=1}^{t}A(j-1)X(j)}{\sum_{j=1}^{t}A(j-1)}.$ (3.85)
The success rate is simply the ratio of the number of successful requests up
to time $t$ to the total number of requests up to time $t$.
###### Proposition 3.4.
For any $t^{\star}\geq 0$, any $t\geq 1$, and any $p\in[0,1]$,
$\mathbb{E}[S(t)]=\sum_{j=0}^{t-1}\frac{1}{j+1}p(1-p)^{j},\quad t\leq
t^{\star}+1.$ (3.86)
For $t>t^{\star}+1$,
$\mathbb{E}[S(t)]=\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\left(\frac{x}{t-t^{\star}x}\binom{t-1-xt^{\star}}{x}p^{x}(1-p)^{t-(t^{\star}+1)x}\right.\\\
\left.+\sum_{k=1}^{t^{\star}+1}\frac{x+1}{t-k-t^{\star}x+1}\binom{t-k-
xt^{\star}}{x}p^{x+1}(1-p)^{t-k-(t^{\star}+1)x}\boldsymbol{1}_{t-k-(t^{\star}+1)x\geq
0}\right).$ (3.87)
###### Proof.
We start with the observation that, for any history $h^{t}$, the number of
successful requests can be written in terms of the number $Y_{1}(t)(h^{t})$ of
blocks of ones of length $t^{\star}+1$ and the number $Y_{2}(t)(h^{t})$ of
trailing ones in the link value sequence corresponding to $h^{t}$ as
$Y_{1}(t)(h^{t})+1-\delta_{Y_{2}(t)(h^{t}),0}.$ (3.88)
Similarly, the total number of failed requests is
$t-Y_{2}(t)(h^{t})-(t^{\star}+1)Y_{1}(t)(h^{t}).$ (3.89)
Therefore,
$\displaystyle S(t)(h^{t})$
$\displaystyle=\frac{Y_{1}(t)(h^{t})+1-\delta_{Y_{2}(t)(h^{t}),0}}{t-Y_{2}(t)(h^{t})-(t^{\star}+1)Y_{1}(t)(h^{t})+Y_{1}(t)(h^{t})+1-\delta_{Y_{2}(t)(h^{t}),0}}$
(3.90)
$\displaystyle=\frac{Y_{1}(t)(h^{t})+1-\delta_{Y_{2}(t)(h^{t}),0}}{t-Y_{2}(t)(h^{t})-t^{\star}Y_{1}(t)(h^{t})+1-\delta_{Y_{2}(t)(h^{t}),0}}.$
(3.91)
Now, for $t\leq t^{\star}+1$, we always have $Y_{1}(t)(h^{t})=0$ for all
histories $h^{t}$, and the only histories that contribute to the expectation are
those whose link value sequence consists of a positive number of trailing ones
not exceeding $t$ (the all-zeros history gives $S(t)=0$). Thus, from Proposition 3.1, the
probability of any such history is $p(1-p)^{t-Y_{2}(t)(h^{t})}$. Using (3.91)
then leads to
$\mathbb{E}[S(t)]=\sum_{h^{t}}S(t)(h^{t})\Pr[H(t)=h^{t}]=\sum_{k=1}^{t}\frac{1}{t-k+1}p(1-p)^{t-k}=\sum_{j=0}^{t-1}\frac{1}{j+1}p(1-p)^{j}$
(3.92)
for $t\leq t^{\star}+1$, as required, where the last equality follows by a
change of summation variable.
For $t>t^{\star}+1$, we use (3.91) again, keeping in mind this time that the
number of trailing ones can be equal to zero, to get
$\displaystyle\mathbb{E}[S(t)]$
$\displaystyle=\sum_{h^{t}}S(t)(h^{t})\Pr[H(t)=h^{t}]$ (3.93)
$\displaystyle=\sum_{h^{t}:Y_{2}(t)(h^{t})=0}S(t)(h^{t})\Pr[H(t)=h^{t}]+\sum_{h^{t}:Y_{2}(t)(h^{t})\geq
1}S(t)(h^{t})\Pr[H(t)=h^{t}]$ (3.94)
$\displaystyle=\sum_{x=0}^{\lfloor\frac{t-1}{t^{\star}+1}\rfloor}\left(\frac{x}{t-t^{\star}x}\Pr[H(t)=h^{t}:Y_{1}(t)(h^{t})=x,Y_{2}(t)(h^{t})=0]\right.$
$\displaystyle\qquad\qquad\qquad\left.+\sum_{k=1}^{t^{\star}+1}\frac{x+1}{t-k-t^{\star}x+1}\Pr[H(t)=h^{t}:Y_{1}(t)(h^{t})=x,Y_{2}(t)(h^{t})=k]\right).$
(3.95)
Using Proposition 3.1, we arrive at the desired result. ∎
Figure 6: The expected success rate, as given by the expressions in
Proposition 3.4, for an elementary link with $p=0.3$ and various cutoffs.
See Figure 6 for a plot of the expected rate $\mathbb{E}[S(t)]$ as a function
of time for various values of the cutoff. We find that the rate has
essentially the shape of a decaying square wave, which is clearer for larger
values of the cutoff. In particular, the “plateaus” in the curves have a
period of $t^{\star}+1$ time steps. Let us now consider the values of these
pleateaus. The largest plateau can be found by considering the case
$t^{\star}=\infty$, because in this case the condition $t\leq t^{\star}+1$ is
satisfied for all $t\geq 1$, and it is when this condition is true that the
largest plateau occurs. Using Proposition 3.4 with $t^{\star}=\infty$, we find
that the value of the largest plateau approaches
$\lim_{t\to\infty}\mathbb{E}[S(t)]=\lim_{t\to\infty}\sum_{j=0}^{t-1}\frac{1}{j+1}p(1-p)^{j}=-\frac{p\ln
p}{1-p},\quad t^{\star}=\infty,$ (3.96)
for all $p\in(0,1)$. In the case $t^{\star}<\infty$, as we see in Figure 6,
there are multiple plateaus, with each plateau lasting for a period of
$t^{\star}+1$ time steps, as mentioned earlier. The values of these plateaus
depend on the number $x\geq 0$ of full blocks of ones in the link value
sequence. Specifically, the values of the plateaus approach
$\lim_{t\to\infty}\sum_{k=1}^{t-(t^{\star}+1)x}\frac{x+1}{t-k-t^{\star}x+1}\binom{t-k-t^{\star}x}{x}p^{x+1}(1-p)^{t-k-(t^{\star}+1)x}\\\
=\lim_{t\to\infty}\sum_{j=(t^{\star}+1)x}^{t-1}\frac{x+1}{j-t^{\star}x+1}\binom{j-t^{\star}x}{x}p^{x+1}(1-p)^{j-(t^{\star}+1)x}=p\cdot{~{}}_{2}F_{1}(1,1,2+x,1-p),$
(3.97)
for all $x\geq 0$, where ${}_{2}F_{1}(a,b,c,z)$ is the hypergeometric
function. Then, using the fact that
$\lim_{x\to\infty}{~{}}_{2}F_{1}(1,1,2+x,1-p)=1$ [149], we conclude that the
plateaus approach the value of $p$, i.e.,
$\lim_{t\to\infty}\mathbb{E}[S(t)]=p,\quad t^{\star}<\infty.$ (3.98)
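The following sketch evaluates the closed form of Proposition 3.4 and compares it against a Monte Carlo estimate of $S(t)$ under the same assumed memory-cutoff dynamics as in the earlier sketch. It is illustrative only; the function names are ours.

```python
from math import comb
import random

def expected_success_rate(t, t_star, p):
    """Closed form for E[S(t)] (Proposition 3.4), transcribed for illustration."""
    if t <= t_star + 1:
        return sum(p * (1 - p) ** j / (j + 1) for j in range(t))
    total = 0.0
    for x in range((t - 1) // (t_star + 1) + 1):
        # histories with no trailing ones (last request failed)
        total += (x / (t - t_star * x)) * comb(t - 1 - x * t_star, x) \
                 * p ** x * (1 - p) ** (t - (t_star + 1) * x)
        # histories with k >= 1 trailing ones
        for k in range(1, t_star + 2):
            if t - k - (t_star + 1) * x >= 0:
                total += ((x + 1) / (t - k - t_star * x + 1)
                          * comb(t - k - x * t_star, x)
                          * p ** (x + 1) * (1 - p) ** (t - k - (t_star + 1) * x))
    return total

def simulate_success_rate(t, t_star, p, rng):
    """Monte Carlo S(t) for one run of the assumed memory-cutoff dynamics."""
    active, memory, requests, successes = False, 0, 0, 0
    for _ in range(t):
        if active and memory < t_star:
            memory += 1
        else:
            requests += 1
            active = rng.random() < p
            memory = 0
            successes += int(active)
    return successes / requests

if __name__ == "__main__":
    rng = random.Random(1)
    t, t_star, p, trials = 20, 4, 0.3, 100_000
    mc = sum(simulate_success_rate(t, t_star, p, rng) for _ in range(trials)) / trials
    print(expected_success_rate(t, t_star, p), mc)   # should agree closely
```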
### 3.2 Waiting time
Let us now consider the waiting time for an elementary link. The waiting time
is defined to be the number of time steps needed to establish a link from the
time that a link is requested. We focus here on just one elementary link.
Detailed analyses of the waiting time for a chain of bipartite links have been
conducted in [34, 150, 102, 120].
It is well known for the model being considered here that the waiting time,
which we denote by $W$, is a geometric random variable, so that
$\Pr[W=t]=p(1-p)^{t-1}$, where $p$ is the success probability of the link. The
expected waiting time is then $\mathbb{E}[W]=\frac{1}{p}$. We can confirm this
using the formalism developed in this work by noting that the waiting time
probability distribution is given simply by the probability that it takes $t$
time steps to establish the link, starting from $t=1$:
$\Pr[W=t]=\Pr[X(1)=0,X(2)=0,\dotsc,X(t)=1].$ (3.99)
Using the result of Proposition 3.1, we immediately obtain
$\Pr[W=t]=p(1-p)^{t-1}$, from which the expected waiting time is
$\mathbb{E}[W]=\sum_{t=1}^{\infty}tp(1-p)^{t-1}=\frac{1}{p}$, as expected.
Note that this result holds regardless of the value of $t^{\star}$, and it
assumes that the initial request for entanglement is made at time $t=0$.
Let us now consider a scenario in which the elementary link generation process
is persistent, even if no end-user request is made. In other words, we
consider an “always-on”/continuous link generation procedure that is ready to
go whenever end-user entanglement is requested, rather than having the entire
process begin only when end-user entanglement is requested. Then, if an end-user
request for entanglement occurs at time $t_{\text{req}}\geq 0$, the
waiting time random variable $W(t_{\text{req}})$ has probability distribution
$\Pr[W(t_{\text{req}})=t]=\Pr[X(t_{\text{req}}+1)=0,X(t_{\text{req}}+2)=0,\dotsc,X(t_{\text{req}}+t)=1]$
(3.100)
for all $t\geq 1$. In other words, the waiting time is given by the amount of
time it takes to establish the link after the end-user request is made. Note
that $W=W(0)$. With non-zero memory cutoff and $t_{\text{req}}>0$, we can
obtain a lower expected waiting time than $\frac{1}{p}$, which we now show.
###### Proposition 3.5.
For any $t^{\star}<\infty$, for any $t_{\text{req}}\geq 0$, and for any
$p\in(0,1)$,
$\mathbb{E}[W(t_{\text{req}})]=\frac{\Pr[M_{t^{\star}}(t_{\text{req}}+1)=t^{\star},X(t_{\text{req}}+1)=0]}{p(1-p)}.$
(3.101)
For $t^{\star}=\infty$,
$\mathbb{E}[W(t_{\text{req}})]=\frac{\Pr[X(t_{\text{req}}+1)=0]}{p(1-p)}=\frac{(1-p)^{t_{\text{req}}}}{p}.$
(3.102)
###### Remark 3.2.
As a check, let us first observe the following:
* •
If $t_{\text{req}}=0$, then since $\Pr[M_{t^{\star}}(1)=t^{\star},X(1)=0]=1-p$
for all $t^{\star}<\infty$ (see Proposition 3.3), we obtain
$\mathbb{E}[W(0)]=\frac{1}{p}$, as expected. We get the same result for
$t^{\star}=\infty$.
* •
If $t^{\star}=0$, then we get
$\Pr[M_{t^{\star}}(t_{\text{req}}+1)=0,X(t_{\text{req}}+1)=0]=1-p$ for all
$t_{\text{req}}\geq 0$ (see Proposition 3.3), which means that
$\mathbb{E}[W(t_{\text{req}})]=\frac{1}{p}$ for all $t_{\text{req}}\geq 0$.
This makes sense, because in the $t^{\star}=0$ policy the link is never held
in memory. $\blacktriangleleft$
###### Proof of Proposition 3.5.
Using (3.100), we have
$\displaystyle\Pr[W(t_{\text{req}})=t]$
$\displaystyle\quad=\Pr[X(t_{\text{req}}+1)=0,\dotsc,X(t_{\text{req}}+t)=1]$
(3.103)
$\displaystyle\quad=\sum_{m_{1},\dotsc,m_{t}=0}^{t^{\star}}\Pr[X(t_{\text{req}}+1)=0,M_{t^{\star}}(t_{\text{req}}+1)=m_{1},\dotsc,X(t_{\text{req}}+t)=1,M_{t^{\star}}(t_{\text{req}}+t)=m_{t}].$
(3.104)
Using the transition matrix $T(t^{\star})$ defined in (3.13)–(3.19), we obtain
$\Pr[W(t_{\text{req}})=t]\\\
=\sum_{m_{1},\dotsc,m_{t}=0}^{t^{\star}}(T(t^{\star}))_{\begin{subarray}{c}1,m_{t}\\\
0,m_{t-1}\end{subarray}}\dotsb(T(t^{\star}))_{\begin{subarray}{c}0,m_{3}\\\
0,m_{2}\end{subarray}}(T(t^{\star}))_{\begin{subarray}{c}0,m_{2}\\\
0,m_{1}\end{subarray}}\Pr[M_{t^{\star}}(t_{\text{req}}+1)=m_{1},X(t_{\text{req}}+1)=0].$
(3.105)
Using (3.82), along with (3.13)–(3.19), we have that
$\Pr[W(t_{\text{req}})=t]=\Pr[M_{t^{\star}}(t_{\text{req}}+1)=t^{\star},X(t_{\text{req}}+1)=0]p(1-p)^{t-2},$
(3.106)
for all $t\geq 1$. The result then follows.
For $t^{\star}=\infty$, using the transition matrix $T(\infty)$ defined in
(3.25) leads to
$\Pr[X(t_{\text{req}}+1)=0,\dotsc,X(t_{\text{req}}+t)=1]\\\
=(T(\infty))_{\begin{subarray}{c}1\\\
0\end{subarray}}(T(\infty))_{\begin{subarray}{c}0\\\
0\end{subarray}}\dotsb(T(\infty))_{\begin{subarray}{c}0\\\
0\end{subarray}}\Pr[X(t_{\text{req}}+1)=0].$ (3.107)
Then, from (3.42), we have that
$\Pr[X(t_{\text{req}}+1)=0]=(1-p)^{t_{\text{req}}+1}$, so that
$\Pr[W(t_{\text{req}})=t]=p(1-p)^{t-2}(1-p)^{t_{\text{req}}+1}$ (3.108)
for all $t\geq 1$. The result then follows. ∎
Figure 7: The expected waiting time for a single elementary link, given by
(3.101), as a function of the request time $t_{\text{req}}$. We let $p=0.3$,
and we take various values for the cutoff $t^{\star}$.
In the limit $t_{\text{req}}\to\infty$, we obtain using (3.84),
$\lim_{t_{\text{req}}\to\infty}\mathbb{E}[W(t_{\text{req}})]=\frac{1}{p(1+t^{\star}p)},\quad
t^{\star}<\infty.$ (3.109)
See Figure 7 for plots of the expected waiting time, given by (3.101), as a
function of the request time $t_{\text{req}}$ for various values of
$t^{\star}$. As long as $t^{\star}$ is strictly greater than zero, the waiting
time is strictly less than $\frac{1}{p}$, despite the oscillatory behavior for
small values of $t_{\text{req}}$. In the limit $t_{\text{req}}\to\infty$, we
see that the waiting time is monotonically decreasing with increasing
$t^{\star}$, which is also apparent from (3.109).
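A minimal sketch of the expected waiting time (3.101), using (3.82) for $\Pr[M_{t^{\star}}(t)=t^{\star},X(t)=0]$, is given below. It recovers $1/p$ at $t_{\text{req}}=0$ and approaches the limit (3.109) for large $t_{\text{req}}$; the helper names are ours and the sketch assumes a finite cutoff and $0<p<1$.

```python
from math import comb

def prob_mem_full_and_inactive(t, t_star, p):
    """Pr[M(t)=t_star, X(t)=0] from (3.82) (finite cutoff), for illustration."""
    if t <= t_star + 1:
        return (1 - p) ** t
    return sum(comb(t - 1 - x * t_star, x) * p ** x
               * (1 - p) ** (t - (t_star + 1) * x)
               for x in range((t - 1) // (t_star + 1) + 1))

def expected_waiting_time(t_req, t_star, p):
    """E[W(t_req)] from (3.101), assuming t_star < infinity and 0 < p < 1."""
    return prob_mem_full_and_inactive(t_req + 1, t_star, p) / (p * (1 - p))

p, t_star = 0.3, 4
for t_req in (0, 1, 5, 50):
    print(t_req, expected_waiting_time(t_req, t_star, p))
# t_req = 0 gives 1/p; large t_req approaches 1/(p*(1 + t_star*p)), cf. (3.109)
```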
### 3.3 Multiple parallel links
So far, we have considered only one edge connecting a particular set of nodes
corresponding to an elementary link in a network. Suppose now that those nodes
have $N^{\max}>1$ parallel edges connecting them (see Figure 1). Therefore, at
each time step, the set of nodes can have at most $N^{\max}$ active parallel
links. In this scenario, as described in the Introduction, the network is
described by a multigraph, since each parallel link corresponds to a distinct
edge connecting the nodes. All of these parallel links are mutually
independent by definition. Therefore, if $E$ is a subset of edges in a graph
corresponding to a quantum network, then we can write the classical-quantum
state of an edge $e\in E$ at time $t$ as
$\bigotimes_{j=1}^{N_{e}^{\max}}\widehat{\sigma}_{e,j}(t)$ for all $t\geq 1$,
where each $\widehat{\sigma}_{e,j}(t)$ is given by (2.27) and $N_{e}^{\max}$
is the maximum number of parallel links in the edge $e\in E$. By tracing out
the classical history registers of each parallel link, we obtain the overall
expected quantum state of an edge $e\in E$ at time $t$ as follows:
$\bigotimes_{j=1}^{N_{e}^{\max}}\sigma_{e,j}(t),$ (3.110)
where each $\sigma_{e,j}(t)$ is given by the expression in (3.50). In the
limit $t\to\infty$, we use the expression in (3.80) to obtain
$\bigotimes_{j=1}^{N_{e}^{\max}}\left(\frac{1-p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}\tau_{e,j}^{\varnothing}+\frac{p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}\sum_{m=0}^{t_{e,j}^{\star}}\rho_{e,j}(m)\right)\quad(t\to\infty),$
(3.111)
where $\\{p_{e,j}\\}_{j=1}^{N_{e}^{\max}}$ and
$\\{t_{e,j}^{\star}\\}_{j=1}^{N_{e}^{\max}}$ are the success probabilities and
cutoffs of the parallel links (all finite) for the edge $e\in E$.
Using the joint state of the parallel links in (3.110), it is possible to
apply an entanglement distillation protocol in order to increase the fidelity
of the link, which is important for achieving a high-fidelity long-distance
entangled state. See [26, 27, 28] for examples of bipartite entanglement
distillation protocols, and [151, 152, 153, 154, 155, 156] for examples of
multipartite entanglement distillation protocols. (See also [157, 158, 159,
160, 161, 162, 163, 164, 165, 166].) Upper bounds on the fidelity that can be
achieved after an entanglement distillation protocol, in the non-asymptotic
setting, can be calculated using a semi-definite program (SDP), as shown in
[167]. For practical entanglement distillation schemes, which typically only
consist of one round of local operations and classical communication and also
have non-unit success probability, SDP upper bounds have been provided in
[165]. In [126], the authors use reinforcement learning to discover protocols
for entanglement distillation. See [25, 121, 122, 104] for an analysis of
quantum repeater protocols with entanglement distillation.
Using the expressions in (3.110) and (3.111), along with the fact that the
elementary link generation for each edge in the set $E$ is independent of the
other edges, we can write the overall expected quantum state corresponding to
the set $E$ of edges as follows:
$\sigma_{E}(t)\coloneqq\bigotimes_{e\in
E}\bigotimes_{j=1}^{N_{e}^{\max}}\sigma_{e,j}(t)$ (3.112)
When all of the cutoffs are finite, in the limit $t\to\infty$, we get
$\lim_{t\to\infty}\sigma_{E}(t)=\bigotimes_{e\in
E}\bigotimes_{j=1}^{N_{e}^{\max}}\left(\frac{1-p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}\tau_{e,j}^{\varnothing}+\frac{p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}\sum_{m=0}^{t_{e,j}^{\star}}\rho_{e,j}(m)\right).$
(3.113)
With multiple parallel links in an edge $e\in E$, the probability of having at
least one active parallel link at time $t$ is
$\Pr[N_{e}(t)\geq 1]=1-\prod_{j=1}^{N_{e}^{\max}}(1-\Pr[X_{e,j}(t)=1]),$
(3.114)
where $X_{e,j}(t)$ is the link random variable for the $j^{\text{th}}$
parallel link. In the limit $t\to\infty$, using (3.52), this probability
becomes
$\lim_{t\to\infty}\Pr[N_{e}(t)\geq
1]=1-\prod_{j=1}^{N_{e}^{\max}}\frac{1-p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}.$
(3.115)
We can also determine the expected number of active parallel links at any
given time. Recalling that $N_{e}(t)=\sum_{j=1}^{N_{e}^{\max}}X_{e,j}(t)$ is
the random variable for the total number of parallel links at time $t\geq 1$,
we find that the expected number of parallel links is simply
$\sum_{j=1}^{N_{e}^{\max}}\mathbb{E}[X_{e,j}(t)]$, with
$\mathbb{E}[X_{e,j}(t)]$ given by (3.42) for each parallel link. In the limit
$t\to\infty$, the expected number of parallel links becomes
$\lim_{t\to\infty}\mathbb{E}[N_{e}(t)]=\sum_{j=1}^{N_{e}^{\max}}\frac{(t_{e,j}^{\star}+1)p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}.$
(3.116)
If all of the parallel links have the same success probability $p_{e}$ and the
same cutoff $t_{e}^{\star}$, then this reduces to
$\lim_{t\to\infty}\mathbb{E}[N_{e}(t)]=N_{e}^{\max}\frac{(t_{e}^{\star}+1)p_{e}}{1+t_{e}^{\star}p_{e}}$
(3.117)
parallel links on average in the $t\to\infty$ limit. Note that if all of the
parallel links have the same memory cutoff and the same success probability,
then $N_{e}(t)$ is simply a binomial random variable with parameter
$\Pr[X_{e}(t)=1]$; otherwise, $N_{e}(t)$ is a Poisson-binomial random variable
(see, e.g., [168]). Also, as mentioned in Remark 2.4, the quantity
$N_{e}^{\max}$ can be thought of as an edge capacity, because it is the
maximum number of entangled states that can be shared along an edge per unit
time. Then, $N_{e}(t)$ can be thought of as the edge flow, and
$\mathbb{E}[N_{e}(t)]$ the expected edge flow. In the case of bipartite links,
Menger’s theorem [169, 170] tells us that we can use the expected flow to
determine at any time step the expected number of edge-disjoint paths between
two given nodes in the network, which then gives us the number of entangled
states that they can share; see, e.g., [74]. In the case of multipartite
links, the expected edge flow can be used to determine the expected number of
edge-disjoint Steiner/spanning trees [171, 172, 173] for a given set of nodes
in the network in order to determine the number of multipartite entangled
states that they can share; see, e.g., [75, 78, 174].
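For a single edge with heterogeneous parallel links, the infinite-horizon quantities (3.115) and (3.116) can be computed directly. The sketch below is illustrative only; the function name and input format are assumptions, not part of the text.

```python
def stationary_parallel_link_stats(links):
    """Infinite-horizon Pr[N_e >= 1] and E[N_e] per (3.115)-(3.116).

    `links` is a list of (p, t_star) pairs, one per parallel link of the edge
    (assumed input format; all cutoffs finite).
    """
    q = [(t_star + 1) * p / (1 + t_star * p) for p, t_star in links]  # per-link Pr[X=1]
    prob_none = 1.0
    for qj in q:
        prob_none *= 1 - qj
    return 1 - prob_none, sum(q)

print(stationary_parallel_link_stats([(0.3, 5), (0.5, 2), (0.2, 10)]))
```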
Let us now consider the rate $R(t)$ for any edge under the memory cutoff
policy in the $t\to\infty$ limit. First, recall that
$R(t)=\frac{1}{t}\sum_{j=1}^{t}N(j)$ (3.118)
is the average number of active parallel links in an edge per unit time in $t$
time steps.
###### Theorem 3.3.
For any elementary link consisting of $N^{\max}$ parallel links, with
$\\{p_{j}\\}_{j=1}^{N^{\max}}$ being the success probabilities for the
parallel links and $\\{t_{j}^{\star}\\}_{j=1}^{N^{\max}}$ the memory cutoffs
for the parallel links, the expected rate $\mathbb{E}[R(t)]$ of elementary
link generation in the limit $t\to\infty$ is as follows:
$\lim_{t\to\infty}\mathbb{E}[R(t)]=\lim_{t\to\infty}\frac{1}{t}\sum_{j=1}^{t}\mathbb{E}[N(j)]=\sum_{j=1}^{N^{\max}}\frac{(t_{j}^{\star}+1)p_{j}}{1+t_{j}^{\star}p_{j}}.$
(3.119)
###### Proof.
The expected rate $\mathbb{E}[R(t)]$ is, by definition, the Cesàro mean of the
sequence $(\mathbb{E}[N(j)])_{j=1}^{t}$ (see, e.g., [175]). Then, because
$\lim_{j\to\infty}\mathbb{E}[N(j)]$ exists and is given by (3.116), we use the
well-known result that the limit of Cesàro means is equal to the limit of the
original sequence [175], leading to the desired result. ∎
### 3.4 Total number of active links
Consider a subset $E$ of edges in a graph corresponding to a quantum network.
Then, for any time $t\geq 1$, the number of active links in the set $E$ is
$L_{E}(t)\coloneqq\sum_{e\in E}\sum_{j=1}^{N_{e}^{\max}}X_{e,j}(t),$ (3.120)
where $X_{e,j}(t)$ is the link status random variable for the $j^{\text{th}}$
parallel link of the edge $e\in E$. When only the number of edges/elementary
links is relevant, we use the notation $L_{M}(t)$ to refer to the number of
active links at time $t$, where $M=|E|$ is the number of edges/elementary
links under consideration.
The total number of active elementary links was introduced in [103] as a
figure of merit on the performance of an entanglement distribution network,
and it was shown that the quantity provides an upper bound on the average
largest cluster size (i.e., the size of the largest connected component) in
the network. In particular, for the case that all of the elementary links have
the same success probability $p$ and $N_{e}^{\max}=1$ for all $e\in E$, with
$M=|E|$, it was shown that $\frac{1}{M}\mathbb{E}[L_{M}(t)]\leq 1-(1-p)^{t}$
for all $t\geq 1$.
Figure 8: The expected total number $\mathbb{E}[L_{M}(t)]$ of active
elementary links when $M$ edges in total are being considered. We use the
notation
$\mathbb{E}[L_{M}(\infty)]\equiv\lim_{t\to\infty}\mathbb{E}[L_{M}(t)]$. (Left)
$M=2$ edges in the limit $t\to\infty$ with $N^{\max}=1$ parallel link for each
edge. One link has success probability $p_{1}$ and cutoff $t_{1}^{\star}=5$,
and the other link has success probability $p_{2}$ and cutoff
$t_{2}^{\star}=2$. (Right) $M=4$ edges after $t=50$ time steps, with
$N^{\max}=1$ parallel link for each edge. Two of the links have success
probability $p_{1}$ with cutoffs $5,15$, and the other two links have success
probability $p_{2}$ with cutoffs $10,20$.
Using the results of Section 3.1, we can now extend the result of [103]. In
particular,
$\mathbb{E}[L_{E}(t)]=\sum_{e\in
E}\sum_{j=1}^{N_{e}^{\max}}\mathbb{E}[X_{e,j}(t)],$ (3.121)
with $\mathbb{E}[X_{e,j}(t)]$ given by the expression in (3.42), i.e.,
$\Pr[X_{e,j}(t)=1]\\\ =\left\\{\begin{array}[]{l l}1-(1-p_{e,j})^{t},&t\leq
t_{e,j}^{\star}+1,\\\\[14.22636pt]
\displaystyle\sum_{x=0}^{\lfloor\frac{t-1}{t_{e,j}^{\star}+1}\rfloor}\sum_{k=1}^{t_{e,j}^{\star}+1}\binom{t-k-
xt_{e,j}^{\star}}{x}\boldsymbol{1}_{t-k-x(t_{e,j}^{\star}+1)\geq
0}p_{e,j}^{x+1}(1-p_{e,j})^{t-k-(t_{e,j}^{\star}+1)x},&t>t_{e,j}^{\star}+1\end{array}\right.$
(3.122)
where $\\{p_{e,j}:e\in E,1\leq j\leq N_{e}^{\max}\\}$ is the set of success
probabilities and $\\{t_{e,j}^{\star}:e\in E,1\leq j\leq N_{e}^{\max}\\}$ is
the set of cutoffs. In the $t\to\infty$ limit, this reduces to the following
simple expression using Theorem 3.1:
$\lim_{t\to\infty}\mathbb{E}[L_{E}(t)]=\sum_{e\in
E}\sum_{j=1}^{N_{e}^{\max}}\frac{(t_{e,j}^{\star}+1)p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}.$
(3.123)
See Figure 8 for plots of $\mathbb{E}[L_{M}(t)]$. Given a subset of elementary
links with given memory cutoffs, we can estimate the success probabilities
that need to be attained in order to achieve a desired expected number of
active elementary links after a given amount of time.
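The stationary expression (3.123), together with the collective quantities of the next subsection, combines the per-link limits across edges; a compact illustrative sketch (with an assumed input format and helper name) is as follows.

```python
def stationary_network_stats(edges):
    """Infinite-horizon E[L_E] per (3.123) and Pr[every edge has >= 1 active
    parallel link] per (3.128).

    `edges` maps an edge label to a list of (p, t_star) pairs, one per parallel
    link (assumed input format; all cutoffs finite).
    """
    expected_active = 0.0
    prob_all_edges = 1.0
    for links in edges.values():
        prob_edge_empty = 1.0
        for p, t_star in links:
            expected_active += (t_star + 1) * p / (1 + t_star * p)
            prob_edge_empty *= (1 - p) / (1 + t_star * p)
        prob_all_edges *= 1 - prob_edge_empty
    return expected_active, prob_all_edges

print(stationary_network_stats({"e1": [(0.3, 5)], "e2": [(0.4, 2), (0.2, 10)]}))
```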
### 3.5 Collective link status
Consider a subset $E$ of edges in a graph corresponding to a quantum network.
Then, in the case that each edge has only one parallel edge, for any time
$t\geq 1$ we define the collective link status $X_{E}^{\text{tot}}$ to be
$X_{E}^{\text{tot}}(t)\coloneqq\prod_{e\in E}X_{e}(t)$ (3.124)
When only the number of edges/elementary links is relevant, we use the
notation $X_{M}^{\text{tot}}(t)$ to refer to the collective link status at
time $t$, where $M=|E|$ is the number of edges/elementary links under
consideration. Note that the collective link status is equal to one if and
only if all of the links are active at the given time. In other words,
$\Pr[X_{E}^{\text{tot}}(t)=1]=\mathbb{E}[X_{E}^{\text{tot}}(t)]=\prod_{e\in
E}\Pr[X_{e}(t)=1].$ (3.125)
In the limit $t\to\infty$,
$\lim_{t\to\infty}\mathbb{E}[X_{E}^{\text{tot}}(t)]=\prod_{e\in
E}\frac{(t_{e}^{\star}+1)p_{e}}{1+t_{e}^{\star}p_{e}},$ (3.126)
where $\\{p_{e}\\}_{e\in E}$ and $\\{t_{e}^{\star}\\}_{e\in E}$ are the
success probabilities and cutoffs, respectively, of the links. The collective
link status can be used to estimate the probability of having a long-distance
entangled link between a collection of non-adjacent nodes that are connected
to each other along a path given by the subset $E$ of edges.
Figure 9: The expected collective link status
$\mathbb{E}[X_{M}^{\text{tot}}(t)]$ of a collection of $M$ edges. We use the
notation
$X_{M}^{\text{tot}}(\infty)\equiv\lim_{t\to\infty}\mathbb{E}[X_{M}^{\text{tot}}(t)]$.
(Left) $M=2$ edges in the limit $t\to\infty$ with $N^{\max}=1$ parallel link
for each edge. One link has success probability $p_{1}$ and cutoff
$t_{1}^{\star}=5$, and the other link has success probability $p_{2}$ and
cutoff $t_{2}^{\star}=2$. (Right) $M=4$ edges after $t=50$ time steps, with
$N^{\max}=1$ parallel link for each edge. Two of the links have success
probability $p_{1}$ with cutoffs $5,15$, and the other two links have success
probability $p_{2}$ with cutoffs $10,20$.
In general, if each edge $e\in E$ has a number $N_{e}^{\max}\geq 1$ of
parallel edges, then the probability that all corresponding elementary links
have at least one active parallel link at time $t\geq 1$ is given by
$\prod_{e\in E}\Pr[N_{e}(t)\geq 1]=\prod_{e\in
E}\left(1-\prod_{j=1}^{N_{e}^{\max}}(1-\Pr[X_{e,j}(t)=1])\right).$ (3.127)
In the limit $t\to\infty$, this becomes
$\prod_{e\in
E}\left(1-\prod_{j=1}^{N_{e}^{\max}}\frac{1-p_{e,j}}{1+t_{e,j}^{\star}p_{e,j}}\right).$
(3.128)
See Figure 9 for plots of $\mathbb{E}[X_{M}^{\text{tot}}(t)]$. Given a subset of
elementary links with given memory cutoffs, we can estimate the success
probabilities that need to be attained in order to achieve a desired expected
collective link status after a given amount of time.
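The path-level probability (3.127) and its limit (3.128) are obtained by combining the per-link probabilities of the previous subsections; the `stationary_network_stats` sketch given after (3.123) above already returns this collective probability as its second output, so no separate implementation is needed here.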
## 4 Finite-horizon policy optimization
Having analyzed the memory cutoff policy, let us now turn to obtaining optimal
policies. We stick to the finite-horizon case, meaning that the final time for
the link evolution is fixed at the outset and is finite, and the task is to
optimize the expected total reward up to the final time. In Theorem 4.1 below,
we show that policy optimization can be done using dynamic programming.
Recall from Section 2.1 that a policy is of the form
$\pi=(d_{1},d_{2},\dotsc)$, where the $d_{j}$ are decision functions, which in
general give conditional probability distributions over actions conditioned on
histories. Also, to each element of the policy $\pi$, recall from (2.23) that
we can associate the following density operator:
$\pi(j;h^{j})=\sum_{a=0}^{1}d_{j}(h^{j})(a)|a\rangle\langle a|,\quad j\geq
1,~{}h^{j}\in\\{0,1\\}^{2j-1}.$ (4.1)
Then, we can write the operator $\widetilde{\sigma}(t;h^{t})$ in (2.28) for
any $t\geq 1$ and any history $h^{t}=(x_{1},a_{1},x_{2},\allowbreak
a_{2},\dotsc,a_{t-1},x_{t})$ as follows:
$\displaystyle\widetilde{\sigma}(t;h^{t})$
$\displaystyle=\left(\prod_{j=1}^{t-1}\operatorname{Tr}[\pi(j;h_{j}^{t})|a_{j}\rangle\langle
a_{j}|]\right)(\mathcal{T}^{x_{t-1},a_{t-1},x_{t}}\circ\dotsb\circ\mathcal{T}^{x_{1},a_{1},x_{2}})(\widetilde{\sigma}(1;x_{1}))$
(4.2)
$\displaystyle=\left(\prod_{j=1}^{t-1}\operatorname{Tr}[\pi(j;h_{j}^{t})|a_{j}\rangle\langle
a_{j}|]\right)\widetilde{\sigma}^{\prime}(t;h^{t}),$ (4.3)
where
$\widetilde{\sigma}^{\prime}(t;h^{t})\coloneqq(\mathcal{T}^{x_{t-1},a_{t-1},x_{t}}\circ\dotsb\circ\mathcal{T}^{x_{1},a_{1},x_{2}})(\widetilde{\sigma}(1;x_{1})).$
(4.4)
Policy optimization is then the task of optimizing the reward up to a given
time $t$ with respect to the density operators $\\{\pi(j;h^{j})$: $1\leq j\leq
t-1,~{}h^{j}\in\\{0,1\\}^{2j-1}\\}$ that define a policy.
Now, before getting to Theorem 4.1, let us consider how policies for
elementary link generation should be evaluated. From Definition 2.1, Remark
2.1, and Theorem 2.2, we have that the quantity $\mathbb{E}[\widetilde{F}(t)]$
represents the expected total reward in the decision process corresponding to
elementary link generation. The objective function when optimizing over
policies would thus be the quantity $\mathbb{E}[\widetilde{F}(t)]$. Using this
quantity makes sense from the perspective of elementary link generation in a
quantum network, because with higher elementary link fidelities more joining
measurements can be performed in order to obtain high-fidelity entanglement
distribution over longer distances without having to perform entanglement
distillation. Another way to justify using $\mathbb{E}[\widetilde{F}(t)]$ as
the objective function is by considering two alternatives.
The first alternative to $\mathbb{E}[\widetilde{F}(t)]$ is the expected link
value $\mathbb{E}[X(t)]$. If we use $\mathbb{E}[X(t)]$ as the objective
function for policy optimization, then it is clear that the policy consisting
of the action “request” at every time step before the link is established, and
the action “wait” at every time step after the link is established, is
optimal, in the sense that it achieves the highest value of $\mathbb{E}[X(t)]$
for all $t\geq 1$. (Observe that this policy is simply the $t^{\star}=\infty$
memory cutoff policy.) A higher value of $\mathbb{E}[X(t)]$ comes, of course,
at the cost of a lower fidelity, since each “wait” action decreases the
fidelity of the quantum state stored in memory. If instead we consider
maximizing the expected fidelity $\mathbb{E}[F(t)]$ of the link given that the
link is active, then it is clear that the action “request” at each time step
is optimal, because then the quantity $\mathbb{E}[F(t)]$ is equal to the
initial fidelity $f_{0}$ at all time steps, which is the highest that can be
obtained (without entanglement distillation). (Observe that this policy is
simply the $t^{\star}=0$ memory cutoff policy.) This highest value of the
fidelity comes at the cost of a lower expected link value, since the
probability that the link is active stays at $p$ for all times under this
policy, i.e., $\Pr[X(t)=1]=p$ for all $t\geq 1$ if at every time step the
agent requests a link. The quantity
$\mathbb{E}[\widetilde{F}(t)]=\mathbb{E}[X(t)f_{M(t)}(\rho_{0})]$ by
definition incorporates the trade-off between the link value and the link
fidelity, and therefore it is a better figure of merit for elementary link
generation.
Having justified the use of $\mathbb{E}[\widetilde{F}(t)]$ as the objective
function for policy optimization, let us discuss one simple policy
optimization strategy, which is intuitive but not necessarily optimal, before
getting to our main result in Theorem 4.1. Since the agent, at each time step,
has to decide whether to keep the current link or to discard it and request a
new one, a simple policy is for the agent to deterministically pick the action
$a_{t}$ at time $t$ that maximizes the quantity
$\mathbb{E}[\widetilde{F}(t+1)]$ at the next time step. Recalling that the |
# The maximum sum of sizes of non-empty pairwise cross-intersecting families
This work is supported by NSFC (Grant No. 11931002). E-mail addresses: <EMAIL_ADDRESS> (Yang Huang), <EMAIL_ADDRESS> (Yuejian Peng, corresponding author).
Yang Huang, Yuejian Peng†
School of Mathematics, Hunan University
Changsha, Hunan, 410082, P.R. China
###### Abstract
Two families $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting if $A\cap
B\neq\emptyset$ for any $A\in\mathcal{A}$ and $B\in\mathcal{B}$. We call $t$
families $\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}$ pairwise
cross-intersecting families if $\mathcal{A}_{i}$ and $\mathcal{A}_{j}$ are
cross-intersecting when $1\leq i<j\leq t$. Additionally, if
$\mathcal{A}_{j}\neq\emptyset$ for each $j\in[t]$, then we say that
$\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}$ are non-empty pairwise
cross-intersecting. Let $\mathcal{A}_{1}\subset{[n]\choose
k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ be non-empty pairwise
cross-intersecting families with $t\geq 2$, $k_{1}\geq k_{2}\geq\cdots\geq
k_{t}$, and $n\geq k_{1}+k_{2}$. We determine the maximum value of
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ and characterize all extremal families.
This answers a question of Shi, Frankl and Qian [Combinatorica 42 (2022)] and
unifies results of Frankl and Tokushige [J. Combin. Theory Ser. A 61 (1992)]
and Shi, Frankl and Qian [Combinatorica 42 (2022)]. The key techniques in
previous works cannot be extended to our situation. A result of Kruskal-Katona
is applied to allow us to consider only families $\mathcal{A}_{i}$ whose
elements are the first $|\mathcal{A}_{i}|$ elements in lexicographic order. We
bound $\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ by a function $f(R)$ of the last
element $R$ (in the lexicographic order) of $\mathcal{A}_{1}$, introduce the
concepts ‘$c$-sequential’ and ‘down-up family’, and show that $f(R)$ has
several types of local convexities.
Key words: Cross-Intersecting families; Extremal finite sets
2010 Mathematics Subject Classification. 05D05, 05C65, 05D15.
## 1 Introduction
Let $[n]=\\{1,2,\dots,n\\}$. For $0\leq k\leq n$, let ${[n]\choose k}$ denote
the family of all $k$-subsets of $[n]$. A family $\mathcal{A}$ is $k$-uniform
if $\mathcal{A}\subset{[n]\choose k}$. A family $\mathcal{A}$ is intersecting
if $A\cap B\neq\emptyset$ for any $A$ and $B\in\mathcal{A}$. Much research
in extremal set theory is inspired by the foundational result of
Erdős–Ko–Rado [6] showing that a maximum $k$-uniform intersecting family is a
full star. This theorem of Erdős–Ko–Rado has many interesting generalizations.
Two families $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting if $A\cap
B\neq\emptyset$ for any $A\in\mathcal{A}$ and $B\in\mathcal{B}$. We call $t$
$(t\geq 2)$ families $\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}$
pairwise cross-intersecting families if $\mathcal{A}_{i}$ and
$\mathcal{A}_{j}$ are cross-intersecting when $1\leq i<j\leq t$. Additionally,
if $\mathcal{A}_{j}\neq\emptyset$ for each $j\in[t]$, then we say that
$\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}$ are non-empty pairwise
cross-intersecting. The following result was proved by Hilton.
###### Theorem 1.1 (Hilton, [16]).
Let $n,k$ and $t$ be positive integers with $n\geq 2k$ and $t\geq 2$. If
$\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}\subset{[n]\choose k}$
are pairwise cross-intersecting, then
$\displaystyle\sum_{i=1}^{t}|\mathcal{A}_{i}|\leq\begin{cases}{n\choose
k},&\text{if $t\leq\frac{n}{k}$};\\\ t{n-1\choose k-1},&\text{if
$t\geq\frac{n}{k}$},\end{cases}$
and the bound is tight. If
$|\mathcal{A}_{1}|\geq|\mathcal{A}_{2}|\geq\cdots\geq|\mathcal{A}_{t}|$,
$n\neq 2k$ when $t=2$, and the equality holds, then either
$\mathcal{A}_{1}={[n]\choose k}$,
$\mathcal{A}_{2}=\cdots=\mathcal{A}_{t}=\emptyset$ and $t\leq\frac{n}{k}$, or
$\mathcal{A}_{1}=\mathcal{A}_{2}=\cdots=\mathcal{A}_{t}=\\{F\in{[n]\choose
k}:x\in F,\ {\rm where}\ x\in[n]\\}$ and $t\geq\frac{n}{k}$.
For non-empty situation, Hilton and Milner gave the following result.
###### Theorem 1.2 (Hilton–Milner, [14]).
Let $n$ and $k$ be positive integers with $n\geq 2k$ and
$\mathcal{A},\mathcal{B}\subset{[n]\choose k}$. If $\mathcal{A}$ and
$\mathcal{B}$ are non-empty cross-intersecting, then
$|\mathcal{A}|+|\mathcal{B}|\leq{n\choose k}-{n-k\choose k}+1.$
The upper bound is achievable at $\mathcal{A}=\\{[k]\\}$ and
$\mathcal{B}=\\{F\in{[n]\choose k}:F\cap[k]\neq\emptyset\\}$. More generally,
Frankl and Tokushige showed that
###### Theorem 1.3 (Frankl-Tokushige, [11]).
Let $\mathcal{A}\subset{[n]\choose k}$ and $\mathcal{B}\subset{[n]\choose l}$
be non-empty cross-intersecting families with $n\geq k+l$ and $k\geq l$. Then
$|\mathcal{A}|+|\mathcal{B}|\leq{n\choose k}-{n-l\choose k}+1.$
The upper bound is achievable at $\mathcal{A}=\\{[l]\\}$ and
$\mathcal{B}=\\{F\in{[n]\choose k}:F\cap[l]\neq\emptyset\\}$. Borg and Feghali
[4] got the analogous maximum sum problem for the case when
$\mathcal{A}\subset{[n]\choose\leq r}$ and $\mathcal{B}\subset{[n]\choose\leq
s}$.
###### Theorem 1.4 (Borg–Feghali, [4]).
Let $n\geq 1,1\leq r\leq s,\mathcal{A}\subset{[n]\choose\leq r}$ and
$\mathcal{B}\subset{[n]\choose\leq s}$. If $\mathcal{A}$ and $\mathcal{B}$ are
non-empty cross-intersecting, then
$|\mathcal{A}|+|\mathcal{B}|\leq 1+\sum_{i=1}^{s}\left({n\choose
i}-{n-r\choose i}\right),$
and equality holds if $\mathcal{A}=\\{[r]\\}$ and
$\mathcal{B}=\\{B\in{[n]\choose\leq s}:B\cap[r]\neq\emptyset\\}.$
Recently, Shi, Frankl and Qian proved the following result.
###### Theorem 1.5 (Shi–Frankl–Qian, [21]).
Let $n,k,l,r$ be integers with $n\geq k+l,l\geq r\geq 1$, $c$ be a positive
constant and $\mathcal{A}\subset{[n]\choose k},\mathcal{B}\subset{[n]\choose
l}$. If $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting and
${n-r\choose l-r}\leq|\mathcal{B}|\leq{n-1\choose l-1}$, then
$|\mathcal{A}|+c|\mathcal{B}|\leq\textup{max}\left\\{{n\choose k}-{n-r\choose
k}+c{n-r\choose l-r},\,{n-1\choose k-1}+c{n-1\choose l-1}\right\\}$
and the upper bound is attained if and only if one of the following holds:
(i).
${n\choose k}-{n-r\choose k}+c{n-r\choose l-r}\geq{n-1\choose
k-1}+c{n-1\choose l-1},$ (1)
$n>k+l$, $\mathcal{A}=\\{A\in{[n]\choose k}:A\cap[r]\neq\emptyset\\}$ and
$\mathcal{B}=\\{B\in{[n]\choose l}:[r]\subset B\\}$;
(ii).
${n\choose k}-{n-r\choose k}+c{n-r\choose l-r}\leq{n-1\choose
k-1}+c{n-1\choose l-1},$ (2)
$n>k+l$, $\mathcal{A}=\\{A\in{[n]\choose k}:i\in A\\}$ and
$\mathcal{B}=\\{B\in{[n]\choose l}:i\in B\\}$ for some $i\in[n]$;
(iii). $n=k+l,c<1$, $\mathcal{B}\subset{[n]\choose l}$ with
$|\mathcal{B}|={n-r\choose l-r}$ and $\mathcal{A}={[n]\choose
k}\setminus\overline{\mathcal{B}}$;
(iv). $n=k+l,c=1$, $\mathcal{B}\subset{[n]\choose l}$ with ${n-r\choose
l-r}\leq|\mathcal{B}|\leq{n-1\choose l-1},\mathcal{A}={[n]\choose
k}\setminus\overline{\mathcal{B}}$;
(v). $n=k+l,c>1$, $\mathcal{B}\subset{[n]\choose l}$ with
$|\mathcal{B}|={n-1\choose l-1}$ and $\mathcal{A}={[n]\choose
k}\setminus\overline{\mathcal{B}}$;
where $\overline{\mathcal{B}}=\\{[n]\setminus B:B\in\mathcal{B}\\}$.
Setting $c=t-1$ in Theorem 1.5, they got the following interesting corollary
which is a generalization of Theorem 1.2.
###### Corollary 1.6 (Shi–Frankl–Qian, [21]).
Let $n$ and $k$ be positive integers with $n\geq 2k$ and $t\geq 2$. If
$\mathcal{A}_{1},\mathcal{A}_{2},\dots,\mathcal{A}_{t}\subset{[n]\choose k}$
are non-empty pairwise cross-intersecting families, then
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}\leq\textup{max}\left\\{{n\choose
k}-{n-k\choose k}+t-1,t{n-1\choose k-1}\right\\},$
and the upper bound is sharp.
Furthermore, Shi, Frankl and Qian [21] proposed the following problem.
###### Problem 1.7.
(Shi–Frankl–Qian, [21]) Let $\mathcal{A}_{1}\subset{[n]\choose
k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ be non-empty pairwise
cross-intersecting families with $t\geq 2$, $k_{1}\geq k_{2}\geq\cdots\geq
k_{t}$, and $n\geq k_{1}+k_{2}$. Is it true that
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}\leq\max\left\\{{n\choose
k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose
k_{i}-k_{t}}},\sum_{i=1}^{t}{n-1\choose k_{i}-1}\right\\}?$
As mentioned above, Shi, Frankl and Qian [21] obtained a positive answer
to the above problem for the special case that $k_{1}=k_{2}=\cdots=k_{t}$
(Corollary 1.6) by taking $c=t-1$ in the result of the maximum value of
$|\mathcal{A}|+c|\mathcal{B}|$ for two non-empty cross-intersecting families
$\mathcal{A}$ and $\mathcal{B}$ (Theorem 1.5). This approach does not yield a
tight upper bound when elements in different families have different
orders. In this paper, we give a positive answer to the above problem. The
following theorem is our main result.
###### Theorem 1.8.
Let $\mathcal{A}_{1}\subset{[n]\choose
k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ be non-empty pairwise
cross-intersecting families with $t\geq 2$, $k_{1}\geq k_{2}\geq\cdots\geq
k_{t}$, and $n\geq k_{1}+k_{2}$. Then
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}\leq\textup{max}\left\\{{n\choose
k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose
k_{i}-k_{t}}},\,\,\sum_{i=1}^{t}{n-1\choose k_{i}-1}\right\\}.$
The equality holds if and only if one of the following holds.
(i) ${n\choose k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose
k_{i}-k_{t}}}>\sum_{i=1}^{t}{n-1\choose k_{i}-1}$, and there is some
$k_{t}$-element set $T\subset[n]$ such that
$\mathcal{A}_{1}=\\{F\in{[n]\choose k_{1}}:F\cap T\neq\emptyset\\}$ and
$\mathcal{A}_{j}=\\{F\in{[n]\choose k_{j}}:T\subset F\\}$ for each
$j\in[2,t]$;
(ii) ${n\choose k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose
k_{i}-k_{t}}}\leq\sum_{i=1}^{t}{n-1\choose k_{i}-1}$, there are some $i\neq j$
such that $n>k_{i}+k_{j}$, and there is some $a\in[n]$ such that
$\mathcal{A}_{j}=\\{F\in{[n]\choose k_{j}}:a\in F\\}$ for each $j\in[t]$;
(iii) $t=2,n=k_{1}+k_{2}$, $\mathcal{A}_{1}\subset{[n]\choose k_{1}}$ and
$\mathcal{A}_{2}={[n]\choose k_{2}}\setminus\overline{\mathcal{A}_{1}}$;
(iv) $t\geq 3,k_{1}=k_{2}=\cdots=k_{t}=k,n=2k$ and
$\mathcal{A}_{1}=\mathcal{A}_{2}=\cdots=\mathcal{A}_{t}={[n]\choose
k}\setminus\overline{\mathcal{A}_{1}}$.
The case $t=2$ of the above theorem was already established by Theorem 1.3 and
Theorem 1.5. Our method works for $t=2$ as well, so we still include this case in the
proof. In both [21] and our paper, a result of Kruskal-Katona (Theorem 2.1) is
applied to allow us to consider only families $\mathcal{A}_{i}$ whose elements
are the first $|\mathcal{A}_{i}|$ elements in lexicographic order. The proof
technique in [21] (this kind of technique is also used by Wang and Zhang [22],
and Frankl and Kupavskii [10]) cannot be extended to more than two families of
subsets with different orders. We analyze the relationship between
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ and the last element (in the lexicographic
order) of $\mathcal{A}_{1}$. Let $R$ be the last element of $\mathcal{A}_{1}$,
we will bound $\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ by a function $f(R)$. In
order to do this, we will prove a stronger version of a result of Frankl-
Kupavskii [10] (Proposition 2.7). Namely, we prove Proposition 2.8 which gets
rid of some restrictions of Proposition 2.7. The main challenge left is to
estimate $f(R)$. In order to do this, we introduce new concepts
‘$c$-sequential’ and ‘down-up family’, and show four types of ‘local
convexity’ of $f(R)$ in Lemmas 2.11, 2.12, 2.13 and 2.14.
There are also studies regarding the problem of maximizing the product of
sizes of pairwise cross-intersecting families. This problem was first
addressed by Pyber [20] who proved that if $\mathcal{A}\subset{[n]\choose k}$
and $\mathcal{B}\subset{[n]\choose l}$ are cross-intersecting and either
$k=l\leq n/2$ or $k<l$ and $n\geq 2l+k-2$, then
$|\mathcal{A}||\mathcal{B}|\leq{n-1\choose k-1}{n-1\choose l-1}$.
Subsequently, Matsumoto and Tokushige [19] proved this for any $k\leq l\leq
n/2$, and they also determined the optimal structures. There are also related
product-version results in [2, 3, 9, 12, 13] (due to the limitations of our
knowledge, we might have missed some references). Note that
$|\mathcal{A}||\mathcal{B}|\leq{n-1\choose k-1}{n-1\choose l-1}$ for two
cross-intersecting families implies that
$\prod_{i=1}^{t}|\mathcal{A}_{i}|\leq\prod_{i=1}^{t}{n-1\choose k_{i}-1}$ for
pairwise cross-intersecting families $\mathcal{A}_{1}\subset{[n]\choose
k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ and this bound is tight
by taking each $\mathcal{A}_{i}$ to be a full star. For the sum-version, tight
bound for the sum of sizes of two cross-intersecting families will not imply
the tight bound of the sum of sizes of more pairwise cross-intersecting
families of subsets of different orders.
Families $\mathcal{F}_{1},\dots,\mathcal{F}_{t}\subset{[n]\choose k}$ are said
to be cross-intersecting if $F_{1}\cap\cdots\cap F_{t}\neq\emptyset$ for all
$F_{i}\in\mathcal{F}_{i},i\in[t]$. If $F_{1}\cup\cdots\cup F_{t}\neq[n]$ for
all $F_{i}\in\mathcal{F}_{i},i\in[t]$, then we say
$\mathcal{F}_{1},\dots,\mathcal{F}_{t}\subset{[n]\choose k}$ are cross-union.
Cross-union can be viewed as the dual notion of cross-intersecting. It’s
easy to see that $\mathcal{F}_{1},\cdots,\mathcal{F}_{t}$ are cross-
intersecting if and only if
$\overline{\mathcal{F}_{1}},\cdots,\overline{\mathcal{F}_{t}}$ are cross-
union, where $\overline{\mathcal{F}_{i}}=\\{[n]-F:F\in\mathcal{F}_{i}\\}$.
Recently, Cambie–Kim–Liu–Tran [5] proved a conjecture of Frankl [7] about the
maximum sum of the sizes of cross union families. Formulated in terms of
cross-intersecting families, their result is
###### Theorem 1.9.
(Cambie–Kim–Liu–Tran, [5]) Let $n=((t-1)k-l)/(t-2)$ where $1\leq l\leq n-k$
and $t\geq 4l+1$. If $\mathcal{F}_{1},\dots,\mathcal{F}_{t}\subset{[n]\choose
k}$ are non-empty cross-intersecting, then
$|\mathcal{F}_{1}|+\cdots+|\mathcal{F}_{t}|\leq t{n-1\choose k-1}.$
The condition $l\leq n-k$ (i.e., $n\geq{t\over t-1}k$) in the above theorem
is natural since $n<{t\over t-1}k$ implies that all families
$\mathcal{F}_{1},\dots,\mathcal{F}_{t}\subset{[n]\choose k}$ are cross-
intersecting automatically. However, the condition $l\geq 1$ (i.e.
$n\leq{(t-1)k-1\over t-2}$) is imposed because an upper bound on the sum of the
sizes of $(t-1)$ cross-intersecting families yields an upper bound on the sum of
the sizes of $t$ cross-intersecting families; in other words, results
on small $t$ build a foundation for large $t$. Indeed, due to the
requirement of large $t$ in the above theorem, the authors of [5] also raised the
natural question of what happens when $t$ is smaller. Our result is a basis
for a more general question that does not require all $t$ families to be
$k$-uniform and that only requires any $s$ families among these $t$ families
to be cross-intersecting. Precisely, for $2\leq s\leq t$, we say that
$\mathcal{F}_{1}\subset{[n]\choose
k_{1}},\dots,\mathcal{F}_{t}\subset{[n]\choose k_{t}}$ are $s$-wise-cross-
intersecting families if $\mathcal{F}_{i_{1}},\ldots,\mathcal{F}_{i_{s}}$ are
cross-intersecting for any $1\leq i_{1}<i_{2}<\cdots<i_{s}\leq t$.
###### Question 1.10.
Let $\mathcal{F}_{1}\subset{[n]\choose
k_{1}},\dots,\mathcal{F}_{t}\subset{[n]\choose k_{t}}$ be $s$-wise-cross-
intersecting with $k_{1}\geq k_{2}\geq\dots\geq k_{t}$, $2\leq s\leq t$ and
$n\geq(k_{1}+\cdots+k_{s})/(s-1)$. What is
$\max\sum_{i=1}^{t}|\mathcal{F}_{i}|$?
The condition $n\geq(k_{1}+\cdots+k_{s})/(s-1)$ is imposed since all
$\mathcal{F}_{1}\subset{[n]\choose
k_{1}},\dots,\mathcal{F}_{s}\subset{[n]\choose k_{s}}$ are automatically
cross-intersecting if $n<(k_{1}+\cdots+k_{s})/(s-1)$. Clearly, if
$\mathcal{F}_{1}\subset{[n]\choose
k_{1}},\dots,\mathcal{F}_{t}\subset{[n]\choose k_{t}}$ are $s$-wise-cross-
intersecting, then $\mathcal{F}_{1},\dots,\mathcal{F}_{t}$ are $(s-1)$-wise
cross-intersecting, hence a result for $s_{0}$-wise cross-intersecting
families yields a result for all $s$-wise cross-intersecting families for
$s\geq s_{0}$ and the same range of $n$. Theorem 1.8 answers the question
above for $s\in[2,t]$ and $n\geq k_{1}+k_{2}$. It’s interesting to study
further for $s\geq 3$ and $(k_{1}+\cdots+k_{s})/(s-1)\leq n<k_{1}+k_{2}$.
The condition that $n\geq k_{1}+k_{2}$ in Theorem 1.8 is to guarantee that no
two families are automatically cross-intersecting. If $n<k_{1}+k_{t}$, then
$\mathcal{A}_{1}$ and $\mathcal{A}_{i}$ are automatically cross-intersecting
for each $i\in[2,t]$ and we can remove $\mathcal{A}_{1}$. Hence $n\geq
k_{1}+k_{t}$ is a natural condition when we consider extremal problems for
non-empty pairwise cross-intersecting families
$\mathcal{A}_{1}\subset{[n]\choose k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ with $t\geq 2$,
$k_{1}\geq k_{2}\geq\cdots\geq k_{t}$. On the other hand, it is interesting to
consider the same question under the condition $k_{1}+k_{t}\leq
n<k_{1}+k_{2}$. For example, if $k_{1}+k_{3}\leq n<k_{1}+k_{2}$, then all
$k_{1}$-uniform families and all $k_{2}$-uniform families are automatically
cross-intersecting, on the other hand, $k_{i}$-uniform families and
$k_{j}$-uniform families are not automatically cross-intersecting for
$\\{i,j\\}\neq\\{1,2\\}$. Our method can go further by relaxing the
requirement to $n\geq k_{1}+k_{t}$. Indeed, our method serves as a basis;
additional ingredients are needed in the proof, and we will present them in another manuscript.
## 2 Proof for Theorem 1.8
When we write a set $A=\\{a_{1},a_{2},\ldots,a_{s}\\}\subset[n]$, we always
assume that $a_{1}<a_{2}<\ldots<a_{s}$ throughout the paper. Let us introduce
the lexicographic (lex for short) order of subsets of positive integers. Let
$A$ and $B$ be finite subsets of the set of positive integers
$\mathbb{Z}_{>0}$. We say that $A\prec B$ if either $A\supset B$ or
$\min(A\setminus B)<\min(B\setminus A)$. In particular, $A\prec A$. Let
$\mathcal{L}([n],r,k)$ denote the first $r$ subsets in ${[n]\choose k}$ in the
lex order. Given a set $R$, we denote
$\mathcal{L}([n],R,k):=\\{F\in{[n]\choose k}:F\prec R\\}$. Let
$\mathcal{F}\subset{[n]\choose k}$ be a family, we say $\mathcal{F}$ is
L-initial if $\mathcal{F}=\mathcal{L}([n],|\mathcal{F}|,k)$.
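To fix ideas, here is a small instance of these definitions (an illustrative example only). For $n=4$ and $k=2$, the lex order on ${[4]\choose 2}$ is
$\\{1,2\\}\prec\\{1,3\\}\prec\\{1,4\\}\prec\\{2,3\\}\prec\\{2,4\\}\prec\\{3,4\\},$
so $\mathcal{L}([4],3,2)=\\{\\{1,2\\},\\{1,3\\},\\{1,4\\}\\}=\mathcal{L}([4],\\{1,4\\},2)$, and this family is L-initial.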
The well-known Kruskal-Katona theorem [17, 18] will play an important role in
our discussion, an equivalent formulation of which was given in [8, 15] as
follows.
###### Theorem 2.1 (Kruskal-Katona, [17, 18]).
For $\mathcal{A}\subset{[n]\choose k}$ and $\mathcal{B}\subset{[n]\choose l}$,
if $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting, then
$\mathcal{L}([n],|\mathcal{A}|,k)$ and $\mathcal{L}([n],|\mathcal{B}|,l)$ are
cross-intersecting as well.
By Theorem 2.1, to prove the quantitative part of Theorem 1.8 we may assume
that $\mathcal{A}_{i}$ is L-initial, that is,
$\mathcal{A}_{i}=\mathcal{L}([n],|\mathcal{A}_{i}|,k_{i})$ for each $i\in[t]$.
From now on, we assume that $\mathcal{A}_{1}\subset{[n]\choose
k_{1}},\mathcal{A}_{2}\subset{[n]\choose
k_{2}},\dots,\mathcal{A}_{t}\subset{[n]\choose k_{t}}$ are non-empty pairwise
cross-intersecting families with $k_{1}\geq k_{2}\geq\cdots\geq k_{t},n\geq
k_{1}+k_{2}$, and $\mathcal{A}_{j}$ is L-initial for each $j\in[t]$.
###### Remark 2.2.
If $|\mathcal{A}_{i}|\leq{n-1\choose k_{i}-1}$ for each $i\in[t]$, then
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}\leq\sum_{i=1}^{t}{{n-1\choose k_{i}-1}}$,
as desired.
From now on, we may assume that $|\mathcal{A}_{i}|\geq{n-1\choose k_{i}-1}$
for some $i\in[t]$, and we fix such an $i$.
### 2.1 Sketch of the proof of Theorem 1.8
In this section, we give an outline of the proof and leave the proofs of some
propositions and lemmas to Subsection 2.2 and Section 3.
We will first show that $|\mathcal{A}_{i}|$ cannot be too large (see
Proposition 2.3, whose proof will be given in Subsection 2.2). Let
$m=\min_{j\neq i}k_{j}.$ (3)
###### Proposition 2.3.
$|\mathcal{A}_{i}|\leq{n-1\choose k_{i}-1}+\cdots+{n-m\choose k_{i}-1}$.
One important ingredient of the proof is to bound
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ by a function of the last element of
$\mathcal{A}_{i}$. Let us list the set of the last elements of all possible
$\mathcal{A}_{i}$.
Let $Z={n-2\choose k_{i}-1}+\cdots+{n-m\choose k_{i}-1}$, $R_{0}=\\{1,n-k_{i}+2,n-k_{i}+3,\dots,n\\}$, $R_{1}=\\{2,3,\cdots,k_{i}+1\\}$ and
$R_{Z}=\\{m,n-k_{i}+2,n-k_{i}+3,\dots,n\\}$, and let $R_{0}\precneqq
R_{1}\precneqq\cdots\precneqq R_{Z}$ be consecutive sets of ${[n]\choose k_{i}}$ in lex order, so that $|R_{j}|=k_{i}$
for each $j\in[0,Z]$. We denote
$\mathcal{R}:=\\{R_{0},R_{1},\dots,R_{Z}\\}.$ (4)
By Proposition 2.3, we have ${n-1\choose
k_{i}-1}\leq|\mathcal{A}_{i}|\leq{n-1\choose k_{i}-1}+Z$. Since
$\mathcal{A}_{i}$ is L-initial, we have the following remark.
###### Remark 2.4.
Let $0\leq r\leq Z$. If $|\mathcal{A}_{i}|={n-1\choose k_{i}-1}+r$, then
$\mathcal{A}_{i}=\mathcal{L}([n],R_{r},k_{i})$.
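For instance (a toy illustration with $n=7$ and $k_{i}=3$, so that ${n-1\choose k_{i}-1}=15$, and assuming $Z\geq 2$), if $|\mathcal{A}_{i}|=17$ then $r=2$ and $\mathcal{A}_{i}=\mathcal{L}([7],\\{2,3,5\\},3)$, since $R_{1}=\\{2,3,4\\}$ and $R_{2}=\\{2,3,5\\}$ are the two sets immediately following $R_{0}=\\{1,6,7\\}$ in lex order.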
Let $R$ be the last element of $\mathcal{A}_{i}$ (we call $R$ the ID of
$\mathcal{A}_{i}$); clearly $R\in\mathcal{R}$. We will bound
$\sum_{i=1}^{t}{|\mathcal{A}_{i}|}$ by a function of $R$. In order to do this,
we will extend a result of Frankl-Kupavskii (Proposition 2.7).
###### Definition 2.5.
We say that $A$ and $B$ strongly intersect at their last element $q$ if $A\cap
B=\\{q\\}$ and $A\cup B=[q]$. We also say $A$ is $B$’s partner.
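For example (illustration only), $A=\\{1,3,4\\}$ and $B=\\{2,4\\}$ strongly intersect at their last element $q=4$, since
$A\cap B=\\{4\\}\quad{\rm and}\quad A\cup B=[4];$
thus each of $A$ and $B$ is the other's partner.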
###### Definition 2.6.
Let $t\geq 2$. We say that $\mathcal{F}_{1}\subset{[n]\choose
l_{1}},\mathcal{F}_{2}\subset{[n]\choose
l_{2}},\dots,\mathcal{F}_{t}\subset{[n]\choose l_{t}}$ are maximal pairwise
cross-intersecting if whenever $\mathcal{F}^{\prime}_{1}\subset{[n]\choose
l_{1}},\mathcal{F}^{\prime}_{2}\subset{[n]\choose
l_{2}},\dots,\mathcal{F}^{\prime}_{t}\subset{[n]\choose l_{t}}$ are pairwise
cross-intersecting with
$\mathcal{F}^{\prime}_{1}\supset\mathcal{F}_{1},\dots,\mathcal{F}^{\prime}_{t}\supset\mathcal{F}_{t}$,
then
$\mathcal{F}_{1}=\mathcal{F}^{\prime}_{1},\dots,\mathcal{F}_{t}=\mathcal{F}^{\prime}_{t}$.
###### Proposition 2.7 (Frankl-Kupavskii [10]).
Let $a,b\in\mathbb{Z}_{>0},a+b\leq n$. Let $P$ and $Q$ be non-empty subsets of
$[n]$ with $|P|\leq a$ and $|Q|\leq b$. If $Q$ is the partner of $P$, then
$\mathcal{L}([n],P,a)$ and $\mathcal{L}([n],Q,b)$ are maximal cross-
intersecting families.
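As a small illustration of Proposition 2.7 (an example only, with $n=5$, $a=2$, $b=3$), take $P=\\{2\\}$, whose partner is $Q=\\{1,2\\}$. Then $\mathcal{L}([5],P,2)$ consists of the seven $2$-sets with minimum at most $2$, while
$\mathcal{L}([5],Q,3)=\\{\\{1,2,3\\},\\{1,2,4\\},\\{1,2,5\\}\\};$
one checks directly that these two families are cross-intersecting and that neither can be enlarged without destroying this property.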
This result cannot be applied to our situation directly. We will get rid of
the condition $|Q|\leq b$ in Proposition 2.7 and show the following result in
Subsection 2.2.
###### Proposition 2.8.
Let $a,b,n\in\mathbb{Z}_{>0}$ and $a+b\leq n$. For $P\subset[n]$ with $|P|\leq
a$, let $Q$ be the partner of $P$. Then $\mathcal{L}([n],Q,b)$ is the maximum
L-initial $b$-uniform family that is cross-intersecting to
$\mathcal{L}([n],P,a)$. Moreover, $\mathcal{L}([n],Q,b)\neq\emptyset$ if and
only if $\min P\leq b$.
We will give a formula to calculate the size of an L-initial family as
follows. The proof will be given in Subsection 2.2.
###### Proposition 2.9.
Let $k,l,n$ be positive integers. Let
$A=\\{a_{1},a_{2},\dots,a_{s_{a}}\\}\subset[n]$ and
$B=\\{b_{1},b_{2},\dots,b_{s_{b}}\\}$ be $A$’s partner. Then
$\displaystyle|\mathcal{L}([n],A,k)|={n-b_{1}\choose k-b_{1}}+{n-b_{2}\choose
k-b_{2}+1}+\cdots+{n-b_{s_{b}}\choose k-b_{s_{b}}+s_{b}-1},$ (5)
$\displaystyle|\mathcal{L}([n],B,l)|={n-a_{1}\choose l-a_{1}}+{n-a_{2}\choose
l-a_{2}+1}+\cdots+{n-a_{s_{a}}\choose l-a_{s_{a}}+s_{a}-1}.$ (6)
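As a quick sanity check of (5) (an illustrative computation), take $n=5$, $k=2$ and $A=\\{1,3\\}$, whose partner is $B=\\{2,3\\}$. Then
$\mathcal{L}([5],\\{1,3\\},2)=\\{\\{1,2\\},\\{1,3\\}\\},\qquad{5-2\choose 2-2}+{5-3\choose 2-3+1}={3\choose 0}+{2\choose 0}=2,$
in agreement with the formula.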
Combining Proposition 2.8 and Proposition 2.9, we can bound
$\sum_{j=1}^{t}|\mathcal{A}_{j}|$ based on the ID of $\mathcal{A}_{i}$ as
follows.
###### Corollary 2.10.
Let $R=\\{a_{1},a_{2},\dots,a_{k_{i}}\\}$ be the ID of $\mathcal{A}_{i}$ and
$T=\\{b_{1},b_{2},\dots,b_{s_{b}}\\}$ be the partner of $R$. Then
$\displaystyle\sum_{j=1}^{t}|\mathcal{A}_{j}|$
$\displaystyle\leq{n-b_{1}\choose k_{i}-b_{1}}+{n-b_{2}\choose
k_{i}-b_{2}+1}+\cdots+{n-b_{s_{b}}\choose k_{i}-b_{s_{b}}+s_{b}-1}$ (7)
$\displaystyle\quad+\sum_{j\neq i}\left[{n-a_{1}\choose
k_{j}-a_{1}}+{n-a_{2}\choose k_{j}-a_{2}+1}+\cdots+{n-a_{k_{i}}\choose
k_{j}-a_{k_{i}}+k_{i}-1}\right]$
$\displaystyle\overset{\triangle}{=}f_{i}(R).$
Thus, to show Theorem 1.8, it is sufficient to show that
$f_{i}(R)\leq\max\left\\{{n\choose k_{1}}-{n-k_{t}\choose
k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose
k_{i}-k_{t}}},\,\,\sum_{i=1}^{t}{n-1\choose k_{i}-1}\right\\}.$
Note that
$\displaystyle
f_{1}(\\{1,n-k_{1}+2,n-k_{1}+3,\ldots,n\\})=\sum_{i=1}^{t}{n-1\choose
k_{i}-1},{\rm\ and\ correspondingly,}$
$\displaystyle|\mathcal{A}_{j}|={n-1\choose k_{j}-1}{\rm\ for\ each\ }j\in[t]$
(8)
in view of (7). And
$\displaystyle f_{1}(\\{m\\}\cup[n-k_{1}+2,n])$ $\displaystyle=$
$\displaystyle f_{1}(\\{k_{t}\\}\cup[n-k_{1}+2,n])(\rm\ in\ view\ of\ (3))$ $\displaystyle=$ $\displaystyle{n\choose k_{1}}-{n-k_{t}\choose
k_{1}}+\sum_{i=2}^{t}{{n-k_{t}\choose k_{i}-k_{t}}}{\rm\ and\
correspondingly,}$ $\displaystyle|\mathcal{A}_{1}|$ $\displaystyle=$
$\displaystyle{n\choose k_{1}}-{n-k_{t}\choose k_{1}},{\rm\ and\
}|\mathcal{A}_{j}|={n-k_{t}\choose k_{j}-k_{t}}{\rm\ if\ }j\in[2,t]$ (9)
in view of (7).
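As a numerical illustration of (7)–(9) (a toy example with $t=2$, $k_{1}=k_{2}=3$ and $n=7$, so $k_{t}=m=3$), formula (7) gives
$f_{1}(\\{1,6,7\\})=15+15=30=\sum_{i=1}^{2}{6\choose 2}\qquad{\rm and}\qquad f_{1}(\\{3,6,7\\})=31+1=32={7\choose 3}-{4\choose 3}+{4\choose 0},$
where in each case the first summand is $|\mathcal{L}([7],R,3)|$ and the second is the contribution of $j=2$ in (7).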
Hence, to show Theorem 1.8, it is sufficient to show that
$f_{i}(R)\leq\max\\{f_{1}(\\{1,n-k_{1}+2,n-k_{1}+3,\ldots,n\\}),f_{1}(\\{m,n-k_{1}+2,n-k_{1}+3,\ldots,n\\})\\}$.
In order to do this, we will show that
$f_{i}(R)\leq\max\\{f_{i}(\\{1,n-k_{i}+2,n-k_{i}+3,\ldots,n\\}),f_{i}(\\{m,n-k_{i}+2,n-k_{i}+3,\ldots,n\\})\\}$,
$\max_{i\in[t]}f_{i}(\\{1,n-k_{i}+2,n-k_{i}+3,\ldots,n\\})=f_{1}(\\{1,n-k_{1}+2,n-k_{1}+3,\ldots,n\\})$ and
$\max_{i\in[t]}f_{i}(\\{m,n-k_{i}+2,n-k_{i}+3,\ldots,n\\})=f_{1}(\\{m,n-k_{1}+2,n-k_{1}+3,\ldots,n\\})$.
For this purpose, we will introduce the concept ‘$c$-sequential’ and show some
‘local convexity’ of $f_{i}(R)$.
Let $\mathcal{A}\subset{[n]\choose k}$ be a family and $c\in[k]$. (For a set $A\subset[n]$, we denote $\max A=\max\\{a:a\in A\\}$ and
$\min A=\min\\{a:a\in A\\}$.) We say that
$\mathcal{A}$ is $c$-sequential if there are $A\subset[n]$ with $|A|=k-c$ and integers $a,b$ with
$\max A\leq a$ and $a+c\leq b\leq n$ such that
$\mathcal{A}=\\{A\sqcup\\{a+1,\dots,a+c\\},A\sqcup\\{a+2,\dots,a+c+1\\},\dots,A\sqcup\\{b-c+1,\dots,b\\}\\}$;
we say that $A$ is the head of $\mathcal{A}$ and that $\mathcal{A}$ is
$c$-sequential from $a+c$ to $b$, and we write
$A_{1}\overset{c}{\prec}A_{2}\overset{c}{\prec}\cdots\overset{c}{\prec}A_{b-a-c+1}$,
where
$A_{1}=A\sqcup\\{a+1,\dots,a+c\\},A_{b-a-c+1}=A\sqcup\\{b-c+1,\dots,b\\}$. In
particular, if $l_{2}=l_{1}+1$, we write
$A_{l_{1}}\overset{c}{\prec}A_{l_{2}}$; if $\max A_{l_{2}}=n$, we write
$A_{l_{1}}\overset{c}{\longrightarrow}A_{l_{2}}$. Note that if
$|\mathcal{A}|=1$, then $\mathcal{A}$ is $c$-sequential for any $c\in[k]$. Let
$\mathcal{F}$ be a family and $F_{1},F_{2}\in\mathcal{F}$. If $F_{1}\precneqq
F_{2}$ and there is no $F^{\prime}\in\mathcal{F}$ such that $F_{1}\precneqq
F^{\prime}\precneqq F_{2}$, then we say $F_{1}<F_{2}$ in $\mathcal{F}$, or
$F_{1}<F_{2}$ simply if there is no confusion.
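As an illustration of this notation (an example only), take $n=7$, $k=3$, head $A=\\{2\\}$ and $c=2$. Then
$\\{2,3,4\\}\overset{2}{\prec}\\{2,4,5\\}\overset{2}{\prec}\\{2,5,6\\}\overset{2}{\prec}\\{2,6,7\\}$
is a $2$-sequential family from $4$ to $7$ with head $\\{2\\}$, and since $\max\\{2,6,7\\}=7=n$ we may also write $\\{2,3,4\\}\overset{2}{\longrightarrow}\\{2,6,7\\}$; moreover, $\\{2,3,4\\}<\\{2,3,5\\}$ in ${[7]\choose 3}$.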
Let $R$ and $R^{\prime}$ satisfy $R\prec R^{\prime}$ with the corresponding
partners $T$ and $T^{\prime}$ respectively. In order to measure
$f_{i}(R^{\prime})-f_{i}(R)$, we define
$\displaystyle\alpha(R,R^{\prime}):=|\mathcal{L}([n],R^{\prime},k_{i})|-|\mathcal{L}([n],R,k_{i})|,$
(10) $\displaystyle\beta(R,R^{\prime}):=\sum_{j\neq
i}(|\mathcal{L}([n],T,k_{j})|-|\mathcal{L}([n],T^{\prime},k_{j})|).$ (11)
Consequently,
$\displaystyle
f_{i}(R^{\prime})-f_{i}(R)=\alpha(R,R^{\prime})-\beta(R,R^{\prime}).$
We will prove the following four crucial lemmas showing some ‘local convexity’
of $f_{i}(R)$ in Section 3.
###### Lemma 2.11.
Let $c\in[k_{i}]$ and $F,G,H\in\mathcal{R}$ with
$F\overset{c}{\prec}G\overset{c}{\prec}H$. Assume that $n>k_{1}+k_{2}$ or
$t>2$. If $\alpha(F,G)\geq\beta(F,G)$, then $\alpha(G,H)>\beta(G,H)$. This
means that $f_{i}(G)\geq f_{i}(F)$ implies $f_{i}(H)>f_{i}(G)$.
Denote $\mathcal{R}_{k}:=\\{R\in\mathcal{R}:[n-k+1,n]\subset R\\}$, and
$\mathcal{R}(k):=\\{R\setminus[n-k+1,n]:R\in\mathcal{R}_{k}\\}$ for
$k\in[k_{i}-1]$. In addition, we will write $\mathcal{R}(0)=\mathcal{R}$. When
we consider $f_{i}(R),\alpha(R,T)$ and $\beta(R,T)$ for
$R,T\in\mathcal{R}_{k}$, we simply write $f_{i}(R\setminus[n-k+1,n])$ etc. In
particular, $f_{i}(\\{1\\})$ is indeed
$f_{i}(\\{1,n-k_{i}+2,n-k_{i}+3,\dots,n\\})$, and $f_{i}(\\{m\\})$ is indeed
$f_{i}(\\{m,n-k_{i}+2,n-k_{i}+3,\dots,n\\})$.
###### Lemma 2.12.
For any $j\in[0,k_{i}-1]$, let $1\leq c\leq k_{i}-j$ and
$F,G,H\in\mathcal{R}(j)$ with $F\overset{c}{\prec}G\overset{c}{\prec}H$.
Assume that $n>k_{1}+k_{2}$ or $t>2$. If $\alpha(F,G)\geq\beta(F,G)$, then
$\alpha(G,H)>\beta(G,H)$. This means that $f_{i}(G)\geq f_{i}(F)$ implies
$f_{i}(H)>f_{i}(G)$.
###### Lemma 2.13.
Suppose $k_{i}\geq 2$. Let $3\leq j\leq k_{i}+1$. Assume that $n>k_{1}+k_{2}$
or $t>2$. If $f(\\{2,3,\dots,j\\})\leq f(\\{2,3,\dots,j-1\\})$, then
$f(\\{2,3,\dots,j-1\\})<f(\\{2,3,\dots,j-2\\})$.
###### Lemma 2.14.
Let $m+1\leq j\leq m+k_{i}-1$. Assume that $n>k_{1}+k_{2}$ or $t>2$. If
$f(\\{m,m+1,\dots,j\\})\leq f(\\{m,m+1,\dots,j-1\\})$, then
$f(\\{m,m+1,\dots,j-1\\})<f(\\{m,m+1,\dots,j-2\\})$.
Combining these four lemmas, we will be able to show that
$f_{i}(R)\leq\max\\{f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}$. Let us be precise
below.
First, if $n=k_{1}+k_{2}$ and $t=2$, then note that a set
$A\in{[n]\choose k_{1}}$ intersects every set
$B\in{[n]\choose k_{2}}$ except $B=\overline{A}=[n]\setminus A$. So, for any non-empty
$\mathcal{A}_{1}\subset{[n]\choose k_{1}}$, the maximum value
of $|\mathcal{A}_{1}|+|\mathcal{A}_{2}|$ is reached when $\mathcal{A}_{2}={[n]\choose
k_{2}}\setminus\overline{\mathcal{A}_{1}}$, where $\overline{\mathcal{A}_{1}}=\\{[n]\setminus A:A\in\mathcal{A}_{1}\\}$.
Next, we may assume that $n>k_{1}+k_{2}$ or $t>2$.
For a family $\mathcal{F}$, denote
$f_{i}(\mathcal{F})=\max\\{f_{i}(F):F\in\mathcal{F}\\}$. Applying Lemma 2.11
repeatedly, we have
$f_{i}(\mathcal{R})=\max\\{f_{i}(\\{2,3,\dots,k_{i}+1\\}),f_{i}(\\{m,m+1,\dots,m+k_{i}-1\\}),f_{i}(\mathcal{R}(1))\\}.$
(12)
(Let us explain the above observation. For example, suppose that
$k_{i}=3,a<b<c\in[n]$ and $\\{a,b,c\\}\in\mathcal{R}$. Applying Lemma 2.11, we
have
$\displaystyle f_{i}(\\{a,b,c\\})$
$\displaystyle\leq\max\\{f_{i}(\\{a,b,b+1\\}),f_{i}(\\{a,b,n\\})\\}$
$\displaystyle\leq\max\\{f_{i}(\\{a,b,b+1\\}),f_{i}(\mathcal{R}(1))\\}$
$\displaystyle\leq\max\\{f_{i}(\\{a,a+1,a+2\\}),f_{i}(\mathcal{R}(1))\\}$
$\displaystyle\leq\max\\{f_{i}(\\{2,3,4\\}),f_{i}(\\{m,m+1,m+2\\}),f_{i}(\mathcal{R}(1))\\}.)$
Similarly, applying Lemma 2.12 repeatedly, we have
$\displaystyle
f_{i}(\mathcal{R}(1))=\max\\{f_{i}(\\{2,3,\dots,k_{i}\\}),f_{i}(\\{m,m+1,\dots,m+k_{i}-2\\}),f_{i}(\mathcal{R}(2))\\},$
$\displaystyle
f_{i}(\mathcal{R}(2))=\max\\{f_{i}(\\{2,3,\dots,k_{i}-1\\}),f_{i}(\\{m,m+1,\dots,m+k_{i}-3\\}),f_{i}(\mathcal{R}(3))\\},$
$\displaystyle\quad\vdots$ $\displaystyle
f_{i}(\mathcal{R}(k_{i}-1))=\max\\{f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}.$ (13)
By Lemma 2.13, we have
$\displaystyle\max\\{f_{i}(\\{2,3,\dots,k_{i}+1\\}),f_{i}(\\{2,3,\dots,k_{i}\\}),\dots,f_{i}(\\{2,3\\})\\}$
$\displaystyle\leq\max\\{f_{i}(\\{2,3,\dots,k_{i}+1\\}),f_{i}(\\{2\\})\\}$
$\displaystyle\leq\max\\{f_{i}(\\{2,3,\dots,k_{i}+1\\}),\max\\{f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}\\}$
(14)
By Lemma 2.14, we have
$\displaystyle\max\\{f_{i}(\\{m,m+1,\dots,m+k_{i}-1\\}),f_{i}(\\{m,m+1,\dots,m+k_{i}-2\\}),\dots,f_{i}(\\{m\\})\\}$
$\displaystyle=\max\\{f_{i}(\\{m,m+1,\dots,m+k_{i}-1\\}),f_{i}(\\{m\\})\\}.$
(15)
Note that $\\{m-1,n-k_{i}+2,\dots,n\\}<\\{m,m+1,\dots,m+k_{i}-1\\}$ in
$\mathcal{R}$. By Proposition 2.19,
$\beta(\\{m-1,n-k_{i}+2,\dots,n\\},\\{m,m+1,\dots,m+k_{i}-1\\})=\sum_{j\neq
i}{n-(m+k_{i}-1)\choose k_{j}-m+1}\geq 1,$
so
$f_{i}(\\{m,m+1,\dots,m+k_{i}-1\\})\leq
f_{i}(\\{m-1,n-k_{i}+2,\dots,n\\})=f_{i}(\\{m-1\\})\leq\max\\{f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}.$
(16)
Combining (12), (13), (14), (15) and (16), we have
$f_{i}(\mathcal{R})=\max\\{f_{i}(\\{2,3,\dots,k_{i}+1\\}),f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}.$
(17)
Recall that $\\{1,n-k_{i}+2,\dots,n\\}<\\{2,3,\dots,k_{i}+1\\}$ in
$\mathcal{R}$. So we have
$\alpha(\\{1\\},\\{2,3,\dots,k_{i}+1\\})=1.$
By Proposition 2.19, we get
$\beta(\\{1\\},\\{2,3,\dots,k_{i}+1\\})=\sum_{j\neq i}{n-(k_{i}+1)\choose
k_{j}-1}\geq 1.$
So $f_{i}(\\{1\\})\geq f_{i}(\\{2,3,\dots,k_{i}+1\\})$. Combining with (17),
we have
$f_{i}(\mathcal{R})=\max\\{f_{i}(\\{1\\}),f_{i}(\\{m\\})\\}.$ (18)
The proof of the quantitative part of Theorem 1.8 will be completed by showing the following
result in Subsection 2.2.
###### Proposition 2.15.
Suppose that $n\geq k_{1}+k_{2}$ and $k_{1}\geq k_{2}\geq\dots\geq k_{t}$. Let
$i\in[t]$ and let $m$ be defined as in (3). Then for $1\leq s\leq m$, we have
$f_{1}(\\{s\\})=\max\\{f_{j}(\\{s\\}):j\in[t]\\}.$
In particular,
$\displaystyle f_{1}(\\{1\\})=\max\\{f_{j}(\\{1\\}):j\in[t]\\},$
$\displaystyle f_{1}(\\{m\\})=\max\\{f_{j}(\\{m\\}):j\in[t]\\}.$
It remains to discuss when equality holds in the above inequality.
Firstly, we assume that $\sum_{j=1}^{t}{n-1\choose k_{j}-1}<{n\choose
k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{n-k_{t}\choose k_{i}-k_{t}}$ and
$\sum_{i=1}^{t}|\mathcal{A}_{i}|={n\choose k_{1}}-{n-k_{t}\choose
k_{1}}+\sum_{i=2}^{t}{n-k_{t}\choose k_{i}-k_{t}}.$
Combining Lemma 2.12 and Lemma 2.14, we have
$\sum_{i=1}^{t}|\mathcal{A}_{i}|=f(\\{m\\})$. In view of (9), we have
$|\mathcal{A}_{1}|={n\choose k_{1}}-{n-k_{t}\choose k_{1}}$ and
$|\mathcal{A}_{j}|={n-k_{t}\choose k_{j}-k_{t}}$ for $j\in[2,t]$, in
particular, $|\mathcal{A}_{t}|=1$. Let $\mathcal{A}_{t}=\\{T\\}$ for some
$T\in{[n]\choose k_{t}}$. Since $\mathcal{A}_{1}$ and $\mathcal{A}_{t}$ are
cross-intersecting and $|\mathcal{A}_{1}|={n\choose k_{1}}-{n-k_{t}\choose
k_{1}}$, we have $\mathcal{A}_{1}=\\{F\in{[n]\choose k_{1}}:F\cap
T\neq\emptyset\\}$. Since $\mathcal{A}_{j}$ and $\mathcal{A}_{t}$ are cross-
intersecting and $|\mathcal{A}_{j}|={n-k_{t}\choose k_{j}-k_{t}}$, we get
$\mathcal{A}_{j}=\\{F\in{[n]\choose k_{j}}:T\subset F\\}$ for $j\in[2,t]$. As
desired.
Next, we assume that $\sum_{j=1}^{t}{n-1\choose k_{j}-1}>{n\choose
k_{1}}-{n-k_{t}\choose k_{1}}+\sum_{i=2}^{t}{n-k_{t}\choose k_{i}-k_{t}}$ and
$\sum_{i=1}^{t}|\mathcal{A}_{i}|=\sum_{j=1}^{t}{n-1\choose k_{j}-1}.$
Combining Lemma 2.12 and Lemma 2.14, we have
$\sum_{i=1}^{t}|\mathcal{A}_{i}|=f(\\{1\\})$ and
$|\mathcal{A}_{j}|={n-1\choose k_{j}-1}$ for each $j\in[t]$ and
$|\mathcal{A}_{i}|+|\mathcal{A}_{j}|={n-1\choose k_{i}-1}+{n-1\choose
k_{j}-1}$ for any $i,j\in[t]$. If there are some $i\neq j\in[t]$ such that
$n>k_{i}+k_{j}$, then by taking $c=1$ in Theorem 1.5 (ii), there is $a\in[n]$
such that $\mathcal{A}_{i}=\\{F\in{[n]\choose k_{i}}:a\in F\\}$ and
$\mathcal{A}_{j}=\\{F\in{[n]\choose k_{j}}:a\in F\\}$. Thus,
$\mathcal{A}_{j}=\\{F\in{[n]\choose k_{j}}:a\in F\\}$ for $j\in[t]$, as
desired. Otherwise, we will meet the following case: $t\geq 3$,
$k:=k_{1}=k_{2}=\cdots=k_{t}$ and $n=2k$. Since
$|\mathcal{A}_{1}|+|\mathcal{A}_{j}|=2{n-1\choose k-1}$ for each $j\in[2,t]$,
by Theorem 1.5 (iv), we can see that
$\mathcal{A}_{2}=\cdots=\mathcal{A}_{t}={[n]\choose
k}\setminus\overline{\mathcal{A}_{1}}$. Similarly, we have
$\mathcal{A}_{1}=\mathcal{A}_{3}=\cdots=\mathcal{A}_{t}$. Therefore,
$\mathcal{A}_{1}=\mathcal{A}_{2}=\cdots=\mathcal{A}_{t}={[n]\choose
k}\setminus\overline{\mathcal{A}_{1}}$.
It remains to prove Propositions 2.3, 2.8, 2.9, 2.15 and 2.19, and Lemmas
2.11, 2.12, 2.13 and 2.14. The proofs of Propositions 2.3, 2.8, 2.9, 2.15 and 2.19
will be given in Subsection 2.2, and the proofs of Lemmas 2.11, 2.12, 2.13 and
2.14 will be given in Section 3.
### 2.2 Proofs of Propositions 2.3, 2.8, 2.9, 2.15 and 2.19
###### Claim 2.16.
Let $s\geq 1$ be an integer and $j\in[t]\setminus\\{i\\}$. If
$|\mathcal{A}_{i}|\geq{n-1\choose k_{i}-1}+{n-2\choose
k_{i}-1}+\cdots+{n-s\choose k_{i}-1}$, then $[s]\subset F$ for any
$F\in\mathcal{A}_{j}$.
###### Proof.
Suppose that there are $j\in[t]\setminus\\{i\\}$, $a\in[s]$ and
$F\in\mathcal{A}_{j}$ such that $a\not\in F$. Since $n\geq k_{1}+k_{2}\geq
k_{i}+k_{j}$, there exists $F^{\prime}\subset[n]\setminus F$ with $a\in
F^{\prime}$ and $|F^{\prime}|=k_{i}$. Since $\mathcal{A}_{i}$ is L-initial and
$|\mathcal{A}_{i}|\geq{n-1\choose k_{i}-1}+{n-2\choose
k_{i}-1}+\cdots+{n-s\choose k_{i}-1}$, $F^{\prime}\in\mathcal{A}_{i}$.
However, $F\cap F^{\prime}=\emptyset$, a contradiction to that
$\mathcal{A}_{i}$ and $\mathcal{A}_{j}$ are cross-intersecting. ∎
Now we apply the above fact to show Proposition 2.3. For convenience, let us
restate Proposition 2.3.
Proposition 2.3. $|\mathcal{A}_{i}|\leq{n-1\choose k_{i}-1}+\cdots+{n-m\choose
k_{i}-1}$.
###### Proof.
Let $j\in[t]\setminus\\{i\\}$ and $F\in\mathcal{A}_{j}$. If
$|\mathcal{A}_{i}|>{n-1\choose k_{i}-1}+\cdots+{n-m\choose k_{i}-1}$, then, by
Claim 2.16, $[m]\subset F$. Let
$\mathcal{A^{\prime}}\in\\{\mathcal{A}_{1},\dots,\mathcal{A}_{t}\\}\setminus\\{\mathcal{A}_{i}\\}$
be $m$-uniform. Then $\mathcal{A^{\prime}}=\\{[m]\\}$. Since $\mathcal{A}_{i}$
is L-initial and $|\mathcal{A}_{i}|>{n-1\choose k_{i}-1}+\cdots+{n-m\choose
k_{i}-1}$, there exists $G\in\mathcal{A}_{i}$ such that $G\cap[m]=\emptyset$,
so $\mathcal{A^{\prime}}$ and $\mathcal{A}_{i}$ are not cross-intersecting, a
contradiction. ∎
Now we give the proof of Proposition 2.9.
Proposition 2.9. Let $k,l,n$ be positive integers. Let
$A=\\{a_{1},a_{2},\dots,a_{s_{a}}\\}\subset[n]$ and
$B=\\{b_{1},b_{2},\dots,b_{s_{b}}\\}$ be $A$’s partner. Then
$\displaystyle|\mathcal{L}([n],A,k)|={n-b_{1}\choose k-b_{1}}+{n-b_{2}\choose
k-b_{2}+1}+\cdots+{n-b_{s_{b}}\choose k-b_{s_{b}}+s_{b}-1},$ (19)
$\displaystyle|\mathcal{L}([n],B,l)|={n-a_{1}\choose l-a_{1}}+{n-a_{2}\choose
l-a_{2}+1}+\cdots+{n-a_{s_{a}}\choose l-a_{s_{a}}+s_{a}-1}.$ (20)
###### Proof.
We give the proof of (19) only, since the proof of (20) is similar.
Let
$D_{1}:=\\{1,\dots,a_{1}-1\\},D_{j}:=\\{a_{j-1}+1,\dots,a_{j}-1\\}$ for
$j\in[2,s_{a}-1]$, and $D_{s_{a}}:=\\{a_{s_{a}-1}+1,\dots,a_{s_{a}}\\}$. Note that,
by the definition of the partner, $B=\sqcup_{j=1}^{s_{a}}D_{j}$. If $k<s_{a}$, then
$\displaystyle\mathcal{L}([n],A,k)$ $\displaystyle=\\{F\in{[n]\choose
k}:F\prec A\\}$ $\displaystyle=\\{F\in{[n]\choose k}:F\cap
D_{1}\neq\emptyset\\}\sqcup\\{F\in{[n]\choose k}:F\cap
D_{1}=\emptyset,a_{1}\in F,F\cap D_{2}\neq\emptyset\\}$
$\displaystyle\quad\sqcup\cdots\sqcup\\{F\in{[n]\choose k}:F\cap
D_{j}=\emptyset,a_{j}\in F\,\,\text{for}\,\,j\in[k-1],F\cap
D_{k}\neq\emptyset\\}.$
If $k\geq s_{a}$, then
$\displaystyle\mathcal{L}([n],A,k)$ $\displaystyle=\\{F\in{[n]\choose k}:F\cap
D_{1}\neq\emptyset\\}\sqcup\\{F\in{[n]\choose k}:F\cap
D_{1}=\emptyset,a_{1}\in F,F\cap D_{2}\neq\emptyset\\}$
$\displaystyle\quad\sqcup\cdots\sqcup\\{F\in{[n]\choose k}:F\cap
D_{j}=\emptyset,a_{j}\in F\,\,\text{for}\,\,j\in[{s_{a}}-1],F\cap
D_{s_{a}}\neq\emptyset\\}$ $\displaystyle\quad\sqcup\\{F\in{[n]\choose
k}:F\cap[a_{s_{a}}]=A\\}.$
Thus,
$\displaystyle|\mathcal{L}([n],A,k)|$
$\displaystyle=\sum_{d=1}^{s_{a}}\sum_{j\in D_{d}}{n-j\choose k-d}$ (21)
$\displaystyle={n-b_{1}\choose k-b_{1}}+{n-b_{2}\choose
k-b_{2}+1}+\cdots+{n-b_{s_{b}}\choose k-b_{s_{b}}+s_{b}-1},$ (22)
as desired. ∎
We have the following observations.
###### Remark 2.17.
Let $k,n\in\mathbb{Z}_{>0}$ and $A=\\{a_{1},a_{2},\dots,a_{|A|}\\}\subset[n]$
with $|A|>k$. Let $j=\max\\{q:q\in[a_{k}]\setminus A\\}$ and
$A^{\prime}=(A\cap[j])\cup\\{j\\}$. Then
$\mathcal{L}([n],A,k)=\mathcal{L}([n],A^{\prime},k)$.
###### Remark 2.18.
Let $k,l,n\in\mathbb{Z}_{>0},n\geq k+l,R\subset[n],|R|=k$ and $\max R=n$. Let
$p$ be the last element of $R$ not continuing to $n$ and
$R^{\prime}=R\cap[p]$. Let $T$ and $T^{\prime}$ be the partners of $R$ and
$R^{\prime}$ respectively. Then
$\mathcal{L}([n],R,k)=\mathcal{L}([n],R^{\prime},k)$ and
$\mathcal{L}([n],T,l)=\mathcal{L}([n],T^{\prime},l)$.
Proposition 2.8. Let $a,b,n$ be positive integers satisfying $a+b\leq n$. For
$P\subset[n]$ with $|P|\leq a$, let $Q$ be the partner of $P$. Then
$\mathcal{L}([n],Q,b)$ is the maximum L-initial $b$-uniform family that is
cross-intersecting to $\mathcal{L}([n],P,a)$. Moreover,
$\mathcal{L}([n],Q,b)\neq\emptyset$ if and only if $\min P\leq b$.
###### Proof of Proposition 2.8.
Let $P_{0}$ be the last element of $\mathcal{L}([n],P,a)$. If $|P|=a$, then
$P_{0}=P$ and if $|P|<a$, then $P_{0}=P\cup[n-a+|P|+1,n]$. So $\min P_{0}=\min
P$. Then $[b]\cap P_{0}=\emptyset$ if and only if $\min P>b$, this implies
that $\mathcal{L}([n],Q,b)\neq\emptyset$ if and only if $\min P\leq b$. As
desired. So we may assume $\min P\leq b$.
By Proposition 2.7, we only need to consider the case that $|Q|>b$. We first
show that $\mathcal{L}([n],Q,b)$ and $\mathcal{L}([n],P,a)$ are cross-
intersecting. For any $F\in\mathcal{L}([n],Q,b)$, we have $\min F\setminus
Q<\min Q\setminus F$. Let $z_{1}=\min F\setminus Q$. Then $z_{1}\in P$ since
$P$ is $Q$’s partner and $z_{1}<\min Q\setminus F\leq\max Q=\max P$. This
implies that $F\cap P_{0}\neq\emptyset$. Let $P^{\prime}\precneqq P_{0}$ with
$|P^{\prime}|=a$. If $P\subseteq P^{\prime}$, then $F\cap
P^{\prime}\neq\emptyset$ since $z_{1}\in F\cap P^{\prime}$. So we may assume
$P\not\subseteq P^{\prime}$. This implies $\min P^{\prime}\setminus P<\min
P\setminus P^{\prime}$. Let $z_{2}=\min P^{\prime}\setminus P$, then $z_{2}\in
Q$ since $Q$ is $P$’s partner and $z_{2}<\min P\setminus P^{\prime}\leq\max
P=\max Q$. If $z_{2}\in F$, then $F\cap P^{\prime}\neq\emptyset$. Suppose
$z_{2}\not\in F$. If $z_{1}\in P^{\prime}$, then $F\cap
P^{\prime}\neq\emptyset$. So assume that $z_{1}\not\in P^{\prime}$. Since
$z_{1}\in P$, we get $z_{2}=\min P^{\prime}\setminus P<\min P\setminus
P^{\prime}\leq z_{1}$. However, $z_{2}\in Q,z_{2}\not\in F$, so $z_{2}\geq\min
Q\setminus F>\min F\setminus Q=z_{1}$, a contradiction. We have proved that
$\mathcal{L}([n],P,a)$ and $\mathcal{L}([n],Q,b)$ are cross-intersecting.
Next we show that $\mathcal{L}([n],Q,b)$ is the maximum L-initial $b$-uniform
family that is cross-intersecting to $\mathcal{L}([n],P,a)$. Let $Q_{b}$ be
the $b$-th element of $Q$. Since $\min P\leq b$, we have $Q_{b}>b$ and
$[Q_{b}]\setminus Q\neq\emptyset$. Let $y=\max\\{q:q\in[Q_{b}]\setminus Q\\}$
and $Q^{\prime}=(Q\cap[y])\cup\\{y\\}$. Then $|Q^{\prime}|\leq b$. By Remark
2.17, $\mathcal{L}([n],Q^{\prime},b)=\mathcal{L}([n],Q,b)$. Suppose that
$\mathcal{G}$ is another $b$-uniform L-initial family cross-intersecting with
$\mathcal{L}([n],P,a)$ and $|\mathcal{G}|>|\mathcal{L}([n],Q^{\prime},b)|$.
Then $\mathcal{G}\supsetneqq\mathcal{L}([n],Q^{\prime},b)$. Let $H$ be the
last set in $\mathcal{L}([n],Q^{\prime},b)$ and $G$ be the first set in
$\mathcal{G}\setminus\mathcal{L}([n],Q^{\prime},b)$. Clearly $y=\max
Q^{\prime}<n$. Let $|Q^{\prime}|=p$. We have the following two cases.
Case (i) $|Q^{\prime}|=b$. In this case $H=Q^{\prime}$. Then
$G=(Q^{\prime}\setminus\\{y\\})\cup\\{y+1\\}$. Since $y\not\in Q$ and $y<\max
Q=\max P$, $y\in P$. By our definition of $y$, $y+1\in Q$. Also $y+1\not\in P$;
otherwise $y+1=q=\max Q$, and hence $|Q|=|Q^{\prime}|=b$, contradicting $|Q|>b$. However, by the definition of
$Q^{\prime}$, we have $Q^{\prime}\cap P=\\{y\\}$, so $G\cap P=\emptyset$,
therefore, $G\cap P_{0}=\emptyset$, a contradiction again.
Case (ii) $|Q^{\prime}|<b$. In this case
$H=Q^{\prime}\cup\\{n-b+p+1,\dots,n\\}$ and
$G=(Q^{\prime}\setminus\\{y\\})\cup\\{y+1,y+2,\dots,y+b-p+1\\}.$ Moreover, by
the definitions of $Q_{b}$ and $y$, we can see that $y+b-p+1=Q_{b}$ and
$\\{y+1,y+2,\dots,y+b-p+1\\}\subset Q$. Since $Q$ is the partner of $P$ and
$Q_{b}<\max Q=\max P$, $\\{y+1,y+2,\dots,y+b-p+1\\}\cap P=\emptyset$. Recall
that $Q^{\prime}\cap P=\\{y\\}$, so $G\cap P=\emptyset$, therefore, $G\cap
P_{0}=\emptyset$, a contradiction. So we have shown that
$\mathcal{L}([n],Q^{\prime},b)$, the same as $\mathcal{L}([n],Q,b)$ (see
Remark 2.17) is the maximum $b$-uniform L-initial family that is cross-
intersecting to $\mathcal{L}([n],P,a)$, as desired. ∎
###### Proposition 2.19.
Let $F<G\in\mathcal{R}$ and $\max G=q$. Then $\beta(F,G)=\sum_{j\neq
i}{n-q\choose k_{j}-(q-k_{i})}$.
###### Proof.
Let $F^{\prime},G^{\prime}$ be the partners of $F,G$ respectively. We have the
following two cases.
Case (i) $\max F<n$. In this case, $\max F=q-1$ and
$F\setminus\\{q-1\\}=G\setminus\\{q\\}$. By (11) and Proposition 2.9, we have
$\displaystyle\beta(F,G)$ $\displaystyle=\sum_{j\neq
i}(|\mathcal{L}([n],F^{\prime},k_{j})|-|\mathcal{L}([n],G^{\prime},k_{j})|)$
$\displaystyle=\sum_{j\neq i}\left[{n-(q-1)\choose
k_{j}-(q-k_{i})}-{n-q\choose k_{j}-(q-k_{i}+1)}\right]$
$\displaystyle=\sum_{j\neq i}{n-q\choose k_{j}-(q-k_{i})},$
as desired.
Case (ii) $\max F=n$. Let $p$ be the last element of $F$ not continuing to
$n$. Then $G=(F\cap[p-1])\cup\\{p+1,p+2,\dots,q\\}$. Let
$\widetilde{F}=F\cap[p]$ and $\widetilde{F^{\prime}}$ be the partner of
$\widetilde{F}$. It follows from Remark 2.18 that
$\sum_{j\neq i}|\mathcal{L}([n],F^{\prime},k_{j})|=\sum_{j\neq
i}|\mathcal{L}([n],\widetilde{F^{\prime}},k_{j})|.$
Therefore,
$\displaystyle\beta(F,G)$ $\displaystyle=\sum_{j\neq
i}(|\mathcal{L}([n],F^{\prime},k_{j})|-|\mathcal{L}([n],G^{\prime},k_{j})|)$
$\displaystyle=\sum_{j\neq
i}(|\mathcal{L}([n],\widetilde{F^{\prime}},k_{j})|-|\mathcal{L}([n],G^{\prime},k_{j})|)$
$\displaystyle=\sum_{j\neq i}\left\\{{n-p\choose
k_{j}-p+(k_{i}-q+p)}\right.-\left[{n-(p+1)\choose
k_{j}-(p+1)+(k_{i}-q+p)}\right.$
$\displaystyle\quad\left.\left.+{n-(p+2)\choose
k_{j}-(p+1)+(k_{i}-q+p)}+\cdots+{n-q\choose
k_{j}-(p+1)+(k_{i}-q+p)}\right]\right\\}$ $\displaystyle=\sum_{j\neq
i}\left\\{{n-p\choose k_{j}-(q-k_{i})}-\left[{n-p\choose
k_{j}-(q-k_{i})}-{n-q\choose k_{j}-(q-k_{i})}\right]\right\\}$
$\displaystyle=\sum_{j\neq i}{n-q\choose k_{j}-(q-k_{i})},$
as desired. ∎
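To illustrate Proposition 2.19 (a toy check with $t=2$, $i=1$, $k_{1}=k_{2}=3$ and $n=7$), take the consecutive IDs $F=\\{1,6,7\\}$ and $G=\\{2,3,4\\}$, so $q=\max G=4$. The partner of $G$ is $\\{1,4\\}$, while by Remark 2.18 the partner of $F$ may be replaced by the partner of $\\{1\\}$, namely $\\{1\\}$ itself. Hence
$\beta(F,G)=|\mathcal{L}([7],\\{1\\},3)|-|\mathcal{L}([7],\\{1,4\\},3)|={6\choose 2}-\left[{5\choose 1}+{4\choose 1}+{3\choose 1}\right]=15-12=3={7-4\choose 3-(4-3)},$
as the proposition predicts.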
Now we show Proposition 2.15.
Proposition 2.15. Suppose that $n\geq k_{1}+k_{2}$ and $k_{1}\geq
k_{2}\geq\dots\geq k_{t}$. Let $i\in[t]$ and let $m$ be defined as in (3). Then for
$1\leq s\leq m$, we have
$f_{1}(\\{s\\})=\max\\{f_{j}(\\{s\\}):j\in[t]\\}.$
In particular,
$\displaystyle f_{1}(\\{1\\})=\max\\{f_{j}(\\{1\\}):j\in[t]\\},$
$\displaystyle f_{1}(\\{m\\})=\max\\{f_{j}(\\{m\\}):j\in[t]\\}.$
Proof of Proposition 2.15. Note that for each $j\in[t]$, we have
$f_{j}(\\{1\\})=\sum_{q=1}^{t}{n-1\choose k_{q}-1},$ (23)
since in this case $\mathcal{A}_{j}=\mathcal{L}([n],\\{1,n-k_{j}+2,\dots,n\\},k_{j})$ is the full star
containing $1$; consequently, for each $l\neq j$, the largest family cross-intersecting with it is the
full star containing $1$ as well (here we use $n\geq k_{1}+k_{2}$). So
$f_{1}(\\{1\\})=\max\\{f_{j}(\\{1\\}):j\in[t]\\}$.
We next prove that for $2\leq s\leq m$,
$f_{1}(\\{s\\})=\max\\{f_{j}(\\{s\\}):j\in[t]\\}.$
Since $n\geq k_{1}+k_{2}$ and $k_{1}\geq k_{2}\geq\cdots\geq k_{t}$, we only
need to prove that
$f_{1}(\\{s\\})\geq f_{2}(\\{s\\}).$ (24)
By the definition of $f_{j}(R)$, we have
$\displaystyle f_{1}(\\{s\\})={n-1\choose k_{1}-1}+\cdots+{n-s\choose
k_{1}-1}+\sum_{j=2}^{t}{n-s\choose k_{j}-s},$
and
$\displaystyle f_{2}(\\{s\\})={n-1\choose k_{2}-1}+\cdots+{n-s\choose
k_{2}-1}+\sum_{j\in[t]\setminus\\{2\\}}{n-s\choose k_{j}-s}.$
It is easy to see that if $n=k_{1}+k_{2}$, then we have
$f_{1}(\\{s\\})=f_{2}(\\{s\\})$. Let us denote
$g(n)=f_{1}(\\{s\\})-f_{2}(\\{s\\})$, then $g(k_{1}+k_{2})=0$. Inequality (24)
immediately follows from the forthcoming claim.
###### Claim 2.20.
For any integer $q$ with $q\geq k_{1}+k_{2}$ and $k_{1}\geq k_{2}$, we have
$g(q+1)-g(q)\geq 0.$ (25)
Proof of Claim 2.20. Indeed,
$\displaystyle g(q)$ $\displaystyle={q-1\choose k_{1}-1}+\cdots+{q-s\choose
k_{1}-1}+{q-s\choose k_{2}-s}$ $\displaystyle\quad-\left\\{{q-1\choose
k_{2}-1}+\cdots+{q-s\choose k_{2}-1}+{q-s\choose k_{1}-s}\right\\}$
$\displaystyle={q-2\choose k_{1}-1}+\cdots+{q-s\choose
k_{1}-1}+\sum_{j=2}^{s}{q-j\choose k_{1}-j+1}$
$\displaystyle\quad-\left\\{{q-2\choose k_{2}-1}+\cdots+{q-s\choose
k_{2}-1}+\sum_{j=2}^{s}{q-j\choose k_{2}-j+1}\right\\},$
and
$\displaystyle g(q+1)$ $\displaystyle={q-1\choose
k_{1}-1}+\cdots+{q+1-s\choose k_{1}-1}+\sum_{j=2}^{s}{q+1-j\choose k_{1}-j+1}$
$\displaystyle\quad-\left\\{{q-1\choose k_{2}-1}+\cdots+{q+1-s\choose
k_{2}-1}+\sum_{j=2}^{s}{q+1-j\choose k_{2}-j+1}\right\\}.$
Since $q\geq k_{1}+k_{2}$ and $k_{1}\geq k_{2}$, then for all $j\geq 0$, we
have
${q-j\choose k_{1}-j}\geq{q-j\choose k_{2}-j}.$ (26)
This gives
$\displaystyle\sum_{j=2}^{s}{q+1-j\choose
k_{1}-j+1}-\sum_{j=2}^{s}{q+1-j\choose k_{2}-j+1}-\sum_{j=2}^{s}{q-j\choose
k_{1}-j+1}+\sum_{j=2}^{s}{q-j\choose k_{2}-j+1}$
$\displaystyle=\sum_{j=2}^{s}\left\\{{q-j\choose k_{1}-j}-{q-j\choose
k_{2}-j}\right\\}$ $\displaystyle\geq 0.$
Hence, to get (25), it is sufficient to show the following claim.
###### Claim 2.21.
For any integer $q$ with $q\geq k_{1}+k_{2}$ and $k_{1}\geq k_{2}$, we have
${q-1\choose k_{1}-1}-{q-s\choose k_{1}-1}\geq{q-1\choose k_{2}-1}-{q-s\choose
k_{2}-1}.$ (27)
###### Proof of Claim 2.21.
If ${q-s\choose k_{1}-1}\leq{q-s\choose k_{2}-1}$, then applying (26) for
$j=1$, we have ${q-1\choose k_{1}-1}\geq{q-1\choose k_{2}-1}$, so the desired
inequality (27) holds. Suppose that ${q-s\choose k_{1}-1}>{q-s\choose
k_{2}-1}$. Since $k_{1}\geq k_{2}$ and $q\geq k_{1}+k_{2}$, we have
$\frac{{q-s\choose k_{1}-2}}{{q-s\choose k_{1}-1}}\geq\frac{{q-s\choose
k_{2}-2}}{{q-s\choose k_{2}-1}},$
which implies that ${q-s\choose k_{1}-2}>{q-s\choose k_{2}-2}$. Similarly, for
all $j\geq 0$,
$\frac{{q-s+j\choose k_{1}-2}}{{q-s\choose k_{1}-2}}\geq\frac{{q-s+j\choose
k_{2}-2}}{{q-s\choose k_{2}-2}}.$
So ${q-s+j\choose k_{1}-2}\geq{q-s+j\choose k_{2}-2}$ holds for any $j\geq 0$,
yielding
$\displaystyle{q-1\choose k_{1}-1}-{q-s\choose k_{1}-1}-\left\\{{q-1\choose
k_{2}-1}-{q-s\choose k_{2}-1}\right\\}$
$\displaystyle=\sum_{j=2}^{s}\left\\{{q-j\choose k_{1}-2}-{q-j\choose
k_{2}-2}\right\\}$ $\displaystyle\geq 0,$
as desired. ∎
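For a quick numerical check of (27) (illustration only): with $q=7$, $s=2$, $k_{1}=4$ and $k_{2}=3$, the left-hand side is ${6\choose 3}-{5\choose 3}=10$ and the right-hand side is ${6\choose 2}-{5\choose 2}=5$.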
## 3 Proofs of Lemmas 2.11, 2.12, 2.13 and 2.14
We first establish some preliminary properties.
###### Claim 3.1.
Let
$F_{1},F_{2},F^{\prime}_{1},F^{\prime}_{2}\in\mathcal{R},c\in[k_{i}],F_{1}\overset{c}{\prec}F_{2}$
and $F^{\prime}_{1}\overset{c}{\prec}F^{\prime}_{2}$. If $\max F_{1}=\max
F^{\prime}_{1}$, then
$\alpha(F_{1},F_{2})=\alpha(F^{\prime}_{1},F^{\prime}_{2})$ and
$\beta(F_{1},F_{2})=\beta(F^{\prime}_{1},F^{\prime}_{2})$.
###### Proof.
Let $A$ be the head of $F_{1}$ and $F_{2}$, $A^{\prime}$ be the head of
$F^{\prime}_{1}$ and $F^{\prime}_{2}$, and let $\max F_{1}=\max F^{\prime}_{1}=q$. Then
$\max F_{2}=\max F^{\prime}_{2}=q+1.$ It is easy to see that $F_{1}\setminus
A=F^{\prime}_{1}\setminus A^{\prime}$ and $F_{2}\setminus
A=F^{\prime}_{2}\setminus A^{\prime}$, by Proposition 2.9, we conclude that
$\beta(F_{1},F_{2})=\beta(F^{\prime}_{1},F^{\prime}_{2})$. Let
$G_{1},G_{2},G^{\prime}_{1},G^{\prime}_{2}$ be the partners of
$F_{1},F_{2},F^{\prime}_{1},F^{\prime}_{2}$ respectively. Then $G_{1}\setminus
G_{2}=G^{\prime}_{1}\setminus G^{\prime}_{2}$ and $G_{2}\setminus
G_{1}=G^{\prime}_{2}\setminus G^{\prime}_{1}$, by Proposition 2.9, we have
$\alpha(F_{1},F_{2})=\alpha(F^{\prime}_{1},F^{\prime}_{2})$, as promised. ∎
###### Claim 3.2.
Let $F,H,G\in\mathcal{R}$ with $F\prec H\prec G$. Then
$\alpha(F,G)=\alpha(F,H)+\alpha(H,G)$ and $\beta(F,G)=\beta(F,H)+\beta(H,G)$.
###### Proof.
By (10), we have
$\displaystyle\alpha(F,H)+\alpha(H,G)$
$\displaystyle=|\mathcal{L}([n],H,k_{i})|-|\mathcal{L}([n],F,k_{i})|+|\mathcal{L}([n],G,k_{i})|-|\mathcal{L}([n],H,k_{i})|$
$\displaystyle=|\mathcal{L}([n],G,k_{i})|-|\mathcal{L}([n],F,k_{i})|$
$\displaystyle=\alpha(F,G),$
as desired. Let $F^{\prime},H^{\prime},G^{\prime}$ be the partners of $F,H,G$
respectively. Then by (11), we have
$\displaystyle\beta(F,H)+\beta(H,G)$ $\displaystyle=\sum_{j\neq
i}(|\mathcal{L}([n],F^{\prime},k_{j})|-|\mathcal{L}([n],H^{\prime},k_{j})|+|\mathcal{L}([n],H^{\prime},k_{j})|-|\mathcal{L}([n],G^{\prime},k_{j})|)$
$\displaystyle=\sum_{j\neq
i}(|\mathcal{L}([n],F^{\prime},k_{j})|-|\mathcal{L}([n],G^{\prime},k_{j})|)$
$\displaystyle=\beta(F,G),$
as desired. ∎
By Claims 3.1 and 3.2, the following corollary is obvious.
###### Corollary 3.3.
Let $c\in[k_{i}]$ and $F,G,F^{\prime},G^{\prime}\in\mathcal{R}$. If $F,G$ are
$c$-sequential, $F^{\prime},G^{\prime}$ are $c$-sequential and $\max F=\max
F^{\prime},\max G=\max G^{\prime}$, then
$\alpha(F,G)=\alpha(F^{\prime},G^{\prime})$ and
$\beta(F,G)=\beta(F^{\prime},G^{\prime})$.
###### Claim 3.4.
Let $2\leq c\leq k_{i}$ and $F,G,H,F_{1}\in\mathcal{R}$ with
$F\overset{c}{\prec}G\overset{c}{\prec}H,F\overset{c-1}{\prec}F_{1}$ and $\max
F=q$. Then
$\displaystyle\alpha(F,G)=\alpha(F,F_{1})+\alpha(G,H),$
$\displaystyle\beta(F,G)=\beta(F,F_{1})+\beta(G,H)+\sum_{j\neq
i}{n-(q+2)\choose k_{j}-(q-k_{i}+1)}.$
###### Proof.
Let $A$ be the head of $F,G,H$. Then by the definition,
$F=A\sqcup\\{q-c+1,\dots,q\\},G=A\sqcup\\{q-c+2,\dots,q+1\\},H=A\sqcup\\{q-c+3,\dots,q+2\\}$
and $F_{1}=A\sqcup\\{q-c+1\\}\sqcup\\{q-c+3,\dots,q+1\\}$. Let $F_{2}$ be the set with
$F_{2}<G$ in $\mathcal{R}$. Since $G\setminus A$ consists of consecutive integers and $q-c+1\not\in
G$, we have $q-c+1\in F_{2}$. Hence,
$F_{2}=A\sqcup\\{q-c+1,n-c+2,n-c+3,\dots,n\\}$. Similarly, let $F_{3}$ be the set with
$F_{3}<H$ in $\mathcal{R}$. Then
$F_{3}=A\sqcup\\{q-c+2,n-c+2,n-c+3,\dots,n\\}$. Moreover, $F_{1}$ and $F_{2}$
are $(c-1)$-sequential, and $G$ and $F_{3}$ are $(c-1)$-sequential. Clearly,
$\max F_{1}=\max G=q+1$ and $\max F_{2}=\max F_{3}=n$. By Corollary 3.3, we
have $\alpha(F_{1},F_{2})=\alpha(G,F_{3})$ and
$\beta(F_{1},F_{2})=\beta(G,F_{3})$. By the definition, we get
$\alpha(F_{2},G)=\alpha(F_{3},H)=1$. Combining with Claim 3.2, we have
$\displaystyle\alpha(F,G)$ $\displaystyle=\alpha(F,F_{1})+\alpha(F_{1},G)$
$\displaystyle=\alpha(F,F_{1})+\alpha(F_{1},F_{2})+\alpha(F_{2},G)$
$\displaystyle=\alpha(F,F_{1})+\alpha(G,F_{3})+\alpha(F_{3},H)$
$\displaystyle=\alpha(F,F_{1})+\alpha(G,H),$
and
$\displaystyle\beta(F,G)$ $\displaystyle=\beta(F,F_{1})+\beta(F_{1},G)$
$\displaystyle=\beta(F,F_{1})+\beta(F_{1},F_{2})+\beta(F_{2},G)$
$\displaystyle=\beta(F,F_{1})+\beta(G,F_{3})+\beta(F_{3},H)+\beta(F_{2},G)-\beta(F_{3},H)$
$\displaystyle=\beta(F,F_{1})+\beta(G,H)+\sum_{j\neq i}{n-(q+2)\choose
k_{j}-(q-k_{i}+1)},$
where the last equality follows from Proposition 2.19. More specifically, we
can see that
$\displaystyle\beta(F_{2},G)=\sum_{j\neq i}{n-(q+1)\choose
k_{j}-(q+1-k_{i})},$ $\displaystyle\beta(F_{3},H)=\sum_{j\neq
i}{n-(q+2)\choose k_{j}-(q+2-k_{i})}.$
∎
###### Definition 3.5.
Let $M\geq 2$ and
$\mathcal{G}=\\{G_{1},G_{2},\dots,G_{M}\\}\subset\mathcal{R}$ with $G_{1}\prec
G_{2}\prec\dots\prec G_{M}$. If there is $g\in[0,M-1]$ satisfying the
following two conditions:
(i) $f(G_{j+1})<f(G_{j})$ for $1\leq j\leq g$,
(ii) $f(G_{j+1})\geq f(G_{j})$ for $g+1\leq j\leq M-1$,
then we say that $\mathcal{G}$ is a down-up family and $g$ is the down degree
of $\mathcal{G}$, written $d_{\mathcal{G}}^{\downarrow}$.
Recall that $i\in[t]$ is the fixed index satisfying
$|\mathcal{A}_{i}|\geq{n-1\choose k_{i}-1}$. Let
$l=\max_{j\neq i}k_{j}.$
### 3.1 Proof of Lemma 2.11
To show Lemma 2.11, we need the following preparations. All arguments below
are under the assumption of Lemma 2.11, i.e., assume that $c\in[k_{i}]$ and
$F,G,H\in\mathcal{R}$ with $F\overset{c}{\prec}G\overset{c}{\prec}H$
satisfying $\alpha(F,G)\geq\beta(F,G)$. We need to show that
$\alpha(G,H)>\beta(G,H)$.
###### Claim 3.6.
Let $c^{\prime}\in[k_{i}]$ and $R,R^{\prime},T\in\mathcal{R}$ with $R\precneqq
T\precneqq R^{\prime}$. If $R,R^{\prime}$ are $c^{\prime}$-sequential, then
$\max T\geq\max R+1$.
###### Proof.
Let $A$ be the head of $R$ and $R^{\prime}$. Since $R\precneqq T\precneqq
R^{\prime}$, we have $A\subset T$. Since $\min R\setminus T<\min T\setminus R$
and $R\setminus A$ continues to $\max R$, we have $\max T>\max R$. ∎
Let $A$ be the head of $F,G,H$ and $\max F=q$. Then
$F=A\sqcup\\{q-c+1,\dots,q\\},G=A\sqcup\\{q-c+2,\dots,q+1\\}$ and
$H=A\sqcup\\{q-c+3,\dots,q+2\\}$.
###### Claim 3.7.
If $q\geq k_{i}+l-1$, then $\alpha(G,H)>\beta(G,H)$.
###### Proof.
Since $q\geq k_{i}+l-1$, we have $\max G\geq k_{i}+l$ and $\max H\geq
k_{i}+l+1$. Let $G<T_{1}<T_{2}<\cdots<T_{\lambda}<H$ in $\mathcal{R}$. By
Claim 3.6, $\max T_{j}\geq k_{i}+l+1$ for all $j\in[\lambda]$. By Proposition
2.19, $\beta(G,T_{1})=\beta(T_{1},T_{2})=\cdots=\beta(T_{\lambda},H)=0$.
Consequently,
$\alpha(G,H)=\alpha(G,T_{1})+\alpha(T_{1},T_{2})+\cdots+\alpha(T_{\lambda},H)>0,$
and
$\beta(G,H)=\beta(G,T_{1})+\beta(T_{1},T_{2})+\cdots+\beta(T_{\lambda},H)=0.$
So we conclude that $\alpha(G,H)>\beta(G,H)$. ∎
By Claim 3.7, we may assume that $k_{i}+1\leq q\leq k_{i}+l-2$. We will show
Lemma 2.11 by induction on $c$.
Let $c=1$. Then $\alpha(F,G)=1$. Since $q\leq k_{i}+l-2$, we have $\max G\leq
k_{i}+l-1<n$. By Proposition 2.19, $\beta(F,G)\geq\sum_{j\neq
i}{n-(k_{i}+l-1)\choose k_{j}-(l-1)}>1$, so $\alpha(F,G)<\beta(F,G)$ and
Lemma 2.11 holds for $c=1$. Now let $c\geq 2$ and assume that the lemma holds for all
$c^{\prime}\leq c-1$; we will prove that it holds for $c$. We will define
$c_{1},c_{2},\dots,c_{h}$ and $t_{1},t_{2},\dots,t_{h}$, one by one, until
$t_{1}+t_{2}+\cdots+t_{h}=k_{i}+l-q$, where $h$ is to be determined later.
Let $t_{0}=0,F_{0}^{+}=F$ and $c_{0}=c$. We determine $c_{1}$ first.
###### Claim 3.8.
There exists a unique integer $c_{1}\in[1,c_{0}-1]$ satisfying the following
two conditions.
(i) If $F_{1}$ satisfies $F_{0}^{+}\overset{c_{1}}{\prec}F_{1}$, then
$\alpha(F_{0}^{+},F_{1})<\beta(F_{0}^{+},F_{1})$;
(ii) For any $1\leq j\leq c_{0}-c_{1}$ and $F^{\prime}$ satisfying
$F_{0}^{+}\overset{c_{1}+j}{\prec}F^{\prime}$, we have
$\alpha(F_{0}^{+},F^{\prime})\geq\beta(F_{0}^{+},F^{\prime})$.
###### Proof.
Let $F^{\prime}$ be the set such that $F_{0}^{+}\overset{1}{\prec}F^{\prime}$,
i.e., $F_{0}^{+}<F^{\prime}$. Since $q\leq k_{i}+l-2$, $\max F^{\prime}\leq
k_{i}+l-1$. By Proposition 2.19,
$\displaystyle\beta(F_{0}^{+},F^{\prime})=\sum_{j\neq i}{n-(q+1)\choose
k_{j}-(q+1-k_{i})}>1=\alpha(F_{0}^{+},F^{\prime}).$
Note that $F\overset{c}{\prec}G$ and $\alpha(F,G)\geq\beta(F,G)$. Let $c_{1}$
be the largest integer in $[1,c_{0}-1]$ satisfying
$\alpha(F_{0}^{+},F^{\prime})<\beta(F_{0}^{+},F^{\prime})$ for $F^{\prime}$
satisfying $F_{0}^{+}\overset{c_{1}}{\prec}F^{\prime}$. Then $c_{1}$ satisfies
both (i) and (ii). ∎
Define $\mathcal{F}_{0}^{+}$ to be the $c_{1}$-sequential family that ranges
from $q$ to $n$ with $F_{0}^{+}$ as its first member. Since $c_{1}<c$, by
the induction hypothesis and the definition of a down-up family, we can see that
$\mathcal{F}_{0}^{+}$ is a down-up family. Let
$t_{1}:=d_{\mathcal{F}_{0}^{+}}^{\downarrow}$. Clearly, $1\leq t_{1}\leq
k_{i}+l-q$ (in view of Claim 3.7). If $t_{1}=k_{i}+l-q$, then we stop and
$h=1$. Otherwise, if $t_{1}\leq k_{i}+l-q-1$, then we continue to find $c_{2}$
and $t_{2}$. Before performing the next step, we give the following
definitions.
Let
$\mathcal{F}_{0}^{+}:=\\{F_{0}^{+},F_{1},M_{2}^{(1)},M_{3}^{(1)},\dots,M_{t_{1}}^{(1)},M_{t_{1}+1}^{(1)},\dots,G_{1}\\}$,
where
$F_{0}^{+}\overset{c_{1}}{\prec}F_{1}\overset{c_{1}}{\prec}M_{2}^{(1)}\overset{c_{1}}{\prec}M_{3}^{(1)}\overset{c_{1}}{\prec}\cdots\overset{c_{1}}{\prec}M_{t_{1}}^{(1)}\overset{c_{1}}{\prec}M_{t_{1}+1}^{(1)}\overset{c_{1}}{\prec}\cdots\overset{c_{1}}{\prec}G_{1}.$
Let $M_{1}^{(1)}:=F_{1}$ and note that $\max G_{1}=n$ and $\max
M_{t_{1}}^{(1)}=q+t_{1}$.
Since $d_{\mathcal{F}_{0}^{+}}^{\downarrow}=t_{1}$ and
$f(M_{t_{1}+1}^{(1)})\geq f(M_{t_{1}}^{(1)})$, that is,
$\alpha(M_{t_{1}}^{(1)},M_{t_{1}+1}^{(1)})\geq\beta(M_{t_{1}}^{(1)},M_{t_{1}+1}^{(1)})$,
then we can define
$F_{1}^{+},F_{2}^{+},\dots,F_{t_{1}}^{+},F_{2},F_{3},\dots,F_{t_{1}}$ as
follows:
$F_{0}^{+}\overset{c_{1}+1}{\prec}F_{1}^{+}\overset{c_{1}+1}{\prec}F_{2}^{+}\overset{c_{1}+1}{\prec}\cdots\overset{c_{1}+1}{\prec}F_{t_{1}}^{+},$
and
$F_{1}^{+}\overset{c_{1}}{\prec}F_{2},\,\,F_{2}^{+}\overset{c_{1}}{\prec}F_{3},\,\,\dots,\,\,F_{t_{1}-1}^{+}\overset{c_{1}}{\prec}F_{t_{1}}.$
Let $2\leq p\leq h$. Assume that $c_{1},c_{2},\dots,c_{p-1}$ and
$t_{1},t_{2},\dots,t_{p-1}$ have been determined and the condition to
terminate is not reached (i.e., $t_{1}+t_{2}+\cdots+t_{p-1}<k_{i}+l-q$). We next determine
$c_{p}$.
For $0\leq k\leq h$, let $a_{k}:=\sum_{j=0}^{k}t_{j}$.
###### Claim 3.9.
There exists a unique integer $c_{p}$, $1\leq c_{p}\leq c_{p-1}-1$, satisfying
the following two conditions.
(i) If $F_{a_{p-1}}^{+}\overset{c_{p}}{\prec}F_{a_{p-1}+1}$, then
$\alpha(F_{a_{p-1}}^{+},F_{a_{p-1}+1})<\beta(F_{a_{p-1}}^{+},F_{a_{p-1}+1})$;
(ii) For any $1\leq j\leq c_{p-1}-c_{p}$ and $F^{\prime}$ satisfying
$F_{a_{p-1}}^{+}\overset{c_{p}+j}{\prec}F^{\prime}$, we have
$\alpha(F_{a_{p-1}}^{+},F^{\prime})\geq\beta(F_{a_{p-1}}^{+},F^{\prime}).$
As in Claim 3.8, after the $(p-1)$-th step, we have defined the following family:
$\mathcal{F}_{a_{p-2}}^{+}=\\{F_{a_{p-2}}^{+},F_{a_{p-2}+1},M_{a_{p-2}+2}^{(p-1)},\dots,M_{a_{p-1}}^{(p-1)},M_{a_{p-1}+1}^{(p-1)},\dots,G_{a_{p-2}+1}\\},$
where the sets of $\mathcal{F}_{a_{p-2}}^{+}$ satisfy
$F_{a_{p-2}}^{+}\overset{c_{p-1}}{\prec}F_{a_{p-2}+1}\overset{c_{p-1}}{\prec}M_{a_{p-2}+2}^{(p-1)}\overset{c_{p-1}}{\prec}\cdots\overset{c_{p-1}}{\prec}M_{a_{p-1}}^{(p-1)}\overset{c_{p-1}}{\prec}M_{a_{p-1}+1}^{(p-1)}\overset{c_{p-1}}{\prec}\cdots\overset{c_{p-1}}{\prec}G_{a_{p-2}+1}.$
Define
$F_{a_{p-2}+1}^{+},\,F_{a_{p-2}+2}^{+},\,\dots,\,F_{a_{p-1}}^{+},\,F_{a_{p-2}+1},\,F_{a_{p-2}+2},\,\dots,\,F_{a_{p-1}}$
as follows:
$F_{a_{p-2}}^{+}\overset{c_{p-1}+1}{\prec}F_{a_{p-2}+1}^{+}\overset{c_{p-1}+1}{\prec}F_{a_{p-2}+2}^{+}\overset{c_{p-1}+1}{\prec}\cdots\overset{c_{p-1}+1}{\prec}F_{a_{p-1}}^{+},$
and
$F_{a_{p-2}}^{+}\overset{c_{p-1}}{\prec}F_{a_{p-2}+1},\,\,F_{a_{p-2}+1}^{+}\overset{c_{p-1}}{\prec}F_{a_{p-2}+2},\,\,\dots,\,\,F_{a_{p-1}-1}^{+}\overset{c_{p-1}}{\prec}F_{a_{p-1}}.$
###### Proof of Claim 3.9.
First, we can see that $c_{p-1}\geq 2$. Indeed, if not, that is, if $c_{p-1}=1$,
then $t_{p-1}=d_{\mathcal{F}_{a_{p-2}}^{+}}^{\downarrow}=k_{i}+l-q-a_{p-2}$. On
the other hand, since $p-1<h$, we have $t_{p-1}<k_{i}+l-q-a_{p-2}$, a
contradiction. Let $F^{\prime}$ be the set satisfying
$F_{a_{p-1}}^{+}\overset{1}{\prec}F^{\prime}$. Then
$\alpha(F_{a_{p-1}}^{+},F^{\prime})=1<\beta(F_{a_{p-1}}^{+},F^{\prime})$ by
Proposition 2.19. Let $F^{\prime}$ be the set satisfying
$F_{a_{p-1}}^{+}\overset{c_{p-1}}{\prec}F^{\prime}$. Since
$M_{a_{p-1}}^{(p-1)}\overset{c_{p-1}}{\prec}M_{a_{p-1}+1}^{(p-1)}$ and $\max
F_{a_{p-1}}^{+}=\max M_{a_{p-1}}^{(p-1)}=q+a_{p-1}$, by Claim 3.1,
$\displaystyle\alpha(F_{a_{p-1}}^{+},F^{\prime})=\alpha(M_{a_{p-1}}^{(p-1)},M_{a_{p-1}+1}^{(p-1)})\geq\beta(M_{a_{p-1}}^{(p-1)},M_{a_{p-1}+1}^{(p-1)})=\beta(F_{a_{p-1}}^{+},F^{\prime}).$
Let $c_{p}$ be the maximum integer in $[1,c_{p-1}-1]$ such that if
$F_{a_{p-1}}^{+}\overset{c_{p}}{\prec}F_{a_{p-1}+1}$, then
$\alpha(F_{a_{p-1}}^{+},F_{a_{p-1}+1})<\beta(F_{a_{p-1}}^{+},F_{a_{p-1}+1}).$
Then $c_{p}$ satisfies both $(i)$ and $(ii)$. ∎
After $h$ steps, we obtain $k_{i}+l-q$ sets, namely $F_{j}$ for $1\leq j\leq
k_{i}+l-q$. We also defined
$G_{1},G_{a_{1}+1},G_{a_{2}+1},\dots,G_{a_{h-1}+1}$ as
$F_{1}\overset{c_{1}}{\longrightarrow}G_{1},F_{a_{1}+1}\overset{c_{2}}{\longrightarrow}G_{a_{1}+1},\dots,F_{a_{h-1}+1}\overset{c_{h}}{\longrightarrow}G_{a_{h-1}+1}$.
For each $1\leq j\leq h$ and $a_{j-1}+1\leq p\leq a_{j}$, we now define
$G_{p}$ as $F_{p}\overset{c_{j}}{\longrightarrow}G_{p}$.
###### Claim 3.10.
If $c_{1}+1=c$, then $\alpha(G,H)>\beta(G,H)$.
###### Proof.
If $c_{1}+1=c$, then $F_{1}^{+}=G$. Since
$F_{1}\overset{c_{1}}{\longrightarrow}G_{1}$, we have $G_{1}<G$. Then
$\alpha(F,G)=\alpha(F,F_{1})+\alpha(F_{1},G_{1})+\alpha(G_{1},G)$
and
$\beta(F,G)=\beta(F,F_{1})+\beta(F_{1},G_{1})+\beta(G_{1},G).$
By the choice of $c_{1}$ and $t_{1}\geq 1$, we have
$\alpha(F,F_{1})<\beta(F,F_{1})$. Due to $\max G=q+1$, applying Proposition
2.19, we have
$\beta(G_{1},G)=\sum_{j\neq i}{n-(q+1)\choose k_{j}-(q+1-k_{i})}.$
Since $\alpha(F,G)\geq\beta(F,G)$ and $\alpha(F,F_{1})<\beta(F,F_{1})$, we get
$\alpha(F_{1},G_{1})+1>\beta(F_{1},G_{1})+\beta(G_{1},G)$. Let $\widetilde{G}$
be the set satisfying $\widetilde{G}<H$. Then
$G\overset{c_{1}}{\longrightarrow}\widetilde{G}$. Since $\max G=\max F_{1}$
and $\max G_{1}=\max\widetilde{G}$, by Corollary 3.3, we obtain
$\alpha(F_{1},G_{1})=\alpha(G,\widetilde{G})$ and
$\beta(F_{1},G_{1})=\beta(G,\widetilde{G})$. Due to $\max H=q+2$, applying
Proposition 2.19, we have
$\beta(\widetilde{G},H)=\sum_{j\neq i}{n-(q+2)\choose
k_{j}-(q+2-k_{i})}<\beta(G_{1},G).$
So
$\displaystyle\alpha(G,H)$
$\displaystyle=\alpha(G,\widetilde{G})+\alpha(\widetilde{G},H)$
$\displaystyle=\alpha(F_{1},G_{1})+1$
$\displaystyle>\beta(F_{1},G_{1})+\beta(G_{1},G)$
$\displaystyle>\beta(G,\widetilde{G})+\beta(\widetilde{G},H)$
$\displaystyle=\beta(G,H).\qed$
By Claim 3.10, we may assume that $c_{1}<c-1$.
###### Claim 3.11.
Let $0\leq p\leq k_{i}+l-q-1$. Then
$\alpha(F_{p}^{+},F_{p+1}^{+})\geq\beta(F_{p}^{+},F_{p+1}^{+})$ and
$\alpha(F_{p}^{+},F_{p+1})<\beta(F_{p}^{+},F_{p+1}).$
###### Proof.
Without loss of generality, assume that $a_{j}\leq p\leq a_{j+1}-1$ for some
$0\leq j\leq h-1$. We next consider the family $\mathcal{F}_{a_{j}}^{+}$.
Recall that
$\mathcal{F}_{a_{j}}^{+}=\\{F_{a_{j}}^{+},F_{a_{j}+1},M_{a_{j}+2}^{(j+1)},\dots,M_{a_{j+1}}^{(j+1)},M_{a_{j+1}+1}^{(j+1)},\dots,G_{a_{j}+1}\\}$,
where
$F_{a_{j}}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+1}\overset{c_{j+1}}{\prec}M_{a_{j}+2}^{(j+1)}\overset{c_{j+1}}{\prec}\cdots\overset{c_{j+1}}{\prec}M_{a_{j+1}}^{(j+1)}\overset{c_{j+1}}{\prec}M_{a_{j+1}+1}^{(j+1)}\overset{c_{j+1}}{\prec}\dots\overset{c_{j+1}}{\prec}G_{a_{j}+1},$
and $\max G_{a_{j}+1}=n.$ We also have the following relations
$F_{a_{j}}^{+}\overset{c_{j+1}+1}{\prec}F_{a_{j}+1}^{+}\overset{c_{j+1}+1}{\prec}F_{a_{j}+2}^{+}\overset{c_{j+1}+1}{\prec}\cdots\overset{c_{j+1}+1}{\prec}F_{a_{j+1}}^{+},$
$F_{a_{j}}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+1},\,F_{a_{j}+1}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+2},\,\dots,\,F_{a_{j+1}-1}^{+}\overset{c_{j+1}}{\prec}F_{a_{j+1}}.$
By the choice of $c_{j+1}$, we have
$\alpha(F_{a_{j}}^{+},F_{a_{j}+1}^{+})\geq\beta(F_{a_{j}}^{+},F_{a_{j}+1}^{+})$.
Since $F_{a_{j}}^{+}\overset{c_{j+1}+1}{\prec}F_{a_{j}+1}^{+}$ and
$c_{j+1}+1\leq c_{1}+1<c$, by induction hypothesis, we have
$\alpha(F_{p}^{+},F_{p+1}^{+})\geq\beta(F_{p}^{+},F_{p+1}^{+}).$ (28)
By the definition of $t_{j+1}$, since
$F_{a_{j}}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+1}$, for each $a_{j}\leq p\leq
a_{j+1}-1$, we get
$\displaystyle\alpha(F_{a_{j}}^{+},F_{a_{j}+1})<\beta(F_{a_{j}}^{+},F_{a_{j}+1}),$
(29)
$\displaystyle\alpha(F_{a_{j}+1},M_{a_{j}+2}^{(j+1)})<\beta(F_{a_{j}+1},M_{a_{j}+2}^{(j+1)}).$
Moreover, for each $a_{j}+2\leq u\leq a_{j+1}-1$, we get
$\alpha(M_{u}^{(j+1)},M_{u+1}^{(j+1)})<\beta(M_{u}^{(j+1)},M_{u+1}^{(j+1)}).$
(30)
Thus Claim 3.11 holds for $p=a_{j}$.
Next we consider $a_{j}+1\leq p\leq a_{j+1}-1$. Note that
$F_{a_{j}+1}\overset{c_{j+1}}{\prec}M_{a_{j}+2}^{(j+1)},\,F_{a_{j}+1}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+2}$
and $\max F_{a_{j}+1}=\max F_{a_{j}+1}^{+}$. Additionally, for $a_{j}+2\leq
p\leq a_{j+1}-1$, we have
$M_{p}^{(j+1)}\overset{c_{j+1}}{\prec}M_{p+1}^{(j+1)},F_{p}^{+}\overset{c_{j+1}}{\prec}F_{p+1}$
and $\max M_{p}^{(j+1)}=\max F_{p}^{+}$. So Claim 3.1 yields
$\alpha(F_{p}^{+},F_{p+1})=\alpha(M_{p}^{(j+1)},M_{p+1}^{(j+1)})~{}~{}\text{and}~{}~{}\beta(F_{p}^{+},F_{p+1})=\beta(M_{p}^{(j+1)},M_{p+1}^{(j+1)}).$
Hence, for each $a_{j}+1\leq p\leq a_{j+1}-1$, by (30), we conclude that
$\alpha(F_{p}^{+},F_{p+1})<\beta(F_{p}^{+},F_{p+1}).$ (31)
The proof of Claim 3.11 is complete. ∎
###### Claim 3.12.
$\max F_{p}=\max F_{p}^{+}=q+p$ for all $1\leq p\leq k_{i}+l-q$.
###### Proof.
Let $a_{j}+1\leq p\leq a_{j+1}$ for some $0\leq j\leq h-1$. Then
$F_{p-1}^{+}\overset{c_{j+1}}{\prec}F_{p}$ and
$F_{p-1}^{+}\overset{c_{j+1}+1}{\prec}F_{p}^{+}$, so
$\max F_{p}=\max F_{p}^{+}.$ (32)
We next prove that $\max F_{p}=q+p$. For $j=0$, we have $1\leq p\leq t_{1}$.
Recall that
$F_{0}^{+}\overset{c_{1}}{\prec}F_{1},F_{1}^{+}\overset{c_{1}}{\prec}F_{2},\dots,F_{t_{1}-1}^{+}\overset{c_{1}}{\prec}F_{t_{1}}$.
By (32), $\max F_{0}^{+}=q$ implies $\max F_{p}=q+p,$ as desired. Assume that it
holds for all $j^{\prime}\leq j-1$; we prove that it holds for $j$. Recall
that
$F_{a_{j}}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+1},\,F_{a_{j}+1}^{+}\overset{c_{j+1}}{\prec}F_{a_{j}+2},\,\dots,\,F_{a_{j+1}-1}^{+}\overset{c_{j+1}}{\prec}F_{a_{j+1}}$.
By induction hypothesis, $\max F_{a_{j}}^{+}=q+a_{j}$, then $\max
F_{a_{j}+1}=q+a_{j}+1,\dots,\max F_{a_{j+1}}=q+a_{j+1}$, as desired. ∎
###### Claim 3.13.
Let $1\leq p\leq k_{i}+l-q$. Then
$\alpha(F_{p},F_{p}^{+})-\beta(F_{p},G_{p})>\sum_{j\neq i}{n-(q+p)\choose
k_{j}-(q+p-k_{i})}$.
###### Proof.
By Claim 3.12, $\max F_{p}^{+}=q+p$. By our definition of $G_{p}$,
$G_{p}<F_{p}^{+}$. Applying Proposition 2.19, we get
$\beta(G_{p},F_{p}^{+})=\sum_{j\neq i}{n-(q+p)\choose k_{j}-(q+p-k_{i})}.$
By Claim 3.11, $\alpha(F_{p-1}^{+},F_{p}^{+})\geq\beta(F_{p-1}^{+},F_{p}^{+})$
and $\alpha(F_{p-1}^{+},F_{p})<\beta(F_{p-1}^{+},F_{p})$. On the other hand,
we have
$\alpha(F_{p-1}^{+},F_{p}^{+})=\alpha(F_{p-1}^{+},F_{p})+\alpha(F_{p},F_{p}^{+}),$
and
$\displaystyle\beta(F_{p-1}^{+},F_{p}^{+})$
$\displaystyle=\beta(F_{p-1}^{+},F_{p})+\beta(F_{p},G_{p})+\beta(G_{p},F_{p}^{+})$
$\displaystyle=\beta(F_{p-1}^{+},F_{p})+\beta(F_{p},G_{p})+\sum_{j\neq
i}{n-(q+p)\choose k_{j}-(q+p-k_{i})}.$
Thus $\alpha(F_{p},F_{p}^{+})-\beta(F_{p},G_{p})>\sum_{j\neq i}{n-(q+p)\choose
k_{j}-(q+p-k_{i})}$. ∎
Define $H_{p}$ and $J_{p}$ for each $1\leq p\leq k_{i}+l-q+1$ (i.e.,
$a_{0}+1\leq p\leq a_{h}+1$) as follows.
$J_{1}=G\overset{c_{1}+1}{\prec}J_{2}\overset{c_{1}+1}{\prec}\cdots\overset{c_{1}+1}{\prec}J_{a_{1}}\overset{c_{2}+1}{\prec}J_{a_{1}+1}\overset{c_{2}+1}{\prec}\cdots\overset{c_{2}+1}{\prec}J_{a_{2}}\overset{c_{3}+1}{\prec}\cdots\overset{c_{h}+1}{\prec}J_{a_{h}}\overset{c_{h}+1}{\prec}J_{a_{h}+1},$
where the last set $J_{a_{h}+1}$ exists since Claim 3.12 implies that $\max
J_{a_{h}+1}=q+k_{i}+l-q+1\leq n$. Let $H_{p}$ be the set such that
$H_{p}<J_{p+1}$ in $\mathcal{R}$.
By the definition of $J_{p},1\leq p\leq k_{i}+l-q+1$, we get $\max J_{p}=q+p$.
Proposition 2.19 gives
$\beta(H_{p},J_{p+1})=\sum_{j\neq i}{n-(q+p+1)\choose k_{j}-(q+p+1-k_{i})}.$
(33)
###### Claim 3.14.
Let $1\leq p\leq k_{i}+l-q+1$. Then $\alpha(J_{p},H_{p})=\alpha(F_{p},G_{p})$
and $\beta(J_{p},H_{p})=\beta(F_{p},G_{p})$.
###### Proof.
By Claim 3.12 and $\max J_{p}=q+p$, we have $\max J_{p}=\max F_{p}$.
Trivially, $\max H_{p}=\max G_{p}=n$. By our definition, $J_{p}$ and $H_{p}$
are $c_{x}$-sequential for some $x$, and $F_{p}$ and $G_{p}$ are
$c_{x}$-sequential as well. It follows from Corollary 3.3 that
$\alpha(J_{p},H_{p})=\alpha(F_{p},G_{p})$ and
$\beta(J_{p},H_{p})=\beta(F_{p},G_{p})$. ∎
Accordingly,
$\displaystyle\alpha(J_{p},J_{p+1})-\beta(J_{p},H_{p})$
$\displaystyle=\alpha(J_{p},H_{p})+1-\beta(F_{p},G_{p})$
$\displaystyle=\alpha(F_{p},G_{p})+1-\beta(F_{p},G_{p})$
$\displaystyle=\alpha(F_{p},F_{p}^{+})-\beta(F_{p},G_{p})$
$\displaystyle>\sum_{j\neq i}{n-(q+p)\choose k_{j}-(q+p-k_{i})},$ (34)
where the first and second equalities hold by Claim 3.14 and the last
inequality holds by Claim 3.13. Furthermore,
$\displaystyle\alpha(J_{p},J_{p+1})-\beta(J_{p},J_{p+1})$
$\displaystyle=\alpha(J_{p},J_{p+1})-\beta(J_{p},H_{p})-\beta(H_{p},J_{p+1})$
$\displaystyle=\alpha(J_{p},J_{p+1})-\beta(J_{p},H_{p})-\sum_{j\neq
i}{n-(q+p+1)\choose k_{j}-(q+p+1-k_{i})}$ $\displaystyle>\sum_{j\neq
i}\left[{n-(q+p)\choose k_{j}-(q+p-k_{i})}-{n-(q+p+1)\choose
k_{j}-(q+p+1-k_{i})}\right],$ (35)
where the second equality holds by (33) and the last inequality holds by
(34).
Let $J_{n-q}$ be the set such that
$J_{a_{h}+1}\overset{c_{h}+1}{\longrightarrow}J_{n-q}$. In particular, if
$n=k_{i}+l+1$, then $J_{n-q}=J_{a_{h}+1}$.
###### Claim 3.15.
Let $1\leq p\leq k_{i}+l-q$. Then
$\alpha(J_{p},J_{n-q})-\beta(J_{p},J_{n-q})>\sum_{j\neq i}{n-(q+p)\choose
k_{j}-(q+p-k_{i})}.$
###### Proof.
Without loss of generality, let $a_{j-1}+1\leq p\leq a_{j}$ for some $1\leq
j\leq h$. By our definition,
$J_{p}\overset{c_{j}+1}{\prec}J_{p+1}\overset{c_{j}+1}{\prec}\cdots\overset{c_{j}+1}{\prec}J_{a_{j}}\overset{c_{j+1}+1}{\prec}J_{a_{j}+1}\overset{c_{j+1}+1}{\prec}\cdots\overset{c_{h}+1}{\prec}J_{a_{h}}\overset{c_{h}+1}{\prec}J_{a_{h}+1}\overset{c_{h}+1}{\prec}\cdots\overset{c_{h}+1}{\prec}J_{n-q}.$
(36)
Let $T_{1},T_{2},\dots,T_{Y}\in\mathcal{R}$ be the sets such that
$J_{a_{h}}<T_{1}<T_{2}<\dots<T_{Y}<J_{n-q}.$ By Claim 3.6, $\max T_{j}\geq\max
J_{a_{h}}+1=k_{i}+l+1$ holds for all $j\in[Y]$. By Proposition 2.19,
$\beta(J_{a_{h}},J_{n-q})=\beta(J_{a_{h}},T_{1})+\beta(T_{1},T_{2})+\cdots+\beta(T_{Y},J_{n-q})=0.$
Then (35) and (36) give
$\displaystyle\alpha(J_{p},J_{n-q})-\beta(J_{p},J_{n-q})$
$\displaystyle=\alpha(J_{p},J_{p+1})+\cdots+\alpha(J_{a_{h}-1},J_{a_{h}})+\alpha(J_{a_{h}},J_{n-q})$
$\displaystyle\quad-\beta(J_{p},J_{p+1})-\cdots-\beta(J_{a_{h}-1},J_{a_{h}})-\beta(J_{a_{h}},J_{n-q})$
$\displaystyle>\sum_{j\neq i}{n-(q+p)\choose k_{j}-(q+p-k_{i})}.$
∎
It is easy to see that $1\leq c_{1}+1-c_{h}<c-c_{h}$. If $c-c_{h}=2$, then
$h=1$ and $c_{1}+1=c-1$. By (36),
$J_{1}\overset{c_{1}+1}{\longrightarrow}J_{n-q}$ and $J_{n-q}<H$. By
Proposition 2.19 and $\max H=q+2$,
$\beta(J_{n-q},H)=\sum_{j\neq i}{n-(q+2)\choose k_{j}-(q+2-k_{i})}.$ (37)
Then by Claim 3.15,
$\displaystyle\alpha(G,H)-\beta(G,H)$
$\displaystyle=\alpha(J_{1},H)-\beta(J_{1},H)$
$\displaystyle=\alpha(J_{1},J_{n-q})+\alpha(J_{n-q},H)-\beta(J_{1},J_{n-q})-\beta(J_{n-q},H)$
$\displaystyle>\sum_{j\neq i}\left[{n-(q+1)\choose
k_{j}-(q+1-k_{i})}-{n-(q+2)\choose k_{j}-(q+2-k_{i})}\right]+1$
$\displaystyle>0,$
where the first inequality holds by Claim 3.15 and equation (37). As desired.
Next we assume that $c-c_{h}>2$.
Since $c_{h}<c_{h-1}<\cdots<c_{1}<c$ and $c-c_{h}>2$, we may define sequential
families $\mathcal{F}_{p}$, $1\leq p\leq c-c_{h}-2$, as follows. Let
$c_{d}-c_{h}+1\leq p\leq c_{d-1}-c_{h}$ for some $d=2,\dots,h$ or
$c_{1}-c_{h}+1\leq p\leq c-c_{h}-2$. We define
$\mathcal{F}_{p}:J_{a_{d-1}+1}\overset{c_{h}+1+p}{\prec}J_{a_{d-1}+2}^{(p)}\overset{c_{h}+1+p}{\prec}\cdots\overset{c_{h}+1+p}{\prec}J_{a_{d}}^{(p)}\overset{c_{h}+1+p}{\prec}J_{a_{d}+1}^{(p)}\overset{c_{h}+1+p}{\prec}\cdots\overset{c_{h}+1+p}{\prec}J_{n-q}^{(p)}.$
By our definition, for any $J_{j}^{(p)}\in\mathcal{F}_{p}$, we get
$\max J_{j}^{(p)}=q+j.$ (38)
Knowing that
$J_{a_{h-1}+1}\overset{c_{h}+1}{\prec}\cdots\overset{c_{h}+1}{\prec}J_{a_{h}}\overset{c_{h}+1}{\prec}J_{a_{h}+1}\overset{c_{h}+1}{\longrightarrow}J_{n-q}$,
$J_{a_{h-1}+1}\overset{c_{h}+2}{\prec}J_{a_{h-1}+2}^{(1)}$ and
$\alpha(J_{n-q},J_{a_{h-1}+2}^{(1)})=1$, we also denote
$J_{n-q,1}^{(1)},J_{n-q,2}^{(1)},\dots,J_{{n-q},t_{h}}^{(1)}$ as follows
$\displaystyle
J_{a_{h-1}+2}^{(1)}\overset{c_{h}+1}{\longrightarrow}J_{n-q,1}^{(1)},\,\,i.e.,\,\,J_{{n-q},1}^{(1)}<J_{a_{h-1}+3}^{(1)};$
$\displaystyle
J_{a_{h-1}+3}^{(1)}\overset{c_{h}+1}{\longrightarrow}J_{n-q,2}^{(1)},\,\,i.e.,\,\,J_{{n-q},2}^{(1)}<J_{a_{h-1}+4}^{(1)};$
$\displaystyle\quad\vdots$ $\displaystyle
J_{a_{h}+1}^{(1)}\overset{c_{h}+1}{\longrightarrow}J_{n-q,t_{h}}^{(1)},\,\,i.e.,\,\,J_{{n-q},t_{h}}^{(1)}<J_{a_{h}+2}^{(1)}.$
Consequently,
$\displaystyle\alpha(J_{a_{h-1}+1},J_{n-q}^{(1)})-\beta(J_{a_{h-1}+1},J_{n-q}^{(1)})$
$\displaystyle=\alpha(J_{a_{h-1}+1},J_{n-q})+\alpha(J_{n-q},J_{a_{h-1}+2}^{(1)})+\alpha(J_{a_{h-1}+2}^{(1)},J_{{n-q},1}^{(1)})+\cdots$
$\displaystyle\quad+\alpha(J_{{n-q},t_{h}-1}^{(1)},J_{a_{h}+1}^{(1)})+\alpha(J_{a_{h}+1}^{(1)},J_{n-q}^{(1)})-\beta(J_{a_{h-1}+1},J_{n-q})$
$\displaystyle\quad-\beta(J_{n-q},J_{a_{h-1}+2}^{(1)})-\beta(J_{a_{h-1}+2}^{(1)},J_{{n-q},1}^{(1)})-\cdots$
$\displaystyle\quad-\beta(J_{{n-q},t_{h}-1}^{(1)},J_{a_{h}+1}^{(1)})-\beta(J_{a_{h}+1}^{(1)},J_{n-q}^{(1)}).$
Applying Corollary 3.3, we get
$\displaystyle\alpha(J_{a_{h-1}+2}^{(1)},J_{{n-q},1}^{(1)})=\alpha(J_{a_{h-1}+2},J_{n-q}),\,\beta(J_{a_{h-1}+2}^{(1)},J_{{n-q},1}^{(1)})=\beta(J_{a_{h-1}+2},J_{n-q}),$
$\displaystyle\alpha(J_{a_{h-1}+3}^{(1)},J_{{n-q},2}^{(1)})=\alpha(J_{a_{h-1}+3},J_{n-q}),\,\beta(J_{a_{h-1}+3}^{(1)},J_{{n-q},2}^{(1)})=\beta(J_{a_{h-1}+3},J_{n-q}),$
$\displaystyle\quad\vdots$
$\displaystyle\alpha(J_{a_{h}+1}^{(1)},J_{{n-q},t_{h}}^{(1)})=\alpha(J_{a_{h}+1},J_{n-q}),\,\beta(J_{a_{h}+1}^{(1)},J_{{n-q},t_{h}}^{(1)})=\beta(J_{a_{h}+1},J_{n-q}).$
By Proposition 2.19 and (38),
$\displaystyle\beta(J_{n-q},J_{a_{h-1}+2}^{(1)})=\sum_{j\neq
i}{n-(a_{h-1}+2+q)\choose k_{j}-(a_{h-1}+2+q-k_{i})},$
$\displaystyle\beta(J_{{n-q},1}^{(1)},J_{a_{h-1}+3}^{(1)})=\sum_{j\neq
i}{n-(a_{h-1}+3+q)\choose k_{j}-(a_{h-1}+3+q-k_{i})},$
$\displaystyle\quad\vdots$
$\displaystyle\beta(J_{{n-q},t_{h}-1}^{(1)},J_{a_{h}+1}^{(1)})=\beta(J_{a_{h}+1}^{(1)},J_{n-q}^{(1)})=0.$
Then by Claim 3.15, we have
$\displaystyle\alpha(J_{a_{h-1}+1},J_{n-q})-\beta(J_{a_{h-1}+1},J_{n-q})$ (39)
$\displaystyle>\sum_{j\neq i}\left[{n-(a_{h-1}+1+q)\choose
k_{j}-(a_{h-1}+1+q-k_{i})}-{n-(a_{h-1}+2+q)\choose
k_{j}-(a_{h-1}+2+q-k_{i})}\right.$
$\displaystyle\quad\left.+{n-(a_{h-1}+2+q)\choose
k_{j}-(a_{h-1}+2+q-k_{i})}-\cdots+{n-(a_{h}+1+q)\choose
k_{j}-(a_{h}+1+q-k_{i})}\right]$ $\displaystyle=\sum_{j\neq
i}{n-(a_{h-1}+1+q)\choose k_{j}-(a_{h-1}+1+q-k_{i})}.$ (40)
Using the same argument, we get
$\displaystyle\alpha(J_{a_{h-1}+2}^{(1)},J_{n-q}^{(1)})-\beta(J_{a_{h-1}+2}^{(1)},J_{n-q}^{(1)})>\sum_{j\neq
i}{n-(a_{h-1}+2+q)\choose k_{j}-(a_{h-1}+2+q-k_{i})},$ (41)
$\displaystyle\quad\vdots$
$\displaystyle\alpha(J_{a_{h}}^{(1)},J_{n-q}^{(1)})-\beta(J_{a_{h}}^{(1)},J_{n-q}^{(1)})>\sum_{j\neq
i}{n-(a_{h}+q)\choose k_{j}-(a_{h}+q-k_{i})},$ (42)
$\displaystyle\alpha(J_{a_{h}+1}^{(1)},J_{n-q}^{(1)})-\beta(J_{a_{h}+1}^{(1)},J_{n-q}^{(1)})>0.$
(43)
###### Claim 3.16.
Let $1\leq k\leq c-c_{h}-2$ and $D\in\mathcal{F}_{k}$ with $\max D=p+q$. Then
$\alpha(D,J_{n-q}^{(k)})-\beta(D,J_{n-q}^{(k)})>\sum_{j\neq i}{n-(p+q)\choose
k_{j}-(p+q-k_{i})}.$
###### Proof.
We proceed by induction on $k$. For $k=1$, the claim follows from (39)–(43). Assume
that it holds for $\mathcal{F}_{j}$, $j\in[1,c-c_{h}-3]$; we prove that it
holds for $\mathcal{F}_{j+1}$. Define
$\widetilde{J}_{2}^{(j)},\dots,\widetilde{J}_{t_{1}}^{(j)},\widetilde{J}_{t_{1}+1}^{(j)},\dots,\widetilde{J}_{n-q}^{(j)}$ as follows:
$\widetilde{J}_{p}^{(j)}<J_{p}^{(j)},p=2,\dots,n$. Note that
$\widetilde{J}_{2}^{(j)}=J_{n-q}^{(j-1)}$. By induction hypothesis, and $\max
J_{1}=q+1$, we have
$\displaystyle\alpha(J_{1},\widetilde{J}_{2}^{(j)})-\beta(J_{1},\widetilde{J}_{2}^{(j)})$
$\displaystyle=\alpha(J_{1},J_{n-q}^{(j-1)})-\beta(J_{1},J_{n-q}^{(j-1)})$
$\displaystyle>\sum_{j\neq i}{n-(q+1)\choose k_{j}-(q+1-k_{i})}.$ (44)
And for $2\leq p\leq n-q$, we have
$\alpha(J_{p}^{(j-1)},\widetilde{J}_{p}^{(j)})-\beta(J_{p}^{(j-1)},\widetilde{J}_{p}^{(j)})>\sum_{j\neq
i}{n-(q+p)\choose k_{j}-(q+p-k_{i})}.$ (45)
Recall that for $2\leq p\leq n-q-1$, we have
$J_{p}^{(j)}\overset{c_{h}+j}{\longrightarrow}\widetilde{J}_{p+1}^{(j)},\,\,J_{p}^{(j-1)}\overset{c_{h}+j}{\longrightarrow}J_{n-q}^{(j-1)}$
and
$\max J_{p}^{(j)}=\max
J_{p}^{(j-1)}=q+p,\,\,\max\widetilde{J}_{p+1}^{(j)}=\max J_{n-q}^{(j-1)}=n.$
Applying Corollary 3.3, we get
$\alpha(J_{p}^{(j)},{J}_{p+1}^{(j)})=\alpha(J_{p}^{(j-1)},J_{n-q}^{(j-1)})$
and
$\beta(J_{p}^{(j)},{J}_{p+1}^{(j)})=\beta(J_{p}^{(j-1)},J_{n-q}^{(j-1)}).$
By Proposition 2.19 and inequalities (44), (45), if $2\leq p\leq n-q-1$, then
$\displaystyle\alpha(J_{p}^{(c_{h}+j-1)},J_{n-q}^{(c_{h}+j-1)})-\beta(J_{p}^{(c_{h}+j-1)},J_{n-q}^{(c_{h}+j-1)})$
$\displaystyle=\alpha(J_{p}^{(c_{h}+j-1)},\widetilde{J}_{p+1}^{(c_{h}+j-1)})+\alpha(\widetilde{J}_{p+1}^{(c_{h}+j-1)},J_{p+1}^{(c_{h}+j-1)})+\alpha(J_{p+1}^{(c_{h}+j-1)},\widetilde{J}_{p+2}^{(c_{h}+j-1)})+\cdots$
$\displaystyle\quad+\alpha(\widetilde{J}_{n-q}^{(c_{h}+j-1)},J_{n-q}^{(c_{h}+j-1)})-\beta(J_{p}^{(c_{h}+j-1)},\widetilde{J}_{p+1}^{(c_{h}+j-1)})-\beta(\widetilde{J}_{p+1}^{(c_{h}+j-1)},J_{p+1}^{(c_{h}+j-1)})$
$\displaystyle\quad-\beta(J_{p+1}^{(c_{h}+j-1)},\widetilde{J}_{p+2}^{(c_{h}+j-1)})-\cdots-\beta(\widetilde{J}_{n-q}^{(c_{h}+j-1)},J_{n-q}^{(c_{h}+j-1)})$
$\displaystyle>\sum_{j\neq i}\left[{n-(q+p)\choose
k_{j}-(q+p-k_{i})}-{n-(q+p+1)\choose k_{j}-(q+p+1-k_{i})}+{n-(q+p+1)\choose
k_{j}-(q+p+1-k_{i})}\right.$
$\displaystyle\quad\left.-\cdots+{n-(k_{i}+l)\choose
k_{j}-(k_{i}+l-k_{i})}-{n-(k_{i}+l+1)\choose k_{j}-(k_{i}+l+1-k_{i})}\right]$
$\displaystyle=\sum_{j\neq i}{n-(q+p)\choose k_{j}-(q+p-k_{i})},$
where the second inequality follows from (45) and Proposition 2.19.
For $p=1$, by (44) and using the same argument as above, we get
$\alpha(J_{1},J_{n-q}^{(c_{h}+j-1)})-\beta(J_{1},J_{n-q}^{(c_{h}+j-1)})>\sum_{j\neq
i}{n-(q+1)\choose k_{j}-(q+1-k_{i})}.$ (46)
∎
Next, we are going to complete the proof of Lemma 2.11.
Recall that $J_{n-q}^{(c-3)}<H$ and $\max H=q+2,G=J_{1}$, so
$\displaystyle\alpha(G,H)-\beta(G,H)$
$\displaystyle=\alpha(G,J_{n-q}^{(c-3)})+\alpha(J_{n-q}^{(c-3)},H)-\beta(G,J_{n-q}^{(c-3)})-\beta(J_{n-q}^{(c-3)},H)$
$\displaystyle>\sum_{j\neq i}\left[{n-(q+1)\choose
k_{j}-(q+1-k_{i})}-{n-(q+2)\choose k_{j}-(q+2-k_{i})}\right]+1$
$\displaystyle>0,$
where the second inequality follows from (46) and Proposition 2.19. The proof
of Lemma 2.11 is complete.
### 3.2 Proof of Lemma 2.12
Recall that $\mathcal{R}_{k}:=\\{R\in\mathcal{R}:[n-k+1,n]\subset
R\\},\mathcal{R}(k):=\\{R\setminus[n-k+1,n]:R\in\mathcal{R}_{k}\\}$ for
$k\in[k_{i}-1]$. By Remark 2.17 and using the same argument as Claim 3.1, we
have the following claim.
###### Claim 3.17.
Let $1\leq j\leq k_{i}-1$ and $1\leq c\leq k_{i}-j$. Let
$F,H,F^{\prime},H^{\prime}\in\mathcal{R}(j)$ and
$F\overset{c}{\prec}H,F^{\prime}\overset{c}{\prec}H^{\prime}$. If $\max F=\max
F^{\prime}$, then $\alpha(F,H)=\alpha(F^{\prime},H^{\prime})$ and
$\beta(F,H)=\beta(F^{\prime},H^{\prime})$.
###### Claim 3.18.
Let $F_{1}<G_{1},F_{2}<G_{2}$ in $\mathcal{R}(j),j\in[0,k_{i}-1]$ with $\max
G_{1}=\max G_{2}$. Then $\alpha(F_{1},G_{1})=\alpha(F_{2},G_{2})$ and
$\beta(F_{1},G_{1})=\beta(F_{2},G_{2})$.
###### Proof.
For $j=0$, we can see that $\alpha(F_{1},G_{1})=\alpha(F_{2},G_{2})=1$, then
Claim 3.18 follows from Proposition 2.19. Now assume that $j\geq 1$. Let
$F^{\prime}_{1}=F_{1}\sqcup\\{n-j+1,\dots,n\\},F^{\prime}_{2}=F_{2}\sqcup\\{n-j+1,\dots,n\\},G^{\prime}_{1}=G_{1}\sqcup\\{n-j+1,\dots,n\\},G^{\prime}_{2}=G_{2}\sqcup\\{n-j+1,\dots,n\\},$
then
$F^{\prime}_{1},F^{\prime}_{2},G^{\prime}_{1},G^{\prime}_{2}\in\mathcal{R}$.
Let $H_{1}$ and $H_{2}$ be the sets such that $F^{\prime}_{1}<H_{1}$ and
$F^{\prime}_{2}<H_{2}$ in $\mathcal{R}$. We get
$H_{1}\overset{j}{\longrightarrow}G^{\prime}_{1}$ and
$H_{2}\overset{j}{\longrightarrow}G^{\prime}_{2}$. By the definitions of
$F_{1},G_{1},F_{2}$ and $G_{2}$, we have $\max H_{1}=\max H_{2}$ and $\max
G^{\prime}_{1}=\max G^{\prime}_{2}$. So Corollary 3.3 gives
$\alpha(F^{\prime}_{1},G^{\prime}_{1})=\alpha(F^{\prime}_{2},G^{\prime}_{2})$
and
$\beta(F^{\prime}_{1},G^{\prime}_{1})=\beta(F^{\prime}_{2},G^{\prime}_{2})$,
that is $\alpha(F_{1},G_{1})=\alpha(F_{2},G_{2})$ and
$\beta(F_{1},G_{1})=\beta(F_{2},G_{2})$. ∎
It is easy to check the following corollary by an argument similar to that of
Corollary 3.3.
###### Corollary 3.19.
Let $c\in[k_{i}-j]$ and $F,G,F^{\prime},G^{\prime}\in\mathcal{R}(j)$. If $F,G$
are $c$-sequential, $F^{\prime},G^{\prime}$ are $c$-sequential satisfying
$\max F=\max F^{\prime}$ and $\max G=\max G^{\prime}$, then
$\alpha(F,G)=\alpha(F^{\prime},G^{\prime})$ and
$\beta(F,G)=\beta(F^{\prime},G^{\prime})$.
###### Proof.
We prove Lemma 2.12 by induction on $j$. It holds for $j=0$ by Lemma 2.11.
Suppose it holds for $j\in[0,k_{i}-2]$; we are going to prove that it holds for
$j+1$. Let $F,G,H\in\mathcal{R}(j+1)$ with
$F\overset{c}{\prec}G\overset{c}{\prec}H$ and $\alpha(F,G)>\beta(F,G)$. We are
going to apply induction assumption to show $\alpha(G,H)>\beta(G,H)$. Let
$F^{\prime}=F\sqcup\\{\max F+1\\},G^{\prime}=G\sqcup\\{\max G+1\\}$ and
$H^{\prime}=H\sqcup\\{\max H+1\\}$. Then
$F^{\prime},G^{\prime},H^{\prime}\in\mathcal{R}(j)$. Moreover,
$F^{\prime}\overset{c+1}{\prec}G^{\prime}\overset{c+1}{\prec}H^{\prime}$ in
$\mathcal{R}(j)$.
Let $G_{1},G_{2},H_{1},F_{1},F_{2}$ be sets satisfying
$G_{1}<G^{\prime}<G_{2},H_{1}<H^{\prime},F^{\prime}\overset{c}{\prec}F_{1}$
and $F^{\prime}<F_{2}$. Let
$\widetilde{F}=F\sqcup\\{n-j\\},\widetilde{G}=G\sqcup\\{n-j\\}$,
$\widetilde{H}=H\sqcup\\{n-j\\}$. Then
$\widetilde{F},\widetilde{G},\widetilde{H}\in\mathcal{R}(j)$. We get
$F^{\prime}<F_{2}\overset{1}{\longrightarrow}\widetilde{F}<F_{1}\overset{c}{\longrightarrow}G_{1}<G^{\prime}<G_{2}\overset{1}{\longrightarrow}\widetilde{G}\,\,\text{and}\,\,G^{\prime}\overset{c}{\longrightarrow}H_{1}<H^{\prime}.$
(47)
###### Claim 3.20.
$\alpha(F_{1},G_{1})>\beta(F_{1},G_{1}).$
###### Proof.
Suppose on the contrary that $\alpha(F_{1},G_{1})\leq\beta(F_{1},G_{1}).$ By
(47),
$\displaystyle\alpha(\widetilde{F},\widetilde{G})=\alpha(\widetilde{F},F_{1})+\alpha(F_{1},G_{1})+\alpha(G_{1},G^{\prime})+\alpha(G^{\prime},\widetilde{G}),$
$\displaystyle\beta(\widetilde{F},\widetilde{G})=\beta(\widetilde{F},F_{1})+\beta(F_{1},G_{1})+\beta(G_{1},G^{\prime})+\beta(G^{\prime},\widetilde{G}).$
Note that $\alpha(F,G)\geq\beta(F,G)$ means
$\alpha(\widetilde{F},\widetilde{G})\geq\beta(\widetilde{F},\widetilde{G})$.
Since $\alpha(F_{1},G_{1})\leq\beta(F_{1},G_{1})$, then
$\alpha(\widetilde{F},F_{1})+\alpha(G_{1},G^{\prime})+\alpha(G^{\prime},\widetilde{G})\geq\beta(\widetilde{F},F_{1})+\beta(G_{1},G^{\prime})+\beta(G^{\prime},\widetilde{G}).$
(48)
Note that $\max F_{2}=\max G^{\prime}.$ By Claim 3.18, we have
$\beta(F^{\prime},F_{2})=\beta(G_{1},G^{\prime})$ and
$\alpha(F^{\prime},F_{2})=\alpha(G_{1},G^{\prime})$. Note that
$F_{2}\overset{1}{\longrightarrow}\widetilde{F},G^{\prime}\overset{1}{\longrightarrow}\widetilde{G}$,
$\max F_{2}=\max G^{\prime}$ and $\max\widetilde{F}=\max\widetilde{G}$, it
follows from Corollary 3.19 that
$\alpha(F_{2},\widetilde{F})=\alpha(G^{\prime},\widetilde{G})$ and
$\beta(F_{2},\widetilde{F})=\beta(G^{\prime},\widetilde{G}).$ Then
$\displaystyle\alpha(\widetilde{F},F_{1})+\alpha(G_{1},G^{\prime})+\alpha(G^{\prime},\widetilde{G})$
$\displaystyle=\alpha(\widetilde{F},F_{1})+\alpha(F^{\prime},F_{2})+\alpha(F_{2},\widetilde{F})=\alpha(F^{\prime},F_{1}).$
Similarly, we have
$\beta(\widetilde{F},F_{1})+\beta(G_{1},G^{\prime})+\beta(G^{\prime},\widetilde{G})=\beta(F^{\prime},F_{1}).$
So inequality (48) gives
$\alpha(F^{\prime},F_{1})\geq\beta(F^{\prime},F_{1})$.
Note that
$F^{\prime}\overset{c}{\prec}F_{1},F_{1}\overset{c}{\longrightarrow}G_{1}\in\mathcal{R}(j),c\in[k_{i}-j]$,
by the induction hypothesis, $\alpha(F_{1},G_{1})>\beta(F_{1},G_{1})$, which
contradicts our assumption. ∎
By (47), we have
$G^{\prime}\overset{c}{\longrightarrow}H_{1},F_{1}\overset{c}{\longrightarrow}G_{1},\max
G^{\prime}=\max F_{1},\max H_{1}=\max G_{1}$, by Corollary 3.19 and Claim
3.20, we get
$\alpha(G^{\prime},H_{1})>\beta(G^{\prime},H_{1}).$ (49)
Since $G^{\prime}<G_{2}$ and $H_{1}<H^{\prime}$ in $\mathcal{R}(j)$, by Claim
3.18, $\alpha(G^{\prime},G_{2})=\alpha(H_{1},H^{\prime})$ and
$\beta(G^{\prime},G_{2})=\beta(H_{1},H^{\prime})$. Then
$f(G_{2})<f(H^{\prime})$ following from (49). Recall that
$G_{2}\overset{1}{\longrightarrow}\widetilde{G}$ and
$H^{\prime}\overset{1}{\longrightarrow}\widetilde{H}$. Hence,
$f(\widetilde{G})<f(\widetilde{H})$ by applying Corollary 3.19. This implies
$\alpha(G,H)>\beta(G,H)$, as desired. The proof of Lemma 2.12 is complete.
∎
### 3.3 Proofs of Lemma 2.13 and Lemma 2.14
We only give the proof of Lemma 2.13, Lemma 2.14 can be proved by the same
argument.
###### Proof of Lemma 2.13.
Since $f(\\{2,3,\dots,j\\})\leq f(\\{2,3,\dots,j-1\\})$, we have
$\alpha(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})\geq\beta(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\}).$
(50)
We need the following claim.
###### Claim 3.21.
$\displaystyle\alpha(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})=\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\}),$
$\displaystyle\beta(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})=\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\}).$
###### Proof of Claim 3.21.
Note that the sets in
$\mathcal{L}([n],\\{2,3,\dots,j-1\\},k_{i})\setminus\mathcal{L}([n],\\{2,3,\dots,j\\},k_{i})$
are the $k_{i}$-sets containing $\\{2,3,\dots,j-1,j\\}$ but containing neither
$\\{1\\}$ nor $\\{j-1\\}$. Then we can see that
$\displaystyle\alpha(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})$
$\displaystyle=|\mathcal{L}([n],\\{2,3,\dots,j-1\\},k_{i})|-|\mathcal{L}([n],\\{2,3,\dots,j\\},k_{i})|$
$\displaystyle={n-j\choose k_{i}-j+2}.$
Since the sets in
$\mathcal{L}([n],\\{2,3,\dots,j-2,j\\},k_{i})\setminus\mathcal{L}([n],\\{2,3,\dots,j-2,j-1\\},k_{i})$
are the $k_{i}$-sets containing $\\{2,3,\dots,j-2\\}$ but containing neither
$\\{1\\}$ nor $\\{j-1\\}$, we also get
$\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\})={n-j\choose k_{i}-j+2}.$
So we have
$\alpha(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})=\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\}),$
$\displaystyle\beta(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})$
$\displaystyle=\sum_{p\neq i}\left[{n-2\choose k_{p}-2}+\cdots+{n-j\choose
k_{p}-2}-{n-2\choose k_{p}-2}-\right.$
$\displaystyle\quad\left.\cdots-{n-(j-1)\choose k_{p}-2}\right]$
$\displaystyle=\sum_{p\neq i}{n-j\choose k_{p}-2},$
and
$\displaystyle\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\})$
$\displaystyle=\sum_{p\neq i}\left[{n-2\choose k_{p}-2}+\cdots+{n-(j-1)\choose
k_{p}-2}\right.$ $\displaystyle\quad\left.-{n-2\choose
k_{p}-2}-\cdots-{n-(j-2)\choose k_{p}-2}-{n-j\choose k_{p}-3}\right]$
$\displaystyle=\sum_{p\neq i}{n-j\choose k_{p}-2}.$
Thus, we get
$\beta(\\{2,3,\dots,j\\},\\{2,3,\dots,j-1\\})=\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\}).$
This completes the proof. ∎
By (50) and Claim 3.21, we have
$\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\})\geq\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,j\\}).$
By Lemma 2.12, we have
$\displaystyle\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,n-k_{i}+j-4\\})$
$\displaystyle>\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2,n-k_{i}+j-4\\}),$
that is,
$\alpha(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2\\})>\beta(\\{2,3,\dots,j-1\\},\\{2,3,\dots,j-2\\}),$
or equivalently,
$f(\\{2,3,\dots,j-1\\})<f(\\{2,3,\dots,j-2\\}),$
as desired. ∎
## 4 Acknowledgements
We are thankful to the two reviewers for their insightful comments, which helped
improve the presentation. This research is supported by the National Natural
Science Foundation of China (Grant No. 11931002).
# Adaptive Contrast Test for Dose-Response Studies and Modeling
Masahiro Kojima (Kyowa Kirin Co., Ltd.; The Graduate University for Advanced
Studies)
###### Abstract
We propose a powerful adaptive contrast test whose ordinal-constraint contrast
coefficients are determined by the observed responses. The adaptive contrast
test can be performed using easily calculated contrast coefficients and existing
statistical software. We provide sample SAS program code for the analysis and
for the power calculation of the adaptive contrast test. After the adaptive
contrast test shows a statistically significant dose-response, we select the
best dose-response model from multiple candidate dose-response models. Based on
the best model, we identify a recommended dose. We demonstrate the adaptive
contrast test on sample data. In addition, we show the calculation of the
coefficients, the test statistic, and the recommended dose for an actual study.
We perform a simulation study with eleven scenarios to evaluate the performance
of the adaptive contrast test.
We confirmed a statistically significant dose-response for the sample data and
the actual study. In the simulation study, the adaptive contrast test had higher
power in most scenarios compared to the conventional method. In addition, the
type 1 error rate of the adaptive contrast test was maintained at the
significance level when there was no difference between the treatment groups.
We conclude that the adaptive contrast test can be applied without problems to
dose-response studies.
## 1 Introduction
A primary objective of a dose-response trial is to verify a statistically
significant dose-response relationship. After the dose-response is confirmed, a
recommended dose is selected based on efficacy, safety, pharmacokinetics,
manufacturing efficiency, and so on. Various analysis methods have been proposed
to confirm a dose-response relationship. For example, some methods identify a
recommended dose according to the dose-response from the viewpoint of safety [1,
2, 3, 4, 5, 6]. Methods have also been proposed to identify a recommended dose
from the perspective of both safety and efficacy [7, 8, 9]. In this paper, we
consider an analysis method that verifies a statistically significant
dose-response in order to establish proof-of-concept (PoC) in terms of efficacy.
Various analyses to confirm a dose-response have been proposed [10, 11, 12]. In
particular, multiple comparison procedures with modeling techniques (MCP-Mod)
have been used in various clinical trials [13, 14, 15, 16, 17, 18, 19, 20, 21].
In the MCP part of MCP-Mod, contrast coefficients are derived from multiple
dose-response models, and a statistically significant dose-response is confirmed
by contrast tests adjusted for multiplicity. After the dose-response is
confirmed, in the Mod part, a dose-response model is selected using the Akaike
information criterion (AIC) or Tmax. A recommended dose is then selected based
on a clinically meaningful difference from the control group. However,
calculating the contrast coefficients from the dose-response models and handling
the multivariate t-distribution to adjust for multiplicity is cumbersome. In
particular, the analysis cannot be performed with existing statistical analysis
software (SAS) procedures. SAS is commonly used in new drug applications. In the
R software, there is an MCP-Mod package. However, the U.S. Food and Drug
Administration (FDA), for example, requires that the MCP-Mod package follow the
guidelines of the General Principles of Software Validation. We consider it not
easy to demonstrate to the FDA that the MCP-Mod package complies with the
guidelines and that there is no problem with using the MCP-Mod package.
In this paper, we propose a simple adaptive contrast test whose
ordinal-constraint contrast coefficients are determined by the observed
responses. The adaptive contrast test can be performed using easily calculated
contrast coefficients and existing statistical software. We provide sample SAS
program code for the analysis and for the power calculation of the adaptive
contrast test. After the adaptive contrast test shows a statistically
significant dose-response, we select the best dose-response model from multiple
candidate dose-response models. Based on the best model, we identify a
recommended dose. We demonstrate the adaptive contrast test on the sample data
given by Bretz et al. [10]. In addition, we show the calculation of the
coefficients, the test statistic, and the recommended dose for the actual study
by Akizawa et al. [22]. We perform a simulation study to evaluate the
performance of the adaptive contrast test compared to MCP-Mod.
This paper is organized as follows. Chapter 2 introduces the adaptive contrast
test and the model selection, demonstrates the analyses of the sample data and
the actual study, and describes the configuration of the simulation study.
Chapter 3 presents the simulation results. Chapter 4 is the discussion. Chapter
5 provides sample SAS program code for the power calculation and the analysis.
## 2 Methods
We consider a randomized, placebo-controlled, multicenter, parallel-group,
dose-finding study. The number of arms including the placebo group is $k$, and
the number of patients treated in arm $i$ is $n_{i}$ $(i=1,2,\ldots,k)$. The
subscript "1" of $n_{1}$ refers to the placebo group. The observed responses are
${\overline{\text{\boldmath$Y$}}}=({\overline{Y}}_{1},\ldots,{\overline{Y}}_{k})^{T}$,
such as the sample means or means adjusted by an analysis of covariance or a
mixed-effects model for repeated measures. We assume that a larger
${\overline{Y}}$ indicates a trend toward improvement. Even if the direction of
improvement is reversed, the analysis can be conducted without any problem; in
Section 2.3, we show an example in which a lower ${\overline{Y}}$ indicates
improvement.
The standard deviations are $S_{1},\ldots,S_{k}$. The statistical hypothesis
test for verifying proof of concept (PoC) is conducted by a contrast test.
The test statistic is
$T=\frac{\sum_{i=1}^{k}c_{i}{\overline{Y}}_{i}}{\sqrt{\left(\sum_{i=1}^{k}\frac{c_{i}^{2}}{n_{i}}\right)S^{2}}}$,
where $\sum_{i=1}^{k}c_{i}=0$ and
$S^{2}=\frac{1}{\sum_{i=1}^{k}n_{i}-k}\sum_{i=1}^{k}\left(n_{i}-1\right)S_{i}^{2}$.
When $T$ exceeds the upper 2.5% point of the distribution of the statistic, a
statistically significant dose-response is shown, and the PoC for the
investigational drug is accepted.
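As an illustration, the test statistic can be computed directly from summary
statistics in a short SAS DATA step. The following is a minimal sketch with
purely illustrative inputs (the means, standard deviations, group sizes, and
contrast coefficients below are assumptions, not values from any study):
/* Sketch: contrast test statistic T from summary statistics (k = 4 arms) */
data tstat;
  array m[4] _temporary_ (0.2 0.4 0.6 0.8);   /* group means (illustrative)   */
  array s[4] _temporary_ (1.5 1.5 1.5 1.5);   /* group SDs (illustrative)     */
  array n[4] _temporary_ (30 30 30 30);       /* group sizes (illustrative)   */
  array c[4] _temporary_ (-0.3 -0.1 0.1 0.3); /* contrast coefficients, sum=0 */
  num = 0; den = 0; ssq = 0; ntot = 0;
  do i = 1 to 4;
    num  = num + c[i]*m[i];
    den  = den + c[i]**2/n[i];
    ssq  = ssq + (n[i]-1)*s[i]**2;
    ntot = ntot + n[i];
  end;
  S2 = ssq/(ntot - 4);     /* pooled variance */
  T  = num/sqrt(den*S2);   /* contrast test statistic */
  put S2= T=;
run;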
### 2.1 Adaptive contrast test
We propose a novel adaptive contrast test. First, we impose an ordinal
constraint on the elements of the contrast coefficient vector $c$. For example,
if we assume that $c$ increases quasi-monotonically in a dose-dependent manner,
the ordinal constraint on $c$ is $c_{1}\leq c_{2}\leq c_{3}\leq c_{4}\leq\cdots\leq
c_{k}$. The constraint should be defined before the start of the study. Under
the constraint, each element of $c$ is calculated from the observed responses
${\overline{\text{\boldmath$Y$}}}$. $c_{1}$ is given by
$\frac{1}{k}\left((k-1){\overline{Y}}_{1}-\sum^{k}_{i=2}\underset{j\in{1,\ldots,i}}{\operatorname{max}}({\overline{Y}}_{j})\right)$
and
$c_{i}=\underset{j\in{1,\ldots,i}}{\operatorname{max}}({\overline{Y}}_{j})-\underset{j\in{1,\ldots,i-1}}{\operatorname{max}}({\overline{Y}}_{j})+c_{i-1}$,
$i=2,3,\ldots,k$. The maximum is taken so that the ordinal constraint
$c_{i}\leq c_{j}$ is satisfied for any dose $j$ larger than dose $i$. We show
examples of the observed responses ${\overline{\text{\boldmath$Y$}}}$ and the
contrast coefficients $c$ for four arms in Figure 1. The constraint on $c$ is
$c_{1}\leq c_{2}\leq c_{3}$; $c_{4}$ has no constraint because we want $c$ to
adapt flexibly to an umbrella shape. For cases 1 to 5, $c$ shows a trend
similar to that of the observed responses. For case 6, the observed response at
dose $3$ is lower than that at dose 2; hence, the contrast coefficient at dose
$3$ is the same as the coefficient at dose $2$. The formulas for each element
of $c$ in this example are
$c_{1}=\frac{1}{4}\left(3{\overline{Y}}_{1}-\underset{j\in{1,2}}{\operatorname{max}}({\overline{Y}}_{j})-\underset{j\in{1,2,3}}{\operatorname{max}}({\overline{Y}}_{j})-{\overline{Y}}_{4}\right)$,
$c_{2}=\underset{j\in{1,2}}{\operatorname{max}}({\overline{Y}}_{j})-{\overline{Y}}_{1}+c_{1}$,
$c_{3}=\underset{j\in{1,2,3}}{\operatorname{max}}({\overline{Y}}_{j})-\underset{j\in{1,2}}{\operatorname{max}}({\overline{Y}}_{j})+c_{2}$,
$c_{4}={\overline{Y}}_{4}-\underset{j\in{1,2,3}}{\operatorname{max}}({\overline{Y}}_{j})+c_{3}$.
As an example of the calculation of $c$ from actual response values, in case 1,
for ${\overline{\text{\boldmath$Y$}}}=(0.2,0.4,0.6,0.8)$, the elements of $c$
are $c_{1}=\frac{1}{4}(3\times 0.2-0.4-0.6-0.8)=-0.3$,
$c_{2}=0.4-0.2+(-0.3)=-0.1$, $c_{3}=0.6-0.4+(-0.1)=0.1$, and
$c_{4}=0.8-0.6+0.1=0.3$. In case 6, with
${\overline{\text{\boldmath$Y$}}}=(0.2,0.4,{\bf 0.2},0.6)$, the elements of $c$
are $c_{1}=\frac{1}{4}(3\times 0.2-0.4-{\bf 0.4}-0.6)=-0.2$,
$c_{2}=0.4-0.2+(-0.2)=0$, $c_{3}={\bf 0.4}-0.4+0=0$, and $c_{4}=0.6-{\bf
0.4}+0=0.2$. The test statistic $T$ is then calculated using the resulting $c$.
Because the test statistic with ordinally constrained $c$ does not follow the
t-distribution, we use the permutation method to calculate the p-value. The
permutation method is a design-based analysis method suitable for the
randomized designs used in dose-response studies (a randomized design is not a
random sampling design). If all response values are the same, or all the
investigational drug groups are lower than the placebo group, the test
statistic is set to zero.
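The coefficient calculation above is straightforward to implement. The following
SAS DATA step is a minimal sketch for the four-arm setting of Figure 1 (with the
umbrella-type constraint $c_{1}\leq c_{2}\leq c_{3}$ and $c_{4}$ unconstrained);
it reproduces case 6, and the resulting coefficients can be passed to a
permutation contrast test such as PROC MULTTEST (see Chapter 5):
/* Sketch: adaptive contrast coefficients for k = 4 arms (case 6 of Figure 1) */
data coef;
  array y[4]  _temporary_ (0.2 0.4 0.2 0.6); /* observed means (case 6)           */
  array mx[4] _temporary_;                   /* running maxima max(y1,...,yi)     */
  array c[4];                                /* adaptive contrast coefficients    */
  mx[1] = y[1];
  do i = 2 to 3;                             /* constrained part: c1<=c2<=c3      */
    mx[i] = max(mx[i-1], y[i]);
  end;
  mx[4] = y[4];                              /* c4 unconstrained (umbrella shape) */
  c[1] = (3*y[1] - mx[2] - mx[3] - mx[4])/4;
  do i = 2 to 4;
    c[i] = mx[i] - mx[i-1] + c[i-1];
  end;
  put c1= c2= c3= c4=;                       /* case 6: -0.2, 0, 0, 0.2           */
run;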
Figure 1: Examples of actual responses ${\overline{\text{\boldmath$Y$}}}$ and
coefficients $c$
#### 2.1.1 Modeling
When a statistically significant dose-response is verified, we are interested
in identifying the recommended dose from a dose-response model fitted to the
observed responses. We use the AIC to select a dose-response model. Candidate
models include the Linear, Log-Linear, Emax, Exponential, Quadratic, and
Logistic models. Once the best model is selected, the recommended dose can be
chosen using the minimum effective dose (MED), i.e., the smallest dose whose
difference from placebo is clinically meaningful. If a medical guideline
specifies a clinically meaningful change in response from baseline, the
recommended dose can instead be selected from the doses achieving that change,
rather than by looking at the difference from placebo.
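In SAS, each candidate model can be fitted with PROC NLMIXED (as in Chapter 5),
and the fitted models can then be compared by AIC. The following is a minimal
sketch that assumes the ODS FitStatistics tables AIC_Emax, AIC_Lld, AIC_L,
AIC_Exp, AIC_Q, and AIC_Log produced by those NLMIXED calls, and that the AIC
row carries the standard NLMIXED label "AIC (smaller is better)":
/* Sketch: collect the AICs of the candidate models and pick the smallest */
data aic_all;
  length model $20;
  set AIC_Emax (in=in1) AIC_Lld (in=in2) AIC_L   (in=in3)
      AIC_Exp  (in=in4) AIC_Q   (in=in5) AIC_Log (in=in6);
  where Descr = "AIC (smaller is better)";
  if      in1 then model = "Emax";
  else if in2 then model = "Linear log-dose";
  else if in3 then model = "Linear";
  else if in4 then model = "Exponential";
  else if in5 then model = "Quadratic";
  else if in6 then model = "Logistic";
run;
proc sort data=aic_all; by Value; run;  /* first row = best (smallest AIC) model */
proc print data=aic_all; run;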
### 2.2 Analysis of sample dataset
We demonstrate the adaptive contrast test using the sample dataset given by
Bretz et al. [10]. The sample dataset consists of data from 20 patients per
group in the placebo group and four drug groups (dosages: 0.05, 0.20, 0.60, and
1) in a randomized trial. The responses of each group follow a normal
distribution. The sample means are
${\overline{\text{\boldmath$Y$}}}=(0.345,0.457,0.810,0.934,0.949)^{T}$, and the
standard deviations are $S_{1}=0.517$, $S_{2}=0.490$, $S_{3}=0.740$,
$S_{4}=0.765$, and $S_{5}=0.947$. The elements of $c$ are
$c_{1}=\frac{1}{5}(4\times 0.345-0.457-0.810-0.934-0.949)=-0.354$,
$c_{2}=0.457-0.345+c_{1}=-0.242$, $c_{3}=0.810-0.457+c_{2}=0.111$,
$c_{4}=0.934-0.810+c_{3}=0.235$, and $c_{5}=0.949-0.934+c_{4}=0.250$. The test
statistic is $T=3.330$, and the one-sided p-value of the permutation method is
$p=0.0003$. Thus we have confirmed a statistically significant dose-response.
We next consider the model selection. Using the AIC, we choose the best model
among the Emax, Linear log-dose, Linear, Exponential, Quadratic, and Logistic
models in terms of prediction of the dose response. Because the AIC of the Emax
model is the smallest, the Emax model is selected as the dose-response model.
The fitted curve of each model is shown in Figure 2.
Figure 2: Dose-response models and observed responses (shown as dots)
### 2.3 Actual study (Phase 2b study of evocalcet)
We re-analyze the phase 2b study of evocalcet for hemodialysis patients with
secondary hyperparathyroidism using the published summary data. The objective of
the study was to confirm the PoC of efficacy in a randomized, double-blind,
placebo-controlled, multicenter, parallel-group, dose-finding design. The
patients were randomly assigned to placebo or to 0.5, 1, or 2 mg/day of
evocalcet for a 3-week treatment period. The primary endpoint was the percent
change from baseline in intact parathyroid hormone (PTH) at the end of
treatment. The primary analysis was a contrast test with seven contrast
patterns for the dose-response. The PoC was shown by a statistically
significant decrease in the percent change in intact PTH. The secondary
analysis calculated the sample mean and standard deviation of the percent
change from baseline in intact PTH for each group at the end of treatment. The
results (Mean$\pm$SD) of the percent change from baseline in intact PTH were
5.44$\pm$25.85% in placebo, -8.40$\pm$25.43% in 0.5 mg, -10.56$\pm$22.86% in 1
mg, and -20.16$\pm$34.23% in 2 mg. Because a lower value of the percent change
indicates a clinical improvement, the constraint on the contrast coefficients
is given as $c_{1}\geq c_{2}\geq c_{3}\geq c_{4}$. Calculating the coefficients
$c$ by the formula with the maximum function replaced by the minimum function
gives $c_{1}=13.86$, $c_{2}=0.02$, $c_{3}=-2.14$, and $c_{4}=-11.74$. The
pooled variance is $S^{2}=773.17$. Then
$T=\frac{13.86*5.44-0.02*8.40+2.14*10.56+11.74*20.16}{\sqrt{S^{2}*(13.86^{2}/28+0.02^{2}/30+(-2.14)^{2}/30+(-11.74)^{2}/28)}}=3.54$.
Because we do not have access to the individual data of this study, we show
upper percentage points of the t-distribution instead of a permutation p-value.
The upper $2.5\%$ point of the t-distribution is $1.98$, the upper $0.25\%$
point is $2.86$, and the upper $0.05\%$ point is $3.38$. Thus the result is
statistically significant even for a very small significance level. We also
computed the power based on the sample means and standard deviations of this
study, which was 92.04%. Based on these results, we expect that the permutation
method would also show statistical significance. Although this study reported
90.0% power for the multiple contrast test, the power of the adaptive contrast
test under the same sample-size setting was 91.7%.
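The coefficient calculation for this decreasing-response setting can be
reproduced from the summary statistics above. The following SAS sketch uses the
group sizes implied by the test-statistic formula (28, 30, 30, 28) and the
reported pooled variance; because it starts from rounded summary values, the
resulting statistic agrees with the reported value only approximately:
/* Sketch: min-based adaptive coefficients and T for the evocalcet summary data */
data evocalcet;
  array y[4]  _temporary_ (5.44 -8.40 -10.56 -20.16); /* mean % change in intact PTH */
  array n[4]  _temporary_ (28 30 30 28);              /* group sizes                 */
  array mn[4] _temporary_;                            /* running minima              */
  array c[4];
  mn[1] = y[1];
  do i = 2 to 4; mn[i] = min(mn[i-1], y[i]); end;
  c[1] = (3*y[1] - mn[2] - mn[3] - mn[4])/4;
  do i = 2 to 4; c[i] = mn[i] - mn[i-1] + c[i-1]; end;
  num = 0; den = 0;
  do i = 1 to 4;
    num = num + c[i]*y[i];
    den = den + c[i]**2/n[i];
  end;
  S2 = 773.17;                                        /* reported pooled variance    */
  T  = num/sqrt(den*S2);
  put c1= c2= c3= c4= T=;  /* c = (13.86, 0.02, -2.14, -11.74); T is about 3.5 */
run;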
We next consider the model selection. The Emax model is
$E_{0}+E_{max}d/({\theta}+d)$, where $E_{0}$ is the initial value $5.44$ at
placebo, $E_{max}$ is the minimum value $-20.16$ at 2 mg, and ${\theta}$ is a
parameter. The estimate ${\hat{\theta}}$ is $0.40$, and the AIC is $22.4$. The
linear log-dose model is $E_{0}+{\theta}\log(d+1)$; the estimate
${\hat{\theta}}$ is $-24.21$, and the AIC is $21.3$. The linear model is
$E_{0}+{\theta}d$; the estimate ${\hat{\theta}}$ is $-14.12$, and the AIC is
$25.9$. The exponential model is $E_{0}+{\theta}_{1}\exp(d/{\theta}_{2})$,
where ${\theta}_{1}$ and ${\theta}_{2}$ are parameters; the estimates
${\hat{\theta}}_{1}$ and ${\hat{\theta}}_{2}$ are $1.48$ and $-6.87$,
respectively, and the AIC is $35.1$. The quadratic model is
$E_{0}+{\theta}_{1}d+{\theta}_{2}d^{2}$; the estimates ${\hat{\theta}}_{1}$ and
${\hat{\theta}}_{2}$ are $-24.19$ and $5.79$, and the AIC is $22.9$. The
logistic model is $E_{0}+E_{max}/(1+\exp(({\theta}_{1}-d)/{\theta}_{2}))$; the
estimates ${\hat{\theta}}_{1}$ and ${\hat{\theta}}_{2}$ are $0.66$ and $0.36$,
and the AIC is $25.8$. Because the minimum AIC is attained by the linear
log-dose model, we select the linear log-dose model as the best dose-response
model. All fitted models are shown in Figure 3. We also show examples of
recommended dose selection. If a 10% decrease in the percent change in intact
PTH has clinical implications, we can select a dose of 1.0 mg or more. If a
difference of 10% or more from placebo is clinically meaningful, we can select
a dose of 0.5 mg or more.
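As an illustration of the first criterion, the recommended dose can be read off
the fitted linear log-dose model. The following SAS sketch uses the estimates
reported above ($E_{0}=5.44$, ${\hat{\theta}}=-24.21$) and an assumed clinical
threshold of a 10% decrease from baseline; among the studied doses, 1 mg and 2
mg meet the criterion:
/* Sketch: dose selection from the fitted linear log-dose model */
data med;
  E0 = 5.44; theta = -24.21;            /* estimates reported above             */
  do d = 0.5, 1, 2;                     /* doses studied (mg/day)               */
    pred  = E0 + theta*log(d + 1);      /* predicted percent change in PTH      */
    meets = (pred <= -10);              /* assumed threshold: >=10% decrease    */
    output;
  end;
run;
proc print data=med; run;               /* meets = 1 for d = 1 and d = 2        */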
Figure 3: Fitted dose-response models
### 2.4 Simulation study
We evaluate the statistical power of the adaptive contrast test compared to
MCP-Mod via a simulation study. We assume a randomized dose-response study with
five arms and a one-sided significance level of 2.5%. The dosages are
${\text{\boldmath$d$}}=(d_{1},d_{2},d_{3},d_{4},d_{5})^{T}=(0,0.05,0.2,0.6,1.0)^{T}$.
The number of simulations was set to 10,000. The constraint on the contrast
coefficients is $c_{1}\leq c_{2}\leq c_{3}\leq c_{4}$. The coefficient $c_{5}$
for the highest dose has no constraint because we also consider the umbrella
shape. The number of permutations for the permutation method is 50,000. The MCP
part of MCP-Mod evaluates the models shown in Table 1, taken from [10]. The
true mean value at each dose is shown in Table 2. Scenario 1 uses constant mean
values to confirm that the significance level is maintained at 2.5%. For
scenarios 2 to 7, the true mean values are generated from the dose-response
models in Table 1. For scenarios 8 to 11, we assume dose-response relationships
that are not based on a dose-response model. The standard deviation is 1.5.
Table 1: Dose-response models
Model name | Equation
---|---
Constant | $0.2$
Linear | $0.2+0.6d_{i}$
Linear in log-dose | $0.2+0.6\log(5d_{i}+1)/\log(6)$
Emax | $0.2+0.7d_{i}/(0.2+d_{i})$
Exponential | $0.183+0.017\exp(2d_{i}\log(6))$
Quadratic | $0.2+2.049d_{i}-1.749d_{i}^{2}$
Logistic | $0.193+0.607/(1+\exp(10\log(3)(0.4-d_{i})))$
Table 2: True mean values
Scenario | True mean values
---|---
Scenario 1 (Constant) | $(0.2,0.2,0.2,0.2,0.2)$
Scenario 2 (Linear) | $(0.2,0.23,0.32,0.56,0.8)$
Scenario 3 (Linear in log-dose) | $(0.2,0.275,0.432,0.664,0.8)$
Scenario 4 (Emax) | $(0.2,0.34,0.55,0.725,0.783)$
Scenario 5 (Exponential) | $(0.2,0.201,0.206,0.226,0.264)$
Scenario 6 (Quadratic) | $(0.2,0.298,0.54,0.8,0.5)$
Scenario 7 (Logistic) | $(0.271,0.289,0.362,0.631,0.767)$
Scenario 8 | $(0.2,0.4,0.6,0.6,0.8)$
Scenario 9 | $(0.2,0.4,0.6,0.6,0.6)$
Scenario 10 | $(0.2,0.6,0.6,0.6,0.6)$
Scenario 11 | $(0.2,0.6,0.6,0.8,0.8)$
## 3 Results
The results of the simulation study are shown in Table 3 and Figure 4. Scenario
1 confirmed that the significance level of 2.5% was maintained for both the
adaptive contrast test and MCP-Mod. The power increased with increasing sample
size for the adaptive contrast test and MCP-Mod except in scenarios 1 and 5.
For the adaptive contrast test (N=100), the power was higher than that of
MCP-Mod in scenarios 3, 4, 6, 8, 9, 10, and 11, while MCP-Mod had higher power
in scenarios 2, 5, and 7. MCP-Mod had lower power in scenarios 9, 10, and 11,
which were not generated from a model, compared to scenarios 2 to 7, which were
generated from the models. As a supplement, we show the results for the ordinal
constraint in which all contrast coefficients increase quasi-monotonically (no
umbrella shape) in the supplementary results of Chapter 6.
Table 3: Power of each scenario in the simulation study
 | Adaptive contrast test | MCP-Mod
---|---|---
 | $N=50$ | $N=75$ | $N=100$ | $N=50$ | $N=75$ | $N=100$
Scenario 1 | 2.40 | 2.45 | 2.52 | 2.39 | 2.46 | 2.66
Scenario 2 | 51.82 | 75.71 | 85.63 | 56.81 | 75.65 | 86.86
Scenario 3 | 52.26 | 79.32 | 88.01 | 55.27 | 75.04 | 86.81
Scenario 4 | 49.92 | 76.84 | 87.61 | 49.97 | 69.79 | 82.69
Scenario 5 | 2.20 | 1.15 | 1.06 | 3.45 | 3.80 | 3.85
Scenario 6 | 50.01 | 70.85 | 76.57 | 26.37 | 39.42 | 51.97
Scenario 7 | 40.57 | 62.43 | 67.57 | 43.81 | 61.97 | 75.41
Scenario 8 | 36.59 | 68.59 | 78.57 | 40.12 | 57.98 | 71.42
Scenario 9 | 21.80 | 42.50 | 51.79 | 18.27 | 27.92 | 38.06
Scenario 10 | 23.53 | 42.27 | 49.80 | 10.61 | 15.30 | 20.84
Scenario 11 | 50.17 | 75.31 | 85.46 | 37.82 | 54.42 | 69.23
Figure 4: Power of each scenario in simulation study
## 4 Discussion
We proposed the adaptive contrast test and model selection. The contrast
coefficients are given an ordinal constraint before the study starts and are
adaptively determined in a data-dependent manner. We have confirmed that the
adaptive contrast test has higher power because the contrast coefficients are
determined adaptively. On the practical side, the contrast coefficients are
easy to calculate. The statistical test is performed by the permutation method,
which can be easily computed using, for example, the multtest procedure in SAS,
permute in STATA, or the perm package in R. In addition, we provide sample SAS
program code for the analysis and the power calculation of the adaptive
contrast test in Chapter 5. We proposed a procedure to choose a dose-response
model from candidate models and to select a recommended dose after a
statistically significant dose-response has been confirmed. We also provide
sample program code using existing SAS procedures for the model selection in
Chapter 5.
We demonstrated the adaptive contrast test on the sample study given by Bretz
et al. [10] and confirmed a statistically significant dose-response. In
addition, we selected the dose-response model via the AIC. We also re-analyzed
the actual phase 2b study by Akizawa et al. [22]. We showed the determined
contrast coefficients and the contrast test statistic. The test statistic
implied that the results were statistically significant, and we selected the
best dose-response model. We presented recommended doses with clinically
meaningful efficacy based on the dose-response models. The power of the
adaptive contrast test was higher than that of the permutation test for
multiple contrasts used in the actual phase 2b study.
We performed a simulation study to evaluate the power of the adaptive contrast
test compared to MCP-Mod. In many scenarios, the adaptive contrast test has
higher power than MCP-Mod. We consider that the power was high because the
optimal contrast coefficients are identified in a data-dependent manner. We
confirmed that the one-sided type 1 error rate of the adaptive contrast test
was maintained at 2.5% when there was no difference between the treatment
groups. Hence, there was no problem with the performance. The power of MCP-Mod
was relatively high when the true mean of each group was based on a
dose-response model. However, the power of MCP-Mod decreased when the true
means were not generated from a dose-response model. In practice, because the
true mean values do not necessarily follow a dose-response model, MCP-Mod may
not be able to maintain the expected power. For the ordinal constraint of the
contrast coefficients, when the quasi-monotonic increase assumption was made
for all coefficients without assuming an umbrella type, the power increased for
all scenarios except for the scenario generated from an umbrella type. We
therefore recommend assuming a quasi-monotonic increase for all coefficients
when no umbrella type is expected in the efficacy data.
The adaptive contrast test is a powerful test that can be performed using
easily calculated contrast coefficients and existing statistical software. We
confirmed that the adaptive contrast test has higher power than not only the
permutation test with multiple contrast patterns but also MCP-Mod. When we plan
to use the permutation test with multiple contrast patterns or MCP-Mod, we need
to explain the procedures of those methods, the assumed dose-response models,
and the adjustment for multiplicity to clinicians and decision-makers. We
proposed an analysis method that avoids the multiplicity adjustment and has an
easy-to-understand analysis procedure. The adaptive contrast test is easy to
execute with a simple analysis program. We provide sample SAS code, and SAS is
commonly used in new drug applications. Therefore, we hope that the adaptive
contrast test will be used in many dose-response studies.
## 5 Software
Listing 1: Sample program code of analysis of biom dataset
proc import out=dat
datafile="\biom.xlsx"
/* Add path. biom.xlsx converted from data(biom) of R MCPMod package */
dbms=Excel replace;
getnames=no;
run;
/*Variable name tn is arm name (numeric) and res is response (numeric)*/
data dat;
length t $200.;
set dat;
if tn=0 then t="1_0";
else if tn=0.05 then t="2_0.05";
else if tn=0.2 then t="3_0.2";
else if tn=0.6 then t="4_0.6";
else if tn=1 then t="5_1";
run;
proc univariate data=dat noprint;
where tn=0;
var res;
output out=out1 mean=mean1 std=std1;
run;
proc univariate data=dat noprint;
where tn=0.05;
var res;
output out=out2 mean=mean2 std=std2;
run;
proc univariate data=dat noprint;
where tn=0.2;
var res;
output out=out3 mean=mean3 std=std3;
run;
proc univariate data=dat noprint;
where tn=0.6;
var res;
output out=out4 mean=mean4 std=std4;
run;
proc univariate data=dat noprint;
where tn=1;
var res;
output out=out5 mean=mean5 std=std5;
run;
data out;
merge out1-out5;
run;
%macro _do;
data out;
set out;
mean1=round(mean1,1E-5);
mean2=round(mean2,1E-5);
mean3=round(mean3,1E-5);
mean4=round(mean4,1E-5);
mean5=round(mean5,1E-5);
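/* running maxima and adaptive contrast coefficients c1-c5 (Section 2.1) */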
max2=max(of mean1-mean2);
max3=max(of mean1-mean3);
max4=max(of mean1-mean4);
max5=max(of mean1-mean5);
c1=round(-(max2+max3+max4+max5-4*mean1)/5,1E-5);
c2=round((max2-mean1)+c1,1E-5);
c3=round((max3-max2)+c2,1E-5);
c4=round((max4-max3)+c3,1E-5);
c5=round((max5-max4)+c4,1E-5);
call symput("cc1", c1);
call symput("cc2", c2);
call symput("cc3", c3);
call symput("cc4", c4);
call symput("cc5", c5);
/* %let inside a DATA step executes unconditionally at macro compile time,
   so the flag is set with CALL SYMPUT instead */
if c1=0 and c2=0 and c3=0 and c4=0 and c5=0 then call symput("_FL", "Y");
else call symput("_FL", "N");
run;
%if &_FL.=Y %then %do;
data pValues_1;
set pValues_1;
Permutation=1;
run;
%end;
%else %do;
ods output pValues = pValues_1;
proc multtest data=dat permutation nsample=10000 seed=2021;
class t;
test mean (res / ddfm=pooled upper);
contrast 'Adaptive Contrast' &cc1. &cc2. &cc3. &cc4. &cc5.;
run;
ods listing;
%end;
data pValues_1;
set pValues_1;
c1=&cc1.;
c2=&cc2.;
c3=&cc3.;
c4=&cc4.;
c5=&cc5.;
run;
%mend;
%_do;
/*Dataset pValues_1 shows p-value.*/
/*AIC is derived below codes*/
data dr;
input d res E0 Emax;
datalines;
0 0.34491 0.34491 0.94871
0.05 0.45675 0.34491 0.94871
0.2 0.81032 0.34491 0.94871
0.6 0.93444 0.34491 0.94871
1 0.94871 0.34491 0.94871
;
run;
title "Emax model";
ods output FitStatistics=AIC_Emax ;
proc nlmixed data = dr;
parms ED50 = 1 SD=1;
mu = E0 +Emax*d / (ED50+d);
model res ~ normal(mu, SD**2);
run;
ods output close;
title "Linear log-dose model";
ods output FitStatistics=AIC_Lld ;
proc nlmixed data = dr;
parms de = 1 SD=1;
mu = E0 +de*log(d+1);
model res ~ normal(mu, SD**2);
run;
ods output close;
title "Linear model";
ods output FitStatistics=AIC_L ;
proc nlmixed data = dr;
parms de = 1 SD=1;
mu = E0 +de*d;
model res ~ normal(mu, SD**2);
run;
ods output close;
title "Exponential model";
ods output FitStatistics=AIC_Exp ;
proc nlmixed data = dr;
parms sl=1 de = 1 SD=1;
mu = E0 +sl*exp(d/de);
model res ~ normal(mu, SD**2);
run;
ods output close;
title "Quadratic model";
ods output FitStatistics=AIC_Q ;
proc nlmixed data = dr;
parms be1 = 1 be2=1 SD=1;
mu = E0 +be1*d+be2*d**2;
model res ~ normal(mu, SD**2);
run;
ods output close;
title "Logistic model";
ods output FitStatistics=AIC_Log ;
proc nlmixed data = dr;
parms ED50=1 de=1 SD=1;
mu = E0 +Emax/(1+exp((ED50-d)/de));
model res ~ normal(mu, SD**2);
run;
ods output close;
Listing 2: Sample program code for calculation of power for the adaptive
contrast test
data res1; set _NULL_; run;
%macro _func(_num,_m1,_m2,_m3,_m4,_m5,_sd);
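/* _num = patients per arm; _m1-_m5 = true group means; _sd = common SD (Section 2.4 scenarios) */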
data test;
CALL STREAMINIT(100);
do i=1 to 10000;
do j=1 to &_num.;
x1=rand('NORMAL',&_m1.,&_sd.);
x2=rand('NORMAL',&_m2.,&_sd.);
x3=rand('NORMAL',&_m3.,&_sd.);
x4=rand('NORMAL',&_m4.,&_sd.);
x5=rand('NORMAL',&_m5.,&_sd.);
output;
end;
end;
run;
proc univariate data=test noprint;
by i;
var x1;
output out=out1 mean=mean1 std=std1;
run;
proc univariate data=test noprint;
by i;
var x2;
output out=out2 mean=mean2 std=std2;
run;
proc univariate data=test noprint;
by i;
var x3;
output out=out3 mean=mean3 std=std3;
run;
proc univariate data=test noprint;
by i;
var x4;
output out=out4 mean=mean4 std=std4;
run;
proc univariate data=test noprint;
by i;
var x5;
output out=out5 mean=mean5 std=std5;
run;
data out;
merge out1-out5;
by i;
run;
data out;
set out;
mean1=round(mean1,1E-5);
mean2=round(mean2,1E-5);
mean3=round(mean3,1E-5);
mean4=round(mean4,1E-5);
mean5=round(mean5,1E-5);
max2=max(of mean1-mean2);
max3=max(of mean1-mean3);
max4=max(of mean1-mean4);
max5=max(of mean1-mean5);
c1=round(-(max2+max3+max4+mean5-4*mean1)/5,1E-5);
/* Non-Umbrella: round(-(max2+max3+max4+max5-4*mean1)/5,1E-5); */
c2=round((max2-mean1)+c1,1E-5);
c3=round((max3-max2)+c2,1E-5);
c4=round((max4-max3)+c3,1E-5);
c5=round((mean5-max4)+c4,1E-5); /* Non-Umbrella: round((max5-max4)+c4,1E-5); */
call symput("cc1", c1);
call symput("cc2", c2);
call symput("cc3", c3);
call symput("cc4", c4);
call symput("cc5", c5);
S=(std1**2+std2**2+std3**2+std4**2)*(&_num.-1)/(&_num.*4-4);
/* %let inside a DATA step executes unconditionally at macro compile time,
   so the flag is set with CALL SYMPUT instead */
if c1=0 and c2=0 and c3=0 and c4=0 and c5=0 then call symput("_FL", "Y");
else call symput("_FL", "N");
run;
*proc freq data=out noprint;
* table FL/out=res;
*run;
data test1(keep=val trtpn i);
set test;
rename x1=val;
TRTPN=1;
run;
data test2(keep=val trtpn i);
set test;
rename x2=val;
TRTPN=2;
run;
data test3(keep=val trtpn i);
set test;
rename x3=val;
TRTPN=3;
run;
data test4(keep=val trtpn i);
set test;
rename x4=val;
TRTPN=4;
run;
data test5(keep=val trtpn i);
set test;
rename x5=val;
TRTPN=5;
run;
data testt;
set test1-test5;
run;
%if &_FL.=Y %then %do;
data pValues_1;
set pValues_1;
Permutation=1;
run;
%end;
%else %do;
ods output pValues = pValues_1;
proc sort data=testt;by i ;run;
proc multtest data=testt permutation nsample=50000 seed=2021;
by i;
class TRTPN;
test mean (val / ddfm=pooled upper);
contrast 'Adaptive Contrast' &cc1. &cc2. &cc3. &cc4. &cc5.;
run;
ods listing;
%end;
data pValues_1;
set pValues_1;
if Permutation<0.025 then FL="Y";
else if Permutation>=0.025 then FL="N";
run;
proc freq data=pValues_1 noprint;
table FL/out=res2;
run;
data res2;
set res2;
SS=&_num.;
m1=&_m1.;
m2=&_m2.;
m3=&_m3.;
m4=&_m4.;
m5=&_m5.;
sd=&_sd.;
run;
data res1;
set res1 res2;
run;
%mend _func;
%macro _loop(_n);
%_func(&_n.,0.2,0.2,0.2,0.2,0.2,1.5);
%_func(&_n.,0.2,0.23,0.32,0.56,0.8,1.5);
%_func(&_n.,0.2,0.275,0.432,0.664,0.8,1.5);
%_func(&_n.,0.2,0.34,0.55,0.725,0.783,1.5);
%_func(&_n.,0.2,0.201,0.206,0.226,0.264,1.5);
%_func(&_n.,0.2,0.298,0.54,0.8,0.5,1.5);
%_func(&_n.,0.271,0.289,0.362,0.631,0.767,1.5);
%_func(&_n.,0.2,0.4,0.6,0.6,0.8,1.5);
%_func(&_n.,0.2,0.4,0.6,0.6,0.6,1.5);
%_func(&_n.,0.2,0.6,0.6,0.6,0.6,1.5);
%_func(&_n.,0.2,0.6,0.6,0.8,0.8,1.5);
%mend;
%_loop(50);
%_loop(75);
%_loop(100);
proc export data = res1
outfile = "output.xlsx" /*Add path*/
dbms = xlsx replace;
run;
## 6 Supplementary Result
We show supplementary results for the ordinal constraint that allows an
umbrella type (adaptive contrast test 1) and the constraint that does not
assume an umbrella type (adaptive contrast test 2). Adaptive contrast test 1
assumes $c_{1}\leq c_{2}\leq c_{3}\leq c_{4}$, whereas adaptive contrast test 2
assumes $c_{1}\leq c_{2}\leq c_{3}\leq c_{4}\leq c_{5}$. Adaptive contrast test
2 had higher power than adaptive contrast test 1 except in scenario 6 (umbrella
shape).
Table 4: Power of each scenario in the supplemental simulation study
 | Adaptive contrast test 1 | Adaptive contrast test 2
---|---|---
 | $N=50$ | $N=75$ | $N=100$ | $N=50$ | $N=75$ | $N=100$
Scenario 1 | 2.40 | 2.45 | 2.52 | 2.47 | 2.45 | 2.26
Scenario 2 | 51.82 | 75.71 | 85.63 | 61.32 | 83.42 | 88.47
Scenario 3 | 52.26 | 79.32 | 88.01 | 62.53 | 83.31 | 88.06
Scenario 4 | 49.92 | 76.84 | 87.61 | 62.94 | 80.2 | 85.27
Scenario 5 | 2.20 | 1.15 | 1.06 | 3.06 | 4.59 | 3.36
Scenario 6 | 50.01 | 70.85 | 76.57 | 48.37 | 64.76 | 72.12
Scenario 7 | 40.57 | 62.43 | 67.57 | 49.86 | 72.44 | 77.18
Scenario 8 | 36.59 | 68.59 | 78.57 | 51.73 | 72.62 | 80.3
Scenario 9 | 21.80 | 42.50 | 51.79 | 38.07 | 50.43 | 54.88
Scenario 10 | 23.53 | 42.27 | 49.80 | 39.27 | 50.64 | 66.03
Scenario 11 | 50.17 | 75.31 | 85.46 | 63.28 | 79.6 | 86.49
Acknowledgments. The author would like to thank Associate Professor Hisashi
Noma for his encouragement and helpful suggestions.
## References
* [1] Liu S., Yuan Y.. Bayesian optimal interval designs for phase I clinical trials. JRSS C. 2015;64(Part 3):507-523.
* [2] Yan F., Mandrekar S.J., Yuan Y.. Keyboard: A Novel Bayesian Toxicity Probability Interval Design for Phase I Clinical Trials. Clin Cancer Res. 2017;23(15):3994-4003.
* [3] Lee S., Ursino M., Cheung Y.K., Zohar S.. Dose-finding designs for cumulative toxicities using multiple constraints. Biostatistics. 2019;20:17-29.
* [4] Lin R., Yuan Y.. Time-to-event model-assisted designs for dose-finding trials with delayed toxicity. Biostatistics. 2020;21(4):807-824.
* [5] Kojima M.. Early completion of phase I cancer clinical trials with Bayesian optimal interval design. Statistics in Medicine. 2021;40:3215-3226.
* [6] Kojima M.. Early Completion of Model-Assisted Designs for Dose-Finding Trials. JCO PO. in press;.
* [7] Mozgunov P., Jaki T.. A benchmark for dose finding studies with continuous outcomes. Biostatistics. 2020;21(2):189-201.
* [8] Mozgunov P., Paoletti X., Jaki T.. A benchmark for dose-finding studies with unknown ordering. Biostatistics. 2020;.
* [9] Lin R., Zhou Y., Yan F., Li D., Yuan Y.. BOIN12: Bayesian Optimal Interval Phase I/II Trial Design for Utility-Based Dose Finding in Immunotherapy and Targeted Therapies. JCO PO. 2020;:1393-1402.
* [10] Bretz F., Pinheiro J.C., Branson M.. Combining Multiple Comparisons and Modeling Techniques in Dose-Response Studies. Biometrics. 2005;61:738-748.
* [11] Pinheiro J., Bornkamp B., Glimm E., Bretz F.. Model-based dose finding under model uncertainty using general parametric models. Statistics in Medicine. 2014;33:1646-1661.
* [12] Ma S., McDermott M.P.. Generalized multiple contrast tests in dose-response studies. Statistics in Medicine. 2020;39:757-772.
* [13] Bornkamp B., Bretz F., Pinheiro J.. Efficient statistical methodology for model-based design and analysis of Phase II dose finding studies under model uncertainty. : Novartis Pharmaceuticals; 2013.
* [14] ClinicalTrials.gov . Targeted Dose Finding of Canakinumab (ACZ885) for Management of Acute Flare in Refractory or Contraindicated Gout Patients. 2012\.
* [15] ClinicalTrials.gov . Cardiovascular Risk Reduction Study (Reduction in Recurrent Major CV Disease Events) (CANTOS). 2022\.
* [16] ClinicalTrials.gov . Safety, Tolerability, Efficacy and Optimal Dose Finding Study of BAF312 in Patients With Relapsing-remitting Multiple Sclerosis. 2020\.
* [17] ClinicalTrials.gov . Double-blind, Randomized Study Evaluating the Efficacy and Safety of Brivaracetam in Adults With Partial Onset Seizures. 2021\.
* [18] ClinicalTrials.gov . A Randomized, Double-blind, Placebo Controlled Study to Assess Efficacy, Safety and Tolerability of LCQ908 in Subjects With Familial Chylomicronemia Syndrome. 2015\.
* [19] ClinicalTrials.gov . A Pilot Study to Assess the Efficacy and Safety of LCQ908 Alone and in Combination With Fenofibrate or Lovaza® in Patients With Severe Hypertriglyceridemia. 2020\.
* [20] ClinicalTrials.gov . Dose-finding Study of LIK066 Compared With Placebo or Sitagliptin to Evaluate Change in HbA1c in Patients With Diabetes. 2014\.
* [21] ClinicalTrials.gov . Dose Finding Study for QAW039 in Asthma. 2020\.
* [22] Akizawa T., Shimazaki R., Fukagawa M.. Phase 2b study of evocalcet (KHK7580), a novel calcimimetic, in Japanese patients with secondary hyperparathyroidism undergoing hemodialysis: A randomized, double-blind, placebo-controlled, dose-finding study. PLoS One. 2018;.
* [23] Menon S., Zink R.C.. Modern Approaches to Clinical Trials Using SAS: Classical, Adaptive, and Bayesian Methods. SAS Institute; 2015.
* [24] Lin R., Yin G., Shi H.. Bayesian adaptive model selection design for optimal biological dose finding in phase I/II clinical trials. Biostatistics. 2021;.
* [25] Kennes L.N., Volkers G., Kralidis G.. Study design aspects and inter‐subject variability in longitudinal clinical phase II dose‐finding trials. Pharmaceutical Statistics. 2019;18:248-259.
# RACCER: Towards Reachable and Certain Counterfactual Explanations for
Reinforcement Learning
Jasmina Gajcin (0000-0002-8731-1236), Trinity College Dublin, College Green,
Dublin, Ireland and Ivana Dusparic (0000-0003-0621-5400), Trinity College
Dublin, College Green, Dublin, Ireland
###### Abstract.
While reinforcement learning (RL) algorithms have been successfully applied to
numerous tasks, their reliance on neural networks makes their behavior
difficult to understand and trust. Counterfactual explanations are human-
friendly explanations that offer users actionable advice on how to alter the
model inputs to achieve the desired output from a black-box system. However,
current approaches to generating counterfactuals in RL ignore the stochastic
and sequential nature of RL tasks and can produce counterfactuals which are
difficult to obtain or do not deliver the desired outcome. In this work, we
propose RACCER, the first RL-specific approach to generating counterfactual
explanations for the behaviour of RL agents. We first propose and implement a
set of RL-specific counterfactual properties that ensure easily reachable
counterfactuals with highly-probable desired outcomes. We use a heuristic tree
search of the agent's execution trajectories to find the most suitable
counterfactuals based on the defined properties. We evaluate RACCER in two
tasks as well as conduct a user study to show that RL-specific counterfactuals
help users better understand the agent's behavior compared to the current state-
of-the-art approaches.
Keywords: Reinforcement Learning, Explainability, Counterfactual Explanations
CCS Concepts: Computing methodologies → Reinforcement learning; Human-centered
computing → User studies
## 1. Introduction
Reinforcement learning (RL) algorithms have shown remarkable success in many
fields in the recent years and are being developed for high-risk areas such as
healthcare and autonomous driving (Arulkumaran et al., 2017). However, RL
algorithms often use neural networks to represent their policies, which makes
them difficult to understand and hinders their applicability to real-life
tasks. Explainable RL (XRL) is a growing research field that addresses the
need for improving the understanding of black-box RL models. XRL compiles
research in developing methods for explaining RL models both locally, focusing
on a decision in a specific state, and globally, explaining the behavior of a
model as a whole (Puiutta and Veith, 2020). For example, saliency maps are a
local explanation method that is used to identify parts of the image that most
contributed to a decision in a specific state (Huber et al., 2021; Rosynski et
al., 2020). On a global level, RL models have been distilled into
interpretable formats, such as decision trees (Coppens et al., 2019). However,
most methods in XRL generate explanations targeted at developers and expert
users. Such explanations often deal in low-level, domain-specific terms which
are not easy to comprehend for non-expert users. Non-expert users require more
abstract, high-level explanations that help them better understand and
interact with the system.
Counterfactual explanations are local user-friendly explanations for
interpreting decisions of black-box algorithms. In machine learning,
counterfactuals are defined as an answer to the question: “Given that the
black-box model M outputs $A$ for input features $f_{1},...,f_{k}$, how can
the features change to elicit output B from M?” (Verma et al., 2021).
Counterfactual explanations can help users by giving them actionable advice on
how to change their input to obtain a desired output. For example, a user
rejected for a loan by an AI system, might not only be interested to know why
the decision was made, but also how they can improve their application, in
order to be approved in the future. Additionally, counterfactuals are inherent
to human reasoning, as we rely on them to assign blame and understand events
(Byrne, 2019).
In the recent years, numerous methods for generating counterfactual
explanations for supervised learning tasks have been proposed (Wachter et al.,
2017; Dandl et al., 2020; Poyiadzi et al., 2020; Looveren and Klaise, 2021;
Mothilal et al., 2020; Samoilescu et al., 2021; Laugel et al., 2017). The
majority of these methods rely on optimizing counterfactual properties such as
proximity, sparsity and data manifold closeness, which leads to a realistic and easily obtainable counterfactual instance. In RL, to the best of
our knowledge, there currently exists only one approach for generating
counterfactual explanations. Olson et al. (2019) use generative deep learning
to create counterfactual explanations and rely on the same feature-based
properties proximity, sparsity and data manifold closeness used in supervised
learning to guide the generating process. However, relying on these
traditional counterfactual properties in RL tasks can result in
counterfactuals that are similar in features to the original instance but
difficult to reach, due to the sequential nature of RL tasks or do not deliver
the desired output with certainty, due to stochasticity in the RL environment
(Gajcin and Dusparic, 2022). Previous work recognizes that counterfactual
explanations can suggest potentially life-changing actions to users and as such carry great responsibility (Gajcin and Dusparic, 2022). Offering users
counterfactuals which are not easy to reach or do not deliver on the promised
outcome can cost users substantial time, and cause them to lose trust in the
AI system.
In this work, we propose RACCER (Reachable And Certain Counterfactual
Explanations for Reinforcement Learning), to the best of our knowledge the
first approach for generating counterfactual explanations for RL tasks which
takes into account the sequential and stochastic nature of the RL framework.
Firstly, we propose three novel RL-specific counterfactual properties –
reachability, stochastic certainty and cost-efficiency, that should be
considered instead of the commonly used proximity, sparsity and data manifold
closeness properties when searching for easily obtainable counterfactuals.
These counterfactual properties rely on the stochastic and sequential nature
of RL tasks and ensure that counterfactuals are easy to reach and deliver the
desired outcome with high probability. RACCER searches for the most suitable
counterfactual by optimizing a loss function consisting of the three RL-
specific properties using heuristic tree search of agent’s execution tree. We
evaluate RACCER in two environments of varying complexity – Stochastic
GridWorld task and chess, and show that our approach produces counterfactuals
that can be reached faster and deliver the desired output more often compared
to the baseline methods relying on the traditional counterfactual properties.
Additionally, we conduct a user study in which we compare the effect of
counterfactual explanations on user understanding of RL agents and show that
RACCER generates counterfactuals that help humans better understand and
predict the behavior of RL agents.
Our contributions are as follows:
1. (1)
We design three RL-specific counterfactual properties – reachability,
stochastic certainty and cost-efficiency, and provide metrics for their
estimation.
2. (2)
We propose RACCER, the first algorithm for generating RL-specific
counterfactual explanations, which relies on the above counterfactual
properties.
3. (3)
We conduct a user study and show that RACCER can produce counterfactuals which
help humans better understand agent’s behavior compared to the baseline
approaches.
The implementation of RACCER and evaluation details can be found at
https://github.com/anonymous902109/RACCER.
## 2\. Related Work
In this section, we offer a short overview of counterfactual explanations and
counterfactual properties, and summarize some of the most notable methods for
generating counterfactuals in supervised and RL.
### 2.1. Counterfactual Explanations
In the first work on counterfactual explanations for black-box models, Wachter
et al. (2017) define them as follows: “Score p was returned because variables
V had values (v1, v2 , . . .) associated with them. If V instead had values
(v1’, v2’, . . .), and all other variables had remained constant, score p’
would have been returned”. Counterfactual explanations for an instance $x$
are offered in the form of a counterfactual instance $x^{\prime}$ that is
similar to $x$ but achieves the desired outcome. They offer users actionable
advice on how they can change their features to achieve a desired outcome, and
help users better interact with the system. Additionally, they are selective,
suggesting that users change only a few features. Counterfactual explanations are
also inherent to human reasoning, as we use them to assign blame (Byrne,
2019). All of this makes counterfactuals user-friendly explanations (Molnar,
2019).
For counterfactual explanations to be useful to users, they need to produce
the desired output, and be easy to obtain, in order to minimize user effort.
To that end, multiple counterfactual properties have been proposed to evaluate
the quality of different counterfactuals (Verma et al., 2021). For example,
validity is used to measure whether counterfactual achieves the desired
output, proximity is a feature-based similarity measure which ensures
counterfactual features are similar to those in the original instance, and
sparsity measures the number of features changed. By optimizing these
counterfactual properties current state-of-the-art approaches search for
counterfactuals that can be easily reached from the original instance with
minimal user effort, are realistic and produce the desired outcome.
Counterfactuals can suggest life-altering actions to the user, and as such
carry a great responsibility. Offering users counterfactual explanations that
require large amounts of effort or do not deliver the desired outcome can
decrease user trust and hinder their interaction with the system.
### 2.2. Generating Counterfactual Explanations
In supervised learning, counterfactual explanations have been used to propose
changes of input features that elicit a desired prediction from a black-box
model. In the recent years, numerous works have proposed methods for
generating counterfactual explanations in supervised learning (Wachter et al.,
2017; Dandl et al., 2020; Looveren and Klaise, 2021; Laugel et al., 2017;
Mothilal et al., 2020; Poyiadzi et al., 2020; Samoilescu et al., 2021). The
majority of these methods follow the same approach. Firstly, a loss function
is defined by combining different counterfactual properties, such as validity,
proximity and sparsity. The loss function is then optimized over a training
data set in order to find the most suitable counterfactual. The methods differ
in their design of the loss function and the choice of the optimization
method. For example, in the first work on counterfactual explanations for
supervised learning, Wachter et al. (2017) use gradient descent to optimize a
loss function based on proximity and validity properties. Similarly, Mothilal
et al. (2020) propose DICE, which introduces a diversity property to the
approach of Wachter et al. (2017) to ensure users are offered a set of
diverse, high-quality explanations. Dandl et al. (2020) pose the problem of
counterfactual search as multi-objective optimization and use genetic
algorithm to optimize validity, proximity, sparsity and data manifold
closeness of counterfactual instances.
In RL, counterfactual explanations aim to explain a decision of a black-box RL
model in a specific state by proposing an alternative state in which the model
would choose the desired action. Olson et al. (2019) propose the only method
for generating counterfactuals in RL so far. The approach relies on generative
modelling to create counterfactuals which are realistic, similar in features
to the original instance and produce a desired output. The approach is not
model-agnostic and requires access to the internal parameters of the black-box
model that is being explained. While the approach proposed by Olson et al.
(2019) generates realistic counterfactuals that can help users better
understand agent’s decisions and even detect faulty behavior in Atari agents,
they focus on the same feature-based counterfactual properties such as
proximity and sparsity as supervised learning methods. However, in RL where
two states can be similar in features but distant in terms of execution,
feature-based metrics are not sufficient for measuring how obtainable a
counterfactual is. Relying only on feature-based similarity measures can
produce counterfactuals which are not easily (or at all) obtainable, and
decrease human trust in the system. In contrast, our work proposes the first
approach for generating RL-specific counterfactuals that take into account the
stochastic and sequential nature of RL tasks.
While the goal of counterfactual explanations is to deliver the desired
outcome, this is often uncertain due to the environment in which the system
operates. For example, even if the loan applicant fulfills all conditions
stipulated in a counterfactual, the bank might change the conditions for
approving a loan. Delaney et al. (2021) recognized the need for estimating and
presenting the uncertainty associated with counterfactuals to the user in
supervised learning tasks. On the other hand, in this work we use estimate
uncertainty from an RL perspective, and use it not only as additional
information for the user, but as an important factor during search and choice
of the counterfactual explanation.
## 3\. RACCER
In this section, we describe RACCER, the approach for generating
counterfactual explanations for RL tasks. To generate a counterfactual
explanation $x^{\prime}$, we require oracle access to the black-box model $M$
being explained, the state $x$ being explained, and the desired outcome
$a^{\prime}$. Additionally, the approach needs access to the RL environment.
RACCER then generates a counterfactual state $x^{\prime}$ that can be easily
reached from $x$ and in which the black-box model $M$ chooses $a^{\prime}$
with a high probability. We propose a fully model-agnostic method, which does
not require information on model parameters and can be used for generating
counterfactual explanations of any RL model.
There are two main directions in which the search for counterfactual
explanations can be conducted. Namely, we can search either directly for a
counterfactual instance $x^{\prime}$ (Wachter et al., 2017; Dandl et al.,
2020; Mothilal et al., 2020; Poyiadzi et al., 2020) or for a sequence of
actions $A$ that can transform the original instance into a counterfactual
(Karimi et al., 2020b; Ustun et al., 2019). The second approach corresponds to
the field of actionable recourse which has often been investigated alongside
counterfactual explanations (Karimi et al., 2020a). Once the sequence of
actions is found, counterfactual can be obtained by performing the actions on
the original state. To estimate how far away in terms of execution two RL
states are, we need access to the actions used to transform one into the other. For
that reason, in this work we utilize the indirect approach to counterfactual
generation, where we search for a sequence of actions to transform the
original to the counterfactual instance. By following the sequence of actions
from the original instance $x$, a counterfactual $x^{\prime}$ can be obtained
and presented to the user. This way of conducting counterfactual search is
more informative for the user, as they can be presented with not just the
counterfactual instance, but also the sequence of actions they need to perform
to obtain their desired outcome.
To that end, we set out to find the optimal sequence of actions $A$ that can
transform $x$ into a counterfactual state $x^{\prime}$. In the remainder of
this section we first describe how we can compare and evaluate different
action sequences that lead to counterfactual states (Sections 3.1 and 3.2) and
describe our approach to searching for the optimal one (Section 3.3).
### 3.1. Counterfactual Properties for Reinforcement Learning
Counterfactual properties guide the counterfactual search and are used to
select the most suitable counterfactual explanation. In supervised learning,
they have been designed to ensure minimal user effort is needed to transform
the original instance to the counterfactual. These properties have so far been
mostly defined as feature-based, assuming that if two instances are similar in
features one can easily be reached from the other. However, due to the
sequential nature of RL tasks, two states can be similar in terms of features
but far away in terms of execution (Wang et al., 2016). For example, consider
a state in Atari game of Breakout, and another state obtained by removing the
ball from the first state. The two states differ in only a few pixels,
however, one can never be transformed into the other using available Breakout
actions as the ball cannot be removed from the game. Similarly, stochasticity
in the environment can affect the process of transforming the original
instance to the counterfactual. Only if the user is presented with a
counterfactual that considers these stochastic and sequential constraints can
they find the fastest and most secure path to the desired outcome.
In this section we propose three RL-specific counterfactual properties that
take into account the sequential and stochastic nature of RL tasks. These
properties ensure that counterfactuals are easily obtainable from the original
instance, and produce desired output with high certainty. Unlike
counterfactual properties in supervised learning which are often defined with
respect to the counterfactual instance, we define these properties as functions of the action sequence $A$ that transforms $x$ into the counterfactual $x^{\prime}$.
#### 3.1.1. Reachability property
In RL two states can be similar in terms of state features, but far away in
terms of execution. This means that, despite appearing similar, a large number
of actions might be required to reach the counterfactual from the original
state. Conversely, a state can be very easily reachable by RL actions even if
it appears different based on its feature values. Additionally, state features
in counterfactual instances can be affected by stochastic processes outside of
agent’s control. Relying solely on feature-based similarity measures could
dismiss easily reachable counterfactuals where changes in features are beyond
agent’s control and do not affect action choice.
To account for sequential and stochastic nature of RL tasks, we propose
measuring reachability. For a state $x$ and a sequence of actions $A$, we
define reachability as:
(1) $R(x,A)=len(A)$
$R(x,A)$ measures the number of actions within the sequence that navigates to
the counterfactual instance. By minimizing this property we ensure that
counterfactual can be reached within a small number of steps.
#### 3.1.2. Cost-efficiency property
Current work on counterfactual explanations assumes that each action that
changes the original instance carries the same cost. In RL, however, actions
often have costs associated with them. If a counterfactual can be obtained
through a less costly path, then it should be presented to the user, in order
to minimize user effort. For example, if either pawn or a queen sacrifice can
bring about piece capture for a chess player, they should be advised to
sacrifice the pawn, i.e., the piece of lower value and therefore with a lower
cost.
We propose cost-efficiency as a counterfactual property which prioritizes
instances reachable through least costly actions. For a state $x$ and a
sequence of actions $A$, cost-efficiency is defined as:
(2) $C(x,A)=rew(x,A)$
where $rew(x,A)$ is the cumulative reward obtained when all actions in $A$ are
applied to state $x$. In this way, user can choose a sequence of least costly
actions to transform the original instance into the counterfactual one.
#### 3.1.3. Stochastic certainty property
One of the main qualities of counterfactual explanations is that they deliver
the desired outcome. Asking the user to put their time and effort into
changing the model inputs, only to obtain another unsatisfactory output can
have detrimental effects on user trust in the system. During the time that is
needed to convert the original instance into a counterfactual, conditions of
the task can change, rendering the counterfactual invalid. For example,
imagine a user unsuccessfully applying for a loan, and receiving a
counterfactual explanation, suggesting them to increase their income to be
approved. Conditions for approving a loan can change during the time it takes
the user to change their income (e.g., change jobs, get promoted), and
previously proposed counterfactual can lead to another denied loan request.
Similarly, in RL, the stochastic nature of the environment can make a
counterfactual instance invalid during the time it takes the user to obtain
it. To ensure that users are presented with counterfactuals that are likely to
produce desired output, we propose stochastic certainty. For instance $x$, a
sequence of actions $A$, black-box model $M$ and the desired action
$a^{\prime}$ stochastic certainty is defined as:
(3) $S(x,A,a^{\prime})=P[M(x^{\prime})=a^{\prime}\quad|\quad x^{\prime}=A(x)]$
where $A(x)$ is a state obtained by applying actions from $A$ to state $x$.
Intuitively, stochastic certainty measures the probability of the desired
outcome still being chosen by $M$ after the time it takes to navigate to the
counterfactual state. By maximizing stochastic certainty we promote sequences
of actions that more often lead to the desired outcome.
### 3.2. Loss Function
In order to optimize the counterfactual properties, we design a weighted loss
function encompassing RL-specific objectives. For a state $x$, sequence of
actions $A$, desired output $a^{\prime}$, loss function is defined as:
(4) $L(x,A,a^{\prime})=\alpha R(x,A)+\beta C(x,A)+\gamma(1-S(x,A,a^{\prime}))$
where $\alpha,\beta$ and $\gamma$ are parameters determining the importance of
different properties. By minimizing $L$ we can find a sequence of actions
which quickly and certainly leads to a counterfactual explanation. However,
$L(x,A,a^{\prime})$ does not verify that $a^{\prime}$ is predicted in the
obtained counterfactual. To that end, we ensure that a validity constraint is
satisfied:
(5) $V(x,x^{\prime},a^{\prime})=M(x^{\prime})==a^{\prime}$
where $x^{\prime}$ is obtained by performing actions from $A$ in $x$. Validity
is used to filter potential counterfactual instances as is described in more
detail in the next part of this section.
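To make the weighted combination concrete, the following is a minimal Python sketch of Equations 4 and 5; the function names, the default weights and the `model.predict` interface are illustrative assumptions made only for this example.

```python
def raccer_loss(reachability, cost, certainty, alpha=1.0, beta=1.0, gamma=1.0):
    # Weighted loss from Equation 4. Reachability and cost are assumed to be
    # normalized to [0, 1] before this call; certainty already lies in [0, 1].
    # The default weights are placeholders, not the values used in the paper.
    return alpha * reachability + beta * cost + gamma * (1.0 - certainty)


def is_valid(model, x_prime, a_prime):
    # Validity constraint from Equation 5: keep only candidate states for which
    # the black-box model actually selects the desired action a'.
    return model.predict(x_prime) == a_prime
```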
Figure 1. Heuristic tree search: in each iteration a node is selected by
navigating the tree from the root to a leaf by choosing actions according to
the UCT formula. The node is expanded by performing all possible actions and
appending all obtained states as children of the node. Finally, newly
generated nodes are evaluated and their values are propagated back to the root
to update the values of parent nodes. The white nodes represent states, while
black nodes are determination nodes, that serve to instantiate all possible
children states of a node in a stochastic environment.
### 3.3. Counterfactual Search
Our goal is to obtain a sequence of actions $A$ that minimizes the loss
function $L$ and satisfies the validity constraint. Unlike traditional
counterfactual search which directly searches for a counterfactual in a data
set, we are looking for an optimal sequence of actions that can transform the
original state into a counterfactual one. This means that we cannot directly
optimize $L$ over a data set of states to find a counterfactual as this would
give us no information about how difficult this counterfactual is to reach in
terms of RL actions. To this end, we propose a counterfactual search algorithm
that utilizes heuristic tree search to find a sequence of actions that
transform the original into counterfactual state that minimizes the loss
function $L$. The details of the algorithm are given in Algorithm 1 and shown
in Figure 1.
The proposed algorithm builds a tree to represent agent’s execution – each
node corresponds to a state, and each edge to one action. Each node $n$ is
also associated with a value $val(n)$ and each edge is assigned a value
$Q(n,a)$. These values are based on the loss function $L$ and are used to
determine which node should be expanded in the next iteration. Children of a
node are obtained by taking a specific action in that node. To account for the
stochasticity in the environment, we apply determinization to the expanding
process by adding hidden determinization nodes each time an action is
performed. The children of determinization nodes are sampled from the possible
states that result from performing a specific action. To calculate $val(n)$ we
compute the value of $L(x,A,a^{\prime})$, where $A$ is the sequence of actions
that navigates from root $x$ to node $n$ in the tree. $Q(n,a)$ is calculated
for each node $n$ and action $a$ as the average of values $val$ of the
children nodes obtained when performing $a$ in $n$. To estimate
$L(x,A,a^{\prime})$ we need to calculate the values of individual
counterfactual properties of reachability, cost-efficiency and stochastic
uncertainty for nodes in the tree. We calculate reachability of node $n$ as
the length of the path between the root and $n$. To calculate cost-efficiency
of $n$ we record and sum the environment’s rewards along the path from the
root to $n$. Finally, to calculate stochastic certainty, we perform N
simulations by unrolling the sequence of actions $A$ from $x$ in the
environment, and record the number of times a desired outcome is obtained in
the resulting state. We then calculate stochastic certainty as:
(6) $S(x,A,a^{\prime})=\frac{N(M(x^{\prime})==a^{\prime})}{N}$
where $x^{\prime}$ is a state obtained after following $A$ in $x$. We
normalize the values for reachability and cost-efficiency so that they fall
within $[0,1]$ range, while stochastic uncertainty values naturally belong to
that range. We can then evaluate a node in tree by combining and weighting the
three counterfactual properties to obtain $L(x,A,a^{\prime})$ as shown in
Equation 4.
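A Monte Carlo estimator in the spirit of Equation 6 might look as follows; the Gym-style `env.reset_to`/`env.step` simulator interface and the `model.predict` call are assumptions made only for this sketch.

```python
def estimate_stochastic_certainty(env, model, x, actions, a_prime, n_sims=100):
    # Unroll the candidate action sequence A from the explained state x in the
    # (stochastic) environment n_sims times and count how often the black-box
    # model still chooses the desired action a' in the resulting state.
    hits = 0
    for _ in range(n_sims):
        state = env.reset_to(x)
        for a in actions:
            state, _reward, _done, _info = env.step(a)
        hits += int(model.predict(state) == a_prime)
    return hits / n_sims
```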
At the start of the search, a tree is constructed with only the root node
corresponding to the state $x$ that is being explained. At each step of the
algorithm, a node in the tree is chosen and tree is expanded with the node’s
children. All actions are expanded simultaneously in the node. The resulting
children nodes are then evaluated against $L$, and the results are propagated
back to the tree root to update the value of nodes and edges. To decide which
node is expanded in each iteration we navigate the tree from the root, at each
node $n$ taking the action decided by the Upper Confidence Bound applied for
Trees (UCT) formula (Kocsis and Szepesvári, 2006):
(7) $a^{*}=\arg\max_{a\in A}\left\\{Q(n,a)+C\sqrt{\frac{\ln(N(n))}{N(n,a)}}\right\\}$
where $C$ is the exploration constant, $N(n)$ number of times $n$ was visited
and $N(n,a)$ number of times $a$ was chosen in $n$. UCT balances between
following the paths of high value and exploring underrepresented paths through
the exploration constant $C$. The process is repeated until a predetermined
maximum number of iterations $T$ is reached.
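The selection step of Equation 7 can be sketched as below; the `node` bookkeeping fields (`Q`, `visits`, `action_visits`) are hypothetical names introduced only for illustration.

```python
import math

def uct_select(node, actions, C=math.sqrt(2)):
    # Choose the action to follow while descending the tree (Equation 7).
    # Untried actions get an infinite score so they are explored first.
    def score(a):
        n_a = node.action_visits.get(a, 0)
        if n_a == 0:
            return float("inf")
        return node.Q[a] + C * math.sqrt(math.log(node.visits) / n_a)
    return max(actions, key=score)
```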
Once the tree is fully grown, all nodes are first filtered according to the
validity constraint to remain only with the states that deliver the desired
output. The remaining nodes are potential counterfactual explanations. Then
all nodes are evaluated against $L$. The state corresponding to the node in
the tree with minimum value for $L$ is presented to the user as the best
counterfactual.
Algorithm 1 Counterfactual heuristic tree search
1: Input: state $x$, desired outcome $a^{\prime}$, black-box model $M$,
environment $E$
2: Parameters:number of iterations $T$
3: Output: counterfactual state $x^{\prime}$
4: $t=\\{x\\}$ {Initializing search tree}
5: $i=0$
6: while $i < T$ do
7: n = select(t) {Select state $n$ to be expanded}
8: S = expand(n) {Expand $n$ by performing available actions and obtain a set
of new states $S$}
9: for all $s\in S$ do
10: $val(s)=L(x,A,a^{\prime})$ {Evaluate new states in $S$ according to $L$}
11: $t+={s}$
12: end for
13: backpropagate() {Propagate newly evaluated values back to the root}
14: $i+=1$
15: end while
16: $p=[]$
17: for all $s\in t$ do
18: if $valid(s)$ then
19: $p+=s$ {Filter valid counterfactuals}
20: end if
21: end for
22: $cf=\arg\min_{s\in p}L(x,s(A),a^{\prime})$ {Select best counterfactual as
the valid counterfactual which minimizes $L$}
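As a rough, runnable stand-in for Algorithm 1, the sketch below replaces the UCT-guided tree search with a brute-force enumeration of all action sequences up to length $k$, while keeping the same scoring (Equation 4), validity filtering (Equation 5) and Monte Carlo certainty estimate (the `estimate_stochastic_certainty` sketch above); the environment and model interfaces remain assumptions.

```python
from itertools import product

def enumerate_counterfactuals(x, a_prime, model, env, actions, k=5,
                              alpha=1.0, beta=1.0, gamma=1.0, n_sims=20):
    # Exhaustively score every action sequence of length <= k (feasible only for
    # small k and small action sets) and return the best valid counterfactual.
    best, best_loss = None, float("inf")
    for length in range(1, k + 1):
        for seq in product(actions, repeat=length):
            state = env.reset_to(x)
            cost = 0.0
            for a in seq:
                state, reward, _done, _info = env.step(a)
                cost += reward
            if model.predict(state) != a_prime:          # validity constraint (Eq. 5)
                continue
            certainty = estimate_stochastic_certainty(env, model, x, seq,
                                                      a_prime, n_sims)
            # Loss of Equation 4; reachability is normalized by k, reward
            # normalization is omitted here for brevity.
            loss = alpha * (length / k) + beta * cost + gamma * (1.0 - certainty)
            if loss < best_loss:
                best, best_loss = (state, list(seq)), loss
    return best  # (counterfactual state, action sequence) or None if none is valid
```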
## 4\. Experiments
In this section we outline the experiment setup for evaluating RACCER. We
describe the baseline approaches we evaluate RACCER against (Section 4.1) and
the evaluation tasks (Section 4.2).
### 4.1. Baseline Approaches
In this work, we proposed a model-agnostic approach for generating
counterfactual explanations for RL. In the current state-of-the-art there is
only one other method for generating counterfactuals for RL (Olson et al.,
2019), but it requires substantial information about the RL model parameters.
For that reason we cannot compare our work to Olson et al. (2019). Instead, we
implement two baseline models based on current state-of-the-art approaches in
supervised learning and RL. Both baseline approaches optimize feature-based
metrics that are used in the majority of current counterfactual approaches.
Specifically, all baselines optimize the following $6$ counterfactual
properties:
1. (1)
Validity: we use simple binary metrics for determining whether the desired
outcome $a^{\prime}$ is obtained in the counterfactual state $x^{\prime}$:
(8) $d_{v}(x,x^{\prime})=M(x^{\prime})==a^{\prime}$
2. (2)
Proximity: as we evaluate our approach in environments with discrete features,
we decide on measuring the feature-based proximity using the Euclidean
distance between the original and the counterfactual state in the encoding
space:
(9) $d_{p}(x,x^{\prime})=|enc(x)-enc(x^{\prime})|^{2}_{2}$
The encoder-decoder pair is trained on a dataset of rollout trajectories of
black-box policy $M$ that is being explained.
3. (3)
Sparsity: to calculate sparsity we count the number of different features
between the original and counterfactual instance:
(10) $d_{s}(x,x^{\prime})=|x-x^{\prime}|_{1}$
4. (4)
Data manifold closeness: to estimate how realistic the counterfactual instance
is we use the encoding loss, similar to methods in (Looveren and Klaise, 2021;
Dhurandhar et al., 2018):
(11) $d_{dmc}(x,x^{\prime})=|dec(enc(x^{\prime}))-x^{\prime}|^{2}_{2}$
5. (5)
Actionability: actionability refers to maintaining the values of immutable
features. As different tasks have different immutable features, we define
actionability depending on the task. More detail is given in Section 4.2.
6. (6)
Game fidelity: generating counterfactuals can often involve changing or
deleting features and comes with the risk that the obtained state no longer
complies with the game rules. We ensure that the generated counterfactual
abides by the rules of the game by implementing game fidelity constraint. Game
fidelity depends on the task, and is described in more detail for specific
environments in Section 4.2.
To search for the best counterfactual, we define a baseline loss function
$L_{BO}$ which relies on the proximity, sparsity and data manifold closeness
properties:
(12)
$L_{BO}(x,x^{\prime},a^{\prime})=\theta_{0}d_{p}(x,x^{\prime})+\theta_{1}d_{s}(x,x^{\prime})+\theta_{2}d_{dmc}(x,x^{\prime})$
Parameters $\theta_{0}$, $\theta_{1}$ and $\theta_{2}$ determine the
importance of different objectives. For simplicity, we use
$\theta_{0}=\theta_{1}=\theta_{2}=-1$ for our experiments, resulting in a loss
function which favors all properties equally (Table 1). The remaining
properties validity, actionability and game fidelity are used as constraints
to filter the obtained instances for those that satisfy the game rules, do not
change immutable features and deliver the desired outcome.
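A minimal sketch of the baseline loss $L_{BO}$ (Equations 9-12) is given below; the `encoder`/`decoder` callables stand in for the autoencoder trained on rollouts of $M$, and all names and default weights are assumptions made for illustration.

```python
import numpy as np

def baseline_loss(x, x_prime, encoder, decoder, theta=(1.0, 1.0, 1.0)):
    x = np.asarray(x, dtype=float)
    x_prime = np.asarray(x_prime, dtype=float)
    # Proximity (Eq. 9): squared Euclidean distance in the encoding space.
    proximity = float(np.sum((np.asarray(encoder(x)) - np.asarray(encoder(x_prime))) ** 2))
    # Sparsity (Eq. 10): L1 distance between the raw feature vectors.
    sparsity = float(np.sum(np.abs(x - x_prime)))
    # Data manifold closeness (Eq. 11): reconstruction error of the candidate.
    reconstruction = np.asarray(decoder(encoder(x_prime)), dtype=float)
    manifold = float(np.sum((reconstruction - x_prime) ** 2))
    return theta[0] * proximity + theta[1] * sparsity + theta[2] * manifold
```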
To optimize baseline loss $L_{BO}$, we implement two baseline approaches:
1. (1)
BO+GEN: this approach uses a genetic algorithm to find the best
counterfactuals based on the baseline loss function $L_{BO}$. Genetic
algorithm is a model-agnostic optimization approach that has previously been
used to search for counterfactuals in supervised learning (Dandl et al.,
2020). We use a basic $(\mu+\lambda)$ genetic algorithm with $L_{BO}$ as the
fitness function (Blank and Deb, 2020). The parameters of the algorithm are
provided in Table 1.
2. (2)
BO+TS: we optimize the loss function $L_{BO}$ using heuristic tree search. The
optimization algorithm is the same heuristic tree search as used in RACCER and
described in Section 3.3, except BO+TS uses $L_{BO}$ to evaluate nodes and
expand the tree, and ultimately choose the best counterfactual. Parameters
used in the approach for different environments can be found at Table 1.
Table 1. Parameters used for generating counterfactual explanations for
$BO+GEN$, $BO+TS$ and $RACCER$ approaches in Stochastic GridWorld and chess
environments.
| Parameter | Stochastic GridWorld | Chess |
| --- | --- | --- |
| Number of iterations ($T$) | 300 | 1 |
| Number of simulations ($N$) | 100 | 20 |
| Maximum number of actions ($k$) | 5 | 1 |
| Evaluation dataset size ($\lvert D\rvert$) | 500 | 63 |
| Generation sample size | 1000 | 100 |
| Genetic iterations | 30 | 10 |

| Loss parameter | Value |
| --- | --- |
| $\alpha$ | -1 |
| $\beta$ | -1 |
| $\gamma$ | -1 |
| $\theta_{0}$ | -1 |
| $\theta_{1}$ | -1 |
| $\theta_{2}$ | -1 |
### 4.2. Evaluation Tasks
We evaluate our approach in two environments – Stochastic GridWorld and Chess.
#### 4.2.1. Stochastic GridWorld
Stochastic GridWorld is a simple $5\times 5$ gridworld, where agent is tasked
with shooting the dragon. To successfully shoot the dragon, agent has to be in
the same file or row as the dragon, and the space between them has to be
empty. In that situation agent can successfully perform the SHOOT action and
win the game. Environment also contains different types of trees, located in
the middle file of the grid, that can block agent’s shooting path to the
dragon. Agent can chop down the tree by performing a required number of CHOP
actions when located directly next to the tree. Different tree types require
different number of consecutive CHOP actions to disappear. At each step, agent
can move one step in any of the directions and perform SHOOT and CHOP actions.
Additionally, the middle file of the board is extremely fertile, and trees can
regrow along this file with different probabilities. Agent’s actions are
penalized with $-1$ reward, while successfully shooting the dragon brings
$+10$ reward. The episode ends when the dragon is shot or when the maximum
number of time steps is reached. The only immutable feature in the environment
is the dragon’s location, as it cannot move within one episode. We consider
all states that contain an agent and a dragon, and have trees only along the middle file of the grid, to correspond to the rules of the game.
In this environment two states can appear very similar but be far away in
terms of execution. For example, even if the only difference between two
states is one tree, depending on its type chopping it down might be a lengthy
process. Chopping down a tree in order to be able to shoot the dragon might be
less preferable than simply going around it, and suggesting this to the user
could save them time and effort. Similarly, due to the stochastic nature of
the task, during the time needed to obtain a counterfactual, new trees can
regrow and potentially block agent’s path to the dragon.
#### 4.2.2. Chess
We evaluate our approach in Chess environment, with the aim of assisting users
with understanding simple tactics. Specifically, we focus on positions in
which user might prematurely attack, before the attack is fully formed. In
this situation, user might be interested to know why the attack is not the
best option, and a counterfactual explanation could give them actionable
advice on how to prepare and execute the attack. In the chess environment we
do not consider any features as immutable, due to the complexity of the game
and high number of possible situations that can arise from one state. To check
if counterfactual states correspond to valid game states we use
functionalities provided within the Stockfish package (Zhelyabuzhsky, 2022).
Due to the rules of the game, even two states that differ only in one piece
can be unreachable from one another, as it is difficult or impossible for pieces to reappear on the board. Similarly, the game is highly stochastic due to
opponent’s moves, and planning an attack has to include an analysis of its
probable success depending on the opponent’s choices. Suggesting to the user a
counterfactual which is unobtainable in the game terms or one that is only
successful for a small number of opponent’s responses will not assist the user
to perform the attack.
## 5\. Evaluation
In this section we describe the evaluation process of RACCER in the Stochastic
GridWorld and Chess environments. Firstly, we evaluate the counterfactuals
against counterfactual properties of reachability, cost-efficiency and
stochasticity, as well as feature-based properties proximity, sparsity and
data manifold closeness. Additionally, we conduct a user study, to investigate
how different types of counterfactual explanations affect user understanding
of agent’s behavior.
For both tasks, we first obtain a black-box model $M$ which is being
explained. For Stochastic GridWorld we train a DQN (Mnih et al., 2013), while
for the chess task we use the Stockfish engine. Additionally, we assume access to
the environment in both tasks. For each task we generate a data set of factual
states for which we generate counterfactual explanations. In chess environment
we manually created a dataset of $63$ game states in which a player can
perform a simple, multi-step tactical attack. The final action in the attack
is used as the desired outcome. In this way, the counterfactual explanation
can demonstrate to the user what preparatory steps need to be taken for the
attack to be successful. In the Stochastic GridWorld we sample a dataset with
$100$ factual states by unrolling expert policy $M$ in the environment. For
each state, we explain each alternative action that agent did not choose in
that state, resulting in $500$ generated counterfactuals.
Table 2. Average values of counterfactual properties for counterfactual
explanations generated using $BO+GEN$, $BO+TS$ and $RACCER$ approaches in
Stochastic GridWorld and Chess tasks.
| Metric | BO+GEN (GridWorld) | BO+TS (GridWorld) | RACCER (GridWorld) | BO+GEN (Chess) | BO+TS (Chess) | RACCER (Chess) |
| --- | --- | --- | --- | --- | --- | --- |
| Generated counterfactuals (%) | 74.60% | 56.80% | **73.40%** | 98.41% | 95.24% | **100%** |
| Proximity | -0.23 | -0.31 | -0.45 | -0.06 | -0.25 | -0.24 |
| Sparsity | -2.02 | -2.09 | -3.24 | -5.54 | -3.71 | -3.75 |
| Data manifold closeness | -0.37 | -0.36 | -0.57 | -14.58 | -13.05 | -14.80 |
| $L_{BO}$ | -2.62 | -2.76 | -4.26 | -20.19 | -17.02 | -18.80 |
| Reachability | -0.58 | -0.59 | **-0.41** | -1 | -1 | -1 |
| Cost-efficiency | **-1.0** | **-1.0** | **-1.0** | -1 | -0.48 | -0.45 |
| Stochastic uncertainty | -0.45 | -0.33 | **-0.21** | -1 | -0.68 | -0.44 |
| $L$ | -2.03 | -1.93 | -1.62 | -3 | -2.14 | -1.86 |
Figure 2. Sample question from the conducted user study
### 5.1. Evaluating Counterfactual Properties
We evaluate counterfactual explanations produced by baselines $BO+GEN$ and
$BO+TS$ and compare them with RACCER based on their reachability, cost-
efficiency and stochastic certainty, and feature-based properties proximity,
sparsity and data manifold closeness. For each factual state, we run all
three approaches to select the best counterfactual, and evaluate
counterfactual properties for them. We limit the search for counterfactuals to
$k$ actions. Parameters used in $BO+GEN$, $BO+TS$, and $RACCER$ approaches are
given in Table 1.
Evaluating RL-specific counterfactual properties for $BO+TS$ and RACCER is
straightforward as both use tree search to navigate to the counterfactual and
properties can be calculated by analysing the sequence of actions leading from
the root to the counterfactual. Genetic search, however, generates a
counterfactual by combining different states and uses no notion of actions. To
measure reachability, cost-efficiency and stochastic certainty for a
counterfactual $x^{\prime}$ generated by $BO+GEN$, we build a tree of agent’s
execution of length $k$ rooted in $x$ and find $x^{\prime}$ in it. In that way
we can estimate properties which rely on actions even for explanations
generated through direct search for counterfactual states. If $x^{\prime}$
cannot be found in the tree, it is assigned the lowest possible value for each
property which is $-1$.
We present the average results for each counterfactual property and the loss
function $L$ value for all three approaches in the Stochastic GridWorld and
Chess environment in Table 2. We also record the values of baseline
counterfactual properties of proximity, sparsity and data manifold closeness,
as well the $L_{BO}$ value for each generated counterfactual (Table 2). We
record values of properties already multipled with their weighing factors
($\alpha,\beta,\gamma,\theta_{0},\theta_{1},\theta_{2}$) from Table 1. For
each approach we also record the percentage of states for which a
counterfactual was successfully found.
In the Stochastic GridWorld environment, $RACCER$ and $BO+GEN$ approaches
generate counterfactuals for over $70\%$ of factual states. $BO+TS$ algorithm,
however, provides counterfactuals for only $56.80\%$ of states. We assume that
this is a consequence of the algorithm’s reliance on feature-based
counterfactual properties when deciding which node to expand in the execution
tree. As $BO+TS$ uses proximity, sparsity and data manifold closeness metrics
to decide which node to expand in each iteration, it prefers nodes whose
features are similar to the root. For this reason, $BO+TS$ navigates the tree
by often choosing to follow the action $SHOOT$ that does not change features.
This behavior leads to a lack of diversity within the explored nodes, and
ultimately to fewer generated counterfactuals. While baseline algorithms
$BO+GEN$ and $BO+TS$ perform better than $RACCER$ in feature-based metrics
(proximity, sparsity, data manifold closeness and baseline loss function
$L_{BO}$), $RACCER$ produces counterfactuals that perform better in
reachability and stochastic uncertainty and report lower values for $L$. As
the normalized cost is $-1$ for any sequence of actions in the Stochastic
GridWorld, there is no difference in cost-efficiency property values between
the approaches.
In the chess task, all three approaches can successfully generate
counterfactuals for almost all provided factual states. While showing the best
results for proximity property, $BO+GEN$ reports the worst performance on
other baseline properties sparsity and data manifold closeness, as well as the
baseline objective $L_{BO}$. Additionally, $BO+GEN$ produces counterfactuals
that cannot be reached from the original instance in the allotted number of
steps, as it often removes, adds or replaces pieces contrary to the game
rules. This results in $BO+GEN$ reporting the worst performance according to the RL-specific metrics and $L$. The baseline approach $BO+TS$ produces counterfactuals with the lowest values for the baseline properties proximity, sparsity and data manifold closeness, as well as $L_{BO}$. However, RACCER performs better in the RL-specific metrics reachability, cost-efficiency and stochastic uncertainty, as well as the RL-specific loss function $L$.
While baseline methods perform better on feature-based metrics, RACCER
produces counterfactuals which are easier to reach through less costly paths
and deliver the desired outcome more frequently.
### 5.2. User Study
In Section 5.1 we have shown that RACCER generates counterfactuals that are
easier to reach and more probable to deliver the desired outcome compared to
the baseline approaches. However, counterfactual explanations are ultimately
intended to assist humans in real-life tasks, and evaluating them in this
context is necessary to ensure their usefulness. To that end, we conduct a
user study in which we evaluate the effect of different types of
counterfactual explanations on user understanding of agent’s behavior.
Specifically, we compare the counterfactual explanations produced by the
baseline $BO+GEN$ and those produced by RACCER. We conduct the study in the
Stochastic GridWorld environment, as it has simple rules, and requires no
prior knowledge from users (this also ensures that results are not skewed by different levels of prior knowledge, as might be the case with chess).
We sourced $50$ participants through the Prolific platform from English-
speaking countries (UK, Ireland, Canada, USA, Australia and New Zealand) and
split them into two groups. The first group received counterfactuals generated
by BO+GEN and the second counterfactuals produced by RACCER. After filtering
participants for those who passed the attention checks, we were left with 46 participants, 23 in each group.
Figure 3. Users’ rating of different explanations properties on a 1 - 5 Likert
scale for different explanation types.
The study consisted of $10$ questions, and in each question participants were
shown a game state from the Stochastic GridWorld task (Figure 2). Participants
were offered multiple possible action sequences and asked to choose the one
they believe agent will take in the shown game state. The participants were
then shown a counterfactual explanation for that state, that explains in which
situation agent would have chosen action SHOOT. Finally, participants were
again presented with the original state, and asked to predict a sequence of
agent’s actions. They could remain with their original answer, or change it
based on the presented explanations. We focus on counterfactual explanation
for the action SHOOT, as performing this action is agent’s goal, and as such
it carries the most information about agent’s behavior. To generate questions
for the user study, we assume a black-box agent $M$ in the Stochastic
GridWorld as described in Section 5. For $M$, we generate counterfactual
explanations using the $BO+GEN$ and RACCER approaches as described in
Sections 4.1 and 3 respectively. At the end of the study users also ranked the
explanations based on the explanation goodness metrics (Hoffman et al., 2018)
on a $1-5$ Likert scale ($1$ \- strong disagreement, $5$ \- strong agreement).
Specifically, users reported whether they found explanations to be useful,
satisfying, complete, detailed, actionable, trustworthy and reliable.
Additionally, we included a question about how confident users are about their
predictions, in order to estimate the effect of counterfactuals on user
confidence. A sample user study is available at:
https://forms.gle/4DLPhcMABwkLTahx5.
To evaluate the effect of counterfactuals on user understanding we measure
users’ accuracy in predicting the correct sequence of actions after seeing the
explanation. The correct sequence of actions is the one the agent $M$ would
take. There are two reasons for choosing prediction accuracy as the evaluation
metrics for this study. Firstly, successful prediction of agent’s behavior
indicates that user understands and can anticipate system’s behavior. From a
perspective of actionable advice, on the other hand, prediction accuracy tells us how good the user is at choosing the best path to performing the SHOOT action and winning the game after seeing a counterfactual explanation. In
other words, accurate prediction of agent’s behavior indicates that the
counterfactual has helped the user identify the best path to achieving the
goal, which is the ultimate purpose of these explanations.
Users presented with counterfactual explanations generated by $BO+GEN$ have
chosen a correct sequence of actions in $23.04\%$ of cases. In contrast, users
that saw RL-specific counterfactuals generated by $RACCER$ chose the correct
sequence in $56.52\%$ of situations. We performed a Wilcoxon Signed-rank test
with significance level $0.05$ and found a significant difference in prediction accuracy between participants who received counterfactuals generated by the $BO+GEN$ algorithm and those who received counterfactuals generated by $RACCER$ ($W=4.0,p=0.0015$).
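For reference, such a comparison can be run with SciPy as sketched below; the per-question accuracy values are placeholders paired by study question, not the actual study data.

```python
from scipy.stats import wilcoxon

# Hypothetical per-question accuracies for the two groups (10 questions each).
acc_bo_gen = [0.30, 0.22, 0.17, 0.26, 0.22, 0.30, 0.22, 0.17, 0.26, 0.18]
acc_raccer = [0.61, 0.52, 0.57, 0.52, 0.61, 0.57, 0.52, 0.61, 0.52, 0.57]

stat, p_value = wilcoxon(acc_bo_gen, acc_raccer)
print(f"W = {stat}, p = {p_value:.4f}")  # significant at the 0.05 level if p < 0.05
```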
We record the results of user’s ranking of explanation goodness metrics in
Figure 3 for counterfactuals generated by $BO+GEN$ and $RACCER$ algorithms. We
perform a Wilcoxon Signed-rank test with significance level $0.05$ to evaluate
the differences in user rankings of explanation goodness metrics. However, we
found no significant difference in ratings for any of the explored metrics.
This indicates users find explanations generated using $BO+GEN$ and $RACCER$
approaches equally satisfying. Even though users perceive baseline
explanations as satisfactory, they do not help them understand agent behavior,
indicating that traditional feature-based methods can generate misleading
counterfactuals.
## 6\. Conclusion and Future Work
In this work, we presented RACCER, the first RL-specific approach to
generating counterfactual explanations. We designed and implemented three
novel counterfactual properties that reflect the sequential and stochastic
nature of RL tasks, and provided a heuristic tree search approach to finding a
counterfactual that optimizes these properties. We evaluated our approach in a
Stochastic GridWorld task and a more complex chess task, and showed that RACCER
generates counterfactuals that are easier to reach and provide the desired
outcomes more often compared to baseline approaches. We have also conducted a
user study, and shown that users presented with counterfactuals generated by
RACCER could correctly predict the behavior of RL agents more than twice as frequently
compared to users presented with baseline explanations. This indicates that
RL-specific counterfactuals help users better understand and anticipate
agent’s behavior.
In this work we have limited our search to only the best counterfactual. In
the future work, we hope to expand our search to include a set of diverse
counterfactual explanations optimizing different counterfactual properties. In
this way, users would have a wider choice of potential actionable advice.
Similarly, we have also assumed in our work that all counterfactual properties
are of the same importance to the user. However, some users might be more
interested in a shorter but riskier path, while others might prefer safety
over speed. In the future work we hope to utilize a human-in-the-loop approach
to generate personalized counterfactual explanations that fit users' preferences.
## Acknowledgement
This publication has emanated from research conducted with the financial
support of a grant from Science Foundation Ireland under Grant number
18/CRT/6223. For the purpose of Open Access, the author has applied a CC BY
public copyright licence to any Author Accepted Manuscript version arising
from this submission.
## References
* Arulkumaran et al. (2017) Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. 2017\. Deep reinforcement learning: A brief survey. _IEEE Signal Processing Magazine_ 34, 6 (2017), 26–38.
* Blank and Deb (2020) J. Blank and K. Deb. 2020. pymoo: Multi-Objective Optimization in Python. _IEEE Access_ 8 (2020), 89497–89509.
* Byrne (2019) Ruth MJ Byrne. 2019\. Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning.. In _IJCAI_. 6276–6282.
* Coppens et al. (2019) Youri Coppens, Kyriakos Efthymiadis, Tom Lenaerts, Ann Nowé, Tim Miller, Rosina Weber, and Daniele Magazzeni. 2019. Distilling deep reinforcement learning policies in soft decision trees. In _Proceedings of the IJCAI 2019 workshop on explainable artificial intelligence_. 1–6.
* Dandl et al. (2020) Susanne Dandl, Christoph Molnar, Martin Binder, and Bernd Bischl. 2020. Multi-objective counterfactual explanations. In _International Conference on Parallel Problem Solving from Nature_. Springer, 448–469.
* Delaney et al. (2021) Eoin Delaney, Derek Greene, and Mark T Keane. 2021\. Uncertainty estimation and out-of-distribution detection for counterfactual explanations: Pitfalls and solutions. _arXiv preprint arXiv:2107.09734_ (2021).
* Dhurandhar et al. (2018) Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Paishun Ting, Karthikeyan Shanmugam, and Payel Das. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. _arXiv preprint arXiv:1802.07623_ (2018).
* Gajcin and Dusparic (2022) Jasmina Gajcin and Ivana Dusparic. 2022. Counterfactual Explanations for Reinforcement Learning. _arXiv preprint arXiv:2210.11846_ (2022).
* Hoffman et al. (2018) Robert R Hoffman, Shane T Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for explainable AI: Challenges and prospects. _arXiv preprint arXiv:1812.04608_ (2018).
* Huber et al. (2021) Tobias Huber, Katharina Weitz, Elisabeth André, and Ofra Amir. 2021. Local and global explanations of agent behavior: Integrating strategy summaries with saliency maps. _Artificial Intelligence_ 301 (2021), 103571.
* Karimi et al. (2020a) Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, and Isabel Valera. 2020a. A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. _arXiv preprint arXiv:2010.04050_ (2020).
* Karimi et al. (2020b) Amir-Hossein Karimi, Julius Von Kügelgen, Bernhard Schölkopf, and Isabel Valera. 2020b. Algorithmic recourse under imperfect causal knowledge: a probabilistic approach. _Advances in Neural Information Processing Systems_ 33 (2020), 265–277.
* Kocsis and Szepesvári (2006) Levente Kocsis and Csaba Szepesvári. 2006. Bandit based monte-carlo planning. In _European conference on machine learning_. Springer, 282–293.
* Laugel et al. (2017) Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017\. Inverse classification for comparison-based interpretability in machine learning. _arXiv preprint arXiv:1712.08443_ (2017).
* Looveren and Klaise (2021) Arnaud Van Looveren and Janis Klaise. 2021. Interpretable counterfactual explanations guided by prototypes. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 650–665.
* Mnih et al. (2013) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing atari with deep reinforcement learning. _arXiv preprint arXiv:1312.5602_ (2013).
* Molnar (2019) Christoph Molnar. 2019\. _Interpretable Machine Learning_.
* Mothilal et al. (2020) Ramaravind K Mothilal, Amit Sharma, and Chenhao Tan. 2020\. Explaining machine learning classifiers through diverse counterfactual explanations. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_. 607–617.
* Olson et al. (2019) Matthew L Olson, Lawrence Neal, Fuxin Li, and Weng-Keen Wong. 2019\. Counterfactual states for atari agents via generative deep learning. _arXiv preprint arXiv:1909.12969_ (2019).
* Poyiadzi et al. (2020) Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, and Peter Flach. 2020\. FACE: feasible and actionable counterfactual explanations. In _Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society_. 344–350.
* Puiutta and Veith (2020) Erika Puiutta and Eric Veith. 2020. Explainable reinforcement learning: A survey. In _International cross-domain conference for machine learning and knowledge extraction_. Springer, 77–95.
* Rosynski et al. (2020) Matthias Rosynski, Frank Kirchner, and Matias Valdenegro-Toro. 2020\. Are Gradient-based Saliency Maps Useful in Deep Reinforcement Learning? _arXiv preprint arXiv:2012.01281_ (2020).
* Samoilescu et al. (2021) Robert-Florian Samoilescu, Arnaud Van Looveren, and Janis Klaise. 2021\. Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning. _arXiv preprint arXiv:2106.02597_ (2021).
* Ustun et al. (2019) Berk Ustun, Alexander Spangher, and Yang Liu. 2019\. Actionable recourse in linear classification. In _Proceedings of the conference on fairness, accountability, and transparency_. 10–19.
* Verma et al. (2021) Sahil Verma, John Dickerson, and Keegan Hines. 2021\. Counterfactual Explanations for Machine Learning: Challenges Revisited. _arXiv preprint arXiv:2106.07756_ (2021).
* Wachter et al. (2017) Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017\. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. _Harv. JL & Tech._ 31 (2017), 841.
* Wang et al. (2016) Ziyu Wang, Tom Schaul, Matteo Hessel, Hado Hasselt, Marc Lanctot, and Nando Freitas. 2016\. Dueling network architectures for deep reinforcement learning. In _International conference on machine learning_. PMLR, 1995–2003.
* Zhelyabuzhsky (2022) Ilya Zhelyabuzhsky. 2022\. Stockfish. https://github.com/zhelyabuzhsky/stockfish
# Certain Approximation Results for Kantorovich Exponential Sampling Series
Shivam Bajpeyi (School of Mathematics, Indian Institute of Science Education and Research, Thiruvananthapuram, India), A. Sathish Kumar (Department of Mathematics, Indian Institute of Technology Madras, Chennai-600036, India) and P. Devaraj (School of Mathematics, Indian Institute of Science Education and Research, Thiruvananthapuram, India)
###### Abstract.
In this paper, we study a strong inverse approximation theorem and saturation
order for the family of Kantorovich exponential sampling operators. The class
of log-uniformly continuous and bounded functions, and class of log-Hölderian
functions are considered to derive these results. We also prove some auxiliary
results, including a Voronovskaya type theorem and a relation between the Kantorovich exponential sampling series and the generalized exponential sampling series, which are needed to carry out this analysis. Moreover, some examples of
kernels satisfying the conditions, which are assumed in the hypotheses of our
theorems, are discussed.
###### Key words and phrases:
Kantorovich exponential sampling series, Inverse approximation, Saturation
order, Mellin derivative.
###### 2010 Mathematics Subject Classification:
41A35; 30D10; 94A20; 41A25
## 1\. Introduction
The problem of sampling and reconstruction of functions is a fundamental
aspect of approximation theory, with important applications in signal analysis
and image processing ([18, 32]). A significant breakthrough in sampling and
reconstruction theory was collectively achieved by Whittaker-Kotelnikov-
Shannon. They established that any band-limited signal $f$, i.e., a signal whose Fourier transform is compactly supported, can be completely recovered using its regularly spaced sample values (see [19]). This result is widely known as the WKS
sampling theorem. Butzer and Stens [21] generalized this result significantly
for not-necessarily band-limited signals. Since then, several mathematicians
have been making significant advancements in this direction, see [22, 8, 39,
29, 1].
The problem of approximating functions with their exponentially-spaced sample
values can be traced back to the work of Ostrowski et al. [40], Bertero and
Pike [20], and Gori [33]. In order to deal with exponentially-spaced data,
they provided a series representation for the class of Mellin band-limited
functions (defined in Section 2). This reconstruction formula is referred to as the exponential sampling formula and is defined as follows. For
$f:\mathbb{R}^{+}\rightarrow\mathbb{C}$ and $c\in\mathbb{R},$ the exponential
sampling formula is given by (see [23])
$(E_{c,T}f)(x):=\sum_{k=-\infty}^{\infty}lin_{\frac{c}{T}}(e^{-k}x^{T})f(e^{\frac{k}{T}})$
(1.1)
where $lin_{c}(x)=\dfrac{x^{-c}}{2\pi i}\dfrac{x^{\pi i}-x^{-\pi i}}{\log x}=x^{-c}\,{\rm sinc}(\log x)$ with continuous extension $lin_{c}(1)=1.$ Moreover, if
$f$ is Mellin band-limited to $[-T,T],$ then $(E_{c,T}f)(x)=f(x)$ for each
$x\in\mathbb{R}^{+}.$
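For readers who wish to experiment with (1.1) numerically, the following minimal Python sketch evaluates a truncated version of the series for $c=0$ and $T=1$; the test function, the truncation range $|k|\leq K$ and the parameter values are illustrative assumptions only, and the reconstruction is exact only in the limit $K\to\infty$ for Mellin band-limited $f$.

```python
import numpy as np

def lin_c(x, c=0.0):
    """lin_c(x) = x^(-c) * sinc(log x); numpy's sinc(u) is sin(pi*u)/(pi*u)."""
    return x ** (-c) * np.sinc(np.log(x))

def exp_sampling(f, x, c=0.0, T=1.0, K=200):
    """Truncated exponential sampling formula (1.1), summing over |k| <= K."""
    ks = np.arange(-K, K + 1)
    return sum(lin_c(np.exp(-k) * x ** T, c / T) * f(np.exp(k / T)) for k in ks)

if __name__ == "__main__":
    # A (numerically) Mellin band-limited test function for c = 0, band contained in [-1, 1]:
    f = lambda x: np.sinc(np.log(x) / (2 * np.pi)) ** 2
    for x in (0.5, 1.3, 2.7):
        print(x, f(x), exp_sampling(f, x))   # the two values should agree up to truncation error
```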
The exponentially spaced data can be observed in various problems emerging in
optical physics and engineering, for example Fraunhofer diffraction,
polydispersity analysis by photon correlation spectroscopy, neutron scattering, radio astronomy, etc. (see [27, 40, 20, 33]). Therefore, it became crucial to examine the extensions and variations of the exponential sampling formula (1.1). Butzer and Jansche [25] investigated the exponential sampling formula, incorporating the analytical tools of Mellin analysis. They established that the theory of the Mellin transform provides a suitable framework to handle sampling and approximation problems related to exponentially-spaced
data. The foundational work on the Mellin transform theory was initially
undertaken by Mamedov [38]. Subsequently, Butzer and his colleagues made
significant contributions to the field of Mellin theory in [23, 25]. For some
notable developments on Mellin theory, we refer to [10, 11, 12, 13] etc. In
order to approximate a function which is not necessarily Mellin band-limited,
the theory of the exponential sampling formula (1.1) was extended in [14] using a generalized kernel satisfying suitable conditions. This gives a method to approximate the class of log-continuous functions by employing their exponentially spaced sample values. For $x\in\mathbb{R}^{+}$ and $w>0,$ the
generalized exponential sampling series is given by (see [14])
$(S_{w}^{\chi}f)(x)=\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})f(e^{\frac{k}{w}})$
(1.2)
for any $f:\mathbb{R}^{+}\rightarrow\mathbb{R}$ such that the series (1.2)
converges absolutely. Various approximation properties associated with the
family of operators (1.2) can be observed in [7, 15, 16, 35]. The
approximation properties of exponential sampling operators based on artificial neural networks can be found in [5, 6]. The series (1.2) is, however, not suitable for approximating merely integrable functions. To overcome this, the following Kantorovich type modification of the family (1.2) was studied in
[2]. For $x\in\mathbb{R}^{+},k\in\mathbb{Z}$ and $w>0,$ the Kantorovich
exponential sampling series is defined by
$(I_{w}^{\chi}f)(x):=\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})\
w\int_{\frac{k}{w}}^{\frac{k+1}{w}}f(e^{u})\ du\ \ $ (1.3)
whenever the series (1.3) is absolutely convergent for any locally integrable
function $f:\mathbb{R}^{+}\rightarrow\mathbb{R}.$ Some direct and inverse
approximation results for the family $(I_{w}^{\chi}f)$ have been discussed in
[2, 4], which include the basic convergence theorem, a higher order asymptotic convergence result and a quantitative approximation theorem. Also, an inverse approximation theorem in the case of $f\in\mathcal{C}^{(1)}(\mathbb{R}^{+})$ was proved in [2] under the assumption that the first order moment vanishes on
$\mathbb{R}^{+}.$ For some recent advancements related to the family (1.3), we
refer to [36, 3, 37, 34].
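To make the operators (1.2) and (1.3) concrete, the sketch below evaluates truncated versions of $(S_{w}^{\chi}f)(x)$ and $(I_{w}^{\chi}f)(x)$ in Python, using the second-order Mellin B-spline $\chi(t)=(1-|\log t|)_{+}$ as kernel; the kernel choice, the test signal, the truncation window and the midpoint quadrature for the inner integral are assumptions made only for this illustration.

```python
import numpy as np

def chi(t):
    """Second-order Mellin B-spline kernel: chi(t) = max(0, 1 - |log t|)."""
    return np.maximum(0.0, 1.0 - np.abs(np.log(t)))

def S_w(f, x, w, halfwidth=5):
    """Truncated generalized exponential sampling series (1.2)."""
    k0 = int(round(w * np.log(x)))
    ks = np.arange(k0 - halfwidth, k0 + halfwidth + 1)
    return sum(chi(np.exp(-k) * x ** w) * f(np.exp(k / w)) for k in ks)

def I_w(f, x, w, halfwidth=5, m=32):
    """Truncated Kantorovich exponential sampling series (1.3);
    w * integral over [k/w, (k+1)/w] is the mean of f(e^u), approximated with m midpoints."""
    k0 = int(round(w * np.log(x)))
    total = 0.0
    for k in range(k0 - halfwidth, k0 + halfwidth + 1):
        weight = chi(np.exp(-k) * x ** w)
        if weight == 0.0:
            continue
        u = (k + (np.arange(m) + 0.5) / m) / w
        total += weight * np.mean(f(np.exp(u)))
    return total

if __name__ == "__main__":
    f = lambda x: np.sin(np.log(x))          # bounded and log-uniformly continuous signal
    x = 2.0
    for w in (5, 20, 80):
        print(w, f(x), S_w(f, x, w), I_w(f, x, w))   # both series approach f(x) as w grows
```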
In the present work, we deduce a strong inverse approximation result for the
family of Kantorovich exponential sampling operators $(I_{w}^{\chi})$ for $f\in\mathcal{C}(\mathbb{R}^{+}),$ without assuming that the first order algebraic moment vanishes. This not only broadens the underlying class of functions but also enables the application of our theory to some other kernels, for instance,
the class of Mellin B-spline kernels (see Section 4). We also establish the
saturation order, i.e. the highest order of convergence that can be achieved,
for $(I_{w}^{\chi}f)$ in case of $f\in\mathcal{C}(\mathbb{R}^{+}).$ The
problem of saturation order for the family of operators $(I_{w}^{\chi}),w>0$
is to find a suitable class $\mathcal{F}$ of real valued functions defined on
$\mathbb{R}^{+},$ a subclass $\mathcal{S}$ and a positive non-increasing
function $\rho(w),w>0$ satisfying the following: there exists
$h\in\mathcal{F}\setminus\mathcal{S}$ with
$\|I_{w}^{\chi}h-h\|=\mathcal{O}(\rho(w))$ as $w\rightarrow\infty$ and
whenever $f\in\mathcal{F}$ with $\|I_{w}^{\chi}f-f\|={o}(\rho(w))$ as
$w\rightarrow\infty$ implies that $f\in\mathcal{S}$ and vice versa. Several
authors have investigated inverse approximation results and the saturation order for various sampling operators; see [28, 17, 29, 30, 31], etc.
The proposed plan of the paper is as follows. In order to derive these
results, we first define an appropriate average type kernel and derive some
auxiliary results mainly concerned with this new kernel in Section 2. In
Section 3, we establish a relation between the operator (1.3) and the
derivative of the operator (1.2) based on average type kernel. Further, we
derive the asymptotic formula for the operator (1.3) using Mellin Taylor
formula. By using these results, we prove the saturation theorem and inverse
result for the family of sampling operators (1.3). In Section 4, we discuss
some examples of kernels satisfying the conditions, which are assumed in the
hypotheses of the theorems.
## 2\. Preliminaries and Auxiliary Results
Let $\mathbb{R}^{+}$ be the set of positive real numbers and
$L^{p}(\mathbb{R}^{+}),\ 1\leq p<\infty,$ consist of all $p$-integrable functions in the Lebesgue sense on $\mathbb{R}^{+}$ with the usual $p$-norm.
Further $L^{\infty}(\mathbb{R}^{+})$ denotes the class of bounded measurable
functions defined on $\mathbb{R}^{+}$ with $\|.\|_{\infty}$ norm. Let $X_{c}$
be the space of functions $f:\mathbb{R}^{+}\rightarrow\mathbb{R}$ such that
$f(\cdot)(\cdot)^{c-1}\in L^{1}(\mathbb{R}^{+})$ for some $c\in\mathbb{R},$
equipped with the following norm
$\|f\|_{X_{c}}=\int_{0}^{\infty}|f(t)|\ t^{c}\frac{dt}{t}.$
For $f\in X_{c},$ the Mellin transform of $f$ is given by
$[f]^{\wedge}_{M}(s):=\int_{0}^{\infty}f(t)\ t^{s}\frac{dt}{t},\ \ \ (s=c+ix,\
x\in\mathbb{R}).$
One can observe that the Mellin transform is well defined in $X_{c}$ as a
Lebesgue integral. Further, for $c,t\in\mathbb{R}$ and $T>0,$ any function
$f\in X_{c}(\mathbb{R}^{+})$ is said to be Mellin band-limited to $[-T,T],$ if
$[f]^{\wedge}_{M}(c+it)=0$ for $|t|>T.$ For more details on the theory of the Mellin
transform, we refer to [23, 24]. The point-wise Mellin derivative of the
function $f$ is defined by the following limit
$\theta_{c}f(t)=\lim_{h\rightarrow
1}\frac{\tau_{h}^{c}f(t)-f(t)}{h-1}=tf^{{}^{\prime}}(t)+cf(t)\ ,$
provided $f^{{}^{\prime}}$ exists, where $\tau_{h}^{c}$ is the Mellin
translation operator $(\tau_{h}^{c}f)(t):=h^{c}f(ht).$ Furthermore, the Mellin
differential operator of order $r$ is given by
$\theta_{c}^{r}:=\theta_{c}(\theta_{c}^{r-1}).$ Throughout this paper, we
consider $\theta_{c}:=\theta_{c}^{1}$ and $\theta f:=\theta_{0}f.$
We now give the definition of recurrent function. We say that a function
$f:\mathbb{R}^{+}\rightarrow\mathbb{C}$ is recurrent if $f(x)=f(e^{a}x),$
$\forall$ $x\in\mathbb{R}^{+}$ and for some $a\in\mathbb{R}$ (see [26]). The
fundamental interval of the above recurrent functions can be taken as
$[1,e^{a}].$
Let $C(\mathbb{R}^{+})$ denote the space of all uniformly continuous and
bounded functions on $\mathbb{R}^{+}$ with norm
$\|f\|_{\infty}:=\sup_{t\in\mathbb{R}^{+}}|f(t)|.$ For any $\nu\in\mathbb{N},$
let $C^{(\nu)}(\mathbb{R}^{+})$ be the subspace of $C(\mathbb{R}^{+})$ such that
$f^{(r)}\in C(\mathbb{R}^{+})$ for each $r\leq\nu,r\in\mathbb{N}.$ Also,
$C_{c}^{\infty}(\mathbb{R}^{+})$ represents the space of all infinitely
differentiable functions which are compactly supported in $\mathbb{R}^{+}.$ A
function $f:\mathbb{R}^{+}\rightarrow\mathbb{R}$ is said to be log-uniformly
continuous on $\mathbb{R}^{+}$ if $\forall$ $\epsilon>0,$ there exists a
$\delta>0$ such that $|f(x)-f(y)|<\epsilon$ whenever $|\log x-\log y|<\delta,$
for any $x,y\in\mathbb{R}^{+}.$ Further, $\mathcal{C}(\mathbb{R}^{+})$ denotes
the space of all log-uniformly continuous and bounded functions defined on
$\mathbb{R}^{+}.$ Analogous to the classical case, for any $\nu\in\mathbb{N},$
let $\mathcal{C}^{(\nu)}(\mathbb{R}^{+})$ be the subspace of
$\mathcal{C}(\mathbb{R}^{+})$ such that
$(\theta^{r}f)\in\mathcal{C}(\mathbb{R}^{+})$ for each
$r\leq\nu,r\in\mathbb{N}.$ For $f\in C^{(n)}(\mathbb{R}^{+}),$ the Mellin Taylor formula is given by (see [9])
$f(tx)=f(x)+(\theta f)(x)\log t+\frac{(\theta^{2}f)(x)}{2!}\log^{2}t+\cdots+\frac{(\theta^{n}f)(x)}{n!}\log^{n}t+h(t)\log^{n}t\ ,$
where $h:\mathbb{R}^{+}\rightarrow\mathbb{R}$ is bounded and $h(t)\rightarrow 0$ as $t\rightarrow 1.$
A continuous function $\chi:\mathbb{R}^{+}\rightarrow\mathbb{R}$ is said to be
a kernel if it fulfils the following conditions:
* $(\chi_{1})$
For any $u\in\mathbb{R}^{+},\ $
$\displaystyle\sum_{k=-\infty}^{\infty}\chi(e^{-k}u)=1,\hskip
5.69046pt\mbox{uniformly on}\ \mathbb{R}^{+}.$
* $(\chi_{2})$
$\displaystyle m_{1}(\chi,u):=\sum_{k=-\infty}^{\infty}\chi(e^{-k}u)(k-\log
u)\ =:m_{1}^{\chi}\in\mathbb{R},$ independent of $u.$
* $(\chi_{3})$
For some $\beta\geq 1,$ $\displaystyle
M_{\beta}(\chi):=\sup_{u\in\mathbb{R}^{+}}\sum_{k=-\infty}^{\infty}|\chi(e^{-k}u)||k-\log
u|^{\beta}<\infty.$
* $(\chi_{4})$
For every $\gamma>0,$ $\displaystyle\lim_{w\rightarrow\infty}\sum_{|w\log
x-k|>w\gamma}|\chi(e^{-k}x^{w})|\ |w\log x-k|=0$ uniformly on
$\mathbb{R}^{+}.$
###### Remark 1.
[2] One can deduce that for $\alpha,\beta\in\mathbb{N}_{0}$ with
$\alpha<\beta,$ $M_{\alpha}(\chi)<\infty$ whenever $M_{\beta}(\chi)<\infty.$
To establish the proposed results for the Kantorovich exponential sampling
operator (1.3), we define the average type kernel as follows:
$\displaystyle\bar{\chi}(t)=\int_{e^{\frac{-1}{2}}}^{e^{\frac{1}{2}}}\chi(tu)\frac{du}{u}=\int_{\frac{-1}{2}}^{\frac{1}{2}}\chi(te^{p})dp,\
\ \ \ t\in\mathbb{R}^{+}.$ (2.1)
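A direct numerical view of the averaged kernel (2.1) may be helpful: the sketch below approximates $\bar{\chi}$ by a midpoint rule and checks the partition-of-unity property established for it in Lemma 1 below; the hat kernel, the quadrature resolution and the sample points are assumptions of the demo.

```python
import numpy as np

def chi(t):
    """Illustrative kernel (second-order Mellin B-spline), assumed for the demo."""
    return np.maximum(0.0, 1.0 - np.abs(np.log(t)))

def chi_bar(t, m=400):
    """Averaged kernel (2.1): integral of chi(t e^p) dp over p in [-1/2, 1/2], midpoint rule."""
    p = (np.arange(m) + 0.5) / m - 0.5
    return float(np.mean(chi(t * np.exp(p))))

if __name__ == "__main__":
    for u in (0.7, 1.0, 3.1):
        s = sum(chi_bar(np.exp(-k) * u) for k in range(-8, 9))
        print(u, s)   # close to 1, i.e. chi_bar inherits condition (chi_1), cf. Lemma 1
```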
In the following lemma, we show that the average type kernel satisfies the
conditions $(\chi_{1})$-$(\chi_{4}).$
###### Lemma 1.
Let $\chi:\mathbb{R}^{+}\rightarrow\mathbb{R}$ be the kernel function
satisfying $(\chi_{1})$ -$(\chi_{4})$ and $\bar{\chi}(t)$ be defined as in
(2.1). Then $\bar{\chi}$ also satisfies $(\chi_{1})$ -$(\chi_{4}).$
###### Proof.
Since $\chi$ is continuous, the averaged type kernel $\bar{\chi}$ is also
continuous. Now for $u\in\mathbb{R}^{+},$ we have
$m_{0}(\bar{\chi})=\sum_{k=-\infty}^{\infty}\bar{\chi}(e^{-k}u)=\int_{\frac{-1}{2}}^{\frac{1}{2}}\left(\sum_{k=-\infty}^{\infty}\chi(e^{-k}ue^{p})\right)dp=\displaystyle\int_{\frac{-1}{2}}^{\frac{1}{2}}dp=1,$
where the second-last equality follows from $(\chi_{1})$ for $\chi.$ Hence $\overline{\chi}$ satisfies $(\chi_{1}).$ Using the condition
$(\chi_{2}),$ we get
$\displaystyle m_{1}(\bar{\chi},u)$ $\displaystyle=$
$\displaystyle\sum_{k=-\infty}^{\infty}\bar{\chi}(e^{-k}u)(k-\log u)$
$\displaystyle=$
$\displaystyle\sum_{k=-\infty}^{\infty}\int_{\frac{-1}{2}}^{\frac{1}{2}}\chi(e^{-k}ue^{p})(k-\log
u+p-p)\ dp$ $\displaystyle=$
$\displaystyle\int_{\frac{-1}{2}}^{\frac{1}{2}}\sum_{k=-\infty}^{\infty}\chi(e^{-k}ue^{p})(k-\log(ue^{p})+p)\
dp$ $\displaystyle=$ $\displaystyle
m_{1}(\chi,u)\int_{\frac{-1}{2}}^{\frac{1}{2}}dp\ +\
\int_{\frac{-1}{2}}^{\frac{1}{2}}p\ dp=m_{1}^{\chi}.$
We define $\displaystyle
M_{\beta}(\bar{\chi}):=\sup_{u\in\mathbb{R}^{+}}\sum_{k=-\infty}^{\infty}|\bar{\chi}(e^{-k}u)|\
|k-\log u|^{\beta}.$ For $\beta\geq 1,$ it is given that
$M_{\beta}(\chi)<\infty.$ Now we show that $M_{\beta}(\bar{\chi})<\infty.$ In
view of (2.1), we have
$\displaystyle M_{\beta}(\bar{\chi})$ $\displaystyle\leq$
$\displaystyle\sum_{k=-\infty}^{\infty}\int_{\frac{-1}{2}}^{\frac{1}{2}}\left(|\chi(e^{-k}ue^{p})|\
|k-\log u+p-p|^{\beta}\right)dp$ $\displaystyle\leq$
$\displaystyle\int_{\frac{-1}{2}}^{\frac{1}{2}}\left(\sum_{k=-\infty}^{\infty}|\chi(e^{-k}ue^{p})|\
|k-\log(ue^{p})+p|^{\beta}\right)dp.$
Since $\beta\geq 1,$ by using the inequality $|a+b|^{\beta}\leq 2^{\beta-1}\left(|a|^{\beta}+|b|^{\beta}\right),$ we obtain
$\displaystyle M_{\beta}(\overline{\chi})$ $\displaystyle\leq$ $\displaystyle
2^{\beta-1}\int_{\frac{-1}{2}}^{\frac{1}{2}}\left(\sum_{k=-\infty}^{\infty}|\chi(e^{-k}ue^{p})|\
|k-\log(ue^{p})|^{\beta}\right)dp$
$\displaystyle+2^{\beta-1}\int_{\frac{-1}{2}}^{\frac{1}{2}}\left(\sum_{k=-\infty}^{\infty}|\chi(e^{-k}ue^{p})|\
|p|^{\beta}\right)dp$ $\displaystyle\leq$ $\displaystyle
2^{\beta-1}\left(M_{\beta}(\chi)+M_{0}({\chi})\int_{\frac{-1}{2}}^{\frac{1}{2}}|p|^{\beta}dp\right)$
$\displaystyle=$ $\displaystyle
2^{\beta-1}M_{\beta}(\chi)+\frac{M_{0}(\chi)}{2(\beta+1)}.$
Since $M_{\beta}(\chi)<\infty$ and, by Remark 1, $M_{0}(\chi)<\infty,$ we have $M_{\beta}(\bar{\chi})<\infty.$
Now we show that for $\gamma>0,$
$\displaystyle\lim_{w\rightarrow\infty}\sum_{|w\log
x-k|>w\gamma}|\bar{\chi}(e^{-k}x^{w})|\ |w\log x-k|=0$
uniformly w.r.t. $x\in\mathbb{R}^{+}.$ For $w>\dfrac{1}{\gamma},$ by using the
definition of $\bar{\chi},$ we can write
$\displaystyle\sum_{|w\log x-k|>w\gamma}|\bar{\chi}(e^{-k}x^{w})|\ |w\log
x-k|$
$\displaystyle\leq$ $\displaystyle\sup_{p\in[-1/2,1/2]}\left(\sum_{|w\log
x-k|>w\gamma}|\chi(e^{-k}x^{w}e^{p})|\ |w\log
x-k+\log(e^{p})-\log(e^{p})|\right)$ $\displaystyle\leq$
$\displaystyle\sup_{p\in[-1/2,1/2]}\left(\sum_{|w\log
x-k+\log(e^{p})|>w\gamma-1/2}|\chi(e^{-k}x^{w}e^{p})|\ (|w\log
x-k+\log(e^{p})|+|\log(e^{p})|)\right)$ $\displaystyle\leq$
$\displaystyle\sup_{y\in\mathbb{R}^{+}}\left(\sum_{|w\log
y-k|>w\gamma-1/2}|\chi(e^{-k}y^{w})|\left(|w\log
y-k|+\frac{1}{2}\right)\right)$ $\displaystyle\leq$
$\displaystyle\sup_{y\in\mathbb{R}^{+}}\left(\sum_{|w\log
y-k|>w\gamma/2}|\chi(e^{-k}y^{w})||w\log y-k|+\frac{1}{2}\sum_{|w\log
y-k|>w\gamma/2}|\chi(e^{-k}y^{w})|\right).$
Using the condition $(\chi_{4}),$ we deduce that
$\displaystyle\lim_{w\rightarrow\infty}\sum_{|w\log
x-k|>w\gamma}|\bar{\chi}(e^{-k}x^{w})|\ |w\log x-k|=0$
uniformly w.r.t. $x\in\mathbb{R}^{+}.$ This concludes the proof. ∎
Next we deduce the following result which will be useful to obtain a relation
between the sampling series (1.2) and (1.3).
###### Lemma 2.
Let $[a,b]\subset\mathbb{R}^{+}$ and $f:[a,b]\rightarrow\mathbb{R}$ be
continuous, and let
$F(x)=\int_{a}^{x}f(t)\frac{dt}{t},\ \ \ x\in[a,b].$ (2.2)
Then $F$ is Mellin differentiable and $\ (\theta F)(x)=f(x),\ \forall
x\in[a,b].$
###### Proof.
We have
$\displaystyle F(x)$ $\displaystyle=$
$\displaystyle\int_{a}^{x}f(t)\frac{dt}{t}=\int_{\log a}^{\log x}f(e^{v})\
dv.$
This gives
$\displaystyle F(sx)-F(x)$ $\displaystyle=$ $\displaystyle\int_{\log a}^{\log
sx}f(e^{v})\ dv-\int_{\log a}^{\log x}f(e^{v})\ dv$ $\displaystyle=$
$\displaystyle\begin{cases}{\displaystyle\int_{\log x}^{\log sx}f(e^{v})\
dv}&\quad\text{if}\ \ \ \ {s>1}\\\ {\displaystyle-\int_{\log x}^{\log
sx}f(e^{v})\ dv}&\quad\text{if}\ \ \ \ {s<1.}\\\ \end{cases}$
On applying the mean value theorem for integral calculus, we get
$\int_{\log x}^{\log sx}f(e^{v})\ dv=(\log s)f(e^{\xi}),$
where $\xi\in[\log x,\log sx]$ for $s>1$ and $\xi\in[\log sx,\log x]$ for
$s<1.$ This gives
$F(sx)-F(x)=\log(s)f(e^{\xi}),\ \ \xi\in[\log x,\log sx].$
This implies that
$\frac{F(sx)-F(x)}{\log s}=f(e^{\xi}).$
Taking the limit as $s\rightarrow 1$ and using the continuity of $f,$ we deduce
$\lim_{s\rightarrow 1}\frac{F(sx)-F(x)}{\log s}=f(x).$
This gives $(\theta F)(x)=f(x),\ \ x\in[a,b].$ Thus, the proof is completed. ∎
## 3\. Main Results
In this section, we derive the inverse approximation result and saturation
order for the Kantorovich exponential sampling series $(I_{w}^{\chi}f).$ First
we establish the relation between $(S_{w}^{\chi}f)$ and $(I_{w}^{\chi}f).$
Using the continuity of $\chi$ and Lemma 2, we obtain
$\theta\bar{\chi}(t)=\chi(te^{\frac{1}{2}})-\chi(te^{\frac{-1}{2}}).$ (3.1)
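Identity (3.1) can be checked numerically. The sketch below compares a central finite-difference approximation of $\theta\bar{\chi}$ with the right-hand side of (3.1) for the smooth continuous test function $\chi(t)=e^{-\log^{2}t}$; note that (3.1) only uses the continuity of $\chi$, and the test function, quadrature grid and step size are assumptions of the demo.

```python
import numpy as np

def chi(t):
    """Smooth continuous test function (not necessarily a kernel), assumed for the demo."""
    return np.exp(-np.log(t) ** 2)

def chi_bar(t, m=2000):
    """Averaged function (2.1), computed with an m-point midpoint rule."""
    p = (np.arange(m) + 0.5) / m - 0.5
    return float(np.mean(chi(t * np.exp(p))))

def theta(g, t, h=1e-5):
    """Mellin derivative (theta g)(t) = t g'(t), via a central difference in log t."""
    return (g(t * np.exp(h)) - g(t * np.exp(-h))) / (2.0 * h)

if __name__ == "__main__":
    for t in (0.6, 1.1, 2.4):
        lhs = theta(chi_bar, t)
        rhs = chi(t * np.exp(0.5)) - chi(t * np.exp(-0.5))
        print(t, lhs, rhs)   # both columns should agree up to small discretization error
```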
###### Lemma 3.
Let $f\in C(\mathbb{R}^{+})$ and $F$ be the Mellin anti-derivative of $f.$
Then for $x,w\in\mathbb{R}^{+},$ there holds
$(I_{w}^{\chi}f)(x)=(\theta S^{\bar{\chi}}_{w}F)(xe^{\frac{1}{2w}}).$
###### Proof.
Using (2.2), we can write
$\displaystyle(I_{w}^{\chi}f)(x)$ $\displaystyle=$
$\displaystyle\displaystyle\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})\
w\int_{\frac{k}{w}}^{\frac{k+1}{w}}f(e^{u})\ du$ $\displaystyle=$
$\displaystyle\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})\
w\left(F(e^{\frac{k+1}{w}})-F(e^{\frac{k}{w}})\right)$ $\displaystyle=$
$\displaystyle
w\left(\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})F(e^{\frac{k+1}{w}})-\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})F(e^{\frac{k}{w}})\right).$
Setting $\widetilde{k}=k+1$ in the first term of the above expression, we have
$\displaystyle(I_{w}^{\chi}f)(x)$ $\displaystyle=$ $\displaystyle
w\left(\sum_{\widetilde{k}=-\infty}^{\infty}\chi(e^{-\widetilde{k}}x^{w}e)F(e^{\frac{\widetilde{k}}{w}})-\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})F(e^{\frac{k}{w}})\right)$
$\displaystyle=$ $\displaystyle
w\left(\sum_{\widetilde{k}=-\infty}^{\infty}\chi(e^{-\widetilde{k}}x^{w}e^{\frac{1}{2}}e^{\frac{1}{2}})F(e^{\frac{\widetilde{k}}{w}})\right)-w\left(\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w}e^{\frac{1}{2}}e^{\frac{-1}{2}})F(e^{\frac{k}{w}})\right)$
$\displaystyle=$ $\displaystyle\sum_{k=-\infty}^{\infty}F(e^{\frac{k}{w}})\
w\left(\chi(e^{-k}x^{w}e^{\frac{1}{2}}e^{\frac{1}{2}})-\chi(e^{-k}x^{w}e^{\frac{1}{2}}e^{\frac{-1}{2}})\right).$
From (3.1), we get
$\displaystyle(I_{w}^{\chi}f)(x)=\sum_{k=-\infty}^{\infty}F(e^{\frac{k}{w}})\
w\ (\theta\bar{\chi})(e^{-k}x^{w}e^{\frac{1}{2}}).$
Since $(\theta g)(x)=xg^{{}^{\prime}}(x)$ for any differentiable function $g,$ we obtain
$(\theta
S_{w}^{\bar{\chi}}F)(xe^{\frac{1}{2w}})=\sum_{k=-\infty}^{\infty}F(e^{\frac{k}{w}})\
w\ (\theta\bar{\chi})(e^{-k}x^{w}e^{\frac{1}{2}}).$
Hence, the proof is established. ∎
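Lemma 3 can also be verified numerically. For the hat kernel $\chi(t)=(1-|\log t|)_{+}$ the averaged kernel has the closed form $\bar{\chi}(t)=B_{3}(\log t)$ (the third-order B-spline), so both sides of the identity are easy to evaluate; the test function, truncation window and finite-difference step in the sketch are assumptions of the demo, and the two printed columns should agree up to the finite-difference error.

```python
import numpy as np

def B2(v):                      # hat function: chi(t) = B2(log t)
    return np.maximum(0.0, 1.0 - np.abs(v))

def B3(v):                      # averaged kernel: chi_bar(t) = B3(log t)
    v = np.asarray(v, dtype=float)
    av = np.abs(v)
    out = np.where(av <= 0.5, 0.75 - v ** 2, 0.0)
    return np.where((av > 0.5) & (av < 1.5), 0.5 * (1.5 - av) ** 2, out)

def I_w(f_anti, x, w, halfwidth=5):
    """(I_w^chi f)(x) computed exactly through the anti-derivative:
    w * integral of f(e^u) over [k/w, (k+1)/w] = w * (F(e^{(k+1)/w}) - F(e^{k/w}))."""
    k0 = int(round(w * np.log(x)))
    ks = np.arange(k0 - halfwidth, k0 + halfwidth + 1)
    return sum(B2(w * np.log(x) - k) * w * (f_anti(np.exp((k + 1) / w)) - f_anti(np.exp(k / w)))
               for k in ks)

def S_bar(F, y, w, halfwidth=5):
    """(S_w^{chi_bar} F)(y), the generalized series with the averaged kernel."""
    k0 = int(round(w * np.log(y)))
    ks = np.arange(k0 - halfwidth, k0 + halfwidth + 1)
    return sum(B3(w * np.log(y) - k) * F(np.exp(k / w)) for k in ks)

if __name__ == "__main__":
    F = lambda x: -np.cos(np.log(x))          # a Mellin anti-derivative of f(x) = sin(log x)
    w, h = 10.0, 1e-6
    for x in (0.8, 1.5, 3.0):
        lhs = I_w(F, x, w)
        y = x * np.exp(1.0 / (2.0 * w))
        rhs = (S_bar(F, y * np.exp(h), w) - S_bar(F, y * np.exp(-h), w)) / (2.0 * h)
        print(x, lhs, rhs)                    # the two columns should agree (Lemma 3)
```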
Next we establish the asymptotic formula for the series $(S_{w}^{\chi}f).$
This result is required to derive the saturation order for the Kantorovich
exponential sampling series $(I_{w}^{\chi}f).$
###### Theorem 1.
If $f:\mathbb{R}^{+}\rightarrow\mathbb{R}$ is differentiable such that
$(\theta f)$ is log-uniformly continuous and bounded on $\mathbb{R}^{+},$ then
$\displaystyle\lim_{w\to+\infty}w\ ((S_{w}^{\chi}f)(x)-f(x))=m_{1}^{\chi}\
(\theta f)(x),$
uniformly on $\mathbb{R}^{+}.$
###### Proof.
Setting $u=k/w$ in the first order Mellin Taylor formula, we obtain
$f(e^{k/w})=f(x)+\left(\frac{k}{w}-\log x\right)(\theta f)(\xi),$ where $\xi$ lies between $e^{k/w}$ and $x$ (and depends on $k$).
Multiplying both sides of the above expression by $\chi(e^{-k}x^{w}),$ summing over $k$ and using $(\chi_{1}),$ we get
$(S_{w}^{\chi}f)(x)=f(x)+\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})(\theta
f)(\xi)\left(\frac{k}{w}-\log x\right):=f(x)+R_{w}(x).$
The above expression implies
$w\left((S_{w}^{\chi}f)(x)-f(x)\right)-m_{1}^{\chi}(\theta f)(x)=w\
R_{w}(x)-m_{1}^{\chi}(\theta f)(x).$
For $\delta>0,$ we can write
$w\ R_{w}(x)-m_{1}^{\chi}\ (\theta f)(x)$
$\displaystyle=$
$\displaystyle\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})(k-w\log x)((\theta
f)(\xi)-(\theta f)(x))$ $\displaystyle=$
$\displaystyle\left(\sum_{\left|\frac{k}{w}-\log
x\right|<\delta}+\sum_{\left|\frac{k}{w}-\log
x\right|\geq\delta}\right)\chi(e^{-k}x^{w})(k-w\log x)((\theta f)(\xi)-(\theta
f)(x))$ $\displaystyle:=$ $\displaystyle I_{1}+I_{2}.$
Since $(\theta f)$ is log-uniformly continuous, $\forall\epsilon>0,$
$\exists\,\ \delta>0$ such that $\left|(\theta f)(\xi)-(\theta
f)(x)\right|<\epsilon,$ for $\left|\dfrac{k}{w}-\log x\right|<\delta.$ This
gives
$\displaystyle|I_{1}|$ $\displaystyle\leq$
$\displaystyle\epsilon\sum_{\left|\frac{k}{w}-\log
x\right|<\delta}|\chi(e^{-k}x^{w})||k-w\log x|$ $\displaystyle\leq$
$\displaystyle\epsilon\ M_{1}(\chi).$
Since $\epsilon>0$ is arbitrary, we obtain $I_{1}\rightarrow 0$ as
$w\rightarrow\infty.$ In view of boundedness of $(\theta f),$ we get
$\displaystyle|I_{2}|$ $\displaystyle\leq$ $\displaystyle 2\|\theta
f\|_{\infty}\sum_{|{k}-w\log x|\geq w\delta}|\chi(e^{-k}x^{w})||k-w\log x|.$
Using the condition $(\chi_{4}),$ we obtain $I_{2}\rightarrow 0$ as
$w\rightarrow\infty.$ On collecting the estimates $I_{1}-I_{2},$ we obtain
$\lim_{w\to+\infty}w\left[(S_{w}^{\chi}f)(x)-f(x)\right]=m_{1}^{\chi}\ (\theta
f)(x).$
Thus, the proof is completed. ∎
This gives the following corollary.
###### Corollary 1.
Let $f:\mathbb{R}^{+}\to\mathbb{R}$ be differentiable and such that $(\theta f)\in\mathcal{C}(\mathbb{R}^{+}).$ Then, for any $x\in\mathbb{R}^{+},$ we have
$\lim_{w\to+\infty}w[(S_{w}^{\chi}f)(xe^{1/2w})-f(x)]=(2m_{1}^{\chi}+1)\frac{(\theta
f)(x)}{2},$
uniformly on $\mathbb{R}^{+}.$
###### Proof.
Using Theorem 1, we write
$\Big{|}w\left[(S_{w}^{\chi}f)(u)-f(u)\right]-m_{1}^{\chi}\ (\theta
f)(u)\Big{|}<\epsilon\ ,$
which holds for every $u\in\mathbb{R}^{+}$ and every fixed $\epsilon>0,$ provided $w$ is sufficiently large. If we take
$u=xe^{1/2w},$ then from the above estimate, we obtain
$\Big{|}w\left[(S_{w}^{\chi}f)(xe^{1/2w})-f(xe^{1/2w})\right]-m_{1}^{\chi}\
(\theta f)(xe^{1/2w})\Big{|}<\epsilon.$
Thus, we have
$\Big{|}w\left[(S_{w}^{\chi}f)(xe^{1/2w})-f(x)\right]-(2m_{1}^{\chi}+1)\dfrac{(\theta
f)(x)}{2}\Big{|}$
$\displaystyle<$
$\displaystyle\epsilon+\Bigg{|}w(f(xe^{1/2w})-f(x))-\frac{(\theta f)(x)}{2}\Bigg{|}+\Big{|}m_{1}^{\chi}\left((\theta f)(x)-(\theta f)(xe^{\frac{1}{2w}})\right)\Big{|}$ $\displaystyle=$
$\displaystyle\epsilon+\frac{1}{2}\Bigg{|}\frac{f(xe^{1/2w})-f(x)}{1/(2w)}-(\theta f)(x)\Bigg{|}+\Big{|}m_{1}^{\chi}\left((\theta f)(x)-(\theta f)(xe^{\frac{1}{2w}})\right)\Big{|}$ $\displaystyle:=$
$\displaystyle\epsilon+I_{3}+I_{4}.$
Since
$\displaystyle\lim_{w\rightarrow\infty}2w\left(f(xe^{1/2w})-f(x)\right)=(\theta f)(x),$ we deduce that $I_{3}\rightarrow 0$ as $w\rightarrow\infty.$ Since
$\theta f$ is log-uniformly continuous, we have
$|\theta f(x)-\theta f(xe^{1/2w})|<\epsilon,$
whenever $|\log x-\log(xe^{1/2w})|=\frac{1}{2w}<\delta.$ Therefore, for sufficiently large $w,$ we obtain
$\Bigg{|}w\left[(S_{w}^{\chi}f)(xe^{1/2w})-f(x)\right]-(2m_{1}^{\chi}+1)\frac{(\theta f)(x)}{2}\Bigg{|}<(2+|m_{1}^{\chi}|)\,\epsilon.$
Hence we get the desired result. ∎
In what follows, we shall define the class of log-Hölderian functions as
$L_{\alpha}:=\\{f:I\rightarrow\mathbb{R}\ :\exists\ \mbox{K $>$ 0 \ s.t.}\ \
|f(x)-f(y)|\leq K|\log x-\log y|^{\alpha};\hskip 3.1298ptx,y\in I\\},$
with $I\subseteq\mathbb{R}^{+}$ and $0<\alpha\leq 1.$ Now we prove the
following direct approximation result.
###### Theorem 2.
Let $\chi$ be a kernel function and $f\in L_{\alpha}.$ Then the following
holds
$\|I_{w}^{\chi}f-f\|_{\infty}=\mathcal{O}(w^{-\alpha})~{}~{}~{}\mbox{as}\ \
w\rightarrow\infty.$
###### Proof.
Consider $|(I_{w}^{\chi}f)(x)-f(x)|$
$\displaystyle=$
$\displaystyle\Big{|}\sum_{k=-\infty}^{\infty}\chi(e^{-k}x^{w})w\int_{k/w}^{{k+1}/w}[f(e^{u})-f(x)]\
du\Big{|}$ $\displaystyle\leq$
$\displaystyle\sum_{k=-\infty}^{\infty}|\chi(e^{-k}x^{w})|w\int_{k/w}^{{k+1}/w}|f(e^{u})-f(x)|\
du.$
Since $f\in L_{\alpha},$ we obtain
$\displaystyle|(I_{w}^{\chi}f)(x)-f(x)|$ $\displaystyle\leq$ $\displaystyle K\sum_{k=-\infty}^{\infty}|\chi(e^{-k}x^{w})|w\int_{k/w}^{{k+1}/w}|u-\log x|^{\alpha}du$ $\displaystyle\leq$ $\displaystyle\frac{Kw}{(\alpha+1)}\sum_{k=-\infty}^{\infty}|\chi(e^{-k}x^{w})|\left[\ \Big{|}\frac{k+1}{w}-\log x\Big{|}^{\alpha+1}-\ \Big{|}\frac{k}{w}-\log x\Big{|}^{\alpha+1}\right].$
Since $|a+b|^{\alpha+1}\leq 2^{\alpha}\left(|a|^{\alpha+1}+|b|^{\alpha+1}\right)$ for $\alpha>0,$ we can write
$|(I_{w}^{\chi}f)(x)-f(x)|$
$\displaystyle\leq$ $\displaystyle\frac{Kw}{(\alpha+1)}\sum_{k=-\infty}^{\infty}|\chi(e^{-k}x^{w})|\left[\ 2^{\alpha}\left(\Big{|}\frac{k}{w}-\log x\Big{|}^{\alpha+1}+\frac{1}{w^{\alpha+1}}\right)-\ \Big{|}\frac{k}{w}-\log x\Big{|}^{\alpha+1}\right]$ $\displaystyle\leq$ $\displaystyle\frac{Kw}{(\alpha+1)}\sum_{k=-\infty}^{\infty}|\chi(e^{-k}x^{w})|\ \left[(2^{\alpha}-1)\Big{|}\frac{k}{w}-\log x\Big{|}^{\alpha+1}+\frac{2^{\alpha}}{w^{\alpha+1}}\right]$ $\displaystyle\leq$ $\displaystyle\frac{Kw^{-\alpha}}{(\alpha+1)}\left((2^{\alpha}-1)M_{\alpha+1}(\chi)+2^{\alpha}M_{0}(\chi)\right),$
where $K$ is the constant appearing in the definition of $L_{\alpha}.$
This completes the proof. ∎
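The order $\mathcal{O}(w^{-\alpha})$ in Theorem 2 can also be observed empirically. The sketch below estimates $\sup_{x}|(I_{w}^{\chi}f)(x)-f(x)|$ over a finite log-grid for a bounded log-Hölderian test function with exponent $\alpha=1/2$ and the hat kernel; the grid, kernel, truncation and quadrature are assumptions of the experiment, and the last printed column should remain roughly bounded if the error decays like $w^{-1/2}$.

```python
import numpy as np

def chi_log(v):
    """Hat kernel via its log-argument: chi(e^{-k} x^w) = chi_log(w*log x - k)."""
    return np.maximum(0.0, 1.0 - np.abs(v))

def I_w(f, x, w, m=64):
    k0 = int(np.floor(w * np.log(x)))
    total = 0.0
    for k in range(k0 - 2, k0 + 3):               # only |k - w*log x| < 1 contributes
        weight = chi_log(w * np.log(x) - k)
        if weight == 0.0:
            continue
        u = (k + (np.arange(m) + 0.5) / m) / w    # midpoints of [k/w, (k+1)/w]
        total += weight * np.mean(f(np.exp(u)))
    return total

if __name__ == "__main__":
    alpha = 0.5
    f = lambda x: np.minimum(np.abs(np.log(x)), 1.0) ** alpha   # bounded, log-Hoelderian
    xs = np.exp(np.linspace(-2.0, 2.0, 81))
    for w in (4, 16, 64, 256):
        err = max(abs(I_w(f, x, w) - f(x)) for x in xs)
        print(w, err, err * w ** alpha)
```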
We derive the saturation order for the Kantorovich exponential sampling series
as follows.
###### Theorem 3.
Let $\chi$ be a kernel function such that $m_{1}^{\chi}\neq-1/2$ and let
$f\in\mathcal{C}(\mathbb{R}^{+})$ be such that
$\|I_{w}^{\chi}f-f\|_{\infty}=o(w^{-1})~{}~{}~{}~{}as~{}~{}~{}w\to+\infty.$
Then $f$ is constant on $\mathbb{R}^{+}.$
###### Proof.
Let $\phi\in C_{c}^{\infty}(\mathbb{R}^{+})$ be fixed. We define
$G_{f}(\phi):=w\int_{\mathbb{R}^{+}}\big{[}(I_{w}^{\chi}f)(x)-f(x)\big{]}\phi(x)\frac{dx}{x},\
\ \ \ w>0.$ (3.2)
Let $[a,b]\subset\mathbb{R}^{+}$ be such that the compact support
of $\phi$ is properly contained in $[a,b],$ i.e. $supp(\phi)\subset[a,b].$
This gives $\phi(a)=0=\phi(b).$ So, the integral (3.2) reduces to
$G_{f}(\phi)=w\int_{a}^{b}\big{[}(I_{w}^{\chi}f)(x)-f(x)\big{]}\phi(x)\frac{dx}{x},\
\ \ \ w>0.$
Using Lemma 3, we obtain
$\displaystyle G_{f}(\phi)$ $\displaystyle=w\int_{a}^{b}\big{[}(\theta S_{w}^{\bar{\chi}}F)(xe^{1/2w})-f(x)\big{]}\phi(x)\frac{dx}{x}$
$\displaystyle=w\int_{a}^{b}\big{[}(\theta S_{w}^{\bar{\chi}}F)(xe^{1/2w})-(\theta F)(x)\big{]}\phi(x)\frac{dx}{x},$
where $F$ is a Mellin anti-derivative of $f,$ i.e. $\displaystyle F(x)=\int_{1}^{x}f(t)\frac{dt}{t},~{}~{}x\in\mathbb{R}^{+}.$ Using integration
by parts in the Mellin-sense, we obtain
$\displaystyle G_{f}(\phi)=w\left\\{\phi(x)\big{[}(S_{w}^{\bar{\chi}}F)(xe^{1/2w})-F(x)\big{]}\right\\}_{a}^{b}$
$\displaystyle-w\int_{a}^{b}\big{[}(S_{w}^{\bar{\chi}}F)(xe^{1/2w})-F(x)\big{]}(\theta\phi)(x)\frac{dx}{x}.$
Since $\phi(a)=0=\phi(b),$ we have
$G_{f}(\phi)=-w\int_{a}^{b}\big{[}(S_{w}^{\bar{\chi}}F)(xe^{1/2w})-F(x)\big{]}(\theta\phi)(x)\
\frac{dx}{x}.$
Letting $w\rightarrow\infty$ on both sides of the above equation and using Corollary 1 (applied to $F$ and the kernel $\bar{\chi},$ for which $m_{1}^{\bar{\chi}}=m_{1}^{\chi}$ by Lemma 1) together with Vitali’s convergence theorem, we get
$\lim\limits_{w\to\infty}G_{f}(\phi)=-(m_{1}^{\chi}+1/2)\int_{a}^{b}(\theta\phi)(x)(\theta
F)(x)\ \frac{dx}{x}.$ (3.3)
On the other hand, since $\|I_{w}^{\chi}f-f\|_{\infty}=o(w^{-1})$ as $w\to\infty,$ we have $G_{f}(\phi)\rightarrow 0$ as $w\rightarrow\infty,$ and hence
$0=-(m_{1}^{\chi}+1/2)\int_{a}^{b}(\theta\phi)(x)(\theta F)(x)\frac{dx}{x}.$
As $(\theta F)(x)=f(x),$ we have
$0=-(m_{1}^{\chi}+1/2)\int_{a}^{b}(\theta\phi)(x)f(x)\frac{dx}{x}.$
Since $m_{1}^{\chi}\neq-1/2$ and $\phi\in C_{c}^{\infty}(\mathbb{R}^{+})$ is an arbitrary function, the distributional Mellin derivative of $f$ vanishes, and hence $f$ is
constant on $\mathbb{R}^{+}.$ In other words, the highest order of convergence that $(I_{w}^{\chi}f)$ can achieve for a non-constant $f\in\mathcal{C}(\mathbb{R}^{+})$ is one, provided $m_{1}^{\chi}\neq-1/2.$ Thus, the result is proved. ∎
For $f\in\mathcal{C}(\mathbb{R}^{+}),$ the logarithmic modulus of continuity
is defined as
$\omega(f,\delta):=\sup\\{|f(u)-f(v)|:\ u,v\in\mathbb{R}^{+},\ |\log u-\log v|\leq\delta\\},\ \ \ \delta>0.$
For every $\delta>0$ and $u,v\in\mathbb{R}^{+},$ $\omega$ has the following
properties:
* (a)
$\omega(f,\delta)\rightarrow 0$ as $\delta\rightarrow 0.$
* (b)
$\displaystyle|f(u)-f(v)|\leq\omega(f,\delta)\left(1+\frac{|\log u-\log
v|}{\delta}\right).$
For more details on modulus of continuity, we refer to [38, 11].
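A brute-force numerical approximation of $\omega(f,\delta)$ on a finite log-grid may help fix ideas; the test function, grid range and resolution below are assumptions of the demo, and the printout illustrates property (a).

```python
import numpy as np

def log_modulus(f, delta, vmin=-3.0, vmax=3.0, n=2001):
    """Approximate omega(f, delta): max of |f(u)-f(v)| over grid pairs with |log u - log v| <= delta."""
    v = np.linspace(vmin, vmax, n)
    fv = f(np.exp(v))
    h = v[1] - v[0]
    best = 0.0
    for s in range(1, int(np.floor(delta / h)) + 1):
        best = max(best, float(np.max(np.abs(fv[s:] - fv[:-s]))))
    return best

if __name__ == "__main__":
    f = lambda x: np.minimum(np.abs(np.log(x)), 1.0) ** 0.5
    for delta in (0.4, 0.2, 0.1, 0.05):
        print(delta, log_modulus(f, delta))   # decreases to 0 as delta -> 0 (property (a))
```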
Now we prove the proposed inverse approximation result for the Kantorovich
exponential sampling series $I_{w}^{\chi}.$
###### Theorem 4.
Let $\chi$ be a differentiable kernel with $M_{1}(\theta\chi)<+\infty,$ and let $f\in\mathcal{C}(\mathbb{R}^{+})$ be such that
$\|I_{w}^{\chi}f-f\|_{\infty}=\mathcal{O}(w^{-\alpha})~{}~{}~{}as~{}~{}~{}w\to\infty$
with $0<\alpha\leq 1.$ Then $f$ belongs to $L_{\alpha}.$
###### Proof.
Since
$\|I_{w}^{\chi}f-f\|_{\infty}=\mathcal{O}(w^{-\alpha})~{}~{}~{}as~{}~{}~{}w\to\infty,$
there exist $K$ and $w_{0}$ such that
$\|I_{w}^{\chi}f-f\|_{\infty}\leq Kw^{-\alpha},\ \ \ \mbox{for every}\ \
w>w_{0}.$
Now for every fixed $x,y\in\mathbb{R}^{+}$ and for $x\neq y,$ we can write
$\displaystyle|f(x)-f(y)|$
$\displaystyle=|f(x)-(I_{w}^{\chi}f)(x)+(I_{w}^{\chi}f)(x)-f(y)+(I_{w}^{\chi}f)(y)-(I_{w}^{\chi}f)(y)|$
$\displaystyle\leq|f(y)-(I_{w}^{\chi}f)(y)|+|f(x)-(I_{w}^{\chi}f)(x)|+|(I_{w}^{\chi}f)(x)-(I_{w}^{\chi}f)(y)|$
$\displaystyle\leq
Kw^{-\alpha}+|(I_{w}^{\chi}f)(x)-(I_{w}^{\chi}f)(y)|+Kw^{-\alpha}$
$\displaystyle\leq 2Kw^{-\alpha}+|(I_{w}^{\chi}f)(x)-(I_{w}^{\chi}f)(y)|$
$\displaystyle\leq 2Kw^{-\alpha}+\Bigg{|}\int_{y}^{x}(\theta
I_{w}^{\chi}f)(t)\frac{dt}{t}\Bigg{|}=:I_{1}+I_{2}.$
Now we estimate $I_{2}.$ It is easy to see that
$(\theta
I_{w}^{\chi}f)(x)=\sum_{k=-\infty}^{\infty}(\theta{\chi})(e^{-k}x^{w})\
w^{2}\int_{k/w}^{{k+1}/w}f(e^{u})\ du.$ (3.4)
Let $\textbf{1}$ represent the constant function such that $\textbf{1}(x)=1,\ \forall
x\in\mathbb{R}^{+}.$ We can easily observe that
$(I_{w}^{\chi}\textbf{1})(x)=1,\ \forall x\in\mathbb{R}^{+}.$ This gives
$(\theta
I_{w}^{\chi}\textbf{1})(x)=w\sum_{k=-\infty}^{\infty}\chi^{{}^{\prime}}(e^{-k}x^{w})\
(e^{-k}x^{w})=0.$
Again we have
$\displaystyle|(\theta I_{w}^{\chi}f)(x)|$ $\displaystyle=$
$\displaystyle\big{|}(\theta I_{w}^{\chi}f)(x)-f(x)\ (\theta
I_{w}^{\chi}\textbf{1})(x)\big{|}$ $\displaystyle=$
$\displaystyle\Big{|}w\sum_{k=-\infty}^{\infty}(\theta\chi)(e^{-k}x^{w})\
w\int_{k/w}^{{k+1}/w}f(e^{u})\ du-
wf(x)\sum_{k=-\infty}^{\infty}(\theta\chi)(e^{-k}x^{w})\big{|}$
$\displaystyle=$ $\displaystyle
w^{2}\Big{|}\sum_{k=-\infty}^{\infty}(\theta\chi)(e^{-k}x^{w})\int_{k/w}^{{k+1}/w}(f(e^{u})-f(x))\
du\Big{|}$ $\displaystyle\leq$ $\displaystyle
w^{2}\sum_{k=-\infty}^{\infty}|(\theta\chi)(e^{-k}x^{w})|\int_{k/w}^{{k+1}/w}|f(e^{u})-f(x)|\
du.$
Using property (b) of $\omega$ with $\delta=1/w,$ we get
$\displaystyle|(\theta I_{w}^{\chi}f)(x)|$ $\displaystyle\leq$ $\displaystyle
w^{2}\sum_{k=-\infty}^{\infty}|(\theta\chi)(e^{-k}x^{w})|\int_{k/w}^{{k+1}/w}\omega(f,1/w)\
(1+w\ |u-\log x|)\ du$ $\displaystyle\leq$ $\displaystyle w^{3}\
\omega(f,1/w)\
\sum_{k=-\infty}^{\infty}|(\theta\chi)(e^{-k}x^{w})|\int_{k/w}^{{k+1}/w}|u-\log
x|\ du$ $\displaystyle+w\ \omega(f,1/w)\
\sum_{k=-\infty}^{\infty}|(\theta\chi)(e^{-k}x^{w})|$ $\displaystyle\leq$
$\displaystyle w\ \omega(f,1/w)\ M_{0}(\theta\chi)+\frac{w}{2}\ \omega(f,1/w)\
(M_{0}(\theta\chi)+2M_{1}(\theta\chi))$ $\displaystyle\leq$
$\displaystyle\frac{w}{2}\
\omega(f,1/w)\left(3M_{0}(\theta\chi)+2M_{1}(\theta\chi)\right).$
This gives
$\displaystyle|f(x)-f(y)|$ $\displaystyle\leq$ $\displaystyle
2Kw^{-\alpha}+\frac{w}{2}\
\omega(f,1/w)\int_{y}^{x}\left(3M_{0}(\theta\chi)+2M_{1}(\theta\chi)\right)\frac{dt}{t}$
$\displaystyle\leq$ $\displaystyle 2Kw^{-\alpha}+\frac{w}{2}\
\omega(f,1/w)\left(3M_{0}(\theta\chi)+2M_{1}(\theta\chi)\right)|\log x-\log
y|.$
Since $M_{0}(\theta\chi)$ and $M_{1}(\theta\chi)$ are finite, we define
$N:=\mbox{max}\\{M_{0}(\theta\chi),M_{1}(\theta\chi)\\}$ to obtain
$\omega(f,\delta)\leq 2Kw^{-\alpha}+w\delta\
\omega(f,1/w)\left(\frac{5N}{2}\right).$
For any fixed $\delta>0,$ there exists sufficiently large $w$ such that
$\frac{1}{w}<\delta.$ Now, in view of monotonicity of $\omega,$ we write
$\omega(f,1/w)<\omega(f,\delta).$
This gives $\omega(f,\delta)\leq H\ w^{-\alpha},$ where $\displaystyle
H:=\frac{4K}{5-2N}.$ Hence, we conclude that $f\in L_{\alpha},\ 0<\alpha\leq
1.$ Thus, we obtain the desired result. ∎
## 4\. Examples of kernel
In this section, we discuss some examples of kernel functions satisfying the
assumptions of our theory.
### 4.1. Mellin B-spline kernel
We start with the Mellin B-spline kernel of order $n\in\mathbb{N},$ which is defined by (see [14])
$\overline{{B}_{n}}(t):=\frac{1}{(n-1)!}\sum_{j=0}^{n}(-1)^{j}{n\choose
j}\bigg{(}\frac{n}{2}+\log t-j\bigg{)}_{+}^{n-1},\,\,\,\ t\in\mathbb{R}^{+},$
where $(x)_{+}:=\max\\{x,0\\},$ $x\in\mathbb{R}.$ We observe that for
$t\in\mathbb{R}^{+},$ $B_{n}(\log t)=\overline{{B}_{n}}(t),$ where
${B}_{n}(x):=\frac{1}{(n-1)!}\sum_{j=0}^{n}(-1)^{j}{n\choose
j}\bigg{(}\frac{n}{2}+x-j\bigg{)}_{+}^{n-1},\,\,\,\ x\in\mathbb{R},$
denotes the classical $B$-spline of order $n\in\mathbb{N}.$ Equivalently, $B_{n}(x)=\overline{{B}_{n}}(e^{x}),$ $x\in\mathbb{R}.$ Now using the
definition of the Mellin transform, we have
$[\overline{{B}_{n}}]^{\wedge}_{M}(is)=\int_{0}^{\infty}\overline{{B}_{n}}(t)\
t^{is}\ \frac{dt}{t},\ \ \ s\in\mathbb{R}.$
Substituting $u=\log t,$ we easily obtain
$\displaystyle[\overline{{B}_{n}}]^{\wedge}_{M}(is)$ $\displaystyle=$
$\displaystyle\int_{-\infty}^{\infty}\overline{{B}_{n}}(e^{u})\ e^{isu}\
du=\int_{-\infty}^{\infty}{B}_{n}(u)\ e^{isu}\ du=\widehat{B_{n}}(-s),$
where
$\widehat{f}(u):=\displaystyle\int_{-\infty}^{\infty}f(x)e^{-iux}dx,u\in\mathbb{R}$
denotes the Fourier transform of the function $f.$ Now, let
$f(x):=\displaystyle\sum_{k=-\infty}^{\infty}\overline{{B}_{n}}(e^{k}x).$ This
$f$ is a recurrent function with fundamental interval $[1,e].$ That is
$f(ex)=f(x),$ $\forall x\in\mathbb{R}^{+}.$ The Mellin-Fourier cofficient
$m_{k}(f)$ of $f$ is given by
$\displaystyle m_{k}(f):=\displaystyle\int_{1}^{e}f(x)x^{2k\pi
i}\frac{dx}{x}=\displaystyle\int_{1}^{e}\sum_{j=-\infty}^{\infty}\overline{{B}_{n}}(e^{j}x)x^{2k\pi
i}\frac{dx}{x}=\sum_{j=-\infty}^{\infty}\displaystyle\int_{1}^{e}\overline{{B}_{n}}(e^{j}x)x^{2k\pi
i}\frac{dx}{x}.$
Substituting $u=e^{j}x,$ we obtain
$\displaystyle
m_{k}(f)=\sum_{j=-\infty}^{\infty}\displaystyle\int_{e^{j}}^{e^{j+1}}\overline{{B}_{n}}(u){\left(\frac{u}{e^{j}}\right)}^{2k\pi
i}\frac{du}{u}=\displaystyle\int_{0}^{\infty}\overline{{B}_{n}}(u){u}^{2k\pi
i}\frac{du}{u}=[\overline{{B}_{n}}]^{\wedge}_{M}(2k\pi i).$
Therefore, we get the following Mellin Poisson summation formula
$\displaystyle\displaystyle\sum_{k=-\infty}^{\infty}\overline{{B}_{n}}(e^{k}x)=\sum_{k=-\infty}^{\infty}[\overline{{B}_{n}}]^{\wedge}_{M}(2k\pi
i)x^{-2k\pi i}.$ (4.1)
It is easy to see that
$\displaystyle[\overline{{B}_{n}}]^{\wedge}_{M}(c+is)=\bigg{(}\frac{\sin(\frac{s}{2})}{(\frac{s}{2})}\Bigg{)}^{n},\
\ \hskip 14.22636pts\neq 0,c=0.$ (4.2)
Therefore, we have
$[\overline{{B}_{n}}]^{\wedge}_{M}(2k\pi i)=\widehat{B_{n}}(-2k\pi
i)=\begin{cases}{1,}&\quad\text{if}\ \ {k=0}\\\ {0,}&\quad\ \
{\text{otherwise.}}\\\ \end{cases}$ (4.3)
Using the Mellin Poisson summation formula (4.1), we get
$\displaystyle\sum_{k=-\infty}^{\infty}\overline{{B}_{n}}(e^{k}x)=1,\ \
\forall x\in\mathbb{R}^{+}.$
Hence $\overline{{B}_{n}}$ satisfies the condition $(\chi_{1}).$
Now we establish the condition $(\chi_{2}).$ Again using the Mellin transform
and by differentiating under the sign of the integral, we get
$\frac{d}{ds}\left([\overline{{B}_{n}}]^{\wedge}_{M}(is)\right)=\frac{d}{ds}\left(\displaystyle\int_{0}^{\infty}\overline{{B}_{n}}(t){t}^{is}\frac{dt}{t}\right)=\int_{0}^{\infty}\overline{{B}_{n}}(t){t}^{is}(i\log
t)\frac{dt}{t}.$
Similarly, we can easily obtain
$\frac{d^{j}}{ds^{j}}\left([\overline{{B}_{n}}]^{\wedge}_{M}(is)\right)=\int_{0}^{\infty}\overline{{B}_{n}}(t){t}^{is}(i\log
t)^{j}\frac{dt}{t}=(i)^{j}[\overline{f_{j}}]^{\wedge}_{M}(is),$
where $f_{j}(t)=\overline{{B}_{n}}(t)(\log t)^{j}.$ Thus, we get
$\displaystyle\displaystyle\sum_{k=-\infty}^{\infty}\overline{{B}_{n}}(e^{-k}x)(k-\log
x)^{j}=\sum_{k=-\infty}^{\infty}(-i)^{j}\frac{d^{j}}{ds^{j}}[\overline{{B}_{n}}]^{\wedge}_{M}(2k\pi
i)x^{-2k\pi i}.$ (4.4)
Again from (4.2), we obtain
$\displaystyle\dfrac{d}{ds}\left([\overline{{B}_{n}}]^{\wedge}_{M}(is)\right)=n\bigg{(}\frac{\sin(\frac{s}{2})}{(\frac{s}{2})}\Bigg{)}^{n-1}\left(\dfrac{s\cos(s/2)-2\sin(s/2)}{s^{2}}\right),\
\ \hskip 14.22636pts\neq 0,$
which in-turn gives
$\displaystyle\dfrac{d}{ds}\left([\overline{{B}_{n}}]^{\wedge}_{M}(is)\right)(2k\pi
i)=0,\ \ k\in\mathbb{Z}.$
Therefore, we obtain $m_{1}(\overline{{B}_{n}},x)=0,\ \forall n\in\mathbb{N},$
by using (4.4). Hence the condition $(\chi_{2})$ is also satisfied.
As $\overline{{B}_{n}}$ is compactly supported, there exists $\nu>0$ such that
$supp(\overline{{B}_{n}})\subseteq[e^{-\nu},e^{\nu}].$ Thus, we get
$|\\{k:e^{-\nu}\leq e^{-k}u\leq e^{\nu}\\}|\leq 2[\nu]+1,$ $\forall
u\in\mathbb{R}^{+},$ where $[.]$ denotes the integer part. Hence we obtain
$\sum_{k=-\infty}^{\infty}|\overline{{B}_{n}}(e^{-k}u)|\ |k-\log
u|^{\beta}\leq(2[\nu]+1)\left(\sup_{u\in\mathbb{R}^{+}}\overline{{B}_{n}}(u)\right)\nu^{\beta}<\infty.$
This establishes the condition $(\chi_{3}).$
Again, we use the fact that $supp(\overline{{B}_{n}})\subseteq[e^{-\nu},e^{\nu}]$ for some $\nu>0.$
Choosing $w\gamma>\nu,$ we get $\displaystyle\sum_{|k-w\log
x|>w\gamma}|\overline{{B}_{n}}(e^{-k}x^{w})|\ |k-w\log x|=0,$ $\forall
x\in\mathbb{R}^{+}.$ Therefore, the condition $(\chi_{4})$ is satisfied. To
verify the conditions which are used in the hypothesis of the theorem, we
consider the third order Mellin $B$-spline kernel given by
$\overline{{B}_{3}}(x)=\begin{cases}{\frac{1}{2}\left(\frac{3}{2}+\log x\right)^{2},}&\quad\text{}\ \ {\text{$e^{-3/2}<x<e^{-1/2},$}}\\\ {\frac{3}{4}-\log^{2}x,}&\quad\text{}\ \ {e^{-1/2}<x<e^{1/2},}\\\ {\frac{1}{2}\left(\frac{3}{2}-\log x\right)^{2},}&\quad\text{}\ \ {\text{$e^{1/2}<x<e^{3/2},$}}\\\ {0,}&\quad\text{}\ \ {\text{otherwise}.}\\\ \end{cases}$
The Mellin derivative of $\overline{{B}_{3}}(x)$ is given by
$(\theta\overline{{B}_{3}})(x)=\begin{cases}{\frac{3}{2}+\log x,}&\quad\text{}\ \ {\text{$e^{-3/2}<x<e^{-1/2},$}}\\\ {-2\log x,}&\quad\text{}\ \ {e^{-1/2}<x<e^{1/2},}\\\ {\log x-\frac{3}{2},}&\quad\text{}\ \ {\text{$e^{1/2}<x<e^{3/2},$}}\\\ {0,}&\quad\text{}\ \ \ {\text{otherwise}.}\\\ \end{cases}$
Evidently,
$supp(\theta\overline{{B}_{3}})\subseteq\left(e^{-3/2},e^{3/2}\right).$ Hence,
$M_{\beta}(\theta\overline{{B}_{3}})$ is finite for every $\beta\geq 0.$ So it satisfies the
assumption of Theorem 4. Moreover, we have
$m_{1}^{\overline{{B}_{3}}}:=m_{1}(\overline{{B}_{3}},x)=0,\ \forall
x\in\mathbb{R}^{+}.$ Thus, $\overline{{B}_{3}}(x)$ also satisfies the
assumption of Theorem 3.
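The properties of $\overline{{B}_{3}}$ used above are easy to confirm numerically. The sketch below implements $B_{3}$ (so that $\overline{{B}_{3}}(x)=B_{3}(\log x)$) and checks the partition of unity $(\chi_{1})$ and the vanishing first moment $(\chi_{2})$ at a few sample points; the sample points and the truncation range are assumptions of the demo.

```python
import numpy as np

def B3(v):
    """Classical third-order B-spline; the Mellin B-spline is B3_bar(x) = B3(log x)."""
    v = np.asarray(v, dtype=float)
    av = np.abs(v)
    out = np.where(av <= 0.5, 0.75 - v ** 2, 0.0)
    return np.where((av > 0.5) & (av < 1.5), 0.5 * (1.5 - av) ** 2, out)

if __name__ == "__main__":
    ks = np.arange(-10, 11)
    for x in (0.8, 1.0, 2.5, 7.0):
        vals = B3(np.log(x) - ks)                    # = B3_bar(e^{-k} x)
        print(x, float(vals.sum()), float(np.sum(vals * (ks - np.log(x)))))
        # second column ~ 1 (condition (chi_1)); third column ~ 0 (m_1 = 0, condition (chi_2))
```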
### 4.2. Mellin Jackson kernel
Next we consider Mellin Jackson kernel. For $c\in\mathbb{R},\alpha\geq
1,n\in\mathbb{N}$ and $x\in\mathbb{R}^{+},$ the Mellin Jackson kernel is
defined by (see [14])
$\overline{J_{\alpha,n}}(x):=C_{\alpha,n}\
x^{-c}\textit{sinc}^{2n}\left(\frac{\log x}{2\alpha n\pi}\right),$
where $\displaystyle
C^{-1}_{\alpha,n}:=\int_{0}^{\infty}\textit{sinc}^{2n}\left(\frac{\log
x}{2\alpha n\pi}\right)\frac{dx}{x}$ and $sinc(u)=\begin{cases}{\dfrac{\sin\pi
u}{\pi u},}&\quad\text{}\ \ {u\neq 0}\\\ {1,}&\quad\text{}\ \ {u=0}.\\\
\end{cases}$
It is evident that $\overline{J_{\alpha,n}}(x)=x^{-c}{J_{\alpha,n}}(\log x),$
where ${J_{\alpha,n}}$ represents the generalized Jackson kernel (see [8])
given by
${J_{\alpha,n}}(x):=c_{\alpha,n}\textit{sinc}^{2n}\left(\frac{x}{2\alpha
n\pi}\right),$
where $\displaystyle
c^{-1}_{\alpha,n}:=\int_{-\infty}^{\infty}\textit{sinc}^{2n}\left(\frac{x}{2\alpha
n\pi}\right)dx\ $ is a normalization constant. Now we have
$\displaystyle\|\overline{J_{\alpha,n}}\|_{X_{c}}$ $\displaystyle=$
$\displaystyle\int_{0}^{\infty}|\overline{J_{\alpha,n}}(x)|x^{c}\
\frac{dx}{x}=\int_{0}^{\infty}|{J_{\alpha,n}}(\log x)|\frac{dx}{x}=1.$
Hence $\overline{J_{\alpha,n}}(x)\in X_{c}$ and therefore its Mellin transform
is well defined. Now, we obtain
$\displaystyle[\overline{J_{\alpha,n}}]^{\wedge}_{M}(c+iv)=\int_{0}^{\infty}\overline{J_{\alpha,n}}(x)x^{c+iv}\frac{dx}{x}=C_{\alpha,n}\int_{0}^{\infty}x^{iv}\frac{\textit{sin}^{2n}\left(\frac{\log
x}{2\alpha n}\right)}{\left(\frac{\log x}{2\alpha
n}\right)^{2n}}\frac{dx}{x}.$
On substituting $u=\log x,$ we obtain
$\displaystyle[\overline{J_{\alpha,n}}]^{\wedge}_{M}(c+iv)$ $\displaystyle=$
$\displaystyle C_{\alpha,n}(n\alpha)^{2n}\int_{-\infty}^{\infty}e^{iuv}\left(\widehat{\chi}_{[-\frac{1}{2n\alpha},\frac{1}{2n\alpha}]}(u)\right)^{2n}du$
$\displaystyle=$
$\displaystyle 2\pi\,C_{\alpha,n}(n\alpha)^{2n}\left({\chi}_{[-\frac{1}{2n\alpha},\frac{1}{2n\alpha}]}\ast{\chi}_{[-\frac{1}{2n\alpha},\frac{1}{2n\alpha}]}\ast...\ast{\chi}_{[-\frac{1}{2n\alpha},\frac{1}{2n\alpha}]}\right)(v),\
\ (2n\ \ times),$
where $\ast$ denotes the convolution. From the above relation, we see that
$supp[\overline{J_{\alpha,n}}]^{\wedge}_{M}\subseteq[-\frac{1}{\alpha},\frac{1}{\alpha}],$
hence we get $[\overline{J_{\alpha,n}}]^{\wedge}_{M}(c+iv)=0,$ for
$|v|>\frac{1}{\alpha}.$ Thus $\overline{J_{\alpha,n}}$ is Mellin band-limited.
Further, we observe that $[\overline{J_{\alpha,n}}]^{\wedge}_{M}(0)=1$ and
$[\overline{J_{\alpha,n}}]^{\wedge}_{M}(2k\pi i)=0,$ for $k\neq 0.$ Hence
using the Mellin-Poisson summation formula, we get
$\displaystyle\displaystyle\sum_{k=-\infty}^{\infty}\overline{J_{\alpha,n}}(e^{k}x)=\sum_{k=-\infty}^{\infty}[\overline{J_{\alpha,n}}]^{\wedge}_{M}(2k\pi
i)x^{-2k\pi i}=1.$
Hence $(\chi_{1})$ is satisfied.
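The identity just established can be reproduced numerically. The sketch below approximates the normalization constant $C_{\alpha,n}$ by quadrature (for $c=0$) and evaluates a truncated version of $\sum_{k}\overline{J_{\alpha,n}}(e^{-k}x)$; the parameter values, quadrature grid and truncation are assumptions of the demo.

```python
import numpy as np

def jackson_constant(alpha, n, V=400.0, m=400000):
    """Approximate C_{alpha,n}: its reciprocal is the integral over R of sinc^{2n}(v/(2*alpha*n*pi)) dv."""
    dv = 2.0 * V / m
    v = -V + (np.arange(m) + 0.5) * dv
    return 1.0 / float(np.sum(np.sinc(v / (2.0 * alpha * n * np.pi)) ** (2 * n)) * dv)

def J_bar(x, alpha, n, C):
    """Mellin Jackson kernel with c = 0; numpy's sinc(u) is sin(pi*u)/(pi*u)."""
    return C * np.sinc(np.log(x) / (2.0 * alpha * n * np.pi)) ** (2 * n)

if __name__ == "__main__":
    alpha, n = 1.0, 2
    C = jackson_constant(alpha, n)
    ks = np.arange(-200, 201).astype(float)
    for x in (0.9, 1.7, 4.2):
        s = float(np.sum(J_bar(np.exp(-ks) * x, alpha, n, C)))
        print(x, s)   # close to 1, in line with condition (chi_1)
```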
In view of the Mellin transform, we obtain
$\displaystyle\frac{d}{dv}\left([\overline{J_{\alpha,n}}]^{\wedge}_{M}(0)\right)$
$\displaystyle=$
$\displaystyle\int_{0}^{\infty}\overline{J_{\alpha,n}}(u)(i\log
u)\frac{du}{u}=iC_{\alpha,n}\int_{0}^{\infty}\frac{\textit{sin}^{2n}\left(\frac{\log
u}{2\alpha n}\right)}{\left(\frac{\log u}{2\alpha n}\right)^{2n}}\log
u\frac{du}{u}.$
Since the integrand is an odd function of $\log u,$ we obtain
$\dfrac{d}{dv}\left([\overline{J_{\alpha,n}}]^{\wedge}_{M}(0)\right)=0.$ Thus from the
Mellin-Poisson summation formula, we get $m_{1}(\overline{J_{\alpha,n}},u)=0.$
Therefore, the condition $(\chi_{2})$ is established. Further, since $m_{1}^{\overline{J_{\alpha,n}}}=0\neq-\dfrac{1}{2},$ the condition of Theorem 3 is fulfilled.
We now show that $(\chi_{4})$ is satisfied, i.e.
$\displaystyle\lim_{w\rightarrow\infty}\sum_{|k-w\log
x|>w\gamma}|\overline{J_{\alpha,n}}(e^{-k}x^{w})|\ |k-w\log x|=0$ uniformly on
$\mathbb{R}^{+}.$ Let $\epsilon>0$ and $n>1.$ Then, there exists
$N\in\mathbb{Z}$ such that
$\displaystyle\sum_{k>N}\frac{1}{k^{2n-1}}<\epsilon.$ For $w\gamma>N,$ we can
write
$\displaystyle\sum_{|k-w\log
x|>w\gamma}|\overline{J_{\alpha,n}}(e^{-k}x^{w})|\ |k-w\log x|$
$\displaystyle=$ $\displaystyle\left(\sum_{k-w\log x>w\gamma}+\sum_{k-w\log
x<-w\gamma}\right)|\overline{J_{\alpha,n}}(e^{-k}x^{w})|\ |k-w\log x|$
$\displaystyle:=$ $\displaystyle S_{1}+S_{2}.$
First we estimate $S_{1}.$ We have
$\displaystyle S_{1}$ $\displaystyle=$ $\displaystyle\sum_{k-w\log
x>w\gamma}C_{\alpha,n}\,\textit{sinc}^{2n}\left(\frac{|w\log x-k|}{2\alpha n\pi}\right)|k-w\log x|$ $\displaystyle\leq$ $\displaystyle\sum_{k-w\log x>w\gamma}C_{\alpha,n}(2\alpha n)^{2n}\frac{1}{|k-w\log x|^{2n-1}}$
$\displaystyle\leq$ $\displaystyle C_{\alpha,n}(2\alpha n)^{2n}\sum_{k>N}\frac{1}{k^{2n-1}}<C_{\alpha,n}(2\alpha n)^{2n}\epsilon.$
Similarly, for $S_{2},$ we obtain
$\displaystyle S_{2}$ $\displaystyle\leq$ $\displaystyle\sum_{k-w\log
x<-w\gamma}C_{\alpha,n}(2\alpha n)^{2n}\frac{1}{|k-w\log x|^{2n-1}}$
$\displaystyle\leq$ $\displaystyle C_{\alpha,n}(2\alpha n)^{2n}\sum_{k=1}^{\infty}\frac{1}{(N+k)^{2n-1}}$ $\displaystyle<$
$\displaystyle C_{\alpha,n}(2\alpha n)^{2n}\epsilon.$
On combining the estimates of $S_{1}$ and $S_{2},$ we get $S_{1}+S_{2}<2C_{\alpha,n}(2\alpha n)^{2n}\epsilon,$
for $n>1.$ Hence $(\chi_{4})$ is satisfied for $n>1.$
Now we establish the condition $(\chi_{3}).$ Let $u\in\mathbb{R}^{+}$ and
$\beta<2n-1,c=0.$ Then $\exists$ $k_{0}\in\mathbb{Z}$ such that $k_{0}\leq\log
u<k_{0}+1.$ This gives $|\log u-k|\geq|k-k_{0}|,$ if $k<k_{0}$ and $|\log
u-k|>|k-(k_{0}+1)|,$ if $k>k_{0}+1.$ Now, we have
$\displaystyle\sum_{k=-\infty}^{\infty}|{\overline{J_{\alpha,n}}}(e^{-k}u)|\
|k-\log u|^{\beta}$ $\displaystyle=$
$\displaystyle\left(\sum_{k<k_{0}}+\sum_{k=k_{0},k_{0}+1}+\sum_{k>k_{0}+1}\right)|\overline{J_{\alpha,n}}(e^{-k}u)|\
|k-\log u|^{\beta}$ $\displaystyle:=$ $\displaystyle
S_{1}^{{}^{\prime}}+S_{2}^{{}^{\prime}}+S_{3}{{}^{\prime}}.$
Using definition of $\overline{J_{\alpha,n}},$ $S_{1}^{{}^{\prime}}$ is
estimated by
$\displaystyle S_{1}^{{}^{\prime}}$ $\displaystyle=$ $\displaystyle
C_{\alpha,n}\sum_{k<k_{0}}\sin^{2n}\left(\frac{|k-\log
u|}{2n\alpha}\right)(2\alpha n)^{2n}\frac{1}{|\log u-k|^{2n-\beta}}$
$\displaystyle\leq$ $\displaystyle C_{\alpha,n}(2\alpha
n)^{2n}\sum_{k<k_{0}}\frac{1}{|\log u-k|^{2n-\beta}}$ $\displaystyle\leq$
$\displaystyle C_{\alpha,n}(2\alpha
n)^{2n}\sum_{k<k_{0}}\frac{1}{|k-k_{0}|^{2n-\beta}}\leq C_{\alpha,n}(2\alpha
n)^{2n}\sum_{k=1}^{\infty}\frac{1}{k^{2n-\beta}}.$
The above sum is finite if $\beta<2n-1.$ Similarly, for $S_{3}^{{}^{\prime}},$
we obtain
$\displaystyle S_{3}^{{}^{\prime}}$ $\displaystyle\leq$ $\displaystyle
C_{\alpha,n}(2\alpha n)^{2n}\sum_{k>k_{0}+1}\frac{1}{|\log u-k|^{2n-\beta}}$
$\displaystyle\leq$ $\displaystyle C_{\alpha,n}(2\alpha
n)^{2n}\sum_{k>k_{0}+1}\frac{1}{|k-(k_{0}+1)|^{2n-\beta}}\leq
C_{\alpha,n}(2\alpha n)^{2n}\sum_{k=1}^{\infty}\frac{1}{k^{2n-\beta}}.$
This gives $S_{3}^{{}^{\prime}}<\infty$ for $\beta<2n-1.$ Finally
$S_{2}^{{}^{\prime}}$ is estimated as
$\displaystyle S_{2}^{{}^{\prime}}$ $\displaystyle=$ $\displaystyle
C_{\alpha,n}\left(\frac{\sin(\frac{|\log u-k_{0}|}{2n\alpha})}{\frac{|\log
u-k_{0}|}{2n\alpha}}\right)^{2n}|\log
u-k_{0}|^{\beta}+C_{\alpha,n}\left(\frac{\sin(\frac{|\log
u-k_{0}-1|}{2n\alpha})}{\frac{|\log u-k_{0}-1|}{2n\alpha}}\right)^{2n}|\log
u-(k_{0}+1)|^{\beta}$ $\displaystyle\leq$ $\displaystyle
C_{\alpha,n}\left(\frac{\sin(\frac{|\log u-k_{0}|}{2n\alpha})}{\frac{|\log
u-k_{0}|}{2n\alpha}}\right)^{2n}+C_{\alpha,n}\left(\frac{\sin(\frac{|\log
u-k_{0}-1|}{2n\alpha})}{\frac{|\log u-k_{0}-1|}{2n\alpha}}\right)^{2n}$
$\displaystyle\leq$ $\displaystyle 2C_{\alpha,n}\sup_{0\leq u\leq
1}\left(\frac{\sin(\frac{u}{2n\alpha})}{\frac{u}{2n\alpha}}\right)^{2n}\leq
2C_{\alpha,n}.$
Combining the estimates
$S_{1}^{{}^{\prime}},S_{2}^{{}^{\prime}},S_{3}^{{}^{\prime}},$ we get
$\sup_{u\in\mathbb{R}^{+}}\sum_{k=-\infty}^{\infty}|{\overline{J_{\alpha,n}}}(e^{-k}u)|\
|k-\log u|^{\beta}<\infty$
and hence $(\chi_{3})$ is verified.
Putting $c=0$ in the Mellin-Jackson kernel $\overline{J_{\alpha,n}}$ and
taking the Mellin derivative of the kernel, we get, with $\rho:=\frac{1}{2\alpha n},$
$(\theta\overline{J_{\alpha,n}})(x)=\frac{C_{\alpha,n}}{\log^{2n}x}\left[\frac{2n}{\rho^{2n-1}}\sin^{2n-1}(\rho\log
x)\left(\cos(\rho\log x)-\frac{\sin(\rho\log x)}{\rho\log x}\right)\right].$
Proceeding along the lines of the estimate of
$S_{1}^{{}^{\prime}},S_{2}^{{}^{\prime}},S_{3}^{{}^{\prime}}$, it follows that
$M_{1}(\theta\overline{J_{\alpha,n}})<\infty,$ and thus
$\overline{J_{\alpha,n}}$ satisfies the assumptions of Theorem 4 for $n>1$ and
$\beta<2n-1$.
## 5\. Acknowledgement
S. Bajpeyi gratefully thanks the Indian Institute of Science Education and Research
(IISER) Thiruvananthapuram for the postdoctoral fellowship to carry out this
research work. A. Sathish Kumar acknowledges DST-SERB, India Research Grant
MTR/2021/000428 for the financial support and NFIG Grant, IIT Madras, Grant
No. RF/22-23/0984/MA/NFIG/009017.
## References
* [1] Acar, T., Draganov, B. R.: A characterization of the rate of the simultaneous approximation by generalized sampling operators and their Kantorovich modification. J. Math. Anal. Appl.530(2), 127740(2024).
* [2] Angamuthu, S.K., Bajpeyi, S.: Direct and inverse results for Kantorovich type exponential sampling series. Results Math. 75(3), 17pp. (2020).
* [3] Aral, A., Acar, T., Kursun, S.: Generalized Kantorovich forms of exponential sampling series. Anal. Math. Phys. 12 (2), 19 pp.(2022).
* [4] Bajpeyi, S., Kumar, A.S.: On approximation by Kantorovich exponential sampling operators. Numer. Funct. Anal. Optim. 42 (9), 1096-1113 (2021).
* [5] Bajpeyi, S., Kumar, A.S.: Approximation by exponential sampling type neural network operators. Anal. Math. Phys. 11(3), 108, 13 pp.(2021).
* [6] Bajpeyi, S.: Order of Approximation for Exponential Sampling Type Neural Network Operators. Results Math. 78, 99 (2023).
* [7] Balsamo, S., Mantellini, I.: On linear combinations of general exponential sampling series. Results Math. 74(4), (2019) 19 pp.
* [8] Bardaro, C., Vinti, G., Butzer, P.L., Stens, R.L.: Kantorovich-type generalized sampling series in the setting of Orlicz spaces. Sampl. Theory Signal Image Process. 6 (1), 29-52 (2007).
* [9] Bardaro, C., Mantellini, I.: A note on the Voronovskaja theorem for Mellin-Fejer convolution operators. Appl. Math. Lett. 24 (12), 2064-2067 (2011).
* [10] Bardaro, C., Butzer, P.L., Mantellini, I.: The exponential sampling theorem of signal analysis and the reproducing kernel formula in the Mellin transform setting. Sampl. Theory Signal Image Process. 13(1),35-66(2014).
* [11] Bardaro, C., Mantellini, I.: On Mellin convolution operators: a direct approach to the asymptotic formulae. Integral Transforms Spec. Funct. 25(3), 182-195(2014).
* [12] Bardaro, C., Butzer, P.L., Mantellini, I.: The Mellin-Parseval formula and its interconnections with the exponential sampling theorem of optical physics. Integral Transforms Spec. Funct. 27(1), 17-29(2016).
* [13] Bardaro, C., Butzer, P.L., Mantellini, I., Schmeisser, G.: On the Paley-Wiener theorem in the Mellin transform setting. J. Approx. Theory. 207, 60-75 (2016).
* [14] Bardaro, C., Faina, L., Mantellini, I.: A generalization of the exponential sampling series and its approximation properties. Math. Slovaca 67(6), 1481-1496(2017).
* [15] Bardaro, C., Mantellini, I., Schmeisser, G.: Exponential sampling series: convergence in Mellin-Lebesgue spaces. Results Math. 74, Art. 119, 20 pp. (2019).
* [16] Bardaro, C., Bevignani, G., Mantellini, I., Seracini, M.: Bivariate generalized exponential sampling series and applications to seismic waves. Constr. Math. Anal. 2(4), 153-167 (2019).
* [17] Bartoccini, B., Costarelli, D., Vinti, G.: Extension of saturation theorems for the sampling Kantorovich operators. Complex Anal. Oper. Theory. 13(3), 1161-1175(2019).
* [18] Benedetto, J.J., Ferreira, P. J. S. G.: Modern Sampling Theory: Mathematics and Applications, Birkhauser, Boston-Basel-Berlin, (2001).
* [19] Butzer, P.L.: A survey of the Whittaker-Shannon sampling theorem and some of its extensions. J. Math. Res. Exposition 3, 185-212(1983).
* [20] Bertero, M., Pike, E. R.: Exponential-sampling method for Laplace and other dilationally invariant transforms. II. Examples in photon correlation spectroscopy and Fraunhofer diffraction. Inverse Problems. 7, 21-41 (1991).
* [21] Butzer, P.L., Stens, R.L.: Sampling theory for not-necessarily band-limited functions: A historical overview. SIAM Rev. 34, 40-53(1992).
* [22] Butzer, P.L., Stens, R.L.: Linear prediction by samples from the past, In: Advanced Topics in Shannon Sampling and Interpolation Theory, Springer Texts Electrical Eng., Springer, New York, 157-183(1993).
* [23] Butzer, P.L., Jansche, S.: A direct approach to the Mellin transform, J. Fourier Anal. Appl. 3(4), 325-376(1997).
* [24] Butzer, P.L., Jansche, S.: The finite Mellin transform, Mellin-Fourier series, and the Mellin-Poisson summation formula. Proceedings of the Third International Conference on Functional Analysis and Approximation Theory, Vol. I (Acquafredda di Maratea, 1996). Rend. Circ. Mat. Palermo (2) Suppl. No. 52, Vol. I, 55-81(1998) .
* [25] Butzer, P.L., Jansche, S.: The exponential sampling theorem of signal analysis, Dedicated to Prof. C. Vinti (Italian) (Perugia, 1996). Atti Sem. Mat. Fis. Univ. Modena, Suppl. 46, 99-122(1998).
* [26] Butzer, P.L., Jansche, S.: Mellin-Fourier series and the classical Mellin transform, Approximation in mathematics (Memphis, TN, 1997). Comput. Math. Appl. 40(1), 49-62(2000).
* [27] Casasent, D.: Optical Data Processing. Topics in Applied Physics, vol 23. Springer, Berlin, Heidelberg, https://doi.org/10.1007/BFb0057988, Berlin, 241-282(1978).
* [28] Costarelli, D., Vinti, G.: An inverse result of approximation by sampling Kantorovich series. Proc. Edinb. Math. Soc. 62 (2), no. 1, 265-280(2019).
* [29] Costarelli, D., Vinti, G.: Inverse results of approximation and the saturation order for the sampling Kantorovich series. J. Approx. Theory. 242, 64-82(2019).
* [30] Costarelli, D., Vinti, G.: Saturation by the Fourier transform method for the sampling Kantorovich series based on bandlimited kernels. Anal. Math. Phys. 9, 2263-2280 (2019).
* [31] Costarelli, D., Vinti, G.: Approximation Properties of the Sampling Kantorovich Operators: Regularization, Saturation, Inverse Results and Favard Classes in $L^{p}$-Spaces. J Fourier Anal Appl. 28, 49 (2022).
* [32] Costarelli, D., Seracini, M., Travaglini, A., Vinti, G.: Alzheimer biomarkers esteem by sampling Kantorovich algorithm. Mathematical Methods in the Applied Sciences. 46, (2023).
* [33] Gori, F.: Sampling in optics, Advanced topics in Shannon sampling and interpolation theory. Springer Texts Electrical Engrg., Springer, New York, 37-83(1993).
* [34] Kumar, P., Kumar, A. S., Bajpeyi, S.: On bivariate Kantorovich exponential sampling series. Math. Methods Appl. Sci.46(12), 12645-12659(2023).
* [35] Kumar, A.S., Kumar, P., Ponnaian, D.: Approximation of Discontinuous Signals by Exponential Sampling Series. Results Math. 77(1), 22 pp.(2022).
* [36] Kumar, A.S., Kumar, P., Ponnaian, D.: Approximation of discontinuous functions by Kantorovich exponential sampling series. Anal. Math. Phys. 12, 21 pp.(2022).
* [37] Kursun, S., Aral, A., Acar, T.: Approximation Results for Hadamard-Type Exponential Sampling Kantorovich Series. Mediterr. J. Math. 20, 263 (2023).
* [38] Mamedov, R.G.: The Mellin transform and approximation theory (in Russian), ”Elm”, Baku, (1991) 273 pp. ISBN: 5-8066-0137-4.
* [39] Orlova, O., Tamberg, G.: On approximation properties of generalized Kantorovich-type sampling operators. J. Approx. Theory. 201,73-86(2016).
* [40] Ostrowsky, N., Sornette, D., Parke, P., Pike, E.R.: Exponential sampling method for light scattering polydispersity analysis, Opt. Acta. 28, 1059-1070(1981).
# Inherent Limits on Topology-Based Link Prediction
Justus Isaiah Hibshman<EMAIL_ADDRESS>University of Notre Dame Tim Weninger
<EMAIL_ADDRESS>University of Notre Dame
###### Abstract
Link prediction systems (e.g. recommender systems) typically use graph
topology as one of their main sources of information. However, automorphisms
and related properties of graphs beget inherent limits in predictability. We
calculate hard upper bounds on how well graph topology alone enables link
prediction for a wide variety of real-world graphs. We find that in the
sparsest of these graphs the upper bounds are surprisingly low, thereby
demonstrating that prediction systems on sparse graph data are inherently
limited and require information in addition to the graph topology.
## I Introduction
Graph-based link prediction systems are widely used to recommend a wide
variety of products and services. Whenever a shopping website predicts what
you will buy next based on what you and others like you have previously
bought, that’s link prediction. Whenever a social media network suggests that
you might know someone, that’s link prediction. Link prediction is the well-
studied task of predicting connections between entities amidst a network (aka
“graph”) of entity-entity connections.
Usually, these recommendations are based on a combination of node features and
the topology (i.e. the link-structure) of the graph-data. One might assume
that the node features or the topology contain sufficient information for an
ideal link prediction system using that information to perfectly select the
missing connections. However, we show this to be false for graph topology;
whenever a graph possesses symmetries (i.e. automorphisms) in its topology,
then the graph’s topology does not contain enough information to guarantee
correct selection of the missing edges. This raises two foundational questions
in machine learning on graphs:
1. 1.
What are the inherent limits on a graph structure’s predictability?
2. 2.
Do contemporary systems approach these performance limits?
The goal of the present work is to help answer these questions. To do so, we
investigate how much information a graph’s topology alone can provide to a
link prediction algorithm; that forms one kind of inherent limit on a
structure’s predictibility. In particular, we calculate hard upper limits on
how well an algorithm using only topology information can score in standard
link prediction metrics (AUC and AUPR). Our upper bounds hold for _any and
all_ link prediction algorithms and thus are bounds on the informativeness of
the topology data itself. Given these limits, we and others can begin to
answer question 2 by comparing contemporary link prediction systems’ scores to
these upper bounds.
To calculate an upper bound that will hold regardless of the link prediction
algorithm used, we calculate an upper limit on the performance of an idealized
algorithm presented with as much information about the solution as possible.
Specifically, we imagine showing the algorithm the graph _and_ the solution
set of missing edges; then we imagine removing or scrambling the nodes’ labels
and asking the algorithm to predict the solution on the re-labeled graph. We
call the process of removing or randomly permuting the nodes’ labels
“anonymizing” the graph.
For example, consider the following scenario illustrated in Fig. 1: Imagine
you are shown both a link prediction task and the answer to the same task
(i.e. the links one must predict); in Fig. 1, the solution edges are $(a,b)$,
$(a,e)$, and $(e,f)$. Now imagine further that you are asked to perform link
prediction on an anonymized version of the same problem, then asked to predict
which edges are missing. You know, for instance, that you must select edge
$(e,f)$, but you do not know for certain which node is $e$ nor which node is
$f$.
The anonymized graph is, by itself, effectively identical to the data that an
algorithm would be given for the regular link prediction task. We call the
task of doing link prediction on the anonymized graph after having seen the
un-anonymized solution _the known-topology link prediction task_. As we hinted
above, and as Figure 1 shows, _symmetries_ in the graph can render several
possible edge-predictions structurally identical and therefore equally valid.
It is these symmetries that limit performance even on the known-topology task.
Of course, in practice, most systems will perform much worse at the standard
link prediction task than an ideal algorithm could perform at the known-
topology task. Any link prediction algorithm will have some inherent, implicit
modeling assumptions about the graph. For example, a simple model like triadic
closure assumes that the likelihood of an edge is proportionate to the number
of triangles the edge would be involved in bianconi2014triadic ;
klimek2013triadic ; jin2001structure ; davidsen2002emergence . Expressed in a
Bayesian fashion, we can think of a link prediction algorithm as
conditionalizing on evidence, where the data the algorithm sees is its
evidence and the algorithm’s inherent assumptions form its prior. One could
think about an algorithm performing the known-topology link prediction task as
an algorithm doing standard link prediction with a perfect prior (i.e. 100%
confidence on the correct graph). We focus on the known-topology link
prediction task because it enables us to establish limits that exist in the
data itself, without making _any_ assumptions about the link prediction
algorithm.
[Figure 1 panels: Original Graph with Three Hidden Links; Same Graph Anonymized and with the Three Links Removed; Three of Eight Equally Valid Reconstructions.]
Figure 1: Toy example graph with three
held-out edges for what we call the known-topology link prediction task. The
link predictor knows that it must pick edges $(a,b)$, $(a,e)$, and $(e,f)$,
but does not know for certain which nodes in the anonymized graph correspond
to $b$, $c$, $e$, or $f$. In this case, there exist eight equally plausible
edge sets that would turn the anonymized graph into a graph isomorphic to the
original. However, only one of these eight is “correct” from the perspective
of standard link prediction evaluation. Note that each of these eight
candidate edge sets correspond to a different interpretation of the node
labels in the anonymized graph.
After discussing some related work in Section II and defining our formalisms
in Section III, we discuss some formal properties of link predictors as well
as quantitative link prediction evaluation metrics (ROC and AUPR) in Section
IV. Section V provides a formula for taking any specific link prediction task
(i.e. any (graph, missing edge set) pair) and calculating the max possible
scores a link predictor could achieve on the known-topology version of the
task, thereby establishing an upper bound on performance for the standard
version of the task. We leave the proof that our formula gives the maximum
score in the Appendix. In Section VI we introduce our experiments, which
consist of calculating link prediction score limits for various real-world
graphs; this includes both our main known-topology task limit _and_ various
lower limits that appear on the known-topology task when the link prediction
algorithm only uses a subset of its available information (a $k$-hop
neighborhood) to predict whether a link will appear. Section VII shows our
results. We find that on sparse graphs commonly used for link prediction, the
limits are surprisingly low. Then in Section VII.2 we observe how a famous
graph neural network’s reported results are _above_ the performance limits and
discuss how this is likely due to a common mistake in correctly measuring
AUPR. Lastly, we discuss ways to generalize our analysis in Section VIII.
## II Related Work
### II.1 Link Prediction
The link prediction task is widely studied, almost to the point of being
ubiquitous for graph research. It was introduced by Liben-Nowell and Kleinberg
liben2003link . Given a graph, the task is to predict which edges (i.e. links)
might be missing, either because the data is incomplete or because more edges
will appear in the future.
Early models for link prediction tended to rely on certain assumptions, such
as that of “triadic closure,” the assumption that if A connects to B and B
connects to C, then A is also likely to connect to C jin2001structure ;
davidsen2002emergence ; klimek2013triadic ; bianconi2014triadic ;
adamic2003friends .
Then, following the historical trend of machine learning in general, link
prediction models grew in complexity both in their features and in their
classifiers al2006link ; martinez2016survey until neural networks eventually
began to outperform other methods zhou2020graph ; wu2020comprehensive .
### II.2 Predictability Limits
To our knowledge, this is the first work to quantify a maximal performance
score that link predictors can obtain on a given task. Other work in
predictability has focused on measures of predictability distinct from
evaluation scores. For instance, Abeliuk et al. study how the predictability
of time-series data degrades as the amount of data available decreases; they
quantify predictability in terms of permutation entropy and signal self-
correlation, as well as actual prediction performance of specific models
abeliuk2020predictability . Permutation entropy has also been found to be
useful to measure predictability in ecology and physics, and
self(auto)-correlation in finance abeliuk2020predictability ;
bandt2002permutation ; garland2014model ; lim2013us .
Scholars have also analyzed predictability limits in other domains. For
instance, some have used notions of entropy to measure predictability limits
on human travel lu2013approaching ; song2010limits and disease outbreaks
scarpino2019predictability . Predictability is related to system complexity
and chaos boffetta2002predictability . For instance, minute uncertainties on
initial conditions can greatly limit one’s ability to make accurate weather
forecasts zhang2019predictability .
Others have done excellent work on the related but distinct case that the
ground truth (i.e. correct output) itself is uncertain or inherently fuzzy.
For instance, in these sorts of settings one might need an alternate way of
scoring a classifier, such as Survey Equivalence resnick2021survey . Rather
than fuzzy ground-truth, the present work focuses on cases where the correct
output is clearly known during evaluation, but where limits in predictability
come from symmetries within the input data.
## III Formalisms
### III.1 Graphs
We represent a graph $G$ as $G=(V,E)$ where $V$ is the set of vertices (i.e.
nodes) and $E$ is the set of edges. The edges are pairs of vertices. If the
graph’s connections are considered to have a direction, we say that
$E\subseteq V\times V$ and that the graph’s _non-edges_ are $(V\times
V)\setminus E$. If the connections do not have a direction, then the edges are
unordered pairs: $E\subseteq\\{\\{a,b\\}\ |\ (a,b)\in V\times V\\}$. However,
for simplicity, it is standard to always write $(a,b)$ rather than $\\{a,b\\}$
even when talking about undirected graphs. An edge of the form $(a,a)$ is
called a _self-loop_.
### III.2 Isomorphisms
Given two graphs $G_{1}=(V_{1},E_{1})$ and $G_{2}=(V_{2},E_{2})$, we say that
they are isomorphic if there exists a way to align the two graphs’ vertices so
that the structures overlap perfectly. Formally, $G_{1}$ and $G_{2}$ are
isomorphic (expressed as $G_{1}\cong G_{2}$) if there exists a bijection
between the vertices $f:V_{1}\rightarrow V_{2}$ such that $(a,b)\in
E_{1}\leftrightarrow(f(a),f(b))\in E_{2}$. In this case the function $f$ is
called an _isomorphism_. In this paper, whenever we refer to two graphs as
being equivalent or identical we mean that they are isomorphic.
If $f$ is an isomorphism between two graphs $G_{1}$ and $G_{2}$ we will
sometimes denote this as $G_{1}\cong_{f}G_{2}$.
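For small graphs, this definition can be checked directly by brute force. The following minimal Python sketch is purely illustrative (it is not part of our released code, and the helper name `are_isomorphic` is invented here); it searches over all vertex bijections and assumes edges are stored as ordered pairs, with both orientations stored for undirected graphs.

```python
from itertools import permutations

def are_isomorphic(V1, E1, V2, E2):
    """Brute-force isomorphism test for small graphs.

    V1, V2: lists of vertices; E1, E2: sets of ordered (a, b) edge pairs.
    Returns a bijection V1 -> V2 (as a dict) if one exists, else None.
    """
    if len(V1) != len(V2) or len(E1) != len(E2):
        return None
    for image in permutations(V2):
        f = dict(zip(V1, image))
        # f is an isomorphism iff it maps edges to edges and non-edges to non-edges.
        if all(((f[a], f[b]) in E2) == ((a, b) in E1) for a in V1 for b in V1):
            return f
    return None
```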
### III.3 Automorphism Orbits
Within the context of a single graph, the automorphism orbit of an object
(i.e. a vertex or an edge) captures its equivalence with other objects in the
graph. Two objects are in the same orbit if and only if the data _in no way_
distinguishes between the two objects.
An _automorphism_ of a graph is an isomorphism of the graph with itself. That
is, an automorphism of a graph $G=(V,E)$ is a bijective function
$f:V\rightarrow V$ such that:
$(a,b)\in E\leftrightarrow(f(a),f(b))\in E$
The set of all automorphisms of a graph $G$ form the _automorphism group_ of
the graph and is denoted $\text{Aut}(G)$.
The _automorphism orbits_ of a graph typically refer to collections of
equivalent vertices; however, they can also refer to collections of equivalent
edges. The orbit of a vertex $a$ in graph $G$ is the set
$\text{AO}_{G}(a)=\\{f(a)\ |\ f\in\text{Aut}(G)\\}$. Similarly, the orbit of
an edge $e=(a,b)$ in graph $G$ is the set $\text{AO}_{G}(e)=\\{(f(a),f(b))\ |\
f\in\text{Aut}(G)\\}$. Note that $a\in\text{AO}_{G}(a)$ and
$e\in\text{AO}_{G}(e)$ due to the trivial automorphism $f(x)=x$.
We can even consider the orbits of _non-existent_ edges (i.e. non-edges). Let
$(a,b)\notin E$ be an edge which is not in $G$. We can still define the orbit
of $(a,b)$ to be $\\{(f(a),f(b))\ |\ f\in\text{Aut}(G)\\}$. These orbits are
collections of edges not in $G$ which are equivalent to each other given $G$.
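These definitions can likewise be realized by brute force for very small graphs. The sketch below is illustrative only (the names `automorphisms` and `edge_orbit` are ours, not from our released code) and again stores edges as ordered pairs, with both orientations stored for undirected graphs.

```python
from itertools import permutations

def automorphisms(V, E):
    """Yield every automorphism of the graph (V, E) as a dict V -> V."""
    for image in permutations(V):
        f = dict(zip(V, image))
        if all(((f[a], f[b]) in E) == ((a, b) in E) for a in V for b in V):
            yield f

def edge_orbit(V, E, pair):
    """Orbit of an edge or non-edge under Aut(G): {(f(a), f(b)) | f in Aut(G)}."""
    a, b = pair
    return {(f[a], f[b]) for f in automorphisms(V, E)}
```

For example, on the undirected path with nodes $a$, $b$, $c$ and edges $(a,b)$ and $(b,c)$, the orbit of the non-edge $(a,c)$ is $\\{(a,c),(c,a)\\}$, reflecting the automorphism that swaps $a$ and $c$.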
### III.4 Induced Subgraphs
Given graph $G=(V,E)$ and a subset of the graph’s vertices $S\subseteq V$, we
can define $G$’s _induced subgraph_ on $S$ to be a graph with $S$ as its
nodeset and the edges in $G$ that connect nodes in $S$. Formally:
$G(S)=(S,\\{(a,b)\ |\ (a,b)\in E\land a\in S\land b\in S\\})$.
### III.5 K-hop Walks
Given two vertices $x,y\in V$, a _$k$ -hop walk_ from $x$ to $y$ is a sequence
$\langle w_{0},w_{1},w_{2},...,w_{k-1},w_{k}\rangle$ where $x=w_{0}$,
$y=w_{k}$, and $(w_{i-1},w_{i})\in E$ for all $i$ from 1 to $k$. For
convenience, we define a “zero-hop” walk to be a single-node “sequence”
$\langle w_{0}\rangle$, representing a “no-steps-taken” journey from $x$ to
itself.
### III.6 K-hop Neighborhoods
In practice, most link prediction algorithms do not use the entire graph when
predicting the probability of edge membership. Rather, they tend to use local
context. We formalize one intuitive notion of local context here that will be
used throughout the paper.
Given a node or an edge, we can consider the nodes surrounding the entity to
be the collection of nodes you could reach by beginning at the node (or the
edge’s endpoints), and taking up to $k$ steps (aka “hops”) across edges for
some value $k$. We can express this formally as follows:
Given a node $x\in V$, we define its _$k$ -hop neighborhood nodes_ $N_{k}(x)$
to be the set of all nodes within $k$ or fewer steps of $x$. Formally,
$N_{k}(x)=\\{y\ |\ \text{There exists an }l\text{-hop walk from }x\text{ to
}y\text{ where }l\leq k\\}$.
When we consider an edge or a non-edge $(a,b)$, we define its $k$-hop
neighborhood nodes to be the union of the two endpoints’ $k$-hop neighborhood
node sets: $N_{k}((a,b))=N_{k}(a)\cup N_{k}(b)$.
Finally we can define the _$k$ -hop neighborhood subgraph_ (or simply “$k$-hop
neighborhood”) for an edge or non-edge $e=(a,b)$. It is the induced subgraph
on $e$’s $k$-hop neighborhood nodes: $G_{k}(e)=G(N_{k}(e))$.
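These definitions translate directly into a breadth-first search followed by an induced-subgraph step. The sketch below is illustrative (the names `k_hop_nodes` and `k_hop_subgraph` are ours) and assumes `adj` maps every node to an iterable of its out-neighbors.

```python
from collections import deque

def k_hop_nodes(adj, start, k):
    """All nodes reachable from `start` by a walk of length <= k."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def k_hop_subgraph(E, adj, edge, k):
    """The k-hop neighborhood subgraph G_k(e) for an edge or non-edge e = (a, b)."""
    a, b = edge
    S = k_hop_nodes(adj, a, k) | k_hop_nodes(adj, b, k)
    return S, {(u, v) for (u, v) in E if u in S and v in S}
```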
### III.7 Anonymized Graphs
Given a graph $G=(V,E)$, an _anonymized version_ of $G$ is another graph $H$
isomorphic to $G$ with no particular relation between $G$’s node labeling and
$H$’s node labeling. More specifically, to get an anonymized copy of $G$, you
can select a random permutation $\pi:V\rightarrow V$; your anonymized graph is
then $H=(V,\\{(\pi(a),\pi(b))\ |\ (a,b)\in E\\})$.
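A minimal sketch of this anonymization step (illustrative; the function name is ours):

```python
import random

def anonymize(V, E, seed=None):
    """Return an anonymized copy of G = (V, E) via a uniformly random permutation pi."""
    rng = random.Random(seed)
    image = list(V)
    rng.shuffle(image)
    pi = dict(zip(V, image))                  # random bijection pi: V -> V
    return list(V), {(pi[a], pi[b]) for (a, b) in E}
```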
### III.8 Canonical Forms
A canonical form of a graph $G$ is a representation of $G$ that is produced in
a way invariant to the node ordering of $G$. The idea of a canonical form is
used by practical graph isomorphism algorithms such as Nauty and Traces
mckay2014practical ; they work by first converting two graphs $G_{1}$ and
$G_{2}$ to canonical forms $C_{1}$ and $C_{2}$ respectively, then performing a
trivial check to see if $C_{1}$ and $C_{2}$ are identical.
In other words, canonical forms are defined in terms of the algorithm that
creates them. An algorithm $A$ produces canonical forms for graphs if, for all
pairs of graphs $G$ and $H$, $A(G)=A(H)$ if and only if $G$ is isomorphic to
$H$. The canonical form of $G$ with respect to $A$ is $A(G)$.
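Nauty and Traces compute canonical forms with sophisticated search-tree pruning. Purely for illustration, and only feasible on very small graphs, the sketch below produces a valid (if naive) canonical form by exhaustively relabeling the vertices and keeping the lexicographically smallest relabeled edge list; this serialization choice is one of many valid options and is not the representation those tools use.

```python
from itertools import permutations

def canonical_form(V, E):
    """A brute-force canonical form: isomorphic graphs map to identical outputs.

    Relabels the vertices with 0..n-1 in every possible way and keeps the
    lexicographically smallest sorted edge tuple. E contains ordered pairs;
    for undirected graphs include both orientations.
    """
    n = len(V)
    best = None
    for image in permutations(range(n)):
        relabel = dict(zip(V, image))
        candidate = tuple(sorted((relabel[a], relabel[b]) for (a, b) in E))
        if best is None or candidate < best:
            best = candidate
    return (n, best)
```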
## IV Link Predictors and Their Evaluation
### IV.1 Link Predictors
A link predictor is essentially a binary classifier for non-edges. It produces
a verdict indicating whether the (non-)edge is or should be a member of the
graph.
Let $G=(V,E)$ be a graph and $\bar{E}$ be the set of non-edges in $G$; that
is, $\bar{E}=\\{(a,b)\ |\ a,b\in V\land(a,b)\notin E\\}$. A hard link
predictor (i.e. hard binary classifier) for $G$ and $\bar{E}$ is a function
$\ell_{G}:\bar{E}\rightarrow\\{\texttt{Positive},\texttt{Negative}\\}$ that
gives a non-edge a label (Positive/Negative). A soft link predictor (i.e. soft
binary classifier) for $G$ and $\bar{E}$ is a function
$\ell_{G}:\bar{E}\rightarrow\mathbbm{R}$ that gives a non-edge a score. The
higher the score, the more likely the non-edge is considered to be one of the
Positives; the lower the score, the more likely the non-edge is considered to
be a Negative. The function may be the result of training a model on a
collection of correct edges/non-edges via manual parameter tuning, statistical
analysis, or any number of other methods.
In practice, soft classifier scores are often turned into hard labels by
picking a threshold value $t$ and giving all entities with a score $\geq t$
the Positive label and all others the Negative label.
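In code, this thresholding step is a one-liner (sketch only; the name `harden` is ours):

```python
def harden(soft_scores, t):
    """Turn a soft predictor's scores into hard Positive/Negative labels at threshold t."""
    return {e: ("Positive" if score >= t else "Negative")
            for e, score in soft_scores.items()}
```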
#### IV.1.1 Our Assumptions
For convenience in our subsequent analysis, we make an assumption about how
link predictors operate. However, we also explain why a performance bound for
this kind of link predictor is ultimately a bound on all link predictors.
Our key assumption is that whenever a link predictor uses graph topology
information $I$ to give an edge $e$ a score, it would have given edge $e$ the
exact same score if any of the graph nodes in $I$ had been labeled
differently. In other words, the predictor’s output is permutation-invariant
on its input.
At first glance, this might sound like a big assumption, but there are only
two ways that an algorithm which is not permutation-invariant can get a better
score than an algorithm that is permutation-invariant:
* Case 1: The input graph’s node ordering was based on some property of the solution graph.
* Case 2: The input ordering had no significance, but the algorithm gets lucky arbitrarily due to the input (and would have performed worse given a different arbitrary node input ordering).
Concerning Case 1: Nobody should want algorithms that make use of the kind of
data in Case 1, because that data is not available in real-world link
prediction settings (e.g. a sales website cannot use a node ordering based on
what products you _will_ buy).
Concerning Case 2: The possibility that an algorithm could get lucky is not a meaningful counter-example to a performance limit. (Technically, there is one counter-example to this claim which might be of interest to the theoretically-minded reader. Due to non-linearities in how link prediction scores are calculated, an algorithm that coordinates its predictions across edges might manage to increase the expected value of its link prediction score even though in some cases it will still perform worse than a permutation-invariant algorithm. For example, in Figure 1 an optimal algorithm meeting our assumptions would give edges $(v,u)$, $(v,r)$, $(v,t)$, and $(v,q)$ in the anonymized graph each a $\frac{1}{2}$ probability score of being an edge. A non-permutation-invariant algorithm could give edges $(v,t)$ and $(v,q)$ each a probability score of 1 and edges $(v,u)$, $(v,r)$ a score of zero; this kind of prediction would do better half the time (i.e. on half the anonymizations) and worse half the time, but might have a higher expected performance score overall. We consider this sort of case to be exotic enough that it is not relevant to our analysis.)
Consequently, our performance bound is effectively a bound for all link
prediction algorithms, not just permutation-invariant ones.
For those who find these concepts interesting, we note that the notion of
permutation invariance has been explored in the graph neural network (GNN)
literature. For example, see the seminal work of Haggai Maron, who studies
what permutation-invariant architectures enable networks to do
maron2019universality ; maron2019provably ; maron2018invariant . Much of the
GNN research is focused on limits that different architectures impose or what
architectures enable – for example, that simple message passing has expressive
power equivalent to the 1-dimensional Weisfeiler-Lehman algorithm
morris2019weisfeiler – whereas our work in this paper asks what limits are in
the data itself regardless of which architecture is used.
#### IV.1.2 Edge Equivalence
Our assumption from Section IV.1.1 could be rephrased as, “If a link predictor
is given the exact same information about two different non-edges (same up to
isomorphism), then it gives those two edges the same score.” Using this
assumption, we can partition edges into cells based on whether or not a link
predictor is given the same information about the edges – same information
$\leftrightarrow$ same cell.
When a link predictor $\ell$ for a graph $G$ uses the context of the entire
graph to give an edge a score, then two edges are guaranteed to get the same
score if $G$’s topology in no way distinguishes between the two. Formally,
this means that two edges are guaranteed to get the same score if they are in
the same automorphism orbit:
$e_{1}\in\text{AO}_{G}(e_{2})\rightarrow\ell(e_{1})=\ell(e_{2})$
Often, link predictors use a local context surrounding an edge to give it a
score rather than using the entire graph as context. If we assume that a link
predictor uses at most the $k$-hop neighborhood surrounding an edge to give
the edge its score, then we get that when two edges have equivalent $k$-hop
neighborhoods and when those edges have the same role within the $k$-hop
neighborhoods, then the edges get the same score. Formally, this can be
expressed as: $\left(\exists f.\ G_{k}(e_{1})\cong_{f}G_{k}(e_{2})\land
f(e_{1})=e_{2}\right)\rightarrow\ell(e_{1})=\ell(e_{2})$
Note that by definition $e_{1}\in\text{AO}_{G}(e_{2})$ implies $\left(\exists
f.\ G_{k}(e_{1})\cong_{f}G_{k}(e_{2})\land f(e_{1})=e_{2}\right)$ for any $k$.
In other words, if two edges are equivalent in the context of the entire
graph, then they are guaranteed to be equivalent when considered in context of
their $k$-hop neighborhoods.
### IV.2 Performance Scores for Link Predictors
In practice, almost all link prediction classifiers are soft classifiers.
There are a number of nuances to how these classifiers are scored that are
worth highlighting here.
Remember that a soft classifier can be converted into a hard (i.e. binary)
classifier by predicting “yes” when the soft classifier’s output is above a
certain threshold and “no” otherwise. Researchers tend to evaluate the soft
predictors across a range of different thresholds. Each (soft predictor,
threshold) pair represents a possible hard link predictor. Thus performance of
a soft predictor can be considered to be the goodness of the collection of
hard predictors it offers. This can be measured in terms of different
criteria. One common criterion is the relationship between a predictor’s True
Positive Rate (TPR) and False Positive Rate (FPR), which generates the widely
used ROC curve; the ROC score is the area under the ROC curve. Another common
criterion is the relationship between the predictor’s Precision and Recall,
which leads to the Precision-Recall curve and its corresponding metric of Area
Under the Precision-Recall curve (AUPR). For an in-depth analysis exploring
the relationship between ROC curves and Precision-Recall curves, we recommend
the paper by Davis and Goadrich davis2006relationship .
When converting a set of (TPR, FPR) or (Precision, Recall) points into a
curve, an interpolation between points represents a way of combining the two
hard classifiers (the two points) into a new hard classifier. This can be done
by picking a value $\alpha\in[0,1]$ and tossing an $\alpha$-weighted coin
every time an entity is scored to decide which of the two hard classifiers to
use for the entity. This is implicitly how we as well as Davis and Goadrich
perform interpolation. It turns out that the popular trapezoidal interpolation
is incorrect for Precision-Recall space, because hard classifiers cannot be
combined to get precision-recall pairs that interpolate linearly
davis2006relationship .
Sometimes, rather than calculate the AUPR curve exactly, it can be
approximated with a measure called Average Precision (AP). Rather than doing a
complex interpolation between two precision-recall points, Average Precision
simply uses the precision of the rightmost point (the point with the higher
recall).
## V Optimal Prediction Performance
Here we offer the actual formulae for calculating the optimal ROC and AUPR
scores a soft classifier can obtain.
Recall from Section IV.1.1 that there will be some edges which are
topologically identical and thus which will both get the same score from a
link predictor. This fact forms a limit on the ability of the link predictor
to separate positive edges from negative edges. As we discussed in Section
IV.1.2, given whatever topological information an algorithm uses, we can
partition the edges into cells, where each cell is one of the sets of edges
that must all be given the same score as each other by the algorithm; when the
algorithm uses the entire graph as data to help it score an edge, then the
relevant partition is formed by grouping the edges according to their
automorphism orbits.
Given a graph $G=(V,E)$ and its non-edges $\bar{E}$, as well as some link-
prediction algorithm, we can consider the relevant partitioning of $\bar{E}$
into $k$ cells $C_{1}$, $C_{2}$, …, $C_{k}$. A cell might contain only
positive edges (i.e. only edges the link predictor should give a high score
to), only negative edges, or a mixture. It’s from mixed cells that performance
limits arise: since all elements in a mixed cell get the same score, they
cannot all have correct scores. For a cell $C_{i}$, we denote the number of
positives in the cell as $p_{i}$, the number of negatives in the cell as
$n_{i}$, and the total number of elements in the cell as
$t_{i}=p_{i}+n_{i}=|C_{i}|$.
For a given partitioning $C_{1}$ through $C_{k}$, we prove in the Appendix
that the optimal ROC and AUPR scores a soft classifier can obtain equals the
ROC/AUPR scores obtained from a classifier
$\ell:\bar{E}\rightarrow\mathbbm{R}$ which satisfies the following property:
$\forall 1\leq i,j\leq k.\ \forall e\in C_{i},\ e^{\prime}\in C_{j}.\
\Big{(}\ell(e)\geq\ell(e^{\prime})\Big{)}\leftrightarrow\left(\frac{p_{i}}{t_{i}}\geq\frac{p_{j}}{t_{j}}\right)$
(1)
Note that within a cell $C_{i}$, the probability that an element is a positive
is just $\frac{p_{i}}{t_{i}}$. Thus a classifier that scores non-edges with
the probability they’re positives will get an optimal score – optimal given
the topological information that the algorithm uses to distinguish non-edges.
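A minimal sketch of such an optimal scorer, assuming the cells have already been computed (illustrative only; the name `optimal_scores` is ours):

```python
def optimal_scores(cells, positives):
    """Score every non-edge by its cell's fraction of positives, p_i / t_i.

    cells: iterable of lists of non-edges the predictor cannot distinguish;
    positives: the set of held-out true edges.
    """
    scores = {}
    for cell in cells:
        frac = sum(e in positives for e in cell) / len(cell)
        for e in cell:
            scores[e] = frac
    return scores
```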
This property of optimal classifiers permits us to easily compute the maximal
ROC/AUPR scores that any algorithm could have obtained on a given dataset and
task. All we need to do is find the partitioning of non-edges into cells that
are topologically equivalent given the data the link predictor has at hand.
Then we order those cells according to their density of positives (i.e.
according to $\frac{p_{i}}{t_{i}}$). Given that ordering, we use the standard
ROC and AUPR formulae to calculate what scores would be obtained by an optimal
classifier. We provide code both for proper AUPR calculation and for optimal
ROC/AUPR scores at https://github.com/SteveWillowby/Link_Prediction_Limits.
For the formulae below, assume that the cells $C_{1}$ through $C_{k}$ are
ordered such that $i\leq
j\rightarrow\frac{p_{i}}{t_{i}}\geq\frac{p_{j}}{t_{j}}$.
To give the ROC and AUPR formulae in terms of this partitioning, we need just
a bit more notation. Define cumulative sums $P_{0}=0$ and
$P_{i}=\sum_{j=1}^{i}p_{j}$ for $1\leq i\leq k$. Similarly, define cumulative
sums $T_{0}=0$ and $T_{i}=\sum_{j=1}^{i}t_{j}$ for $1\leq i\leq k$. And again,
$N_{0}=0$ and $N_{i}=\sum_{j=1}^{i}n_{j}$ for $1\leq i\leq k$. Define
$T=T_{k}$, $P=P_{k}$, and $N=N_{k}$. Note that $|\bar{E}|=T=N+P$ (total number
of non-edges = total number of things classified = negatives + positives). We
now get the following formula for ROC:
$\text{Max ROC}=\sum_{i=1}^{k}\frac{p_{i}}{P}\cdot\frac{2N-N_{i}-N_{i-1}}{2N}$
(2)
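A minimal sketch of this computation, i.e. the area under the ROC curve obtained when the cells are processed in decreasing order of positive density (illustrative only; our actual implementation is in the repository linked above). It assumes each cell is given as a (positives, negatives) count pair and that there is at least one positive and one negative overall.

```python
def max_roc(cells):
    """Maximum attainable ROC score (Eq. 2) from per-cell (p_i, n_i) counts."""
    cells = sorted(cells, key=lambda c: c[0] / (c[0] + c[1]), reverse=True)
    P = sum(p for p, _ in cells)
    N = sum(n for _, n in cells)
    roc, N_prev = 0.0, 0                      # N_prev tracks N_{i-1}
    for p, n in cells:
        # Positives in this cell outrank the negatives in later cells and
        # tie with the n negatives in the same cell.
        roc += (p / P) * (2 * N - 2 * N_prev - n) / (2 * N)
        N_prev += n
    return roc
```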
The formula for AUPR is messier due to the need for proper interpolation
between precision-recall points discussed above in Section IV.2, but it is
still easy to calculate:
$\text{Max
AUPR}=\sum_{i=1}^{k}\frac{p_{i}}{P}\cdot\frac{p_{i}}{t_{i}}\cdot\left(1+\left(\frac{P_{i-1}}{p_{i}}-\frac{T_{i-1}}{t_{i}}\right)\cdot\ln\left(\frac{T_{i}}{T_{i-1}}\right)\right)$
(3)
Note that there are no division-by-zero issues with this formula due to the
following facts: When $p_{i}=0$, the entire expression becomes zero. Further,
$t_{i}$ is always $>0$. Lastly, when $T_{i-1}=0$, then $P_{i-1}=0$, and because
$\lim_{x\rightarrow 0^{+}}x\ln\frac{1}{x}=\lim_{x\rightarrow
0^{+}}x\left(\ln(1)-\ln(x)\right)=0$, we do not get a division by zero issue
with $T_{i-1}$.
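The corresponding sketch for Eq. (3), with the $T_{i-1}=0$ limit handled explicitly (again illustrative rather than our released implementation; it assumes at least one positive):

```python
import math

def max_aupr(cells):
    """Maximum attainable (properly interpolated) AUPR score (Eq. 3)."""
    cells = sorted(cells, key=lambda c: c[0] / (c[0] + c[1]), reverse=True)
    P = sum(p for p, _ in cells)
    aupr, P_prev, T_prev = 0.0, 0, 0          # P_{i-1} and T_{i-1}
    for p, n in cells:
        t = p + n
        if p > 0:
            log_term = (0.0 if T_prev == 0 else
                        (P_prev / p - T_prev / t) * math.log((T_prev + t) / T_prev))
            aupr += (p / P) * (p / t) * (1.0 + log_term)
        P_prev, T_prev = P_prev + p, T_prev + t
    return aupr
```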
Equipped with these formulae, we can now begin to calculate the maximum
possible performance scores on actual prediction tasks.
As we mentioned above, Average Precision (AP) is sometimes used to approximate
AUPR. However, the nice result we prove for ROC and AUPR concerning the edge
partitioning order does _not_ hold for AP. Fortunately, our upper bound on AUPR
is also an upper bound on AP, so we can still upper-bound the AP scores that
one might obtain. We provide a short proof of this in the Appendix.
## VI Methodology
Our main experiment is to calculate how maximum link prediction scores vary
with the amount of information given to an idealized algorithm. We run this
test on a wide variety of real-world graphs. The procedure runs as follows:
1. Begin with a graph $G=(V,E)$ and an edge removal probability $p$ (we set $p\leftarrow 0.1$).
2. Define the set of negatives $N$ as all edges not in $G$.
3. Remove each edge in $G$ with probability $p$ (independently) and add the removed edges to the set of positives $P$. Call the resulting graph $H\leftarrow(V,E\setminus P)$.
4. Get a (hashed) canonical representation for each non-edge’s automorphism orbit in $H$.
5. Use the collected information to calculate the maximum scores via equations 2 and 3.
6. Assign $k\leftarrow 1$.
7. Get a (hashed) canonical representation of the $k$-hop neighborhood for each non-edge in $H$ where the non-edge’s endpoints are given a distinct color from the rest of the nodes.
8. Use the collected information to calculate the maximum scores when using at most $k$ hops of information about a non-edge.
9. If the performance limit just obtained from step 8 is equal to (or within 0.005 of) the performance limit obtained from step 5, then stop. Otherwise, assign $k\leftarrow k+1$ and go to step 7.
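As a rough illustration, steps 1 through 5 can be sketched by reusing the brute-force helpers from Sections III and V (this is only feasible on tiny graphs and assumes comparable node labels and at least one removed edge; the procedure above instead relies on hashed canonical representations and scales much further):

```python
import random

def whole_graph_limits(V, E, p=0.1, seed=0):
    """Steps 1-5: hide edges, partition the non-edges of H by automorphism orbit,
    and return the maximum attainable ROC and AUPR scores."""
    rng = random.Random(seed)
    positives = {e for e in E if rng.random() < p}          # hidden test edges
    E_H = set(E) - positives
    non_edges = [(a, b) for a in V for b in V if a != b and (a, b) not in E_H]
    autos = list(automorphisms(V, E_H))
    cells = {}
    for (a, b) in non_edges:
        rep = min((f[a], f[b]) for f in autos)              # orbit representative
        cells.setdefault(rep, []).append((a, b))
    counts = [(sum(e in positives for e in cell),
               sum(e not in positives for e in cell)) for cell in cells.values()]
    return max_roc(counts), max_aupr(counts)
```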
We perform the above procedure multiple times for each graph. Each iteration
corresponds to different, random possible sets of missing edges; each set of
missing edges can be slightly different in terms of the limit on its
predictability. We get the mean value and 95% confidence interval for each
distinct value of $k$.
We tested the link prediction limits on a wide variety of real-world graphs.
They are listed in Table 1.
Graph | Dir | Weighted | $|V|$ | $|E|$ | # SL | AD | CC | Diam | ASP
---|---|---|---|---|---|---|---|---|---
Species 1 Brain [1] | D | U | 65 | 1139 | 0 | 35.0 | .575 | 4 | 1.83
Highschool Friendships [2] | D | W | 70 | 366 | 0 | 10.5 | .362 | 12 | 3.90
Foodweb [2] | D | U | 183 | 2476 | 18 | 27.1 | .173 | 6 | 1.84
Jazz Collaboration [2] | U | U | 198 | 5484 | 0 | 55.4 | .617 | 6 | 2.22
Faculty Hiring (C.S.) | D | W | 206 | 2929 | 124 | 28.4 | .214 | 7 | 2.88
Congress Mentions [2] | D | W | 219 | 586 | 2 | 5.35 | .160 | 12 | 4.48
Medical Innovation [2] | D | U | 241 | 1098 | 0 | 9.11 | .210 | 9 | 3.26
C-Elegans Metabolic [2] | U | U | 453 | 2025 | 0 | 8.94 | .646 | 7 | 2.66
USA Top 500 Airports (2002) [3] | D | U | 500 | 5960 | 0 | 23.8 | .617 | 7 | 2.99
Eucore Emails [4] | D | U | 1005 | 24929 | 642 | 49.6 | .366 | 7 | 2.65
Roget Concepts [3] | D | U | 1010 | 5074 | 1 | 10.0 | .108 | 14 | 4.89
CCSB-YI1 [5] | U | U | 1278 | 1641 | 168 | 2.57 | .045 | 14 | 5.36
MySQL Fn. Calls [6] | D | U | 1501 | 4212 | 13 | 5.61 | .078 | 18 | 5.36
USA Airports (2010) [3] | D | U | 1574 | 28236 | 0 | 35.9 | .489 | 9 | 3.20
Collins Yeast [3] | U | U | 1622 | 9070 | 0 | 11.2 | .555 | 15 | 5.53
Cora Citation [1] | D | U | 2708 | 5429 | 0 | 4.01 | .131 | 15 | 4.53
Citeseer Citation [1] | D | U | 3264 | 4536 | 0 | 2.78 | .072 | 10 | 2.64
Roman Roads (1999) [3] | D | U | 3353 | 8870 | 0 | 5.29 | .025 | 57 | 25.3
USA Powergrid [3] | U | U | 4941 | 6594 | 0 | 2.67 | .080 | 46 | 19.0
Table 1: Graphs used for tests – The edge count does not include self-loops,
which are listed separately – Key: Dir = (Un)Directed, Weighted =
(Un)Weighted, # SL = # Self-loops, AD = Average Degree, CC = Average
Clustering Coefficient, Diam = Diameter, ASP = Average of all Shortest Path
Lengths –
Sources: [1]: networkrepository , [2]: konect , [3]: clauset2020colorado ,
[4]: snapnets , [5]: yu2008high , [6]: myers2003software
## VII Results
### VII.1 Sparsity Tends to Lower the Upper-Bound
We found that on most graphs, the upper bounds were near 100%, even when using
1-hop neighborhoods; we suspect that this is because when degrees are high
enough there is still a large number of possible 1-hop neighborhoods such that
the hypothetical optimal algorithm can take advantage of the slightest
difference between neighborhoods. However, we found that on the sparsest
graphs the results told a different and very interesting story.
[Figure 2 panels: Upper Bound on AUPR Score vs. Number of Hops ($k$) of Information, one panel per graph: Cora ML Citations, Citeseer Citations, CCSB-YI1, USA Powergrid, Congress, Roget, MySQL, Collins Yeast, Roman Roads, Species 1, Highschool, Foodweb, Jazz, Faculty, Medical, C-Elegans, 500 Airports, Eucore, USA Airports.]
Figure 2: Hard upper bounds on link prediction performance as it varies with
the amount of information given to a link prediction algorithm. The horizontal
line shows the limit when using the entire graph ($k=\infty$). Ten percent of
the graph’s edges were randomly selected as test edges. The multiple points at
a single value of $k$ are from different sets of randomly chosen test edges;
left-right jitter is employed to aid in visualization. Error bars are 95%
confidence intervals.
We show the results for the four sparsest graphs: the Cora and Citeseer
citation (sub)graphs, the CCSB-YI1 Protein-Protein Interaction graph, and a US
Powergrid network. The results are in Figure 2. In particular, we focus on the
AUPR values, because even though link prediction papers often report ROC
scores, link predictors can easily get large ROC scores due to the class
imbalance (the sheer number of non-edges) yang2015evaluating .
In summary, our results give good evidence that when data becomes sparse
enough, graph topology alone is severely limited in its ability to indicate a
difference between genuine and fake missing edges.
### VII.2 Negative Sampling Methodologies Produce Artificially High Scores
We were curious to see how these fundamental limits compared to reports of
link prediction performance. As a small case study, we considered the seminal
Graph Convolutional Neural Network Auto-Encoder (GCNAE), a widely used and
referenced model that can perform topology-only link prediction
kipf2016variational . This will help us discern how well link predictors are
making use of the topology information available to them. Though this model is
a little bit “old” in the fast-paced world of graph neural networks, its
performance is still within just a few percentage points of modern GNN
performance chen2020simple ; fey2021gnnautoscale . In the original GCNAE
paper, the authors tested their model on undirected versions of the Cora and
Citeseer citation networks. They reported ROC scores and AP scores.
We found that the AP scores they reported for the Citeseer network were well
_above_ our upper bound, indicating that there was a difference in the
calculated AP. To understand this, we looked at the GCNAE code and found that
in its tests the number of negative edges was downsampled to one negative test
edge per positive test edge. This sort of downsampling is common when
performing link prediction evaluation with AP or AUPR; however downsampling
tends to boost the AP and AUPR scores significantly relative to what they
would have been if the full set of negatives was used in testing
yang2015evaluating . The GCNAE paper itself does not specify that downsampling
occurred.
By contrast, the paper’s reported ROC scores are well below our upper bound on
ROC. This makes sense as the ROC score is not affected by downsampling
yang2015evaluating . If we downsample the number of negative edges to one
negative edge per positive edge when calculating the AP limits, we get that
the GCNAE’s AP performance is also well below the upper bound. We show the
numeric results in Table 2.
The point of this case study is twofold. Firstly, a metric (e.g. AP) may have
different meanings depending on how it is used, and our methodology may be
able to help retroactively determine which approach was used if the original
paper does not specify.
Secondly, and perhaps of greater interest, state of the art link prediction
systems using the topology of a network do not reach the topology-based upper
limit on performance. We take this to suggest either that state of the art
link prediction systems have room for improvement in their use of graph
topology _or_ that what structurally differentiates the $k$-hop neighborhoods
of true edges from the $k$-hop neighborhoods of false edges in our tests is
basically noise that an algorithm should not pay attention to if it wishes to
generalize well. After all, if for example the 1-hop neighborhoods of two
different non-edges both have 20 edges per neighborhood and differ in only one
place, should we expect a link prediction algorithm to always treat that
difference as significant? We propose some future work in Section VIII.1 for
exploring how the upper limit on performance changes when the resolution of
the data is a bit blurrier, thereby reducing this noise.
Graph | GCNAE ROC | ROC Upper-Bound | GCNAE AP | AP Upper-Bound | AP Upper-Bound (1:1 Downsampling)
---|---|---|---|---|---
Cora | 0.843 $\pm$ 2e-4 | 0.99992 $\pm$ 3e-5 | 0.881 $\pm$ 1e-4 | 0.903 $\pm$ 0.020 | 0.99999 $\pm$ 9e-6
Citeseer | 0.787 $\pm$ 2e-4 | 0.9981 $\pm$ 3e-4 | 0.841 $\pm$ 1e-4 | 0.686 $\pm$ 0.019 | 0.9989 $\pm$ 5e-4
Table 2: Comparison to the GCNAE’s Reported Results – The $\pm$ symbol
indicates the 95% confidence interval. We conclude that the GCNAE paper is
likely downsampling negative test edges in the process of calculating AP. More
importantly, once downsampling is factored in, there is a notable gap between
the hypothetical ideal performance and state of the art topology-based
performance. We discuss this more in Sec. VII.2. Note: These results are for
undirected versions of the graphs, whereas the results in Fig. 2 are for the
directed versions (GCNAE only does link prediction on undirected graphs).
Also, note that the version of Citeseer that the GCNAE paper used has some
extra nodes with no links, whereas the version we used for Fig. 2 does not.
#### VII.2.1 Confirming our Assumption
To verify that the discrepancy between score and upper bound is indeed due to
downsampling negative edges, we ran the GCNAE code and obtained link
prediction scores when using all possible negative edges. The results
confirmed our hypothesis: When using the full set of negative test edges, the
average ROC scores we obtained for Citeseer and Cora ($\sim$77% and $\sim$84%
respectively) were similar to the paper’s reported results, but the average AP
scores were _much_ lower (just $\sim$1.2% and $\sim$1.4% respectively). These
extremely low scores do not mean that GCNAE is performing poorly, for true AP
is a much harsher and informative link prediction metric than ROC, and other
GNN models get similar scores on similar datasets yang2015evaluating ;
hibshman2021joint .
## VIII Discussion
### VIII.1 Applications, Extensions, and Limitations
In addition to the fact that our methodology gives insights about topology-
based link prediction, we believe the kind of analysis we offer in this paper
can be extended and expanded. We observed how maximum possible performance on
a particular binary classification task (i.e. link prediction) varies with the
amount of information available to the classifier. At a certain resolution,
inputs to the algorithm look identical. In our analysis, the differing
resolutions were the differing $k$ for the $k$-hop neighborhood subgraphs.
However, these resolutions could hypothetically be any reasonable
representation of the data.
If these kinds of equivalence partitions can be created at widely varying
resolutions for a classification task, then researchers will begin to be able
to say things like “our algorithm works as well on the full data as an optimal
algorithm would work on data of resolution $X$.”
The key ingredient for our analysis was that at a given resolution we were
able to partition the objects being classified (the non-edges) into cells of
equivalent objects; this let us calculate how well a hypothetical optimal
algorithm would perform on those cells. We were able to get this kind of
partitioning because our equality relation on two test objects was isomorphic
equivalence of the objects’ $k$-hop neighborhoods _and thus the relation was
transitive_. Yet we expect that even in cases where a transitive equality
relation is not immediately available, one could create such a relation by
using a distance measure to cluster test inputs and then defining equality as
being in the same cluster. The more fine-grained the clusters, the higher the
resolution of data given to the hypothetical optimal algorithm.
The main limit we are aware of for this kind of analysis is that at high data
resolution, noise can easily dominate the analysis. That is to say, at high
resolution, random noise tends to render each entity to be classified unique,
and thus the hypothetical, optimal algorithm with a perfect prior will be able
to correctly distinguish any two entities and get a perfect score.
For example, consider link prediction on pure noise: That is, consider the
process of randomly generating a graph where each edge is present
independently with some probability $p$ and then randomly hiding some fraction
of the edges to create a link prediction task. Such a random graph will likely
have no global symmetry erdHos1960evolution , so at high data resolution (e.g.
$k=3$) every non-edge will be unique and thus the hypothetical, optimal
algorithm with a perfect prior will obtain a perfect score, even though a
real-world algorithm that does not mystically foreknow the answer can do no
better than considering each edge to be equally likely, because that is in
fact how the graph was constructed.
Fortunately for our kind of analysis, real-world algorithms are usually
designed to ignore noise in the first place, so a data resolution that
successfully filters out noise can simultaneously be relevant and provide a
non-trivial upper bound on optimal performance.
### VIII.2 Conclusion
We presented a methodology for calculating hard limits on how well a link
prediction algorithm could perform when using structural information only.
This helps analyze how much information graph structure does or does not
provide for link prediction. We found that very sparse graphs give rise to
significant inherent difficulties and therefore contain strong caps on optimal
performance.
We also observed that a state of the art topology-based link prediction method
performs well below the upper bound in some cases, which we believe either
means that the link prediction algorithms have serious room for improvement
_or_ that our test sometimes picks up on “noise” that indeed differentiates
edges from non-edges but which an algorithm should not be expected to pick up
on because that noise would not behave in any consistent or infer-able manner.
These observations prompted our discussion on further avenues of discovery and
extensions of our methodology; we expect that an analysis similar to ours
which finds a way to obtain performance upper bounds at varying degrees of
blurring the noise would provide further insights.
#### Acknowledgments
This work was funded by grants from the US National Science Foundation
(#1652492 and #CCF-1822939).
## References
* (1) Andrés Abeliuk, Zhishen Huang, Emilio Ferrara, and Kristina Lerman. Predictability limit of partially observed systems. Scientific reports, 10(1):1–10, 2020.
* (2) Lada A Adamic and Eytan Adar. Friends and neighbors on the web. Social networks, 25(3):211–230, 2003.
* (3) Mohammad Al Hasan, Vineet Chaoji, Saeed Salem, and Mohammed Zaki. Link prediction using supervised learning. In SDM06: workshop on link analysis, counter-terrorism and security, volume 30, pages 798–805, 2006.
* (4) Owen Astrachan. Bubble sort: an archaeological algorithmic analysis. ACM Sigcse Bulletin, 35(1):1–5, 2003.
* (5) Christoph Bandt and Bernd Pompe. Permutation entropy: a natural complexity measure for time series. Physical review letters, 88(17):174102, 2002.
* (6) Ginestra Bianconi, Richard K Darst, Jacopo Iacovacci, and Santo Fortunato. Triadic closure as a basic generating mechanism of communities in complex networks. Physical Review E, 90(4):042806, 2014.
* (7) Guido Boffetta, Massimo Cencini, Massimo Falcioni, and Angelo Vulpiani. Predictability: a way to characterize complexity. Physics reports, 356(6):367–474, 2002.
* (8) Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. Simple and deep graph convolutional networks. In International conference on machine learning, pages 1725–1735. PMLR, 2020.
* (9) Aaron Clauset, Ellen Tucker, and Matthias Sainz. The Colorado index of complex networks (2016). URL https://icon.colorado.edu, 2016.
* (10) Jörn Davidsen, Holger Ebel, and Stefan Bornholdt. Emergence of a small world from local interactions: Modeling acquaintance networks. Physical review letters, 88(12):128701, 2002.
* (11) Jesse Davis and Mark Goadrich. The relationship between precision-recall and roc curves. In Proceedings of the 23rd international conference on Machine learning, pages 233–240, 2006.
* (12) Paul Erdős, Alfréd Rényi, et al. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17–60, 1960.
* (13) Matthias Fey, Jan E Lenssen, Frank Weichert, and Jure Leskovec. Gnnautoscale: Scalable and expressive graph neural networks via historical embeddings. In International Conference on Machine Learning, pages 3294–3304. PMLR, 2021.
* (14) Joshua Garland, Ryan James, and Elizabeth Bradley. Model-free quantification of time-series predictability. Physical Review E, 90(5):052910, 2014.
* (15) Justus Isaiah Hibshman, Daniel Gonzalez, Satyaki Sikdar, and Tim Weninger. Joint subgraph-to-subgraph transitions: Generalizing triadic closure for powerful and interpretable graph modeling. In Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pages 815–823, 2021.
* (16) Emily M Jin, Michelle Girvan, and Mark EJ Newman. Structure of growing social networks. Physical review E, 64(4):046132, 2001.
* (17) Thomas N Kipf and Max Welling. Variational graph auto-encoders. NIPS Workshop on Bayesian Deep Learning, 2016.
* (18) Peter Klimek and Stefan Thurner. Triadic closure dynamics drives scaling laws in social multiplex networks. New Journal of Physics, 15(6):063008, 2013.
* (19) Jérôme Kunegis. KONECT – The Koblenz Network Collection. In Proc. Int. Conf. on World Wide Web Companion, pages 1343–1350, 2013.
* (20) Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset collection. http://snap.stanford.edu/data, June 2014.
* (21) David Liben-Nowell and Jon Kleinberg. The link prediction problem for social networks. In Proceedings of the twelfth international conference on Information and knowledge management, pages 556–559, 2003.
* (22) Kian-Ping Lim, Weiwei Luo, and Jae H Kim. Are us stock index returns predictable? evidence from automatic autocorrelation-based tests. Applied Economics, 45(8):953–962, 2013.
* (23) Xin Lu, Erik Wetter, Nita Bharti, Andrew J Tatem, and Linus Bengtsson. Approaching the limit of predictability in human mobility. Scientific reports, 3(1):1–9, 2013.
* (24) Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. Advances in neural information processing systems, 32, 2019.
* (25) Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018.
* (26) Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In International conference on machine learning, pages 4363–4371. PMLR, 2019.
* (27) Víctor Martínez, Fernando Berzal, and Juan-Carlos Cubero. A survey of link prediction in complex networks. ACM computing surveys (CSUR), 49(4):1–33, 2016.
* (28) Brendan D McKay and Adolfo Piperno. Practical graph isomorphism, ii. Journal of symbolic computation, 60:94–112, 2014.
* (29) Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 4602–4609, 2019.
* (30) Christopher R Myers. Software systems as complex networks: Structure, function, and evolvability of software collaboration graphs. Physical review E, 68(4):046116, 2003.
* (31) Technically, there is one counter-example to this claim which might be of interest to the theoretically-minded reader. Due to non-linearities in how link prediction scores are calculated, an algorithm that coordinates its predictions across edges might manage to increase the expected value of its link prediction score even though in some cases it will still perform worse than a permutation-invariant algorithm. For example, in Figure 1 an optimal algorithm meeting our assumptions would give edges $(v,u)$, $(v,r)$, $(v,t)$, and $(v,q)$ in the anonymized graph each a $\frac{1}{2}$ probability score of being an edge. A non-permutation-invariant algorithm could give edges $(v,t)$ and $(v,q)$ each a probability score of 1 and edges $(v,u)$, $(v,r)$ a score of zero; this kind of prediction would do better half the time (i.e. on half the anonymizations) and worse half the time, but might have a higher expected performance score overall. We consider this sort of case to be exotic enough that it is not relevant to our analysis.
* (32) Paul Resnick, Yuqing Kong, Grant Schoenebeck, and Tim Weninger. Survey equivalence: A procedure for measuring classifier accuracy against human labels. arXiv preprint arXiv:2106.01254, 2021.
* (33) Ryan Rossi and Nesreen Ahmed. The network data repository with interactive graph analytics and visualization. In Twenty-ninth AAAI conference on artificial intelligence, 2015.
* (34) Samuel V Scarpino and Giovanni Petri. On the predictability of infectious disease outbreaks. Nature communications, 10(1):1–8, 2019.
* (35) Chaoming Song, Zehui Qu, Nicholas Blumm, and Albert-László Barabási. Limits of predictability in human mobility. Science, 327(5968):1018–1021, 2010.
* (36) Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and S Yu Philip. A comprehensive survey on graph neural networks. IEEE transactions on neural networks and learning systems, 32(1):4–24, 2020.
* (37) Yang Yang, Ryan N Lichtenwalter, and Nitesh V Chawla. Evaluating link prediction methods. Knowledge and Information Systems, 45(3):751–782, 2015.
* (38) Haiyuan Yu, Pascal Braun, Muhammed A Yıldırım, Irma Lemmens, Kavitha Venkatesan, Julie Sahalie, Tomoko Hirozane-Kishikawa, Fana Gebreab, Na Li, Nicolas Simonis, et al. High-quality binary protein interaction map of the yeast interactome network. Science, 322(5898):104–110, 2008.
* (39) Fuqing Zhang, Y Qiang Sun, Linus Magnusson, Roberto Buizza, Shian-Jiann Lin, Jan-Huey Chen, and Kerry Emanuel. What is the predictability limit of midlatitude weather? Journal of the Atmospheric Sciences, 76(4):1077–1091, 2019.
* (40) Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. AI open, 1:57–81, 2020.
## Appendix A Appendix - Proofs
### A.1 Maximum ROC
Let $k$ be the number of distinct inputs to the classifier (i.e. the number of
cells of isomorphically-equivalent non-edge types in the partition of non-
edges).
Further, let $s$ be a classifier that achieves an optimal ROC score for these
cells. Because ROC is calculated with respect to the different (true positive
rate, false positive rate) pairs obtainable by using different score
thresholds to convert a soft classifier into various hard classifiers, then
without loss of generality $s$ assigns a distinct score to each cell; if $s$
did not, then we could just consider the cells given the same scores as being
the same cell with a larger total size and larger number of positives.
As in Section V, let $t_{1},t_{2},...,t_{k}$ be the total sizes of the cells,
where the cells are ordered by the score $s$ gives to them ($t_{1}$ is the
total size of the cell with the highest score). Likewise, let
$p_{1},p_{2},...,p_{k}$ be the number of positives in the respective cells.
For notational convenience, we define $t_{0}=p_{0}=0$.
Let $T_{i}=\sum_{j=0}^{i}t_{j}$, $P_{i}=\sum_{j=0}^{i}p_{j}$, and
$N_{i}=T_{i}-P_{i}$. Also define $T=T_{k}$, $P=P_{k}$, and $N=N_{k}$.
Assume for sake of contradiction that there exists an $i\in[k-1]$ such that
$\frac{p_{i}}{t_{i}}<\frac{p_{i+1}}{t_{i+1}}$. Now imagine an alternate
classifier $s^{*}$ that gives the exact same scores as $s$ except that it
reverses the scores for the $i$’th and $(i+1)$’th cells.
Then we get a new set of variables $t_{1}^{*},t_{2}^{*},...,t_{k}^{*}$ and
$p_{1}^{*},p_{2}^{*},...,p_{k}^{*}$ as well as corresponding
$T_{j}^{*},P_{j}^{*},\text{and }N_{j}^{*}$ where the values correspond to the
cells as ordered by classifier $s^{*}$. Due to the definition of $s^{*}$,
$t_{j}=t_{j}^{*}$ and $p_{j}=p_{j}^{*}$ for all $j$ except $j\in\\{i,i+1\\}$,
in which case $t_{i}=t_{i+1}^{*}$, $t_{i+1}=t_{i}^{*}$, $p_{i}=p_{i+1}^{*}$,
and $p_{i+1}=p_{i}^{*}$. Once again, for notational simplicity we pad the
beginning of these lists with $t_{0}^{*}=p_{0}^{*}=0$.
This gives us a total of $k+1$ (true positive rate, false positive rate) pairs
(i.e. (TPR, FPR) pairs). The false positive rate is the x axis of the curve
and the true positive rate is the y axis.
$\text{True Positive Rate }j\text{ for }s\ (\text{i.e.
TPR}_{j})=\frac{P_{j}}{P}\text{ for }j\in\\{0,1,...,k\\}$
$\text{False Positive Rate }j\text{ for }s\ (\text{i.e.
FPR}_{j})=\frac{N_{j}}{N}\text{ for }j\in\\{0,1,...,k\\}$
Likewise for $s^{*}$ we have:
$\text{True Positive Rate }j\text{ for }s^{*}\ (\text{i.e.
TPR}_{j}^{*})=\frac{P_{j}^{*}}{P}\text{ for }j\in\\{0,1,...,k\\}$
$\text{False Positive Rate }j\text{ for }s^{*}\ (\text{i.e.
FPR}_{j}^{*})=\frac{N_{j}^{*}}{N}\text{ for }j\in\\{0,1,...,k\\}$
The ROC curve then interpolates linearly between these points. Note that by
definition
$\text{TPR}_{0}=\text{TPR}_{0}^{*}=\text{FPR}_{0}=\text{FPR}_{0}^{*}=0$ and
$\text{TPR}_{k}=\text{TPR}_{k}^{*}=\text{FPR}_{k}=\text{FPR}_{k}^{*}=1$.
We can consider the interpolation between the $j$’th (TPR, FPR) point and the
$(j+1)$’th (TPR, FPR) point as corresponding to a variable $\alpha\in[0,1]$
where:
$\text{TPR}_{j,\alpha}=\frac{P_{j}+\alpha p_{j+1}}{P}$
$\text{FPR}_{j,\alpha}=\frac{N_{j}+\alpha(t_{j+1}-p_{j+1})}{N}$
This leads to the following:
$\frac{\text{d}}{\text{d}\alpha}\text{TPR}_{j,\alpha}=\frac{p_{j+1}}{P}$
and separately:
$\alpha=\frac{N\cdot\text{FPR}_{j,\alpha}-N_{j}}{t_{j+1}-p_{j+1}}$
$\frac{\text{d}}{\text{d}\text{FPR}_{j,\alpha}}\alpha=\frac{N}{t_{j+1}-p_{j+1}}$
Using the chain rule gives us:
$\frac{\text{d}}{\text{d}\text{FPR}_{j,\alpha}}\text{TPR}_{j,\alpha}=\frac{\text{d}\alpha}{\text{d}\text{FPR}_{j,\alpha}}\cdot\frac{\text{d}\text{TPR}_{j,\alpha}}{\text{d}\alpha}=\frac{N}{P}\cdot\frac{p_{j+1}}{t_{j+1}-p_{j+1}}$
For $s^{*}$’s curve this becomes:
$\frac{\text{d}}{\text{d}\text{FPR}_{j,\alpha}^{*}}\text{TPR}_{j,\alpha}^{*}=\frac{N}{P}\cdot\frac{p_{j+1}^{*}}{t_{j+1}^{*}-p_{j+1}^{*}}$
Now, because $t_{j}$ and $t_{j}^{*}$ only differ at $j=i$ and $j=i+1$, and the
same holds for $p_{j}$, etc., and because the values at $i$ and $i+1$ are the
reverse of each other, then we can conclude that the $(i-1)$’th TPR-FPR point
is the same for both $s$ and $s^{*}$ as well as the $(i+1)$’th point. The only
difference is at the $i$’th point. To see that the $(i+1)$’th point is the
same, note that
$P_{i+1}=P_{i-1}+p_{i}+p_{i+1}=P_{i-1}^{*}+p_{i+1}^{*}+p_{i}^{*}=P_{i+1}^{*}$.
Further, because $\frac{p_{i}}{t_{i}}<\frac{p_{i}^{*}}{t_{i}^{*}}$ we obtain
that $\frac{p_{i}}{t_{i}-p_{i}}<\frac{p_{i}^{*}}{t_{i}^{*}-p_{i}^{*}}$ which
in turn means that:
$\frac{\text{d}}{\text{d}\text{FPR}_{i-1,\alpha}}\text{TPR}_{i-1,\alpha}<\frac{\text{d}}{\text{d}\text{FPR}_{i-1,\alpha}^{*}}\text{TPR}_{i-1,\alpha}^{*}$
In other words, the slope leading from the $(i-1)$’th point to the $i$’th
point is greater in $s^{*}$’s ROC curve than in $s$’s. Since both curves up to
and including their $(i-1)$’th point are identical, and since they are also
identical at the $(i+1)$’th point and thereafter, this means that $s^{*}$’s
curve has a larger area underneath it.
Ergo, we obtain a contradiction, for $s^{*}$ obtains a higher ROC score than
$s$. Thus the assumption must have been false. This means that $s$ orders the
cells such that $\frac{p_{j}}{t_{j}}>\frac{p_{j+1}}{t_{j+1}}$ for all
$j\in[k-1]$. In other words, $s$ completely sorts the cells as we intended to
show. $\hbox{}\nobreak\hfill\square$
### A.2 Maximum AUPR
Davis and Goadrich have shown that if one ROC curve dominates another, then
the corresponding (properly interpolated) precision-recall curves yield the
same dominance [11]. Thus the AUPR result follows directly from our ROC result
above.
### A.3 Maximum AP vs. Maximum AUPR
The nice ordering property we prove for ROC and AUPR does not hold for AP. For
example, consider the following three cells with number of (positives,
negatives) total: $\langle(10,0),(2,2),(9,7)\rangle$. Ordering these in
decreasing $\frac{\text{Positives}}{\text{Positives}+\text{Negatives}}$ yields
an AP of approximately 0.856 whereas ordering them in the order listed yields
an AP of approximately 0.858.
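These two numbers can be reproduced in a few lines using the rightmost-precision definition of AP from Section IV.2 (sketch only; `average_precision` here is an illustrative helper of ours, not a library function):

```python
def average_precision(cells):
    """AP with rightmost-precision interpolation, for cells taken in the order given."""
    P = sum(p for p, _ in cells)
    ap, tp, seen = 0.0, 0, 0
    for p, n in cells:
        tp, seen = tp + p, seen + p + n
        ap += (p / P) * (tp / seen)    # (recall gained) x (precision at the right end)
    return ap

cells = [(10, 0), (2, 2), (9, 7)]
by_density = sorted(cells, key=lambda c: c[0] / (c[0] + c[1]), reverse=True)
print(round(average_precision(by_density), 3))   # 0.856
print(round(average_precision(cells), 3))        # 0.858
```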
Fortunately, we can still calculate a hard upper limit on AP scores by simply
using the upper limit on the AUPR score. Remember our ordering rule for an
optimal curve: Order by
$\frac{\text{Positives}}{\text{Positives}+\text{Negatives}}$ in a descending
order. Remember also that via the ROC and AUPR proofs, we showed that given
_any_ AUPR curve obtained by some ordering of the cells where some pair of
cells disobeyed our ordering rule, you could get an AUPR curve that
_dominates_ it by swapping the ordering of the incorrect pair. Now, observe
that any curve which does not follow the ordering rule can be converted into
the curve that does by a succession of swaps where the swaps are of adjacent,
incorrectly ordered cells; this corresponds to the naive “bubble sort”
algorithm [4]. During this process of swaps, each new curve dominates the
former. Given that the last curve is the one corresponding to our ordering, it
follows that our optimal curve not only has a larger area underneath it than
any other curve, but it also _dominates_ any other curve. Last but not least,
note that if you follow the optimal AUPR curve from right to left (i.e. from
recall of 1 to recall of 0), the curve never decreases in precision.
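As a small illustration (added here, not part of the original text), the succession of adjacent swaps described above is simply bubble sort on the cells' positive fractions; each swap replaces the current curve with one that dominates it.

```python
def sort_cells_by_adjacent_swaps(cells):
    """Bubble sort cells by decreasing positives/(positives+negatives).
    Each swap of an adjacent, incorrectly ordered pair corresponds to a new
    curve that dominates the previous one."""
    cells = list(cells)
    ratio = lambda c: c[0] / (c[0] + c[1])  # cell = (positives, negatives)
    for i in range(len(cells)):
        for j in range(len(cells) - 1 - i):
            if ratio(cells[j]) < ratio(cells[j + 1]):  # incorrectly ordered pair
                cells[j], cells[j + 1] = cells[j + 1], cells[j]
    return cells
```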
Now let us turn our attention to AP curves. Recall that an AP curve is
obtained in a very similar manner to an AUPR curve. In both cases you first
get a collection of precision-recall points given the ordering the classifier
gives to the cells (we call these precision-recall points the “base points”);
the only difference between AP and AUPR is in how the two kinds of curves
interpolate between adjacent base points. AP curves interpolate between any
two precision-recall points by using the precision from the point with the
higher recall.
We can observe two things: First, given our observations about the optimal
AUPR curve, regardless of the ordering of the cells used to get the base
points, all the base points of the AP curve are either on or are strictly
dominated by the points in the optimal AUPR curve. Second, as you move from
right to left along the interpolated AP points, the precision value remains
constant until you hit a new base point. By contrast, as we discussed above, the
optimal AUPR curve’s precision value is either constant _or increasing_ as you
move from right to left. Thus any AP curve is always dominated by the ideal
AUPR curve. $\hbox{}\nobreak\hfill\square$
# Self-supervised feature distillation and design of experiments for efficient
training of micromechanical deep learning surrogates
Patxi Fernandez-Zelaia, Jason Mayeur, Jiahao Cheng, Yousub Lee, Kevin Knipe, Kai Kadau
Manufacturing Science Division, Oak Ridge National Laboratory, Oak Ridge, TN, United States; Computational Sciences & Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN, United States; Siemens Energy, Orlando, FL, United States
###### Abstract
Machine learning surrogate emulators are needed in engineering design and
optimization tasks to rapidly emulate computationally expensive physics-based
models. In micromechanics problems the local full-field response variables are
desired at microstructural length scales. While there has been a great deal of
work on establishing architectures for these tasks there has been relatively
little work on establishing microstructural experimental design strategies.
This work demonstrates that intelligent selection of microstructural volume
elements for subsequent physics simulations enables the establishment of more
accurate surrogate models. There exist two key challenges towards establishing
a suitable framework: (1) microstructural feature quantification and (2)
establishment of a criterion which encourages construction of a diverse
training data set. Three feature extraction strategies are used as well as
three design criteria. A novel contrastive feature extraction approach is
established for automated self-supervised extraction of microstructural
summary statistics. Results indicate that, for the problem considered, up to an
8% improvement in surrogate performance may be achieved using the proposed
design and training strategy. Trends indicate this approach may be even more
beneficial when scaled towards larger problems. These results demonstrate that
the selection of an efficient experimental design is an important
consideration when establishing machine learning based surrogate models.
###### keywords:
machine learning , experimental design , ICME , surrogate modeling ,
micromechanics
††journal: Computational Materials Science††Notice of Copyright. This
manuscript has been authored by UT-Battelle, LLC under Contract No. DE-
AC05-00OR22725 with the U.S. Department of Energy. The United States
Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a non-
exclusive, paid-up, irrevocable, world-wide license to publish or reproduce
the published form of this manuscript, or allow others to do so, for United
States Government purposes. The Department of Energy will provide public
access to these results of federally sponsored research in accordance with the
DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
## 1 Introduction
Numerical physics-based models are essential tools needed to study many
complex physical systems. However, due to the computational burden associated
with discretizing and solving the governing laws, using these models to
optimize and design engineering systems is difficult. Driven in part by the
materials genome initiative, integrated computational materials engineering
(ICME) tools have become ubiquitous in various fields [1]. While classically
focused on physics-based simulations and statistical approaches, the ICME
paradigm has benefited greatly from recent advances in machine learning (ML)
and the development of open source frameworks. For instance, original methods
for binary phase image generation relied on pixel-wise simulated annealing
approaches [2]; presently ML-based generative diffusion models can produce
much more realistic and complex structures [3, 4, 5, 6]. Google and Microsoft
have even recently released materials focused generative models indicating
their interest in ICME approaches [7, 8].
In the statistics community classic surrogate reduced order models have been
typically used to emulate parameterized numerical codes with scalar valued
outputs [9, 10, 11]. Gaussian processes (GPs) are favored in the field of
computer experiments because, for deterministic simulations, GPs can be
formulated to behave as interpolators; the function will pass exactly through
the training data without overfitting. Modern GP implementations have been
formulated which can capture more complex input and output structures [12,
13]. Additionally, there are composite approaches combining GPs with modern ML
approaches [14, 15]. Fundamentally GPs can be thought of as making a
prediction at an input $\bm{x}$ by performing a weighted average of “near-by”
training examples. This measure of closeness requires calculation of a
distance metric (and learning of spatial correlation functions). For
parametric inputs this is a rather simple Euclidean distance for low-
dimensional data. Parametric definition of a complex system is made possible
via careful use of relevant summary statistics [16]. However, for many
engineering and science problems parametric description of the problem may not
be feasible. For instance, finite element (FE) models with discretized complex
geometries cannot be easily captured via GPs but can nonetheless be emulated
using ML approaches [17, 18]. Similarly, in many mechanics
problems the input to a numerical code may not only be fundamental scalar
properties (stiffness, strength) but input structural representations. For
instance, shown in Fig. 1, microstructural volume elements (MVEs) contain
$32^{3}$ voxels with the local state in each being represented by three Euler
angles. This corresponds to $32,768\cdot 3$ total “features” which need to be
considered in computing the necessary GP distance measure. Furthermore, in
certain settings the full-field (voxel-wise) response is needed e.g. at least
$32,768$ total output values. Hence, for these complex structural problems,
direct application of classic GPs is not well suited. Instead, convolutional
neural networks (CNNs) have been shown to be well suited towards emulating
these micromechanical localization problems [19, 20, 21, 22, 23, 24]. This is
because the convolution operation is well suited for processing spatial
structure and CNN architectures may be easily designed for multiple output
predictions. Furthermore there is a clear link to Green’s functions in
continuum theory which use analytical convolution operations to predict the
response of heterogeneous media [20]. This theoretical basis provides good
justification for using data-driven CNN networks.
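To make the distance-weighted-average view of GP prediction discussed above concrete, the following minimal numpy sketch (an illustration only, not the authors' workflow) evaluates a GP posterior mean with a squared-exponential kernel on low-dimensional parametric inputs; the lengthscale and jitter values are arbitrary.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel; closeness is set by Euclidean distance."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_mean(X_train, y_train, X_test, lengthscale=1.0, jitter=1e-8):
    """Posterior mean: a weighted combination of training responses, with
    weights K(X_test, X_train) @ inv(K(X_train, X_train))."""
    K = rbf_kernel(X_train, X_train, lengthscale) + jitter * np.eye(len(X_train))
    weights = rbf_kernel(X_test, X_train, lengthscale) @ np.linalg.inv(K)
    return weights @ y_train

# toy low-dimensional parametric inputs with deterministic responses
X = np.random.rand(20, 3)
y = np.sin(X).sum(axis=1)
print(gp_mean(X, y, X[:3]))  # interpolates: recovers y[:3] up to the jitter
```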
Development of effective statistical or ML surrogate models not only requires
a suitable model form but also careful construction of an appropriate
experimental design. In physical experiments designs are often focused on
eliminating confounding or biasing factors, mitigating against the effects of
experimental noise, and extracting trends e.g. estimating the sensitivity of
the response variable to input factors [25]. In the statistics community
designs for surrogate model development favor space-filling designs [26].
Designs here refer to the collection of training examples
$\left\\{\bm{x}_{1},\ldots,\bm{x}_{N}\right\\}$ used to evaluate the physics
model and train the surrogate. Consider that points close to one another are
assumed to have similar responses. In fact, for deterministic simulations, in
the limit as two points overlap the responses are identical. Hence, space-
filling designs are constructed by optimizing a specified criterion which
encourages the establishment of a desirable distribution of points in space.
The criterion always requires definition of a distance metric, as it is
necessary to avoid clustering. There exist a number of different kinds of
space-filling design criteria, each consisting of trade-offs between design
quality and optimization complexity [26].
Development of designs for the micromechanics problem, shown in Fig. 1, is
extremely challenging due to the need to define a distance metric for pair-
wise comparison of structures. In certain materials problems this is made
possible via the use of domain-science knowledge to define statistical
descriptors. For instance in composite structures features may include phase
volume fraction, particle size, particle shape, etc. [27]. For polycrystalline
systems diversity of crystallographic texture is critically important [28]. A
recent work has established a homogenization framework which predicts
aggregate constitutive properties using single-crystalline responses [29].
Prior localization works have engineered MVEs to effectively train surrogates
[20]; single-voxel grains for fine scale responses, multi-voxel grains for
larger scales, and single crystals with embedded single-voxel speckles to
capture “delta” localization phenomena. There are a number of works that have
demonstrated that diversity of training data in atomistic problems is of
paramount importance as well [30, 31]. In one work an entropy-based criterion
was used to increase the volume of the training data to mitigate against
extrapolation [30]. Critically, however, there are no existing works that
quantitatively demonstrate the importance of the experimental design in
training full-field micromechanical ML surrogate models.
Figure 1: Overall approach is to identify most unique and informative MVEs for
subsequent physics evaluation and surrogate training. Hypothesis is that more
efficient training may be performed if MVEs are chosen using an appropriate
design criteria.
This work hypothesizes that intelligent selection of MVEs for subsequent
physics simulations will enable the establishment of more accurate ML
surrogate models. The overall strategy, shown in Fig. 1, is similar to that of
traditional computer experiments; distribute settings over a diverse space,
run simulations, and train a surrogate model. The main challenge, however, is
that it is not immediately straightforward how to quantify the MVE inputs.
Hence, three approaches are used to distill the MVEs to a lower dimensional
representation: (1) a variational autoencoder (VAE) latent description, (2)
microstructural statistics derived via self-supervision, and (3)
domain-science-inspired microstructural statistics. Furthermore, due to differences in
information contained in each descriptor three space-filling design criteria
are parametrically tested to evaluate their performance. When compared against
a random selection of MVE training examples, results indicate that up to an 8%
boost in surrogate performance, shown to be statistically significant, may be
achieved via the proposed training strategy. We observed that this benefit
increases with increasing data set size, and hence, this approach may be even
more beneficial when scaled towards larger problems. Interestingly, it is
observed that while the mean performance of the surrogate is only marginally
sensitive to the microstructure, there are significantly more poorly
performing outliers for textured large grain MVEs. We suspect that this may
simply be due to smaller grain size instances being more representative and,
hence, containing more information. These results demonstrate that
experimental design for micromechanical problems is imperative. In addition,
our self-supervised approach for distilling microstructure statistics may be
suitable for other tasks where image similarity metrics are important.
## 2 Methods
It is hypothesized here that careful selection of MVEs, for subsequent physics
simulation, will enable training of a more accurate micromechanical deep
learning surrogate when compared against random selection. In the active
learning context it is assumed that a trained ML model is already available
and activations from neurons may be used as features when constructing the
sequential design e.g. identifying subsequent examples for labeling, physics
simulation, etc. [32]. This approach is feasible as targeting novel
activations when constructing the sequential design will indeed ensure that
the model will “see” novel examples during subsequent training. However, there
are a few key challenges that may be problematic for the
materials problem posed here. In ML the active learning sequential design task
is often referred to as core-set and the design is optimized using variants of
the maximin distance criteria. In the referenced work the VGG-16 architecture
is used on CIFAR-10 and SVHN data sets (60,000 and 600,000 images in each,
respectively). These authors note that activations from the final fully
connected layer, which is 1,000-dimensional, are used. At such high dimensions
the search space becomes extremely large and constructing a design is
challenging. For natural images this may be less of a concern as data sets are
rather large (60,000 and 600,000 in this case, some image data sets 1M+). In
our context, with only a total of 6,825 MVEs, searching a 1000-dimensional
space would be extremely challenging. Furthermore, the challenge addressed
here is focused on the initial design of experiments and, hence, there is no
trained ML model available from which features may be extracted.
Despite the initial design challenge being fundamentally different from the
active learning problem, the two approaches share similar strategies; identify
salient features, use a design criteria to identify a diverse design, perform
physics simulations (or perform labeling in the natural image context), and
train the ML model. In a CNN activations can be related to localized features.
For instance, a candidate example which exhibits distinctly different
activations than examples already in the training data set may contain unique
edges, corners, shapes, textures, etc. Alternatively, unique activations may
correspond to similar features but at different spatial locations in the
input. In the micromechanical surrogate model this seems appealing; a data set
containing various triple junctions of different orientation relationships
will be desirable towards training a generalizable model. Conversely, the same
triple junction embedded in two volumes of differing overall crystallographic
texture may behave differently. Therefore, both statistical features which
describe aggregate qualities of the microstructure, and localized features
which describe spatial arrangement of structure, are likely to provide value.
Three key design criteria will be used for assessing the diversity of training
ensembles. In the context of computer experiments space-filling designs are
commonly used to ensure good “spread” in the $d$-dimensional model input space
[26]. The challenge in identifying a design is in minimizing the corresponding
criterion; consider that a design of $N$ $d$-dimensional points contains $N\cdot d$
total values which must be identified. Furthermore, the curse of
dimensionality drives search spaces to be ever larger with increasing $d$.
Three key space-filling design criteria will be considered in this work: a
greedy maximin design [26], a greedy maximum projection (maxPro) design [33],
and a data twinning approach [34].
### 2.1 Physics simulation
The local stress response for input MVEs was simulated using open source
software PRISMS-plasticity [35]. In this exploratory work the material was
assumed to behave elastically so that a sufficiently large data set could be
quickly generated for testing the stated hypothesis. While this work is
specifically focused on the simplified elastic constitutive response the main
contribution here is demonstrating the importance of the initial experimental
design; this should extend to any mechanical constitutive model. Elastic
constants corresponding to a Ni-based superalloy were utilized with
$C_{11}=199\,GPa$, $C_{12}=128\,GPa$, and $C_{44}=99\,GPa$ [36]. MVEs were
loaded in uniaxial tension with a $50\,MPa$ traction applied on the top $z=1$
plane and periodic boundary conditions elsewhere. A $32\times 32\times 32$
element mesh was used with linear interpolation shape functions. Simulations
were run over all candidate MVEs to assemble a large data set. Experimental
designs were tested by sub-sampling from this data set prior to training the
surrogate models.
### 2.2 Microstructural features
The microstructural MVEs used in this work were generated using open source
software Neper [37]. Each example is $32^{3}$ with grain sizes varying from 4
to 16 voxels. A total of 6,825 examples were generated. 1,200 of these (12
grain sizes, 100 total random seeds) were generated with uniformly random
crystallographic texture. The remaining MVEs were generated with a randomly
selected $(hkl)$ fiber texture in a randomly selected $<uvw>$ direction.
Furthermore, to avoid the presence of solely sharp textures, orientations were
subsequently diffused by randomly applying rotations with Euler angles $\sim
Unif(-10^{\circ},10^{\circ})$. This randomization ensures that both sharp and
diffuse textures are present in the generated data set.
Three procedures for quantifying microstructural features will be considered
in this work: latent space features from a VAE, features from a novel self-
supervised network, and classical microstructural descriptors. For the
classical microstructural descriptors the grain size, prescribed during
generation, is combined with volume-averaged crystallographic information. The
latter is captured using generalized spherical harmonics (GSH) [38]. The
orientation distribution function at each spatial location, $\bm{x}$, can be
described by a basis expansion
$\displaystyle f_{\bm{x}}\left(\bm{g}\right)=\sum_{\mu,n,l}F_{l\bm{x}}^{\mu
n}\dot{\dot{T}}_{l}^{\mu n}\left(\bm{g}\right),$ (1)
where $\bm{g}$ are the Euler angles, $\mu,n,l$ are indices for multiple sums,
and $F_{l\bm{x}}^{\mu n}$ are the GSH coefficients at $\bm{x}$. An expansion
consisting of nine total terms was used which has been shown to be sufficient
for similar quantitative tasks for FCC materials [20]. $\dot{\dot{T}}_{l}^{\mu
n}$ are the corresponding GSH basis functions. Both basis weights and
functions are complex valued. The orientation distribution function over a
volume can be obtained by simply computing the mean over all spatial locations
of the individual basis coefficients. As this representation is complex in
nature, real and imaginary components are taken and concatenated together with
grain size information to define a “classic” descriptor
$\bm{z}\in\mathcal{R}^{18}$.
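A sketch of how such a descriptor could be assembled is given below; `gsh_coefficients` is a hypothetical, user-supplied helper (for example wrapping a texture library) that returns the retained complex GSH coefficients for a single orientation, and the exact set of coefficients kept, and hence the final vector length, follows the authors' choice of nine terms.

```python
import numpy as np

def classic_descriptor(euler_angles, grain_size, gsh_coefficients):
    """Volume-average per-voxel GSH coefficients, split them into real and
    imaginary parts, and append the grain size (illustrative sketch only).

    euler_angles: (32, 32, 32, 3) array of Bunge Euler angles
    gsh_coefficients: hypothetical callable mapping one orientation to its
        retained complex GSH coefficients
    """
    coeffs = np.stack([gsh_coefficients(g) for g in euler_angles.reshape(-1, 3)])
    f_bar = coeffs.mean(axis=0)  # volume-averaged ODF coefficients
    return np.concatenate([f_bar.real, f_bar.imag, [grain_size]])
```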
VAEs construct feature vectors via a non-linear dimensionality reduction
mapping [39]. Shown in Fig. 2 is a schematic of inputs, outputs, and the latent
representation. The network consists of an encoder, which maps input $\bm{x}$
to a latent $\bm{z}$, and a decoder which maps back to the original modality
$\hat{\bm{x}}$. The network is trained in an unsupervised fashion to minimize
the discrepancy between input and outputs e.g. minimize the reconstruction
loss. While similar to autoencoders, VAEs also include additional regularizing
constraints on the latent space. First, stochasticity is introduced in the
model by having the encoder predict both a mean latent representation and a
corresponding variance measure. Next, the “reparameterization trick” is used to
sample from this latent distribution; white noise is scaled by the network’s
variance prediction and added to the predicted mean. This perturbed latent
vector is mapped back to $\hat{\bm{x}}$ and the mean and variance parameters
contribute to an additional Kullback–Leibler (KL) divergence loss term that
encourages the latent space to be a standard normal distribution. This effort
is needed to ensure that the latent space is continuous. Consider that small
latent perturbations in an AE may produce a nonsensical output e.g. values may
not be interpolated in the latent space. Since the VAE uses a distribution to
represent the latent space it intrinsically includes small perturbations in
the training procedure e.g. the $\bm{x}\rightarrow\bm{z}$ mapping never
produces the same $\bm{z}$ but nonetheless close $\bm{z}$’s. This ensures that
the latent is continuous and may indeed be interpolated. This is important
since this work seeks to use distance measures for experimental design and,
hence, it is important that the latent space be continuous in some fashion.
Furthermore, a variation of the VAE, $\beta$-VAE, was used to balance the KL
and reconstruction loss terms [40]. For extracting features a fairly large
latent space was necessary to produce reasonable reconstructions e.g.
$\bm{z}\in\mathcal{R}^{512}$. Note that the original dimensionality of each
MVE is $32^{3}\cdot 3=98,304$ and so this still represents a compression ratio
of $\sim 200$. The high dimensionality of the latent space may be valuable in
terms of quantifying localized spatial features; however, this may present
problems when constructing designs (high dimensional optimization problem).
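A minimal TensorFlow sketch of the two ingredients described above, the reparameterization step and the $\beta$-weighted KL term, is given below; the encoder and decoder layers, the choice of reconstruction loss, and the tensor shapes are assumptions rather than the authors' exact implementation.

```python
import tensorflow as tf

def reparameterize(z_mean, z_log_var):
    """Reparameterization trick: scale white noise by the predicted standard
    deviation and add it to the predicted mean, keeping sampling differentiable."""
    eps = tf.random.normal(tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * eps

def beta_vae_loss(x, x_hat, z_mean, z_log_var, beta=1.0):
    """Voxel-wise reconstruction loss plus beta-weighted KL(q(z|x) || N(0, I))."""
    recon = tf.reduce_mean(
        tf.reduce_sum(tf.square(x - x_hat), axis=[1, 2, 3, 4]))  # (batch, 32, 32, 32, 3) inputs
    kl = -0.5 * tf.reduce_mean(
        tf.reduce_sum(1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
    return recon + beta * kl
```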
Figure 2: VAE schematic for extracting localized MVE features.
Figure 3: Self-supervised approach for 3D MVE feature extraction and network training. Subsampling of MVEs enables self-supervised learning and, critically, encourages the learning of statistical descriptors.
Finally a self-supervised alternative is proposed for automatically extracting
relevant microstructural statistics from MVEs. Materials at most length scales
are inherently stochastic. Consider, for instance, that two micrographs from
the same material will have different appearance but statistically may be
identical. A VAE may not capture this subtlety as it is trained to optimize a
reconstruction and, hence, explicitly captures localized features. As an
example consider that two nearly identical micrographs, offset by a small
horizontal translation, will produce two different $\bm{z}$’s when passed
through the encoder network. Hence, there is a need to establish an automated
way to infer summary statistics which can enable pair-wise similarity
measurement across stochastic MVEs. Interestingly, with a few exceptions,
there are very few works in the materials community focused on using ML for
these kinds of tasks [41]. Perhaps this is because most ML vision models are
focused on natural images, where localization is important, and, hence, there
are fewer models suitable for the stochastic materials problem. Perhaps the
closest related task occurs in the field of texture analysis (not
crystallographic texture but image texture) [42]. Interestingly there is a
similar implementation, focused on the extraction of statistical features from
a CNN, for remote sensing imagery [43]. In biological settings there are
also extensive applications of “twin” networks for measuring similarity
between micrographs [44]. In summary, there is a critical need for specialized
ML methods applicable and tailored for extraction of statistical features from
stochastic materials data.
Contrastive learning seeks to establish ML models via training on pair-wise
similarity measures [45, 46, 47, 48, 49]. This is valuable for the
experimental design problem as all design criteria rely on distance metrics. A
schematic of the self-supervised contrastive learning procedure used in this
work is shown in Fig. 3. During training a $32^{3}$ MVE is sub-sampled to
produce two $16^{3}$ MVEs; an anchor ($\bm{x}_{a}$) and a positive
($\bm{x}_{p}$) example. Then a negative ($\bm{x}_{n}$) $16^{3}$ MVE example is
randomly cropped from another random training example. Since the anchor and
positive examples come from the same parent MVE their statistics should be
similar. This of course assumes stationary behavior and that the $16^{3}$ MVE
is representative. Likewise the statistics of the anchor and negative example
will, on average, be different. Statistics are generated via training of a
single NN which maps each example into a latent space vector. In the latent
space a contrastive loss is defined which encourages the network to place the
anchor and positive examples close together and the negative example far
apart. The training loss can be defined as [49],
$\displaystyle\mathcal{L}=\max(0,d(\bm{x}_{a},\bm{x}_{p})-d(\bm{x}_{a},\bm{x}_{n})+1/2),$
(2)
where $d(\cdot,\cdot)$ is the Euclidean distance. The constant $1/2$ is
referred to as the margin and controls the spacing between points and, hence,
the overall scale of the latent space. This loss drives the distance between
anchor and positive examples to be as small as possible and, conversely, the
distance between the anchor and negative example to be large.
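One way to realize this sampling scheme and the loss of Eq. (2) is sketched below (TensorFlow, illustrative only); the MVEs are assumed to be stored as $(32,32,32,3)$ Euler-angle arrays, the encoder that maps crops to latent vectors is omitted, and the crop size and margin follow the text.

```python
import tensorflow as tf

def random_crop_16(mve):
    """Randomly crop a 16^3 sub-volume from a (32, 32, 32, 3) MVE."""
    return tf.image.random_crop(mve, size=(16, 16, 16, 3))

def make_triplet(parent_mve, other_mve):
    """Anchor and positive are crops of the same parent MVE; the negative
    is a crop of a different, randomly chosen MVE."""
    return random_crop_16(parent_mve), random_crop_16(parent_mve), random_crop_16(other_mve)

def triplet_loss(z_a, z_p, z_n, margin=0.5):
    """Eq. (2): hinge on the gap between anchor-positive and anchor-negative
    Euclidean distances computed on the latent encodings of the crops."""
    d_ap = tf.norm(z_a - z_p, axis=-1)
    d_an = tf.norm(z_a - z_n, axis=-1)
    return tf.reduce_mean(tf.maximum(0.0, d_ap - d_an + margin))
```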
The specific network architecture used for extracting microstructural
statistics is shown in Fig. 4. Using domain knowledge, that both local spatial
information and volume averaged texture information are important, the network
has been tailored to capture these two considerations explicitly. In parallel
the network (1) performs a series of CNN operations to construct spatial
feature maps and (2) uses an MLP to map the input Euler angle representation
to “texture features”. Spatial averaging is performed on the texture features
to produce a global crystallographic feature vector. Similarly, feature map
statistics are produced by computing the mean and variance over all spatial
indices. These three vectors (mean of spatial-orientation feature maps,
variance of spatial-orientation feature maps, and volume averaged orientation
features) are concatenated and passed through an MLP to mix the information
prior to outputting a $\bm{z}\in\mathcal{R}^{16}$ latent representation.
Originally we did not consider the spatial mean/variance pooling operations
but found that model performance improved when they were added. Furthermore, we
originally did not include a separate texture-only information stream but found
that adding it also improved performance. It is suspected that this is because the network is
not tasked with “doing everything” at once and introduction of this domain
knowledge makes training easier.
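A condensed sketch of the pooling-and-mixing stage described above (and shown in Fig. 4) follows; the convolutional and per-voxel MLP branches that produce the two feature tensors, as well as the mixer widths, are placeholders rather than the authors' exact layer sizes.

```python
import tensorflow as tf
from tensorflow.keras import layers

# final MLP that mixes the pooled statistics into a 16-dimensional latent vector
mixer = tf.keras.Sequential([layers.Dense(64, activation="relu"), layers.Dense(16)])

def statistics_head(feature_maps, texture_features):
    """Pool spatial-orientation feature maps (mean and variance over space)
    and per-voxel texture features (mean over space), then mix with an MLP.

    feature_maps:     (batch, x, y, z, c1) CNN spatial-orientation features
    texture_features: (batch, x, y, z, c2) per-voxel MLP orientation features
    """
    mean_fm = tf.reduce_mean(feature_maps, axis=[1, 2, 3])
    var_fm = tf.math.reduce_variance(feature_maps, axis=[1, 2, 3])
    mean_tex = tf.reduce_mean(texture_features, axis=[1, 2, 3])
    return mixer(tf.concat([mean_fm, var_fm, mean_tex], axis=-1))
```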
Figure 4: Novelty of the feature extraction procedure is that it intrinsically
operates on image statistics via construction of the network. Mean and
variance of spatial-orientation feature maps are combined with volume averaged
orientation features prior to passing through the final MLP. This encourages
the network to separately construct orientation and spatial-orientation
statistics prior to mixing in the final MLP.
### 2.3 Design of experiments
Three key space-filling designs will be considered in this work: maximin
distance design [26], maximum projection design [33], and a data twinning
design [34]. The former two are solely space-filling designs seeking to spread
points uniformly in space with some subtle differences. The latter criterion is
unique in that it seeks to balance a space-filling objective while also
optimizing a distributional objective. The distributional objective seeks to
identify design points from a candidate data set which collectively emulate
the probability distribution representative of the original data set e.g. the
empirical distribution.
Figure 5: Maximum projection design criterion ensures good spreading in all
possible subspace projections. This ensures that even when unknown unimportant
features are present the resulting design still exhibits desirable space-
filling properties in the effective lower dimensional space.
The FE simulations performed on MVEs are deterministic in nature; therefore, it
is assumed that similar MVEs will produce similar responses. The idea of
space-filling designs is therefore to spread points apart in the input space
as efficiently as possible. Close points are undesirable as they may produce
similar simulation results, thereby wasting computational resources. The
maximin distance design is tasked with identifying a design which maximizes
the minimum pair-wise distance across all points [50, 26]. Specifically,
$\displaystyle\max_{\mathcal{D}}\min_{i,j}d(\bm{x}_{i},\bm{x}_{j}),$ (3)
where $\mathcal{D}$ is the constructed design (collection of $\bm{x}$’s) and
$d$ is the Euclidean distance. Here we adopt a conditional maximin (cMm)
approach where a greedy algorithm is used to select points from a candidate
set one at a time. First, an initial random point is selected; subsequent
points are then added one at a time, each chosen to maximize its minimum
distance to the points already selected. This is sub-optimal but is relatively
simple to implement. Note that due to its construction this criterion tends to push points towards corners in
the design space.
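A minimal numpy sketch of this greedy conditional maximin selection is given below; the candidate feature array and design size are placeholders, and ties and duplicate candidates are not treated specially.

```python
import numpy as np

def greedy_maximin(candidates, n_design, seed=0):
    """Conditional maximin: start from a random candidate, then repeatedly add
    the candidate whose minimum distance to the current design is largest."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(candidates)))]
    # distance from every candidate to its nearest already-chosen point
    d_min = np.linalg.norm(candidates - candidates[chosen[0]], axis=1)
    while len(chosen) < n_design:
        nxt = int(np.argmax(d_min))
        chosen.append(nxt)
        d_min = np.minimum(d_min, np.linalg.norm(candidates - candidates[nxt], axis=1))
    return chosen

features = np.random.rand(6825, 16)  # e.g. contrastive latent vectors
design_idx = greedy_maximin(features, n_design=100)
```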
One possible deficiency of the maximin design is that it does not account for
projections of the design onto subspaces e.g. x-y-z onto the x-y plane. In
experimental design there is a concept of effect sparsity which assumes that
some features or settings may be unimportant [25]. For instance, consider a
maximin design in three dimensions where, unknown to the scientist or
engineer, the third variable is unimportant. The three dimensional problem
collapses to a two dimensional problem and there is no guarantee that design
points are space-filling in this projection e.g. there may be overlap. This
deficiency was addressed by the maximum projection (maxPro) design [33],
$\displaystyle\min_{\mathcal{D}}\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}\frac{1}{\prod_{l=1}^{p}d(x_{il},x_{jl})^{2}},$
(4)
where $p$ is the dimensionality of the input and $x_{il}$ denotes the $l$’th coordinate of $\bm{x}_{i}$. This objective function
includes a product across all dimension-wise distances which penalizes overlap
in all possible projections of the data. An example three dimensional design
and projection schematic is shown in Fig. 5.
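For reference, a direct (unoptimized) evaluation of the maxPro objective in Eq. (4) can be sketched as follows; the small jitter guarding against coincident coordinates is an assumption, and for high-dimensional features a log-domain formulation would be preferable numerically.

```python
import numpy as np

def maxpro_criterion(design, eps=1e-12):
    """Eq. (4): sum over point pairs of the reciprocal of the product of squared
    coordinate-wise distances; smaller values indicate better spread in every
    projection of the design."""
    design = np.asarray(design, dtype=float)
    n = len(design)
    total = 0.0
    for i in range(n - 1):
        diff2 = (design[i + 1:] - design[i]) ** 2 + eps  # shape (n - i - 1, p)
        total += np.sum(1.0 / np.prod(diff2, axis=1))
    return total
```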
Figure 6: Example designs created from a three dimensional candidate data set
consisting of 1000 points from $Unif(-5,5)$ and 500 points from
$\mathcal{N}(0,1)$. The three dimensional design and one two-dimensional
projection is shown.
Finally, the third design criterion considered is a data twinning approach [34,
51]. This approach seeks to subsample the candidate data set and identify
points which ensure both good coverage of the input space and emulation of
the parent data set density. The optimization problem is beyond the scope of
this work and readers are referred to the original works. In the context of
the present problem data twinning may be useful if (1) the data set is non-
uniform, e.g. there are more examples of one crystallographic texture than
another, and (2) the neural network can be tailored to perform better in
these possible high density regions. The latter point is important because the
surrogates will be evaluated against a validation dataset; if the validation
data set is biased towards certain microstructures then it is beneficial to
tailor the training data set and model to be accurate on those structures.
This latter point is highly problem specific and even philosophical. The end
user must decide if they prefer uniform coverage or preference for more likely
structures. Furthermore, it is not clear if the second point is true for the
micromechanical surrogate model considered here. GPs behave as interpolators
and, hence, having a higher density of points in high density regions will
selectively improve model performance near this space. CNNs may not
necessarily behave this way. Nonetheless this design criterion will be
considered for evaluation.
A three dimensional example illustrating these design strategies is shown in
Fig. 6. Three data set sizes are considered to capture each design criteria’s
sensitivity to size. Three dimensional scatter plots and a two dimensional x-y
projection are shown to demonstrate the effects of the effect sparsity
principle. Visually it is clear that random selection generally does a rather poor job.
Large samples are needed to eventually obtain a uniform distribution in both
three dimensions and the subspace projection. While cMm fills the 3D space
efficiently, there is significant overlap in points when projected to
the x-y plane. This is most pronounced in the 10-point design where the
criterion pushes nearly all points into corners. Consider that this problem
compounds even further in higher dimensions; there are $2^{d}$ corners in a
$d$-dimensional space. The maxPro designs perform uniformly well in all cases
with particularly good performance in the x-y projection. Finally, twinning
performs as expected by balancing spreading of points along with capturing the
distribution of the original candidate data set.
### 2.4 Surrogate model architecture
A U-net architecture was used to emulate the micromechanics FE model and
predict all six components of the stress tensor. We found that predicting all
six components was more effective than simply predicting the von Mises stress.
This is possibly due to correlations between the outputs, which provide the
network with additional information and constraints during training. Inputs to
the model are three-dimensional microstructural representations, stored as a
$[32,32,32,3]$ array with orientation information encoded via Euler angles.
The U-net architecture consists of three resolution depths
($32^{3},16^{3},8^{3}$) with each resolution using ($64,128,256$) filters. At
each resolution, during both down-sampling and up-sampling, there are four
layers of three-dimensional CNNs producing feature maps. Residual connections
are used throughout to enable gradient flow [52]. Batch normalization and
dropout layers ($p=0.05$) are also used throughout. An MLP with dimensions
$[256,64,16,6]$ is used to map feature channels at the end of the U-net to the
six output stress components. Finally, an $L_{2}$ penalty weight of 0.0001 was
used on all weights and biases throughout the model. We found this to be
imperative to avoid over-fitting, especially for small data set sizes.
The model was implemented in Tensorflow and trained using an Adam optimizer
with default parameters, a learning rate of $10^{-3}$, and batch size of 64
[53]. Training was performed on 80GB Nvidia A100 GPUs for a total of 200
epochs.
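A simplified, runnable Keras sketch of the model construction and training configuration stated above is given below; the single residual block stands in for the full three-level U-net, and the commented `fit` call uses hypothetical data array names.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

l2 = regularizers.l2(1e-4)  # L2 penalty of 1e-4 on all weights and biases

def res_block(x, filters):
    """One residual block of 3D convolutions (stand-in for the four-layer
    blocks used at each U-net resolution)."""
    skip = layers.Conv3D(filters, 1, padding="same", kernel_regularizer=l2)(x)
    y = layers.Conv3D(filters, 3, padding="same", activation="relu",
                      kernel_regularizer=l2, bias_regularizer=l2)(x)
    y = layers.BatchNormalization()(y)
    y = layers.Dropout(0.05)(y)
    return layers.add([skip, y])

inp = layers.Input(shape=(32, 32, 32, 3))           # Euler-angle MVE
x = res_block(inp, 64)                              # full model also down/up-samples
out = layers.Dense(6, kernel_regularizer=l2, bias_regularizer=l2)(x)  # six stress components per voxel

model = tf.keras.Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
# model.fit(train_mves, train_stresses, batch_size=64, epochs=200,
#           validation_data=(val_mves, val_stresses))  # hypothetical data arrays
```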
## 3 Results
### 3.1 Microstructural representation
In Fig. 7 grain size histograms are shown for designs constructed using the
three microstructure feature extraction methods (VAE, contrastive,
microstructural) and three design criteria (cMm, maxPro, twin). Interestingly,
across all designs contrastive features produce nearly uniform grain size
distributions. cMm and maxPro designs with microstructural features seem to
prefer smaller grain sizes and neglect MVEs with preferred crystallographic
texture. Conversely, cMm and maxPro designs on the VAE features are heavily
weighted to neglect small grain size MVEs. We suspect that this disparity may
be due to the dimensionality of the problem; contrastive features are
16-dimensional, microstructural 18-dimensional, and VAE features
512-dimensional. The latter was essential as the VAE is voxel-by-voxel trained
to reconstruct an input MVE from the latent space. Good reconstruction
performance, which needs to capture localized features throughout the volume,
necessitated a high dimensional latent space. Hence, for designs such as cMm
and maxPro, which seek to “push” points away from one another, larger
dimensionality representations will have a tendency to push points towards
boundaries. In Fig. 8 several two-dimensional projections of the VAE latent
space are shown. Markers are colored according to their grain size. Visually
it appears that small grain MVEs are represented closer to the origin in the
latent space. This explains why both cMm and maxPro designs, which are
designed to “push” points towards boundaries, would under-represent fine grain
sizes. The twin design, however, balances both a space-filling objective and a
distributional objective so that the design exhibits similar statistical
properties as the full data set. Hence, the twin design criteria does not
neglect fine grain sizes and proportionally represents untextured/textured
examples.
Figure 7: Distribution of grain sizes selected by each combination of design (cMm, maxPro, twin) and microstructural descriptors (contrastive, VAE, microstructural).
Figure 8: Latent space projections produced by the VAE. All data is shown but then data corresponding to uniformly textured material is filled with colors corresponding to grain size (centroid also includes labels 4, 10, 16).
An important diagnostic to assess the VAE’s behavior is to test the continuity
of the latent space. Recall that in a VAE there is both a reconstruction loss
and a KL term which encourages the latent space to be Gaussian and continuous.
This is important since designs are being constructed using latent distances
and, hence, obtaining a latent space where distances are meaningful is
critical. Shown in Fig. 9 are two dimensional reconstructions from the latent
space. The first column corresponds to a reconstruction of examples from the
validation data set. Each column after that is a reconstruction obtained from
a linear mapping of the latent vector
$\left\\{1.2\cdot\bm{z},\ldots,2.0\cdot\bm{z}\right\\}$. Each row is a
different example. These figures confirm that indeed the VAE latent space is
continuous with localized features being continuously manipulatable via
perturbation of the latent representation.
Figure 9: First column represents a two dimensional slice of a reconstructed
$\hat{\bm{x}}$ from the validation data set. All other columns are
interpolations from the latent space corresponding to $\alpha\cdot\bm{z}$
where $\alpha$ was varied from 1.2 to 2 linearly. These results indicate that
the latent representation is indeed continuous and captures localized
microstructural features.
The contrastive latent space, shown in Fig. 10, is by comparison drastically
different. All data is shown but untextured examples are colored according to
grain size. Visually it is clear that the latent space representation exhibits
strong structural patterns with respect to both crystallographic texture and
grain size; some regions contain exclusively untextured examples and there is a
continuous gradation of grain size. The distance matrix for all untextured
examples, shown in Fig. 11, further demonstrates the network’s ability to
discriminate across grain sizes. Note that for large grain sizes, about 12
voxels and above, the network cannot discriminate. We suspect that this is
because the $16^{3}$ cropped example fed to the contrastive network is no
longer representative for relatively large grain sizes. For small grain sizes,
for instance the block corresponding to all 4-on-4 comparisons, distances are
small, indicating that the network clusters all examples together, suggesting
that it is indeed learning statistical descriptors.
Figure 10: Latent space projections of the self-supervised microstructure statistics model. All data is shown but then data corresponding to uniformly textured material is filled with colors corresponding to grain size (centroid also includes labels 4, 10, 16). Clear structure is observed in the latent space.
Figure 11: Distance matrix considering only uniformly random textured MVEs and comparing across grain sizes from 4 to 16 voxels.
The crystallographic texture sensitivity of the contrastive features is shown
in Fig. 12. Here random examples are sampled from the validation data set and
then three of the closest and three of the most distant MVEs are identified
using latent contrastive distances. Only data corresponding to examples with a
grain size of 4 voxels was considered. The same exercise is performed using
only GSH coefficients for comparison. Results indicate that the contrastive
features do indeed capture texture similarity accurately. In a few instances
there are visual anomalies but it is important to consider that the GSH
features only capture texture whereas the contrastive features capture both
texture and structural information. Nonetheless, these results indicate that
the established self-supervised feature extraction network does capture both
spatial and crystallographic features automatically.
Figure 12: $(100)$ pole figures corresponding to: random example from the data
set, the most similar instances, and most dissimilar instances. Similarity is
measured using both the contrastive latent vector and GSH features. Identical
color bar limits are used throughout.
### 3.2 Surrogate training results
Summary results for the MVE-feature/design parametric study are shown in Fig.
13. Across all possible combinations the VAE-maxPro combination scored the
highest improvement at 8.8% for the case where only 25% of the total data set
size was used. cMm with contrastive features appears to be systematically most
robust. Interestingly the performance of the VAE-maxPro combination
deteriorates completely when 50% of the total data set size was used.
Similarly, the contrastive/maxPro combination also suffers a significant
deterioration at 50% data set use; however, microstructural features still
enjoy a moderate boost. We suspect that this may be due to the difficulty in
optimizing designs. Recall that the design criterion is represented by a scalar
valued objective function; constructing the design requires solving a high
dimensional optimization problem. The maxPro objective function, Eqn. 4, is
rather challenging to optimize as it contains $n^{2}-n$ terms with each term a
product of $p$ terms. In the VAE case $p=512$ and for 50% ($n=3413$) there are
over 11-million terms. Note that some of this complexity is reduced by using a
greedy optimization strategy but, nonetheless, the optimization problem
remains challenging. However, the contrastive ($p=16$) and microstructural
($p=18$) features are of roughly the same dimension and yet only the contrastive
features deteriorate at 50% data set size using the maxPro criterion. It may be
possible that this is because the contrastive structural-orientation features
are coupled whereas for the constructed microstructural feature vector they
are not. The maxPro criterion considers distances in projected spaces, which
will make optimization more challenging if features are entangled. The
decoupled (grain size and crystallographic texture) microstructural features
are less susceptible to this effect.
Somewhat remarkably the data twinning design performed nearly identically
across all considered MVE features with a nearly constant 5% boost. This may
be because the design criterion has a balanced objective function which also
considers the underlying data distribution. If the problem dimensionality is
large, and the desired design size small, then cMm and maxPro can have a
tendency to push points towards extreme “corners”, producing designs which
perform worse than random designs. It is suspected that the data twinning
designs mitigate against this by penalizing designs which do not emulate the
original data distribution. This also may be why a slight decline in
performance is observed at 50%; eventually random sampling will begin to
represent the underlying data distribution and, hence, the twinning design’s
benefit will decay.
Finally, consider that for all design-feature combinations, with the
exception of maxPro/cMm and microstructural features, there is no
statistically significant boost at 10% of the total data set size. However,
for cMm/maxPro designs and microstructural features there is a statistically
significant decline in performance relative to random designs. Previously it
was hypothesized that for large data set sizes the entangling of features may
make optimization of the design criteria more difficult. In the case of small
data set sizes we argue that the converse may hold true; entanglement aids in
avoiding “corner biased” designs. When features are independent the
space-filling criterion may bias the data set towards certain grain size and texture
corners of the 18-dimensional microstructural feature space. Increases in the
data set size remedy this behavior and result in monotonically increasing
performance.
Figure 13: Parametric results for all considered microstructural features and
design criteria. Validation loss improvement is compared against models
trained from randomly selected training designs. For each feature/design
combination 10 designs were used to train 10 models. Error bars correspond to
one standard deviation computed via bootstrapping.
Validation loss curves for a few select feature and design combinations are
shown in Fig. 14. Note that the loss here corresponds to the mean of centered
and normalized components of the stress tensor e.g. the value is a non-
dimensional quantity. Visually it is clear that at certain data set sizes there
is indeed a significant improvement in surrogate model performance, as
summarized in Fig. 13. The loss curves at 10% data set size appear to be
somewhat unstable, exhibiting a great deal of variance. This reveals that for
the specific architecture used the 10% (about 600 simulations) size may be at
the very limit of the model’s training stability, which may explain some of the
previously discussed anomalous results
(worse than random design results at 10%).
Figure 14: Validation loss curves for a few selected designs and features. All
ten curves shown to demonstrate repeatability.
## 4 Discussion
A number of validation example MVEs, FE results, and surrogate results are
shown in Fig. 15. For visual comparison surrogate fields are shown as absolute
percent error relative to the FE results. While there are subtle differences
in responses across the various designs, these results do not immediately shed
any light on the impact of the features and design criteria on surrogate performance.
To further explore any potential insights, the best 18 and worst 18 performing
MVEs from a contrastive cMm with 25% data are shown in Fig. 16. Remarkably,
the best and worst performing structures all visually appear to be extremely
similar.
Figure 15: Random examples from validation data: IPF map, $\sigma_{33}$ of the stress tensor, and absolute error maps for four designs (using contrastive features and 25% of the dataset). Examples are selected from one of the ten surrogate realizations with validation loss closest to the mean validation loss.
Figure 16: Example microstructures with best (top) and worst (bottom) volume averaged error. Provided number is the volume averaged $\sigma_{33}$ standard error.
Surrogate model performance trends as a function of microstructural features
are shown in Fig. 17. The mean standard error and the 1 and 99-percentile bounds are
shown for all grain sizes and textured/untextured structures. It is clear that
for all design criteria there are far more extreme value prediction MVEs
exhibiting both very large and small errors at large grain sizes. This
indicates that the “tails” of the surrogates’ performance are much wider for
large grain sizes. It also appears that this effect is even more pronounced
for textured materials. A possible explanation is that for small grain size
the MVEs are behaving as representative volume elements (RVEs) which capture
aggregate material behavior well. Furthermore, RVEs may intrinsically contain
more information; a single volume contains hundreds of pairs of small grain
neighbor combinations. In large grain size MVEs each example will contain far
fewer of these neighbor pairs. This is important because the ML surrogate must
be exposed to a diversity of structures to effectively learn key physical
relationships. Texture adds another degree of complexity that results in more
extreme value performance responses. Not only do spatial neighborhood
relationships need to be learned but also the response as a function of
specific aggregate crystallographic textures.
Figure 17: 1 and 99 percentiles and mean volume-averaged standard error for
various designs using contrastive features and 25% of the data set. Data is
shown as a function of grain size and texture/untextured. Results indicate
that while mean behavior is similar across all microstructures large grain
textured examples tend to exhibit a fatter tail of predictive capacity.
The presented results demonstrate that micromechanical surrogate models
trained using well designed MVEs can enjoy a moderate boost in overall
performance when compared against randomly selected training MVEs. However,
the parametric study over several MVE feature descriptors and design criteria
revealed that net performance is dependent on many factors and these are
difficult to assess a priori. For the elastic constitutive model considered
here the “cost” of this uncertainty is rather low because evaluation of the FE
model is computationally rather inexpensive. However, for more advanced
inelastic constitutive models significant computational cost is incurred when
evaluating the model and, hence, it is imperative to identify designs that
will not result in worse-than-random performance.
A key consideration when constructing designs is the optimization complexity
for a specific data set size and problem dimensionality. Results indicated
that while maxPro performed well for 25% data set sizes for both VAE and
contrastive features, performance deteriorated completely at 50% data set size
resulting in no improvement, or even worse performance, relative to random designs. Our hypothesis is
that this is because construction of a design requires solving an optimization
problem and, hence, the bottleneck in the process is the design optimization
step. In the VAE case both the relatively small data set size and the
dimensionality of the problem ($p=512$) cause issues. For problems with much
larger data set sizes some of these issues may be alleviated.
While the cMm design is inferior to maxPro with respect to projection quality,
the cMm objective is much easier to optimize. Furthermore, optimization is
even more feasible for small MVE latent state representations. This is likely
why the cMm-contrastive approach yielded consistently good results
(monotonically increasing performance) with increasing data set size. While
the microstructural-cMm combination also yielded monotonically increasing
performance, in the case where surrogates were trained using 10% of the data set
size these designs yielded worse-than-random performance. It is suspected that
this may be because of the decoupled nature of the microstructural features
(grain size and volume averaged texture). Contrastive features are capable of
capturing richer, coupled features which may offer more opportunity for
ensuring diversity. Finally, the data twinning approach, while robust against
poorer than random performance, only provides moderate boosts for moderate
data set sizes. The robustness is believed to be due to the tendency to
construct designs to emulate the original data set’s distribution and, hence,
mitigate against designs dominated by anomalies e.g. clustering in high
dimensional corners. However, this comes at the cost of locating points close
to one another which minimizes overall diversity.
Based on these results, for the specific surrogate architecture and
micromechanical model considered, the most promising feature/design
combination is the contrastive conditional maximin design. While a rather
moderate 8% boost in validation performance was achieved this may be even
better for other domain problems and larger data set sizes. Consider a case
where, instead of 6,825 candidate MVE structures, there are 68,250 available
and a total budget of 20,000 simulations to be performed. The
larger candidate set size would provide more opportunity to diversify the
design and a more expansive data set would allow for more efficient packing of
the feature space. Furthermore, based on the results from Fig. 17 it is clear
that more examples are needed for the less representative large grain textured
MVEs. Finally, it should be noted much of these results
## 5 Conclusions
The proposed feature-extraction and experimental design strategy for
establishing micromechanics surrogates has been demonstrated to be effective.
The strategy consists of two key components: (1) a microstructural feature for
computing a distance or similarity metric and (2) a design strategy for
distributing points in the microstructural feature space. A parametric study
was performed over three design strategies and three microstructural features.
Results show that for the considered problem a reduction of up to 8% in the
validation loss is achieved when compared to random selection of training
examples. Trends indicate that for bigger data sets the benefits may be even
larger. This is rationalized by the notion that high-dimensional spaces are
difficult to fill, and hence, larger designs will facilitate more efficient
sampling of the microstructural feature space. This in turn ensures a more
diverse training data set which greatly improves model performance. In
addition to demonstrating this result this work also establishes a novel self-
supervised contrastive feature extraction methodology for computing
microstructural summary statistics.
## 6 Data availability
Data will be made available from the authors upon reasonable request.
## Acknowledgements
Research was sponsored by the US Department of Energy, Office of Energy
Efficiency and Renewable Energy (EERE), Advanced Manufacturing Office, and
Advanced Materials and Manufacturing Technologies Office (AMMTO) under
contract DE-AC05-00OR22725 with UT-Battelle LLC and performed in part at
the Oak Ridge National Laboratory’s Manufacturing Demonstration Facility, an
Office of Energy Efficiency and Renewable Energy user facility. All the
authors would like to acknowledge the support of the HPC4Mtls program.
|
Hadron Collider Physics Symposium 2011. Laboratoire de l’Accélérateur Linéaire, Orsay, France
# Photon polarisation in $b\rightarrow s\gamma$ using $B\rightarrow K^{*}e^{+}e^{-}$ at
LHCb
Michelle Nicol (on behalf of the LHCb collaboration),<EMAIL_ADDRESS>
###### Abstract
The $b\rightarrow s\gamma$ transition proceeds through flavour changing
neutral currents, and thus is particularly sensitive to the effects of new
physics. An overview of the method to measure the photon polarisation at the
LHCb experiment via an angular analysis of $B\rightarrow K^{*}e^{+}e^{-}$ at
low $q^{2}$ is presented. The status of the $B\rightarrow K^{*}\mu^{+}\mu^{-}$
analysis with 309 pb-1 of $pp$ collisions at $\sqrt{s}$=7 TeV at LHCb is also
given.
## 1 Introduction
Although the branching ratio of $b\rightarrow s\gamma$ has been measured to be
consistent with Standard Model (SM) predictions, new physics could still be
present and detectable through the analysis of details of the decay process.
In particular, the photon from the $b$ quark is predominantly left-handed in the SM,
whereas additional right-handed currents can arise in certain new-physics
models, such as left-right symmetric models or some supersymmetric
models [nonSM]. Access to the polarisation information is available via an
angular analysis of $B\rightarrow K^{*}e^{+}e^{-}$.
Hadronic form factors render theoretical prediction over the whole $q^{2}$
(the dilepton invariant mass squared) range difficult. However, it has been
shown that these uncertainties are controllable at low $q^{2}$, where the
photon term dominates, and certain asymmetries providing information on the
photon polarisation can be formed [RefJ].
## 2 $B\rightarrow K^{*}\mu^{+}\mu^{-}$ status at LHCb
With 309 pb$^{-1}$ of $pp$ collisions at $\sqrt{s}=7$ TeV, collected in three
months during the first half of 2011, the forward-backward asymmetry of the
dilepton system, $A_{FB}$, has been measured [muon] using $B\rightarrow
K^{*}\mu^{+}\mu^{-}$ events (as shown in Fig. 1), along with $F_{L}$, the
$K^{*}$ longitudinal polarisation (Fig. 1), an input required for the photon
polarisation measurement. These observables have been measured to be in
good agreement with SM predictions [SM], implying a SM-like Wilson coefficient
$C_{7}$, but still allowing for the existence of $C_{7}^{\prime}$ (right-handed
currents). As stressed above, the measurement is most sensitive at low
$q^{2}$. It would therefore be preferable to perform the analysis using
electrons. However, experimentally it is more challenging to observe electrons
than muons, primarily because muons provide a very clean signature to trigger
on. With 309 pb$^{-1}$ of LHCb data, $B\rightarrow K^{*}\mu^{+}\mu^{-}$ has been
observed in the $q^{2}$ range 0–2 GeV$^{2}$, as is shown, along with other
$q^{2}$ ranges, in Fig. 2. With the rest of the 2011 data, one can expect to
see a $B\rightarrow K^{*}e^{+}e^{-}$ signal.
Figure 1: $A_{FB}$ and $F_{L}$ as a function of $q^{2}$, as measured at LHCb with
$B\rightarrow K^{*}\mu^{+}\mu^{-}$ [muon]. The SM predictions are given by the
cyan (light) band, and this prediction integrated in the $q^{2}$ bins is
indicated by the purple (dark) regions.
Figure 2: The mass distributions of $B\rightarrow K^{*}\mu^{+}\mu^{-}$ in six
$q^{2}$ bins. The solid line shows a fit with a double-Gaussian signal
component (thin-green line) and an exponential background component (dashed-
red line).
Figure 3: Definition of the angles $\phi$, $\theta_{K}$ and $\theta_{L}$ in
the decay $B\rightarrow K^{*}e^{+}e^{-}$.
## 3 Analysis formalism
$B\rightarrow K^{*}e^{+}e^{-}$ can be uniquely described by four variables:
$q^{2}$ and three angular variables, $\theta_{L}$, $\theta_{K}$ and $\phi$,
(the definitions of which can be seen in Fig. 3). Following the formalism
described in [krug], the differential decay distribution can be written in
terms of these variables as:
$\frac{d\Gamma}{dq^{2}\,d\cos\Theta_{l}\,d\cos\Theta_{K}\,d\phi}=\frac{9}{32\pi}\big[I_{1}(\cos\Theta_{K})+I_{2}(\cos\Theta_{K})\cos 2\Theta_{l}+I_{3}(\cos\Theta_{K})\sin^{2}\Theta_{l}\cos 2\phi+I_{4}(\cos\Theta_{K})\sin 2\Theta_{l}\cos\phi+I_{5}(\cos\Theta_{K})\sin\Theta_{l}\cos\phi+I_{6}(\cos\Theta_{K})\cos\Theta_{l}+I_{7}(\cos\Theta_{K})\sin\Theta_{l}\sin\phi+I_{8}(\cos\Theta_{K})\sin 2\Theta_{l}\sin\phi+I_{9}(\cos\Theta_{K})\sin^{2}\Theta_{l}\sin 2\phi\big]$ (1)
When measuring this rate at LHCb, the 3D angular acceptance,
$\varepsilon\left(\cos\Theta_{l},\cos\Theta_{K},\phi\right)$, must also be
taken into account. It is assumed to be factorisable as the product of
$\varepsilon_{1}$, the acceptance as a function of $\phi$, and
$\varepsilon_{D}$, the acceptance as a function of $\cos\Theta_{K}$ and
$\cos\Theta_{L}$. Furthermore, assuming that $\varepsilon_{1}$ is an even
function, Equation 1 can be simplified by performing the $\phi$ transformation
$\phi\to\phi+\pi$ if $\phi<0$. A similar transformation can be
performed for $\cos\Theta_{L}$. Equation 1 can then be written as:
$\frac{d\Gamma}{dq^{2}\,d\cos\Theta_{l}\,d\cos\Theta_{K}\,d\phi}=\frac{9}{32\pi}\big[I_{1}(\cos\Theta_{K})+I_{2}(\cos\Theta_{K})\cos 2\Theta_{l}+I_{3}(\cos\Theta_{K})\sin^{2}\Theta_{l}\cos 2\phi+I_{9}(\cos\Theta_{K})\sin^{2}\Theta_{l}\sin 2\phi\big]\times\varepsilon_{D}\left(\cos\Theta_{l},\cos\Theta_{K}\right)$ (2)
In order to minimize theoretical uncertainties, it is desirable to measure
ratios of the amplitudes. Neglecting the lepton mass, the remaining $I$ terms in
Equation 2 can be written in terms of three such parameters,
$\mathrm{F_{L}}$, $\mathrm{A_{T}^{(2)}}$, and $\mathrm{A_{Im}}$:
$\begin{split}\mathrm{F_{L}}&=\frac{\mathrm{\left|A_{0}\right|^{2}}}{\mathrm{\left|A_{0}\right|^{2}}+\left|A_{\bot}\right|^{2}+\left|A_{\|}\right|^{2}}\\\
\mathrm{A_{T}^{(2)}}&=\frac{\mathrm{\left|A_{\bot}\right|^{2}-\left|A_{\|}\right|^{2}}}{\mathrm{\left|A_{\bot}\right|^{2}+\left|A_{\|}\right|^{2}}}\\\
\mathrm{A_{Im}}&=\frac{\mathrm{\Im(A^{*}_{\bot L}A_{\bot L})-\Im(A^{*}_{\bot
R}A_{\bot
R})}}{\mathrm{\left|A_{0}\right|^{2}}+\left|A_{\bot}\right|^{2}+\left|A_{\|}\right|^{2}}\end{split}$
(3)
When expressed in terms of the helicity amplitudes, for small real values of
$A_{R}/A_{L}$, one obtains $A_{T}^{(2)}\approx-2\,A_{R}/A_{L}$.
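To make these definitions concrete, the short Python sketch below evaluates $\mathrm{F_{L}}$ and $\mathrm{A_{T}^{(2)}}$ from a set of transversity amplitudes; the amplitude values are arbitrary placeholders for illustration, not LHCb results.

```python
import numpy as np

def observables(A0, Aperp, Apar):
    """F_L and A_T^(2) from complex transversity amplitudes.

    Each amplitude is a length-2 array holding its left- and right-handed
    parts, so |A|^2 sums both helicity contributions.
    """
    a0, aperp, apar = (np.sum(np.abs(A) ** 2) for A in (A0, Aperp, Apar))
    FL = a0 / (a0 + aperp + apar)
    AT2 = (aperp - apar) / (aperp + apar)
    return FL, AT2

# Placeholder amplitudes (illustrative only): [left-handed, right-handed]
A0 = np.array([1.00 + 0.00j, 0.00 + 0.00j])
Aperp = np.array([0.70 + 0.10j, 0.05 + 0.00j])
Apar = np.array([-0.60 + 0.10j, 0.05 + 0.00j])

FL, AT2 = observables(A0, Aperp, Apar)
print(f"F_L = {FL:.3f}, A_T^(2) = {AT2:.3f}")
```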
## 4 $B\rightarrow K^{*}e^{+}e^{-}$ Monte Carlo studies
Although work is ongoing on the analysis of the $B\rightarrow K^{*}e^{+}e^{-}$ data,
and yield predictions from Monte Carlo (MC) have been validated using the
control channel $B\rightarrow K^{*}J/\psi$ with $J/\psi\rightarrow e^{+}e^{-}$,
there is not yet, at the time of this conference, enough data to perform the
analysis or test the fitting procedure. Toy MC studies have therefore been
carried out for this purpose [schune]. 190k signal events were generated using
EvtGen and separated into files containing 250 events each: the yield predicted
by MC studies for 2 fb$^{-1}$ at a centre-of-mass energy of 14 TeV, excluding
effects from LHCb’s high level trigger. By performing the fit on each file, it
is shown that with 200-250 signal events and a signal-to-background ratio of
the order of 1, a precision of 0.2 is attainable on $\mathrm{A_{T}^{(2)}}$,
equivalent to an accuracy of 0.1 on the fraction of wrongly polarised photons.
An example of the fit for one toy MC study can be seen in Fig. 4. The
analysis also demonstrates that the measurements are not sensitive to the
knowledge of the angular acceptance, and hence will not be systematics
limited.
Figure 4: Example of the fit of $\cos\Theta_{L}$, $\cos\Theta_{K}$ and $\phi$
for one toy MC study containing 250 signal events and no background events.
## References
* (1) for example E. Lunghi and J. Matias, J. High Energy Physics, 04, (2007) 058
* (2) Y. Grossman and D. Pirjol, J. High Energy Physics, 06, (2009) 029
* (3) LHCB-CONF-2011-039
* (4) C. Bobeth, G. Hiller and D. van Dyk, J. High Energy Physics, 07, (2010) 098
* (5) F. Kruger and J. Matias, Phys. Rev. D 71, (2005) 094009
* (6) J. LeFrançois, M.H. Schune, LHCb-2009-008
|
# A general epidemic model and its application to mask design considering
different preferences towards masks
Chaoqian Wang and Hamdi Kavak
###### Abstract
While most masks have a limited effect on personal protection, how effective
are they for collective protection? How can collective dynamics inform the
design of masks? In this paper, we assume three
preferences in the population: (i) never wearing a mask; (ii) wearing a mask
if and only if infected; (iii) always wearing a mask. We study the epidemic
transmission in an open system within the Susceptible-Infected-Recovered (SIR)
model framework. We investigate the model with agent-based Monte Carlo
simulation and with mean-field differential equations. Ternary heat
maps show that wearing masks is always beneficial in curbing the spread of the
epidemic. Based on the model, we investigate the potential implications of
different mask designs from the perspective of collective dynamics. The
results show that strengthening the filterability of the mask from the face to
the outside is more effective in most of the parameter space, because it acts
on individuals with both preferences (ii) and (iii). However, when the fraction
of individuals always wearing a mask exceeds a critical point, strengthening
the filterability from the outside to the face becomes more effective, because
the infected individuals become too few to fully exploit the filterability from
their faces to the outside.
Keywords: Epidemic model; Mask; COVID-19; Verification and validation;
Lyapunov function
## 1 Introduction
As the COVID-19 pandemic is ravaging the world, the protection offered by masks is a
topic of interest. A news article published in Nature indicates that wearing a
surgical mask leads to an $11\%$ drop in risk, compared with a $5\%$ drop for cloth masks
[1]. The protective effect of masks on individuals may seem minimal, but it is
also necessary to focus on the protective effect on collectives.
Since Kermack and McKendrick [2] proposed the Susceptible-Infected-Recovered
(SIR) compartment model, a wide variety of epidemic models has been developed.
In the classic SIR model, the population is divided into three
compartments: (i) the susceptible ($S$); (ii) the infected ($I$); (iii) the
recovered ($R$). Through human-to-human contact or self-healing, individuals
flow from one compartment to another. A simple modified version is the SEIR
model, which adds an exposed ($E$) compartment to the SIR model. Recently,
Barlow et al. [3, 4] derived the analytical solutions of the SIR [3] and the
SEIR [4] models.
Researchers, in recent years, have explored additional factors and mechanisms
to the classic epidemic models, such as isolation [5] and vaccination [6, 7,
8, 9, 10, 11]. The dynamics of the epidemic transmission can also be applied
to the information spreading, creating rumor spreading models [12, 13] or the
public opinion dynamics model [14, 15]. From the perspective of verification
and validation, the global stability of this class of nonlinear dynamical
systems is widely studied [16, 17, 18, 19, 20, 21]. In particular, researchers
have proved the global stability of endemic equilibria in various epidemic
models in multigroup populations [19, 20, 21], which are general cases of the
model proposed in this work. A common approach to prove global stability is
constructing a Lyapunov function (not limited to epidemiology, but also widely
applied to other complex systems such as evolutionary dynamics [22, 23]),
which measures the system’s “energy.” If the energy continues to decay, then
the system will stabilize at an equilibrium point.
When it comes to the protective effect of masks, several works [24, 25, 26,
27, 28] are noticed to have emerged in the COVID-19 period after 2020. Li et
al. [24] treated whether people wear masks or not as an evolutionary game.
Gondim [25] considered masks in the SEIR model and validated the model by
real-world data. Auger and Moussaoui [26] studied the confinement’s release
threshold, taking the masks into account. Lasisi and Adeyemo [27] modeled the
effect of wearing masks on COVID-19 infection dynamics. Han et al. [28]
investigated the effect of three different preferences on wearing a mask.
Based on the existing literature, we find that previous works on masks have
three shortcomings. First, although they classify the population into three
categories with different preferences on wearing masks, no work treats the
fractions of the three categories as independent variables. They set only two
variables as the fractions of two categories, with the remaining category’s
fraction being one minus these two variables. As a result, the relative
proportions of the other two categories cannot be held constant when
investigating the effect of the proportion of a certain
category. Second, previous work did not carry out a complete analysis of the
stability of their models, which makes verification and validation challenging.
Third, previous work focuses only on the effect of masks on epidemic spreading;
none considers applying the epidemic models to the design of masks itself.
This paper builds a general epidemic model in an open system considering three
different preferences on wearing masks. We start from a set of agent-based
rules and use mean-field analysis to verify and validate the model. In
addition to filling the gaps of previous work by treating the three preferences
as independent variables and providing a global stability analysis, we explore
[29, 30] the effect of different preferences towards wearing masks on the
epidemic transmission through the lens of our model. Considering that masks are
traditionally designed at the individual level, we also try
to reveal mask design strategies from the collective dynamics captured by our
model.
## 2 Model
There is an epidemic disease spreading in the system. To prevent this
epidemic, individuals hold different preferences for wearing masks. Concerning
the infection state, we divide the population into: (i) the susceptible ($x$);
(ii) the infected ($y$); (iii) the recovered ($z$). In terms of different
preferences towards wearing masks, we divide the population into: (i) those
who never wear masks (subscript 0); (ii) those who wear masks if and only if
infected (subscript 1); (iii) those who always wear masks (subscript 2).
Therefore, we have up to 9 categories according to different combinations of
the classification of the two dimensions mentioned above.
Before describing evolutionary rules, we list the definition of our
mathematical symbols in Table 1.
Table 1: The definition of mathematical symbols

Symbol | Definition
---|---
$x_{0}$ | The number of susceptible individuals never wearing a mask.
$y_{0}$ | The number of infected individuals never wearing a mask.
$z_{0}$ | The number of recovered individuals never wearing a mask.
$x_{1}$ | The number of susceptible individuals wearing a mask if and only if infected.
$y_{1}$ | The number of infected individuals wearing a mask if and only if infected.
$z_{1}$ | The number of recovered individuals wearing a mask if and only if infected.
$x_{2}$ | The number of susceptible individuals always wearing a mask.
$y_{2}$ | The number of infected individuals always wearing a mask.
$z_{2}$ | The number of recovered individuals always wearing a mask.
$n$ | The number of individuals in the system.
$\Lambda$ | The number of new individuals entering the system within unit time.
$\mu$ | The rate of natural death.
$r$ | The rate of recovering.
$\alpha$ | The rate of human-to-human infection.
$\varepsilon_{0}$ | The fraction of new individuals never wearing a mask.
$\varepsilon_{1}$ | The fraction of new individuals wearing a mask if and only if infected.
$\varepsilon_{2}$ | The fraction of new individuals always wearing a mask.
$p_{I}$ | The protective effect produced when an infected individual wears a mask.
$p_{S}$ | The protective effect produced when a susceptible individual wears a mask.
### 2.1 The agent-based rules
Consider an open system containing initially $n\big{|}_{t=0}$ agents (i.e.,
individuals). Within a Monte Carlo step, an agent $i$ is randomly selected,
and the following parallel events occur.
(1) If agent $i$ is susceptible, we again select an agent $j$ randomly. If
agent $j$ is infected, then agent $i$ is infected with a probability $\alpha$
($\alpha>0$). If agent $j$ wears a mask, then agent $i$ is spared from infection
with a probability $p_{I}$ ($0<p_{I}<1$). If agent $i$ wears a mask, then
agent $i$ is spared from infection with a probability $p_{S}$ ($0<p_{S}<1$).
(2) If agent $i$ is infected, then it recovers with a probability $r$ (the
average infection cycle is $1/r$). This does not happen in the same Monte
Carlo step as event (1).
(3) Agent $i$ naturally dies with a probability $\mu$ (the average lifespan is
$1/\mu$). We do not consider deaths due to the epidemic.
To ensure the population remains almost unchanged, we must let new agents
enter the system. We therefore add the following event at the very beginning of
each Monte Carlo step, where the label (0) means it happens before event (1).
(0) A new agent enters the system with a probability $p_{e}$. The agent’s
personal preference determines whether it never wears a mask, with a probability
$\varepsilon_{0}$ ($0<\varepsilon_{0}<1$), wears a mask if and only if
infected, with a probability $\varepsilon_{1}$ ($0<\varepsilon_{1}<1$), or
always wears a mask, with a probability $\varepsilon_{2}$
($0<\varepsilon_{2}<1$), where
$\varepsilon_{0}+\varepsilon_{1}+\varepsilon_{2}=1$. The preference of an
agent on masks does not change over time.
To keep the population $n$ almost unchanged over time, we let one time
step $t$ contain $n\big{|}_{t=0}$ Monte Carlo steps, such that each agent is
selected once on average. Therefore, the expected number of new agents
entering the system within one time step $t$ is $\Lambda=n^{*}p_{e}$. The
solution is $p_{e}=\mu$ (see Theorem 1).
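A minimal Python sketch of rules (0)–(3) is given below. The parameters follow Figure 1; the initial population size ($\Lambda/\mu$ agents) and the 10% initial infected fraction are our own illustrative choices, while the even initial split over the three preferences mirrors the caption of Figure 1.

```python
import random

# Parameters as in Figure 1 of the text
alpha, mu, Lam, r = 0.2, 0.01, 10, 0.05
eps = (0.1, 0.1, 0.8)            # never / only-when-infected / always wear a mask
pI, pS = 0.75, 0.25
p_e = mu                         # entry probability per MC step, so Lambda = n* p_e

n0 = int(Lam / mu)               # start near the steady population n* = Lambda/mu
# an agent is [state, preference]: state in {'S','I','R'}, preference in {0,1,2};
# initial preferences are split evenly, 10% of agents start infected (our choice)
agents = [['I' if random.random() < 0.1 else 'S', k % 3] for k in range(n0)]

def wears_mask(agent):
    state, pref = agent
    return pref == 2 or (pref == 1 and state == 'I')

for t in range(1000):            # time steps; each contains n0 Monte Carlo steps
    for _ in range(n0):
        if random.random() < p_e:                      # event (0): a new agent enters
            agents.append(['S', random.choices((0, 1, 2), weights=eps)[0]])
        i = random.randrange(len(agents))
        state = agents[i][0]
        if state == 'S':                               # event (1): contact and infection
            j = random.randrange(len(agents))
            if agents[j][0] == 'I' and random.random() < alpha:
                spared = (wears_mask(agents[j]) and random.random() < pI) or \
                         (wears_mask(agents[i]) and random.random() < pS)
                if not spared:
                    agents[i][0] = 'I'
        elif state == 'I' and random.random() < r:     # event (2): recovery
            agents[i][0] = 'R'
        if random.random() < mu:                       # event (3): natural death
            agents.pop(i)

states = [a[0] for a in agents]
print({s: round(states.count(s) / len(agents), 3) for s in 'SIR'})
```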
### 2.2 The mean-field equations
Performing mean-field analysis, we can approximate the agent-based dynamics
by a set of differential equations. We do not dwell on the principles of
mean-field analysis, but only explain some important points. (i) Within unit
time, each agent is selected once on average, so the number of events
occurring in a category per unit time equals the population of that category. (ii) The
probability of selecting an infected agent never wearing a mask from the
population is $y_{0}/n$, and $y_{1}/n$, $y_{2}/n$ for selecting an infected
agent holding the other two preferences, respectively. (iii) Thanks to the
mask, the probability of being spared from infection is $p_{I}$ or $p_{S}$, which
means the probability of infection is $(1-p_{I})$ or $(1-p_{S})$. (iv) If
there are two layers of protection, they must both be breached for the
infection to succeed.
We denote the state of the system by a vector $\mathbf{\Psi}$, containing the
population in nine categories. The mean-field differential equations depicting
the agent-based dynamics are
$\dot{\mathbf{\Psi}}=\begin{pmatrix}\dot{x}_{0}\\\ \dot{y}_{0}\\\
\dot{z}_{0}\\\ \dot{x}_{1}\\\ \dot{y}_{1}\\\ \dot{z}_{1}\\\ \dot{x}_{2}\\\
\dot{y}_{2}\\\ \dot{z}_{2}\end{pmatrix},$ (1)
where
$\left\\{\begin{aligned} \dot{x}_{0}=&~{}\varepsilon_{0}\Lambda-\alpha
x_{0}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n-\mu x_{0},\\\ \dot{y}_{0}=&~{}\alpha
x_{0}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n-ry_{0}-\mu y_{0},\\\
\dot{z}_{0}=&~{}ry_{0}-\mu z_{0},\\\
\dot{x}_{1}=&~{}\varepsilon_{1}\Lambda-\alpha
x_{1}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n-\mu x_{1},\\\ \dot{y}_{1}=&~{}\alpha
x_{1}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n-ry_{1}-\mu y_{1},\\\
\dot{z}_{1}=&~{}ry_{1}-\mu z_{1},\\\
\dot{x}_{2}=&~{}\varepsilon_{2}\Lambda-\alpha(1-p_{S})x_{2}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n\\\
&-\mu x_{2},\\\
\dot{y}_{2}=&~{}\alpha(1-p_{S})x_{2}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n-ry_{2}\\\
&-\mu y_{2},\\\ \dot{z}_{2}=&~{}ry_{2}-\mu z_{2}.\end{aligned}\right.$
The system depicted by Eq. (1) is an extended version of the SIR model with a
standard incidence rate.
## 3 Results and discussion
In this section, we demonstrate the numerical results of the model from both
Monte Carlo simulation and mean-field equations. The algorithm of the Monte
Carlo simulation was described in Section 2.1. In the numerical simulation of
mean-field equations, we use iteration $\mathbf{\Psi}(t+\Delta
t)=\mathbf{\Psi}(t)+\dot{\mathbf{\Psi}}(t)\Delta t$, where $\Delta t=0.01$.
We set three statistical measures: (i) the proportion of susceptible,
$p_{x}=(x_{0}+x_{1}+x_{2})/n$; (ii) the proportion of infected,
$p_{y}=(y_{0}+y_{1}+y_{2})/n$; (iii) the proportion of recovered,
$p_{z}=(z_{0}+z_{1}+z_{2})/n$.
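As an illustration of this procedure, the following minimal Python sketch integrates Eq. (1) with the same forward-Euler rule and reports the three measures; the parameters and the even initial split over categories follow the caption of Figure 1(a), and the stopping rule is the one quoted later for the heat maps.

```python
import numpy as np

# Parameters and initial condition of Figure 1(a)
alpha, mu, Lam, r = 0.2, 0.01, 10.0, 0.05
eps = np.array([0.1, 0.1, 0.8])
pI, pS = 0.75, 0.25

def rhs(psi):
    """Right-hand side of Eq. (1); psi = (x0, y0, z0, x1, y1, z1, x2, y2, z2)."""
    x, y, z = psi[0::3], psi[1::3], psi[2::3]
    n = psi.sum()
    force = alpha * (y[0] + (1 - pI) * (y[1] + y[2])) / n   # infection pressure
    shield = np.array([1.0, 1.0, 1.0 - pS])                 # extra mask layer of category 2
    dx = eps * Lam - shield * force * x - mu * x
    dy = shield * force * x - (r + mu) * y
    dz = r * y - mu * z
    return np.column_stack((dx, dy, dz)).ravel()

n0 = Lam / mu
# 90% susceptible, 10% infected, 0% recovered, split evenly over the three categories
psi = np.column_stack((0.9 * n0 / 3 * np.ones(3),
                       0.1 * n0 / 3 * np.ones(3),
                       np.zeros(3))).ravel()

dt = 0.01
while True:
    d = rhs(psi)
    if np.max(np.abs(d) * dt) < 1e-4:     # convergence rule used for the heat maps
        break
    psi += d * dt

px, py, pz = psi[0::3].sum(), psi[1::3].sum(), psi[2::3].sum()
print(f"p_x = {px/psi.sum():.3f}, p_y = {py/psi.sum():.3f}, p_z = {pz/psi.sum():.3f}")
```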
### 3.1 Impact of masks on epidemic spreading
Figures 1 and 2 show the time evolution of $p_{x}$, $p_{y}$, and $p_{z}$ with
different parameters and initial conditions.
Figure 1: Time evolution of $p_{x}$, $p_{y}$, and $p_{z}$. The results of
Monte Carlo simulation and mean-field equations are presented together.
$\alpha=0.2$, $\mu=0.01$, $\Lambda=10$, $r=0.05$, $\varepsilon_{0}=0.1$,
$\varepsilon_{1}=0.1$, $\varepsilon_{2}=0.8$, $p_{I}=0.75$, $p_{S}=0.25$. (a)
$p_{x}\big{|}_{t=0}=0.9$, $p_{y}\big{|}_{t=0}=0.1$, $p_{z}\big{|}_{t=0}=0$.
(b) $p_{x}\big{|}_{t=0}=0.1$, $p_{y}\big{|}_{t=0}=0.9$,
$p_{z}\big{|}_{t=0}=0$. (a)(b)
$x_{0}\big{|}_{t=0}=x_{1}\big{|}_{t=0}=x_{2}\big{|}_{t=0}$ in
$x\big{|}_{t=0}$, as well as $y\big{|}_{t=0}$ and $z\big{|}_{t=0}$. Figure 2:
Time evolution of $p_{x}$, $p_{y}$, and $p_{z}$. The results of Monte Carlo
simulation and mean-field equations are presented together. $\alpha=0.2$,
$\mu=0.01$, $\Lambda=10$, $r=0.05$, $\varepsilon_{0}=0.3$,
$\varepsilon_{1}=0.1$, $\varepsilon_{2}=0.6$, $p_{I}=0.5$, $p_{S}=0.05$. (a)
$p_{x}\big{|}_{t=0}=0.9$, $p_{y}\big{|}_{t=0}=0.1$, $p_{z}\big{|}_{t=0}=0$.
(b) $p_{x}\big{|}_{t=0}=0.1$, $p_{y}\big{|}_{t=0}=0.9$,
$p_{z}\big{|}_{t=0}=0$. (a)(b)
$x_{0}\big{|}_{t=0}=x_{1}\big{|}_{t=0}=x_{2}\big{|}_{t=0}$ in
$x\big{|}_{t=0}$, as well as $y\big{|}_{t=0}$ and $z\big{|}_{t=0}$.
From Figures 1 and 2, we find that the proportions of the different individuals
always reach stability after time evolution. The results of the Monte Carlo
simulation fluctuate, while the results of the mean-field equations are smooth;
they corroborate each other. In addition, we observe two phenomena. First, the
epidemic may either die out or persist in the end, depending on the
parameters. Second, with the same parameters and different initial conditions,
the steady-states are the same.
Next, in the heat maps of Figures 3 and 4, we present the steady-states of $p_{y}$
and $p_{x}$ as a ternary function of $\varepsilon_{0}$, $\varepsilon_{1}$,
$\varepsilon_{2}$, respectively. The results of the Monte Carlo simulation are
the average of the last 200 time steps ($t$). The results of the mean-field
equations are retrieved from the state $\mathbf{\Psi}(t+\Delta t)$ when
$\max\\{|\dot{\mathbf{\Psi}}(t)|\Delta t\\}<0.0001$.
Figure 3: The steady-state of $p_{y}$ as a ternary function of
$\varepsilon_{0}$, $\varepsilon_{1}$, $\varepsilon_{2}$. (a) Monte Carlo
simulation. (b) Mean-field equations. $\alpha=0.2$, $\mu=0.01$, $\Lambda=10$,
$r=0.05$, $p_{I}=0.5$, $p_{S}=0.05$. Figure 4: The steady-state of $p_{x}$ as
a ternary function of $\varepsilon_{0}$, $\varepsilon_{1}$, $\varepsilon_{2}$.
(a) Monte Carlo simulation. (b) Mean-field equations. $\alpha=0.2$,
$\mu=0.01$, $\Lambda=10$, $r=0.05$, $p_{I}=0.5$, $p_{S}=0.05$.
From Figures 3 and 4, we find that the results of the Monte Carlo simulation and
the mean-field equations corroborate each other. In Figure 3, we observe that more
individuals wearing masks reduces the proportion of infected individuals in the
population. In particular, always wearing a mask has a better effect on
reducing infected individuals. In Figure 4, we observe that more individuals
wearing masks increases the proportion of susceptible individuals in the
population. Unlike an increase in recovered individuals, this means that
more people are spared from ever getting infected. Also, always wearing a mask
has a better effect on increasing susceptible individuals (Figure 4), sparing
more individuals from infection.
### 3.2 Potential implications of different mask designs
Based on our model, we can reveal the potential implications of different mask
designs. When an infected individual wears a mask, it is the filterability
from the face to the outside that provides protection (to the population), and
when a susceptible individual wears a mask, the filterability from the outside
to the face matters.
We show in Figure 5 (a) and (b) the steady infected fraction $p_{y}$ as a
binary function of the protective effect produced when an infected individual
wears a mask ($p_{I}$) and when a susceptible individual wears a mask
($p_{S}$).
Figure 5: (a) Monte Carlo simulation. The steady-state of $p_{y}$ as a binary
function of $p_{I}$, $p_{S}$. (b) Mean-field equations. The steady-state of
$p_{y}$ as a binary function of $p_{I}$, $p_{S}$. (c) Mean-field equations.
The steady-state of $\partial p_{y}/\partial p_{I}$ and $\partial
p_{y}/\partial p_{S}$ as a binary function of $p_{I}$, $p_{S}$. $\alpha=0.2$,
$\mu=0.01$, $\Lambda=10$, $r=0.05$, $\varepsilon_{0}=0.3$,
$\varepsilon_{1}=0.1$, $\varepsilon_{2}=0.6$, $p_{I}=0.5$, $p_{S}=0.05$.
We can observe that in both the Monte Carlo simulation [Figure 5(a)] and mean-
field equations [Figure 5(b)], an increase in protective effect $p_{I}$ or
$p_{S}$ leads to a decrease in the steady infected fraction $p_{y}$. On this
basis, we further ask: which is more effective in reducing the infected
fraction, increasing $p_{I}$ or increasing $p_{S}$?
We show in Figure 5(c) the steady-state of $\partial p_{y}/\partial p_{I}$ and
$\partial p_{y}/\partial p_{S}$ as a binary function of $p_{I}$ and $p_{S}$.
If $\partial p_{y}/\partial p_{I}<\partial p_{y}/\partial p_{S}$, increasing
the unit protective effect from the infected side is more conducive to
reducing the infected fraction, and vice versa. Intuitively, increasing
$p_{I}$ should always be more conducive than increasing $p_{S}$,
because the former acts on individuals with two preferences, (ii) those who
wear masks if and only if infected and (iii) those who always wear masks. In
contrast, the latter only acts on individuals with one preference, (iii) those
who always wear masks. Increasing $p_{I}$ obviously has a broader scope of
action than increasing $p_{S}$ and covers the latter’s population.
However, Figure 5(c) presents a different phenomenon. This indicates that group
dynamics can provide different insights for mask design. We give this
further analysis in Section 4.5.
## 4 Verification and validation
This section verifies and validates the properties concluded from the
numerical results by analyzing them at a mathematical level.
### 4.1 The total population dynamics
The total population dynamics follows Theorem 1.
###### Theorem 1.
For $t\to\infty$, we have $n\to\Lambda/\mu$.
###### Proof.
According to Eq. (1),
$\dot{n}=\dot{x}_{0}+\dot{y}_{0}+\dot{z}_{0}+\dot{x}_{1}+\dot{y}_{1}+\dot{z}_{1}+\dot{x}_{2}+\dot{y}_{2}+\dot{z}_{2}=\Lambda-\mu n.$ (2)
Solving Eq. (2), we get
$n=\left(n\big{|}_{t=0}-\frac{\Lambda}{\mu}\right)\mathrm{e}^{-\mu
t}+\frac{\Lambda}{\mu}.$ (3)
From Eq. (3), we complete the proof that $n\to\Lambda/\mu$ for $t\to\infty$.
We give this significant value a symbol $n^{*}$,
$n^{*}=\lim_{t\to\infty}n=\frac{\Lambda}{\mu}.$ (4)
In the same way, we can also prove that
$x_{0}+y_{0}+z_{0}\to\varepsilon_{0}\Lambda/\mu$,
$x_{1}+y_{1}+z_{1}\to\varepsilon_{1}\Lambda/\mu$,
$x_{2}+y_{2}+z_{2}\to\varepsilon_{2}\Lambda/\mu$ for $t\to\infty$.
Theorem 1 gives us another important insight: when discussing the steady-
state, we can substitute $n^{*}=\Lambda/\mu$ for $n$ in the system of Eq. (1).
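As a quick numerical sanity check of Eq. (3) and the limit $n\to\Lambda/\mu$, the following minimal sketch compares the closed form with a direct Euler integration of Eq. (2); $\Lambda$ and $\mu$ follow the figures, while the initial population is an arbitrary illustrative choice.

```python
import numpy as np

Lam, mu, n_init = 10.0, 0.01, 400.0      # Lambda, mu as in the figures; n(0) is illustrative
t = np.linspace(0.0, 2000.0, 200001)
dt = t[1] - t[0]

n_closed = (n_init - Lam / mu) * np.exp(-mu * t) + Lam / mu   # Eq. (3)

# forward-Euler integration of dn/dt = Lambda - mu*n, i.e. Eq. (2)
n_euler = np.empty_like(t)
n_euler[0] = n_init
for k in range(len(t) - 1):
    n_euler[k + 1] = n_euler[k] + (Lam - mu * n_euler[k]) * dt

print(abs(n_closed[-1] - Lam / mu))         # -> tiny: n approaches n* = Lambda/mu
print(np.max(np.abs(n_closed - n_euler)))   # -> small: the closed form solves Eq. (2)
```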
### 4.2 The basic reproduction number
The basic reproduction number $\mathcal{R}_{0}$ is one of the most important
measures in epidemiology. It can assist in analyzing both the stability of the
system and the effect of parameters on the epidemic transmission.
We let $\dot{\mathbf{\Psi}}=\mathbf{0}$ and solve for the epidemic-free
equilibrium, denoted by $\mathbf{\Psi}^{*}$,
$\mathbf{\Psi}^{*}=\frac{\Lambda}{\mu}\left(\varepsilon_{0},0,0,\varepsilon_{1},0,0,\varepsilon_{2},0,0\right)^{\mathrm{T}}.$
(5)
Then, we can follow the method in Ref. [31] to find the basic reproduction
number (see Appendix A):
$\mathcal{R}_{0}=\frac{\alpha}{r+\mu}[\varepsilon_{0}+(1-p_{I})\varepsilon_{1}+(1-p_{S})(1-p_{I})\varepsilon_{2}].$
(6)
The basic reproduction number reveals the following theorem.
###### Theorem 2.
The epidemic-free equilibrium $\mathbf{\Psi}^{*}$ is locally asymptotically
stable if $\mathcal{R}_{0}<1$, and the epidemic-free equilibrium
$\mathbf{\Psi}^{*}$ is not stable if $\mathcal{R}_{0}>1$.
(See Ref. [31] for proof)
Substituting the parameters in Figure 1 into Eq. (6), we can calculate
$\mathcal{R}_{0}=0.9167<1$, which means the epidemic-free equilibrium is
stable, consistent with that shown in Figure 1. Similarly, substituting
parameters in Figure 2 produces $\mathcal{R}_{0}=2.1167>1$, such that the
epidemic-free equilibrium is not stable, which is also consistent with that
shown in Figure 2.
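These two values follow directly from Eq. (6); a minimal check:

```python
def R0(alpha, r, mu, eps0, eps1, eps2, pI, pS):
    """Basic reproduction number of Eq. (6)."""
    return alpha / (r + mu) * (eps0 + (1 - pI) * eps1 + (1 - pS) * (1 - pI) * eps2)

# Parameters of Figure 1: the epidemic-free equilibrium should be stable
print(round(R0(0.2, 0.05, 0.01, 0.1, 0.1, 0.8, 0.75, 0.25), 4))  # 0.9167

# Parameters of Figure 2: the epidemic-free equilibrium should be unstable
print(round(R0(0.2, 0.05, 0.01, 0.3, 0.1, 0.6, 0.5, 0.05), 4))   # 2.1167
```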
### 4.3 Epidemic-free and endemic equilibria
We separate $\mathbf{\Psi}$ into
$\left\\{\begin{aligned}
\mathbf{\Phi}_{1}&=\left(x_{0},y_{0},x_{1},y_{1},x_{2},y_{2}\right)^{\mathrm{T}},\\\
\mathbf{\Phi}_{2}&=\left(z_{0},z_{1},z_{2}\right)^{\mathrm{T}}.\end{aligned}\right.$
(7)
From Eq. (1), we can assert that the steady-state of $\mathbf{\Phi}_{1}$
determines the steady-state of $\mathbf{\Phi}_{2}$, and that $\mathbf{\Phi}_{2}$
does not affect the evolution of $\mathbf{\Phi}_{1}$. Therefore, the stability
of $\mathbf{\Psi}$ is equivalent to (i) the stability of $\mathbf{\Phi}_{1}$
and (ii) the stability of $\mathbf{\Phi}_{2}$ when $\mathbf{\Phi}_{1}$
achieves stability.
We first prove the global stability of the epidemic-free equilibrium by
constructing the Lyapunov function.
###### Theorem 3.
The epidemic-free equilibrium $\mathbf{\Psi}^{*}$ is globally asymptotically
stable in $\mathbb{R}_{\geq 0}^{9}$ if $\mathcal{R}_{0}<1$.
###### Proof.
Consider the Lyapunov function $\mathcal{L}(\mathbf{\Phi}_{1})$ in
$\mathbb{R}_{\geq 0}^{6}$,
$\displaystyle\mathcal{L}(\mathbf{\Phi}_{1})=$
$\displaystyle~{}\frac{(x_{0}-x_{0}^{*})^{2}}{2x_{0}^{*}}+y_{0}$
$\displaystyle+(1-p_{I})\left[\frac{(x_{1}-x_{1}^{*})^{2}}{2x_{1}^{*}}+y_{1}\right]$
$\displaystyle+(1-p_{I})\left[\frac{(x_{2}-x_{2}^{*})^{2}}{2x_{2}^{*}}+y_{2}\right].$
(8)
We can conclude that (i) $\mathcal{L}(\mathbf{\Phi}_{1})=0$ when
$\mathbf{\Phi}_{1}=\mathbf{\Phi}_{1}^{*}$, and (ii)
$\mathcal{L}(\mathbf{\Phi}_{1})>0$ when
$\mathbf{\Phi}_{1}\neq\mathbf{\Phi}_{1}^{*}$. Therefore, $\mathcal{L}(\mathbf{\Phi}_{1})$
is positive definite in the neighborhood of $\mathbf{\Phi}_{1}^{*}$. Secondly,
we have (see Appendix B),
$\dot{\mathcal{L}}(\mathbf{\Phi}_{1})\leq 0$ (9)
when $\mathbf{\Phi}_{1}\neq\mathbf{\Phi}_{1}^{*}$. Note that
$\dot{\mathcal{L}}(\mathbf{\Phi}_{1})=0$ when
$\mathbf{\Phi}_{1}=\mathbf{\Phi}_{1}^{*}$, as can be confirmed from the expression in
Appendix B. Therefore, $\dot{\mathcal{L}}(\mathbf{\Phi}_{1})$ is
negative semi-definite in the neighborhood of $\mathbf{\Phi}_{1}^{*}$. Hence,
according to Lasalle’s Invariance Principle [32], $\mathbf{\Phi}_{1}^{*}$ is
globally asymptotically stable in $\mathbb{R}_{\geq 0}^{6}$. Given
$\mathbf{\Phi}_{1}^{*}$ stable, it is easy to prove the global asymptotic
stability of $\mathbf{\Phi}_{2}^{*}$ in $\mathbb{R}_{\geq 0}^{3}$ by
constructing Lyapunov function
$\mathcal{L}_{z}(\mathbf{\Phi}_{2})=z_{0}+z_{1}+z_{2}$. Therefore,
$\mathbf{\Psi}^{*}$ is globally asymptotically stable in $\mathbb{R}_{\geq
0}^{9}$.
The equation $\dot{\mathbf{\Psi}}=\mathbf{0}$ has two solutions. We denote the
second solution by $\mathbf{\Psi}^{**}$. The infected population is non-zero;
thus, we call it the endemic equilibrium. This equilibrium
$\mathbf{\Psi}^{**}$ corresponds to the results shown in Figure 2. It is not
easy to express it analytically, but we can still show its existence
condition.
###### Theorem 4.
The endemic equilibrium $\mathbf{\Psi}^{**}$ exists and is unique in
$\mathbb{R}_{\geq 0}^{9}$ if $\mathcal{R}_{0}>1$.
###### Proof.
First, we show the relationship between the existence and uniqueness of
positive $y_{i}^{**}$, $i=0,1,2$ ($y_{0}^{**}>0$, $y_{1}^{**}>0$,
$y_{2}^{**}>0$) and $\mathcal{R}_{0}>1$ (see Appendix C). Then, the existence
and uniqueness of $x_{i}^{**}$, $z_{i}^{**}$, $i=0,1,2$ can be naturally
confirmed, hence the existence and uniqueness of $\mathbf{\Psi}^{**}$.
Theorem 3 validates that in Figure 1, the steady-state with the same
parameters is independent of the initial conditions. Theorem 4 is consistent
with Figure 2.
### 4.4 Robustness analysis for the effect of wearing masks
The basic reproduction number measures the average number of individuals to whom
an infected individual can transmit the epidemic. The higher the basic
reproduction number, the more severe the epidemic.
Analyzing Eq. (6), we can see that the coefficient before $\varepsilon_{0}$ is
1, and $(1-p_{I})$ for $\varepsilon_{1}$, and $(1-p_{S})(1-p_{I})$ for
$\varepsilon_{2}$. Since $p_{S}>0$, $p_{I}>0$, we have
$(1-p_{S})(1-p_{I})<1-p_{I}<1$. Considering the constraints:
$0\leq\varepsilon_{0}\leq 1$, $0\leq\varepsilon_{1}\leq 1$,
$0\leq\varepsilon_{2}\leq 1$,
$\varepsilon_{0}+\varepsilon_{1}+\varepsilon_{2}=1$, we know the following
facts. (i) $\mathcal{R}_{0}$ takes the minimum when $\varepsilon_{0}=0$,
$\varepsilon_{1}=0$, $\varepsilon_{2}=1$. (ii) $\mathcal{R}_{0}$ takes the
maximum when $\varepsilon_{0}=1$, $\varepsilon_{1}=0$, $\varepsilon_{2}=0$.
Therefore, everyone always wearing a mask minimizes the epidemic severity,
while no one wearing masks maximizes the epidemic severity.
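These two facts can also be confirmed numerically by a coarse scan over the simplex $\varepsilon_{0}+\varepsilon_{1}+\varepsilon_{2}=1$; the sketch below reuses Eq. (6) with the mask parameters of Figure 2.

```python
alpha, r, mu, pI, pS = 0.2, 0.05, 0.01, 0.5, 0.05

def R0(e0, e1, e2):
    """Basic reproduction number of Eq. (6) for a given preference split."""
    return alpha / (r + mu) * (e0 + (1 - pI) * e1 + (1 - pS) * (1 - pI) * e2)

# enumerate eps0 + eps1 + eps2 = 1 on a coarse grid with spacing 0.05
grid = [(e0 / 20, e1 / 20, (20 - e0 - e1) / 20)
        for e0 in range(21) for e1 in range(21 - e0)]
values = [R0(*e) for e in grid]

i_min, i_max = values.index(min(values)), values.index(max(values))
print("min R0:", round(values[i_min], 4), "at eps =", grid[i_min])   # eps = (0, 0, 1)
print("max R0:", round(values[i_max], 4), "at eps =", grid[i_max])   # eps = (1, 0, 0)
```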
### 4.5 Application to mask design
The basic reproduction number $\mathcal{R}_{0}$ can play the same role as the
infected fraction $p_{y}$ in measuring the outbreak severity. From Eq. (6), we
see that the coefficient $(1-p_{I})$ acts on both $\varepsilon_{1}$ and
$\varepsilon_{2}$ while $(1-p_{S})$ only acts on $\varepsilon_{2}$, which
means the mask protection from the infected side acts on two population
categories and that from the susceptible side acts on only one. This brings us
the misleading intuition that increasing the protective effect from the infected
side is always more effective. However, group dynamics reveal a different,
hidden reality.
We write the partial derivatives of $\mathcal{R}_{0}$ with respect to $p_{I}$
and $p_{S}$ in Eq. (10) and (11).
$\frac{\partial\mathcal{R}_{0}}{\partial
p_{I}}=-\frac{\alpha}{r+\mu}[\varepsilon_{1}+(1-p_{S})\varepsilon_{2}],$ (10)
$\frac{\partial\mathcal{R}_{0}}{\partial
p_{S}}=-\frac{\alpha}{r+\mu}(1-p_{I})\varepsilon_{2}.$ (11)
Increasing the protective effect from the infected side is better than that of
the susceptible one means
$\frac{\partial\mathcal{R}_{0}}{\partial
p_{I}}<\frac{\partial\mathcal{R}_{0}}{\partial p_{S}}$ (12)
or
$\frac{\varepsilon_{1}}{\varepsilon_{2}}>p_{S}-p_{I},$ (13)
and, increasing the protective effect from the susceptible side is better than
that of the infected one means
$\frac{\partial\mathcal{R}_{0}}{\partial
p_{S}}<\frac{\partial\mathcal{R}_{0}}{\partial p_{I}}$ (14)
or
$\frac{\varepsilon_{1}}{\varepsilon_{2}}<p_{S}-p_{I},$ (15)
We can discuss the parameter space in two cases. First, if $p_{S}<p_{I}$, then
Eq. (12) always holds. Second, if $p_{S}>p_{I}$, then Eq. (14) does not always
hold. This suggests that the “intuitive” phenomenon (increasing $p_{I}$ is more
effective) occupies more of the parameter space, which can be verified by Figure 5(c).
For the latter case, $p_{S}>p_{I}$, we can transform Eq. (15) into
$\varepsilon_{2}>\varepsilon_{1}/(p_{S}-p_{I})$. This indicates that if the
fraction of individuals always wearing a mask ($\varepsilon_{2}$) exceeds a
critical point, $\varepsilon_{1}/(p_{S}-p_{I})$, then increasing $p_{S}$ is
more effective than increasing $p_{I}$, even though $p_{I}$ acts on both
categories $\varepsilon_{1}$ and $\varepsilon_{2}$. We can interpret this result
in everyday language by taking into account the fraction of existing infected
individuals. As we showed in Section 4.4, $\mathcal{R}_{0}$ decreases (i.e.,
the number of infected individuals decreases) with an increase in $\varepsilon_{2}$. In
this way, a critical point of $\varepsilon_{2}$ naturally exists, above
which the infected individuals are too few to exert the protective effect that
the mask produces on their side.
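The case distinction above can be checked directly from Eqs. (10), (11), and (15); the sketch below uses $\varepsilon_{1}=0.1$ and $\varepsilon_{2}=0.6$ as in Figure 5, together with two illustrative $(p_{I},p_{S})$ pairs of our own choosing.

```python
alpha, r, mu = 0.2, 0.05, 0.01
eps1, eps2 = 0.1, 0.6

def dR0_dpI(pS):            # Eq. (10)
    return -alpha / (r + mu) * (eps1 + (1 - pS) * eps2)

def dR0_dpS(pI):            # Eq. (11)
    return -alpha / (r + mu) * (1 - pI) * eps2

for pI, pS in [(0.5, 0.05), (0.05, 0.5)]:       # illustrative mask parameters
    better = "p_I" if dR0_dpI(pS) < dR0_dpS(pI) else "p_S"
    critical_eps2 = eps1 / (pS - pI) if pS > pI else float("inf")
    print(f"pI={pI}, pS={pS}: strengthening {better} is more effective; "
          f"critical eps2 = {critical_eps2:.3f} (current eps2 = {eps2})")
```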
## 5 Conclusion
Although most masks have little to no effect on personal protection [1], we
are still interested in the protective effects of masks on a population. We
proposed a general epidemic model in the classic SIR framework considering
three different preferences towards wearing masks. Some individuals never wear
masks; others wear masks if and only if infected, and some always wear masks.
We started from agent-based rules and used a set of mean-field differential
equations to approximate the model. The results of the two corroborate each
other. In this work, the three preferences are independent of each other.
The first aspect is the effect of masks on epidemics. The ternary heat maps
revealed that wearing masks can reduce the number of infected individuals and
increase the number of susceptible individuals. We provided the global
stability analysis of the results and showed the robustness of the
effectiveness of masks by analyzing the basic reproduction number of the
epidemic. We concluded that wearing masks is beneficial to the control of
epidemics.
The second aspect is the application of the epidemic model to mask design. The
protective effect from the infected side ($p_{I}$) can be understood as the
filterability of the mask from the face to the outside against viruses, while
the protective effect to the susceptible side ($p_{S}$) can be interpreted as
the filterability from the outside to the face. This can be influenced by the
material and design of the mask [33], and we analyzed which side is more
beneficial to strengthen. We showed that strengthening the infected side
is more effective in most of the parameter space. This is intuitive since
strengthening the infected side acts on two categories of individuals (those
wearing masks only if infected and those always wearing masks), while
strengthening the susceptible side acts on only one category (those always
wearing masks). However, there is a hidden reality from the perspective of
group dynamics. We found that once the fraction of individuals always wearing
masks exceeds a critical point,
$\varepsilon_{2}>\varepsilon_{1}/(p_{S}-p_{I})$, strengthening the
susceptible side becomes more effective. This is because the preference of
always wearing masks reduces the infected fraction in the population, so that
the infected individuals are too few to exert the protective effect of masks
produced on their side. In everyday language, both cases above seem to
make sense. However, noticing the latter case from the group perspective, and
further giving mask design strategies according to the parameter space, is
not straightforward without the help of system dynamics.
Real-world situations may have more complexity and offer different insights. For
instance, the underlying assumption that people’s preferences do not change with
time ignores human subjectivity, which has the potential to reveal more
insights. In fact, people can change their preference on whether to wear masks
by either estimating the epidemic severity (evolutionary games) or being
affected by the propaganda of the effectiveness of masks (opinion dynamics).
In this way, future work may consider time-dependent preferences, and apply
any modified model to mask design.
## Acknowledgement
Publication of this article was funded in part by the George Mason University
Libraries Open Access Publishing Fund.
## Data availability
Data sharing not applicable to this article as no datasets were generated or
analyzed during the current study.
## Conflict of interest statement
On behalf of all authors, the corresponding author states that there is no
conflict of interest.
## Appendix A Finding the basic reproduction number
Let us decompose the infected compartments in Eq. (1) as
$\left(\dot{y}_{0},\dot{y}_{1},\dot{y}_{2}\right)^{\mathrm{T}}=\mathcal{F}-\mathcal{V}$,
where
$\mathcal{F}=\begin{pmatrix}\mathcal{F}_{y_{0}}\\\ \mathcal{F}_{y_{1}}\\\
\mathcal{F}_{y_{2}}\end{pmatrix}=\begin{pmatrix}\alpha
x_{0}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n^{*}\\\ \alpha
x_{1}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n^{*}\\\
\alpha(1-p_{S})x_{2}[y_{0}+(1-p_{I})(y_{1}+y_{2})]/n^{*}\end{pmatrix},$ (A.1)
$\mathcal{V}=\begin{pmatrix}\mathcal{V}_{y_{0}}\\\ \mathcal{V}_{y_{1}}\\\
\mathcal{V}_{y_{2}}\end{pmatrix}=\begin{pmatrix}ry_{0}+\mu y_{0}\\\ ry_{1}+\mu
y_{1}\\\ ry_{2}+\mu y_{2}\end{pmatrix}.$ (A.2)
Solve for the Jacobian matrix of $\mathcal{F}$ and $\mathcal{V}$ at
$\mathbf{\Psi}^{*}$, denoted by $\mathbf{F}$ and $\mathbf{V}$,
$\displaystyle\mathbf{F}$
$\displaystyle=\begin{pmatrix}\displaystyle\frac{\partial\mathcal{F}_{y_{0}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{F}_{y_{0}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{F}_{y_{0}}}{\partial
y_{2}}\\\\[8.0pt] \displaystyle\frac{\partial\mathcal{F}_{y_{1}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{F}_{y_{1}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{F}_{y_{1}}}{\partial
y_{2}}\\\\[8.0pt] \displaystyle\frac{\partial\mathcal{F}_{y_{2}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{F}_{y_{2}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{F}_{y_{2}}}{\partial
y_{2}}\end{pmatrix}(\mathbf{\Psi}^{*})=\frac{1}{n^{*}}\begin{pmatrix}\displaystyle\alpha
x_{0}&\displaystyle\alpha(1-p_{I})x_{0}&\displaystyle\alpha(1-p_{I})x_{0}\\\
\displaystyle\alpha
x_{1}&\displaystyle\alpha(1-p_{I})x_{1}&\displaystyle\alpha(1-p_{I})x_{1}\\\
\displaystyle\alpha(1-p_{S})x_{2}&\displaystyle\alpha(1-p_{S})(1-p_{I})x_{2}&\displaystyle\alpha(1-p_{S})(1-p_{I})x_{2}\end{pmatrix}$
$\displaystyle=\alpha\begin{pmatrix}\displaystyle
x_{0}&\displaystyle(1-p_{I})x_{0}&\displaystyle(1-p_{I})x_{0}\\\ \displaystyle
x_{1}&\displaystyle(1-p_{I})x_{1}&\displaystyle(1-p_{I})x_{1}\\\
\displaystyle(1-p_{S})x_{2}&\displaystyle(1-p_{S})(1-p_{I})x_{2}&\displaystyle(1-p_{S})(1-p_{I})x_{2}\end{pmatrix},$
(A.3)
$\mathbf{V}=\begin{pmatrix}\displaystyle\frac{\partial\mathcal{V}_{y_{0}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{V}_{y_{0}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{V}_{y_{0}}}{\partial
y_{2}}\\\\[8.0pt] \displaystyle\frac{\partial\mathcal{V}_{y_{1}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{V}_{y_{1}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{V}_{y_{1}}}{\partial
y_{2}}\\\\[8.0pt] \displaystyle\frac{\partial\mathcal{V}_{y_{2}}}{\partial
y_{0}}&\displaystyle\frac{\partial\mathcal{V}_{y_{2}}}{\partial
y_{1}}&\displaystyle\frac{\partial\mathcal{V}_{y_{2}}}{\partial
y_{2}}\end{pmatrix}(\mathbf{\Psi}^{*})=(r+\mu)\begin{pmatrix}\displaystyle
1&\displaystyle 0&\displaystyle 0\\\ \displaystyle 0&\displaystyle
1&\displaystyle 0\\\ \displaystyle 0&\displaystyle 0&\displaystyle
1\end{pmatrix}.$ (A.4)
Then, the spectral radius (i.e., maximum eigenvalue) of
$\mathbf{F}\cdot\mathbf{V}^{-1}$ is the basic reproduction number
$\mathcal{R}_{0}$,
$\mathcal{R}_{0}=\frac{\alpha}{r+\mu}[\varepsilon_{0}+(1-p_{I})\varepsilon_{1}+(1-p_{S})(1-p_{I})\varepsilon_{2}].$
(A.5)
Please see Ref. [31] for more information on how to find the basic
reproduction number.
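The spectral-radius step can be reproduced symbolically. The sketch below (using SymPy) assembles $\mathbf{F}\mathbf{V}^{-1}$ at the epidemic-free equilibrium and extracts its single non-zero eigenvalue, recovering Eq. (A.5).

```python
import sympy as sp

alpha, r, mu, pI, pS, Lam = sp.symbols('alpha r mu p_I p_S Lambda', positive=True)
e0, e1, e2 = sp.symbols('varepsilon_0 varepsilon_1 varepsilon_2', positive=True)

# Epidemic-free equilibrium: x_i* = eps_i * Lambda / mu, and n* = Lambda / mu
x0, x1, x2 = e0 * Lam / mu, e1 * Lam / mu, e2 * Lam / mu
nstar = Lam / mu

F = sp.Matrix([
    [alpha * x0,            alpha * (1 - pI) * x0,            alpha * (1 - pI) * x0],
    [alpha * x1,            alpha * (1 - pI) * x1,            alpha * (1 - pI) * x1],
    [alpha * (1 - pS) * x2, alpha * (1 - pS) * (1 - pI) * x2, alpha * (1 - pS) * (1 - pI) * x2],
]) / nstar
V = (r + mu) * sp.eye(3)

NGM = (F * V.inv()).applyfunc(sp.simplify)          # next-generation matrix F V^{-1}
R0 = [ev for ev in NGM.eigenvals() if ev != 0][0]   # spectral radius: the one non-zero eigenvalue
print(sp.simplify(R0))   # matches Eq. (A.5) up to rearrangement of terms
```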
## Appendix B Proof of $\dot{\mathcal{L}}(\mathbf{\Phi}_{1})\leq 0$ when
$\mathbf{\Phi}_{1}\neq\mathbf{\Phi}_{1}^{*}$
$\displaystyle\dot{\mathcal{L}}(\mathbf{\Phi}_{1})=$
$\displaystyle\left(\frac{x_{0}}{x_{0}^{*}}-1\right)\dot{x}_{0}+\dot{y}_{0}+(1-p_{I})\left[\left(\frac{x_{1}}{x_{1}^{*}}-1\right)\dot{x}_{1}+\dot{y}_{1}\right]+(1-p_{I})\left[\left(\frac{x_{2}}{x_{2}^{*}}-1\right)\dot{x}_{2}+\dot{y}_{2}\right]$
$\displaystyle=$
$\displaystyle\left(\frac{x_{0}}{x_{0}^{*}}-1\right)\left(\varepsilon_{0}\Lambda-\frac{\alpha
x_{0}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}-\mu x_{0}\right)+\frac{\alpha
x_{0}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}$ $\displaystyle-ry_{0}-\mu
y_{0}+(1-p_{I})\left(\frac{x_{1}}{x_{1}^{*}}-1\right)\left(\varepsilon_{1}\Lambda-\frac{\alpha
x_{1}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}-\mu x_{1}\right)$
$\displaystyle+(1-p_{I})\left(\frac{\alpha
x_{1}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}-ry_{1}-\mu y_{1}\right)$
$\displaystyle+(1-p_{I})\left(\frac{x_{2}}{x_{2}^{*}}-1\right)\left(\varepsilon_{2}\Lambda-\frac{\alpha(1-p_{S})x_{2}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}-\mu
x_{2}\right)$
$\displaystyle+(1-p_{I})\left(\frac{\alpha(1-p_{S})x_{2}[y_{0}+(1-p_{I})(y_{1}+y_{2})]}{n^{*}}-ry_{2}-\mu
y_{2}\right)$ $\displaystyle=$
$\displaystyle-\frac{\mu}{x_{0}^{*}}(x_{0}-x_{0}^{*})^{2}-\frac{\alpha}{x_{0}^{*}n^{*}}[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{0}-x_{0}^{*})^{2}$
$\displaystyle+(r+\mu)\left(\frac{\alpha
x_{0}^{*}}{r+\mu}\times\frac{y_{0}+(1-p_{I})(y_{1}+y_{2})}{n^{*}}-y_{0}\right)$
$\displaystyle-\frac{\mu}{x_{1}^{*}}(1-p_{I})(x_{1}-x_{1}^{*})^{2}-\frac{\alpha}{x_{1}^{*}n^{*}}(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{1}-x_{1}^{*})^{2}$
$\displaystyle+(r+\mu)(1-p_{I})\left(\frac{\alpha
x_{1}^{*}}{r+\mu}\times\frac{y_{0}+(1-p_{I})(y_{1}+y_{2})}{n^{*}}-y_{1}\right)$
$\displaystyle-\frac{\mu}{x_{2}^{*}}(1-p_{I})(x_{2}-x_{2}^{*})^{2}-\frac{\alpha}{x_{2}^{*}n^{*}}(1-p_{S})(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{2}-x_{2}^{*})^{2}$
$\displaystyle+(r+\mu)(1-p_{I})\left(\frac{\alpha(1-p_{S})x_{2}^{*}}{r+\mu}\times\frac{y_{0}+(1-p_{I})(y_{1}+y_{2})}{n^{*}}-y_{2}\right).$
(B.1)
In Eq. (B.1), we used $x_{0}^{*}=\varepsilon_{0}\Lambda/\mu$,
$x_{1}^{*}=\varepsilon_{1}\Lambda/\mu$,
$x_{2}^{*}=\varepsilon_{2}\Lambda/\mu$.
We can further bound Eq. (B.1):
$\displaystyle\dot{\mathcal{L}}(\mathbf{\Phi}_{1})=$
$\displaystyle-\frac{\mu}{x_{0}^{*}}(x_{0}-x_{0}^{*})^{2}-\frac{\alpha}{x_{0}^{*}n^{*}}[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{0}-x_{0}^{*})^{2}$
$\displaystyle-\frac{\mu}{x_{1}^{*}}(1-p_{I})(x_{1}-x_{1}^{*})^{2}-\frac{\alpha}{x_{1}^{*}n^{*}}(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{1}-x_{1}^{*})^{2}$
$\displaystyle-\frac{\mu}{x_{2}^{*}}(1-p_{I})(x_{2}-x_{2}^{*})^{2}-\frac{\alpha}{x_{2}^{*}n^{*}}(1-p_{S})(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{2}-x_{2}^{*})^{2}$
$\displaystyle+(r+\mu)[y_{0}+(1-p_{I})(y_{1}+y_{2})]\left\\{\frac{\alpha}{r+\mu}[\varepsilon_{0}+(1-p_{I})\varepsilon_{1}+(1-p_{S})(1-p_{I})\varepsilon_{2}]-1\right\\}$
$\displaystyle=$
$\displaystyle-\frac{\mu}{x_{0}^{*}}(x_{0}-x_{0}^{*})^{2}-\frac{\alpha}{x_{0}^{*}n^{*}}[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{0}-x_{0}^{*})^{2}$
$\displaystyle-\frac{\mu}{x_{1}^{*}}(1-p_{I})(x_{1}-x_{1}^{*})^{2}-\frac{\alpha}{x_{1}^{*}n^{*}}(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{1}-x_{1}^{*})^{2}$
$\displaystyle-\frac{\mu}{x_{2}^{*}}(1-p_{I})(x_{2}-x_{2}^{*})^{2}-\frac{\alpha}{x_{2}^{*}n^{*}}(1-p_{S})(1-p_{I})[y_{0}+(1-p_{I})(y_{1}+y_{2})](x_{2}-x_{2}^{*})^{2}$
$\displaystyle+(r+\mu)[y_{0}+(1-p_{I})(y_{1}+y_{2})](\mathcal{R}_{0}-1)$
$\displaystyle\leq$ $\displaystyle~{}0,$ (B.2)
which completes the proof.
## Appendix C The existence and uniqueness of $\mathbf{\Psi}^{**}$ when
$\mathcal{R}_{0}>1$
We use the equations $\dot{y}_{0}=0$, $\dot{y}_{1}=0$, $\dot{y}_{2}=0$ in
$\dot{\mathbf{\Psi}}=\mathbf{0}$ to express $x_{i}^{**}$ as a function of
$y_{i}^{**}$, $i=0,1,2$. Then, substituting the results into the equations
$\dot{x}_{0}=0$, $\dot{x}_{1}=0$, $\dot{x}_{2}=0$, we obtain
$\left\\{\begin{aligned}
0=&~{}\varepsilon_{0}\Lambda-(r+\mu)y_{0}^{**}-\frac{\mu
n^{*}(r+\mu)y_{0}^{**}}{\alpha[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]},\\\
0=&~{}\varepsilon_{1}\Lambda-(r+\mu)y_{1}^{**}-\frac{\mu
n^{*}(r+\mu)y_{1}^{**}}{\alpha[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]},\\\
0=&~{}\varepsilon_{2}\Lambda-(r+\mu)y_{2}^{**}-\frac{\mu
n^{*}(r+\mu)y_{2}^{**}}{\alpha(1-p_{S})[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]}.\\\
\end{aligned}\right.$ (C.1)
In Eq. (C.1), we multiply the first equation by $\alpha/(r+\mu)$, the second
equation by $\alpha(1-p_{I})/(r+\mu)$, and the third equation by
$\alpha(1-p_{S})(1-p_{I})/(r+\mu)$:
$\left\\{\begin{aligned}
0=&~{}\frac{\alpha}{r+\mu}\varepsilon_{0}\Lambda-\alpha y_{0}^{**}-\frac{\mu
n^{*}y_{0}^{**}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})},\\\
0=&~{}\frac{\alpha}{r+\mu}(1-p_{I})\varepsilon_{1}\Lambda-\alpha(1-p_{I})y_{1}^{**}-\frac{\mu
n^{*}(1-p_{I})y_{1}^{**}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})},\\\
0=&~{}\frac{\alpha}{r+\mu}(1-p_{S})(1-p_{I})\varepsilon_{2}\Lambda-\alpha(1-p_{S})(1-p_{I})y_{2}^{**}-\frac{\mu
n^{*}(1-p_{I})y_{2}^{**}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})}.\\\
\end{aligned}\right.$ (C.2)
Summing up the three equations in Eq. (C.2) and using $n^{*}=\Lambda/\mu$ (see
Eq. (4)), we have
$\mathcal{R}_{0}-\alpha[y_{0}^{**}+(1-p_{I})y_{1}^{**}+(1-p_{S})(1-p_{I})y_{2}^{**}]-1=0.$
(C.3)
Therefore, $y_{0}^{**}+(1-p_{I})y_{1}^{**}+(1-p_{S})(1-p_{I})y_{2}^{**}>0$, which is a
necessary condition for $y_{0}^{**}>0$, $y_{1}^{**}>0$, $y_{2}^{**}>0$, requires
$\mathcal{R}_{0}>1$. However, we have not yet proved that
$\mathcal{R}_{0}>1$ is a sufficient condition for $y_{0}^{**}>0$,
$y_{1}^{**}>0$, $y_{2}^{**}>0$.
According to Eq. (C.2), we can ensure $y_{0}^{**}>0$, $y_{1}^{**}>0$,
$y_{2}^{**}>0$ if we can confirm
$y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})>0$. We now show that the
opposite case, $y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})<0$, cannot happen.
Let us further write Eq. (C.2) as
$\left\\{\begin{aligned}
y_{0}^{**}=&~{}\dfrac{\dfrac{\alpha}{r+\mu}\varepsilon_{0}\Lambda}{\alpha+\dfrac{\mu
n^{*}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})}},\\\
y_{1}^{**}=&~{}\dfrac{\dfrac{\alpha}{r+\mu}\varepsilon_{1}\Lambda}{\alpha+\dfrac{\mu
n^{*}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})}},\\\
y_{2}^{**}=&~{}\dfrac{\dfrac{\alpha}{r+\mu}(1-p_{S})\varepsilon_{2}\Lambda}{\alpha(1-p_{S})+\dfrac{\mu
n^{*}}{y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})}}.\\\ \end{aligned}\right.$
(C.4)
Then, $y_{0}^{**}$ and $y_{1}^{**}$ must have the same sign,
because their denominators are equal and their numerators are both positive. The
case $y_{0}^{**}<0$, $y_{1}^{**}<0$ is possible only if their denominators
$\alpha+\mu n^{*}/[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]<0$. In this
case, we have $\alpha(1-p_{S})+\mu
n^{*}/[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]<\alpha+\mu
n^{*}/[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]<0$, which means
$y_{2}^{**}<0$ as well. Then, we have
$y_{0}^{**}+(1-p_{I})y_{1}^{**}+(1-p_{S})(1-p_{I})y_{2}^{**}<0$ because
$y_{0}^{**}<0$, $y_{1}^{**}<0$, $y_{2}^{**}<0$, which is inconsistent with our
previous conclusion. Therefore, $y_{0}^{**}>0$, $y_{1}^{**}>0$ must hold.
The remaining question is the sign of $y_{2}^{**}$. According to the second
equation in Eq. (1), we have
$y_{0}^{**}=\frac{\alpha}{r+\mu}x_{0}^{**}[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]/n^{*}.$
(C.5)
Since we have $y_{0}^{**}>0$, we know
$x_{0}^{**}[y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})]>0$ as well. The sign
of $x_{0}^{**}$ is easily determined: if $x_{0}=0$, then
$\dot{x}_{0}=\varepsilon_{0}\Lambda>0$, so $x_{0}<0$ can never occur if the
system starts from a meaningful initial state with $x_{0}>0$.
Therefore, $x_{0}^{**}>0$ and $y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})>0$
must hold as well. Then, $y_{2}^{**}>0$ is ensured by the third equation in
Eq. (C.4).
Therefore, $\mathcal{R}_{0}>1$ is a sufficient and necessary condition for
$y_{0}^{**}>0$, $y_{1}^{**}>0$, $y_{2}^{**}>0$.
To check if the solution of $y_{0}^{**}$, $y_{1}^{**}$, and $y_{2}^{**}$
really exists, we can study the existence of
$y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})$. Then, the solution of
$y_{0}^{**}$, $y_{1}^{**}$, and $y_{2}^{**}$ can be naturally obtained by Eq.
(C.4). For convenience, we denote
$Y=y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})$. Multiplying the three
equations of $y_{0}^{**}$, $y_{1}^{**}$, $y_{2}^{**}$ in Eq. (C.4) by $1$,
$1-p_{I}$, $1-p_{I}$, and adding them together, we get
$Y=\dfrac{\dfrac{\alpha}{r+\mu}\varepsilon_{0}\Lambda}{\alpha+\dfrac{\mu
n^{*}}{Y}}+(1-p_{I})\dfrac{\dfrac{\alpha}{r+\mu}\varepsilon_{1}\Lambda}{\alpha+\dfrac{\mu
n^{*}}{Y}}+(1-p_{I})\dfrac{\dfrac{\alpha}{r+\mu}(1-p_{S})\varepsilon_{2}\Lambda}{\alpha(1-p_{S})+\dfrac{\mu
n^{*}}{Y}},$ (C.6)
which, for $Y\neq 0$, simplifies to the quadratic equation
$aY^{2}+bY+c=0,$ (C.7)
where
$\left\\{\begin{aligned} a=&~{}\dfrac{\alpha}{\mu n^{*}}(1-p_{S}),\\\
b=&~{}1-p_{S}+1-\mathcal{R}_{0}+\frac{\alpha}{r+\mu}p_{S}(\varepsilon_{0}+(1-p_{I})\varepsilon_{1}),\\\
c=&~{}\frac{\mu n^{*}}{\alpha}(1-\mathcal{R}_{0}).\\\ \end{aligned}\right.$
We can see that $a>0$ always holds. The signs of $b$ and $c$, however, depend
on $\mathcal{R}_{0}$. Vieta’s formulas then determine whether a positive root $Y$
exists. When $\mathcal{R}_{0}<1$, we have $c/a>0$ and $-b/a<0$;
therefore, no positive root exists. When $\mathcal{R}_{0}>1$, we have $c/a<0$;
therefore, one root is positive and the other is negative. The positive root
is the unique admissible value of
$Y=y_{0}^{**}+(1-p_{I})(y_{1}^{**}+y_{2}^{**})$. Then, Eq. (C.4) can provide the
unique solution of $y_{0}^{**}$, $y_{1}^{**}$, and $y_{2}^{**}$ (note that the
negative root of $Y$ cannot lead to positive $y_{0}^{**}$, $y_{1}^{**}$, and
$y_{2}^{**}$ because we have previously shown $y_{0}^{**}>0$, $y_{1}^{**}>0$,
and $y_{2}^{**}>0$ if $\mathcal{R}_{0}>1$).
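As a numerical sanity check of the argument above, the sketch below (with illustrative parameter values of our own choosing) computes $\mathcal{R}_{0}$, solves the quadratic (C.7) for its positive root $Y$, and back-substitutes into Eq. (C.4); when $\mathcal{R}_{0}>1$, the recovered $y_{0}^{**}$, $y_{1}^{**}$, $y_{2}^{**}$ are all positive.

```python
import numpy as np

# Illustrative parameters (our choice), chosen so that R0 > 1.
alpha, r, mu, Lam = 0.4, 0.1, 0.01, 100.0
p_I, p_S = 0.5, 0.5
eps0, eps1, eps2 = 0.2, 0.3, 0.5
n_star = Lam / mu                                   # n* = Lambda / mu (Eq. (4))

R0 = alpha / (r + mu) * (eps0 + (1 - p_I) * eps1 + (1 - p_S) * (1 - p_I) * eps2)

# Coefficients of a*Y^2 + b*Y + c = 0 (Eq. (C.7))
a = alpha / (mu * n_star) * (1 - p_S)
b = 1 - p_S + 1 - R0 + alpha / (r + mu) * p_S * (eps0 + (1 - p_I) * eps1)
c = mu * n_star / alpha * (1 - R0)

# For R0 > 1 we have c/a < 0, so the discriminant is positive and this root is
# the unique positive Y = y0** + (1 - p_I)(y1** + y2**).
Y = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)

# Back-substitution into Eq. (C.4)
den = alpha + mu * n_star / Y
y0 = (alpha / (r + mu)) * eps0 * Lam / den
y1 = (alpha / (r + mu)) * eps1 * Lam / den
y2 = (alpha / (r + mu)) * (1 - p_S) * eps2 * Lam / (alpha * (1 - p_S) + mu * n_star / Y)
print(R0 > 1, y0 > 0, y1 > 0, y2 > 0)
```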
## References
* [1] Lynne Peeples. “Face masks for COVID pass their largest test yet”. Nature (Lond.), 2021.
* [2] William Ogilvy Kermack and Anderson G McKendrick. “A contribution to the mathematical theory of epidemics”. Proceedings of the Royal Society of London A, vol. 115, no. 772, 700–721, 1927.
* [3] Nathaniel S Barlow and Steven J Weinstein. “Accurate closed-form solution of the SIR epidemic model”. Physica D: Nonlinear Phenomena, vol. 408, 132540, 2020.
* [4] Steven J Weinstein, Morgan S Holland, Kelly E Rogers and Nathaniel S Barlow. “Analytic solution of the SEIR epidemic model via asymptotic approximant”. Physica D: nonlinear phenomena, vol. 411, 132633, 2020.
* [5] Chaoqian Wang and Chaochao Huang. “An epidemic model with the closed management in Chinese universities for COVID-19 prevention”. In Journal of Physics: Conference Series, volume 1707, page 012027. IOP Publishing, 2020.
* [6] Xinwei Wang, Haijun Peng, Boyang Shi et al. “Optimal vaccination strategy of a constrained time-varying SEIR epidemic model”. Communications in Nonlinear Science and Numerical Simulation, vol. 67, 37–48, 2019.
* [7] Feng Fu, Daniel I Rosenbloom, Long Wang and Martin A Nowak. “Imitation dynamics of vaccination behaviour on social networks”. Proceedings of the Royal Society B: Biological Sciences, vol. 278, no. 1702, 42–49, 2011.
* [8] Xinyu Wang, Danyang Jia, Shupeng Gao et al. “Vaccination behavior by coupling the epidemic spreading with the human decision under the game theory”. Applied Mathematics and Computation, vol. 380, 125232, 2020.
* [9] Muntasir Alam, Kazuki Kuga and Jun Tanimoto. “Three-strategy and four-strategy model of vaccination game introducing an intermediate protecting measure”. Applied Mathematics and Computation, vol. 346, 408–422, 2019.
* [10] Kazuki Kuga and Jun Tanimoto. “Which is more effective for suppressing an infectious disease: imperfect vaccination or defense against contagion?”. Journal of Statistical Mechanics: Theory and Experiment, vol. 2018, no. 2, 023407, 2018.
* [11] Muntasir Alam, Masaki Tanaka and Jun Tanimoto. “A game theoretic approach to discuss the positive secondary effect of vaccination scheme in an infinite and well-mixed population”. Chaos, Solitons & Fractals, vol. 125, 201–213, 2019.
* [12] Laijun Zhao, Qin Wang, Jingjing Cheng et al. “Rumor spreading model with consideration of forgetting mechanism: A case of online blogging livejournal”. Physica A: Statistical Mechanics and its Applications, vol. 390, no. 13, 2619–2625, 2011.
* [13] Laijun Zhao, Wanlin Xie, H Oliver Gao et al. “A rumor spreading model with variable forgetting rate”. Physica A: Statistical Mechanics and its Applications, vol. 392, no. 23, 6146–6154, 2013.
* [14] Chaoqian Wang. “Dynamics of conflicting opinions considering rationality”. Physica A: Statistical Mechanics and its Applications, vol. 560, 125160, 2020.
* [15] Chaoqian Wang, Ziwei Wang and Qiuhui Pan. “Injurious information propagation and its global stability considering activity and normalized recovering rate”. Plos One, vol. 16, no. 10, e0258859, 2021.
* [16] Cruz Vargas-De-León. “On the global stability of SIS, SIR and SIRS epidemic models with standard incidence”. Chaos, Solitons & Fractals, vol. 44, no. 12, 1106–1110, 2011.
* [17] Jianquan Li, Yanni Xiao, Fengqin Zhang and Yali Yang. “An algebraic approach to proving the global stability of a class of epidemic models”. Nonlinear Analysis: Real World Applications, vol. 13, no. 5, 2006–2016, 2012.
* [18] Sanusi Side, Wahidah Sanusi, Muhammad Kasim Aidid and Sahlan Sidjara. “Global stability of SIR and SEIR model for tuberculosis disease transmission with lyapunov function method”. Asian Journal of Applied Sciences, vol. 9, no. 3, 87–96, 2016.
* [19] Hongbin Guo, Michael Y Li and Zhisheng Shuai. “Global stability of the endemic equilibrium of multigroup SIR epidemic models”. Canadian Applied Mathematics Quarterly, vol. 14, no. 3, 259–284, 2006.
* [20] Ruoyan Sun. “Global stability of the endemic equilibrium of multigroup SIR models with nonlinear incidence”. Computers & Mathematics with Applications, vol. 60, no. 8, 2286–2291, 2010.
* [21] Yoshiaki Muroya, Yoichi Enatsu and Toshikazu Kuniya. “Global stability for a multi-group SIRS epidemic model with varying population sizes”. Nonlinear Analysis: Real World Applications, vol. 14, no. 3, 1693–1704, 2013.
* [22] Lefeng Cheng, Linfei Yin, Jianhui Wang et al. “Behavioral decision-making in power demand-side response management: A multi-population evolutionary game dynamics perspective”. International Journal of Electrical Power & Energy Systems, vol. 129, 106743, 2021.
* [23] Lefeng Cheng, Yang Chen and Guiyun Liu. “2PnS-EG: A general two-population $n$-strategy evolutionary game for strategic long-term bidding in a deregulated market under different market clearing mechanisms”. International Journal of Electrical Power & Energy Systems, vol. 142, 108182, 2022.
* [24] Weiqiang Li, Jin Zhou and Jun-an Lu. “The effect of behavior of wearing masks on epidemic dynamics”. Nonlinear Dynamics, vol. 101, no. 3, 1995–2001, 2020.
* [25] João AM Gondim. “Preventing epidemics by wearing masks: An application to COVID-19”. Chaos, Solitons & Fractals, vol. 143, 110599, 2021.
* [26] Pierre Auger and Ali Moussaoui. “On the threshold of release of confinement in an epidemic SEIR model taking into account the protective effect of mask”. Bulletin of Mathematical Biology, vol. 83, no. 4, 1–18, 2021.
* [27] Nurudeen O Lasisi and Kolawole A Adeyemo. “Modeling the effect of distancing and wearing of face masks on transmission of Covid-19 infection dynamics”. Journal of Complexity in Health Sciences, vol. 4, no. 1, 10–20, 2021.
* [28] Lili Han, Qiuhui Pan, Baolin Kang and Mingfeng He. “Effects of masks on the transmission of infectious diseases”. Advances in Difference Equations, vol. 2021, no. 1, 1–17, 2021.
* [29] Scott E Page. The model thinker: What you need to know to make data work for you. Basic Books, 2018.
* [30] Serge Galam. Sociophysics: A physicist’s modeling of psycho-political phenomena. Springer, 2016.
* [31] Pauline Van den Driessche and James Watmough. “Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission”. Mathematical Biosciences, vol. 180, no. 1-2, 29–48, 2002.
* [32] Joseph P La Salle. The stability of dynamical systems. SIAM, 1976.
* [33] Mohammed A Boraey. “An analytical model for the effective filtration efficiency of single and multiple face masks considering leakage”. Chaos, Solitons & Fractals, vol. 152, 111466, 2021.
# HOI4ABOT: Human-Object Interaction Anticipation for Human Intention Reading
Collaborative roBOTs
Esteve Valls Mascaro1, Daniel Sliwowski1, Dongheui Lee1,2
1 Technische Universität Wien (TU Wien), Autonomous Systems Lab
2 Institute of Robotics and Mechatronics (DLR), German Aerospace Center
{esteve.valls.mascaro, daniel.sliwowski<EMAIL_ADDRESS>
evm7.github.io/HOI4ABOT_page
###### Abstract
Robots are becoming increasingly integrated into our lives, assisting us in
various tasks. To ensure effective collaboration between humans and robots, it
is essential that they understand our intentions and anticipate our actions.
In this paper, we propose a Human-Object Interaction (HOI) anticipation
framework for collaborative robots. We propose an efficient and robust
transformer-based model to detect and anticipate HOIs from videos. This
enhanced anticipation empowers robots to proactively assist humans, resulting
in more efficient and intuitive collaborations. Our model outperforms state-of-the-art results in HOI detection and anticipation on the VidHOI dataset, with increases of 1.76% and 1.04% in mAP, respectively, while being 15.4 times faster.
We showcase the effectiveness of our approach through experimental results in
a real robot, demonstrating that the robot’s ability to anticipate HOIs is key
for better Human-Robot Interaction.
Figure 1: Overview of our HOI4ABOT framework. A robot leverages RGB data to
detect and anticipate the human-object interactions in its surroundings and
assist the human in a timely manner. The robot anticipates the human intention
of holding the cup, so it prepares itself for pouring by grabbing the bottle.
The robot reacts to the human holding the cup by pouring water.
> Keywords: Human-Object Interaction, Collaborative Robots, Human Intention
## 1 Introduction
In recent years, the field of robotics has witnessed significant interest in
human-robot interaction (HRI), with a focus on enhancing the ability of robots
to assist humans in various tasks [1, 2, 3, 4]. To facilitate effective human-
robot collaboration (HRC), it is crucial for the robot to possess an
understanding of both the surrounding environment and the individuals within
it, including their intentions. For example, consider the scenario visualized
in Fig. 1 where a robot assists a person in the kitchen. By recognizing the
person’s intention to prepare a drink and understanding their actions such as
reaching for the cup, the robot can proactively provide the necessary support
in a timely manner, such as picking up a bottle and pouring water. Therefore,
by recognizing and anticipating human-object interactions (HOIs), the robot
gets a solid understanding of the person’s intention and better caters to
their needs [1].
While HOI is a long-standing challenge in the computer vision community, most
approaches only consider the detection of these interactions from single
frames [5, 6, 7, 8, 9, 10]. However, to minimize latency when a robot assists a
person, detection alone is not enough; anticipation is also needed
[11, 12, 13]. Therefore, we consider the task of HOI detection and
anticipation, and we propose to leverage temporal cues from videos to better
understand human intention. HOI recognition in videos has been explored
recently [14, 15, 16, 17]. In this paper, we propose a real-time deep learning
architecture that combines pre-trained models with spatio-temporal consistency
to successfully detect and anticipate HOIs. Our model outperforms the state-
of-the-art in VidHOI dataset [14] in terms of accuracy and speed. Moreover, we
ensemble our framework with behavior trees [18] to adapt in real-time the
robot actions for better interaction with the human. We implement our
framework in a real robot and demonstrate the effectiveness of our approach in
the pouring task, showcasing the robot’s ability to anticipate HOIs and
proactively assist the human while reducing latency in the execution.
The contributions of our paper are summarized next:
* •
A real-time transformer-based model for HOI detection and anticipation.
* •
A novel patch merging strategy to align image features to pre-extracted
bounding boxes.
* •
To the best of our knowledge, we are the first to assess HOI anticipation in a
real robot experiment for a collaborative task.
## 2 Related Works
### 2.1 Human Intention in Robotics
Recognizing and predicting human intention is crucial to ensure seamless
human-robot collaboration (HRC) [12, 13, 19, 20]. [12] observed significant
differences in the robot’s contribution and commitment in an experiment where a
human carried car parts to a shared workspace and an anticipatory robot
assembled them. Recent works in computer vision have highlighted the potential
of harnessing human intention to better anticipate future human actions [21,
22, 23]. In particular, [23] leverages the detection of human-object
interactions (HOIs) within a scene to understand this high-level intention of
the individuals. Despite the benefits of using HOIs, their application in
robotics from vision data has not been extensively explored [1]. [4] proposes
a conditional random field (CRF) to assess the feasibility of a robot
executing a given task based on the anticipated human actions. The CRF
predicts the next human actions by considering object affordances and
positions in the future. However, [4] is not scalable to new tasks as the CRF
relies on hand-crafted features. Instead, we train our model in the largest
HOI video dataset available to learn robust features that enhance the robot’s
ability to anticipate human intention. Recently, [24] proposed a spatial-
attention network to extract scene graphs from images in an industrial
scenario. However, [24] neglects the time dependency in the task and does not
anticipate the human intention to enhance HRC. [25, 26, 27] also adopted scene
graphs but focused on task planning.
### 2.2 HOI Detection and Anticipation
HOI focuses on localizing the humans and objects in a scene and classifying
their interactions using a ⟨human, interaction, object⟩ triplet (e.g.
⟨person1, hold, cup⟩). HOI task has recently gained attention in the computer
vision community due to its promising applications in downstream tasks, such
as scene understanding [28] or action recognition [29]. The primary focus is
the detection of HOI from images [5, 6, 7, 8, 9, 10]. Some [7, 8, 9] adopt a
one-stage approach, directly operating on the images to predict the HOI
triplet. However, these methods require higher training resources and do not
benefit from pre-trained object detections. On the contrary, [5, 6, 10] employ
a two-stage method to first locate the objects and humans in the image using
pre-trained models and then classify each interaction using multi-stream
classifiers. In particular, [10] uses a ViT transformer [30] to extract the
patched features and proposes Masking with Overlapped Area (MOA) to extract
features per object or human through a self-attention layer. Our work shows
that weighting the patched features is sufficient to outperform MOA while not
requiring any additional parameters.
While processing individual frames may be adequate for HOI detection, we argue
that HOI anticipation benefits from leveraging the temporal aspects inherent
in these interactions. Several studies in HOI detection address this temporal
dimension by focusing on videos [14, 15, 16, 17]. [16] fuses patched features
at multiple levels to generate instance representations utilizing a deformable
tokenizer. [14] employs a two-stage model that uses 3D convolutions to merge
features across the temporal dimension. [15] also adopts a two-stage approach
but relies on a spatio-temporal transformer [31] to detect the interactions in
videos. Finally, [17] extends the architecture from [15] by concatenating the
human and object temporal features and fusing them with the human gaze
information using cross-attention. [17] is the first work to propose both HOI
detection and anticipation in videos. Similarly to [10], [17] also adopts
focal loss [32] to tackle the HOI imbalance in training. We adopt the findings
from [17] but observe their model to not be feasible to work in real-time.
Moreover, [17] trains a unique model for each anticipation horizon in the
future. Instead, we propose a novel real-time multi-head model that can detect
and anticipate HOIs in a single step.
### 2.3 Task and Motion Planning
For a robot to effectively assist and collaborate with a human in a particular
task, it needs to understand the structure and order of actions involved,
enabling the robot to achieve desired goals [33]. Finite State Machines (FSM)
have been the standard choice for representing the task structure for a long
time [34, 35]. However, scaling FSM poses a challenge due to their lack of
modularity and flexibility [18]. Recently, Behavior Trees (BT) [18] have
gained popularity as they can facilitate task planning in HRC tasks [36, 37],
where the environment is dynamic. Our work adopts BT and defines its behavior
based on the anticipated human intention and its uncertainty. Once a suitable
chain of actions has been found by the task planner, motion planning is
responsible for determining the low-level movements of the robot. Motion
planning is a core problem in robotics [38, 39, 40, 41, 42]. [38, 39] proposed
to randomly sample points in the state space towards the goal. However, they
consider humans as obstacles or constraints, not collaborators. Some
approaches [40, 41] formulate motion planning as an optimization problem, but
their applications in HRC are limited as determining the cost function related
to humans is not trivial. Alternatively, motion generators can be learned from
human demonstrations to obtain more natural movement [42, 43]. Dynamic
Movement Primitives (DMPs) [42] have been successfully employed in HRC, by
dynamically adapting their parameters [44, 45, 46].
## 3 Methodology
In this section, we present our Human-Object Interaction Anticipation for
CollAborative roBOTs (HOI4ABOT) framework. First, we formulate the HOI
detection and anticipation task. Then, we describe the integration of the deep
learning architecture into the robot framework.
### 3.1 Human-Object Interaction
Let $\mathbf{V}=[\mathbf{f}_{-T},\cdots,\mathbf{f}_{0}]$ be a frame sequence
of duration $T+1$. The goal is to predict the interaction class $i_{k}^{\tau}$
in the subsequent time $\tau$ between any human $\mathbb{H}_{n}$ and object
$\mathbb{O}_{m}$ pair $\mathbb{P}_{k}=\\{\mathbb{H}_{n},\mathbb{O}_{m}\\}$
observed during the video $\mathbf{V}$, where $0\leq n\leq N$, $0\leq m\leq M$,
and $0\leq k\leq K=M\times N$. A visual illustration of our HOI4ABOT architecture is
depicted in Fig. 2.
Figure 2: HOI4ABOT architecture overview. We consider a video of $T+1$ frames
with the pre-extracted object and human bounding boxes $\mathbf{B}^{t}$. Our
module initially extracts relevant features per frame (left) to later on
detect and anticipate HOIs (right) later. First, a ViT backbone [47] extracts
patch-based local $\mathbf{E}^{t}$ and global $\mathbf{cls}_{t}$ features per
each frame $t$. Then, we obtain features per human $\mathbf{e}_{n}^{t}$ and
object $\mathbf{e}_{m}^{t}$ by aligning $\mathbf{E}^{t}$ to their bounding
boxes, as shown in light blue. We also project each $\mathbf{B}^{t}$ to
$\hat{\mathbf{B}}^{t}$ using a box embedder [48], and the object category to
$\mathrm{s_{m}}$ using CLIP [49]. Our Dual Transformer, shown in purple,
leverages the human and object-constructed windows (sequences in red and blue
respectively) through two cross-attention transformers, where $\mathrm{K}$ey,
$\mathrm{Q}$uery, and $\mathrm{V}$alue are used in the attention mechanism.
$\mathrm{q}$ is a learnable parameter to learn the evolution of the location
in time. Finally, we project the enhanced last feature from the Human Blender
to detect and anticipate HOIs at several time horizons $i_{k}^{\tau}$ in the
future through our Hydra head (shown in light green).
Detection and tracking. HOI4ABOT is a two-stage method. First, we leverage
off-the-shelf state-of-the-art object detection and tracking methods to
identify the bounding boxes $\mathbf{B}_{m}\in\mathbb{R}^{(T+1)\times 4}$,
label $c_{m}$, and track identifier $id_{m}$ for any object
$\mathbb{O}_{m}=\\{id_{m},c_{m},\mathbf{B}_{m}\\}$ in the video $\mathbf{V}$.
$\mathbf{B}_{m}=[\mathbf{b}_{m}^{-T},\cdots,\mathbf{b}_{m}^{0}]$ represents a
list of $XY$ pixel coordinates of the top-left corner and right-bottom corner
of the bounding box that locates a given object $\mathbb{O}_{m}$ at each frame
$\mathbf{f}_{t}$ of $\mathbf{V}$. We obtain the same information for each
human $\mathbb{H}_{n}$. In the second stage, we exploit each individual pair
$\mathbb{P}_{k}=\\{\mathbb{H}_{n},\mathbb{O}_{m}\\}$ to predict its
interaction class $i_{k}^{\tau}$ in a given time horizon $\tau$ using various
data modalities. This requires understanding the visual features of the pair,
how their spatial relationship evolves through time
$\mathbf{B}_{k}=[\mathbf{B}_{n},\mathbf{B}_{m}]$ and also the intrinsic
semantics of the object $c_{m}$.
Visual features. We use Dinov2 [47] as a pre-trained Visual Transformer (ViT)
[30] backbone to divide each frame $\mathbf{f}_{t}$ into $L\times L$ patches
and project each patch $\mathbf{p}_{l}^{t}$ to a visual token
$\mathbf{e}_{l}^{t}$ that encodes the image information of that patch $l$. In
total, the image encoder obtains $\mathbf{E}^{t}\in\mathbb{R}^{L^{2}\times d}$
that captures the local visual features, plus the global context vector
$\mathbf{cls}_{t}\in\mathbb{R}^{d}$ of a frame $\mathbf{f}_{t}$.
We develop a simple but efficient technique, called Patch Merger, to extract
individual features per human and object from a frame through a single step.
Let $\mathbb{O}_{m}^{t}$ be an object $m$ with its box $\mathbf{b}_{m}^{t}$ at
frame $\mathbf{f}_{t}$. First, we create a binary mask for $\mathbf{f}_{t}$,
where $1$ denotes a pixel lying within $\mathbf{b}_{m}^{t}$. We convert the
binary mask into a sequence of patches following [30]. Then, we obtain a
weighting vector $\bm{\omega}_{m}^{t}$ by computing the percentage that
$\mathbf{b}_{m}^{t}$ overlaps each patch using 2D Average Pooling and
normalization. Finally, we compute the weighted sum of local visual features
$\mathbf{e}_{m}^{t}=\sum{\bm{\omega}_{m}^{t}\mathbf{E}^{t}}$, obtaining the
individual representation of $\mathbb{O}_{m}^{t}$. Compared to [10], which
normalizes along the patch dimension and uses a quantized sequence as the
attention mask for a self-attention layer, our algorithm is parameter-free,
more efficient, and shows better performance in our experiments.
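A minimal sketch of the Patch Merger in PyTorch is given below; the patch size, the feature dimension, and the integer rounding of box coordinates are illustrative assumptions rather than the trained configuration.

```python
import torch
import torch.nn.functional as F

def patch_merger(patch_feats, box, image_hw, patch=14):
    """Weighted pooling of ViT patch features inside one bounding box.

    patch_feats: (H//patch * W//patch, d) local tokens E^t from the ViT backbone.
    box: (x1, y1, x2, y2) in pixels; image_hw: (H, W), assumed divisible by `patch`.
    Returns a single d-dim embedding e_m^t for the human or object.
    """
    H, W = image_hw
    # binary mask of pixels lying inside the box
    mask = torch.zeros(1, 1, H, W)
    x1, y1, x2, y2 = [int(round(v)) for v in box]
    mask[..., y1:y2, x1:x2] = 1.0
    # fraction of each patch covered by the box (2D average pooling), then normalize
    omega = F.avg_pool2d(mask, kernel_size=patch, stride=patch).flatten()
    omega = omega / (omega.sum() + 1e-8)
    # weighted sum of the local patch features
    return omega @ patch_feats
```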
We propose to capture the context within a frame using
$\mathbf{cls}_{t}\in\mathbb{R}^{d}$, contrary to the spatial transformer
proposed in [17]. We claim that this context (e.g. a kitchen, an office)
should be invariant in short time periods and be the dominant component among
all $\mathbf{cls}_{t}$ tokens. Consequently, we use Average Pooling to reduce
the $T+1$ $\mathbf{cls}_{t}$ features to a single representation
$\mathbf{\widehat{cls}}=AvgPool([\mathbf{cls}_{-T},\cdots,\mathbf{cls}_{0}])$,
which is the context of the scene.
Spatial features. For each bounding box $\mathbf{b}_{m}^{t}$, we extract the
$XY$ normalized pixel coordinates for the top-left corner and right-bottom
corner. Then, we adopt a positional encoding using random spatial frequencies
[48] to embed the location of each point and merge these two corner
representations into one box representation
$\hat{\mathbf{b}}_{m}^{t}\in\mathbb{R}^{d}$ using a fully connected layer.
This process is also applied to humans, thus obtaining
$\hat{\mathbf{b}}_{n}^{t}\in\mathbb{R}^{d}$ to encode each human
$\mathbb{H}_{n}^{t}$ position in the scene.
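A possible sketch of this box embedding is given below, assuming Gaussian random Fourier features as in [48]; the frequency scale, number of frequencies, and output dimension are illustrative choices, not the paper's values.

```python
import torch
import torch.nn as nn

class BoxEmbedder(nn.Module):
    """Embed a bounding box with random-Fourier-feature positional encoding.

    Each normalized corner (x, y) is projected with a fixed random Gaussian
    matrix B, passed through sin/cos, and the two corner codes are fused into
    one d-dim box token with a linear layer.
    """
    def __init__(self, d=384, n_freq=64, sigma=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(2, n_freq) * sigma)  # fixed random frequencies
        self.fuse = nn.Linear(2 * 2 * n_freq, d)                   # 2 corners x (sin, cos)

    def forward(self, boxes):
        # boxes: (..., 4) = (x1, y1, x2, y2), normalized to [0, 1]
        corners = boxes.reshape(*boxes.shape[:-1], 2, 2)           # (..., 2 corners, 2 coords)
        proj = 2 * torch.pi * corners @ self.B                     # (..., 2, n_freq)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1)          # (..., 2, 2*n_freq)
        return self.fuse(enc.flatten(-2))                          # (..., d)
```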
Object semantics. Leveraging the object semantics is essential to
understanding the possible interactions in a given pair. While ‘holding a cup’
or ‘holding a bottle’ are both feasible, ‘holding a car’ becomes more
unrealistic. Thus, we extract object semantic information
$\mathbf{s}_{m}\in\mathbb{R}^{d}$ per object $\mathbb{O}_{m}$ to help
the model predict the interaction class $i_{k}^{\tau}$. For that, we use the
CLIP text encoder [49].
Pair Interaction. We construct a temporal architecture that leverages the
evolution of the interactions between a human $\mathbb{H}_{n}$ and an object
$\mathbb{O}_{m}$ in time. We process each pair independently, and therefore we
focus on a single pair in the formulation. We stack both the visual tokens
$\mathbf{E}_{n}=[\mathbf{e}_{n}^{-T},\cdots,\mathbf{e}_{n}^{0}]$ and the
spatial features
$\hat{\mathbf{B}}_{n}=[\hat{\mathbf{b}}_{n}^{-T},\cdots,\hat{\mathbf{b}}_{n}^{0}]$
in time and construct a human temporal window
$\mathbf{W_{H}}_{n}=[\hat{\mathbf{B}}_{n},\mathbf{E}_{n}]$. Similarly, we also
construct an object’s temporal window
$\mathbf{W_{O}}_{m}=[\hat{\mathbf{B}}_{m},\mathbf{E}_{m}]$. We add a
sinusoidal positional encoding to $\mathbf{W_{H}}_{n}$ and
$\mathbf{W_{O}}_{m}$, Later, we prepend the global visual feature and a
learnable spatial parameter $[\mathbf{q},\mathbf{\widehat{cls}}]$ to
$\mathbf{W_{H}}_{n}$. $\mathbf{q}$ learns the evolution of the location of the
human in time through the attention mechanism. We also extend
$\mathbf{W_{O}}_{m}$ by prepending the semantic token $\mathbf{s}_{m}$ that
encodes the object type. Therefore, we obtain a temporal feature
$\mathbf{W_{H}}_{n}\in\mathbb{R}^{(T+2)\times d}$ and
$\mathbf{W_{O}}_{m}\in\mathbb{R}^{(T+2)\times d}$ per pair.
To extract the HOI relationships between $\mathbb{H}_{n}$ and
$\mathbb{O}_{m}$, we train a dual transformer with cross-attention layers.
First, an Object Blender transformer enhances the object window
$\mathbf{W_{O}}_{m}$ based on the human knowledge $\mathbf{W_{H}}_{n}$. Then,
the blended object features $\widehat{\mathbf{W_{O}}}_{m}$ are used to extend
the human representation $\mathbf{W_{H}}_{n}$ in the Human Blender transformer
to $\widehat{\mathbf{W_{H}}}_{n}$. Finally, we extract the last token from
$\widehat{\mathbf{W_{H}}}_{n}$, which encodes the most current status of the
scene, and classify the interaction pair $i_{k}^{\tau}$ using a fully
connected layer. As a given human-object pair can have multiple interactions
simultaneously, we use a sigmoid function and define a threshold to classify
the current interactions.
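Below is a condensed PyTorch sketch of this dual cross-attention design, with a single Blender block per branch for brevity; the actual depth, hidden sizes, and normalization placement of the trained model are not specified here and are our assumptions.

```python
import torch.nn as nn

class BlenderLayer(nn.Module):
    """One cross-attention block: queries from one stream attend to the other."""
    def __init__(self, d=384, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.n1, self.n2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, query, context):
        query = query + self.attn(self.n1(query), context, context)[0]
        return query + self.ff(self.n2(query))

class DualBlender(nn.Module):
    """Sketch of the dual transformer: Object Blender first, then Human Blender."""
    def __init__(self, d=384):
        super().__init__()
        self.object_blender = BlenderLayer(d)
        self.human_blender = BlenderLayer(d)

    def forward(self, W_h, W_o):
        # W_h, W_o: (batch, T+2, d) human and object temporal windows
        W_o = self.object_blender(W_o, W_h)   # object window attends to the human window
        W_h = self.human_blender(W_h, W_o)    # human window attends to the blended object
        return W_h[:, -1]                     # last token encodes the current status
```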
Multi-head classification for multiple future horizons. The goal is to
predict the interaction class $i_{k}^{\tau}$ in the subsequent time $\tau$
between any human $\mathbb{H}_{n}$ and object $\mathbb{O}_{m}$ pair
$\mathbb{P}_{k}=\\{\mathbb{H}_{n},\mathbb{O}_{m}\\}$. We consider both HOI
detection ($\tau=0$) and anticipation at multiple future horizons ($\tau>0$).
Contrary to [17], which trains one model for each $\tau$, we develop a single
model that can predict interactions at multiple time horizons. For that, we
freeze the HOI4ABOT model trained on the detection task and train an
additional linear layer that projects the last token from
$\widehat{\mathbf{W_{H}}}_{n}$ to the interaction for the particular $\tau$.
We call this shared backbone the Hydra variant, which allows us to
simultaneously predict interactions across multiple $\tau$, making our model
faster and more efficient. We consider our Hydra variant with $A$ heads, one per anticipation horizon.
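A minimal sketch of the Hydra heads is shown below, assuming a `backbone` that returns the last Human Blender token per pair; the feature size, horizon set, and number of interaction classes are illustrative.

```python
import torch.nn as nn

class HydraHead(nn.Module):
    """One frozen backbone with one linear head per anticipation horizon tau."""
    def __init__(self, backbone, d=384, n_interactions=25, horizons=(0, 1, 3, 5)):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():      # freeze the detection-trained model
            p.requires_grad = False
        self.heads = nn.ModuleDict({str(t): nn.Linear(d, n_interactions) for t in horizons})

    def forward(self, *inputs):
        feat = self.backbone(*inputs)             # (batch, d) last Human Blender token
        # multi-label interaction scores per horizon, all from one forward pass
        return {int(t): head(feat).sigmoid() for t, head in self.heads.items()}
```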
### 3.2 Motion generation and task planning
Motion Generation. The proposed framework segments the complex movements into
simpler movement primitives, which are learned with DMPs. To collect
demonstrations of each movement primitive, we employ kinesthetic teaching,
where an operator guides the robot’s end effector by physically manipulating
it [50]. Generating the motion requires estimating the goal position, which we
obtain through the use of a calibrated vision system that relies on a pre-
trained object detector (i.e. YOLOv8 [51]) and a depth camera. The position of
the goals with respect to the robot base is computed using the intrinsic and
extrinsic camera matrices.
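For concreteness, a minimal one-dimensional discrete DMP sketch is given below, following the standard formulation of [42]; the gains, basis widths, and goal handling are illustrative choices, not the values used on the robot.

```python
import numpy as np

def fit_dmp(demo, dt, alpha=25.0, beta=6.25, alpha_s=4.0, n_basis=20):
    """Fit a 1-D discrete DMP to one kinesthetic demonstration.

    Canonical system:       tau * s'  = -alpha_s * s
    Transformation system:  tau * v'  = alpha * (beta * (g - x) - v) + f(s),  tau * x' = v
    Forcing term:           f(s) = s * (g - x0) * sum_i(psi_i(s) * w_i) / sum_i(psi_i(s))
    """
    x = np.asarray(demo, float)
    tau = (len(x) - 1) * dt
    v = np.gradient(x, dt) * tau                      # scaled velocity tau * x'
    a = np.gradient(v, dt) * tau                      # scaled acceleration tau * v'
    x0, g = x[0], x[-1]
    s = np.exp(-alpha_s * np.arange(len(x)) * dt / tau)
    f_target = a - alpha * (beta * (g - x) - v)
    c = np.exp(-alpha_s * np.linspace(0, 1, n_basis))  # basis centres in phase space
    h = n_basis**1.5 / c                               # heuristic basis widths
    psi = np.exp(-h[None, :] * (s[:, None] - c[None, :]) ** 2)
    xi = s * (g - x0 + 1e-9)
    w = np.array([(psi[:, i] * xi) @ f_target / ((psi[:, i] * xi) @ xi + 1e-9)
                  for i in range(n_basis)])            # locally weighted regression
    return dict(w=w, c=c, h=h, alpha=alpha, beta=beta, alpha_s=alpha_s, tau=tau, x0=x0)

def rollout_dmp(dmp, g, dt, n_steps):
    """Integrate the fitted DMP toward a (possibly new) goal g, e.g. a detected object."""
    x, v, s = dmp["x0"], 0.0, 1.0
    path = [x]
    for _ in range(n_steps):
        psi = np.exp(-dmp["h"] * (s - dmp["c"]) ** 2)
        f = s * (g - dmp["x0"]) * (psi @ dmp["w"]) / (psi.sum() + 1e-9)
        v += dt * (dmp["alpha"] * (dmp["beta"] * (g - x) - v) + f) / dmp["tau"]
        x += dt * v / dmp["tau"]
        s += dt * (-dmp["alpha_s"] * s) / dmp["tau"]
        path.append(x)
    return np.array(path)
```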
Task planning. Properly scheduling the acquired movement primitives is crucial
to reach a desired goal. We implement Behavior Trees (BT) [18] as a ROS node
that subscribes to the predicted HOIs and their confidence. The reactiveness
of BTs allows adapting the robot’s behavior by considering the anticipated
human intention and changing to the appropriate sub-tree if needed. This is
motivated by how humans interact with each other. For example, if a bartender
observes a client approaching the bar, they can prepare for the interaction by
grabbing a glass, thus reducing the serving time.
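As a simplified illustration of this reactive logic (not the actual implementation, which runs as a ROS node), the sketch below builds a tiny behavior tree in plain Python whose ticks are driven by the predicted HOI, its confidence, and the anticipation horizon; the `robot` interface and the 0.5 confidence threshold are hypothetical.

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Sequence:
    """Ticks children in order; succeeds only if all children succeed."""
    def __init__(self, children): self.children = children
    def tick(self, bb):
        for c in self.children:
            if c.tick(bb) != SUCCESS:
                return FAILURE
        return SUCCESS

class Selector:
    """Ticks children in order; returns SUCCESS at the first child that succeeds."""
    def __init__(self, children): self.children = children
    def tick(self, bb):
        for c in self.children:
            if c.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return SUCCESS if self.fn(bb) else FAILURE

class Action:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): self.fn(bb); return SUCCESS

# Hypothetical pouring-task tree; the blackboard bb holds the latest prediction,
# e.g. bb = {"hoi": ("hold", "cup"), "conf": 0.8, "tau": 3, "robot": robot}.
pour_tree = Selector([
    Sequence([Condition(lambda bb: bb["hoi"] == ("hold", "cup") and bb["conf"] > 0.5 and bb["tau"] == 0),
              Action(lambda bb: bb["robot"].pour())]),          # detected holding -> pour
    Sequence([Condition(lambda bb: bb["hoi"] == ("hold", "cup") and bb["conf"] > 0.5 and bb["tau"] > 0),
              Action(lambda bb: bb["robot"].grab_bottle())]),   # anticipated holding -> prepare
    Action(lambda bb: bb["robot"].go_home()),                   # fallback behavior
])
```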
Robot control. The generated poses from the motion generator are passed to the
controller. In our system, we employ a Cartesian impedance controller [52, 53]
to achieve the compliant behavior of the manipulator. This controller enhances
the safety of human-robot collaboration by allowing the robot to respond in a
compliant manner to external forces and disturbances.
## 4 Experiments
### 4.1 Dataset and Metrics
We train and evaluate our model on the VidHOI dataset [14], the largest
dataset available for human-object interactions in videos. This dataset
encompasses 7.3 million frames with 755,000 interactions annotated at one
frame per second. To assess the performance of our approach, we adopted the
same evaluation metrics as those presented in [17]. We computed the mean
average precision (mAP) using the method presented in [54]. The mAP@50
incorporates the precision-recall curves for all interaction classes. To
determine a correct HOI triplet, three conditions need to be met: (i) the
detected bounding boxes for human and object must overlap with their
corresponding ground truths with an Intersection over Union (IoU) of at least 50%,
(ii) the predicted object category is correct, (iii) the predicted interaction
is correct. Following standard evaluation in VidHOI, we report mAP across
three different HOI sets: (i) Full: all interaction categories, (ii) Non-Rare:
frequent interactions in the validation set (more than 25 appearances), (iii)
Rare: non-frequent interactions (less than 25). Additionally, we evaluated our
approach in Oracle mode, where we use the human and object detections from
ground truth, and in Detection mode, where those are predicted using YOLOv5
[55] as in [17]. Finally, we computed the Person-wise top-k metrics [17] where
the anticipation was considered correct if one of the top-k predicted
interactions matched the ground truth.
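To make the matching rule concrete, the sketch below checks whether a predicted triplet counts as correct under the three conditions above; the dictionary keys are illustrative and not the actual evaluation-code interface.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def triplet_is_correct(pred, gt, iou_thr=0.5):
    """A predicted <human, interaction, object> triplet matches a ground-truth one
    if both boxes overlap with IoU >= 0.5 and object class and interaction agree."""
    return (iou(pred["human_box"], gt["human_box"]) >= iou_thr
            and iou(pred["object_box"], gt["object_box"]) >= iou_thr
            and pred["object_class"] == gt["object_class"]
            and pred["interaction"] == gt["interaction"])
```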
### 4.2 Quantitative evaluation
Table 1: Detection mAP.
Method | Full | Non-Rare | Rare
---|---|---|---
Oracle Mode
ST-HOI [14] | 17.6 | 27.2 | 17.3
QPIC [54] | 21.4 | 32.9 | 20.56
TUTOR [16] | 26.92 | 37.12 | 23.49
STTran [15] | 28.32 | 42.08 | 17.74
ST-Gaze [17] | 38.61 | 52.44 | 27.99
Ours (Dual) | 40.37 | 54.52 | 29.5
Ours (Stacked) | 40.55 | 53.94 | 30.26
Detection Mode
STTran [15] | 7.61 | 13.18 | 3.33
ST-Gaze [17] | 10.4 | 16.83 | 5.46
Ours (Dual) | 11.12 | 18.48 | 5.61
Ours (Stacked) | 10.79 | 17.79 | 5.42
Table 2: Anticipation mAP and person-wise top-5 metrics in Oracle mode.
Method | $\tau_{a}$ | mAP | Rec | Prec | Acc | F1
---|---|---|---|---|---|---
STTran [15] | 1 | 29.09 | 74.76 | 41.36 | 36.61 | 50.48
3 | 27.59 | 74.79 | 40.86 | 36.42 | 50.16
5 | 27.32 | 75.65 | 41.18 | 36.92 | 50.66
ST-Gaze [17] | 1 | 37.59 | 72.17 | 59.98 | 51.65 | 62.78
3 | 33.14 | 71.88 | 60.44 | 52.08 | 62.87
5 | 32.75 | 71.25 | 59.09 | 51.14 | 61.92
Ours (Dual, Scratch) | 1 | 38.46 | 73.32 | 63.78 | 55.37 | 65.59
3 | 34.58 | 73.61 | 61.7 | 54 | 64.48
5 | 33.79 | 72.33 | 63.96 | 55.28 | 65.21
Ours (Dual, Hydra) | 1 | 37.77 | 74.07 | 64.9 | 56.38 | 66.53
3 | 34.75 | 74.37 | 64.52 | 56.22 | 66.4
5 | 34.07 | 73.67 | 65.1 | 56.31 | 66.4
HOI4ABOT outperforms state-of-the-art models [14, 54, 16, 15, 17] in terms of
accuracy and speed across all tasks and scenarios, as shown in Table 1 and Table
2. Moreover, Table 2 shows that our Hydra variant outperforms all
models in the anticipation task, even our own variant that trains a separate
model from scratch for each anticipation horizon. We consider that the detections provide a great
deal of information regarding what a human is doing now, and what they might
be interested in doing next. By using the Hydra variant we ground the
anticipation to what is happening at the present time.
### 4.3 Ablation study
This section analyses our proposed approaches and their impact on the
performance of the HOI task. All results are depicted in Table 3. For
simplicity, we only consider the HOI detection task.
Table 3: Ablation study in HOI detection.
Variant | mAP
---|---
Feature blender = MOA | 40
Interaction token = Learnable | 40.29
Main branch = Object | 39.85
Transformer type = Single | 40.26
Transformer type = Stacked | 40.55
Dual | 40.37
Firstly, we explore different variations in the extraction and arrangement
features to compose the human and object windows. We compare our Patch Merger
strategy to the MOA strategy from [10]. Using MOA requires an additional self-
attention block, which increases the model’s parameters while underperforming.
Moreover, we explore different feature aggregation strategies to classify an
interaction. Instead of using the last observed token in
$\widehat{\mathbf{W_{H}}}_{n}$ for classification, we prepend an additional
learnable token to $\mathbf{W_{H}}_{n}$ which aggregates the interaction
relationships, inspired by the ViT class token [30]. However, Table 3 shows
that classifying from the last observed features is better while not requiring
additional parameters. Last, we consider varying the order of the cross-
attention branches, first the Human Blender and second the Object Blender. We
claim that the decrease in performance is due to the different behavior
between humans and objects: objects are static and therefore less informative
than humans, which are dynamic and lead the interaction.
Secondly, we assess our dual transformer by comparing it with other variants.
We consider the Single variant when only using the Human Blender transformer,
which is not able to effectively capture the HOIs. We also consider stacking
both $\mathbf{W_{H}}_{n}\in\mathbb{R}^{(T+2)\times d}$ and
$\mathbf{W_{O}}_{m}\in\mathbb{R}^{(T+2)\times d}$ to a single feature window
pair, $\mathbf{WP}_{k}\in\mathbb{R}^{(T+2)\times 2d}$. We observe slight
improvements in this variant in terms of mAP when detecting in the Oracle
mode, but it underperforms in the Detection mode and for the anticipation
tasks, as shown in Appendix E.
Finally, we compare the inference time of our model to [17] to assess the
efficiency in real-world applications in robots. Our Dual variant is $15.4$
times faster than [17] for the detection task. [17] requires extracting gaze
maps, which drastically slows down the inference speed of their model. When
using our Hydra model, we obtain interactions for the time horizons 0, 1, 3,
and 5 using one forward pass, with nearly the same inference speed and
parameters as using one head. More information can be found in Appendix D.
### 4.4 Real World Experiments
HOI detection and anticipation are essential for robots to comprehend the
surrounding humans and better predict their needs, so the robot can assist in
a timely manner. We conduct real experiments with a Franka Emika Panda robot
to showcase the benefit of our approach in collaborative robots beyond the
offline VidHOI dataset. The VidHOI dataset contains user-collected videos of
humans, mostly performing outdoor activities that can not be easily related to
robotic collaboration tasks. We consider the ‘pouring task’ in a kitchen
scenario where the robot assumes the role of a bartender with the goal of
pouring a beverage for the human. The scenario is shown in Fig. 1. To assess
the performance of our model in unseen scenarios, we collected 20 videos of 5
people in our kitchen lab. The human is instructed to grab the cup and
informed that the robot will assist them in the task. We manually annotate the
time the person grabs the cup to use as ground truth. Our Hydra variant
detects and anticipates the HOI between a person and a cup in real-time. When
the robot anticipates that the human will be near the cup, it proceeds to grab
the bottle. However, if the human moves away, the robot releases the bottle and
returns to the initial pose. The robot proceeds to pour the liquid into the
cup after detecting that the human is holding it.
We assess our real-world experiments by considering well-established metrics
in HRC [13]. [13] proposes to evaluate human-robot fluency in the joint task
by considering four objective metrics. Human Idle Time (H-IDLE) and Robot Idle
Time (R-IDLE) are proposed to evaluate the percentage of the total task time
that the respective agent is not active, which reflects the team coordination
and the inefficiency of the agent in the task. Concurrent Activity (C-ACT)
measures the percentage of total task time in which both agents are active
concurrently (the action overlap between different members). A higher C-ACT
indicates a better-synchronized team. Functional Delay (F-DEL) measures the
delay experienced by the agents immediately after completing an activity: the
percentage of total task time between the completion of one agent’s action and
the beginning of the other agent’s action. A negative F-DEL indicates that
actions are overlapping and implies an efficient use of team members’ time.
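A minimal sketch of how these four metrics can be computed from logged activity intervals is shown below; the pairing of human and robot actions used for F-DEL reflects our single-exchange pouring setting and is an assumption, not the exact protocol of [13].

```python
def _union_length(intervals):
    """Total time covered by a list of (start, end) intervals."""
    total, cur_s, cur_e = 0.0, None, None
    for s, e in sorted(intervals):
        if cur_e is None or s > cur_e:
            if cur_e is not None:
                total += cur_e - cur_s
            cur_s, cur_e = s, e
        else:
            cur_e = max(cur_e, e)
    if cur_e is not None:
        total += cur_e - cur_s
    return total

def fluency_metrics(human_acts, robot_acts, task_time):
    """H-IDLE, R-IDLE, C-ACT, F-DEL (percent of task time) from activity intervals.

    human_acts / robot_acts: lists of (start, end) intervals in seconds,
    assumed non-overlapping within each agent.
    """
    h_idle = 100.0 * (1.0 - _union_length(human_acts) / task_time)
    r_idle = 100.0 * (1.0 - _union_length(robot_acts) / task_time)
    overlap = sum(max(0.0, min(he, re) - max(hs, rs))
                  for hs, he in human_acts for rs, re in robot_acts)
    c_act = 100.0 * overlap / task_time
    # F-DEL: gap between the end of each human action and the start of the robot
    # action responding to it; negative values mean the actions overlap.
    f_del = 100.0 * sum(rs - he for (_, he), (rs, _) in
                        zip(sorted(human_acts), sorted(robot_acts))) / task_time
    return {"H-IDLE": h_idle, "R-IDLE": r_idle, "C-ACT": c_act, "F-DEL": f_del}
```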
Figure 3 summarizes the average objective fluency metrics across our pouring
experiments. The results indicate that HOI anticipation allows for better
human-robot coordination and more efficient use of each other’s time, thus making the
task more fluent. We observe a substantial improvement in Figure 3 when using
anticipation ($\tau_{a}>0$) compared to detection ($\tau_{a}=0$). Additional
quantitative and qualitative results are provided in Appendix B.
(Figure 3: four bar charts over $\tau_{a}\in\{0,1,3,5\}$, with Percentage [%] on the vertical axis, for Human Idle Time, Robot Idle Time, Concurrent Activity, and Functional Delay.)
Figure 3: Mean objective fluency metrics for pouring experiments for different
confidence thresholds {0.3, 0.5, 0.7} in the HOIs prediction.
## 5 Limitations
Despite outperforming state-of-the-art models in HOI from videos, our
qualitative experiments reveal the challenges of real-world deployment.
First, there is a domain gap between the VidHOI dataset, which mainly depicts
humans in daily scenes, and our robotic scenario. For instance,
anticipating that ‘a human is holding a cup’ is challenging, despite being
correctly detected. We explore the VidHOI dataset and observe that most people
already appear with the cup in their hand. To overcome this issue, we sample
more frequently clips where the interaction changes in the anticipation
horizon. Still, this is insufficient to ensure correct anticipation in
‘holding a cup’ with higher confidence. Other datasets are not better suited
for our problem, as they are mainly image-based [5, 56] or do not track the
humans and objects in videos [57]. Future research directions consider
training with a dataset more coupled to our robotics scenario to improve the
model predictions. This would allow us to extend our experiments to more
complex daily scenarios. Second, in our real experiments, we assume that the
objects present in the scene are sufficiently visible so that object detection
can recognize them. Finally, the employed DMPs could be expanded or replaced
by visual servoing to consider goal-following behaviors.
## 6 Conclusions
In this paper, we proposed a Human-Object Interaction Anticipation for
CollAborative roBOTs framework (HOI4ABOT). We consider the task of detecting
and anticipating human-object interactions (HOI) in videos through a
transformer architecture. We train and evaluate HOI4ABOT in the VidHOI dataset
and outperform current state-of-the-art across all tasks and metrics while
being $15.4\times$ faster. Moreover, our model runs in real-time thanks to our
efficient design. Additionally, we extend our HOI4ABOT model with a multi-head
architecture, which can detect and anticipate HOIs across different future
horizons in a single step. We demonstrate the effectiveness of our approach by
implementing our model in a Franka Emika Panda robot. We show that
anticipating HOIs in real-time is essential for a robot to better assist a
human in a timely manner and we support our findings with real experiments. In
conclusion, our approach demonstrates its effectiveness and defines a new road
to explore, where intention reading plays a crucial role for robots in
collaboration scenarios.
#### Acknowledgments
This work is funded by Marie Sklodowska-Curie Action Horizon 2020 (Grant
agreement No. 955778) for the project ’Personalized Robotics as Service
Oriented Applications’ (PERSEO).
## References
* Robinson et al. [2023] N. Robinson, B. Tidd, D. Campbell, D. Kulić, and P. Corke. Robotic vision for human-robot interaction and collaboration: A survey and systematic review. _J. Hum.-Robot Interact._ , 12(1), feb 2023. doi:10.1145/3570731. URL https://doi.org/10.1145/3570731.
* Abbasi et al. [2019] B. Abbasi, N. Monaikul, Z. Rysbek, B. D. Eugenio, and M. Žefran. A multimodal human-robot interaction manager for assistive robots. In _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 6756–6762, 2019. doi:10.1109/IROS40897.2019.8968505.
* Ortenzi et al. [2021] V. Ortenzi, A. Cosgun, T. Pardi, W. P. Chan, E. Croft, and D. Kulić. Object handovers: A review for robotics. _IEEE Transactions on Robotics_ , 37(6):1855–1873, 2021. doi:10.1109/TRO.2021.3075365.
* Koppula and Saxena [2016] H. S. Koppula and A. Saxena. Anticipating human activities using object affordances for reactive robotic response. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 38(1):14–29, 2016. doi:10.1109/TPAMI.2015.2430335.
* Xu et al. [2019] B. Xu, Y. Wong, J. Li, Q. Zhao, and M. S. Kankanhalli. Learning to detect human-object interactions with knowledge. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 2019–2028, 2019. doi:10.1109/CVPR.2019.00212.
* Ulutan et al. [2020] O. Ulutan, A. S. M. Iftekhar, and B. Manjunath. Vsgnet: Spatial attention network for detecting human object interactions using graph convolutions. pages 13614–13623, 06 2020. doi:10.1109/CVPR42600.2020.01363.
* Lim et al. [2023] J. Lim, V. M. Baskaran, J. M.-Y. Lim, K. Wong, J. See, and M. Tistarelli. Ernet: An efficient and reliable human-object interaction detection network. _IEEE Transactions on Image Processing_ , 32:964–979, 2023. doi:10.1109/TIP.2022.3231528.
* Wu et al. [2022] X. Wu, Y.-L. Li, X. Liu, J. Zhang, Y. Wu, and C. Lu. Mining cross-person cues for body-part interactiveness learning in hoi detection. In S. Avidan, G. Brostow, M. Cissé, G. M. Farinella, and T. Hassner, editors, _Computer Vision – ECCV 2022_ , pages 121–136, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-19772-7.
* Liao et al. [2022] Y. Liao, A. Zhang, M. Lu, Y. Wang, X. Li, and S. Liu. Gen-vlkt: Simplify association and enhance interaction understanding for hoi detection. In _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pages 20091–20100, Los Alamitos, CA, USA, jun 2022. IEEE Computer Society. doi:10.1109/CVPR52688.2022.01949. URL https://doi.ieeecomputersociety.org/10.1109/CVPR52688.2022.01949.
* Park et al. [2023] J. Park, J.-W. Park, and J.-S. Lee. Viplo: Vision transformer based pose-conditioned self-loop graph for human-object interaction detection. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 17152–17162, 2023.
* Psarakis et al. [2022] L. Psarakis, D. Nathanael, and N. Marmaras. Fostering short-term human anticipatory behavior in human-robot collaboration. _International Journal of Industrial Ergonomics_ , 87:103241, 2022. ISSN 0169-8141. doi:https://doi.org/10.1016/j.ergon.2021.103241. URL https://www.sciencedirect.com/science/article/pii/S0169814121001591.
* Hoffman and Breazeal [2007] G. Hoffman and C. Breazeal. Cost-based anticipatory action selection for human–robot fluency. _Robotics, IEEE Transactions on_ , 23:952 – 961, 11 2007. doi:10.1109/TRO.2007.907483.
* Garcia et al. [2020] C. A. Garcia, W. Montalvo-Lopez, and M. V. Garcia. Human-robot collaboration based on cyber-physical production system and mqtt. _Procedia Manufacturing_ , 42:315–321, 2020. ISSN 2351-9789. doi:https://doi.org/10.1016/j.promfg.2020.02.088. URL https://www.sciencedirect.com/science/article/pii/S2351978920306533. International Conference on Industry 4.0 and Smart Manufacturing (ISM 2019).
* Chiou et al. [2021] M.-J. Chiou, C.-Y. Liao, L.-W. Wang, R. Zimmermann, and J. Feng. St-hoi: A spatial-temporal baseline for human-object interaction detection in videos. In _Proceedings of the 2021 Workshop on Intelligent Cross-Data Analysis and Retrieval_ , ICDAR ’21, page 9–17, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450385299. doi:10.1145/3463944.3469097. URL https://doi.org/10.1145/3463944.3469097.
* Cong et al. [2021] Y. Cong, W. Liao, H. Ackermann, B. Rosenhahn, and M. Yang. Spatial-temporal transformer for dynamic scene graph generation. In _2021 IEEE/CVF International Conference on Computer Vision (ICCV)_ , pages 16352–16362, Los Alamitos, CA, USA, oct 2021. IEEE Computer Society. doi:10.1109/ICCV48922.2021.01606. URL https://doi.ieeecomputersociety.org/10.1109/ICCV48922.2021.01606.
* [16] Tu et al. [2022] D. Tu, W. Sun, X. Min, G. Zhai, and W. Shen. Video-based human-object interaction detection from tubelet tokens. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, _Advances in Neural Information Processing Systems_ , 2022. URL https://openreview.net/forum?id=kADW_LsENM.
* Ni et al. [2023] Z. Ni, E. Valls Mascaró, H. Ahn, and D. Lee. Human-object interaction prediction in videos through gaze following. _Computer Vision and Image Understanding_ , page 103741, 2023. ISSN 1077-3142. doi:https://doi.org/10.1016/j.cviu.2023.103741. URL https://www.sciencedirect.com/science/article/pii/S1077314223001212.
* Colledanchise and Ogren [2018] M. Colledanchise and P. Ogren. _Behavior Trees in Robotics and AI: An Introduction_. 07 2018. ISBN 9781138593732. doi:10.1201/9780429489105.
* Li et al. [2021] Z. Li, Y. Mu, Z. Sun, S. Song, J. Su, and J. Zhang. Intention understanding in human–robot interaction based on visual-nlp semantics. _Frontiers in Neurorobotics_ , 14:610139, 2021.
* Semeraro et al. [2023] F. Semeraro, A. Griffiths, and A. Cangelosi. Human–robot collaboration and machine learning: A systematic review of recent research. _Robotics and Computer-Integrated Manufacturing_ , 79:102432, 2023. ISSN 0736-5845. doi:https://doi.org/10.1016/j.rcim.2022.102432. URL https://www.sciencedirect.com/science/article/pii/S0736584522001156.
* Mascaro et al. [2023] E. V. Mascaro, H. Ahn, and D. Lee. Intention-conditioned long-term human egocentric action anticipation. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , pages 6048–6057, January 2023.
* Roy and Fernando [2022] D. Roy and B. Fernando. Action anticipation using latent goal learning. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)_ , pages 2745–2753, January 2022.
* Wei et al. [2018] P. Wei, Y. Liu, T. Shu, N. Zheng, and S.-C. Zhu. Where and why are they looking? jointly inferring human attention and intentions in complex tasks. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 6801–6809, 2018. doi:10.1109/CVPR.2018.00711.
* Li et al. [2022] S. Li, P. Zheng, Z. Wang, J. Fan, and L. Wang. Dynamic scene graph for mutual-cognition generation in proactive human-robot collaboration. _Procedia CIRP_ , 107:943–948, 2022. ISSN 2212-8271. doi:https://doi.org/10.1016/j.procir.2022.05.089. URL https://www.sciencedirect.com/science/article/pii/S2212827122003730. Leading manufacturing systems transformation – Proceedings of the 55th CIRP Conference on Manufacturing Systems 2022.
* Amiri et al. [2022] S. Amiri, K. Chandan, and S. Zhang. Reasoning with scene graphs for robot planning under partial observability. _IEEE Robotics and Automation Letters_ , 7(2):5560–5567, 2022.
* Agia et al. [2022] C. Agia, K. Jatavallabhula, M. Khodeir, O. Miksik, V. Vineet, M. Mukadam, L. Paull, and F. Shkurti. Taskography: Evaluating robot task planning over large 3d scene graphs. In _Conference on Robot Learning_ , pages 46–58. PMLR, 2022.
* Patel and Chernova [2022] M. Patel and S. Chernova. Proactive robot assistance via spatio-temporal object modeling. In _6th Annual Conference on Robot Learning_ , 2022.
* Xu et al. [2022] Y. Xu, J. Zhang, Q. Zhang, and D. Tao. ViTPose: Simple vision transformer baselines for human pose estimation. In _Advances in Neural Information Processing Systems_ , 2022.
* You et al. [2016] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo. Image captioning with semantic attention. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 4651–4659, 2016.
* Dosovitskiy et al. [2021] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. _International Conference on Learning Representations (ICLR)_ , 2021.
* Vaswani et al. [2017] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. _Advances in neural information processing systems (NeurIPS)_ , 30, 2017.
* Cui et al. [2019] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie. Class-balanced loss based on effective number of samples. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pages 9268–9277, 2019.
* Guo et al. [2023] H. Guo, F. Wu, Y. Qin, R. Li, K. Li, and K. Li. Recent trends in task and motion planning for robotics: A survey. _ACM Comput. Surv._ , feb 2023. ISSN 0360-0300. doi:10.1145/3583136. URL https://doi.org/10.1145/3583136. Just Accepted.
* Awais and Henrich [2012] M. Awais and D. Henrich. Online intention learning for human-robot interaction by scene observation. In _2012 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)_ , pages 13–18, 2012. doi:10.1109/ARSO.2012.6213391.
* [35] El Makrini et al. [2022] I. El Makrini, M. Omidi, F. Fusaro, E. Lamon, A. Ajoudani, and B. Vanderborght. A hierarchical finite-state machine-based task allocation framework for human-robot collaborative assembly tasks. In _2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 10238–10244, 2022. doi:10.1109/IROS47612.2022.9981618.
* Paxton et al. [2017] C. Paxton, A. Hundt, F. Jonathan, K. Guerin, and G. D. Hager. Costar: Instructing collaborative robots with behavior trees and vision. In _2017 IEEE international conference on robotics and automation (ICRA)_ , pages 564–571. IEEE, 2017.
* Carvalho et al. [2021] M. Carvalho, J. Avelino, A. Bernardino, R. Ventura, and P. Moreno. Human-robot greeting: tracking human greeting mental states and acting accordingly. In _2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pages 1935–1941, 2021. doi:10.1109/IROS51168.2021.9635894.
* Kavraki et al. [1996] L. Kavraki, P. Svestka, J.-C. Latombe, and M. Overmars. Probabilistic roadmaps for path planning in high-dimensional configuration spaces. _IEEE Transactions on Robotics and Automation_ , 12(4):566–580, 1996. doi:10.1109/70.508439.
* Kuffner and LaValle [2000] J. Kuffner and S. LaValle. Rrt-connect: An efficient approach to single-query path planning. In _Proceedings 2000 ICRA. Millennium Conference. IEEE International Conference on Robotics and Automation. Symposia Proceedings (Cat. No.00CH37065)_ , volume 2, pages 995–1001 vol.2, 2000. doi:10.1109/ROBOT.2000.844730.
* Kalakrishnan et al. [2011] M. Kalakrishnan, S. Chitta, E. Theodorou, P. Pastor, and S. Schaal. Stomp: Stochastic trajectory optimization for motion planning. In _2011 IEEE International Conference on Robotics and Automation_ , pages 4569–4574, 2011. doi:10.1109/ICRA.2011.5980280.
* Park et al. [2012] C. Park, J. Pan, and D. Manocha. Itomp: Incremental trajectory optimization for real-time replanning in dynamic environments. 06 2012.
* Pastor et al. [2013] P. Pastor, M. Kalakrishnan, F. Meier, F. Stulp, J. Buchli, E. Theodorou, and S. Schaal. From dynamic movement primitives to associative skill memories. _Robotics and Autonomous Systems_ , 61:351–361, 04 2013. doi:10.1016/j.robot.2012.09.017.
* Calinon and Lee [2019] S. Calinon and D. Lee. _Learning Control_ , pages 1261–1312. 01 2019. ISBN 978-94-007-6045-5. doi:10.1007/978-94-007-6046-2_68.
* Wu et al. [2022] M. Wu, B. Taetz, Y. He, G. Bleser, and S. Liu. An adaptive learning and control framework based on dynamic movement primitives with application to human–robot handovers. _Robotics and Autonomous Systems_ , 148:103935, 2022. ISSN 0921-8890. doi:https://doi.org/10.1016/j.robot.2021.103935. URL https://www.sciencedirect.com/science/article/pii/S0921889021002189.
* Prada et al. [2013] M. Prada, A. Remazeilles, A. Koene, and S. Endo. Dynamic movement primitives for human-robot interaction: Comparison with human behavioral observation. In _2013 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , pages 1168–1175, 2013. doi:10.1109/IROS.2013.6696498.
* Prada et al. [2014] M. Prada, A. Remazeilles, A. Koene, and S. Endo. Implementation and experimental validation of dynamic movement primitives for object handover. In _2014 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , pages 2146–2153, 2014. doi:10.1109/IROS.2014.6942851.
* Oquab et al. [2023] M. Oquab, T. Darcet, T. Moutakanni, H. Vo, M. Szafraniec, V. Khalidov, P. Fernandez, D. Haziza, F. Massa, A. El-Nouby, et al. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_ , 2023.
* Tancik et al. [2020] M. Tancik, P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. Barron, and R. Ng. Fourier features let networks learn high frequency functions in low dimensional domains. _Advances in Neural Information Processing Systems_ , 33:7537–7547, 2020.
* Radford et al. [2021] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pages 8748–8763. PMLR, 2021.
* Ito et al. [2006] M. Ito, K. Noda, Y. Hoshino, and J. Tani. Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model. _Neural Networks_ , 19(3):323–337, 2006. ISSN 0893-6080. doi:https://doi.org/10.1016/j.neunet.2006.02.007. URL https://www.sciencedirect.com/science/article/pii/S0893608006000311. The Brain Mechanisms of Imitation Learning.
* Jocher et al. [2023] G. Jocher, A. Chaurasia, and J. Qiu. YOLO by Ultralytics, Jan. 2023. URL https://github.com/ultralytics/ultralytics.
* Hogan [1984] N. Hogan. Impedance control: An approach to manipulation. In _1984 American Control Conference_ , pages 304–313, 1984. doi:10.23919/ACC.1984.4788393.
* Hogan [1985] N. Hogan. Impedance Control: An Approach to Manipulation: Part II—Implementation. _Journal of Dynamic Systems, Measurement, and Control_ , 107(1):8–16, 03 1985. ISSN 0022-0434. doi:10.1115/1.3140713. URL https://doi.org/10.1115/1.3140713.
* Tamura et al. [2021] M. Tamura, H. Ohashi, and T. Yoshinaga. QPIC: Query-based pairwise human-object interaction detection with image-wide contextual information. In _CVPR_ , 2021.
* Jocher et al. [2022] G. Jocher, A. Chaurasia, A. Stoken, J. Borovec, NanoCode012, Y. Kwon, K. Michael, TaoXie, J. Fang, imyhxy, Lorna, Z. Yifu, C. Wong, A. V, D. Montes, Z. Wang, C. Fati, J. Nadar, Laughing, UnglvKitDe, V. Sonck, tkianai, yxNONG, P. Skalski, A. Hogan, D. Nair, M. Strobel, and M. Jain. ultralytics/yolov5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation, Nov. 2022. URL https://doi.org/10.5281/zenodo.7347926.
* Gupta and Malik [2015] S. Gupta and J. Malik. Visual semantic role labeling. _arXiv preprint arXiv:1505.04474_ , 2015.
* Ji et al. [2020] J. Ji, R. Krishna, L. Fei-Fei, and J. C. Niebles. Action genome: Actions as compositions of spatio-temporal scene graphs. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 10236–10247, 2020.
* Loshchilov and Hutter [2019] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. _International Conference on Learning Representations (ICLR)_ , 2019.
## Appendix
## Appendix A Implementation Details
In this section, we offer a comprehensive summary of the implementation
details to aid in the reproduction of the experiments and the replication of
the results. All experiments were conducted using a single NVIDIA RTX A4000
graphics card with 16GB of memory and an Intel i7-12000K CPU.
Hyperparameters. All models are trained using the same strategy as [17]. We use the official code from https://github.com/nizhf/hoi-prediction-gaze-transformer and integrate our HOI4ABOT model into their framework. All training settings are summarized in Table 4. We adopt the Cross Binary Focal Loss [32] with $\gamma=0.5$ and $\beta=0.9999$, which improves training on extremely imbalanced datasets such as VidHOI [14]. We train our models using the AdamW optimizer [58]. We use a learning-rate scheduler with an initial value of $1\times 10^{-8}$ that increases to a peak value of $1\times 10^{-4}$ over 3 warm-up epochs; the learning rate then decays exponentially with a factor of $0.1$. We train for 40 epochs.
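As an illustration of this schedule, a minimal sketch is given below; the linear shape of the warm-up and the per-epoch application of the decay are our assumptions, and the released code at the repository above is authoritative.

```python
# Minimal sketch of the learning-rate schedule described above (assumed shape:
# linear warm-up from 1e-8 to 1e-4 over 3 epochs, then exponential decay by 0.1).
def learning_rate(epoch, initial_lr=1e-8, peak_lr=1e-4, warmup_epochs=3, decay=0.1):
    if epoch < warmup_epochs:
        # Linear interpolation between the initial and the peak learning rate.
        return initial_lr + (peak_lr - initial_lr) * epoch / warmup_epochs
    # Exponential decay after the warm-up phase.
    return peak_lr * decay ** (epoch - warmup_epochs)

if __name__ == "__main__":
    for epoch in range(6):
        print(epoch, learning_rate(epoch))
```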
Model configuration. All trained models use a similar configuration, but some variants such as Stacked or Single are adapted to ensure a similar number of trainable parameters in the architecture (57.04M). All models reported in our paper use DINOv2 [47] as the image feature extractor, using the smallest variant available, ViT-B/14, which only contains 22.06M parameters, and CLIP [49] as the semantic extractor, with the largest available variant, ViT-L/14, which contains 85.05M parameters. Because the number of objects in the dataset is limited, we pre-extract the semantic features for all possible objects. For our baseline HOI4ABOT model, we consider two transformer models with cross-attention layers, each with depth 4 and an MLP expansion ratio of 4.0. Each transformer uses multi-head attention with 8 heads to better extract the relationships within a sequence of features. Moreover, we use sinusoidal positional embeddings to facilitate learning the temporal information of a sequence. Finally, the embedding size of each extracted feature (bounding box or image feature) is 384. The embedding size for the prepended class token is also 384, as this is the embedding dimension of the features extracted using DINOv2. For the semantics, CLIP produces a feature of dimensionality 764.
Table 4: Training settings.
Setting | Value
---|---
Optimizer | AdamW
Weight Decay | 1.0e-2
Scheduler | ExponentialDecay
Warmup Epochs | 3
Initial LR | 1e-8
Peak LR | 1e-4
Exponential Decay | 0.1
Epochs | 40
Random Seed | 1551
Augmentation | Horizontal Flip
Flip Ratio | 0.5
Batch Size | 16
Dropout | 0.1
Table 5: Model settings.
Setting | Value
---|---
Transformer Depth | 4
Number of Heads | 8
Feature Extractor | DINOv2: ViT-B/14 [47]
Semantic Extractor | CLIP: ViT-L/14 [49]
Embedding Dimension | 384
Positional Embedding | Sinusoidal
Exponential Decay | 0.1
Mainbranch | humans
MLP ratio | 4.0
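For convenience, the settings of Tables 4 and 5 can be grouped into a single configuration object, as in the illustrative sketch below; the field names are ours and do not correspond to the released code.

```python
from dataclasses import dataclass

@dataclass
class HOI4ABOTConfig:
    """Hypothetical grouping of the hyperparameters listed in Tables 4 and 5."""
    # Optimization settings (Table 4).
    optimizer: str = "AdamW"
    weight_decay: float = 1.0e-2
    initial_lr: float = 1e-8
    peak_lr: float = 1e-4
    warmup_epochs: int = 3
    lr_decay: float = 0.1
    epochs: int = 40
    random_seed: int = 1551
    flip_ratio: float = 0.5
    batch_size: int = 16
    dropout: float = 0.1
    # Model settings (Table 5).
    transformer_depth: int = 4
    num_heads: int = 8
    feature_extractor: str = "DINOv2: ViT-B/14"
    semantic_extractor: str = "CLIP: ViT-L/14"
    embedding_dim: int = 384
    positional_embedding: str = "sinusoidal"
    main_branch: str = "humans"
    mlp_ratio: float = 4.0

config = HOI4ABOTConfig()  # defaults reproduce the values reported above
```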
## Appendix B Experimental Scenario
Task description. Our HOI4ABOT framework enhances human intention reading
through HOI anticipation. We conduct a real-world experiment with a Franka
Emika Panda robot to support our proposed approach. Fig. 4 provides a step-by-
step overview of the considered bartender scenario. First, the robot detects a
human in the scene and anticipates the human intention to approach a kitchen
island. When the robot anticipates with confidence that the human will be
close to the cup, it executes a movement to grab the bottle, thus preparing
for pouring. If the intention of the human changes, the robot adapts its
behavior and moves back to the initial position after placing the bottle on
the table. On the other hand, if the human proceeds to grab the cup, the robot
pours the drink and goes back to its initial position. This preparatory
behavior reduces the serving time while improving the overall experience for
the human.
Figure 4: Real-world experiments scenario.
Additional Qualitative Results. On the project website
(https://evm7.github.io/HOI4ABOT_page/) we present additional qualitative
results that showcase the ability of our model to operate in more ambiguous
scenarios (with multiple objects and people instances, and cluttered scenes)
and execute different motions depending on the predicted interactions. Our
model predicts the interaction associated with specific human and object
instances, which are associated with the identifiers obtained from the
tracker. Therefore, we are able to execute different movements depending on
the interaction (like pouring, pushing, or turning off the lights) and the
object category (grabbing from the side in case of a cup or a bottle or
grabbing from the top in case of a bowl) or instance (cup-1 and cup-2). For
instance, we consider the case of multiple cups in the scene, where the robot
conditions its pouring behavior based on the cup the human holds.
Additionally, we also show the ability to operate in an ambiguous situation
with multiple objects and people instances.
Evaluation of the use case scenario of HOI4ABOT. To validate our hypothesis,
we extend our evaluation of the framework in real-world experiments with
additional quantitative metrics. First, we assess the human waiting time until
the robot proceeds to serve. Fig. 5 shows the quantitative benefit of our
approach by considering the absolute time a human waits to be served (serving
is considered until the robot starts pouring). The results indicate that our
robot behaves proactively when anticipating HOIs and therefore reduces the
time to wait until a drink is poured, compared to the reactive behavior
observed if the robot is only detecting HOIs. Fig. 5 shows a slight reduction in the waiting time when the confidence threshold on the prediction is lowered: to be more confident about the human’s intention, the robot has to wait longer. In addition, we observe only a slight decrease in the waiting time for different anticipation horizons ($\tau_{a}=\\{1,3,5\\}$). This subtle variation might be caused by the dataset limitation pointed out in the main manuscript.
Secondly, we measure the effectiveness of our robot pouring a drink in our
real-world trials by considering the success rate of the pouring task in $20$
new real-world experiments. Four lab members were instructed to approach the
robot and grab the cup. Each person did 5 repetitions. Our framework correctly
executes the pouring task in $17$ out of $20$ executions, resulting in a
success rate of $85\%$.
Figure 5: Human waiting time [s] to be served the drink for different confidence thresholds ($\\{0.3, 0.5, 0.7\\}$) and anticipation heads $\tau_{a}=\\{0,1,3,5\\}$.
Figure 6: Quantitative evaluation of the pouring task. We overlay the positions of the bottle and the cup on the image of the robot’s workspace. Green signifies successful task execution and red signifies failed cases.
## Appendix C Motion Generation and Task Planning
Motion Generation. Our framework decomposes the complex movements into simpler
movement primitives, which are learned with DMPs. For instance, the pouring
task consists of multiple steps, like grabbing the bottle, moving to the cup,
tilting the bottle, and placing the bottle back. Learning the entire movement
as a single primitive is possible, but this might oversimplify the motion,
particularly for sharp movements, compromising accuracy. In our experiments,
each motion segment was learned from a single demonstration. We verified the
success rate in the pouring scenario with 60 different arrangements of the
objects ‘Bottle’ and ‘Cup’. Figure 6 shows that our robot successfully pours in $53$ out of $60$ executions, resulting in a success rate of $88.33\%$. We observe that failure cases occur mainly when the objects are arranged close to areas that are not reachable by the robot. Working close to the non-reachable zone is more problematic, as the robot is operating near its kinematic constraints. However, this issue can be solved by rearranging the robot base position according to the user’s needs.
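To make the movement-primitive machinery more concrete, below is a minimal one-dimensional discrete DMP in the standard Ijspeert-style formulation, fitted from a single demonstration; the gains, basis-function layout, and Euler integration are illustrative assumptions rather than the exact implementation used on the robot.

```python
import numpy as np

class DMP1D:
    """One-dimensional discrete DMP (illustrative sketch, assumed parameters)."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=4.0):
        self.n_basis, self.alpha_z, self.beta_z, self.alpha_x = n_basis, alpha_z, beta_z, alpha_x
        self.centers = np.exp(-alpha_x * np.linspace(0, 1, n_basis))  # basis centers in phase space
        self.widths = n_basis ** 1.5 / self.centers                   # heuristic widths
        self.weights = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.widths * (x - self.centers) ** 2)

    def fit(self, demo, dt):
        """Fit the forcing-term weights from a single demonstrated trajectory."""
        y = np.asarray(demo, dtype=float)
        yd, ydd = np.gradient(y, dt), np.gradient(np.gradient(y, dt), dt)
        self.y0, self.g, self.T = y[0], y[-1], (len(y) - 1) * dt
        x = np.exp(-self.alpha_x * np.arange(len(y)) * dt / self.T)   # canonical phase variable
        f_target = self.T ** 2 * ydd - self.alpha_z * (self.beta_z * (self.g - y) - self.T * yd)
        psi = self._psi(x[:, None])                                   # (samples, n_basis)
        scale = x * (self.g - self.y0) + 1e-8
        for i in range(self.n_basis):                                 # locally weighted regression
            self.weights[i] = np.sum(psi[:, i] * scale * f_target) / (np.sum(psi[:, i] * scale ** 2) + 1e-12)

    def rollout(self, dt, goal=None):
        """Integrate the DMP (Euler), optionally towards a new goal position."""
        g = self.g if goal is None else goal
        y, z, x, out = self.y0, 0.0, 1.0, []
        for _ in range(int(self.T / dt)):
            psi = self._psi(x)
            f = (psi @ self.weights) / (psi.sum() + 1e-12) * x * (g - self.y0)
            z += (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.T * dt
            y += z / self.T * dt
            x += -self.alpha_x * x / self.T * dt
            out.append(y)
        return np.array(out)

# Example: learn from one demonstration and replay towards a different goal.
demo = np.sin(np.linspace(0.0, np.pi / 2, 200))  # simple 0 -> 1 reaching profile
dmp = DMP1D()
dmp.fit(demo, dt=0.005)
trajectory = dmp.rollout(dt=0.005, goal=1.5)
```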
Task Planning: Behavior Tree. In this section, we describe the structure of
the Behavior Tree [18] used in our real-world experiments, which is shown in
Fig. 7. The primary focus of this work is to enhance human-robot collaboration
through human intention reading using HOI anticipation. We conduct a simple
real-world experiment with a Franka Emika Panda robot to showcase the benefits
of our approach. This paper does not intend to provide a general development
of BT for HOI tasks. However, the same methodology employed can be extended to
more complex scenarios thanks to the modularity of BT.
The entire tree is built from three sub-trees: the Pour branch, the Approach
branch, and the Move Away branch. First, the Pour branch is responsible for
pouring the liquid into the cup. It is executed once the bottle is grabbed,
and the ‘hold’ interaction between the human and the cup is detected. To
achieve this conditional execution we add the Execute check behavior at the
beginning of the branch. Then, we reset the Grabbed flag and set the Poured
flag to prevent any potential duplication of pouring into the cup. Secondly,
the goal of the Approach branch is to grab the bottle. This sub-tree is
executed when the bottle is not currently grabbed and the robot anticipates
the ‘next to’ interaction with a confidence greater than a pre-defined
threshold. Once the bottle is grabbed, the Grabbed flag is set. Thirdly, the
Move Away branch is responsible for releasing the bottle and moving it back to
its initial position. This branch is executed when the bottle is grasped by
the robot and the robot anticipates the interaction ‘next to’ with a
confidence lower than a predefined threshold. After executing the movements
the Grabbed flag is reset.
The appropriate sub-branch is selected by using the Main Selector composite
node. This node attempts to execute each sub-tree starting from left to right.
The selector node executes the next branch in the sequence when the check in
the preceding branch is not satisfied. Finally, the last behavior in the
sequence is an Idle behavior where the robot waits for a short period of time.
The root of the tree is a sequential node, which first collects all messages
from the appropriate ROS topics, next checks if the beverage has been already
poured, and finally executes the Main Selector. To achieve continuous
operation, the Root node is decorated by a Repeat modifier, which executes the
root node indefinitely.
Figure 7: Schematic of the Behaviour Tree for our HOI4ABOT framework.
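To make the branch-selection logic concrete, the sketch below re-implements it as a minimal selector/sequence structure in plain Python; the node names, blackboard keys, and the single confidence threshold are our simplifications of Fig. 7, not the actual ROS/behaviour-tree implementation.

```python
# Minimal selector/sequence sketch of the branch logic (plain Python, no ROS).
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Selector:
    def __init__(self, children): self.children = children
    def tick(self, bb):
        # Try children left to right; succeed on the first child that succeeds.
        for child in self.children:
            if child.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Sequence:
    def __init__(self, children): self.children = children
    def tick(self, bb):
        # Run children in order; fail as soon as one fails.
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Leaf:
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return self.fn(bb)

THRESHOLD = 0.5  # assumed confidence threshold on the anticipated 'next to' HOI

def pour_check(bb):      return SUCCESS if bb["grabbed"] and bb["hold_detected"] else FAILURE
def approach_check(bb):  return SUCCESS if not bb["grabbed"] and bb["next_to_conf"] > THRESHOLD else FAILURE
def move_away_check(bb): return SUCCESS if bb["grabbed"] and bb["next_to_conf"] <= THRESHOLD else FAILURE

def pour(bb):      bb.update(grabbed=False, poured=True); return SUCCESS
def approach(bb):  bb.update(grabbed=True); return SUCCESS
def move_away(bb): bb.update(grabbed=False); return SUCCESS
def idle(bb):      return SUCCESS

main_selector = Selector([
    Sequence([Leaf(pour_check), Leaf(pour)]),            # Pour branch
    Sequence([Leaf(approach_check), Leaf(approach)]),    # Approach branch
    Sequence([Leaf(move_away_check), Leaf(move_away)]),  # Move Away branch
    Leaf(idle),                                          # Idle fallback
])

# One tick: 'next to' anticipated with high confidence -> the Approach branch fires.
blackboard = {"grabbed": False, "poured": False, "hold_detected": False, "next_to_conf": 0.8}
main_selector.tick(blackboard)
print(blackboard)  # 'grabbed' becomes True
```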
## Appendix D Inference time
Our model is able to run in real-time thanks to the efficient design and
reduced dimensionality.
Inference time versus the number of human-object pairs. Due to the nature of
HOIs, each interaction needs to be computed for each human-object pair
existing in the scene at a given time step. Therefore, to speed up the results
and parallelize the forward pass for a given video, we stack all found human-
object pairs in the batch dimension. Still, we consider it necessary to
observe how different models’ inference speed is affected by the number of
pairs in a given video. Therefore, we run $1000$ executions of our model
processing a given video with $I$ interactions. We implement all models reported in Figs. 8 and 9 with the same batching strategy and observe a similar tendency of increasing inference time for a higher number of interactions.
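A minimal timing harness along these lines might look as follows; the stand-in encoder and the measurement loop are our assumptions and are not the benchmarking script used for the reported numbers.

```python
import time
import torch

@torch.no_grad()
def mean_inference_ms(model, pair_features, n_runs=1000):
    """Average forward-pass time over n_runs, in milliseconds."""
    start = time.perf_counter()
    for _ in range(n_runs):
        model(pair_features)  # (num_pairs, seq_len, dim): pairs stacked in the batch dim
    return (time.perf_counter() - start) / n_runs * 1000.0

if __name__ == "__main__":
    # Stand-in transformer with the embedding size and head count used above.
    layer = torch.nn.TransformerEncoderLayer(d_model=384, nhead=8, batch_first=True)
    dummy = torch.nn.TransformerEncoder(layer, num_layers=4)
    for num_pairs in (1, 8, 32):
        feats = torch.randn(num_pairs, 16, 384)
        print(num_pairs, mean_inference_ms(dummy, feats, n_runs=10))
```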
Figure 8: Inference time [ms] versus the number of interactions for different architectures (ST-GAZE [17], Ours (Dual), Ours (Stacked)). Our variants (‘Dual’ and ‘Stacked’) have similar inference times (curves overlap) while outperforming the ST-GAZE model [17] by large margins.
Figure 9: Inference time [ms] versus the number of interactions for different model variants (Dual Detection, Dual Detection + Anticipation, Dual Hydra). The proposed multi-head approach allows us to detect and anticipate HOIs at multiple time horizons while maintaining a similar inference speed as the ‘Dual’ version (purple and dark orange curves overlap). We observe the benefit of the Hydra compared to running a specific ‘Dual’ transformer per detection and per anticipation.
Efficiency comparison with current state-of-the-art [17]. Both HOI4ABOT and
[17] adopt a transformer-based architecture to comprehend the temporal
relationships between the humans and objects in the scene. However, our model
is designed to be efficient and to run in real-time despite having a large
number of interactions, contrary to [17]. The efficiency comparison of both models is depicted in Fig. 8, which shows that our HOI4ABOT outperforms [17] by large margins in terms of speed. Next, we list the major differences in model design that cause this improvement. First, we do not use any additional modality to predict HOIs, unlike [17], which leverages pre-extracted gaze features to capture the human’s attention. Predicting these gaze features is costly, as it requires detecting and tracking each human’s head in the scene, predicting the corresponding gaze per human, and matching it to the corresponding body. Thus the speed decreases considerably with the number of humans in the scene. Moreover, [17] also uses an initial spatial transformer that leverages all humans and objects per frame, so its speed is also more affected by the number of frames considered.
Efficiency comparison of the Hydra HOI4ABOT. Human intention reading requires
understanding both current and future HOIs. Therefore, we develop a multi-head
HOI4ABOT, called Hydra, that allows us to predict HOIs at different time
horizons in the future through a single forward step. While Table 6 shows the
benefit of our Hydra variant compared to training from scratch, in this
subsection we focus on the benefit of efficiency. Fig. 9 shows the inference
time in milliseconds depending on the number of human-object pairs across
different variants. We consider the Dual Detection as the baseline of our
HOI4ABOT model when only predicting the HOI in the present. Dual Detection +
Anticipation is an optimized model that uses two dual transformer blocks that
benefit from the same image backbone, one for HOI detection and the other for
HOI anticipation at a single future horizon $\tau=3$. Finally, our Dual Hydra performs HOI detection and anticipation for $\tau=[0,1,3,5]$ in a single step by using our multi-head strategy. We observe the benefit of our Hydra variant compared to the model ensemble, as it has a speed comparable to the single head while anticipating HOIs at three additional future horizons.
## Appendix E Extensive comparison with variants
Table 6: Anticipation mAP in Oracle mode. Rec, Prec, Acc and F1 are person-wise top-5 metrics.
Method | t | mAP | Rec | Prec | Acc | F1
---|---|---|---|---|---|---
STTran [15] | 1 | 29.09 | 74.76 | 41.36 | 36.61 | 50.48
 | 3 | 27.59 | 74.79 | 40.86 | 36.42 | 50.16
 | 5 | 27.32 | 75.65 | 41.18 | 36.92 | 50.66
ST-Gaze [17] | 1 | 37.59 | 72.17 | 59.98 | 51.65 | 62.78
 | 3 | 33.14 | 71.88 | 60.44 | 52.08 | 62.87
 | 5 | 32.75 | 71.25 | 59.09 | 51.14 | 61.92
Ours (Dual, scratch) | 1 | 38.46 | 73.32 | 63.78 | 55.37 | 65.59
 | 3 | 34.58 | 73.61 | 61.7 | 54 | 64.48
 | 5 | 33.79 | 72.33 | 63.96 | 55.28 | 65.21
Ours (Dual, Hydra) | 1 | 37.77 | 74.07 | 64.9 | 56.38 | 66.53
 | 3 | 34.75 | 74.37 | 64.52 | 56.22 | 66.4
 | 5 | 34.07 | 73.67 | 65.1 | 56.31 | 66.4
Ours (Stacked, scratch) | 1 | 36.14 | 70.03 | 64.61 | 53.99 | 64.34
 | 3 | 34.65 | 73.85 | 62.13 | 54.15 | 64.77
 | 5 | 34.27 | 72.29 | 61.81 | 53.65 | 64.03
Ours (Stacked, Hydra) | 1 | 37.8 | 72.05 | 65.58 | 56.23 | 66.09
 | 3 | 34.9 | 72.96 | 65.05 | 56.3 | 66.2
 | 5 | 35 | 72.86 | 65.18 | 56.36 | 66.2
Our HOI4ABOT model outperforms the current state-of-the-art across all tasks and metrics in the VidHOI dataset, as shown in Table 6. In this section, we extend the comparison from the manuscript for HOI anticipation with our Dual and Stacked variants, both when trained from scratch and when trained through the multi-head Hydra mode. Our results show that the Stacked variant obtains slightly better mAP for longer future horizons. We attribute this marginal improvement to the width difference in the transformer blocks, as well as the larger representation space from which we project when classifying the HOIs. The Stacked variant is based on a single self-attention block that operates on the human windows and object windows stacked in time. Therefore, the Stacked transformer has double the width compared to the Dual variant. Given that the output of a transformer model has the same shape as its input, the obtained tokens are also wider in the Stacked variant. Having a bigger embedding dimension in the projected token allows the encoding of more information, which could result in better performance. However, Table 6 shows that the Stacked variant has a lower recall and therefore a lower F1-score. These findings might indicate that the Stacked variant struggles when anticipating HOIs in videos where the interaction changes within the anticipation horizon, being more conservative in its predictions. Therefore, we consider the Dual variant to be optimal, as it balances precision and recall across all tasks, outperforming all other models in F1-score in the Hydra version.
# A Note on Exhaustive State Space Search for Efficient Code Generation
Aart J.C. Bik
<EMAIL_ADDRESS>
###### Abstract
This note explores state space search to find efficient instruction sequences
that perform particular data manipulations. Once found, the instruction
sequences are hard-wired in the code generator that needs these data
manipulations. Since the state space is only searched while the compiler is being developed, search time is not at a premium, which allows an exhaustive search for the best possible instruction sequences.
## 1 Introduction
Compilers must often emit instruction sequences that accomplish particular
data manipulations in the generated code. For example, a compiler may have to
generate instructions that swap the contents of two scalar registers prior to
an instruction with strict constraints on its register operands. Or, as
another example, a compiler may have to emit instructions that broadcast the
value in a scalar register to all elements of a vector register in the
prologue of a vector loop. Using an efficient instruction sequence for each
desired data manipulation reduces the runtime of any application that executes
these data manipulations frequently.
Clearly, an optimizing compiler could try to find efficient instruction
sequences during actual code generation. Although this possibly provides
additional context for optimization, the major drawback of this approach is
that search time directly contributes to compile-time during AOT compilation
or, worse, runtime during JIT compilation. Alternatively, efficient
instruction sequences for desired data manipulations could be searched for
earlier, i.e. while the compiler is still being developed. Although this may provide fewer opportunities to exploit code context, search time is not at a premium in this approach, and an exhaustive state space search for the best possible instruction sequences becomes feasible. Once found, the instruction sequences are hard-wired in the code generator and are at the immediate disposal of the compiler.
In this note, we explore using Prolog [Col93] for such a state space search.
To keep the presentation brief, we focus on finding efficient Intel SSE
instruction sequences for a few simple SIMD data manipulations. However, the
presented ideas easily generalize to other instructions sets and code
generation problems.
## 2 State Space Search
Finding an efficient instruction sequence to accomplish a particular data
manipulation can be expressed as a state space search problem [LS89, Nil98],
with the original contents of memory and registers as _start state_ , the
machine instructions as _transitions_ from one state to another state, and the
desired contents of memory and registers as _goal state_. A path from the
start state to the goal state provides a solution to the problem. The best
solution is given by the shortest path, i.e. the path with minimal length if
all transitions have the same cost, or the path with minimal total weight if
different transitions have varying costs, such as different cycle counts for
the instructions.
### 2.1 State Space
As stated before, for the sake of brevity, we focus on finding efficient Intel
SSE instruction sequences for a few simple SIMD data manipulations.
Furthermore, to keep the state space size manageable, we focus on just a
subset of the SIMD state, data types, and instructions, abstracting away from
details related to general-purpose registers and instructions, state flags,
memory operands, etc. In this simplified view, the SIMD state is fully defined
by eight xmm-registers, represented in Prolog as a list with eight variables
(variables start with an uppercase letter).
`[ XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7 ]`
---
Here, each variable can be bound to a Prolog term that represents particular
contents, such as a list [17.5, 11.9] to denote a packed double-precision
floating-point data type with the given numerical values, an atom xmm0 (atoms
start with a lowercase letter) to denote a particular but otherwise non-
exploitable value, or the anonymous variable _ to denote any term. For
example, the following list denotes a SIMD state in which registers 0, 3, and
7 contain packed data types with the given numerical values, registers 1 and 2
have particular but different contents that are not subject to further
inspection, and all other registers are undefined.
`[ [0,0,0,0], xmm1, xmm2, [8,7,6,5,4,3,2,1], _, _, _, [1.0, 2.5] ]`
---
In the remainder of the paper, we will just consider _packed dwords_ represented by 4-element lists, with the convention that the packed elements appear from highest to lowest left-to-right in the list.
### 2.2 Transitions
Each instruction transforms a SIMD state into another SIMD state. These
transitions are modeled by a set of Prolog rules for each Intel SSE
instruction in our simplified model. Each rule in this set has the form
`i(instruction, op1, op2, S, T).`
---
to indicate that applying the instruction to the given operands transitions from
state S to state T. For example, the change in SIMD state caused by executing the
instruction
`pxor xmm0, xmm0`
---
is modeled by the rule shown below, which specifies that any contents of register xmm0 (the anonymous variable _) are zeroed out (the list [0,0,0,0]) while the contents of all other registers remain unaffected.
`i(pxor, xmm0, xmm0,`
---
`[ _, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7 ],`
`[ [0,0,0,0], XMM1, XMM2, XMM3, XMM4, XMM5, XMM6, XMM7 ]).`
Similarly, using Intel syntax, where the destination register appears first,
the change in SIMD state after
`paddd xmm1, xmm7`
---
is modeled by the following rule, which adds the packed integral elements of
one register to the packed integral elements of another register.
`i(paddd, xmm1, xmm7,`
---
`[ XMM0, [A,B,C,D], XMM2, XMM3, XMM4, XMM5, XMM6, [E,F,G,H] ],`
`[ XMM0, [A+E,B+F,C+G,D+H], XMM2, XMM3, XMM4, XMM5, XMM6, [E,F,G,H] ]).`
Although this allows Prolog to reason about the instruction symbolically,
sometimes we are also interested in evaluating the values using integral
arithmetic. To that end, the following rule is added as well, which evaluates
expressions in which all values are integers (such rules could be refined
further to allow for partial evaluation).
`i(paddd, xmm1, xmm7,`
---
`[ XMM0, [A,B,C,D], XMM2, XMM3, XMM4, XMM5, XMM6, [E,F,G,H] ],`
`[ XMM0, [P,Q,R,S], XMM2, XMM3, XMM4, XMM5, XMM6, [E,F,G,H] ]) :-`
`integer(A), integer(E), P is A+E, integer(B), integer(F), Q is B+F,`
`integer(C), integer(G), R is C+G, integer(D), integer(H), S is D+H.`
Similar rules are added for all other arithmetic, logical, comparison, and
conversion instructions, and for all combinations of register pairs. A data
shuffling instruction such as
`punpckldq xmm0, xmm3`
---
is modeled as shown below.
`i(punpckldq, xmm0, xmm3,`
---
`[ [_,_,A,B], XMM1, XMM2, [X,Y,C,D], XMM4, XMM5, XMM6, XMM7 ],`
`[ [C,A,D,B], XMM1, XMM2, [X,Y,C,D], XMM4, XMM5, XMM6, XMM7 ]).`
A shift instruction like
`psrldq xmm2, 4`
---
is modeled by the rule below.
`i(psrldq, xmm2, 4,`
---
`[ XMM0, XMM1, [A,B,C,_], XMM3, XMM4, XMM5, XMM6, XMM7 ],`
`[ XMM0, XMM1, [0,A,B,C], XMM3, XMM4, XMM5, XMM6, XMM7 ]).`
The SIMD state change after the data movement instruction
`movd xmm4, I`
---
is modeled with this rule.
`i(movd, xmm4, I,`
---
`[ XMM0, XMM1, XMM2, XMM3, _, XMM5, XMM6, XMM7 ],`
`[ XMM0, XMM1, XMM2, XMM3, [0,0,0,I], XMM5, XMM6, XMM7 ]).`
Obviously, writing all these Prolog rules by hand would be too tedious and
error-prone. Instead, a utility should be used to generate all rules
automatically, preferably directly from an instruction set description in an
electronic format. A complete and accurate rule set will obviously yield the
best results.
### 2.3 Search
Given a complete Prolog rule set that models the SIMD state transitions of all
instructions, we need a search mechanism to find a path in the state space
from the _start state_ to the _goal state_. This search mechanism is also
expressed with Prolog rules.
A first reasonable attempt is shown below (we will refine these rules slightly
later). The two rules state that any state S transitions into itself for an empty instruction sequence, or otherwise breaks down into the transition of a single instruction from state S to state U followed by the transition from state U to state T of a subsequent instruction sequence J. The list of 3-arity i predicates built by these rules ultimately indicates an instruction sequence that transitions state S to state T.
`s(S, [], S).`
---
`s(S, [i(I,R1,R2)|J], T) :- i(I, R1, R2, S, U), s(U, J, T).`
Now suppose we are interested in finding the best way to zero the contents
of register xmm0. It may be tempting to express that particular state space
search problem with the following Prolog query.
`s(S, I, [[0,0,0,0] | _ ]).`
---
However, this query returns the following first solution, whose interpretation is that the shortest way of resetting register xmm0 to zero is by
executing no instructions at all (empty list I) but instead starting with all
zeroes in that register (initial state S). Although correct, this is obviously
not what we were searching for.
`I = []`
---
`S = [[0,0,0,0]|_]`
As a side note, this mistake can demonstrate a potential danger of using
anonymous variables. The almost identical query
`s(_, I, [[0,0,0,0] | _ ]).`
---
would have given the solution
`I = []`
---
as well, but without even listing bindings for the two anonymous variables,
obscuring the fact that the initial state was bound to a state with the first
register already zeroed out. So each anonymous variable really denotes any
_suitable_ term. Rather, named variables should be preferred when contents
matter.
The correct way of formulating the original query is by explicitly stating the
fact that all registers contain unusable and unrelated initial values, as
shown below with eight different atoms.
`s([xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7], I,`
---
`[[0,0,0,0] | _ ]).`
This query will prompt the following list as a first solution, indicating a
single instruction way of zeroing out register xmm0.
`I = [i(pxor,xmm0,xmm0)]`
---
### 2.4 Iterative Deepening Search
Prolog’s DFS (depth-first search) is not very suited for this particular kind
of state space search problem, since it will continuously append instructions
to existing partial solutions in an attempt to reach the goal state. A BFS
(breadth-first search) works much better, since it will report the _shortest_
instruction sequences from the initial state to the goal state first. We will
implement such a search using Prolog’s DFS, but without the inherently high
memory demands of BFS, using IDS (iterative deepening search). To this end,
the search rules given earlier are refined into the following set.
`s(S, I, T) :- count(D, 0), s(S, I, T, D).`
---
`s(S, [], S, 0 ).`
`s(S, [i(I,R1,R2)|J], T, X) :- X > 0, Y is X - 1,`
`i(I, R1, R2, S, U), s(U, J, T, Y).`
The search rules themselves are as before, but restricted to a given depth.
The count rules define a simple increment mechanism.
`count(X, X).`
---
`count(X, Y) :- Z is Y + 1, count(X, Z).`
Combined, these rules try to find solutions within subsequent instruction
sequences of length 0, 1, 2, etc. As a result, shorter instruction sequences
are reported first (note that with some effort, this search mechanism can be
adapted for other criteria of the _best_ solution, such as finding the
instruction sequences with the lowest total cycle counts). For example,
running the query
`s( [ xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7], I,`
---
`[ [-1,-1,-1,-1] | _ ] ).`
reports the desired solution
`I = [i(pcmpeqd,xmm0,xmm0)]`
---
before it reports the following alternative, but longer solution, which
basically just clobbers the register with an unused value before resorting to
the shorter solution.
`I = [i(pxor,xmm0,xmm0),i(pcmpeqd,xmm0,xmm0)]`
---
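As a side remark, the same iterative deepening idea is easy to express outside Prolog as well; the following minimal Python sketch searches a toy version of the SIMD state model (the instruction coverage and state encoding are simplified assumptions, purely for illustration).

```python
# Toy re-implementation of the search in Python: a state is a tuple of eight
# register contents, each either an opaque symbol (unknown value) or a 4-tuple
# of packed dwords, highest to lowest.
START = ("xmm0", "xmm1", "xmm2", "xmm3", "xmm4", "xmm5", "xmm6", "xmm7")

def successors(state):
    """Yield (instruction, next_state) pairs for a small instruction subset."""
    for d in range(8):
        # pxor xmmD, xmmD zeroes the destination register.
        yield ("pxor", d, d), state[:d] + ((0, 0, 0, 0),) + state[d + 1:]
        # pcmpeqd xmmD, xmmD sets every packed dword to all ones (-1).
        yield ("pcmpeqd", d, d), state[:d] + ((-1, -1, -1, -1),) + state[d + 1:]
        for s in range(8):
            if s != d and isinstance(state[s], tuple):
                # movdqa xmmD, xmmS copies a register with known contents.
                yield ("movdqa", d, s), state[:d] + (state[s],) + state[d + 1:]

def ids(goal_test, max_depth=4):
    """Iterative deepening: try depth 0, 1, 2, ... and return the first plan found."""
    def dfs(state, depth):
        if goal_test(state):
            return []
        if depth == 0:
            return None
        for instr, nxt in successors(state):
            tail = dfs(nxt, depth - 1)
            if tail is not None:
                return [instr] + tail
        return None

    for depth in range(max_depth + 1):
        plan = dfs(START, depth)
        if plan is not None:
            return plan
    return None

# Shortest way to put all-ones into xmm0 (mirrors the pcmpeqd query above).
print(ids(lambda s: s[0] == (-1, -1, -1, -1)))  # -> [('pcmpeqd', 0, 0)]
```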
Suppose we are interested in broadcasting a value to all elements in a SIMD
register, an idiom that is frequently used in the prologue of a vector loop by
a vectorizing compiler [Bik04]. An instruction sequence for such a broadcast
can be found using the following query, where atom c denotes the value that
needs broadcasting.
`s( [ xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7], I,`
---
`[ [c,c,c,c] | _ ] ).`
Lacking a shuffle operation in our simplified rule set, the shortest
instruction sequence for the broadcast consists of a data movement instruction
followed by two unpack instructions.
`I = [`
---
`i(movd,xmm0,c),`
`i(punpckldq,xmm0,xmm0),`
`i(punpckldq,xmm0,xmm0)`
`]`
### 2.5 Usable Start State
The examples so far searched for a particular goal state given an _unusable_
start state. Often, however, the start state may contain some known, usable
information. As a simple example, the query
`s([xmm0, [1,2,3,4], xmm2, xmm3, xmm4, xmm5, xmm6, xmm7], I,`
---
`[[1,2,3,4] | _ ]).`
yields the following solution, which indicates that the best way to assign
particular contents to register xmm0 given a state where register xmm1 already
has these contents is simply moving the register.
`I = [i(movdqa,xmm0,xmm1)]`
---
As a more practical application, this approach can be used to find the best
sequence to sum up all elements in a SIMD register “horizontally”, an idiom
used by a vectorizing compiler [Bik04] to finalize the computation after
converting a sum-reduction loop into SIMD code. Here the query
`s( [ [a,b,c,d], xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7], I,`
---
`[ [_,_,_,(d+b)+(c+a)] | _ ] ).`
yields the following instruction sequence as first suitable solution.
`I = [`
---
`i(movdqa,xmm1,xmm0),`
`i(psrldq,xmm0,8),`
`i(paddd,xmm1,xmm0),`
`i(punpckldq,xmm0,xmm1),`
`i(paddd,xmm0,xmm1),`
`i(psrldq,xmm0,4)`
`]`
Subsequent solutions with the same length provide some true alternatives
(here, cycle counts could help find the truly best one).
`I = [`
---
`i(movdqa,xmm1,xmm0),`
`i(psrldq,xmm0,8),`
`i(paddd,xmm1,xmm0),`
`i(movdqa,xmm0,xmm1),`
`i(psrldq,xmm1,4),`
`i(paddd,xmm0,xmm1)`
`]`
Other solutions of the same length that follow may simply provide the same
instruction sequences using different intermediate registers.
Note that in this example, a _statically_ known property of the context in
which the instruction sequence is needed allowed for adding some usable
information to the start state (viz. the SIMD register contains four partial
results that need to be summed up). As stated in the introduction, at runtime
the compiler could even exploit some _dynamically_ known properties of the
context to find better instruction sequences, but it is unlikely that
exhaustive search (let alone Prolog) could be used under such circumstances.
## 3 Conclusions
In this note, we explored using Prolog to find efficient data-manipulating instruction sequences. The problem is expressed as a state space search problem, with the initial memory and register contents as start state, machine instructions as transitions, and the desired memory and register contents as goal state. Modeling instructions with a complete and accurate Prolog rule set of transitions yields the best results, and it is preferable to extract such a rule set automatically from an instruction set description in an electronic format. Search is expressed with Prolog rules as well, enhanced with iterative deepening to work around obvious complications with the default depth-first search of Prolog.
Once the best solution is found after exhaustively searching the state space, the instruction sequence can be hard-wired in any code generator that needs the data manipulation, and is then at the immediate disposal of the compiler. Here, the best solution can be defined as the shortest instruction sequence or, with some adaptation, as the instruction sequence with minimal total weight, such as the sum of the cycle counts. For the sake of brevity, we restricted our focus to finding efficient Intel SSE instruction sequences using just a subset of the SIMD state, instructions, and data types. However, the presented ideas easily generalize to other instruction sets and code generation problems.
## References
* [Bik04] Aart J.C. Bik. The Software Vectorization Handbook: Applying Multimedia Extensions for Maximum Performance. Intel Press, 2004.
* [Col93] Alain Colmerauer. The birth of Prolog. In The second ACM SIGPLAN conference on History of programming languages, pages 37–52, April 1993.
* [LS89] George F. Luger and William A. Stubblefield. Artificial Intelligence and the Design of Expert Systems. Benjamin-Cummings, Menlo Park, California, 1989.
* [Nil98] Nils J. Nilsson. Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers, San Francisco, California, 1998.
# Schur’s exponent conjecture II
Michael Vaughan-Lee
(December 2021)
###### Abstract
Primoz Moravec published a very important paper in 2007 where he proved that
if $G$ is a finite group of exponent $n$ then the exponent of the Schur
multiplier of $G$ can be bounded by a function $f(n)$ depending only on $n$.
Moravec does not give a value for $f(n)$, but actually his proof shows that we
can take $f(n)=ne$ where $e$ is the order of $b^{-n}a^{-n}(ab)^{n}$ in the
Schur multiplier of $R(2,n)$. (Here $R(2,n)$ is the largest finite two
generator group of exponent $n$, and we take $a,b$ to be the generators of
$R(2,n)$.) It is an easy hand calculation to show that $e=n$ for $n=2,3$, and
it is a straightforward computation with the $p$-quotient algorithm to show
that $e=n$ for $n=4,5,7$. The groups $R(2,8)$ and $R(2,9)$ are way out of
range of the $p$-quotient algorithm, even with a modern supercomputer. But we
are able to show that $e\geq n$ for $n=8,9$. Moravec’s proof also shows that
if $G$ is a finite group of exponent $n$ with nilpotency class $c$, then the
exponent of the Schur multiplier of $G$ is bounded by $ne$ where $e$ is the
order of $b^{-n}a^{-n}(ab)^{n}$ in the Schur multiplier of the class $c$
quotient $R(2,n;c)$ of $R(2,n)$. If $q$ is a prime power we let $e_{q,c}$ be
the order of $b^{-q}a^{-q}(ab)^{q}$ in the Schur multiplier of $R(2,q;c)$. We
are able to show that $e_{p^{k},p^{2}-p-1}$ divides $p$ for all prime powers
$p^{k}$. If $k>2$ then $e_{2^{k},c}$ equals 2 for $c<4$, equals 4 for $4\leq
c\leq 11$, and equals $8$ for $c=12$. If $k>1$ then $e_{3^{k},c}$ equals 1 for
$c<3$, equals 3 for $3\leq c<12$, and equals 9 for $c=12$.
We also investigate the order of $[b,a]$ in a Schur cover for $R(2,q;c)$.
## 1 Introduction
There is a long-standing conjecture attributed to I. Schur that the exponent
of the Schur multiplier, $M(G)$, of a finite group $G$ divides the exponent of
$G$. It is easy to show that this conjecture holds true for groups of exponent
2 and exponent 3, but a counterexample in exponent 4 was found by Bayes,
Kautsky and Wamsley [1] in 1974. The conjecture remained open for odd exponent
until 2020, when I found counterexamples of exponent 5 and exponent 9 [6]. It
seems certain that there are counterexamples to this conjecture for all prime
powers greater than 3, but this leaves open the question of what bounds on the
exponent of $M(G)$ might hold true.
The usual definition of the Schur multiplier of a finite group $G$ is the
second homology group $H_{2}(G,\mathbb{Z})$. For computational purposes we use
the Hopf formulation of the Schur multiplier, which is as follows. We write
$G=F/R$, where $F$ is a free group, and then
$M(G)=(R\cap F^{\prime})/[F,R].$
If we let $H=F/[F,R]$ then $H$ is an infinite group, and the quotient
$H/H^{\prime}$ is a free abelian group with rank equal to the rank of $F$ as a
free group. However the centre of $H$ contains $R/[F,R]$ and has finite index
in $H$. So the derived group $H^{\prime}=F^{\prime}/[F,R]$ is finite, and
$M(G)$ is finite. It is known that the exponent of $M(G)$ divides the order of
$G$. Furthermore if $G$ has exponent $n$ and if $h\in H^{\prime}$ then
$h^{n}\in M(G)$. So the quotient $H^{\prime}/M(G)$ has exponent dividing $n$.
In particular, if $G$ is a finite $p$-group then $H^{\prime}$ is a finite
$p$-group.
Primoz Moravec published a very important paper [5] in 2007 in which he proved
that if $G$ is a finite group of exponent $n$ then the exponent of $M(G)$ is
bounded by a function $f(n)$ depending only on $n$. Moravec does not give an
explicit formula for $f(n)$, but his proof of this theorem actually shows that
if $G$ is a finite group of exponent $n$ then the exponent of $M(G)$ divides
$ne$ where $e$ is the order of $b^{-n}a^{-n}(ab)^{n}$ in the Schur multiplier
of $R(2,n)$. (We let $R(2,n)$ denote the largest finite $2$ generator group of
exponent $n$, and we take the generators of $R(2,n)$ to be $a,b$.)
###### Theorem 1 (Moravec, 2007)
Let $b^{-n}a^{-n}(ab)^{n}\in M(R(2,n))$ have order $e$. If $G$ is any finite
group of exponent $n$ then the exponent of $M(G)$ divides $ne$.
We give a short proof of this theorem in Section 2. The same proof gives the
following theorem.
###### Theorem 2
Let $R(2,n;c)$ be the nilpotent of class $c$ quotient of $R(2,n)$, and let
$e_{n,c}$ be the order of $b^{-n}a^{-n}(ab)^{n}\in M(R(2,n;c))$. If $G$ is any
finite group of exponent $n$ with class $c$, then the exponent of $M(G)$
divides $ne_{n,c}$.
Our proof of Moravec’s theorem also gives the following corollary.
###### Corollary 3
Let $R(d,n)$ be the largest finite $d$ generator group of exponent $n$. Then
if $d\geq 2$ the exponent of $M(R(d,n))$ is the order of
$b^{-n}a^{-n}(ab)^{n}\in M(R(2,n))$.
Theorem 1 led me to investigate the order $e_{q}$ of $b^{-q}a^{-q}(ab)^{q}\in
M(R(2,q))$ for prime power exponents $q$. (We restrict ourselves to groups of
prime power exponent since if $G$ is any finite group and $p$ is any prime
then the Sylow $p$-subgroup of $M(G)$ is a subgroup of the Schur multiplier of
the Sylow $p$-subgroup of $G$.) It is an easy hand calculation to show that
$e_{q}=q$ for $q=2,3$. And it is a straightforward computation with the
$p$-quotient algorithm [4] to show that $e_{q}=q$ for $q=4,5,7$. Computing the
groups $R(2,8)$ or $R(2,9)$ (or their Schur covers) is way out of the range of
the $p$-quotient algorithm, even with a modern supercomputer. But I am able to
show that $e_{q}\geq q$ for $q=8,9$.
Theorem 2 led me to investigate the order $e_{q,c}$ of
$b^{-q}a^{-q}(ab)^{q}\in M(R(2,q;c))$ for prime power exponents $q$ and
various $c$.
###### Theorem 4
If $q$ is a power of the prime $p$ then $e_{q,p^{2}-p-1}$ divides $p$.
###### Theorem 5
If $k>2$ then $e_{2^{k},c}$ equals $2$ for $c<4$, equals $4$ for $4\leq c\leq
11$, and equals $8$ for $c=12$.
###### Theorem 6
If $k>1$ then $e_{3^{k},c}$ equals $1$ for $c<3$, equals $3$ for $3\leq c<12$,
and equals $9$ for $c=12$.
The counterexamples to Schur’s conjecture found in [1] and [6] are based on
the following construction. Let $H(q,c)$ be the largest four generator group
of exponent $q$ and nilpotency class $c$ which is generated by $a,b,c,d$ and
subject to the relation
$[b,a][d,c]=1.$
The Bayes, Kautsky and Wamsley example in [1] is $H(4,4)$ which has Schur
multiplier of exponent 8. The element $[b,a][d,c]$ has order 8 in the Schur
multiplier, and in fact $[b,a]$ has order $8$ in a Schur cover of $R(2,4;4)$.
Similarly my examples in [6] are based on $H(5,9)$ and $H(9,9)$. In the Schur
multipliers of these two groups the elements $[b,a][d,c]$ have order 25 and
27. The examples “work” because $[b,a]$ has order 25 and 27 in Schur covers of
$R(2,5;9)$ and $R(2,9;9)$. It seems plausible that more generally the exponent
of $M(H(q,c))$ is the order of $[b,a]$ in a Schur cover of $R(2,q;c)$, though
I have no idea how to prove this in general. For this reason I have looked at
the order of $[b,a]$ in Schur covers of $R(2,q;c)$ for various $q,c$. For
example, $[b,a]$ has order 32 in a Schur cover of $R(2,8;12)$ so it seems to
me to be extremely likely that $M(H(8,12))$ has exponent 32, though I cannot
prove it (yet!). This is an interesting example because all $p$-group
counterexamples $G$ to Schur’s conjecture found so far have
exp$\,M(G)=p\,$exp$\,G$.
I am able to prove that $M(H(q,4))$ has exponent $2q$ whenever $q\geq 4$ is a
power of 2. For $q=9,27$ $M(H(q,9))$ has exponent $3q$, and it seems very
likely that $M(H(q,9))$ has exponent $3q$ for all $q>3$ which are powers of 3.
Similarly for $q=5,25$ $M(H(q,9))$ has exponent $5q$, and it seems very likely
that $M(H(q,9))$ has exponent $5q$ for all $q$ which are powers of 5.
###### Theorem 7
If $q>4$ is a power of $2$, and if we let $f$ be the order of $[b,a]$ in a
Schur cover of $R(2,q;c)$ then $f=q$ for $c<4$, $f=2q$ for $4\leq c<12$, and
$f=4q$ for $c=12$. If $G$ is a finite $2$-group with nilpotency class less
than $4$ then exp$\,M(G)$ divides exp$\,G$.
###### Theorem 8
If $q>3$ is a power of $3$, and if we let $f$ be the order of $[b,a]$ in a
Schur cover of $R(2,q;c)$ then $f=q$ for $c<9$, and $f=3q$ for $c=9$. If $G$
is any group of exponent $q$ with class less than $9$ then exp$\,M(G)$ divides
$q$.
###### Theorem 9
Let $q$ be a power of $5$. If we let $f$ be the order of $[b,a]$ in a Schur
cover of $R(2,q;c)$ then $f=q$ for $c<9$, and $f=5q$ for $c=9$. If $G$ is any
group of exponent $q$ with class less than $9$ then exp$\,M(G)$ divides $q$.
###### Theorem 10
Let $q$ be a power of $7$. If we let $f$ be the order of $[b,a]$ in a Schur
cover of $R(2,q;c)$ then $f=q$ for $c<13$, and $f=7q$ for $c=13$. If $G$ is
any group of exponent $q$ with class less than $13$ then exp$\,M(G)$ divides
$q$.
Theorem 9 follows from the fact that in a Schur cover of $R(2,5)$ the element
$[b,a]^{5}$ is a product of commutators with entries $a$ or $b$, where at
least 5 of the entries are $a$’s and at least 5 of the entries are $b$’s.
Similarly Theorem 10 follows from the fact that in a Schur cover of $R(2,7)$
the element $[b,a]^{7}$ is a product of commutators with at least 7 entries
$a$ and at least 7 entries $b$. If the same pattern is repeated for higher
primes $p$, then I would expect the order of $[b,a]$ in $M(R(2,p^{k};c))$ to
be $p^{k}$ for $c<2p-1$ and to be $p^{k+1}$ when $c=2p-1$.
The theorems above omit the exponents 2,3,4. It is easy to see that the Schur
multiplier of a group of exponent 2 or exponent 3 has exponent 2 or 3
(respectively). Moravec [5] proves that the Schur multiplier of a group of
exponent 4 has exponent dividing 8.
## 2 Proof of Theorem 1
We write $R(2,n)=F/R$ where $F$ is the free group generated by $a,b$, and let
$H=F/[F,R]$. (We are not assuming here that $n$ is a prime power.) Then
$b^{-n}a^{-n}(ab)^{n}\in M(R(2,n))$. Let $b^{-n}a^{-n}(ab)^{n}$ have order
$e$. We show that $e$ is the exponent of $M(R(d,n))$ for all $d\geq 2$, and
that if $G$ is any finite group of exponent $n$ then exp$(M(G))$ divides $ne$.
So let $G$ be _any_ finite group of exponent $n$, let $G=F/R$ where $F$ is a
free group, and let $H=F/[F,R]$. (Apologies for using the same notation for
the covering group of $G$ as I used for the covering group of $R(2,n)$.) Let
$a,b$ be _any_ two elements in $H$. Then the subgroup of $H$ generated by $a$
and $b$ is a homomorphic image of the cover of $R(2,n)$, and so
$b^{-n}a^{-n}(ab)^{n}\in H^{\prime}$ lies in the centre of $H$ and has order
dividing $e$. Let $F$ be freely generated by the set $X$ and let $K$ be the
subgroup of $H$ generated by the elements $x^{n}[F,R]$ ($x\in X$). Then $K$ is
a free abelian group which intersects $H^{\prime}$ trivially. If $w$ is an
arbitrary element of $F$ we can write $w=x_{1}x_{2}\ldots x_{k}$ for some $k$
and some $x_{1},x_{2},\ldots,x_{k}\in X\cup X^{-1}$. Letting $a=x_{1}$ and
$b=x_{2}x_{3}\ldots x_{k}$ we see that
$w^{n}=(x_{1}x_{2}\ldots x_{k})^{n}=x_{1}^{n}(x_{2}\ldots
x_{k})^{n}b^{-n}a^{-n}(ab)^{n}.$
Repeating this argument we see that
$w^{n}=x_{1}^{n}x_{2}^{n}\ldots x_{k}^{n}c,$
where $c$ is a product of terms of the form $b^{-n}a^{-n}(ab)^{n}$ with
$a,b\in F$. So we see that if $h\in H$ then $h^{n}$ is a product of an element
in $K$ and an element in $H^{\prime}$ which lies in the centre of $H$ and has
order dividing $e$. It follows that any product of $n^{th}$ powers in $H$ can
be expressed in the same form. Since $K\cap H^{\prime}=\\{1\\}$ we see that
this implies that any product of $n^{th}$ powers in $H$ which lies in
$H^{\prime}$ has order dividing $e$. So exp$(M(R(d,n)))=e\,$ for all $d\geq
2$. If $h\in H^{\prime}$, then $h^{n}$ is an $n^{th}$ power which lies in
$H^{\prime}$, and so $h^{ne}=1$. So $H^{\prime}$ has exponent dividing $ne$,
and this implies that exp$(M(G))$ divides $ne$.
## 3 Some commutator calculus
Let $F$ be the free group of rank 2 generated by $a$ and $b$. If we are
working in the nilpotent quotient $F/\gamma_{k+1}(F)$ for some $k$ then we
pick a fixed ordered set of basic commutators of weight at most $k$. See [3,
Chapter 11]. The first few basic commutators in our sequence are
$a,\,b,\,[b,a],\,[b,a,a],\,[b,a,b],\,[b,a,a,a],\,[b,a,a,b],\,[b,a,b,b],\,[b,a,a,[b,a]].$
If $c_{1},c_{2},\ldots,c_{m}$ is our list of basic commutators of weight at
most $k$ then every element of $F/\gamma_{k+1}(F)$ can be written uniquely in
the form
$c_{1}^{n_{1}}c_{2}^{n_{2}}\ldots c_{m}^{n_{m}}\gamma_{k+1}(F)$
for some integers $n_{1},n_{2},\ldots,n_{m}$. From the theory of Hall
collection (see [3, Theorem 12.3.1]), if $n$ is any positive integer then in
$F/\gamma_{k+1}(F)$
$(ab)^{n}=a^{n}b^{n}[b,a]^{\binom{n}{2}}[b,a,a]^{\binom{n}{3}}c_{5}^{n_{5}}\ldots
c_{m}^{n_{m}}$ (1)
where the exponents $n,\binom{n}{2},\binom{n}{3},n_{5},\ldots,n_{m}$ take a
very special form. If $c_{r}$ has weight $w$ then $n_{r}$ is an integral
linear combination of the binomial coefficients
$n,\binom{n}{2},\binom{n}{3},\ldots,\binom{n}{w}$. Furthermore the integer
coefficients which arise in these linear combinations are positive, and are
independent of $n$.
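As a sanity check of equation (1) in the smallest non-trivial case, collecting $(ab)^{2}$ by hand modulo $\gamma_{3}(F)$ gives

$(ab)^{2}=abab=a^{2}b[b,a]b=a^{2}b^{2}[b,a][b,a,b]\equiv a^{2}b^{2}[b,a]\text{ modulo }\gamma_{3}(F),$

in agreement with the exponents $\binom{2}{2}=1$ and $\binom{2}{3}=0$ in equation (1).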
The exponents $n,\binom{n}{2},\binom{n}{3},n_{5},\ldots,n_{m}$ which arise in
equation (1) are all polynomials in $n$ which take integer values when $n$ is
an integer. The formula
$\binom{n}{r}=\frac{n(n-1)\ldots(n-r+1)}{r!}$
also makes sense when $n$ is negative. We let $P=\mathbb{Q}[t]$ be the ring of
polynomials in an indeterminate $t$ over the rationals $\mathbb{Q}$. An
_integer-valued polynomial_ is a polynomial $f(t)\in P$ which takes integer
values whenever $f(t)$ is evaluated at an integer $n$. The set of integer-
valued polynomials is a subring of $P$, and is a free abelian group with basis
$1,\,t,\,\frac{t(t-1)}{2!},\,\ldots,\frac{t(t-1)\ldots(t-d+1)}{d!},\,\ldots.$
The exponents $n_{i}$ which arise in equation (1) all take the form
$n_{i}=f_{i}(n)$ where $f_{i}(t)$ is an integer-valued polynomial of degree at
most wt$\,c_{i}$ which does not depend on $n$. The polynomials $f(t)\in P$
that arise in this way in equation (1) also satisfy $f(0)=0$.
We rewrite equation (1) in the following form
$(ab)^{n}=a^{n}b^{n}[b,a]^{\binom{n}{2}}[b,a,a]^{\binom{n}{3}}c_{5}^{f_{5}(n)}\ldots
c_{m}^{f_{m}(n)}$ (2)
where the integer-valued polynomials $f_{i}(t)$ are independent of $n$. The
key properties of these polynomials to keep in mind are that $f_{i}(0)=0$ and
$\deg f_{i}(t)\leq\text{wt}\,c_{i}$.
We use equation (2) to get an expansion of $[y^{n},x]$ for $x,y\in F$.
Equation (2) gives
$y^{n}x=x(y[y,x])^{n}=xy^{n}[y,x]^{n}[y,x,y]^{\binom{n}{2}}(c_{4}\alpha)^{f_{4}(n)}\ldots(c_{m}\alpha)^{f_{m}(n)}\text{
modulo }\gamma_{k+1}(F)$
where $\alpha$ is the endomorphism of $F$ mapping $a,b$ to $y,[y,x]$. This
equation gives
$[y^{n},x]=[y,x]^{n}[y,x,y]^{\binom{n}{2}}(c_{4}\alpha)^{f_{4}(n)}\ldots(c_{m}\alpha)^{f_{m}(n)}\text{
modulo }\gamma_{k+1}(F).$ (3)
## 4 Proof of Theorem 4
We want to prove that if $q$ is a power of the prime $p$ then the order of $b^{-q}a^{-q}(ab)^{q}$ in $M(R(2,q;p^{2}-p-1))$ divides $p$. The case $p=2$ is
covered by Theorem 5, and so we assume that $p>2$. We write
$R(2,q;p^{2}-p-1)=F/R$, where $F$ is the free group with free generators
$a,b$. Let $H=F/[F,R]$. So $H$ is nilpotent of class at most $p^{2}-p$, and
(as we noted in the introduction) $H^{\prime}$ is a finite $p$-group. We let
$c_{1},c_{2},\ldots,c_{m}$ be our list of basic commutators of weight at most
$p^{2}-p$ as described in Section 3. Let $x,y\in H$ and set $n=q$ in equation
(3) from Section 3. Using the fact that $H$ is nilpotent of class $p^{2}-p$ we
obtain a relation
$[y,x]^{q}[y,x,y]^{\binom{q}{2}}[y,x,y,y]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{m}\alpha)^{f_{m}(q)}=1,$
(4)
where $\alpha$ is the homomorphism from $F$ to $H$ mapping $a,b$ to $y,[y,x]$.
Recall that if wt$\,c_{i}=w$ then $\deg f_{i}(t)\leq w$. Note that if
wt$\,c_{i}=w$ then $c_{i}\alpha$ is a commutator in $x,y$ with $w$ entries
$y$. Also note that wt$\,c_{i}<p^{2}$ so that $f_{i}(q)$ is divisible by
$\frac{q}{p}$ for all $i$. And if wt$\,c_{i}<p$ then $f_{i}(q)$ is divisible
by $q$.
Since $H$ is nilpotent of class at most $p^{2}-p$, if $y\in\gamma_{p-1}(H)$
then $c_{i}\alpha=1$ whenever wt$\,c_{i}\geq p$. So if $y\in\gamma_{p-1}(H)$
then relation (4) shows that $[y,x]^{q}$ is a product of $q^{th}$ powers of
commutators in $x,y$ with higher weight. First let
$y\in\gamma_{p^{2}-p-1}(H)$. Then equation (4) gives $[y,x]^{q}=1$. Since
elements $[y,x]$ of this form generate $\gamma_{p^{2}-p}(H)$ this implies that
$\gamma_{p^{2}-p}(H)$ has exponent $q$. Next let $y\in\gamma_{p^{2}-p-2}(H)$.
Then equation (4) gives $[y,x]^{q}\in\gamma_{p^{2}-p}(H)^{q}=\\{1\\}$. So we
see that $\gamma_{p^{2}-p-1}(H)$ has exponent $q$. We continue in this way,
successively proving that $\gamma_{p^{2}-p-2}(H)$, $\gamma_{p^{2}-p-3}(H)$, …
have exponent $q$. Finally we let $y\in\gamma_{p-1}(H)$ and prove that
$\gamma_{p}(H)$ has exponent $q$.
Now set $n=pq$ in equation (3), and we obtain the relation
$[y,x]^{pq}[y,x,y]^{\binom{pq}{2}}[y,x,y,y]^{\binom{pq}{3}}(c_{5}\alpha)^{f_{5}(pq)}\ldots(c_{m}\alpha)^{f_{m}(pq)}=1$
where all the exponents $f_{i}(pq)$ are divisible by $q$, and where
commutators with less than $p$ entries $y$ have exponents divisible by $pq$.
Since all commutators in $H$ with weight at least $p$ have order dividing $q$,
this implies that $[x,y]^{pq}$ is a product of $(pq)^{th}$ powers of
commutators of higher weight in $x,y$. So $[x,y]^{pq}=1$, and hence all
commutators in $H$ have order dividing $pq$. Since $H$ has class less than
$p^{2}$ and $\gamma_{p}(H)$ has exponent $q$ this implies that $H^{\prime}$
has exponent dividing $pq$.
Finally consider $(b^{-q}a^{-q}(ab)^{q})^{p}=b^{-pq}a^{-pq}(ab)^{pq}$.
Equation (2) from Section 3 gives
$(b^{-q}a^{-q}(ab)^{q})^{p}=[b,a]^{\binom{pq}{2}}[b,a,a]^{\binom{pq}{3}}c_{5}^{f_{5}(pq)}\ldots
c_{m}^{f_{m}(pq)}.$
All the exponents in this product are divisible by $q$, and the exponents of
commutators of weight less than $p$ in the product are divisible by $pq$. So
$(b^{-q}a^{-q}(ab)^{q})^{p}=1$.
The same argument (though easier) shows that in $M(R(2,q;p-2))$ the element
$b^{-q}a^{-q}(ab)^{q}=1$.
## 5 Proof of Theorem 5 and Theorem 7
Let $q=2^{k}$ $(k>2)$, and let $e_{q,c}$ be the order of
$b^{-q}a^{-q}(ab)^{q}$ in $M(R(2,q;c))$. We want to prove that $e_{q,c}$
equals $2$ for $c<4$, equals $4$ for $4\leq c\leq 11$, and equals $8$ for
$c=12$. We also want to prove that if $f$ is the order of $[b,a]$ in a Schur
cover of $R(2,q;c)$ then $f=q$ for $c<4$, $f=2q$ for $4\leq c<12$, and $f=4q$
when $c=12$.
Let $R(2,q;12)=F/R$ where $F$ is the free group of rank two generated by
$a,b$, and let $H=F/[F,R]$. Then $H$ is an infinite group, but the subgroup
$\langle a^{q},b^{q}\rangle\leq H$ is a central subgroup with trivial
intersection with $H^{\prime}$. We can factor this subgroup out, without
impacting $M(R(2,q;12))$, and we now have a finite $2$-group. We used the
$p$-quotient algorithm to compute this quotient for $q=8,16,32$. (These were
quite easy computations.) The computations showed that $e_{q,c}$ takes the
values given in Theorem 5 for $q=8,16,32$, and that $f$ takes the values given
in Theorem 7 for $q=8,16,32$. We show that the fact that Theorem 5 and Theorem
7 hold true for $q=2^{5}$ implies that they hold true for all exponents
$q=2^{k}$ with $k\geq 5$.
We let $q=2^{k}$ where $k\geq 5$, and let $c_{1},c_{2},\ldots,c_{m}$ be our
list of basic commutators of weight at most $13$ as described in Section 3. As
in the proof of Theorem 4 we let $x,y\in H$ and obtain a relation
$[y^{q},x]=[y,x]^{q}[y,x,y]^{\binom{q}{2}}[y,x,y,y]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{m}\alpha)^{f_{m}(q)}$
(5)
where $\alpha$ is the homomorphism from $F$ to $H$ mapping $a,b$ to $y,[y,x]$.
If wt$\,c_{i}=w$ then $c_{i}\alpha$ is a commutator in $x$ and $y$, with $w$
entries $y$, and $\deg f_{i}\leq w$. The binomial coefficients $\binom{q}{2}$
and $\binom{q}{3}$ are both divisible by $\frac{q}{2}$, the binomial
coefficients $\binom{q}{d}$ for $d<8$ are divisible by $\frac{q}{4}$, and the
binomial coefficients $\binom{q}{d}$ for $d<16$ are divisible by
$\frac{q}{8}$.
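These three divisibility statements are again statements about binomial coefficients and are easy to confirm numerically; the short check below (illustrative only) tests them for several powers of 2 in the range covered by the proof.

```python
from math import comb

for k in range(5, 12):          # q = 32, 64, ..., 2048
    q = 2 ** k
    assert comb(q, 2) % (q // 2) == 0 and comb(q, 3) % (q // 2) == 0
    assert all(comb(q, d) % (q // 4) == 0 for d in range(1, 8))
    assert all(comb(q, d) % (q // 8) == 0 for d in range(1, 16))
print("tiered divisibility of C(q, d) verified for q = 2^5, ..., 2^11")
```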
If we let $y\in\gamma_{7}(H)$ then $[y,x,y]\in\gamma_{15}(H)=\\{1\\}$, so we
see that $[y,x]^{q}=1$. Now $\gamma_{8}(H)$ is generated by elements $[y,x]$
with $y\in\gamma_{7}(H)$, and $\gamma_{8}(H)$ is abelian. So $\gamma_{8}(H)$
has exponent $q$.
Now let $y\in\gamma_{4}(H)$, and replace $q$ by $2q$ in equation (5). Using
the fact that $\gamma_{8}(H)$ has exponent $q$ we see that $[y,x]^{2q}=1$. So
$\gamma_{5}(H)$ is generated by elements of order $2q$. Furthermore
$\gamma_{5}(H)$ is nilpotent of class 2, and
$\gamma_{5}(H)^{\prime}\leq\gamma_{10}(H)$ has exponent $q$. So
$\gamma_{5}(H)$ has exponent $2q$.
Next let $y\in H^{\prime}$, and replace $q$ by $4q$ in equation (5). We obtain
$[y,x]^{4q}=1$. So $\gamma_{3}(H)$ is generated by elements of order $4q$, and
using the fact that $\gamma_{5}(H)$ has exponent $2q$ and $\gamma_{8}(H)$ has
exponent $q$, we see that $\gamma_{3}(H)$ has exponent $4q$.
Finally replace $q$ by $8q$ in equation (5) and we obtain $[y,x]^{8q}=1$ for
all $x,y$. Using facts that $\gamma_{3}(H)$ has exponent $4q$, $\gamma_{5}(H)$
has exponent $2q$, and $\gamma_{8}(H)$ has exponent $q$, we see that
$H^{\prime}$ has exponent $8q$.
Let $N$ be the normal subgroup
$\gamma_{2}(F)^{8q}\gamma_{3}(F)^{4q}\gamma_{5}(F)^{2q}\gamma_{8}(F)^{q}\gamma_{14}(F)<F.$
Then $H=F/M$ where $M=\langle[y^{q},x]\,:\,x,y\in F\rangle N$.
Next we let $K$ be the normal subgroup
$\gamma_{2}(F)^{q}\gamma_{3}(F)^{\frac{q}{2}}\gamma_{5}(F)^{\frac{q}{4}}\gamma_{8}(F)^{\frac{q}{8}}\gamma_{14}(F)<F.$
(The relevance of $K$ is that if $q\geq 8$ is a power of 2 then $[y^{q},x]\in
K$ for all $x,y\in F$.) We show that every element $k\in K$ can be written
uniquely modulo $\gamma_{14}(F)$ in the form
$k=[b,a]^{qm_{3}}c_{4}^{\frac{q}{2}m_{4}}\ldots
c_{8}^{\frac{q}{2}m_{8}}c_{9}^{\frac{q}{4}m_{9}}\ldots
c_{41}^{\frac{q}{4}m_{41}}c_{42}^{\frac{q}{8}m_{42}}\ldots
c_{1377}^{\frac{q}{8}m_{1377}},$ (6)
where $m_{3},m_{4},\ldots,m_{1377}$ are integers. (The number of basic
commutators of weight at most 4 is 8, the number of weight at most 7 is 41,
and the number of weight at most 13 is 1377.) First let
$K_{2}=\gamma_{3}(F)^{\frac{q}{2}}\gamma_{5}(F)^{\frac{q}{4}}\gamma_{8}(F)^{\frac{q}{8}}\gamma_{14}(F).$
We show that every element in $\gamma_{2}(F)^{q}$ can be written as
$[b,a]^{qm_{3}}$ modulo $K_{2}$. The elements of $\gamma_{2}(F)^{q}$ are
products of $q^{th}$ powers of elements in $\gamma_{2}(F)$. Let
$x,y\in\gamma_{2}(F)$. Then from equation (2) in Section 3 we see that
$(xy)^{q}=x^{q}y^{q}[y,x]^{\binom{q}{2}}[y,x,x]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{1377}\alpha)^{f_{1377}(q)}$
where $\alpha:F\rightarrow F$ is the endomorphism mapping $a,b$ to $x,y$. Now
$[y,x]$ and $[y,x,x]$ lie in $\gamma_{4}(F)$, and $\binom{q}{2}$ and $\binom{q}{3}$
are divisible by $\frac{q}{2}$. So $[y,x]^{\binom{q}{2}}$ and
$[y,x,x]^{\binom{q}{3}}\in K_{2}$. Similarly all the terms
$(c_{5}\alpha)^{f_{5}(q)}$, $\ldots$, $(c_{1377}\alpha)^{f_{1377}(q)}$ lie in
$K_{2}$. So $(xy)^{q}=x^{q}y^{q}$ modulo $K_{2}$. This means that every
product of $q^{th}$ powers of elements in $\gamma_{2}(F)$ can be written as
the $q^{th}$ power of a single element in $\gamma_{2}(F)$ modulo $K_{2}$. So
consider $x^{q}$ when $x\in\gamma_{2}(F)$. We can write $x=[b,a]^{m_{3}}g$ for
some $g\in\gamma_{3}(F)$. Then, as we have just seen,
$x^{q}=[b,a]^{qm_{3}}g^{q}$ modulo $K_{2}$, and since $g^{q}\in K_{2}$ this
means that $x^{q}=[b,a]^{qm_{3}}$ modulo $K_{2}$.
Now let
$K_{3}=\gamma_{5}(F)^{\frac{q}{4}}\gamma_{8}(F)^{\frac{q}{8}}\gamma_{14}(F)$.
Then $K_{2}$ is generated modulo $K_{3}$ by $(\frac{q}{2})^{th}$ powers of
elements in $\gamma_{3}(F)$. Using the same argument as above we see that if
$x,y\in\gamma_{3}(F)$ then $(xy)^{\frac{q}{2}}=x^{\frac{q}{2}}y^{\frac{q}{2}}$
modulo $K_{3}$. So every product of $(\frac{q}{2})^{th}$ powers in
$\gamma_{3}(F)$ can be expressed modulo $K_{3}$ as $x^{\frac{q}{2}}$ with
$x\in\gamma_{3}(F)$. Let $x=c_{4}^{m_{4}}c_{5}^{m_{5}}\ldots c_{8}^{m_{8}}g$,
with $g\in\gamma_{5}(F)$. Then
$\displaystyle x^{\frac{q}{2}}$
$\displaystyle=c_{4}^{\frac{q}{2}m_{4}}(c_{5}^{m_{5}}\ldots
c_{8}^{m_{8}}g)^{\frac{q}{2}}\text{ modulo }K_{3}$
$\displaystyle=c_{4}^{\frac{q}{2}m_{4}}c_{5}^{\frac{q}{2}m_{5}}(c_{6}^{m_{6}}\ldots
c_{8}^{m_{8}}g)^{\frac{q}{2}}\text{ modulo }K_{3}$ $\displaystyle=\ldots$
$\displaystyle=c_{4}^{\frac{q}{2}m_{4}}\ldots
c_{8}^{\frac{q}{2}m_{8}}g^{\frac{q}{2}}\text{ modulo }K_{3}$
$\displaystyle=c_{4}^{\frac{q}{2}m_{4}}\ldots c_{8}^{\frac{q}{2}m_{8}}\text{
modulo }K_{3}.$
So every element of $\gamma_{2}(F)^{q}\gamma_{3}(F)^{\frac{q}{2}}$ can be
expressed modulo $K_{3}$ in the form
$[b,a]^{qm_{3}}c_{4}^{\frac{q}{2}m_{4}}\ldots c_{8}^{\frac{q}{2}m_{8}}.$
Continuing in this way we see that every element $k\in K$ can be written in
the form (6) modulo $\gamma_{14}(F)$. Since every element of
$F/\gamma_{14}(F)$ can be uniquely expressed in the form
$c_{1}^{n_{1}}c_{2}^{n_{2}}\ldots c_{1377}^{n_{1377}}\gamma_{14}(F)$
for some integers $n_{1},n_{2},\ldots,n_{1377}$, the expression (6) for $k$
modulo $\gamma_{14}(F)$ is unique.
Similarly every element of $N$ can be expressed uniquely modulo
$\gamma_{14}(F)$ in the form
$[b,a]^{8qn_{3}}c_{4}^{4qn_{4}}\ldots c_{8}^{4qn_{8}}c_{9}^{2qn_{9}}\ldots
c_{41}^{2qn_{41}}c_{42}^{qn_{42}}\ldots c_{1377}^{qn_{1377}},$
and if $k\in K$ is given by (6) then $k\in N$ if and only if $8|m_{i}$ for
$3\leq i\leq 1377$. So $K$ is generated by
$C=\\{[b,a]^{q},c_{4}^{\frac{q}{2}},\ldots,c_{8}^{\frac{q}{2}},c_{9}^{\frac{q}{4}},\ldots,c_{41}^{\frac{q}{4}},c_{42}^{\frac{q}{8}},\ldots,c_{1377}^{\frac{q}{8}}\\}$
modulo $\gamma_{14}(F)$, and all these generators have order 8 modulo $N$. We
show that provided $q\geq 32$ these generators commute with each other modulo
$N$, so that $K/N$ is abelian. It follows that $K/N$ is a direct sum of 1375
copies of the cyclic group of order 8. If $k\in K$ has the form (6) then we
let $[m_{3},m_{4},\ldots,m_{1377}]$ be the _representative vector_ of $kN$,
and we think of this vector as an element in $C_{8}^{1375}$. Multiplying two
elements of $K/N$ corresponds to adding their representative vectors, and
$k\in N$ if and only if the representative vector of $kN$ is zero.
To show that the elements in $C$ commute with each other we first let
$c,d\in\gamma_{3}(F)$ and consider the commutator $[c^{r},d^{s}]$ for general
$r,s>0$. The subgroup $\langle c,d\rangle$ has class at most 4 modulo $N$, and
we expand $[c^{r},d^{s}]$ modulo $\gamma_{5}(\langle c,d\rangle)$. Taking
$x=d^{s}$ we see that
$[c^{r},x]=[c,x]^{r}[c,x,c]^{\binom{r}{2}}[c,x,c,c]^{\binom{r}{3}}.$
Also
$[c,x]=[c,d^{s}]=[c,d]^{s}[c,d,d]^{\binom{s}{2}}[c,d,d,d]^{\binom{s}{3}}.$
So, modulo $\gamma_{5}(\langle c,d\rangle)$,
$\displaystyle[c^{r},d^{s}]$
$\displaystyle=([c,d]^{s}[c,d,d]^{\binom{s}{2}}[c,d,d,d]^{\binom{s}{3}})^{r}[[c,d]^{s}[c,d,d]^{\binom{s}{2}},c]^{\binom{r}{2}}[[c,d]^{s},c,c]^{\binom{r}{3}}$
$\displaystyle=[c,d]^{rs}[c,d,d]^{r\binom{s}{2}}[c,d,d,d]^{r\binom{s}{3}}[c,d,c]^{\binom{r}{2}s}[c,d,d,c]^{\binom{r}{2}\binom{s}{2}}[c,d,c,c]^{\binom{r}{3}s}.$
Now assume that $q\geq 32$ and let $r=s=\frac{q}{2}$; then $2q$ divides $rs$,
and all the other exponents in the product above are divisible by $q$, so that
$[c^{\frac{q}{2}},d^{\frac{q}{2}}]\in N$. So the elements in
$\\{c_{i}^{\frac{q}{2}}:\,$wt$\,c_{i}=3,4\\}$ commute with each other modulo
$N$. Similarly if $c\in\gamma_{3}(F)$ and $d\in\gamma_{5}(F)$ then $\langle
c,d\rangle$ is nilpotent of class at most 3 modulo $N$, and in the expansion
of $[c^{\frac{q}{2}},d^{\frac{q}{4}}]$ the exponent of $[c,d]$ is divisible by
$2q$, and all the exponents of terms of weight 3 are divisible by $q$, so that
$[c^{\frac{q}{2}},d^{\frac{q}{4}}]\in N$. So elements in
$\\{c_{i}^{\frac{q}{2}}:\,$wt$\,c_{i}=3,4\\}$ commute with elements in
$\\{c_{i}^{\frac{q}{4}}\,:\,5\leq\,$wt$\,c_{i}\leq 7\\}$ modulo $N$. If
$c\in\gamma_{3}(F)$ and $d\in\gamma_{8}(F)$ then $\langle c,d\rangle$ is
nilpotent of class at most 2 modulo $N$, so that
$[c^{\frac{q}{2}},d^{\frac{q}{8}}]=[c,d]^{\frac{q^{2}}{16}}\in N.$
In the same way, if $c,d\in\gamma_{5}(F)$ then $\langle c,d\rangle$ is
nilpotent of class at most 2 modulo $N$, and
$\displaystyle[c^{\frac{q}{4}},d^{\frac{q}{4}}]$
$\displaystyle=[c,d]^{\frac{q^{2}}{16}}\in N,$
$\displaystyle[c^{\frac{q}{4}},d^{\frac{q}{8}}]$
$\displaystyle=[c,d]^{\frac{q^{2}}{32}}\in N.$
Finally, if $c,d\in\gamma_{7}(F)$ then $[c,d]\in N$, so $[c^{r},d^{s}]\in N$
for all $r,s$. So all the elements in $C\backslash\\{[b,a]^{q}\\}$ commute
with each other modulo $N$ (provided $q\geq 32$).
It remains to show that $[b,a]^{q}$ commutes with elements in $C$. Let
$d\in\gamma_{3}(F)$ and let $c=[b,a]$. We obtain an expression for
$[d^{r},c^{s}]$ modulo $N$ similar to the expression we obtained above for
$[c^{r},d^{s}]$ modulo $N$ when $c,d\in\gamma_{3}(F)$. In this case we have
$\gamma_{7}(\langle c,d\rangle)\leq N$, so we obtain an expression involving
basic commutators in $c,d$ of weight at most 6. A complete list of basic
commutators up to weight 5 is
$\displaystyle
c,d,[d,c],[d,c,c],[d,c,d],[d,c,c,c],[d,c,c,d],[d,c,d,d],[d,c,c,c,c],$
$\displaystyle[d,c,c,c,d],[d,c,c,d,d],[d,c,d,d,d],[d,c,c,[d,c]],[d,c,d,[d,c]].$
We also need the first basic commutator of weight 6: $[d,c,c,c,c,c]$. All the
other basic commutators of weight 6 in $c,d$ lie in $N$. We call these basic
commutators (including $[d,c,c,c,c,c]$) $d_{1},d_{2},\ldots,d_{15}$. Then
$[d^{r},c^{s}]=d_{3}^{n_{3}}d_{4}^{n_{4}}\ldots d_{15}^{n_{15}}\text{ modulo
}N$
where $n_{3},n_{4},\ldots,n_{15}$ equal
$\displaystyle
rs,r\binom{s}{2},\binom{r}{2}s,r\binom{s}{3},\binom{r}{2}\binom{s}{2},\binom{r}{3}s,r\binom{s}{4},\binom{r}{2}\binom{s}{3},\binom{r}{3}\binom{s}{2},\binom{r}{4}s,$
$\displaystyle
3\binom{r}{2}\binom{s}{3}+2\binom{r}{2}\binom{s}{2}+r\binom{s}{3},4\binom{r}{3}\binom{s}{2}+2\binom{r}{3}s+3\binom{r}{2}\binom{s}{2}+\binom{r}{2}s,r\binom{s}{5}.$
The derivation of these exponents is straightforward, but tedious, so I will
omit it. It is easy to use a computer to check that they are correct for any
given values of $r$ and $s$. Using this expression for $[d^{r},c^{s}]$ with
$c^{s}=[b,a]^{q}$ it is straightforward to check that $[b,a]^{q}$ commutes
with all the elements in $C$ modulo $N$.
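As a partial machine check of the kind mentioned above (our own sketch, not the author's code), the snippet below evaluates the thirteen exponents $n_{3},\ldots,n_{15}$ and verifies, in the representative case where $d$ is a basic commutator of weight 3 (so $d$ enters $C$ as $d^{q/2}$ and $c^{s}=[b,a]^{q}$), the divisibility needed to place each $d_{i}^{n_{i}}$ in $N$: here $d_{3}=[d,c]$ and $d_{4}=[d,c,c]$ have weight 5 and 7, so their exponents must be divisible by $2q$, while $d_{5},\ldots,d_{15}$ have weight at least 8, so divisibility by $q$ suffices.

```python
from math import comb

def exponents(r, s):
    # n_3, ..., n_15 from the expansion of [d^r, c^s] listed above
    return [
        r * s,
        r * comb(s, 2),
        comb(r, 2) * s,
        r * comb(s, 3),
        comb(r, 2) * comb(s, 2),
        comb(r, 3) * s,
        r * comb(s, 4),
        comb(r, 2) * comb(s, 3),
        comb(r, 3) * comb(s, 2),
        comb(r, 4) * s,
        3 * comb(r, 2) * comb(s, 3) + 2 * comb(r, 2) * comb(s, 2) + r * comb(s, 3),
        4 * comb(r, 3) * comb(s, 2) + 2 * comb(r, 3) * s
            + 3 * comb(r, 2) * comb(s, 2) + comb(r, 2) * s,
        r * comb(s, 5),
    ]

for k in range(5, 12):                 # q = 32, ..., 2048
    q = 2 ** k
    n = exponents(q // 2, q)           # r = q/2, s = q
    assert n[0] % (2 * q) == 0 and n[1] % (2 * q) == 0   # d_3, d_4
    assert all(x % q == 0 for x in n[2:])                # d_5, ..., d_15
print("[d^{q/2}, [b,a]^q] lands in N for q = 2^5, ..., 2^11")
```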
This completes our proof that $K/N$ is a direct product of 1375 copies of the
cyclic group of order 8.
We have shown that every element $k\in K$ can be expressed uniquely modulo
$\gamma_{14}(F)$ in the form (6) above. But there is a problem in that the
coefficients $m_{3},m_{4},\ldots,m_{1377}$ which appear in (6) can depend on
$q$. To illustrate this, consider the following example. Let
$c_{i},c_{j},c_{k}$ be the basic commutators
$[b,a,a,a,a],\,[b,a,a,a,b],\,[[b,a,a,a,b],[b,a,a,a,a]].$
Then working modulo $\gamma_{14}(F)$ we have
$(c_{i}c_{j})^{\frac{q}{4}}=c_{i}^{\frac{q}{4}}c_{j}^{\frac{q}{4}}c_{k}^{\frac{q}{8}(\frac{q}{4}-1)}.$
However $8|\frac{q}{4}$ provided $q\geq 32$, and so the representative vector
of $(c_{i}c_{j})^{\frac{q}{4}}N$, thought of as an element in $C_{8}^{1375}$,
is
$[0,\ldots,0,1,0,\ldots,0,1,0,\ldots,0,-1,0,\ldots,0]$
which does not depend on $q$.
To solve this problem in generality we need to investigate the binomial
coefficients $\binom{q}{d}$ for $1\leq d\leq 13$. These are all divisible by
$\frac{q}{8}$, so we can write $\binom{q}{d}=\frac{q}{8}n$ for some integer
$n$. We show that $n\operatorname{mod}8$ only depends on $d$, and not on $q$
(provided $q\geq 32$).
Consider $\binom{q}{8}$ for example. We want to show that
$\binom{q}{8}=\frac{q}{8}n$ where $n\operatorname{mod}8$ does not depend on
$q$.
$\displaystyle\frac{q}{8}n$ $\displaystyle=\frac{q^{8}-28q^{7}+322q^{6}-1960q^{5}+6769q^{4}-13\,132q^{3}+13\,068q^{2}-5040q}{8!}$ $\displaystyle=-\frac{q}{8}+\frac{2^{2}\times 3267\times q^{2}+rq^{3}}{2^{7}\times 315}$
for some integer $r$. The fraction $\frac{2^{2}\times 3267\times q^{2}+rq^{3}}{2^{7}\times 315}$ is actually an integer, and since $q=2^{k}$
with $k\geq 5$ we can write this integer as $qm$ for some integer $m$.
Dividing through by $\frac{q}{8}$ we have $n=-1+8m$, and so
$n=-1\operatorname{mod}8$.
For another example consider $\binom{q}{12}$, and again write this binomial
coefficient as $\frac{q}{8}n$.
$\frac{q}{8}n=-\frac{q}{3\times 4}+\frac{2^{7}\times 1024785\times
q^{2}+rq^{3}}{12!}$
for some integer $r$. Multiplying both sides of this equation by 3 we have
$\frac{q}{8}3n=-\frac{q}{4}+sq$
for some integer $s$. So $3n\operatorname{mod}8=-2$ and
$n\operatorname{mod}8=2$.
The binomial coefficients $\binom{q}{d}$ where $d$ is odd are all divisible by
$q$, so can all be written in the form $\frac{q}{8}n$ where
$n\operatorname{mod}8=0$. The binomial coefficients $\binom{q}{d}$ for
$d=2,4,6,10$ can be handled in the same way as we dealt with the cases
$d=8,12$.
Similarly the binomial coefficients $\binom{q}{d}$ for $1\leq d\leq 7$ are all
divisible by $\frac{q}{4}$ and can all be written in the form $\frac{q}{4}n$
where $n\operatorname{mod}8$ does not depend on $q$.
And finally the binomial coefficients $\binom{q}{d}$ for $1\leq d\leq 3$ are
all divisible by $\frac{q}{2}$ and can all be written in the form
$\frac{q}{2}n$ where $n\operatorname{mod}8$ does not depend on $q$.
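The pattern behind these case-by-case computations can also be confirmed numerically. The sketch below (the helper `residues` is purely illustrative) writes $\binom{q}{d}$ as the indicated multiple of $\frac{q}{8}$, $\frac{q}{4}$ or $\frac{q}{2}$ and checks that the resulting value of $n\bmod 8$ is the same for every $q=2^{k}$ with $k\geq 5$ in the range tested.

```python
from math import comb

def residues(unit_divisor, d_range, ks=range(5, 12)):
    # For q = 2^k, write C(q, d) = (q / unit_divisor) * n and collect n mod 8.
    out = {}
    for d in d_range:
        vals = set()
        for k in ks:
            q = 2 ** k
            unit = q // unit_divisor
            assert comb(q, d) % unit == 0
            vals.add((comb(q, d) // unit) % 8)
        out[d] = vals
    return out

# Each set should contain a single residue, i.e. n mod 8 is independent of q.
assert all(len(v) == 1 for v in residues(8, range(1, 14)).values())
assert all(len(v) == 1 for v in residues(4, range(1, 8)).values())
assert all(len(v) == 1 for v in residues(2, range(1, 4)).values())
print(residues(8, [8, 12]))   # {8: {7}, 12: {2}}: n = -1 mod 8 and n = 2 mod 8
```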
Now we return to the issue of representing an element $[y^{q},x]$ in the form
(6) above. We need to show that provided $q\geq 32$ then the representative
vector of $[y^{q},x]N$ depends only on $x,y$, and not on $q$. We introduce the
notation rv$\,(kN)$ for the representative vector of $kN$ when $k\in K$.
Equation (3) from Section 3 gives
$[y^{q},x]=[y,x]^{q}[y,x,y]^{\binom{q}{2}}[y,x,y,y]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{1377}\alpha)^{f_{1377}(q)}$
modulo $N$, where $\alpha$ is the endomorphism of $F$ mapping $a,b$ to
$y,[y,x]$. And so
$\text{rv}\,([y^{q},x]N)=\text{rv}\,([y,x]^{q}N)+\text{rv}\,([y,x,y]^{\binom{q}{2}}N)+\ldots+\text{rv}\,((c_{1377}\alpha)^{f_{1377}(q)}N).$
We show that all the summands on the right hand side of this equation depend
only on $x,y$, and not on $q$.
First consider a summand rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ where
wt$\,c_{i}\geq 8$. From our analysis of binomial coefficients we know that
$f_{i}(q)=n\frac{q}{8}$ for some integer $n$ where $n\bmod 8$ is independent
of $q$ (provided $q\geq 32$). Furthermore $(c_{i}\alpha)\in\gamma_{8}(F)$ so
that $(c_{i}\alpha)^{q}\in N$. So
$(c_{i}\alpha)^{f_{i}(q)}N=(c_{i}\alpha)^{(n\bmod 8)\frac{q}{8}}N.$
We show that rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ depends only on $x,y$ by
showing that if $g\in\gamma_{8}(F)$ then rv$\,(g^{\frac{q}{8}}N)$ depends only
on $g$ and not on $q$. Let
$g=c_{42}^{\beta_{42}}\ldots c_{1377}^{\beta_{1377}}\text{ modulo }N,$
for some integers $\beta_{i}$. Then since $\gamma_{8}(F)$ is abelian modulo
$N$,
$g^{\frac{q}{8}}=c_{42}^{\frac{q}{8}\beta_{42}}\ldots
c_{1377}^{\frac{q}{8}\beta_{1377}}\text{ modulo }N,$
and rv$\,(g^{\frac{q}{8}}N)$ equals
$[0,\ldots,0,\beta_{42},\ldots,\beta_{1377}]$
which depends on $g$, and not on $q$.
Next consider a summand rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ where
$4\leq\text{wt}\,c_{i}<8$. We can write $f_{i}(q)=n\frac{q}{4}$ for some
integer $n$ where $n\bmod 8$ is independent of $q$. The element
$c_{i}\alpha\in\gamma_{5}(F)$ and so $(c_{i}\alpha)^{2q}\in N$. So
$(c_{i}\alpha)^{f_{i}(q)}N=(c_{i}\alpha)^{(n\bmod 8)\frac{q}{4}}N.$
We show that rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ depends only on $x,y$ by
showing that if $g\in\gamma_{5}(F)$ then rv$\,(g^{\frac{q}{4}}N)$ depends only
on $g$ and not on $q$. Let
$g=c_{9}^{\beta_{9}}\ldots c_{1377}^{\beta_{1377}}\text{ modulo }N,$
for some integers $\beta_{i}$. Then
$g^{\frac{q}{4}}=c_{9}^{\frac{q}{4}\beta_{9}}\ldots
c_{1377}^{\frac{q}{4}\beta_{1377}}h^{\binom{\frac{q}{4}}{2}}\text{ modulo }N,$
where
$h=\prod_{9\leq i<j\leq 1377}[c_{j}^{\beta_{j}},c_{i}^{\beta_{i}}].$
So
$\text{rv}\,(g^{\frac{q}{4}}N)=[0,\ldots,0,\beta_{9},\ldots,\beta_{41},2\beta_{42},\ldots,2\beta_{1377}]+u$
where $u=\,$rv$\,(h^{\binom{\frac{q}{4}}{2}}N)$. As we have seen
$h^{\binom{\frac{q}{4}}{2}}N=h^{-\frac{q}{8}}N$
so $u$ depends only on $h$, which in turn depends only on $g$. So
rv$\,(g^{\frac{q}{4}}N)$ depends only on $g$ and not on $q$.
Now consider a summand rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ where wt$\,c_{i}=2$
or $3$. We can write $f_{i}(q)=n\frac{q}{2}$ for some integer $n$ where
$n\bmod 8$ is independent of $q$. The element $c_{i}\alpha\in\gamma_{3}(F)$
and so $(c_{i}\alpha)^{4q}\in N$. So
$(c_{i}\alpha)^{f_{i}(q)}N=(c_{i}\alpha)^{(n\bmod 8)\frac{q}{2}}N.$
We show that rv$\,((c_{i}\alpha)^{f_{i}(q)}N)$ depends only on $x,y$ by
showing that if $g\in\gamma_{3}(F)$ then rv$\,(g^{\frac{q}{2}}N)$ depends only
on $g$ and not on $q$. Let $g=c_{4}^{\beta_{4}}h$ modulo $N$ where
$h=c_{5}^{\beta_{5}}\ldots c_{1377}^{\beta_{1377}}.$
Then from equation (2) in Section 3 we see that
$g^{\frac{q}{2}}=c_{4}^{\frac{q}{2}\beta_{4}}h^{\frac{q}{2}}[h,c_{4}^{\beta_{4}}]^{\binom{\frac{q}{2}}{2}}(c_{4}\gamma)^{f_{4}(\frac{q}{2})}\ldots(c_{1377}\gamma)^{f_{1377}(\frac{q}{2})}\text{
modulo }N$
where $\gamma$ is the endomorphism of $F$ mapping $a,b$ to
$c_{4}^{\beta_{4}},h$. If wt$\,c_{i}>4$ then $c_{i}\gamma\in\gamma_{14}(F)$
and so
$g^{\frac{q}{2}}=c_{4}^{\frac{q}{2}\beta_{4}}h^{\frac{q}{2}}[h,c_{4}^{\beta_{4}}]^{\binom{\frac{q}{2}}{2}}(c_{4}\gamma)^{f_{4}(\frac{q}{2})}\ldots(c_{8}\gamma)^{f_{8}(\frac{q}{2})}\text{
modulo }N$
and
$\text{rv}\,(g^{\frac{q}{2}}N)=\text{rv}\,(c_{4}^{\frac{q}{2}\beta_{4}}N)+\text{rv}\,(h^{\frac{q}{2}}N)+\ldots+\text{rv}\,((c_{8}\gamma)^{f_{8}(\frac{q}{2})}N).$
Clearly rv$\,(c_{4}^{\frac{q}{2}\beta_{4}}N)=[0,\beta_{4},0,\ldots,0]$
depends only on $g$. And we can assume by induction on the length of the
product $h=c_{5}^{\beta_{5}}\ldots c_{1377}^{\beta_{1377}}$ that
rv$\,(h^{\frac{q}{2}}N)$ depends only on $h$, and hence only on $g$.
So consider the element $[h,c_{4}^{\beta_{4}}]$. This element lies in
$\gamma_{6}(F)$ so that $[h,c_{4}^{\beta_{4}}]^{2q}\in N$. Furthermore
$\binom{\frac{q}{2}}{2}=\frac{q}{4}(\frac{q}{2}-1)=-\frac{q}{4}\text{ modulo
}2q$
provided $q\geq 32$. So
$[h,c_{4}^{\beta_{4}}]^{\binom{\frac{q}{2}}{2}}N=[h,c_{4}^{\beta_{4}}]^{-\frac{q}{4}}N$
and rv$\,([h,c_{4}^{\beta_{4}}]^{\binom{\frac{q}{2}}{2}}N)$ (as we have seen
above) depends only on $[h,c_{4}^{\beta_{4}}]$, which in turn depends only on
$g$.
For $i=4,5,6,7,8$ $c_{i}\gamma\in\gamma_{9}(F)$, so that $(c_{i}\gamma)^{q}\in
N$. It is straightforward to see that $\frac{q}{8}|f_{i}(\frac{q}{2})$ for
$i=4,5,6,7,8$, and it is also straightforward to see that $f_{i}(\frac{q}{2})$
modulo $q$ equals $4\frac{q}{8}$, $6\frac{q}{8}$, $7\frac{q}{8}$,
$5\frac{q}{8}$, $5\frac{q}{8}$ for $i=4,5,6,7,8$ provided $q\geq 32$. It
follows that
$(c_{4}\gamma)^{f_{4}(\frac{q}{2})}N=(c_{4}\gamma)^{4\frac{q}{8}}N$ and
rv$\,((c_{4}\gamma)^{f_{4}(\frac{q}{2})}N)$ depends only on $c_{4}\gamma$ and
hence only on $g$. Similarly rv$\,((c_{i}\gamma)^{f_{i}(\frac{q}{2})}N)$
depends only on $g$ for $i=5,6,7,8$.
So rv$\,(g^{\frac{q}{2}}N)$ is a sum of vectors each of which depends only on
$g$.
Finally consider the summand rv$\,([y,x]^{q}N)$. Let $[y,x]=[b,a]^{\beta}h$
where $h\in\gamma_{3}(F)$. Then from equation (2) in Section 3 we see that
$[y,x]^{q}=[b,a]^{q\beta}h^{q}[h,[b,a]^{\beta}]^{\binom{q}{2}}(c_{4}\gamma)^{f_{4}(q)}\ldots(c_{23}\gamma)^{f_{23}(q)}\text{
modulo }N$,
where $\gamma$ is the endomorphism of $F$ sending $a,b$ to $[b,a]^{\beta},h$.
(For $i>23$ $c_{i}\gamma\in\gamma_{14}(F)$). So rv$\,([y,x]^{q}N)$ equals
$[\beta,0,\ldots,0]+\text{rv}\,(h^{q}N)+\text{rv}\,([h,[b,a]^{\beta}]^{\binom{q}{2}}N)+\ldots+\text{rv}\,((c_{23}\gamma)^{f_{23}(q)}N).$
We have already shown that rv$\,(h^{q}N)$ depends only on $h$, and hence only
on $x,y$, so consider rv$\,([h,[b,a]^{\beta}]^{\binom{q}{2}}N)$. Since
$[h,[b,a]^{\beta}]\in\gamma_{5}(F)$ it follows that $[h,[b,a]^{\beta}]^{2q}\in
N$. So, as we have seen above,
$[h,[b,a]^{\beta}]^{\binom{q}{2}}N=[h,[b,a]^{\beta}]^{-\frac{q}{2}}N$, so that
rv$\,([h,[b,a]^{\beta}]^{\binom{q}{2}}N)$ depends only on $x,y$. Both
$c_{4}\gamma$ and $c_{5}\gamma$ lie in $\gamma_{5}(F)$, and so
$(c_{4}\gamma)^{2q}\in N$ and $(c_{5}\gamma)^{2q}\in N$. Now
$f_{4}(q)=\binom{q}{3}$ and $f_{5}(q)=\binom{q}{2}+2\binom{q}{3}$, and so
$(c_{4}\gamma)^{f_{4}(q)}N=(c_{4}\gamma)^{q}N$ and
$(c_{5}\gamma)^{f_{5}(q)}N=(c_{5}\gamma)^{-\frac{q}{2}}N$, and
rv$\,((c_{i}\gamma)^{f_{i}(q)}N)$ depends only on $x,y$ for $i=4,5$.
The elements $c_{i}\gamma$ for $i=6,7,\ldots,23$ lie in $\gamma_{8}(F)$, and
so all have order dividing $q$ modulo $N$. And the exponents $f_{i}(q)$ for
$i=6,7,\ldots,23$ can all be expressed in the form $\frac{q}{8}n_{i}$ modulo
$q$, where $n_{i}\bmod 8$ does not depend on $q$ (provided $q\geq 32$). So
rv$\,((c_{i}\gamma)^{f_{i}(q)}N)$ depends only on $x,y$ for $i=6,7,\ldots,23$.
Putting all this together we see that rv$\,([y^{q},x]N)$ depends only on
$x,y$, and not on $q$.
As we stated earlier in this section, $H=F/M$ where
$M=\langle[y^{q},x]\,:\,x,y\in F\rangle N.$
As $x,y$ range over $F$ the elements rv$\,([y^{q},x]N)$ generate an (additive)
subgroup $S\leq C_{8}^{1375}$. An element $k\in K$ lies in $M$ if and only if
rv$\,(kN)\in S$. The key point is that provided $q\geq 32$ the subgroup $S$
does not depend on $q$.
Now consider the claim that $[b,a]$ has order $8q$ in $H$. As stated above, I
have used the $p$-quotient algorithm to confirm this for $q=8,16,32$. For
$q\geq 32$ this is equivalent to showing that in $F$, $[b,a]^{8q}\in M$ and
$[b,a]^{4q}\notin M$. We have shown above that $[b,a]^{8q}\in N$. On the other
hand, if $q\geq 32$ then $[b,a]^{4q}$ has representative vector
$[4,0,0,\ldots,0]$, and since my computations show that $[b,a]^{4q}\notin M$
when $q=32$ this implies that $[4,0,0,\ldots,0]\notin S$, and hence that
$[b,a]^{4q}\notin M$ for any $q\geq 32$.
Next consider the claim that $b^{-q}a^{-q}(ab)^{q}$ has order $8$ in $H$. This
is equivalent to showing that $b^{-8q}a^{-8q}(ab)^{8q}\in M$ and that
$b^{-4q}a^{-4q}(ab)^{4q}\notin M$. From equation (2) in Section 3 we see that
$b^{-8q}a^{-8q}(ab)^{8q}=[b,a]^{\binom{8q}{2}}[b,a,a]^{\binom{8q}{3}}c_{5}^{f_{5}(8q)}\ldots
c_{1377}^{f_{1377}(8q)}\text{ modulo }N,$
and the properties of the integer-valued polynomials $f_{i}(t)$ imply that
$b^{-8q}a^{-8q}(ab)^{8q}\in K.$
My computer calculations show that $b^{-8q}a^{-8q}(ab)^{8q}\in M$ for $q=32$.
So the representative vector of $b^{-8q}a^{-8q}(ab)^{8q}$ lies in $S$ when
$q=32$. It is not really relevant, but the representative vector of
$b^{-8q}a^{-8q}(ab)^{8q}$ is
$[4,0,0,4,4,4,0,0,\ldots,0].$
So $b^{-8q}a^{-8q}(ab)^{8q}\in M$ for all $q\geq 32$. The element
$b^{-4q}a^{-4q}(ab)^{4q}$ also lies in $K$, and my computer calculations show
that if $q=32$ then $b^{-4q}a^{-4q}(ab)^{4q}\notin M$. So the representative
vector of $b^{-4q}a^{-4q}(ab)^{4q}$ does not lie in $S$ when $q=32$, and this
implies that it does not lie in $S$ for any $q\geq 32$. So
$b^{-4q}a^{-4q}(ab)^{4q}\notin M$ for $q\geq 32$.
The claims in Theorem 5 and Theorem 7 for the orders of $b^{-q}a^{-q}(ab)^{q}$
and $[b,a]$ in Schur covers of $R(2,q;c)$ for $c<12$ follow similarly. We
replace $N$ by $N\gamma_{c+2}(F)$. If $c_{r}$ is the last basic commutator of
weight $c+1$ then any element in $K/N$ has a unique representative vector
$[m_{3},m_{4},\ldots,m_{r}]$ for all $q\geq 32$, and the proof goes through in
the same way as above.
There is a slight problem in showing that $b^{-q}a^{-q}(ab)^{q}$ is non-
trivial in the Schur cover of $R(2,q;c)$ since $b^{-q}a^{-q}(ab)^{q}\notin K$.
But we can directly calculate a Schur cover of $R(2,q;1)$ by hand, and show
that $b^{-q}a^{-q}(ab)^{q}\neq 1$ in this cover. Similarly we can show that
$[b,a]^{\frac{q}{2}}$ is non-trivial in a Schur cover of $R(2,q;1)$, so that
the order of $[b,a]$ in a Schur cover of $R(2,q;c)$ is at least $q$ for any
$c$.
Finally, let $G$ be a finite 2-group with exponent $q>2$ and nilpotency class
at most 3. We show that if $H$ is the covering group of $G$ then $H^{\prime}$
has exponent dividing $q$. Our calculations with the covering group of
$R(2,q;3)$ show that $[y,x]^{q}=[y,x,x]^{\frac{q}{2}}=1$ for all $x,y\in H$.
Groups satisfying the 2-Engel identity $[y,x,x]=1$ are nilpotent of class at
most 3. So if $h\in\gamma_{4}(H)$, $h$ can be expressed as a product of terms
$[y,x,x]$ and their inverses, with $x,y\in H$. So $h^{\frac{q}{2}}$ is a
product of terms $[y,x,x]^{\frac{q}{2}}$ and their inverses, and hence
$h^{\frac{q}{2}}=1$. This implies that $\gamma_{4}(H)$ has exponent dividing
$\frac{q}{2}$. Since $H^{\prime}$ is generated by commutators which have order
dividing $q$, and $\gamma_{4}(H)$ has exponent dividing $\frac{q}{2}$, we see
that $H^{\prime}$ has exponent dividing $q$.
## 6 Proofs of Theorem 6 and Theorem 8
The proofs of Theorem 6 and Theorem 8 are essentially the same as the proofs
of Theorem 5 and Theorem 7. Let $q=3^{k}$ where $k\geq 2$, let $R(2,q;12)=F/R$
where $F$ is the free group of rank two generated by $a,b$, and let
$H=F/[F,R]$. We can use the $p$-quotient algorithm to show that Theorem 6 and
Theorem 8 hold true when $q=9$ or 27, and we show that the fact that they hold
true for $q=27$ shows that they also hold true for higher powers of 3.
We let $q=3^{k}$ where $k\geq 3$, and let $c_{1},c_{2},\ldots,c_{1377}$ be our
list of basic commutators of weight at most $13$. As in the proof of Theorem 5
and Theorem 7 we let $x,y\in H$ and obtain a relation
$[y,x]^{q}[y,x,y]^{\binom{q}{2}}[y,x,y,y]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{1377}\alpha)^{f_{1377}(q)}=1$
(7)
where $\alpha$ is the homomorphism from $F$ to $H$ mapping $a,b$ to $y,[y,x]$.
If wt$\,c_{i}=w$ then $c_{i}\alpha$ is a commutator in $x$ and $y$, with $w$
entries $y$, and $\deg f_{i}\leq w$. The binomial coefficients $\binom{q}{1}=q$ and
$\binom{q}{2}$ are both divisible by $q$, the binomial coefficients
$\binom{q}{d}$ for $d<9$ are divisible by $\frac{q}{3}$, and the binomial
coefficients $\binom{q}{d}$ for $d<27$ are divisible by $\frac{q}{9}$.
If we let $y\in\gamma_{5}(H)$ then all commutators in $H$ with 3 or more
entries $y$ are trivial and we obtain the relation
$[y,x]^{q}[y,x,y]^{\binom{q}{2}}=1$. This implies that $[y,x]^{q}=1$, and so
(since $\gamma_{6}(H)$ is nilpotent of class 2) $\gamma_{6}(H)$ has exponent
$q$.
Now let $y\in\gamma_{2}(H)$, and replace $q$ by $3q$ in equation (7). Using
the fact that $\gamma_{6}(H)$ has exponent $q$ we see that
$[y,x]^{3q}[y,x,y]^{\binom{3q}{2}}=1$, which implies that $[y,x]^{3q}=1$, and
hence that $\gamma_{3}(H)$ has exponent $3q$.
Finally replace $q$ by $9q$ in equation (7) and using the facts that
$\gamma_{3}(H)$ has exponent $3q$ and $\gamma_{6}(H)$ has exponent $q$ we see
that $[y,x]^{9q}=1$ for all $x,y\in H$, and that $H^{\prime}$ has exponent
$9q$.
Let $N$ be the normal subgroup
$\gamma_{2}(F)^{9q}\gamma_{3}(F)^{3q}\gamma_{7}(F)^{q}\gamma_{14}(F)<F.$
Then $H=F/M$ where $M=\langle[y^{q},x]\,:\,x,y\in F\rangle N$.
Now let
$K=\gamma_{2}(F)^{q}\gamma_{3}(F)^{\frac{q}{3}}\gamma_{7}(F)^{\frac{q}{9}}\gamma_{14}(F).$
Just as in Section 5 we can show that every element $k\in K$ can be uniquely
expressed modulo $\gamma_{14}(F)$ in the form
$k=[b,a]^{qm_{3}}[b,a,a]^{\frac{q}{3}m_{4}}\ldots
c_{23}^{\frac{q}{3}m_{23}}c_{24}^{\frac{q}{9}m_{24}}\ldots
c_{1377}^{\frac{q}{9}m_{1377}}$ (8)
(There are 23 basic commutators $c_{i}$ of weight at most 6.) Similarly every
element of $N$ can be uniquely expressed modulo $\gamma_{14}(F)$ in the form
$[b,a]^{9qn_{3}}[b,a,a]^{3qn_{4}}\ldots
c_{23}^{3qn_{23}}c_{24}^{qn_{24}}\ldots c_{1377}^{qn_{1377}}.$
If $k\in K$ is given by equation (8) then $k\in N$ if and only if $9|m_{i}$
for $i=3,4,\ldots,1377$. And just as in Section 5 we can show that $K/N$ is
abelian and is a direct product of 1375 copies of the cyclic group of order 9.
We let $[m_{3},m_{4},\ldots,m_{1377}]$ be the representative vector for $kN$,
and think of this vector as an element in $C_{9}^{1375}$. Multiplying elements
of $K/N$ corresponds to adding their representative vectors, and the element
$k$ lies in $N$ if and only if the representative vector of $kN$ is $0$.
A similar argument to that given in Section 5 for the binomial coefficients
$\binom{q}{d}$ ($d\leq 13$) with $q$ a power of 2 at least 32 shows
that if $q\geq 27$ is a power of 3 then all the binomial coefficients
$\binom{q}{d}$ ($d\leq 13$) can be expressed in the form $\frac{q}{9}n$ for some
integer $n$ where $n\operatorname{mod}9$ depends only on $d$, and not on $q$.
Similarly, if $q\geq 27$ is a power of 3 then all the binomial coefficients
$\binom{q}{d}$ ($d<9$) can be expressed in the form $\frac{q}{3}n$ for some
integer $n$ where $n\operatorname{mod}9$ depends only on $d$, and not on $q$.
And finally, if $q\geq 27$ is a power of 3 then $\binom{q}{1}=q$ and
$\binom{q}{2}=qn$ where $n\bmod 9=4$. Using the same argument as we used in
Section 5, we see that if $q\geq 27$ and $x,y\in F$, then $[y^{q},x]\in K$,
and the representative vector of $[y^{q},x]N$ depends only on $x,y$, and not
on $q$. As we stated above, $H=F/M$ where $M=\langle[y^{q},x]\,:\,x,y\in
F\rangle N$. The quotient $M/N$ is a subgroup of $K/N$ and the set of
representative vectors of elements in this subgroup is a subgroup $S\leq
C_{9}^{1375}$. If $k\in K$, then $k\in M$ if and only if the representative
vector of $kN$ lies in $S$.
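The mod-9 analogues of the binomial computations in Section 5 can also be checked directly; the following sketch (illustrative only, with our own helper name) confirms the three binomial statements above for the powers of 3 in the stated range.

```python
from math import comb

def residues_mod9(unit_divisor, d_range, ks=range(3, 10)):
    # For q = 3^k, write C(q, d) = (q / unit_divisor) * n and collect n mod 9.
    out = {}
    for d in d_range:
        vals = set()
        for k in ks:
            q = 3 ** k
            unit = q // unit_divisor
            assert comb(q, d) % unit == 0
            vals.add((comb(q, d) // unit) % 9)
        out[d] = vals
    return out

assert all(len(v) == 1 for v in residues_mod9(9, range(1, 14)).values())
assert all(len(v) == 1 for v in residues_mod9(3, range(1, 9)).values())
assert residues_mod9(1, [1, 2]) == {1: {1}, 2: {4}}   # C(q,1)=q, C(q,2)=q*n with n = 4 mod 9
print("mod-9 residues are independent of q for q = 3^3, ..., 3^9")
```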
The remainder of the proof of Theorem 6 goes through in the same way as in
Section 5, as does the proof of the claims in Theorem 8 for the order of
$[b,a]$ in Schur covers of $R(2,q;c)$ for various $c$.
Now let $G$ be a finite 3-group with exponent $q>3$ and class at most 8, and
let $H$ be the covering group for $G$. We show that $H^{\prime}$ has exponent
dividing $q$. We have shown that commutators in $H$ have order dividing $q$,
but we need to show that products of commutators have order dividing $q$. Our
calculations with the covering group of $R(2,q;8)$ show that
$[y,x,x,x]^{\frac{q}{3}}=1$ for all $x,y\in H$. It is known that 3-Engel
groups are locally nilpotent, and Werner Nickel’s nilpotent quotient algorithm
in Magma [2] has a facility for computing Engel groups. The free 3-Engel group
of rank 5 has class 9, and it is an easy calculation with the nilpotent
quotient algorithm to show that 3-Engel groups satisfy the identity
$[x_{1},x_{2},x_{3},x_{4},x_{5}]^{20}=1$. This implies that in a free group
$[x_{1},x_{2},x_{3},x_{4},x_{5}]^{20}$ can be expressed as a product of terms
$[y,x,x,x]$ and their inverses. Now $\gamma_{5}(H)$ is abelian and is
generated by elements with order dividing $q$. So $\gamma_{5}(H)$ has exponent
dividing $q$, which is coprime to 20. So if $h\in\gamma_{5}(H)$ then $h$ can
be expressed as a product of terms $[y,x,x,x]$ and their inverses (with
$x,y\in H$). This implies that $h^{\frac{q}{3}}$ is a product of terms
$[y,x,x,x]^{\pm\frac{q}{3}}$ (which are all trivial) and terms
$[[y,x,x,x]^{\pm 1},[z,t,t,t]^{\pm 1}]^{\binom{\frac{q}{3}}{2}}$
which are also trivial. So $\gamma_{5}(H)$ has exponent dividing
$\frac{q}{3}$. This, combined with the fact that $H^{\prime}$ is generated by
elements with order dividing $q$, implies that $H^{\prime}$ has exponent
dividing $q$.
## 7 Proof of Theorem 9 and Theorem 10
To prove Theorem 9 we need to show that if $q$ is a power of $5$ then the
order of $[b,a]$ in a Schur cover of $R(2,q;c)$ is $q$ for $c<9$, and $5q$ for
$c=9$. It is an easy calculation with the $p$-quotient algorithm to show that
this is the case for $q=5,25$. We use the same argument as in Section 5 and
Section 6 to show that this implies that the theorem holds true for all powers
of 5.
Let $q=5^{k}$ where $k\geq 2$, let $R(2,q;9)=F/R$ where $F$ is the free group
of rank two generated by $a,b$, and let $H=F/[F,R]$. Let
$c_{1},c_{2},\ldots,c_{226}$ be our list of basic commutators of weight at
most $10$. As in the last two sections we let $x,y\in H$ and obtain a relation
$[y,x]^{q}[y,x,y]^{\binom{q}{2}}[y,x,y,y]^{\binom{q}{3}}(c_{5}\alpha)^{f_{5}(q)}\ldots(c_{226}\alpha)^{f_{226}(q)}=1$
(9)
where $\alpha$ is the homomorphism from $F$ to $H$ mapping $a,b$ to $y,[y,x]$.
If wt$\,c_{i}=w$ then $c_{i}\alpha$ is a commutator in $x$ and $y$, with $w$
entries $y$, and $\deg f_{i}\leq w$. The binomial coefficients $\binom{q}{d}$
for $d<5$ are divisible by $q$, and the binomial coefficients $\binom{q}{d}$
for $d<25$ are divisible by $\frac{q}{5}$.
If we let $y\in\gamma_{2}(H)$ then all commutators in $H$ with 5 or more
entries $y$ are trivial and we see that $[y,x]^{q}$ is a product of $q^{th}$
powers of commutators of higher weight in $x,y$. So $[y,x]^{q}=1$, and
$\gamma_{3}(H)$ has exponent $q$.
Now replace $q$ by $5q$ in (9) and we obtain $[y,x]^{5q}=1$, which implies
that $\gamma_{2}(H)$ has exponent $5q$.
So we let
$N=\gamma_{2}(F)^{5q}\gamma_{3}(F)^{q}\gamma_{11}(F)$
and we let
$K=\gamma_{2}(F)^{q}\gamma_{3}(F)^{\frac{q}{5}}\gamma_{11}(F).$
Just as in Section 5 and Section 6 we can show that $K/N$ is a direct product
of 224 copies of the cyclic group of order 5. Every element $k\in K$ can be
uniquely expressed modulo $N$ in the form
$k=[b,a]^{qm_{3}}c_{4}^{\frac{q}{5}m_{4}}\ldots c_{226}^{\frac{q}{5}m_{226}}$
where $0\leq m_{i}<5$ for $i=3,4,\ldots,226$. We let
$[m_{3},m_{4},\ldots,m_{226}]$ be the representative vector for $kN$, and we
think of it as an element in $C_{5}^{224}$.
Just as in Section 5 we can show that if $x,y\in F$ then $[x^{q},y]\in K$, and
the representative vector of $[x^{q},y]N$ depends only on $x,y$, and not on
$q$. The rest of the proof that the order of $[b,a]$ in a Schur cover of
$R(2,q;c)$ is $q$ for $c<9$, and $5q$ for $c=9$ goes through just as in
Section 5.
If $G$ is any group of exponent $q=5^{k}$ ($k\geq 1$) with class less than 9,
and if $H$ is the cover of $G$ then commutators in $H$ have order dividing
$q$, and so (since $H^{\prime}$ has class at most 4) $H^{\prime}$ has exponent
dividing $q$, which implies that $M(G)$ has exponent dividing $q$.
The proof of Theorem 10 is almost identical to the proof of Theorem 9.
## References
* [1] A.J. Bayes, J. Kautsky, and J.W. Wamsley, Computation in nilpotent groups (application), Proceedings of the second international conference on the theory of groups (Australian National University, Canberra, 1973), Springer, Berlin, 1974, pp. 82–89.
* [2] W. Bosma, J. Cannon, and C. Playoust, _The Magma algebra system I: The user language_ , J. Symbolic Comput. 24 (1997), 235–265.
* [3] M. Hall, The theory of groups, Macmillan, New York, 1959.
* [4] G. Havas and M.F. Newman, Applications of computers to questions like those of Burnside, Lecture Notes in Mathematics, 806, Springer-Verlag, Berlin, 1980, pp. 211–230.
* [5] Primoz Moravec, Schur multipliers and power endomorphisms of groups, J. Algebra 308 (2007), 12–25.
* [6] Michael Vaughan-Lee, Schur’s exponent conjecture — counterexamples of exponent $5$ and exponent $9$, Int. J. Group Theory 10 (2021), 167–173.
11institutetext: Instituto de Ciencias Astronómicas, de la Tierra y del
Espacio (ICATE-CONICET), C.C 467, 5400, San Juan, Argentina. 22institutetext:
Universidad Nacional de San Juan (UNSJ), Facultad de Ciencias Exactas, Físicas
y Naturales (FCEFN), San Juan, Argentina. 33institutetext: Instituto de
Investigación Multidisciplinar en Ciencia y Tecnología, Universidad de La
Serena, Raúl Bitrán 1305, La Serena, Chile 44institutetext: Departamento de
Física y Astronomía, Universidad de La Serena, Av. Cisternas 1200 N, La
Serena, Chile. 55institutetext: Gemini Observatory / NSF’s NOIRLab, Casilla
603, La Serena, Chile 66institutetext: Consejo Nacional de Investigaciones
Científicas y Técnicas (CONICET), Argentina 77institutetext: Universidad
Nacional de San Juan (UNSJ), Facultad de Filosofía, Humanidades y Artes
(FFHA), San Juan, Argentina
# Testing the accretion scenario of $\lambda$ Boo stars
J. Alacoria 1166 C. Saffe 112266 M. Jaque Arancibia 3344 R. Angeloni 55 P.
Miquelarena 112266 M. Flores 112266 M. E. Veramendi 116677 A. Collado 112266
(Received xxx, xxx ; accepted xxxx, xxxx)
###### Abstract
Context. The group of $\lambda$ Boo stars has been known for years; however, the
origin of its chemical peculiarity is still strongly debated.
Aims. Our aim is to test the accretion scenario of $\lambda$ Boo stars. This
model predicts that the two early-type stars of a binary system passing through
a diffuse cloud should both display the same superficial peculiarity.
Methods. Via spectral synthesis, we carried out a detailed abundance
determination of three multiple systems hosting a candidate $\lambda$ Boo
star: the remarkable triple system HD 15164/65/65C and the two binary systems
HD 193256/281 and HD 198160/161. Stellar parameters were initially estimated
using Strömgren photometry or literature values and then refined by requiring
excitation and ionization balances of Fe lines. The abundances were determined
iteratively for 24 different species by fitting synthetic spectra using the
SYNTHE program together with local thermodynamic equilibrium (LTE) ATLAS12
model atmospheres. Specific opacities were calculated for each star, depending
on its arbitrary composition and microturbulence velocity vmicro through the
opacity sampling (OS) method. The abundances of the light elements C and O
were corrected for non-LTE effects. The complete chemical patterns of the stars
were then compared to those of $\lambda$ Boo stars.
Results. The abundance analysis of the triple system HD 15164/65/65C shows a
clear $\lambda$ Boo object (HD 15165) and two objects with near solar
composition (HD 15164 and 15165C). Notably, the presence of a $\lambda$ Boo
star (HD 15165) together with a near solar early-type object (HD 15164) is
difficult to explain under the accretion scenario. Also, the solar-like
composition derived for the late-type star of the system (HD 15165C) could be
used, for the first time, as a proxy for the initial composition of the
$\lambda$ Boo stars. This could help to constrain any model of $\lambda$ Boo
star formation, and not only the accretion scenario. The abundance analysis of
the binary system HD 193256/281 shows no clear $\lambda$ Boo components, while
the analysis of HD 198160/161 shows two mild-$\lambda$ Boo stars. Then, by
carefully reviewing the abundance analyses of all known binary systems with
candidate $\lambda$ Boo stars from the literature, and including the systems
analyzed here, we find no binary/multiple system having two clear, bona fide
$\lambda$ Boo stars, as would be expected from the accretion scenario. The closest
candidates to show two $\lambda$ Boo-like stars are the binary systems HD
84948, HD 171948 and HD 198160; however, in our opinion they show mild rather
than clear $\lambda$ Boo patterns.
Conclusions. We performed for the first time a complete analysis of a triple
system that includes a $\lambda$ Boo candidate. Our results bring little support
to the accretion scenario of $\lambda$ Boo stars. There is therefore an
urgent need for additional binary and multiple systems to be analyzed through
detailed abundance analyses in order to test the accretion model of $\lambda$
Boo stars.
###### Key Words.:
Stars: abundances – Stars: binaries – Stars: chemically peculiar – Stars:
individual: HD 15164, HD 15165, HD 15165C, HD 193256, HD 193281, HD 198160, HD
198161
## 1 Introduction
The main feature of $\lambda$ Boo stars is a notable underabundance of most
Fe-peak elements and near solar abundances of lighter elements (C, N, O and
S). They comprise main-sequence late-B to early-F stars, where a maximum of
about 2% of all objects are believed to be $\lambda$ Boo stars (Gray &
Corbally, 1998; Paunzen et al., 2001b). Classification-resolution spectroscopy
shows promising $\lambda$ Boo candidates (e.g., Murphy et al., 2015; Gray et
al., 2017), and a more detailed abundance determination, especially including
the lighter elements, is considered the ultimate test to confirm that a
candidate is indeed a bona fide member of the class (e.g., Andrievsky et al.,
2002; Heiter et al., 2002).
The origin of the $\lambda$ Boo peculiarity still remains a challenge (see,
e.g., the recent discussion of Murphy & Paunzen, 2017, and references
therein). Their rotational velocities do not necessarily point toward lower
values, thus marking a difference with the chemically peculiar Am and Ap stars
(Abt & Morrell, 1995; Murphy et al., 2015). A possible explanation consists in
the interaction of the star with a diffuse interstellar cloud (Kamp & Paunzen,
2002; Martinez-Galarza et al., 2009). In this work, we refer to this model as
the "accretion scenario", in which the underabundances are produced by
different amounts of accreted volatile material, while the more refractory
species are possibly separated and repelled from the star. More recently, Jura
(2015) proposed that this peculiar pattern possibly originates from the winds
of hot-Jupiter planets (planets with short orbital periods, $<$10 d, and large
masses, $>$0.1 MJup). In this case, the planet acts as the source of gas poor
in refractory species. However, Saffe et al. (2021) have recently shown that
eight early-type stars hosting hot-Jupiter planets do not display the
$\lambda$ Boo peculiarity. This would leave the interaction of the star with a
diffuse cloud as the more plausible scenario to explain the $\lambda$ Boo
phenomenon in main-sequence stars.
Under the accretion scenario, two early-type stars passing through a diffuse
cloud should display, in principle, the same superficial peculiarity (e.g.,
Paunzen et al., 2012a, b). At the same time, hotter stars (Teff $>$
$\sim$12000 K) with strong winds, and cooler stars (Teff $<$ $\sim$6500 K)
with larger convective zones, should not notably change their composition.
These predictions make the analysis of binary and multiple systems an
important tool to test the accretion scenario. However, the number of known
candidate $\lambda$ Boo stars in binary/multiple systems is limited to about a
dozen objects (e.g., Paunzen et al., 2012a, b), most of which are
spectroscopic binary (SB) systems. To our knowledge, only five of these
systems present a detailed chemical analysis of the two components (see the
Appendix for a more detailed review). Notably, some stars of these binary
systems were recently identified as non-members or uncertain members of the
$\lambda$ Boo class (see Gray et al., 2017). Based on literature data, we
selected for this study three binary/multiple systems that possibly confront
the accretion scenario. In addition, they are spatially resolved (in contrast
to most candidate $\lambda$ Boo stars that belong to SB systems, Paunzen et
al., 2012a, b), allowing a individual analysis without a strong contribution
from the companion. We also review all known binary or multiple systems with
candidate $\lambda$ Boo stars, with data taken from the literature (see
Appendix).
In this work, we present an analysis of the remarkable triple system HD 15165.
It is composed of HD 15165, HD 15164 and HD 15165C (stars A, B and C) with
spectral types "F2 V kA2mA2 $\lambda$ Boo?", "F1 V kA7mA6 ($\lambda$ Boo)?"
and "K2 V" (Murphy et al., 2015). Some previous works suggest that the A star
belongs to the $\lambda$ Boo class (Andrievsky et al., 1995; Chernyshova et
al., 1998), while the B star seems to display, notably, a solar composition
(Andrievsky et al., 1995). If these abundances are confirmed, this could
seriously challenge the accretion scenario. In addition, there is currently no
analysis of the third star, the late-type component of the system. Therefore, we
take the opportunity to perform a detailed abundance analysis including, for
the first time, the three stars of the system, using spectra with higher
resolving power than in previous works.
We also present an analysis of the binary systems HD 193256/281 and HD
198160/161. Both systems show solar values for C and subsolar Fe, similar to
other candidate $\lambda$ Boo stars (Stürenburg, 1993). However, more recent
classification spectra suggest that only one star in each system belongs to the
$\lambda$ Boo class (see Tables 1 and 4 of Murphy et al., 2015; Gray et al.,
2017), which would be difficult to explain under the accretion scenario. This
makes both systems very interesting targets for a detailed study, and they are
included in our analysis.
This work is organized as follows. In Sect. 2 we describe the observations and
data reduction, while in Sect. 3 we present the stellar parameters and
chemical abundance analysis. In Sect. 4 we show the results and discussion,
and finally in Sect. 5 we highlight our main conclusions.
## 2 Observations
We present in Table 1 the visual magnitude V (from Hipparcos), coordinates,
proper motions and parallax (from Gaia DR2, Gaia Collaboration, 2018) for the
stars studied in this work. Spectral data of the triple system HD 15165 were
obtained with the Max Planck Gesellschaft (MPG) 2.2 meter telescope at the
European Southern Observatory (ESO) in La Silla, Chile, on October 10, 2021
(Program ID: 0108.A-9012, PI: Marcelo Jaque Arancibia). We used the Fiber-fed
Extended Range Optical Spectrograph (FEROS), which provides high-resolution
(R$\sim$48,000) spectra when illuminated via the 2.0 arcsec aperture on the
sky in the unbinned mode. Three individual spectra for each object were
obtained, followed by a ThAr lamp exposure in order to obtain an appropriate
wavelength solution. The data were reduced using the FEROS Data Reduction
System (DRS, https://www.eso.org/sci/facilities/lasilla/instruments/feros/tools/DRS.html).
The spectral coverage was approximately 3700-9000 Å, and the S/N per pixel
measured at $\sim$5000 Å was $\sim$300.
Table 1: Magnitudes and astrometric data for the stars studied in this work.
Star | V | $\alpha$ | $\delta$ | $\mu_{\alpha}$ | $\mu_{\delta}$ | $\pi$ | Spectra
---|---|---|---|---|---|---|---
| | J2000 | J2000 | [mas/yr] | [mas/yr] | [mas] |
HD 15164 | 8.27 | 02 26 48.29 | +10 34 57.59 | 36.552 | -13.717 | 7.4185 | MPG+FEROS
HD 15165 | 6.69 | 02 26 45.65 | +10 33 55.07 | 36.680 | -13.086 | 7.4414 | MPG+FEROS
HD 15165C | 11.78 | 02 26 47.40 | +10 32 58.89 | 36.805 | -13.131 | 7.5499 | MPG+FEROS
HD 193256 | 7.53 | 20 20 26.57 | -29 11 28.76 | -1.991 | -1.221 | 5.8675 | CASLEO+REOSC
HD 193281 | 6.64 | 20 20 27.88 | -29 11 49.97 | -0.653 | 0.244 | 6.2644 | CASLEO+REOSC
HD 198160 | 6.21 | 20 51 38.51 | -62 25 45.59 | 82.697 | -46.562 | 13.5137 | MPG+FEROS
HD 198161 | 6.56 | 20 51 38.85 | -62 25 45.26 | 82.077 | -42.340 | 13.5315 | MPG+FEROS
The spectra of the binary system HD 193256/281 were obtained at the Complejo
Astronómico El Leoncito (CASLEO) between May 9 and 11, 2009 (PI: Maria Eugenia
Veramendi). We used the _Jorge Sahade_ 2.15-m telescope equipped with a REOSC
echelle spectrograph (on loan from the Institute d’Astrophysique de Liege,
Belgium) and a TEK 1024$\times$1024 CCD detector. The REOSC spectrograph uses
gratings as cross dispersers. We used a grating with 400 lines mm$^{-1}$, which
provides a resolving power of $\sim$12500 covering the spectral range
$\lambda\lambda$3800–6500. Three individual spectra for each object were
obtained and then combined, reaching a final S/N per pixel of $\sim$300
measured at $\sim$5000 Å. The data were reduced with Image Reduction and
Analysis Facility (IRAF) following the standard recipe for echelle spectra
(i.e. bias and flat corrections, order-by-order normalization, scattered light
correction, etc.).
Finally, the FEROS spectra of the binary system HD 198160/161 were obtained
from the ESO Science Archive Facility (http://archive.eso.org/cms.html). The
stars were observed between April 4 and 7, 2017 (Program ID: 099-A-9029). The
spectra were reduced using the FEROS DRS, obtaining a spectral coverage and
S/N similar to those obtained with HD 15165.
## 3 Stellar parameters and abundance analysis
The stellar parameters Teff and $\log g$ were estimated iteratively, similar
to previous works (Saffe et al., 2021). They were first estimated by using the
Strömgren uvby$\beta$ mean photometry of Hauck & Mermilliod (1998) or
considering previously published results. We used the calibration of
Napiwotzki et al. (1993) and dereddened colors according to Domingo & Figueras
(1999) within the program TempLogG (Kaiser, 2006), in order to derive the
fundamental parameters. These initial values were refined (when necessary
and/or possible) by imposing excitation and ionization balances of the iron
lines. A similar strategy was previously applied in the literature (e.g.,
Saffe & Levato, 2014; Saffe et al., 2021). The values derived in this way are
listed in Table 2, with an average dispersion of $\sim$115 K and
$\sim$0.13 dex for Teff and $\log g$, respectively.
Table 2: Fundamental parameters derived for the stars in this work.
Star | Teff | $\log g$ | vmicro | $v\sin i$
---|---|---|---|---
| [K] | [dex] | [km s-1] | [km s-1]
HD 15164 | 7150 $\pm$ 70 | 3.74 $\pm$ 0.08 | 2.54 $\pm$ 0.63 | 17.9 $\pm$ 0.7
HD 15165 | 6950 $\pm$ 139 | 3.80 $\pm$ 0.19 | 2.21 $\pm$ 0.55 | 125.7 $\pm$ 5.4
HD 15165C | 4960 $\pm$ 51 | 4.40 $\pm$ 0.03 | 0.46 $\pm$ 0.07 | 2.4 $\pm$ 0.3
HD 193256 | 7780 $\pm$ 146 | 3.97 $\pm$ 0.19 | 3.23 $\pm$ 0.81 | 257.0 $\pm$ 8.2
HD 193281 | 8700 $\pm$ 140 | 3.60 $\pm$ 0.15 | 2.99 $\pm$ 0.75 | 91.5 $\pm$ 3.9
HD 198160 | 8010 $\pm$ 130 | 4.09 $\pm$ 0.15 | 3.31 $\pm$ 0.83 | 190.0 $\pm$ 6.8
HD 198161 | 8010 $\pm$ 130 | 4.09 $\pm$ 0.15 | 3.31 $\pm$ 0.83 | 185.0 $\pm$ 7.2
Projected rotational velocities $v\sin i$ were estimated by fitting most Fe I
and Fe II lines in the spectra. Synthetic spectra were calculated using the
program SYNTHE (Kurucz & Avrett, 1981) together with ATLAS12 (Kurucz, 1993)
model atmospheres. Microturbulence velocity vmicro was estimated as a function
of Teff following the formula of Gebran et al. (2014), valid for $\sim$6000 K
$<$ Teff $<$ $\sim$10000 K, except for the late-type star HD 15165C, for which
we used the formula of Ramirez et al. (2013) for FGK stars. We adopt an
uncertainty of $\sim$25$\%$ for vmicro, as suggested by Gebran et al. (2014).
Chemical abundances were determined iteratively by fitting synthetic spectra
using the program SYNTHE (Kurucz, 1993). In the first step, we used an ATLAS12
model atmosphere calculated with solar abundances. With the new abundance
values, we derived a new model atmosphere and started the process again. In
each step, opacities were calculated for an arbitrary composition and vmicro
using the opacity sampling (OS) method, similar to previous works (Saffe et
al., 2020, 2021). Possible differences originating from the use of opacities
with a solar-scaled composition instead of an arbitrary composition were
recently estimated for solar-type stars (Saffe et al., 2018, 2019). If
necessary, Teff and $\log g$ were refined to achieve the balance of Fe I and
Fe II lines. In this way, abundances and parameters are derived consistently
until the input and output abundance values agree (for more details, see
Saffe et al., 2021).
Chemical abundances were derived for 24 different species. The atomic line
list and laboratory data used in this work are the same described in Saffe et
al. (2021). In Figs. 1 and 2 we present an example of observed and synthetic
spectra (black and blue dotted lines, almost superimposed) together with the
difference spectra (magenta) for the stars in our sample. For clarity, Fig. 1
corresponds to stars with the higher $v\sin i$ values ($>$ 91 km s-1), while
Fig. 2 corresponds to stars with the lower $v\sin i$ values ($<$ 17.9 km s-1).
The stars are sorted in these plots by increasing $v\sin i$. There is a good
agreement between modeling and observations for the lines of different
chemical species. To determine the uncertainty in the abundance values, we
considered different sources. The total error etot was derived as the
quadratic sum of the line-to-line dispersion e1 (estimated as
$\sigma/\sqrt{n}$, where $\sigma$ is the standard deviation) and the errors
in the abundances (e2, e3 and e4) obtained when varying Teff, $\log g$ and
vmicro by their corresponding uncertainties (we adopt a minimum of 0.01 dex
for the errors e2, e3 and e4). For chemical species with only one line, we
adopt as $\sigma$ the standard deviation of the iron lines. The abundances,
the total error etot, and the individual errors e1 to e4 are presented in
Tables B.1 to B.7 of the Appendix.
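For illustration, the error combination described above can be written as a few lines of code; the function and the numerical values below are purely illustrative and are not taken from the paper or its tables.

```python
from math import sqrt
from statistics import stdev

def total_error(line_abundances, dT, dlogg, dvmicro, sigma_fe=None, floor=0.01):
    # e1: line-to-line dispersion sigma/sqrt(n); for a species with a single
    # line, sigma is taken from the Fe lines (pass it as sigma_fe).
    # e2-e4: abundance changes when Teff, log g and vmicro are varied by their
    # uncertainties, with a 0.01 dex floor.  etot is the quadratic sum.
    n = len(line_abundances)
    sigma = stdev(line_abundances) if n > 1 else sigma_fe
    e1 = sigma / sqrt(n)
    e2, e3, e4 = (max(abs(x), floor) for x in (dT, dlogg, dvmicro))
    return sqrt(e1 ** 2 + e2 ** 2 + e3 ** 2 + e4 ** 2)

# Illustrative numbers only (dex):
print(round(total_error([-0.42, -0.35, -0.40, -0.38], 0.05, 0.02, 0.03), 3))
```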
Figure 1: Observed, synthetic, and difference spectra (black, blue dotted, and magenta lines) for the stars in our sample, sorted by $v\sin i$.
Figure 2: Observed, synthetic, and difference spectra (black, blue dotted, and magenta lines) for the stars in our sample, sorted by $v\sin i$.
### 3.1 NLTE effects
Light element Non-Local Thermodynamic Equilibrium (NLTE) abundances are
particularly important for the case of $\lambda$ Boo stars. For instance,
Paunzen et al. (1999) derived for a sample of $\lambda$ Boo stars an average O
I correction of -0.5 dex, while for C I they estimated an average correction
of -0.1 dex. Rentzsch-Holm (1996) calculated carbon NLTE abundance corrections
by using a multilevel model atom for stars with Teff between 7000 K and 12000
K, log g between 3.5 and 4.5 dex, and metallicities from -0.5 dex to +1.0 dex.
She showed that C I NLTE effects are negative (calculated as NLTE-LTE) and
depend basically on the equivalent width Weq. Near $\sim$7000 K, the three lower
levels of C I are always in LTE; however, increasing Teff increases the
underpopulation of these levels with respect to LTE through UV photoionization.
Then, we estimated NLTE abundance corrections of C I for the early-type stars
in our sample by interpolating in her Figs. 7 and 8 as a function of Teff, Weq
and metallicity.
Sitnova et al. (2013) computed NLTE abundance corrections for O I for stars
with spectral types from A to K (Teff between 10000 and 5000 K). They showed
that NLTE effects lead to a strengthening of the O I lines, producing a negative
NLTE correction. We estimated NLTE abundance corrections of O I (IR triplet
7771 Å and 6158 Å) for the stars in this work by interpolating in Table 11 of
Sitnova et al. (2013) as a function of Teff. Other O I lines present
corrections smaller than $\sim$-0.02 dex (see, e.g., Table 5 of Sitnova et al.,
2013).
### 3.2 Comparison with literature
We present in Fig. 3 a comparison of [Fe/H] values derived in this work, with
those taken from literature for the stars HD 15164 (Andrievsky et al., 1995),
HD 15164 (Paunzen et al., 2002), HD 193256, HD 193281, HD 198160 and HD 198161
(Stürenburg, 1993). In general, there is reasonable agreement with the
literature, with the star HD 193281 presenting the largest difference (marked in
the plot).
Stürenburg (1993) estimated for HD 193281 an iron abundance of
[Fe/H]=-1.0$\pm$0.2. However, we estimated for this star a somewhat higher
value of [FeI/H]=-0.36$\pm$0.13 ([FeII/H]=-0.48$\pm$0.13). We explored the
possible sources for this difference. They estimated a Teff of 8080 K (without
quoting uncertainties) by using Strömgren photometry, while we estimated
for this object a Teff of 8700$\pm$140 K, a difference of 620 K. This
could be one of the reasons for the different [Fe/H] that we obtained.
Different works estimated for this star temperatures of 8700 K (Gray et al.,
2017), 8623 K (Koleva & Vazdekis, 2012), and recently 8695 K (Arentsen et al.,
2019). Thus, our estimated Teff is in better agreement with these works. We also
note that this star presents different metallicities in literature:
-1.0$\pm$0.2 dex (Stürenburg, 1993), -0.68 dex (Koleva & Vazdekis, 2012) and
more recently -0.37 dex (Arentsen et al., 2019). Our estimated metallicity of
[FeI/H]=-0.36$\pm$0.13 is closer to the work of Arentsen et al. (2019).
In addition, there is evidence that HD 193281 could be contaminated by a
nearby star. Simbad database reports that the star ADS 13702 B (= TYC
6918-1823-2) is located at $\sim$3.5 arcsec from HD 193281, having spectral
type ”F5:V”. Ivanov et al. (2019) present a library of stellar spectra taken
with the integral field spectrograph
MUSE (https://www.eso.org/sci/facilities/develop/instruments/muse.html) in low
spectral resolution (R$\sim$2000) although with high spatial resolution
(0.3-0.4 arcsec). They report that HD 193281 is a binary with $\sim$3.8 arcsec
separation and the components cross-contaminate each other. They identified
the components as HD 193281 A and B, and estimated spectral types A2 III and
K2 III, respectively (updating the spectral type F5:V reported by Simbad for
the star HD 193281 B). This possible contamination could explain, at least in
part, the different parameters and metallicities obtained from different works
for this object. In this study, we estimated parameters and abundances of HD
193281 treating it as a single star, so the resulting values should be
considered with caution.
Figure 3: Comparison of [Fe/H] values derived in this work with those from
literature. Average dispersion bars are shown in the upper left corner.
## 4 Discussion
In order to test the accretion scenario of $\lambda$ Boo stars, we compare the
chemical abundances of the stars in our sample with those of $\lambda$ Boo
stars. The three multiple systems with candidate $\lambda$ Boo stars are
discussed separately, while other binary or multiple systems with candidate
$\lambda$ Boo stars are discussed in the Appendix.
### 4.1 The average pattern of $\lambda$ Boo stars
Deriving an average $\lambda$ Boo pattern is not an easy task. Few literature
works obtain homogeneous abundances of many species for $\lambda$ Boo stars
(e.g., Stürenburg, 1993; Andrievsky et al., 2002; Heiter et al., 2002).
Stürenburg (1993) derived abundances for 16 A-type stars classified, in
principle, as $\lambda$ Boo stars. They performed NLTE corrections for some
elements including C. However, they included stars that were subsequently
considered non-members or uncertain members, such as HD 38545 and HD 193281
(Murphy et al., 2015). Paunzen et al. (1999) and Kamp et al. (2001) derived
light-element NLTE abundances for a sample of $\lambda$ Boo stars. Then,
Andrievsky et al. (2002) derived elemental abundances for 20 candidate
$\lambda$ Boo stars basically selected from classification-resolution
spectroscopy. They primarily used an LTE approach and included NLTE
effects for Na. They were able to confirm the membership of only nine objects
in the $\lambda$ Boo class, while the other stars were ruled out or present an
unclear membership. Paunzen et al. (2002) collected abundance values for 26
candidate $\lambda$ Boo stars (see their Table 5), although using different
literature sources. Also, Heiter et al. (2002) reported LTE abundance values
for 12 candidate $\lambda$ Boo stars, four of them belonging to SB systems.
Thus, a homogeneous abundance determination would be highly desirable,
including more candidate $\lambda$ Boo stars, newer laboratory data for the
lines, and NLTE effects, especially for the light elements.
To build the comparison pattern, we used the data derived by Heiter et al. (2002), who
homogeneously determined abundances for a number of $\lambda$ Boo stars. We
excluded from the average those stars without CNO values and the stars
analyzed here.
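A minimal sketch of how such an average pattern and its star-to-star scatter can be assembled is shown below (Python; the abundance values are hypothetical placeholders, and stars without CNO values or analyzed here would simply be dropped from the sample array before averaging).

```python
import numpy as np

# Hypothetical [X/H] values (dex) for a few lambda Boo stars from the
# literature; NaN marks species without a measurement.
elements = ["C", "O", "Mg", "Ca", "Fe"]
sample = np.array([
    [-0.1,  0.0,    -1.0, -1.2, -1.3],   # star 1
    [-0.3, -0.2,    -0.8, -1.0, -1.1],   # star 2
    [-0.2, np.nan,  -1.1, -1.4, -1.5],   # star 3
])

mean_pattern = np.nanmean(sample, axis=0)          # average pattern
scatter = np.nanstd(sample, axis=0, ddof=1)        # star-to-star standard deviation
for el, m, s in zip(elements, mean_pattern, scatter):
    print(f"[{el}/H] = {m:+.2f} +/- {s:.2f}")
```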
### 4.2 The triple system HD 15164/65/65C
This remarkable triple system is composed of two early-type stars (HD 15165
and HD 15164, the stars A and B) and a late-type companion (HD 15165C). A
number of studies suggest that the spectrum of HD 15165 resembles that of a
metal-deficient star, while the companion HD 15164 has a near solar abundance
(Mechler, 1974, 1976; Abt, 1980). Then, as explained in the Introduction, some
works suggest that the A star belongs to the $\lambda$ Boo class (Andrievsky et
al., 1995; Chernyshova et al., 1998), while the B star seems to display a
solar composition (Andrievsky et al., 1995). To our knowledge, there is no
abundance determination for the C component.
We present in Fig. 4 the chemical pattern of the stars HD 15164, HD 15165 and
HD 15165C (black), compared to an average pattern of $\lambda$ Boo stars
(blue). For each star we present two panels, corresponding to elements with
atomic number z$<$32 and z$>$32. The error bars of the $\lambda$ Boo pattern
show the standard deviation derived from different stars, while the error bars
for our stars correspond to the total error etot. As we can see in Fig. 4,
the chemical pattern of the primary (HD 15165) is similar to the pattern of
$\lambda$ Boo stars, showing subsolar abundances of most metals (Mg, Al, Ca,
Sc, Ti, Cr, Fe) together with near solar values of C and O. The abundances of
Sr and Ba present a less marked deficiency, although still showing subsolar
values. On the other hand, the chemical pattern of the secondary star (HD
15164) shows a slight deficiency in some metals (for instance
[Fe/H]=-0.36$\pm$0.15 dex), although closer in general to the solar pattern
than to the $\lambda$ Boo stars. In this sense, a primary showing a $\lambda$
Boo pattern and a secondary showing near solar abundances verifies the early
result of Andrievsky et al. (1995): the early-type stars A and B present
different chemical compositions.
Figure 4: Chemical pattern of the stars HD 15164, HD 15165 and HD 15165C
(black), compared to an average pattern of $\lambda$ Boo stars (blue).
To our knowledge, there is no abundance determination of $\lambda$ Boo stars
that belong to a triple or multiple system. In particular, a late-type star
that belongs to such a system could be used as a proxy for the initial
composition of the material from which the $\lambda$ Boo star formed (under the
hypothesis that the stars were born from the same molecular cloud). This could
provide an additional constraint for any model trying to explain the
$\lambda$ Boo phenomenon. We present in Fig. 4 the chemical pattern of HD
15165C, the late-type component of the triple system. The chemical pattern is
compatible with a solar-like composition (for instance, [FeI/H]=0.04$\pm$0.02
dex). This is in agreement with the idea that $\lambda$ Boo stars are
Population I objects and originate (following any internal or external
mechanism) from a solar-like composition.
Notably, the three stars that belong to the triple system present different
chemical patterns. The star A presents a $\lambda$ Boo pattern, while the stars
B and C present abundances closer to the Sun. However, the stars B and C are
also slightly different from each other: the late-type star C presents the
closest abundances to the Sun, while the early-type star B shows a slight
deficiency. Most abundance values between stars B and C agree within
$\sim$0.30 dex, with a possible exception: the lithium content. The Li I
6707.8 Å line is clearly present in the spectrum of star B (HD 15164), as we
can see in Fig. 5, while it is not detected in the spectra of stars A or
C. It is interesting to note that this line is commonly used as a proxy of
recent accretion onto the atmosphere of the stars. For instance, Saffe et al.
(2017) attributed a notable difference in the refractory abundances and in the
Li content between the stars of the binary system HAT-P-4 to a possible
accretion event of a rocky planet onto the primary. However, although HD 15164
clearly shows the Li line, its refractory content is slightly lower than that
of HD 15165C, which would be difficult to explain with the accretion of
refractory species.
Figure 5: Observed spectra (black line) and synthetic spectra (blue dotted
line) near the Li line 6707.8 Å in the star HD 15164. Synthetic lines are
indicated showing the wavelength, atomic number and intensity.
Is it possible that the supposed different abundances between stars A, B and C
are only due to different Teff? The question makes sense because the stars A
and C present Teff of 7150 K and 4960 K, a difference of 2190 K. However, the
total error etot in abundances includes the error e2, which measures the change
in the abundances when varying Teff by its corresponding uncertainty. Then,
we do not expect a strong change in the derived abundances due to Teff (in any
case, the possible change is contained within the total error etot).
### 4.3 The binary system HD 193256/281
HD 193256 was classified as $\lambda$ Boo by Gray (1988) and then as uncertain
$\lambda$ Boo by Renson (1990). It is separated by $\sim$27.5 arcsec from HD
193281, which was classified as $\lambda$ Boo by Gray & Garrison (1987). Both
stars HD 193256 and HD 193281 show approximately solar abundances of C and
subsolar Fe in the study of Stürenburg (1993), who analyzed them separately.
However, they also found near solar values for other elements such as Mg and
Si in both stars, which differs from what is found in average $\lambda$ Boo
stars.
although for C they found -0.61 dex, similarly to Paunzen et al. (1999).
However, more recent classification spectra suggest that only HD 193256 could
belong to the $\lambda$ Boo class (see Tables 1 and 4 of Murphy et al., 2015;
Gray et al., 2017), while HD 193281 displays a normal spectrum.
In this work, we analyzed the spectra of HD 193256 and HD 193281 considering
both as single stars, so the abundances of HD 193281 should be taken with
caution. We present in Fig. 6 the chemical pattern of the stars HD 193256 and
HD 193281 (black), compared to an average pattern of $\lambda$ Boo stars
(blue). The colors, panels and error bars used are similar to those of Fig. 4.
HD 193256 shows solar or suprasolar values for C and O, together with subsolar
values (between 0.5 and 0.9 dex) of Ca, Cr, Fe and Sr. However, we also found near
solar values of Mg, Si and Ti, which is not common in $\lambda$ Boo stars.
Then, this object seems to present a mix of metals with solar and subsolar
abundances. On the other hand, HD 193281 presents the chemical pattern of a
slightly metal-deficient star in general, showing subsolar values for C and O
(by $\sim$0.3 dex), similar to Fe I (-0.36$\pm$0.13 dex). However, the results of
HD 193281 should be taken with caution, due to a possible contamination of the
nearby K2 III star.
Figure 6: Chemical pattern of the stars HD 193256 and HD 193281 (black),
compared to an average pattern of $\lambda$ Boo stars (blue).
In short, the solar abundances of some metals in HD 193256 (Mg, Si and Ti)
differ from those of $\lambda$ Boo stars. The chemical pattern of HD 193281
(considered as single) shows a slightly metal-deficient star. In addition,
there is evidence for a possible contamination of HD 193281, where the
components A and B display spectral types A2 III and K2 III. Then, current
evidence does not support the presence of two bonafide $\lambda$ Boo stars in
this binary (or triple) system. An analysis of HD 193281 separately for the
components A and B would be desirable, in order to properly determine the
individual abundances.
### 4.4 The binary system HD 198160/161
HD 198160 forms a visual binary system with HD 198161, separated by $\sim$2.4
arcsec. HD 198160 was classified ”A2 Vann wk4481” and ”A2 Vn” (Gray, 1988;
Corbally & Garrison, 1980), while HD 198161 was classified as ”A3 Vn”
(Corbally & Garrison, 1980). Both stars were studied separately by Stürenburg
(1993), considering them as twins (same Teff and log g). He derived near solar
values for C in both stars and subsolar values for Fe (-0.8$\pm$0.2 dex);
however, he also obtained solar values for Mg and Si (0.0$\pm$0.1 dex and
-0.2$\pm$0.2 dex for both stars). Then, Paunzen et al. (1999) estimated near
solar NLTE values for C and O, although quoted for HD 198160/1 (not
separated). More recently, Murphy et al. (2015) caution that individual NLTE
volatile abundances for HD 198160 and HD 198161 are not confirmed (such as
those reported in this work) and tentatively adopt for HD 198160 a
classification ”A2 Vann $\lambda$ Boo”. However, its companion HD 198161 was
classified as a normal star, with spectral type ”A3 V” and ”A3 IV(n)” (Murphy
et al., 2015; Gray et al., 2017).
We present in Fig. 7 the chemical pattern of the stars HD 198160 and HD 198161
(black), compared to an average pattern of $\lambda$ Boo stars (blue). The
colors, panels and error bars used are similar to those of Fig. 4. In both
stars, most Fe-peak metals show a deficiency around 0.7-0.8 dex, similar to
$\lambda$ Boo stars. However, C and O also show subsolar values, being
possibly low compared to other $\lambda$ Boo stars. When comparing C with Fe
abundances, the group of $\lambda$ Boo stars presents [C/Fe]$\sim$1.21$\pm$0.35
dex (excluding stars without CNO values and the stars analyzed here, Heiter et
al., 2002) with minimum and maximum values of 0.70 and 1.74 dex. However, the
stars HD 198160 and HD 198161 present [C/Fe] values of $\sim$0.54 and
$\sim$0.48 dex, being low compared to the average [C/Fe] and even lower than
the minimum of 0.70 dex. Then, we consider that these low [C/Fe] values
possibly correspond to mild-$\lambda$ Boo stars, rather than to an average
$\lambda$ Boo object. It is important to note that our C and O abundances were
corrected by NLTE, with average corrections of -0.15 dex and -0.81 dex for
both stars. In other words, if we only adopted LTE values without correction,
the C and O abundances would be closer to those of $\lambda$ Boo stars.
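The [C/Fe] membership check described above reduces to simple arithmetic; the sketch below (Python) applies it to the NLTE-corrected C I and Fe I values of Tables 8 and 9, using the [C/Fe] range of 0.70-1.74 dex quoted from the Heiter et al. (2002) sample.

```python
lambda_boo_cfe_mean, lambda_boo_cfe_std = 1.21, 0.35
lambda_boo_cfe_range = (0.70, 1.74)

def classify_cfe(c_h, fe_h):
    """Compare [C/Fe] = [C/H] - [Fe/H] with the lambda Boo range."""
    cfe = c_h - fe_h
    lo, hi = lambda_boo_cfe_range
    flag = "within lambda Boo range" if lo <= cfe <= hi else "outside lambda Boo range"
    return cfe, flag

# NLTE-corrected [C/H] and [FeI/H] for HD 198160 and HD 198161 (Tables 8 and 9)
for star, c_h, fe_h in [("HD 198160", -0.29, -0.83), ("HD 198161", -0.32, -0.80)]:
    cfe, flag = classify_cfe(c_h, fe_h)
    print(f"{star}: [C/Fe] = {cfe:+.2f} -> {flag}")
```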
Figure 7: Chemical pattern of the stars HD 198160 and HD 198161 (black),
compared to an average pattern of $\lambda$ Boo stars (blue).
### 4.5 On the physical association of the stars
The stars studied in this work were previously reported as (possible) members
of binary/multiple systems, for the case of HD 15164/65/65C (Andrievsky et
al., 1995; Chernyshova et al., 1998; Murphy et al., 2015), HD 193256/281
(Paunzen et al., 2012a; Murphy et al., 2015; Gray et al., 2017) and HD
198160/161 (Paunzen et al., 2012a; Murphy et al., 2015).
The coordinates, proper motions and parallax of the stars (see Table 1)
suggest that they are, at least, common proper motion objects. We searched for
our target stars in different binary catalogs from the literature (Shaya & Olling,
2011; Tokovinin & Lepine, 2012; Andrews et al., 2017). In particular, Andrews
et al. (2017) performed a search of binaries through a Bayesian formulation in
the Tycho-Gaia catalogs and derived likelihoods of Keplerian orbits. For HD
15164/65, they reported a probability greater than 99% that they form a
physical system. Shaya & Olling (2011) developed a Bayesian method to discover
non-random pairs using Hipparcos data. They include HD 198160/161 in their
catalog of binaries; however, no probability is quoted for this pair.
Finally, we find no record for HD 193256/281 in these binary catalogs.
In this work, we assume that the stars form physical binary/multiple systems.
If the stars were shown not to be gravitationally bound, they would not be
useful for testing the accretion scenario.
### 4.6 Are there two bonafide $\lambda$ Boo stars in binary systems?
There is evidence in the literature supporting the accretion scenario. For
example, C and O abundances anticorrelate with Si (Paunzen et al., 1999), as first
noted by Holweger & Sturenburg (1993) for C. It is expected that refractory
elements like Fe and Si are condensed in dust, while the more volatile CNO and
S remain in the gaseous phase. Then, the selective accretion of gas will
produce ratios [C/Si] or [O/Si] larger than solar and reduced metallicity
(Paunzen et al., 1999). Kamp et al. (2001) reached a similar conclusion
comparing the volatile species N and S with the more refractory Ca. We should
also expect that in stars with large $v\sin i$, the meridional circulation
mixes material of solar composition from the stellar interior into the
convection zone so that any surface contamination due to accretion of
circumstellar material should vanish. This observation seems to be weakly
verified (see e.g., Solano et al., 2001), and would require a larger sample of
$\lambda$ Boo stars. As we can see, the accretion scenario could be tested by
different methods.
In this work, we focus on the presence of $\lambda$ Boo stars as members of
binary systems (e.g., Stürenburg, 1993; Paunzen et al., 2002; Heiter et al.,
2002; Paunzen et al., 2012a, b). These are the following 12 systems (see
Appendix): HD 15164/65/65C, HD 38545, HD 64491, HD 84948, HD 111786, HD
141851, HD 148628/638, HD 171948, HD 174005, HD 193256/281, HD 198160/161, and
HD 210111. Following the accretion scenario, two early-type stars in a binary
system should display, in principle, a similar $\lambda$ Boo pattern after
passing through a diffuse cloud. However, a binary or multiple system having a
$\lambda$ Boo star together with a ”normal” early-type component would be
difficult to explain under the accretion scenario. This test of the accretion
scenario would require a detailed analysis of both stars. As explained in the
Introduction, some stars that belong to these 12 systems were recently
classified as non-members or uncertain members of the $\lambda$ Boo class,
such as HD 141851, HD 148638 and HD 193256 (see, e.g., Murphy et al., 2015;
Gray et al., 2017). Then, we wonder whether any of these 12 systems really
includes two stars with bonafide $\lambda$ Boo chemical patterns.
A detailed abundance analysis would be desirable in order to verify the
true $\lambda$ Boo nature of a star, initially suggested (for instance) by its
classification spectrum (see, e.g., Andrievsky et al., 2002; Heiter et al.,
2002). To our knowledge, only 5 out of the 12 systems present an abundance
determination of both components: HD 15164/65, HD 84948, HD 171948, HD
193256/281 and HD 198160/161 (three of them were analyzed in this work). Some
works present an abundance study only of the brighter component, such as in
the case of HD 38545 (Stürenburg, 1993) or HD 64491 (Kamp et al., 2001), while
other systems only have a spectral classification, such as HD 174005 (Gray et
al., 2017; Murphy et al., 2015).
An inspection of the abundance values reported in the literature (see
Appendix) shows that, in our opinion, there is no binary system having two
stars with bonafide $\lambda$ Boo chemical patterns. The same is valid for the
three systems analyzed in this work (HD 15164/65/65C, HD 193256/281 and HD
198160/161). In fact, we cannot find even one binary system in which both
stars present bonafide $\lambda$ Boo abundance patterns. We consider that the
closest candidates for having both stars show a $\lambda$ Boo pattern are possibly the
binary systems HD 84948, HD 171948 and HD 198160. These three systems show
[C/Fe] values lower than 0.7 dex (the minimum [C/Fe] of $\lambda$ Boo stars,
see Sect. 4.4 and Appendix), being perhaps mild-$\lambda$ Boo systems rather
than clear $\lambda$ Boo objects. Then, we find no clear evidence for the
presence of two $\lambda$ Boo stars as members of binary systems. However,
this fact (if confirmed) does not rule out the accretion scenario.
On the other hand, a challenge for the accretion scenario would be the
presence of a bonafide $\lambda$ Boo star and a normal early-type object
together in the same multiple system. By reviewing the 12 systems studied
(including the stars of this work), we found only one candidate: the system HD
15164/65/65C analyzed here. The star A presents a $\lambda$ Boo pattern, while
the stars B (early-type) and C (late-type) present abundances closer to the
Sun. The different chemical composition between stars A and B was initially
attributed to a possible stellar capture (Andrievsky et al., 1995). The
probability of a binary capture depends on several factors, such as the number
of stars per cubic parsec, the velocity dispersion and the mass of the stars
(e.g., Clarke & Pringle, 1991; Boffin et al., 1998). The capture is not a
dominant formation process for solar-mass (coeval) binaries in dense clusters
(e.g., Clarke & Pringle, 1991; Heller, 1995; Boffin et al., 1998). To our
knowledge, there is no known binary or triple system with an origin attributed
to a capture. On the other hand, there are multiple observations of young
binaries embedded in dense cores (e.g., Sadavoy & Stahler, 2017), and even an
image of a triple protostar formed via disk fragmentation (Tobin et al.,
2016). Although the capture cannot be totally discarded, most observational
evidence points toward the formation of binary and multiple systems from a
common molecular cloud. Taking up the idea that the three stars are born
together, it is difficult to explain the composition of the stars of HD 15165
under the accretion scenario. Then, there is an urgent need for additional
binary and multiple systems to be analyzed through a detailed abundance
analysis, in order to test the accretion model of $\lambda$ Boo stars.
## 5 Concluding remarks
In the present work, we performed a detailed abundance determination of
selected binary and multiple systems with candidate $\lambda$ Boo stars, in
order to test the accretion scenario. Reviewing abundance values reported in
the literature (see Appendix) shows that, in our opinion, there is no binary
system having two stars with bonafide $\lambda$ Boo chemical patterns. The
same is valid for the three systems analyzed in this work (HD 15164/65/65C, HD
193256/281 and HD 198160/161). We consider that the closest candidates for having
both stars show a $\lambda$ Boo pattern are possibly the binary systems HD 84948,
HD 171948 and HD 198160. However, these three binary systems are perhaps
mild-$\lambda$ Boo systems rather than clear $\lambda$ Boo objects. Then, in
our opinion, current evidence of binary/multiple systems does not give strong
support to the accretion scenario of $\lambda$ Boo stars.
On the other hand, a binary/multiple system formed by a $\lambda$ Boo star and
an early-type ”normal” object would be difficult to explain under the
accretion scenario. We found one candidate: the remarkable triple system HD
15164/65/65C. It is composed of two early-type stars (A and B) and a late-type
companion (C). In particular, the late-type component of the system could be
used as a proxy for the initial composition of the system, constraining
formation models of $\lambda$ Boo stars. We found a $\lambda$ Boo pattern for
the A star (HD 15165), while the stars B and C present abundances closer to
the Sun. Then, there is an urgent need for additional binary and multiple
systems to be analyzed through a detailed abundance analysis, in order to test
the accretion model of $\lambda$ Boo stars.
###### Acknowledgements.
We thank the referee Dr. Christopher Corbally for constructive comments that
improved the paper. The authors thank Dr. R. Kurucz for making their codes
available to us. CS acknowledges financial support from FONCyT (Argentina)
through grant PICT 2017-2294 and the National University of San Juan
(Argentina) through grant CICITCA E1134. IRAF is distributed by the National
Optical Astronomical Observatories, which is operated by the Association of
Universities for Research in Astronomy, Inc., under a cooperative agreement
with the National Science Foundation. Based on data acquired at Complejo
Astronómico El Leoncito, operated under agreement between the Consejo Nacional
de Investigaciones Científicas y Técnicas de la República Argentina and the
National Universities of La Plata, Córdoba and San Juan.
## References
* Abt (1980) Abt, H., PASP 92, 796
* Abt & Morrell (1995) Abt, H., Morrell, N., 1995, ApJS 99, 135
* Andrews et al. (2017) Andrews, J., Chanamé, J., Agüeros, M., 2017, MNRAS 472, 675
* Andrievsky et al. (1995) Andrievsky, S., Chernyshova, I., Usenko, I., et al., 1995, PASP 107, 219
* Andrievsky et al. (2002) Andrievsky, S., Chernyshova, I., Paunzen, E., et al., 2002, A&A 396, 641
* Arentsen et al. (2019) Arentsen, A., Prugniel, P., Gonneau, A., et al., 2019, A&A 627, 138
* Boffin et al. (1998) Boffin, H., Watkins, S., Bhattal, A., et al., 1998, MNRAS 300, 1189
* Chernyshova et al. (1998) Chernyshova, I., Andrievsky, S., Kovtyukh, et al., 1998, CoSka 27, 332
* Clarke & Pringle (1991) Clarke, C. Pringle, J., 1991, MNRAS 249, 584
* Corbally & Garrison (1980) Corbally, C., Garrison, R., 1980, PASP 92, 493
* Domingo & Figueras (1999) Domingo, A., Figueras, F., 1999, A&A 343, 446
* Faraggiana et al. (1997) Faraggiana, R., Gerbaldi, M., Burnage, R., 1997, A&A 318, L21
* Faraggiana et al. (2001) Faraggiana, R., Gerbaldi, M., Bonifacio, P., & Francois, P. 2001b, A&A, 376, 586
* Faraggiana & Gerbaldi (2003) Faraggiana, R., Gerbaldi, M. 2003, A&A, 398, 697
* Gaia Collaboration (2018) Gaia Collaboration, 2018, A&A 616, 1
* Gebran et al. (2014) Gebran, M., Monier, R., Royer, F., Lobel, A., Blomme, R., 2014, Putting A Stars into Context: Evolution, Environment, and Related Stars, Proceedings of the international conference held on June 3-7, 2013 at Moscow M.V. Lomonosov State University in Moscow, Russia. Eds.: G. Mathys, E. Griffin, O. Kochukhov, R. Monier, G. Wahlgren, Moscow: Publishing house ”Pero”, 2014, p. 193-198
* Gerbaldi et al. (2003) Gerbaldi, M., Faraggiana, R., & Lai, O. 2003, A&A, 412, 447
* Gray (1988) Gray, R. O. 1988, AJ, 95, 220
* Gray et al. (2001) Gray, R., Napier, M. G., Winkler, L. I. 2001, AJ, 121, 2148
* Gray et al. (2017) Gray, R., Riggs, Q., Koen, C., et al., 2017, AJ 154, 31
* Gray & Corbally (1998) Gray, R. O., Corbally, C. J., 1998, AJ 116, 2530
* Gray & Garrison (1987) Gray, R. O., Garrison, R. F., 1987, ApJS 65, 581
* Hauck & Mermilliod (1998) Hauck, B., Mermilliod, M., 1998, A&AS 129, 431
* Heiter et al. (2002) Heiter, U., 2002, A&A 381, 959
* Heller (1995) Heller, C., 1995, ApJ 455, 252
* Holweger & Sturenburg (1993) Holweger, H., Stürenburg, S., 1993, PASPC 44, 356
* Iliev et al. (2001) Iliev, I. Kh., Paunzen, E., Barzova, I., et al., 2001, INFORMATION BULLETIN ON VARIABLE STARS (IBVS), Number 5178
* Iliev et al. (2002) Iliev, I. Kh., Paunzen, E., Barzova, I., et al., 2002, A&A 381, 914
* Ivanov et al. (2019) Ivanov, V., Coccato, L., Neeser, M., et al., 2019, A&A 629, A100
* Kaiser (2006) Kaiser, A., 2006, Astrophysics of Variable Stars, Pecs, Hungary, 5-10 September 2005, Sterken, C. and Aerts, C. (eds). ASP Conference Series, Vol. 349, p. 257. San Francisco: Astronomical Society of the Pacific, 2006
* Koleva & Vazdekis (2012) Koleva, M., Vazdekis, A., 2012, A&A 538, 143
* Kurucz (1993) Kurucz, R. L. 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s grid, Kurucz CD-ROM 13 (Cambridge, MA: Smithsonian Astrophysical Obs.)
* Kurucz & Avrett (1981) Kurucz, R. L., Avrett, E. H., 1981, SAO Special Report No. 391
* Jura (2015) Jura, M., 2015, AJ 150, 166
* Kamp et al. (2001) Kamp, I., Iliev, I. Kh., Paunzen, E., et al., 2001, A&A 375, 899
* Kamp & Paunzen (2002) Kamp, I., Paunzen, E., 2002, MNRAS 335, L45
* Mechler (1974) Mechler, PASP 86, 279
* Mechler (1976) Mechler, AJ 81, 107
* Murphy et al. (2015) Murphy, S., Corbally, C., Gray, R., et al., 2015, PASA 32, e036
* Murphy et al. (2020) Murphy, S., Gray, R., Corbally, C., et al., 2020, MNRAS 499, 2701
* Murphy & Paunzen (2017) Murphy, S. J., Paunzen, E., 2017, MNRAS 466, 546
* Martinez-Galarza et al. (2009) Martinez-Galarza, J., Kamp, I., Su, K. Y., et al. 2009, ApJ, 694, 165
* Napiwotzki et al. (1993) Napiwotzki, R., Shonberner, D., Wenske, V., 1993, A&A 268, 653
* North et al. (1994) North, P., Berthet, S., Lanz, T., 1994, A&ASS 103, 321
* Paunzen et al. (1998) Paunzen, E., Heiter, U., Handler, G., et al., 1998, A&A 329, 155
* Paunzen et al. (1999) Paunzen, E., Kamp, I., Iliev, I. Kh., et al., 1999, A&A 345, 597
* Paunzen (2000) Paunzen, E., Ph.D. Thesis, Univ. of Vienna, Austria
* Paunzen et al. (2001b) Paunzen, E., A&A 373, 633
* Paunzen et al. (2001) Paunzen, E., Duffee, B., Heiter, U., et al., 2001, A&A 373, 625
* Paunzen et al. (2002) Paunzen, E., Iliev, I. Kh., Kamp, I., Barzova, I., 2002, MNRAS 336, 1030
* Paunzen et al. (2012a) Paunzen, E., Fraga, L., Heiter, U., et al., 2012, ”From interacting binaries to exoplanets: essential modeling tools”, IAU Proceeding Symposium 282, M. Richards & I. Hubeny, Eds., doi:10.1017/S1743921311027773
* Paunzen et al. (2012b) Paunzen, E., Heiter, U., Fraga, L., et al., 2012, MNRAS 419, 3604
* Prugniel et al. (2011) Prugniel, Ph., Vauglin, I., Koleva, M., 2011, A&A 531, 165
* Ramirez et al. (2013) Ramírez, I., Allende Prieto, C., Lambert, D., 2013, ApJ 764, 78
* Renson (1990) Renson, P., Faraggiana, R., & Boehm, C. 1990, BICDS, 38, 137
* Rentzsch-Holm (1996) Rentzsch-Holm, Inga, 1996, A&A 312, 966
* Sadavoy & Stahler (2017) Sadavoy, S., Stahler, S., 2017, MNRAS 469, 3881
* Saffe & Levato (2014) Saffe, C., Levato, H., 2014, A&A 562, A128
* Saffe et al. (2017) Saffe, C., Jofré, E., Martioli, E., Flores, M., et al., 2017, A&A 604, L4
* Saffe et al. (2018) Saffe, C., Flores, M., Miquelarena, P., et al., 2018, A&A 620, 54
* Saffe et al. (2019) Saffe, C., Jofré, E., Miquelarena, P., et al., 2019, A&A 625, 39
* Saffe et al. (2020) Saffe, C., Miquelarena, P., Alacoria, J., et al., 2020, A&A 641, 145
* Shaya & Olling (2011) Shaya, E., Olling, R., 2011, ApJSS 192, 2
* Saffe et al. (2021) Saffe, C., Miquelarena, P., Alacoria, J., et al., 2021, A&A 647, A49
* Sitnova et al. (2013) Sitnova, T., Mashonkina, L., Ryabchikova, T., 2013, Astronomy Letters, Vol. 39, No. 2, pp. 126-140
* Solano et al. (2001) Solano, E., Paunzen, E., Pintado, O., Varela, J., 2001, A&A 374, 957
* Stürenburg (1993) Stürenburg, S., 1993, A&A 277, 139
* Tobin et al. (2016) Tobin, J., Kratter, K., Persson, M., et al., 2016, Nature 538, 483
* Tokovinin & Lepine (2012) Tokovinin, A., Lépine, S., 2012, AJ 144, 102
## Appendix A Multiple systems with suspected $\lambda$ Boo components
We review abundance determination of binary or multiple systems with suspected
$\lambda$ Boo components from the literature, in order to determine if two
bonafide $\lambda$ Boo stars can be found. Spectral classification data is
also included whenever available. The data are updated including the results
from the present work.
HD 15164/65/65C: It is a visual triple system, where most works considered
only the two brighter components. Andrievsky et al. (1995) studied spectra of
the stars A and B (HD 15165 and HD 15164) using the LYNX (R$\sim$24000) and
AURELIE (R$\sim$11000) spectrographs. They found subsolar values for two
elements analyzed in the A star (-0.73 dex for [Ca/H] and -0.46 dex for
[Fe/H]). Then, Chernyshova et al. (1998) reanalyzed the data for the A star
and suggest that this object belongs to the $\lambda$ Boo class, showing
$\sim$solar values for C, O and S (0.0 dex, -0.3 dex and 0.0 dex) together
with subsolar values for refractory elements (for example, [Fe/H]=-1.6 dex).
However, Andrievsky et al. (1995) also found solar values for several elements
in the B star. They suggest that the different chemical composition of stars A
and B is probably due to a stellar capture. Murphy et al. (2015) classified
the spectra of the 3 stars as ”F1 V kA7mA6 ($\lambda$ Boo)?” (HD 15164), ”F2 V
kA2mA2 $\lambda$ Boo?” (HD 15165) and ”K2V” (HD 15165C). They also claim that
the classification spectrum of HD 15165 does not match solar abundances,
contrary to the result of Andrievsky et al. (1995).
In the present work, we find that the star A presents a $\lambda$ Boo pattern,
while the stars B and C present abundances closer to the Sun. In other words,
we find different abundances for the stars A and B, in agreement with
Andrievsky et al. (1995). This is difficult to explain under the accretion
scenario of $\lambda$ Boo stars. Then, current evidence does not support the
presence of two bonafide $\lambda$ Boo components in this system.
HD 38545: Stürenburg (1993) estimated solar abundances for C (-0.1$\pm$0.2
dex) and almost solar values for other metals such as Fe (-0.2$\pm$0.2 dex).
However, it was analyzed as a single object and then considered not reliable
by Heiter et al. (2002). Then, this object was mentioned as a possible visual
binary with a small separation ($<$0.2”, Heiter et al., 2002) and as a
possible SB system (Paunzen et al., 2002). More recently, Prugniel et al.
(2011) reported a low metallicity for this object ([Fe/H]=-0.48 dex)
considered also as single. By inspecting IUE UV spectra, Murphy et al. (2015)
suggest that it is a normal object rather than a $\lambda$ Boo star (”non-
member” of the class), and caution that its high $v\sin i$ ($\sim$191 km/s)
may also have had some role in early identifications as $\lambda$ Boo. We note
that this star is not included in the list of SB $\lambda$ Boo stars of
Paunzen et al. (2012a). To our knowledge, there is no spectral classification
nor abundance determination for the secondary.
HD 64491: Kamp et al. (2001) identified this object as a SB system, a
previously undetected binary, showing high and low $v\sin i$ components. They
estimated abundances for the star with higher $v\sin i$ ($\sim$ 170 km/s) by
directly fitting the composite spectra, obtaining [N/H]=-0.30 dex, [S/H]=-0.09
dex and [Ca/H]=-0.96 dex (using NLTE for C and S). Then, Iliev et al. (2001)
reported that the orbital period of this SB system is between 230 and 760
days, and suggest that a new abundance analysis should be performed taking
into account the binarity of the system. Faraggiana & Gerbaldi (2003) suggest
that this object is composed of two slightly metal-poor objects ($\sim$-0.5
dex) rather than a single object with [M/H]$\sim$-1.5 dex. Murphy et al.
(2015) classified the primary of the system as ”F1 Vs kA3mA3 $\lambda$ Boo”.
To our knowledge, there is no spectral classification nor abundance
determination for the secondary (the object with lower $v\sin i$).
HD 84948: Paunzen et al. (1998) reported this object as a SB system and found
subsolar abundances separately for the stars A and B ([Fe/H]= -1.2$\pm$0.3 dex
and -1.0$\pm$0.2 dex, respectively). Then, Heiter et al. (2002) also performed
a detailed abundance determination separately for components A and B. Both
works reported that the two stars are metal-poor; however, CNO or S
abundances were not reported. Then, Iliev et al. (2002) estimated NLTE
abundances for C and O: they find subsolar values for C (-0.8$\pm$0.4 dex for
both stars) while for O they found -0.6$\pm$0.3 dex and +0.2$\pm$0.3 for stars
A and B. They also reported a period of 7.41 d for this SB2 system.
We present in Fig. 8 a comparison of an average $\lambda$ Boo pattern taken
from Heiter et al. (2002) (excluding from the average the stars without CNO
values and the stars analyzed here) with literature abundances for the stars
A and B. This plot shows that the C abundances seem to be low with respect to
$\lambda$ Boo stars. When comparing C with Fe abundances, the group of
$\lambda$ Boo stars presents [C/Fe]$\sim$1.21$\pm$0.35 dex (excluding stars
without CNO values and the stars analyzed here, Heiter et al., 2002) with
minimum and maximum values of 0.70 and 1.74 dex. However, the stars A and B
present [C/Fe] values of $\sim$0.4 and $\sim$0.2 dex (using Fe from Heiter et
al. 2002 instead of Paunzen et al. 1998, the values are even lower: $\sim$0.3
and $\sim$0.1 dex for stars A and B), being low compared to the average
[C/Fe] and even lower than the minimum of 0.70 dex. These low [C/Fe] values
possibly correspond to an extreme or mild-$\lambda$ Boo star rather than to an
average $\lambda$ Boo object.
Paunzen et al. (2001) classified HD 84948 as ”kA7hF1mA6 V (LB)”, while Murphy
et al. (2015) classified HD 84948 as ”F1.5 Vs kA5mA5 $\lambda$ Boo?”, a
”probable member” of the $\lambda$ Boo class using newer spectra. Given the
low values of [C/Fe] for both stars together with the ”probable” spectral
classification, we prefer to consider them as candidate $\lambda$ Boo stars
(perhaps mild-$\lambda$ Boo stars) rather than bonafide members of the class.
This binary system deserves a verification of the abundance values.
Figure 8: Comparison of an average $\lambda$ Boo pattern (blue, Heiter et al.,
2002) with the abundances from literature for the stars HD 84948 A and B (left
and right panels, black).
HD 111786 (= HR 4881): This star is considered a classic $\lambda$ Boo
object by different works (e.g., Murphy et al., 2015). Stürenburg (1993)
derived abundances in agreement with the $\lambda$ Boo class (for example,
[C/H]=-0.2$\pm$0.2 dex and [Fe/H]=-1.5$\pm$0.3 dex). However, it was analyzed
as a single object and then considered not reliable by Heiter et al. (2002).
Then, some authors propose an SB nature for this system (Faraggiana et al.,
1997; Paunzen et al., 2012b). We refer the reader to Murphy et al. (2015) for
a more complete discussion about this object. The star was classified as ”F0 V
kA1mA1 $\lambda$ Boo” (Murphy et al., 2015; Gray et al., 2017) and ”F0 Vs
kA1mA1 $\lambda$ Boo” (Murphy et al., 2020). Notably, Faraggiana et al. (2001)
proposed that HD 111786 is in fact a multiple system composed of five members:
one broad-lined star and four narrow-lined stars with similar temperature.
Beyond the multiplicity of this system, to our knowledge there is no spectral
classification nor abundance determination for the secondary (or any other
component) of the system.
HD 141851: Paunzen et al. (1999) found [C/H] and [O/H] NLTE abundances of
-0.81 and -0.21 dex, respectively, showing $v\sin i$ in excess of 200 km
s$^{-1}$. Kamp et al. (2001) derived LTE abundances of [Ca/H]=-1.30 dex,
with typical errors of 0.2 dex. However, Heiter et al. (2002) mention that
this object was analyzed as a single star and thus the abundances are not
reliable. Then, different works claim that this object was misclassified and
does not belong to the $\lambda$ Boo class (e.g., Paunzen et al., 2001).
Andrievsky et al. (2002) found [Fe/H] = -0.70, [Si/H] = -0.65 and [Na/H] =
+0.60 dex; however, they did not decide whether this object is a $\lambda$ Boo star.
Then, Murphy et al. (2015) classified this object as a normal ”A2 IVn” star,
while Gray et al. (2017) as ”A2 IV-Vn”, i.e. non-member of the $\lambda$ Boo
class. To our knowledge, there is no spectral classification nor abundance
determination for the secondary.
HD 148628/638: The primary of this visual pair (HD 148638) was analyzed by
Kamp et al. (2001), who obtained solar values of N and S together with subsolar Ca
(-1.20 dex). However, Murphy et al. (2015, 2020) and Gray et al. (2017)
classified this object as ”A2 IV-n (4481-wk)” and ”A2 IVn” rather than a
member of the $\lambda$ Boo class. To our knowledge, there is no spectral
classification nor abundance study for the companion (HD 148628).
HD 171948: Together with HD 84948, Paunzen et al. (1998) identified this
object as one of the first SB systems with $\lambda$ Boo components. They reported
very low abundances for Mg, Ti, Cr and Fe separately for the components A and
B. Then, Heiter et al. (2002) derived LTE abundances for this system,
estimating the same values within the errors for both stars. For C they
obtained an upper limit ([C/H]$<$-0.5 dex), while O is considered by the same
authors as deficient ([O/H]=-0.6$\pm$0.4 dex) although high compared to heavy
elements ([Fe/H]=-1.6$\pm$0.4 dex). Then, Iliev et al. (2002) reported NLTE
abundances for C and O in this system, estimating the same values for both
stars within the errors ([C/H]=-1.2$\pm$0.4 dex and [O/H]=+0.2$\pm$0.3 dex).
They also derived a period of 21.9 days for the SB system.
We present in Fig. 9 a comparison of an average $\lambda$ Boo pattern
(Heiter et al., 2002) with the literature abundances of stars A and B, showing
that the C values seem to be low with respect to $\lambda$ Boo objects. Comparing C and
Fe abundances, both stars A and B present [C/Fe] values of $\sim$0.4 dex
(taking NLTE C values from Iliev et al. 2002 and Fe from Heiter et al. 2002),
being lower than the average [C/Fe] of $\lambda$ Boo stars
($\sim$1.21$\pm$0.35 dex excluding stars without CNO values and the stars
analyzed here, Heiter et al., 2002) and lower than the minimum of 0.70 dex
(Heiter et al., 2002). We consider that these low [C/Fe] values possibly
correspond to an extreme or mild-$\lambda$ Boo star rather than to an average
$\lambda$ Boo object.
Murphy et al. (2015) classified the primary of this binary system as ”A3 Va-
kB8.5 $\lambda$ Boo”; however, there is no spectral classification listed for
the secondary (see their Table 1). Given the low values of [C/Fe] for both
stars and the lack of a spectral classification for the secondary, we prefer
to consider them as candidate $\lambda$ Boo stars (perhaps mild-$\lambda$ Boo
stars) rather than bonafide members of the class. This binary system deserves
a verification of the abundance values.
Figure 9: Comparison of an average $\lambda$ Boo pattern (blue, Heiter et al.,
2002) with the abundances from literature for the stars HD 171948 A and B
(left and right panels, black).
HD 174005: This object was mentioned as a possible SB system with a maximum
separation of $\sim$38 arcsec (Paunzen, 2000; Solano et al., 2001; Paunzen et
al., 2012a). Both Gray et al. (2001) and Murphy et al. (2015) classified this
object as ”A7 V kA2 mA2 $\lambda$ Boo”. To our knowledge, there is no
abundance determination for the components of this system, nor spectral
classification for the secondary. This system deserves further analysis.
HD 193256/281: The star HD 193281 was found to have near solar C (-0.2$\pm$0.2
dex) and subsolar Fe (-1.0$\pm$0.2 dex), but also near solar values
of Mg, Ti, Cr and Sr in the study of Stürenburg (1993). Then, Paunzen et al.
(1999) estimated a NLTE oxygen abundance of -0.61 dex. Kamp et al. (2001)
found solar values in HD 193281 for N, O and S, although for C they found
-0.61 dex, similarly to Paunzen et al. (1999). The star HD 193256 was found to
have near solar C (0.0$\pm$0.2 dex) and subsolar Fe (-0.7$\pm$0.2 dex), but
also near solar values of Mg and Si (0.0$\pm$0.2 and 0.0$\pm$0.3 dex) in the
study of Stürenburg (1993). Then, abundance values for both stars do not seem
to agree with the general pattern of $\lambda$ Boo stars. The spectrum of HD
193256 was classified as ”A9 Vn kA2mA2 $\lambda$ Boo” (Murphy et al., 2015)
and similarly as ”A8 Vn kA3mA3 ($\lambda$ Boo)” (Gray et al., 2017). However,
the spectrum of HD 193281 was classified as ”A2IVn” (Murphy et al., 2015) and
”A2 IV-V” (Gray et al., 2017). Given a spectral classification in conflict
with the abundances, Murphy et al. (2015) consider HD 193281 as an ”uncertain
member” of the $\lambda$ Boo class.
In this work, we find that HD 193256 presents subsolar values of Cr, Mn and Fe,
but also near solar values for Mg, Si and Ti, which differs from
$\lambda$ Boo stars. For HD 193281, we found a chemical pattern compatible
with a slightly metal-deficient star. However, we also caution that HD
193281 is possibly contaminated by a nearby star (see Sect. 3.2). Then,
current evidence does not support the presence of two bonafide $\lambda$ Boo
objects in this system.
HD 198160/161: Both stars were studied separately by Stürenburg (1993)
considering them as twins (same Teff and log g), although Gerbaldi et al.
(2003) criticized this assumption based on their different V and B (0.35 and
0.39 mag). Stürenburg (1993) derived near solar values for C in both stars
(-0.2$\pm$0.3 dex) and subsolar values for Fe (-0.8$\pm$0.2 dex); however, he also
obtained solar values for Mg and Si (0.0$\pm$0.1 dex and -0.2$\pm$0.2 dex for
both stars). He also estimated suprasolar values for Na (+0.3$\pm$0.2 dex
and +0.6$\pm$0.2 dex for both stars). Then, Paunzen et al. (1999) estimated
near solar NLTE values for C and O. Murphy et al. (2015) classified the
spectra of both stars as ”A2 Vann $\lambda$ Boo” and ”A3 V” (see their Table
1), respectively, while Gray et al. (2017) classified the spectra of HD 198160
as ”A3 IV(n)”.
In this work, we find a general deficiency of metals around 0.7-0.8 dex for
both stars. However, we also found subsolar values for C and O, possibly low
compared to other $\lambda$ Boo stars. When comparing C with Fe abundances, we
found that the stars HD 198160 and HD 198161 present [C/Fe] values of
$\sim$0.54 and $\sim$0.48 dex, being low compared to the average [C/Fe] of
$\lambda$ Boo stars ($\sim$1.21$\pm$0.35 dex) and even lower than the minimum
of 0.70 dex (see Sect. 4.3). Then, we consider that these low [C/Fe] values
possibly correspond to mild-$\lambda$ Boo stars, rather than to an average
$\lambda$ Boo object. In our opinion, current evidence does not support the
presence of two bonafide $\lambda$ Boo objects in the system.
HD 210111: Stürenburg (1993) analyzed this object as a single star, obtaining
solar abundances for C (0.1$\pm$0.1 dex), a subsolar value for Fe
(-1.1$\pm$0.2 dex), but also suprasolar and solar values for Sr and
Ba (+0.45$\pm$0.2 and 0.05$\pm$0.2 dex). Solano et al. (2001) obtained
subsolar values for Mg, Cr, Sc and Fe (between -0.8 dex and -1.3 dex), while
Kamp et al. (2001) derived subsolar and near solar abundances for C and O
(-0.45 dex and -0.20 dex, with typical errors of 0.2 dex). Paunzen et al.
(1999) estimated NLTE values for C and O of -0.45 dex and -0.20 dex. We
suppose that the data presented in these abundance works correspond to the
primary of the system, whose binary nature was not reported. This object
was classified as ”kA2hA7mA2 Vas $\lambda$ Boo” with peculiar hydrogen lines
by Gray (1988), and then as ”A9 V kA2mA2 $\lambda$ Boo” by Gray et al. (2017).
A classification spectrum for HD 210111 was presented by Paunzen et al.
(2012b), who suggested an SB2 nature for the system. They fitted the observed
data using a composite spectrum with two equal components having [M/H]=-1.0
dex. For a more detailed abundance analysis, the authors suggest obtaining
additional spectra at a larger separation of the two components. In particular, for the
secondary there is no detailed abundance determination (including for the
volatile species) nor spectral classification.
## Appendix B Chemical abundances
We present in this section the chemical abundances derived in this work and
their errors. The total error etot was derived as the quadratic sum of the
line-to-line dispersion e1 (estimated as $\sigma/\sqrt{n}$ , where $\sigma$ is
the standard deviation), and the error in the abundances (e2, e3 and e4) when
varying Teff, $\log g$ and vmicro by their corresponding uncertainties (we
adopt a minimum of 0.01 dex for the errors e2, e3 and e4). For chemical
species with only one line, we adopt as $\sigma$ the standard deviation of
iron lines. Abundance tables show the average abundance and the total error
etot, together with the errors e1 to e4.
Table 3: Chemical abundances for HD 15164.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
Li I | 1.32 $\pm$ 0.17 | 0.07 | 0.15 | 0.01 | 0.01
C I | -0.30 $\pm$ 0.05 | 0.02 | 0.04 | 0.02 | 0.01
N I | 0.08 $\pm$ 0.10 | 0.07 | 0.05 | 0.01 | 0.04
O I | 0.12 $\pm$ 0.35 | 0.07 | 0.30 | 0.04 | 0.16
Mg I | -0.12 $\pm$ 0.22 | 0.11 | 0.13 | 0.03 | 0.13
Mg II | 0.08 $\pm$ 0.17 | 0.07 | 0.06 | 0.02 | 0.14
Al I | -0.74 $\pm$ 0.29 | 0.02 | 0.08 | 0.02 | 0.28
Si II | -0.30 $\pm$ 0.14 | 0.04 | 0.06 | 0.02 | 0.12
Ca II | -0.26 $\pm$ 0.17 | 0.07 | 0.15 | 0.01 | 0.02
Sc II | -0.30 $\pm$ 0.27 | 0.17 | 0.07 | 0.03 | 0.19
Ti II | -0.20 $\pm$ 0.16 | 0.02 | 0.08 | 0.02 | 0.14
Cr II | -0.35 $\pm$ 0.08 | 0.02 | 0.03 | 0.02 | 0.07
Mn I | -0.38 $\pm$ 0.16 | 0.04 | 0.14 | 0.01 | 0.06
Fe I | -0.36 $\pm$ 0.15 | 0.01 | 0.05 | 0.01 | 0.14
Fe II | -0.37 $\pm$ 0.11 | 0.01 | 0.03 | 0.01 | 0.11
Ni II | -0.50 $\pm$ 0.10 | 0.07 | 0.06 | 0.02 | 0.02
Zn I | -0.53 $\pm$ 0.12 | 0.02 | 0.12 | 0.01 | 0.01
Sr II | 0.37 $\pm$ 0.32 | 0.02 | 0.16 | 0.01 | 0.27
Y II | -0.26 $\pm$ 0.12 | 0.03 | 0.11 | 0.02 | 0.04
Zr II | -0.06 $\pm$ 0.11 | 0.07 | 0.08 | 0.02 | 0.02
Ba II | 0.10 $\pm$ 0.29 | 0.10 | 0.16 | 0.01 | 0.22
Table 4: Chemical abundances for HD 15165.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
C I | -0.06 $\pm$ 0.07 | 0.02 | 0.04 | 0.05 | 0.02
O I | 0.52 $\pm$ 0.12 | 0.02 | 0.02 | 0.02 | 0.11
Mg I | -1.06 $\pm$ 0.25 | 0.21 | 0.08 | 0.06 | 0.09
Mg II | -1.00 $\pm$ 0.24 | 0.22 | 0.05 | 0.06 | 0.06
Al I | -1.49 $\pm$ 0.28 | 0.10 | 0.17 | 0.09 | 0.18
Ca II | -1.03 $\pm$ 0.28 | 0.26 | 0.09 | 0.01 | 0.04
Sc II | -1.40 $\pm$ 0.28 | 0.22 | 0.11 | 0.06 | 0.12
Ti II | -0.97 $\pm$ 0.16 | 0.06 | 0.04 | 0.06 | 0.13
Cr II | -1.12 $\pm$ 0.08 | 0.02 | 0.07 | 0.03 | 0.01
Fe I | -1.24 $\pm$ 0.16 | 0.06 | 0.09 | 0.01 | 0.12
Fe II | -1.14 $\pm$ 0.07 | 0.04 | 0.04 | 0.03 | 0.04
Sr II | -0.34 $\pm$ 0.34 | 0.07 | 0.13 | 0.01 | 0.31
Ba II | -0.54 $\pm$ 0.26 | 0.15 | 0.08 | 0.03 | 0.19
Table 5: Chemical abundances for HD 15165C.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
Mg I | -0.25 $\pm$ 0.11 | 0.10 | 0.01 | 0.01 | 0.01
Al I | -0.08 $\pm$ 0.03 | 0.02 | 0.01 | 0.01 | 0.01
Si I | 0.09 $\pm$ 0.10 | 0.08 | 0.06 | 0.01 | 0.01
Ca I | 0.15 $\pm$ 0.07 | 0.04 | 0.05 | 0.01 | 0.01
Sc II | -0.11 $\pm$ 0.06 | 0.06 | 0.01 | 0.01 | 0.01
Ti I | -0.03 $\pm$ 0.06 | 0.03 | 0.05 | 0.01 | 0.01
Ti II | -0.04 $\pm$ 0.05 | 0.05 | 0.01 | 0.01 | 0.01
V I | 0.01 $\pm$ 0.07 | 0.04 | 0.06 | 0.01 | 0.01
Cr I | -0.02 $\pm$ 0.06 | 0.04 | 0.04 | 0.01 | 0.01
Cr II | -0.02 $\pm$ 0.08 | 0.08 | 0.01 | 0.01 | 0.01
Mn I | 0.29 $\pm$ 0.09 | 0.09 | 0.02 | 0.01 | 0.01
Fe I | 0.04 $\pm$ 0.02 | 0.01 | 0.01 | 0.01 | 0.01
Fe II | -0.01 $\pm$ 0.05 | 0.04 | 0.02 | 0.01 | 0.01
Co I | -0.13 $\pm$ 0.05 | 0.04 | 0.01 | 0.01 | 0.01
Cu I | -0.21 $\pm$ 0.18 | 0.18 | 0.01 | 0.01 | 0.01
Zn I | -0.15 $\pm$ 0.24 | 0.24 | 0.01 | 0.01 | 0.01
Sr II | -0.18 $\pm$ 0.13 | 0.13 | 0.01 | 0.01 | 0.01
Y II | 0.04 $\pm$ 0.21 | 0.21 | 0.01 | 0.01 | 0.03
Zr II | 0.24 $\pm$ 0.13 | 0.13 | 0.01 | 0.02 | 0.01
Ba II | 0.53 $\pm$ 0.17 | 0.17 | 0.01 | 0.01 | 0.01
Nd II | 0.12 $\pm$ 0.08 | 0.08 | 0.01 | 0.01 | 0.01
Table 6: Chemical abundances for HD 193256.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
C I | -0.05 $\pm$ 0.22 | 0.21 | 0.04 | 0.02 | 0.05
O I | 0.74 $\pm$ 0.16 | 0.15 | 0.04 | 0.05 | 0.02
Mg I | 0.34 $\pm$ 0.25 | 0.21 | 0.05 | 0.04 | 0.12
Mg II | 0.02 $\pm$ 0.24 | 0.21 | 0.07 | 0.04 | 0.08
Si II | 0.08 $\pm$ 0.18 | 0.07 | 0.16 | 0.04 | 0.02
Ca II | -0.47 $\pm$ 0.23 | 0.21 | 0.06 | 0.05 | 0.01
Sc II | -0.60 $\pm$ 0.31 | 0.21 | 0.08 | 0.03 | 0.21
Ti II | -0.18 $\pm$ 0.25 | 0.07 | 0.03 | 0.08 | 0.23
Cr II | -0.61 $\pm$ 0.09 | 0.02 | 0.02 | 0.07 | 0.06
Mn I | -0.53 $\pm$ 0.13 | 0.04 | 0.10 | 0.01 | 0.07
Fe I | -0.92 $\pm$ 0.15 | 0.07 | 0.06 | 0.02 | 0.12
Fe II | -0.69 $\pm$ 0.10 | 0.04 | 0.04 | 0.05 | 0.08
Sr II | -0.61 $\pm$ 0.49 | 0.27 | 0.10 | 0.10 | 0.38
Table 7: Chemical abundances for HD 193281.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
C I | -0.35 $\pm$ 0.10 | 0.07 | 0.07 | 0.01 | 0.01
O I | -0.30 $\pm$ 0.06 | 0.03 | 0.04 | 0.02 | 0.01
Mg I | -0.16 $\pm$ 0.29 | 0.20 | 0.10 | 0.06 | 0.18
Mg II | -0.54 $\pm$ 0.21 | 0.18 | 0.02 | 0.01 | 0.11
Al I | -0.65 $\pm$ 0.25 | 0.18 | 0.08 | 0.02 | 0.15
Si II | -0.84 $\pm$ 0.11 | 0.07 | 0.08 | 0.05 | 0.01
Ca II | -0.27 $\pm$ 0.21 | 0.18 | 0.11 | 0.01 | 0.01
Sc II | -0.23 $\pm$ 0.32 | 0.18 | 0.10 | 0.01 | 0.25
Ti II | -0.24 $\pm$ 0.15 | 0.05 | 0.05 | 0.04 | 0.13
Cr II | -0.53 $\pm$ 0.04 | 0.02 | 0.02 | 0.02 | 0.01
Fe I | -0.36 $\pm$ 0.13 | 0.05 | 0.09 | 0.02 | 0.07
Fe II | -0.48 $\pm$ 0.13 | 0.03 | 0.07 | 0.01 | 0.10
Sr II | -0.04 $\pm$ 0.47 | 0.01 | 0.16 | 0.01 | 0.44
Y II | -0.09 $\pm$ 0.16 | 0.13 | 0.09 | 0.04 | 0.01
Zr II | -0.02 $\pm$ 0.19 | 0.18 | 0.06 | 0.02 | 0.01
Ba II | 0.20 $\pm$ 0.17 | 0.09 | 0.14 | 0.01 | 0.03
Table 8: Chemical abundances for HD 198160.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
C I | -0.29 $\pm$ 0.08 | 0.07 | 0.01 | 0.04 | 0.03
O I | -0.43 $\pm$ 0.28 | 0.15 | 0.02 | 0.02 | 0.23
Mg I | -0.91 $\pm$ 0.18 | 0.08 | 0.03 | 0.01 | 0.15
Mg II | -0.50 $\pm$ 0.20 | 0.15 | 0.06 | 0.05 | 0.10
Al I | -1.25 $\pm$ 0.21 | 0.15 | 0.05 | 0.03 | 0.13
Si II | -0.86 $\pm$ 0.23 | 0.15 | 0.09 | 0.05 | 0.13
Ca II | -0.65 $\pm$ 0.16 | 0.15 | 0.05 | 0.03 | 0.01
Sc II | -0.85 $\pm$ 0.18 | 0.15 | 0.03 | 0.04 | 0.09
Ti II | -0.73 $\pm$ 0.18 | 0.02 | 0.02 | 0.06 | 0.17
Cr II | -0.68 $\pm$ 0.08 | 0.07 | 0.01 | 0.04 | 0.01
Mn I | -1.06 $\pm$ 0.11 | 0.01 | 0.11 | 0.01 | 0.02
Fe I | -0.83 $\pm$ 0.10 | 0.04 | 0.07 | 0.01 | 0.06
Fe II | -0.83 $\pm$ 0.10 | 0.03 | 0.04 | 0.04 | 0.08
Sr II | -1.29 $\pm$ 0.30 | 0.18 | 0.08 | 0.07 | 0.22
Ba II | -0.47 $\pm$ 0.17 | 0.15 | 0.08 | 0.01 | 0.02
Table 9: Chemical abundances for HD 198161.
Species | [X/H] $\pm$ etot | e1 | e2 | e3 | e4
---|---|---|---|---|---
C I | -0.32 $\pm$ 0.06 | 0.04 | 0.02 | 0.04 | 0.02
O I | -0.21 $\pm$ 0.28 | 0.15 | 0.02 | 0.02 | 0.23
Mg I | -0.87 $\pm$ 0.18 | 0.07 | 0.03 | 0.01 | 0.17
Mg II | -0.55 $\pm$ 0.20 | 0.15 | 0.06 | 0.05 | 0.10
Al I | -1.01 $\pm$ 0.25 | 0.15 | 0.06 | 0.04 | 0.18
Si II | -0.38 $\pm$ 0.21 | 0.15 | 0.08 | 0.05 | 0.10
Ca II | -0.67 $\pm$ 0.16 | 0.15 | 0.05 | 0.03 | 0.01
Sc II | -0.85 $\pm$ 0.18 | 0.15 | 0.03 | 0.04 | 0.09
Ti II | -0.83 $\pm$ 0.17 | 0.05 | 0.03 | 0.06 | 0.14
Cr II | -0.68 $\pm$ 0.07 | 0.06 | 0.01 | 0.04 | 0.01
Mn I | -0.97 $\pm$ 0.14 | 0.08 | 0.11 | 0.01 | 0.03
Fe I | -0.80 $\pm$ 0.17 | 0.03 | 0.07 | 0.01 | 0.15
Fe II | -0.81 $\pm$ 0.11 | 0.04 | 0.04 | 0.04 | 0.08
Sr II | -1.41 $\pm$ 0.22 | 0.04 | 0.08 | 0.07 | 0.19
Ba II | -0.38 $\pm$ 0.18 | 0.15 | 0.07 | 0.01 | 0.06
# Portable solvers for batches of small systems applied to the Landau
collision operator
1st Mark F. Adams, Lawrence Berkeley National Laboratory, <EMAIL_ADDRESS>
2nd Peng Wang, NVIDIA Corporation, <EMAIL_ADDRESS>
###### Abstract
Many applications require solving large numbers of small, independent sparse
linear systems; examples include the Landau collision operator in plasma physics
and astrophysics simulations, chemistry in combustion applications, and
subdomain solves in domain decomposition solvers. One can simply stack these
systems into a global linear system and use existing general-purpose sparse
solvers. However, this "ensemble" approach does not exploit the independent
structure of these systems, and the theoretical optimality of a Krylov solver is lost.
The many independent processing elements (PEs) found in contemporary (GPU)
accelerators are well suited to solving each of these systems independently.
This “batch” approach maintains the Krylov subspace optimality, significantly
reduces the number of kernel launches, and elides (unnecessary) global
communication. This study develops portable solvers that run an entire linear
system solve on a PE in a single kernel launch within the PETSc (Portable
Extensible Toolkit for Scientific Computing) numerical library.
###### Index Terms:
Batch solvers, Landau collision operator, Kokkos, GPU
## Dedicated to the memory of Ravindra Samtaney
## I Introduction
A solve phase with many independent sparse linear systems arises in several
applications. Single level domain decomposition solvers or multigrid smoothers
[1, 2, 3], and multiphysics models with a tensor product-like structure of a
PDE on a spatial grid with, for example, a chemistry PDE in combustion [4], or
with a velocity space Landau collision operator in plasma physics and
astrophysics [5, 6, 7, 8, 9, 10, 11], all generate many independent system
solves. Sensitivity analyses run independent simulations with solves for
implicit time integrators [12]. In addition to these PDE based solves, batched
solves appear in a pseudo-inverse at each grid point used in conservative
mapping between particle and finite element basis representations in
particle-in-cell methods [13, 14]. The related problem of batched singular value
decompositions has also been developed [15].
Modern accelerator hardware is well suited to these many small systems because
GPUs are composed of many small independent processing elements (PEs) that are
equipped with fast synchronization primitives. While one can simply combine these linear
systems into a single large linear system and solve it with existing sparse solvers, this
ensemble approach is not optimal in several respects: for Krylov solvers, the optimality of
the method is lost for each individual system, and direct solvers require special techniques
to exploit the independent structure. This suggests the use of batch solvers that place the entire
solution process for each system on a PE in a single kernel. Batch Krylov
solvers allow for the correct algorithm to be used for each system, with the
proper scaling of the Krylov vectors, and independent convergence checking for
each system. In addition to supporting the correct algorithm, batch solvers
drastically reduce the number of kernel launches in the solver from a dozen or
more per iteration to a single kernel launch. Batch solvers also avoid
unnecessary global communication in dot products and norms and use only the
fast local synchronization primitives on devices. The Ginkgo project [16] and the
Kokkos Kernels project [12] are developing similar methods.
Performance portability is currently a significant challenge in high-
performance computing with accelerator devices (GPUs). One viable approach is
to abandon a single-source model and implement a version of the solver for
each device of interest. PETSc uses this approach for portable linear algebra
(this report compares the batch solvers developed herein with PETSc ensemble
solvers). Alternatively, one can use a single-source model with a portable
language like Kokkos, Raja or SYCL [17, 18, 19]. This work uses Kokkos to
write batch Jacobi preconditioned Krylov solvers [17].
This report begins with an introduction to the Landau collision operator in
§II, and builds on previous work [6], with the following new material:
* •
a multiple grid capability with batching of multiple problems within each MPI
process for the Landau operator in §III,
* •
a portable, batch TFQMR (transpose-free quasi-minimal residual, a Krylov
method for nonsymmetric matrices) solver in §IV,
* •
and new $2V$ and $3V$ performance data for the Landau operator in §V,
and §VI concludes the report.
## II Landau collisions for magnetized plasmas
The Vlasov-Maxwell-Landau system of equations is the fundamental model of
magnetized plasmas [20, 21]. It evolves a distribution function for each
species (one electron species and potentially many ion species) in phase space with
up to three configuration space dimensions plus three velocity space
dimensions (6D). The Landau operator conserves density, momentum and energy
and admits unstructured finite element discretizations that strictly conserve
these quantities [21, 22].
The evolution of the phase space distribution or density function
$f\left(\vec{x},\vec{v},t\right)$ of a plasma in an electromagnetic field is
effectively modeled with a Vlasov-Maxwell-Landau system of the form
$\begin{split}\frac{df}{dt}&\equiv\frac{\partial f}{\partial
t}+\frac{\partial\vec{x}}{\partial
t}\cdot\nabla_{x}f+\frac{\partial\vec{v}}{\partial t}\cdot\nabla_{v}f\\\
&=\frac{\partial f}{\partial
t}+{\vec{v}}\cdot\nabla_{x}f+\frac{e}{m}\left({\vec{E}}+{\vec{v}}\times{\vec{B}}\right)\cdot\nabla_{v}f=C\end{split}$
with charge $e$, mass $m$, electric field ${\vec{E}}$, magnetic field
${\vec{B}}$, spatial coordinate ${\vec{x}}$ , velocity coordinate $\vec{v}$
and a collision term $C$ . This equation is composed of the symplectic Vlasov-
Maxwell system $\frac{df}{dt}=0$ and a metric, or diffusive, collision
operator $C$, within a metriplectic formalism [23].
Landau collisions between species $\alpha$ and $\beta$ are given by
$C_{\alpha\beta}=\nu_{\alpha\beta}\frac{m_{0}}{m_{\alpha}}\nabla\cdot\int\limits_{\bar{\Omega}}d{\bar{v}}\;\mathbf{U}(\vec{v},{\bar{v}})\cdot\left(\frac{m_{0}}{m_{\alpha}}\bar{f}_{\beta}\nabla
f_{\alpha}-\frac{m_{0}}{m_{\beta}}f_{\alpha}\bar{\nabla}\bar{f}_{\beta}\right)$
(1)
with a collision frequency
$\nu_{\alpha\beta}=e_{\alpha}^{2}e_{\beta}^{2}\ln\Lambda_{\alpha\beta}/8\pi
m_{0}^{2}\varepsilon_{0}^{2}$, the Coulomb logarithm
$\ln\Lambda_{\alpha\beta}$ (=10 herein), an arbitrary reference mass $m_{0}$ ,
the vacuum permittivity $\varepsilon_{0}$ and the effective charges $e$ of
each species. $\mathbf{U}(\vec{v},{\bar{v}})$ is the Landau tensor. Overbar
terms are evaluated on the grid for the domain $\bar{\Omega}$ of species
$\beta$ and $\bar{v}\equiv\vec{\bar{v}}$ for clarity. In the evolution of
$f_{\alpha}$, the total collision term is $C_{\alpha}=\sum_{\beta}C_{\alpha\beta}$.
See [24] for further details, the weak form, and a kernel algorithm.
The Landau integral is inherently three dimensional, but in a strong magnetic
guide field, a gyrokinetic approximation allows for the use of cylindrical
coordinates, $\vec{v}=\left(r,z\right)$, to reduce the computation to a $2V$
grid [5]. Both the $2V$ and full $3V$ models are investigated in this report.
The $3V$ model is required for extension to relativistic regimes [25, 26, 27],
which is the subject of future work.
The salient feature of (1) is the inner integral over the domain
${\bar{\Omega}}$ for each species $\beta$, which results in an
$\mathcal{O}(N^{2})$ work complexity algorithm, where $N$ is the number of
integration points of a finite element formulation (§III) [24]. The two terms
in (1), a divergence and a Laplacian, have a rank-one vector and a rank-two tensor
“material” coefficient, respectively. These coefficients are computed in the
inner integral.
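To make this structure concrete, the following is a schematic sketch (ours, not the PETSc Landau kernel): the Landau tensor is written up to multiplicative constants, and the physics factors, species masses and finite element assembly are omitted. It shows only the $\mathcal{O}(N^{2})$ accumulation of the two material coefficients discussed above.

```cpp
#include <array>
#include <vector>
#include <cmath>

// Schematic of the O(N^2) structure only (not the PETSc Landau kernel).
// For each integration point i, an inner loop over all points j of all
// species grids accumulates a rank-1 vector coefficient g and a rank-2
// tensor coefficient K from the Landau tensor U.
using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Standard Landau tensor U(u) = (|u|^2 I - u u^T)/|u|^3 with u = v_i - v_j,
// up to multiplicative constants.
inline Mat3 landau_tensor(const Vec3& vi, const Vec3& vj) {
  Vec3 u{vi[0] - vj[0], vi[1] - vj[1], vi[2] - vj[2]};
  const double u2 = u[0]*u[0] + u[1]*u[1] + u[2]*u[2];
  const double u3 = std::pow(u2, 1.5) + 1e-300;   // guard the i == j case
  Mat3 U{};
  for (int a = 0; a < 3; ++a)
    for (int b = 0; b < 3; ++b)
      U[a][b] = ((a == b ? u2 : 0.0) - u[a]*u[b]) / u3;
  return U;
}

// v: velocities, f: distribution values, gradf: gradients, w: quadrature weights.
void material_coefficients(const std::vector<Vec3>& v, const std::vector<double>& f,
                           const std::vector<Vec3>& gradf, const std::vector<double>& w,
                           std::vector<Vec3>& g, std::vector<Mat3>& K) {
  const std::size_t N = v.size();
  g.assign(N, Vec3{});  K.assign(N, Mat3{});
  for (std::size_t i = 0; i < N; ++i)       // outer integration points
    for (std::size_t j = 0; j < N; ++j) {   // inner integral: O(N^2) total work
      const Mat3 U = landau_tensor(v[i], v[j]);
      for (int a = 0; a < 3; ++a)
        for (int b = 0; b < 3; ++b) {
          g[i][a]    += w[j] * U[a][b] * gradf[j][b];   // rank-1 (divergence term)
          K[i][a][b] += w[j] * U[a][b] * f[j];          // rank-2 (Laplacian term)
        }
    }
}
```

The quadratic cost comes entirely from the nested loops; each outer point is independent, which is the kind of parallelism a GPU kernel can exploit.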
## III Multiple Grids and Batching
A critical observation in (1) is that the inner integral over species $\beta$
does not include $f_{\alpha}(\bar{v})$, which naturally allows for a separate
grid for each species as is done in the XGC code [5]. A single grid with
adaptive mesh refinement (AMR) can be used [6], but separate grids simplify
meshing because only a single Maxwellian needs to be resolved well on each grid,
given the near-Maxwellian distributions common in plasmas. Because Maxwellians are
smooth, high-order methods are very effective. Additionally, species with similar
thermal velocities can share a grid, as is done with all species in [6]. This
is common with the many ionization states of impurities in some plasma
simulations. The use of multiple grids with multiple species per grid reduces
the cost of an impurity simulation from quadratic in the number of species to
linear. Given that the Jacobian construction is an $\mathcal{O}(N^{2})$ computation, where
$N$ is the sum of all the integration points on all grids, this capability is
critical.
Figure 1 shows the three grids used for the experiments in §V, with Maxwellian
distributions in axisymmetric coordinates, where the third grid has eight
species of heavy ions.
Figure 1: Grids used for this study with Maxwellian distributions and
different scaling for each species group (visualization artifacts from linear
interpolation in Visit)
### III-A Batching of spatial points
Kinetic applications commonly use operator-split time integrators, where the
symplectic Vlasov system and the metric collisions are alternately advanced.
Each configuration space point advances the collision operator independently,
which provides significant task parallelism. An application would run
thousands or more of these vertex solves in a collision advance step on each
GPU. Additionally, while the exact Jacobian of the Landau integral is dense, a
simple approximate Jacobian is not only sparse but also decouples the species,
resulting in a block diagonal system for each problem [6]. Thus, the linear
solves are composed of many independent solves from both problem and species
batching that can be exploited for increased parallelism.
## IV Batched linear solvers
Batch linear solvers, as with linear solvers in general, can be usefully
categorized as direct and iterative. Direct methods for the most part use some
type of sparse factorization and iterative methods combine the vectors in a
Krylov subspace to generate an approximate solution that usually possesses
some optimality condition such as minimizing the residual [28]. Direct solvers
are attractive because they are robust and their poor asymptotic complexity is
not an issue with the small systems in batch solves. However, direct solvers
are inhibited by data dependencies in both matrix factorizations and solves.
Experience with a single kernel launch CUDA batch band LU solver for the $2V$
Landau examples in §III has been disappointing (§III.G and the data archive in
[29]). However, this effort used a band solver. The use of sparse solvers with
more parallelism, such as nested dissection, should perform significantly
better. Additionally, direct solvers could be critical if iterative solvers
fail. The kernels of iterative methods, with simple Jacobi preconditioning,
have minimal data dependencies and Krylov methods converge well for the Landau
collision operator problems considered here and for combustion problems in
Pele [16].
An approach to solving these systems that uses existing portable solvers in,
for example, PETSc or Trilinos Kokkos Kernels [30], is to create an ensemble
matrix, where each linear system is “stacked” to create a large block diagonal
matrix, and use a Jacobi preconditioned Krylov solver [12, 6]. However, the
ensemble approach has several limitations. Traditional implementations of
solvers abstract linear algebra operations for, among other things,
portability. On a device, this results in a kernel launch for each vector
operation, matrix-vector product, panel updates, preconditioner, etc.,
amounting to hundreds or thousands of such kernel launches. Amortizing these
kernel launch costs requires a large degree of parallelism from batching,
which may not be available from the application. Importantly, the ensemble
approach is not consistent with Krylov iterative solvers because the scaling
of each vector in the Krylov subspace is derived from the (artificial) global
operator. Direct solvers do not suffer from inconsistency, but similarly
require special techniques for batching [sherry].
The cost of a Landau collision time advance is dominated by the construction
of the Jacobian matrix and the linear solve for the implicit time integrator.
The matrix construction is described with CPU linear solvers in [6], and §V
discusses new optimizations to the meshing and the batching of multiple spatial
problems in each kernel. This report introduces new GPU linear solvers
that also batch multiple spatial problems and place each solve on a processing
element using the Kokkos portable language.
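To illustrate the batch-solver structure described above, the following is a minimal Kokkos sketch, not the PETSc implementation: one Kokkos team runs one complete Jacobi-preconditioned Krylov solve, so the whole batch is a single kernel launch with only team-local synchronization and an independent convergence check per system. A preconditioned conjugate gradient iteration stands in for TFQMR only to keep the listing short, and all names, data layouts and the shared-sparsity assumption are illustrative.

```cpp
#include <Kokkos_Core.hpp>

// Minimal sketch of a batched, Jacobi-preconditioned Krylov solver: one team
// per linear system, a single kernel launch for the whole batch.  CG is used
// in place of TFQMR only to keep the listing short; names are illustrative.
using ExecSpace  = Kokkos::DefaultExecutionSpace;
using TeamMember = Kokkos::TeamPolicy<ExecSpace>::member_type;

struct BatchCSR {                        // all systems share one sparsity pattern
  Kokkos::View<int*>     rowptr;         // size n+1
  Kokkos::View<int*>     colind;         // size nnz
  Kokkos::View<double**> vals;           // [nbatch][nnz]
  Kokkos::View<double**> diag;           // [nbatch][n], Jacobi preconditioner
  Kokkos::View<double**> b, x;           // [nbatch][n]
  int n;
};

void batch_jacobi_cg(BatchCSR A, int maxit, double rtol) {
  const int nbatch = static_cast<int>(A.vals.extent(0)), n = A.n;
  Kokkos::View<double**> r("r", nbatch, n), z("z", nbatch, n),
                         p("p", nbatch, n), q("q", nbatch, n);
  Kokkos::parallel_for("batch-cg",
    Kokkos::TeamPolicy<ExecSpace>(nbatch, Kokkos::AUTO),
    KOKKOS_LAMBDA(const TeamMember& team) {
      const int s = team.league_rank();              // which linear system
      auto dot = [&](const Kokkos::View<double**>& u, const Kokkos::View<double**>& v) {
        double d = 0.0;                              // team-local reduction, no global comm.
        Kokkos::parallel_reduce(Kokkos::TeamThreadRange(team, n),
          [&](const int i, double& acc) { acc += u(s, i) * v(s, i); }, d);
        return d;
      };
      // r = b (x starts at 0), z = D^{-1} r, p = z
      Kokkos::parallel_for(Kokkos::TeamThreadRange(team, n), [&](const int i) {
        A.x(s, i) = 0.0; r(s, i) = A.b(s, i);
        z(s, i) = r(s, i) / A.diag(s, i); p(s, i) = z(s, i);
      });
      team.team_barrier();
      double rz = dot(r, z);
      const double rr0 = dot(r, r);                  // squared initial residual norm
      for (int it = 0; it < maxit; ++it) {
        // q = A p  (sparse matrix-vector product for system s)
        Kokkos::parallel_for(Kokkos::TeamThreadRange(team, n), [&](const int i) {
          double sum = 0.0;
          for (int k = A.rowptr(i); k < A.rowptr(i + 1); ++k)
            sum += A.vals(s, k) * p(s, A.colind(k));
          q(s, i) = sum;
        });
        team.team_barrier();
        const double alpha = rz / dot(p, q);
        Kokkos::parallel_for(Kokkos::TeamThreadRange(team, n), [&](const int i) {
          A.x(s, i) += alpha * p(s, i);
          r(s, i)   -= alpha * q(s, i);
          z(s, i)    = r(s, i) / A.diag(s, i);
        });
        team.team_barrier();
        if (dot(r, r) < rtol * rtol * rr0) break;    // independent check per system
        const double rz_new = dot(r, z);
        const double beta = rz_new / rz;
        rz = rz_new;
        Kokkos::parallel_for(Kokkos::TeamThreadRange(team, n),
          [&](const int i) { p(s, i) = z(s, i) + beta * p(s, i); });
        team.team_barrier();
      }
    });
}
```

Each league rank owns one linear system, so the single `parallel_for` over the `TeamPolicy` replaces the hundreds of per-iteration kernel launches of the ensemble approach with one launch, and all dot products reduce only within a team (this sketch assumes Kokkos has been initialized by the caller).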
## V Performance experiments
The important figure of merit for understanding the performance of the Landau
collision operator time advance is the throughput of Newton iterations per
second. This metric factors out the specifics of the time integrator, the
non-linear solver tolerance, etc., which are application dependent. Throughput is
defined as the total number of Newton iterations, times the batch size and the
number of GPUs, divided by the simulation time. A “warm-up” time step is used
to set up the solver and is not timed, because setup costs are amortized in a
production setting.
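Written out (a restatement of the definition above; the symbols $N_{\mathrm{Newton}}$ for Newton iterations per problem, $N_{\mathrm{batch}}$ for batch size, $N_{\mathrm{GPU}}$ for the number of GPUs and $t_{\mathrm{wall}}$ for the timed simulation time are introduced here only for clarity):
$\mathrm{throughput}\;=\;\dfrac{N_{\mathrm{Newton}}\times N_{\mathrm{batch}}\times N_{\mathrm{GPU}}}{t_{\mathrm{wall}}}\quad\text{[Newton iterations per second]}.$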
The model problem is a deuterium plasma with two single species grids and
eight species of tungsten that share one grid. The initial electron
temperature is twice that of the ions and the model is run toward equilibrium
for 10 time steps. One level of AMR refinement from a $4\times 2$ and a
$4\times 4\times 4$ grid is used, in $2V$ and $3V$ respectively, resulting in
14 elements (Q3) in $2V$ and 120 elements (Q2) in $3V$. We have
observed that these grids are sufficient to measure a plasma resistivity
within about $1\%$ of the fully converged resistivity [6]. The test harness
(the ex2.c Landau example in PETSc, Appendix A) replicates the model problem to
create a batch of problems to mimic an application’s use of this solver. Each
of these problems requires a linear solve per species, resulting in a
composition of this batch of problems with a batch of species solves per
problem. To mimic variability in a real application, the density and hence
collision frequency of each successive problem in a batch are varied within a
range of about $10\%$. Precise build parameters and instructions for
reproducing the data herein are publicly available (see Appendix A).
Two linear solvers are considered: a batched TFQMR solver in PETSc, written in
Kokkos, and an ensemble solver that uses Kokkos Kernels linear algebra
primitives within the PETSc framework. Jacobi preconditioning is used throughout.
### V-A NVIDIA A100 tests
Figure 2: Nsight Systems view of a typical Newton iteration with: CUDA device
“97.0% Kernels” (dark blue, middle row) with three large kernels (left to
right): the Jacobian construction, mass matrix construction and linear solve
(“batch-kokkos-solve”). The Jacobian is preceded by a kernel that builds the
function values and derivatives at the integration points, and each matrix
method is followed by a COO matrix assembly kernel.
One node with four NVIDIA A100 Tensor Core GPUs based on the NVIDIA Ampere GPU
architecture and 256GB of memory (Perlmutter) is used to investigate
performance. Figure 3 shows the $2V$ and $3V$ Newton iteration throughput as a
function of batch size and solver.
Figure 3: Newton iterations / sec as a function of batch size for each solver:
$2V$ (left), $3V$ (right)
These data show that the batched TFQMR solver is the fastest option, with
60,000 and 550 Newton iterations per second in $2V$ and $3V$, respectively.
Note that the GPU is well saturated with a batch size of 256 in $2V$
and 32 in $3V$, which corresponds to about 57,000 and 104,000 integration
points per GPU, and thus $2V$ and $3V$ saturate at roughly the same number of
integration points in total, as is expected.
Tables I and II show the total component times in $2V$ and $3V$, respectively,
including mass matrix creation (“Mass”), Landau Jacobian (“Jacobian”), linear
solver (“Solve”), the total time and the total number of linear solver
iterations.
TABLE I: 2V Component times (batch size = 256), NVIDIA-A100 Component | Jacobian | Mass | Solve | Total | Krylov its
---|---|---|---|---|---
Batch TFQMR | 1.57 | 0.22 | 0.58 | 2.44 | 3,648
Ensemble TFQMR | 1.57 | 0.22 | 1.76 | 3.69 | 4,015
TABLE II: 3V Component times (batch size = 32), NVIDIA-A100 Component | Jacobian | Mass | Solve | Total | Krylov its
---|---|---|---|---|---
Batch TFQMR | 29.69 | 3.00 | 2.33 | 35.08 | 2,785
Ensemble TFQMR | 29.67 | 3.00 | 1.51 | 34.31 | 2,326
In $2V$ the solves are subdominant, and in $3V$ the time is completely
dominated by the Jacobian creation, which is expected because this is the
$\mathcal{O}(N^{2})$ work complexity algorithm. Note that the mass matrix creation
is essentially only finite element assembly and sparse matrix assembly, and
thus the Jacobian time minus the Mass time indicates the cost of the Landau
kernel.
#### V-A1 NVIDIA hardware utilization
To understand the hardware utilization in this data, we begin with a high-level
view from NVIDIA’s Nsight Systems. Figure 2 shows a time slice of
a Newton iteration with the kernels (brown, bottom row) and instrumented
sections in grey for the Jacobian and Mass matrix creation, the “kokkos-batch”
solvers, and the sparse matrix assembly. This shows qualitatively that most of
the time is spent in the GPU, and a quantitative analysis of hardware
utilization shows that $97\%$ of the time is in fact on the GPU. All raw data
and instructions for reproducibility can be found in Appendix A.
The analysis of the hardware utilization in the GPU kernel is divided into the
analysis of the Jacobian matrix and the mass matrix construction, and the
batch solver. The NVIDIA Nsight Compute tool is used to gather several
hardware metrics from the largest batch size in Tables I and II. Table III
presents some of the raw Nsight Compute data.
TABLE III: Nsight Compute data: Jacobian (Jac), Mass (M), Solver (Sol) Data | Jac-2V | M-2V | Sol-2V | Jac-3V | M-3V | Sol-3V
---|---|---|---|---|---|---
DRAM (GB/s) | 75.80 | 1230 | 28.18 | 38.33 | 946 | 538
L1 (TB/s) | 1.92 | 3.58 | 1.43 | 1.92 | 2.39 | 1.59
L2 (GB/s) | 747 | 4010 | 881 | 266 | 2810 | 1870
dadd/cycle | 163 | 155 | 156 | 76.20 | 91.80 | 35.12
dfma/cycle | 1155 | 0 | 168 | 546 | 0 | 36.11
dmul/cycle | 526 | 329 | 64.50 | 305 | 198 | 3.14
TFlop/sec | 4.23 | 0.68 | 0.89 | 2.06 | 0.41 | 0.16
AI-L1 | 2.20 | 0.19 | 0.50 | 1.07 | 0.17 | 0.10
Roofline-L1 % | 43.60 | 18.27 | 9.18 | 21.27 | 12.19 | 8.11
AI-L2 | 5.66 | 0.17 | 0.72 | 7.75 | 0.15 | 0.08
Roofline-L2 % | 43.60 | 54.20 | 16.70 | 21.27 | 38 | 25.30
AI-DRAM | 55.80 | 0.56 | 23.60 | 53.80 | 0.43 | 0.29
Roofline-DRAM % | 43.60 | 63.60 | 9.18 | 21.30 | 48.90 | 27.80
Before a high-level analysis of this data, there are a few instructive points
to note.
* •
The Jacobian kernel, with a high arithmetic intensity (AI) of $55.8$ with
respect to DRAM memory movement in $2V$, is not a simple loop of fused
multiply-add (FMA) instructions, as can be seen from rows 4-6 of Table III:
only about $62\%$ of the flops are in FMA instructions. This limits the
achievable percentage of theoretical peak for this algorithm.
* •
The flop rate (row 7 of Table III) is about $2\times$ higher in $2V$ than in
$3V$. This is at least partially due to the Landau kernel $\mathbf{U}$ in (1)
being more complex, with a higher AI, in $2V$, but this requires further
investigation.
* •
There are few flops and no FMAs in the mass matrix as this is essentially all
assembly.
* •
The solver AI-DRAM is very high in $2V$ (23.6) and low in $3V$ (0.29). The
theoretical AI of the solver (no cache) is about $\frac{1}{6}$. This data
indicates that the solves are fitting in cache well in $2V$ but not at all in
$3V$.
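Regarding the last point, a rough back-of-the-envelope estimate (ours, not taken from the profiling data): a double-precision sparse matrix-vector product performs about 2 flops per nonzero while streaming roughly 8 bytes for the matrix value and 4 bytes for the column index, giving an AI of about $2/12\approx 1/6$ when nothing is reused from cache, consistent with the no-cache estimate quoted above.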
Tables IV and V tabulate conclusions and notes from the Nsight Compute data.
TABLE IV: Nsight Compute Bottlenecks Jacobian-2V | Mass-2V | Solve-2V | Jacobian-3V | Mass-3V | Solve-3V
---|---|---|---|---|---
FP64 pipe (57%) | L2 (70%), DRAM (64%) | L1 and instruction latency bound: L1 (43%), instruction issue (39%) | FP64 pipe (31%), L1 (24%) | L2 (50%), DRAM (49%) | L2 (28%), DRAM (23%)
TABLE V: Nsight Compute Notes Jacobian-2V | Mass-2V | Jacobian-3V | Mass-3V | Solve-3V
---|---|---|---|---
Roofline lower than FP64 pipe utilization b/c DFMA instruction is 62% of all FP64 instructions | Low roofline peak b/c 1) low pipe utilization due to being L1 latency bound, 2) instructions dominated by branches and integers (FP64 instructions $\approx 10$% of total instructions) | Low pipe utilization due to L1 latency bound | Utilization not higher partly due to load imbalance: theoretical occupancy 44%, achieved occupancy 34% | Memory latency bound
### V-B AMD MI250X tests
One node with four AMD MI250X GPUs, each with 2 Graphics Compute Dies (GCDs) for
a total of 8 GCDs per node (Crusher), is used to investigate performance. Figure 4
shows the $2V$ and $3V$ Newton iteration throughput as a function of batch size
and solver.
Figure 4: Newton iterations / sec as a function of batch size for each solver:
$2V$ (left), $3V$ (right)
These data show that the batched TFQMR solver is the fastest option, with
24,000 and 397 Newton iterations per second in $2V$ and $3V$, respectively.
Note that the GPU is well saturated with a batch size of 128 in $2V$
and 16 in $3V$, which corresponds to about 29,000 and 52,000 integration
points per grid, and thus $2V$ and $3V$ saturate at about the same number of
integration points in total, as is expected.
Tables VI and VII show the total component times in $2V$ and $3V$,
respectively, including mass matrix creation (“Mass”), Landau Jacobian
(“Jacobian”), linear solver (“Solve”), the total time and the total number of
linear solver iterations. Note that TFQMR has two matrix-vector products per
iteration and so its work complexity is twice that of GMRES with respect to
the number of iterations. Again, batch TFQMR is the fastest.
TABLE VI: 2V Component times (batch size = 512), AMD-MI250X-GCD Component | Jacobian | Mass | Solve | Total | Krylov its
---|---|---|---|---|---
Batch TFQMR | 17.04 | 1.04 | 1.76 | 19.88 | 3,673
Ensemble TFQMR | 17.07 | 1.04 | 36.24 | 54.47 | 4,004
TABLE VII: 3V Component times (batch size = 64), AMD-MI250X-GCD Component | Jacobian | Mass | Solve | Total | Krylov its
---|---|---|---|---|---
Batch TFQMR | 168.16 | 18.07 | 11.28 | 196.81 | 2,796
Ensemble TFQMR | 168.62 | 18.06 | 39.51 | 210.84 | 2,326
## VI Conclusion
This report demonstrates that, with effective utilization of GPUs, the gold
standard for collisions in plasma simulations, the Landau collision operator,
is practical with an axisymmetric ($2V$) approximation, and that
full $3V$ may be feasible in the future. This Landau solver supports multiple
independent grids to efficiently resolve the domain of each species group,
with multiple species per grid for species with like velocity profiles to
reduce cost, high-order accurate finite element discretizations with adaptive
mesh refinement, and runs fully and effectively on GPUs with a portable Kokkos
implementation in PETSc. A new PETSc batch solver has been introduced and
experiments have been conducted on an NVIDIA A100 node and AMD MI250X node.
Artifacts and reproducibility instructions are publicly available (see
Appendix A).
Future work includes the development of the solver in full $3V$. This includes
the development of a single $3V$ finite element with quadrature that is
optimized to represent a Gaussian, the equilibrium Maxwellian distribution of
a plasma. Beentjes shows that, for instance, 320 integration points can
represent a Gaussian to an accuracy that we have observed is sufficient in our
verification tests [31]. For near-Maxwellian plasmas, this would bring the cost
of a full $3V$ Landau time advance to less than $5\times$ that of the current
$2V$ solve.
## Acknowledgments
This material is based upon work supported by the U.S. Department of Energy,
Office of Science, Office of Advanced Scientific Computing Research and Office
of Fusion Energy Sciences, Scientific Discovery through Advanced Computing
(SciDAC) program.
## References
* [1] S. Vanka, “Block-implicit multigrid solution of Navier-Stokes equations in primitive variables,” _J. Comp. Phys._ , vol. 65, pp. 138–158, 1986.
* [2] B. Smith, P. Bjorstad, and W. Gropp, _Domain Decomposition_. Cambridge University Press, 1996.
* [3] P. E. Farrell, M. G. Knepley, L. Mitchell, and F. Wechsung, “Pcpatch: Software for the topological construction of multigrid relaxation methods,” _ACM Trans. Math. Softw._ , vol. 47, no. 3, jun 2021. [Online]. Available: https://doi.org/10.1145/3445791
* [4] H. Sitaraman, S. Yellapantula, M. T. Henry de Frahan, B. Perry, J. Rood, R. Grout, and M. Day, “Adaptive mesh based combustion simulations of direct fuel injection effects in a supersonic cavity flame-holder,” _Combustion and Flame_ , vol. 232, p. 111531, 2021. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0010218021002741
* [5] R. Hager, E. Yoon, S.-H. Ku, E. F. D’Azevedo, P. H. Worley, and C.-S. Chang, “A fully non-linear multi-species Fokker–Planck–Landau collision operator for simulation of fusion plasma,” _Journal of Computational Physics_ , vol. 315, pp. 644–660, Jun. 2016. [Online]. Available: http://dx.doi.org/10.1016/j.jcp.2016.03.064
* [6] M. F. Adams, D. P. Brennan, M. G. Knepley, and P. Wang, “Exascale Landau collision operator in the CUDA programming model applied to thermal quench plasmas,” in _IEEE International Parallel and Distributed Processing Symposium_ , 2022.
* [7] M. Lemou and P.-H. Chavanis, “Escape of stars from gravitational clusters in the chandrasekhar model,” _Physica A: Statistical Mechanics and its Applications_ , vol. 389, no. 5, pp. 1021–1040, 2010. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0378437109009194
* [8] J. Heyvaerts, J.-B. Fouvry, P.-H. Chavanis, and C. Pichon, “Dressed diffusion and friction coefficients in inhomogeneous multicomponent self-gravitating systems,” _Monthly Notices of the Royal Astronomical Society_ , vol. 469, no. 4, pp. 4193–4220, 05 2017. [Online]. Available: https://doi.org/10.1093/mnras/stx1092
* [9] P.-H. Chavanis, “Landau equation for self-gravitating classical and quantum particles: Application to dark matter,” 2020. [Online]. Available: https://arxiv.org/abs/2012.12858
* [10] J.-B. Fouvry, C. Pichon, P.-H. Chavanis, and L. Monk, “Resonant thickening of self-gravitating discs: imposed or self-induced orbital diffusion in the tightly wound limit,” _Monthly Notices of the Royal Astronomical Society_ , vol. 471, no. 3, pp. 2642–2673, 06 2017. [Online]. Available: https://doi.org/10.1093/mnras/stx1625
* [11] J.-B. Fouvry, P.-H. Chavanis, and C. Pichon, “Functional integral approach to the kinetic theory of inhomogeneous systems,” _Physica A: Statistical Mechanics and its Applications_ , vol. 459, pp. 117–128, 2016. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0378437116301376
* [12] K. Liegeois, R. Boman, E. T. Phipps, T. A. Wiesner, and M. Arnst, “Gmres with embedded ensemble propagation for the efficient solution of parametric linear systems in uncertainty quantification of computational models,” _Computer Methods in Applied Mechanics and Engineering_ , vol. 369, p. 113188, 2020. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S004578252030373X
* [13] A. Mollen, M. F. Adams, M. G. Knepley, R. Hager, and C. S. Chang, “Implementation of higher-order velocity mapping between marker particles and grid in the particle-in-cell code XGC,” _Journal of Plasma Physics_ , vol. 87, no. 2, 2021.
* [14] J. V. Pusztay, M. G. Knepley, and M. F. Adams, “Conservative projection between fem and particle bases,” _SIAM Journal on Scientific Computing_ , 2022, accepted.
* [15] W. H. Boukaram, G. Turkiyyah, H. Ltaief, and D. E. Keyes, “Batched qr and svd algorithms on gpus with applications in hierarchical matrix compression,” _Parallel Computing_ , vol. 74, pp. 19–33, 2018, parallel Matrix Algorithms and Applications (PMAA’16). [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0167819117301461
* [16] A. Kashi, P. Nayak, D. Kulkarni, A. Scheinberg, P. Lin, and H. Anzt, “Batched sparse iterative solvers on GPU for the collision operator for fusion plasma simulations,” 2022.
* [17] H. C. Edwards, C. R. Trott, and D. Sunderland, “Kokkos: Enabling manycore performance portability through polymorphic memory access patterns,” _Journal of Parallel and Distributed Computing_ , vol. 74, no. 12, pp. 3202 – 3216, 2014, domain-Specific Languages and High-Level Frameworks for High-Performance Computing. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0743731514001257
* [18] D. A. Beckingsale, J. Burmark, R. Hornung, H. Jones, W. Killian, A. J. Kunen, O. Pearce, P. Robinson, B. S. Ryujin, and T. R. Scogland, “Raja: Portable performance for large-scale scientific applications,” in _2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC)_ , 2019, pp. 71–81.
* [19] A. Alpay and V. Heuveline, “Sycl beyond opencl: The architecture, current state and future direction of hipsycl,” in _Proceedings of the International Workshop on OpenCL_ , ser. IWOCL ’20. New York, NY, USA: Association for Computing Machinery, 2020. [Online]. Available: https://doi.org/10.1145/3388333.3388658
* [20] A. A. Vlasov, “The vibrational properties of an electron gas,” _Soviet Physics Uspekhi_ , vol. 10, no. 6, pp. 721–733, Jun. 1968. [Online]. Available: http://dx.doi.org/10.1070/PU1968v010n06ABEH003709
* [21] L. D. Landau, “Kinetic equation for the coulomb effect,” _Phys. Z. Sowjetunion_ , vol. 10, p. 154, 1936.
* [22] E. Hirvijoki and M. F. Adams, “Conservative discretization of the landau collision integral,” _Physics of Plasmas_ , vol. 24, no. 3, p. 032121, 2017. [Online]. Available: http://dx.doi.org/10.1063/1.4979122
* [23] M. Kraus and E. Hirvijoki, “Metriplectic integrators for the landau collision operator,” _Physics of Plasmas_ , vol. 24, 07 2017.
* [24] M. F. Adams, E. Hirvijoki, M. G. Knepley, J. Brown, T. Isaac, and R. Mills, “Landau collision integral solver with adaptive mesh refinement on emerging architectures,” _SIAM Journal on Scientific Computing_ , vol. 39, no. 6, pp. C452–C465, 2017. [Online]. Available: http://epubs.siam.org/doi/abs/10.1137/17M1118828
* [25] S. T. Beliaev and G. I. Budker, “The Relativistic Kinetic Equation,” _Soviet Physics Doklady_ , vol. 1, pp. 218–222, 1956.
* [26] B. J. Braams and C. F. F. Karney, “Differential form of the collision integral for a relativistic plasma,” _Physical Review Letters_ , vol. 59, pp. 1817–1820, Oct. 1987.
* [27] T. Shiroto and Y. Sentoku, “Structure-preserving strategy for conservative simulation of the relativistic nonlinear landau-fokker-planck equation,” _Phys. Rev. E_ , vol. 99, p. 053309, May 2019. [Online]. Available: https://link.aps.org/doi/10.1103/PhysRevE.99.053309
* [28] Y. Saad, _Iterative methods for sparse linear systems_. PWS Publishing Company, 1996.
* [29] M. F. Adams, D. P. Brennan, M. G. Knepley, and P. Wang, “Landau collision operator in the cuda programming model applied to thermal quench plasmas,” 2021. [Online]. Available: https://arxiv.org/abs/2104.10000
* [30] S. Balay, S. Abhyankar, M. Adams, J. Brown, P. Brune, K. Buschelman, L. Dalcin, V. Eijkhout, W. Gropp, D. Kaushik, M. Knepley, L. Curfman McInnes, K. Rupp, B. Smith, S. Zampini, H. Zhang, and H. Zhang, “PETSc Web page,” http://www.mcs.anl.gov/petsc, 2016. [Online]. Available: http://www.mcs.anl.gov/petsc
* [31] C. H. L. Beentjes, “Quadrature on a spherical surface,” 2016. [Online]. Available: cbeentjes.github.io/files/Ramblings/QuadratureSphere.pdf
## Appendix A Artifact Description and reproducibility
PETSc output files with performance data and provenance information, build
instructions for each platform, and reproducibility instructions and
verification data can be found with git clone
gitlab.com/markadams4/batch_paper_data.
# $5$-rank of ambiguous class groups of
quintic Kummer extensions
Fouad ELMOUHIB Mohamed TALBI Abdelmalek AZIZI
###### Abstract
Let $k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$, where $n$ is a $5^{th}$ power-free
positive integer whose $5$-class group, denoted $C_{k,5}$, is isomorphic to
$\mathbb{Z}/5\mathbb{Z}\times\mathbb{Z}/5\mathbb{Z}$. Let
$k_{0}\,=\,\mathbb{Q}(\zeta_{5})$ be the cyclotomic field containing a
primitive $5^{th}$ root of unity $\zeta_{5}$. Let $C_{k,5}^{(\sigma)}$ be the
group of ambiguous classes under the action of $Gal(k/k_{0})$ =
$\langle\sigma\rangle$. The aim of this paper is to determine all integers $n$
such that the group of ambiguous classes $C_{k,5}^{(\sigma)}$ has rank $1$ or
$2$.
## 1 Introduction
One of the most important problems in number theory is the determination of
the structure of the class group of a number field, particularly its rank. In
the case of quadratic fields, Gauss’s genus theory determines the rank of the
$2$-class group. In a series of papers (see [References], [References],
[References]), Frank Gerth III proved several results on the $3$-class groups
of pure cubic extensions of $\mathbb{Q}$ and cyclic cubic extensions of
$\mathbb{Q}$. Recently, S. Aouissi, M. C. Ismaili, M. Talbi, and A. Azizi in
[References] classified all fields $\mathbb{Q}(\sqrt[3]{n},\zeta_{3})$
whose $3$-class group is of type $(9,3)$.
Let $k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$; a number of researchers have
studied the $5$-class group $C_{k,5}$. M. Kulkarni, D. Majumdar and B. Sury in
[References] proved some results that can be seen as a generalisation of Gerth’s
work to the case of any odd prime, and they give more details in the case of the
$5$-class group of $k$. In [References], C. Parry gives a formula relating the class
numbers of the pure quintic field $\mathbb{Q}(\sqrt[5]{n})$ and its normal closure
$k$. In [References], H. Kobayashi proved that if the radicand $n$ has a prime
factor $p$ congruent to $-1$ modulo $5$, then the class number of the pure
quintic field $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ is a multiple of $5$, and
the class number of $k$ is a multiple of $25$.
Let $n>1$ be a $5^{th}$ power-free integer and
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ be a quintic Kummer extension of the
cyclotomic field $k_{0}\,=\,\mathbb{Q}(\zeta_{5})$. By $C_{k,5}^{(\sigma)}$ we
denote the $5$-group of ambiguous ideal classes under the action of
$Gal(k/k_{0})$ = $\langle\sigma\rangle$, i.e.
$C_{k,5}^{(\sigma)}\,=\,\\{\mathcal{A}\,\in\,C_{k,5}|\,\mathcal{A}^{\sigma}\,=\,\mathcal{A}\\}$.
Let $k^{\ast}\,=\,(k/k_{0})^{\ast}$ be the maximal abelian extension of
$k_{0}$ contained in the Hilbert $5$-class field $k_{5}^{(1)}$ of $k$, which
is called the relative $5$-genus field of $k/k_{0}$.
We consider the problem of finding the radicands $n$ of all pure quintic
fields $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$, for which the Galois group
$\operatorname{Gal}(k^{\ast}/k)$ is non-trivial. The present work gives the
complete solution of this problem by characterizing all quintic Kummer
extensions $k/k_{0}$ with $5$-group of ambiguous ideal classes
$C_{k,5}^{(\sigma)}$ of order $5$ or $25$. This paper can be viewed as the
continuation of the work of M.Kulkarni, D.Majumdar and B.Sury in [References].
In fact, we shall prove the following Main Theorem:
###### Theorem 1.1.
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n>1$
is a $5^{th}$ power-free integer, and
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ be its normal closure. We assume
that the $5-$class group $C_{k,5}$ is of type $(5,5)$.
(1) If rank $(C_{k,5}^{(\sigma)})\,=\,1$, then the integer $n$ can be written
in one of the following forms:
$n\,=\,\left\\{\begin{array}[]{ll}5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\text{ or }\,q_{2}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ 5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad p\,\not\equiv\,-1\,(\mathrm{mod}\,25)\\\ 5^{e}q_{1}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ p^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad p\,\not\equiv\,-1\,(\mathrm{mod}\,25),\,q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad p\,\equiv\,-1\,(\mathrm{mod}\,25)\\\ q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{i}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\end{array}\right.$ (1)
where $p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $q_{1},q_{2}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ are primes and $e,e_{1}$ are integers in
$\\{1,2,3,4\\}$.
(2) If rank $(C_{k,5}^{(\sigma)})\,=\,2$, then the integer $n$ can be written
in one of the following forms:
$n\,=\,\left\\{\begin{array}[]{ll}5^{e}l\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)&\quad\text{ with }\quad
l\,\not\equiv\,1\,(\mathrm{mod}\,25),\\\ l^{e_{1}}q_{1}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)&\quad\text{ with }\quad
l\,\equiv\,1\,(\mathrm{mod}\,5),\,q_{1}\,\equiv\,\pm 2,\pm 7,\pm
3\,(\mathrm{mod}\,25)\\\ l^{e_{1}}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)&\quad\text{ with }\quad
l\,\equiv\,1\,(\mathrm{mod}\,25),\\\ \end{array}\right.$ (2)
where $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ are primes and $e,e_{1}$ are integers in
$\\{1,2,3,4\\}$.
This result will be underpinned by numerical examples obtained with the
computational number theory system PARI/GP [References] in section 3.
Notations.
Throughout this paper, we use the following notations:
* •
The lower case letters $p$, $q$ and $l$ will denote prime numbers such that
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and
$l\,\equiv\,1\,(\mathrm{mod}\,5)$.
* •
$\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$: a pure quintic field, where $n\neq 1$ is
a $5^{th}$ power-free positive integer.
* •
$k_{0}\,=\,\mathbb{Q}(\zeta_{5})$, the cyclotomic field, where
$\zeta_{5}\,=\,e^{\frac{2i\pi}{5}}$ a primitive $5^{th}$ root of unity.
* •
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$: the normal closure of $\Gamma$, a
quintic Kummer extension of $k_{0}$.
* •
$\Gamma^{{}^{\prime}},\,\Gamma^{{}^{\prime\prime}},\,\Gamma^{{}^{\prime\prime\prime}},\,\Gamma^{{}^{\prime\prime\prime\prime}},\,$
the four conjugates quintic fields of $\Gamma$, contained in $k$.
* •
$\langle\tau\rangle\,=\,\operatorname{Gal}(k/\Gamma)$ such that
$\tau^{4}\,=\,id,\,\tau^{3}(\zeta_{5})\,=\,\zeta_{5}^{3},\,\tau^{2}(\zeta_{5})\,=\,\zeta_{5}^{4},\,\tau(\zeta_{5})\,=\,\zeta_{5}^{2}$
and $\tau(\sqrt[5]{n})\,=\,\sqrt[5]{n}$.
* •
$\langle\sigma\rangle\,=\,\operatorname{Gal}(k/k_{0})$ such that
$\sigma^{5}\,=\,id,\ \sigma(\zeta_{5})\,=\,\zeta_{5}$ and
$\sigma(\sqrt[5]{n})\,=\,\zeta_{5}\sqrt[5]{n},\,\sigma^{2}(\sqrt[5]{n})\,=\,\zeta_{5}^{2}\sqrt[5]{n},\,\sigma^{3}(\sqrt[5]{n})\,=\,\zeta_{5}^{3}\sqrt[5]{n},\,\sigma^{4}(\sqrt[5]{n})\,=\,\zeta_{5}^{4}\sqrt[5]{n}$.
* •
$\lambda\,=\,1-\zeta_{5}$ is the prime element of $k_{0}$ above $5$.
* •
$q^{\ast}\,=\,0,\,1$ or $2$: an index measuring whether $\zeta_{5}$ and $1+\zeta_{5}$
are norms of elements of $k^{*}\,=\,k\setminus\\{0\\}$ (the precise definition is
recalled in the proof of the Main Theorem).
* •
$d$: the number of prime ideals of $k_{0}$ ramified in $k$.
* •
For a number field $L$, denote by:
* –
$\mathcal{O}_{L}$: the ring of integers of $L$;
* –
$E_{L}$: the group of units of $L$;
* –
$C_{L}$, $h_{L}$, $C_{L,5}$: the class group, class number, and $5$-class
group of $L$.
* –
$L_{5}^{(1)},L^{\ast}$: the Hilbert $5$-class field of $L$, and the absolute
genus field of $L$.
Figure: diagram of the field extensions, showing $\mathbb{Q}$, the cyclotomic field $\mathbf{k_{0}}$, the pure quintic field $\mathbf{\Gamma}$ with its conjugates $\mathbf{\Gamma^{\prime}},\mathbf{\Gamma^{\prime\prime}},\mathbf{\Gamma^{\prime\prime\prime}},\mathbf{\Gamma^{\prime\prime\prime\prime}}$, and their compositum $\mathbf{k}$.
## 2 Proof of Main Theorem
###### Theorem 2.1.
(Decomposition in cyclotomic fields)
Let $m$ be a positive integer and $p$ a prime number. Suppose that $p$ does not
divide $m$, and let $f$ be the smallest positive integer such that
$p^{f}\,\equiv\,1\,(\mathrm{mod}\,m)$. Then $p$ splits into $\phi(m)/f$
distinct primes in $\mathbb{Q}(\zeta_{m})$, each of which has residue class
degree $f$. In particular, $p$ splits completely if and only if
$p\,\equiv\,1\,(\mathrm{mod}\,m)$.
###### Proof.
See [References], page 14. ∎
###### Corollary 2.1.
Let $p$ be a prime number. Then:
If $p\,=\,5$, then $\lambda\,=\,1-\zeta_{5}$ is the unique prime over $5$ in
$\mathbb{Q}(\zeta_{5})$.
If $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then $l$ splits completely in
$\mathbb{Q}(\zeta_{5})$ as $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$, where the
$\pi_{i}$ are primes in $\mathbb{Q}(\zeta_{5})$.
If $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$, then $q$ is inert in
$\mathbb{Q}(\zeta_{5})$.
If $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, then $p$ splits in
$\mathbb{Q}(\zeta_{5})$ as $p\,=\,\pi_{1}\pi_{2}$, where the $\pi_{i}$ are primes
in $\mathbb{Q}(\zeta_{5})$.
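For instance (a numerical illustration of Theorem 2.1 and Corollary 2.1): $11\,\equiv\,1\,(\mathrm{mod}\,5)$ gives $f=1$, so $11$ splits completely into $\phi(5)/1\,=\,4$ primes of $\mathbb{Q}(\zeta_{5})$; $19\,\equiv\,-1\,(\mathrm{mod}\,5)$ satisfies $19^{2}\,\equiv\,1\,(\mathrm{mod}\,5)$, so $f=2$ and $19$ splits into $\phi(5)/2\,=\,2$ primes of residue degree $2$; and $2$ and $3$, being $\equiv\,\pm 2\,(\mathrm{mod}\,5)$, have $f=4$ and remain inert.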
Before proving the main theorem, we prove the existence of a unique prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing the radicand $n$ in the case
rank $(C_{k,5}^{(\sigma)})\,=\,2$.
###### Theorem 2.2.
If rank $C_{k,5}^{(\sigma)}=2$, then $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$, and there
is a unique prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ that divides the radicand
$n$. Furthermore, we have $(k/k_{0})^{*}\,=\,k_{5}^{(1)}$.
###### Proof.
If rank $(C_{k,5}^{(\sigma)})=2$, then the order of $C_{k,5}^{(\sigma)}$ is at
least $25$. Since $C_{k,5}^{(\sigma)}\subseteq C_{k,5}$ and $|C_{k,5}|\,=\,25$
(because $C_{k,5}$ is of type $(5,5)$), we have $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$;
that is, all ideal classes are ambiguous.
By class field theory, $C_{k,5}^{1-\sigma}$ corresponds to $(k/k_{0})^{*}$ and
$Gal(k_{5}^{(1)}/k)\cong C_{k,5}$. Since $C_{k,5}^{(\sigma)}\,=\,C_{k,5}$, we
get $C_{k,5}^{1-\sigma}\,=\,\\{1\\}$, hence $(k/k_{0})^{*}\,=\,k_{5}^{(1)}$,
and by [References, Proposition 5.8] we know the Hilbert $5$-class field of $k$
explicitly in this case.
Assume now, for contradiction, that no prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$
divides $n$.
We can write $n$ as
$n\,=\,5^{e}q_{1}^{f_{1}}\cdots q_{r}^{f_{r}}p_{1}^{g_{1}}\cdots p_{s}^{g_{s}}$ with
$q_{i}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and $p_{j}\,\equiv\,-1\,(\mathrm{mod}\,5)$,
where $f_{i}\in\\{1,2,3,4\\}$ for $1\leq i\leq r$, $g_{j}\in\\{1,2,3,4\\}$ for
$1\leq j\leq s$, and $e\in\\{0,1,2,3,4\\}$. By Corollary 2.1, each $q_{i}$ is inert in
$k_{0}$, and $q_{i}$ is ramified in $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$. Also
by Corollary 2.1, each $p_{j}$ splits in $k_{0}$ as $p_{j}\,=\,\pi_{1}\pi_{2}$,
where $\pi_{1},\pi_{2}$ are primes in $k_{0}$, and $p_{j}$ is ramified in
$\Gamma$. Hence the prime ideals of $k_{0}$ ramified in $k/k_{0}$ are the $q_{i}$,
the $\pi_{j}$, and the ideal $(\lambda)$ with $\lambda\,=\,1-\zeta_{5}$ if $5$ is
ramified in $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$.
If $\lambda$ is ramified in $k/k_{0}$, we denote by $\mathcal{I}$ the prime ideal
of $k$ above $\lambda$; for $1\leq i\leq r$, by $\mathcal{Q}_{i}$ the prime ideal
of $k$ above $q_{i}$; and for $1\leq j\leq s$, by $\mathcal{P}_{j}$ the prime
ideal of $k$ above $\pi_{j}$, where $\pi_{j}$ is a prime of $k_{0}$ above
$p_{j}$. Evidently $\mathcal{I}^{5}\,=\,(\lambda)$,
$\mathcal{Q}_{i}^{5}\,=\,(q_{i})$, $\mathcal{P}_{j}^{5}\,=\,(\pi_{j})$ in $k$.
We denote by $C_{k,s}^{(\sigma)}$ the group of strongly ambiguous ideal classes.
We have to treat two cases:
(i) $C_{k,s}^{(\sigma)}\,\neq\,C_{k,5}^{(\sigma)}\,=\,C_{k,5}$:
Let $C_{k,5}^{+}\,=\,\\{\mathcal{A}\in\,C_{k,5}\,|\,\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}\\}$
and $C_{k,5}^{-}\,=\,\\{\mathcal{A}\in\,C_{k,5}\,|\,\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}^{-1}\\}$
be nontrivial subgroups of $C_{k,5}$. We have
$(C_{k,5}^{(\sigma)})^{+}\,=\,C_{k,5}^{+}$, and by [References, Lemma 6.2]
$C_{k,5}^{+}\,\simeq\,C_{\Gamma,5}$, i.e., $C_{k,5}^{+}$ is generated by
$5$-classes coming from $\Gamma$ (and $|C_{k,5}^{+}|\,=\,5$). The strongly
ambiguous classes are those of the primes ramified in $k/k_{0}$, namely
$[\mathcal{Q}_{i}]$ for $1\leq i\leq r$, $[\mathcal{P}_{j}]$ for $1\leq j\leq s$,
and $[\mathcal{I}]$ if $\lambda$ is ramified in $k/k_{0}$. It is easy to see that
$[\mathcal{Q}_{i}^{\tau^{2}}]\,=\,[\mathcal{Q}_{i}]^{\tau^{2}}\,=\,[\mathcal{Q}_{i}]$,
$[\mathcal{P}_{j}^{\tau^{2}}]\,=\,[\mathcal{P}_{j}]^{\tau^{2}}\,=\,[\mathcal{P}_{j}]$,
and $[\mathcal{I}^{\tau^{2}}]\,=\,[\mathcal{I}]^{\tau^{2}}\,=\,[\mathcal{I}]$.
We know that $C_{k,5}/C_{k,s}^{(\sigma)}$ is generated by the images of elements
of $C_{k,5}^{+}$. Since $C_{k,s}^{(\sigma)}$ is generated by the $[\mathcal{Q}_{i}]$,
the $[\mathcal{P}_{j}]$, and $[\mathcal{I}]$ if $\lambda$ is ramified in $k/k_{0}$,
all elements of $C_{k,5}$ are fixed by $\tau^{2}$, in particular those of
$C_{k,5}^{-}$. Therefore, for all $\mathcal{A}\in C_{k,5}^{-}$ we have
$\mathcal{A}^{\tau^{2}}\,=\,\mathcal{A}^{-1}\,=\,\mathcal{A}$, i.e.,
$\mathcal{A}^{2}\,=\,1$; since $\mathcal{A}$ is a $5$-class, $\mathcal{A}\,=\,1$.
Hence $C_{k,5}^{-}\,=\,\\{1\\}$, which contradicts the fact that $|C_{k,5}^{-}|=5$.
(ii) $C_{k,s}^{(\sigma)}\,=\,C_{k,5}^{(\sigma)}\,=\,C_{k,5}$:
In this case $C_{k,5}$ is generated by the $[\mathcal{Q}_{i}]$, the
$[\mathcal{P}_{j}]$, and $[\mathcal{I}]$ if $\lambda$ is ramified in $k/k_{0}$,
and as in (i) all the classes are fixed by $\tau^{2}$, which gives the same
contradiction. Thus we have proved the existence of a prime $l$ dividing $n$ such
that $l\,\equiv\,1\,(\mathrm{mod}\,5)$.
According to [References, Section 5.1], we have rank
$C_{k,5}^{(\sigma)}\,=\,d-3+q^{*}$, where $d$ is the number of prime ideals of
$k_{0}$ ramified in $k$ and $q^{*}$ is an index taking the value $0$, $1$ or $2$.
If there were two primes $l_{1}$ and $l_{2}$ dividing $n$ with
$l_{i}\,\equiv\,1\,(\mathrm{mod}\,5)$, $(i=1,2)$, then $d\geq 8$ and rank
$C_{k,5}^{(\sigma)}$ would be at least $5$, which is impossible. Thus if rank
$C_{k,5}^{(\sigma)}\,=\,2$, there exists a unique prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ dividing $n$. ∎
### 2.1 Proof of Theorem 1.1
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n\geq
2$ is a $5^{th}$ power-free integer, $k\,=\,\Gamma(\zeta_{5})$ be its normal
closure, and $C_{k,5}$ be the $5$-class group of $k$. Let $C_{k,5}^{(\sigma)}$
be the group of ambiguous ideal classes under the action of
$Gal(k/k_{0})\,=\,\langle\sigma\rangle$. Since $k_{0}$ has class number $1$,
$C_{k,5}^{(\sigma)}$ is an elementary abelian $5$-group, so rank
$(C_{k,5}^{(\sigma)})\,=\,1$ or $2$.
According to [References,section 5.1], the rank of $C_{k,5}^{(\sigma)}$ is
given as follows:
rank $(C_{k,5}^{(\sigma)})\,=\,d-3+q^{*}$
where $d$ is the number of prime ideals of $k_{0}$ ramified in $k$, and
$q^{\ast}\,=\,0,\,1$ or $2$ according to whether $\zeta_{5}$ and $1+\zeta_{5}$
are norms of elements of $k^{*}\,=\,k\setminus\\{0\\}$ or not, as follows:
$q^{*}$ = $\begin{cases}2&\text{if }\,\zeta,1+\zeta\in N_{k/k_{0}}(k^{*}),\\\ 1&\text{if }\,\zeta^{i}(1+\zeta)^{j}\in N_{k/k_{0}}(k^{*})\,\text{ for some }i\text{ and }j,\\\ 0&\text{if }\,\zeta^{i}(1+\zeta)^{j}\notin N_{k/k_{0}}(k^{*})\,\text{ for }0\leq i,j\leq 4\text{ and }i+j\neq 0.\\\ \end{cases}$
We can write $n$ as $n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}\cdots\pi_{g}^{e_{g}}$,
where $\mu$ is a unit in $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$,
$\pi_{1},\ldots,\pi_{g}$ are primes in $k_{0}$, $e\in\\{0,1,2,3,4\\}$, and
$e_{i}\in\\{1,2,3,4\\}$ for $1\leq i\leq g$. According to [References, Lemma 5.1]
we have $\zeta_{5}\,\in
N_{k/k_{0}}(k^{*})\,\Longleftrightarrow\,N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,\equiv\,1\,(\mathrm{mod}\,25)$
for all $i$, and from [References, Proposition 8.2], if $\pi$ is a prime of
$\mathcal{O}_{k_{0}}$ over a prime $p\,\in\,\mathbb{Z}$, then
$N_{k_{0}/\mathbb{Q}}((\pi))\,=\,p^{f}$, where $f$ is the least positive integer
such that $p^{f}\,\equiv\,1\,(\mathrm{mod}\,5)$. Hence $\zeta_{5}$ is a norm of an
element of $k^{*}\,=\,k\setminus\\{0\\}$ if and only if
$p^{f}\,\equiv\,1\,(\mathrm{mod}\,25)$ for every prime $p\,\neq\,5$ dividing $n$:
1. $-$
If $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then by Corollary 2.1,
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ and
$N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,=\,l$, so to have
$N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,\equiv\,1\,(\mathrm{mod}\,25)$ the
prime $l$ must satisfy $l\,\equiv\,1\,(\mathrm{mod}\,25)$.
2. $-$
If $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$, then $q$ is inert in $k_{0}$ and
$N_{k_{0}/\mathbb{Q}}((q))\,=\,q^{4}$, so to have
$N_{k_{0}/\mathbb{Q}}((q))\,\equiv\,1\,(\mathrm{mod}\,25)$ the prime $q$
must satisfy $q\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$.
3. $-$
If $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, then by Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$ and $N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,=\,p^{2}$, so to
have $N_{k_{0}/\mathbb{Q}}((\pi_{i}))\,\equiv\,1\,(\mathrm{mod}\,25)$ the
prime $p$ must satisfy $p\,\equiv\,-1\,(\mathrm{mod}\,25)$.
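A remark that may help the reader keep track of these congruences (our observation, not from [References]): the conditions above all come from the fact that the solutions of $x^{4}\,\equiv\,1\,(\mathrm{mod}\,25)$ are exactly $x\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$, since $7^{2}\,=\,49\,\equiv\,-1\,(\mathrm{mod}\,25)$ and the unit group modulo $25$ is cyclic of order $20$. In particular, $q^{4}\,\equiv\,1\,(\mathrm{mod}\,25)$ with $q\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ forces $q\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$, and $p^{2}\,\equiv\,1\,(\mathrm{mod}\,25)$ with $p\,\equiv\,-1\,(\mathrm{mod}\,5)$ forces $p\,\equiv\,-1\,(\mathrm{mod}\,25)$.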
(1) If rank $(C_{k,5}^{(\sigma)})\,=\,1$, we get that $d+q^{*}\,=\,4$, so
there are three possible cases as follows:
* •
Case 1: $q^{*}=0\,\,\mathrm{and}\,\,d=4$,
* •
Case 2: $q^{*}=1\,\,\mathrm{and}\,\,d=3$,
* •
Case 3: $q^{*}=2\,\,\mathrm{and}\,\,d=2$,
We will successively treat the three cases to prove the first point of the
main theorem.
* •
Case 1: we have $q^{*}=0$ and $d=4$, so the number of prime ideals ramified in
$k/k_{0}$ must be $4$. According to the proof of Theorem 2.2, if $n$ is divisible
by a prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, then the prime $l$ is unique.
\- If $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, then by Corollary 2.1,
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$, where the $\pi_{i}$ are primes of $k_{0}$.
The prime $l$ is ramified in $\Gamma$, because
$\mathrm{disc}(\Gamma/\mathbb{Q})\,=\,5^{5}n^{4}$ and $l$ divides this
discriminant; hence $\pi_{1},\pi_{2},\pi_{3}$ and $\pi_{4}$ are ramified in $k$
and we already have $d=4$. Therefore $l$ is the only prime dividing $n$, because
if $n$ were divisible by another prime we would obtain $d>4$, which is impossible
in all three cases. So $n\,=\,l^{e_{1}}$ with $l\,\equiv\,1\,(\mathrm{mod}\,5)$
and $e_{1}\in\\{1,2,3,4\\}$. According to [References, Lemma 5.1], $(\lambda)$
ramifies in $k/k_{0}\,\Longleftrightarrow\,n\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$, with $\lambda\,=\,1-\zeta_{5}$; so in the case
$n\,=\,l^{e_{1}}$ with $l\,\equiv\,1\,(\mathrm{mod}\,5)$ we must have
$n\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$, and the only $l$ satisfying
$n\,=\,l^{e_{1}}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ is
$l\,\equiv\,1\,(\mathrm{mod}\,25)$. For such a prime we have
$q^{*}\,\geq\,1$, because $\zeta_{5}$ is a norm, which is impossible in this
case.
We note that if $n\,=\,l^{e_{1}}$ with $l\,\equiv\,1\,(\mathrm{mod}\,25)$ and
$q^{*}\,=\,2$, there are no fields $k$ of type $(5,5)$, because then
rank$(C_{k,5}^{(\sigma)})\,=\,3$; and if $q^{*}\,=\,1$ we have
rank$(C_{k,5}^{(\sigma)})\,=\,2$, which will be treated in the second point of
the proof.
\- If no prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, we have two forms of $n$:
1. $(i)$
$n\,=\,5^{e}p^{e_{1}}q_{1}^{e_{2}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
with $p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$ and $e,e_{1},e_{2}\in\\{1,2,3,4\\}$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$, where $\pi_{1},\,\pi_{2}$ are primes in $k_{0}$, and
$q_{1}$ is inert in $k_{0}$. The prime $p$ is ramified in $\Gamma$, hence
$\pi_{1},\,\pi_{2}$ are ramified in $k$, and $q_{1}$ is ramified in $\Gamma$ as
well. Since $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$, so we get $d=4$. To verify that
$n\,=\,5^{e}p^{e_{1}}q_{1}^{e_{2}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
we can choose $e_{1}\,=\,2$ and $e_{2}\,=\,1$, because
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$,
so $n\,=\,5^{e}p^{2}q_{1}$ with $e\in\\{1,2,3,4\\}$. On the one hand, if
$e\,=\,2,3,4$ we have $n\,\equiv\,0\,(\mathrm{mod}\,25)$; on the other hand, if
$n\,=\,5p^{2}q_{1}$ we have $p^{2}q_{1}\,=\,5\alpha+2$ or
$p^{2}q_{1}\,=\,5\alpha^{\prime}+3$ with
$\alpha,\alpha^{\prime}\in\mathbb{Z}$, so
$5p^{2}q_{1}\,\equiv\,10\,(\mathrm{mod}\,25)$ or
$5p^{2}q_{1}\,\equiv\,15\,(\mathrm{mod}\,25)$; therefore we conclude that
$n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$. According to [References,
Theorem 5.18], if $p\,\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$, then rank $(C_{k,5})$ is at least $3$, which contradicts
the fact that $C_{k,5}$ is of type $(5,5)$; and by the proofs of [References,
Theorem 5.13, Theorem 5.15], for the other congruence cases of $p$ and $q_{1}$
we have $q^{*}\,=\,1$, which is impossible in this case.
2. $(ii)$
$n\,=\,p^{e}q_{1}^{e_{1}}q_{2}^{e_{2}}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$,
$q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$ and $e,e_{1},e_{2}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$,
we can choose $e_{1}\,=\,2$, $e_{2}\,=\,1$, i.e., $n\,=\,p^{e}q_{1}^{2}q_{2}$
with $e\in\\{1,2,3,4\\}$. By Corollary 2.1, $p\,=\,\pi_{1}\pi_{2}$, where
$\pi_{1},\,\pi_{2}$ are primes in $k_{0}$, and $q_{1},\,q_{2}$ are inert in
$k_{0}$. The primes $p,\,q_{1}$ and $q_{2}$ are ramified in $\Gamma$, so
$\pi_{1},\,\pi_{2},\,q_{1}$ and $q_{2}$ are ramified in $k$, hence $d=4$. The
condition $n\,=\,p^{e}q_{1}^{2}q_{2}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$
is not satisfied for all
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$, $q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$ and
$q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$; combining all the congruence cases, we
obtain
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$, $q_{1}\,\equiv\,12\,(\mathrm{mod}\,25)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,25)$.
By [References, Lemma 5.1], since
$N_{k_{0}/\mathbb{Q}}(\pi_{i})\,\equiv\,1\,(\mathrm{mod}\,25)$, $N_{k_{0}/\mathbb{Q}}(q_{1})\,\not\equiv\,1\,(\mathrm{mod}\,25)$ and $N_{k_{0}/\mathbb{Q}}(q_{2})\,\not\equiv\,1\,(\mathrm{mod}\,25)$,
we have $q^{*}\,=\,1$, which is impossible in this case.
We deduce that in Case 1 there is no radicand $n$ for which
rank$(C_{k,5}^{(\sigma)})\,=\,1$.
* •
Case 2: we have $q^{*}=1$ and $d=3$, so the number of prime ideals ramified in
$k/k_{0}$ must be $3$. According to Case 1, $n$ is not divisible by any prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ in this case.
We can write $n$ as $n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}\cdots\pi_{g}^{e_{g}}$,
where $\mu$ is a unit in $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$,
$\pi_{1},\ldots,\pi_{g}$ are primes in $k_{0}$, $e\in\\{0,1,2,3,4\\}$, and
$e_{i}\in\\{1,2,3,4\\}$ for $1\leq i\leq g$.
By [References, Proposition 5.2], $d\,=\,g$ or $g+1$ according to whether
$n\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$. To obtain $d=3$, $n$ must be written in
$\mathcal{O}_{k_{0}}$ as $n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}$ or
$n\,=\,\lambda^{e}\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}$; therefore we have three
forms of $n$:
1. $(i)$
$n\,=\,5^{e}p^{e_{1}}\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$,
we can choose $e_{1}\,=\,1$, i.e., $n\,=\,5^{e}p$ with $e\in\\{1,2,3,4\\}$. By
Corollary 2.1, $p\,=\,\pi_{1}\pi_{2}$, where $\pi_{1},\,\pi_{2}$ are primes in
$k_{0}$; the prime $p$ is ramified in $\Gamma$, so $\pi_{1},\,\pi_{2}$ are
ramified in $k$, and since $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
according to [References, Lemma 5.1], $\lambda\,=\,1-\zeta_{5}$ is ramified in
$k$, so we obtain $d=3$. The condition $n\,\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ is satisfied for all $p\,\equiv\,-1\,(\mathrm{mod}\,5)$
because, if $e\,=\,2,3,4$, we have
$n\,=\,5^{e}p\,\equiv\,0\,(\mathrm{mod}\,25)$, and if $e=1$, i.e., $n\,=\,5p$, we
have $p\,=\,5\alpha+4$, which implies $5p\,=\,25\alpha+20$ with $\alpha\in\mathbb{Z}$,
so $n\,=\,5p\,\equiv\,20\,(\mathrm{mod}\,25)$. According to the proof of
[References, Theorem 5.15], if $p\,\equiv\,-1\,(\mathrm{mod}\,25)$ we have
$q^{*}\,=\,2$, and if $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ we have
$q^{*}\,=\,1$; we conclude that $n\,=\,5^{e}p\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$.
We note that the computational number theory system PARI [References] shows
that if $n\,=\,5^{e}p\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$, the field $k$ is not always of type
$(5,5)$.
2. $(ii)$
$n\,=\,5^{e}q_{1}^{e_{1}}q_{2}^{e_{2}}\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$
and $e,e_{1},e_{2}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$,
we can choose $e_{1}\,=\,2$ and $e_{2}\,=\,1$, i.e., $n\,=\,5^{e}q_{1}^{2}q_{2}$
with $e\in\\{1,2,3,4\\}$. By Corollary 2.1, $q_{1}$ and $q_{2}$ are inert in
$k_{0}$, and $q_{1}$, $q_{2}$ are ramified in $\Gamma$, hence $q_{1},\,q_{2}$
are ramified in $k$. Since $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$,
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$, so we get $d=3$. The
condition $n\,\not\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)$ is satisfied for all
$q_{1}\,\equiv\,2\,(\mathrm{mod}\,5)$, $q_{2}\,\equiv\,3\,(\mathrm{mod}\,5)$:
if $e\,=\,2,3,4$, we have
$n\,=\,5^{e}q_{1}^{2}q_{2}\,\equiv\,0\,(\mathrm{mod}\,25)$, and if
$n\,=\,5q_{1}^{2}q_{2}$, we have $q_{1}^{2}q_{2}\,=\,5\alpha+2$ with
$\alpha\in\mathbb{Z}$, so $5q_{1}^{2}q_{2}\,\equiv\,10\,(\mathrm{mod}\,25)$.
If $q_{1}\,\equiv\,7\,(\mathrm{mod}\,25)$ and
$q_{2}\,\equiv\,-7\,(\mathrm{mod}\,25)$, we have $q^{*}\,=\,2$, and if
$q_{1}\,\not\equiv\,7\,(\mathrm{mod}\,25)$ or
$q_{2}\,\not\equiv\,-7\,(\mathrm{mod}\,25)$, according to the proof of
[References, Theorem 5.13] we have $q^{*}\,=\,1$; but for this form of the
radicand $n$ the computational number theory system PARI [References] shows
that $C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
3. $(iii)$
$n\,=\,p^{e}q_{1}^{e_{1}}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5),\,\,q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$
and $e,e_{1}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$
we can choose $e_{1}\,=\,1$, i.e. $n\,=\,p^{e}q_{1}$ with $e\in\\{1,2,3,4\\}$.
By Corollary 2.1, $p\,=\,\pi_{1}\pi_{2}$ where $\pi_{1},\,\pi_{2}$ are primes
in $k_{0}$ and $q_{1}$ is inert in $k_{0}$. Since $p$ is ramified in
$\Gamma$, the primes $\pi_{1},\,\pi_{2}$ are ramified in $k$; $q_{1}$ is ramified in
$\Gamma$ too, so we obtain $d=3$. The condition $n\,=\,p^{e}q_{1}\,\equiv\,\pm
1\pm 7\,(\mathrm{mod}\,25)$ is not verified for all
$p\,\equiv\,-1\,(\mathrm{mod}\,5),\,\,q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$, so we combine all the cases of congruence and we obtain
that $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)$. According to [References, Theorem 5.18], if
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\equiv\,\pm
7\,(\mathrm{mod}\,25)$ we have rank $C_{k,5}\geq 3$, which is impossible in our
cases. If $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)$, according to [References, Theorem 5.13] we have
$q^{*}\,=\,1$. Using the computational number theory system PARI/GP
[References], if $n\,=\,p^{e}q_{1}$ with
$p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$ and $q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)$, the field $k$ is not always of type $(5,5)$.
We summarize all forms of integer $n$ in the case 2, for which $k$ is of type
$(5,5)$ and rank $(C_{k,5}^{(\sigma)})\,=\,1$ as follows:
$n=\left\\{\begin{array}[]{ll}5^{e}p\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)&\text{ with }p\,\not\equiv\,-1\,(\mathrm{mod}\,25),\\\
p^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with
}p\,\not\equiv\,-1\,(\mathrm{mod}\,25)\text{ and }q_{1}\,\not\equiv\,\pm
7\,(\mathrm{mod}\,25)\\\ \end{array}\right.$ (3)
* •
Case 3: we have $q^{*}=2$ and $d=2$, so the number of prime ideals which are
ramified in $k/k_{0}$ should be $2$. Let
$n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}\cdots\pi_{g}^{e_{g}}$, where $\mu$ is a
unit in $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, $\pi_{1},\ldots,\pi_{g}$ are
primes in $k_{0}$ and $e\in\\{0,1,2,3,4\\}$, $e_{i}\in\\{1,2,3,4\\}$ for
$1\leq i\leq g$.
We have $d\,=\,g$ or $g+1$ according to whether $n\,\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$. To
obtain $d=2$, $n$ must be written in $\mathcal{O}_{k_{0}}$ as
$n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}$ or $n\,=\,\lambda^{e}\pi_{1}^{e_{1}}$;
therefore we have three forms of $n$:
1. $(i)$
$n\,=\,5^{e}q_{1}^{e_{1}}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$
we can choose $e_{1}\,=\,1$, i.e. $n\,=\,5^{e}q_{1}$ with $e\in\\{1,2,3,4\\}$.
Since $q^{*}=2$, we have $q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$. By
Corollary 2.1, $q_{1}$ is inert in $k_{0}$; since $q_{1}$ is ramified in
$\Gamma$, $q_{1}$ is also ramified in $k$, and since $n\,\not\equiv\,\pm
1\pm 7\,(\mathrm{mod}\,25)$, according to [References, Lemma 5.1],
$\lambda\,=\,1-\zeta_{5}$ is ramified in $k$, so we obtain $d=2$. The condition
$n\,\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ is verified for all
$q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$, because if $e\,=\,2,3,4$ we
have $n\,=\,5^{e}q_{1}\,\equiv\,0\,(\mathrm{mod}\,25)$, and if $e=1$, i.e.
$n\,=\,5q_{1}$, we have $n\,=\,5q_{1}\,\equiv\,\pm 10\,(\mathrm{mod}\,25)$. We
conclude that $n\,=\,5^{e}q_{1}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
with $q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$, but for this form of the
radicand $n$ the computational number theory system PARI/GP [References] shows
that $C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
2. $(ii)$
$n\,=\,q_{1}^{e_{1}}q_{2}^{e_{2}}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
with
$q_{1}\,\equiv\,3\,(\mathrm{mod}\,5),\,\,q_{2}\,\equiv\,2\,(\mathrm{mod}\,5)$
and $e_{1},e_{2}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$
we can choose $e_{2}\,=\,1$, i.e. $n\,=\,q_{1}^{e_{1}}q_{2}$ with
$e_{1}\in\\{1,2,3,4\\}$. Since $q^{*}=2$, we have
$\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$, so we get
$q_{1}\,\equiv\,-7\,(\mathrm{mod}\,25)$ and
$q_{2}\,\equiv\,7\,(\mathrm{mod}\,25)$. By Corollary 2.1, $q_{1}$ and
$q_{2}$ are inert in $k_{0}$; since $q_{1},\,q_{2}$ are ramified in
$\Gamma$, $q_{1},\,q_{2}$ are ramified in $k$, so we obtain $d=2$. The
condition $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
is verified, because we have $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)$ for all $e_{1}$, so we conclude that
$n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,-7\,(\mathrm{mod}\,25),\,\,q_{2}\,\equiv\,7\,(\mathrm{mod}\,25)$,
but for this form of the radicand $n$ the computational number theory system
PARI/GP [References] shows that $C_{k,5}\,\simeq\,\mathbb{Z}/5\mathbb{Z}$.
3. $(iii)$
$n\,=\,p^{e}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,5)$ and $e\in\\{1,2,3,4\\}$. Since $q^{*}=2$,
we have $\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$, so we get
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$. By Corollary 2.1,
$p\,=\,\pi_{1}\pi_{2}$ where $\pi_{1},\,\pi_{2}$ are primes of $k_{0}$. The
prime $p$ is ramified in $\Gamma$, so $\pi_{1},\pi_{2}$ are ramified in $k$;
hence we have $d=2$. The condition $n\,=\,p^{e}\,\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)$ is verified for all
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$, so we conclude that
$n\,=\,p^{e}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$. Using the computational number theory
system PARI/GP [References], if $n\,=\,p^{e}\,\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)$ with $p\,\equiv\,-1\,(\mathrm{mod}\,25)$, the field $k$
is not always of type $(5,5)$.
We deduce that in Case 3 there is one form of $n$ for which the field
$k$ is of type $(5,5)$ and rank$(C_{k,5}^{(\sigma)})\,=\,1$, as follows:
$n\,=\,p^{e}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)\text{ with
}p\,\equiv\,-1\,(\mathrm{mod}\,25)$
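The congruence conditions collected in the rank-one case are easy to check numerically. The following Python sketch (added here for illustration; it is not part of the original PARI/GP computations) verifies, for small primes, that $n=5^{e}p$ with $p\equiv-1\,(\mathrm{mod}\,5)$ and $p\not\equiv-1\,(\mathrm{mod}\,25)$ never lies in the classes $\pm 1,\pm 7\,(\mathrm{mod}\,25)$, and that $n=p^{e}$ with $p\equiv-1\,(\mathrm{mod}\,25)$ always does.

```python
from sympy import primerange

ALLOWED = {1, 7, 18, 24}          # the residues +-1, +-7 modulo 25

# Form n = 5^e * p with p = -1 (mod 5) and p != -1 (mod 25):
# the radicand is never congruent to +-1, +-7 (mod 25).
for p in primerange(3, 1000):
    if p % 5 == 4 and p % 25 != 24:
        for e in range(1, 5):
            assert (5**e * p) % 25 not in ALLOWED, (p, e)

# Form n = p^e with p = -1 (mod 25) (Case 3 above):
# the radicand is always congruent to +-1 (mod 25), hence to +-1, +-7.
for p in primerange(3, 2500):
    if p % 25 == 24:
        for e in range(1, 5):
            assert (p**e) % 25 in ALLOWED, (p, e)

print("congruence conditions verified for all tested primes")
```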
(2) If rank $(C_{k,5}^{(\sigma)})\,=\,2$, then $C_{k,5}\,=\,C_{k,5}^{(\sigma)}$.
According to [References, Section 5.1], the rank of $C_{k,5}^{(\sigma)}$ is
given as follows:
rank $(C_{k,5}^{(\sigma)})\,=\,d-3+q^{*}$
where $d$ and $q^{*}$ are defined as previously. Since rank
$(C_{k,5}^{(\sigma)})\,=\,2$ and $q^{*}\,=\,0,1\,\mathrm{or}\,2$, there are
three possible cases as follows:
* •
Case 1: $q^{*}=0\,\,\mathrm{and}\,\,d=5$,
* •
Case 2: $q^{*}=1\,\,\mathrm{and}\,\,d=4$,
* •
Case 3: $q^{*}=2\,\,\mathrm{and}\,\,d=3$,
We will treat the three cases to determine the forms of the radicand $n$. By
Theorem 2.2, $n$ must be divisible by a prime
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ in all cases, and since rank
$(C_{k,5}^{(\sigma)})\,=\,2$, the invariant $q^{*}$ should be $0$ or $1$,
because if $q^{*}\,=\,2$ and $l\,\equiv\,1\,(\mathrm{mod}\,5)$ divides $n$, we
get that the invariant $d$ is at least $4$, so we obtain that rank
$(C_{k,5}^{(\sigma)})$ is at least $3$.
* •
Case 1: we have $q^{*}=0$ and $d=5$, so the number of prime ideals which are
ramified in $k/k_{0}$ should be $5$. The radicand $n$ must be divisible by one
prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$.
We can write $n\,\in\,\mathcal{O}_{k_{0}}$ as
$n\,=\,\mu\lambda^{e}\pi_{1}^{e_{1}}\cdots\pi_{g}^{e_{g}}$, where $\mu$ is a
unit in $\mathcal{O}_{k_{0}}$, $\lambda=1-\zeta_{5}$, $\pi_{1},\ldots,\pi_{g}$ are
primes in $k_{0}$ and $e\in\\{0,1,2,3,4\\}$, $e_{i}\in\\{1,2,3,4\\}$ for
$1\leq i\leq g$.
$d\,=\,g$ or $g+1$ according to whether $n\,\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)$ or $n\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$. To
obtain $d=5$, $n$ must be written in $\mathcal{O}_{k_{0}}$ as:
$n\,=\,\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}\pi_{4}^{e_{4}}\pi_{5}^{e_{5}}$
or
$n\,=\,\lambda^{e}\pi_{1}^{e_{1}}\pi_{2}^{e_{2}}\pi_{3}^{e_{3}}\pi_{4}^{e_{4}}$,
therefore we have two forms of $n$:
1. $(i)$
$n\,=\,5^{e}l^{e_{1}}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $e,e_{1}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$
we can choose $e_{1}\,=\,1$, i.e. $n\,=\,5^{e}l$ with $e\in\\{1,2,3,4\\}$. By
Corollary 2.1, $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$ are primes
in $k_{0}$; since $l$ is ramified in $\Gamma$,
$\pi_{1},\,\pi_{2},\pi_{3}$ and $\pi_{4}$ are ramified in $k$, and since
$n\,\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$, according to [References,
Lemma 5.1], $\lambda\,=\,1-\zeta_{5}$ is ramified in $k$ so we obtain $d=5$.
The condition $n\,\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ is verified for
all $l\,\equiv\,1\,(\mathrm{mod}\,5)$, because if $e\,=\,2,3,4$ we have
$n\,=\,5^{e}l\,\equiv\,0\,(\mathrm{mod}\,25)$, and if $e=1$, i.e. $n\,=\,5l$, we have
$l\,=\,5\alpha+1$ with $\alpha\in\mathbb{Z}$, which implies $5l\,=\,25\alpha+5$,
so $n\,=\,5l\,\equiv\,5\,(\mathrm{mod}\,25)$. If
$l\,\equiv\,1\,(\mathrm{mod}\,25)$ we have $\zeta_{5}\,\in\,N_{k/k_{0}}(k^{*})$,
so $q^{*}\,\geq\,1$, which is impossible in this case. We conclude that
$n\,=\,5^{e}l\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\not\equiv\,1(\mathrm{mod}\,25)$. Using the computational number theory
system PARI/GP [References], if $n\,=\,5^{e}l\not\equiv\,\pm 1\pm
7\,(\mathrm{mod}\,25)$ with $l\,\not\equiv\,1(\mathrm{mod}\,25)$, the field
$k$ is not always of type $(5,5)$.
2. $(ii)$
$n\,=\,l^{e}q_{1}^{e_{1}}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5),\,\,q_{1}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$
and $e,e_{1}\in\\{1,2,3,4\\}$. As
$\mathbb{Q}(\sqrt[5]{ab^{2}c^{3}d^{4}})\,=\,\mathbb{Q}(\sqrt[5]{a^{2}b^{4}cd^{3}})\,=\,\mathbb{Q}(\sqrt[5]{a^{3}bc^{4}d^{2}})\,=\,\mathbb{Q}(\sqrt[5]{a^{4}b^{3}c^{2}d})$
we can choose $e_{1}\,=\,1$, i.e. $n\,=\,l^{e}q_{1}$ with $e\in\\{1,2,3,4\\}$.
By Corollary 2.1, $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$ are
primes in $k_{0}$ and $q_{1}$ is inert in $k_{0}$. We know that $l$ is
ramified in $\Gamma$, so $\pi_{1},\,\pi_{2},\pi_{3}$ and $\pi_{4}$ are ramified
in $k$; $q_{1}$ is ramified in $\Gamma$ too, so we obtain $d=5$. The condition
$n\,=\,l^{e}q_{1}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ is not verified for
all $l\,\equiv\,1\,(\mathrm{mod}\,5),\,\,q_{1}\,\equiv\,\pm
2\,(\mathrm{mod}\,5)$, so we combine all the cases of congruence and we obtain
that $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm
7\,(\mathrm{mod}\,25)$. Using the computational number theory system PARI/GP
[References], if $n\,=\,l^{e}q_{1}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
with $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm
7\,(\mathrm{mod}\,25)$, the field $k$ is not always of type $(5,5)$.
We summarize all forms of integer $n$ in this case as follows:
$n=\left\\{\begin{array}[]{ll}5^{e}l\not\equiv\,\pm 1,\pm
7\,(\mathrm{mod}\,25)&\text{ with }l\,\not\equiv\,1\,(\mathrm{mod}\,25),\\\
l^{e}q_{1}\,\equiv\,\pm 1,\pm 7\,(\mathrm{mod}\,25)&\text{ with
}l\,\equiv\,1\,(\mathrm{mod}\,5)\text{ and }q_{1}\,\equiv\,\pm 2,\pm 3,\pm
7\,(\mathrm{mod}\,25)\\\ \end{array}\right.$ (4)
* •
Case 2: We have $q^{*}\,=\,1$ and $d=4$, so the number of prime ideals which
are ramified in $k/k_{0}$ should be $4$. The radicand $n$ must be divisible by
one prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, and according to Corollary 2.1
$l$ splits in $k_{0}$ as $l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$
are primes in $k_{0}$. Since $l$ is ramified in $\Gamma$,
$\pi_{1},\,\pi_{2},\pi_{3}$ and $\pi_{4}$ are ramified in $k$; hence if $n$ is
divisible by a prime other than $l$, the number of primes which are ramified in
$k/k_{0}$ would exceed $4$. Therefore we have a unique form of $n$ in this case, namely
$n\,=\,l^{e}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $e\in\\{1,2,3,4\\}$. The condition
$n\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ is verified only for
$l\,\equiv\,1\,(\mathrm{mod}\,25)$, and we have $q^{*}\,=\,1$. In conclusion
we get $n\,=\,l^{e}$ with $l\,\equiv\,1\,(\mathrm{mod}\,25)$. Using the
computational number theory system PARI/GP [References], if
$n\,=\,l^{e}\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\equiv\,1\,(\mathrm{mod}\,25)$, the field $k$ is not always of type $(5,5)$.
* •
Case 3: We have $q^{*}\,=\,2$ and $d=3$, so the number of prime ideals which
are ramified in $k/k_{0}$ should be $3$. The radicand $n$ must be divisible by
one prime $l\,\equiv\,1\,(\mathrm{mod}\,5)$, and according to Corollary 2.1,
$l\,=\,\pi_{1}\pi_{2}\pi_{3}\pi_{4}$ where the $\pi_{i}$ are primes in $k_{0}$.
Since $l$ is ramified in $\Gamma$, $\pi_{1},\,\pi_{2},\pi_{3}$ and
$\pi_{4}$ are ramified in $k$, so we deduce that the number of primes ramified
in $k/k_{0}$ is at least $4$; hence Case 3 does not occur.
## 3 Numerical examples
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is
a positive integer, $5^{th}$ power-free, and let
$k\,=\,\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ its normal closure. We assume that
$C_{k,5}$ is of type $(5,5)$. Using the system PARI/GP [References], we
illustrate our main result Theorem 1.1.
### 3.1 rank $(C_{k,5}^{(\sigma)})\,=\,1$
Table 1: $n\,=\,p^{e}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\equiv\,-1\,(\mathrm{mod}\,25)$
$p$ | $n\,=\,p^{e}$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
149 | 22201 = $149^{2}$ | -1 | -1 | 25 | $(5,5)$ | 1
199 | 7880599 = $199^{3}$ | -1 | -1 | 25 | $(5,5)$ | 1
349 | 42508549 = $349^{3}$ | -1 | -1 | 25 | $(5,5)$ | 1
449 | $449$ | -1 | -1 | 25 | $(5,5)$ | 1
559 | $559$ | -1 | -1 | 25 | $(5,5)$ | 1
1249 | $1249$ | -1 | -1 | 25 | $(5,5)$ | 1
1499 | $1499$ | -1 | -1 | 25 | $(5,5)$ | 1
1949 | $1949$ | -1 | -1 | 25 | $(5,5)$ | 1
1999 | $1999$ | -1 | -1 | 25 | $(5,5)$ | 1
2099 | $2099$ | -1 | -1 | 25 | $(5,5)$ | 1
Table 2: $n\,=\,q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
with $q_{i}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $q_{2}$ | $q_{2}\,(\mathrm{mod}\,5)$ | $q_{2}\,(\mathrm{mod}\,25)$ | $n\,=\,q_{1}^{e_{1}}q_{2}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
7 | 2 | 7 | 43 | 3 | -7 | 2107 = $7^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 193 | 3 | -7 | 1351 = $7\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 293 | 3 | -7 | 2051 = $7\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 43 | 3 | -7 | 492307 = $107^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 193 | 3 | -7 | 20651 = $107\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 293 | 3 | -7 | 31351 = $107\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 443 | 3 | -7 | 47401 = $107\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 43 | 3 | -7 | 6751 = $157\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 193 | 3 | -7 | 30301 = $157\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 2 | 7 | 443 | 3 | -7 | 69551 = $157\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 2 | 7 | 193 | 3 | -7 | 49601 = $257\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 2 | 7 | 293 | 3 | -7 | 75301 = $257\times 293$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
307 | 2 | 7 | 193 | 3 | -7 | 59251 = $307\times 193$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2 | 7 | 43 | 3 | -7 | 19651 = $457\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2 | 7 | 443 | 3 | -7 | 202451 = $457\times 443$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
557 | 2 | 7 | 43 | 3 | -7 | 23951 = $557\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
607 | 2 | 7 | 43 | 3 | -7 | 26101 = $607\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 3 : $n\,=\,5^{e}q_{1}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $n\,=\,5^{e}q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
7 | 175 = $5^{2}\times 7$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 535 = $5\times 107$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
157 | 19625 = $5^{3}\times 157$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
257 | 6425 = $5^{2}\times 257$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
307 | 38375 = $5^{3}\times 307$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
457 | 2285 = $5\times 457$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
557 | 2785 = $5\times 557$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
607 | 3035 = $5\times 607$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
757 | 3785 = $5\times 757$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
857 | 4285 = $5\times 857$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
907 | 4535 = $5\times 907$ | 2 | 7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
43 | 1075 = $5^{2}\times 43$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
193 | 120625 = $5^{4}\times 193$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
293 | 183125 = $5^{4}\times 293$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
443 | 11075 = $5^{2}\times 443$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
643 | 3215 = $5\times 643$ | 3 | -7 | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 4: $n\,=\,5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$
with $q_{1}$ or $q_{2}$ $\not\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $q_{2}$ | $q_{2}\,(\mathrm{mod}\,5)$ | $q_{2}\,(\mathrm{mod}\,25)$ | $n\,=\,5^{e}q_{1}^{2}q_{2}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
2 | 2 | 2 | 3 | 3 | 3 | 60 = $5\times 2^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 13 | 3 | 13 | 260 = $5\times 2^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 53 | 3 | 3 | 1060 = $5\times 2^{2}\times 53$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
2 | 2 | 2 | 23 | 3 | -2 | 460 = $5\times 2^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
7 | 2 | 7 | 3 | 3 | 3 | 735 = $5\times 7^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
17 | 2 | 17 | 3 | 3 | 3 | 108375 = $5^{3}\times 17^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
17 | 2 | 17 | 23 | 3 | -2 | 33235 = $5\times 17^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
37 | 2 | 12 | 3 | 3 | 3 | 20535 = $5\times 37^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
37 | 2 | 12 | 13 | 3 | 13 | 88985 = $5\times 37^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 3 | 3 | 3 | 33135 = $5\times 47^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 13 | 3 | 13 | 143585 = $5\times 47^{2}\times 13$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 23 | 3 | -2 | 254035 = $5\times 47^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
47 | 2 | -3 | 43 | 3 | -7 | 474935 = $5\times 47^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
107 | 2 | 7 | 23 | 3 | -2 | 1316635 = $5\times 107^{2}\times 23$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
67 | 2 | 17 | 3 | 3 | 3 | 67335 = $5\times 67^{2}\times 3$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
67 | 2 | 17 | 53 | 3 | 3 | 1189585 = $5\times 67^{2}\times 53$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
97 | 2 | -3 | 43 | 3 | -7 | 2022935 = $5\times 97^{2}\times 43$ | 5 | $\mathbb{Z}/5\mathbb{Z}$ | 1
Table 5: $n\,=\,p^{e}q_{1}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with $p\,\not\equiv\,-1\,(\mathrm{mod}\,25)$, $q_{1}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)$
$p$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $n\,=\,p^{e}q_{1}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
59 | -1 | 9 | 2 | 2 | 2 | 118 = $59\times 2$ | 25 | $(5,5)$ | 1
19 | -1 | 19 | 3 | 3 | 3 | 57 = $19\times 3$ | 25 | $(5,5)$ | 1
59 | -1 | 9 | 23 | 3 | -2 | 1357 = $59\times 23$ | 25 | $(5,5)$ | 1
359 | -1 | 9 | 2 | 2 | 2 | 718 = $359\times 2$ | 25 | $(5,5)$ | 1
409 | -1 | 9 | 2 | 2 | 2 | 818 = $409\times 2$ | 25 | $(5,5)$ | 1
59 | -1 | 9 | 127 | 2 | 2 | 7493 = $59\times 127$ | 25 | $(5,5)$ | 1
109 | -1 | 9 | 23 | 3 | -2 | 2507 = $109\times 23$ | 25 | $(5,5)$ | 1
509 | -1 | 9 | 2 | 2 | 2 | 1018 = $509\times 2$ | 25 | $(5,5)$ | 1
709 | -1 | 9 | 2 | 2 | 2 | 1418 = $709\times 2$ | 25 | $(5,5)$ | 1
19 | -1 | 19 | 53 | 3 | 3 | 1007 = $19\times 53$ | 25 | $(5,5)$ | 1
Table 6: $n\,=\,5^{e}p\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$p\,\not\equiv\,-1(\mathrm{mod}\,25)$
$p$ | $n\,=\,5^{e}p$ | $p\,(\mathrm{mod}\,5)$ | $p\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
19 | 475 = $5^{2}\times 19$ | -1 | 19 | 25 | (5,5) | 1
29 | 145 = $5\times 29$ | -1 | 4 | 25 | (5,5) | 1
59 | 7375 = $5^{3}\times 59$ | -1 | 9 | 25 | (5,5) | 1
89 | 55625 = $5^{4}\times 89$ | -1 | 14 | 25 | (5,5) | 1
109 | 2725 = $5^{2}\times 109$ | -1 | 9 | 25 | (5,5) | 1
229 | 28625 = $5^{3}\times 229$ | -1 | 4 | 25 | (5,5) | 1
239 | 1195 = $5\times 239$ | -1 | 14 | 25 | (5,5) | 1
269 | 6725 = $5^{2}\times 269$ | -1 | 19 | 25 | (5,5) | 1
379 | 236875 = $5^{4}\times 379$ | -1 | 4 | 25 | (5,5) | 1
389 | 1945 = $5\times 389$ | -1 | 14 | 25 | (5,5) | 1
### 3.2 rank $(C_{k,5}^{(\sigma)})\,=\,2$
Table 1: $n\,=\,5^{e}l\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with
$l\,\not\equiv\,1(\mathrm{mod}\,25)$
$l$ | $n\,=\,5^{e}l$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
11 | 55 = $5\times 11$ | 1 | 11 | 25 | (5,5) | 2
41 | 5125 = $5^{3}\times 41$ | 1 | -9 | 25 | (5,5) | 2
61 | 38125 = $5^{4}\times 61$ | 1 | 11 | 25 | (5,5) | 2
71 | 1775 = $5^{2}\times 71$ | 1 | -4 | 25 | (5,5) | 2
131 | 655 = $5\times 131$ | 1 | 6 | 25 | (5,5) | 2
181 | 113125 = $5^{4}\times 181$ | 1 | 6 | 25 | (5,5) | 2
241 | 30125 = $5^{3}\times 241$ | 1 | -9 | 25 | (5,5) | 2
311 | 1555 = $5\times 311$ | 1 | 11 | 25 | (5,5) | 2
331 | 8275 = $5^{2}\times 331$ | 1 | 6 | 25 | (5,5) | 2
431 | 2155 = $5\times 431$ | 1 | 6 | 25 | (5,5) | 2
Table 2: $n\,=\,l^{e}q_{1}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with $l\,\equiv\,1\,(\mathrm{mod}\,5)$ and $q_{1}\,\equiv\,\pm 2,\pm 3,\pm 7\,(\mathrm{mod}\,25)$
$l$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $q_{1}$ | $q_{1}\,(\mathrm{mod}\,5)$ | $q_{1}\,(\mathrm{mod}\,25)$ | $n\,=\,l^{e}q_{1}$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---|---|---|---
31 | 1 | 6 | 2 | 2 | 2 | $31\times 2$ | 25 | $(5,5)$ | 2
131 | 1 | 6 | 23 | 3 | -2 | $131^{3}\times 23$ | 25 | $(5,5)$ | 2
181 | 1 | 6 | 47 | 2 | -3 | $181\times 47$ | 25 | $(5,5)$ | 2
11 | 1 | 11 | 3 | 3 | 3 | $11\times 3$ | 25 | $(5,5)$ | 2
41 | 1 | 16 | 23 | 3 | -2 | $41\times 23$ | 25 | $(5,5)$ | 2
191 | 1 | 16 | 2 | 2 | 2 | $191\times 2$ | 25 | $(5,5)$ | 2
41 | 1 | 16 | 47 | 2 | -3 | $41^{2}\times 47$ | 25 | $(5,5)$ | 2
311 | 1 | 11 | 2 | 2 | 2 | $311^{4}\times 2$ | 25 | $(5,5)$ | 2
Table 3: $n\,=\,l^{e}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)$ with $l\,\equiv\,1\,(\mathrm{mod}\,25)$
$l$ | $n\,=\,l^{e}$ | $l\,(\mathrm{mod}\,5)$ | $l\,(\mathrm{mod}\,25)$ | $h_{k,5}$ | $C_{k,5}$ | rank $(C_{k,5}^{(\sigma)})$
---|---|---|---|---|---|---
151 | $151$ | 1 | 1 | 25 | (5,5) | 2
251 | $251^{2}$ | 1 | 1 | 25 | (5,5) | 2
601 | $601^{3}$ | 1 | 1 | 25 | (5,5) | 2
1051 | $1051^{4}$ | 1 | 1 | 25 | (5,5) | 2
1301 | $1301$ | 1 | 1 | 25 | (5,5) | 2
1451 | $1451^{2}$ | 1 | 1 | 25 | (5,5) | 2
1801 | $1801^{3}$ | 1 | 1 | 25 | (5,5) | 2
1901 | $1901^{4}$ | 1 | 1 | 25 | (5,5) | 2
2111 | $2111$ | 1 | 1 | 25 | (5,5) | 2
2131 | $2131^{2}$ | 1 | 1 | 25 | (5,5) | 2
## 4 Conjecture
In this article, we have classified some pure quintic fields
$\mathbb{Q}(\sqrt[5]{n})$; more precisely, we focused on those whose normal
closure $\mathbb{Q}(\sqrt[5]{n},\zeta_{5})$ possesses a $5$-class group of
type $(5,5)$, by studying the rank of the group of ambiguous classes, which can be
characterized by the radicand $n$.
To provide numerical examples, we used the system PARI/GP [References]. The
calculations carried out for some forms of $n$ show that the
$5$-class group $C_{k,5}$ of the field $k$ is isomorphic to
$\mathbb{Z}/5\mathbb{Z}$, which leads us to the following conjecture:
###### Conjecture 4.1.
Let $\Gamma\,=\,\mathbb{Q}(\sqrt[5]{n})$ be a pure quintic field, where $n$ is
a positive integer, $5^{th}$ power-free. Let $k\,=\Gamma(\zeta_{5})$ be the
normal closure of $\Gamma$. Denote by $C_{k,5}$ the $5$-class group of $k$, let
$q_{1},q_{2}\,\equiv\,\pm 2\,(\mathrm{mod}\,5)$ be primes, and let
$e,e_{1}\in\\{1,2,3,4\\}$.
If the radicand $n$ takes one of the following forms:
$n\,=\,\begin{cases}q_{1}^{e_{1}}q_{2}\,\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{i}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ 5^{e}q_{1}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ 5^{e}q_{1}^{2}q_{2}\not\equiv\,\pm 1\pm 7\,(\mathrm{mod}\,25)&\text{ with }\quad q_{1}\,\text{ or }\,q_{2}\,\not\equiv\,\pm 7\,(\mathrm{mod}\,25)\\\ \end{cases}$ (5)
Then $C_{k,5}$ is a cyclic group of order $5$.
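As a complement to the conjecture, the sketch below (our illustration; the helper names are hypothetical and only the congruence conditions are tested) enumerates small $5^{th}$ power-free radicands of the three forms in (5); each such $n$ is a candidate input for a PARI/GP computation of $C_{k,5}$.

```python
from sympy import primerange, factorint

PM17 = {1, 7, 18, 24}     # residues +-1, +-7 (mod 25)
PM7 = {7, 18}             # residues +-7 (mod 25)

def fifth_power_free(n):
    return all(e < 5 for e in factorint(n).values())

qs = [q for q in primerange(2, 200) if q % 5 in (2, 3)]   # q = +-2 (mod 5)
candidates = set()
for q1 in qs:
    # second form of (5): n = 5^e q1 with q1 = +-7 (mod 25)
    if q1 % 25 in PM7:
        for e in range(1, 5):
            n = 5**e * q1
            if n % 25 not in PM17:
                candidates.add(n)
    for q2 in qs:
        if q1 == q2:
            continue
        # first form of (5): n = q1^{e1} q2 with q1, q2 = +-7 (mod 25)
        if q1 % 25 in PM7 and q2 % 25 in PM7:
            for e1 in range(1, 5):
                n = q1**e1 * q2
                if n % 25 in PM17 and fifth_power_free(n):
                    candidates.add(n)
        # third form of (5): n = 5^e q1^2 q2 with q1 or q2 != +-7 (mod 25)
        if q1 % 25 not in PM7 or q2 % 25 not in PM7:
            for e in range(1, 5):
                n = 5**e * q1**2 * q2
                if n % 25 not in PM17 and fifth_power_free(n):
                    candidates.add(n)

print(sorted(candidates)[:15])   # e.g. 60 = 5*2^2*3 and 175 = 5^2*7, cf. Tables 3 and 4
```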
## References
* [1] S. Aouissi, M. Talbi, M. C. Ismaili and A. Azizi. _Fields $\mathbb{Q}(\sqrt[3]{d},\zeta_{3})$ whose $3$-class group is of type $(9,3)$_, Int.J.Number Theory (2019)
* [2] F. Gerth III, _On $3$-class groups of cyclic cubic extensions of certain number fields_, J. Number Theory 8 (1976), No. 1, 84–98.
* [3] F. Gerth III, _On $3$-class groups of pure cubic fields,_ J. Reine Angew. Math. 278/279 (1975), 52–62.
* [4] F. Gerth III, _On $3$-class groups of certain pure cubic fields_,Bull. Austral. Math. Soc. 72 (2005), 471–476.
* [5] David Grant. _A proof of quintic reciprocity using the arithmetic of $y^{2}=x^{5}+\frac{1}{4}$_. ACTA ARITHMETICA (1996).
* [6] G. Gras, _Sur les $l$-classes d'idéaux dans les extensions cycliques relatives de degré premier impair $l$_, _Annales de l'institut Fourier_ (1973).
* [7] E.Hecke, _Algebraic Number Theory_ , GTM 77, Springer-Verlag 1981.
* [8] M.Ishida, _The genus Fields of Algebraic Number Fields_. Lecture notes in Mathematics Vol 555, Springer-Verlag (1976).
* [9] K. Iwasawa, _A note on the group of units of an algebraic number field_ , J. Math. Pures Appl. (9) 35 (1956), 189–192.
* [10] K.Ireland and M.Rosen, _A Classical Introduction to modern Number Theory_. Graduate Texts in Mathematics 84, Springer-Verlag (1982).
* [11] G.J Janus, _Algebraic Number Fields_. Academic Press, New York-London (1973).
* [12] M. Kulkarni, D. Majumdar, B. Sury, _$l$-class groups of cyclic extension of prime degree $l$_, J. Ramanujan Math. Soc. 30, No. 4 (2015), 413–454.
* [13] H. Kobayashi, _Class numbers of pure quintic fields_ , Journal of Number Theory 160 (2016) 463-477.
* [14] C. Parry, _Class number relations in pure quintic fields_, Symposia Mathematica 15 (1975), 475–485.
* [15] Lawrence C. Washington, _Introduction to Cyclotomic Fields_ , Springer-Verlag New York Inc (1982).
* [16] The PARI Group, PARI/GP, Version 2.4.9, Bordeaux, 2017, http://pari.math.u-bordeaux.fr.
Fouad ELMOUHIB
Department of Mathematics and Computer Sciences,
Mohammed 1st University,
Oujda - Morocco,
<EMAIL_ADDRESS>
Mohamed TALBI
Regional Center of Professions of Education and Training in the Oriental,
Oujda - Morocco,
<EMAIL_ADDRESS>
Abdelmalek AZIZI
Department of Mathematics and Computer Sciences,
Mohammed 1st University,
Oujda - Morocco,
<EMAIL_ADDRESS>
# Frequency-Constrained Resilient Scheduling of Microgrid: A Distributionally
Robust Approach
Zhongda Chu, Ning Zhang, and Fei Teng Zhongda Chu and Fei Teng (Corresponding
author) are with Department of Electrical and Electronic Engineering, Imperial
College London, U.K. Ning Zhang is with Department of Electrical Engineering,
Tsinghua University, China.
###### Abstract
In order to prevent the potential frequency instability due to the high Power
Electronics (PE) penetration under an unintentional islanding event, this
paper presents a novel microgrid scheduling model which explicitly models the
system frequency dynamics as well as the long/short term uncertainty
associated with renewable energy resources and load. Synthetic Inertia (SI)
control is applied to regulate the active power output of the Inverter-Based
Generators (IBGs) to support the post-islanding frequency evolution. The
uncertainty associated with the noncritical load shedding is explicitly
modeled based on the distributionally robust formulation to ensure resilient
operation during islanding events. The resulting frequency constraints are
derived analytically and reformulated into Second-Order Cone (SOC) form, and
are further incorporated into the microgrid scheduling model, enabling optimal
SI provision of Renewable Energy Sources (RESs) from the microgrid
perspective. With the SOC relaxation of the AC power flow constraints, the
overall problem is formulated as a Mixed-Integer SOC Program (MISOCP).
The effectiveness of the proposed model is demonstrated on a modified IEEE
14-bus system.
###### Index Terms:
microgrid scheduling, frequency dynamics, synthetic inertia, distributionally
robust optimization
## I Introduction
Microgrids, distribution systems integrating large-scale RESs, storage
devices and controllable loads, have been a promising concept for reliable and
flexible electricity supply in an environmentally friendly manner [1]. They are
connected to the main grid at the Point of Common Coupling (PCC), providing
the capability of power transmission in both directions. Microgrids can
operate in islanded mode by disconnecting themselves from the main grid when
subjected to external disturbances, making them highly beneficial to customers
and utilities [2].
Due to the resiliency benefits of microgrids, extensive research has been
conducted on microgrid scheduling aiming to achieve optimal microgrid
operation and management [3, 4]. A stochastic microgrid scheduling model is
proposed in [5] to address the intermittency and variability of the RESs.
Applying the chance constrained approach, the authors in [6] formulate the
grid-connected microgrid scheduling problem as linear programming. The studies
in [7] develop a distributionally robust chance-constrained energy management
for islanded microgrids with the uncertainty of renewable generation captured
through a novel ambiguity set. [8] presents a resiliency-oriented microgrid
optimal scheduling model in islanded operating mode to minimize the microgrid
load curtailment. The demand response from Electric Vehicles (EV) is
incorporated into the islanded microgrid scheduling problem to minimize the
operating and EV charging costs [9].
Most of the literature in this vein focuses on microgrid scheduling in
either grid-connected or islanded mode, with less attention being paid to the
influence of the transition between the two modes. An islanding event
forces the power exchanged with the main grid to zero, resulting in a
power imbalance between generation and demand. Furthermore, due to the high PE
penetration in microgrids, this unbalanced power can lead to large frequency
deviations, and even blackouts and system collapse.
The authors in [10] consider multi-period islanding constraints in a
centralized microgrid optimal scheduling model. The solution is examined for
islanding to ensure the microgrid has sufficient online capacity for quickly
switching to the islanded mode on request. [11, 12] propose an optimal
scheduling strategy for microgrid operation considering constraints of
islanding capability. Probability of Successful Islanding (PSI) is introduced
as a new concept to ensure there is enough reserve to cover the load after
islanding events. Similarly, in [13], a new optimal strategy for scheduling of
reconfigurable microgrids is presented while maintaining the PSI above a
certain level. Considering the uncertainty of RESs and demand, the microgrid
scheduling is formulated as a chance-constrained global optimization problem.
However, the above methods only focus on determining the feasible region of
the spinning reserve to guarantee the PSI while neglecting the detailed
frequency dynamics and the frequency support from IBGs. This is improved in [14],
where a comprehensive optimization and real-time control framework for
maintaining frequency stability of multi-microgrid networks under islanding
events is proposed. The frequency dynamics and SI from IBGs are also
considered, leading to highly nonlinear frequency constraints. An iterative
algorithm is developed based on the cutting plane approach to incorporate the
post-contingency frequency constraints, which increases the implementation
complexity. Deep learning is applied in [15] to approximate the nonlinear
nadir constraint using a neural network such that an MILP-based microgrid
scheduling problem can be formulated. Nevertheless, the detailed SI modeling
from IBGs is not considered and the uncertainty associated with the
noncritical load shedding due to the forecasting errors and the relays has not
been discussed.
Motivated by previous research, this paper proposes a novel optimal microgrid
scheduling model considering state-of-the-art SI control schemes for PV-
storage systems and Wind Turbines (WTs) to minimize the microgrid operation
cost while ensuring frequency security after islanding events. The
contributions of this paper are summarized as follows:
* •
A novel microgrid scheduling model is proposed, which optimizes microgrid
operating conditions, noncritical load shedding as well as the SI from IBGs
such that the frequency constraints after microgrid islanding events can be
maintained.
* •
A distributionally robust approach is adopted to account for the uncertainty
associated with noncritical load shedding, leading to a distributionally robust
chance-constraint formulation of the frequency metrics.
* •
The highly nonlinear frequency constraints are further effectively
reformulated into SOC form and incorporated into the microgrid scheduling
model, resulting in an overall MISOCP together with the SOC approximation of
the AC power flow.
The rest of this paper is structured as follows. Section II introduces the SI
control modeling of RESs and the overall microgrid frequency dynamics, based
on which the analytical expressions of the frequency metrics are derived. The
uncertainty associated with the noncritical load shedding is discussed in
Section III, leading to distributionally robust frequency constraints and the
nonlinear nadir constraint is further reformulated into SOC form. The overall
MISOCP-based microgrid scheduling model is presented in Section IV, followed
by case studies (Section V) illustrating the value and performance of the
proposed model. Finally, section VI concludes the paper.
## II Frequency Dynamics of Microgrid with SI Provision from RESs
In this section, the microgrid frequency dynamics are investigated. The Rate
of Change of Frequency (RoCoF), frequency nadir and steady-state value
subsequent to islanding events are derived considering the provision of SI
from RESs. Due to the low efficiency during normal operation, the deloading
control strategy is not considered in this paper. Instead, in order to
maximize the energy captured by PV systems and WTs, we assume that they are
controlled through MPPT strategy during normal operation. For PV systems,
energy storage devices are used to provide frequency support during the system
disturbance, whereas stored kinetic energy is utilized in the WTs. It is the
synthetic inertia rather than the conventional droop control that is applied
in the proposed model to provide higher power injection at the beginning of
islanding events. It should be noted that only loss of generation (or increase
of load) is considered in this paper since the opposite situation can be
easily dealt with by shifting the operating point of the RESs away from the
optimal power point.
### II-A Modeling of Frequency Support from Energy Storage Devices
A state-of-the-art VSC control scheme previously described in [16] is adapted
to provide constant synthetic inertia during a disturbance. Furthermore, since
the power injection associated with synthetic inertia approaches zero at the
frequency nadir and has no impact on the steady-state frequency, constant
power can be injected into the grid after the frequency nadir to improve the
steady-state frequency if needed. The active power setpoint is adjusted
through the outer control loop to achieve the desired power injection to the
grid. Note that although it is possible to achieve more sophisticated control
laws, e.g., adaptive virtual synchronous machine or online model predictive
control, it would make the scheduling model much more complicated, thus not
being considered in this paper.
In order to determine an optimal and feasible frequency support provision
during the scheduling process, it is essential to consider the limitation of
both instantaneous power and total energy of the energy storage devices.
The instantaneous output power of the energy storage device $b\in\mathcal{B}$
during an entire frequency event, $\mathcal{T}_{0}$ is confined by the maximum
charging/discharging rate,
$\bar{P}_{b}^{\mathrm{ch}}$/$\bar{P}_{b}^{\mathrm{dch}}$:
$\bar{P}_{b}^{\mathrm{ch}}\leq
P_{b}-2H_{s_{b}}\Delta\dot{f}(t)\leq\bar{P}_{b}^{\mathrm{dch}},\,\,\;\forall
t\in\mathcal{T}_{0}$ (1)
where $P_{b}$ and $H_{s_{b}}$ are the normal output power and synthetic
inertia from the storage unit $b$. Equation (1) is equivalent to:
$\bar{P}_{b}^{\mathrm{ch}}\leq
P_{b}+\max_{t\in\mathcal{T}_{0}}\Big{\\{}2H_{s_{b}}\left|\Delta\dot{f}(t)\right|\Big{\\}}\leq\bar{P}_{b}^{\mathrm{dch}}.$
(2)
Since $\left|\Delta\dot{f}(t)\right|$ is constrained by the RoCoF limit:
$0\leq\left|\Delta\dot{f}(t)\right|\leq\Delta\dot{f}(0)\leq\Delta\dot{f}_{\mathrm{lim}},$
(3)
(2) can be rewritten in linear form as follows:
$\bar{P}_{b}^{\mathrm{ch}}\leq
P_{b}+2H_{s_{b}}\Delta\dot{f}_{\mathrm{lim}}\leq\bar{P}_{b}^{\mathrm{dch}}.$
(4)
The energy required by synthetic inertia provision is negligible due to the
small time scale of the inertial response. Hence, it is not considered as a
constraint here.
Similarly, the limitation for the constant power provision $\Delta P_{C}$
after the frequency nadir can be formulated as:
$\displaystyle\bar{P}_{b}^{\mathrm{ch}}\leq P_{b}+\Delta P_{C}$
$\displaystyle\leq\bar{P}_{b}^{\mathrm{dch}}$ (5a) $\displaystyle\Delta
P_{C}T_{s}$ $\displaystyle\leq\mathrm{SoC}_{b}\cdot E_{c,b}$ (5b)
where $T_{s}$ is the time interval of the constant power provision; $E_{c}$
and $\mathrm{SoC}$ are the energy capacity and the state of charge of the
storage device. Note that the above frequency support model and the
operational constraints can also be applied to other microgrid battery storage
units that are not connected with PV systems. In addition to the synthetic
inertia provision, the energy storage devices are responsible for smoothing the
renewable generation fluctuations on a shorter timescale and balancing the
mismatch between the load and generation profiles on a longer timescale
(storing energy when the generation exceeds the demand and releasing
it otherwise). All these services are simultaneously optimized during
the microgrid scheduling process in our proposed model, such that the storage
capacity can be optimally allocated in real time.
### II-B Modeling of Frequency Support from WTs
The control framework proposed in [17] is applied to provide optimal synthetic
inertia from WTs. In the proposed model, active power is extracted from the
stored kinetic energy of WTs to facilitate the frequency evolution during the
disturbance. Due to the complexity caused by incorporating the WT mechanical
dynamics into the control design and the restriction of the stored kinetic
energy, only short-term inertial response is provided from WTs.
Figure 1: Block diagram of WT synthetic inertia control scheme.
Furthermore, the secondary frequency dip associated with the rotor speed
deviation from the optimal operation point before the disturbance can be
eliminated. This is achieved by adding a Mechanical Power Estimator (MPE) in
the active power control loop of the WT grid-side converter as shown in Fig.
1. As a result, the additional output power from WTs ($\Delta P_{w}$) during a
system disturbance is the sum of synthetic inertia power $P_{SI}$ and the
output of the MPE $\Delta\tilde{P}_{a}$:
$\Delta P_{w}=-2H_{s}\Delta\dot{f}+\Delta\tilde{P}_{a}.$ (6)
In order to incorporate (6) into the system frequency dynamics, the highly
nonlinear expression of $\Delta\tilde{P}_{a}$ is further approximated by a
negative system damping term:
$\Delta\tilde{P}_{a}=D_{s}\Delta f=\gamma H_{s}^{2}\Delta f.$ (7)
In addition, the total available synthetic inertia from WTs in the system can
be estimated given the wind speed distribution as proposed in [17], where the
feasibility of frequency support from WTs and detailed control performance can
be found as well.
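For reference, a small sketch of the resulting WT support model (6) with the approximation (7) is given below; the numerical values are assumed for illustration only.

```python
def wt_support_power(H_s, gamma, delta_f, delta_f_dot):
    """Additional WT output during the disturbance, eq. (6) with approximation (7)."""
    P_SI = -2.0 * H_s * delta_f_dot        # synthetic-inertia component
    dP_a = gamma * H_s**2 * delta_f        # approximated MPE output, eq. (7)
    return P_SI + dP_a

# Illustrative values (assumed): H_s = 4 s, gamma = 0.02, frequency 0.2 Hz below
# nominal and falling at 0.1 Hz/s, expressed in per unit of f0 = 50 Hz.
print(wt_support_power(H_s=4.0, gamma=0.02, delta_f=-0.2/50, delta_f_dot=-0.1/50))
```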
### II-C Frequency Evolution under Islanding Events
Without the frequency support from the RESs, the frequency dynamics in a
multi-machine microgrid can be expressed in the form of a single swing
equation, under the premise of the Centre-of-Inertia (CoI) model [18]:
$2H_{c}\frac{\partial\Delta f(t)}{\partial t}=-D_{0}\Delta f(t)+\Delta
R(t)-\underbrace{(\Delta P_{L_{0}}-\Delta P_{D})}_{\Delta P_{L}},$ (8)
where $\Delta P_{L_{0}}$, the loss of generation due to the islanding event at
$t=0$, is a decision variable and can be viewed as a step disturbance. $\Delta
P_{D}$ is the noncritical load shedding applied to maintain the post-
contingency frequency within the limits, which is a common practice in
microgrids after islanding events. It can be deferred or curtailed in response
to economic incentives or islanding requirements. Furthermore, $\Delta P_{D}$
is modeled as a decision variable with uncertainty and its mean ($\Delta
P_{D_{\mu}}$) and standard deviation ($\sigma$) are assumed to be partially
known. More details regarding the uncertainty of $\Delta P_{D}$ are
discussed in Section III. $\Delta P_{L}$ is the equivalent loss of generation,
which is always positive, as there is no point in shedding more load than the lost
power. Moreover, the Primary Frequency Response (PFR) $\Delta R(t)$ from conventional Synchronous
Generators (SGs) can be represented according to the following scheme [19]:
$\Delta R(t)=\begin{cases}\frac{R}{T_{d}}t&,\;0\leq t<T_{d}\\\ R&,\;T_{d}\leq
t\end{cases}$ (9)
with $T_{d}$ being the PFR delivery time and $R$ being the total PFR delivered
by time $T_{d}$. The total inertia of SGs is computed as:
$H_{c}=\frac{\sum_{g\in\mathcal{G}}H_{g}P_{g}^{\mathrm{max}}y_{g}}{f_{0}}.$
(10)
Incorporating the frequency support from RESs described in Sections II-A and
II-B into (8) leads to:
$\displaystyle
2\underbrace{\left(H_{c}+\sum_{b\in\mathcal{B}}H_{s_{b}}+\sum_{w\in\mathcal{W}}H_{s_{w}}\right)}_{H}\frac{\partial\Delta
f(t)}{\partial t}$ (11)
$\displaystyle=-\underbrace{\left(D_{0}-\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}\right)}_{D}\Delta
f(t)+\Delta R(t)-\Delta P_{L}$
where $H$ and $D$ are the overall system inertia and damping respectively;
$b\in\mathcal{B}$ and $w\in\mathcal{W}$ denote the sets of energy storage units
and wind generation units. Note that the inertia in the system is now the
combination of SGs’ ($H_{c}$) and the SI from the energy storage devices
($H_{s_{b}}$) and WTs ($H_{s_{w}}$). The system damping is decreased by
$\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}$ due to the SI provision from
WTs. It represents the the side effect of SI provision from wind turbines
through overproduction scheme, i.e., the output power reduction due to the
deviation from the optimal operating point [17].
Based on the frequency dynamics, the analytical expression of the maximum
instantaneous RoCoF
$(\Delta\dot{f}_{\mathrm{max}}\equiv\Delta\dot{f}|_{t=0^{+}})$ is identified
as:
$\Delta\dot{f}|_{t=0^{+}}=-\frac{\Delta P_{L}}{2H}.$ (12)
It can be maintained within the RoCoF limits by choosing an appropriate system
inertia $H$. Solving the differential equation (11) gives the microgrid
frequency evolution during an islanding event:
$\Delta f(t)=\left(\frac{\Delta
P_{L}}{D}+\frac{2HR}{T_{d}D^{2}}\right)\left(e^{-\frac{D}{2H}t}-1\right)+\frac{R}{T_{d}D}t,$
(13)
valid $\forall t\in[0,t_{n}]$. The time instant $t_{n}$ of frequency nadir is
derived by setting the derivative of (13) to zero:
$\Delta\dot{f}(t_{n})=0\longmapsto
t_{n}=\frac{2H}{D}\ln{\left(\frac{T_{d}D\Delta P_{L}}{2HR}+1\right)}.$ (14)
Substituting (14) into (13) leads to the expression for frequency nadir
$(\Delta f_{\mathrm{max}}\equiv\Delta f(t_{n}))$:
$\Delta f(t_{n})=\frac{2HR}{T_{d}D^{2}}\ln{\left(\frac{T_{d}D\Delta
P_{L}}{2HR}+1\right)}-\frac{\Delta P_{L}}{D}.$ (15)
The dependence of the frequency nadir on $H$, $D$,
and $\Delta P_{L}$ through a highly nonlinear relationship makes it difficult
to incorporate into the microgrid scheduling model. An SOC reformulation is
proposed to cope with this problem, as demonstrated in Section III-A.
It should be noted that in order to derive the analytical expressions of
maximum RoCoF and frequency nadir, only the inertial response from RESs is
incorporated in (11). However, when deriving the steady-state frequency
$(\Delta f_{\mathrm{max}}^{\mathrm{ss}}\equiv\Delta f|_{t=\infty})$, the
constant power injection $\Delta P_{C}$ from energy storage devices needs to
be considered:
$\Delta f|_{t=\infty}=\frac{R+\Delta P_{C}-\Delta P_{L}}{D}.$ (16)
Note that the secondary frequency response from conventional generators and
the associated frequency restoration process after steady-state are not
considered in this paper.
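A compact numerical sketch of the metrics (12), (14), (15) and (16) is given below; it assumes the nadir occurs before the PFR delivery time $T_{d}$, and the per-unit values used in the example are purely illustrative, not taken from the case study.

```python
import numpy as np

def frequency_metrics(H, D, R, T_d, dP_L, dP_C=0.0):
    """Post-islanding frequency metrics (12), (14), (15), (16) of the CoI model (11).

    Assumes the nadir occurs before the PFR delivery time T_d; all quantities
    are expressed in a consistent per-unit system.
    """
    rocof_max = -dP_L / (2.0 * H)                                  # eq. (12)
    log_term = np.log(T_d * D * dP_L / (2.0 * H * R) + 1.0)
    t_n = 2.0 * H / D * log_term                                   # eq. (14)
    nadir = 2.0 * H * R / (T_d * D**2) * log_term - dP_L / D       # eq. (15)
    f_ss = (R + dP_C - dP_L) / D                                   # eq. (16)
    return rocof_max, t_n, nadir, f_ss

# Purely illustrative values: H = 3 s, D = 1 p.u., R = 0.15 p.u. delivered over
# T_d = 10 s, equivalent loss of generation dP_L = 0.2 p.u.
print(frequency_metrics(H=3.0, D=1.0, R=0.15, T_d=10.0, dP_L=0.2))
```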
Having obtained the analytical expressions of the frequency metrics during an
islanding event, (12), (15) and (16), they should be bounded within prescribed
limits in the microgrid scheduling model by selecting proper $H$, $R$, $\Delta
P_{C}$ and $\Delta P_{L}$. These quantities can all be decided
deterministically except the equivalent loss of generation $\Delta P_{L}$ due
to the uncertainty associated with the noncritical load shedding $\Delta
P_{D}$. At the beginning of an islanding event, the noncritical load is
disconnected from the microgrid in order to support the frequency evolution.
However, the exact value of the shed load is unknown during the scheduling
period, thus increasing the complexity of the scheduling problem as elaborated
in the next section.
## III Distributionally Robust Chance Constraints of Frequency Metrics
The uncertainty associated with the noncritical load shedding $\Delta P_{D}$
stems from different aspects. On the one hand, forecasting error always exists
during the microgrid scheduling process in terms of the actual demand and the
noncritical load percentage. On the other hand, depending on the specific load
shedding strategies [20, 21], the uncertainty level of $\Delta P_{D}$ varies.
Traditionally, the practical settings of the RoCoF and frequency relays are
mainly based on expert experience [22]. More advanced noncritical load
control schemes have also been proposed to mitigate frequency variations
during islanding events. For instance, emergency demand response has been
considered in the setting of the under-frequency load shedding relays and the
design of the load shedding schemes [23, 24, 25, 26]. Therefore, it is
complicated to derive the detailed distribution of $\Delta P_{D}$ through
either model-based or data-driven approaches at microgrid scheduling stage. As
a result, the equivalent loss of generation $\Delta P_{L}$ as defined in (8)
also presents uncertainty of the same level. In order to account for the
uncertainty of $\Delta P_{L}$, the frequency constraints are reformulated
through distributionally robust optimization. Assume the first- and second-
order moments of $\Delta P_{L}$ are decision-dependent whereas the exact
probability distribution $\mathbf{D}$ is unknown. This is modeled by the
following ambiguity set:
$\displaystyle\mathcal{P}=\Big{\\{}\mathbf{D}\in\Phi(\Delta P_{L}):\;$
$\displaystyle\mathbb{E}^{\mathbf{D}}(\Delta P_{L})=\Delta P_{L_{\mu}},$
$\displaystyle\mathrm{Var}^{\mathbf{D}}(\Delta P_{L})=\sigma^{2}\Big{\\}}$
(17)
where $\Phi(\cdot)$ is the probability density function; $\Delta
P_{L_{\mu}}=\Delta P_{L_{0}}-\Delta P_{D_{\mu}}$ and $\sigma$ denote the mean
and standard deviation of the equivalent loss of generation given the
distribution $\mathbf{D}$. Since the more noncritical load is about to be
shed, the higher uncertainty level it presents, it is reasonable to assumed
that $\sigma$ depends on the decision variable $\Delta P_{D_{\mu}}$ with a
linear coefficient $\alpha$, i.e.,
$\sigma=\alpha\Delta P_{D_{\mu}}.$ (18)
Having defined the mean and standard deviation of $\Delta P_{D}$ and $\Delta
P_{L}$, it should be clear now that the only decision to be made associated
with noncritical load shedding is $\Delta P_{D_{\mu}}$, meaning that
statistically the mean of the noncritical load shedding equals the decision
made by the system operator. However, for the realization in a single
islanding event, it is very likely for $\Delta P_{D}$ to deviate from the
decision variable $\Delta P_{D_{\mu}}$ characterized by its standard deviation
$\sigma$.
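Under this ambiguity set, the frequency constraints derived in the following subsections only require the robust bound $\Delta P_{L_{\mu}}+\xi\sigma$ with the Chebyshev factor $\xi=\sqrt{\eta/(1-\eta)}$ introduced in (24). A small sketch of this bound is given below; the function name and the numerical values are our illustrative assumptions.

```python
import math

def robust_loss_bound(dP_L0, dP_D_mu, alpha, eta):
    """Robust bound Delta P_L_mu + xi*sigma used in (25), (36) and (37).

    dP_L0   : loss of generation at the islanding instant
    dP_D_mu : scheduled mean noncritical load shedding (decision variable)
    alpha   : linear coefficient in sigma = alpha * dP_D_mu, eq. (18)
    eta     : required confidence level of the chance constraint
    """
    dP_L_mu = dP_L0 - dP_D_mu              # mean equivalent loss of generation
    sigma = alpha * dP_D_mu                # eq. (18)
    xi = math.sqrt(eta / (1.0 - eta))      # Chebyshev factor, see (24)
    return dP_L_mu + xi * sigma

# Illustrative numbers (assumed): 0.3 p.u. islanding imbalance, 0.1 p.u.
# scheduled shedding with 10 % relative uncertainty, 95 % confidence.
print(robust_loss_bound(dP_L0=0.3, dP_D_mu=0.1, alpha=0.1, eta=0.95))
```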
### III-A Nadir Constraint Reformulation
Based on the method proposed in [17], the nadir constraint $\Delta
f(t_{n})\leq\Delta f_{\mathrm{lim}}$ can be converted into the following
nonlinear form:
$HR\geq\frac{\Delta P_{L}^{2}T_{d}}{4\Delta f_{\mathrm{lim}}}-\frac{\Delta
P_{L}T_{d}D_{0}}{4}+\frac{\Delta
P_{L}T_{d}\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}}{4}.$ (19)
Since $\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}$ is much less than
$D_{0}$, the $\Delta P_{L}$ in the last term of (19) is set to be a constant
($\Delta P_{L}^{\mathrm{max}}$ for conservativeness). As a result, (19) can be
rewritten as follows:
$HR-\underbrace{\frac{\Delta
P_{L}^{\mathrm{max}}T_{d}\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}}{4}}_{c}\geq\frac{T_{d}}{4\Delta
f_{\mathrm{lim}}}\Delta P_{L}^{2}-\frac{T_{d}D_{0}}{4}\Delta P_{L}.$ (20)
Equation (20) can be viewed as a quadratic inequality of $\Delta P_{L}$. Since
$T_{d}/(4\Delta f_{\mathrm{lim}})>0$, it is equivalent to:
$\displaystyle\Delta P_{L}\in\Bigg{[}$
$\displaystyle\underbrace{\frac{D_{0}\Delta f_{\mathrm{lim}}}{2}-\frac{\Delta
f_{\mathrm{lim}}}{T_{d}}\sqrt{\frac{T_{d}^{2}D_{0}^{2}}{4}+\frac{4T_{d}}{\Delta
f_{\mathrm{lim}}}(HR-c)}}_{\Delta\underline{P}_{L}},$
$\displaystyle\underbrace{\frac{D_{0}\Delta f_{\mathrm{lim}}}{2}+\frac{\Delta
f_{\mathrm{lim}}}{T_{d}}\sqrt{\frac{T_{d}^{2}D_{0}^{2}}{4}+\frac{4T_{d}}{\Delta
f_{\mathrm{lim}}}(HR-c)}}_{\Delta\bar{P}_{L}}\Bigg{]}$ (21)
Therefore, the distributionally robust nadir chance constraint is formulated
as follows:
$\min_{\mathbf{D}\in\mathcal{P}}\mathrm{Pr}\Big{\\{}\Delta
P_{L}\in\big{[}\Delta\underline{P}_{L},\Delta\bar{P}_{L}\big{]}\Big{\\}}\geq\eta.$
(22)
Here only the cases of generation loss are considered, i.e., $\Delta P_{L}>0$,
therefore, the probability of $\Delta P_{L}$ being negative is zero. Moreover,
it can be derived from (21) that $\Delta\underline{P}_{L}<0$ always holds.
Hence, (22) is equivalent to:
$\min_{\mathbf{D}\in\mathcal{P}}\mathrm{Pr}\Big{\\{}\Delta
P_{L}\leq\Delta\bar{P}_{L}\Big{\\}}\geq\eta.$ (23)
By applying Chebyshev inequality [27], the nonconvex constraint (23) can be
reformulated as follows:
$\Delta\bar{P}_{L}\geq\Delta
P_{L_{\mu}}+\underbrace{\sqrt{\frac{\eta}{1-\eta}}}_{\xi}\sigma.$ (24)
Substituting the expression of $\Delta\bar{P}_{L}$ into (24) yields:
$\displaystyle HR\geq$ $\displaystyle\frac{T_{d}}{4}\left[\frac{(\Delta
P_{L_{\mu}}+\xi\sigma)^{2}}{\Delta f_{\mathrm{lim}}}-D_{0}(\Delta
P_{L_{\mu}}+\xi\sigma)\right]$ $\displaystyle+$ $\displaystyle\frac{\Delta
P_{L}^{\mathrm{max}}T_{d}\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}}{4}.$
(25)
Introduce ancillary variables $x_{1}$ such that:
$\displaystyle x_{1}^{2}$ $\displaystyle=\frac{(\Delta
P_{L_{\mu}}+\xi\sigma)^{2}}{\Delta f_{\mathrm{lim}}}-D_{0}(\Delta
P_{L_{\mu}}+\xi\sigma)$ $\displaystyle=\underbrace{\frac{\Delta
P_{L_{\mu}}+\xi\sigma}{\sqrt{\Delta
f_{\mathrm{lim}}}}}_{x_{2}}\left(\frac{\Delta
P_{L_{\mu}}+\xi\sigma}{\sqrt{\Delta
f_{\mathrm{lim}}}}-\underbrace{\sqrt{\Delta
f_{\mathrm{lim}}}D_{0}}_{d}\right).$ (26)
It can be proved that $x_{2}\geq d$ always holds given a small system damping.
Hence, (26) is a well-defined real-valued constraint. The nadir constraint
(25) can thus be rewritten in SOC form:
$HR\geq\frac{T_{d}}{4}x_{1}^{2}+\frac{\Delta
P_{L}^{\mathrm{max}}T_{d}\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2}}{4}$
(27)
However, the nonconvex constraint (26) cannot be included in the MISOCP
directly. A set of linear constraints is used to approximate
this relationship conservatively. Depending on the ratio of $x_{2}$ to $d$,
the relationship between $x_{1}$ and $x_{2}$ described by (26) can be piecewise
characterized by $N$ linear expressions:
$\displaystyle x_{1}=a_{n}$ $\displaystyle x_{2}+b_{n}D_{0},\,\,\forall
n\in\mathcal{N}$ $\displaystyle if\,\;k(n-1)+1\leq\frac{x_{2}}{d}<kn+1$ (28a)
$\displaystyle x_{1}=a_{N}$ $\displaystyle x_{2}+b_{N}D_{0},$ $\displaystyle
if\,\;K+1\leq\frac{x_{2}}{d}$ (28b)
where $k$ defines the step size of $x_{2}$ in the first $N-1$ linear
constraints, i.e., $n\in\mathcal{N}=\\{1,2,...,N-1\\}$:
$k=\frac{K}{N-1}$ (29)
with $K$ being a predefined constant. Theoretically, the ratio of $x_{2}$ to
$d$ can be very large. To reduce the value of $N$, a single linearized
constraint is applied if $x_{2}$ is large enough compared to $d$, i.e., $K\leq
x_{2}/d$, as shown in (28). The coefficients $a_{n}$ and $b_{n}$ can be
calculated as:
$\displaystyle a_{n}$
$\displaystyle=\derivative{x_{1}}{x_{2}}\Bigr{|}_{x_{2}=(kn+1)D_{0}}=\frac{2nk+1}{2\sqrt{n^{2}k^{2}+nk}},\,\,\forall
n\in\mathcal{N}$ (30a) $\displaystyle b_{n}$
$\displaystyle=\frac{-nk-1}{2\sqrt{n^{2}k^{2}+nk}},\,\,\forall
n\in\mathcal{N}$ (30b) $\displaystyle a_{N}$ $\displaystyle=1$ (30c)
$\displaystyle b_{N}$ $\displaystyle=-0.5$ (30d)
However, the $N$ linear constraints defined in (28) cannot be included in the
optimization simultaneously. Instead, only one of them needs to hold while
others should be relaxed depending on the relationship between $x_{2}$ and
$d$. Therefore, $N$ binary variables, $z_{n},\,\forall
n\in\mathcal{N}\cup\\{N\\}$ are introduced to indicate to which interval
$x_{2}$ belongs:
$\displaystyle z_{n\in\mathcal{N}}$
$\displaystyle=\begin{cases}1&if\,\,dk(n-1)\leq x_{2}<dkn\\\
0&\mathrm{otherwise}.\end{cases}$ (31a) $\displaystyle z_{N}$
$\displaystyle=\begin{cases}1&if\,\,Kd\leq x_{2}\\\
0&\mathrm{otherwise}.\end{cases}$ (31b)
Equation (31a) can be rewritten in the following form by defining ancillary
binary variables $z_{n_{1}},\,z_{n_{2}},\,\forall n\in\mathcal{N}$:
$\displaystyle z_{n_{1}}$ $\displaystyle=\begin{cases}1&if\,\,dk(n-1)\leq
x_{2}\\\ 0&\mathrm{otherwise}\end{cases}$ (32a) $\displaystyle z_{n_{2}}$
$\displaystyle=\begin{cases}1&if\,\,x_{2}<dkn\\\
0&\mathrm{otherwise}\end{cases}$ (32b) $\displaystyle z_{n}$
$\displaystyle=z_{n_{1}}+z_{n_{2}}-1.$ (32c)
As a result, the conditional constraints (32) can be transformed into linear
constraints $\forall n\in\mathcal{N}$:
$\displaystyle 0<x_{2}-dkn+Mz_{n_{1}}\leq M$ (33a) $\displaystyle 0\leq
dk(n-1)-x_{2}+Mz_{n_{2}}<M$ (33b) $\displaystyle z_{n}=z_{n_{1}}+z_{n_{2}}-1$
(33c) $\displaystyle z_{n_{1}},\,z_{n_{2}}\,\in\\{0,1\\}$ (33d)
where $M>0$ is a sufficiently large constant. Similarly, linear constraints
for $z_{N}$ can be expressed as follows:
$\displaystyle 0<x_{2}-Kd+Mz_{N}\leq M$ (34a) $\displaystyle
z_{N}\,\in\\{0,1\\}$ (34b)
Based on the interval indicators $z_{n}$, the equality constraints (28) are
relaxed as follows to ensure feasibility:
$x_{1}\geq a_{n}x_{2}+b_{n}d+(z_{n}-1)M^{\prime},\;\;\;\;\forall
n\in\mathcal{N}\cup\\{N\\}$ (35)
where $M^{\prime}>0$ is a large enough constant. It should be noted that the
equality in (35) will be automatically obtained during the optimization
process since the $HR$ term in the nadir constraint (27) is positively correlated
with the objective function. As a result, the original distributionally robust
nadir chance constraint (22) is now reformulated into the SOC form
(27), (33), (34), (35).
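The piecewise-linear coefficients (30) can be generated and checked against the exact relation (26) with a few lines of code; the sketch below (ours, with an illustrative value of $d$) takes the tightest linear piece at each point, mimicking the interval selection in (31)-(35), and confirms that the approximation stays on the conservative side.

```python
import numpy as np

def pw_coefficients(N, K):
    """Slopes a_n and intercepts b_n of the linear pieces (28), from (30)."""
    k = K / (N - 1)                                             # step size, eq. (29)
    n = np.arange(1, N)                                         # n = 1, ..., N-1
    a = (2 * n * k + 1) / (2 * np.sqrt(n**2 * k**2 + n * k))    # eq. (30a)
    b = (-n * k - 1) / (2 * np.sqrt(n**2 * k**2 + n * k))       # eq. (30b)
    return np.append(a, 1.0), np.append(b, -0.5)                # eq. (30c)-(30d)

d = 0.05                                   # illustrative value of sqrt(df_lim)*D0
a, b = pw_coefficients(N=6, K=10)
x2 = np.linspace(1.001 * d, 12 * d, 400)
exact = np.sqrt(x2 * (x2 - d))             # exact relation x1^2 = x2*(x2 - d), eq. (26)
# tightest linear piece at each point, mimicking the interval selection (31)-(35)
approx = (a[:, None] * x2[None, :] + b[:, None] * d).min(axis=0)
print(float((approx - exact).min()), float((approx - exact).max()))
# the gap is nonnegative, i.e. the piecewise bound is conservative as intended
```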
### III-B RoCoF and steady-state Constraints Reformulation
Based on the expressions derived in (12) and (16), the distributionally robust
frequency constraints of RoCoF and steady-state can be formulated as follows:
$2H\Delta\dot{f}_{\mathrm{lim}}\geq\Delta P_{L_{\mu}}+\xi\sigma$ (36)
$R+\Delta P_{C}+(D_{0}-\sum_{w\in\mathcal{W}}\gamma_{w}H_{s_{w}}^{2})\Delta
f^{ss}_{\mathrm{lim}}\geq\Delta P_{L_{\mu}}+\xi\sigma$ (37)
with $\Delta\dot{f}_{\mathrm{lim}}$ and $\Delta f^{ss}_{\mathrm{lim}}$ being the
pre-specified limits for maximum instantaneous RoCoF and steady-state
frequency deviation. The nonlinear term associated with $H_{s_{w}}^{2}$ can be
effectively linearized as demonstrated in [17].
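Equivalently, (36) and (37) can be read as minimum requirements on $H$ and $R$ for a given robust imbalance. A small sketch is given below; the function name and per-unit values are illustrative assumptions.

```python
def rocof_ss_requirements(robust_loss, rocof_lim, df_ss_lim, D0, gamma_Hs2, dP_C):
    """Minimum inertia and PFR implied by (36) and (37) for a given robust imbalance.

    robust_loss : Delta P_L_mu + xi*sigma
    gamma_Hs2   : sum of gamma_w * H_sw^2 over the wind units
    """
    H_min = robust_loss / (2.0 * rocof_lim)                        # from (36)
    R_min = robust_loss - dP_C - (D0 - gamma_Hs2) * df_ss_lim      # from (37)
    return H_min, max(R_min, 0.0)

# Illustrative per-unit values (assumed): 0.25 p.u. robust imbalance,
# RoCoF limit 0.01 p.u./s, steady-state limit 0.01 p.u.
print(rocof_ss_requirements(0.25, 0.01, 0.01, D0=1.0, gamma_Hs2=0.05, dP_C=0.05))
```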
## IV MISOCP-based Microgrid Scheduling
In this section, a two-stage stochastic microgrid scheduling model is
introduced to determine the optimal generator dispatch, wind/PV curtailment
and load shedding with frequency security constraints. The relationship
between the microgrid scheduling problem and the frequency control is
demonstrated through Fig. 2. During normal operation (grid-connected mode),
the microgrid operates according to the results obtained from the scheduling
problem with optimal operating conditions and the optimal frequency responses
updated in each hour, as indicated by the blue and yellow areas respectively
in the figure below. Notably, the commands related to the frequency services
would only lead to the parameter updates in the associated controllers. Those
services would not be triggered unless an islanding event is detected, which
can be achieved by monitoring the main breaker at the PCC or measuring the
RoCoF. Due to the frequency constraints in the microgrid scheduling model, the
frequency security after an islanding event at any time can be guaranteed in
the most cost-efficient way.
Figure 2: Relationship between microgrid scheduling and frequency control.
Consider a microgrid with a set of SG units $g\in\mathcal{G}$ and loads
$l\in\mathcal{L}$. The generation units are further categorized into two
groups, i.e., $\mathcal{G}=\mathcal{G}_{1}\cup\mathcal{G}_{2}$,
representing the sets of fast and slow generators. Wind, PV and storage units
are represented by $w\in\mathcal{W}$, $m\in\mathcal{M}$ and $b\in\mathcal{B}$
respectively. The uncertainties of renewable generation and demand in the
microgrid are managed by the two-stage decision process. The unit commitment
decisions are made in the first stage except for the fast-start generators,
before the uncertainty is realized. Once most uncertain inputs (demand and
renewable generation) are realized, the power outputs of committed units as
well as the fast-start generators are decided to meet the load [28, 29]. The
two-stage decision process in power system operations makes it natural to
formulate the scheduling problem as a multi-stage stochastic program. Based on
the stochastic multi-temporal method proposed in [30], the stochastic
scheduling problem can be formulated as follows.
### IV-A Objective Function
The objective of the scheduling problem is to minimize the microgrid average
operation cost over all scenarios ($\forall s\in\mathcal{S}$) and the
considered time horizon $t\in\\{0,1,...,T\\}$:
$\begin{split}\min\sum_{s\in\mathcal{S}}&\sum_{t\in
T}\pi_{s}(\sum_{g\in\mathcal{G}}c_{g}^{SU}z_{t,s,g}+\Delta
t(\sum_{g\in\mathcal{G_{\mathrm{1}}}}c_{g}^{R1}y_{t,s,g}+\\\
&\sum_{g\in\mathcal{G_{\mathrm{2}}}}c_{g}^{R2}p_{t,s,g}+\sum_{l\in\mathcal{L}}c^{VOLL}(p_{t,s,l}^{c}+(q_{t,s,l}^{c})^{2})))\end{split}$
(38)
where $\pi_{s}$ is the probability associated with scenario $s$; $c_{g}^{SU}$,
$c_{g}^{R1}/c_{g}^{R2}$ and $c^{VOLL}$ refer to the start-up costs, the running
costs of fixed/flexible generators and the value of lost load (VOLL);
$z_{t,s,g}$ and $y_{t,s,g}$ are binary variables indicating, respectively,
whether generator $g$ starts up and whether it is online at time step $t$ in
scenario $s$; $p_{t,s,g}$ and $p_{t,s,l}^{c}/q_{t,s,l}^{c}$ denote the active
power produced by generators and the active/reactive load shedding.
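A compact sketch of how the two-stage objective (38) can be assembled with gurobipy is given below. The scenario set, probabilities and cost coefficients are placeholders, and the reactive load-shedding penalty is kept quadratic exactly as written in (38), which yields a convex quadratic objective that Gurobi handles directly; the remaining scheduling constraints would be added on top of this skeleton.

```python
import gurobipy as gp
from gurobipy import GRB

# Illustrative dimensions and costs (assumptions, not values from the paper)
S, T = 3, 24                           # scenarios and hourly time steps
G1, G2, L = ["g1"], ["g2"], ["l1"]     # fast SGs, slow SGs, loads
pi = {s: 1.0 / S for s in range(S)}    # scenario probabilities
dt, c_su, c_r1, c_r2, c_voll = 1.0, 50.0, 20.0, 30.0, 3000.0

m = gp.Model("microgrid_scheduling")
z  = m.addVars(T, S, G1 + G2, vtype=GRB.BINARY, name="startup")   # z_{t,s,g}
y  = m.addVars(T, S, G1, vtype=GRB.BINARY, name="online")         # y_{t,s,g}
p  = m.addVars(T, S, G2, lb=0.0, name="p")                        # p_{t,s,g}
pc = m.addVars(T, S, L, lb=0.0, name="p_shed")                    # p^c_{t,s,l}
qc = m.addVars(T, S, L, lb=0.0, name="q_shed")                    # q^c_{t,s,l}

# Objective (38): expected start-up, running and load-shedding costs
obj = gp.quicksum(
    pi[s] * (
        gp.quicksum(c_su * z[t, s, g] for g in G1 + G2)
        + dt * (gp.quicksum(c_r1 * y[t, s, g] for g in G1)
                + gp.quicksum(c_r2 * p[t, s, g] for g in G2)
                + gp.quicksum(c_voll * (pc[t, s, l] + qc[t, s, l] * qc[t, s, l])
                              for l in L))
    )
    for t in range(T) for s in range(S)
)
m.setObjective(obj, GRB.MINIMIZE)
```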
### IV-B Constraints
The traditional microgrid scheduling constraints related to generator
operation, wind/PV curtailment and load shedding are omitted here; the reader
is referred to [31] for more details.
#### IV-B1 Constraints of battery storage system
$\displaystyle\eqref{Hv_lim},\,\eqref{Pc_lim},\;\;\;\;\;\forall t,s,b$ (39a)
$\displaystyle\mathrm{SoC}_{t,s,b}E_{c,b}=\mathrm{SoC}_{t-1,s,b}E_{c,b}+\eta_{b}p_{t,s,b}\Delta
t,\;\;\;\;\;\forall t,s,b$ (39b)
$\displaystyle\mathrm{SoC}_{\mathrm{min}}\leq\mathrm{SoC}_{t,s,b}\leq\mathrm{SoC}_{\mathrm{max}},\;\;\;\;\;\forall
t,s,b$ (39c)
$\displaystyle\mathrm{SoC}_{0,s,b}=\mathrm{SoC}_{T,s,b},\;\;\;\;\;\forall
s,b.$ (39d)
The power injection from the battery storage system to the microgrid is
confined in (39a) by the upper bound of the charging and discharging rate with
$p_{b}$ in the original equations being replaced by $p_{t,s,b}$. The battery
state of charge is quantified by (39b) with the charging/discharging
efficiency $\eta_{b}$. (39c) imposes the upper and lower limits on the SoC of
the storage devices. The SoC at the end of the considered time horizon is set
equal to its initial value, as in (39d).
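The battery constraints (39a)-(39d) map directly onto linear solver constraints. The sketch below uses the limits from Table I; the horizon, the initial SoC, the sign convention (positive $p_{b}$ means charging) and the dropped scenario index are simplifying assumptions.

```python
import gurobipy as gp
from gurobipy import GRB

T = 24                                   # hourly horizon (assumption)
E_c, eta_b, dt = 150.0, 0.9, 1.0         # capacity [MWh] and efficiency from Table I
soc_min, soc_max, soc_0 = 0.15, 0.85, 0.5
p_max = 50.0                             # charge/discharge limit [MW] from Table I

m = gp.Model("battery")
p_b = m.addVars(T, lb=-p_max, ub=p_max, name="p_b")            # (39a) rate limits
soc = m.addVars(T + 1, lb=soc_min, ub=soc_max, name="soc")     # (39c) SoC bounds

m.addConstr(soc[0] == soc_0)                                   # assumed initial SoC
for t in range(1, T + 1):
    # (39b): SoC_t * E_c = SoC_{t-1} * E_c + eta_b * p_b * dt
    m.addConstr(soc[t] * E_c == soc[t - 1] * E_c + eta_b * p_b[t - 1] * dt)
m.addConstr(soc[T] == soc[0])                                  # (39d) end equals start
```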
#### IV-B2 Constraints of AC power flow and power balance
$\displaystyle W_{t,s,ij}W_{t,s,ij}^{*}\leq
W_{t,s,ii}W_{t,s,jj},\;\;\;\;\;\forall t,s,i,j$ (40a) $\displaystyle
V_{\mathrm{min},i}^{2}\leq W_{t,s,ii}\leq
V_{\mathrm{max},i}^{2},\;\;\;\;\;\forall t,s,i$ (40b) $\displaystyle
p_{t,s,i}^{G}=\sum_{\Omega_{g-i}}p_{t,s,g}+\sum_{\Omega_{w-i}}p_{t,s,w}$
$\displaystyle\quad\quad\quad\quad\quad+\sum_{\Omega_{m-i}}p_{t,s,m}+\sum_{\Omega_{b-i}}p_{t,s,b},\;\;\;\;\;\forall
t,s,i$ (40c) $\displaystyle
q_{t,s,i}^{G}=\sum_{\Omega_{g-i}}q_{t,s,g}+\sum_{\Omega_{w-i}}q_{t,s,w}$
$\displaystyle\quad\quad\quad\quad\quad+\sum_{\Omega_{m-i}}q_{t,s,m}+\sum_{\Omega_{b-i}}q_{t,s,b},\;\;\;\;\;\forall
t,s,i$ (40d) $\displaystyle
p_{t,s,i}^{D}=\sum_{\Omega_{l-i}}p_{t,s,l}-\sum_{\Omega_{l-i}}p_{t,s,l}^{c},\;\;\;\;\;\forall
t,s,i$ (40e) $\displaystyle
q_{t,s,i}^{D}=\sum_{\Omega_{l-i}}q_{t,s,l}-\sum_{\Omega_{l-i}}q_{t,s,l}^{c},\;\;\;\;\;\forall
t,s,i$ (40f) $\displaystyle
p_{t,s,i}=p_{t,s,i}^{G}-p_{t,s,i}^{D},\;\;\;\;\;\forall t,s,i$ (40g)
$\displaystyle q_{t,s,i}=q_{t,s,i}^{G}-q_{t,s,i}^{D},\;\;\;\;\;\forall t,s,i$
(40h) $\displaystyle
p_{t,s,i}=\sum_{ij\in\mathcal{R}}p_{t,s,ij},\;\;\;\;\;\forall t,s,i$ (40i)
$\displaystyle
q_{t,s,i}=\sum_{ij\in\mathcal{R}}q_{t,s,ij}-\mathrm{Im}{(W_{t,s,ii}\mathrm{j}Y_{i,sh})},\;\;\;\;\;\forall
t,s,i$ (40j) $\displaystyle
p_{t,s,ij}+\mathrm{j}q_{t,s,ij}=W_{t,s,ii}{Y_{i,sh}^{*}}$
$\displaystyle\quad\quad\quad\quad\quad-(W_{t,s,ii}-W_{t,s,ij})y_{ij}^{*},\;\;\;\;\;\forall
ij\in\mathcal{R},t,s$ (40k) $\displaystyle p_{t,s,ij}^{2}+q_{t,s,ij}^{2}\leq
S_{\mathrm{max},ij}^{2},\;\;\;\;\;\forall ij\in\mathcal{R},t,s$ (40l)
Equations (40a) and (40b) are second-order cone constraints of voltages [32]
where $W_{t,s,ij}=V_{t,s,i}V_{t,s,j}^{*},\,\forall t,s,i,j$;
$V_{t,s,i}/V_{t,s,j}$ are voltages at bus $i/j$ and
$V_{\mathrm{min},i}/V_{\mathrm{max},i}$ are minimum/maximum voltage at bus
$i$. Total active/reactive power generation and load at each bus are defined
in (40c)/(40d) and (40e)/(40f) with $\Omega_{g/w/m/b-i}$ and $\Omega_{l-i}$
being the set of synchronous/wind/PV/storage units and loads connected to bus
$i$. Note that the imported power from the main grid, $P_{t,s,im}/Q_{t,s,im}$,
is included in $\Omega_{g-i}$ for simplicity. Power balance at each bus is
given by (40g) to (40j) where $p_{t,s,ij}/q_{t,s,ij}$ are active/reactive
power flow from bus $i$ to $j$ and $ij\in\mathcal{R}$ is the set of branches;
$Y_{i,sh}=Y_{j,sh}$ denotes shunt susceptances at both ends of the line. (40k)
and (40l) are the power flow and line rating constraints.
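The conic voltage constraint (40a) and the line rating (40l) are the two second-order cone ingredients of the power-flow model. The sketch below poses them for a single branch with gurobipy quadratic constraints, splitting the complex variable $W_{ij}$ into real and imaginary parts; the voltage and rating limits are placeholders.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("socp_branch")
V2_min, V2_max, S_max = 0.9 ** 2, 1.1 ** 2, 100.0       # placeholder voltage/rating limits

Wii = m.addVar(lb=V2_min, ub=V2_max, name="Wii")        # (40b) squared-voltage bounds
Wjj = m.addVar(lb=V2_min, ub=V2_max, name="Wjj")
Wij_re = m.addVar(lb=-GRB.INFINITY, name="Wij_re")      # real part of W_ij
Wij_im = m.addVar(lb=-GRB.INFINITY, name="Wij_im")      # imaginary part of W_ij
p_ij = m.addVar(lb=-GRB.INFINITY, name="p_ij")          # branch active power flow
q_ij = m.addVar(lb=-GRB.INFINITY, name="q_ij")          # branch reactive power flow

# (40a): |W_ij|^2 <= W_ii * W_jj, a rotated second-order cone (W_ii, W_jj >= 0)
m.addQConstr(Wij_re * Wij_re + Wij_im * Wij_im <= Wii * Wjj)

# (40l): apparent-power line rating
m.addQConstr(p_ij * p_ij + q_ij * q_ij <= S_max * S_max)
```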
#### IV-B3 Frequency security constraints subsequent to islanding events
According to the derivation in Section III, (27), (33)-(35), (36) and (37)
are incorporated into the microgrid scheduling model as the frequency nadir,
RoCoF and steady-state constraints. Therefore, the optimal microgrid inertia,
which includes both its conventional and synthetic components, the PFR and the
equivalent loss of generation will be determined in the microgrid scheduling
model to ensure the minimum operational cost while satisfying the frequency
constraints.
Different operations are coordinated before, during and after islanding events
to ensure the frequency security. Before the islanding event, the microgrid
operates in grid-connected mode. All the operating points of the dispatchable
units and the imported power from the main grid are set optimally according to
the results from the scheduling model as indicated by the blue area of the
above figure. These settings help to ensure the frequency security by
allocating proper reserve, system inertia and power exchange with the main
grid.
At the time instant of an islanding event, the power exchange with the
main grid becomes zero almost instantaneously, leading to a step power
disturbance to the microgrid. This event can be detected with negligible
delays by monitoring the main breaker at PCC and measuring the RoCoF of the
frequency [14, 33].
After that, the different frequency services instructed by the scheduling
results begin to react automatically, including the frequency response from
SGs, the SI provision from IBGs and the noncritical load shedding, in order to
facilitate the frequency regulation. After a few tens of seconds, the frequency
reaches and remains at the steady-state value until the recovery and
restoration processes. Note that the fault repair and the microgrid recovery
and restoration processes are outside the scope of the proposed model.
## V Case Studies
Figure 3: Modified 14-bus microgrid test system.
In order to demonstrate the performance of the proposed distributionally
robust chance constrained microgrid scheduling model, case studies are carried
out on the modified IEEE 14-bus distribution system [34] as shown in Fig.
3. The optimization problem is solved in a horizon of 24 hours with the time
step being 1 hour. System parameters are set as follows: load demand
$P_{D}\in[160,300]\,\mathrm{MW}$, damping $D=0.5\%P_{D}/1\,\mathrm{Hz}$, PFR
delivery time $T_{d}=10\,\mathrm{s}$. The frequency limits of nadir, steady-
state value and RoCoF are set as: $\Delta f_{\mathrm{lim}}=0.8\,\mathrm{Hz}$,
$\Delta f_{\mathrm{lim}}^{\mathrm{ss}}=0.5\,\mathrm{Hz}$ and
$\Delta\dot{f}_{\mathrm{lim}}=0.5\,\mathrm{Hz/s}$. Dispatchable SGs are
installed at Buses 1, 2 and 3 with a total capacity of $240\,\mathrm{MW}$. The
PV-storage system and wind turbines are located at Buses 6 and 8, respectively. The
parameters of battery devices are listed in Table I. The weather conditions
are obtained from online numerical weather prediction [35]. The MISOCP-based
optimization problem is solved by Gurobi (8.1.0) on a PC with Intel(R)
Core(TM) i7-7820X CPU @ 3.60GHz and RAM of 64 GB.
TABLE I: Parameters of Battery Storage Devices
$\boldsymbol{\mathrm{SoC}_{\mathrm{min}}}$ | $\boldsymbol{\mathrm{SoC}_{\mathrm{max}}}$ | $\boldsymbol{\eta_{b}}$ | $\boldsymbol{|\bar{P}^{(d)ch}|\,\mathrm{[MW]}}$ | $\boldsymbol{E_{c}\,\mathrm{[MWh]}}$
---|---|---|---|---
$15\%$ | $85\%$ | $0.9$ | $50$ | $150$
### V-A Frequency nadir validation
Figure 4: Microgrid frequency evolution after an islanding event.
Figure 5: Histogram of frequency nadirs in 24-hour scheduling.
Figure 6: Frequency comparison: Analytical and EMT results.
The approximation of the frequency nadir constraint discussed in Section III-A
is assessed through dynamic simulation via Matlab/Simulink. The microgrid
frequency evolution subsequent to an islanding event with and without (w/o)
the nadir constraint is illustrated in Fig. 4. The system operating conditions
are selected at an arbitrary hour based on the optimal scheduled results:
$P_{D}=162.7\,\mathrm{MW}$, $R=50.1\,\mathrm{MW}$, $\Delta
P_{L}=37.0\,\mathrm{MW}$, $H=86.0\,\mathrm{MWs/Hz}$. It can be observed that
the microgrid frequency decreases dramatically after an islanding event if the
nadir constraint is not implemented. Even though the steady-state frequency is
within the limit, the RoCoF and nadir constraints are violated. On the
contrary, once the nadir constraint is considered in the microgrid scheduling
model, all the frequency constraints can be maintained. The frequency nadir of
$-0.77\,\mathrm{Hz}$ indicates that the approximation is accurate yet
conservative. All the other operating conditions present a similar frequency
evolution trend and are therefore not shown. Instead, to demonstrate the
robustness of the proposed method in terms of the effectiveness of the nadir
constraints, the frequency nadir in each hour of the one-day schedule, should
an unintentional islanding event occur, is obtained through dynamic simulation,
with the results depicted in Fig. 5.
It is observed from the histogram that all the frequency nadirs during the
24-hour scheduling are close to the boundary (-0.8 $\mathrm{Hz}$) with the
mean and standard deviation being $-0.7814\,\mathrm{Hz}$ and
$0.008\,\mathrm{Hz}$ respectively, indicating a good robustness of the
proposed method.
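For readers who want to reproduce a trajectory similar to Fig. 4 without the full Simulink model, the sketch below integrates a single-machine aggregated frequency-response model: a swing equation with load damping and a PFR that ramps linearly to its full volume over $T_{d}$. This simplified model is an assumption on our part; the paper's validation relies on the detailed dynamics of Section II, so the numbers will not match exactly, although the nadir obtained in this way is around $-0.8\,\mathrm{Hz}$, close to the reported $-0.77\,\mathrm{Hz}$.

```python
import numpy as np

# Operating point of the first validation case in Section V-A; the aggregated
# swing model with a ramp-limited PFR below is a simplifying assumption and is
# not the detailed Simulink model used in the paper.
H = 86.0                     # inertia [MW s/Hz]
D = 0.005 * 162.7            # damping: 0.5% of P_D per Hz -> [MW/Hz]
R, T_d = 50.1, 10.0          # PFR volume [MW] and delivery time [s]
dP_L = 37.0                  # step disturbance at the islanding instant [MW]

dt, t_end = 0.01, 15.0       # the ramp model is only meaningful up to the nadir
t = np.arange(0.0, t_end, dt)
df = np.zeros_like(t)        # CoI frequency deviation [Hz]
for k in range(1, len(t)):
    pfr = R * min(t[k] / T_d, 1.0)                  # PFR ramps linearly to R over T_d
    ddf = (pfr - dP_L - D * df[k - 1]) / (2.0 * H)  # 2H * d(df)/dt = pfr - dP_L - D*df
    df[k] = df[k - 1] + dt * ddf

print(f"frequency nadir ~ {df.min():.2f} Hz")
```

The post-nadir and steady-state behaviour are governed by the governor droop and are captured in the scheduling model by constraint (37) rather than by this simple ramp model.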
Additionally, the developed model is incorporated into the detailed EMT
simulation and analyzed for the test case with $P_{D}=199.6\,\mathrm{MW}$,
$R=57.0\,\mathrm{MW}$, $\Delta P_{L}=30.2\,\mathrm{MW}$,
$H=48.7\,\mathrm{MWs/Hz}$. As shown in Fig. 6, an unintentional islanding
event occurs at $t=0\,\mathrm{s}$. The analytical result represents the
Center-of-Inertia (CoI) frequency of the microgrid, whereas the 4 trajectories
in the EMT result represent the local frequencies at the generation buses
(Bus 1, 2, 6 and 8). Note that the SG at Bus 3 is not online in this hour. The
pronounced oscillations depicted in the figure reflect the complexity of the
EMT model at hand, as well as the level of controller interaction
characteristic of low-inertia systems. It is observed that the frequency
constraints in both cases can be maintained and the EMT result stays close to
the analytical one, despite a slight mismatch after the frequency nadir, which
is due to the approximation of the SG model in the analytical derivation. For
more detailed SG, VSM and WT models, the reader is referred to [16, 17].
### V-B Impact of islanding frequency constraints and SI from RESs
In this section, the influence of the frequency constraints subsequent to
microgrid islanding events as well as the value of SI are investigated. System
operation cost at different scheduling conditions is presented in Fig. 7 with
the cases defined as follows.
* •
Base Case: Do not consider frequency dynamic constraints.
* •
Case I: Consider frequency dynamic constraints, and SI is not allowed.
* •
Case II: Consider frequency dynamic constraints, and SI is allowed.
Figure 7: Averaged cost at different operating conditions.
Figure 8: Microgrid imported power at various IBGs’ capacity.
Notably, the IBGs’ capacity refers to the total capacity of wind turbines
($P_{W}^{C}$) and PV systems ($P_{M}^{C}$) with $P_{W}^{C}/P_{M}^{C}=3/5$. To
avoid the PV and wind power curtailment due to the battery storage saturation,
the total battery capacity (${\bar{P}^{(d)ch}},\,E_{c}$) also varies with the
PV capacity, i.e., $\bar{P}^{(d)ch}:E_{c}:P_{M}^{C}=1:3:2$. It is observed
that the averaged system operation cost over 24 hours decreases along with the
increase of IBGs’ capacity in the system for all the three cases as more
energy is supplied by the RESs. In the base case, the system operation cost
always has the smallest value since the frequency dynamic constraints are not
considered. As a consequence, violations of RoCoF and nadir constraints would
be inevitable, should islanding events happen. For instance,
$\Delta\dot{f}_{\mathrm{max}}\in[-3.81,-1.70]\,\mathrm{Hz/s}$ with an average
of $-2.53\,\mathrm{Hz/s}$ and $\Delta
f_{\mathrm{max}}\in[-13.08,-6.89]\,\mathrm{Hz}$ with an average of
$-9.44\,\mathrm{Hz}$ are observed at IBGs’ capacity of 320 $\mathrm{MW}$. Once
the frequency dynamic constraints are included and SI is not allowed, the
averaged cost (blue curve) grows in order to maintain the frequency limits by
dispatching more partially loaded SGs in the system for inertia provision only.
The SI provision from RESs (Case II) reduces the operational cost significantly.
This cost saving (the difference between Cases I and II) becomes more obvious at
high IBGs' capacity, where the cost of Case II is almost the same as that of
the base case, which highlights the effectiveness and value of SI provision
especially in high PE-penetrated microgrids. The computational time of each
optimization in different cases varies between $[59.14,169.84]\,\mathrm{s}$
with an average of $100.92\,\mathrm{s}$.
The averaged imported power from the main grid, $P_{i,avg}$, is also depicted
in Fig. 8. In the Base Case, the imported power starts to decrease after IBGs’
capacity becomes higher than about $110\,\mathrm{MW}$ since less energy is
needed from the main grid. If the frequency dynamic constraints are considered
(Case I), the microgrid cannot deal with the large disturbance without the SI.
Therefore, $P_{i,avg}$ is reduced by around half compared to the Base Case in
order to decrease the system disturbance level. In Case II, the system
available SI becomes higher as the IBGs’ capacity rises. Therefore, the
microgrid can withstand larger disturbance without violating the frequency
constraints, thus enabling more imported power from the main grid compared to
Case I, which also justifies the cost saving in Fig. 7.
The effects on the system SGs’ dispatch in the different cases are investigated
as well, with the 24-hour results shown in Fig. 9 together with the demand and
total SI ($H_{r}$) profiles. The IBGs’ capacity in the system is
$160\,\mathrm{MW}$. The implementation of the frequency dynamic constraints
(Case I) induces more power dispatched from SGs in almost all hours compared
to Base Case such that the power from the main grid could be reduced. With the
SI from RESs (Case II), more power can be supplied by the main grid and RESs
leading to reduced SG power. In addition, the total SI from RESs, varying
in the range of $[23,\,94]\,\mathrm{MWs/Hz}$, is also plotted, where its inverse
relationship with SG power in Case II is observed. In particular, during the
time of low net demand (i.e., $t\in[12,16]\,\mathrm{h}$), a significant amount
of SI is scheduled from RESs due to the lower inertia from SGs and vice versa.
Figure 9: Microgrid scheduling results within one day.
### V-C Impact of uncertainty level of demand shedding during islanding
events
Figure 10: Averaged operation cost at different uncertainty levels.
In order to maintain the frequency constraints during unintentional islanding
events, noncritical load shedding is implemented to reduce the disturbance
magnitude. The IBGs’ total capacity is set to be $160\,\mathrm{MW}$. The
uncertainty level $\alpha$ associated with the noncritical load shedding as
defined in (18) is evaluated in this section. Its influence on the averaged
microgrid operation cost during 24 hours is depicted in Fig. 10 with different
confidence levels ($\eta=0.95$ and $0.90$). As expected, a higher uncertainty
level generates more operational cost since its effect on reducing the
disturbance becomes less reliable. Moreover, one can find that as the
confidence level is reduced from $0.95$ to $0.90$, the cost decreases by
approximately $10\%$, which highlights that the trade-off between the risk
level and the microgrid operation cost needs to be well balanced. It is also
worth noting that as the uncertainty level increases above a certain value
(0.27 or 0.38), the microgrid operation cost becomes the same as in the case
where noncritical load shedding is not implemented, represented by the dashed
yellow curve. Since the Chebyshev inequality is used in the derivation of the
nadir constraint (24), which gives a lower bound regardless of the actual
distribution of the noncritical load shedding, conservative results are
obtained. Therefore, if this is the case in practice, system operators should
either pursue more knowledge of the load shedding distribution or decrease the
confidence level to achieve benefits in terms of microgrid operation cost
saving.
## VI Conclusion and Future Work
This paper proposes a novel microgrid scheduling model enabling optimal
selection of the SI from IBGs while maintaining a minimum operational cost and
frequency constraints subsequent to an islanding event. Based on detailed
microgrid frequency dynamics and the state-of-the-art SI control schemes of the
IBGs, the frequency metrics following an islanding event, which is modeled
as a step disturbance, are derived analytically. The uncertainty level
associated with the noncritical load shedding is modeled via an ambiguity set
without the knowledge of its specific distribution, leading to a
distributionally robust reformulation of the frequency constraints given a
certain confidence level. The nonlinear and nonconvex nadir constraint is
approximated by an SOC relationship, with its conservativeness and accuracy being
illustrated. An overall MISOCP-based optimization problem is formulated and
can be solved effectively using commercial solvers. Case studies demonstrate
the importance and necessity of islanding event consideration and the value of
SI provision from IBGs in terms of microgrid operation cost saving. The impact
of the uncertainty level of the noncritical load shedding is also
investigated.
The proposed model can be enhanced in several directions. Instead of assuming
the noncritical load can be shed simultaneously at the time instant when
the islanding event occurs, time delays in different types of load due to the
event detection and communications can be modeled in more detail. In addition,
the emulated damping from RESs can also be considered, providing more degrees
of freedom in the frequency regulation and microgrid scheduling.
## References
* [1] M. Agrawal and A. Mittal, “Micro grid technological activities across the globe: A review,” 2011.
* [2] A. Gholami, F. Aminifar, and M. Shahidehpour, “Front lines against the darkness: Enhancing the resilience of the electricity grid through microgrid facilities,” _IEEE Electrification Magazine_ , vol. 4, no. 1, pp. 18–24, 2016.
* [3] S. M. Nosratabadi, R.-A. Hooshmand, and E. Gholipour, “A comprehensive review on microgrid and virtual power plant concepts employed for distributed energy resources scheduling in power systems,” _Renewable and Sustainable Energy Reviews_ , vol. 67, pp. 341 – 363, 2017.
* [4] S. Parhizi, H. Lotfi, A. Khodaei, and S. Bahramirad, “State of the art in research on microgrids: A review,” _IEEE Access_ , vol. 3, pp. 890–925, 2015.
* [5] W. Su, J. Wang, and J. Roh, “Stochastic energy scheduling in microgrids with intermittent renewable energy resources,” _IEEE Trans. Smart Grid_ , vol. 5, no. 4, pp. 1876–1883, 2014.
* [6] J. Liu, H. Chen, W. Zhang, B. Yurkovich, and G. Rizzoni, “Energy management problems under uncertainties for grid-connected microgrids: A chance constrained programming approach,” _IEEE Trans. Smart Grid_ , vol. 8, no. 6, pp. 2585–2596, 2017.
* [7] Z. Shi, H. Liang, S. Huang, and V. Dinavahi, “Distributionally robust chance-constrained energy management for islanded microgrids,” _IEEE Trans. Smart Grid_ , vol. 10, no. 2, pp. 2234–2244, 2019.
* [8] A. Khodaei, “Resiliency-oriented microgrid optimal scheduling,” _IEEE Trans. Smart Grid_ , vol. 5, no. 4, pp. 1584–1591, 2014.
* [9] Y. Li and K. Li, “Incorporating demand response of electric vehicles in scheduling of isolated microgrids with renewables using a bi-level programming approach,” _IEEE Access_ , vol. 7, pp. 116 256–116 266, 2019.
* [10] A. Khodaei, “Microgrid optimal scheduling with multi-period islanding constraints,” _IEEE Trans. Power Syst._ , vol. 29, no. 3, pp. 1383–1392, 2014.
* [11] G. Liu, M. Starke, B. Xiao, X. Zhang, and K. Tomsovic, “Microgrid optimal scheduling with chance-constrained islanding capability,” _Electric Power Systems Research_ , vol. 145, pp. 197 – 206, 2017.
* [12] B. Zhao, Y. Shi, X. Dong, W. Luan, and J. Bornemann, “Short-term operation scheduling in renewable-powered microgrids: A duality-based approach,” _IEEE Trans. Sustain. Energy_ , vol. 5, no. 1, pp. 209–217, 2014.
* [13] M. Hemmati, B. Mohammadi-Ivatloo, M. Abapour, and A. Anvari-Moghaddam, “Optimal chance-constrained scheduling of reconfigurable microgrids considering islanding operation constraints,” _IEEE Systems Journal_ , pp. 1–10, 2020.
* [14] A. Gholami and X. A. Sun, “Towards resilient operation of multimicrogrids: An misocp-based frequency-constrained approach,” _IEEE Control Netw. Syst_ , vol. 6, no. 3, pp. 925–936, 2019.
* [15] Y. Zhang, C. Chen, G. Liu, T. Hong, and F. Qiu, “Approximating trajectory constraints with machine learning microgrid islanding with frequency constraints,” _IEEE Trans. Power Syst._ , pp. 1–1, 2020.
* [16] U. Markovic, Z. Chu, P. Aristidou, and G. Hug, “Lqr-based adaptive virtual synchronous machine for power systems with high inverter penetration,” _IEEE Trans. Sustain. Energy_ , vol. 10, no. 3, pp. 1501–1512, 2019.
* [17] Z. Chu, U. Markovic, G. Hug, and F. Teng, “Towards optimal system scheduling with synthetic inertia provision from wind turbines,” _IEEE Trans. Power Syst._ , vol. 35, no. 5, pp. 4056–4066, 2020.
* [18] F. Teng and G. Strbac, “Full stochastic scheduling for low-carbon electricity systems,” _IEEE Trans. Autom. Sci. Eng._ , vol. 14, no. 2, pp. 461–470, April 2017.
* [19] H. Chávez, R. Baldick, and S. Sharma, “Governor rate-constrained opf for primary frequency control adequacy,” _IEEE Trans. Power Syst._ , vol. 29, no. 3, pp. 1473–1480, May 2014.
* [20] B. Delfino, S. Massucco, A. Morini, P. Scalera, and F. Silvestro, “Implementation and comparison of different under frequency load-shedding schemes,” in _2001 Power Engineering Society Summer Meeting. Conference Proceedings_ , vol. 1, 2001, pp. 307–312 vol.1.
* [21] N. N. A. Bakar, M. Y. Hassan, M. F. Sulaima, M. N. Mohd Nasir, and A. Khamis, “Microgrid and load shedding scheme during islanded mode: A review,” _Renewable and Sustainable Energy Reviews_ , vol. 71, pp. 161 – 169, 2017\.
* [22] Y. R. Omar, I. Z. Abidin, S. Yusof, H. Hashim, and H. A. A. Rashid, “Under frequency load shedding (ufls): Principles and implementation,” in _2010 IEEE International Conference on Power and Energy_ , 2010, pp. 414–419.
* [23] A. Rafinia, J. Moshtagh, and N. Rezaei, “Towards an enhanced power system sustainability: An milp under-frequency load shedding scheme considering demand response resources,” _Sustainable Cities and Society_ , vol. 59, p. 102168, 2020.
* [24] L. Chang-Chien, L. N. An, T. Lin, and W. Lee, “Incorporating demand response with spinning reserve to realize an adaptive frequency restoration plan for system contingencies,” _IEEE Trans. Smart Grid_ , vol. 3, no. 3, pp. 1145–1153, 2012.
* [25] Y. Dong, X. Xie, K. Wang, B. Zhou, and Q. Jiang, “An emergency-demand-response based under speed load shedding scheme to improve short-term voltage stability,” _IEEE Trans. Power Syst._ , vol. 32, no. 5, pp. 3726–3735, 2017.
* [26] H. Mortaji, S. H. Ow, M. Moghavvemi, and H. A. F. Almurib, “Load shedding and smart-direct load control using internet of things in smart grid demand response management,” _IEEE Trans. Ind. Appl._ , vol. 53, no. 6, pp. 5155–5163, 2017.
* [27] G. Calafiore and L. Ghaoui, “On distributionally robust chance-constrained linear programs,” _Journal of Optimization Theory and Applications_ , vol. 130, pp. 1–22, 01 2006.
* [28] A. J. Conejo, M. Carrión, and J. M. Morales, _Decision making under uncertainty in electricity markets_. Springer, Boston, MA, 2010.
* [29] P. A. Ruiz, C. R. Philbrick, E. Zak, K. W. Cheung, and P. W. Sauer, “Uncertainty management in the unit commitment problem,” _IEEE Trans. Power Syst._ , vol. 24, no. 2, pp. 642–651, 2009.
* [30] F. Teng and G. Strbac, “Assessment of the role and value of frequency response support from wind plants,” _IEEE Trans. Sustain. Energy_ , vol. 7, no. 2, pp. 586–595, April 2016.
* [31] Z. Chu, M. Zhao, and F. Teng, “Modelling of dynamic line rating in system scheduling: A misocp formulation,” in _2020 IEEE Power Energy Society General Meeting (PESGM)_ , 2020, pp. 1–5.
* [32] J. A. Taylor, _Convex Optimization of Power Systems_. Cambridge University Press, 2015.
* [33] A. Sharma and R. Sunitha, “Unintentional islanding detection in microgrid,” in _2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS)_ , 2017, pp. 2519–2523.
* [34] “Power Systems Test Case Archive,” 1993, accessed: 2020-10-23. [Online]. Available: https://labs.ece.uw.edu/pstca/pf14/pg_tca14bus.htm
* [35] “Met Office. Weather Forecasts,” 2020, accessed: 2020-10-23. [Online]. Available: https://www.metoffice.gov.uk/public/weather